lexasub
a5203b4465
llama : minor fixes to speed up llama model loading (#11448)
* impl::load: change the bpe_ranks map to an unordered map, reducing impl::load time by about 30% (see the sketch below)
* llama_model_loader::init_mappings: replace `new llama_mmap` with std::make_unique<llama_mmap> for cleaner code and roughly half the init_mappings runtime (see the sketch below)
* Update src/llama-vocab.cpp
---------
Co-authored-by: lexasub <empty@empty.ru>
Co-authored-by: Diego Devesa <slarengh@gmail.com>
2025-01-27 14:42:09 +01:00
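The first change swaps the ordered map that holds the BPE merge ranks for a hash map. A minimal, self-contained sketch of the idea follows; the `pair_hash` combiner and its mixing are illustrative assumptions, not the code merged upstream, since std::pair has no std::hash specialization of its own.

```cpp
// Sketch only: ordered map keyed by a string pair replaced by an unordered map.
#include <cstddef>
#include <string>
#include <unordered_map>
#include <utility>

// Hypothetical hash combiner for the pair key (std::hash has no pair overload).
struct pair_hash {
    std::size_t operator()(const std::pair<std::string, std::string> & p) const {
        return std::hash<std::string>{}(p.first) ^ (std::hash<std::string>{}(p.second) << 1);
    }
};

// before: std::map<std::pair<std::string, std::string>, int> bpe_ranks;
// after : average O(1) insert/lookup instead of O(log n), which is where the
//         reported ~30% reduction in impl::load time comes from when many
//         merge pairs are inserted.
std::unordered_map<std::pair<std::string, std::string>, int, pair_hash> bpe_ranks;

int main() {
    bpe_ranks.emplace(std::make_pair(std::string("a"), std::string("b")), 0);
    return bpe_ranks.count({"a", "b"}) == 1 ? 0 : 1;
}
```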
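The second change is mostly mechanical: constructing the memory mapping with std::make_unique (C++14) instead of wrapping a raw `new`. A hedged sketch follows, using a hypothetical `fake_mmap` stand-in since llama_mmap and its constructor arguments are not reproduced here.

```cpp
// Sketch only: single-expression, exception-safe construction of the mapping.
#include <memory>
#include <vector>

struct fake_mmap {                 // hypothetical stand-in for llama_mmap
    explicit fake_mmap(int fd) : fd(fd) {}
    int fd;
};

int main() {
    std::vector<std::unique_ptr<fake_mmap>> mappings;

    // before: mappings.emplace_back(std::unique_ptr<fake_mmap>(new fake_mmap(3)));
    // after : less boilerplate and no separately written `new`.
    mappings.emplace_back(std::make_unique<fake_mmap>(3));

    return mappings.size() == 1 ? 0 : 1;
}
```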