mirror of https://github.com/RYDE-WORK/ktransformers.git
⚡ release v0.2.0
This commit is contained in:
parent 83401dbb3b
commit 6f0fe953e1
@@ -23,7 +23,7 @@ Our vision for KTransformers is to serve as a flexible platform for experimentin
 <h2 id="Updates">🔥 Updates</h2>
 
-* **Feb 10, 2025**: Support DeepseekR1 and V3 on single (24GB VRAM)/multi gpu and 382G DRAM, up to 3~64x speedup. The Detailed tutorial is [here](./doc/en/DeepseekR1_V3_tutorial.md)
+* **Feb 10, 2025**: Support Deepseek-R1 and V3 on single (24GB VRAM)/multi gpu and 382G DRAM, up to 3~64x speedup. The detailed tutorial is [here](./doc/en/DeepseekR1_V3_tutorial.md)
 * **Aug 28, 2024**: Support 1M context under the InternLM2.5-7B-Chat-1M model, utilizing 24GB of VRAM and 150GB of DRAM. The detailed tutorial is [here](./doc/en/long_context_tutorial.md).
 * **Aug 28, 2024**: Decrease DeepseekV2's required VRAM from 21G to 11G.
 * **Aug 15, 2024**: Update detailed [TUTORIAL](doc/en/injection_tutorial.md) for injection and multi-GPU.
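For orientation, a minimal single-GPU launch of the newly supported DeepSeek-R1 might look like the sketch below. It reuses the `ktransformers.local_chat` entry point and flags that appear later in this commit; the model and GGUF paths are placeholders, and `--cpu_infer` should be tuned to your machine (the dual-socket example further down uses 65).

```shell
# Sketch only: paths are hypothetical, substitute your own DeepSeek-R1
# checkpoint and GGUF weights. Flags mirror the local_chat command used
# in the tutorial hunk below.
# --cpu_infer 33 is an arbitrary single-socket guess; tune to your core count.
python -m ktransformers.local_chat \
    --model_path /models/DeepSeek-R1 \
    --gguf_path /models/DeepSeek-R1-GGUF \
    --cpu_infer 33
```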
@@ -50,7 +50,7 @@ https://github.com/user-attachments/assets/ebd70bfa-b2c1-4abb-ae3b-296ed38aa285
 - Compared to 4.51 tokens/s in llama.cpp with 2×32 cores, achieving up to **3.03× speedup**.
 - Upcoming Open Source Release:
 - AMX optimizations and selective expert activation will be open-sourced in V0.3.
-- Currently available only in preview binary distribution, which can be found [here](xxx).
+- Currently available only in preview binary distribution, which can be downloaded [here](https://github.com/kvcache-ai/ktransformers/releases/download/v0.1.4/ktransformers-0.3.0rc0+cu126torch26fancy-cp311-cp311-linux_x86_64.whl).
 
 - **Local 236B DeepSeek-Coder-V2:** Running its Q4_K_M version using only 21GB VRAM and 136GB DRAM, attainable on a local desktop machine, which scores even better than GPT4-0613 in [BigCodeBench](https://huggingface.co/blog/leaderboard-bigcodebench).
 
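The wheel linked above is tagged `cu126torch26fancy-cp311`, which reads as Python 3.11, CUDA 12.6 and PyTorch 2.6. A quick environment sanity check before installing it might look like the sketch below; the expected versions are inferred from the file name, not from an official compatibility matrix.

```shell
# Check the local toolchain against what the preview wheel's tag implies.
python3 --version                                        # cp311 -> expect Python 3.11.x
python3 -c "import torch; print(torch.__version__, torch.version.cuda)"
nvidia-smi --query-gpu=name,memory.total --format=csv    # confirm VRAM headroom
```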
@@ -23,8 +23,8 @@ https://github.com/user-attachments/assets/ebd70bfa-b2c1-4abb-ae3b-296ed38aa285
 - Compared to 4.51 tokens/s in llama.cpp with 2×32 cores, achieving up to **3.03× speedup**.
 
 
-But we're also previewing our upcoming optimizations, including an Intel AMX-accelerated kernel and a selective expert activation method, which will significantly enhance performance. With V0.3-preview, we achieve up to 286 tokens/s for prefill, making it up to **64× faster than llama.cpp** for local inference.
-The binary distribution is available now and the source code will come ASAP! Check out the details [here](xxx)
+We also give our upcoming optimizations previews, including an Intel AMX-accelerated kernel and a selective expert activation method, which will significantly enhance performance. With V0.3-preview, we achieve up to 286 tokens/s for prefill, making it up to **64× faster than llama.cpp** for local inference.
+The binary distribution is available now and the source code will come ASAP! Check out the wheel package [here](https://github.com/kvcache-ai/ktransformers/releases/download/v0.1.4/ktransformers-0.3.0rc0+cu126torch26fancy-cp311-cp311-linux_x86_64.whl)
 
 
 ## Prerequisites
@@ -111,6 +111,8 @@ The parameters' meaning is the same. But As we use dual socket, we set cpu_infe
 #### Dual socket version (64 cores)
 Our local_chat test command is:
 ``` shell
+wget https://github.com/kvcache-ai/ktransformers/releases/download/v0.1.4/ktransformers-0.3.0rc0+cu126torch26fancy-cp311-cp311-linux_x86_64.whl
+pip install ./ktransformers-0.3.0rc0+cu126torch26fancy-cp311-cp311-linux_x86_64.whl
 python -m ktransformers.local_chat --model_path <your model path> --gguf_path <your gguf path> --prompt_file <your prompt txt file> --cpu_infer 65 --cache_lens 1536
 <when you see chat, then press enter to load the text prompt_file>
 ```
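As a usage sketch for the hunk above: the snippet below just prepares a throwaway prompt file and then runs the same command. The file name and prompt text are invented for illustration, and `--cpu_infer 65` follows the hunk context's note that cpu_infer is raised for the dual-socket setup.

```shell
# Illustrative only: write a small prompt file (name and contents arbitrary),
# then run the tutorial command. Press Enter at the chat prompt to load it.
echo "Write a quicksort in Python." > /tmp/prompt.txt
python -m ktransformers.local_chat \
    --model_path <your model path> \
    --gguf_path <your gguf path> \
    --prompt_file /tmp/prompt.txt \
    --cpu_infer 65 \
    --cache_lens 1536
```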
@@ -23,7 +23,6 @@ dependencies = [
 "blessed >= 1.20.0",
 "accelerate >= 0.31.0",
 "sentencepiece >= 0.1.97",
-"flash_attn == 2.7.4.post1",
 "setuptools",
 "ninja",
 "wheel",
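This hunk and the requirements hunk below both drop the pinned flash_attn dependency from the install metadata. If a setup still needs FlashAttention, one plausible manual step is to install the same pinned build yourself; `--no-build-isolation` is the commonly recommended way to compile flash-attn against an already-installed torch, but treat this as a sketch rather than project guidance.

```shell
# Sketch: reinstall the previously pinned FlashAttention build by hand if your
# environment still expects it. Assumes torch, packaging and ninja are present.
pip install flash_attn==2.7.4.post1 --no-build-isolation
```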
@@ -1,6 +1,5 @@
 fire
 transformers==4.43.2
-flash_attn==2.7.4.post1
 numpy
 torch>=2.3.0
 packaging
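For completeness, a fresh-environment sketch using only the packages that remain visible in this hunk; the requirements file's name is not shown in the diff, so the dependencies are listed inline rather than via `-r`.

```shell
# Sketch: clean virtual environment with the pinned deps left in this hunk.
python3 -m venv .venv && source .venv/bin/activate
pip install fire transformers==4.43.2 numpy "torch>=2.3.0" packaging
```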