From 6f0fe953e1f3d494239971fc895f5032853de9a9 Mon Sep 17 00:00:00 2001
From: liam
Date: Mon, 10 Feb 2025 13:52:24 +0800
Subject: [PATCH] :zap: release v0.2.0

---
 README.md                        | 4 ++--
 doc/en/DeepseekR1_V3_tutorial.md | 6 ++++--
 pyproject.toml                   | 1 -
 requirements-local_chat.txt      | 1 -
 4 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/README.md b/README.md
index 6735da9..d06163d 100644
--- a/README.md
+++ b/README.md
@@ -23,7 +23,7 @@ Our vision for KTransformers is to serve as a flexible platform for experimentin
 
 🔥 Updates
 
-* **Fed 10, 2025**: Support DeepseekR1 and V3 on single (24GB VRAM)/multi gpu and 382G DRAM, up to 3~64x speedup. The Detailed tutorial is [here](./doc/en/DeepseekR1_V3_tutorial.md)
+* **Feb 10, 2025**: Support DeepSeek-R1 and V3 on single (24GB VRAM)/multi-GPU and 382GB DRAM, up to 3~64x speedup. The detailed tutorial is [here](./doc/en/DeepseekR1_V3_tutorial.md).
 * **Aug 28, 2024**: Support 1M context under the InternLM2.5-7B-Chat-1M model, utilizing 24GB of VRAM and 150GB of DRAM. The detailed tutorial is [here](./doc/en/long_context_tutorial.md).
 * **Aug 28, 2024**: Decrease DeepseekV2's required VRAM from 21G to 11G.
 * **Aug 15, 2024**: Update detailed [TUTORIAL](doc/en/injection_tutorial.md) for injection and multi-GPU.
@@ -50,7 +50,7 @@ https://github.com/user-attachments/assets/ebd70bfa-b2c1-4abb-ae3b-296ed38aa285
   - Compared to 4.51 tokens/s in llama.cpp with 2×32 cores, achieving up to **3.03× speedup**.
 - Upcoming Open Source Release:
   - AMX optimizations and selective expert activation will be open-sourced in V0.3.
-  - Currently available only in preview binary distribution, which can be found [here](xxx).
+  - Currently available only in preview binary distribution, which can be downloaded [here](https://github.com/kvcache-ai/ktransformers/releases/download/v0.1.4/ktransformers-0.3.0rc0+cu126torch26fancy-cp311-cp311-linux_x86_64.whl).
 - **Local 236B DeepSeek-Coder-V2:** Running its Q4_K_M version using only 21GB VRAM and 136GB DRAM, attainable on a local desktop machine, which scores even better than GPT4-0613 in [BigCodeBench](https://huggingface.co/blog/leaderboard-bigcodebench).
diff --git a/doc/en/DeepseekR1_V3_tutorial.md b/doc/en/DeepseekR1_V3_tutorial.md
index 24c7a87..0282ba1 100644
--- a/doc/en/DeepseekR1_V3_tutorial.md
+++ b/doc/en/DeepseekR1_V3_tutorial.md
@@ -23,8 +23,8 @@ https://github.com/user-attachments/assets/ebd70bfa-b2c1-4abb-ae3b-296ed38aa285
   - Compared to 4.51 tokens/s in llama.cpp with 2×32 cores, achieving up to **3.03× speedup**.
-But we're also previewing our upcoming optimizations, including an Intel AMX-accelerated kernel and a selective expert activation method, which will significantly enhance performance. With V0.3-preview, we achieve up to 286 tokens/s for prefill, making it up to **64× faster than llama.cpp** for local inference.
-The binary distribution is available now and the source code will come ASAP! Check out the details [here](xxx)
+We also preview our upcoming optimizations, including an Intel AMX-accelerated kernel and a selective expert activation method, which will significantly enhance performance. With V0.3-preview, we achieve up to 286 tokens/s for prefill, making it up to **64× faster than llama.cpp** for local inference.
+The binary distribution is available now and the source code will come ASAP! Check out the wheel package [here](https://github.com/kvcache-ai/ktransformers/releases/download/v0.1.4/ktransformers-0.3.0rc0+cu126torch26fancy-cp311-cp311-linux_x86_64.whl).
 
 ## Prerequisites
@@ -111,6 +111,8 @@ The parameters' meaning is the same. But As we use dual socket, we set cpu_infe
 #### Dual socket version (64 cores)
 Our local_chat test command is:
 ``` shell
+wget https://github.com/kvcache-ai/ktransformers/releases/download/v0.1.4/ktransformers-0.3.0rc0+cu126torch26fancy-cp311-cp311-linux_x86_64.whl
+pip install ./ktransformers-0.3.0rc0+cu126torch26fancy-cp311-cp311-linux_x86_64.whl
 python -m ktransformers.local_chat --model_path <your model path> --gguf_path <your gguf path> --prompt_file <your prompt txt file> --cpu_infer 65 --cache_lens 1536
 ```
diff --git a/pyproject.toml b/pyproject.toml
index 3c3700d..028c6a3 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -23,7 +23,6 @@ dependencies = [
     "blessed >= 1.20.0",
     "accelerate >= 0.31.0",
     "sentencepiece >= 0.1.97",
-    "flash_attn == 2.7.4.post1",
     "setuptools",
     "ninja",
     "wheel",
diff --git a/requirements-local_chat.txt b/requirements-local_chat.txt
index d221e0e..0479d36 100644
--- a/requirements-local_chat.txt
+++ b/requirements-local_chat.txt
@@ -1,6 +1,5 @@
 fire
 transformers==4.43.2
-flash_attn==2.7.4.post1
 numpy
 torch>=2.3.0
 packaging
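
For reviewers who want to exercise this change locally, below is a minimal sketch of the install-and-run flow the updated tutorial describes. The patch filename, model/GGUF paths, and prompt file are hypothetical placeholders; only the wheel URL and the `--cpu_infer`/`--cache_lens` flags come from the patch itself.

```shell
# Apply this patch to a ktransformers checkout
# (the patch filename is a hypothetical placeholder).
git am 0001-release-v0.2.0.patch

# Download and install the v0.3.0rc0 preview wheel referenced above;
# the wheel tags imply Python 3.11, CUDA 12.6, and torch 2.6.
wget https://github.com/kvcache-ai/ktransformers/releases/download/v0.1.4/ktransformers-0.3.0rc0+cu126torch26fancy-cp311-cp311-linux_x86_64.whl
pip install ./ktransformers-0.3.0rc0+cu126torch26fancy-cp311-cp311-linux_x86_64.whl

# Run the dual-socket (64-core) local_chat test from the tutorial;
# the three paths below are placeholders, not values from the patch.
python -m ktransformers.local_chat \
    --model_path /models/DeepSeek-R1 \
    --gguf_path /models/DeepSeek-R1-GGUF \
    --prompt_file ./prompt.txt \
    --cpu_infer 65 \
    --cache_lens 1536
```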