From c7d62a67db4c54a7f70043289ebcca81a5051f43 Mon Sep 17 00:00:00 2001
From: TangJingqi
Date: Mon, 28 Oct 2024 14:37:10 +0800
Subject: [PATCH] update supported models

---
 README.md | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 970859d..7d6f342 100644
--- a/README.md
+++ b/README.md
@@ -194,17 +194,20 @@ It features the following arguments:
 - `--cpu_infer`: Int (default=10). The number of CPUs used for inference. Should ideally be set to the (total number of cores - 2).

 Suggested Model
 
 | Model Name                     | Model Size | VRAM  | Minimum DRAM    | Recommended DRAM  |
 | ------------------------------ | ---------- | ----- | --------------- | ----------------- |
-| DeepSeek-V2-q4_k_m             | 133G       | 24G   | 136G            | 192G              |
+| DeepSeek-V2-q4_k_m             | 133G       | 11G   | 136G            | 192G              |
+| DeepSeek-V2.5-q4_k_m           | 133G       | 11G   | 136G            | 192G              |
+| DeepSeek-V2.5-IQ4_XS           | 117G       | 10G   | 107G            | 128G              |
 | Qwen2-57B-A14B-Instruct-q4_k_m | 33G        | 8G    | 34G             | 64G               |
 | DeepSeek-V2-Lite-q4_k_m        | 9.7G       | 3G    | 13G             | 16G               |
 | Mixtral-8x7B-q4_k_m            | 25G        | 1.6G  | 51G             | 64G               |
 | Mixtral-8x22B-q4_k_m           | 80G        | 4G    | 86.1G           | 96G               |
 | InternLM2.5-7B-Chat-1M         | 15.5G      | 15.5G | 8G(32K context) | 150G (1M context) |
+
 More will come soon. Please let us know which models you are most interested in. Be aware that you need to be subject to their corresponding model licenses when using [DeepSeek](https://huggingface.co/deepseek-ai/DeepSeek-V2/blob/main/LICENSE) and [QWen](https://huggingface.co/Qwen/Qwen2-72B-Instruct/blob/main/LICENSE).
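As context for the `--cpu_infer` argument referenced in the hunk above, here is a minimal sketch of launching one of the suggested models with it. It assumes the `python -m ktransformers.local_chat` entry point and the `--model_path`/`--gguf_path` flags described elsewhere in the README; the model repo and GGUF path below are placeholders and are not part of this patch.

```bash
# Hypothetical example (not part of the patch): chat with DeepSeek-V2.5 (q4_k_m GGUF).
# On a 32-core machine, the "total number of cores - 2" guidance gives --cpu_infer 30
# (the default is 10).
python -m ktransformers.local_chat \
  --model_path deepseek-ai/DeepSeek-V2.5 \
  --gguf_path /path/to/DeepSeek-V2.5-GGUF \
  --cpu_infer 30
```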