diff --git a/README.md b/README.md
index a8df59bf..d33f2c9a 100644
--- a/README.md
+++ b/README.md
@@ -119,15 +119,16 @@ docker run -d --gpus all -p 80:8501 registry.cn-beijing.aliyuncs.com/chatchat/ch
 - [all models of OpenOrca](https://huggingface.co/Open-Orca)
 - [Spicyboros](https://huggingface.co/jondurbin/spicyboros-7b-2.2?not-for-all-audiences=true) + [airoboros 2.2](https://huggingface.co/jondurbin/airoboros-l2-13b-2.2)
 - [VMware's OpenLLaMa OpenInstruct](https://huggingface.co/VMware/open-llama-7b-open-instruct)
+- [baichuan2-7b/baichuan2-13b](https://huggingface.co/baichuan-inc)
 - Any [EleutherAI](https://huggingface.co/EleutherAI) pythia model, such as [pythia-6.9b](https://huggingface.co/EleutherAI/pythia-6.9b)
 - Any [Peft](https://github.com/huggingface/peft) adapter trained on top of the models above. To activate it, the model path must contain `peft`. Note: if multiple peft models are loaded, you can make them share the base model's weights by setting the environment variable `PEFT_SHARE_BASE_WEIGHTS=true` in any model worker.
 
 The list of supported models above may keep growing as [FastChat](https://github.com/lm-sys/FastChat) is updated; see the [FastChat supported model list](https://github.com/lm-sys/FastChat/blob/main/docs/model_support.md).
-
 Besides local models, this project also supports connecting directly to online models such as the OpenAI API and Zhipu AI; for the specific settings, see the `llm_model_dict` configuration in `configs/model_configs.py.example`.
-Online LLM models currently supported:
+Online LLM models currently supported:
+
 - [ChatGPT](https://api.openai.com)
 - [Zhipu AI](http://open.bigmodel.cn)
 - [MiniMax](https://api.minimax.chat)
diff --git a/README_en.md b/README_en.md
index c64d7c95..82d90cee 100644
--- a/README_en.md
+++ b/README_en.md
@@ -111,6 +111,7 @@ The project use [FastChat](https://github.com/lm-sys/FastChat) to provide the AP
 - [BAAI/AquilaChat-7B](https://huggingface.co/BAAI/AquilaChat-7B)
 - [all models of OpenOrca](https://huggingface.co/Open-Orca)
 - [Spicyboros](https://huggingface.co/jondurbin/spicyboros-7b-2.2?not-for-all-audiences=true) + [airoboros 2.2](https://huggingface.co/jondurbin/airoboros-l2-13b-2.2)
+- [baichuan2-7b/baichuan2-13b](https://huggingface.co/baichuan-inc)
 - [VMware's OpenLLaMa OpenInstruct](https://huggingface.co/VMware/open-llama-7b-open-instruct)
 * Any [EleutherAI](https://huggingface.co/EleutherAI) pythia model such as [pythia-6.9b](https://huggingface.co/EleutherAI/pythia-6.9b)
@@ -329,8 +330,8 @@ Please refer to [FAQ](docs/FAQ.md)
   - [X] .csv
   - [ ] .xlsx
 - [ ] TextSplitter and Retriever
-  - [x] multiple TextSplitter
-  - [x] ChineseTextSplitter
+  - [X] multiple TextSplitter
+  - [X] ChineseTextSplitter
   - [ ] Reconstructed Context Retriever
 - [ ] Webpage
 - [ ] SQL
@@ -340,11 +341,14 @@
   - [X] DuckDuckGo
   - [ ] Agent
 - [X] LLM Models
+  - [X] [FastChat](https://github.com/lm-sys/fastchat)-based LLM Models
   - [ ] Multiple Remote LLM APIs
 - [X] Embedding Models
+  - [X] HuggingFace-based Embedding Models
   - [ ] Multiple Remote Embedding APIs
 - [X] FastAPI-based API
 - [X] Web UI
+  - [X] Streamlit-based Web UI
diff --git a/configs/model_config.py.example b/configs/model_config.py.example
index 5fb06917..064946e4 100644
--- a/configs/model_config.py.example
+++ b/configs/model_config.py.example
@@ -41,6 +41,10 @@ MODEL_PATH = {
         "chatglm2-6b-int4": "THUDM/chatglm2-6b-int4",
         "chatglm2-6b-32k": "THUDM/chatglm2-6b-32k",
         "baichuan-7b": "baichuan-inc/Baichuan-7B",
+        "baichuan2-7b-chat": "baichuan-inc/Baichuan2-7B-Chat",
+        "baichuan2-13b-chat": "baichuan-inc/Baichuan2-13B-Chat",
+        "baichuan2-7b": "baichuan-inc/Baichuan2-7B-Base",
+        "baichuan2-13b": "baichuan-inc/Baichuan2-13B-Base",
     },
 }
diff --git a/requirements.txt b/requirements.txt
index fe09f5b9..0e5d2249 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,5 +1,5 @@
 langchain==0.0.287
-fschat[model_worker]==0.2.28
+fschat[model_worker]==0.2.29
 openai
 sentence_transformers
 transformers>=4.31.0
diff --git a/requirements_api.txt b/requirements_api.txt
index 5ba05109..fd5228d7 100644
--- a/requirements_api.txt
+++ b/requirements_api.txt
@@ -1,5 +1,5 @@
 langchain==0.0.287
-fschat[model_worker]==0.2.28
+fschat[model_worker]==0.2.29
 openai
 sentence_transformers
 transformers>=4.31.0
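
A quick way to sanity-check the new model entries added to configs/model_config.py.example above is to look them up by name the way the server would. The sketch below is a minimal, hypothetical example: the "llm_model" nesting and the LLM_MODEL selector mirror the project's config style but are assumptions, not part of this diff.

# Hypothetical usage sketch of the new MODEL_PATH entries (Python).
# The "llm_model" key and the LLM_MODEL selector are assumptions.
MODEL_PATH = {
    "llm_model": {
        # ...existing entries elided...
        "baichuan2-7b-chat": "baichuan-inc/Baichuan2-7B-Chat",
        "baichuan2-13b-chat": "baichuan-inc/Baichuan2-13B-Chat",
        "baichuan2-7b": "baichuan-inc/Baichuan2-7B-Base",
        "baichuan2-13b": "baichuan-inc/Baichuan2-13B-Base",
    },
}

# Select one of the new models by its short name; the mapped value is the
# Hugging Face repo id that a model worker would download or resolve locally.
LLM_MODEL = "baichuan2-13b-chat"
print(MODEL_PATH["llm_model"][LLM_MODEL])  # baichuan-inc/Baichuan2-13B-Chat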