diff --git a/README.md b/README.md
index 92d4cd57..c4b37252 100644
--- a/README.md
+++ b/README.md
@@ -43,7 +43,7 @@
 
 🚩 本项目未涉及微调、训练过程，但可利用微调或训练对本项目效果进行优化。
 
-🌐 [AutoDL 镜像](https://www.codewithgpu.com/i/chatchat-space/Langchain-Chatchat/Langchain-Chatchat) 中 `v8` 版本所使用代码已更新至本项目 `v0.2.4` 版本。
+🌐 [AutoDL 镜像](https://www.codewithgpu.com/i/chatchat-space/Langchain-Chatchat/Langchain-Chatchat) 中 `v9` 版本所使用代码已更新至本项目 `v0.2.5` 版本。
 
 🐳 [Docker 镜像](registry.cn-beijing.aliyuncs.com/chatchat/chatchat:0.2.3) 已经更新到 ```0.2.3``` 版本, 如果想体验最新内容请源码安装。
 
@@ -87,9 +87,9 @@ $ pip install -r requirements_webui.txt
 
 如需在本地或离线环境下运行本项目，需要首先将项目所需的模型下载至本地，通常开源 LLM 与 Embedding 模型可以从 [HuggingFace](https://huggingface.co/models) 下载。
 
-以本项目中默认使用的 LLM 模型 [THUDM/chatglm2-6b](https://huggingface.co/THUDM/chatglm2-6b) 与 Embedding 模型 [moka-ai/m3e-base](https://huggingface.co/moka-ai/m3e-base) 为例:
+以本项目中默认使用的 LLM 模型 [THUDM/ChatGLM2-6B](https://huggingface.co/THUDM/chatglm2-6b) 与 Embedding 模型 [moka-ai/m3e-base](https://huggingface.co/moka-ai/m3e-base) 为例:
 
-下载模型需要先[安装Git LFS](https://docs.github.com/zh/repositories/working-with-files/managing-large-files/installing-git-large-file-storage)，然后运行
+下载模型需要先[安装 Git LFS](https://docs.github.com/zh/repositories/working-with-files/managing-large-files/installing-git-large-file-storage)，然后运行
 
 ```Shell
 $ git lfs install
@@ -113,11 +113,11 @@ $ python startup.py -a
 
 如果正常启动，你将能看到以下界面
 
-1. FastAPI docs 界面
+1. FastAPI Docs 界面
 
 ![](img/fastapi_docs_026.png)
 
-2. webui启动界面示例:
+2. Web UI 启动界面示例:
 
 - Web UI 对话界面:
 
diff --git a/README_en.md b/README_en.md
index ebe7ebc7..e8f55635 100644
--- a/README_en.md
+++ b/README_en.md
@@ -41,9 +41,9 @@ The main process analysis from the aspect of document process:
 
 🚩 The training or fined-tuning are not involved in the project, but still, one always can improve performance by do these.
 
-🌐 [AutoDL image](registry.cn-beijing.aliyuncs.com/chatchat/chatchat:0.2.0) is supported, and in v7 the codes are update to v0.2.3.
+🌐 The [AutoDL image](https://www.codewithgpu.com/i/chatchat-space/Langchain-Chatchat/Langchain-Chatchat) is supported, and the code in its `v9` release has been updated to `v0.2.5` of this project.
 
-🐳 [Docker image](registry.cn-beijing.aliyuncs.com/chatchat/chatchat:0.2.0)
+🐳 [Docker image](registry.cn-beijing.aliyuncs.com/chatchat/chatchat:0.2.5)
 
 ## Pain Points Addressed
 
diff --git a/configs/__init__.py b/configs/__init__.py
index 48d7ee1a..98a16be8 100644
--- a/configs/__init__.py
+++ b/configs/__init__.py
@@ -5,4 +5,4 @@
 from .server_config import *
 from .prompt_config import *
 
-VERSION = "v0.2.6-preview"
+VERSION = "v0.2.6"
diff --git a/img/official_account.png b/img/official_wechat_mp_account.png
similarity index 100%
rename from img/official_account.png
rename to img/official_wechat_mp_account.png
diff --git a/img/qr_code_64_2.png b/img/qr_code_64_2.png
new file mode 100644
index 00000000..4fee7aa1
Binary files /dev/null and b/img/qr_code_64_2.png differ
diff --git a/img/qr_code_65.jpg b/img/qr_code_65.jpg
new file mode 100644
index 00000000..988595c8
Binary files /dev/null and b/img/qr_code_65.jpg differ
diff --git a/img/qr_code_66.jpg b/img/qr_code_66.jpg
new file mode 100644
index 00000000..fdc30b6c
Binary files /dev/null and b/img/qr_code_66.jpg differ
diff --git a/server/utils.py b/server/utils.py
index 5dff5f1b..2069dc05 100644
--- a/server/utils.py
+++ b/server/utils.py
@@ -345,8 +345,8 @@ def get_model_path(model_name: str, type: str = None) -> Optional[str]:
             return str(path)
     return path_str  # THUDM/chatglm06b
 
 
-    # 从server_config中获取服务信息
+# 从server_config中获取服务信息
 def get_model_worker_config(model_name: str = None) -> dict:
     '''
 
diff --git a/webui_pages/dialogue/dialogue.py b/webui_pages/dialogue/dialogue.py
index a6d44373..503190b5 100644
--- a/webui_pages/dialogue/dialogue.py
+++ b/webui_pages/dialogue/dialogue.py
@@ -170,6 +170,8 @@ def dialogue_page(api: ApiRequest):
                 key="selected_kb",
             )
             kb_top_k = st.number_input("匹配知识条数:", 1, 20, VECTOR_SEARCH_TOP_K)
+
+            # BGE 模型的相似度分数可能超过 1
             score_threshold = st.slider("知识匹配分数阈值:", 0.0,
                                         1.0, float(SCORE_THRESHOLD), 0.01)
 
         elif dialogue_mode == "搜索引擎问答":
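
The comment added above the `score_threshold` slider in `dialogue.py` notes that BGE-model scores can exceed 1, which would put matches beyond the slider's 0.0–1.0 range. A minimal sketch of why, in plain Python (the vectors are made-up toy values, not real BGE output, and this is not the project's retrieval code): inner-product scores over unnormalized embeddings are unbounded, while cosine similarity of L2-normalized vectors always stays within [-1, 1].

```python
import math

def inner_product(a, b):
    # FAISS-style IP score: unbounded when vectors are not normalized
    return sum(x * y for x, y in zip(a, b))

def l2_normalize(v):
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

# Hypothetical embedding vectors, for illustration only
a = [1.2, 0.5]
b = [1.1, 0.4]

raw = inner_product(a, b)  # 1.52 -- already past a 1.0 slider ceiling
cos = inner_product(l2_normalize(a), l2_normalize(b))  # always in [-1, 1]
```

If the sidebar keeps a fixed 0.0–1.0 range, either normalizing the embeddings before indexing or widening the slider maximum for BGE-style models keeps the threshold meaningful.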