mirror of
https://github.com/RYDE-WORK/Langchain-Chatchat.git
synced 2026-01-27 17:23:19 +08:00
* publish 0.2.10 (#2797)
  New features:
  - Improve OCR on PDF files, filtering out meaningless small images by @liunux4odoo #2525
  - Support the Gemini online model by @yhfgyyf #2630
  - Support the GLM4 online model by @zRzRzRzRzRzRzR
  - elasticsearch: update https connections by @xldistance #2390
  - Improve OCR recognition of PPT and DOC knowledge-base files by @596192804 #2013
  - Update the Agent chat feature by @zRzRzRzRzRzRzR
  - Take connections from the connection pool when objects are created, instead of opening a new connection on every method call by @Lijia0 #2480
  - Make ChatOpenAI check whether the tokens exceed the model's context length by @glide-the
  - Update database runtime errors and project milestones by @zRzRzRzRzRzRzR #2659
  - Update config files / docs / dependencies by @imClumsyPanda @zRzRzRzRzRzRzR
  - Add a Japanese readme by @eltociear #2787
  Fixes:
  - PGVector vector store connection error after the langchain update by @HALIndex #2591
  - Minimax's model worker error by @xyhshen
  - ES store could not do vector retrieval; add mappings to create the vector index by MSZheng20 #2688
* Update README.md
* Add files via upload
* Update README.md
* Fix the PDF rotation bug
* Support Chroma
* perf: delete unused import
* Ignore test code
* Update files
* Fix the lost API frontend
* Update the symbols printed by chromadb
* Fix the wrong autodl code name
* Update README.md
* Update README.md
* Update README.md
* Fix milvus-related bugs
* Support the Spark 3.5 model
* Fix the ES knowledge-base query bug (#2848)
* Fix the ES knowledge-base query bug (#2848)
* Update the zhipuai request method
* Add explicit support for the .htm extension
* Update readme
* Docker image build and K8S YAML deployment instructions (#2892)
* Dev (#2280)
* Fix the bug where Azure did not set max tokens
* Rewrite the agent: 1. change the Agent implementation to support multiple parameters; only ChatGLM3-6b and OpenAI GPT4 remain supported, the other models will sit out the Agent feature for now 2. remove agent_chat and merge it into llm_chat 3. rewrite most tools to fit the new Agent
* Update the architecture
* Remove web_chat, merged automatically
* Remove all chat modes; everything is controlled by the Agent
* Update the config files
* Update the config templates and prompts
* Fix a parameter-selection bug
* Fix a model-selection bug
* Update some content
* Update the multimodal speech and vision content: 1. update the local-model speech and vision multimodal features and set up the corresponding tools
* Support multimodal Grounding: 1. cleaned up the chat code 2. vision tools can output Grounding tasks 3. improved the tool-calling flow
* Support XPU; modify part of the glm3 agent
* Add qwen agent
* Align ChatGLM3-6B with Qwen-14B
* fix callback handler
* Update Agent tool returns
* fix: LLMChain no output when no tools selected
* Update the dependencies and code changes required by langchain 0.1.x
* Update the chatGLM3 langchain 0.1.x Agent implementation
* Rewrite the qwen agent for langchain 0.1
* Fix callbacks having no effect
* Add a text-to-image tool
* webui supports text-to-image
* Integrate the openai plugins
* Remove the fastchat configuration
* Add an openai plugin
* Integrate the openai plugins
* Update the model execution list and tonight's changes
* Integrate the openai_plugins/imitater plugin
* Integrate the openai_plugins/imitater plugin
* Integrate the openai_plugins/imitater plugin
* Reduce error output
* Standard configuration
* vllm parameter configuration
* Add the zhipu plugin
* Remove the local fschat configuration
* Remove the local fschat configuration; upgrade pydantic to 2
* Remove the local fschat workers
* openai-plugins-list.json
* Upgrade the agent; upgrade pydantic to 2
* fix: model_config is a system keyword
* Integrate openai plugins into the embeddings module, using a unified api call
* update_store logic for the loom model service
* Integrate the LOOM online embedding service
* Change the local knowledge-base search fields
* Update the logic for configuring the knowledge base's online api endpoints
* Update model_config.py.example
* Change the model-configuration approach: all models are accessed through an openai-compatible framework, and chatchat itself no longer loads models. Embeddings models are now used through the framework API instead of being loaded manually; remove the custom Embeddings Keyword code. Update the dependency files, removing heavy dependencies such as torch and transformers. Temporarily remove the loom integration. Follow-ups: 1. optimize the directory structure 2. check whether any 0.2.10 content was overwritten in the merge
* move document_loaders & text_splitter under server
* make torch & transformers optional; import pydantic Model & Field from langchain.pydantic_v1 instead of pydantic.v1
* - Pin pydantic to v1 and unify all pydantic import paths in the project, in preparation for a later v2 upgrade - Refactor api.py: - split into per-module routers - add an openai-compatible forwarding endpoint, used by default for model load balancing - add a /tools endpoint to list/call the written agent tools - remove all EmbeddingFuncAdapter and switch uniformly to get_Embeddings - TODO: - make the /chat/chat endpoint openai-compatible - add an openai-compatible /chat/kb_chat endpoint - relocate data directories such as ntlk/knowledge_base/logs
* Remove the llama-index dependency; fix the /v1/models error
* Reason: startup on Windows fails, asking for the python-multipart package (#3184) Change: add python-multipart==0.0.9 to requirements Version: 0.0.9 Requires: Python >=3.8 Co-authored-by: XuCai <liangxc@akulaku.com>
* Add a UI for configuring xinference local and custom models: streamlit run model_loaders/xinference_manager.py
* update xinference manager ui
* fix merge conflict
* Add the default oneapi online models to model_config; the /v1/models endpoint supports the oneapi platform and returns a unified model list
* Rewrite the calculate tool
* Adjust the root directory structure: kb/logs/media/nltk_data move into a dedicated data directory (configurable, default data). Note that knowledge-base files must be moved accordingly
* update kb_config.py.example
* Optimize the ES knowledge base - Developers - change the default of get_OpenAIClient's local_wrap to False, so an unstarted API service no longer blocks other features (such as Embeddings) - change the ES knowledge-base service: - switch the retrieval strategy to ApproxRetrievalStrategy - set timeout to 60 to avoid ConnectionTimeout errors when there are many documents - change LocalAIEmbeddings to run embed_texts with multiple threads; the effect is modest, the bottleneck is probably on the server providing the Embedding
* Fix the commented-out glm3 agent conversation-text parsing code. The output placeholders look like the following, and the current parsing code gets them wrong: Thought <|assistant|> Action\r ```python tool_call(action_input) ```<|observation|>
* make qwen agent work with langchain>=0.1 (#3228)
* make xinference model manager support xinference 0.9.x
* Use multiple processes to speed up knowledge-base imports (#3276)
* xinference code, uploading first, to be revised later
* Delete server/xinference directory
* Create khazic
* diiii diii
* Revert "xinference code"
* fix markdown header split (#1825) (#3324)
* dify model_providers configuration This module provides the interface for invoking and authenticating various models, and offers Dify a unified information and credentials form rule for model providers.
* fix merge conflict: langchain Embeddings not imported in server.utils
* Add the new WEBUI written in react (#3417) * feat: commit the frontend code * feat: commit logo style switching * feat: replace the avatar, some icons, the chatchat-related copy, the git link, the Wiki link, About, Settings, and Feedback entries; disable the lobehub self-update check * fix: remove redundant code --------- Co-authored-by: liunux4odoo <41217877+liunux4odoo@users.noreply.github.com>
* model_providers bootstrap
* model_providers bootstrap
* update to pydantic v2 (#3486)
* Manage the project with poetry
* Manage the project with poetry
* Resolve the pydantic version conflict on the dev branch; add an ollama configuration, supporting ollama chat and embedding endpoints (#3508) 1. Because the dev branch upgraded pydantic to v2 while class History(BaseModel) still used from server.pydantic_v1, and fastapi now references pydantic v2, fastapi validated a v1-defined object with v2; when the conversation history was not empty this raised: TypeError: BaseModel.validate() takes 2 positional arguments but 3 were given. Testing showed the fix is to use v2 in class History(BaseModel) as well; 2. the config file adds ollama platform settings modeled on the other platforms; users can add chat models as needed, and the embedding model currently supports nomic-embed-text (ollama must be upgraded to 0.1.29 or later); 3. ollama's official API is openai-compatible only for chat, not yet for embeddings, but the official langchain library supports OllamaEmbeddings, so support code was added to the get_Embeddings method. * Fix DocumentWithVsID and /v1/embeddings compatibility after the pydantic v2 upgrade --------- Co-authored-by: srszzw <srszzw@163.com> Co-authored-by: liunux4odoo <liunux@qq.com>
* Lower the python requirement to py38
* fix bugs; make poetry using tsinghua mirror of pypi
* update gitignore; remove unignored files
* update wiki sub module
* 20240326
* 20240326
* qqqq
* Delete historical files
* Move project modules
* update .gitignore; fix model version error in api_schemas
* Wrap ModelManager
* - Rewrite the tool layer: (#3553) - simplify how tools are defined - all tools and tool_config support hot reloading - fix: json_schema_extra warning
* Load user-config adapters from yaml
* Format the code
* Format
* Improve tool definitions; add an openai-compatible unified chat endpoint (#3570) - Fixes: - the Qwen Agent's OutputParser no longer raises; non-COT text is returned directly - the CallbackHandler handles tool-call information correctly - Rewrite the tool definitions: - add regist_tool to simplify tool definitions: - a user-friendly name can be specified - the function's __doc__ is used as tool.description automatically - parameters can be declared with Field, with no extra ModelSchema needed - add BaseToolOutput to wrap tool results so that both the raw value and the string handed to the LLM are available - support tool hot reloading (to be tested) - Add an openai-compatible unified chat endpoint; different combinations of tools/tool_choice/extra_body support: - Agent chat - explicit tool calls (e.g. knowledge-base RAG) - plain LLM chat - Update the webui to match the backend
* Fixes: wrong return value of the search_local_knowledge_base tool; broken /tools route; "thinking" shown forever in the webui (#3571)
* Add an openai-compatible files endpoint (#3573)
* Load RESTFulOpenAIBootstrapBaseWeb through BootstrapWebBuilder
* Formatting and code-review notes
* Adapt the model list
* make format
* Adapt the chat_completions request payloads
* make format
* xinference plugin example
* Some default parameters
* exec path fix
* Fix malformed JSON returned when running an agent against qwen deployed with ollama
* provider_configuration.py queries all platform information, including billing policies and the schema_validators config (validation rules for required parameters); /workspaces/current/model-providers queries the detailed defaults per platform model category, including model type, parameters and status; workspaces/current/models/model-types/{model_type}
* Development manual
* Support model_providers; integrate platform-config initialization into the webui and API (#3625) * provider_configuration init of MODEL_PLATFORMS * Development manual * Support model_providers; integrate platform-config initialization into the webui and API
* Dev model providers (#3628)
* gemini initialization-parameter issue
* gemini synchronous tool calls
* embedding convert endpoint
* Fix the --api -w command
* Change the /v1/models return value from List[Model] to {'data': List[Model]} for compatibility with the latest xinference
* 3.8 compatibility (#3769) * Add usage instructions * 3.8-compatibility configuration * fix * formater
* Compatibility test cases across platforms
* embedding compatibility
* Add log messages
* pip source settings, some version issues, startup and configuration notes (#3854) * Repository settings, some version issues * pip source settings, some version issues, startup notes * Configuration notes
* Wrong generic annotations (#3855) * Repository settings, some version issues * pip source settings, some version issues, startup notes * Configuration notes * Release dependency info * Wrong generic annotations * Wrong generic annotations
* CICD github action build publish pypi, Release Tag (#3886) * Test cases * CICD pipeline * CICD pipeline * CICD pipeline
* Some agent data-handling issues; documentation for the model_runtime module (#3943) * Some agent data-handling issues * Changes: - Translated and updated the Model Runtime documentation to reflect the latest changes and features. - Clarified the decoupling benefits of the Model Runtime module from the Chatchat service. - Removed outdated information regarding the model configuration storage module. - Detailed the retained functionalities post-removal of the Dify configuration page. - Provided a comprehensive overview of the Model Runtime's three-layered structure. - Included the status of the `fetch-from-remote` feature and its non-implementation in Dify. - Added instructions for custom service provider model capabilities.
* - New features (#3944) - streamlit updated to 1.34; the webui supports Dialog operations - streamlit-chatbox updated to 1.1.12 for better multi-session support - Developers - add a project image route (/img/{file_name}) to the API for the frontend
* Rename the package
* Update the package metadata
* ollama config-parsing issue
* Dynamic loading of user configuration (#3951)
* version = "0.3.0.20240506"
* version = "0.3.0.20240506"
* version = "0.3.0.20240506"
* version = "0.3.0.20240506"
* Startup notes
* Some bugs
* Fixed some config-reload bugs
* Changed the config-loading behavior
* Changed the config-loading behavior
* Agent code cleanup
* Upgrade the ollama code to use the openai protocol
* Support the deepseek client
* contributing (#4043) * Added the contribution guide docs/contributing, covering the repository layout and development conventions, plus some unit-test examples under model_providers * Configuration notes for providers
* python3.8 compatibility
* python3.8 compatibility
* ollama compatibility
* ollama compatibility
* Some code compatible with pydantic<3,>=1.9.0
* Some model_config code compatible with pydantic<3,>=1.9.0
* make format
* test
* Bump the version
* get_img_base64
* get_img_base64
* get_img_base64
* get_img_base64
* get_img_base64
* Unify the model-type codes
* Embedding-processing issue
* Optimize the directory structure (#4058) * Optimize the directory structure * Fix some test issues --------- Co-authored-by: glide-the <2533736852@qq.com>
* repositories
* Adjust logging
* Adjust logging zdf
* Add optional dependency extras
* feat: Added some documentation. (#4085) * feat: Added some documentation. * feat: Added some documentation. --------- Co-authored-by: yuehuazhang <yuehuazhang@tencent.com>
* fix code.md typos
* fix chatchat-server/pyproject.toml typos
* feat: README (#4118) Co-authored-by: yuehuazhang <yuehuazhang@tencent.com>
* Initialize the database integration with model_providers
* Disable the daemon process
* 1. Change the knowledge-base list endpoint to return all attribute fields and update the affected code. (#4119) 2. Change run_in_process_pool to run_in_thread_pool to fix a compatibility problem. 3. Fix the poetry config file.
* Dynamically update the knowledge-base descriptions in the Prompt so the LLM can more easily decide which knowledge base to use. (#4121) * 1. Change the knowledge-base list endpoint to return all attribute fields and update the affected code. 2. Change run_in_process_pool to run_in_thread_pool to fix a compatibility problem. 3. Fix the poetry config file. * 1. Dynamically update the knowledge-base descriptions in the Prompt so the LLM can more easily decide which knowledge base to use.
* fix: add the xinference configuration info (#4123) * feat: README * feat: add the xinference platform llm and embedding model configuration. --------- Co-authored-by: yuehuazhang <yuehuazhang@tencent.com>
* The knowledge-base tool dropdown is now fetched dynamically, with no service restart needed. (#4126) * 1. The knowledge-base tool dropdown is now fetched dynamically, with no service restart needed.
* update README and imgs
* update README and imgs
* update README and imgs
* update README and imgs
* Fix wording in the installation instructions
* make formater
* Bump the version to "0.3.0.20240606
* Update code.md
* Improve the knowledge-base features (#4153) - New features - the pypi package adds a chatchat-kb command script matching init_database.py - Developers - _model_config.py includes the xinference configuration by default - all vector-store operations first check that the current Embed model is available - the /knowledge_base/create_knowledge_base endpoint gains a kb_info parameter - the /knowledge_base/list_files endpoint returns all database fields rather than a list of file names - fix the xinference model-management script
* Silence warnings
* Some dependency issues
* Add a text2sql tool that supports specific tables, automatic table selection, and extra table-name descriptions (#4154) * 1. Add a text2sql tool that supports specific tables, automatic table selection, and extra table-name descriptions
* Support most SQLAlchemy databases, add a read-only mode for better safety, and add text2sql usage advice (#4155) * 1. Change the text2sql connection configuration to support most SQLAlchemy databases; 2. add a read-only mode: when the database needs write protection, writes are intercepted at two levels, the LLM's own judgment and a SQLAlchemy interceptor, improving safety; 3. add text2sql usage advice;
* dotenv
* dotenv configuration
* User workspace operations (#4156) Workspace configuration presets, with a ConfigBasic builder that produces instances. Instances of this class store workspace configuration, such as the workspace path. The workspace configuration is stored in the .config/chatchat/workspace/workspace_config.json file under the user's home directory. Note: defaults are read when the file does not exist. An entry point is provided: the `chatchat-config` command for workspace configuration. options: ``` -h, --help show this help message and exit -v {true,false}, --verbose {true,false} enable verbose logging -d DATA, --data DATA data directory -f FORMAT, --format FORMAT log format --clear clear the configuration ```
* Configuration path issue
* fix faiss_cache bug
* Feature(File RAG): add file_rag in chatchat-server, add ensemble retriever and vectorstore retriever.
* Feature(File RAG): add file_rag in chatchat-server, add ensemble retriever and vectorstore retriever.
* fix xinference manager bug
* Fix(File RAG): use jieba instead of cutword
* Fix(File RAG): update kb_doc_api.py
* Workspace configuration presets, with a builder that produces instances. (#4158) - ConfigWorkSpace interface notes ```text ConfigWorkSpace is an abstract class for workspace configuration, providing basic storage and retrieval of configuration data. A ConfigFactory builder produces instances. Instances of this class store workspace configuration, such as the workspace path. The workspace configuration is stored in the .chatchat/workspace/workspace_config.json file under the user's home directory. Note: defaults are read when the file does not exist. ``` * Write the configuration docs * Write the configuration docs
---------
Co-authored-by: liunux4odoo <41217877+liunux4odoo@users.noreply.github.com>
Co-authored-by: glide-the <2533736852@qq.com>
Co-authored-by: tonysong <tonysong@digitalgd.com.cn>
Co-authored-by: songpb <songpb@gmail.com>
Co-authored-by: showmecodett <showmecodett@gmail.com>
Co-authored-by: zR <2448370773@qq.com>
Co-authored-by: zqt <1178747941@qq.com>
Co-authored-by: zqt996 <67185303+zqt996@users.noreply.github.com>
Co-authored-by: fengyaojie <fengyaojie@xdf.cn>
Co-authored-by: Hans WAN <hanswan@tom.com>
Co-authored-by: thinklover <thinklover@gmail.com>
Co-authored-by: liunux4odoo <liunux@qq.com>
Co-authored-by: xucailiang <74602715+xucailiang@users.noreply.github.com>
Co-authored-by: XuCai <liangxc@akulaku.com>
Co-authored-by: dignfei <913015993@qq.com>
Co-authored-by: Leb <khazzz1c@gmail.com>
Co-authored-by: Sumkor <sumkor@foxmail.com>
Co-authored-by: panhong <381500590@qq.com>
Co-authored-by: srszzw <741992282@qq.com>
Co-authored-by: srszzw <srszzw@163.com>
Co-authored-by: yuehua-s <41819795+yuehua-s@users.noreply.github.com>
Co-authored-by: yuehuazhang <yuehuazhang@tencent.com>
763 lines
26 KiB
Python
from fastapi import FastAPI
from pathlib import Path
import asyncio
import os
import sys
import multiprocessing as mp
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor, as_completed

from langchain_core.embeddings import Embeddings
from langchain.tools import BaseTool
from langchain_openai.chat_models import ChatOpenAI
from langchain_openai.llms import OpenAI
import httpx
import openai
from typing import (
    Optional,
    Callable,
    Generator,
    Dict,
    List,
    Any,
    Awaitable,
    Union,
    Tuple,
    Literal,
)

from chatchat.configs import (log_verbose, HTTPX_DEFAULT_TIMEOUT,
                              DEFAULT_LLM_MODEL, DEFAULT_EMBEDDING_MODEL, TEMPERATURE,
                              MODEL_PLATFORMS)
from chatchat.server.pydantic_v2 import BaseModel, Field

import logging

logger = logging.getLogger()
async def wrap_done(fn: Awaitable, event: asyncio.Event):
    """Wrap an awaitable with an event to signal when it's done or an exception is raised."""
    try:
        await fn
    except Exception as e:
        logging.exception(e)
        msg = f"Caught exception: {e}"
        logger.error(f'{e.__class__.__name__}: {msg}',
                     exc_info=e if log_verbose else None)
    finally:
        # Signal the aiter to stop.
        event.set()
def get_config_platforms() -> Dict[str, Dict]:
    # import importlib
    # reloading is not supported
    # from chatchat.configs import model_config
    # importlib.reload(model_config)

    return {m["platform_name"]: m for m in MODEL_PLATFORMS}
def get_config_models(
    model_name: str = None,
    model_type: Literal["llm", "embed", "image", "reranking", "speech2text", "tts"] = None,
    platform_name: str = None,
) -> Dict[str, Dict]:
    '''
    Get the configured models. The return value is:
    {model_name: {
        "platform_name": xx,
        "platform_type": xx,
        "model_type": xx,
        "model_name": xx,
        "api_base_url": xx,
        "api_key": xx,
        "api_proxy": xx,
    }}
    '''
    # import importlib
    # reloading is not supported
    # from chatchat.configs import model_config
    # importlib.reload(model_config)

    result = {}
    for m in MODEL_PLATFORMS:
        if platform_name is not None and platform_name != m.get("platform_name"):
            continue
        if model_type is not None and f"{model_type}_models" not in m:
            continue

        if model_type is None:
            model_types = [
                "llm_models",
                "embed_models",
                "image_models",
                "reranking_models",
                "speech2text_models",
                "tts_models",
            ]
        else:
            model_types = [f"{model_type}_models"]

        for m_type in model_types:
            for m_name in m.get(m_type, []):
                if model_name is None or model_name == m_name:
                    result[m_name] = {
                        "platform_name": m.get("platform_name"),
                        "platform_type": m.get("platform_type"),
                        "model_type": m_type.split("_")[0],
                        "model_name": m_name,
                        "api_base_url": m.get("api_base_url"),
                        "api_key": m.get("api_key"),
                        "api_proxy": m.get("api_proxy"),
                    }
    return result
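# Usage sketch for get_config_models (hedged: "qwen2-instruct" and the "xinference"
# platform are hypothetical names that would have to exist in MODEL_PLATFORMS):
#
#     all_llms = get_config_models(model_type="llm")          # every configured LLM
#     one = get_config_models(model_name="qwen2-instruct",
#                             platform_name="xinference")     # narrowed to a single entry
#     base_url = one.get("qwen2-instruct", {}).get("api_base_url")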
def get_model_info(model_name: str = None, platform_name: str = None, multiple: bool = False) -> Dict:
    '''
    Get the configured model information, mainly api_base_url and api_key.
    If multiple=True, return all models with the same name; otherwise return only the first one.
    '''
    result = get_config_models(model_name=model_name, platform_name=platform_name)
    if len(result) > 0:
        if multiple:
            return result
        else:
            return list(result.values())[0]
    else:
        return {}
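# Usage sketch for get_model_info (hedged: "bge-large-zh-v1.5" is a hypothetical
# configured embedding model):
#
#     info = get_model_info(model_name="bge-large-zh-v1.5")
#     base_url, api_key = info.get("api_base_url"), info.get("api_key")
#     # with multiple=True, every same-named model across platforms is returned:
#     dups = get_model_info(model_name="bge-large-zh-v1.5", multiple=True)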
def get_ChatOpenAI(
    model_name: str = DEFAULT_LLM_MODEL,
    temperature: float = TEMPERATURE,
    max_tokens: int = None,
    streaming: bool = True,
    callbacks: List[Callable] = [],
    verbose: bool = True,
    local_wrap: bool = False,  # use local wrapped api
    **kwargs: Any,
) -> ChatOpenAI:
    model_info = get_model_info(model_name)
    params = dict(
        streaming=streaming,
        verbose=verbose,
        callbacks=callbacks,
        model_name=model_name,
        temperature=temperature,
        max_tokens=max_tokens,
        **kwargs
    )
    try:
        if local_wrap:
            params.update(
                openai_api_base=f"{api_address()}/v1",
                openai_api_key="EMPTY",
            )
        else:
            params.update(
                openai_api_base=model_info.get("api_base_url"),
                openai_api_key=model_info.get("api_key"),
                openai_proxy=model_info.get("api_proxy"),
            )
        model = ChatOpenAI(**params)
    except Exception as e:
        logger.error(f"failed to create ChatOpenAI for model: {model_name}.", exc_info=True)
        model = None
    return model
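# Usage sketch for get_ChatOpenAI (hedged: assumes DEFAULT_LLM_MODEL is served by
# one of the configured platforms):
#
#     llm = get_ChatOpenAI(temperature=0.1, max_tokens=1024)
#     if llm is not None:  # None means construction failed and was logged above
#         print(llm.invoke("你好").content)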
def get_OpenAI(
    model_name: str,
    temperature: float,
    max_tokens: int = None,
    streaming: bool = True,
    echo: bool = True,
    callbacks: List[Callable] = [],
    verbose: bool = True,
    local_wrap: bool = False,  # use local wrapped api
    **kwargs: Any,
) -> OpenAI:
    # TODO: fetch model information from the API
    model_info = get_model_info(model_name)
    params = dict(
        streaming=streaming,
        verbose=verbose,
        callbacks=callbacks,
        model_name=model_name,
        temperature=temperature,
        max_tokens=max_tokens,
        echo=echo,
        **kwargs
    )
    try:
        if local_wrap:
            params.update(
                openai_api_base=f"{api_address()}/v1",
                openai_api_key="EMPTY",
            )
        else:
            params.update(
                openai_api_base=model_info.get("api_base_url"),
                openai_api_key=model_info.get("api_key"),
                openai_proxy=model_info.get("api_proxy"),
            )
        model = OpenAI(**params)
    except Exception as e:
        logger.error(f"failed to create OpenAI for model: {model_name}.", exc_info=True)
        model = None
    return model
def get_Embeddings(
    embed_model: str = DEFAULT_EMBEDDING_MODEL,
    local_wrap: bool = False,  # use local wrapped api
) -> Embeddings:
    from langchain_community.embeddings.openai import OpenAIEmbeddings
    from langchain_community.embeddings import OllamaEmbeddings
    from chatchat.server.localai_embeddings import LocalAIEmbeddings  # TODO: fork of lc pr #17154

    model_info = get_model_info(model_name=embed_model)
    params = dict(model=embed_model)
    try:
        if local_wrap:
            params.update(
                openai_api_base=f"{api_address()}/v1",
                openai_api_key="EMPTY",
            )
        else:
            params.update(
                openai_api_base=model_info.get("api_base_url"),
                openai_api_key=model_info.get("api_key"),
                openai_proxy=model_info.get("api_proxy"),
            )
        if model_info.get("platform_type") == "openai":
            return OpenAIEmbeddings(**params)
        elif model_info.get("platform_type") == "ollama":
            # ollama's embedding API is not openai-compatible, so use langchain's OllamaEmbeddings
            return OllamaEmbeddings(base_url=model_info.get("api_base_url").replace('/v1', ''),
                                    model=embed_model,
                                    )
        else:
            return LocalAIEmbeddings(**params)
    except Exception as e:
        logger.error(f"failed to create Embeddings for model: {embed_model}.", exc_info=True)
        # falls through and implicitly returns None; callers should check for that
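# Usage sketch for get_Embeddings (hedged: assumes DEFAULT_EMBEDDING_MODEL is
# reachable; on failure the function logs and falls through to None):
#
#     embeddings = get_Embeddings()
#     if embeddings is not None:
#         vector = embeddings.embed_query("test text")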
def check_embed_model(embed_model: str = DEFAULT_EMBEDDING_MODEL) -> bool:
    embeddings = get_Embeddings(embed_model=embed_model)
    try:
        embeddings.embed_query("this is a test")
        return True
    except Exception as e:
        logger.error(f"failed to access embed model '{embed_model}': {e}", exc_info=True)
        return False
def get_OpenAIClient(
    platform_name: str = None,
    model_name: str = None,
    is_async: bool = True,
) -> Union[openai.Client, openai.AsyncClient]:
    '''
    construct an openai Client for the specified platform or model
    '''
    if platform_name is None:
        platform_info = get_model_info(model_name=model_name, platform_name=platform_name)
        if not platform_info:  # get_model_info returns {} when nothing matches
            raise RuntimeError(f"cannot find configured platform for model: {model_name}")
        platform_name = platform_info.get("platform_name")
    platform_info = get_config_platforms().get(platform_name)
    assert platform_info, f"cannot find configured platform: {platform_name}"
    params = {
        "base_url": platform_info.get("api_base_url"),
        "api_key": platform_info.get("api_key"),
    }
    httpx_params = {}
    if api_proxy := platform_info.get("api_proxy"):
        httpx_params = {
            "proxies": api_proxy,
            "transport": httpx.HTTPTransport(local_address="0.0.0.0"),
        }

    if is_async:
        if httpx_params:
            params["http_client"] = httpx.AsyncClient(**httpx_params)
        return openai.AsyncClient(**params)
    else:
        if httpx_params:
            params["http_client"] = httpx.Client(**httpx_params)
        return openai.Client(**params)
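# Usage sketch for get_OpenAIClient (hedged: "xinference" is a hypothetical
# platform_name from MODEL_PLATFORMS, "qwen2-instruct" a hypothetical model on it):
#
#     client = get_OpenAIClient(platform_name="xinference", is_async=False)
#     resp = client.chat.completions.create(
#         model="qwen2-instruct",
#         messages=[{"role": "user", "content": "hello"}],
#     )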
class MsgType:
    TEXT = 1
    IMAGE = 2
    AUDIO = 3
    VIDEO = 4
class BaseResponse(BaseModel):
    code: int = Field(200, description="API status code")
    msg: str = Field("success", description="API status message")
    data: Any = Field(None, description="API data")

    class Config:
        json_schema_extra = {
            "example": {
                "code": 200,
                "msg": "success",
            }
        }


class ListResponse(BaseResponse):
    data: List[Any] = Field(..., description="List of data")

    class Config:
        json_schema_extra = {
            "example": {
                "code": 200,
                "msg": "success",
                "data": ["doc1.docx", "doc2.pdf", "doc3.txt"],
            }
        }
class ChatMessage(BaseModel):
    question: str = Field(..., description="Question text")
    response: str = Field(..., description="Response text")
    history: List[List[str]] = Field(..., description="History text")
    source_documents: List[str] = Field(
        ..., description="List of source documents and their scores"
    )

    class Config:
        json_schema_extra = {
            "example": {
                "question": "工伤保险如何办理?",
                "response": "根据已知信息,可以总结如下:\n\n1. 参保单位为员工缴纳工伤保险费,以保障员工在发生工伤时能够获得相应的待遇。\n"
                            "2. 不同地区的工伤保险缴费规定可能有所不同,需要向当地社保部门咨询以了解具体的缴费标准和规定。\n"
                            "3. 工伤从业人员及其近亲属需要申请工伤认定,确认享受的待遇资格,并按时缴纳工伤保险费。\n"
                            "4. 工伤保险待遇包括工伤医疗、康复、辅助器具配置费用、伤残待遇、工亡待遇、一次性工亡补助金等。\n"
                            "5. 工伤保险待遇领取资格认证包括长期待遇领取人员认证和一次性待遇领取人员认证。\n"
                            "6. 工伤保险基金支付的待遇项目包括工伤医疗待遇、康复待遇、辅助器具配置费用、一次性工亡补助金、丧葬补助金等。",
                "history": [
                    [
                        "工伤保险是什么?",
                        "工伤保险是指用人单位按照国家规定,为本单位的职工和用人单位的其他人员,缴纳工伤保险费,"
                        "由保险机构按照国家规定的标准,给予工伤保险待遇的社会保险制度。",
                    ]
                ],
                "source_documents": [
                    "出处 [1] 广州市单位从业的特定人员参加工伤保险办事指引.docx:\n\n\t"
                    "( 一) 从业单位 (组织) 按“自愿参保”原则, 为未建 立劳动关系的特定从业人员单项参加工伤保险 、缴纳工伤保 险费。",
                    "出处 [2] ...",
                    "出处 [3] ...",
                ],
            }
        }
def run_async(cor):
    '''
    Run async code in a synchronous environment.
    '''
    try:
        loop = asyncio.get_event_loop()
    except RuntimeError:  # no current event loop in this thread
        loop = asyncio.new_event_loop()
    return loop.run_until_complete(cor)
def iter_over_async(ait, loop=None):
    '''
    Wrap an async generator as a synchronous generator.
    '''
    ait = ait.__aiter__()

    async def get_next():
        try:
            obj = await ait.__anext__()
            return False, obj
        except StopAsyncIteration:
            return True, None

    if loop is None:
        try:
            loop = asyncio.get_event_loop()
        except RuntimeError:  # no current event loop in this thread
            loop = asyncio.new_event_loop()

    while True:
        done, obj = loop.run_until_complete(get_next())
        if done:
            break
        yield obj
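# Usage sketch for iter_over_async: consume an async generator from plain
# synchronous code (self-contained example, no external assumptions):
#
#     async def agen():
#         for i in range(3):
#             yield i
#
#     for item in iter_over_async(agen()):
#         print(item)  # 0, 1, 2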
def MakeFastAPIOffline(
    app: FastAPI,
    static_dir=Path(__file__).parent / "api_server" / "static",
    static_url="/static-offline-docs",
    docs_url: Optional[str] = "/docs",
    redoc_url: Optional[str] = "/redoc",
) -> None:
    """patch the FastAPI app so that its documentation pages don't rely on a CDN"""
    from fastapi import Request
    from fastapi.openapi.docs import (
        get_redoc_html,
        get_swagger_ui_html,
        get_swagger_ui_oauth2_redirect_html,
    )
    from fastapi.staticfiles import StaticFiles
    from starlette.responses import HTMLResponse

    openapi_url = app.openapi_url
    swagger_ui_oauth2_redirect_url = app.swagger_ui_oauth2_redirect_url

    def remove_route(url: str) -> None:
        '''
        remove original route from app
        '''
        index = None
        for i, r in enumerate(app.routes):
            if r.path.lower() == url.lower():
                index = i
                break
        if isinstance(index, int):
            app.routes.pop(index)

    # Set up static file mount
    app.mount(
        static_url,
        StaticFiles(directory=Path(static_dir).as_posix()),
        name="static-offline-docs",
    )

    if docs_url is not None:
        remove_route(docs_url)
        remove_route(swagger_ui_oauth2_redirect_url)

        # Define the doc and redoc pages, pointing at the right files
        @app.get(docs_url, include_in_schema=False)
        async def custom_swagger_ui_html(request: Request) -> HTMLResponse:
            root = request.scope.get("root_path")
            favicon = f"{root}{static_url}/favicon.png"
            return get_swagger_ui_html(
                openapi_url=f"{root}{openapi_url}",
                title=app.title + " - Swagger UI",
                oauth2_redirect_url=swagger_ui_oauth2_redirect_url,
                swagger_js_url=f"{root}{static_url}/swagger-ui-bundle.js",
                swagger_css_url=f"{root}{static_url}/swagger-ui.css",
                swagger_favicon_url=favicon,
            )

        @app.get(swagger_ui_oauth2_redirect_url, include_in_schema=False)
        async def swagger_ui_redirect() -> HTMLResponse:
            return get_swagger_ui_oauth2_redirect_html()

    if redoc_url is not None:
        remove_route(redoc_url)

        @app.get(redoc_url, include_in_schema=False)
        async def redoc_html(request: Request) -> HTMLResponse:
            root = request.scope.get("root_path")
            favicon = f"{root}{static_url}/favicon.png"

            return get_redoc_html(
                openapi_url=f"{root}{openapi_url}",
                title=app.title + " - ReDoc",
                redoc_js_url=f"{root}{static_url}/redoc.standalone.js",
                with_google_fonts=False,
                redoc_favicon_url=favicon,
            )
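# Usage sketch for MakeFastAPIOffline (hedged: assumes the bundled static_dir with
# the swagger/redoc assets exists at its default location):
#
#     app = FastAPI(title="chatchat demo")
#     MakeFastAPIOffline(app)  # /docs and /redoc now load JS/CSS from /static-offline-docs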
# Get model information from model_config
# TODO: once model loading is moved out, these features should be removed or reimplemented

# def list_embed_models() -> List[str]:
#     '''
#     get names of configured embedding models
#     '''
#     return list(MODEL_PATH["embed_model"])


# def get_model_path(model_name: str, type: str = None) -> Optional[str]:
#     if type in MODEL_PATH:
#         paths = MODEL_PATH[type]
#     else:
#         paths = {}
#         for v in MODEL_PATH.values():
#             paths.update(v)

#     if path_str := paths.get(model_name):  # take "chatglm-6b": "THUDM/chatglm-6b-new" as an example; all paths below are supported
#         path = Path(path_str)
#         if path.is_dir():  # any absolute path
#             return str(path)

#         root_path = Path(MODEL_ROOT_PATH)
#         if root_path.is_dir():
#             path = root_path / model_name
#             if path.is_dir():  # use key, {MODEL_ROOT_PATH}/chatglm-6b
#                 return str(path)
#             path = root_path / path_str
#             if path.is_dir():  # use value, {MODEL_ROOT_PATH}/THUDM/chatglm-6b-new
#                 return str(path)
#             path = root_path / path_str.split("/")[-1]
#             if path.is_dir():  # use value split by "/", {MODEL_ROOT_PATH}/chatglm-6b-new
#                 return str(path)
#         return path_str  # THUDM/chatglm06b
def api_address() -> str:
    from chatchat.configs import API_SERVER

    host = API_SERVER["host"]
    if host == "0.0.0.0":
        host = "127.0.0.1"
    port = API_SERVER["port"]
    return f"http://{host}:{port}"


def webui_address() -> str:
    from chatchat.configs import WEBUI_SERVER

    host = WEBUI_SERVER["host"]
    port = WEBUI_SERVER["port"]
    return f"http://{host}:{port}"
def get_prompt_template(type: str, name: str) -> Optional[str]:
    '''
    Load a template from prompt_config.
    type: one of "llm_chat", "knowledge_base_chat", "search_engine_chat"; new features should be added here.
    '''

    from chatchat.configs import PROMPT_TEMPLATES

    return PROMPT_TEMPLATES.get(type, {}).get(name)
def set_httpx_config(
    timeout: float = HTTPX_DEFAULT_TIMEOUT,
    proxy: Union[str, Dict] = None,
    unused_proxies: List[str] = [],
):
    '''
    Set the default httpx timeout. The httpx default is 5 seconds, which is not enough when requesting an LLM answer.
    Add this project's own services to the no-proxy list to avoid request errors against fastchat servers (has no effect on Windows).
    For online APIs such as chatgpt, a proxy must be configured manually if needed. How to handle proxies for search engines still needs thought.
    '''

    import httpx
    import os

    httpx._config.DEFAULT_TIMEOUT_CONFIG.connect = timeout
    httpx._config.DEFAULT_TIMEOUT_CONFIG.read = timeout
    httpx._config.DEFAULT_TIMEOUT_CONFIG.write = timeout

    # set system-level proxies process-wide
    proxies = {}
    if isinstance(proxy, str):
        for n in ["http", "https", "all"]:
            proxies[n + "_proxy"] = proxy
    elif isinstance(proxy, dict):
        for n in ["http", "https", "all"]:
            if p := proxy.get(n):
                proxies[n + "_proxy"] = p
            elif p := proxy.get(n + "_proxy"):
                proxies[n + "_proxy"] = p

    for k, v in proxies.items():
        os.environ[k] = v

    # set hosts to bypass the proxy
    no_proxy = [x.strip() for x in os.environ.get("no_proxy", "").split(",") if x.strip()]
    no_proxy += [
        # do not use proxy for localhost
        "http://127.0.0.1",
        "http://localhost",
    ]
    # do not use proxy for user-deployed fastchat servers
    for x in unused_proxies:
        host = ":".join(x.split(":")[:2])
        if host not in no_proxy:
            no_proxy.append(host)
    os.environ["NO_PROXY"] = ",".join(no_proxy)

    def _get_proxies():
        return proxies

    import urllib.request
    urllib.request.getproxies = _get_proxies
def run_in_thread_pool(
    func: Callable,
    params: List[Dict] = [],
) -> Generator:
    '''
    Run tasks in batch in a thread pool and yield the results as a generator.
    Make sure everything the tasks do is thread safe, and pass all task arguments as keywords.
    '''
    tasks = []
    with ThreadPoolExecutor() as pool:
        for kwargs in params:
            tasks.append(pool.submit(func, **kwargs))

        for obj in as_completed(tasks):
            try:
                yield obj.result()
            except Exception as e:
                logger.error(f"error in sub thread: {e}", exc_info=True)
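# Usage sketch for run_in_thread_pool: results are yielded in completion order,
# not submission order ("fetch" is a hypothetical keyword-argument-only task):
#
#     def fetch(url: str, timeout: int = 10) -> str:
#         ...
#
#     for result in run_in_thread_pool(fetch, params=[{"url": "http://a"}, {"url": "http://b"}]):
#         print(result)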
def run_in_process_pool(
    func: Callable,
    params: List[Dict] = [],
) -> Generator:
    '''
    Run tasks in batch in a process pool and yield the results as a generator.
    Make sure the tasks are safe to run in subprocesses (func and its arguments must be picklable), and pass all task arguments as keywords.
    '''
    tasks = []
    max_workers = None
    if sys.platform.startswith("win"):
        max_workers = min(mp.cpu_count(), 60)  # max_workers should not exceed 60 on windows
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        for kwargs in params:
            tasks.append(pool.submit(func, **kwargs))

        for obj in as_completed(tasks):
            try:
                yield obj.result()
            except Exception as e:
                logger.error(f"error in sub process: {e}", exc_info=True)
def get_httpx_client(
    use_async: bool = False,
    proxies: Union[str, Dict] = None,
    timeout: float = HTTPX_DEFAULT_TIMEOUT,
    unused_proxies: List[str] = [],
    **kwargs,
) -> Union[httpx.Client, httpx.AsyncClient]:
    '''
    helper to get an httpx client with default proxies that bypass local addresses.
    '''
    default_proxies = {
        # do not use proxy for localhost
        "all://127.0.0.1": None,
        "all://localhost": None,
    }
    # do not use proxy for user-deployed fastchat servers
    for x in unused_proxies:
        host = ":".join(x.split(":")[:2])
        default_proxies.update({host: None})

    # get proxies from the system environment
    # treat empty strings, None, False, 0, [] and {} as "no proxy"
    default_proxies.update({
        "http://": (os.environ.get("http_proxy")
                    if os.environ.get("http_proxy") and len(os.environ.get("http_proxy").strip())
                    else None),
        "https://": (os.environ.get("https_proxy")
                     if os.environ.get("https_proxy") and len(os.environ.get("https_proxy").strip())
                     else None),
        "all://": (os.environ.get("all_proxy")
                   if os.environ.get("all_proxy") and len(os.environ.get("all_proxy").strip())
                   else None),
    })
    for host in os.environ.get("no_proxy", "").split(","):
        if host := host.strip():
            # default_proxies.update({host: None})  # original code
            default_proxies.update({'all://' + host: None})  # PR 1838 fix; without the 'all://' prefix httpx raises an error

    # merge default proxies with user-provided proxies
    if isinstance(proxies, str):
        proxies = {"all://": proxies}

    if isinstance(proxies, dict):
        default_proxies.update(proxies)

    # construct the Client
    kwargs.update(timeout=timeout, proxies=default_proxies)

    if use_async:
        return httpx.AsyncClient(**kwargs)
    else:
        return httpx.Client(**kwargs)
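# Usage sketch for get_httpx_client: a client that honours the system proxy
# settings but bypasses them for localhost (assumes the local API server is up):
#
#     with get_httpx_client(timeout=30) as client:
#         resp = client.get(f"{api_address()}/v1/models")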
def get_server_configs() -> Dict:
    '''
    Return the raw configuration items from configs for the frontend.
    '''
    _custom = {
        "api_address": api_address(),
    }

    return {**{k: v for k, v in locals().items() if k[0] != "_"}, **_custom}
def get_temp_dir(id: str = None) -> Tuple[str, str]:
    '''
    Create a temporary directory and return (path, directory name).
    '''
    from chatchat.configs import BASE_TEMP_DIR
    import uuid

    if id is not None:  # if the given temp directory already exists, return it directly
        path = os.path.join(BASE_TEMP_DIR, id)
        if os.path.isdir(path):
            return path, id

    id = uuid.uuid4().hex
    path = os.path.join(BASE_TEMP_DIR, id)
    os.mkdir(path)
    return path, id
# Dynamically refresh the knowledge-base information in the tool description
def update_search_local_knowledgebase_tool():
    import re
    from chatchat.server.agent.tools_factory import tools_registry
    from chatchat.server.db.repository.knowledge_base_repository import list_kbs_from_db

    kbs = list_kbs_from_db()
    template = ("Use the local knowledge base from one or more of these:\n{KB_info}\n"
                "to get information. Only use this tool for local data in these knowledge bases. "
                "The 'database' should be one of the above [{key}].")
    KB_info_str = '\n'.join([f"{kb.kb_name}: {kb.kb_info}" for kb in kbs])
    KB_name_info_str = '\n'.join([f"{kb.kb_name}" for kb in kbs])
    template_knowledge = template.format(KB_info=KB_info_str, key=KB_name_info_str)

    search_local_knowledgebase_tool = tools_registry._TOOLS_REGISTRY.get("search_local_knowledgebase")
    if search_local_knowledgebase_tool:
        search_local_knowledgebase_tool.description = " ".join(re.split(r"\n+\s*", template_knowledge))
        search_local_knowledgebase_tool.args["database"]["choices"] = [kb.kb_name for kb in kbs]
def get_tool(name: str = None) -> Union[BaseTool, Dict[str, BaseTool]]:
    import importlib
    from chatchat.server.agent import tools_factory
    importlib.reload(tools_factory)

    from chatchat.server.agent.tools_factory import tools_registry
    update_search_local_knowledgebase_tool()
    if name is None:
        return tools_registry._TOOLS_REGISTRY
    else:
        return tools_registry._TOOLS_REGISTRY.get(name)
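# Usage sketch for get_tool / get_tool_config (hedged: "search_local_knowledgebase"
# is only present if that tool is registered in tools_factory):
#
#     tools = get_tool()                                # name -> BaseTool mapping
#     kb_tool = get_tool("search_local_knowledgebase")
#     kb_config = get_tool_config("search_local_knowledgebase")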
def get_tool_config(name: str = None) -> Dict:
    import importlib
    # TODO: reloading is not supported because the config is updated through variables
    # from chatchat.configs import model_config
    # importlib.reload(model_config)
    from chatchat.configs import TOOL_CONFIG
    if name is None:
        return TOOL_CONFIG
    else:
        return TOOL_CONFIG.get(name, {})