Merge pull request #3517 from chatchat-space/dev_module

Dev module
glide-the 2024-03-26 13:32:05 +08:00 committed by GitHub
commit 596a0f5fa1
775 changed files with 165522 additions and 264 deletions
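
Nearly every Python change in this diff follows a single mechanical pattern: absolute imports rooted at server. are rewritten to chatchat_server., and data, config, and image assets move under the new chatchat/ package directory. A rename of this scale is usually scripted rather than edited by hand; the sketch below is hypothetical and not part of this commit, but it illustrates how such an import rewrite could be applied to a source tree.

# Hypothetical helper, not part of this commit: rewrite `server.*` absolute
# imports to `chatchat_server.*` across a source tree. The paths and module
# prefixes are assumptions; review the resulting diff before committing.
import re
from pathlib import Path

OLD, NEW = "server", "chatchat_server"
# Only touch `from server...` / `import server...` at word boundaries.
PATTERN = re.compile(rf"\b(from|import)\s+{OLD}(?=[.\s])")

for path in Path(".").rglob("*.py"):
    text = path.read_text(encoding="utf-8")
    new_text = PATTERN.sub(lambda m: f"{m.group(1)} {NEW}", text)
    if new_text != text:
        path.write_text(new_text, encoding="utf-8")
        print(f"rewrote {path}")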

.gitignore

@ -1,15 +1,13 @@
*.log
*.log.*
*.bak
logs
/media/
/data/*
!/data/knowledge_base/samples
/data/knowledge_base/samples/vector_store
!/data/nltk_data
/chatchat/chatchat/data/*
!/chatchat/chatchat/data/knowledge_base/samples
/chatchat/chatchat/data/knowledge_base/samples/vector_store
!/chatchat/chatchat/data/nltk_data
/configs/*.py
/configs/loom.yaml
/chatchat/chatchat/configs/*.py
/chatchat/chatchat/configs/loom.yaml
.vscode/
# below are standard python ignore files
@ -183,3 +181,6 @@ configs/*.py
/knowledge_base/samples/content/202311-D平台项目工作大纲参数人员中间库表结构说明V1.1(1).docx
/knowledge_base/samples/content/imi_temeplate.txt
/chatchat/configs/*.py
/chatchat/configs/*.yaml
chatchat/data

.gitmodules

@ -1,3 +1,3 @@
[submodule "knowledge_base/samples/content/wiki"]
path = data/knowledge_base/samples/content/wiki
path = chatchat/chatchat/data/knowledge_base/samples/content/wiki
url = https://github.com/chatchat-space/Langchain-Chatchat.wiki.git
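
The wiki submodule keeps its URL but moves to a new path under chatchat/chatchat/data/. Existing checkouts need to re-sync their submodule configuration after pulling a change like this; the snippet below is only an illustration (plain git on the command line works just as well) and is not part of the commit.

# Hypothetical post-pull step, not part of this commit: re-sync the submodule
# after its path in .gitmodules changed.
import subprocess

for cmd in (
    ["git", "submodule", "sync", "--recursive"],
    ["git", "submodule", "update", "--init", "--recursive"],
):
    subprocess.run(cmd, check=True)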

@ -1,4 +1,4 @@
![](img/logo-long-chatchat-trans-v2.png)
![](chatchat/img/logo-long-chatchat-trans-v2.png)
🌍 [READ THIS IN ENGLISH](README_en.md)
🌍 [日本語で読む](README_ja.md)
@ -47,11 +47,11 @@ OpenAI GPT API 的调用,并将在后续持续扩充对各类模型及模型 A
📺 [原理介绍视频](https://www.bilibili.com/video/BV13M4y1e7cN/?share_source=copy_web&vd_source=e6c5aafe684f30fbe41925d61ca6d514)
![实现原理图](img/langchain+chatglm.png)
![实现原理图](chatchat/img/langchain+chatglm.png)
从文档处理角度来看,实现流程如下:
![实现原理图2](img/langchain+chatglm2.png)
![实现原理图2](chatchat/img/langchain+chatglm2.png)
🚩 本项目未涉及微调、训练过程,但可利用微调或训练对本项目效果进行优化。
@ -152,17 +152,17 @@ $ python startup.py -a
1. FastAPI Docs 界面
![](img/fastapi_docs_026.png)
![](chatchat/img/fastapi_docs_026.png)
2. Web UI 启动界面示例:
- Web UI 对话界面:
![img](img/LLM_success.png)
![img](chatchat/img/LLM_success.png)
- Web UI 知识库管理页面:
![](img/init_knowledge_base.jpg)
![](chatchat/img/init_knowledge_base.jpg)
### 注意
@ -193,12 +193,12 @@ $ python startup.py -a
[![Telegram](https://img.shields.io/badge/Telegram-2CA5E0?style=for-the-badge&logo=telegram&logoColor=white "langchain-chatglm")](https://t.me/+RjliQ3jnJ1YyN2E9)
### 项目交流群
<img src="img/qr_code_96.jpg" alt="二维码" width="300" />
<img src="chatchat/img/qr_code_96.jpg" alt="二维码" width="300" />
🎉 Langchain-Chatchat 项目微信交流群,如果你也对本项目感兴趣,欢迎加入群聊参与讨论交流。
### 公众号
<img src="img/official_wechat_mp_account.png" alt="二维码" width="300" />
<img src="chatchat/img/official_wechat_mp_account.png" alt="二维码" width="300" />
🎉 Langchain-Chatchat 项目官方公众号,欢迎扫码关注。

@ -1,4 +1,4 @@
![](img/logo-long-chatchat-trans-v2.png)
![](chatchat/img/logo-long-chatchat-trans-v2.png)
🌍 [中文文档](README.md)
🌍 [日本語で読む](README_ja.md)
@ -53,11 +53,11 @@ to `LLM` to generate an answer.
📺[video introduction](https://www.bilibili.com/video/BV13M4y1e7cN/?share_source=copy_web&vd_source=e6c5aafe684f30fbe41925d61ca6d514)
![实现原理图](img/langchain+chatglm.png)
![实现原理图](chatchat/img/langchain+chatglm.png)
The main process analysis from the aspect of document process:
![实现原理图2](img/langchain+chatglm2.png)
![实现原理图2](chatchat/img/langchain+chatglm2.png)
🚩 The training or fine-tuning are not involved in the project, but still, one always can improve performance by do
these.
@ -147,17 +147,17 @@ $ python startup.py -a
1. FastAPI docs interface
![](img/fastapi_docs_026.png)
![](chatchat/img/fastapi_docs_026.png)
2. webui page
- Web UI dialog page:
![img](img/LLM_success.png)
![img](chatchat/img/LLM_success.png)
- Web UI knowledge base management page:
![](img/init_knowledge_base.jpg)
![](chatchat/img/init_knowledge_base.jpg)
### Note
@ -192,8 +192,8 @@ please refer to the [Wiki](https://github.com/chatchat-space/Langchain-Chatchat/
### WeChat Group
<img src="img/qr_code_90.jpg" alt="二维码" width="300" height="300" />
<img src="chatchat/img/qr_code_90.jpg" alt="二维码" width="300" height="300" />
### WeChat Official Account
<img src="img/official_wechat_mp_account.png" alt="图片" width="900" height="300" />
<img src="chatchat/img/official_wechat_mp_account.png" alt="图片" width="900" height="300" />

@ -1,4 +1,4 @@
![](img/logo-long-chatchat-trans-v2.png)
![](chatchat/img/logo-long-chatchat-trans-v2.png)
🌍 [中文文档](README.md)
🌍 [READ THIS IN ENGLISH](README_en.md)
@ -54,11 +54,11 @@ Q&Aソリューションを構築することです。
📺[video introduction](https://www.bilibili.com/video/BV13M4y1e7cN/?share_source=copy_web&vd_source=e6c5aafe684f30fbe41925d61ca6d514)
![实现原理图](img/langchain+chatglm.png)
![实现原理图](chatchat/img/langchain+chatglm.png)
文書プロセスの側面からの主なプロセス分析:
![实现原理图2](img/langchain+chatglm2.png)
![实现原理图2](chatchat/img/langchain+chatglm2.png)
🚩 トレーニングやファインチューニングはプロジェクトには含まれないが、これらを行うことで必ずパフォーマンスを向上させることができます。
@ -144,17 +144,17 @@ $ python startup.py -a
1. FastAPI docs インターフェース
![](img/fastapi_docs_026.png)
![](chatchat/img/fastapi_docs_026.png)
2. webui ページ
- Web UI ダイアログページ:
![img](img/LLM_success.png)
![img](chatchat/img/LLM_success.png)
- Web UI ナレッジベースマネジメントページ:
![](img/init_knowledge_base.jpg)
![](chatchat/img/init_knowledge_base.jpg)
### 注
@ -185,8 +185,8 @@ $ python startup.py -a
### WeChat グループ
<img src="img/qr_code_90.jpg" alt="二维码" width="300" height="300" />
<img src="chatchat/img/qr_code_90.jpg" alt="二维码" width="300" height="300" />
### WeChat 公式アカウント
<img src="img/official_wechat_mp_account.png" alt="图片" width="900" height="300" />
<img src="chatchat/img/official_wechat_mp_account.png" alt="图片" width="900" height="300" />

chatchat-server/README.md (new file)

@ -0,0 +1,37 @@
git mv .\chatchat-server\agent .\chatchat_server\
git mv .\chatchat-server\api_server .\chatchat_server\
git mv .\chatchat-server\callback_handler .\chatchat_server\
git mv .\chatchat-server\chat .\chatchat_server\
git mv .\chatchat-server\db .\chatchat_server\
git mv .\chatchat-server\document_loaders .\chatchat_server\
git mv .\chatchat-server\knowledge_base .\chatchat_server\
git mv .\chatchat-server\memory .\chatchat_server\
git mv .\chatchat-server\reranker .\chatchat_server\
git mv .\chatchat-server\text_splitter .\chatchat_server\
git mv .\chatchat-server\__init__.py .\chatchat_server\
git mv .\chatchat-server\api_allinone_stale.py .\chatchat_server\
git mv .\chatchat-server\llm_api_shutdown.py .\chatchat_server\
git mv .\chatchat-server\llm_api_stale.py .\chatchat_server\
git mv .\chatchat-server\localai_embeddings.py .\chatchat_server\
git mv .\chatchat-server\minx_chat_openai.py .\chatchat_server\
git mv .\chatchat-server\pydantic_types.py .\chatchat_server\
git mv .\chatchat-server\utils.py .\chatchat_server\
git mv .\chatchat-server\webui_allinone_stale.py .\chatchat_server\
git mv .\chatchat-server\ .\chatchat_server\
git mv .\chatchat-server\ .\chatchat_server\
git mv .\chatchat-server\ .\chatchat_server\
git mv .\chatchat-server\ .\chatchat_server\
git mv .\chatchat-server\ .\chatchat_server\
git mv .\chatchat-server\ .\chatchat_server\
git mv .\chatchat-server\ .\chatchat_server\
git mv .\chatchat-server\ .\chatchat_server\
git mv .\chatchat-server\ .\chatchat_server\
git mv .\chatchat-server\ .\chatchat_server\
git mv .\chatchat-server\ .\chatchat_server\
git mv .\chatchat-server\ .\chatchat_server\
git mv .\chatchat-server\ .\chatchat_server\
git mv .\chatchat-server\ .\chatchat_server\
git mv .\chatchat-server\ .\chatchat_server\
git mv .\chatchat-server\ .\chatchat_server\

@ -8,8 +8,8 @@ from langchain_core.messages import SystemMessage
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import BaseTool
from server.agent.agent_factory import ( create_structured_qwen_chat_agent)
from server.agent.agent_factory.glm3_agent import create_structured_glm3_chat_agent
from chatchat_server.agent.agent_factory import ( create_structured_qwen_chat_agent)
from chatchat_server.agent.agent_factory.glm3_agent import create_structured_glm3_chat_agent
def agents_registry(
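
From this point on, the diff repeats one change over and over: every from server.… import becomes from chatchat_server.… import. Code outside this repository that still imports the old server package would break after the rename; a temporary alias like the hypothetical sketch below (not part of this commit) can bridge the transition while callers migrate.

# Hypothetical compatibility shim, not part of this commit: alias the old
# top-level package so legacy `from server... import ...` statements still
# resolve. Requires chatchat_server to be importable; submodules reached
# through the alias load as separate module objects, so treat this as a
# stop-gap only.
import importlib
import sys

sys.modules.setdefault("server", importlib.import_module("chatchat_server"))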

@ -16,7 +16,7 @@ from langchain.output_parsers import OutputFixingParser
from langchain.schema import AgentAction, AgentFinish, OutputParserException
from langchain.schema.language_model import BaseLanguageModel
from langchain.tools.base import BaseTool
from server.pydantic_v1 import Field, typing, model_schema
from chatchat_server.pydantic_v1 import Field, typing, model_schema
logger = logging.getLogger(__name__)

@ -16,8 +16,7 @@ from langchain.schema import (AgentAction, AgentFinish, OutputParserException,
HumanMessage, SystemMessage, AIMessage)
from langchain.schema.language_model import BaseLanguageModel
from langchain.tools.base import BaseTool
from server.utils import get_prompt_template
from chatchat_server.utils import get_prompt_template
logger = logging.getLogger(__name__)

@ -1,5 +1,5 @@
# LangChain 的 ArxivQueryRun 工具
from server.pydantic_v1 import BaseModel, Field
from chatchat_server.pydantic_v1 import BaseModel, Field
from langchain.tools.arxiv.tool import ArxivQueryRun

@ -1,6 +1,6 @@
import base64
import os
from server.pydantic_v1 import BaseModel, Field
from chatchat_server.pydantic_v1 import BaseModel, Field
def save_base64_audio(base64_audio, file_path):
audio_data = base64.b64decode(base64_audio)
@ -15,7 +15,7 @@ def aqa_run(model, tokenizer, query):
def aqa_processor(query: str):
from server.agent.container import container
from chatchat_server.agent.container import container
if container.metadata["audios"]:
file_path = "temp_audio.mp3"
save_base64_audio(container.metadata["audios"][0], file_path)

@ -1,4 +1,4 @@
from server.pydantic_v1 import BaseModel, Field
from chatchat_server.pydantic_v1 import BaseModel, Field
from langchain.utilities.bing_search import BingSearchAPIWrapper
from langchain.utilities.duckduckgo_search import DuckDuckGoSearchAPIWrapper
from configs import TOOL_CONFIG

@ -1,7 +1,7 @@
from urllib.parse import urlencode
from server.pydantic_v1 import BaseModel, Field
from chatchat_server.pydantic_v1 import BaseModel, Field
from server.knowledge_base.kb_doc_api import search_docs
from chatchat_server.knowledge_base.kb_doc_api import search_docs
from configs import TOOL_CONFIG

@ -1,5 +1,5 @@
from langchain_community.tools import YouTubeSearchTool
from server.pydantic_v1 import BaseModel, Field
from chatchat_server.pydantic_v1 import BaseModel, Field
def search_youtube(query: str):

@ -1,5 +1,5 @@
# LangChain 的 Shell 工具
from server.pydantic_v1 import BaseModel, Field
from chatchat_server.pydantic_v1 import BaseModel, Field
from langchain_community.tools import ShellTool

@ -6,11 +6,11 @@ from typing import List
import uuid
from langchain.agents import tool
from server.pydantic_v1 import Field, FieldInfo
from chatchat_server.pydantic_v1 import Field, FieldInfo
import openai
from configs.basic_config import MEDIA_PATH
from server.utils import MsgType
from chatchat_server.utils import MsgType
def get_image_model_config() -> dict:

@ -1,5 +1,5 @@
from langchain_core.tools import StructuredTool
from server.agent.tools_factory import *
from chatchat_server.agent.tools_factory import *
from configs import KB_INFO
template = "Use local knowledgebase from one or more of these:\n{KB_info}\n to get informationOnly local data on this knowledge use this tool. The 'database' should be one of the above [{key}]."

@ -4,10 +4,10 @@ Method Use cogagent to generate response for a given image and query.
import base64
from io import BytesIO
from PIL import Image, ImageDraw
from server.pydantic_v1 import BaseModel, Field
from chatchat_server.pydantic_v1 import BaseModel, Field
from configs import TOOL_CONFIG
import re
from server.agent.container import container
from chatchat_server.agent.container import container
def extract_between_markers(text, start_marker, end_marker):

@ -1,7 +1,7 @@
"""
简单的单参数输入工具实现用于查询现在天气的情况
"""
from server.pydantic_v1 import BaseModel, Field
from chatchat_server.pydantic_v1 import BaseModel, Field
import requests
def weather(location: str, api_key: str):

@ -1,6 +1,6 @@
# Langchain 自带的 Wolfram Alpha API 封装
from langchain.utilities.wolfram_alpha import WolframAlphaAPIWrapper
from server.pydantic_v1 import BaseModel, Field
from chatchat_server.pydantic_v1 import BaseModel, Field
wolfram_alpha_appid = "your key"

@ -4,7 +4,7 @@ import re
from typing import Dict, List, Literal, Optional, Union
from fastapi import UploadFile
from server.pydantic_v2 import BaseModel, Field, AnyUrl, root_validator
from chatchat_server.pydantic_v1 import BaseModel, Field, AnyUrl, root_validator
from openai.types.chat import (
ChatCompletionMessageParam,
ChatCompletionToolChoiceOptionParam,

@ -4,9 +4,9 @@ from typing import List
from fastapi import APIRouter, Request
from server.chat.chat import chat
from server.chat.feedback import chat_feedback
from server.chat.file_chat import file_chat
from chatchat_server.chat.chat import chat
from chatchat_server.chat.feedback import chat_feedback
from chatchat_server.chat.file_chat import file_chat
chat_router = APIRouter(prefix="/chat", tags=["ChatChat 对话"])

@ -4,14 +4,14 @@ from typing import List
from fastapi import APIRouter, Request
from server.chat.file_chat import upload_temp_docs
from server.knowledge_base.kb_api import list_kbs, create_kb, delete_kb
from server.knowledge_base.kb_doc_api import (list_files, upload_docs, delete_docs,
from chatchat_server.chat.file_chat import upload_temp_docs
from chatchat_server.knowledge_base.kb_api import list_kbs, create_kb, delete_kb
from chatchat_server.knowledge_base.kb_doc_api import (list_files, upload_docs, delete_docs,
update_docs, download_doc, recreate_vector_store,
search_docs, update_info)
from server.knowledge_base.kb_summary_api import (summary_file_to_vector_store, recreate_summary_vector_store,
from chatchat_server.knowledge_base.kb_summary_api import (summary_file_to_vector_store, recreate_summary_vector_store,
summary_doc_ids_to_vector_store)
from server.utils import BaseResponse, ListResponse
from chatchat_server.utils import BaseResponse, ListResponse
kb_router = APIRouter(prefix="/knowledge_base", tags=["Knowledge Base Management"])

@ -10,7 +10,7 @@ from sse_starlette.sse import EventSourceResponse
from .api_schemas import *
from configs import logger
from server.utils import get_model_info, get_config_platforms, get_OpenAIClient
from chatchat_server.utils import get_model_info, get_config_platforms, get_OpenAIClient
DEFAULT_API_CONCURRENCIES = 5 # 默认单个模型最大并发数

@ -9,13 +9,13 @@ import uvicorn
from configs import VERSION, MEDIA_PATH
from configs.server_config import OPEN_CROSS_DOMAIN
from server.api_server.chat_routes import chat_router
from server.api_server.kb_routes import kb_router
from server.api_server.openai_routes import openai_router
from server.api_server.server_routes import server_router
from server.api_server.tool_routes import tool_router
from server.chat.completion import completion
from server.utils import MakeFastAPIOffline
from chatchat_server.api_server.chat_routes import chat_router
from chatchat_server.api_server.kb_routes import kb_router
from chatchat_server.api_server.openai_routes import openai_router
from chatchat_server.api_server.server_routes import server_router
from chatchat_server.api_server.tool_routes import tool_router
from chatchat_server.chat.completion import completion
from chatchat_server.utils import MakeFastAPIOffline
def create_app(run_mode: str=None):
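
create_app assembles the FastAPI application from the routers imported above. A minimal launch sketch follows; the import path of create_app and the port are assumptions, since neither the file name nor the server configuration is visible in this excerpt.

# Hypothetical launch sketch, not part of this commit. The module path and the
# port are assumptions inferred from the imports shown above.
import uvicorn
from chatchat_server.api_server.api import create_app  # assumed module path

app = create_app()
uvicorn.run(app, host="127.0.0.1", port=7861)  # host/port are assumptions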

@ -2,7 +2,7 @@ from typing import Literal
from fastapi import APIRouter, Body
from server.utils import get_server_configs, get_prompt_template
from chatchat_server.utils import get_server_configs, get_prompt_template
server_router = APIRouter(prefix="/server", tags=["Server State"])

(image file changed; size unchanged at 7.1 KiB)

@ -5,7 +5,7 @@ from typing import List
from fastapi import APIRouter, Request, Body
from configs import logger
from server.utils import BaseResponse
from chatchat_server.utils import BaseResponse
tool_router = APIRouter(prefix="/tools", tags=["Toolkits"])
@ -14,7 +14,7 @@ tool_router = APIRouter(prefix="/tools", tags=["Toolkits"])
@tool_router.get("/", response_model=BaseResponse)
async def list_tools():
import importlib
from server.agent.tools_factory import tools_registry
from chatchat_server.agent.tools_factory import tools_registry
importlib.reload(tools_registry)
data = {t.name: {"name": t.name, "description": t.description, "args": t.args} for t in tools_registry.all_tools}
@ -27,7 +27,7 @@ async def call_tool(
kwargs: dict = Body({}, examples=[{"a":1,"b":2,"operator":"+"}]),
):
import importlib
from server.agent.tools_factory import tools_registry
from chatchat_server.agent.tools_factory import tools_registry
importlib.reload(tools_registry)
tool_names = {t.name: t for t in tools_registry.all_tools}
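
tool_router exposes the registered tools under the /tools prefix: a GET on /tools/ lists them, and the call endpoint shown above invokes one by name with keyword arguments. A hedged client sketch follows; the base URL is an assumption, as the API server's host and port are not part of this excerpt.

# Hypothetical client call, not part of this commit: list the registered tools.
# The base URL is an assumption; point it at wherever the API server runs.
import requests

resp = requests.get("http://127.0.0.1:7861/tools/")
resp.raise_for_status()
print(resp.json())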

@ -2,7 +2,7 @@ from typing import Any, Dict, List
from langchain.callbacks.base import BaseCallbackHandler
from langchain.schema import LLMResult
from server.db.repository import update_message
from chatchat_server.db.repository import update_message
class ConversationCallbackHandler(BaseCallbackHandler):

@ -10,15 +10,15 @@ from langchain_core.messages import AIMessage, HumanMessage
from langchain.chains import LLMChain
from langchain.prompts.chat import ChatPromptTemplate
from langchain.prompts import PromptTemplate
from server.agent.agent_factory.agents_registry import agents_registry
from server.agent.tools_factory.tools_registry import all_tools
from server.agent.container import container
from chatchat_server.agent.agent_factory.agents_registry import agents_registry
from chatchat_server.agent.tools_factory.tools_registry import all_tools
from chatchat_server.agent.container import container
from server.utils import wrap_done, get_ChatOpenAI, get_prompt_template, MsgType
from server.chat.utils import History
from server.memory.conversation_db_buffer_memory import ConversationBufferDBMemory
from server.db.repository import add_message_to_db
from server.callback_handler.agent_callback_handler import AgentExecutorAsyncIteratorCallbackHandler, AgentStatus
from chatchat_server.utils import wrap_done, get_ChatOpenAI, get_prompt_template, MsgType
from chatchat_server.chat.utils import History
from chatchat_server.memory.conversation_db_buffer_memory import ConversationBufferDBMemory
from chatchat_server.db.repository import add_message_to_db
from chatchat_server.callback_handler.agent_callback_handler import AgentExecutorAsyncIteratorCallbackHandler, AgentStatus
def create_models_from_config(configs, callbacks, stream):

@ -1,13 +1,13 @@
from fastapi import Body
from sse_starlette.sse import EventSourceResponse
from server.utils import wrap_done, get_OpenAI
from chatchat_server.utils import wrap_done, get_OpenAI
from langchain.chains import LLMChain
from langchain.callbacks import AsyncIteratorCallbackHandler
from typing import AsyncIterable, Optional
import asyncio
from langchain.prompts import PromptTemplate
from server.utils import get_prompt_template
from chatchat_server.utils import get_prompt_template
async def completion(query: str = Body(..., description="用户输入", examples=["恼羞成怒"]),

@ -1,7 +1,7 @@
from fastapi import Body
from configs import logger, log_verbose
from server.utils import BaseResponse
from server.db.repository import feedback_message_to_db
from chatchat_server.utils import BaseResponse
from chatchat_server.db.repository import feedback_message_to_db
def chat_feedback(message_id: str = Body("", max_length=32, description="聊天记录id"),
score: int = Body(0, max=100, description="用户评分满分100越大表示评价越高"),

@ -2,16 +2,16 @@ from fastapi import Body, File, Form, UploadFile
from sse_starlette.sse import EventSourceResponse
from configs import (VECTOR_SEARCH_TOP_K, SCORE_THRESHOLD, CHUNK_SIZE, OVERLAP_SIZE, ZH_TITLE_ENHANCE)
from server.utils import (wrap_done, get_ChatOpenAI, get_Embeddings,
from chatchat_server.utils import (wrap_done, get_ChatOpenAI, get_Embeddings,
BaseResponse, get_prompt_template, get_temp_dir, run_in_thread_pool)
from server.knowledge_base.kb_cache.faiss_cache import memo_faiss_pool
from chatchat_server.knowledge_base.kb_cache.faiss_cache import memo_faiss_pool
from langchain.chains import LLMChain
from langchain.callbacks import AsyncIteratorCallbackHandler
from typing import AsyncIterable, List, Optional
import asyncio
from langchain.prompts.chat import ChatPromptTemplate
from server.chat.utils import History
from server.knowledge_base.utils import KnowledgeFile
from chatchat_server.chat.utils import History
from chatchat_server.knowledge_base.utils import KnowledgeFile
import json
import os

@ -1,5 +1,5 @@
from functools import lru_cache
from server.pydantic_v2 import BaseModel, Field
from chatchat_server.pydantic_v2 import BaseModel, Field
from langchain.prompts.chat import ChatMessagePromptTemplate
from configs import logger, log_verbose
from typing import List, Tuple, Dict, Union

@ -1,5 +1,5 @@
from sqlalchemy import Column, Integer, String, DateTime, JSON, func
from server.db.base import Base
from chatchat_server.db.base import Base
class ConversationModel(Base):

@ -1,6 +1,6 @@
from sqlalchemy import Column, Integer, String, DateTime, func
from server.db.base import Base
from chatchat_server.db.base import Base
class KnowledgeBaseModel(Base):

@ -1,6 +1,6 @@
from sqlalchemy import Column, Integer, String, DateTime, Float, Boolean, JSON, func
from server.db.base import Base
from chatchat_server.db.base import Base
class KnowledgeFileModel(Base):

@ -1,6 +1,6 @@
from sqlalchemy import Column, Integer, String, DateTime, Float, Boolean, JSON, func
from server.db.base import Base
from chatchat_server.db.base import Base
class SummaryChunkModel(Base):

@ -1,6 +1,6 @@
from sqlalchemy import Column, Integer, String, DateTime, JSON, func
from server.db.base import Base
from chatchat_server.db.base import Base
class MessageModel(Base):

@ -1,6 +1,6 @@
from server.db.session import with_session
from chatchat_server.db.session import with_session
import uuid
from server.db.models.conversation_model import ConversationModel
from chatchat_server.db.models.conversation_model import ConversationModel
@with_session

@ -1,5 +1,5 @@
from server.db.models.knowledge_base_model import KnowledgeBaseModel
from server.db.session import with_session
from chatchat_server.db.models.knowledge_base_model import KnowledgeBaseModel
from chatchat_server.db.session import with_session
@with_session

@ -1,7 +1,7 @@
from server.db.models.knowledge_base_model import KnowledgeBaseModel
from server.db.models.knowledge_file_model import KnowledgeFileModel, FileDocModel
from server.db.session import with_session
from server.knowledge_base.utils import KnowledgeFile
from chatchat_server.db.models.knowledge_base_model import KnowledgeBaseModel
from chatchat_server.db.models.knowledge_file_model import KnowledgeFileModel, FileDocModel
from chatchat_server.db.session import with_session
from chatchat_server.knowledge_base.utils import KnowledgeFile
from typing import List, Dict

@ -1,5 +1,5 @@
from server.db.models.knowledge_metadata_model import SummaryChunkModel
from server.db.session import with_session
from chatchat_server.db.models.knowledge_metadata_model import SummaryChunkModel
from chatchat_server.db.session import with_session
from typing import List, Dict

@ -1,7 +1,7 @@
from server.db.session import with_session
from chatchat_server.db.session import with_session
from typing import Dict, List
import uuid
from server.db.models.message_model import MessageModel
from chatchat_server.db.models.message_model import MessageModel
@with_session

@ -1,6 +1,6 @@
from functools import wraps
from contextlib import contextmanager
from server.db.base import SessionLocal
from chatchat_server.db.base import SessionLocal
from sqlalchemy.orm import Session

@ -1,6 +1,6 @@
from typing import List
from langchain_community.document_loaders.unstructured import UnstructuredFileLoader
from server.document_loaders.ocr import get_ocr
from chatchat_server.document_loaders.ocr import get_ocr
class RapidOCRLoader(UnstructuredFileLoader):

@ -4,7 +4,7 @@ import cv2
from PIL import Image
import numpy as np
from configs import PDF_OCR_THRESHOLD
from server.document_loaders.ocr import get_ocr
from chatchat_server.document_loaders.ocr import get_ocr
import tqdm

@ -1,8 +1,8 @@
import urllib
from server.utils import BaseResponse, ListResponse
from server.knowledge_base.utils import validate_kb_name
from server.knowledge_base.kb_service.base import KBServiceFactory
from server.db.repository.knowledge_base_repository import list_kbs_from_db
from chatchat_server.utils import BaseResponse, ListResponse
from chatchat_server.knowledge_base.utils import validate_kb_name
from chatchat_server.knowledge_base.kb_service.base import KBServiceFactory
from chatchat_server.db.repository.knowledge_base_repository import list_kbs_from_db
from configs import DEFAULT_EMBEDDING_MODEL, logger, log_verbose
from fastapi import Body

@ -1,7 +1,7 @@
from configs import CACHED_VS_NUM, CACHED_MEMO_VS_NUM
from server.knowledge_base.kb_cache.base import *
from server.utils import get_Embeddings
from server.knowledge_base.utils import get_vs_path
from chatchat_server.knowledge_base.kb_cache.base import *
from chatchat_server.utils import get_Embeddings
from chatchat_server.knowledge_base.utils import get_vs_path
from langchain.vectorstores.faiss import FAISS
from langchain.docstore.in_memory import InMemoryDocstore
from langchain.schema import Document

@ -5,16 +5,16 @@ from configs import (DEFAULT_VS_TYPE, DEFAULT_EMBEDDING_MODEL,
VECTOR_SEARCH_TOP_K, SCORE_THRESHOLD,
CHUNK_SIZE, OVERLAP_SIZE, ZH_TITLE_ENHANCE,
logger, log_verbose, )
from server.utils import BaseResponse, ListResponse, run_in_thread_pool
from server.knowledge_base.utils import (validate_kb_name, list_files_from_folder, get_file_path,
from chatchat_server.utils import BaseResponse, ListResponse, run_in_thread_pool
from chatchat_server.knowledge_base.utils import (validate_kb_name, list_files_from_folder, get_file_path,
files2docs_in_thread, KnowledgeFile)
from fastapi.responses import FileResponse
from sse_starlette import EventSourceResponse
import json
from server.knowledge_base.kb_service.base import KBServiceFactory
from server.db.repository.knowledge_file_repository import get_file_detail
from chatchat_server.knowledge_base.kb_service.base import KBServiceFactory
from chatchat_server.db.repository.knowledge_file_repository import get_file_detail
from langchain.docstore.document import Document
from server.knowledge_base.model.kb_document_model import DocumentWithVSId
from chatchat_server.knowledge_base.model.kb_document_model import DocumentWithVSId
from typing import List, Dict

@ -5,11 +5,11 @@ import os
from pathlib import Path
from langchain.docstore.document import Document
from server.db.repository.knowledge_base_repository import (
from chatchat_server.db.repository.knowledge_base_repository import (
add_kb_to_db, delete_kb_from_db, list_kbs_from_db, kb_exists,
load_kb_from_db, get_kb_detail,
)
from server.db.repository.knowledge_file_repository import (
from chatchat_server.db.repository.knowledge_file_repository import (
add_file_to_db, delete_file_from_db, delete_files_from_db, file_exists_in_db,
count_files_from_db, list_files_from_db, get_file_detail, delete_file_from_db,
list_docs_from_db,
@ -17,14 +17,14 @@ from server.db.repository.knowledge_file_repository import (
from configs import (kbs_config, VECTOR_SEARCH_TOP_K, SCORE_THRESHOLD,
DEFAULT_EMBEDDING_MODEL, KB_INFO)
from server.knowledge_base.utils import (
from chatchat_server.knowledge_base.utils import (
get_kb_path, get_doc_path, KnowledgeFile,
list_kbs_from_folder, list_files_from_folder,
)
from typing import List, Union, Dict, Optional, Tuple
from server.knowledge_base.model.kb_document_model import DocumentWithVSId
from chatchat_server.knowledge_base.model.kb_document_model import DocumentWithVSId
class SupportedVSType:
@ -286,29 +286,29 @@ class KBServiceFactory:
if isinstance(vector_store_type, str):
vector_store_type = getattr(SupportedVSType, vector_store_type.upper())
if SupportedVSType.FAISS == vector_store_type:
from server.knowledge_base.kb_service.faiss_kb_service import FaissKBService
from chatchat_server.knowledge_base.kb_service.faiss_kb_service import FaissKBService
return FaissKBService(kb_name, embed_model=embed_model)
elif SupportedVSType.PG == vector_store_type:
from server.knowledge_base.kb_service.pg_kb_service import PGKBService
from chatchat_server.knowledge_base.kb_service.pg_kb_service import PGKBService
return PGKBService(kb_name, embed_model=embed_model)
elif SupportedVSType.MILVUS == vector_store_type:
from server.knowledge_base.kb_service.milvus_kb_service import MilvusKBService
from chatchat_server.knowledge_base.kb_service.milvus_kb_service import MilvusKBService
return MilvusKBService(kb_name, embed_model=embed_model)
elif SupportedVSType.ZILLIZ == vector_store_type:
from server.knowledge_base.kb_service.zilliz_kb_service import ZillizKBService
from chatchat_server.knowledge_base.kb_service.zilliz_kb_service import ZillizKBService
return ZillizKBService(kb_name, embed_model=embed_model)
elif SupportedVSType.DEFAULT == vector_store_type:
from server.knowledge_base.kb_service.milvus_kb_service import MilvusKBService
from chatchat_server.knowledge_base.kb_service.milvus_kb_service import MilvusKBService
return MilvusKBService(kb_name,
embed_model=embed_model) # other milvus parameters are set in model_config.kbs_config
elif SupportedVSType.ES == vector_store_type:
from server.knowledge_base.kb_service.es_kb_service import ESKBService
from chatchat_server.knowledge_base.kb_service.es_kb_service import ESKBService
return ESKBService(kb_name, embed_model=embed_model)
elif SupportedVSType.CHROMADB == vector_store_type:
from server.knowledge_base.kb_service.chromadb_kb_service import ChromaKBService
from chatchat_server.knowledge_base.kb_service.chromadb_kb_service import ChromaKBService
return ChromaKBService(kb_name, embed_model=embed_model)
elif SupportedVSType.DEFAULT == vector_store_type: # kb_exists of default kbservice is False, to make validation easier.
from server.knowledge_base.kb_service.default_kb_service import DefaultKBService
from chatchat_server.knowledge_base.kb_service.default_kb_service import DefaultKBService
return DefaultKBService(kb_name)
@staticmethod

@ -6,9 +6,9 @@ from chromadb.api.types import (GetResult, QueryResult)
from langchain.docstore.document import Document
from configs import SCORE_THRESHOLD
from server.knowledge_base.kb_service.base import KBService, SupportedVSType
from server.knowledge_base.utils import KnowledgeFile, get_kb_path, get_vs_path
from server.utils import get_Embeddings
from chatchat_server.knowledge_base.kb_service.base import KBService, SupportedVSType
from chatchat_server.knowledge_base.utils import KnowledgeFile, get_kb_path, get_vs_path
from chatchat_server.utils import get_Embeddings
def _get_result_to_documents(get_result: GetResult) -> List[Document]:

@ -3,7 +3,7 @@ from typing import List
from langchain.embeddings.base import Embeddings
from langchain.schema import Document
from server.knowledge_base.kb_service.base import KBService
from chatchat_server.knowledge_base.kb_service.base import KBService
class DefaultKBService(KBService):

@ -3,9 +3,9 @@ import os
import shutil
from langchain.schema import Document
from langchain_community.vectorstores.elasticsearch import ElasticsearchStore, ApproxRetrievalStrategy
from server.knowledge_base.kb_service.base import KBService, SupportedVSType
from server.knowledge_base.utils import KnowledgeFile
from server.utils import get_Embeddings
from chatchat_server.knowledge_base.kb_service.base import KBService, SupportedVSType
from chatchat_server.knowledge_base.utils import KnowledgeFile
from chatchat_server.utils import get_Embeddings
from elasticsearch import Elasticsearch, BadRequestError
from configs import logger, kbs_config, KB_ROOT_PATH

@ -2,10 +2,10 @@ import os
import shutil
from configs import SCORE_THRESHOLD
from server.knowledge_base.kb_service.base import KBService, SupportedVSType
from server.knowledge_base.kb_cache.faiss_cache import kb_faiss_pool, ThreadSafeFaiss
from server.knowledge_base.utils import KnowledgeFile, get_kb_path, get_vs_path
from server.utils import get_Embeddings
from chatchat_server.knowledge_base.kb_service.base import KBService, SupportedVSType
from chatchat_server.knowledge_base.kb_cache.faiss_cache import kb_faiss_pool, ThreadSafeFaiss
from chatchat_server.knowledge_base.utils import KnowledgeFile, get_kb_path, get_vs_path
from chatchat_server.utils import get_Embeddings
from langchain.docstore.document import Document
from typing import List, Dict, Optional, Tuple

@ -5,12 +5,12 @@ from langchain.vectorstores.milvus import Milvus
import os
from configs import kbs_config
from server.db.repository import list_file_num_docs_id_by_kb_name_and_file_name
from chatchat_server.db.repository import list_file_num_docs_id_by_kb_name_and_file_name
from server.knowledge_base.kb_service.base import KBService, SupportedVSType, \
from chatchat_server.knowledge_base.kb_service.base import KBService, SupportedVSType, \
score_threshold_process
from server.knowledge_base.utils import KnowledgeFile
from server.utils import get_Embeddings
from chatchat_server.knowledge_base.utils import KnowledgeFile
from chatchat_server.utils import get_Embeddings
class MilvusKBService(KBService):
@ -106,7 +106,7 @@ class MilvusKBService(KBService):
if __name__ == '__main__':
# 测试建表使用
from server.db.base import Base, engine
from chatchat_server.db.base import Base, engine
Base.metadata.create_all(bind=engine)
milvusService = MilvusKBService("test")

@ -7,10 +7,10 @@ from sqlalchemy import text
from configs import kbs_config
from server.knowledge_base.kb_service.base import SupportedVSType, KBService, \
from chatchat_server.knowledge_base.kb_service.base import SupportedVSType, KBService, \
score_threshold_process
from server.knowledge_base.utils import KnowledgeFile
from server.utils import get_Embeddings
from chatchat_server.knowledge_base.utils import KnowledgeFile
from chatchat_server.utils import get_Embeddings
import shutil
import sqlalchemy
from sqlalchemy.engine.base import Engine
@ -85,7 +85,7 @@ class PGKBService(KBService):
if __name__ == '__main__':
from server.db.base import Base, engine
from chatchat_server.db.base import Base, engine
# Base.metadata.create_all(bind=engine)
pGKBService = PGKBService("test")

@ -3,10 +3,10 @@ from langchain.embeddings.base import Embeddings
from langchain.schema import Document
from langchain.vectorstores import Zilliz
from configs import kbs_config
from server.knowledge_base.kb_service.base import KBService, SupportedVSType, \
from chatchat_server.knowledge_base.kb_service.base import KBService, SupportedVSType, \
score_threshold_process
from server.knowledge_base.utils import KnowledgeFile
from server.utils import get_Embeddings
from chatchat_server.knowledge_base.utils import KnowledgeFile
from chatchat_server.utils import get_Embeddings
class ZillizKBService(KBService):
@ -92,7 +92,7 @@ class ZillizKBService(KBService):
if __name__ == '__main__':
from server.db.base import Base, engine
from chatchat_server.db.base import Base, engine
Base.metadata.create_all(bind=engine)
zillizService = ZillizKBService("test")

@ -5,10 +5,10 @@ from configs import (
KB_ROOT_PATH)
from abc import ABC, abstractmethod
from server.knowledge_base.kb_cache.faiss_cache import kb_faiss_pool, ThreadSafeFaiss
from chatchat_server.knowledge_base.kb_cache.faiss_cache import kb_faiss_pool, ThreadSafeFaiss
import os
import shutil
from server.db.repository.knowledge_metadata_repository import add_summary_to_db, delete_summary_from_db
from chatchat_server.db.repository.knowledge_metadata_repository import add_summary_to_db, delete_summary_from_db
from langchain.docstore.document import Document

@ -2,7 +2,7 @@ from typing import List, Optional
from langchain.schema.language_model import BaseLanguageModel
from server.knowledge_base.model.kb_document_model import DocumentWithVSId
from chatchat_server.knowledge_base.model.kb_document_model import DocumentWithVSId
from configs import (logger)
from langchain.chains import StuffDocumentsChain, LLMChain
from langchain.prompts import PromptTemplate

@ -2,15 +2,15 @@ from fastapi import Body
from configs import (DEFAULT_VS_TYPE, DEFAULT_EMBEDDING_MODEL,
OVERLAP_SIZE,
logger, log_verbose, )
from server.knowledge_base.utils import (list_files_from_folder)
from chatchat_server.knowledge_base.utils import (list_files_from_folder)
from sse_starlette import EventSourceResponse
import json
from server.knowledge_base.kb_service.base import KBServiceFactory
from chatchat_server.knowledge_base.kb_service.base import KBServiceFactory
from typing import List, Optional
from server.knowledge_base.kb_summary.base import KBSummaryService
from server.knowledge_base.kb_summary.summary_chunk import SummaryAdapter
from server.utils import wrap_done, get_ChatOpenAI, BaseResponse
from server.knowledge_base.model.kb_document_model import DocumentWithVSId
from chatchat_server.knowledge_base.kb_summary.base import KBSummaryService
from chatchat_server.knowledge_base.kb_summary.summary_chunk import SummaryAdapter
from chatchat_server.utils import wrap_done, get_ChatOpenAI, BaseResponse
from chatchat_server.knowledge_base.model.kb_document_model import DocumentWithVSId
def recreate_summary_vector_store(

@ -3,19 +3,19 @@ from configs import (
CHUNK_SIZE, OVERLAP_SIZE,
logger, log_verbose
)
from server.knowledge_base.utils import (
from chatchat_server.knowledge_base.utils import (
get_file_path, list_kbs_from_folder,
list_files_from_folder, files2docs_in_thread,
KnowledgeFile
)
from server.knowledge_base.kb_service.base import KBServiceFactory
from server.db.models.conversation_model import ConversationModel
from server.db.models.message_model import MessageModel
from server.db.repository.knowledge_file_repository import add_file_to_db # ensure Models are imported
from server.db.repository.knowledge_metadata_repository import add_summary_to_db
from chatchat_server.knowledge_base.kb_service.base import KBServiceFactory
from chatchat_server.db.models.conversation_model import ConversationModel
from chatchat_server.db.models.message_model import MessageModel
from chatchat_server.db.repository.knowledge_file_repository import add_file_to_db # ensure Models are imported
from chatchat_server.db.repository.knowledge_metadata_repository import add_summary_to_db
from server.db.base import Base, engine
from server.db.session import session_scope
from chatchat_server.db.base import Base, engine
from chatchat_server.db.session import session_scope
import os
from dateutil.parser import parse
from typing import Literal, List

@ -11,13 +11,12 @@ from configs import (
TEXT_SPLITTER_NAME,
)
import importlib
from server.text_splitter import zh_title_enhance as func_zh_title_enhance
from chatchat_server.text_splitter import zh_title_enhance as func_zh_title_enhance
import langchain_community.document_loaders
from langchain.docstore.document import Document
from langchain.text_splitter import TextSplitter, MarkdownHeaderTextSplitter
from pathlib import Path
from server.utils import run_in_thread_pool, run_in_process_pool
from chatchat_server.utils import run_in_thread_pool, run_in_process_pool
import json
from typing import List, Union, Dict, Tuple, Generator
import chardet

@ -27,8 +27,7 @@ from tenacity import (
stop_after_attempt,
wait_exponential,
)
from server.utils import run_in_thread_pool
from chatchat_server.utils import run_in_thread_pool
logger = logging.getLogger(__name__)

@ -4,8 +4,8 @@ from typing import Any, List, Dict
from langchain.memory.chat_memory import BaseChatMemory
from langchain.schema import get_buffer_string, BaseMessage, HumanMessage, AIMessage
from langchain.schema.language_model import BaseLanguageModel
from server.db.repository.message_repository import filter_message
from server.db.models.message_model import MessageModel
from chatchat_server.db.repository.message_repository import filter_message
from chatchat_server.db.models.message_model import MessageModel
class ConversationBufferDBMemory(BaseChatMemory):
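
Given how many modules the rename touches, a quick import smoke test is a cheap way to confirm the new package layout resolves. The sketch below is illustrative only and not part of this commit; the module names are a small sample taken from the imports visible in this diff, and the package plus its dependencies must be installed for it to run.

# Hypothetical smoke test, not part of this commit: confirm a sample of the
# renamed modules import cleanly under the new package name.
import importlib

for name in (
    "chatchat_server.utils",
    "chatchat_server.db.base",
    "chatchat_server.knowledge_base.utils",
):
    importlib.import_module(name)
    print(f"ok: {name}")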

Some files were not shown because too many files have changed in this diff.