Langchain-Chatchat/server/agent/custom_template.py
liunux4odoo d316efe8d3
release 0.2.6 (#1815)
## 🛠 New Features

- Support for the Baichuan online model API (@hzg0601 @liunux4odoo in #1623)
- Support for Azure OpenAI, Claude, and other models shipped with LangChain (@zRzRzRzRzRzRzR in #1808)
- Major Agent updates: more tools, revised prompts, and knowledge base retrieval (@zRzRzRzRzRzRzR in #1626 #1666 #1785)
- Longer conversation history for 32k models (@zRzRzRzRzRzRzR in #1629 #1630)
- The *_chat endpoints now accept a max_tokens parameter (@liunux4odoo in #1744)
- Separated the API backend from the WebUI frontend (@liunux4odoo in #1772)
- Support for the Zilliz vector store (@zRzRzRzRzRzRzR in #1785)
- Support for the Metaphor search engine (@liunux4odoo in #1792)
- Support for p-tuning models (@hzg0601 in #1810)
- Updated and expanded documentation and Wiki (@imClumsyPanda @zRzRzRzRzRzRzR @glide-the in #1680 #1811)

## 🐞 Bug Fixes

- Fixed bge-* model names matching more than one entry (@zRzRzRzRzRzRzR in #1652)
- Fixed an error when the system proxy is empty (@glide-the in #1654)
- Fixed the `d == self.d assert error` when rebuilding a knowledge base (@liunux4odoo in #1766)
- Fixed incorrect conversation history messages (@liunux4odoo in #1801)
- Fixed a bug that prevented OpenAI models from being called (@zRzRzRzRzRzRzR in #1808)
- Fixed a chat error on Windows when BIND_HOST=0.0.0.0 (@hzg0601 in #1810)
2023-10-20 23:16:06 +08:00


from __future__ import annotations
from langchain.agents import Tool, AgentOutputParser
from langchain.prompts import StringPromptTemplate
from typing import List
from langchain.schema import AgentAction, AgentFinish
from server.agent import model_container


class CustomPromptTemplate(StringPromptTemplate):
    # The template to use
    template: str
    # The list of tools available
    tools: List[Tool]

    def format(self, **kwargs) -> str:
        # Get the intermediate steps (AgentAction, Observation tuples)
        # and format them into the agent scratchpad
        intermediate_steps = kwargs.pop("intermediate_steps")
        thoughts = ""
        for action, observation in intermediate_steps:
            thoughts += action.log
            thoughts += f"\nObservation: {observation}\nThought: "
        # Set the agent_scratchpad variable to that value
        kwargs["agent_scratchpad"] = thoughts
        # Create a tools variable from the list of tools provided
        kwargs["tools"] = "\n".join([f"{tool.name}: {tool.description}" for tool in self.tools])
        # Create a list of tool names for the tools provided
        kwargs["tool_names"] = ", ".join([tool.name for tool in self.tools])
        # Return the formatted template
        return self.template.format(**kwargs)


class CustomOutputParser(AgentOutputParser):
    begin: bool = False

    def __init__(self):
        super().__init__()
        self.begin = True

    def parse(self, llm_output: str) -> AgentFinish | AgentAction:
        # Models that currently support agent-style output
        support_agent = ["Azure-OpenAI", "OpenAI", "Anthropic", "Qwen", "qwen-api", "baichuan-api"]
        # For models without agent support, truncate the first response at "Observation:"
        if not any(agent in model_container.MODEL for agent in support_agent) and self.begin:
            self.begin = False
            stop_words = ["Observation:"]
            min_index = len(llm_output)
            for stop_word in stop_words:
                index = llm_output.find(stop_word)
                if index != -1 and index < min_index:
                    min_index = index
            llm_output = llm_output[:min_index]

        # Check whether the agent should finish
        if "Final Answer:" in llm_output:
            self.begin = True
            return AgentFinish(
                return_values={"output": llm_output.split("Final Answer:", 1)[-1].strip()},
                log=llm_output,
            )

        # Parse out the action and the action input
        parts = llm_output.split("Action:")
        if len(parts) < 2:
            return AgentFinish(
                return_values={"output": f"Failed to call the agent: `{llm_output}`"},
                log=llm_output,
            )
        action = parts[1].split("Action Input:")[0].strip()
        action_input = parts[1].split("Action Input:")[1].strip()

        # The previous regex-based parsing approach:
        # regex = r"Action\s*\d*\s*:(.*?)\nAction\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)"
        # match = re.search(regex, llm_output, re.DOTALL)
        # if not match:
        #     return AgentFinish(
        #         return_values={"output": f"Failed to call the agent: `{llm_output}`"},
        #         log=llm_output,
        #     )
        # action = match.group(1).strip()
        # action_input = match.group(2)

        # Return the action and the action input
        try:
            return AgentAction(
                tool=action,
                tool_input=action_input.strip(" ").strip('"'),
                log=llm_output,
            )
        except Exception:
            return AgentFinish(
                return_values={"output": f"Failed to call the agent: `{llm_output}`"},
                log=llm_output,
            )