* Update the README to the latest version

* Update the README and add a table of contents

* Remove all unused materials; they will be moved to the wiki later
This commit is contained in:
zR 2023-10-20 23:02:21 +08:00 committed by GitHub
parent df0ee992f3
commit c41d7add7d
22 changed files with 99 additions and 1489 deletions

View File

@ -1,22 +0,0 @@
# Contribution Guide
Welcome! We are a very friendly community and are delighted that you want to help make this application better. Please follow a few general guidelines to keep things organized.
1. Make sure an [issue](https://github.com/imClumsyPanda/langchain-ChatGLM/issues) has been created for the bug you want to fix or the feature you want to add, and keep it as small as possible.
2. Please use `git pull --rebase` to fetch and rebase onto upstream updates.
3. Squash your commits into well-formed commits. Mention the issue being resolved on its own line in the commit message, e.g. `Fix #<bug>` (see [Linking a pull request to an issue](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue) for more keywords you can use).
4. Push to `dev` and mention the issue being resolved in the description.
---
# Contribution Guide
Welcome! We're a pretty friendly community, and we're thrilled that you want to help make this app even better. However, we ask that you follow some general guidelines to keep things organized around here.
1. Make sure an [issue](https://github.com/imClumsyPanda/langchain-ChatGLM/issues) is created for the bug you're about to fix, or feature you're about to add. Keep them as small as possible.
2. Please use `git pull --rebase` to fetch and merge updates from the upstream.
3. Rebase commits into well-formatted commits. Mention the issue being resolved in the commit message on a line all by itself like `Fixes #<bug>` (refer to [Linking a pull request to an issue](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue) for more keywords you can use).
4. Push into `dev`. Mention which bug is being resolved in the description.

View File

@ -132,6 +132,10 @@ $ python startup.py -a
The above is only meant for getting started quickly. For more features and customized startup options, please refer to the [Wiki](https://github.com/chatchat-space/Langchain-Chatchat/wiki/).
---
## Project Milestones
---
## Contact Us
### Telegram
@ -142,6 +146,6 @@ $ python startup.py -a
🎉 Langchain-Chatchat project WeChat discussion group. If you are interested in this project, you are welcome to join the group chat to discuss.
### WeChat Official Account
<img src="img/official_account.png" alt="image" width="900" height="300" />
<img src="img/official_wechat_mp_account.png" alt="image" width="900" height="300" />
🎉 Official WeChat public account of the Langchain-Chatchat project. Scan the QR code to follow us.

View File

@ -2,9 +2,10 @@
🌍 [Chinese documentation](README.md)
📃 **LangChain-Chatchat** (formerly Langchain-ChatGLM):
An LLM application that aims to implement knowledge-base and search-engine based QA on top of Langchain and open-source or remote LLM APIs.
---
@ -13,23 +14,36 @@ A LLM application aims to implement knowledge and search engine based QA based o
- [Introduction](README.md#Introduction)
- [Pain Points Addressed](README.md#Pain-Points-Addressed)
- [Quick Start](README.md#Quick-Start)
- [1. Environment Setup](README.md#1-Environment-Setup)
- [2. Model Download](README.md#2-Model-Download)
- [3. Initialize Knowledge Base and Configuration Files](README.md#3-Initialize-Knowledge-Base-and-Configuration-Files)
- [4. One-Click Startup](README.md#4-One-Click-Startup)
- [5. Startup Interface Examples](README.md#5-Startup-Interface-Examples)
- [Contact Us](README.md#Contact-Us)
- [List of Partner Organizations](README.md#List-of-Partner-Organizations)
## Introduction
🤖️ A Q&A application over local knowledge bases, implemented using the ideas of [langchain](https://github.com/hwchase17/langchain). The goal is to build a KBQA (knowledge-based Q&A) solution that is friendly to Chinese scenarios and open-source models and can run both offline and online.
💡 Inspired by [document.ai](https://github.com/GanymedeNil/document.ai) and [ChatGLM-6B Pull Request](https://github.com/THUDM/ChatGLM-6B/pull/216), we build a local knowledge base question answering application whose full pipeline can run with open-source models or remote LLM APIs. In the latest version of this project, [FastChat](https://github.com/lm-sys/FastChat) is used to access Vicuna, Alpaca, LLaMA, Koala, RWKV and many other models. Relying on [langchain](https://github.com/langchain-ai/langchain), this project supports calling services through the API provided by [FastAPI](https://github.com/tiangolo/fastapi) or using the WebUI based on [Streamlit](https://github.com/streamlit/streamlit).
✅ Relying on open-source LLM and Embedding models, this project supports full-process **offline private deployment**. It also supports calling the OpenAI GPT API and the Zhipu API, and will continue to expand access to various models and remote APIs in the future.
⛓️ The implementation principle of this project is shown in the graph below. The main process is: loading files -> reading text -> text segmentation -> text vectorization -> question vectorization -> matching the `top-k` text vectors most similar to the question vector -> adding the matched text to the `prompt` as context together with the question -> submitting it to the `LLM` to generate an answer.
📺 [video introduction](https://www.bilibili.com/video/BV13M4y1e7cN/?share_source=copy_web&vd_source=e6c5aafe684f30fbe41925d61ca6d514)
@ -39,26 +53,37 @@ The main process analysis from the aspect of document process:
![实现原理图2](img/langchain+chatglm2.png)
🚩 Training and fine-tuning are not involved in this project, but one can still improve performance by doing them.
🌐 An [AutoDL image](registry.cn-beijing.aliyuncs.com/chatchat/chatchat:0.2.5) is supported, and in v9 the code is updated to v0.2.5.
🐳 [Docker image](registry.cn-beijing.aliyuncs.com/chatchat/chatchat:0.2.5)
## Pain Points Addressed
This project is a solution for enhancing knowledge bases with fully localized inference, specifically addressing the pain points of data security and private deployment for businesses.
This open-source solution is under the Apache License and can be used for commercial purposes free of charge, with no fees required.
We support mainstream local large language models and Embedding models available on the market, as well as open-source local vector databases. For a detailed list of supported models and databases, please refer to our [Wiki](https://github.com/chatchat-space/Langchain-Chatchat/wiki/).
## Quick Start
### Environment Setup
First, make sure your machine has Python 3.10 installed.
```
$ python --version
Python 3.10.12
```
Then, create a virtual environment and install the project's dependencies within the virtual environment.
```shell
# Clone the repository
@ -74,13 +99,18 @@ $ pip install -r requirements_webui.txt
# The default dependencies include the basic runtime environment (FAISS vector store). To use vector stores such as milvus/pg_vector, uncomment the corresponding dependencies in requirements.txt before installing.
```
### Model Download
If you need to run this project locally or in an offline environment, you must first download the required models for the project. Typically, open-source LLM and Embedding models can be downloaded from HuggingFace.
Taking the default LLM model used in this project, [THUDM/chatglm2-6b](https://huggingface.co/THUDM/chatglm2-6b), and the Embedding model [moka-ai/m3e-base](https://huggingface.co/moka-ai/m3e-base) as examples:
To download the models, you need to first install [Git LFS](https://docs.github.com/zh/repositories/working-with-files/managing-large-files/installing-git-large-file-storage) and then run:
```Shell
$ git lfs install
@ -88,9 +118,10 @@ $ git clone https://huggingface.co/THUDM/chatglm2-6b
$ git clone https://huggingface.co/moka-ai/m3e-base
```
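If Git LFS is unavailable, the same models can also be fetched programmatically. The following is a minimal sketch assuming the optional `huggingface_hub` package is installed (it is not part of the requirements above):

```python
from huggingface_hub import snapshot_download

# Download the default LLM and Embedding models into the local HuggingFace cache
# and print the local paths where they were stored.
llm_path = snapshot_download(repo_id="THUDM/chatglm2-6b")
embedding_path = snapshot_download(repo_id="moka-ai/m3e-base")
print(llm_path, embedding_path)
```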
### Initializing the Knowledge Base and Config File
Follow the steps below to initialize your own knowledge base and config file:
```shell
$ python copy_config_example.py
$ python init_database.py --recreate-vs
@ -99,11 +130,13 @@ $ python init_database.py --recreate-vs
### One-Click Launch
To start the project, run the following command:
```shell
$ python startup.py -a
```
### Example of Launch Interface
1. FastAPI docs interface
![](img/fastapi_docs_026.png)
@ -120,11 +153,13 @@ $ python startup.py -a
### Note
The above instructions are provided for a quick start. If you need more features or want to customize the launch method, please refer to the [Wiki](https://github.com/chatchat-space/Langchain-Chatchat/wiki/).
---
## Contact Us
### Telegram
[![Telegram](https://img.shields.io/badge/Telegram-2CA5E0?style=for-the-badge&logo=telegram&logoColor=white "langchain-chatglm")](https://t.me/+RjliQ3jnJ1YyN2E9)
@ -135,5 +170,4 @@ The above instructions are provided for a quick start. If you need more features
### WeChat Official Account
<img src="img/official_account.png" alt="图片" width="900" height="300" />
<img src="img/official_wechat_mp_account.png" alt="图片" width="900" height="300" />

View File

@ -1,32 +0,0 @@
## Changelog
**[2023/04/15]**
1. Refactored the project structure, keeping the command-line demo [cli_demo.py](../cli_demo.py) and the Web UI demo [webui.py](../webui.py) in the root directory;
2. Improved the Web UI: after launching, the model is first loaded with the default options in [configs/model_config.py](../configs/model_config.py), and error messages were added;
3. Added more answers to frequently asked questions.
**[2023/04/12]**
1. Replaced the sample files in the Web UI to avoid files that cannot be read on Ubuntu because of encoding issues;
2. Replaced the prompt template in `knowledge_based_chatglm.py` to avoid garbled chatglm output caused by a prompt template mixing Chinese and English.
**[2023/04/11]**
1. Added Web UI V0.1 (thanks to [@liangtongt](https://github.com/liangtongt));
2. Added FAQs to `README.md` (thanks to [@calcitem](https://github.com/calcitem) and [@bolongliu](https://github.com/bolongliu));
3. Added automatic detection of whether `cuda`, `mps`, or `cpu` is available as the runtime device for the LLM and Embedding models;
4. Added a check on `filepath` in `knowledge_based_chatglm.py`: in addition to importing a single file, a single folder path is now supported as input; all files in the folder are traversed and the command line shows whether each file was loaded successfully.
**[2023/04/09]**
1. Replaced the previously used `ChatVectorDBChain` with `RetrievalQA` from `langchain`, which effectively reduces the problem of running out of GPU memory and stopping after 2-3 questions;
2. Added settings for the `EMBEDDING_MODEL`, `VECTOR_SEARCH_TOP_K`, `LLM_MODEL`, `LLM_HISTORY_LEN`, and `REPLY_WITH_SOURCE` parameters in `knowledge_based_chatglm.py`;
3. Added `chatglm-6b-int4` and `chatglm-6b-int4-qe`, which require less GPU memory, as alternative LLM models;
4. Fixed code errors in `README.md` (thanks to [@calcitem](https://github.com/calcitem)).
**[2023/04/07]**
1. Fixed the issue of doubled GPU memory usage when loading the ChatGLM model (thanks to [@suc16](https://github.com/suc16) and [@myml](https://github.com/myml));
2. Added a GPU memory cleanup mechanism;
3. Added `nghuyong/ernie-3.0-nano-zh` and `nghuyong/ernie-3.0-base-zh` as alternative Embedding models, which use less GPU memory than `GanymedeNil/text2vec-large-chinese` (thanks to [@lastrei](https://github.com/lastrei)).

View File

@ -1,223 +0,0 @@
### FAQ
Q1: Which file formats does this project support?
A1: txt, docx, md and pdf files have been tested so far; for more formats please refer to the [langchain documentation](https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/unstructured_file.html). It is a known issue that files containing special characters may fail to load.
---
Q2: The Python package `nltk` raises a `Resource punkt not found.` error. How do I fix it?
A2: Option 1: download https://github.com/nltk/nltk_data/raw/gh-pages/packages/tokenizers/punkt.zip, extract it, and place the `packages/tokenizers` content under the `nltk_data/tokenizers` storage path.
The `nltk_data` storage paths can be queried via `nltk.data.path`.
Option 2: run the following Python code:
```
import nltk
nltk.download()
```
---
Q3: The Python package `nltk` raises a `Resource averaged_perceptron_tagger not found.` error. How do I fix it?
A3: Option 1: download https://github.com/nltk/nltk_data/blob/gh-pages/packages/taggers/averaged_perceptron_tagger.zip, extract it, and place it under the `nltk_data/taggers` storage path.
The `nltk_data` storage paths can be queried via `nltk.data.path`.
Option 2: run the following Python code:
```
import nltk
nltk.download()
```
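As a non-interactive alternative to `nltk.download()` above, a minimal sketch that fetches only the two resources mentioned in Q2 and Q3:

```python
import nltk

# Download just the missing resources instead of opening the full interactive downloader.
nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")
```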
---
Q4: Can this project run in Colab?
A4: You can try running it in Colab with the chatglm-6b-int4 model. Note that if you want to run the Web UI in Colab, you need to set the `share` parameter to `True` in `demo.queue(concurrency_count=3).launch(server_name='0.0.0.0', share=False, inbrowser=False)` in `webui.py`.
---
Q5: Installing packages with pip inside Anaconda has no effect. How do I fix it?
A5: This is a system environment issue; see [在Anaconda中使用pip安装包无效问题](在Anaconda中使用pip安装包无效问题.md) for details.
---
Q6: How do I download the models required by this project to my local machine?
A6: All models used in this project are open-source models downloadable from `huggingface.com`. Taking the default `chatglm-6b` and `text2vec-large-chinese` models as examples, you can download them with the following code:
```shell
# install git lfs
$ git lfs install
# download the LLM model
$ git clone https://huggingface.co/THUDM/chatglm-6b /your_path/chatglm-6b
# download the Embedding model
$ git clone https://huggingface.co/GanymedeNil/text2vec-large-chinese /your_path/text2vec
# when the model needs updating, open the model folder and pull the latest files/code
$ git pull
```
---
Q7: What if downloading models from `huggingface.com` is too slow?
A7: You can use the Baidu Netdisk links for the model weights used in this project:
- ernie-3.0-base-zh.zip link: https://pan.baidu.com/s/1CIvKnD3qzE-orFouA8qvNQ?pwd=4wih
- ernie-3.0-nano-zh.zip link: https://pan.baidu.com/s/1Fh8fgzVdavf5P1omAJJ-Zw?pwd=q6s5
- text2vec-large-chinese.zip link: https://pan.baidu.com/s/1sMyPzBIXdEzHygftEoyBuA?pwd=4xs7
- chatglm-6b-int4-qe.zip link: https://pan.baidu.com/s/1DDKMOMHtNZccOOBGWIOYww?pwd=22ji
- chatglm-6b-int4.zip link: https://pan.baidu.com/s/1pvZ6pMzovjhkA6uPcRLuJA?pwd=3gjd
- chatglm-6b.zip link: https://pan.baidu.com/s/1B-MpsVVs1GHhteVBetaquw?pwd=djay
---
Q8: After downloading the models, how do I modify the code to use the local models?
A8: Once the models are downloaded, modify the `embedding_model_dict` and `llm_model_dict` parameters in [configs/model_config.py](../configs/model_config.py); for example, change `embedding_model_dict` from:
```python
embedding_model_dict = {
"ernie-tiny": "nghuyong/ernie-3.0-nano-zh",
"ernie-base": "nghuyong/ernie-3.0-base-zh",
"text2vec": "GanymedeNil/text2vec-large-chinese"
}
```
to:
```python
embedding_model_dict = {
"ernie-tiny": "nghuyong/ernie-3.0-nano-zh",
"ernie-base": "nghuyong/ernie-3.0-base-zh",
"text2vec": "/Users/liuqian/Downloads/ChatGLM-6B/text2vec-large-chinese"
}
```
---
Q9: While running `python cli_demo.py`, the GPU runs out of memory with "OutOfMemoryError: CUDA out of memory"
A9: Lower the values of `VECTOR_SEARCH_TOP_K` and `LLM_HISTORY_LEN`, e.g. `VECTOR_SEARCH_TOP_K = 5` and `LLM_HISTORY_LEN = 2`; this shortens the `prompt` built from `query` and `context` and reduces memory usage. Alternatively, enable quantization by modifying the `LOAD_IN_8BIT` parameter in [configs/model_config.py](../configs/model_config.py). A sketch of these settings follows.
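A sketch of the corresponding settings in [configs/model_config.py](../configs/model_config.py), using the example values from the answer above (adjust them to your GPU):

```python
# configs/model_config.py, example values from the answer above
VECTOR_SEARCH_TOP_K = 5   # fewer retrieved chunks -> shorter prompt
LLM_HISTORY_LEN = 2       # shorter history -> shorter prompt
LOAD_IN_8BIT = True       # enable quantization to reduce GPU memory usage
```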
---
Q10: While running `pip install -r requirements.txt`, a Python package such as langchain cannot find a matching version
A10: Switch the PyPI index (e.g. the Aliyun or Tsinghua mirrors) and reinstall; when network conditions allow, using the pypi.org index directly is recommended. The commands are:
```shell
# use the pypi.org index
$ pip install -r requirements.txt -i https://pypi.python.org/simple
```
```shell
# use the Aliyun mirror
$ pip install -r requirements.txt -i http://mirrors.aliyun.com/pypi/simple/
```
```shell
# use the Tsinghua mirror
$ pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple/
```
---
Q11: When starting api.py, the upload_file endpoint throws `partially initialized module 'charset_normalizer' has no attribute 'md__mypyc' (most likely due to a circular import)`
A11: This is caused by too new a version of the charset_normalizer module; downgrade charset_normalizer. It has been tested to work with charset_normalizer==2.1.0.
---
Q12: When calling the `bing_search_chat` API endpoint, `Failed to establish a new connection: [Errno 110] Connection timed out` is raised
A12: This happens because the server is behind a firewall; ask the administrator to add it to the whitelist. If it is a company server, better forget about it. GG--!
---
Q13: Loading chatglm-6b-int8 or chatglm-6b-int4 throws `RuntimeError: Only Tensors of floating point and complex dtype can require gradients`
A13: This is suspected to be an issue with chatglm's quantization or a torch version difference: a Parameter operation is applied again to a torch.zeros matrix that has already become a Parameter, which raises `RuntimeError: Only Tensors of floating point and complex dtype can require gradients`. The fix is to change line 374 of quantization.py in the original chatglm project files to:
```
try:
    self.weight = Parameter(self.weight.to(kwargs["device"]), requires_grad=False)
except Exception as e:
    pass
```
If the above does not work, apply the same change to quantization.py in the original chatglm files under the .cache/huggingface/modules/ directory; if there is more than one symlink, choose the correct path according to the error message.
Even though the model then loads successfully, inference may still fail on CPU, i.e. the model keeps outputting "gugugugu" for every question.
Therefore it is best not to try to load a quantized model on CPU; the likely reason is that the quantization operations of mainstream Python quantization packages are executed on GPU, so there is an inherent gap.
---
Q14: After changing the path in the configuration, loading text2vec-large-chinese still prints `WARNING: No sentence-transformers model found with name text2vec-large-chinese. Creating a new one with MEAN pooling.`
A14: Try switching to another embedding, such as text2vec-base-chinese: in [configs/model_config.py](../configs/model_config.py), change the `text2vec-base` entry to a local path; both absolute and relative paths work, as sketched below.
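A sketch of the corresponding `embedding_model_dict` entry; the path below is a placeholder for your local copy of the model:

```python
embedding_model_dict = {
    # point the entry at a local directory; absolute and relative paths both work
    "text2vec-base": "/your_path/text2vec-base-chinese",
}
```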
---
Q15: Creating tables with the pg vector store fails
A15: You need to manually install the corresponding vector extension (connect to pg and run `CREATE EXTENSION IF NOT EXISTS vector`), for example as follows.
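A minimal sketch of creating the extension from Python, assuming the `psycopg2` package; the connection parameters are placeholders for your own PostgreSQL instance:

```python
import psycopg2

# Connect to the database used by the project and create the pgvector extension.
conn = psycopg2.connect(host="127.0.0.1", port=5432, dbname="langchain_chatchat",
                        user="postgres", password="postgres")
conn.autocommit = True
with conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
conn.close()
```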
---
Q16: pymilvus connection timeout
A16: The pymilvus version must match the milvus version, otherwise connections will time out; pymilvus==2.1.3 is a working reference.
Q17: When using the vllm inference acceleration framework, the model is already downloaded but a HuggingFace connectivity error occurs
A17: Modify the prepare_hf_model_weights function in the /site-packages/vllm/model_executor/weight_utils.py file of your Python environment, following the code below:
```python
if not is_local:
# Use file lock to prevent multiple processes from
# downloading the same model weights at the same time.
model_path_temp = os.path.join(
os.getenv("HOME"),
".cache/huggingface/hub",
"models--" + model_name_or_path.replace("/", "--"),
"snapshots/",
)
downloaded = False
if os.path.exists(model_path_temp):
temp_last_dir = os.listdir(model_path_temp)[-1]
model_path_temp = os.path.join(model_path_temp, temp_last_dir)
base_pattern = os.path.join(model_path_temp, "pytorch_model*.bin")
files = glob.glob(base_pattern)
if len(files) > 0:
downloaded = True
if downloaded:
hf_folder = model_path_temp
else:
with get_lock(model_name_or_path, cache_dir):
hf_folder = snapshot_download(model_name_or_path,
allow_patterns=allow_patterns,
cache_dir=cache_dir,
tqdm_class=Disabledtqdm)
else:
hf_folder = model_name_or_path
```

View File

@ -1,63 +0,0 @@
# Installation
## Environment Check
```shell
# First, make sure your machine has Python 3.8 - 3.10 installed
$ python --version
Python 3.8.13
# If your version is lower than that, you can create an environment with conda
$ conda create -p /your_path/env_name python=3.8
# Activate the environment
$ source activate /your_path/env_name
# Or install with conda without specifying a path; in the following, replace /your_path/env_name with env_name
$ conda create -n env_name python=3.8
$ conda activate env_name # Activate the environment
# Upgrade pip
$ pip3 install --upgrade pip
# Deactivate the environment
$ source deactivate /your_path/env_name
# Remove the environment
$ conda env remove -p /your_path/env_name
```
## Project Dependencies
```shell
# Clone the repository
$ git clone https://github.com/chatchat-space/Langchain-Chatchat.git
# Enter the directory
$ cd Langchain-Chatchat
# Install all dependencies
$ pip install -r requirements.txt
# The default dependencies include the basic runtime environment (FAISS vector store). To use vector stores such as milvus/pg_vector, uncomment the corresponding dependencies in requirements.txt before installing.
```
In addition, so that the API and the WebUI can run separately, you can install only the dependency packages you need.
- If you only need to run the API, you can run:
```shell
$ pip install -r requirements_api.txt
# The default dependencies include the basic runtime environment (FAISS vector store). To use vector stores such as milvus/pg_vector, uncomment the corresponding dependencies in requirements.txt before installing.
```
- If you only need to run the WebUI, you can run:
```shell
$ pip install -r requirements_webui.txt
```
Note: when using `langchain.document_loaders.UnstructuredFileLoader` to ingest unstructured files such as `.docx`, you may need to install additional dependency packages; please refer to the [langchain documentation](https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/unstructured_file.html). A short sketch follows.
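A minimal sketch of loading a `.docx` file with `UnstructuredFileLoader`; the file name is a placeholder, and the extra `unstructured` dependencies mentioned above must be installed:

```python
from langchain.document_loaders import UnstructuredFileLoader

loader = UnstructuredFileLoader("example.docx")  # placeholder path
docs = loader.load()
print(len(docs), docs[0].page_content[:100])
```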

View File

@ -1,114 +0,0 @@
## Issue with Installing Packages Using pip in Anaconda
## Problem
Recently, when running open-source code, I encountered an issue: after creating a virtual environment with conda and switching to the new environment, using pip to install packages would be "ineffective." Here, "ineffective" means that the packages installed with pip are not in this new environment.
------
## Analysis
1. First, create a test environment called test: `conda create -n test`
2. Activate the test environment: `conda activate test`
3. Use pip to install numpy: `pip install numpy`. You'll find that numpy already exists in the default environment.
```powershell
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Requirement already satisfied: numpy in c:\programdata\anaconda3\lib\site-packages (1.20.3)
```
4. Check the information of pip: `pip show pip`
```powershell
Name: pip
Version: 21.2.4
Summary: The PyPA recommended tool for installing Python packages.
Home-page: https://pip.pypa.io/
Author: The pip developers
Author-email: distutils-sig@python.org
License: MIT
Location: c:\programdata\anaconda3\lib\site-packages
Requires:
Required-by:
```
5. We can see that the current pip is in the default conda environment. This explains why the package is not in the new virtual environment when we directly use pip to install packages - because the pip being used belongs to the default environment, the installed package either already exists or is installed directly into the default environment.
------
## Solution
1. We can directly use the conda command to install new packages, but sometimes conda may not have certain packages/libraries, so we still need to use pip to install.
2. We can first use the conda command to install the pip package for the current virtual environment, and then use pip to install new packages.
```powershell
# Use conda to install the pip package
(test) PS C:\Users\Administrator> conda install pip
Collecting package metadata (current_repodata.json): done
Solving environment: done
....
done
# Display the information of the current pip, and find that pip is in the test environment
(test) PS C:\Users\Administrator> pip show pip
Name: pip
Version: 21.2.4
Summary: The PyPA recommended tool for installing Python packages.
Home-page: https://pip.pypa.io/
Author: The pip developers
Author-email: distutils-sig@python.org
License: MIT
Location: c:\programdata\anaconda3\envs\test\lib\site-packages
Requires:
Required-by:
# Now use pip to install the numpy package, and it is installed successfully
(test) PS C:\Users\Administrator> pip install numpy
Looking in indexes:
https://pypi.tuna.tsinghua.edu.cn/simple
Collecting numpy
Using cached https://pypi.tuna.tsinghua.edu.cn/packages/4b/23/140ec5a509d992fe39db17200e96c00fd29603c1531ce633ef93dbad5e9e/numpy-1.22.2-cp39-cp39-win_amd64.whl (14.7 MB)
Installing collected packages: numpy
Successfully installed numpy-1.22.2
# Use pip list to view the currently installed packages, no problem
(test) PS C:\Users\Administrator> pip list
Package Version
------------ ---------
certifi 2021.10.8
numpy 1.22.2
pip 21.2.4
setuptools 58.0.4
wheel 0.37.1
wincertstore 0.2
```
## Supplement
1. The reason I did not notice this problem before is probably that the packages installed in the virtual environment were pinned versions, which overwrote the packages in the default environment. The real issue was a lack of careful observation :), otherwise I could have noticed `Successfully uninstalled numpy-xxx` (default version) and `Successfully installed numpy-1.20.3` (specified version).
2. During testing, I found that if the Python version is specified when creating a new environment, this issue should not occur. My guess is that pip is then installed in the virtual environment, whereas in our case no packages at all, pip included, were installed, so the default environment's pip was used.
3. One open question remains: I believe I did specify the Python version when creating new virtual environments before, yet the default environment's pip package was still used. However, I could not reproduce the issue on two different machines, which led to point 2 above.
4. After encountering the problem mentioned in point 3, I solved it by using `python -m pip install package-name`, adding `python -m` before pip. As for why, you can refer to the answer on [StackOverflow](https://stackoverflow.com/questions/41060382/using-pip-to-install-packages-to-anaconda-environment):
>1. If you have a non-conda pip as your default pip but conda python as your default python (as below):
>
>```shell
>>which -a pip
>/home/<user>/.local/bin/pip
>/home/<user>/.conda/envs/newenv/bin/pip
>/usr/bin/pip
>
>>which -a python
>/home/<user>/.conda/envs/newenv/bin/python
>/usr/bin/python
>```
>
>2. Then, instead of calling `pip install <package>` directly, you can use the module flag -m in python so that it installs with the anaconda python
>
>```shell
>python -m pip install <package>
>```
>
>3. This will install the package to the anaconda library directory rather than the library directory associated with the (non-anaconda) pip
>4. The reason for doing this is as follows: the pip command references a specific pip file/shortcut (which -a pip will tell you which one). Similarly, the python command references a specific python file (which -a python will tell you which one). For one reason or another, these two commands can become out of sync, so your "default" pip is in a different folder than your default python and therefore is associated with different versions of python.
>5. In contrast, the python -m pip construct does not use the shortcut that the pip command points to. Instead, it asks python to find its pip version and use that version to install a package.

View File

@ -1,674 +0,0 @@
# Guide: Loading P-Tuning in chatchat
Although P-tuning is a PEFT method, it is not compatible with Huggingface's peft Python package, and fastchat hard-codes model loading in several places via string matching, so fastchat and chatchat cannot load p-tuning out of the box. After many attempts, the langchain-chatchat development team provides the following guide for loading p-tuning.
# 1. Modify the peft folder
1. Rename the config.json file to adapter_config.json;
2. Make sure the folder contains a pytorch_model.bin file;
3. Rename the folder so that its name contains the word 'peft';
4. Add the following fields to the adapter_config.json file:
```json
"base_model_name_or_path": "/root/model/chatglm2-6b/"
"task_type": "CAUSAL_LM",
"peft_type": "PREFIX_TUNING",
"inference_mode": true,
"revision": "main",
"num_virtual_tokens": 16
```
**Here, "base_model_name_or_path" is the location of the base model.**
5. Move the folder into the project folder, e.g. under the Langchain-Chatchat project directory. (A scripted sketch of steps 1-4 follows this list.)
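A scripted sketch of steps 1 to 4 above; the folder path is a placeholder and the field values mirror the example adapter_config.json:

```python
import json
from pathlib import Path

# Placeholder paths: the peft folder name must contain the word "peft".
peft_dir = Path("chatglm2-6b-peft")
base_model = "/root/model/chatglm2-6b/"

# 1. Rename config.json to adapter_config.json (if not done already).
cfg = peft_dir / "adapter_config.json"
old = peft_dir / "config.json"
if old.exists() and not cfg.exists():
    old.rename(cfg)

# 2. The checkpoint itself must be present.
assert (peft_dir / "pytorch_model.bin").exists(), "pytorch_model.bin is required"

# 4. Add the fields listed above to adapter_config.json.
data = json.loads(cfg.read_text()) if cfg.exists() else {}
data.update({
    "base_model_name_or_path": base_model,
    "task_type": "CAUSAL_LM",
    "peft_type": "PREFIX_TUNING",
    "inference_mode": True,
    "revision": "main",
    "num_virtual_tokens": 16,
})
cfg.write_text(json.dumps(data, indent=2, ensure_ascii=False))
```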
# 2. Modify the fastchat package code
## 2.1 Changes to fastchat.model.model_adapter
1. Change the load_model function in fastchat.model.model_adapter.py to:
```python
def load_model(
model_path: str,
device: str = "cuda",
num_gpus: int = 1,
max_gpu_memory: Optional[str] = None,
dtype: Optional[torch.dtype] = None,
load_8bit: bool = False,
cpu_offloading: bool = False,
gptq_config: Optional[GptqConfig] = None,
awq_config: Optional[AWQConfig] = None,
revision: str = "main",
debug: bool = False,
load_kwargs = {}
):
"""Load a model from Hugging Face."""
# get model adapter
adapter = get_model_adapter(model_path)
kwargs = load_kwargs
# Handle device mapping
cpu_offloading = raise_warning_for_incompatible_cpu_offloading_configuration(
device, load_8bit, cpu_offloading
)
if device == "cpu":
kwargs["torch_dtype"]= torch.float32
if CPU_ISA in ["avx512_bf16", "amx"]:
try:
import intel_extension_for_pytorch as ipex
kwargs ["torch_dtype"]= torch.bfloat16
except ImportError:
warnings.warn(
"Intel Extension for PyTorch is not installed, it can be installed to accelerate cpu inference"
)
elif device == "cuda":
kwargs["torch_dtype"] = torch.float16
if num_gpus != 1:
kwargs["device_map"] = "auto"
if max_gpu_memory is None:
kwargs[
"device_map"
] = "sequential" # This is important for not the same VRAM sizes
available_gpu_memory = get_gpu_memory(num_gpus)
kwargs["max_memory"] = {
i: str(int(available_gpu_memory[i] * 0.85)) + "GiB"
for i in range(num_gpus)
}
else:
kwargs["max_memory"] = {i: max_gpu_memory for i in range(num_gpus)}
elif device == "mps":
kwargs["torch_dtype"] = torch.float16
# Avoid bugs in mps backend by not using in-place operations.
replace_llama_attn_with_non_inplace_operations()
elif device == "xpu":
kwargs["torch_dtype"] = torch.bfloat16
# Try to load ipex, while it looks unused, it links into torch for xpu support
try:
import intel_extension_for_pytorch as ipex
except ImportError:
warnings.warn(
"Intel Extension for PyTorch is not installed, but is required for xpu inference."
)
elif device == "npu":
kwargs["torch_dtype"]= torch.float16
# Try to load ipex, while it looks unused, it links into torch for xpu support
try:
import torch_npu
except ImportError:
warnings.warn("Ascend Extension for PyTorch is not installed.")
else:
raise ValueError(f"Invalid device: {device}")
if cpu_offloading:
# raises an error on incompatible platforms
from transformers import BitsAndBytesConfig
if "max_memory" in kwargs:
kwargs["max_memory"]["cpu"] = (
str(math.floor(psutil.virtual_memory().available / 2**20)) + "Mib"
)
kwargs["quantization_config"] = BitsAndBytesConfig(
load_in_8bit_fp32_cpu_offload=cpu_offloading
)
kwargs["load_in_8bit"] = load_8bit
elif load_8bit:
if num_gpus != 1:
warnings.warn(
"8-bit quantization is not supported for multi-gpu inference."
)
else:
model, tokenizer = adapter.load_compress_model(
model_path=model_path,
device=device,
torch_dtype=kwargs["torch_dtype"],
revision=revision,
)
if debug:
print(model)
return model, tokenizer
elif awq_config and awq_config.wbits < 16:
assert (
awq_config.wbits == 4
), "Currently we only support 4-bit inference for AWQ."
model, tokenizer = load_awq_quantized(model_path, awq_config, device)
if num_gpus != 1:
device_map = accelerate.infer_auto_device_map(
model,
max_memory=kwargs["max_memory"],
no_split_module_classes=[
"OPTDecoderLayer",
"LlamaDecoderLayer",
"BloomBlock",
"MPTBlock",
"DecoderLayer",
],
)
model = accelerate.dispatch_model(
model, device_map=device_map, offload_buffers=True
)
else:
model.to(device)
return model, tokenizer
elif gptq_config and gptq_config.wbits < 16:
model, tokenizer = load_gptq_quantized(model_path, gptq_config)
if num_gpus != 1:
device_map = accelerate.infer_auto_device_map(
model,
max_memory=kwargs["max_memory"],
no_split_module_classes=["LlamaDecoderLayer"],
)
model = accelerate.dispatch_model(
model, device_map=device_map, offload_buffers=True
)
else:
model.to(device)
return model, tokenizer
kwargs["revision"] = revision
if dtype is not None: # Overwrite dtype if it is provided in the arguments.
kwargs["torch_dtype"] = dtype
# Load model
model, tokenizer = adapter.load_model(model_path, kwargs)
if (
device == "cpu"
and kwargs["torch_dtype"] is torch.bfloat16
and CPU_ISA is not None
):
model = ipex.optimize(model, dtype=kwargs["torch_dtype"])
if (device == "cuda" and num_gpus == 1 and not cpu_offloading) or device in (
"mps",
"xpu",
"npu",
):
model.to(device)
if device == "xpu":
model = torch.xpu.optimize(model, dtype=kwargs["torch_dtype"], inplace=True)
if debug:
print(model)
return model, tokenizer
```
2. Change the get_generate_stream_function function in fastchat.model.model_adapter.py to:
```python
def get_generate_stream_function(model: torch.nn.Module, model_path: str):
"""Get the generate_stream function for inference."""
from fastchat.serve.inference import generate_stream
model_type = str(type(model)).lower()
is_chatglm = "chatglm" in model_type
is_falcon = "rwforcausallm" in model_type
is_codet5p = "codet5p" in model_type
is_peft = "peft" in model_type
if is_chatglm:
return generate_stream_chatglm
elif is_falcon:
return generate_stream_falcon
elif is_codet5p:
return generate_stream_codet5p
elif peft_share_base_weights and is_peft:
# Return a curried stream function that loads the right adapter
# according to the model_name available in this context. This ensures
# the right weights are available.
@torch.inference_mode()
def generate_stream_peft(
model,
tokenizer,
params: Dict,
device: str,
context_len: int,
stream_interval: int = 2,
judge_sent_end: bool = False,
):
model.set_adapter(model_path)
if "chatglm" in str(type(model.base_model)).lower():
model.disable_adapter()
prefix_state_dict = torch.load(os.path.join(model_path, "pytorch_model.bin"))
new_prefix_state_dict = {}
for k, v in prefix_state_dict.items():
if k.startswith("transformer.prefix_encoder."):
new_prefix_state_dict[k[len("transformer.prefix_encoder."):]] = v
elif k.startswith("transformer.prompt_encoder."):
new_prefix_state_dict[k[len("transformer.prompt_encoder."):]] = v
model.transformer.prefix_encoder.load_state_dict(new_prefix_state_dict)
for x in generate_stream_chatglm(
model,
tokenizer,
params,
device,
context_len,
stream_interval,
judge_sent_end,
):
yield x
elif "rwforcausallm" in str(type(model.base_model)).lower():
for x in generate_stream_falcon(
model,
tokenizer,
params,
device,
context_len,
stream_interval,
judge_sent_end,
):
yield x
elif "codet5p" in str(type(model.base_model)).lower():
for x in generate_stream_codet5p(
model,
tokenizer,
params,
device,
context_len,
stream_interval,
judge_sent_end,
):
yield x
else:
for x in generate_stream(
model,
tokenizer,
params,
device,
context_len,
stream_interval,
judge_sent_end,
):
yield x
return generate_stream_peft
else:
return generate_stream
```
3. Change the load_model method of the PeftModelAdapter class in fastchat.model.model_adapter.py to:
```python
def load_model(self, model_path: str, from_pretrained_kwargs: dict):
"""Loads the base model then the (peft) adapter weights"""
from peft import PeftConfig, PeftModel
config = PeftConfig.from_pretrained(model_path)
base_model_path = config.base_model_name_or_path
if "peft" in base_model_path:
raise ValueError(
f"PeftModelAdapter cannot load a base model with 'peft' in the name: {config.base_model_name_or_path}"
)
# Basic proof of concept for loading peft adapters that share the base
# weights. This is pretty messy because Peft re-writes the underlying
# base model and internally stores a map of adapter layers.
# So, to make this work we:
# 1. Cache the first peft model loaded for a given base models.
# 2. Call `load_model` for any follow on Peft models.
# 3. Make sure we load the adapters by the model_path. Why? This is
# what's accessible during inference time.
# 4. In get_generate_stream_function, make sure we load the right
# adapter before doing inference. This *should* be safe when calls
# are blocked the same semaphore.
if peft_share_base_weights:
if base_model_path in peft_model_cache:
model, tokenizer = peft_model_cache[base_model_path]
# Super important: make sure we use model_path as the
# `adapter_name`.
model.load_adapter(model_path, adapter_name=model_path)
else:
base_adapter = get_model_adapter(base_model_path)
base_model, tokenizer = base_adapter.load_model(
base_model_path, from_pretrained_kwargs
)
# Super important: make sure we use model_path as the
# `adapter_name`.
from peft import get_peft_model
model = get_peft_model(base_model,config,adapter_name=model_path)
peft_model_cache[base_model_path] = (model, tokenizer)
return model, tokenizer
# In the normal case, load up the base model weights again.
base_adapter = get_model_adapter(base_model_path)
base_model, tokenizer = base_adapter.load_model(
base_model_path, from_pretrained_kwargs
)
from peft import get_peft_model
model = get_peft_model(base_model,config,adapter_name=model_path)
return model, tokenizer
```
4. Change the load_model method of the ChatglmAdapter class in fastchat.model.model_adapter.py to:
```python
def load_model(self, model_path: str, from_pretrained_kwargs: dict):
revision = from_pretrained_kwargs.get("revision", "main")
tokenizer = AutoTokenizer.from_pretrained(
model_path, trust_remote_code=True, revision=revision
)
config = AutoConfig.from_pretrained(model_path, trust_remote_code=True,**from_pretrained_kwargs)
model = AutoModel.from_pretrained(
model_path, trust_remote_code=True, config=config
)
return model, tokenizer
```
## 2.2 Changes to fastchat.serve.model_worker
1. Change the __init__ method of ModelWorker in the fastchat.serve.model_worker file as follows:
```python
class ModelWorker(BaseModelWorker):
def __init__(
self,
controller_addr: str,
worker_addr: str,
worker_id: str,
model_path: str,
model_names: List[str],
limit_worker_concurrency: int,
no_register: bool,
device: str,
num_gpus: int,
max_gpu_memory: str,
dtype: Optional[torch.dtype] = None,
load_8bit: bool = False,
cpu_offloading: bool = False,
gptq_config: Optional[GptqConfig] = None,
awq_config: Optional[AWQConfig] = None,
stream_interval: int = 2,
conv_template: Optional[str] = None,
embed_in_truncate: bool = False,
seed: Optional[int] = None,
        load_kwargs = {},  # changed: new parameter
**kwargs,
):
super().__init__(
controller_addr,
worker_addr,
worker_id,
model_path,
model_names,
limit_worker_concurrency,
conv_template=conv_template,
)
logger.info(f"Loading the model {self.model_names} on worker {worker_id} ...")
self.model, self.tokenizer = load_model(
model_path,
device=device,
num_gpus=num_gpus,
max_gpu_memory=max_gpu_memory,
dtype=dtype,
load_8bit=load_8bit,
cpu_offloading=cpu_offloading,
gptq_config=gptq_config,
awq_config=awq_config,
            load_kwargs=load_kwargs  # changed: passed through to load_model
)
self.device = device
if self.tokenizer.pad_token == None:
self.tokenizer.pad_token = self.tokenizer.eos_token
self.context_len = get_context_length(self.model.config)
print("**"*100)
self.generate_stream_func = get_generate_stream_function(self.model, model_path)
print(f"self.generate_stream_func{self.generate_stream_func}")
print("*"*100)
self.stream_interval = stream_interval
self.embed_in_truncate = embed_in_truncate
self.seed = seed
if not no_register:
self.init_heart_beat()
```
2. Add the following argument in create_model_worker in the fastchat.serve.model_worker file:
```python
parser.add_argument("--load_kwargs",type=dict,default={})
```
and change the following statement:
```python
worker = ModelWorker(
args.controller_address,
args.worker_address,
worker_id,
args.model_path,
args.model_names,
args.limit_worker_concurrency,
no_register=args.no_register,
device=args.device,
num_gpus=args.num_gpus,
max_gpu_memory=args.max_gpu_memory,
dtype=str_to_torch_dtype(args.dtype),
load_8bit=args.load_8bit,
cpu_offloading=args.cpu_offloading,
gptq_config=gptq_config,
awq_config=awq_config,
stream_interval=args.stream_interval,
conv_template=args.conv_template,
embed_in_truncate=args.embed_in_truncate,
seed=args.seed,
)
```
to:
```python
worker = ModelWorker(
args.controller_address,
args.worker_address,
worker_id,
args.model_path,
args.model_names,
args.limit_worker_concurrency,
no_register=args.no_register,
device=args.device,
num_gpus=args.num_gpus,
max_gpu_memory=args.max_gpu_memory,
dtype=str_to_torch_dtype(args.dtype),
load_8bit=args.load_8bit,
cpu_offloading=args.cpu_offloading,
gptq_config=gptq_config,
awq_config=awq_config,
stream_interval=args.stream_interval,
conv_template=args.conv_template,
embed_in_truncate=args.embed_in_truncate,
seed=args.seed,
load_kwargs=args.load_kwargs
)
```
With this, all fastchat changes needed to load p-tuning are complete. When loading p-tuning through fastchat, set `PEFT_SHARE_BASE_WEIGHTS=true` and pass the --load_kwargs argument as a dict whose pre_seq_len matches the value used when training the p-tuning weights. For example, change `parser.add_argument("--load_kwargs",type=dict,default={})` from step 2.2.2 to:
`parser.add_argument("--load_kwargs",type=dict,default={"pre_seq_len":16})`
# 3. Modify the langchain-chatchat code
1. Add the following field to the FSCHAT_MODEL_WORKERS dict in configs/serve_config.py (a sketch of the surrounding context follows the snippet):
```
"load_kwargs": {"pre_seq_len": 16} #值修改为adapter_config.json中的pre_seq_len值
```
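A sketch of how this fragment might sit inside configs/serve_config.py; the surrounding keys and the "default" nesting are assumptions, so match them to your own config:

```python
FSCHAT_MODEL_WORKERS = {
    "default": {
        # ... existing worker options ...
        "load_kwargs": {"pre_seq_len": 16},  # must match the pre_seq_len in adapter_config.json
    },
}
```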
2. Change create_model_worker_app in startup.py to:
```python
def create_model_worker_app(log_level: str = "INFO", **kwargs) -> FastAPI:
"""
    kwargs may contain the following fields:
    host:
    port:
    model_names: [`model_name`]
    controller_address:
    worker_address:
    For online APIs:
        online_api: True
        worker_class: `provider`
    For local (offline) models:
        model_path: `model_name_or_path`, a huggingface repo-id or a local path
        device: `LLM_DEVICE`
"""
import fastchat.constants
fastchat.constants.LOGDIR = LOG_PATH
from fastchat.serve.model_worker import worker_id, logger
import argparse
logger.setLevel(log_level)
parser = argparse.ArgumentParser()
args = parser.parse_args([])
for k, v in kwargs.items():
setattr(args, k, v)
    # online model APIs
if worker_class := kwargs.get("worker_class"):
from fastchat.serve.model_worker import app
worker = worker_class(model_names=args.model_names,
controller_addr=args.controller_address,
worker_addr=args.worker_address)
sys.modules["fastchat.serve.model_worker"].worker = worker
    # local models
else:
from configs.model_config import VLLM_MODEL_DICT
if kwargs["model_names"][0] in VLLM_MODEL_DICT and args.infer_turbo == "vllm":
import fastchat.serve.vllm_worker
from fastchat.serve.vllm_worker import VLLMWorker,app
from vllm import AsyncLLMEngine
from vllm.engine.arg_utils import AsyncEngineArgs,EngineArgs
            args.tokenizer = args.model_path  # if the tokenizer differs from model_path, set it here
args.tokenizer_mode = 'auto'
args.trust_remote_code= True
args.download_dir= None
args.load_format = 'auto'
args.dtype = 'auto'
args.seed = 0
args.worker_use_ray = False
args.pipeline_parallel_size = 1
args.tensor_parallel_size = 1
args.block_size = 16
args.swap_space = 4 # GiB
args.gpu_memory_utilization = 0.90
args.max_num_batched_tokens = 2560
args.max_num_seqs = 256
args.disable_log_stats = False
args.conv_template = None
args.limit_worker_concurrency = 5
args.no_register = False
            args.num_gpus = 1  # vllm workers are split with tensor parallelism; set the number of GPUs here
args.engine_use_ray = False
args.disable_log_requests = False
if args.model_path:
args.model = args.model_path
if args.num_gpus > 1:
args.tensor_parallel_size = args.num_gpus
for k, v in kwargs.items():
setattr(args, k, v)
engine_args = AsyncEngineArgs.from_cli_args(args)
engine = AsyncLLMEngine.from_engine_args(engine_args)
worker = VLLMWorker(
controller_addr = args.controller_address,
worker_addr = args.worker_address,
worker_id = worker_id,
model_path = args.model_path,
model_names = args.model_names,
limit_worker_concurrency = args.limit_worker_concurrency,
no_register = args.no_register,
llm_engine = engine,
conv_template = args.conv_template,
)
sys.modules["fastchat.serve.vllm_worker"].engine = engine
sys.modules["fastchat.serve.vllm_worker"].worker = worker
else:
from fastchat.serve.model_worker import app, GptqConfig, AWQConfig, ModelWorker
args.gpus = "0" # GPU的编号,如果有多个GPU可以设置为"0,1,2,3"
args.max_gpu_memory = "20GiB"
            args.num_gpus = 1  # model workers are split with model parallelism; set the number of GPUs here
args.load_8bit = False
args.cpu_offloading = None
args.gptq_ckpt = None
args.gptq_wbits = 16
args.gptq_groupsize = -1
args.gptq_act_order = False
args.awq_ckpt = None
args.awq_wbits = 16
args.awq_groupsize = -1
args.model_names = []
args.conv_template = None
args.limit_worker_concurrency = 5
args.stream_interval = 2
args.no_register = False
args.embed_in_truncate = False
            args.load_kwargs = {"pre_seq_len": 16}  # changed
for k, v in kwargs.items():
setattr(args, k, v)
if args.gpus:
if args.num_gpus is None:
args.num_gpus = len(args.gpus.split(','))
if len(args.gpus.split(",")) < args.num_gpus:
raise ValueError(
f"Larger --num-gpus ({args.num_gpus}) than --gpus {args.gpus}!"
)
os.environ["CUDA_VISIBLE_DEVICES"] = args.gpus
gptq_config = GptqConfig(
ckpt=args.gptq_ckpt or args.model_path,
wbits=args.gptq_wbits,
groupsize=args.gptq_groupsize,
act_order=args.gptq_act_order,
)
awq_config = AWQConfig(
ckpt=args.awq_ckpt or args.model_path,
wbits=args.awq_wbits,
groupsize=args.awq_groupsize,
)
worker = ModelWorker(
controller_addr=args.controller_address,
worker_addr=args.worker_address,
worker_id=worker_id,
model_path=args.model_path,
model_names=args.model_names,
limit_worker_concurrency=args.limit_worker_concurrency,
no_register=args.no_register,
device=args.device,
num_gpus=args.num_gpus,
max_gpu_memory=args.max_gpu_memory,
load_8bit=args.load_8bit,
cpu_offloading=args.cpu_offloading,
gptq_config=gptq_config,
awq_config=awq_config,
stream_interval=args.stream_interval,
conv_template=args.conv_template,
embed_in_truncate=args.embed_in_truncate,
                load_kwargs=args.load_kwargs  # changed
)
sys.modules["fastchat.serve.model_worker"].args = args
sys.modules["fastchat.serve.model_worker"].gptq_config = gptq_config
sys.modules["fastchat.serve.model_worker"].worker = worker
MakeFastAPIOffline(app)
app.title = f"FastChat LLM Server ({args.model_names[0]})"
app._worker = worker
return app
```
At this point, all changes needed for langchain-chatchat to load p-tuning are complete, and p-tuning can be loaded as follows:
```shell
PEFT_SHARE_BASE_WEIGHTS=true python startup.py -a
```

View File

@ -1,49 +0,0 @@
version: '3.5'
services:
etcd:
container_name: milvus-etcd
image: quay.io/coreos/etcd:v3.5.0
environment:
- ETCD_AUTO_COMPACTION_MODE=revision
- ETCD_AUTO_COMPACTION_RETENTION=1000
- ETCD_QUOTA_BACKEND_BYTES=4294967296
- ETCD_SNAPSHOT_COUNT=50000
volumes:
- ${DOCKER_VOLUME_DIRECTORY:-.}/volumes/etcd:/etcd
command: etcd -advertise-client-urls=http://127.0.0.1:2379 -listen-client-urls http://0.0.0.0:2379 --data-dir /etcd
minio:
container_name: milvus-minio
image: minio/minio:RELEASE.2022-03-17T06-34-49Z
environment:
MINIO_ACCESS_KEY: minioadmin
MINIO_SECRET_KEY: minioadmin
volumes:
- ${DOCKER_VOLUME_DIRECTORY:-.}/volumes/minio:/minio_data
command: minio server /minio_data
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
interval: 30s
timeout: 20s
retries: 3
standalone:
container_name: milvus-standalone
image: milvusdb/milvus:v2.1.3
command: ["milvus", "run", "standalone"]
environment:
ETCD_ENDPOINTS: etcd:2379
MINIO_ADDRESS: minio:9000
volumes:
- ${DOCKER_VOLUME_DIRECTORY:-.}/volumes/milvus:/var/lib/milvus
ports:
- "19530:19530"
- "9091:9091"
depends_on:
- "etcd"
- "minio"
networks:
default:
name: milvus

View File

@ -1,13 +0,0 @@
version: "3.8"
services:
postgresql:
image: ankane/pgvector:v0.4.1
container_name: langchain_chatchat-pg-db
environment:
POSTGRES_DB: langchain_chatchat
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
ports:
- 5432:5432
volumes:
- ./data:/var/lib/postgresql/data

View File

@ -1,24 +0,0 @@
## How to Customize the Text Splitter
### Where to write it and which files to change
1. Create a new file under the ```text_splitter``` folder named after your splitter, e.g. `my_splitter.py`, and then import your splitter in `__init__.py`, as shown below:
```python
from .my_splitter import MySplitter
```
2. Modify the ```config/model_config.py``` file and add your splitter's name to ```text_splitter_dict```, as shown below:
```python
"MySplitter": {
    "source": "huggingface",  ## if "tiktoken" is selected, OpenAI's method is used instead
    "tokenizer_name_or_path": "your tokenizer",  # with "huggingface", the huggingface tokenizer is used; some tokenizers need to be downloaded from Huggingface
}
TEXT_SPLITTER = "MySplitter"
```
After completing the steps above, you can use your own splitter. (A minimal sketch follows.)
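A minimal sketch of what `my_splitter.py` might contain; using langchain's `CharacterTextSplitter` as the base class is an assumption, and the splitting rule is only an example:

```python
from typing import List

from langchain.text_splitter import CharacterTextSplitter


class MySplitter(CharacterTextSplitter):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)

    def split_text(self, text: str) -> List[str]:
        # Example rule: split on blank lines and drop empty pieces.
        return [piece.strip() for piece in text.split("\n\n") if piece.strip()]
```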
### How to contribute your splitter
1. Put your splitter's code file under the ```text_splitter``` folder, named after your splitter, e.g. `my_splitter.py`, and then import it in `__init__.py`.
2. Open a PR and explain the scenario your splitter targets or what it improves. We would very much appreciate a concrete example of an application scenario.
3. Add usage instructions and support notes for your splitter to Readme.md.

View File

@ -1,8 +0,0 @@
The docker-compose.yml files for the vector store environments are in docs/docker/vector_db.
Taking milvus as an example (a connectivity check sketch follows the commands):
```shell
cd docs/docker/vector_db/milvus
docker-compose up -d
```
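Once the containers are up, a quick connectivity check from Python; this sketch assumes the `pymilvus` client is installed and the default port mapping above:

```python
from pymilvus import connections, utility

# Connect to the standalone Milvus started by docker-compose above.
connections.connect(alias="default", host="127.0.0.1", port="19530")
print(utility.list_collections())  # an empty list on a fresh deployment
```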

View File

@ -1,37 +0,0 @@
# Starting the API Service
## Start from the .py file
You can start the API service by running `api.py` directly; by default it serves HTTP and WS on ip 0.0.0.0 and port 7861.
```shell
python api.py
```
At startup, the model-loading parameters listed in StartOption are supported, as well as IP and port settings.
```shell
python api.py --model-name chatglm-6b-int8 --port 7862
```
## Start via cli.bat / cli.sh
You can also start it via the command-line control script.
```shell
cli.sh api --help
```
The other configurable parameters are the same as for the .py startup above.
# Starting the API service over https / wss
## Creating local SSL certificate files
If you do not have an officially issued CA certificate, you can [install mkcert](https://github.com/FiloSottile/mkcert#installation) and then generate a local CA certificate with:
```shell
mkcert -install
mkcert api.example.com 47.123.123.123 localhost 127.0.0.1 ::1
```
Press Enter to accept the defaults; two .pem files, prefixed with the first domain name in the command, will be saved in the current directory.
Start the service with these two files passed as parameters.
````shell
python api --port 7862 --ssl_keyfile api.example.com+4-key.pem --ssl_certfile api.example.com+4.pem
./cli.sh api --port 7862 --ssl_keyfile api.example.com+4-key.pem --ssl_certfile api.example.com+4.pem
````
Alternatively, a similar effect can be achieved by putting Nginx in front as a reverse proxy; see the relevant documentation.

View File

@ -1,125 +0,0 @@
## Issue with pip-installed packages not taking effect in Anaconda
## Problem
Recently, when running open-source code, I ran into an issue: after creating a virtual environment with conda and switching to it, installing packages with pip was "ineffective". "Ineffective" here means that the packages installed with pip are not in this new environment.
------
## Analysis
1. First, create a test environment called test: `conda create -n test`
2. Activate the test environment: `conda activate test`
3. Use pip to install numpy: `pip install numpy`. You will find that numpy already exists in the default environment.
```powershell
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Requirement already satisfied: numpy in c:\programdata\anaconda3\lib\site-packages (1.20.3)
```
4. Now check the pip info: `pip show pip`
```powershell
Name: pip
Version: 21.2.4
Summary: The PyPA recommended tool for installing Python packages.
Home-page: https://pip.pypa.io/
Author: The pip developers
Author-email: distutils-sig@python.org
License: MIT
Location: c:\programdata\anaconda3\lib\site-packages
Requires:
Required-by:
```
5. We can see that the current pip lives in the default conda environment. This explains why packages are not in the new virtual environment when we install them with pip directly: the pip being used belongs to the default environment, so the installed package either already exists there or is installed straight into the default environment.
------
## Solution
1. We can install new packages directly with the conda command, but conda sometimes does not have certain packages/libraries, so pip is still needed.
2. We can first install the pip package for the current virtual environment with conda, and then use pip to install new packages.
```powershell
# install the pip package with conda
(test) PS C:\Users\Administrator> conda install pip
Collecting package metadata (current_repodata.json): done
Solving environment: done
....
done
# show the current pip info; pip is now in the test environment
(test) PS C:\Users\Administrator> pip show pip
Name: pip
Version: 21.2.4
Summary: The PyPA recommended tool for installing Python packages.
Home-page: https://pip.pypa.io/
Author: The pip developers
Author-email: distutils-sig@python.org
License: MIT
Location: c:\programdata\anaconda3\envs\test\lib\site-packages
Requires:
Required-by:
# install the numpy package with pip again; it installs successfully
(test) PS C:\Users\Administrator> pip install numpy
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Collecting numpy
Using cached https://pypi.tuna.tsinghua.edu.cn/packages/4b/23/140ec5a509d992fe39db17200e96c00fd29603c1531ce633ef93dbad5e9e/numpy-1.22.2-cp39-cp39-win_amd64.whl (14.7 MB)
Installing collected packages: numpy
Successfully installed numpy-1.22.2
# check the installed packages with pip list; everything looks fine
(test) PS C:\Users\Administrator> pip list
Package Version
------------ ---------
certifi 2021.10.8
numpy 1.22.2
pip 21.2.4
setuptools 58.0.4
wheel 0.37.1
wincertstore 0.2
```
------
## Supplement
1. The reason I did not notice this problem before is probably that the packages installed in the virtual environment were pinned versions, which overwrote the packages in the default environment. The real issue was a lack of careful observation :), otherwise I could have noticed `Successfully uninstalled numpy-xxx` (default version) and `Successfully installed numpy-1.20.3` (specified version).
2. While testing, I found that if the Python version is specified when creating a new environment, this issue should not occur. My guess is that pip is then installed in the virtual environment, whereas in our case no packages at all, pip included, were installed, so the default environment's pip was used.
3. One open question: I believe I did specify the Python version when creating new virtual environments before, yet the default environment's pip was still used. However, I could not reproduce this on two machines, which led to point 2 above.
4. After running into the problem in point 3, I solved it with `python -m pip install package-name`, adding `python -m` in front of pip. As for why, see the answer on [StackOverflow](https://stackoverflow.com/questions/41060382/using-pip-to-install-packages-to-anaconda-environment):
> 1. If you have a non-conda pip as your default pip but conda's python as your default python (as below):
>
> ```shell
> >which -a pip
> /home/<user>/.local/bin/pip
> /home/<user>/.conda/envs/newenv/bin/pip
> /usr/bin/pip
>
> >which -a python
> /home/<user>/.conda/envs/newenv/bin/python
> /usr/bin/python
> ```
>
> 2. Then, instead of calling `pip install <package>` directly, you can use python's module flag -m so that it installs with the anaconda python:
>
> ```shell
>python -m pip install <package>
> ```
>
> 3. This installs the package into the anaconda library directory rather than the library directory associated with the non-anaconda pip.
>
> 4. The reason is as follows: the pip command references a specific pip file/shortcut (`which -a pip` tells you which one). Similarly, the python command references a specific python file (`which -a python` tells you which one). For one reason or another, the two commands can get out of sync, so your "default" pip sits in a different folder than your default python and is therefore associated with a different version of python.
>
> 5. In contrast, the python -m pip construct does not use the shortcut that the pip command points to; instead, it asks python to find its own pip version and uses that to install the package.

View File

@ -1,80 +0,0 @@
## Customizing Your Own Agent
### 1. Create your own Agent tool
+ Create your own file under the ```server/agent``` folder and add it to ```tools.py```; that completes the tool setup.
+ For example, if you create a ```custom_agent.py``` file containing a ```work``` function, you need to add the following code to ```tools.py```:
```python
from custom_agent import work
Tool.from_function(
func=work,
name="该函数的名字",
description=""
)
```
+ Note: if you are sure that a tool will not be used in a given project, remove it from Tools to reduce the risk of the model picking the wrong tool because of misclassification. A complete minimal sketch of the two files follows.
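A complete minimal sketch of the two files involved; the file layout, the body of `work`, and the import path are illustrative assumptions:

```python
# server/agent/custom_agent.py
def work(query: str) -> str:
    """Do the real work for this tool and return a plain-text result."""
    return f"processed: {query}"


# server/agent/tools.py
from langchain.agents import Tool
from server.agent.custom_agent import work  # hypothetical import path; match your layout

tools = [
    Tool.from_function(
        func=work,
        name="work",
        description="Describe here when the model should pick this tool.",
    ),
]
```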
### 2. Modify the custom_template.py file
Depending on the LLM you choose, you need to set an Agent prompt suited to that model and customize the return format.
Our code provides two defaults. One is a prompt adapted to GPT and Qwen:
```python
"""
Answer the following questions as best you can. You have access to the following tools:
{tools}
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can be repeated zero or more times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Begin!
history:
{history}
Question: {input}
Thought: {agent_scratchpad}
"""
```
The other is a prompt adapted to GLM-130B:
```python
"""
尽可能地回答以下问题。你可以使用以下工具:{tools}
请按照以下格式进行:
Question: 需要你回答的输入问题
Thought: 你应该总是思考该做什么
Action: 需要使用的工具,应该是[{tool_names}]中的一个
Action Input: 传入工具的内容
Observation: 行动的结果
... (这个Thought/Action/Action Input/Observation可以重复N次)
Thought: 我现在知道最后的答案
Final Answer: 对原始输入问题的最终答案
现在开始!
之前的对话:
{history}
New question: {input}
Thought: {agent_scratchpad}
"""
```
### 3. Limitations
1. In our experiments, models below the 70B level can hardly achieve good results without fine-tuning. We therefore recommend that developers use models above the 70B level and fine-tune them for better results.
2. Because Agents are fragile, the temperature parameter has a large impact on results. When using a custom Agent, we recommend setting it below 0.1 for the different models to get better results.
3. Even with models above the 70B level, developers should still deeply optimize the prompt so that the model can successfully select tools and complete tasks.
### 4. Agents we already support
We provide three Agent tools that are executed with LLMs:
1. A translation tool that translates input in any language.
2. A math tool that uses LLMMathChain for mathematical calculations.
3. A weather tool that uses a custom LLMWetherChain to query the weather by calling the HeFeng Weather API.
4. We also support the Agent tools supported by Langchain; the code already provides implementations of the Shell and Google Search tools.

img/partners/autodl.svg Normal file (new, 123 KiB)

File diff suppressed because one or more lines are too long

img/partners/aws.svg Normal file (new, 42 KiB)

File diff suppressed because one or more lines are too long

img/partners/chatglm.svg Normal file (new, 6.3 KiB)

File diff suppressed because one or more lines are too long

View File

@ -0,0 +1,9 @@
<svg width="654" height="213" viewBox="0 0 654 213" fill="none" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<rect x="654" width="213" height="654" transform="rotate(90 654 0)" fill="url(#pattern0)"/>
<defs>
<pattern id="pattern0" patternContentUnits="objectBoundingBox" width="1" height="1">
<use xlink:href="#image0_237_57" transform="matrix(0.0204695 0 0 0.00666667 -0.00150228 0)"/>
</pattern>
<image id="image0_237_57" width="49" height="150" xlink:href="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAADEAAACWCAYAAAB3hWBKAAAMyElEQVR4Ae1de5AcRRk/AR88C2MgIdzdfB0jIioFIoggGkoeCbnt3lAVtUALygdCUSWI0X8M3LcHiUHkEQoIuem7aKR4pCJSFFqQ6cvyCMnMXngVBORhBaG0ohaiIgqCrPX1bO/OzM7uzcztZAO1VzU1szPf1/39+uvn933d19fX+9vNSgAFuwo53J/rVYDzcoWNAtaiYNVcLw5X5guCA+YKgAoodxAFODdXEByeKhVA5KyJ/nlI2mh1CfZaAOSzLeka/GMB+uqIGDgpVwBTJX7VabP2DQqEAlZNxUPfUcCWOh9nz685pu/9SfhyoRnh1tF1YQSrljhckCSjUsFaGORDbn0zCV8uNCVuLQoLw3iSjLCvbw/k7I0GL9yehC8XmpKArzYEYdWSsBYnzQgFbG/wwmNJ+TpOh8I6qyGI7ipXJMkEl/R9ADm8XuflbHMSvlxoRoasz9cFaQyIG1CwpSXOTsUizG+6CoNDyOGuEB+H1bkImCRRnN+3Fwp4NSRQA0ziUb40NPjlJPnlRlPi7IfTAsHZxtyES5qw7mkEjGcCwmErnt4/I2leudONFAePGebwIxRsHQoot5/1snuwAMVqX9/7chesl0GvBHolkK0E/JEbHsvUQ+mJI7s0W84d4kLBrsgqvOEr8S6CuJLP3D88G822Fu8qCBSDp5jSnM69uyA4403CF+C8pklf3EQw8O6KM5jVodqdPpkR0f/pCAgvfSpd5qBZ7LBgf68D4ewPXRYpW/YoGK0fGtPuxXMPy5ZSF7lKnH0nBEKw5V0UJ1vWP1k0+GEU7M06EM52rl/St2e21LrERdNp5DBRB6HNkmyy/VQ8YqAuwLldEt/PFgV7KQQg2D6SPnPA7oLg7K0eCF39uqwJmjIkMCK3NkiTobkI87tanXqZ90qggyWAwjoh7ay1iZ73z+ugSOmTQs52TruLFdb16XPuIEcPRH1Ef7drgrNfYhGO6mDlyD8p5OzqYBvK3f2bB6QrxJyBIAgU8Os88sk9TW01r7cH9iaZfnLPtNMZlDhcFNRGiVtndzqP3NPDAhwfBIEcbss9005ngBzOD4EQsL3TeaRKjxpmqqVoyH9dt5K8nCrTThOjYH8Ol2pdsIYZp9GI499x9lCn5UqVXidAkNknVaadJp4WCA7vIGc3dN3EQ9EDWZanJW79AAtweKcLtZfebl8COhClEf6GyNkXd3uhowKWBHw/1I1ytixKQ79rJs1ADC27IY6uK+90Iwz09a3cU9TLhMAK2NIVgeMy7YGIK5VuvOtpohulHpdnTxNxpdKNd+8NTTQNdjDWZE8twvwSt04OjRMcnoqja3q3aHBu7sqJaiIkaGAQzPw+7/0TVEI9EEk01dNEwtaECw/tb2qMgVCfaX/bFQ07IdYeWa8EeiVQKwHPkWd5jv2Yp2Q1y+UqO1VcrA5jHbI+E1XANUv6976cs49H30/521P28iyCB3nSgKDdAcjhGdrysGLxoR8pcRg2/g3kek/g2JRCBwk2bx7b33PsN4ICZXlOCuL6hfM+iII9iQXrKvKfLz/TOgQ53IHcGiH3gJ67cUgHYlKNn5JF6ChPUhBITkoBt+s9S4IVDIgVC2cfhILdqCPg0oJwHVtEBfIc+1bPsTHNVdk0/qWghuOekVujyOFvtXC9paUACEM/wgdPxNQglH1kGIT9rEmwk3cSGDn7J1UhSrdm/73N98TC9rotWMAqFDCeKu9yubyXq+zX6kAcO5e42BJnS5DD76khN4GgNYuxTgrr+tQgKEHPkffUQShZdTeu/ViqkkhAjEsO2g85PIoCtmFhzj6hNiEaWz5x8dzDUlcnDULJc4IgqMtNIFdqkuWczdJBkxzWxIEg+zAKeDgTCNe95QBX2W8ZIK6yd1aruEdqKRMw4BA7DnXgpLW83sVSb0VB9gJeoB1jmUBQ3p6S6w0IuruOPZ6md5p07MQxgCVh/RiFv+EW+cAc3eUW4Sg9htBgl7ZhEwDXketcZf87CCL1s2MnjovVRmsOjw4PDX4hqjzaeJgpViRYlVILb+ZaKUBEBe/I7/cKCMdz7Punc1UmxrobK94RdfYS6ZVAcwnQIDfpyKOp349envqFnvc0c+0mb6rr1+/pKftbniNfatXNViZkwYi7rbzmcG9ibNhVtuMq2f0dYdu2rdnHVbLcSvjG+9EhA0Kvyc0YoWS1smn0k+ZbqztNAmk2G/yOHH5LE0K9VY4miNMY7G5oCNraUBDUxBP3rTs4yOM6csoTVPAMmE1L0RAIAb/Rs1oOm0pF9o1Ma+xtD645xHXk2xGB3nYd6dbGjf+Zb0EQJIir5IvmmzchrwkKF33WEz86/4azv4QCxAS8ghyuq81uz/dBMBnlb/vbVfapdUH0xE8+75btfsMUnFNNbhoLHRHjOnKz4XWVbBsDSNUEuXUmctiEgt2NBbgYOdw6zFkFOXvePxsHVmbShKdk0QhC94qyv24A0N1z5L8a3xttwv9m32u+UQMP8sU96+qkjQRwszZgC1iJAu7V6wwdGNkpEBtHQ+HSSUFQ1YsTPPiuFQgU7EF/1ad3mXmp1xNRTXhKnhPM2HPs101pR9uE58hJ88117LuCfHHPbUAsR84kcrYZs6yxPWWfZAShu+vYz2wv37ifEaKVJlw1dnKIT9lTHm7VGoR1gh9nW6tOaRdFNQtgvQfygcintWmTbE/K/m9dWLJHKblS91pKvlN/T+DV2MkGeNy91uv8DMmyQcYCDj/3Sx5eoLN0/P2vGdsEZegp+86gQBmeH48TPPguBMKYZ/SdPYcF9hXk8B9/fU222JRdLGXkKnuWp+xnMwhfpTaTZLSmfHR14mx9GJw/YlNb8Pf7ZVxjayDa4iE3pAHiKrmjokaPDQrV7lkfbCUGjwjSoICV2kCweNbByNktmU37wUQrE6MnuI5c6jpylPr+6GqP5lgVx76WeiqyHgZ5e8+9EnivlQD1MtHVXOLfajz/KJt2BV4pr53tKntjmp6pmdb+abs8cv/mOfbWZqFaL47iabsIgqpQvFC7BoQ+wm+6Gw1p/bCrQGARDtQW8YUzDjDVCzl8DTmbJDcwXSjgMhrZzfdE94oaOzsKgpyRiZhTEmERgLyjwwKepg2HtZDuJ5Gzh5DDtTWX1x30nCrpaHUiB0uqBFIQ+yDgduSwAgXs8H3X7GrtFi4CUFKXFwc+Smc+p0i2r69mb/qr0QZZyLdsWb93qkQSEl9esBg5VIgcOawmrdQOyaINuNlBUIKekmMGRO1eTChXYrIRAZ/THiEOt2FhzkwdFsFhgg7ZpcMTzflQmTRBUjQ55B05mVi6hIR+xABc45/WAjv02qEIByJnzyGHX+lzaykSjo5+TeuMJxl2lNd+yHXkK0Ft0Gw2oXypyHDhjAP0sZXcuoQYa435d/6GXCjTnY62bErUc6QKCpj
Pc7LBjsyVly6YfSyFP5SEdSEumjWXutkmoaMvPEfelI/gwQGxPQhtPKPqQtE0ZLqkSBs6Pj8mmDI23slV8qJug9ANm8P9NEaEo6Dh1ZBZk2jiqlNFyQXdBmFqhx6hBazS1g2KuqGD2Ws9lqGJvZMdKfGUOsa5kog34VTcN+EPfNYISoez0wZ16mp7Z8+aUundd0UJuI59fDDgpOLYF0wnkmZbec3MYHrk58gdh+vYF0d7KPIGPVKWmU4D2urYnwin136c6AjAOBAkhO8JGrskrVZ2KxCmNEkrW8tr9XQ4SantliA0GDIOK/t71Wp1yrPCOw0CFw3OHSlA+7DUmOr0qNFC891+cCqtaGd8wI/tqanbBA1qtDCK07Q+fJezZTRvarlXPDp/Ik8oRSe7jvxjMwhZJQ+Rq+wLW2kliyboQAZcNPApWi8gHdAQ2aukTfv6ZHl2ZBxQ8jmHJoHGnas9Rc0ru3rkPrl5Kw+MD0QTnSaIZ7RVQx99DDf7R73qY5BfjjrrQ/lGq1PUotFOK7Xg3+8GE8wGAm5Czo6subkuQw4LdOg1nVdLz4K9aIJ/g3nVn6cCQYSkFfJLxFYv3R3LstFKJhCC3ehPAMlLCpdhgZ1mQJQEnN5qt34qEIaYZq0to20c+Q9vQn67uWHLtv/2KvQPbDg80hEQUZ+0AWDuNa/q6lZa8ZSM9m5tQVC6tLXAaEI3Yq0JHa28gbRCFkCTf+w9SXWKY2yrlZRdLPKp2gT8qa0JMysIAvbEfev2dZVN4UQh33VYS1OPE4Eu9nHqZmv7Ye9Gzh7wn+FhFHBnXGHqd9MBYRKd3CRPJI9pWHhjLEgAol6dAv8aKDhWcLZMN/QhdpzJM3TvBAhKkEycrrJXNWslAQgKD2qxvdPvbtkyvculZtIMAcjjh6vGjwitvROusVvJQiYdY8psRfOuff9/T15+hUcbNtcAAAAASUVORK5CYII="/>
</defs>
</svg>


Binary file not shown (previous size: 108 KiB).

Binary file not shown (previous size: 188 KiB).

Binary file not shown (previous size: 240 KiB).