glide-the 5f010823aa Build an abstract dialogue model using classes and methods to represent different dialogue elements
Fixes # None

Before submitting
If you want to review, please refer to the quick start example in langchain/chains/dialogue_answering/main.py. You may need to set your OpenAI API key and the following startup parameters: `--dialogue-path`: the location of the dialogue file; `--embedding-model`: the HuggingFaceEmbeddings model to use (defaults to GanymedeNil/text2vec-large-chinese if not specified).

Regarding the format of the dialogue file, please refer to the following information:

```text
sun:
Has the offline model been run?

glide-the:
Yes, it has been run, but the results are not very satisfactory.

glide-the:
It lacks chat intelligence and falls far behind in terms of logic and reasoning.

sun:
Are you available for voice chat?

glide-the:
I'm considering using this offline model: https://huggingface.co/chat

glide-the:
voice chat okay.

glide-the:
You can take a look at the dev_agent branch of the langchain-chatglm project.

glide-the:
There's a dialogue model question-answering example under the agent.

sun:
Alright.

glide-the:
The specified chat record file is exported from WeChat.
```
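The format above is simple: a line ending in `:` names the speaker, the following lines are that speaker's message, and blank lines separate turns. As a rough sketch of how such an export could be split into (speaker, message) pairs — `parse_dialogue` is a hypothetical helper for illustration, not part of this PR:

```python
from typing import List, Tuple

def parse_dialogue(text: str) -> List[Tuple[str, str]]:
    """Parse a WeChat-style dialogue export into (speaker, message) pairs.

    A line ending in ':' starts a new turn and names the speaker; the
    lines that follow (until the next speaker line) form the message.
    """
    turns = []
    speaker = None
    message_lines = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue  # blank lines only separate turns
        if line.endswith(':'):
            # Flush the previous turn before starting a new one.
            if speaker is not None and message_lines:
                turns.append((speaker, ' '.join(message_lines)))
            speaker = line[:-1]
            message_lines = []
        else:
            message_lines.append(line)
    if speaker is not None and message_lines:
        turns.append((speaker, ' '.join(message_lines)))
    return turns

sample = """sun:
Has the offline model been run?

glide-the:
Yes, it has been run, but the results are not very satisfactory.
"""
print(parse_dialogue(sample))
```

Note this sketch would misparse a message line that happens to end in `:`; a production parser would need a stricter speaker pattern.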
2023-05-26 01:49:07 +08:00


```python
import sys
import os
import argparse
import asyncio
from argparse import Namespace

# Make the project root importable when running this file directly.
sys.path.append(os.path.dirname(os.path.abspath(__file__)) + '/../../')

from chains.dialogue_answering import *
from langchain.llms import OpenAI
from models.base import (BaseAnswer,
                         AnswerResult)
import models.shared as shared
from models.loader.args import parser
from models.loader import LoaderCheckPoint


async def dispatch(args: Namespace):
    args_dict = vars(args)
    # Load the local (offline) model through the shared checkpoint loader.
    shared.loaderCheckPoint = LoaderCheckPoint(args_dict)
    llm_model_ins = shared.loaderLLM()

    if not os.path.isfile(args.dialogue_path):
        raise FileNotFoundError(f'Invalid dialogue file path for demo mode: "{args.dialogue_path}"')

    # An OpenAI LLM drives the zero-shot ReAct agent; the local model answers
    # the dialogue questions.
    llm = OpenAI(temperature=0)
    dialogue_instance = DialogueWithSharedMemoryChains(zero_shot_react_llm=llm,
                                                       ask_llm=llm_model_ins,
                                                       params=args_dict)
    dialogue_instance.agent_chain.run(input="What did David say before, summarize it")


if __name__ == '__main__':
    parser.add_argument('--dialogue-path', default='', type=str, help='location of the dialogue file')
    parser.add_argument('--embedding-model', default='', type=str, help='HuggingFaceEmbeddings model to use')
    args = parser.parse_args(['--dialogue-path', '/home/dmeck/Downloads/log.txt',
                              '--embedding-model', '/media/checkpoint/text2vec-large-chinese/'])
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    loop.run_until_complete(dispatch(args))
```