# Integrating with Ollama

Ollama is a powerful framework for running large language models (LLMs) locally, supporting various models including Llama 2, Mistral, and more. LobeChat now supports integration with Ollama, meaning you can easily use the language models provided by Ollama within LobeChat.

This document will guide you through configuring and deploying LobeChat to use Ollama:
## Running Ollama Locally

First, you need to install Ollama. For detailed steps on installing and configuring Ollama, please refer to the [Ollama Website](https://ollama.com).
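
Once Ollama is installed, you can verify the setup by pulling and chatting with a model from the command line. A minimal sketch (the model name `llama2` is just an example; any model from the Ollama library works):

```bash
# Download a model from the Ollama library (llama2 is an example)
ollama pull llama2

# Start an interactive chat session with the model
ollama run llama2

# The Ollama API server listens on port 11434 by default;
# on most installs it starts automatically, otherwise start it with:
ollama serve
```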
## Running LobeChat Locally

Assuming you have already started the Ollama service locally on port `11434`, run the following Docker command to start LobeChat:
```bash
docker run -d -p 3210:3210 -e OLLAMA_PROXY_URL=http://host.docker.internal:11434/v1 lobehub/lobe-chat
```
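
Note that `host.docker.internal` resolves to the host machine on Docker Desktop (macOS and Windows) but is not available by default on Linux. A common workaround, assuming a standard Docker Engine setup, is to map it to the host gateway explicitly:

```bash
# On Linux, map host.docker.internal to the host gateway so the
# container can reach the Ollama service running on the host
docker run -d -p 3210:3210 \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_PROXY_URL=http://host.docker.internal:11434/v1 \
  lobehub/lobe-chat
```

Before starting LobeChat, you can confirm the Ollama API is reachable with a quick request such as `curl http://localhost:11434/api/tags`, which lists the models available locally.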
Now, you can use LobeChat to converse with the local LLM.

For more information on using Ollama in LobeChat, please refer to [Ollama Usage](/en/usage/providers/ollama).