Mirror of https://github.com/RYDE-WORK/MiniCPM.git (synced 2026-02-05 22:53:34 +08:00)
add community section
This commit is contained in:
parent 855434ef75 → commit f2a8c95990
README-en.md (10 lines changed)
@@ -44,6 +44,7 @@ We release all model parameters for research and limited commercial use. In futu
 - [Updates](#0)
 - [Downloading](#1)
 - [Quick Start](#2)
+- [Community](#community)
 - [Benchmark](#3)
 - [Deployment on Mobile Phones](#4)
 - [Demo & API](#5)
@@ -55,6 +56,7 @@ We release all model parameters for research and limited commercial use. In futu
 <p id="0"></p>
 
 ## Update Log
+- 2024/02/09 We have added a [Community](#community) section to the README to encourage support for MiniCPM from the open-source community.
 - 2024/02/08 We updated the [llama-format model weights](#llamaformat), which can be loaded into LlamaModel directly. We also support llama.cpp and ollama, making it more convenient for everyone to use our model quickly.
 - 2024/02/01 Initial release.
 
@@ -202,6 +204,14 @@ ollama run minicpm
 ```
 (Note: We have noticed that this quantized model has a noticeable performance decrease and are trying to fix it.)
 
+
+<p id="community"></p>
+
+## Community
+
+- [ChatLLM](https://github.com/foldl/chatllm.cpp): [Run MiniCPM on CPU](https://huggingface.co/openbmb/MiniCPM-2B-dpo-bf16/discussions/2#65c59c4f27b8c11e43fc8796)
+
+
 <p id="3"></p>
 
 ## Evaluation results
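The 2024/02/08 entry above says the llama-format weights can be loaded into LlamaModel directly. A minimal sketch of what that looks like with Hugging Face transformers — the repository id and the `<用户>…<AI>` chat wrapper are assumptions taken from the MiniCPM model cards, not from this diff:

```python
def build_minicpm_prompt(query: str) -> str:
    """Wrap a user query in MiniCPM's chat format (assumed: <用户>...<AI>)."""
    return "<用户>{}<AI>".format(query.strip())


def generate(query: str, repo: str = "openbmb/MiniCPM-2B-dpo-bf16-llama-format") -> str:
    """Load the llama-format weights into a plain LlamaForCausalLM and generate.

    The repo id above is a hypothetical example; substitute whichever
    *-llama-format checkpoint you downloaded.
    """
    # Imported lazily: transformers and the model weights are heavy dependencies.
    from transformers import AutoTokenizer, LlamaForCausalLM

    tokenizer = AutoTokenizer.from_pretrained(repo)
    model = LlamaForCausalLM.from_pretrained(repo, torch_dtype="auto", device_map="auto")
    inputs = tokenizer(build_minicpm_prompt(query), return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=128)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

Because the weights are stored in llama format, no `trust_remote_code` or custom model class is needed — the stock `LlamaForCausalLM` suffices, which is also what makes the llama.cpp and ollama paths mentioned above possible.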
README.md (11 lines changed)
@@ -45,6 +45,7 @@ MiniCPM is a series of models jointly open-sourced by ModelBest and the Natural Language Processing Lab of Tsinghua University
 - [Update Log](#0)
 - [Model Download](#1)
 - [Quick Start](#2)
+- [Open-Source Community](#community)
 - [Evaluation Results](#3)
 - [Deployment on Mobile Phones](#4)
 - [Demo & API Deployment](#5)
@@ -56,6 +57,7 @@ MiniCPM is a series of models jointly open-sourced by ModelBest and the Natural Language Processing Lab of Tsinghua University
 <p id="0"></p>
 
 ## Update Log
+- 2024/02/09 We added an [Open-Source Community](#community) section to the README to collect examples of open-source community support for MiniCPM.
 - 2024/02/08 We updated the [llama-format model weights](#llamaformat), adding support for llama.cpp and ollama so everyone can use our model more quickly.
 - 2024/02/01 Initial release.
 
@@ -212,6 +214,15 @@ ollama run minicpm
 ```
 (Note: We have noticed that this quantized model suffers a noticeable performance loss and are trying to fix it.)
 
 
+
+<p id="community"></p>
+
+## Open-Source Community
+
+- [ChatLLM framework](https://github.com/foldl/chatllm.cpp): [Run MiniCPM on CPU](https://huggingface.co/openbmb/MiniCPM-2B-dpo-bf16/discussions/2#65c59c4f27b8c11e43fc8796)
+
+
 <p id="3"></p>
 
 ## Evaluation Results