update readme
commit ff3dba000c (parent 2d3212613d)
@@ -200,12 +200,12 @@ Solving [this issue](https://github.com/ollama/ollama/issues/2383)
 
 ## Community
 
-- [ChatLLM](https://github.com/foldl/chatllm.cpp) :[Run MiniCPM on CPU](https://huggingface.co/openbmb/MiniCPM-2B-dpo-bf16/discussions/2#65c59c4f27b8c11e43fc8796)
+- [ChatLLM](https://github.com/foldl/chatllm.cpp): [Run MiniCPM on CPU](https://huggingface.co/openbmb/MiniCPM-2B-dpo-bf16/discussions/2#65c59c4f27b8c11e43fc8796)
 
 **fastllm**
 1. install [fastllm](https://github.com/ztxz16/fastllm)
 2. inference
-```
+```python
 import torch
 from transformers import AutoTokenizer, LlamaTokenizerFast, AutoModelForCausalLM
 path = 'openbmb/MiniCPM-2B-dpo-fp16'
@@ -205,7 +205,7 @@ MiniCPM supports [llama.cpp](https://github.com/ggerganov/llama.cpp/), [ollama](ht
 **fastllm**
 1. [Compile and install fastllm](https://github.com/ztxz16/fastllm)
 2. Model inference
-```
+```python
 import torch
 from transformers import AutoTokenizer, LlamaTokenizerFast, AutoModelForCausalLM
 path = 'openbmb/MiniCPM-2B-dpo-fp16'
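
Both hunks truncate the snippet at `path = 'openbmb/MiniCPM-2B-dpo-fp16'`; the rest of the README's fastllm example sits outside the diff context. A minimal sketch of how the inference step might continue, assuming the `fastllm_pytools` bindings (`llm.from_hf`, `model.response`) described in the fastllm README; the prompt and generation call here are illustrative, not part of this commit:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

path = 'openbmb/MiniCPM-2B-dpo-fp16'

# Load the Hugging Face checkpoint first; MiniCPM's modeling code
# requires trust_remote_code.
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    path, torch_dtype=torch.float16, trust_remote_code=True
)

# Convert the loaded model into a fastllm model (assumed API per the
# fastllm README; verify against the version you installed).
from fastllm_pytools import llm
model = llm.from_hf(model, tokenizer, dtype="float16")

# MiniCPM wraps chat turns in <用户>...<AI> markers; this prompt is a
# placeholder, not from the original README.
print(model.response("<用户>Hello, please introduce yourself.<AI>"))
```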