Mirror of https://github.com/RYDE-WORK/MiniCPM.git, synced 2026-02-02 13:15:44 +08:00

Commit ff3dba000c (parent 2d3212613d): update readme
@@ -200,12 +200,12 @@ Solving [this issue](https://github.com/ollama/ollama/issues/2383)
## Community
- [ChatLLM](https://github.com/foldl/chatllm.cpp): [Run MiniCPM on CPU](https://huggingface.co/openbmb/MiniCPM-2B-dpo-bf16/discussions/2#65c59c4f27b8c11e43fc8796)
**fastllm**

1. Install [fastllm](https://github.com/ztxz16/fastllm)
2. Inference
```python
import torch
from transformers import AutoTokenizer, LlamaTokenizerFast, AutoModelForCausalLM
from fastllm_pytools import llm  # Python bindings installed with fastllm

path = 'openbmb/MiniCPM-2B-dpo-fp16'
tokenizer = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.float16, trust_remote_code=True)
model = llm.from_hf(model, tokenizer, dtype="float16")  # convert the HF model to fastllm's format
```
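The dpo checkpoints are chat models, so a raw prompt is normally wrapped in role markers before generation. A minimal pure-Python sketch of that step — the `<用户>`/`<AI>` markers follow the examples published on the openbmb model cards, and `build_minicpm_prompt` is an illustrative name, not part of any library:

```python
def build_minicpm_prompt(turns):
    """Flatten (user, assistant) turns into one MiniCPM-style prompt string.

    The marker format is assumed from the openbmb model-card examples; if
    the tokenizer ships a chat template, prefer tokenizer.apply_chat_template.
    """
    parts = []
    for user, assistant in turns:
        parts.append(f"<用户>{user}<AI>{assistant}")
    return "".join(parts)

# The final turn leaves the assistant slot empty so the model completes it.
prompt = build_minicpm_prompt([("Which is the highest mountain in Shandong?", "")])
```

The resulting string can then be passed to whichever backend (fastllm, llama.cpp, transformers) runs the model.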
@@ -205,7 +205,7 @@ MiniCPM supports [llama.cpp](https://github.com/ggerganov/llama.cpp/), [ollama](ht
**fastllm**

1. Compile and install [fastllm](https://github.com/ztxz16/fastllm)
2. Model inference
```python
import torch
from transformers import AutoTokenizer, LlamaTokenizerFast, AutoModelForCausalLM
from fastllm_pytools import llm  # Python bindings installed with fastllm

path = 'openbmb/MiniCPM-2B-dpo-fp16'
tokenizer = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.float16, trust_remote_code=True)
model = llm.from_hf(model, tokenizer, dtype="float16")  # convert the HF model to fastllm's format
```