update readme

zh-zheng 2024-04-09 22:06:37 +08:00
parent 2d3212613d
commit ff3dba000c
2 changed files with 3 additions and 3 deletions

@@ -200,12 +200,12 @@ Solving [this issue](https://github.com/ollama/ollama/issues/2383)
 ## Community
-- [ChatLLM](https://github.com/foldl/chatllm.cpp) :[Run MiniCPM on CPU](https://huggingface.co/openbmb/MiniCPM-2B-dpo-bf16/discussions/2#65c59c4f27b8c11e43fc8796)
+- [ChatLLM](https://github.com/foldl/chatllm.cpp): [Run MiniCPM on CPU](https://huggingface.co/openbmb/MiniCPM-2B-dpo-bf16/discussions/2#65c59c4f27b8c11e43fc8796)
 **fastllm**
 1. install [fastllm](https://github.com/ztxz16/fastllm)
 2. inference
-```
+```python
 import torch
 from transformers import AutoTokenizer, LlamaTokenizerFast, AutoModelForCausalLM
 path = 'openbmb/MiniCPM-2B-dpo-fp16'

@@ -205,7 +205,7 @@ MiniCPM supports [llama.cpp](https://github.com/ggerganov/llama.cpp/), [ollama](ht
 **fastllm**
 1. build and install [fastllm](https://github.com/ztxz16/fastllm)
 2. model inference
-```
+```python
 import torch
 from transformers import AutoTokenizer, LlamaTokenizerFast, AutoModelForCausalLM
 path = 'openbmb/MiniCPM-2B-dpo-fp16'
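
Both hunks cut the fastllm snippet off after the `path = ...` line. For context, a minimal sketch of how that inference step might continue, assuming fastllm's Python bindings expose `llm.from_hf` and `model.response` as in the fastllm README; the `dtype` value and prompt string below are illustrative, not part of this commit:

```python
import torch
from transformers import AutoTokenizer, LlamaTokenizerFast, AutoModelForCausalLM

path = 'openbmb/MiniCPM-2B-dpo-fp16'
tokenizer = AutoTokenizer.from_pretrained(path)
# Load the Hugging Face model first; fastllm converts it in place below.
model = AutoModelForCausalLM.from_pretrained(
    path, torch_dtype=torch.float16, trust_remote_code=True
)

# fastllm's Python bindings, available after the build/install step above
# (module and function names assumed from the fastllm project).
from fastllm_pytools import llm
model = llm.from_hf(model, tokenizer, dtype="float16")

# MiniCPM chat prompts wrap user/assistant turns in <用户>...<AI> markers.
print(model.response("<用户>Hello! Please introduce yourself.<AI>"))
```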