From ff3dba000c93b8ff94436156d427dc0de45caa6f Mon Sep 17 00:00:00 2001
From: zh-zheng
Date: Tue, 9 Apr 2024 22:06:37 +0800
Subject: [PATCH] update readme

---
 README-en.md | 4 ++--
 README.md    | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/README-en.md b/README-en.md
index 7f034fc..e57da85 100644
--- a/README-en.md
+++ b/README-en.md
@@ -200,12 +200,12 @@ Solving [this issue](https://github.com/ollama/ollama/issues/2383)
 
 ## Community
 
-- [ChatLLM](https://github.com/foldl/chatllm.cpp) :[Run MiniCPM on CPU](https://huggingface.co/openbmb/MiniCPM-2B-dpo-bf16/discussions/2#65c59c4f27b8c11e43fc8796)
+- [ChatLLM](https://github.com/foldl/chatllm.cpp): [Run MiniCPM on CPU](https://huggingface.co/openbmb/MiniCPM-2B-dpo-bf16/discussions/2#65c59c4f27b8c11e43fc8796)
 
 **fastllm**
 1. install [fastllm](https://github.com/ztxz16/fastllm)
 2. inference
-```
+```python
 import torch
 from transformers import AutoTokenizer, LlamaTokenizerFast, AutoModelForCausalLM
 path = 'openbmb/MiniCPM-2B-dpo-fp16'
diff --git a/README.md b/README.md
index 0237372..a2c3ae0 100644
--- a/README.md
+++ b/README.md
@@ -205,7 +205,7 @@ MiniCPM支持[llama.cpp](https://github.com/ggerganov/llama.cpp/) 、[ollama](ht
 **fastllm**
 1. [编译安装fastllm](https://github.com/ztxz16/fastllm)
 2. 模型推理
-```
+```python
 import torch
 from transformers import AutoTokenizer, LlamaTokenizerFast, AutoModelForCausalLM
 path = 'openbmb/MiniCPM-2B-dpo-fp16'
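For context, the fences this patch re-tags as `python` open the fastllm inference example in both READMEs; the hunks only show its first three lines. A minimal sketch of how such an example typically continues, assuming fastllm's `fastllm_pytools` bindings (`llm.from_hf` to convert the HuggingFace model, `model.response` to generate) and MiniCPM's `<用户>`/`<AI>` chat markers; the actual continuation in the READMEs may differ:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the HuggingFace model as in the snippet shown in the diff.
path = 'openbmb/MiniCPM-2B-dpo-fp16'
tokenizer = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLM.from_pretrained(
    path, torch_dtype=torch.float16, device_map='cuda', trust_remote_code=True
)

# Convert to a fastllm model; dtype selects the runtime precision,
# e.g. "float16", "int8", or "int4" for quantized inference.
from fastllm_pytools import llm
model = llm.from_hf(model, tokenizer, dtype="float16")

# MiniCPM wraps chat turns in <用户> (user) and <AI> markers.
print(model.response("<用户>Hello, what is MiniCPM?<AI>"))
```

Tagging the fence as `python` lets GitHub syntax-highlight this snippet instead of rendering it as plain text, which is the whole point of the `-```` `/` `+````python` change in both hunks.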