readme : update hot topics
commit 9656026b53
parent 922754a8d6
@@ -10,6 +10,7 @@ Inference of [LLaMA](https://arxiv.org/abs/2302.13971) model in pure C/C++
### Hot topics
- Using `llama.cpp` with AWS instances: https://github.com/ggerganov/llama.cpp/discussions/4225
- Looking for contributions to improve and maintain the `server` example: https://github.com/ggerganov/llama.cpp/issues/4216
- Collecting Apple Silicon performance stats: https://github.com/ggerganov/llama.cpp/discussions/4167