llama.cpp/examples/server/tests/features
Latest commit: 911b3900dd by Johan, "server : add_special option for tokenize endpoint (#7059)", 2024-05-08 15:27:58 +03:00
| Name | Last commit | Date |
| --- | --- | --- |
| steps/ | server : add_special option for tokenize endpoint (#7059) | 2024-05-08 15:27:58 +03:00 |
| embeddings.feature | Improve usability of --model-url & related flags (#6930) | 2024-04-30 00:52:50 +01:00 |
| environment.py | server tests : more pythonic process management; fix bare except: (#6146) | 2024-03-20 06:33:49 +01:00 |
| issues.feature | server: tests: passkey challenge / self-extend with context shift demo (#5832) | 2024-03-02 22:00:14 +01:00 |
| parallel.feature | common: llama_load_model_from_url split support (#6192) | 2024-03-23 18:07:00 +01:00 |
| passkey.feature | server: tests: passkey challenge / self-extend with context shift demo (#5832) | 2024-03-02 22:00:14 +01:00 |
| results.feature | Server: add tests for batch size, different seeds (#6950) | 2024-05-01 17:52:55 +02:00 |
| security.feature | json-schema-to-grammar improvements (+ added to server) (#5978) | 2024-03-21 11:50:43 +00:00 |
| server.feature | server : add_special option for tokenize endpoint (#7059) | 2024-05-08 15:27:58 +03:00 |
| slotsave.feature | llama : save and restore kv cache for single seq id (#6341) | 2024-04-08 15:43:30 +03:00 |
| wrong_usages.feature | server: tests: passkey challenge / self-extend with context shift demo (#5832) | 2024-03-02 22:00:14 +01:00 |
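The listing above is the server test harness of llama.cpp: Gherkin .feature files paired with Python step definitions under steps/, plus environment.py for per-scenario setup, in the style used by the behave runner. As a rough illustration only, a step definition in that style looks like the sketch below; the step wording, the /health check, and the context attributes are assumptions made for this example, not code taken from this repository.

```python
# Minimal behave-style step definition sketch (illustrative only; not the
# actual llama.cpp test code, which lives under steps/).
from behave import given, then
import requests


@given('a server listening on "{base_url}"')
def step_set_base_url(context, base_url):
    # Remember where the server under test can be reached.
    context.base_url = base_url


@then('the health endpoint reports OK')
def step_check_health(context):
    # Hypothetical check: the real suite drives the server HTTP API from
    # scenarios defined in the .feature files listed above.
    response = requests.get(f"{context.base_url}/health", timeout=10)
    assert response.status_code == 200
```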