RYDE-WORK/llama.cpp
Mirror of https://github.com/RYDE-WORK/llama.cpp.git, synced 2026-01-19 21:23:26 +08:00
llama.cpp/.devops
Latest commit: dbc2ec59b5 by Georgi Gerganov — docker : drop to CUDA 12.4 (#11869), 2025-02-14 14:48:40 +02:00

Commit message body:
* docker : drop to CUDA 12.4
* docker : update readme [no ci]
| File | Last commit | Date |
| --- | --- | --- |
| nix | nix: allow to override rocm gpu targets (#10794) | 2024-12-14 10:17:36 -08:00 |
| cloud-v-pipeline | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| cpu.Dockerfile | docker : add GGML_CPU_ARM_ARCH arg to select ARM architecture to build for (#11419) | 2025-01-25 17:22:41 +01:00 |
| cuda.Dockerfile | docker : drop to CUDA 12.4 (#11869) | 2025-02-14 14:48:40 +02:00 |
| intel.Dockerfile | devops : add docker-multi-stage builds (#10832) | 2024-12-22 23:22:58 +01:00 |
| llama-cli-cann.Dockerfile | docker: use GGML_NATIVE=OFF (#10368) | 2024-11-18 00:21:53 +01:00 |
| llama-cpp-cuda.srpm.spec | devops : remove clblast + LLAMA_CUDA -> GGML_CUDA (#8139) | 2024-06-26 19:32:07 +03:00 |
| llama-cpp.srpm.spec | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| musa.Dockerfile | musa: bump MUSA SDK version to rc3.1.1 (#11822) | 2025-02-13 13:28:18 +01:00 |
| rocm.Dockerfile | devops : add docker-multi-stage builds (#10832) | 2024-12-22 23:22:58 +01:00 |
| tools.sh | docker: add perplexity and bench commands to full image (#11438) | 2025-01-28 10:42:32 +00:00 |
| vulkan.Dockerfile | ci : fix build CPU arm64 (#11472) | 2025-01-29 00:02:56 +01:00 |