Author | Commit | Message | Date
Atream | c5f036e8a4 | Merge pull request #333 from kvcache-ai/feat_experts_gpu: toy support for experts on GPU, no CUDA Graph | 2025-02-15 23:30:24 +08:00
Atream | c189d55bd1 | toy support for experts on GPU, no CUDA Graph | 2025-02-15 15:16:00 +00:00
Shuaiyi | 22280bf17f | get dirname if gguf_path is a file | 2025-02-15 07:08:22 +00:00
Atream | 1946493f2d | warm_up before capture | 2025-02-14 15:52:21 +00:00
Atream | bb35dc5b0d | init support for MLA using Attention kernel | 2025-02-13 15:01:14 +00:00
liam | d07087a7e2 | ⚡ support R1 force thinking | 2025-02-11 15:43:41 +08:00
Azure | f873558a89 | update rope calculation; update modeling.py; update gate for moe | 2025-02-01 07:32:21 +00:00
Azure | 5a50b34627 | fix hard coding caused by rope dim calculation, load from config now | 2025-01-31 15:25:50 +00:00
UnicornChan | a81a7ffe21 | Merge pull request #77 from TKONIY/fix-prefill-and-generate: Fix: Wrong type of token list returned by prefill_and_generate | 2024-10-09 19:04:27 +08:00
yangshen | ee72cee050 | Fix: the tokens return by prefill_and_generate | 2024-09-05 05:29:23 +00:00
Yap Sok Ann | be356c1b8d | Support IQ4_XS dequantize | 2024-09-02 09:10:19 +07:00
chenxl | 4d1d561d28 | [feature] release 0.1.3 | 2024-08-28 16:11:43 +00:00
molamooo | 29f4151ebc | [fix] f16 dequantize device ignored | 2024-08-22 15:10:06 +08:00
Atream | 412055d450 | [feature] experts can be injected using CPUInfer; [fix] fix ktransformers interface when use new CUDAGraphRunner; [fix] fix YAML and optimize logic, the top rule has the highest priority | 2024-08-14 16:10:54 +08:00
BITcyman | 7c4cb520bd | [feature] support q2_k & q3_k dequantize on gpu | 2024-08-12 12:53:12 +00:00
chenxl | f5f79f5c0e | [ADD] support multi-gpu qlen>1 q5_k | 2024-08-12 11:41:26 +00:00
chenxl | 18c42e67df | Initial commit | 2024-07-27 16:06:58 +08:00