diff --git a/README.md b/README.md
index 1ab12d0..a80fe67 100644
--- a/README.md
+++ b/README.md
@@ -82,11 +82,12 @@ Some preparation:
Installation
1. Use a Docker image, see [documentation for Docker](./doc/en/docker.md)
-2. You can install using Pypi:
+2. You can install using PyPI (for Linux):
```
pip install ktransformers --no-build-isolation
```
+ For Windows, we provide a pre-compiled .whl package: [ktransformers-0.1.1+cu125torch24avx2-cp311-cp311-win_amd64.whl](https://github.com/kvcache-ai/ktransformers/releases/download/v0.1.1/ktransformers-0.1.1+cu125torch24avx2-cp311-cp311-win_amd64.whl). It requires CUDA 12.5, Torch 2.4, and Python 3.11; more pre-compiled packages are being produced. See the install sketch after the installation steps below.
3. Or you can download source code and compile:
- init source code
@@ -97,11 +98,16 @@ Some preparation:
git submodule update
```
- [Optional] If you want to run with the website, please [compile the website](./doc/en/api/server/website.md) before executing ```bash install.sh```
- - Compile and install
+ - Compile and install (for Linux)
```
bash install.sh
```
+ - Compile and install (for Windows)
+ ```
+ install.bat
+ ```
+
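+ For reference, installing the downloaded Windows wheel is an ordinary local pip install (a sketch; the filename assumes the v0.1.1 wheel linked above, saved to the current directory):
+ ```
+ pip install ktransformers-0.1.1+cu125torch24avx2-cp311-cp311-win_amd64.whl
+ ```
+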
Local Chat
We provide a simple command-line local chat Python script that you can run for testing.
diff --git a/ktransformers/ktransformers_ext/cpu_backend/task_queue.h b/ktransformers/ktransformers_ext/cpu_backend/task_queue.h
index d4e6d8a..13836b7 100644
--- a/ktransformers/ktransformers_ext/cpu_backend/task_queue.h
+++ b/ktransformers/ktransformers_ext/cpu_backend/task_queue.h
@@ -4,7 +4,7 @@
* @Date : 2024-07-16 10:43:18
* @Version : 1.0.0
* @LastEditors : chenxl
- * @LastEditTime : 2024-08-08 04:23:51
+ * @LastEditTime : 2024-08-12 12:28:25
* @Copyright (c) 2024 by KVCache.AI, All Rights Reserved.
**/
#ifndef CPUINFER_TASKQUEUE_H
@@ -51,7 +51,7 @@ public:
#ifdef _WIN32
ReleaseMutex(global_mutex);
#else
- global_mutex.lock();
+ global_mutex.unlock();
#endif
}
};
@@ -74,4 +74,4 @@ class TaskQueue {
std::atomic<bool> sync_flag;
std::atomic<bool> exit_flag;
};
-#endif
\ No newline at end of file
+#endif
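
Side note on the `task_queue.h` fix: on the non-Windows path this method must release `global_mutex`, not re-acquire it. Calling `lock()` on a `std::mutex` the thread already holds is undefined behavior and in practice deadlocks, which is what the one-character hunk above corrects (the `_WIN32` branch already releases via `ReleaseMutex`). Below is a minimal sketch of the intended acquire/release pairing; the guard name and layout are illustrative, not the actual `TaskQueue` internals:

```
#include <mutex>

std::mutex global_mutex;  // stand-in for the header's global mutex

// Lock on entry, unlock on exit -- the pairing the fix restores.
struct MutexGuard {
    MutexGuard()  { global_mutex.lock(); }    // acquire
    ~MutexGuard() { global_mutex.unlock(); }  // release (the fix);
    // calling lock() here instead would re-acquire a mutex this
    // thread already holds: undefined behavior for std::mutex,
    // and a deadlock in practice.
};

int main() {
    { MutexGuard g; /* critical section */ }  // released at scope exit
    { MutexGuard g; }  // with the old lock() bug, this line would hang
}
```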