diff --git a/doc/en/DeepseekR1_V3_tutorial.md b/doc/en/DeepseekR1_V3_tutorial.md
index 45a5aab..1bc1be8 100644
--- a/doc/en/DeepseekR1_V3_tutorial.md
+++ b/doc/en/DeepseekR1_V3_tutorial.md
@@ -1,6 +1,6 @@
# Report
## Prerequisites
-We run our best performance tests on
+We run our best performance tests (V0.2) on
cpu: Intel(R) Xeon(R) Gold 6454S, 1T DRAM (2 NUMA nodes)
gpu: 4090D 24G VRAM
## Bench result
@@ -50,7 +50,7 @@ The main acceleration comes from
*From our research on DeepSeekV2, DeepSeekV3 and DeepSeekR1,
when we slightly decrease the number of activated experts during inference,
-the output quality doesn't change,But the speed of decoding and prefill
+the output quality doesn't change, but decoding and prefill
are sped up, which is inspiring. So our showcase makes use of this finding*
## How to run
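The activated-experts observation in the hunk above can be sketched with a toy top-k router. This is a minimal illustration only, not the repository's code; `route_topk` and all names here are hypothetical, and it assumes a standard MoE gating scheme where the k highest-scoring experts are kept and their weights renormalized.

```python
# Toy sketch (hypothetical, not the project's implementation) of reducing
# the number of activated experts at inference time in an MoE layer.

def route_topk(gate_scores, k):
    """Pick the k highest-scoring experts and renormalize their weights."""
    ranked = sorted(range(len(gate_scores)),
                    key=lambda i: gate_scores[i], reverse=True)
    chosen = ranked[:k]
    total = sum(gate_scores[i] for i in chosen)
    return [(i, gate_scores[i] / total) for i in chosen]

# Example gate scores for 5 experts on one token.
scores = [0.30, 0.25, 0.20, 0.15, 0.10]

full = route_topk(scores, k=4)     # the trained top-k
reduced = route_topk(scores, k=3)  # slightly fewer experts

# The highest-scoring experts keep most of the routing mass, which is why
# a small reduction in k can leave output quality nearly unchanged while
# cutting per-token expert compute roughly in proportion to k.
print(full)
print(reduced)
```

Because the dropped expert carried the smallest renormalized weight, the output changes little, while the per-token expert FLOPs drop from 4 expert evaluations to 3.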