This guide provides a streamlined, step-by-step process to install and run DeepSeek-V3 on Windows. The setup has been simplified to be as beginner-friendly as possible.
System Requirements
Minimum and Recommended Specifications
| Component | Minimum Requirements | Recommended Requirements |
|---|---|---|
| Operating System | Windows 10 or Windows 11 | Windows 11 |
| Python Version | Python 3.9 or higher | Python 3.9 |
| GPU | NVIDIA GPU with 16GB VRAM (for the 7B model) | NVIDIA GPU with 24GB VRAM |
| RAM | 32GB | 64GB |
| Disk Space | 50GB of free disk space | 100GB of free SSD space |
Installation Guide
Step 1: Install Required Software
Install Python 3.9:
- Download Python 3.9 from Python.org
- Select “Windows installer (64-bit)”
- During installation:
  - Check “Add Python 3.9 to PATH”
  - Click “Install Now”
- Verify installation:
python --version
Expected output: Python 3.9.x
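If you prefer to check from within Python itself, the short script below performs the same test. It is a minimal sketch; the file name check_python.py is just an example and is not part of the repository.

```python
# check_python.py -- example name, not part of the DeepSeek-Windows repository.
# Fails fast if the interpreter is older than the minimum required version.
import sys

MIN_VERSION = (3, 9)

if sys.version_info[:2] < MIN_VERSION:
    raise SystemExit(f"Python 3.9+ required, found {sys.version.split()[0]}")
print(f"OK: Python {sys.version.split()[0]}")
```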
Install Git:
- Download the Windows installer from Git SCM (git-scm.com)
- Use default installation options
- Verify installation:
git --version
Install CUDA Toolkit:
- Download CUDA 11.8 from NVIDIA CUDA Toolkit Archive
- Choose:
  - Operating System: Windows
  - Architecture: x86_64
  - Version: 10 or 11 (to match your Windows version)
  - Installer Type: exe (local)
- Run the installer with default options
- Verify installation:
nvcc --version
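As an optional extra check, the sketch below (an illustrative helper, not part of the repository) confirms that the CUDA compiler is reachable from PATH and prints its version banner:

```python
# check_cuda.py -- example name; verifies nvcc is on PATH and prints its version.
import shutil
import subprocess

nvcc_path = shutil.which("nvcc")
if nvcc_path is None:
    raise SystemExit("nvcc not found on PATH - re-run the CUDA 11.8 installer or open a new terminal")

result = subprocess.run(["nvcc", "--version"], capture_output=True, text=True)
print(result.stdout.strip())
```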
Step 2: Download DeepSeek-V3 Repository
Clone the Repository:
git clone https://github.com/khanfar/DeepSeek-Windows.git
cd DeepSeek-Windows
Create and Activate Virtual Environment:
python -m venv venv
venv\Scripts\activate
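Before installing packages, it is worth confirming that the environment is actually active; otherwise pip will install into the global Python. The small check below is a sketch you can paste into the interpreter:

```python
# Confirms the active interpreter belongs to a virtual environment.
import sys

in_venv = sys.prefix != getattr(sys, "base_prefix", sys.prefix)
print("Virtual environment active:", in_venv)
print("Interpreter:", sys.executable)
```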
Step 3: Install Dependencies
Install PyTorch:
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
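After the install finishes, a quick sanity check confirms that the CUDA 11.8 build of PyTorch can see your GPU. This is an illustrative script; the file name is arbitrary.

```python
# verify_torch.py -- confirms PyTorch was built with CUDA support and can see the GPU.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available: ", torch.cuda.is_available())
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print("Device:", props.name)
    print("VRAM (GB):", round(props.total_memory / 1024**3, 1))
```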
Install Other Requirements:
pip install -r requirements.txt
Step 4: Download the Model
Run the Download Script:
python download_model.py
This will download the 7B parameter model (approximately 14GB).
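If download_model.py fails (for example behind a proxy), the weights can usually be fetched directly with the huggingface_hub package instead. The snippet below is a sketch under that assumption; the repository id is a placeholder, so substitute whichever Hugging Face repository download_model.py actually targets.

```python
# Alternative download sketch using huggingface_hub (pip install huggingface_hub).
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="deepseek-ai/<model-repo>",  # placeholder, not a real repository id
    local_dir="model_weights",           # folder expected by the conversion step below
)
```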
Convert Model Format (if needed):
python fp8_cast_bf16.py --input-fp8-hf-path model_weights --output-bf16-hf-path model_weights_bf16
Step 5: Start the Server
Run the Server:
python windows_server.py --model model_weights_bf16 --trust-remote-code
The server will start at: http://127.0.0.1:30000
Test the Model:
Once the server is running, you can interact with the model through the provided interface or an API client.
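For a scripted check, the example below sends a single request from Python. It assumes the server exposes an OpenAI-compatible /v1/chat/completions route, which is common for servers listening on port 30000; adjust the URL and payload to whatever interface windows_server.py actually provides.

```python
# test_request.py -- example client; assumes an OpenAI-compatible chat endpoint.
import requests

response = requests.post(
    "http://127.0.0.1:30000/v1/chat/completions",
    json={
        "model": "model_weights_bf16",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
        "max_tokens": 64,
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```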
By following this guide, you should be able to successfully install and run DeepSeek-V3 on your Windows machine.