This guide provides a streamlined, step-by-step process to install and run DeepSeek-V3 on Windows. The setup has been simplified to be as beginner-friendly as possible.

System Requirements

Minimum and Recommended Specifications

| Component        | Minimum Requirements                      | Recommended Requirements   |
| ---------------- | ----------------------------------------- | -------------------------- |
| Operating System | Windows 10 or Windows 11                  | Windows 11                 |
| Python Version   | Python 3.9 or higher                      | Python 3.9                 |
| GPU              | NVIDIA GPU with 16GB VRAM (for 7B model)  | NVIDIA GPU with 24GB VRAM  |
| RAM              | 32GB                                      | 64GB                       |
| Disk Space       | 50GB free disk space                      | 100GB SSD space            |
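Before installing anything, you can sanity-check your machine against the table above. The following is a minimal sketch using only the Python standard library (it is not part of the DeepSeek-Windows repository); the 50GB threshold mirrors the minimum disk requirement, and the helper name preflight_check is ours.

```python
import shutil
import sys


def preflight_check(path: str = ".", min_free_gb: int = 50) -> bool:
    """Return True if the Python version and free disk space meet the minimums above."""
    python_ok = sys.version_info >= (3, 9)
    free_gb = shutil.disk_usage(path).free / 1024**3
    disk_ok = free_gb >= min_free_gb
    print(f"Python {sys.version_info.major}.{sys.version_info.minor}: "
          f"{'OK' if python_ok else 'too old'}")
    print(f"Free disk space: {free_gb:.1f}GB "
          f"({'OK' if disk_ok else 'below minimum'})")
    return python_ok and disk_ok


if __name__ == "__main__":
    preflight_check()
```

RAM and VRAM are harder to query from the standard library alone, so check those manually (Task Manager, or nvidia-smi for VRAM).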

Installation Guide

Step 1: Install Required Software

Install Python 3.9:

  1. Download Python 3.9 from Python.org
  2. Select “Windows installer (64-bit)”
  3. During installation:
    • Check “Add Python 3.9 to PATH”
    • Click “Install Now”
  4. Verify installation: python --version (expected output: Python 3.9.x)

Install Git:

  1. Download from Git SCM
  2. Use default installation options
  3. Verify installation: git --version

Install CUDA Toolkit:

  1. Download CUDA 11.8 from NVIDIA CUDA Toolkit Archive
  2. Choose:
    • Windows
    • x86_64
    • Version 11 (your Windows version)
    • exe (local)
  3. Run the installer with default options
  4. Verify installation: nvcc --version
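If you prefer to check the CUDA version from a script rather than by eye, a small helper can parse the output of nvcc --version. This is a sketch, not part of the repository; the function names are ours, and running installed_cuda_version for real assumes nvcc is on your PATH.

```python
import re
import subprocess
from typing import Optional


def parse_cuda_version(nvcc_output: str) -> Optional[str]:
    """Extract the release number (e.g. '11.8') from nvcc --version output."""
    match = re.search(r"release (\d+\.\d+)", nvcc_output)
    return match.group(1) if match else None


def installed_cuda_version() -> Optional[str]:
    """Run nvcc --version and return the parsed release, or None if nvcc is missing."""
    try:
        out = subprocess.run(["nvcc", "--version"],
                             capture_output=True, text=True).stdout
    except FileNotFoundError:
        return None
    return parse_cuda_version(out)
```

For this guide you want the parsed release to be 11.8, matching the PyTorch cu118 wheels installed in Step 3.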

Step 2: Download DeepSeek-V3 Repository

Clone the Repository:

git clone https://github.com/khanfar/DeepSeek-Windows.git
cd DeepSeek-Windows

Create and Activate Virtual Environment:

python -m venv venv
venv\Scripts\activate
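To confirm the virtual environment is actually active before installing packages, you can ask Python itself: inside a venv, sys.prefix is redirected away from sys.base_prefix. A quick stdlib-only check (a sketch, not from the repository):

```python
import sys


def in_virtualenv() -> bool:
    """Return True when running inside a venv (sys.prefix is redirected)."""
    return sys.prefix != sys.base_prefix


if __name__ == "__main__":
    if in_virtualenv():
        print("venv active:", sys.prefix)
    else:
        print("venv NOT active - run venv\\Scripts\\activate first")
```

Installing into the venv keeps the PyTorch and repository dependencies from Step 3 isolated from your system Python.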

Step 3: Install Dependencies

Install PyTorch:

pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118

Install Other Requirements:

pip install -r requirements.txt

Step 4: Download the Model

Run the Download Script:

python download_model.py

This will download the 7B parameter model (approximately 14GB).

Convert Model Format (if needed):

python fp8_cast_bf16.py --input-fp8-hf-path model_weights --output-bf16-hf-path model_weights_bf16

Step 5: Start the Server

Run the Server:

python windows_server.py --model model_weights_bf16 --trust-remote-code

The server will start at: http://127.0.0.1:30000

Test the Model:

Once the server is running, you can interact with the model through the provided interface or an API client.
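As a starting point for an API client, the sketch below POSTs a prompt to the local server using only the standard library. The /generate path and the field names in the payload are assumptions, not documented here by the repository; adjust them to match whatever API windows_server.py actually exposes.

```python
import json
import urllib.request

SERVER = "http://127.0.0.1:30000"


def build_payload(prompt: str, max_tokens: int = 128) -> bytes:
    """Encode a JSON request body; the field names are assumptions, adjust to the server's API."""
    return json.dumps({
        "text": prompt,
        "sampling_params": {"max_new_tokens": max_tokens},
    }).encode("utf-8")


def query(prompt: str) -> str:
    """POST a prompt to the local server and return the raw response body."""
    req = urllib.request.Request(
        SERVER + "/generate",  # endpoint path is an assumption
        data=build_payload(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")


if __name__ == "__main__":
    print(query("Hello, DeepSeek!"))
```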


By following this guide, you should be able to successfully install and run DeepSeek-V3 on your Windows machine.
