Running DeepSeek Locally on Windows (All Versions)
The hardware requirements for running DeepSeek locally depend on the model size. Below is a table outlining the minimum and recommended hardware for each version.
DeepSeek Model Hardware Requirements
| Model | GPU VRAM | System RAM | CPU | Storage (SSD/NVMe) | Recommended GPU |
|---|---|---|---|---|---|
| 1.5B | 4GB+ | 16GB | Intel i5 / Ryzen 5 | 50GB | NVIDIA RTX 2060 |
| 7B | 16GB+ | 32GB | Intel i7 / Ryzen 7 | 100GB | NVIDIA RTX 3090 / 4090 |
| 8B | 24GB+ | 64GB | Intel i9 / Ryzen 9 | 150GB | NVIDIA RTX 4090 / A100 |
| 14B | 32GB+ | 128GB | Intel i9 / Ryzen 9 | 200GB | NVIDIA A100 (40GB) |
| 32B | 48GB+ | 256GB | AMD EPYC / Xeon | 400GB | NVIDIA H100 (80GB) |
| 70B | 80GB+ | 512GB | AMD EPYC / Xeon | 1TB | 2× NVIDIA H100 (80GB) |
| 671B | 512GB+ (multiple GPUs) | 1.5TB+ | AMD EPYC / Xeon | 10TB+ | 8× NVIDIA H100 (80GB) |
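To see how your machine compares against the table, the short sketch below reports each detected GPU and its total VRAM. It assumes the CUDA-enabled PyTorch build that is installed in the steps further down.

```python
# Minimal hardware check against the table above.
# Assumes a PyTorch build with CUDA support (installed in the steps below).
import torch

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name} with {props.total_memory / 1024**3:.1f} GB of VRAM")
else:
    print("No CUDA-capable GPU detected.")
```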
Installation Steps for Windows
- Install Dependencies
  - Install Python 3.9+
  - Install CUDA 11.8+ (ensure your GPU is compatible)
  - Install cuDNN (for better performance)
- Set Up a Virtual Environment
  ```
  python -m venv deepseek_env
  deepseek_env\Scripts\activate
  ```
- Install PyTorch with CUDA Support (a quick verification snippet follows these steps)
  ```
  pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
  ```
- Install DeepSeek
  ```
  pip install deepseek
  ```
- Run DeepSeek Locally (an alternative loading sketch also follows these steps)
  ```
  python -m deepseek --model deepseek-7b
  ```
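Once PyTorch is installed, the quick check referenced in the install step above confirms that the build actually targets CUDA and can see the GPU before you try to load any model:

```python
# Verify the CUDA-enabled PyTorch install before loading a model.
import torch

print(torch.__version__)          # should report a +cu118 build
print(torch.version.cuda)         # CUDA version PyTorch was compiled against
print(torch.cuda.is_available())  # True means the GPU is usable
```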
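If the `deepseek` package or its command-line entry point is not available in your environment, a commonly used alternative is loading the weights directly with Hugging Face `transformers`. The sketch below is illustrative rather than an official workflow: the repository id is only an example, and it assumes `transformers` is installed and that the chosen model fits in GPU memory at half precision (roughly the 7B tier in the table above).

```python
# Minimal sketch using Hugging Face transformers (assumes
# `pip install transformers` inside the same virtual environment).
# The repo id below is an example; substitute the DeepSeek variant you want.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/deepseek-llm-7b-chat"  # example repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to stay within ~16 GB of VRAM
).to("cuda")

prompt = "Explain what a context window is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```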
Key Notes:
- The 1.5B to 8B models can run on high-end gaming GPUs (RTX 3090/4090).
- The 14B model and above require professional AI hardware (A100, H100).
- 32B+ models may need multiple GPUs with tensor parallelism (a multi-GPU sketch follows this list).
- The 671B model is too large for local use and effectively requires a cloud cluster.
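For the larger models, one way to spread the weights over several GPUs, short of true tensor parallelism, is `transformers` with `device_map="auto"`, which shards layers across all visible devices via the `accelerate` package. This is a rough sketch under the same assumptions as the example above; the repository id is again only an example.

```python
# Sketch of sharding a large model across multiple GPUs with accelerate's
# device_map="auto" (assumes `pip install transformers accelerate`).
# Layer sharding is simpler than tensor parallelism but serves the same
# goal of fitting a model that exceeds a single GPU's VRAM.
import torch
from transformers import AutoModelForCausalLM

model_id = "deepseek-ai/deepseek-llm-67b-chat"  # example large variant

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",  # distribute layers over all visible GPUs
)
print(model.hf_device_map)  # shows which layers landed on which device
```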