DeepSeek-R1 32B System Requirements
| Component | Minimum Requirement | Recommended Requirement |
| --- | --- | --- |
| GPU | NVIDIA RTX 3090 (24GB VRAM) | NVIDIA RTX 4090 / A100 (40GB+ VRAM) |
| CPU | 8-core processor (Intel i7 / AMD Ryzen 7) | 16-core processor (Intel i9 / AMD Ryzen 9) |
| RAM | 32GB | 64GB+ |
| Storage | 100GB SSD | 1TB NVMe SSD |
| OS | Windows 10/11 | Windows 11 |
| Docker Support | WSL2 enabled | WSL2 enabled |
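Before installing anything, you can check your GPU and WSL2 status from a Windows terminal (nvidia-smi ships with the NVIDIA driver):
nvidia-smi
wsl --status
nvidia-smi reports the detected GPU and its available VRAM; wsl --status confirms that WSL2 is installed and set as the default version.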
Installation Methods
I’ll provide three different installation methods:
- Using Docker (for easy containerized deployment)
- Using Ollama (for a simplified local installation)
- Using a WebUI (for an interactive browser-based experience)
1️⃣ Installing DeepSeek-R1 32B Using Docker
Step 1: Install Docker
- Download Docker Desktop for Windows.
- Run the installer and enable WSL2 Backend during setup.
- Restart your system and verify installation with:
docker --version
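Optionally, confirm that Docker can pass the GPU through to containers; Docker Desktop's WSL2 backend bundles the NVIDIA runtime. A quick check (the CUDA image tag below is just an example and may differ on your system):
docker run --rm --gpus all nvidia/cuda:12.3.2-base-ubuntu22.04 nvidia-smi
If this prints your GPU details, GPU-accelerated containers will work.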
Step 2: Pull DeepSeek-R1 32B Docker Image
Run the following command to download the DeepSeek-R1 32B image:
docker pull deepseek/deepseek-r1:32b
Step 3: Run the Docker Container
Start the DeepSeek model in a container:
docker run -d --gpus all -p 8080:8080 --name deepseek-r1-32b deepseek/deepseek-r1:32b
- -d → runs the container in detached (background) mode
- --gpus all → gives the container access to all available GPUs
- -p 8080:8080 → maps container port 8080 to port 8080 on the local machine
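To confirm the container started correctly, check its status and follow the logs (loading a 32B model can take several minutes):
docker ps --filter name=deepseek-r1-32b
docker logs -f deepseek-r1-32b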
Step 4: Access the WebUI
- Open your browser and go to:
http://localhost:8080
- Start interacting with DeepSeek-R1 32B!
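When you're done, stop the container; you can start it again later without re-pulling the image:
docker stop deepseek-r1-32b
docker start deepseek-r1-32b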
2️⃣ Installing DeepSeek-R1 32B Using Ollama
Ollama is an alternative that simplifies running LLMs locally.
Step 1: Install Ollama
- Download Ollama for Windows from the official site (ollama.com).
- Run the installer and follow the setup instructions.
Step 2: Install DeepSeek-R1 32B Model
Once Ollama is installed, run the following command:
ollama pull deepseek-r1:32b
This will download and prepare the model; the 32B variant is roughly 20 GB, so the download may take a while.
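You can verify the download completed with:
ollama list
The deepseek-r1:32b entry should appear along with its size on disk.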
Step 3: Run DeepSeek-R1 Locally
Start DeepSeek-R1 in interactive mode:
ollama run deepseek-r1:32b
Now, you can chat with DeepSeek directly from the terminal.
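Ollama also serves a local REST API on port 11434, which is useful for scripting. For example, using the standard /api/generate endpoint (adjust the quoting if you're in PowerShell rather than a Unix-style shell):
curl http://localhost:11434/api/generate -d '{"model": "deepseek-r1:32b", "prompt": "Why is the sky blue?", "stream": false}'
Setting "stream": false returns a single JSON response instead of a token-by-token stream.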
3️⃣ Running DeepSeek-R1 with a WebUI
If you prefer an interactive WebUI, follow these additional steps.
Step 1: Install a WebUI (text-generation-webui)
Clone the text-generation-webui repository and install its dependencies:
git clone https://github.com/oobabooga/text-generation-webui.git
cd text-generation-webui
pip install -r requirements.txt
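Optionally, to keep these Python dependencies isolated from the rest of your system, create a virtual environment before running pip install (this assumes Python is already on your PATH):
python -m venv venv
venv\Scripts\activate
pip install -r requirements.txt
On Linux/macOS, activate with source venv/bin/activate instead.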
Step 2: Start the WebUI with DeepSeek
Launch the WebUI and point it at the DeepSeek model. Note that text-generation-webui loads weights from its own models/ folder rather than from Ollama, so download the DeepSeek-R1 32B weights (for example, a GGUF build) into models/ first, then pass that folder or file name:
python server.py --model deepseek-r1-32b
Now you can access DeepSeek-R1 via the WebUI at (7860 is text-generation-webui's default port):
http://localhost:7860
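If you don't already have weights in the models/ folder, one illustrative way to fetch a quantized GGUF build is with huggingface-cli (the repository name below is a placeholder; pick whichever DeepSeek-R1 32B GGUF build fits your VRAM):
pip install huggingface_hub
huggingface-cli download <hf-user>/DeepSeek-R1-32B-GGUF --local-dir models/deepseek-r1-32b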
Which Method Should You Choose?
| Method | Best For | Ease of Setup |
| --- | --- | --- |
| Docker | Running in an isolated container | ⭐⭐⭐ |
| Ollama | Quick setup and local execution | ⭐⭐⭐⭐⭐ |
| WebUI | Browser-based interaction | ⭐⭐⭐⭐ |
If you have a powerful GPU, either Docker or Ollama is a great choice.
If you prefer browser access, go for the WebUI setup.