DeepSeek-R1 32B System Requirements

| Component | Minimum Requirement | Recommended Requirement |
| --- | --- | --- |
| GPU | NVIDIA RTX 3090 (24GB VRAM) | NVIDIA RTX 4090 / A100 (40GB+ VRAM) |
| CPU | 8-core processor (Intel i7 / AMD Ryzen 7) | 16-core processor (Intel i9 / AMD Ryzen 9) |
| RAM | 32GB | 64GB+ |
| Storage | 100GB SSD | 1TB NVMe SSD |
| OS | Windows 10/11 | Windows 11 |
| Docker Support | WSL2 enabled | WSL2 enabled |
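Before installing anything, it's worth confirming your GPU actually has the VRAM listed above. A quick check, assuming the NVIDIA driver is installed (it ships the nvidia-smi utility):

nvidia-smi

The output lists each GPU with its total and used memory; for the 32B model, look for at least the 24GB called out in the table.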

Installation Methods

I’ll provide three different installation methods:

  1. Using Docker (For easy containerized deployment)
  2. Using Ollama (For a simplified local installation)
  3. Using WebUI (For an interactive browser-based experience)

1️⃣ Installing DeepSeek-R1 32B Using Docker

Step 1: Install Docker

  1. Download Docker Desktop for Windows.
  2. Run the installer and enable WSL2 Backend during setup.
  3. Restart your system and verify installation with: docker --version
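To confirm Docker can actually reach your GPU through WSL2, you can run a throwaway CUDA container. A minimal sketch, assuming the NVIDIA driver is installed on the host (the CUDA image tag is illustrative; any current tag from Docker Hub works):

docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi

If nvidia-smi prints your GPU from inside the container, GPU passthrough is working.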

Step 2: Pull DeepSeek-R1 32B Docker Image

Run the following command to download the DeepSeek-R1 32B image (image names and registries vary; adjust the tag to match the registry you are pulling from):

docker pull deepseek/deepseek-r1:32b
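Once the pull finishes, confirm the image is available locally:

docker images

The image should appear in the list along with its tag and size.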

Step 3: Run the Docker Container

Start the DeepSeek model in a container:

docker run -d --gpus all -p 8080:8080 --name deepseek-r1-32b deepseek/deepseek-r1:32b
  • -d → Runs the container in detached (background) mode
  • --gpus all → Exposes all available GPUs to the container
  • -p 8080:8080 → Maps container port 8080 to port 8080 on the host
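After starting the container, check that it is up and watch its logs while the model loads. These are standard Docker commands; the name matches the --name flag used above:

docker ps --filter name=deepseek-r1-32b
docker logs -f deepseek-r1-32b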

Step 4: Access the WebUI

  1. Open your browser and go to: http://localhost:8080
  2. Start interacting with DeepSeek-R1 32B!
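You can also verify the endpoint from the command line before opening a browser. A minimal reachability check, assuming the container serves HTTP on the port mapped above:

curl http://localhost:8080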

2️⃣ Installing DeepSeek-R1 32B Using Ollama

Ollama is an alternative that simplifies running LLMs locally.

Step 1: Install Ollama

  1. Download Ollama for Windows from ollama.com.
  2. Run the installer and follow the setup instructions.
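After the installer finishes, open a new terminal and confirm Ollama is on your PATH:

ollama --version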

Step 2: Install DeepSeek-R1 32B Model

Once Ollama is installed, run the following command:

ollama pull deepseek-r1:32b

This will download and prepare the model.
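You can confirm the download completed, and see the model's size on disk, with:

ollama list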

Step 3: Run DeepSeek-R1 Locally

Start DeepSeek-R1 in interactive mode:

ollama run deepseek-r1:32b

Now, you can chat with DeepSeek directly from the terminal.
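Beyond the interactive terminal, Ollama also exposes a local REST API (port 11434 by default) that you can script against. A minimal sketch using the documented /api/generate endpoint (run from a WSL2 or Git Bash shell; adjust the quoting for PowerShell):

curl http://localhost:11434/api/generate -d '{"model": "deepseek-r1:32b", "prompt": "Explain WSL2 in one sentence.", "stream": false}'

With "stream": false the server returns a single JSON object containing the full response instead of a stream of partial tokens.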


3️⃣ Running DeepSeek-R1 with a WebUI

If you prefer an interactive WebUI, follow these additional steps.

Step 1: Install a WebUI (text-generation-webui)

Clone and install text-generation-webui:

git clone https://github.com/oobabooga/text-generation-webui.git
cd text-generation-webui
pip install -r requirements.txt
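text-generation-webui loads models from its models/ folder, so download the DeepSeek-R1 32B weights there before launching. A sketch using the repo's bundled download script (the Hugging Face repo ID is an assumption on my part; substitute the exact DeepSeek-R1 32B repository you intend to run):

python download-model.py deepseek-ai/DeepSeek-R1-Distill-Qwen-32B

This places the weights in a new subfolder under models/; note that folder's name for the next step.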

Step 2: Start the WebUI with DeepSeek

Launch the WebUI and pass --model the name of the model folder inside models/ (text-generation-webui expects a local folder name here, not an Ollama-style tag; the name below matches the example download from Step 1):

python server.py --model deepseek-ai_DeepSeek-R1-Distill-Qwen-32B

Now, you can access DeepSeek-R1 via the WebUI at the default Gradio address:

http://localhost:7860
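If you want the UI reachable from other machines, or an OpenAI-compatible API alongside it, the server takes additional flags (a sketch; run python server.py --help to see what your version supports):

python server.py --model deepseek-ai_DeepSeek-R1-Distill-Qwen-32B --listen --api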

Which Method Should You Choose?

| Method | Best For | Ease of Setup |
| --- | --- | --- |
| Docker | Running in an isolated container | ⭐⭐⭐ |
| Ollama | Quick setup and local execution | ⭐⭐⭐⭐⭐ |
| WebUI | Browser-based interaction | ⭐⭐⭐⭐ |

If you have a powerful GPU, Docker or Ollama are great choices.
If you prefer browser access, go for the WebUI setup.
