Introduction
Running AI models like DeepSeek-R1 locally can be both an exciting learning experience and a powerful way to leverage AI capabilities without relying on cloud-based solutions. In this guide, we’ll explore two different methods for running DeepSeek-R1:
- Using Ollama directly on your system – A simple, quick setup for running AI models locally.
- Using Docker for portability and isolation – A containerized approach that ensures consistency across different environments.
By following these steps, you’ll be able to interact with DeepSeek-R1 in no time!
DeepSeek-R1 Model Requirements
Before setting up DeepSeek-R1, it’s essential to understand the model variations and their hardware requirements. DeepSeek offers multiple versions, ranging from lightweight models for personal use to enterprise-grade solutions.
| Model Version | Minimum Hardware Requirements |
|---|---|
| DeepSeek-R1 1.5B | Basic CPU, minimal RAM (for chatbots & personal assistants) |
| DeepSeek-R1 7B/8B | Dedicated GPU (RTX 3060/3070); suitable for AI writing tools, summarization, and sentiment analysis |
| DeepSeek-R1 14B/32B | 16GB–32GB RAM, high-end GPU (RTX 4090); ideal for enterprise AI applications and real-time processing |
| DeepSeek-R1 70B | 64GB+ RAM, multiple GPUs; designed for large-scale AI research and business automation |
| DeepSeek-R1 671B | Server-grade hardware (768GB RAM, NVIDIA A100 GPUs); used for large-scale AI research and commercial applications |
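If you're not sure which version your machine can handle, it helps to check available RAM and GPU memory before downloading anything. As a rough sketch on a Linux system with an NVIDIA GPU (adjust for your own platform), run:
free -h
nvidia-smi
The first command reports system RAM, and the second shows your GPU and how much VRAM is free, which is usually the limiting factor for the larger models.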
Now that we’ve reviewed the hardware requirements, let’s dive into the setup process.
Method 1: Running DeepSeek-R1 Directly with Ollama
This method is ideal for those who want a straightforward installation without using Docker.
Step 1: Install Ollama
- Visit the official Ollama website and download the appropriate version for your operating system.
- Follow the installation instructions for your OS.
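Once the installer finishes, confirm that Ollama is available from your terminal:
ollama --version
If this prints a version number, the installation succeeded and you're ready to download a model.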
Step 2: Download DeepSeek-R1
Ollama provides different versions of DeepSeek-R1, ranging from 1.5B to 671B parameters. For this guide, we’ll use the 8B version.
Run the following command in your terminal to download DeepSeek-R1 8B:
ollama run deepseek-r1:8b
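Note that ollama run downloads the model on first use and then opens an interactive session. If you only want to fetch the model without starting a chat right away, you can pull it instead:
ollama pull deepseek-r1:8b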
Step 3: Verify Installation
Check if the model is installed correctly by running:
ollama list
You should see deepseek-r1:8b listed among the available models.
Step 4: Run the Model
To start the model, execute:
ollama run deepseek-r1:8b
You can now interact with DeepSeek-R1 in the terminal by entering prompts. Congratulations! You’ve successfully set up DeepSeek-R1 locally.
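Beyond the interactive prompt, Ollama also exposes a local HTTP API (on port 11434 by default), so you can query the model from scripts or other applications. As a minimal sketch, assuming the default port and the 8B model downloaded above:
curl http://localhost:11434/api/generate -d '{"model": "deepseek-r1:8b", "prompt": "Explain recursion in one sentence.", "stream": false}'
Setting "stream": false returns a single JSON object with the full response instead of streaming tokens line by line.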
Method 2: Running DeepSeek-R1 Inside a Docker Container with Ollama
If you prefer to run DeepSeek-R1 in an isolated environment, Docker is a good option: it keeps the setup portable across machines and avoids dependency conflicts on your host system.
Step 1: Run the Ollama Docker Container
Start the Ollama container using the following command:
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
- -d runs the container in detached mode.
- -v ollama:/root/.ollama mounts a volume so downloaded models persist across container restarts.
- -p 11434:11434 exposes Ollama's API on port 11434.
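By default, this container runs models on the CPU. If you have an NVIDIA GPU and the NVIDIA Container Toolkit installed, you can pass the GPU through to the container (a sketch assuming an NVIDIA setup):
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama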
Step 2: Access the Container & Download the Model
Log into the running container:
docker exec -it ollama /bin/bash
Inside the container, pull the DeepSeek-R1 1.5B model (or any version you prefer):
ollama pull deepseek-r1:1.5b
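If you'd rather not open a shell inside the container, you can run the same pull in one step from the host:
docker exec ollama ollama pull deepseek-r1:1.5b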
Step 3: Run the Model
Once the model is downloaded, start it with:
ollama run deepseek-r1:1.5b
Now you can interact with the model inside the container.
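Because port 11434 is published to the host, you can also reach the containerized Ollama from outside the container. For example, this request lists the models it has downloaded:
curl http://localhost:11434/api/tags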
Step 4: Enable Web-Based Interaction
For an improved experience, you can use a Web UI to interact with DeepSeek-R1. Run the following command to deploy Open-WebUI:
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -e OLLAMA_BASE_URL=http://host.docker.internal:11434 -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
- This command launches a web-based interface for interacting with DeepSeek-R1.
- OLLAMA_BASE_URL points at host.docker.internal rather than localhost: inside the Open-WebUI container, localhost refers to that container itself, so it cannot reach the Ollama container. The --add-host flag makes host.docker.internal resolve to the Docker host (Docker Desktop on Windows/macOS provides this name automatically; Linux needs the flag).
- After starting the container, navigate to http://localhost:3000 in your browser.
Step 5: Verify Running Containers
To check running containers, use:
docker ps
You should see both Ollama and Open-WebUI containers running.
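If the web UI can't reach the model, the container logs are the quickest way to see what went wrong:
docker logs ollama
docker logs open-webui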
Conclusion
Both methods allow you to run DeepSeek-R1 locally, each with its own advantages:
- Direct installation via Ollama is quick and easy.
- Using Docker ensures isolation and compatibility across different systems.