Deploying DeepSeek-R1 14B on Linux requires a capable hardware setup and a properly configured software stack. This guide will cover hardware requirements, installation steps, and optimizations to ensure smooth operation.


1. Hardware Requirements

Component      Minimum Requirement              Recommended Requirement
CPU            AMD Ryzen 7 / Intel i7           AMD Ryzen 9 / Intel i9
RAM            32GB DDR4                        64GB+ DDR5
GPU            1x NVIDIA RTX 3090 (24GB VRAM)   1x NVIDIA RTX 4090 (24GB) / RTX A6000 (48GB)
VRAM           16GB minimum                     24GB+ recommended
Storage        512GB NVMe SSD                   1TB+ NVMe SSD (PCIe 4.0)
Power Supply   750W+                            850W+
Cooling        Standard air cooling             Water cooling for GPUs

Note: The DeepSeek-R1 14B model can run on a single 16GB+ GPU, but performance is better with 24GB+ VRAM.
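As a rough sanity check before buying hardware, VRAM demand scales with parameter count times bytes per weight. The sketch below is a back-of-the-envelope estimate only; the 20% overhead factor for the KV cache and runtime buffers is an assumption, and real usage varies with context length:

```shell
# Rough VRAM estimate for a model, given parameter count (billions)
# and bytes per weight (Q4 ~ 0.5, Q8 ~ 1, FP16 ~ 2).
# The 1.2 factor is an assumed ~20% overhead for KV cache and buffers.
estimate_vram_gb() {
  params_b=$1
  bytes_per_weight=$2
  awk -v p="$params_b" -v b="$bytes_per_weight" \
    'BEGIN { printf "%.1f", p * b * 1.2 }'
}

estimate_vram_gb 14 0.5; echo " GB  (Q4: fits a 16GB card)"
estimate_vram_gb 14 2;   echo " GB  (FP16: needs a 48GB-class GPU)"
```

By this estimate, the Q4-quantized 14B model (~8.4 GB) fits comfortably on a 16GB GPU, which matches the note above.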


2. Install Required Software

Step 1: Update Linux and Install Dependencies

sudo apt update && sudo apt upgrade -y
sudo apt install -y build-essential git curl wget python3 python3-pip

Step 2: Install NVIDIA Drivers, CUDA & cuDNN

1. Install NVIDIA Drivers

sudo apt install -y nvidia-driver-535
sudo reboot

Verify installation:

nvidia-smi
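Beyond eyeballing the `nvidia-smi` table, you can query the fields that matter here (driver version and total VRAM) in machine-readable form. This is a small convenience wrapper, not part of any official tooling:

```shell
# Print GPU name, driver version, and total VRAM, one line per GPU.
# Falls back to a message if the driver is not installed yet.
gpu_info() {
  if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-gpu=name,driver_version,memory.total \
               --format=csv,noheader
  else
    echo "nvidia-smi not found: driver not installed or not on PATH"
  fi
}

gpu_info
```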

2. Install CUDA 12.3

wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt update
sudo apt install -y cuda-toolkit-12-3

3. Install cuDNN

sudo apt install -y libcudnn8 libcudnn8-dev

3. Install Docker & NVIDIA Container Toolkit

Step 1: Install Docker

sudo apt install -y docker.io
sudo systemctl enable docker
sudo systemctl start docker

Step 2: Install NVIDIA Container Toolkit

sudo apt install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

Note: this package comes from NVIDIA's own apt repository, which must be configured first (see NVIDIA's Container Toolkit install guide); it is not in the default Ubuntu repositories.

Step 3: Verify NVIDIA Docker Support

docker run --rm --gpus all nvidia/cuda:12.3.0-base-ubuntu22.04 nvidia-smi

If successful, you should see your NVIDIA GPUs listed.


4. Install Ollama & Pull DeepSeek-R1 14B

Step 1: Install Ollama

curl -fsSL https://ollama.com/install.sh | sh

Verify installation:

ollama --version

Step 2: Pull the DeepSeek-R1 14B Model

ollama pull deepseek-r1:14b

The quantized model download is roughly 9GB; ensure you have sufficient free storage.
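Before starting a multi-gigabyte pull, it is worth confirming the target filesystem has headroom. This is a small convenience check; the 20 GB threshold is an assumption (weights plus margin for temporary blobs):

```shell
# Return success if the filesystem holding $1 has at least $2 GB free.
has_free_space() {
  dir=$1; need_gb=$2
  avail_gb=$(df -P -k "$dir" | awk 'NR==2 { print int($4 / 1048576) }')
  [ "$avail_gb" -ge "$need_gb" ]
}

# ~9 GB of weights; 20 GB leaves margin for temporary download blobs.
if has_free_space "${HOME:-/}" 20; then
  echo "enough space: safe to run 'ollama pull'"
else
  echo "low on space: free up disk first" >&2
fi
```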

Step 3: Run DeepSeek-R1 14B with Ollama

ollama run deepseek-r1:14b
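Besides the interactive prompt, a running Ollama instance exposes a local REST API on port 11434 (`/api/generate`), which is useful for scripting. The snippet below only builds and prints the request; the prompt text is a placeholder, and `deepseek-r1:14b` is the official Ollama tag for this model:

```shell
# Build a JSON request for Ollama's local REST API (default port 11434).
MODEL="deepseek-r1:14b"
PROMPT="Explain the difference between VRAM and system RAM in one sentence."

payload=$(printf '{"model": "%s", "prompt": "%s", "stream": false}' \
                 "$MODEL" "$PROMPT")
echo "$payload"

# To send it against a running 'ollama serve' instance:
#   curl -s http://localhost:11434/api/generate -d "$payload"
```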

5. Set Up WebUI for DeepSeek-R1 14B

Step 1: Clone WebUI Repository

git clone https://github.com/deepseek-ai/webui.git
cd webui

Step 2: Build & Run WebUI with Docker

  1. Build the WebUI Docker image:

     docker build -t deepseek-webui .

  2. Run the WebUI container:

     docker run --gpus all --shm-size=128G -p 7860:7860 -v deepseek_cache:/root/.cache deepseek-webui

--shm-size=128G increases shared memory for better model execution.

Step 3: Access WebUI

  • Open your browser and go to http://localhost:7860
  • You can now interact with DeepSeek-R1 14B via the WebUI.
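The container can take a while to load the model before the page responds, so a small polling helper avoids refreshing blindly. This is a generic sketch, not part of the WebUI itself:

```shell
# Poll a URL until it answers with success, or a timeout (seconds) expires.
wait_for_url() {
  url=$1; timeout=${2:-60}
  while [ "$timeout" -gt 0 ]; do
    curl -fsS -o /dev/null "$url" && return 0
    sleep 1
    timeout=$((timeout - 1))
  done
  return 1
}

# Example: wait_for_url http://localhost:7860 120 && echo "WebUI is up"
```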

6. Performance Optimization

Enable Multi-GPU Scaling (NCCL)

These variables only matter on multi-GPU systems. The first two are already NCCL's defaults (0 means the transport is enabled), so setting them simply makes peer-to-peer and InfiniBand transports explicit; the third turns on debug logging:

export NCCL_P2P_DISABLE=0
export NCCL_IB_DISABLE=0
export NCCL_DEBUG=INFO

Allocate More Memory to Docker

docker run --gpus all --shm-size=256G -p 7860:7860 deepseek-webui

Run in Background with Logs

ollama run is an interactive prompt; to keep the model server itself running in the background and capture its output, background the server instead:

nohup ollama serve > ollama.log 2>&1 &
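For a setup that survives reboots, a systemd unit is more robust than nohup. The official install script already creates one on most distros; if yours did not, a minimal hand-rolled unit might look like the following (the binary path and models directory are assumptions; adjust to your install):

```ini
# /etc/systemd/system/ollama.service
[Unit]
Description=Ollama server
After=network-online.target

[Service]
ExecStart=/usr/local/bin/ollama serve
Restart=always
Environment="OLLAMA_MODELS=/var/lib/ollama"

[Install]
WantedBy=multi-user.target
```

Enable it with `sudo systemctl enable --now ollama`, then follow logs with `journalctl -u ollama -f`.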

7. Conclusion

You have successfully set up DeepSeek-R1 14B on Linux using Ollama, Docker, and WebUI.

Next Steps:

  • Monitor GPU performance using nvidia-smi
  • Optimize memory allocation for better efficiency
  • Experiment with smaller DeepSeek models for faster inference
