Running AI models in a Docker container gives you portability, isolation, and a reproducible runtime. If you want to deploy DeepSeek-R1 8B inside a Docker container using Ollama, this guide walks you through the process step by step.

Why Use Docker for DeepSeek-R1 8B?

  • Portability – Run the model on any system with Docker installed.
  • Isolation – Keeps dependencies contained within the container.
  • Scalability – Easily deploy across multiple environments.

Step-by-Step Deployment Guide

1. Run the Ollama Docker Container

First, start the Ollama container by running the following command:

docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

This will:

  • Start the Ollama container in detached mode (-d).
  • Mount the ollama named volume at /root/.ollama so downloaded models persist across container restarts.
  • Expose the Ollama API on port 11434.
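
To verify that the container is up and the API is reachable, you can query the endpoint from the host (this assumes the default port mapping shown above):

curl http://localhost:11434

A healthy instance responds with a short status message such as "Ollama is running".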

2. Log into the Ollama Container

To access the running Ollama container, use:

docker exec -it ollama /bin/bash

This opens an interactive shell inside the container.
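
If you prefer not to open a shell, you can also run Ollama commands directly from the host with docker exec, for example:

docker exec -it ollama ollama pull deepseek-r1:8b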

3. Pull the DeepSeek-R1 8B Model

Once inside the container, pull the required DeepSeek-R1 8B model:

ollama pull deepseek-r1:8b

This downloads the DeepSeek-R1 8B model weights. The files are several gigabytes, so the first pull may take a while depending on your connection.
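
To confirm the download completed, list the models available locally:

ollama list

The output should include deepseek-r1:8b along with its size and modification date.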

4. Run the Model

Now, run the downloaded model:

ollama run deepseek-r1:8b

This will launch the model in interactive mode, allowing you to enter prompts.

Example:

>>> Hello!
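
Type /bye to exit the interactive session. Because the API is exposed on port 11434, you can also send prompts from the host without entering the container. A minimal curl example (adjust the host and port if you changed the mapping):

curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:8b",
  "prompt": "Hello!",
  "stream": false
}'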

5. Run the Open WebUI Container

To enable a web-based interface, run the Open WebUI container. Replace <YOUR-IP> with your host machine's LAN IP address; localhost will not work here, because inside the Web UI container it would resolve to the container itself rather than to Ollama:

docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=http://<YOUR-IP>:11434 \
-v open-webui:/app/backend/data --name open-webui --restart always \
ghcr.io/open-webui/open-webui:main
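
This will:

  • Publish the Web UI on host port 3000 (the container listens on 8080 internally).
  • Point the UI at your Ollama API via the OLLAMA_BASE_URL environment variable.
  • Persist UI data in the open-webui volume and restart the container automatically if it stops.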

6. Access the Web Interface

Now, open your browser and go to:

http://<YOUR-IP>:3000

On first launch, Open WebUI asks you to create an admin account. Once signed in, select deepseek-r1:8b from the model picker and you can interact with the DeepSeek model through a user-friendly web interface.

Conclusion

Deploying DeepSeek-R1 8B with Docker and Ollama provides a robust and efficient way to run AI models in a controlled environment. Docker gives you portability and isolation, while Ollama simplifies model management and execution. Whether through command-line interaction or the web UI, this setup makes local AI deployment straightforward.
