This guide walks you through installing and running DeepSeek-v3 (671B) locally with Ollama on Windows and Linux.


Step 1: Install Ollama

For Windows:

  1. Download the Installer:
    • Get the Windows installer from the official Ollama website: https://ollama.com/download
  2. Install Ollama:
    • Run the downloaded .exe file.
    • Follow the on-screen instructions to complete the installation.
  3. Verify Installation:
    • Open Command Prompt and run: ollama --version
    • If installed correctly, this should return the version number.
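
The exact output format varies by release, but it should look something like this (the version number here is just an example):

ollama version is 0.5.7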

For Linux:

  1. Install Ollama:
    • Ollama for Linux ships as an install script rather than a .deb or .rpm package, so there is nothing to download separately.
    • Open Terminal and run the official install script: curl -fsSL https://ollama.com/install.sh | sh
    • If curl is not installed, install it first (on Debian/Ubuntu: sudo apt install curl).
  2. Verify Installation:
    • Run the following command to check that Ollama is installed correctly: ollama --version
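
On most distributions the install script also registers Ollama as a systemd service, so the server starts in the background automatically. Assuming your system uses systemd, you can confirm it is running with:

systemctl status ollama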

Step 2: Install the DeepSeek-v3 (671B) Model

Once Ollama is installed, you can download the DeepSeek-v3 (671B) model. The steps below are identical on Windows and Linux.

Install the Model:

  1. Open Command Prompt (Windows) or Terminal (Linux).
  2. Run the following command to download and install the model: ollama pull deepseek-v3:671b
    • This downloads the 671-billion-parameter version of DeepSeek-v3. Be aware that the default quantized build is roughly 400 GB, so make sure you have enough free disk space before starting.

Verify Installation:

To confirm the model was installed successfully, list all available models:

ollama list

You should see deepseek-v3:671b in the output list.
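
The listing looks roughly like this (the ID, size, and date shown here are illustrative):

NAME                ID              SIZE      MODIFIED
deepseek-v3:671b    abc123def456    404 GB    2 minutes ago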


Step 3: Run DeepSeek-v3 Locally

Once the model is installed, you can start using it.

Run the Model:

ollama run deepseek-v3:671b
  • This starts an interactive chat session in your terminal. Loading a model of this size into memory can take several minutes, so be patient on first launch. Type /bye to exit the session.

Interact with the Model:

You can also pass a one-off prompt directly as a command-line argument (note that ollama run takes the prompt positionally; there is no --prompt flag):

ollama run deepseek-v3:671b "Your query here"
  • The model processes the input, prints its response to the terminal, and exits.
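
Because ollama run also reads from standard input, you can pipe a prompt into it from a POSIX shell, which is convenient for scripting:

echo "Your query here" | ollama run deepseek-v3:671b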

Step 4 (Optional): Use DeepSeek-v3 in a Python Script

To use DeepSeek-v3 programmatically, you can call it from a Python script using subprocess.

Example Python Script:

import subprocess

def run_deepseek(prompt):
    """Send a single prompt to the local model via the Ollama CLI."""
    # The prompt is passed as a positional argument; `ollama run`
    # prints the model's response to stdout and then exits.
    result = subprocess.run(
        ['ollama', 'run', 'deepseek-v3:671b', prompt],
        capture_output=True, text=True, check=True
    )
    return result.stdout

# Example query
response = run_deepseek("What is the capital of France?")
print(response)
  • This script sends a prompt to the model and prints the response.
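
The subprocess approach works, but it spawns a new process for every query. Ollama also serves a local REST API (by default at http://localhost:11434), which you can call directly. Below is a minimal sketch using the requests library and the /api/generate endpoint; the endpoint and payload fields follow Ollama's documented API, while the timeout value is just an illustrative choice:

import requests

def run_deepseek_api(prompt):
    """Query the locally running Ollama server over its REST API."""
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "deepseek-v3:671b",
            "prompt": prompt,
            "stream": False,  # return a single JSON object instead of a stream
        },
        timeout=600,  # generation on a model this large can be slow
    )
    resp.raise_for_status()
    return resp.json()["response"]

print(run_deepseek_api("What is the capital of France?"))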

Step 5: Troubleshooting

1. Memory Requirements

  • DeepSeek-v3 (671B) is an extremely large model: the quantized download alone is roughly 400 GB, and you need at least that much memory (RAM plus VRAM combined) to run it.
  • In practice this means a multi-GPU server or a workstation with hundreds of gigabytes of RAM; it will not fit on a typical consumer machine.
  • If you hit out-of-memory errors, try lowering the context window (see the example below) or run the model on a higher-spec machine.
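
For example, inside an interactive ollama run session you can shrink the context window with the /set command (2048 below is just an example value; a smaller context uses less memory):

/set parameter num_ctx 2048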

2. Common Errors & Fixes

  Error Message                Possible Cause             Solution
  “Model not found”            Incorrect model tag        Ensure you’re using deepseek-v3:671b
  “Failed to run model”        Incomplete installation    Reinstall Ollama and re-pull the model
  “curl: command not found”    curl missing (Linux)       Install it with sudo apt install curl

3. Internet Connection

  • The model download is hundreds of gigabytes, so make sure you have a stable, fast internet connection. If the download is interrupted, re-running ollama pull deepseek-v3:671b should resume where it left off.

By following these steps, you should be able to install, run, and interact with DeepSeek-v3 (671B) on both Windows and Linux using Ollama.
