How to Run DeepSeek R1 Locally: A Comprehensive Guide
DeepSeek R1 is a powerful open-source AI model that stands out for its reasoning ability. Its strong performance on reasoning tasks makes it an attractive choice for developers, researchers, and AI enthusiasts. Running DeepSeek R1 locally keeps your data under your own control and avoids the network round-trips of cloud APIs. This guide walks through the essential steps to set up and run DeepSeek R1 on your local machine, whether you're using Mac, Windows, or Linux.
Why Run DeepSeek Locally?
Running DeepSeek locally offers several advantages:
- Data Privacy: You maintain full control over your data without relying on third-party servers.
- Cost Savings: Avoid potential fees associated with cloud services.
- Customizability: Tailor the model according to your specific needs.
Prerequisites for Running DeepSeek R1
Before you begin, ensure your machine meets the following minimum hardware requirements:
- Storage: Sufficient disk space for the model size.
- RAM: Requirements scale with model size, from a few gigabytes for the smallest distilled models to more than 1.3 TB for the full 671B model (see the table below).
- GPU: A capable NVIDIA GPU is recommended, as certain model sizes will necessitate a multi-GPU setup.
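Before downloading anything, it helps to confirm what hardware you actually have. The following shell commands are standard checks; note that `nvidia-smi` assumes NVIDIA drivers are installed and `free` is Linux-specific:
```bash
# Report GPU model, VRAM, and driver version (requires NVIDIA drivers)
nvidia-smi

# Show free disk space on the current filesystem
df -h .

# Show total and available RAM (Linux; on macOS use `sysctl hw.memsize`)
free -h
```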
Model Size Comparison
Here’s a quick overview of the different DeepSeek R1 models and their requirements:
| Model | Parameters | Disk Space | RAM | Recommended GPU |
|---|---|---|---|---|
| DeepSeek-R1-Distill-Qwen-1.5B | 1.5B | 1.1 GB | ~3.5 GB | NVIDIA RTX 3060 12 GB or higher |
| DeepSeek-R1-Distill-Qwen-7B | 7B | 4.7 GB | ~16 GB | NVIDIA RTX 4080 16 GB or higher |
| DeepSeek-R1-Distill-Qwen-14B | 14B | 9 GB | ~32 GB | Multi-GPU setup |
| DeepSeek-R1 | 671B | 404 GB | ~1,342 GB | Multi-GPU setup |
It's advisable to start with the smaller models, especially if you're new to running AI models locally.
Method 1: Installing DeepSeek R1 Using Ollama
Step 1: Install Ollama
- Visit the Ollama website and download the installer for your operating system.
- Follow the installation instructions.
- Verify the installation by running `ollama --version` in your terminal.
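On Linux, Ollama also offers an official one-line install script; this mirrors the command published on the Ollama website:
```bash
# Official Ollama install script for Linux
curl -fsSL https://ollama.com/install.sh | sh
```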
Step 2: Download and Run the DeepSeek R1 Model
- In your terminal, run the following command, replacing `[model size]` with the variant you want (for example, 1.5b, 7b, or 14b):
ollama run deepseek-r1:[model size]
- Wait for the model to download and start running.
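While it is running, Ollama also exposes a local HTTP API on port 11434. As a quick sanity check, you can send a prompt directly with `curl`; the 1.5B tag below is just an example, so substitute whichever size you pulled:
```bash
# One-off, non-streaming request against Ollama's local generate endpoint
curl http://127.0.0.1:11434/api/generate \
  -d '{
    "model": "deepseek-r1:1.5b",
    "prompt": "Explain in one sentence why the sky is blue.",
    "stream": false
  }'
```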
Step 3: Install Chatbox for a User-Friendly Interface
- Download Chatbox from its official website and follow the installation instructions.
- In Chatbox's settings, set the Model Provider to Ollama and the API host to `http://127.0.0.1:11434`, then start interacting with the model.
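If Chatbox fails to connect, first confirm that Ollama is listening on that address and that the model downloaded correctly:
```bash
# List the models the local Ollama instance currently serves
curl http://127.0.0.1:11434/api/tags
```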
Method 2: Using Docker
Docker provides a consistent, reproducible environment and simplifies the installation process. Note that the Open WebUI image used below is a web front end: it connects to a running Ollama instance, so complete Method 1 (or run Ollama separately) first.
Step 1: Install Docker
- Download and install Docker Desktop from the official Docker website.
- Open the Docker app and log in if necessary.
Step 2: Pull the DeepSeek Docker Image
Run the following command in your terminal:
docker pull ghcr.io/open-webui/open-webui:main
Step 3: Run the Docker Container
Execute this command to start the container:
docker run -d -p 9783:8080 -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:main
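If the Web UI starts but cannot find your models, the container may be unable to reach the Ollama instance on the host. A variant based on flags documented by Open WebUI maps the host gateway into the container and points the UI at it; treat this as a sketch to adapt rather than a guaranteed drop-in:
```bash
# Map the host's network gateway into the container and point
# Open WebUI at the Ollama API running on the host machine
docker run -d -p 9783:8080 \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```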
Step 4: Open Web UI
Open your browser and navigate to `http://localhost:9783/` to access the interface.
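If the page does not load, two standard Docker commands show whether the container is up and what it logged at startup:
```bash
# Confirm the container is running and the port mapping is active
docker ps --filter name=open-webui

# Follow the container's logs to spot startup errors
docker logs -f open-webui
```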
Method 3: Using LM Studio
LM Studio is suitable for users who prefer not to use the terminal.
- Download LM Studio from its official site and install it.
- Search for the DeepSeek R1 model within the application and download it.
- Once downloaded, you can interact with the model directly via the app.
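LM Studio can also serve downloaded models over a local, OpenAI-compatible HTTP API once you enable its built-in server (port 1234 by default). Assuming the server is enabled and a DeepSeek R1 distill is loaded, a request looks roughly like the sketch below; the model identifier here is a placeholder and should match whatever name LM Studio assigned on download:
```bash
# Chat completion against LM Studio's local OpenAI-compatible server
curl http://127.0.0.1:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-r1-distill-qwen-7b",
    "messages": [{"role": "user", "content": "What is 17 * 24?"}]
  }'
```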
Final Thoughts
DeepSeek R1 provides robust natural-language capabilities and excels at reasoning tasks. Whether you are a beginner or an advanced user, there is an installation method to match your technical comfort level: Ollama is the simplest place to start, Docker suits users who want a managed web interface, and LM Studio avoids the terminal entirely. Whichever method you choose, running DeepSeek R1 locally gives you privacy, control, and a capable model for your AI projects.