DeepSeek-R1 has been creating quite a buzz in the AI community. Developed by the Chinese AI company DeepSeek, this model is being compared to OpenAI's top models. The excitement around DeepSeek-R1 is not just because of its capabilities but also because it is open-sourced, allowing anyone to download and run it locally. In this blog, I'll guide you through setting up DeepSeek-R1 on your machine using Ollama.
Why DeepSeek-R1?
DeepSeek-R1 stands out for several reasons. Not only is it cheaper to run than many other models, but it also excels in problem-solving, reasoning, and coding. Its built-in chain-of-thought reasoning improves its answers on multi-step problems, making it a strong contender against other models. Let's dive into how you can get this model running on your local system.
Getting Started with Ollama
Before we begin, let's discuss Ollama. Ollama is a free, open-source tool that allows users to run large language models locally. With Ollama, you can easily download and run the DeepSeek-R1 model.
Here's how you can get started:
Step 1: Install Ollama
First, you'll need to download and install Ollama. Visit the Ollama website and download the version that matches your operating system.
Follow the installation instructions provided on the site.
Step 2: Download DeepSeek-R1
The Ollama model library lists DeepSeek-R1 in several parameter sizes, along with the hardware requirements for each: 1.5b, 7b, 8b, 14b, 32b, 70b, and 671b. Naturally, the hardware requirements grow as you choose a bigger model. I used the 7b one in this tutorial.
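To download a specific size, append its tag to the model name. For example, the 7b variant I used:
ollama run deepseek-r1:7b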
Once Ollama is installed, open your terminal and type the following command to download the DeepSeek-R1 model:
ollama run deepseek-r1
This command tells Ollama to download the model (and start it once the download finishes). Depending on your internet speed, this might take some time. Grab a coffee while it completes!
Step 3: Verify Installation
After downloading, verify the installation by running:
ollama list
You should see deepseek-r1 in the list of available models. If you do, great job! You're ready to run the model.
Step 4: Run DeepSeek-R1
Now, let's start the model using the command:
ollama run deepseek-r1
And just like that, you're interacting with DeepSeek-R1 locally. It's that simple!
Step 5: Ask a Query
Chain-of-thought reasoning by the model.
The model also handles coding tasks well. Let's test that too.
The detailed answer to the coding query above.
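If you'd rather query the model programmatically than through the interactive terminal, Ollama also serves a local REST API on port 11434. Here's a minimal Python sketch, assuming the requests package is installed and the model has already been pulled:

import requests

# Send a prompt to the locally running DeepSeek-R1 model via Ollama's REST API
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "deepseek-r1",
        "prompt": "Explain chain-of-thought reasoning in one paragraph.",
        "stream": False,  # return the full answer as a single JSON object
    },
)
print(response.json()["response"])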
Below is a complete step-by-step video of using DeepSeek-R1 for different use cases.
My first impression of DeepSeek-R1 is just mind-blowing :)
By following this guide, you've successfully set up DeepSeek-R1 on your local machine using Ollama. This setup offers a powerful solution for AI integration, providing privacy, speed, and control over your applications. Enjoy experimenting with DeepSeek-R1 and exploring the potential of local AI models. BTW, having a robust database for your AI/ML applications is a must. I recommend using an all-in-one data platform like SingleStore.
Let's Build a RAG Application using DeepSeek and SingleStore
If you'd like to extend your learning and build a simple RAG application, follow along with this tutorial.
We will set the DeepSeek API key from NVIDIA NIM microservice (Yes, I'll show you how). NVIDIA NIM (Inference Microservices) is a set of microservices that help deploy AI models across clouds, data centers, and workstations. We will be using LangChain as our LLM framework to bind everything. We will be using SingleStore as our vector database.
The first step is to create a free SingleStore account. Log in, create a workspace, and then create a database attached to that workspace using the 'Create Database' option on the dashboard.
Cool. We now have a database to store the custom documents for our RAG application.
The next step is to create a Notebook. Yes, a free notebook environment. SingleStore has a cool integrated feature where you can use their Notebooks (just like Google Colab).
Go to Data Studio, create a new Notebook, and give it a name.
Make sure to select the workspace and database you created from the dropdowns. My workspace name is 'pavappy-workspace-1' and the database I created is 'DeepSeek', so I selected both.
Now we are all set to code our RAG application. Add the code below step by step into the newly created notebook (make sure to run each code snippet as well).
Start with installing all the required libraries and dependencies.
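In the notebook, install the packages the imports below need. Exact package names can vary with your LangChain version; this set is an educated guess:

!pip install langchain langchain-community langchain-nvidia-ai-endpoints openai pypdf singlestoredb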
from langchain.document_loaders import PyPDFLoader
from langchain_nvidia_ai_endpoints import ChatNVIDIA
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.chains import RetrievalQA
from langchain.vectorstores import SingleStoreDB
import os
Load your custom document [I have used a publicly available PDF; you can replace it with your own].
file_path = "https://unctad.org/system/files/official-document/wesp2023_en.pdf"
loader = PyPDFLoader(file_path)
data = loader.load()

# Split the document into chunks; the vector store step below needs `texts`
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
texts = text_splitter.split_documents(data)

os.environ["OPENAI_API_KEY"] = "Add your OpenAI API key"
embedding = OpenAIEmbeddings()
Store embeddings in SingleStore
docsearch = SingleStoreDB.from_documents(
    texts,
    embedding,
    table_name="deepseek_rag",  # Replace with any table name
    host="admin:password@host_url:3306/database_name",  # Replace with your SingleStore connection string
    port=3306
)
In the above code, 'admin' is constant; don't change it. You can get your password from the Access tab. For the host URL, go to your Deployments tab, find your workspace, click 'Connect', and select 'SQL IDE' from the dropdown. You will see all the required details there.
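Filled in, the host string looks something like this (the host and password below are hypothetical placeholders; substitute your own values):

host="admin:your_password@svc-example-dml.aws-virginia-1.svc.singlestore.com:3306/DeepSeek"  # hypothetical example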
Next, initialize DeepSeek through NVIDIA NIM. Get your DeepSeek-R1 API key for free from the NVIDIA NIM microservice HERE.
client = ChatNVIDIA(
    model="deepseek-ai/deepseek-r1",
    api_key="Add your DeepSeek-R1 API key you received from NVIDIA NIM",  # Replace with your NVIDIA API key
    temperature=0.7,
    top_p=0.8,
    max_tokens=4096
)
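We imported RetrievalQA earlier but haven't used it yet. To finish the pipeline, wire the SingleStore retriever and the DeepSeek-R1 client together and ask a question. The chain type and sample query below are my own illustrative choices:

# Build a RetrievalQA chain: fetch relevant chunks from SingleStore,
# then have DeepSeek-R1 answer using them as context
qa_chain = RetrievalQA.from_chain_type(
    llm=client,
    chain_type="stuff",  # stuffs the retrieved chunks directly into the prompt
    retriever=docsearch.as_retriever()
)

query = "What are the key findings of the World Economic Situation and Prospects 2023 report?"
result = qa_chain.invoke({"query": query})
print(result["result"])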
Thank you for reading my article. I hope you try the entire tutorial. You can also follow me on my YouTube channel, where I create AI/ML/data-related videos every week.