AI is everywhere today — powering chatbots, generating images, even writing code.
But have you ever wondered what makes this possible? How does AI actually work?
And more importantly… can you run it on your own computer?
Behind all this AI magic are AI models — powerful algorithms trained on massive amounts of data.
These models learn patterns, understand human language, and even make decisions.
But here’s the catch… most of them rely on powerful cloud servers, meaning you’re dependent on third-party services.
So, your application interacts with these models hosted in the cloud by sending messages in the form of prompts and receiving the required responses.
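For illustration only, such a call typically looks something like this; the endpoint, model name, and API key below are placeholders, and the exact request format varies from provider to provider:
curl https://api.example-ai-provider.com/v1/chat -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d "{\"model\": \"some-cloud-model\", \"messages\": [{\"role\": \"user\", \"content\": \"Hello!\"}]}"
Every such request travels over the network to the provider's servers, which is exactly where the drawbacks below come from.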
This approach has many drawbacks.

Since there is network communication, this means:
A. Slower performance,
B. Privacy concerns, since your data goes to their servers,
C. A constant need for an internet connection,
D. And most importantly: money. Creating an API key is usually free, but using it to exchange data is billed on a per-use basis.
But what if you could run AI models locally — on your own machine — without sending your data to the cloud, and use them for free?
Yes, it is possible.
Here comes Ollama — an open-source tool that lets you download and run AI models directly on your PC or laptop!

No complex setup, no cloud dependency, just fast, private, and offline AI at your fingertips.
To use it, you have to install it on your system.
In this article, we will learn how to run the latest DeepSeek-R1 AI model on your local system using Ollama.
So, without any delay, let’s get into action.
Installing Ollama
Open a browser and go to https://ollama.com/.
Download the version that matches your system.
If you are using Windows, you need Windows 10 or above.
This will download an executable file; run it, and it will install just like any other software.
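If you are on Linux instead of Windows, the Ollama site also provides a one-line install script; check the site for the current command, but at the time of writing it is:
curl -fsSL https://ollama.com/install.sh | sh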
After it is installed, open command prompt.
Type ollama and press Enter.
You will see a list of Ollama commands, which means it is installed.
There are commands to run a model, stop a model, and so on.
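For example, you can confirm the installation by checking the version and listing the models you have downloaded so far (the list will be empty right after a fresh install):
ollama --version
ollama list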
Getting Deepseek-R1
Go back to the Ollama website. Click on Models, then on deepseek-r1, and select a variant as per your requirement.
For this example, we will go with deepseek-r1:671b.
Copy its name.
Go back to the command prompt and type
ollama pull deepseek-r1:671b
This will start downloading the model, which will take some time depending on the size of the model.
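A note on size: the 671b tag is the full model and is far too large for most PCs. The deepseek-r1 page on the Ollama site also lists smaller distilled variants (for example, a 7b tag) that are much easier to run locally; the exact tags may change, so check the page. Pulling one works the same way:
ollama pull deepseek-r1:7b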
Even if you run the model directly, without pulling it first, it will be downloaded automatically.
Running Deepseek-R1
Run the model using the ollama run deepseek-r1:671b command.
You will see three angle brackets (>>>) as the input prompt.
Congratulations! You have DeepSeek-R1 running on your local system.
Ask it some questions, such as “Who are you?”.
You will get a response like the one below.
To stop the model, press Ctrl+D or type /bye.
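The interactive prompt is not the only way to use the model. While Ollama is running, it also serves a local HTTP API (on port 11434 by default), so your own scripts can send prompts to it. A minimal sketch, assuming the same deepseek-r1:671b tag pulled above (adjust the quoting if your shell handles it differently):
curl http://localhost:11434/api/generate -d "{\"model\": \"deepseek-r1:671b\", \"prompt\": \"Who are you?\", \"stream\": false}"
The reply comes back as JSON, with the model's answer in the response field.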
Remember that running an AI model locally means it will be consuming your system resources.
So, if you type ollama ps to check on the running model, you will see how much memory it is occupying and whether it is loaded on the CPU or GPU.
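When you are done, you can free those resources by unloading the model with the stop command mentioned earlier:
ollama stop deepseek-r1:671b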
Hope the article was useful.