How to run DeepSeek R1 locally on Windows, macOS, Android, and iPhone

The arrival of DeepSeek R1, developed by a Chinese team, has shaken the artificial intelligence industry. This model has not only surpassed ChatGPT in popularity but has also unleashed a wave of reactions in the U.S. tech market. While you can access DeepSeek R1 for free on its website, privacy concerns about user data being stored in China have led many to seek ways to run it locally instead.

For those who wish to use DeepSeek R1 on their PC or Mac, the good news is that it is possible to do so with tools like LM Studio and Ollama. Below, we present a step-by-step guide to help you get started.

Requirements to run DeepSeek R1 locally

Before you dive into the adventure, make sure your computer meets the necessary requirements. To run DeepSeek R1, it is essential to have at least 8 GB of RAM. With this memory, you will be able to efficiently handle the small 1.5B model, generating around 13 tokens per second. If you dare to try the 7B model, keep in mind that it could consume about 4 GB, which might slow your system down a bit.

If your machine has more memory, you could experiment with larger models like the 14B, 32B, or even 70B, although you will need a faster CPU and GPU. Currently, many programs do not leverage the NPU (Neural Processing Unit) to run AI models locally, so they mainly rely on the CPU, and in some cases, on high-end GPUs.
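As a rough rule of thumb, the memory a quantized model needs is its parameter count times the bytes per weight, plus some runtime overhead for the KV cache and buffers. The sketch below illustrates the arithmetic; the 4-bit quantization and 20% overhead figures are assumptions for illustration, not measured values for any specific build:

```python
def estimated_model_memory_gb(params_billions, bits_per_weight=4, overhead=1.2):
    """Rough RAM footprint of a quantized model:
    parameters * (bits per weight / 8), scaled by ~20% runtime overhead
    (assumed figure for KV cache and buffers)."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total * overhead / 1e9

# Estimate the footprint of each DeepSeek R1 distilled size at 4-bit:
for size in (1.5, 7, 14, 32, 70):
    print(f"{size:>4}B @ 4-bit ~ {estimated_model_memory_gb(size):.1f} GB")
```

Under these assumptions the 7B model lands at roughly 4 GB, which matches the figure above, while the 70B model needs well over 40 GB and is out of reach for most consumer machines.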

Running DeepSeek R1 on PC using LM Studio

One of the easiest ways to run DeepSeek R1 on PC, Mac, and Linux is LM Studio. This application has a user-friendly interface that lets you browse and download compatible AI models with just a few clicks, and it is completely free to use.

Running DeepSeek R1 locally using Ollama

Another option, for users who prefer a command-line workflow, is Ollama. This tool lets you download and run DeepSeek R1 locally and also exposes a local API you can build your own integrations on.
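Once Ollama is running, it serves a REST API on localhost that you can call from any language. Here is a minimal Python sketch, assuming a stock install (default port 11434, no custom `OLLAMA_HOST`) and the `deepseek-r1:7b` model tag; swap in `:1.5b`, `:14b`, `:32b`, or `:70b` depending on your hardware:

```python
import json
import urllib.request

# Default endpoint for a stock local Ollama install (assumption).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt, model="deepseek-r1:7b"):
    """Build the JSON payload for a single, non-streaming completion."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt, model="deepseek-r1:7b"):
    """Send the prompt to the local Ollama server and return the reply text."""
    data = json.dumps(build_request(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With the server running, `generate("Why is the sky blue?")` returns the model's answer as a string. Setting `"stream": False` makes the server reply with one JSON object instead of a stream of partial chunks, which keeps the client code simple.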

Running DeepSeek R1 locally on Android and iPhone

Mobile users are not left out either. To run DeepSeek R1 on Android or iPhone, it is recommended to have at least 6 GB of RAM. On Android, apps like PocketPal have proven to be the most effective way to run AI models locally, and they are free. PocketPal is also available on iOS at no cost, unlike many alternatives.

So, if you want DeepSeek R1 running on your mobile device, just download PocketPal and start using it. In testing, some 1.5B and 7B models produced factual errors on historical questions, but they remain useful for creative writing and mathematical reasoning.

If you have powerful hardware, we recommend trying the 32B model of DeepSeek R1, which performs noticeably better on coding tasks and gives more grounded responses.

Written by Miguel Ángel G.P.

IT Manager | Over 15 years of experience in corporate IT. Expert in Apple, systems, networks, cloud, virtualization, big data, web design...
This article talks about Business Applications.
Published on 23 March 2025.