What is it?
DeepSeek is an AI lab based in China whose releases in late 2024 and early 2025 caused a stir: DeepSeek-V3 (a fast, general-purpose model) and DeepSeek-R1 (a reasoning model that thinks step by step before answering). Both models are open-weights — meaning the model files are freely available to download and run on your own hardware. They match or beat comparable proprietary models on many benchmarks, and were reportedly trained at a fraction of the cost.
Who is it for?
- Developers and programmers who want an exceptionally capable coding assistant
- Students and researchers working on maths, logic, or reasoning-heavy problems
- Privacy-focused users who want to run a powerful model locally (not on DeepSeek's servers)
- Cost-conscious builders who want API access at very low rates
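For the API route, DeepSeek exposes an OpenAI-compatible chat endpoint. The sketch below only builds the request (so nothing is sent until you supply a key); the endpoint URL, the `DEEPSEEK_API_KEY` environment variable, and the model names `deepseek-chat` (V3) and `deepseek-reasoner` (R1) are assumptions based on DeepSeek's published API conventions:

```python
import json
import os

# Minimal sketch of a DeepSeek API request. The API follows the
# OpenAI chat-completions shape: POST a JSON body with a bearer token.
API_URL = "https://api.deepseek.com/chat/completions"

def build_request(prompt, model="deepseek-chat", api_key=None):
    """Build the headers and JSON body for a chat completion request."""
    api_key = api_key or os.environ.get("DEEPSEEK_API_KEY", "")
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,  # "deepseek-chat" (V3) or "deepseek-reasoner" (R1)
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }
    return headers, json.dumps(body)

headers, body = build_request("Explain big-O notation in one sentence.")
# Send with any HTTP client, e.g.:
#   import urllib.request
#   req = urllib.request.Request(API_URL, data=body.encode(), headers=headers)
#   answer = json.load(urllib.request.urlopen(req))
```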
The magic moment
Paste a complex bug or algorithm problem into DeepSeek-R1. Watch it reason through the problem step by step — showing its working like a student writing out a proof — before landing on the answer. The reasoning trace is often as useful as the final answer, because you can see exactly where it caught a mistake.
Step-by-step setup
Option 1: Use the web interface (easiest)
- Go to chat.deepseek.com
- You can ask questions without signing in, or create a free account to save history
- Toggle DeepThink (R1) mode on for hard problems that need step-by-step reasoning
Option 2: Run locally via Ollama (private, offline)
- Install Ollama from ollama.com
- Open your terminal and run:
ollama run deepseek-r1:7b
- Ollama will download the model (around 4 GB) and open a chat session
- For a bigger, more capable version:
ollama run deepseek-r1:14b (requires 16 GB RAM)
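Once a model is pulled, Ollama also serves a local REST API (on port 11434 by default), so you can script against the model instead of typing into the chat session. A minimal sketch; it only builds the request, so nothing is sent until Ollama is actually running:

```python
import json
import urllib.request

# Ollama serves a local REST API on http://localhost:11434 by default.
# POST /api/generate takes a model name and a prompt and returns the completion.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt, model="deepseek-r1:7b"):
    """Build an urllib Request for Ollama's generate endpoint."""
    body = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one JSON object instead of a token stream
    }).encode("utf-8")
    return urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )

req = build_request("Why is the sky blue?")
# With Ollama running, send it and read the answer:
#   with urllib.request.urlopen(req) as resp:
#       print(json.load(resp)["response"])
```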
Option 3: Run locally via LM Studio (no terminal)
- Open LM Studio and search for DeepSeek-R1
- Download a quantised version (Q4_K_M recommended)
- Load it in the Chat tab and start chatting
Local hardware note: The 7B model runs in 8 GB of RAM and the 14B model needs 16 GB. The larger distilled versions (32B and 70B) require a high-end workstation, and the full-size 671B model is out of reach for consumer hardware.
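The RAM figures above follow from simple arithmetic: a Q4 quantisation stores roughly half a byte per parameter, plus some headroom for the KV cache and runtime. A back-of-envelope sketch (the overhead multiplier is an assumption, not a measured value):

```python
# Back-of-envelope RAM estimate for a quantised model.
# Q4 quantisation stores ~0.5 bytes per parameter; the overhead
# multiplier is a rough allowance for KV cache and runtime.
def estimate_ram_gb(params_billions, bytes_per_param=0.5, overhead=1.2):
    return params_billions * bytes_per_param * overhead

for size in (7, 14, 32, 70):
    print(f"{size}B model: ~{estimate_ram_gb(size):.1f} GB")
# 7B  -> ~4.2 GB, fitting in 8 GB with room for the OS
# 14B -> ~8.4 GB, hence the 16 GB recommendation
```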
Compare with similar tools
- Ollama — the best way to run DeepSeek locally; Ollama is the runtime, DeepSeek is the model
- LM Studio — an alternative local runner with a graphical interface, no terminal needed
- ChatGPT — easier to use and has more features (image generation, voice), but its most capable tiers cost money and the models aren't open-weights; DeepSeek-R1 matches it on coding and reasoning tasks