On January 20, 2025, a Chinese AI company called DeepSeek released a model called R1. Within a week it had wiped nearly $600 billion from NVIDIA's market cap, shocked Silicon Valley, and prompted every serious business leader to ask the same question: if a team outside the US can build this for a reported $6 million, what does that mean for the AI tools we're paying for?
The short answer: the price of intelligence just fell off a cliff. And if you haven't reconsidered your AI stack in the last month, you're already behind.
What DeepSeek R1 Actually Is
R1 is a reasoning model — the same class of AI as OpenAI's o1, the model designed for complex multi-step thinking. Legal analysis, financial modelling, research synthesis, complex coding. Tasks that require a model to "think through" a problem rather than just pattern-match an answer.
Until January 20, the best reasoning model available was OpenAI's o1. Benchmarks put R1 on par with o1 across most measures. The cost difference is not subtle.
- OpenAI o1: $60 per million output tokens
- DeepSeek R1: $2.19 per million output tokens
That's a roughly 96% cost reduction for comparable capability. And R1 is open-source, released under an MIT licence, meaning businesses can download the weights and run the model on their own infrastructure, paying only for compute rather than per-use fees.
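To make the arithmetic concrete, here is a rough sketch of what that price gap means at volume. The per-token rates are the ones cited above; the monthly token count is a hypothetical workload, not data from any real deployment.

```python
# Illustrative cost comparison at the per-token rates cited above.
# The monthly token volume is a made-up example; plug in your own usage.
O1_PER_M_OUTPUT = 60.00   # USD per million output tokens (o1)
R1_PER_M_OUTPUT = 2.19    # USD per million output tokens (R1)

def monthly_cost(output_tokens_per_month: int, price_per_million: float) -> float:
    """Cost in USD for a month's worth of output tokens at a given rate."""
    return output_tokens_per_month / 1_000_000 * price_per_million

tokens = 50_000_000  # e.g. a team generating 50M output tokens a month
o1_cost = monthly_cost(tokens, O1_PER_M_OUTPUT)  # 3000.0
r1_cost = monthly_cost(tokens, R1_PER_M_OUTPUT)  # ~109.5
saving = 1 - r1_cost / o1_cost                   # ~0.96, i.e. roughly 96%
print(f"o1: ${o1_cost:,.2f}  R1: ${r1_cost:,.2f}  saving: {saving:.1%}")
```

At that volume the annual difference is tens of thousands of dollars, which is why "which model handles which task" stops being a technical detail and becomes a budgeting question.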
Why This Matters More Than a Cheaper Tool
The business implications go beyond "save money on AI subscriptions." DeepSeek R1 signals something structural: AI capability is commoditising faster than anyone expected.
For the last three years, the implicit assumption was that the best AI required massive investment — only OpenAI, Google, and Anthropic could build frontier models. R1 breaks that assumption. A relatively small team, with limited access to the highest-end chips (due to US export controls), produced a model that competes at the top tier.
If the reported figure is accurate and R1's final training run cost around $6 million (a number that excludes the research, failed experiments, and hardware behind it), how long before every serious mid-size company can build a custom model for their specific industry? The answer is probably less time than you think.
This is the deeper shift: AI is moving from a product you subscribe to, toward infrastructure you configure. The businesses that understand this early will have a significant advantage.
What to Actually Do About It
You don't need to switch everything. The right response isn't to cancel your ChatGPT subscription and rebuild everything on DeepSeek. It's to be more deliberate about which tasks you're spending AI budget on — and whether you're paying a premium for capability you don't need.
A practical framework:
- For complex reasoning tasks (contract review, financial analysis, research synthesis) — R1 is now a legitimate alternative to o1 at a fraction of the cost. Worth testing directly against your current tool.
- For everyday tasks (email drafts, summaries, customer responses) — you were already overpaying if you were using o1. These tasks don't need a reasoning model at all.
- For customer-facing applications — consider that the gap between a $60 model and a $2 model may not be visible to your customers for most use cases.
- For sensitive data — self-hosting an open-source model (R1 included) means your data never leaves your infrastructure. This changes the risk calculus for businesses in regulated industries.
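The framework above can be operationalised as a simple model router: classify the task, then pick the cheapest tier that meets its requirements, rather than sending everything to the most expensive endpoint. The sketch below is illustrative only; the model names, prices, and task categories are placeholder assumptions, not real endpoints or quotes.

```python
from dataclasses import dataclass

@dataclass
class ModelTier:
    name: str
    price_per_m_output: float  # USD per million output tokens

# Hypothetical tiers for illustration; substitute whatever models you evaluate.
REASONING = ModelTier("reasoning-model", 2.19)    # e.g. an R1-class model
GENERAL   = ModelTier("general-model", 0.50)      # e.g. a small chat model
SELF_HOST = ModelTier("self-hosted-model", 0.0)   # compute billed separately

def route(task_type: str, sensitive_data: bool) -> ModelTier:
    """Pick a model tier by task complexity and data sensitivity."""
    if sensitive_data:
        # Regulated or confidential work: keep data on your own infrastructure.
        return SELF_HOST
    if task_type in {"contract_review", "financial_analysis", "research_synthesis"}:
        # Multi-step reasoning is the one place the premium tier earns its keep.
        return REASONING
    # Email drafts, summaries, customer replies: no reasoning model needed.
    return GENERAL

print(route("contract_review", sensitive_data=False).name)    # reasoning-model
print(route("email_draft", sensitive_data=False).name)        # general-model
print(route("financial_analysis", sensitive_data=True).name)  # self-hosted-model
```

The design point is that the routing logic, not any single model, is the asset: as prices and capabilities shift, you change a few lines of configuration rather than renegotiating your entire AI stack.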
The Strategic Question This Raises
Most businesses are currently paying for AI the way they paid for software in 2010 — one subscription, one tool, one provider. The next 12 months will reward businesses that think about AI more like they think about cloud infrastructure: matching the right capability level to the right task, optimising for cost, and not getting locked in to a single vendor.
DeepSeek R1 isn't a reason to panic. It's a reason to be more intentional. The companies that treat AI as a fixed cost ("we pay for ChatGPT") are leaving money on the table. The ones that treat it as a managed resource — choosing models based on task complexity, cost, and data requirements — are building a real competitive advantage.
That conversation starts with knowing what you're actually using AI for. If you haven't mapped your AI usage to specific business outcomes, that's the first step. Everything else follows from there.