Summary: The rapid decline in AI model costs is reshaping the field, with UC Berkeley researchers replicating the core technology behind DeepSeek's $5 million model for just $30 in a matter of days. They demonstrated that cutting-edge AI development no longer requires massive budgets—only the right approach. This breakthrough highlights how disruptive innovation is accelerating at an unprecedented pace, making technology more accessible than ever before.


We noted recently that DeepSeek had created an artificial intelligence (AI) model for around $5 million that matched the performance of OpenAI’s $100 million model. Now we learn that a research team at the University of California, Berkeley (UC Berkeley) has reportedly re-created the core technology behind DeepSeek for just $30.

According to Brian Roemmele, UC Berkeley PhD candidate Jiayi Pan and his team replicated DeepSeek R1-Zero's reinforcement learning capabilities using a compact language model called TinyZero. This open-source reinforcement learning engine uses the self-play learning paradigm that DeepMind pioneered with AlphaZero, the system that achieved mastery of chess, shogi, and Go.

The stunningly low cost of this replication underscores a growing trend: While tech giants pour vast sums into AI development, open-source and independent researchers are proving that high-performance AI can be built at a fraction of the cost. In fact, TinyZero is freely available for download on GitHub.

The TinyZero program achieved DeepSeek-level performance by renting two Nvidia H200 GPUs for under five hours at just $6.40 per hour.

XYZ Labs notes,

Their success in implementing sophisticated reasoning capabilities in small language models marks a significant democratization of AI research. . . . Richard Sutton, the father of reinforcement learning, would likely find vindication in these results. They align with his vision of continuous learning as the key to AI advancement, demonstrating that sophisticated AI capabilities can emerge from relatively simple systems given the right learning framework. . . . This work from a Chinese AI research company may well mark a turning point in AI development, proving that groundbreaking advances don’t require massive resources—just clever thinking and the right approach.

To put this breakthrough in perspective, the telegraph reduced the time it took the Pony Express to deliver a message from St. Joseph, Missouri, to Sacramento, California, by 99.93 percent—from 10 days to 10 minutes. Pan’s $30 TinyZero program slashed the cost of DeepSeek’s $5 million model by 99.9994 percent. For the price of a single DeepSeek model, you can build 166,667 TinyZero models.
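The back-of-envelope arithmetic above is easy to verify. A minimal sketch (the helper function and variable names are illustrative, not from the original sources):

```python
# Back-of-envelope check of the time and cost reductions cited above.

def percent_reduction(old, new):
    """Percentage drop from an old value to a new one."""
    return (old - new) / old * 100

# Telegraph vs. Pony Express: 10 days down to 10 minutes.
pony_express_minutes = 10 * 24 * 60   # 10 days = 14,400 minutes
telegraph_minutes = 10
time_cut = percent_reduction(pony_express_minutes, telegraph_minutes)
print(f"Telegraph time reduction: {time_cut:.2f}%")        # → 99.93%

# TinyZero vs. DeepSeek: $30 down from $5,000,000.
deepseek_cost = 5_000_000
tinyzero_cost = 30
cost_cut = percent_reduction(deepseek_cost, tinyzero_cost)
print(f"TinyZero cost reduction: {cost_cut:.4f}%")         # → 99.9994%

# Models you could build for one DeepSeek-sized budget.
models_per_budget = round(deepseek_cost / tinyzero_cost)
print(f"TinyZero models per DeepSeek budget: {models_per_budget:,}")  # → 166,667
```

Both figures in the text check out: the cost reduction is 99.9994 percent, and $5 million buys 166,667 copies of a $30 training run.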

Disruptive innovation is disrupting disruptive innovation. Clayton Christensen, originator of the “disruptive innovation” theory, would be pleased. Meanwhile, the $500 billion Stargate AI infrastructure initiative, announced 10 days ago, already looks obsolete. Human intelligence continues to discover ever more efficient ways of teaching AI how to learn. Hang on—this revolution is just beginning.

Find more of Gale’s work at his Substack, Gale Winds.