Chinese artificial intelligence venture DeepSeek has announced a major update to its R1 model, DeepSeek-R1-0528, with substantial improvements in reasoning, mathematics, programming and logical inference.
The updated model aims to sharply reduce AI "hallucinations" and, according to the company's statement, positions DeepSeek as a direct competitor to leading global AI models from OpenAI and Google.
The release, announced in a post on the AI model platform Hugging Face, builds on the original R1 model, which drew attention earlier this year for achieving strong benchmark scores despite being trained with less computational power than its Western peers.
DeepSeek says that R1-0528 has "better depth of reasoning" and overall performance approaching that of other top-tier models, including OpenAI's o3 and Google's Gemini 2.5 Pro.
In particular, DeepSeek reported a substantial accuracy improvement on the AIME 2025 benchmark math test, from 70% to 87.5%, attributing the gain to the model's improved reasoning and an increase in the number of tokens used per question, from 12,000 to 23,000. The company also cited a reduced hallucination rate, new support for function calling, and an improved experience for vibe coding.
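The statement does not detail how the new function-calling support is exposed, but such models are commonly served through an OpenAI-style chat interface in which the client declares available tools as JSON schemas. The sketch below assembles such a request; the model identifier, the `get_weather` tool, and its schema are illustrative assumptions, not confirmed specifics of DeepSeek's API.

```python
# A minimal sketch of an OpenAI-style function-calling request payload.
# The model name and the get_weather tool are hypothetical examples.
import json


def build_weather_request(city: str) -> dict:
    """Assemble a chat request that exposes one callable tool to the model."""
    return {
        "model": "deepseek-reasoner",  # assumed identifier, for illustration only
        "messages": [
            {"role": "user", "content": f"What is the weather in {city}?"}
        ],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_weather",
                    "description": "Look up current weather for a city.",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }
        ],
    }


payload = build_weather_request("Hangzhou")
print(json.dumps(payload, indent=2))
```

A model that supports function calling can respond to such a request with a structured tool call (name plus JSON arguments) instead of free text, which the client then executes and feeds back.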
The rise of DeepSeek and its founder, Liang Wenfeng, is a testament to China's growing expertise in AI development. Once obscure, Liang has emerged as a visible symbol of the nation's AI ambitions, invited to a high-level economic forum to sit alongside leading tech executives.
The timing of the release is notable in light of Nvidia's most recent financial report. Earlier this year, R1's initial release briefly hit Nvidia's stock price on fears that spending on AI infrastructure could be cut back, but continued global investment in AI data centers has since helped Nvidia regain its footing.
Industry analysts have argued that DeepSeek's ability to produce high-performing models at lower cost could change the economics of AI deployment and challenge the long-term dominance of U.S. tech companies. R1-0528 is released as open source under the MIT license, likely to promote its adoption by developers worldwide and to position the model as a common reference point.
AWS and Microsoft Azure already offer the original R1 model on their platforms, with guarantees that data never leaves the servers selected by the customer, which addressed security concerns in the United States.
The update marks an important development in China's continued push to be a major player in the global AI race, and the resulting competition is likely to spur further growth.