DeepSeek vs The Rest: Is China's Large Language Model Really A 'Disruptor'?
DeepSeek is the new kid on the Artificial Intelligence-driven chatbot / large language model block, and it is threatening to blow the rest out of the water.
There is a new kid on the Artificial Intelligence-driven chatbot / large language model block, and it is threatening to blow the rest out of the water. Meet DeepSeek, developed by a Hangzhou-based research lab with a fraction of the budget (if you believe the rumours) used to make ChatGPT, Gemini, Claude AI, and others created by United States-based computer labs.
And the latest offerings - DeepSeek V3, a 671-billion-parameter 'mixture of experts' model, and DeepSeek R1, an advanced reasoning model that is possibly better than OpenAI's o1 - have underlined its status as a potential heavyweight financial and technological disruptor in this field.
How much of a disruptor is it?
On Monday, DeepSeek's app, powered by the V3 model, became the top downloaded app on Apple's App Store in the US. Let that sink in... a Chinese-developed chatbot is now the most-downloaded app in the US.
And that disruption, even if seen as a 'potential' one at this time, has raised doubts about how well some US tech companies have invested the billions pledged towards AI development.
Either way, the quality and cost efficiency of DeepSeek's models have flipped the narrative; even if, in the long run, this particular Chinese model flops, the fact that it was developed with a fraction of the financial and technological resources available to firms in the West is an eye-opener.
Again, how much of a disruptor is it?
Well, last month DeepSeek's creators said training the V3 model required less than $6 million (although critics say the addition of costs from earlier development stages could push eventual costs north of $1 billion) in computing power from Nvidia's H800 chips. "Did DeepSeek really build OpenAI for $5 million? Of course not," Bernstein analyst Stacy Rasgon told Reuters.
But break down the available financials and it gets quite remarkable.
OpenAI's o1 charges $15 per million input tokens.
DeepSeek's R1 charges $0.55 per million input tokens.
The pricing, therefore, absolutely blows the competition away.
And, depending on end-use cases, DeepSeek is believed to be between 20 and 50 times more affordable, and efficient, than OpenAI's o1 model. In fact, logical reasoning test score results are staggering; DeepSeek outperforms ChatGPT and Claude AI by seven to 14 per cent.
Dev.to, a popular online community for software developers, said it scored 92 per cent in completing complex, problem-solving tasks, compared to 78 per cent by GPT-4.
Input tokens, by the way, are the units of text - words or word fragments - that make up a prompt or question. They are, basically, what the model processes to understand the context of a query or instruction.
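As a back-of-the-envelope illustration of that pricing gap, here is a small sketch using the per-million-token input rates quoted above; the 10-million-token monthly workload is a made-up example, not a figure from either company.

```python
# Published input-token rates quoted above (USD per million tokens)
OPENAI_O1_RATE = 15.00
DEEPSEEK_R1_RATE = 0.55

def input_cost(tokens: int, rate_per_million: float) -> float:
    """Cost in USD of sending `tokens` input tokens at the given rate."""
    return tokens / 1_000_000 * rate_per_million

# Hypothetical workload: 10 million input tokens in a month
tokens = 10_000_000
o1_cost = input_cost(tokens, OPENAI_O1_RATE)    # $150.00
r1_cost = input_cost(tokens, DEEPSEEK_R1_RATE)  # $5.50

print(f"o1: ${o1_cost:.2f}, R1: ${r1_cost:.2f}")
print(f"R1 is ~{OPENAI_O1_RATE / DEEPSEEK_R1_RATE:.0f}x cheaper per input token")
```

On input pricing alone, that ratio works out to roughly 27x, which sits within the "20 to 50 times" range cited above (the wider range presumably reflects output-token rates and usage patterns).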
For context, OpenAI is believed to spend $5 billion every year to develop its models.
So, even if DeepSeek's critics (see above) are right, it is still a fraction of OpenAI's costs.
That spending translates, as OpenAI boss Sam Altman has pointed out, into significantly enhanced computing capabilities; that DeepSeek's model delivers anywhere near that much processing power on its relatively shoestring budget is an eyebrow-raiser.
And Mr Altman acknowledged that, calling the R1 model "very impressive".
Google boss Sundar Pichai went one step further, telling CNBC at Davos, "I think we should take the development out of China, very, very seriously." And US President Donald Trump has sounded a "wake-up" call to American tech and computing giants.
And there are the hundreds of billions of dollars that US companies have lost amid a rout this week in tech stocks; chip-maker Nvidia, for example, lost over $600 billion in market value, and the tech-rich Nasdaq index finished Monday down by more than three per cent, with the unwelcome possibility of a further drop based on AI giants Meta and Microsoft's expected earnings reports.
For context, Meta and Microsoft both have their own AI models, at the forefront of which are Llama and Copilot; the former is an LLM first released in February 2023, and the latter is now an integrated feature in various Microsoft 365 applications, such as MS Word and Excel.
While neither is, arguably, on the same technical level as OpenAI's ChatGPT, Meta and Microsoft have invested billions in AI and LLM projects, both in the US and abroad. For example, some analysts believe big US cloud companies will spend $250 billion this year on AI infrastructure alone.
But what really makes DeepSeek special is more than the cost and technology.
It is that, unlike its competitors, it is genuinely open-source.
The R1 code is completely open to the public under the MIT License, which is a permissive software license that allows users to use, modify, and distribute software with few restrictions.
This means you can download it, use it commercially without fees, change its architecture, and integrate it into any of your existing systems.
DeepSeek is also faster than GPT-4, more practical and, according to many experts, even understands regional idioms and cultural contexts better than its Western counterparts.
There is much more to consider.
How, for example, does DeepSeek affect diplomatic and military ties between China and the US (and India also, actually), and what are the ethical problems with truly open-source AI models?
But what is undeniable is that China's DeepSeek is a disruptor, in financial and tech terms. And experts believe China has now closed the gap, from roughly 18 months to just six months behind state-of-the-art AI models developed in the US.
With input from agencies