Chinese AI giant DeepSeek is facing significant delays in releasing its latest model, R2, due to persistent technical difficulties with Huawei’s Ascend chips, despite encouragement from Chinese authorities to adopt domestic processors.
The company has struggled to complete a successful training run on Huawei hardware; even with on-site assistance from Huawei engineers, DeepSeek has had to rely on Nvidia hardware for the core training of its models, relegating Ascend chips mainly to inference tasks. The split highlights a notable gap in stability, inter-chip connectivity, and software maturity between Huawei's offerings and Nvidia's more established products.
The R2 launch, initially slated for May 2025, was consequently postponed, with hardware challenges compounded by longer-than-expected data labeling for the updated training dataset. DeepSeek founder Liang Wenfeng has reportedly expressed dissatisfaction with the model's progress, insisting on further development to ensure R2 can maintain the company's competitive edge in a rapidly evolving AI landscape.
The delay has given competitors an opening, most notably Alibaba's Qwen3, which has reportedly incorporated DeepSeek's core training algorithms while improving on their efficiency and flexibility. The episode shows how quickly AI ecosystems can shift when a leading startup stumbles internally.
The situation at DeepSeek underscores Beijing’s broader push for AI self-sufficiency, placing considerable pressure on domestic firms to adopt local hardware. However, the practical implementation of this strategy has revealed significant technical hurdles. Nvidia has stressed the strategic importance of maintaining access to Chinese developers, warning that restrictions on technology adoption could negatively impact economic and national security interests.
Chinese AI companies are thus navigating a difficult balance between governmental directives to adopt domestic hardware and the practical realities of training and deploying advanced large language models. DeepSeek's experience illustrates the tension between political ambitions and what current domestic hardware can actually deliver.
Despite considerable setbacks, DeepSeek’s R2 model could still be released in the coming weeks, but its performance will likely face intense scrutiny, particularly when compared to rival models trained on more mature and reliable hardware. This ongoing saga serves as a clear example of the challenges inherent in achieving AI self-sufficiency while maintaining a competitive technological edge.