Nvidia has announced support for the RISC-V instruction set architecture in its CUDA software platform, allowing RISC-V CPUs to serve as the main, or host, processors in CUDA-based systems. The development was revealed at the 2025 RISC-V Summit in China.
The announcement was made by Frans Sijsterman, Vice President of Hardware Engineering at Nvidia, during a keynote, and highlights the company's growing engagement with the open-source ISA. Sijsterman detailed how CUDA components will now run on RISC-V, outlining a system configuration in which the GPU handles parallel workloads while a RISC-V CPU runs the CUDA system drivers, application logic, and the operating system. In this setup, the RISC-V CPU acts as the host, orchestrating GPU computation from within the CUDA environment.
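For readers unfamiliar with that division of labor, the sketch below illustrates the standard CUDA host/device split the configuration relies on: the host CPU, which in the described setup would be a RISC-V core, allocates memory, copies data, and launches kernels, while the GPU executes the parallel work. This is a generic vector-add example written for illustration only; it is not code from Nvidia's presentation, and the point is that the host-side source would simply be compiled for a RISC-V target.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Device code: runs on the GPU regardless of the host CPU's ISA.
__global__ void vector_add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host code: this is the part that would be compiled for and executed by
    // the RISC-V CPU, which owns allocation, transfers, and kernel launches.
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    float *da, *db, *dc;
    cudaMalloc(&da, bytes);
    cudaMalloc(&db, bytes);
    cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // The CPU orchestrates: it launches the kernel and waits for completion.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vector_add<<<blocks, threads>>>(da, db, dc, n);
    cudaDeviceSynchronize();

    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);  // expect 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```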
The integration is expected to extend to CUDA-enabled edge devices, such as Nvidia's Jetson modules, and hints at a longer-term role for RISC-V in Nvidia's data center plans. The presentation illustrated a complete system that also includes a Data Processing Unit (DPU) for networking tasks, forming a heterogeneous compute platform. In this configuration, the RISC-V CPU becomes the central workload manager, complementing Nvidia's GPUs, DPUs, and networking chips.
This move is seen as a strategic bridge between Nvidia's proprietary CUDA stack and an increasingly prominent open architecture. Supporting RISC-V broadens CUDA's reach into systems that prefer an open instruction set or require custom processor implementations, a particular draw for companies designing their own silicon. The integration also expands options for developers using Nvidia Jetson to target specialized or embedded computing platforms.
The timing of the announcement, made in China, is notable given current geopolitical considerations affecting the export of advanced AI hardware like Nvidia’s GB200 and GB300 offerings to the region. By enabling CUDA on RISC-V, Nvidia appears to be finding new avenues to sustain and expand its CUDA ecosystem in markets that might favor an open instruction set or custom chip designs.
While Nvidia did not explicitly confirm that all workloads would be AI-related, the company’s current strategic focus strongly suggests this direction. The integration of RISC-V into the CUDA platform could influence other major tech companies to consider similar adaptations, potentially positioning RISC-V as a more viable alternative in future AI and HPC processor designs across data centers globally.