Nvidia has begun shipping the DGX Spark, a compact desktop AI supercomputer built on the company’s Grace Blackwell architecture, a notable step for local AI computing.
The DGX Spark system integrates GPUs, CPUs, networking components, and AI software to support local AI workloads, delivering up to 1 petaflop of AI performance and featuring 128 GB of unified memory. The system ships with preinstalled software for AI model training and inference, aimed at organizations that want to develop and deploy AI models locally without assembling a stack themselves.
Nvidia has announced that orders for the DGX Spark will open on October 15, 2025, through the company’s website, with partner systems from Acer, ASUS, Dell Technologies, GIGABYTE, HP, Lenovo, and MSI also becoming available globally. The DGX Spark is priced at $4,000, positioning it as an accessible option for various organizations.
However, the system’s 273 GB/s memory bandwidth may limit throughput for production inference workloads, making it better suited to prototyping and experimentation than full-scale deployment. Benchmarks show the DGX Spark running roughly four times slower than the RTX Pro 6000 Blackwell workstation GPU, and it also trails the RTX 5090 on large models because of the same bandwidth constraint.
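A quick back-of-envelope calculation shows why bandwidth, not compute, becomes the bottleneck: during autoregressive decoding, each generated token requires streaming the model weights through memory once, so peak tokens-per-second is roughly bandwidth divided by model size. The sketch below uses illustrative figures (a 70B model at 4-bit quantization); it is a rough upper bound, not a benchmark.

```python
def decode_tokens_per_sec(params_billion: float,
                          bytes_per_param: float,
                          bandwidth_gbs: float) -> float:
    """Upper bound on decode throughput when memory bandwidth is the limit:
    every generated token streams all model weights once."""
    model_gb = params_billion * bytes_per_param  # GB of weights to read per token
    return bandwidth_gbs / model_gb

# DGX Spark: 273 GB/s unified-memory bandwidth.
# A 70B model quantized to 4 bits (~0.5 byte/param) occupies ~35 GB.
spark_ceiling = decode_tokens_per_sec(70, 0.5, 273)  # ~7.8 tokens/s at best
print(f"Bandwidth-bound ceiling: {spark_ceiling:.1f} tokens/s")
```

Workstation GPUs with several times the memory bandwidth raise this ceiling proportionally, which is consistent with the multi-fold benchmark gap the article cites.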
The DGX Spark’s compact chassis is designed to maintain stable thermals under load, and the system draws around 170 W through an external USB-C power supply. Connecting two DGX Spark units to handle 405-billion-parameter models requires additional ConnectX-7 200 GbE hardware, adding to the overall cost.
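The two-unit requirement follows directly from the memory arithmetic: even aggressively quantized, a 405B-parameter model will not fit in a single unit’s 128 GB. The sketch below makes that check explicit, assuming 4-bit weights and a small allowance for KV cache and runtime overhead (both figures are illustrative assumptions, not Nvidia specifications).

```python
def fits_in_memory(params_billion: float,
                   bytes_per_param: float,
                   mem_gb: float,
                   overhead_gb: float = 10.0) -> bool:
    """Rough check: do quantized weights plus runtime overhead fit in mem_gb?"""
    weights_gb = params_billion * bytes_per_param
    return weights_gb + overhead_gb <= mem_gb

# 405B params at 4-bit (~0.5 byte/param) is ~202.5 GB of weights alone.
single_unit = fits_in_memory(405, 0.5, 128)      # exceeds 128 GB
paired_units = fits_in_memory(405, 0.5, 2 * 128) # fits in the combined 256 GB
print(f"One unit: {single_unit}, two linked units: {paired_units}")
```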
The NYU Global Frontier Lab has noted the DGX Spark’s suitability for privacy-sensitive work in healthcare, creating opportunities for managed services that cover procurement, HIPAA-compliant implementation, and ongoing security. The system supports fine-tuning for models up to 70 billion parameters, making it suitable for educational institutions and smaller biotech firms seeking local model customization without exposing data to the cloud.
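The 70-billion-parameter fine-tuning figure is also plausible from a memory-budget standpoint if adapter-based methods such as LoRA are assumed: the base weights stay frozen and quantized while only a small fraction of parameters is trained in higher precision. The sketch below is a rough, hedged budget with assumed fractions and byte counts, not Nvidia’s published methodology.

```python
def lora_finetune_budget_gb(params_billion: float,
                            weight_bytes: float = 0.5,
                            adapter_frac: float = 0.01,
                            optim_bytes_per_adapter_param: float = 12.0,
                            activations_gb: float = 20.0) -> float:
    """Rough memory budget for adapter-based fine-tuning (all figures assumed):
    frozen 4-bit base weights + fp16 adapters + Adam-style optimizer states
    for the adapters only + a flat allowance for activations."""
    weights = params_billion * weight_bytes                         # ~35 GB at 70B
    adapters = params_billion * adapter_frac * 2.0                  # fp16 adapters
    optimizer = params_billion * adapter_frac * optim_bytes_per_adapter_param
    return weights + adapters + optimizer + activations_gb

budget = lora_finetune_budget_gb(70)  # roughly 65 GB, within the 128 GB pool
print(f"Estimated fine-tuning footprint at 70B: {budget:.1f} GB")
```

Under these assumptions a 70B model lands comfortably inside the 128 GB unified memory, while substantially larger models would not, which matches the ceiling the article reports.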
Nvidia’s extensive partner ecosystem, which includes Dell, HP, Lenovo, and ASUS, provides broad channel reach, enabling integrators to offer bundled services such as installation, training, and support for organizations lacking in-house AI expertise. This development is part of Nvidia’s broader strategy to expand its presence in the AI market, as evidenced by the company’s recent announcement of its first direct partnership with OpenAI.