The specifics:
The family comes in three sizes: Nano (30B), Super (100B), and Ultra (500B); the two larger variants will arrive in 2026.
Nano generates responses more than three times faster than comparably sized models such as Qwen3-30B and outperforms them on coding and instruction-following benchmarks.
Unlike most closed U.S. competitors, Nvidia is releasing not just the models but also training data, fine-tuning tools, and reinforcement learning environments.
The chipmaker names Cursor, Perplexity, ServiceNow, and CrowdStrike as early adopters, spanning coding, search, enterprise automation, and cybersecurity.
While Chinese labs lead the world in open-model adoption, closed U.S. labs are increasingly building their own silicon. By releasing capable, fully open models, Nvidia gives Western developers a competitive open-source option while encouraging them to keep building on Nvidia hardware.