
Nvidia Invests in Open Models to Fuel AI Agent Development

By Akinola Ajibola | December 16, 2025 | Acquisition, Artificial Intelligence

With the debut of the Nemotron 3 family of open models and the acquisition of SchedMD, the company behind the widely used Slurm workload manager, Nvidia is making a strong move into open source AI. Taken together, the two actions show that Nvidia intends to control not only the processors that power AI, but also the crucial software infrastructure developers rely on to train and deploy AI systems at scale.

Nvidia argues that to function effectively, AI agents need to coordinate and operate across wide contexts and over extended periods, which in its view calls for an open infrastructure approach.

The Nemotron 3 family of open models gives businesses a foundation for creating their own domain-specific AI agents, and with it Nvidia says it is “betting on open infrastructure for the agentic AI era”.

Nvidia is moving on two fronts. The semiconductor giant revealed on Monday that it had purchased SchedMD, the company behind Slurm, the open-source workload management system that has been running quietly behind the scenes in almost all AI data centres, research facilities, and university labs since 2002.

Slurm is one of those unglamorous but vital pieces of infrastructure that keep modern computing running. It schedules and manages computational resources across clusters of machines, determining which jobs run where and when, and in the AI era it has become crucial for coordinating large-scale training runs and inference workloads. Slurm's original creators, Morris Jette and Danny Auble, founded SchedMD in 2010; Auble is currently its CEO.
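
To make the scheduler's role concrete, here is a minimal sketch of how a GPU training job is typically handed to a Slurm cluster from Python. The job name, node and GPU counts, and the training entry point are illustrative assumptions, not details from Nvidia's announcement.

# Minimal sketch: submitting a GPU training job to a Slurm cluster.
# Job name, resource counts and the training script are placeholders.
import subprocess

job_script = """#!/bin/bash
#SBATCH --job-name=train-agent
#SBATCH --nodes=2
#SBATCH --gpus-per-node=8
#SBATCH --time=12:00:00

srun python train.py --config config.yaml
"""

# sbatch reads the job script from stdin and queues it; Slurm then decides
# when and on which nodes the job actually runs.
result = subprocess.run(["sbatch"], input=job_script, text=True,
                        capture_output=True, check=True)
print(result.stdout)  # e.g. "Submitted batch job 12345"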

Nvidia signalled a strong commitment but did not disclose the terms of the deal. In the blog post announcing the acquisition, the company said it has been working with SchedMD for more than a decade and regards Slurm as essential infrastructure for generative AI.

Nvidia pledged to keep the software vendor-neutral and open source while speeding up development and broadening system compatibility. That matters because it signals Nvidia is not explicitly locking Slurm into its own ecosystem, a move that would have triggered a considerable backlash from the research and academic computing communities that depend on it.

But there is more to the story than the SchedMD acquisition. On the same day, Nvidia introduced Nemotron 3, a new family of open-source AI models that the company says is the most effective suite for creating AI agents. The launch signals both the direction Nvidia believes AI is heading and the capabilities it thinks developers will actually need.

The Nemotron 3 series comes in three variants, each intended for a distinct role. Nemotron 3 Nano targets specific applications where smaller models make sense for more efficient inference. Nemotron 3 Super is designed for multi-agent systems in which multiple AI models must cooperate and coordinate. Nemotron 3 Ultra handles the more demanding workloads in complex applications that call for more advanced reasoning.
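
For a sense of what "open models" means in practice for developers, here is a minimal sketch of pulling an open-weights checkpoint into a project with the Hugging Face Transformers library. The repository ID is a hypothetical placeholder; Nvidia's exact model names and hosting details are not specified in this article.

# Minimal sketch: loading an open-weights model for local inference.
# The model ID below is a placeholder, not a confirmed repository name.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nvidia/nemotron-3-nano"  # hypothetical identifier for illustration
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # device_map needs the accelerate package

prompt = "Summarise the steps needed to reschedule a delayed shipment."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))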

Nvidia CEO Jensen Huang said at the announcement, “Open innovation is the foundation of AI progress. With Nemotron, we’re transforming advanced AI into an open platform that gives developers the transparency and efficiency they need to build agentic systems at scale.” That last phrase is the important part.

Huang is referring specifically to agentic systems: autonomous AI agents capable of planning, executing tasks, and iterating without constant human supervision. Nvidia believes that is where the next generation of valuable AI applications will emerge.

Nvidia has made several open source moves recently. Last week it released Alpamayo-R1, an open reasoning vision language model focused on autonomous driving research, and it published additional procedures and guidance for its Cosmos world models to help developers build physical AI applications. These actions are pieces of a deliberate strategy.

The larger pattern is telling. Nvidia is betting heavily that the next frontier for GPU deployment is physical AI: robotics and self-driving cars that must understand and interact with the real world. Rather than waiting for businesses to work out how to use its GPUs for these applications, Nvidia is seeding the ecosystem with models, tools, and now essential infrastructure software. It wants to be the indispensable provider of the complete stack, not just the processors, as businesses build the intelligence systems that will drive robots and autonomous vehicles.

The SchedMD acquisition gives Nvidia direct control over how AI workloads are scheduled and how resources are allocated across clusters. By releasing open models such as Nemotron 3, it hands developers readily available tools that make the most of Nvidia hardware. It is a potent combination: free, capable models that perform well on Nvidia GPUs, running on infrastructure built to maximise the utilisation of Nvidia chips. Framed as open innovation, this is vertical integration.

The move also puts pressure on other players in the stack. The world's most powerful semiconductor company now backs a well-established, free alternative to proprietary workload management tools. Open source advocates celebrating Nvidia's commitment to open models may overlook the fact that Nvidia is actively leveraging that openness for competitive advantage, while rivals like AMD and Intel are forced to catch up on both the model and infrastructure fronts.

Nvidia is doing more than launching a new product and making a shrewd acquisition. It is executing a comprehensive strategy to own the most important layers of the AI stack, from the silicon to the models developers use and the infrastructure that runs them. By fusing open source idealism with strategic control, Nvidia is making it harder for anyone to build significant AI systems without touching Nvidia technology at several levels, turning its infrastructure from an option into a necessity for developers and businesses building physical AI applications.

The move, part of a broader plan to accelerate agentic AI systems and physical AI such as robotics and autonomous driving, is widely seen as Nvidia's response to the rise of open-source models from other labs, particularly Chinese companies like DeepSeek.

Tags: Nemotron 3, Nvidia, SchedMD