
Nvidia to purchase Run:ai for $700M, further asserting its dominance in the AI stack

Image credit: VentureBeat/Ideogram



AI cornerstone company Nvidia is continuing its strategic buying spree, announcing today its intent to purchase an Israeli startup that makes AI chips more efficient. 

The chipmaker has entered into a “definitive agreement” to acquire Run:ai, a Kubernetes-based software provider that helps optimize AI apps and workloads on graphics processing units (GPUs).

While the sum is as yet undisclosed, a source close to the matter told TechCrunch it would be close to $700 million. Earlier discussions had put the price tag at a heftier $1 billion.

The deal is the latest in a series of tactical moves and investments that have Nvidia dominating ever more of the AI stack.




A single fabric to access GPUs

Run:ai helps enterprises manage and optimize their compute infrastructure, whether in the cloud, on-premises or in a hybrid scenario. Its orchestration and virtualization software layer is tailored to AI workloads running on GPUs and other chipsets. 

The company’s centralized interface allows users to manage shared compute infrastructure. Developers can pool GPUs and share computing power for various tasks — this could be “fractions of GPUs” or multiple GPUs or nodes of GPUs running on different clusters, Alexis Bjorlin, VP and GM of Nvidia DGX Cloud, wrote in a blog post. Customers benefit from better GPU cluster resource utilization, improved infrastructure management and greater flexibility. 

“You can manage clusters in a very efficient way to extract value from your hardware investments,” Rona Segev, co-founder and managing partner at Run:ai investor TLV Partners, told VentureBeat.

When organizations are using big clusters of computers, “it’s a must to have some type of virtualization and management layer on top of it,” she said.

Run:ai can “split GPUs into pieces” and “allocate them dynamically,” Segev said, and can also combine and manage all workflows and data flows. 

The company built its open platform on Kubernetes; it supports all Kubernetes variants and integrates with third-party AI tools and frameworks.
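As a rough illustration of how GPU workloads are declared on a Kubernetes cluster, a pod can request GPU capacity through the standard `nvidia.com/gpu` extended resource exposed by NVIDIA’s device plugin. This is a minimal sketch with hypothetical names; note that vanilla Kubernetes can only allocate whole GPUs this way — the fractional, dynamic sharing Run:ai describes lives in an orchestration layer above it.

```yaml
# Hypothetical pod spec: requesting GPUs on a Kubernetes cluster.
# The nvidia.com/gpu extended resource (exposed by NVIDIA's device
# plugin) allocates whole GPUs only; fractional sharing of the kind
# Run:ai provides is handled by its own scheduling layer on top.
apiVersion: v1
kind: Pod
metadata:
  name: training-job            # hypothetical workload name
spec:
  containers:
  - name: trainer
    image: nvcr.io/nvidia/pytorch:24.03-py3   # example CUDA-enabled image
    resources:
      limits:
        nvidia.com/gpu: 2       # request two whole GPUs
```

Run:ai’s scheduler sits between requests like this and the physical cluster, which is what allows it to pool GPUs across teams and carve them into fractions.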

Run:ai’s capabilities will be extended to Nvidia DGX and DGX Cloud customers, and Nvidia will continue to offer Run:ai’s products “under the same business model for the immediate future.”

“Together with Run:ai, Nvidia will enable customers to have a single fabric that accesses GPU solutions anywhere,” Bjorlin wrote. 

Investing up and down the stack

Not a day goes by that the $2.2-trillion-valued Nvidia isn’t in the news, and this deal brings the company’s acquisition count to more than a dozen. Notably, it paid $6.9 billion for high-performance computing company Mellanox in 2019, and also scooped up OmniML for edge AI workloads, SwiftStack for data storage and management and Excelero for block storage. It has made numerous other investments in hardware, software, data center management platforms, robotics, security analytics and mobile capabilities. 

Regulators have at times taken notice of this spending frenzy: The company’s attempted $40 billion purchase of British chip designer Arm was terminated in early 2022 due to “significant regulatory challenges,” notably concerns that the deal would stifle competition.

To that very point, Nvidia is estimated to control 80% of the high-end chip market and recently announced plans to build a new business unit to design chips for cloud computing firms. 

Further, the company has well-established partnerships with all the major AI and cloud players — even as it competes with some of them — including OpenAI, Meta, Microsoft, Google, Amazon and others. The company continues to regularly announce new innovations, most recently a “big, big GPU” that is “pushing the limits of physics” and Project GR00T, a multimodal AI model for humanoid robots.

Reacting to the news today, many industry watchers commented on the company’s pervasiveness.

“Packaging Run:AI within Nvidia’s existing DGX Cloud helps make the case that Nvidia’s vertically integrating its platform from chips-to-inference, ostensibly making it a one-stop-shop for your AI needs,” said one Twitter user.

Another shared a graphic of Nvidia’s investments over the last four years, commenting that the company is looking to “capitalize on its current momentum to expand its ecosystem and secure future revenue streams. Startups (customers) depend on its GPUs and Nvidia’s growth depends on these startups. gpu rich!”

Israeli TV host and commentator Dror Globerman, for his part, shared a video he captured a few weeks ago of Nvidia CEO Jensen Huang — naturally in his signature black leather jacket — “dancing and mingling closely” with Run:AI entrepreneurs.

“What are they doing? Dramatically optimizing the utilization of the resource that is becoming the most desirable and expensive in the global economy: Processing power of supercomputers,” Globerman tweeted.

An important moment for the broader Israeli tech community

Run:ai was founded in 2018 by Omri Geller and Ronen Dar. The company launched out of stealth in 2019 with $13 million and has subsequently raised more than $105 million. It has worked with Nvidia for several years — its products are integrated into DGX, DGX SuperPOD, Base Command, NGC containers and AI Enterprise software — and counts among its customers Sony, Adobe and BNY Mellon. 

“Run:ai has been a close collaborator with NVIDIA since 2020 and we share a passion for helping our customers make the most of their infrastructure,” Geller said in the Nvidia blog post announcing the deal. 

Segev recalled how, in 2018, Geller and Dar pitched an idea for an orchestration layer between AI models and GPUs that would enable a “much more efficient use of the underlying compute resources.” At the time, it was all theoretical, she noted, but TLV made the bold move to invest in their seed round and subsequent rounds. Soon, the GPU market began heating up, and in 2022, ChatGPT changed the world. 

“These sparked unprecedented interest in AI models,” said Segev. Soon, enterprises realized that they would need to train their own models — and would have to have the infrastructure to do so. This “almost overnight” shift gave Run:ai “massive momentum.”

“Today’s announcement is a testament to the strength of Run:ai’s technology and the strength of its people,” Segev wrote in a blog post today. “Coming at this particular point in time, it is also an important moment for the broader Israeli tech community.”