Accelerating generative AI adoption through an interconnected and open ecosystem


Presented by VMware


The fast rise of generative AI has been a head-spinning experience for companies and consumers around the world. Fortunately, the whirlwind of fascination and trepidation over the past year has settled into more productive conversations around building an AI ecosystem for all.

The growing number of apps and frameworks today from companies such as NVIDIA, Hugging Face and Anyscale are helping lay the groundwork for more democratic use of AI and machine learning across the board. The potential gains are immense. McKinsey estimates generative AI could add up to $4.4 trillion annually to the global economy.

While every enterprise has an opportunity to join in, the drive to build and leverage new AI and ML platforms calls for committed collaborations and heightened engagement among enterprise leaders to support their customers’ AI journeys.

Here’s a look at how stakeholders can work together to cultivate an open and interconnected ecosystem while optimizing their AI and ML adoption.

Creating new AI and ML systems for sustainable growth

Despite the explosive growth of generative AI over the past year, we’re still in the early days. Responsible, safe and controlled use of AI and ML can lead to better outcomes for customers and help organizations achieve sustainable growth in these fast-moving times.

There are several key steps for CIOs and other stakeholders looking to cultivate an open AI ecosystem fueled by new collaborative efforts:

Embracing private AI

One of the biggest questions today is how organizations can accelerate their use of AI and ML responsibly. By using private AI, companies can balance the business gains from AI with the practical privacy and compliance needs of the organization. VMware recently showed how companies can work within an open ecosystem to support customers’ adoption of private AI. In partnership with NVIDIA, we will deliver a turnkey solution that features a set of integrated infrastructure and AI tools customers can purchase and deploy across a consistent hybrid cloud environment. Customers can also enable generative AI use cases by bringing the IBM watsonx AI and data platform to VMware Private AI. VMware and Intel are also working together to allow customers to use their existing general-purpose infrastructure and open-source software to simplify building and deploying AI models.

Setting universal AI standards

Every industry needs standards, ethics and fair regulations to provide checks and balances. UNESCO, which has helped shape ethical guidelines in science and technology for decades, published its first-ever “Recommendation on the Ethics of Artificial Intelligence” this year. This sets the right tone for enterprises. To foster a more open and democratic generative AI ecosystem, stakeholders need to develop clear ethical principles that reinforce fairness, privacy, accountability, intellectual property protection and the transparency of training data.

Contributing to open collaboration

Companies are rapidly experimenting with AI foundation models and generative AI tools. By sharing data and coding techniques, enterprises can collectively reach greater heights. Our team recently fine-tuned Hugging Face’s SafeCoder on our GitLab code and found that it adapted well to VMware’s coding style. Collaborative efforts like these help the industry build consensus.
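The article doesn’t describe SafeCoder’s actual pipeline, but the first step in any effort like this is turning an internal repository into training data. Below is a minimal, illustrative sketch of that step; the function name, the JSONL record schema and the file-extension filter are assumptions for illustration, not SafeCoder’s real format.

```python
import json
from pathlib import Path

def build_finetune_dataset(repo_dir: str, out_path: str,
                           exts: tuple = (".py", ".go")) -> int:
    """Collect source files from an internal repo into a JSONL file.

    Each record stores the file's relative path and its contents.
    The schema a real fine-tuning pipeline (e.g. SafeCoder's) expects
    will differ -- treat this record layout as illustrative only.
    Returns the number of records written.
    """
    records = 0
    with open(out_path, "w", encoding="utf-8") as out:
        for path in sorted(Path(repo_dir).rglob("*")):
            if path.is_file() and path.suffix in exts:
                text = path.read_text(encoding="utf-8", errors="ignore")
                if not text.strip():
                    continue  # skip empty files
                out.write(json.dumps({
                    "path": str(path.relative_to(repo_dir)),
                    "content": text,
                }) + "\n")
                records += 1
    return records
```

Filtering to known extensions and skipping empty files keeps the dataset focused on the coding style the model is meant to absorb; in practice a team would also strip secrets and vendored third-party code before training.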

Overcoming challenges and building greater trust in AI

Generative AI tools, including large language models and computer vision, can empower companies to increase their innovation and output while delivering better products and services.

But legitimate concerns remain. Our team has identified three main challenges that enterprises need to confront head-on:

Framing affordable AI models

Given that training today’s generative AI models is costly and complex, enterprises have every incentive to create and operate their own customized AI models at lower cost. The cost of training a “mega LLM” like GPT-3, for example, can exceed several million dollars, per deep learning company Lambda. With AI costs rising rapidly, many CIOs are looking to use open-source software to build smaller AI models that can be optimized for specific tasks. Through new solutions that offer greater flexibility and choice, AI innovation becomes more accessible to mainstream enterprises and entrepreneurs.

Democratizing AI expertise

The talent required to build successful AI models is highly specialized and in short supply. This comes up in many conversations with CEOs and CIOs who want the ability to easily pivot to new innovations as they emerge without being locked into a single arrangement. That level of adaptability becomes tricky when only a small percentage of tech professionals specialize in AI models. To tackle this skills gap, we must radically simplify how we create and train AI models. Reference architectures offer a blueprint and actionable pathway for enterprises that lack in-house expertise to build AI solutions from scratch.

Shifting from risk to trust

Today’s generative AI models still carry notable risks, which have the potential to harm customers and employees, damage a company’s reputation and negatively impact its revenue. These risks include security breaches, IP violations and litigation. A growing number of organizations have started to team up to address concerns over privacy, data integrity, bias and other red flags. For example, the open-source community is developing innovative ways to help businesses train and deploy their AI models responsibly. Collective efforts like these can help us build greater trust around the use of generative AI for business growth.

New rules and regulations around generative AI will take shape over time. It’s in the best interest of industry stakeholders to lay more stable groundwork today.

Working together to build a stronger AI ecosystem

There are many ways that enterprises can take ownership of the disruptions brought on by AI, and the best way to do so is collectively — across the public and private sectors, from major corporations and agencies down to small businesses, as well as consumers and employees.

VMware works closely with CIOs and other decision makers to ensure their digital infrastructure is optimized for AI and ML integration. Greater collaboration and teamwork with generative AI can pave the way for a thriving open ecosystem that remains interconnected and democratic. Our team is proud to be a part of that effort.

Abhay Kumar is Vice President, Hyperscalers & Technology Partners at VMware.


Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.