
From hype to execution: AI adoption demands speed and modularity, says Zemoso Labs CEO

Satish Madhira, Founder & CEO of Zemoso Labs

Companies of all sizes — startups, mid-sized enterprises, and industry giants — have been racing to announce their investments in artificial intelligence (AI) since the launch of large language models (LLMs). Satish Madhira, founder and CEO of Zemoso Labs, a company specializing in idea-to-scale product design and engineering, asserts that it’s time for execution to match the ambition.

A glance at the market shows that AI investment is in full swing. Companies across industries have made AI and LLM-related announcements, often followed by stock price surges. Even firms outside traditional AI sectors are rebranding their offerings with AI-driven capabilities to capitalize on the trend, and IT services companies have announced large-scale AI initiatives of their own.

With over 20 years of experience building and growing new initiatives as a chief executive, entrepreneur, and investor, Madhira has watched the landscape transform. From his early days as a founding engineer at Zoho to a successful exit to SAP, he and Zemoso's leadership team have navigated the shifts from on-premise to cloud and now to the GenAI era. As AI has become more accessible, especially with falling costs, the enthusiasm and rush to be early adopters is hardly surprising.

Still, companies handling sensitive, proprietary data are wary of AI jailbreaks, data leaks, and regulatory compliance. After all, even anonymized data can sometimes be re-identified to reveal personal information. This fear prompted some enterprises to delay AI adoption, while others quietly experimented behind closed doors. Enterprises now debate whether to purchase private instances of AI models, build their own foundation models, or experiment with open-source alternatives.

“We’ve seen this play out in database technologies,” states Madhira. “Enterprises in financial services and healthcare used to prefer open-source databases like PostgreSQL, MySQL, and MongoDB over closed-source options mainly for security and control. The same pattern is happening with AI.” Vector search options such as pgvector, an open-source PostgreSQL extension, are gaining interest as companies seek to mitigate risk and retain ownership of their data.

It’s worth noting that open-source models now deliver competitive performance. The key, Madhira says, is knowing when to use a high-end model and when a more secure one will suffice. “You must strike a balance between performance and security,” says Madhira. This approach lets companies deliver on the AI promise while controlling costs and meeting regulatory requirements.

Despite the excitement, Madhira has observed a growing concern in the market: a shortage of talent specialized in generative AI. Moreover, many enterprises are unsure where to invest and how to maximize AI's potential within their specific industries. In other words, there is still a gap in generative AI strategy. The result is delayed execution, with organizations struggling to choose the right AI models and define their approach.

Madhira believes the companies that succeed with AI are those that first explore use cases, experiment through iterative cycles to find the winning bet, and then execute at scale. First, the C-suite must ensure that AI investments translate into tangible business value, not just marketing hype. Second, it's imperative to deliver on promises fast: customers and shareholders demand immediate, working solutions, not proof-of-concept experiments.

Madhira also acknowledges that neither will be easy unless companies employ the right approach. “At the end of the day, you can’t put the cart before the horse. You can have the best foundational models and the most high-performing tech stack, but if you aren’t solving the right problem, you will simply not see results,” he remarks.

Zemoso’s approach mirrors what works best in high-growth startups — agility, iteration, and a dedication to solving the right problems. Many chief digital officers and chief AI officers are turning to a “ring-fenced” approach, where they build a startup-like culture within their organizations and bring in specialized partners who can handle fast-paced innovation cycles. Ring-fenced teams ensure that AI implementation aligns with real business problems while keeping the IP and core of the idea with organizational leaders.

It doesn’t stop at execution, however. Enterprises must also account for AI regulations and compliance requirements, which vary worldwide. To future-proof their AI strategies, companies must keep their model choice and deployment flexible. Leadership teams are aware of this: they believe organizations shouldn't be locked into a single model provider and must be able to transition between models as use cases evolve. This modularity-first approach will define AI success in the enterprise space.

“Open-source LLMs are a compelling offering. These deployments in themselves will not be trivial and will require individuals experienced in handling net new technologies. But what will be even more significant is how leaders ensure modularity,” says Sudhir Rao, Data Engineering and Cloud Ops Leader at Zemoso Labs.

Besides employing a modular approach, selecting an AI model based on performance is also critical. Different models excel at different tasks, and choosing the right one can significantly affect results. As more foundation models come to market, more specializations will emerge, and companies must retain the flexibility to swap models or use several at once.
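The modularity Rao and Madhira describe can be illustrated with a minimal provider-agnostic wrapper. The classes and method names below are hypothetical illustrations, not from any specific vendor SDK; real backends would call an actual model API where the placeholders are:

```python
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    """Hypothetical minimal interface that every model backend implements."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...


class OpenSourceModel(LLMProvider):
    def complete(self, prompt: str) -> str:
        # Placeholder: a real implementation would call a self-hosted
        # open-source model here.
        return f"[open-source model] {prompt}"


class HostedModel(LLMProvider):
    def complete(self, prompt: str) -> str:
        # Placeholder: a real implementation would call a commercial
        # hosted API here.
        return f"[hosted model] {prompt}"


def answer(provider: LLMProvider, prompt: str) -> str:
    # Application code depends only on the interface, so providers can
    # be swapped per use case without touching call sites.
    return provider.complete(prompt)


if __name__ == "__main__":
    # The same call site works with either backend.
    print(answer(OpenSourceModel(), "Summarize the compliance policy"))
    print(answer(HostedModel(), "Summarize the compliance policy"))
```

Keeping business logic behind an interface like this is one way to avoid the single-provider lock-in the article warns about: routing a sensitive workload to a self-hosted model and a low-risk one to a hosted API becomes a one-line change.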

The AI boom is far from over, but expectations have shifted. Madhira urges leadership teams to focus on rapid execution, ensuring that AI investments result in tangible impact and return on investments. In this space, companies that embrace modular approaches to building AI infrastructure are better positioned to adapt and thrive.

GamesBeat newsroom and editorial staff were not involved in the creation of this content.