When people talk about the future of AI, they usually imagine bigger models, faster inference, and smarter chatbots. But what if the real breakthrough isn’t about intelligence at all, but coordination?
For engineer and innovator Matteo Zamparini, a more fundamental and far less flashy challenge must be solved first: the lack of a common language and shared standards among disparate models.
As the founding engineer at PropRise, Zamparini builds autonomous AI agents that help commercial real estate professionals find sites and properties by tapping into data sources ranging from listings to zoning codes, city council meetings, and local news. However, without shared standards and frameworks, building generalized, scalable AI agents is a real struggle.
A shared “Internet of Agents,” complete with common protocols and frameworks among competing technologies, may provide the answer and unlock AI’s potential to fully integrate into the workforce.
“These innovations will be the backbone of the autonomous agent era, much like shared internet protocols underpinned the digital revolution,” says Zamparini.
Here’s a closer look at his groundbreaking work.
PropRise and the need for a common language
PropRise’s goal is to automate the mundane work involved in real estate analysis. The platform tracks millions of commercial properties across the United States, using public sources, zoning and housing policies, shifting demographics in individual neighborhoods, ownership data, and purchasing trends to uncover investment opportunities that human analysts might miss or take months to compile.
Zamparini’s complex AI models run in the background, constantly analyzing new sources and surfacing new recommendations. These agents go beyond simple chatbots: they are general-purpose planners that leverage linear agentic workflows and are expected to complete human-level tasks accurately in a fraction of the time a human would take.
AI agents like those behind PropRise cannot rely on a simplistic chat interface, tool calling, or chain-of-thought prompting alone. Instead, they depend on comprehensive orchestration infrastructure, which is why a common language and shared standards become essential.
From access to structure: building a unified agent infrastructure
PropRise’s challenge is not unique. Across industries, large language models depend heavily on their information inputs and on the context, or ‘state’ as engineers might call it, that they need to analyze. Data access only matters if that data is accurate and reaches the right agent at the right time, or ‘state.’
In Zamparini’s eyes, this simple fact makes the emergence of open standards like Model Context Protocol (MCP), AGNTCY or the language BAML the single most important trend in AI. To him, they are the foundational building blocks for the next generation of AI systems, and they already power his architecture.
MCP, for example, standardizes how AI models connect to data and tools. Much as a country’s electricity is accessible through one type of wall outlet, it enables AI agents to seamlessly plug into diverse databases and APIs that follow the same model. Though many engineers consider it flawed, MCP is the right idea headed in the right direction.
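The wall-outlet idea can be made concrete with a small sketch. The snippet below is not the real MCP SDK or message spec; it is a simplified, hypothetical illustration of the core pattern MCP standardizes: tools exposed behind a uniform, discoverable, JSON-message interface that any client can speak. The `property-data` server and `zoning_lookup` tool are invented examples in the spirit of PropRise’s use case.

```python
import json

# Illustrative sketch of the idea behind MCP: tools are exposed through a
# uniform, discoverable interface, so any client can talk to any server the
# same way. Message shapes are simplified and NOT the actual MCP spec.

class ToolServer:
    def __init__(self, name: str):
        self.name = name
        self._tools = {}

    def tool(self, fn):
        """Register a function as a callable tool."""
        self._tools[fn.__name__] = fn
        return fn

    def handle(self, message: str) -> str:
        """Handle a JSON-RPC-style request and return a JSON response."""
        req = json.loads(message)
        if req["method"] == "tools/list":
            result = sorted(self._tools)
        elif req["method"] == "tools/call":
            fn = self._tools[req["params"]["name"]]
            result = fn(**req["params"]["arguments"])
        else:
            return json.dumps({"error": "unknown method"})
        return json.dumps({"result": result})


# A hypothetical property-data server (invented for illustration).
server = ToolServer("property-data")

@server.tool
def zoning_lookup(parcel_id: str) -> str:
    # Stand-in for a real zoning-database query.
    return {"P-42": "C-2 commercial"}.get(parcel_id, "unknown")

# Any agent speaking the same message format can discover and call tools.
listing = server.handle(json.dumps({"method": "tools/list"}))
answer = server.handle(json.dumps(
    {"method": "tools/call",
     "params": {"name": "zoning_lookup",
                "arguments": {"parcel_id": "P-42"}}}))
print(listing)  # {"result": ["zoning_lookup"]}
print(answer)   # {"result": "C-2 commercial"}
```

The point of the sketch is the “outlet”: once discovery and invocation follow one shape, adding a new data source means registering a tool, not rewriting the agent.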
BAML, on the other hand, is a programming language that developers can use to produce reliable, structured data outputs, again making it easier for agents to act on correct information and context.
“BAML offers a way to build far more reliable and predictable agents,” Zamparini explains. Other developers agree, calling the language “faster and cheaper” for developers looking to build and scale AI agents designed to interact with humans and the internet at large.
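The reliability BAML targets comes from declaring the shape of a model’s answer up front and rejecting anything that does not conform. The sketch below illustrates that idea in plain Python with a hypothetical validator; it is not BAML’s actual syntax or API, and the `SiteRecommendation` schema is an invented example.

```python
import json
from dataclasses import dataclass, fields

# Sketch of schema-constrained output, the idea behind a language like BAML:
# declare the expected shape, then refuse to act on any model response that
# does not parse into it. Hypothetical validator, NOT BAML's real API.

@dataclass
class SiteRecommendation:
    address: str
    zoning: str
    confidence: float

def parse_output(raw: str) -> SiteRecommendation:
    """Validate raw LLM output against the declared schema."""
    data = json.loads(raw)
    expected = {f.name for f in fields(SiteRecommendation)}
    if set(data) != expected:
        raise ValueError(f"fields mismatch: got {sorted(data)}")
    rec = SiteRecommendation(**data)
    if not isinstance(rec.confidence, float):
        raise ValueError("confidence must be a float")
    return rec

# A well-formed response passes; a malformed one fails loudly instead of
# silently driving an unreliable downstream action.
good = '{"address": "12 Main St", "zoning": "C-2", "confidence": 0.93}'
rec = parse_output(good)
print(rec.address, rec.confidence)  # 12 Main St 0.93
```

Failing loudly at the boundary is what makes downstream agent behavior predictable: a bad output is caught at parse time rather than surfacing later as a wrong recommendation.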
Much like shared internet protocols were essential for the rapid spread of the World Wide Web, shared standards and languages will drive the adoption and usability of AI agents.
Interoperability as the missing piece of successful AI adoption
Startups are continually looking to add their own flavor of automation in new industry niches or use cases, driving the rapid innovation and adoption of AI technology. However, as the ecosystem of AI agents continues to expand, so does the risk of information fragmentation.
If the software powering those agents is difficult to test and maintain, or worse, poorly architected, agents risk producing inconsistent results, including “hallucinations.” Agents analyzing the same information risk reaching different, or even contradictory, conclusions. This would cast doubt on the industry and break AI’s promise to power the next industrial revolution.
Staying with the PropRise example, two real estate investment AI agents tapping into different types of information, or even the same information through different entry points, might not recommend the same property for further due diligence. This could lead investors to compare the results and lose faith in either recommendation being comprehensive or trustworthy.
Zamparini sees parallels to the early days of the internet, which needed shared protocols like HTTP, SMTP, and TCP/IP to become a truly global and comprehensive system. Frameworks like MCP and languages like BAML could have a similar effect, and with efforts like AGNTCY, lead to a developer collective with the aspiration of building an integrated “Internet of Agents”.
“My aspiration is to contribute to a tech ecosystem where any business can deploy AI agents that work with each other and produce results with four nines of accuracy past the decimal point (thanks to efforts like AGNTCY), access any information needed via standard protocols (like MCP), and output results that are trusted and verifiable (using a language like BAML),” says Zamparini.
Without these standards, the future of AI will see isolated and ultimately brittle systems that can’t scale. With them, AI systems can become more scalable, interoperating more easily across tools and platforms to create an ecosystem of automated workflows.
Why protocols, not platforms, will define the future of AI
Matteo Zamparini isn’t just advocating for shared standards and a more interconnected AI ecosystem in theory; he’s actively working toward it. PropRise has shown the potential of AI agents, and now he’s ready to take the next step.
The models he builds are only as good as the standards on which they rely and their ability to connect to other information inputs and outputs. Creating this interconnected landscape enables them to become truly comprehensive and able to accurately perform even the most complex tasks.
In turn, this ecosystem provides a glimpse into the potential of AI agent models to take on all the mundane tasks that slow down professionals and innovators across industries.
“In a future where mundane work is no longer a necessary burden, innovation will accelerate at the speed of computation,” Zamparini concludes. “I believe it’s a future that’s much closer than we think.”
VentureBeat newsroom and editorial staff were not involved in the creation of this content.