It’s fun to imagine the AI future of home service robots, Amazon Echo Dots in every room, delivery drones, and more accurate home medical diagnoses. But while it’s natural that flashy consumer applications capture the public’s imagination, AI’s capacity to transform another area gets far less attention: the way software itself is developed.

Imagine what computers could do if they understood themselves. Well, they soon will, and not in some distant future but in the very near one, using off-the-shelf technology that already exists today.

Until now, machine learning experts have tended to focus on AI applications highly tailored to specific tasks – for example, facial recognition, self-driving cars, speech recognition, even Internet search results. But what if those same algorithms were able to understand their own code structure — without human assistance, interpretation or intervention — in the same way they can recognize and process human language and images?

If code started analyzing itself, fixing errors and making improvements faster than any human could, technological breakthroughs would come faster and faster. The possibilities seem endless: medical advances, more capable robots, smarter phones, software with fewer bugs, banks with less fraud, and on and on.


Artificial intelligence holds the potential to solve an age-old problem in software development. The ability of code to write or manipulate other code, a concept known as metaprogramming, has existed for a long time (it originated in the late 1950s with Lisp), but metaprogramming can tackle only the problems a human programmer can imagine.
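
To make the concept concrete, here is a minimal sketch of metaprogramming, shown in Python rather than Lisp for readability: a program that writes the source of a new function, then compiles and runs it.

```python
# A tiny program that writes another program: the essence of metaprogramming.
def make_adder_source(n):
    """Generate the source code of a function that adds n to its argument."""
    return f"def add_{n}(x):\n    return x + {n}\n"

namespace = {}
source = make_adder_source(5)                             # the program writes code...
exec(compile(source, "<generated>", "exec"), namespace)   # ...then runs it
add_5 = namespace["add_5"]

print(add_5(10))  # 15: the generated function behaves like hand-written code
```

Notice that the human still had to decide, in advance, exactly what kind of code to generate. That is the limitation AI could remove.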

But AI can change that.

Using AI, computers could understand all the code in a software project’s history, improving and debugging individual lines of code in an instant, in every conceivable programming language.

Even an inexperienced or mediocre programmer with an idea for an app could just start describing the idea and the computer could build the application itself. This could mean the difference between completing, say, a cancer research project in days or months rather than years. It’s that significant an advance.

The technologies that will eventually lead to these dramatic advances are in the embryonic stage today, but they’re starting to get out there. For example, Google’s TensorFlow machine learning software lets everyday developers build neural network features directly into apps, such as the ability to recognize people and objects in photos. You no longer need a Ph.D. to tinker with these ideas. The ability for amateurs to tinker might be the biggest breakthrough in AI ever.
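
To see how low the barrier has dropped, consider a sketch along these lines, which uses TensorFlow’s bundled Keras API and a pretrained MobileNetV2 network to label the objects in a photo in roughly a dozen lines (the file name photo.jpg is just a placeholder):

```python
# A minimal sketch: label the objects in a photo with a pretrained network
# shipped with TensorFlow. "photo.jpg" is a placeholder for any local image.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions)

model = MobileNetV2(weights="imagenet")        # downloads pretrained weights

img = tf.keras.utils.load_img("photo.jpg", target_size=(224, 224))
batch = np.expand_dims(tf.keras.utils.img_to_array(img), axis=0)
preds = model.predict(preprocess_input(batch))

for _, label, score in decode_predictions(preds, top=3)[0]:
    print(f"{label}: {score:.2f}")             # e.g. "golden_retriever: 0.87"
```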

Think this is far in the future? You might be surprised to learn that companies are already using AI concepts in their in-house project management systems. Google, for example, built a bug prediction program that uses machine learning and statistical analysis to guess whether a piece of code is potentially flawed. Ilya Grigorik, co-chair of the W3C Web Performance Working Group, then created an open-source version of the tool, called bugspots, which has been downloaded more than 20,000 times.
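
For a sense of how such a tool works, here is a simplified sketch of the kind of recency-weighted “hotspot” scoring tools like bugspots use: files touched by many recent bug-fix commits rank as likelier to be flawed. The commit history below is made-up stand-in data; a real run would read it from git log.

```python
# A simplified sketch of recency-weighted bug "hotspot" scoring: files touched
# by many recent bug-fix commits rank higher. The commit list is made-up data;
# a real run would read it from `git log`.
import math
import re
from collections import defaultdict

FIX_PATTERN = re.compile(r"\b(fix(es|ed)?|close(s|d)?)\b", re.IGNORECASE)

def hotspots(commits):
    """commits: (timestamp, message, files) tuples, oldest first."""
    fixes = [c for c in commits if FIX_PATTERN.search(c[1])]
    if not fixes:
        return {}
    first, last = commits[0][0], commits[-1][0]
    scores = defaultdict(float)
    for ts, _msg, files in fixes:
        t = (ts - first) / (last - first) if last > first else 1.0
        weight = 1.0 / (1.0 + math.exp(-12.0 * t + 12.0))   # newer fixes count more
        for path in files:
            scores[path] += weight
    return dict(sorted(scores.items(), key=lambda kv: -kv[1]))

history = [
    (1, "initial commit", ["app.py", "parser.py"]),
    (60, "fix crash in parser", ["parser.py"]),
    (95, "fixes login bug", ["auth.py", "parser.py"]),
    (100, "add docs", ["README.md"]),
]
print(hotspots(history))   # parser.py ranks highest: two fixes, one very recent
```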

Another example is Viv, Siri’s successor. As outlined in a recent Wired article, Viv doesn’t just do speech recognition with a smattering of natural language processing; it builds complex, adaptive computer programs from English words. Code writing code. Because the generated code is trained and specialized by Viv’s creators, this isn’t quite the generalized code-writing ability I’m talking about here, but it’s a step in that direction.

Yet another step in that direction can be found in the land of amateur tinkerers. Emil Schutte makes a provocative statement: “Tired of writing code? Me too! Let’s have Stack Overflow do it.” He goes on to share a proof of concept, with full working code, that pulls from Stack Overflow’s large database of programming knowledge to produce fully functioning chunks of code based only on the intent of the code already written.
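
This isn’t Schutte’s actual code, but a rough illustration of the same idea: use the public Stack Exchange API to turn a plain-English intent into a candidate snippet. The query string and the crude scraping of the first code tag are just for demonstration.

```python
# Not the original proof of concept, just an illustration of the idea: ask
# Stack Overflow's public API for questions matching a plain-English intent
# and pull a candidate code snippet out of the top hit.
import re
import requests

API = "https://api.stackexchange.com/2.3/search/advanced"

def find_code_for(intent):
    params = {
        "q": intent,
        "accepted": "True",          # only questions with an accepted answer
        "order": "desc",
        "sort": "relevance",
        "site": "stackoverflow",
        "filter": "withbody",        # include the post body in the response
    }
    items = requests.get(API, params=params, timeout=10).json().get("items", [])
    if not items:
        return None
    # Grab the first <code> block from the top question's body as a rough guess.
    match = re.search(r"<code>(.*?)</code>", items[0].get("body", ""), re.DOTALL)
    return match.group(1) if match else None

print(find_code_for("reverse a string in python"))
```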

As more of this technology comes online and matures, machines will be able to outperform humans at just about any task: visual and image processing, game playing, and now even programming other computers.

So why can’t computers understand themselves yet? The answer is that it’s just a matter of time before they do. And once they do, you can expect to see radical breakthroughs in every field where software matters.

Lucas Carlson is VP of strategy at Automic Software.
