
How to apply natural language processing in the enterprise

AI will help in the enterprise sooner than you think.

It’s no secret that artificial intelligence (AI) is rapidly transitioning out of R&D labs and into the mainstream market for enterprise applications.

Leaders in the computer vision space, such as those developing autonomous cars, have continually made headlines as the new technology breaks ground. Applications beyond computer vision are increasingly gaining recognition, as well, including those dealing with non-spatial data, such as text and numbers.

The most famous “non-vision” examples include well-known systems beating the world’s best players at highly complex abstract strategy games such as Go, a game said to hold more possible positions than there are atoms in the visible universe. Do the examples set by industry leaders, such as IBM’s Watson, mean that AI technology is finally arriving for enterprise applications with the capability to power business transformation?

The answer is “yes,” but it’s not likely to happen with most of the current technology. The problem is that many existing AI technologies try to replicate what has worked for spatial data, including the application of computational statistics-based approaches to natural language. Such approaches attempt to turn text into “data” and look for deep patterns in it, a process that, to date, has largely failed.

From natural language processing to natural language understanding

To ensure the successful future of AI in the enterprise, an approach is needed that addresses the three primary challenges AI technologies must overcome to power transformational enterprise applications: language, context, and reasoning.

1. Language

The first challenge posed by many modern AI technologies is the inability to process language the way humans do. The large majority of current AI approaches focus on natural language processing (NLP) and are largely driven by computational statistics: they treat text as data rather than as language and apply the same techniques that work on spatial data, such as pattern recognition. These methods do not attempt to understand the text; they simply transform it into data and learn from the patterns in that data. But in the mechanical process of converting natural language into data, the context and meaning of the text are often lost.
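To make this concrete, here is a minimal sketch (assuming scikit-learn, which the article does not name) of a typical bag-of-words conversion: two sentences with opposite meanings produce identical vectors once word order is discarded.

```python
# Minimal bag-of-words sketch: word order, and with it context, is lost.
from sklearn.feature_extraction.text import CountVectorizer

sentences = ["the dog bites the man", "the man bites the dog"]

vectorizer = CountVectorizer()
vectors = vectorizer.fit_transform(sentences).toarray()

print(vectorizer.get_feature_names_out())  # ['bites' 'dog' 'man' 'the']
print(vectors[0])  # [1 1 1 2]
print(vectors[1])  # [1 1 1 2] -- identical "data", opposite meaning
```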

2. Context

This pattern-recognition method falls short because of the inherent challenge of understanding context. The ultimate goal of AI here is to devise mechanisms for comprehending the meaning of written text. To address the language challenge in enterprise applications, there needs to be a shift away from mechanically converting natural language into word-occurrence data and toward a logic that helps the AI technology understand text using its linguistic structure.

This approach, moving from natural language processing (NLP) to natural language understanding (NLU), involves applying principles from computational linguistics to reverse engineer the text back to its fundamental ideas and then understanding how those ideas were connected to form sentences, paragraphs, and the full document. The text also needs to be processed in the right context, which can only be developed if the technology focuses on the language’s structure and not just on the words in the text, as most current technologies do.
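As a rough illustration of what working from linguistic structure can look like, here is a small sketch that assumes spaCy and its small English model (neither is mentioned in the article): reading grammatical roles such as subject and object recovers exactly the distinction a bag of words throws away.

```python
# Dependency-parse sketch: extract who does what to whom from sentence structure.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

for text in ("the dog bites the man", "the man bites the dog"):
    doc = nlp(text)
    roles = {tok.dep_: tok.text for tok in doc if tok.dep_ in ("nsubj", "dobj")}
    # The two sentences now yield different structures, unlike their word counts.
    print(text, "->", roles)
```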

Understanding context, however, is a multifaceted challenge. First, words in many languages can be used in multiple senses, so it is important to disambiguate word senses in order to accurately understand their usage in a particular document. Second, text documents often follow domain-specific discourse models, e.g., legal contracts, news articles, or research reports, and certain properties of those discourse models should be incorporated into the AI technology to enhance NLU.
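For the first point, word-sense disambiguation, here is a minimal sketch assuming NLTK and WordNet; the classic Lesk algorithm used below is only one illustrative approach, not the specific technique the article advocates.

```python
# Word-sense disambiguation sketch using the Lesk algorithm over WordNet senses.
# Requires: pip install nltk, then nltk.download("wordnet")
from nltk.wsd import lesk

for sentence in ("the bank approved the loan application",
                 "they walked along the bank of the river"):
    sense = lesk(sentence.split(), "bank")  # pick a WordNet sense of "bank"
    print(sentence, "->", sense, "-", sense.definition() if sense else "no sense found")
```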

Many words may also be used as proxies within a document. For example, we commonly say “Kleenex” for “tissue,” “Tweet” for “Twitter post,” and so on, and the AI technology must have a way to recognize and understand such proxies. In some cases, text in a document refers to knowledge that is not explicitly part of the text. Humans understand this through prior knowledge; AI technologies, on the other hand, have to build a repository of such global knowledge that can be retrieved to supplement the document text and give a full understanding of its meaning.
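A simple, hypothetical way to picture proxy handling is an alias table mapping brand or colloquial terms onto the underlying concept; in a real system this lookup would be backed by a curated global-knowledge repository rather than a hard-coded dictionary.

```python
# Hypothetical proxy-resolution sketch: map proxy terms to canonical concepts.
PROXY_ALIASES = {
    "kleenex": "tissue",
    "tweet": "twitter post",
}

def normalize(token: str) -> str:
    """Replace a brand or colloquial proxy with its underlying concept."""
    return PROXY_ALIASES.get(token.lower(), token)

print([normalize(word) for word in "hand me a Kleenex".split()])
# ['hand', 'me', 'a', 'tissue']
```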

3. Reasoning

The third challenge is the traceability of the reasoning an AI solution uses to reach its conclusions. Almost all AI technologies built on computational statistics are black boxes. The flaw in this approach is that when a recommendation from the AI technology is not intuitive, there is no way to understand it, no way to know whether the relationship it found is truly causal or spurious, and users must rely on blind trust.

There are applications where such visibility may not matter. In the game of Go, for example, it’s not important to understand the reasoning behind the machine’s moves. In many enterprise-level applications, on the other hand, such visibility will be essential for adoption. In mission-critical environments where people are held accountable, business users have to be able to trust that the AI engine’s reasoning is sound.

Visibility also makes it easier to improve the engine when it produces a false positive or false negative. With a black box, the user has to find enough instances of the error and rebuild the model, with no way of knowing whether all variations or permutations of that error have been addressed. With deep linguistic learning approaches, by contrast, full visibility can be maintained.
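As a hypothetical sketch of what traceable reasoning can look like, the rule-based reviewer below returns each finding together with the rule that produced it, so a false positive can be traced to a specific rule and corrected directly instead of retraining an opaque model; the rule names and patterns are invented for illustration.

```python
# Hypothetical traceable-reasoning sketch: every finding names the rule behind it.
import re
from dataclasses import dataclass

@dataclass
class Finding:
    rule_id: str   # which rule fired
    snippet: str   # the text that triggered it

RULES = {
    "non_compete": re.compile(r"shall not .* compete", re.IGNORECASE),
    "auto_renewal": re.compile(r"automatically renew", re.IGNORECASE),
}

def review(text: str) -> list[Finding]:
    """Return every finding along with the rule that triggered it."""
    return [
        Finding(rule_id, match.group(0))
        for rule_id, pattern in RULES.items()
        if (match := pattern.search(text))
    ]

contract = "This agreement will automatically renew each year."
for finding in review(contract):
    print(f"{finding.rule_id}: flagged because of '{finding.snippet}'")
```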

Conclusion

The future of AI in the enterprise is now. However, to ensure successful adoption, enterprise AI applications must address the key challenges of language, context, and reasoning in order to fully comprehend the content in a purposeful way and create actionable insights for users.
