It is sad to report that, as a people, we have become less trusting of the world around us and the institutions we engage with on a daily basis. According to Gallup, confidence in institutions such as banks, government, and the police (to name a few) has decreased over the past 10 years. When it comes to media, trust in television news and newspapers has decreased by 10 percent since 2006; both are now trusted by less than 21 percent of the U.S. population, per the same study.

From a consumer perspective, instead of relying on salespeople, brand advertising, and other traditional sources of media, we now inherently seek answers online. According to a 2015 Nielsen study, two-thirds of respondents say they trust consumer opinions posted online. We rely on online reviews to decide which movie to watch, where to eat dinner, what to read next, where to go on vacation, and which consumer products to buy.

However, trends over the past few months suggest that in this area, too, consumers should be slightly less trusting and question what they are reading online. Amazon's problem with reviews is a perfect example. A recent study of over 7 million Amazon reviews indicated that the average rating for products with incentivized reviews was higher than for non-incentivized ones. Shoppers have started to distrust Amazon's reviews because they believe them to be biased: many brands have been encouraging favorable reviews by giving away products to shoppers. It's such a big problem that Amazon is changing its review policy to try to weed out incentivized product reviews.

Yelp, the San Francisco-based review platform, labels approximately 25 percent of reviews as suspicious or not recommended. Yelp recently worked with the New York Attorney General, which led to companies being fined for generating false reviews. It is safe to presume, however, that on a global scale these cases are just the tip of the iceberg.

Earlier this year, Google was forced to hand over IP addresses of people behind fake reviews in the Netherlands. Just in the past few weeks, Airbnb, which operates in 191 countries, has been accused of stopping guests from leaving bad reviews if they cut short their stay.

Thousands of formal opinions are left every day with ecommerce sites such as Amazon or review platforms like Yelp or TripAdvisor. Some of these, it is now clear, come from questionable sources. However, thanks to the explosion of social media, there are also millions of opinions left online every day through Facebook, Twitter, YouTube, traditional chat rooms, and other social platforms. This user-generated content, or UGC, has not been solicited or requested by brands and retailers. Rather, it’s produced by people who have just eaten at a restaurant, watched a movie, or just bought a new product and are voicing their opinion — good or bad.

This is where artificial intelligence can play a huge part.

The evolution of natural language processing and machine learning technologies, which help analyze the sentiment of opinions left online, has finally reached a level where it is possible to process UGC on a mass scale. Earlier, more limited technologies would scan a sentence and return a simple positive, negative, or neutral verdict. Now, not only can brands learn with greater accuracy the sentiment behind a tweet or YouTube comment, but they can analyze exactly what has been said and extract multiple aspects from a single opinion.
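The difference between the two generations of technology can be sketched in a few lines. The snippet below is a deliberately simplified, rule-based illustration of aspect-based sentiment analysis; the aspect keywords and sentiment lexicon are hypothetical examples, not part of any production system, which would typically rely on trained language models rather than word lists.

```python
# Minimal sketch of aspect-based sentiment analysis: instead of one
# overall verdict per opinion, we attach a sentiment to each aspect
# (food, service, water pressure) mentioned in the text.
# Keyword lists below are illustrative assumptions only.

ASPECTS = {
    "food": {"food", "meal", "breakfast", "dinner", "restaurant"},
    "service": {"service", "staff", "concierge"},
    "water pressure": {"water pressure", "shower"},
}

POSITIVE = {"great", "loved", "excellent", "amazing", "good"}
NEGATIVE = {"bad", "terrible", "poor", "slow", "rude"}

def analyze(opinion: str) -> list[tuple[str, str]]:
    """Return (aspect, sentiment) pairs found in one opinion."""
    text = opinion.lower()
    tokens = set(text.replace(".", " ").replace(",", " ").split())
    if tokens & POSITIVE:
        sentiment = "positive"
    elif tokens & NEGATIVE:
        sentiment = "negative"
    else:
        sentiment = "neutral"
    # Report the sentiment against every aspect the opinion touches on.
    return [
        (aspect, sentiment)
        for aspect, keywords in ASPECTS.items()
        if any(kw in text for kw in keywords)
    ]

print(analyze("Loved the shower, amazing water pressure."))
# → [('water pressure', 'positive')]
```

A real system would also handle opinions that mix sentiments ("great food, rude staff"), which requires linking each sentiment word to the nearest aspect rather than scoring the whole sentence at once.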

To put this to the test, we applied artificial intelligence to online opinions about the Bellagio hotel in Las Vegas. Collecting over 25,000 opinions from numerous sources across the web, we found 701 opinions just about its food; 119 people didn't like the hotel's service, 77 people loved the flowers, and an incredible 21 people had an opinion about the water pressure. (Most were positive about the Bellagio's water pressure, in case you were interested. See the rest of the Bellagio Hotel reviews here.)

We collected the opinions from sources such as Facebook and Twitter, plus chat rooms, forums, and other places where people voice their actual opinions, largely sidestepping the sources where fake reviewers are active, such as Amazon. We also collected some from sites with verified user status. Volume matters too: we gathered over 25,000 opinions, while Booking.com currently has only 11,000 reviews of the Bellagio. There is safety in numbers. Even if a few end up being fake, they're overwhelmed by the real ones.
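The "safety in numbers" claim can be checked with a back-of-the-envelope calculation. The figures below are hypothetical, chosen only to show how little a small batch of inflated ratings moves an average built from thousands of genuine ones.

```python
# Hypothetical illustration: 50 fake five-star ratings mixed into
# 10,000 genuine ratings that average 4.0 barely shift the result.

real = [4.0] * 10_000   # assumed genuine ratings
fake = [5.0] * 50       # assumed fake, inflated ratings

combined = real + fake
average = sum(combined) / len(combined)

print(round(average, 3))  # → 4.005, a shift of only ~0.005 stars
```

The same arithmetic cuts the other way on a small sample: drop the genuine pool to 100 ratings and those 50 fakes pull the average up to 4.33, which is why scale is the defense here.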

We used AI to process the review archive and then to analyze the actual sentiment expressed, such as whether people liked the water pressure. This is much more valuable than a simple tally of positive and negative opinions.

For retailers and brands alike, imagine being able to sidestep the process of requesting reviews from consumers — whether they turn out to be real or fake — and instead collect the opinions that are already expressed across the web, on a huge scale. After all, the crowd can’t lie.

For ecommerce sites trying to stay one step ahead of fake reviews, another option is now available, one where they can rely on the scale of hundreds of millions of online opinions. In an increasingly competitive market, retailers can hardly afford not to address the issue of consumer sentiment. While much has been written about artificial intelligence and the various applications of the technology, this is one case that we expect to affect the way we shop in the coming years.
