The extraordinary backlash against big tech is rooted in three basic issues that have upset consumers:
1. Unwittingly giving foreign hackers (as well as private individuals and companies) the tools to attack the bedrock of our democracy
2. Turning the Internet into a giant echo chamber, reinforcing the biases of users and shielding them from other points of view
3. Treating user data as a commodity without making people aware of (or giving them the ability to understand) what is going on.
The obvious solution for big tech is to re-establish public trust by solving these problems. So why aren’t they fixed yet? Protecting user privacy — as well as preventing election interference — isn’t just a matter of accepting lower profits. Facebook, Twitter, Google, and other tech giants face huge technical challenges along with user resistance as they try to close Pandora’s box.
Here, specifically, are the three main barriers to fixing the trust problem:
Barrier 1: Trolls adapt and adjust
It isn’t that hard to stop foreign trolls and bots. With smart people and a lot of money, we can understand such groups and even block attacks from foreign powers. It’s much tougher, though, to anticipate the next generation of attacks. These foes are sophisticated and adept at exploiting our inattention. As a result, designing and building robust future defenses is very difficult and expensive.
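To make the first half of that claim concrete, here is a minimal sketch of the kind of behavioral heuristic a defense team might start with: flagging accounts whose posting cadence looks automated. The field names and thresholds are illustrative assumptions, not any platform's actual rules.

```python
from dataclasses import dataclass

@dataclass
class AccountActivity:
    account_id: str
    posts_last_24h: int               # raw posting volume
    mean_seconds_between_posts: float
    duplicate_post_ratio: float       # share of posts that are near-identical

def looks_automated(activity: AccountActivity) -> bool:
    """Crude first-pass bot screen. Real systems feed hundreds of such
    signals into trained models; these cutoffs are illustrative only."""
    if activity.posts_last_24h > 500:               # superhuman volume
        return True
    if activity.mean_seconds_between_posts < 2.0:   # machine-speed cadence
        return True
    if activity.duplicate_post_ratio > 0.8:         # copy-paste amplification
        return True
    return False

suspect = AccountActivity("acct_42", posts_last_24h=900,
                          mean_seconds_between_posts=1.1,
                          duplicate_post_ratio=0.95)
print(looks_automated(suspect))  # True
```

The catch is that once rules like these ship, trolls post a little slower and a little less uniformly, the static thresholds age out, and every new generation of defenses costs more than the last.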
Barrier 2: Tech companies give us what we want
Breaking through the echo chamber is tough — largely because the Internet’s core services are designed to show us what we want to see. Typing something into a search engine, we expect it to deliver exactly what we asked for, not an opposing point of view. When we surf Facebook, we want things we like, not things we don’t like.
Consider the Flat Earth Society or Holocaust deniers as emblems of everything that has gone awry. Once disorganized fringe groups, they have been able to organize, connect with each other, and convert other scientifically illiterate “victims” to their way of thinking, thanks to the Internet.
The answer can’t be censorship. Better to identify major Internet cliques and hate-mongering mobs via algorithms and prominently display the alternative point of view, so that anyone searching for something demonstrably untrue receives the following results (sketched in code after the list):
- Exactly what they’ve searched for
- A clear message that the topic is controversial and that experts overwhelmingly agree it is false — or, at least, that there is a legitimate opposite point of view
- Links to solid material or data supporting the opposite view and/or debunking what they’ve searched for, applied with total algorithmic neutrality
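Here is a rough sketch, in Python, of what assembling that three-part results page could look like. The disputed-topics table, URLs, and function names are invented for illustration; a real search engine would need an algorithmically maintained, neutral source for all of them.

```python
# Hypothetical illustration only: the disputed-topics table, URLs, and
# function names are assumptions, not any search engine's real machinery.
DISPUTED_TOPICS = {
    "flat earth": "https://example.org/evidence-earth-is-round",
    "holocaust denial": "https://example.org/holocaust-historical-record",
}

def build_results_page(query: str, organic_results: list[str]) -> dict:
    page = {
        "results": organic_results,  # 1. exactly what they searched for
        "notice": None,              # 2. controversy banner, if warranted
        "counter_links": [],         # 3. neutral material for the other side
    }
    for topic, link in DISPUTED_TOPICS.items():
        if topic in query.lower():
            page["notice"] = (f"'{topic}' is a disputed claim; experts "
                              "overwhelmingly agree it is false.")
            page["counter_links"].append(link)
    return page

print(build_results_page("is flat earth real", ["forum post", "video clip"]))
```

The hand-curated dictionary is exactly the part that wouldn't survive contact with reality: identifying disputed topics and counter-evidence algorithmically, at web scale and with the neutrality the list above demands, is the hard part.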
We probably haven’t seen this type of solution yet because it will take years of product development and engineering to get it right at scale.
Barrier 3: The average privacy policy is 2,514 words long
Ideally, your privacy settings ought to be transparent and designed for ease of use. Part of the reason they’re not is that tech companies don’t want to make it easy for users to stop them from monetizing private user information. And that’s how we end up with 2,514-word privacy policies, roughly ten minutes of reading apiece at a typical 250 words per minute.
The other reason is that most users are not data scientists and don’t understand how privacy systems work or the implications of changing their user settings. Designing an intuitive control panel for everyone is a mighty technical and design challenge.
To visualize the challenge, imagine flying a helicopter without a single lesson. Your control panel has just two buttons: “Autopilot” and “Land Now.” Most people flying the chopper (that is, managing their personal data) would probably hit the Land Now button well short of the destination, short-circuiting the process of controlling their settings and preventing companies from monetizing their personal information. Naturally, tech titans don’t want to make aborting the flight too easy, so the answer can’t just be an off switch like a “Land Now” button. But in a helicopter with a full control panel, how many people could pull off the mission without messing it up? That’s why, when you register for and use search engines and social media sites, the Autopilot button is always on by default. This maximizes the use of your data but also prevents you from mistakenly putting your information at greater risk.
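One way to picture that tradeoff in code is a settings model with both extremes and a default. Everything here, the mode names and the granular switches alike, is invented for illustration; no platform’s real settings are being described.

```python
from enum import Enum

class PrivacyMode(Enum):
    AUTOPILOT = "autopilot"   # platform manages data use; on by default
    LAND_NOW = "land_now"     # blanket opt-out: the blunt "off switch"
    MANUAL = "manual"         # full control panel, for the rare expert user

# Hypothetical granular switches behind the MANUAL mode.
FULL_PANEL_DEFAULTS = {
    "ad_personalization": True,
    "share_with_partners": True,
    "location_history": True,
    "off_platform_tracking": True,
}

def effective_settings(mode: PrivacyMode, overrides: dict | None = None) -> dict:
    """Resolve a user's effective privacy switches from their chosen mode."""
    if mode is PrivacyMode.LAND_NOW:
        # Abort the flight: everything off, no nuance.
        return {key: False for key in FULL_PANEL_DEFAULTS}
    settings = dict(FULL_PANEL_DEFAULTS)  # Autopilot defaults: everything on.
    if mode is PrivacyMode.MANUAL and overrides:
        settings.update(overrides)        # The pilot takes the stick.
    return settings

# A rare expert user flips one switch and keeps the rest on.
print(effective_settings(PrivacyMode.MANUAL, {"share_with_partners": False}))
```

In practice almost nobody opens the manual panel, so whatever ships as the default effectively becomes the policy.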
Why aren’t we seeing solutions?
None of these barriers is insurmountable, but they are all formidable.
In the first instance, combating foreign attacks and anticipating future threats probably calls for a Manhattan Project for security, a field that today is vastly underfunded. That scale of money and talent probably will not materialize without considerable pressure from the public and Congress.
The second barrier is largely a technical challenge, but solving it could require tech companies to wade into the nation’s gaping political divide.
As for the issue of greater privacy protection, big tech could come up with a somewhat simpler user interface, provided it puts users through some basic training in how their personal data is used. But here’s the rub: Given the controls, would the average user, who probably doesn’t have the fortitude to read through an electronic terms-of-service agreement, have the patience to learn the tools needed to control their privacy? Every study of consumer attention spans indicates that’s not likely.
Alexander Hertel is the cofounder of Xperiel, a Sunnyvale, Calif., company that provides a platform to combine augmented reality with the Internet of Things.