Artificial intelligence has, without question, been a menace to modern democratic society. Malicious bots interfered in the 2016 presidential election in the United States, and they meddled in Mexican elections held earlier this week.

Perhaps even more alarming is a study published last month that found a majority of people in democratic societies around the world do not believe their voices are heard. Modern systems of government have been challenged in recent years both by disillusionment with institutions and by foreign adversaries deploying malicious forms of AI.

Something has to give. While AI is often painted as the villain automating away everyone’s jobs, it’s just a tool, and one that can be used in powerful ways to improve lives. Here are four ideas and initiatives that use AI to better democracy.

Deepfake detection

Earlier this year, when deepfakes — which use AI to graft one person’s face onto another person’s body — became the subject of mainstream news coverage, the possible misuses of the technology were readily apparent.


A scenario in which people are inserted into videos without their consent and “shown” performing sexual acts was the first obvious abuse, as evidenced by subreddit users placing the faces of Scarlett Johansson and Gal Gadot on the bodies of porn stars. It was — and still is — shockingly easy to find these videos online. But what happens if a deepfake of the president of the United States gets widely circulated? Many people will recognize it as fake, but if we can’t figure out how to avoid being duped by fake news in the written word, what happens when fakery mimics human speech and facial gestures?

In response to this growing threat, startups and government agencies have mounted initiatives to detect deepfakes.

Truepic, a startup working with Reddit to identify manipulated media, recently closed an $8 million funding round and will begin to explore ways to identify deepfakes. Also last month, researchers at the State University of New York (SUNY) announced a computer vision model that monitors the way people blink in order to spot deepfake videos; because face-swap models are typically trained on photos of people with their eyes open, deepfake subjects tend to blink abnormally rarely.
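To make that intuition concrete, here is a minimal sketch of a blink-rate check built on the eye aspect ratio, a common facial-landmark measure. This is not the SUNY team’s actual model; the thresholds, function names, and reliance on a separate landmark detector are all illustrative assumptions.

```python
# Hypothetical blink-rate check, NOT the SUNY model: given per-frame eye
# landmarks from any face-landmark detector, flag videos whose subjects
# blink far less often than real people do.
import numpy as np

def eye_aspect_ratio(eye):
    """eye: array of six (x, y) landmarks around one eye.
    The ratio drops sharply when the eyelid closes."""
    v1 = np.linalg.norm(eye[1] - eye[5])  # inner vertical span
    v2 = np.linalg.norm(eye[2] - eye[4])  # outer vertical span
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal span
    return (v1 + v2) / (2.0 * h)

def blinks_per_minute(ear_series, fps, closed_threshold=0.2):
    """Count open-to-closed transitions in a per-frame EAR series."""
    closed = np.asarray(ear_series) < closed_threshold
    blink_starts = np.flatnonzero(closed[1:] & ~closed[:-1])
    minutes = len(ear_series) / (fps * 60.0)
    return len(blink_starts) / minutes

def looks_suspicious(ear_series, fps, min_rate=5.0):
    # Humans blink roughly 15-20 times a minute; near-zero rates are a red flag.
    return blinks_per_minute(ear_series, fps) < min_rate
```

A production detector would presumably learn these cues from video rather than rely on hand-set thresholds, but the underlying signal — implausibly infrequent blinking — is the same.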

The Defense Advanced Research Projects Agency (DARPA) is funding a contest this summer for top forensics experts to develop ways to identify deepfakes.

The U.S. elections in November will more than likely be a testing ground for further meddling in the electoral process by foreign adversaries. That could include deepfakes, and I hope we’re ready to suspend our belief in what appears real, because — as comedian Jordan Peele warned through Barack Obama’s face in a widely circulated deepfake PSA — “It may sound basic, but how we move forward in this age of information is going to determine whether we survive or whether we become some kind of fucked-up dystopia.”

Language analysis

Stanford professor Dan Jurafsky, a member of the university’s computer science department and NLP Group, teamed up with psychology professor Jennifer Eberhardt to use natural language processing to scan transcripts of 100 hours of conversations between police and members of the community at traffic stops. In results shared last month, the researchers found that police consistently speak to black drivers in a less respectful tone than they use with white drivers.

The work conducted by Jurafsky and Eberhardt is some of the first analysis of body camera footage, and they plan to keep refining their model to gauge the tenor of conversations police have with various members of the community. Many police departments initially installed these cameras as a set of eyes and ears for evidence in shootings, particularly shootings of unarmed black men.

But analysis of the resulting footage in less high-profile circumstances could deliver insights that help law enforcement agencies better understand when a change in language could lead to better outcomes.
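As a toy illustration of what scoring transcripts for respectfulness might look like, the sketch below tallies politeness cues of the kind the Stanford study reportedly examined (formal titles, apologies, reassurance). The word lists and weights here are invented for illustration and are far cruder than the researchers’ actual model.

```python
# Toy, lexicon-based respect scorer for traffic-stop utterances. The
# categories echo cues reported in the Stanford work, but these word
# lists and weights are illustrative assumptions, not the real model.
MARKERS = {
    "formal_title": ({"sir", "ma'am", "mr", "ms"}, 1.0),
    "apology": ({"sorry", "apologize"}, 0.8),
    "reassurance": ({"okay", "fine", "safe"}, 0.5),
    "informal_address": ({"dude", "man", "bro", "buddy"}, -0.7),
    "bare_command": ({"stop", "hands", "now"}, -0.4),
}

def respect_score(utterance: str) -> float:
    """Sum marker weights over the tokens of one utterance."""
    tokens = [t.strip(".,!?").lower() for t in utterance.split()]
    score = 0.0
    for words, weight in MARKERS.values():
        score += weight * sum(tok in words for tok in tokens)
    return score

# Higher scores suggest more respectful language.
print(respect_score("Sorry to pull you over, sir. License and registration."))
print(respect_score("Hands where I can see them, man. Stop moving."))
```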

How we speak to one another matters because it can — and does — affect outcomes. For clear reasons, it’s in the state’s interest to maintain healthy police-community relations, and whatever constructive criticism such work offers seems like a positive step.

Alexa skills for essential city services

Last year, we highlighted an Alexa skill for the city of Los Angeles that was made to tell residents about library opening hours, local government news, and the latest actions taken by the City Council. The goal, a city official told VentureBeat, is for smart speakers to be able to automate non-emergency city services.

Today, Alexa skills and Google Assistant actions made by municipal governments can tell you things like garbage pickup days or local pool hours, but governments should continue to strive for more ambitious voice apps.
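For a sense of how small such a skill can be, here is a minimal sketch of an AWS Lambda handler answering a hypothetical “GarbagePickupIntent.” The intent name, the “day” slot, and the schedule are invented for illustration; the request and response shapes follow the Alexa Skills Kit JSON format.

```python
# Minimal sketch of an AWS Lambda handler for a hypothetical city skill.
# "GarbagePickupIntent", the "day" slot, and the schedule are invented;
# the request/response shapes follow the Alexa Skills Kit JSON format.
PICKUP_SCHEDULE = {"monday": "trash", "thursday": "recycling"}

def lambda_handler(event, context):
    request = event["request"]
    intent = request.get("intent", {}).get("name")
    if intent == "GarbagePickupIntent":
        day = request["intent"]["slots"].get("day", {}).get("value", "").lower()
        service = PICKUP_SCHEDULE.get(day)
        text = (f"{service.capitalize()} is collected on {day.capitalize()}."
                if service else "I don't see a pickup scheduled for that day.")
    else:
        text = "Try asking me about garbage pickup days."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }
```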

Virtual assistant use is on the rise, and a majority of U.S. households are expected to have a smart speaker in less than five years. Voice is also one of the simplest forms of computing available: part of the appeal of conversational AI is that you don’t have to be trained to use it. You don’t even need to know how to read or write; you can just open your mouth and speak.

If more city governments embrace voice app platforms as a way to reach a growing number of citizens, it might just change attitudes about government for the better.

It would be even more helpful if Amazon adopted and promoted easy-to-use phrases like “Alexa, ask the city …” to make requests as simple as possible for residents.

The speakers and assistants entering homes, cars, and workplaces could become an interface with local institutions — not just for crucial city services like animal control and garbage disposal, but also as a way to cut through bureaucracy and give citizens a channel to voice their opinions.

If cities follow initiatives like Northeastern University’s, where students this fall can get an Echo Dot that keeps them apprised of essential information, they could greatly change the way people feel about their government, reduce frustration with bureaucracy, and perhaps even make government more responsive to citizens’ needs.

Direct democracy

Direct democracy has long seemed nearly impossible for a variety of reasons, starting with the fact that people have neither the free time nor the technical background to vote on every piece of legislation that elected officials consider in a representative democracy.

MIT Media Lab’s Cesar Hidalgo suggests countries could use predictive algorithms to learn individuals’ patterns of behavior and vote on their behalf, thereby creating legislation through direct democracy.
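As a purely illustrative sketch of the idea — assuming bills can be represented as text and a citizen’s voting record as labels — one could train a simple classifier to act as that person’s voting agent. The data, features, and model choice below are invented for illustration and are not Hidalgo’s proposal in any technical sense.

```python
# Illustrative sketch of an algorithmic "vote on my behalf" agent: fit a
# classifier to a citizen's past votes, then predict their position on a
# new bill. All data here is invented; real bills would need far richer
# representations than bag-of-words text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

past_bills = [
    "increase funding for public transit",
    "cut taxes on small businesses",
    "expand renewable energy subsidies",
    "reduce environmental regulations",
]
past_votes = [1, 0, 1, 0]  # 1 = yes, 0 = no (this citizen's record)

agent = make_pipeline(TfidfVectorizer(), LogisticRegression())
agent.fit(past_bills, past_votes)

new_bill = "subsidize electric vehicle charging stations"
print("Predicted vote:", "yes" if agent.predict([new_bill])[0] else "no")
```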

Hidalgo’s approach admittedly abandons the idea that a person — politician or not — will weigh each bill on its merits rather than simply vote the way they have in the past, but it could also make government more responsive to the needs of the electorate.

I can’t imagine many politicians will seriously consider this anytime soon, but it does open the door to individuals using AI to explore their legislative options, or to a virtual assistant that helps people vote.

It’s not tough to imagine an assistant akin to Project Debater, the experimental IBM AI built to debate humans that debuted last month, helping combat fake news and arguing for or against particular pieces of legislation.
