How to keep humans in charge of AI

“We the people” need to stay in charge of AI — not tech companies or political elites. Google’s recent blunderfest with its Gemini AI system makes this abundantly clear.

Gemini will not say that Hitler is worse than Elon Musk’s tweets. It refuses to write policy documents that argue for fossil fuel use. By default, it generates images suggesting that America’s founding fathers were different races and genders than they actually were. 

These examples may seem farcical, but they hint at a not-so-distant, dystopian future in which unaccountable bureaucrats at private AI companies decide which ideas and values are allowed to be expressed, and which are not. No one, regardless of their ideology, should accept this vision. 

Neither should we ask the government to tell AI companies how to control AI speech. Government regulation will be important for AI safety and fairness, but living in a free society means not letting governments tell us what ideas and values people can express or not express. 


So, corporations and governments are clearly not the right entities to make these decisions. At the same time, decisions will have to be made. People will use these AI tools to seek out all kinds of information and to attempt to generate all kinds of content. Users will expect the tools to reflect their values — but they won’t agree on what those values should be. 

There is a third option beyond companies and governments: Put the users in charge of AI. 

Strategies to put users in charge of AI

Over the past five years, along with my work as an academic political scientist, I have worked with the tech industry to develop and experiment with different ways to empower users to help govern online platforms. Through this work, here is what I’ve learned about how we can effectively put users in charge of AI.  

First, let users choose guardrails through the marketplace. We should encourage a wide variety of fine-tuned models. Users, journalists, religious groups, civil organizations, governments and anyone else who wants to should be able to easily create customized versions of open-source base models that reflect their values and add their own preferred guardrails. Users should then be free to choose their preferred version of the model whenever they use the tool. This would allow companies that produce the base models to avoid, to the extent possible, having to be the “arbiters of truth” for AI.

While this marketplace for fine-tuning and guardrails will lower the pressure on companies to some extent, it doesn’t address the problem of central guardrails. Some content — especially when it comes to images or video — will be so objectionable that it can’t be allowed across any fine-tuned models the company offers. This includes content that is already straightforwardly illegal, such as child sexual abuse material (CSAM), but also lots of content that exists in grayer areas, like satirical depictions of real people that might be defamatory, slurs that may offend some people in some contexts but not others, sexual or pornographic content, support for groups alternatively considered terrorists or freedom fighters and so on. 

How can companies impose centralized guardrails on these issues that apply to all the different fine-tuned models without coming right back to the politics problem Gemini has run headlong into? The answer is to put the users in charge of setting these minimal, central guardrails.

Indeed, this is why some tech companies are already experimenting with democracy. In 2022, Meta announced a “community forum” to seek public input into how it designs certain guardrails for LLaMA, its open-source generative AI tool. Six months later, OpenAI announced a similar project to find “democratic inputs to AI.” Meanwhile, the AI startup Anthropic released a constitution co-authored by a representative set of Americans.

These are great first steps, but we’ll need to do a lot more. Recruiting representative samples of users, as these experiments have done, is expensive, and the recruits don’t have “skin in the game” — they lack strong incentives to understand the issues and make good decisions. Moreover, each assembly meets only once, so expertise in governance never accumulates over time.

Meaningful power over central guardrails

Stronger democracy for AI would require that users can make proposals, debate them and vote on them, with their votes holding binding authority over the platform. The scope of allowed proposals can be narrowed to exclude those that violate the law or unduly impinge on the platform’s business, but it should be kept broad enough to give people meaningful power over the platform’s central guardrails.

Although no tech platform has yet attempted to implement a real voting system like this, experiments in web3 — like the one Eliza Oak and I studied in a recent academic working paper — show a path forward. Startups in web3 have experimented for years with voting systems that hold extremely broad powers. While they’re still early in their journey to full democracy, we’ve learned four key lessons that can apply to AI platforms.

First, avoid having people vote with nothing at stake by tying voting power to something users actually value. AI platforms could tie voting power to digital tokens that users can spend within the platform — for example, as credits for buying more compute time.

Second, do not ask everyone to vote on everything. Instead, encourage users to delegate their tokens to validated experts who will cast votes on their behalf and provide transparent, public explanations of what proposals they made, how they voted and why. 

Third, create a rewards system to encourage good participation in governance. Announce that users will receive additional tokens — which they can use to vote or to pay for AI usage — when they develop a track record of participating meaningfully in governance over time.

Fourth, embed this voting system into a broader constitution that makes clear what proposals are in scope for users, when and how companies can veto certain kinds of proposals, who has voting power and in what proportion and so forth. Make explicit the company’s commitment to giving up the power to set central guardrails for their AI tools. 
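
To make these mechanics concrete, here is a minimal sketch, written in Python, of how a token-weighted, delegated voting scheme with participation rewards might work. Everything in it (the GovernanceSketch class, the token amounts, the reward rule) is a hypothetical illustration of the first three lessons, not any platform's actual implementation; a real system would also need the constitutional scope and veto rules described above.

    # Minimal, hypothetical sketch of token-weighted, delegated voting with
    # participation rewards. None of these names correspond to a real API;
    # a production system would add identity checks, auditing and the
    # constitutional scope/veto rules from the fourth lesson.
    from collections import defaultdict

    class GovernanceSketch:
        def __init__(self, participation_reward=1):
            self.tokens = defaultdict(int)   # voting power = platform tokens (e.g., compute credits)
            self.delegates = {}              # user -> validated expert who votes on their behalf
            self.participation_reward = participation_reward

        def grant_tokens(self, user, amount):
            self.tokens[user] += amount

        def delegate(self, user, expert):
            # Second lesson: users hand their voting power to a validated expert.
            self.delegates[user] = expert

        def tally(self, votes):
            # First lesson: votes are weighted by the tokens each user holds.
            # `votes` maps each expert (or self-voting user) to "yes" or "no".
            totals = defaultdict(int)
            for user, balance in list(self.tokens.items()):
                voter = self.delegates.get(user, user)
                choice = votes.get(voter)
                if choice is not None:
                    totals[choice] += balance
                    # Third lesson: reward users whose tokens were used in governance.
                    self.tokens[user] += self.participation_reward
            return dict(totals)

    # Example: two users delegate to an expert; a third votes directly.
    gov = GovernanceSketch()
    gov.grant_tokens("alice", 10)
    gov.grant_tokens("bob", 5)
    gov.grant_tokens("carol", 3)
    gov.delegate("alice", "expert_1")
    gov.delegate("bob", "expert_1")
    print(gov.tally({"expert_1": "yes", "carol": "no"}))  # {'yes': 15, 'no': 3}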

Helping society trust what we see

Platforms can start small with this experiment, piloting it on only a few decisions and phasing in its powers over time. To succeed, though, it must ultimately commit AI companies, clearly and credibly, to giving up the power to set their central guardrails. Only then will society trust that what we see, and the answers we get to the questions we ask, are not being distorted by unaccountable actors who don’t share our values.

Andrew B. Hall is the Davies Family Professor of Political Economy at the Graduate School of Business at Stanford University.