As AI output quickly becomes indistinguishable from human behavior, are we prepared to handle the ethical and legal fallout? The practice of designing AI to intentionally mimic human traits, or “pseudoanthropy”, is raising urgent questions about the responsible use of these technologies. Key among these are questions of transparency, trust and the potential for unintended harm to users. Addressing these concerns, and minimizing potential liability, is becoming critical as companies accelerate the adoption and deployment of AI systems. Tech leaders must implement proactive measures to minimize the risks.
The downside of humanizing AI
The appeal of pseudoanthropy lies in its potential to humanize and personalize experiences. By emulating human-like qualities, AI can theoretically create more intuitive, engaging and emotionally resonant interactions. However, recent real-world examples illustrate how these same capabilities also open the door for deception, manipulation and psychological harm.
Take, for instance, generative AI models like VASA-1, announced by Microsoft last week, which can produce uncannily lifelike talking avatars from a single static image. On one hand, such tools could enable more natural, linguistically diverse and visually compelling human-computer interactions. On the other, they carry the obvious and immediate risk of being used to create deceptive deepfakes. VASA-1 uses artificial “affective skills” – intonation, gestures, facial expressions – to simulate genuine human emotions. An AI that convincingly conveys feelings it does not have can elicit powerful emotional responses from viewers who may not realize they are interacting with an artificial entity, creating concerning opportunities for manipulation and psychological distortion.
The emerging phenomenon of AI-powered virtual companions and partners takes these issues to the extreme. Using large language models (LLMs), these pseudoanthropic agents can form remarkably convincing romantic relationships with users, complete with simulated intimacy and emotional attachment. However, the AI’s inability to reciprocate real human feelings, combined with the high risk of users forming unhealthy psychological dependencies, raises serious red flags about the technology’s impact on mental health and human connection.
Even more mundane applications of pseudoanthropy, like AI customer service avatars designed to add a “human touch,” pose ethical challenges around transparency and trust. An AI-generated face, voice or writing style that mirrors human mannerisms with uncanny precision makes it all too easy to mislead people about the true nature and limitations of the system they are engaging with. This could result in over-identification, misplaced affection, or inappropriate reliance on the AI.
In particular, the ability of AI to deceive users into believing they are interacting with a real human raises serious concerns about manipulation, trust and psychological well-being. Without clear guidelines in place, organizations risk causing unintended harm to individuals and, when these systems are deployed at scale, to society as a whole. Technology leaders find themselves at a critical juncture, needing to navigate this uncharted ethical territory and make decisive choices about the future of AI pseudoanthropy in their organizations.
“In my opinion, not clearly disclosing that a person is interacting with an AI system is an unethical use,” warns Olivia Gambelin, author of the upcoming book Responsible AI. “There’s a high risk of manipulation.”
An emerging liability risk
The ethical quandaries surrounding AI pseudoanthropy extend beyond philosophical debates and into the realm of legal liability. As these technologies become more advanced and widely adopted, organizations that deploy them may face a range of legal risks. For instance, if an AI system that mimics human qualities is used to deceive or manipulate users, the company behind it could conceivably be held liable for fraud, misrepresentation or even infliction of emotional distress.
Also, as lawmakers and courts begin to grapple with the unique challenges posed by these technologies, new legal frameworks and precedents will likely emerge that hold organizations accountable for the actions and impacts of their AI systems. By proactively addressing the ethical dimensions of AI pseudoanthropy, technology leaders can not only mitigate moral hazards but also reduce their exposure to legal liabilities in an increasingly complex and uncertain regulatory landscape.
Avoiding unintended harm
Gambelin cautions that the use of AI pseudoanthropy in sensitive contexts like therapy and education, especially with vulnerable populations like children, requires extreme care and human oversight. “The use of AI for therapy for children should not be allowed point blank,” she states unequivocally.
“Vulnerable populations are the ones that need that attention. That’s where they’re going to find the value,” Gambelin explains. “There’s something intangible there that is so valuable, especially to vulnerable populations, especially to children. And especially in cases like education and therapy, where it’s so important that you have that focus, that human touch point.”
While AI tools may offer some benefits in terms of efficiency and personalization, Gambelin emphasizes that they cannot replace the depth of human understanding, attention and empathy that is crucial in therapeutic and educational relationships. Attempting to substitute AI for human care in these domains risks leaving people’s core emotional and developmental needs unmet.
Technologists as moral architects
Other fields have been grappling with similar ethical dilemmas for decades. Philosopher Kenneth D. Alpern argued back in 1983 that engineers have a distinct moral duty. “The harm that results from a dangerous product comes about not only through the decision to employ the design but through the formulation and submission of the design in the first place,” Alpern wrote. While Alpern was discussing civil and mechanical engineering, his point is equally relevant to AI development.
Regrettably, innovation leaders confronting these thorny ethical questions have little in the way of authoritative guidance to turn to. Unlike other technical professions such as civil and electrical engineering, computer science and software engineering currently lack established codes of ethics backed by professional licensing requirements. There is no widely adopted certification or set of standards dictating the ethical use of pseudoanthropic AI techniques.
However, by integrating ethical reflection into their development process from the outset and drawing on lessons learned by other fields, technologists working on human-like AI can help ensure these powerful tools remain consistent with our values.
Pioneering responsible practices for human-like AI
In the absence of definitive guidelines, tech decision-makers can start by putting proactive policies in place to limit the use of pseudoanthropy where the risks outweigh the benefits. Some preliminary suggestions:
- Avoid the use of simulated human faces or visually human-like representations for AI, to prevent confusion with real people.
- Do not simulate human emotions or intimate human-like behaviors and mannerisms.
- Refrain from invasive personalization strategies that mimic human friendship or companionship and could lead to over-identification or inappropriate emotional dependence.
- Clearly communicate the artificial nature of AI interactions to help people distinguish between human and artificial entities (a brief sketch of what this could look like in practice follows this list).
- Minimize the collection of sensitive personal information intended to influence user behavior or drive engagement.
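As a concrete illustration of the disclosure and simulated-emotion points above, here is a minimal sketch in Python of how a chat product might enforce them at the response layer. Every name in it (DISCLOSURE, EMOTION_CLAIMS, wrap_reply) is a hypothetical example for discussion, not an existing product or library API.

```python
import re

# Hypothetical example: enforce an AI disclosure and damp simulated-emotion
# language before a chatbot reply reaches the user.

DISCLOSURE = "You are chatting with an AI assistant, not a human."

# Naive patterns for first-person emotional claims; a real system would need
# something far more robust than a regex.
EMOTION_CLAIMS = re.compile(
    r"\bI (love|miss|adore|feel so (happy|sad|lonely))\b", re.IGNORECASE
)

def wrap_reply(model_reply: str, turn_index: int) -> str:
    """Prepend a disclosure on the first turn and replace simulated-emotion claims."""
    if EMOTION_CLAIMS.search(model_reply):
        # Route to a neutral template instead of letting the AI claim feelings it lacks.
        model_reply = "I'm an AI system, so I don't have feelings, but I can help with your request."
    if turn_index == 0:
        model_reply = f"{DISCLOSURE}\n\n{model_reply}"
    return model_reply

if __name__ == "__main__":
    print(wrap_reply("I love talking with you! Here is your order status...", 0))
```

The point is not these specific checks, which are deliberately simplistic, but that disclosure and emotional boundaries can be enforced as ordinary product logic rather than left to policy documents.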
Ethics by design
As AI systems grow increasingly adept at mimicking human traits and behaviors, maintaining ethical standards can no longer be an afterthought. It must become an integral part of the development lifecycle, practiced with the same degree of discipline as security, usability and other core requirements.
The risks outlined here (deception, manipulation and the erosion of human connection) demonstrate that ethics is not a philosophical thought experiment when it comes to pseudoanthropic AI. It is a burning priority that will make or break consumer trust.
“The company is dealing, is risking basically the only currency that matters in tech these days, which is trust,” Gambelin emphasizes. “If you do not have your customers’ trust, you do not have customers.”
Tech leaders must recognize that developing human-like AI capabilities is practicing ethics by another means. When it comes to human-like AI, every design decision has moral implications that must be proactively evaluated. Something as seemingly innocuous as using a human-like avatar creates an ethical burden.
The approach can no longer be reactive, tacking on ethical guidelines once public backlash hits. Ethical design reviews and cross-functional ethical training must become institutionalized within mainstream software development methodologies from day one. In short, ethical oversight must permeate the development process with the same meticulous rigor as security audits and UX testing.
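To make that concrete, here is a minimal sketch of a release gate that treats an ethics review as a blocking sign-off alongside security and UX, assuming a hypothetical release_manifest.json with per-review approval flags; none of these names come from an existing tool or methodology.

```python
import json
import sys

# Hypothetical release gate: a build cannot ship unless the ethics review is
# signed off with the same rigor as the security audit and UX review.
REQUIRED_SIGNOFFS = ["security_audit", "ux_review", "ethics_review"]

def gate(manifest_path: str) -> int:
    with open(manifest_path) as f:
        manifest = json.load(f)
    missing = [name for name in REQUIRED_SIGNOFFS
               if not manifest.get(name, {}).get("approved", False)]
    if missing:
        print(f"Release blocked; missing sign-offs: {', '.join(missing)}")
        return 1
    print("All sign-offs present; release may proceed.")
    return 0

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1] if len(sys.argv) > 1 else "release_manifest.json"))
```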
Just as bygone products failed due to poor security or shoddy usability, the next generation of AI tools will fail if ethics is not hardwired into their core design and development practices from the ground up. In this new era, ethics is the harsh pragmatism underlying lasting technology.