Any video gamer knows how boring NPCs (non-player characters) in digital worlds are. Their behavior is simple and predictable, and their words are entirely scripted by a staff of writers. This makes them uninteresting opponents and unsatisfying companions.

We’re far more likely to emotionally attach to lifelike characters, like the emo robot sidekicks in the Star Wars franchise, but crafting believable, autonomous entities you can actually interact with is no easy feat.

Character models built by artificial intelligence aim to escape the uncanny valley and imbue inanimate objects and digital characters with an aura of realism and life. Normally this is accomplished through motion capture: modeling CG (computer-generated) characters after human actors wearing sensors. But this tactic limits you to the actors’ exact movements.

What if you want believable behavior that humans can’t model, such as a zombie with a missing head and limbs? The DWANGO Artificial Intelligence Laboratory in Japan recently presented artificial intelligence technology that does precisely this to legendary animator Hayao Miyazaki of Studio Ghibli.

The creepy and grotesque realism of the demo prompted Miyazaki to denounce the technology as “an insult to life itself” and bemoan that “we are nearing the end of times.”

If you build AI that replaces drawing, you shouldn’t be surprised to piss off a man who has spent his entire life drawing.

Not everyone is as pessimistic about AI as Miyazaki. Brad Knox from the MIT Media Lab sees incredible potential for machine learning to create engaging, emotional, and authentic characters, robots, and toys. “I’m unaware of any NPCs or electronic toy characters that can sustain an illusion of life over more than an hour,” says Knox, whose company Bots Alive creates exactly this illusion.

The company’s first offering is a smartphone kit that gives “lifelike autonomy” to the popular Hexbug Spider toy. This robot spider is normally controlled manually with a remote, but the Bots Alive kit gives the toy a “brain” to intelligently and autonomously navigate around obstacles while quirkily looking around and bumping into things the same way a live spider might. If you pick up two kits, the robots can play together as either friends or foes.

Knox and his team developed the robot’s autonomous behavior by extending the machine learning technique called “learning by demonstration,” which works as follows (a code sketch follows the list):

  1. A human puppeteer operates the robot in different situations and reacts authentically and emotionally.
  2. Training data is gathered from these demonstration sessions, capturing both the commands and the contexts in which they were given.
  3. Supervised learning builds a model of the puppeteer from the training data that answers the question “In context X, what is the probability the puppeteer would give command Y?”
  4. The robot’s behavior is observed and iteratively improved upon.
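
To make the loop concrete, here is a minimal sketch of steps 2–4 in Python. The context features, command names, and the use of scikit-learn’s LogisticRegression are illustrative assumptions, not Bots Alive’s actual implementation.

```python
# Hypothetical learning-by-demonstration sketch. The feature names and
# commands below are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Step 2: training data from puppeteering sessions. Each context is a
# feature vector the robot can sense, e.g.
# [distance_to_obstacle, angle_to_companion, seconds_since_last_move].
contexts = np.array([
    [0.1, 0.5, 2.0],   # obstacle very close
    [0.9, -0.3, 0.5],  # open space ahead
    [0.4, 0.0, 4.0],   # idle for a while
])
# The command the human puppeteer gave in each context.
commands = np.array(["turn_away", "walk_forward", "look_around"])

# Step 3: supervised learning builds a model of the puppeteer that answers
# "in context X, what is the probability the puppeteer would give command Y?"
model = LogisticRegression().fit(contexts, commands)

def choose_command(context, rng=np.random.default_rng()):
    """Sample a command from the learned distribution instead of always
    taking the most likely one, preserving some lifelike variability."""
    probs = model.predict_proba([context])[0]
    return rng.choice(model.classes_, p=probs)

# Step 4: run the robot with choose_command, observe its behavior,
# gather more demonstrations where it looks wrong, and retrain.
print(choose_command([0.2, 0.4, 1.0]))
```

Sampling from the predicted distribution rather than always picking the top command is one plausible way to keep the robot’s behavior from feeling scripted.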

Bots Alive keeps the kit at an economical $35 by leveraging your smartphone’s processor and camera instead of expensive hardware sensors. The Hexbug Spider, not included, is an affordable $25 add-on that many robot enthusiasts already own. The $60 total is one third the cost of Cozmo, another autonomous toy robot made by Anki, currently selling for $180 on Amazon.

Want to see for yourself whether intelligent autonomy enhances your play experience? Head over to the Bots Alive Kickstarter campaign to pick up your own kit.

From Tamagotchi to The Sims, we humans spend hours playing with and building emotional attachment to inanimate toys and digital characters. Now we have immersive VR games like Loading Human that feature complex emotional entanglements with NPCs.

For better or worse, making characters, robots, and toys more believable with artificial intelligence enhances their realism and thus our attachment to them. Knox expects that limitations of current machine learning methods, such as optimally sensing and encoding contextual information, will be overcome by deep learning and new research.

Will we live harmoniously alongside lifelike robots and digital avatars, or will AI-powered characters bring about Miyazaki’s “end of times”? We can only wait and see.

This article appeared originally at TopBots.
