Sociologist Clifford Nass, who's just published the book "The Man Who Lied to His Laptop," says we're treating our machines as if they were human.
I recently talked to Nass about his work and his book, in which he uses our interactions with machines to investigate how human relationships could be improved.
Nass stumbled into sociology by accident, deciding it was the easiest way to earn the credits he needed to finish his degree in computer science. He soon realized he was in a unique position: a sociologist who knew how to program a computer. Software was a new tool for research. As he says, "Galileo was one of the first people in the world to have a telescope. He was bound to discover something."
The perfect confederate
What Nass discovered is that we treat machines, to some extent, as if they were human, and highly tech-savvy people like software developers are no exception. He gives an example in which people were asked to evaluate a piece of software they had been using. One group filled out the evaluation on the same computer on which they had used the software; the other group used a different computer. The first group gave consistently higher evaluations. Unconsciously, they didn't want to "hurt the computer's feelings," said Nass.
Some issues in sociology and psychology are difficult to investigate because of what is called the confederate problem. Let’s say you want to design an experiment to test how people react to flattery. In such an experiment, the scientists need an assistant, whose complicity is unknown to the other participants, to do the flattering. The problem is that so many variables exist in the simplest social interaction that the experience of each participant is never exactly the same. Nass, through his work in human-computer interaction, realized that he had found the perfect, consistent confederate: a computer. Now he uses our relationships with computers to learn how we interact with each other.
How to build a team
One of the subjects Nass tackled, and discusses in the book, is team-building. He says only two things are required to make a team: identification (knowing you are on the blue team, or on the development team of Startup A) and the sense that success depends on everyone on the team. One of the traditional ways for tech companies to promote their products is to give out t-shirts at tech events. Nass sees this as a wasted opportunity: the point of a t-shirt is to build a team, which means only employees should be able to get one. Otherwise the identification loses its force.
He conducted experiments in which participants performed tasks on computers. Each computer was arbitrarily assigned to a red or blue team simply by tying a cloth of the appropriate color around it. Participants quickly started to rate the computers on their "team" as better, faster, and more helpful.
How to persuade
Another issue Nass examined was persuasion, in particular in the area of recommendations. When it comes to “experience goods” like perfume, whose qualities you can’t judge just by looking at them, social recommendations from “people like you” of the type used by Amazon are very effective.
With “search goods” like cameras, consumers are more swayed by recommendations based on perceived expertise, like a review score based on 30 aspects of the product rather than 5. In fact, the two general factors involved in persuasion seem to be perceived expertise and trustworthiness, where the latter means whether you think the recommender has your best interests at heart.
Asking for favors
Some of the most striking of Nass's experiments tested the best ways to get people to reveal information or do someone a favor. Participants in one experiment interacted with a program that said something like, "Most PCs these days have 2MB of memory. Being an older model, I only have 1MB. What do you feel inadequate about?" Participants were much more likely to reveal personal information in this case than when the program simply stated the machine's specs.
The same was true when a search program was configured to be "helpful" or "unhelpful" during search tasks. When participants were later asked to help optimize the screen resolution on the computer, those whose program had been "helpful" were much more likely to agree than those who had used the less helpful version.
Social prosthetics
Nass expects technology to get better and better at following social rules, and therefore to seem more human. He cites a navigation company that, instead of saying "Turn left after 30 meters," is starting to use landmarks, such as "Turn left after the supermarket," since this is much more natural for people.
He even envisages software built into cell phones that would change the speed and cadence of our voices to match those of our conversation partner, something humans do naturally but some people do better than others. He calls such tools social prosthetics, since they make us more likeable. It's not hard to imagine an era in which technology adapts itself to us so effectively that we find it more pleasant company than other human beings. There's something quite disturbing about that thought.