There has been quite the uproar over the recent study conducted by a Facebook data scientist and his two academic colleagues, with a number of ethical concerns raised. The most commonly reported concern is that Facebook users did not provide informed consent. The authors of the paper state that the study “was consistent with Facebook’s Data Use Policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research.”
While I agree with Facebook that their current terms of service delineate that user data can be used for A/B testing, users did not provide informed consent for the experimental manipulation. (At the time of the study, Facebook’s Data Use Policy said nothing about using user information for “research.”)
There’s an important distinction between collecting data from natural uses of the site and manipulating content to change user behavior. Informed consent would suggest that users had some idea that there was an experimental manipulation, which did not happen. An argument from Facebook might be that deception was necessary to obtain methodologically valid results — because if research participants knew that they were being observed, they would be more likely to change their behavior, something called the observer effect. However, if deception is used in any research study, then the participants must be debriefed (i.e., told about the deception that occurred and why it was necessary) soon after the study has ended, which Facebook did not do.
But the informed consent piece is not what’s most ethically troubling about this research.
Facebook did not engage in an appropriate ethical review of the study. There is some confusion about whether the experiment was approved by an institutional review board (IRB, an ethics board that evaluates proposed research). It was not. Cornell University released a statement explaining that the IRB concluded no review was needed because the Cornell researcher was analyzing a pre-existing dataset. This is normal IRB procedure for pre-existing data where the researcher has no input into the study design.
While Facebook is not required to conduct an ethical review based on the Federal Policy for the Protection of Human Subjects (the “Common Rule”) because they don’t receive federal funding, they should still have an effective procedure in place to properly review such manipulations. Adam Kramer, the lead author of the study and a Facebook data scientist, referenced “internal review practices,” which are clearly inadequate.
But why weren’t they good enough?
The Facebook team did not take into consideration the important ethical principle of beneficence. Researchers are obliged not only to ensure they do no harm, but also to maximize the potential benefits and minimize the potential harms of a study. Researchers must purposefully and thoroughly consider the risk inherent in their studies—just by the nature of taking part in research, participants are at risk of experiencing more harm or discomfort than they encounter in normal daily life. The fundamental job of any researcher is to decide whether such additional risk is justified given the potential benefits to society of the research findings. Even if the balance is overwhelmingly in favor of benefiting society, the researcher must take active steps to minimize potential harm to participants. The potential harm piece is key in this distinction—it is not enough for researchers to say “well, no one got hurt” after the fact.
Even if the principle of beneficence was considered in Facebook’s internal review, it was completely ignored in the execution of the study. Luckily, the emotional contagion effect sizes were so small that it is doubtful any Facebook user suffered substantial psychological consequences; however, there was no way for the researchers to know this before the fact. What if the negative statements posted by those who were shown more negative content actually influenced their mood states? What if someone who was already struggling with depression and who was using Facebook to reach out to others for help was made worse by the negative statements? What if a person read a negative post they otherwise wouldn’t have seen that triggered a previous psychological trauma? If Facebook even asked these questions, they certainly did nothing about them as part of their experimental design.
What could have been done? Typically when researchers suspect that their experimental intervention might cause psychological discomfort, they put a set of checks and balances in place. Depending on the type of study, an institutional review board might recommend a debriefing session. As part of this debriefing session, researchers check in with participants after the experiment to see how they are doing. If they show signs of discomfort, they are referred to a mental health professional. They are also provided referral information in case they have a reaction after they leave the research area. When conducting online-only research, a debriefing statement is often used and the names and contact information for mental health professionals local to the participant are included.
It does not take a Herculean effort to ensure participants’ well-being. The fact that Facebook decided to conduct their manipulation without basic protections for participants is, without a doubt, a breach of research ethics. In the future, Facebook and other technology companies would do well to engage in independent reviews of their research and to focus more on the well-being of their users. Facebook should be aware of the power they hold over their users — even if people wanted to quit Facebook, there is no better alternative for networking with family and friends. It is for exactly this kind of unchecked power that ethical guidelines for research have been developed.
Rey Junco is an associate professor of education and human computer interaction at Iowa State University and a fellow at the Berkman Center for Internet and Society at Harvard University. He is the author of the upcoming book, “Engaging Students through Social Media: Evidence-Based Practices for Use in Student Affairs.”