Facebook news feed experiment furor not dying down even a wee bit

For one week in 2012, Facebook deliberately manipulated the posts users would see in their feeds to elicit an emotional response. The manipulation was part of a study on emotional contagion, the tendency of people to express more negative or positive emotions based on the emotions expressed by those around them.

Published in the Proceedings of the National Academy of Sciences (PNAS), the study found that users who saw more negative posts went on to create more negative posts of their own in the following week, while users exposed to an increased number of positive posts displayed more positive emotions.

Users and media alike have expressed outrage at the intentional manipulation of posts without notification or permission. Facebook claims the experiment was allowed under its data use policy, which states user data may be used for "data analysis, testing, research, and service improvement." Adam Kramer, a data scientist at Facebook, provided some insight into the motivation for the study.

"The reason we did this research is because we care about the emotional impact of Facebook and the people that use our product," Kramer said in a Facebook post. "We felt that it was important to investigate the common worry that seeing friends post positive content leads to people feeling negative or left out. At the same time, we were concerned that exposure to friends' negativity might lead people to avoid visiting Facebook."

Facebook stresses that the experiment affected fewer than 700,000 people, or around 0.04 percent of its user base, and that the suppression of positive or negative posts was relatively minor. Posts containing certain emotion words were given lower priority in users' news feeds, making them less likely to appear. Suppressed posts still had a chance of showing up, could be seen on subsequent visits to the news feed, and continued to appear on the wall of the person who posted them.
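Based on that description, the manipulation amounted to a probabilistic down-ranking rather than outright deletion. The sketch below is a rough Python illustration of that idea; the word list, function names, and omission probability are all hypothetical, since Facebook has not published its ranking code.

```python
import random

# Hypothetical word list; the real study relied on much larger dictionaries.
NEGATIVE_WORDS = {"sad", "angry", "terrible", "awful"}

def rank_feed(posts, omit_probability=0.1):
    """Probabilistically omit posts containing flagged words for this
    particular load of the news feed. Omitted posts are not deleted;
    they remain on the author's wall and may still appear the next
    time the feed is loaded."""
    visible = []
    for post in posts:
        words = set(post.lower().split())
        if words & NEGATIVE_WORDS and random.random() < omit_probability:
            continue  # suppressed for this feed load only
        visible.append(post)
    return visible

if __name__ == "__main__":
    feed = ["Feeling sad today", "Great day at the beach", "Traffic was awful"]
    print(rank_feed(feed, omit_probability=0.5))
```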

Facebook's study has come under attack not only for its covert manipulation of information, but also for its questionable scientific value. The algorithm used to classify posts as positive or negative is imprecise and looks only at individual words. It was designed in 1993 and originally intended to analyze large chunks of text, such as novels. One of the program's major weaknesses is its inability to account for negation. For example, the sentence "I am not happy" is clearly negative, but the software would count the word "not" as one point toward negativity and the word "happy" as one point toward positivity.
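That weakness is easy to reproduce. The following sketch mimics the single-word counting approach the critics describe, using small illustrative word lists rather than the study's actual dictionaries; per the article's example, "not" is treated as a hit in the negative dictionary.

```python
# Illustrative word lists; a real dictionary-based tool uses thousands of entries.
POSITIVE_WORDS = {"happy", "great", "love", "wonderful"}
NEGATIVE_WORDS = {"sad", "angry", "hate", "awful", "not"}

def score(text):
    """Count positive and negative dictionary hits, ignoring context.
    Because negation is not handled, 'not happy' scores one point
    toward each category."""
    words = text.lower().replace(".", "").split()
    pos = sum(w in POSITIVE_WORDS for w in words)
    neg = sum(w in NEGATIVE_WORDS for w in words)
    return pos, neg

print(score("I am not happy"))  # -> (1, 1): counted as both positive and negative
```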

"What the Facebook researchers clearly show, in my opinion, is that they put too much faith in the tools they're using without understanding -- and discussing -- the tools' significant limitations," says John Grohol, doctor of psychology, in an article.

While many users are unhappy about Facebook's surreptitious research, the company seems unapologetic. In fact, it highlights how access to such a vast number of users could aid research in a variety of fields, and Facebook could even allow other researchers access to the data for a fee. As the outcry continues, however, Facebook may have to weigh whether this type of research is worth angering its users.

"In hindsight," says Kramer in his post, "the research benefits of the paper may not have justified all of this anxiety."
