
CBC The Passionate Eye (2018.12.01) e124 Inside Facebook


Size: 1.11 GiB
This torrent has no flags.


File: CBC-ThePassionateEye.20181201.e124.InsideFacebook.1280x720.mp4
Duration: 44m26s
Resolution: 1280x720
Video Format: AVC
Audio Format: AAC


Facebook is one of the world’s most powerful companies. Yet with 1.4 billion users logging on and sharing billions of pieces of content every single day, the platform has also become a breeding ground for extreme views and graphic content. That means Facebook has a big job on its hands, moderating millions of reported posts each week. But is the company putting profit before user safety?

Inside Facebook: Secrets of the Social Network goes undercover to expose, for the first time, how the platform moderates the content its users can and can’t see online.

In an office building in Dublin, Ireland, workers sit at their computers and scroll through disturbing images and video, determining which posts should stay up and which should be deleted. They work for CPL Resources, a contractor for Facebook, tasked with moderating the platform’s extreme content in the U.K.

“Extreme content” on Facebook can be anything from videos of violent abuse to hate speech to posts about self-harm or suicide. In Inside Facebook, an undercover reporter gets a job at CPL, and witnesses how moderators are trained to make the call between acceptable and unacceptable content.

Moderating violent content

A video of a toddler being beaten by an adult would prompt hundreds of calls from Facebook users for its removal. But as the documentary reveals, a video showing just that is being used by CPL to train moderators on the type of content that should remain on the platform.

In this case, the video was left to circulate online for six years after being flagged by online child abuse campaigners. “If that’s being used as an example for moderators of what is considered acceptable [and] is tolerated on Facebook, [it’s] truly shocking,” says Andy Burrows, associate head of child safety online at the U.K.’s National Society for the Prevention of Cruelty to Children.

Roger McNamee, an early investor in Facebook and a mentor to Mark Zuckerberg, anticipated these problems early on. “I was more proud of Facebook than anything I’d ever done … before I understood what was going on,” he says.

McNamee describes how the platform’s business model actually relies on extreme content, as it keeps users on the platform longer, feeding them more ads and increasing revenue. “This is essentially the crack cocaine of their product. It’s the really extreme, really dangerous form of content that attracts the most highly engaged people on the platform,” he says. “So they want as much extreme content as they can get.”

The decision to ignore or delete extreme content appears to depend on how it’s framed. In one instance, the undercover moderator asks a colleague’s advice about a video showing two underage girls, one being beaten by the other. “Unless it has a condemning caption, it’s a delete,” says the colleague, referring to the text accompanying the video.

In other words, if the caption were promoting the violence or poking fun, it would come down. But in this case, since the caption condemns the fighting, the moderator is told to leave it up with a “mark as disturbing” warning. As the colleague notes, “If you start censoring too much, then people lose interest in the platform. It’s all about making money at the end of the day.”

That’s not how the mother of the girl being beaten feels, though. “It shouldn’t have been a question [of] whether to take it down [or not] … it shouldn’t have been a discussion,” she says. Facebook’s position is that leaving the post up spreads awareness. “There are other ways to spread awareness without putting a video out there with someone’s daughter being battered,” the girl’s mother says. “It’s not Facebook entertainment.”

In response to the undercover findings, Richard Allan, Facebook’s vice-president of policy solutions, sat down with the filmmakers to discuss what the reporter found and to defend the company’s moderation process. “Shocking content does not make us more money. That’s just a misunderstanding of how the system works,” he says.

Posting self-harm

When it comes to the topic of suicide and self-harm, Aimee Wilson, who used to self-harm, describes how the online community can actually make things worse. “I think that probably around 65 per cent of my scars I would attribute to the impact social media’s had on me,” she says. “It meant I was surrounding myself with people that were self-harming. It would encourage me to cut a lot more.”

Once again, moderating these posts and communities involves walking a fine line. When a user admits to self-harm, Facebook’s policy is to send that user a message containing information about mental health and suicide support services, but the images themselves are left up.

Consultant psychologist Dr. Jane McCartney sees the danger in that. “Misery loves company, so if you can get out there and actually see people doing the thing that is a representation of [your] misery, there might be something attractive in that,” she says.

Richard Allan explains the reasoning behind Facebook’s decision to leave self-harm posts up while sending resources, noting that the platform gives individuals the ability to “express their distress to their family and friends through Facebook, and then get help.” He adds, “We see that happen every day ... individuals are provided with help by their family and friends because the content stayed up.”

Identifying hate speech

The film also uncovers how some Facebook pages that promote hate speech are left up and running. The undercover reporter inquires about a far-right page that touts anti-Muslim and anti-immigrant content. He is told that these pages, though they have exceeded Facebook’s allowed number of content violations, remain active and are “shielded,” which prevents the CPL moderators from deleting them. “Obviously, they have a lot of followers, so they’re generating a lot of revenue for Facebook,” says one moderator.

As undercover filming was taking place, Facebook CEO Mark Zuckerberg was testifying before the U.S. Senate’s Commerce and Judiciary committees, facing questions about user data privacy, Russian misinformation and other issues on the platform.

Zuckerberg was also questioned about hate speech on the social network. “Our goal is to allow people to have as much expression as possible,” he testified, adding, “I don't want anyone at our company to make any decisions based on the political ideology of the content.”

While some of the inconsistencies in moderating extreme content may come down to subjective opinion, others could be traced back to Facebook’s concern about being accused of anti-conservative bias. “I think people would expect us to be careful and cautious before we take down their political speech,” says Allan.

Changing protocols

In response to the documentary’s findings, Facebook has provided a number of statements, including a letter to the filmmakers and a blog post about its efforts to keep the Facebook community safe. The company notes that its training practices have been investigated and that re-training was immediately issued to moderators.

As for the video of the toddler being beaten, after six years circulating on the platform, it has finally been taken down. “We removed it, in line with our policies, once we knew the child was safe, and have used image-matching software to prevent it from being uploaded ever since,” wrote Monika Bickert, Facebook’s vice-president of global policy management, in the platform’s letter to the filmmakers.

Facebook has also developed its policies on self-harm with mental health and suicide-prevention organizations, but it still defends leaving certain self-harm posts online, saying the platform is “uniquely positioned” to help people in distress make contact with loved ones.

In addition, the company is accelerating work in its artificial intelligence (AI) division, in the hope that algorithms and machine learning can identify and censor extreme content before it reaches human eyes. Since November 2017, the platform has used AI to reach more than a thousand users who appeared to be expressing thoughts of suicide. The long-term goal is to use AI to review the billions of pieces of content published each day more efficiently.

The changes that Facebook promises may help alleviate some of the challenges of moderating content, but moderation remains one of the company’s most complex and labour-intensive problems. More than 10 million posts are currently flagged each week, and reviewing them all within the allotted 24-hour window remains a monumental task.

As Roger McNamee notes, “I just hope that, as a consequence of this film, the tone of the debate becomes sharper, more focused, more persistent and that we stop accepting their excuses. We stop accepting their assurances.”