Don't ignore Facebook's silly-sounding policies

A leaked manual reveals the shadowy and powerful role social media sites play in shaping public discourse

Published February 22, 2012 9:38PM (EST)


A longer version of this piece appears on Culture Digitally.

Last week, Gawker received a curious document. Turned over by an aggrieved worker from the online freelance employment site oDesk, the document itemized, over the course of several pages and in unsettling detail, exactly what kinds of content should be deleted from the social networking site that had outsourced its content moderation to oDesk’s team. The social networking site, as it turned out, was Facebook.

The antiseptically titled “Abuse Standards 6.1: Operation Manual for Live Content Moderators” (along with an updated version 6.2 subsequently shared with Gawker, presumably by Facebook) is still available on Gawker. It represents the implementation of Facebook’s Community Standards, which present the social media site's priorities around acceptable content but stop miles short of actually spelling them out. In the Community Standards, Facebook reminds users that “We have a strict ‘no nudity or pornography’ policy. Any content that is inappropriately sexual will be removed. Before posting questionable content, be mindful of the consequences for you and your environment.” But an oDesk freelancer looking at hundreds of pieces of content every hour needs more specific instructions on what exactly is “inappropriately sexual,” such as the instruction to remove “Any OBVIOUS sexual activity, even if naked parts are hidden from view by hands, clothes or other objects. Cartoons / art included. Foreplay allowed (Kissing, groping, etc.). even for same sex (man-man / woman-woman” (sic).

It’s tempting, and a little easy, to focus on the more bizarre edicts that Facebook offers here ("blatant depictions of camel toes" as well as "images of drunk or unconscious people, or sleeping people with things drawn on their faces" must be removed; pictures of marijuana are OK, as long as it's not being offered for sale). But the absurdity here is really an artifact of having to draw this many lines in this much sand. Any time we play the game of determining what is and is not appropriate for public view, in advance and across an enormous and wide-ranging amount of content, the specifics are always going to sound sillier than the general guidelines. (It was not so long ago that "American Pie's" filmmakers got their NC-17 rating knocked down to an R after cutting the scene in which the protagonist has sex with a pie from four thrusts to two.)

But the more important story concerns what this document reveals about the kind of content being posted to Facebook, the position in which Facebook and other content platforms find themselves, and the system they’ve put into place for enforcing the content moderation they now promise.

Facebook or no, it’s hard not to be struck by the depravity of some of the stuff that content moderators are reviewing. It’s a bit disingenuous of me to start with the camel toes, when what most of this document deals with is infinitely more reprehensible: child pornography, rape, bestiality, graphic obscenities, animal torture, racial and ethnic hatred, self-mutilation, suicide. There is something deeply unsettling about this document in the way it must, with all the delicacy of a badly written training manual, explain and sometimes show the kinds of things that fall into these categories.

This outpouring of obscenity is by no means caused by Facebook, and it is certainly reasonable for Facebook to take a position on the types of content it believes many of its users will find reprehensible. But that does not let Facebook off the hook for the kind of position it takes: not just where it draws the lines, but the fact that it draws lines at all, the kind of custodial role it takes on for itself, and the manner in which it goes about performing that role. We may not find it difficult to abhor child pornography or ethnic hatred, but we should not let that abhorrence obscure the fact that sites like Facebook are taking on this custodial role — and that while goofy frat pranks and cartoon poop may seem irrelevant, this is still public discourse. Facebook is now in the position of determining, or helping to determine, what is acceptable as public speech — on a site in which 800 million people across the globe talk to each other every day, about all manner of subjects.

This is not a new concern. The most prominent controversy has been about the removal of images of women breast-feeding, which has been a perennial thorn in Facebook’s side; but similar dust-ups have occurred around artistic nudity on Facebook, political caricature on Apple’s iPhone, gay-themed books on Amazon, and fundamentalist Islamic videos on YouTube. The leaked document, while listing all the things that should be removed, is marked with the residue of these past controversies. It clarifies the breast-feeding rule somewhat, by prohibiting “Breastfeeding photos showing other nudity, or nipple clearly exposed.” Any commentary that denies the existence of the Holocaust must be escalated for further review, not surprising after years of criticism. Concerns about cyber-bullying, which have been taken up so vehemently over the last two years, appear repeatedly in the manual. And under the heading “international compliance” are a number of decidedly specific prohibitions, most involving Turkey’s objection to its Kurdish separatist movement, including prohibitions on maps of Kurdistan, images of the Turkish flag being burned, and any support for the PKK (the Kurdistan Workers’ Party) or its imprisoned founder, Abdullah Ocalan.

Facebook and its removal policies, and other major content platforms and their policies, are the new terrain for long-standing debates about the content and character of public discourse. That Facebook’s handling of images of women breast-feeding has proven controversial should not be surprising, since women breast-feeding in public remains a contested cultural sore spot. That our dilemmas about terrorism and Islamic fundamentalism, so heightened over the last decade, should erupt here too is also not surprising. The dilemmas these sites face can be seen as a barometer of our society’s pressing concerns about public discourse more broadly: how much is too much; where are the lines drawn, and who has the right to draw them; how do we balance freedom of speech with the values of the community, with the safety of individuals, with the aspirations of art and the wants of commerce?

But a barometer simply measures where there is pressure. When Facebook steps into these controversial issues, decides to authorize itself as custodian of content that some of its users find egregious, establishes both general guidelines and precise instructions for removing that content, and then does so, it is not merely responding to cultural pressures, it is intervening in them, reinforcing the very distinctions it applies. Whether breast-feeding is made more visible or less, whether Holocaust deniers can use this social network to make their case or not, whether sexual fetishes can or cannot be depicted, matters for the acceptability or marginalization of these topics. If, as is the case here, there are “no exceptions for news or awareness-related content” to the rules against graphic imagery and speech, well, that’s a very different decision, with different public ramifications, than if news and public service did enjoy such an exception.

But the most intriguing revelation here may not be the rules, but how the process of moderating content is handled. Sites like Facebook have been relatively circumspect about how they manage this task: They generally do not want to draw attention to the presence of so much obscene content on their sites, or that they regularly engage in “censorship” to deal with it. So the process by which content is assessed and moderated is also opaque. This little document brings into focus a complex chain of people and activities required for Facebook to play custodian.

The moderator using this leaked manual would be looking at content already reported or “flagged” by a Facebook user. The moderator would either “confirm” the report (thereby deleting the content), “unconfirm” it (the content stays) or “escalate” it, which moves it to Facebook for further or heightened review. Facebook has dozens of its own employees playing much the same role; contracting out to oDesk freelancers, and to companies like Caleris and Telecommunications On Demand, serves as merely a first pass. Facebook also acknowledges that it looks proactively at content that has not yet been reported by users (unlike sites like YouTube, which claim to wait for their users to flag before they weigh in). Within Facebook, there is not only a layer of employees looking at content much as the oDesk workers do, but also a team charged with discussing truly gray-area cases, empowered both to remove content and to revise the rules themselves.
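Purely as an illustration, and not a description of Facebook’s or oDesk’s actual tooling, the three-way decision described above can be sketched in a few lines of Python. Every name here (FlaggedItem, first_pass_review, the category labels) is hypothetical; the only thing taken from the leaked manual is the confirm / unconfirm / escalate split and the instruction to push ambiguous cases up the chain.

```python
from enum import Enum
from dataclasses import dataclass

class Decision(Enum):
    CONFIRM = "confirm"      # report upheld: the content is deleted
    UNCONFIRM = "unconfirm"  # report rejected: the content stays up
    ESCALATE = "escalate"    # gray area: passed to Facebook's internal team

@dataclass
class FlaggedItem:
    content_id: str
    category: str          # e.g. "nudity", "hate_speech", "graphic_violence" (illustrative labels)
    clearly_violating: bool
    gray_area: bool

def first_pass_review(item: FlaggedItem) -> Decision:
    """Hypothetical first-pass decision by an outsourced moderator."""
    if item.gray_area:
        # Anything ambiguous (the manual's "escalate" cases, such as Holocaust
        # denial or "international compliance" items) is not decided here.
        return Decision.ESCALATE
    if item.clearly_violating:
        return Decision.CONFIRM    # content comes down
    return Decision.UNCONFIRM      # content stays

# Example: a flagged photo treated as a clear violation at the first pass
print(first_pass_review(FlaggedItem("photo_123", "graphic_violence", True, False)))
```

The sketch makes one thing visible: the first pass is a blunt triage, and anything that requires context or judgment is supposed to be pushed up the chain rather than resolved by the freelancer.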

At each level, we might want to ask: What kind of content gets reported, confirmed and escalated? How are the criteria for judging determined? Who is empowered to rethink these criteria? How are general guidelines translated into specific rules, and how well do these rules fit the content being uploaded day in and day out? How do those involved, from the policy setter down to the freelance clickworker, manage the tension between the rules handed to them and their own moral compass? What kind of contextual and background knowledge is necessary to make informed decisions, and how is the context retained or lost as the reported content passes from point to point along the chain? What kind of valuable speech gets caught in this net? What never gets posted at all, that perhaps should?

Keeping our Facebook streets clean is a monumental task, involving multiple teams of people flipping through countless photos and comments, making quick judgments based on regularly changing proscriptions translated from vague guidelines, in the face of an ever-changing, global, highly contested, and relentless flood of public expression. And this happens at every site, though implemented in different ways. From one vantage point, content moderation is the kind of undertaking where it’s amazing that it works at all, and as well as it does. But from another vantage point, we should see that we are playing a dangerous game: the private determination of the appropriate boundaries of public speech. That’s a whole lot of cultural power, in the hands of a select few who have a lot of skin in the game, and it’s being done in an oblique way that makes it difficult for anyone else to inspect or challenge. As users, we certainly cannot allow ourselves to remain naive, believing that the search engine shows all relevant results, the social networking site welcomes all posts, the video platform merely hosts what users generate. Our information landscape is a curated one. What is important, then, is that we understand the ways in which it is curated, by whom and to what ends, and engage in a sober, public conversation about the kind of public discourse we want and need, and how we’re willing to get it.


By Tarleton Gillespie

Tarleton Gillespie is a professor of Communication and Information Science at Cornell University. He is the author of "Wired Shut: Copyright and the Shape of Digital Culture" and is writing a new book on how private online media platforms curate public discourse. He co-curates the blog Culture Digitally.
