How Facebook Decided to Delete the Profile of One San Bernardino Shooter

Technology companies are under pressure to remove violent, terrorist content from their sites. Who should decide what gets removed?
A day after Tashfeen Malik and Syed Farook allegedly murdered 14 people in San Bernardino, California, Facebook removed a profile page belonging to one of the suspects. Malik, posting under a moniker, used the page to pledge her support to ISIS around the time of the shooting. According to The Wall Street Journal, a Facebook spokesman said the page violated the company’s community standards, which, among other things, prohibit posts supporting terrorism. The page’s removal highlights the long-running debate over online freedom and government surveillance, and it illustrates the pressure many technology companies are under to monitor and respond to violent content posted on their sites.


President Barack Obama, in his address to the nation Sunday evening, called on Silicon Valley to help in the fight against terrorism. “I will urge high-tech and law enforcement leaders to make it harder for terrorists to use technology to escape from justice,” Obama said. That position, where technology companies operate in concert with the government, has some folks worried. “When it comes to terrorist content, it’s certainly a tricky position for companies, and one that I don’t envy,” said Jillian York, the Electronic Frontier Foundation’s director of international freedom of expression, in an email to The Wall Street Journal. “Still, I worry that giving more power to companies — which are undemocratic by nature — to regulate speech is dangerous.”

Additionally, Reuters reports that the White House will ask tech companies to restrict the use of social media when it is used for violent purposes. “That is a deeply concerning line that we believe has to be addressed. There are cases where we believe that individuals should not have access to social media for that purpose,” an official speaking on background said.

In a previous article, I discussed how Google handles requests from the public to delete links to content from its index. Under what is known as “the right to be forgotten,” Google determines on a case-by-case basis which information gets unlinked. In fact, the Court of Justice of the European Union specifies that Google must consider “the type of information in question, its sensitivity for the individual’s private life, and the interest of the public in having access to that information. The role the person requesting the deletion plays in public life might also be relevant.”

As I noted in that article, this means Google bears the responsibility for determining whether a deletion request is valid and should be honored. If Google decides that the link-deletion request is not in the best interest of the public’s access to information, it can deny the request. Google is essentially serving as the arbiter of online speech.

These two processes — one in which a government cedes control to a private entity to unlink content from its search engine, and one in which a government asks a private entity to remove content that encourages terrorist activity — seem related. In the first example, by ceding the link-removal decision to Google, the Court of Justice of the European Union blurs the line between what a court of law should decide and what a private corporation should be allowed to do. While I’m not opposed to being forgotten, I’m not sure I’m comfortable with some group of people at Google making that determination.

I’m equally troubled by the second example. We are now asking Twitter, Facebook, and others to identify and remove content that has “violent ends.” It’s not that I want that content to stay up. I don’t. But relegating that decision to a private company, just like ceding the right-to-be-forgotten process to Google, doesn’t sit quite right with me.

If we are concerned that a government can abuse online freedoms like speech, then we should be equally worried about arbitrary decisions made by private entities to remove terrorist speech from online social media. To be clear, I am not arguing that the content should stay up. What I am arguing is that its removal should be a considered decision, not one made unilaterally by a private entity. Restricting speech is a serious thing, and because we’ve surrendered control over our data and privacy to corporate interests, we sometimes assume their interests and ours are the same.
