Twitter Promoted Hate, Then Took a Small First Step to Reduce It. But Silencing Hate Does Not Make It Go Away.

Lawyers' Committee
5 min read · Nov 20, 2017


By David Brody

Last week, Twitter announced new changes to its hate speech policies and the rules for its verified user status — that small blue checkmark next to a user’s name that certifies the authenticity of the person’s identity. To its credit, Twitter is starting to take seriously its responsibility to address how its users engage with its platform. But it has a long way to go to regain the public’s trust that it has the ability and fortitude to actually counter the proliferation of hate on its service.

While Twitter may have intended verified status to be merely an authenticator, its selective and opaque usage conveyed a sense of prestige: that this user is important enough that Twitter took the effort to verify who they are. The reputation-boosting program seemed innocuous enough — until Twitter started granting verified status to white supremacists, neo-Nazis, and other extremists. Richard Spencer received verified status; his dream is “a new society, an ethno-state that would be a gathering point for all Europeans. It would be a new society based on very different ideals than, say, the Declaration of Independence.” Jack Posobiec, one of the principal peddlers of the Pizzagate conspiracy theory, used his verified Twitter account to dox one of Alabama senatorial candidate Roy Moore’s accusers. Jason Kessler received verified status after using Twitter to celebrate the murder of Heather Heyer at the Charlottesville Unite the Right rally he organized.

White supremacists and other hate groups routinely use Twitter to incite violence, promote hateful and unlawful treatment of minorities, and intimidate those who speak out for civil rights with threats, doxxing, and harassment. Twitter built the platform, Twitter profits from its misuse, and so Twitter owns this problem.

Recognizing that verified status conveys a measure of endorsement, Twitter announced that it was removing verification from “accounts whose behavior does not fall within [its] new guidelines.” It removed verified status from Richard Spencer, Jason Kessler, and other hateful extremists like James Allsup and Laura Loomer. Going forward, Twitter may revoke verified status if the user promotes hate or violence, or supports organizations or individuals that promote hate or violence. This policy applies to “behaviors on and off Twitter.”

At a minimum, verified users’ compliance with Twitter’s code of conduct should be strictly enforced. If a user wants the benefit of verified status, they must accept the corresponding responsibility. Twitter has shown that at present it lacks the ability to actually enforce its own rules in a fair and comprehensive manner for the whole platform. So the least that it can do is enforce the rules for the users to whom it gives preferential status.

Twitter also announced that, beginning in mid-December, it will more aggressively remove and ban hate speech and terroristic activity. Users “may not affiliate with organizations that — whether by their own statements or activity both on and off the platform — use or promote violence against civilians to further their causes.” The new hateful imagery policy prohibits the use of “hateful images or symbols” in user profiles, profile images, bios, and usernames. Given the rampant use of Nazi, KKK, and Confederate imagery on Twitter, these policies will have a substantial impact, if Twitter gives sufficient resources and training to its enforcement teams.

These new rules echo Germany’s strict anti-Nazi laws, which prohibit the use of hate symbols, outlawed political parties, and surrogates for banned symbols and parties. Like Germany, Twitter and other social media companies are not constrained by the First Amendment, which protects only against actions by the U.S. government. To be clear, there is a big difference between a state actor and a private company. But since Twitter already operates in Germany in compliance with its laws, and is subject to the European Union’s hate speech code of conduct, one would hope that Twitter knows how to implement such restrictions.

Twitter’s track record, however, does not inspire confidence that these new policies will be appropriately executed. How does Twitter intend to fairly monitor and adjudicate “off the platform” activity? How will it respect user privacy while doing so? What experts and resources is it dedicating to ensure such broad policies do not censor non-hateful speakers? For example, civil rights activists advocating for racial equality on social media are frequently accused of being anti-white.

Twitter’s policy changes — if properly implemented — are just the beginning of a broader conversation. Germany’s anti-Nazi laws do not succeed simply by banning hate; the country engages in robust and honest education of its past and the consequences of hateful ideologies.

Twitter needs to continue to reflect on its role and responsibility in the social media ecosystem, if it wants to be a good corporate citizen. That discussion must include not just policy changes that reduce harassment and intimidation, but also a holistic evaluation of the platform’s architecture and algorithms. Here are just a few starting points:

· Reform the algorithms to break users out of their information bubbles and nurture more constructive conversations.

· Learn how to downplay false and defamatory material, and elevate counterspeech that responds to offensive content.

· Educate users on how to spot propaganda and be more sophisticated consumers of information.

· Increase transparency into how the algorithms and policies work, so that independent researchers and journalists can help identify shortcomings.

· Appoint an ombudsperson to act as a public advocate within the company, facilitate transparency, and liaise with researchers and civil society organizations.

· Hire diverse and localized teams to increase cultural literacy when enforcing the Twitter Rules. Thoroughly train and resource these professionals.

Ultimately, silencing and siloing hate speech will not eliminate hate. When you block or suspend an offensive troll, they do not vanish into the void. They remain your neighbor and fellow citizen. As Supreme Court Justice Louis Brandeis wrote, “[The Founders] knew that order cannot be secured merely through fear of punishment for its infraction; that it is hazardous to discourage thought, hope and imagination; that fear breeds repression; that repression breeds hate; that hate menaces stable government; that the path of safety lies in the opportunity to discuss freely supposed grievances and proposed remedies; and that the fitting remedy for evil counsels is good ones.”

Twitter built a platform that promotes evil counsels; it bears the burden now to foster good ones.

David Brody is Associate Counsel & Fellow for Privacy and Technology at the Lawyers’ Committee. He focuses on issues related to the intersection of technology and free speech, hate group activity, consumer privacy, government surveillance, and racial discrimination. David previously worked on privacy and consumer protection matters at the Federal Communications Commission.
