Despite the viral success of late-night host Jimmy Kimmel’s recurring “Mean Tweets” segment, saying awful things to people on social media often doesn’t end with the person on the receiving end holding back laughter while reading it aloud.
We saw this most recently when SNL cast member and actress Leslie Jones left Twitter after an exchange with a Breitbart writer over a bad review he wrote of Jones’ latest project, the much-anticipated remake of the cult classic “Ghostbusters.” The writer, Milo Yiannopoulos, was permanently banned from Twitter after he posted screen captures that made it appear as though Jones was tweeting hateful things herself, which egged on trolls to continue harassing her.
Other tech giants have dealt with this problem as well. Take Facebook, which in the wake of the shootings in Baton Rouge, Falcon Heights, MN, and Dallas has seen a drastic uptick in content flagged as racist, offensive, or violent. Or Craigslist, where a Southern California woman recently posted ads in which she purported to be the pregnant wife of a U.S. Marshal seeking someone to engage with her in a “rape fantasy.” The woman, who turned out to be the Marshal’s ex-girlfriend, has been arrested and charged with 10 felonies.
How have major tech companies evolved when it comes to policing themselves for hate speech, threats of violence, and bigotry? What more can/should tech companies be doing to police themselves?
Guest:
Christina Warren, senior tech correspondent at Mashable; she tweets @film_girl