
When online harassment doesn’t follow the rules


Is there a ‘cost’ to online harassment? Is it quantifiable? Beyond the toll it takes on human victims, does the rampant toxicity we see across social networks and communities affect those networks’ bottom lines?

Well, in December 2018, Amnesty International released a robust report on online harassment against women politicians and journalists on Twitter, which seemingly caused a 13% drop in Twitter’s market value two days after it was published. What made Amnesty International’s “Troll Patrol” report so damaging was its finding that, regardless of political affiliation, women journalists and politicians were targeted by online harassment more than other demographics. Black women were targeted even more heavily: they were 84% more likely than white women to be mentioned in abusive or problematic tweets.

Let’s dig into the report’s background a bit: In April 2018, Milena Marin, the project lead for Amnesty International’s “Troll Patrol,” reached out to me to help guide aspects of the data labeling. The project was the largest of its kind, with over 600,000 tweets labeled by 6,524 volunteers from all across the world.

“Online harassment is not black and white. It’s contextual, it’s nuanced, and it can seem innocuous.”

This was clearly a large-scale data project. But our biggest challenge was that the report needed to capture the nuances of a gray area: the kinds of harassment that don’t technically break terms of service or content policies, but are still harassing in nature. Women and marginalized groups face this problem on a daily basis.

We need to recognize this gray area. Online harassment is not black and white. It’s contextual, it’s nuanced, and it can seem innocuous. Online harassment can also be harmful in aggregate: receiving a misogynistic tweet, for example, but over and over and over again. This kind of harassment can have a silencing effect on women and marginalized groups.

When discussing the gray area, Marin of Amnesty International said in a phone interview: “I’m really happy we made the decision to make this distinction, not just the clear-cut ‘yes, abusive’ or ‘no, not abusive.’ It’s hard to categorize and a lot of people had issues with it, and that was the number one question when we published the report; people were asking, ‘What is the difference? Why did you label it like this?’ It also made it extremely relevant. When we talk to women and journalists about their daily experiences, [they explain]: ‘It’s not the single tweet that breaks me, it’s the volume. It’s every day.’ It usually doesn’t break the policy, but when I’m on the receiving end, and I get this content day in and day out….” She emphasized, “We [the Amnesty International team] wanted to understand how this affects women, and how it silences women. You have to have that differentiation: not just abusive tweets like rape threats and death threats, but also the more veiled sexism and regular misogyny that is not against the rules, but does affect their work and their ability to express themselves freely on Twitter.”

If Twitter, and social networks and communities in general, only think of content in terms of ‘is it abusive’ or ‘does it break this specific rule,’ we will continue to create systems that harm women and marginalized groups. What’s important here is to understand the nuances of harassment: the gray areas that are hard to define in policy but deeply affect harassment victims. The problem is the lack of nuanced policies and responses; harassment content and behavior should be viewed not through the lens of content takedowns, but through the lens of response, recidivism, and rehabilitation.
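To make the aggregate point concrete, here is a minimal sketch in Python of how a volume-based signal differs from a per-tweet rule check. Everything in it is an assumption for illustration: the `observe` function, the `WINDOW` and `THRESHOLD` values, and the “problematic” label are hypothetical, and nothing here describes Twitter’s or Amnesty International’s actual systems.

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

# Hypothetical parameters -- the report does not define a threshold;
# these values are illustrative only.
WINDOW = timedelta(days=1)   # how far back to look
THRESHOLD = 20               # mentions that constitute an aggregate signal

# recipient -> timestamps of recent "problematic" (not rule-breaking) mentions
recent: dict[str, deque] = defaultdict(deque)

def observe(recipient: str, ts: datetime, label: str) -> bool:
    """Return True when a recipient crosses the aggregate-volume threshold.

    Assumes tweets arrive in timestamp order. A per-tweet rule check would
    never fire here, since each tweet is below the abuse line; the signal
    is the repetition directed at one person.
    """
    if label != "problematic":
        return False
    q = recent[recipient]
    q.append(ts)
    # Drop mentions that have aged out of the rolling window.
    while q and ts - q[0] > WINDOW:
        q.popleft()
    return len(q) >= THRESHOLD

# e.g. observe("@journalist", datetime.now(), "problematic")
```

The design point is that no single call returns True on the strength of one tweet; only the accumulated volume directed at one recipient does, which is exactly the pattern that per-tweet policies miss.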
