One of the frameworks we use in academia (a distinction often blurred in industry and in the media) is a spectrum describing specific types of “bad” information. Disinformation and misinformation, for example, are not the same thing. Why does that spectrum matter? Because each type calls for very different interventions, both at a policy level, in terms of what governments and regulators are considering, and at a platform level, to manage specific problems.
Social media platforms have a responsibility to be cognizant of what’s going viral within their part of the online ecosystem. These platforms are the front line of defense when bad actors, whether state actors or domestic spammers, try to manipulate the information environment. They host this content, and they design the algorithms that amplify it and push it out to people. There is, in my opinion, a degree of responsibility for the downstream harms of the content they amplify.
A wide swath of actors is misusing the same playing field, one owned by a small handful of companies, and government regulators can’t respond quickly enough to address each new tactic that emerging technology makes possible.
In thinking about how to understand narratives on the internet, content is only one piece of the puzzle; who is behind the content is another. This conversation has most often focused on whether something was a foreign or domestic disinformation campaign, but the question for brands should generally focus on domestic actors, who are far, far more prevalent. Brands can view the internet as a series of factions: organized, hyper-motivated, networked subcultures united by shared passions, views, and beliefs. What brands need to understand is how these factions are talking about them, and the extent to which things like brand mentions translate into impact.
(This post was created using content from Renée DiResta’s presentation given at Summit 2020: brands & the disinformation reality.)