We all know fake news is a problem. That leaves us with an interesting question: who should be responsible for verifying the veracity of news? Is it reporters? Is it users? Is it professional fact-checkers? Is it algorithms? Each option has its own shortcomings.
For example, if we rely on algorithms, we have to worry about trolls targeting those algorithms specifically. There are plenty of articles out there describing how fake news detection through natural language processing is vulnerable to adversarial attacks: carefully crafted pieces of fake news that are able to slip past machine learning models.
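To make that concrete, here is a toy sketch of how such an attack works. Everything in it, including the training headlines, the bag-of-words classifier, and the character-swap perturbation, is a hypothetical stand-in for real NLP detection systems, not a description of any deployed detector.

```python
# Toy illustration only: a tiny bag-of-words "fake news" classifier and a
# simple character-substitution perturbation. The data and model here are
# hypothetical stand-ins for real NLP-based detection systems.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "scientists confirm vaccine is safe in peer reviewed study",   # real
    "official report shows the economy grew last quarter",         # real
    "shocking miracle cure that doctors do not want you to know",  # fake
    "secret elites control the weather with hidden machines",      # fake
]
labels = [0, 0, 1, 1]  # 0 = real, 1 = fake

vec = CountVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(texts), labels)

headline = "shocking miracle cure that doctors do not want you to know"
# The model assigns this headline a high estimated probability of being fake.
print(clf.predict_proba(vec.transform([headline]))[0, 1])

# Adversarial tweak: swap a couple of characters for look-alikes. A human
# reads the same claim, but "sh0cking" and "mirac1e" no longer match the
# model's vocabulary, so the features it learned to distrust disappear and
# its estimated fake-probability drops.
perturbed = "sh0cking mirac1e cure that doctors do not want you to know"
print(clf.predict_proba(vec.transform([perturbed]))[0, 1])
```

The point is that the attacker never has to change what a human reader takes away from the text; they only have to change what the model sees.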
What if we turn our attention to the user? If we want a way to improve people’s media literacy without introducing bias of our own, then it really comes down to providing low-level information, such as the lineage of a particular news object.
I’m currently working on a project at the Harvard John A. Paulson School of Engineering and Applied Sciences that aims to use technology to provide a tamper-resistant edit history of each digital object in a story. Our basic idea is to allow users to verify the provenance of a digital news story.
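As a rough sketch of what a tamper-resistant edit history could look like, each revision of a news object can be chained to the previous one with a cryptographic hash, so altering an earlier entry after the fact breaks the chain and becomes detectable. This is purely illustrative and not the project's actual design; the `Edit` and `EditHistory` names are made up for the example.

```python
# Minimal sketch (not the project's actual design) of a hash-chained edit
# history: each revision records the digest of the previous revision, so
# tampering with any earlier entry invalidates the chain.
import hashlib
import json
from dataclasses import dataclass

@dataclass
class Edit:
    author: str
    content: str
    prev_hash: str

    def digest(self) -> str:
        payload = json.dumps(self.__dict__, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

class EditHistory:
    def __init__(self) -> None:
        self.edits: list[Edit] = []

    def append(self, author: str, content: str) -> None:
        prev = self.edits[-1].digest() if self.edits else "genesis"
        self.edits.append(Edit(author, content, prev))

    def verify(self) -> bool:
        """Recompute the chain; any tampered entry breaks a link."""
        prev = "genesis"
        for edit in self.edits:
            if edit.prev_hash != prev:
                return False
            prev = edit.digest()
        return True

history = EditHistory()
history.append("reporter", "Draft: city council approves new budget")
history.append("editor", "Final: City Council approves $2M budget")
print(history.verify())               # True: chain is intact
history.edits[0].content = "Altered draft"
print(history.verify())               # False: tampering is detected
```

A real system would also need signed identities, replication, and so on; the point of the sketch is only that provenance can be checked mechanically, while the judgment about trustworthiness stays with the user.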
We want to expose the ways in which a particular piece of news is created and then say, “User, you have to decide.” In deciding what’s fake and what’s not, it’s important to recognize that many of these judgments are deeply personal.
Overall, human fact-checkers don’t seem to scale very well, and they get accused of bias. Algorithms scale well but are easy to fool with adversarial attacks. Right now, users don’t have enough raw information to make well-informed decisions about what is good news and what is bad news. Our goal is to give users richer raw information so they can make considered evaluations of trustworthiness.
(This post was created using content from James Mickens’ presentation given at Summit 2020: brands & the disinformation reality.)