Impact of News Digitalization on Democracies

Technology platforms have digitized the media supply chain, leading to a proliferation of news. But can they manage what they have created?

Context: Digitalization of News Media

The emergence of digital technology platforms such as Google/YouTube has fractured the traditional models for creating and distributing media content. Google/YouTube is particularly influential, handling more than 70% of worldwide online search requests [1]. Through its open platform, Google/YouTube rewrote the conventional rules of content production and ownership, which had long been guarded by capital-intensive, analog technologies, such as production studios and equipment, owned by a handful of publishing stakeholders.

The news supply chain, from development and financing through production, sales, distribution, and consumption, was tightly controlled by the limited availability of distribution channels such as TV, radio, and other public infrastructure. Since YouTube launched, however, an average citizen can quickly create and upload content; roughly 400 hours of video are now uploaded to YouTube every minute, approximately 1,000 days’ worth of video every hour [2]. Yet today, Google/YouTube is increasingly battling the behemoth it has created. Two challenges ensue: 1) how to manage the abundant supply of content, and 2) what role, if any, it should play in controlling access to that content.


Grappling with the Heart of Its Success  

The rapidity with which content can be created and surfaced, the very feature that made Google/YouTube potent, has crippled the organization’s ability to manage the resulting supply, with debilitating consequences for society. The lowered barrier to entry gives abundant information airtime and an audience, without regard for quality or veracity. Creators, all with equal access to the platform, have discovered ways to manipulate Google/YouTube’s “neutral,” personalization-driven algorithms to spread low-quality and false information dressed up as journalism.
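
To see why a purely engagement-driven, “neutral” ranking can reward sensational falsehoods, consider the toy sketch below. It is not YouTube’s recommender, whose design is not public; the Video fields, scores, and ranking objective are all invented for illustration.

```python
# Illustrative toy only, not YouTube's actual recommender (which is not
# public). All fields and scores are invented for the example.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    predicted_watch_time: float  # hypothetical engagement estimate
    veracity: float              # hypothetical 0..1 quality score

def rank_feed(candidates):
    # A "neutral," engagement-maximizing ranker: veracity never enters
    # the objective, so sensational falsehoods can rise to the top.
    return sorted(candidates, key=lambda v: v.predicted_watch_time,
                  reverse=True)

feed = rank_feed([
    Video("Measured policy explainer", predicted_watch_time=2.1, veracity=0.9),
    Video("Outrage-bait conspiracy clip", predicted_watch_time=7.8, veracity=0.1),
])
print([v.title for v in feed])  # the conspiracy clip surfaces first
```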

In an increasingly polarized global political climate, hateful misinformation that is perceived as journalism has spread virally [3]. Consumers of that content unknowingly sort themselves by political ideology, exacerbating political tensions, with potentially deadly consequences. After extremists carried out a deadly terrorist attack on London Bridge in June 2017, YouTube found itself in the limelight amid reports that the attackers had been radicalized by watching sectarian and hateful messages on YouTube [5] [6]. Legislators have called for greater oversight of Google, and some political figures, such as Steve Bannon, have called for regulating Google/YouTube and other private technology companies as public utilities [8].

The flaw in personalized “neutral” algorithms has undermined Google/YouTube’s founding principles of openness and transparency. The implications are enormous for Google/YouTube as it aims to rival television and traditional media as a source of public information. That the platform feeds users spurious content exposes the shortcomings of the medium, despite its scale and accessibility.


Google/YouTube Fights Back

Public scrutiny from governments and society has pressured Google/YouTube into action. In the short term, Google has announced an initiative called “Project Owl” to provide “algorithmic updates to surface more authoritative content” and to demote low-quality content [4]. It has also devoted more engineering resources to applying machine-learning research to “train new content classifiers” that help identify and remove extremist and false content [7].
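
To make the classifier idea concrete, here is a minimal sketch of how a text-based content classifier can be trained and used for triage. This is an illustration under stated assumptions, not Google’s actual system: the real classifiers, features, and training data are proprietary, and the toy examples and scikit-learn pipeline below are my own.

```python
# A minimal text-classification sketch in the spirit of the approach
# described above; not Google's actual system. Examples are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "join us and attack the unbelievers",        # violating (label 1)
    "how to bake sourdough bread at home",       # benign (label 0)
    "they deserve violence for what they are",   # violating (label 1)
    "highlights from last night's match",        # benign (label 0)
]
labels = [1, 0, 1, 0]

# TF-IDF features feeding a logistic-regression classifier.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

# High-scoring uploads would be routed to human review, not auto-removed.
score = classifier.predict_proba(["rise up and attack them"])[0][1]
print(f"estimated violation probability: {score:.2f}")
```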

Google has also announced a set of policies aimed at curbing misinformation and hateful, extremist content. It has promised to remove videos that violate its community guidelines. For more dubious content that does not violate the code of conduct, Google will make the videos harder to surface and ineligible for monetization [6].
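
In code terms, the policy described above amounts to a tiered decision rule. The sketch below is only a schematic of that tiering; the threshold, names, and inputs are assumptions made for illustration, not Google’s published logic.

```python
# Sketch of the tiered enforcement described above. The 0.5 threshold
# and all names are assumptions for illustration only.
def enforcement_action(violates_guidelines: bool,
                       borderline_score: float) -> str:
    if violates_guidelines:
        return "remove"  # clear community-guideline violations come down
    if borderline_score > 0.5:
        # Dubious but non-violating: harder to surface, no ad revenue.
        return "limit_discovery_and_demonetize"
    return "no_action"

print(enforcement_action(False, 0.8))  # -> limit_discovery_and_demonetize
```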

In the long term, Google, which has so far relied on computer- and machine-based video analysis, will greatly increase the number of independent experts in YouTube’s Trusted Flagger program. It plans to enlist experts from 63 NGOs to help categorize videos that could be democratically inflammatory [7].
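
One plausible way such human input could be combined with machine scoring, sketched below under my own assumptions about weights and inputs, is to prioritize the human-review queue so that reports from vetted Trusted Flagger experts count far more than ordinary user flags.

```python
# Illustrative review-queue prioritization; all weights are assumed,
# not drawn from YouTube's actual system.
def review_priority(machine_score, user_flags, trusted_flags):
    # Reports from vetted experts (e.g., Trusted Flagger NGOs) carry
    # far more weight than ordinary user flags.
    return machine_score + 0.05 * user_flags + 1.0 * trusted_flags

queue = sorted(
    [
        ("video_a", review_priority(0.3, user_flags=10, trusted_flags=0)),
        ("video_b", review_priority(0.2, user_flags=2, trusted_flags=1)),
    ],
    key=lambda item: item[1],
    reverse=True,
)
print(queue)  # video_b outranks video_a despite far fewer total flags
```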

In conjunction with these actions, I further urge management to work more closely with industry collaborators, including other digital technology platforms. Google/YouTube may be the single largest enabler of the proliferation of misinformation, but it is part of a much larger digital ecosystem. To stem the virality of misinformation more effectively, an international coalition of Facebook, Twitter, and Microsoft, together with governments around the world, should work together.


Further Discussion:

While the debilitating effects of pervasive misinformation on societies remain unequivocal, less clear are the questions of free speech versus censorship and the proper role of private companies. At what point does monitoring “low quality” content become a ban on free speech and the marketplace of ideas? In the wake of the white-supremacist rally in Charlottesville, Virginia, tech companies, including Google/YouTube, blacklisted the neo-Nazi blog the Daily Stormer [8]. They have become less neutral platforms and more “custodians of the public interest.” But is that a role for tech companies to play? Furthermore, is it even possible to fashion a democratic social media in a highly divisive culture?

(Word Count: 793)


Works Cited:

[1] “Google Inc.,” Britannica, September 28, 2017, https://www.britannica.com/topic/Google-Inc, accessed November 15, 2017.

[2] Mark Robertson, “500 Hours of Video Uploaded to YouTube Every Minute,” Tubular Insights, November 13, 2015, http://tubularinsights.com/hours-minute-uploaded-youtube/, accessed November 15, 2017.

[3] Jack Nicas, “YouTube Cracks Down on Conspiracies, Fake News,” MarketWatch, October 5, 2017, https://www.marketwatch.com/story/youtube-cracks-down-on-conspiracies-fake-news-2017-10-05, accessed November 14, 2017.

[4] Ben Gomes, “Our Latest Quality Improvements for Search,” Google Blog, April 25, 2017, https://blog.google/products/search/our-latest-quality-improvements-search/, accessed November 15, 2017.

[5] Camila Schick and Stephen Castle, “‘I Trusted Him’: London Attacker Was Friendly with Neighbors,” New York Times, June 5, 2017, https://www.nytimes.com/2017/06/05/world/europe/london-attack-theresa-may.html, accessed November 15, 2017.

[6] Daisuke Wakabayashi, “YouTube Sets New Policies to Curb Extremist Videos,” New York Times, June 18, 2017, https://www.nytimes.com/2017/06/18/business/youtube-terrorism.html?_r=0, accessed November 15, 2017.

[7] Kent Walker, “Four Steps We’re Taking Today to Fight Terrorism Online,” Google Blog, June 18, 2017, https://blog.google/topics/google-europe/four-steps-were-taking-today-fight-online-terror/, accessed November 15, 2017.

[8] Adrian Chen, “The Fake-News Fallacy,” The New Yorker, September 4, 2017, https://www.newyorker.com/magazine/2017/09/04/the-fake-news-fallacy, accessed November 14, 2017.


Student comments on Impact of News Digitalization on Democracies

  1. It is a very interesting and sensitive topic. While democracy is highly valued in this country, it is interesting for me as a foreigner to see how people’s attitudes change with the latest global trends in extremism. For me, the question is whether technology is neutral and what role tech companies are supposed to play in politics. Answers to these questions will have many specific impacts, such as how tech companies plan to share data with governments, or what assumptions are built into the algorithms behind AI. If tech companies are supposed to fight extreme content on their platforms, then the question becomes how they draw the line between extreme content and content that simply differs from American values. Given that these tech companies usually operate their platforms globally, what stand should they take when it comes to conflicts of ideology between two countries? How should they deal with governments from different countries?

  2. This is one of the most important issues shaping how people think these days and, in effect, what our world looks like. Despite the controversies around controlling free speech, I agree that big tech must control the content it distributes via its channels. I also agree with the notion that there should be a coalition of top big-tech players working on this topic together, but the standards should be set in collaboration with a representation of governments. I would recommend institutionalizing a UN-big tech ethics committee to set standards for published content. And beyond just counting on algorithms, big tech should actually invest in significant personnel to screen content until AI is proven qualified enough to replace humans in this task.

  3. This is an interesting topic. With regard to the question of free speech, I don’t think the censoring activities of media platforms inhibit free speech. YouTube does provide an avenue for voices from all over the world to be heard, but having your video removed from YouTube because of derogatory content is not a violation of an individual’s constitutional rights. Whatever opinion you have, you are free to voice it, but I think YouTube’s stance is that you won’t tarnish its brand in the process. I believe YouTube’s position on democratic social media is respect for all people, and it is YouTube’s right to stand up for what it believes is morally right.

  4. This is extremely relevant in the present-day context.

    In my opinion, Google and Facebook have to take on a larger role, verifying not just the quality of content but also its implications. There are live examples of how terrorist organizations like ISIS recruit actively through social media. Similarly, paid online media websites (a.k.a. fake news) have been known to polarize the most rational of communities, leading to inhuman activities like riots and genocide. Given the platforms the digital world provides, there is a need to be accountable for their effects on the offline world.

  5. While I have a lot of moral opinions about the role that technology companies — namely, Facebook and Google — can and should play in mitigating disinformation, I think it might be more interesting to explore the economic role of other players in this ecosystem. Namely, the theory of a free market would indicate that the advertisers play the most crucial role in punishing “bad actors”; by blacklisting those publishers, YouTube channels, websites (and so on) that are proven to spread disinformation, these entities can de facto shut down bad actors by choking off their access to capital via advertising revenue.

    Of course, this mechanism of control is easier said than done. For a long time, Google failed to provide appropriate tools to target publishers of false information, hate speech, and the like, making it difficult for advertisers to prevent their money from reaching such “bad actors”. However, now that such tools are more readily available, advertisers themselves can drive much of the change needed in this ecosystem, without fully relying on Facebook, Google, and other platforms to be the de facto arbiters of appropriate speech. While a reliance on a group of advertisers isn’t much better than a reliance on a few technology companies, it certainly does diffuse the responsibility more broadly and show how — if platforms develop the adequate, neutral monitoring tools — they can encourage others to participate more fully in the conversation around free speech and democracy in the internet age.
