Exploring the ethics of machine learning at Facebook – where does it draw the line?

Facebook's strengths in machine learning put the company in difficult ethical dilemmas. How far should the company be willing to go, and how does it balance the potential impact it has on society and its users' privacy?

Should Facebook be allowed to listen in on your private conversations?

To many, the answer is an obvious “no”, and CEO Mark Zuckerberg denies that Facebook does so.[1] But try tweaking the question:

Should Facebook be allowed to listen in on the private conversations of potential terrorists? What if its machines have flagged someone, and it may be able to stop an attack?

The ethics around machine learning are contentious, and Facebook’s actions over the next decade may shape how society thinks about privacy, ethics, and technology.


Why is machine learning important to Facebook?

Many people focus on the ways Facebook uses machine learning to enhance its core product. For instance, it uses its DeepText and DeepFace systems to analyze users’ posts, allowing it to serve better ads and optimize news feeds with content that users will “care most about”.[2], [3]

Beyond its core ads, however, Facebook also uses machine learning to drive positive social impact in fields like suicide prevention and counter-terrorism.


Suicide prevention

Facebook now uses pattern recognition to sift through posts and identify people who may be at risk of suicide. For instance, it may flag posts expressing sadness or comments like “Are you OK?” or “I’m worried about you”.[4]


Facebook post that may be flagged for potential suicide risk


Once it does so, it can reach out to that user through Messenger with mental health resources and access to a crisis counselor.[5]

While some see this as a privacy violation, according to one suicide helpline chief, the move was “not just helpful but critical”.[6]
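The flagging described above can be illustrated with a toy sketch. Facebook’s actual system is a proprietary trained classifier, not a keyword list; this hypothetical example simply flags posts containing the kinds of phrases the article mentions:

```python
import re

# Phrases the article cites as potential risk signals. This keyword list
# is purely illustrative -- a stand-in for a real trained classifier.
RISK_PHRASES = [
    r"\bare you ok\b",
    r"\bi'?m worried about you\b",
    r"\bso sad\b",
]

def flag_post(text: str) -> bool:
    """Return True if the post matches any risk phrase (case-insensitive)."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in RISK_PHRASES)

posts = [
    "Had a great day at the beach!",
    "I'm worried about you. Call me back, please.",
]
print([flag_post(p) for p in posts])  # [False, True]
```

A real system would weigh many signals (comments, reactions, time of day) with a trained model rather than fixed phrases, which is part of why false positives and the privacy question are so hard to reason about.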


Fighting terrorism

Facebook has also used machine learning to identify and remove terrorist propaganda from ISIS and Al Qaeda. Since the beginning of 2018, it has removed more than 14 million pieces of such content and cut its average takedown time from 14 hours to less than two minutes.[7][8]

Facebook is using machine learning not just to drive its core product, but also to build products that help its communities.


The future

Over the next decade, Facebook will need to make serious decisions about both the how and the what of machine learning.

The How

How its machines acquire data is already sparking significant controversy. Many people insist that Facebook listens in on their conversations, citing experiences of merely talking about a product and then seeing an ad for it on Facebook.

Zuckerberg denies that Facebook does this.[9] Assuming he is telling the truth, there are two possible explanations for this phenomenon:

  1. Frequency illusion: We only notice the ad because we recently spoke about it, and we would have otherwise scrolled over it without noticing it.[10]
  2. Taking data from our connections: When we talk about products with friends, those friends may search for them online. Facebook could then get that data from our friends and, knowing we are friends, apply it to us.[11]
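The second explanation, inferring a user’s interests from friends’ activity, can be sketched as a simple traversal of the friendship graph. Everything here (the data, the names, the counting heuristic) is hypothetical; it is only meant to show how signals could propagate from friends to a user who never searched for anything:

```python
from collections import defaultdict

# Hypothetical search activity per user.
searches = {
    "alice": {"hiking boots"},
    "bob": {"hiking boots", "tents"},
    "carol": set(),  # carol never searched for anything herself
}

# Hypothetical undirected friendship graph.
friends = {
    "alice": {"carol"},
    "bob": {"carol"},
    "carol": {"alice", "bob"},
}

def inferred_interests(user: str) -> dict:
    """Score products by how many of the user's friends searched for them."""
    scores = defaultdict(int)
    for friend in friends.get(user, set()):
        for product in searches.get(friend, set()):
            scores[product] += 1
    return dict(scores)

# carol gets "hiking boots" scored 2 and "tents" scored 1,
# despite having no search history of her own.
print(inferred_interests("carol"))
```

Even this toy version makes the ethical point concrete: the user being profiled contributed no data of her own, which is exactly why inference through friends sits in a gray zone of consent.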

This raises serious ethical questions for the next decade. Should Facebook be able to listen in on users or learn about users through their friends’ data? What if it brings significant societal benefit?


The What

Beyond how its machines get data, Facebook will also need to decide what data it allows its machines to compile and analyze.

For instance, Michal Kosinski, the well-known Polish psychologist, claims that AI can use facial recognition to infer traits such as political views.[12] While these findings are disputed, if they hold, should Facebook pursue this capability, or is it too intrusive and uncertain?


What should Facebook do?

Moving forward, Facebook will find itself in a powerful position. It will no longer just be making technology decisions – it will be making ethical decisions that impact the world at large.

In doing so, I think it is critical that it does not make these decisions alone. It should partner with governments to shape policy, and it should crowdsource feedback from its users to inform its decisions.

I think this is important for two reasons:

  1. If Facebook acts alone, it is far more likely to face backlash and potential legal issues.
  2. Facebook now has 2.27 billion monthly active users.[13] I don’t think it is right for one company to make such important decisions without broad feedback.

By collaborating with governments and its users, it will gain more buy-in to its platform, face less backlash, and hopefully find the optimal balance between privacy and technology for society.


Machine learning and ethics

Facebook finds itself at the intersection of machine learning and ethics, and its decisions will not just affect its own business but also society at large.

Do you think Facebook should be able to listen in on people’s conversations? What if it has flagged them as a potential terrorist? Is the violation of privacy worth the potential societal benefit it can bring?


(794 words)



[1] Fagan, K. (2018). Mark Zuckerberg tells Congress it’s a ‘conspiracy theory’ that Facebook uses your microphone to spy on you. [online] Business Insider. Available at: https://www.businessinsider.com/facebook-does-not-use-your-microphone-to-spy-on-you-zuckerberg-2018-4 [Accessed 14 Nov. 2018].

[2] Kaput, M. (2018). How Facebook Uses Artificial Intelligence and What It Means for Marketers. [online] Marketingaiinstitute.com. Available at: https://www.marketingaiinstitute.com/blog/how-facebook-uses-artificial-intelligence-and-what-it-means-for-marketers [Accessed 14 Nov. 2018].

[3] Facebook Research. (2018). Machine Learning. [online] Available at: https://research.fb.com/category/machine-learning/ [Accessed 14 Nov. 2018].

[4] BBC News. (2018). Facebook uses AI to spot suicidal users. [online] Available at: https://www.bbc.com/news/technology-39126027 [Accessed 14 Nov. 2018].

[5] Ibid.

[6] Ibid.

[7] Newsroom.fb.com. (2018). Hard Questions: What Are We Doing to Stay Ahead of Terrorists? | Facebook Newsroom. [online] Available at: https://newsroom.fb.com/news/2018/11/staying-ahead-of-terrorists/ [Accessed 14 Nov. 2018].

[8] MIT Technology Review. (2018). How Facebook uses machine learning to fight ISIS and Al-Qaeda propaganda. [online] Available at: https://www.technologyreview.com/the-download/612406/how-facebook-uses-machine-learning-to-fight-isis-and-al-qaeda-propaganda/ [Accessed 14 Nov. 2018].

[9] Fagan, K. (2018). Mark Zuckerberg tells Congress it’s a ‘conspiracy theory’ that Facebook uses your microphone to spy on you. [online] Business Insider. Available at: https://www.businessinsider.com/facebook-does-not-use-your-microphone-to-spy-on-you-zuckerberg-2018-4 [Accessed 14 Nov. 2018].

[10] Newstatesman.com. (2018). Testing the long-held belief that Facebook listens to your conversations to advertise stuff. [online] Available at: https://www.newstatesman.com/science-tech/social-media/2018/03/testing-facebook-listens-your-conversations-adverts [Accessed 14 Nov. 2018].

[11] Inc.com. (2018). Here’s the Real Reason You Think Facebook Is Listening to Your Conversations. [online] Available at: https://www.inc.com/john-brandon/heres-real-reason-you-think-facebook-is-listening-to-your-conversations.html [Accessed 14 Nov. 2018].

[12] Levin, S. (2018). Face-reading AI will be able to detect your politics and IQ, professor says. [online] the Guardian. Available at: https://www.theguardian.com/technology/2017/sep/12/artificial-intelligence-face-recognition-michal-kosinski [Accessed 14 Nov. 2018].

[13] Statista. (2018). Facebook users worldwide 2018. [online] Available at: https://www.statista.com/statistics/264810/number-of-monthly-active-facebook-users-worldwide/ [Accessed 14 Nov. 2018].



Student comments on Exploring the ethics of machine learning at Facebook – where does it draw the line?

  1. I had no idea FB had implemented machine learning in its efforts for suicide prevention! What a great tool, I wonder if they have data on lives saved? I would be willing to bet they could use it when the government comes knocking with ethics complaints and new privacy regulations.

    1. Great question! I haven’t found a number but there are stories. For example, last year, a woman in Alabama went on Facebook Live with a knife and made it clear that she was intending to commit suicide. Facebook quickly noticed this, contacted the authorities, and were able to get her to a hospital before she did so. Here’s an article with more info on that story and how Facebook is using machine learning for suicide prevention: https://www.fastcompany.com/40498963/how-facebooks-ai-is-helping-save-suicidal-peoples-lives.

      Thanks for your question!

  2. The tension between the upsides and downsides of machine learning seems like a never-ending debate. Although these data analytics have the potential to deliver superb customer satisfaction as well as create social good, individual privacy is an issue that many people have started to pay more attention to. Over the past few years I have noticed that many of my friends in Thailand have decided to stop allowing social media applications or companies to access their private information, with some deciding to quit Facebook altogether. Trust will be the big concern in the future for any industry built on private user data. If that trust issue is not resolved, I suspect only garbage information will be left available in the market.

  3. This is such a difficult issue because of the extreme impacts that both sides of the debate may have. Those in favor of these invasive machine learning techniques may cite lives saved from suicide prevention and anti-terrorism applications. However, the importance of privacy and the sense of security that comes with it is much less tangible, so weighing the benefits is extremely subjective. While this does not address the entire problem, I think facebook could do a better job of explicitly informing users of their practices. I feel most of the resentment of these applications of machine learning comes from a lack of trust – when people agreed to sign up for Facebook they did not realize they were signing up for so much oversight from Facebook. If Facebook explicitly warned its users about its targeted advertising methods, suicide prevention, or counter-terrorism methods before enacting them, its users might trust the company more instead of feeling like these “spying” methods are being covered up.

  4. This is really interesting. Although for some reason, people still seem to have an expectation of privacy when they post on a public forum, I think you’ll find agreement that Facebook should be able to examine public posts. The question of whether they should look at private messages is another issue, but if they do it right, transparent with a clear purpose of public safety, I don’t think there’d be a huge pushback. Personally, I feel that the potential benefits here are huge and outweigh the privacy concerns.

  5. Very interesting article, Russel!

    While suicide prevention seems like a valid use case for Facebook AI, counter-terrorism comes off to me as a bit of a post-facto rationalization for extreme levels of data collection on the part of Zuck. Terrorism prevention is a role that has typically sat in the hands of governments – for good reason – and it’s not obvious to me why Facebook would be better at collecting, interpreting and (especially) acting on related data than the many public agencies that are currently dedicated to this purpose. Given that Facebook has adamantly protested requirements to share data with the NSA and similar organizations, I wonder how the company intends to act on potential threats they may identify? They can cut off information flow, but this introduces the additional ethical dilemma of whether they have a further responsibility to report possible attacks to those actually equipped to enact a more robust prevention plan (i.e., the military). It will be interesting to see how this plays out going forward!

  6. I agree that Facebook absolutely needs to work with governments on AI policy as they are already on the path to violating potential privacy and human rights laws. The future of AI can provide as many benefits as it does bad outcomes to humanity. The suicide prevention example is an excellent one to demonstrate that AI can be good for us. However, it is just as easy for AI features to go wrong and end up invading the privacy of users. Facebook has a very special opportunity ahead of them to influence the future of how we interact with AI systems and it should be extremely careful of how it treats this opportunity.

  7. Hi Russell,

    Great article!
    I clearly see the societal benefit that Facebook can bring by flagging terrorist/suicidal posts, and agree that Facebook should partner with governments to determine policy (given the importance of this issue and that they are not the sole company grappling with this dilemma). Similarly, I believe that Facebook needs to be more transparent with general consumers with regards to their rights and how Facebook is using their data. I believe that consumers would be more open to data mining, and the risk of backlash would decrease, if consumers felt they were in ultimate control. Along these lines, Europe has recently implemented GDPR, giving individuals more control over their personal data and the right to request that a company delete all their data (research suggests that US consumers would want to see similar initiatives implemented: https://www.janrain.com/resources/industry-research/consumer-attitudes-toward-data-privacy-survey-2018).

  8. You make an interesting point here – and I would be curious to see if Facebook works with any other platforms in order to “bridge the (knowledge) gap” on certain key takeaways the way Palantir does. I am wondering if this is possibly also a legal issue that users can accept simply by logging in and accepting a “User Agreement” – or if this would counteract (and exclude) certain people who are most at risk/ would benefit the most from this type of AI. Additionally, there is another issue that Facebook will face – that of country-specific laws – which I believe will have a huge impact on how they proceed in the future.

  9. This is really interesting. As a “techy” guy, I take a lot of pride in telling folks that they suffer from the “frequency illusion” that you mention, imagining the bots are listening in when they aren’t.

    The use case you bring up around whether Facebook should switch on the mic is interesting. If they have the remote capability to do so, I would imagine that it’s inevitable: either Facebook will do it on purpose, by accident, or via court order. My preference is of course the latter, but what happens when the computer is smart enough to just do it itself, with no permission? (Now we are in the middle of a sci-fi movie.)

    The question then is whether you go more of the Apple approach and limit your ability to see / access the data? If the ability doesn’t exist, it can’t be exploited for good… or evil.

  10. Great article and analysis of the various ways Facebook is leveraging AI and machine learning, Russell! I’ll try and sum up my response in a short paragraph but would love to continue the conversation offline as well!

    1. The good and the bad – True, there is a lot of good that I believe Facebook is doing when leveraging AI and machine learning. Suicide prevention and terrorist threats are great examples of this. At the same time, I worry about governments overstepping their boundaries and abusing the wealth of data Facebook has on its users. Facebook’s obligation to its users is to protect the community and their privacy, above all else. While I agree Facebook should be working with governments when it comes to regulation decisions, issues around net neutrality, etc, I very much worry about Facebook partnering with the government around issues specific to criminal activity. Check out what happened in our Brazil office a couple of years ago here (might help to better illustrate my point): https://www.theguardian.com/technology/2016/mar/01/brazil-police-arrest-facebook-latin-america-vice-president-diego-dzodan

    2. Are they listening on your conversations – while I won’t really ever know the truth I do believe Zuckerberg when he states that they are not listening in. Facebook is able to predict a lot about you as a person because it essentially knows everything you do online. Most, if not all, websites and apps you visit have a relationship with Facebook and therefore Facebook has access to all of that data, which they are then able to learn from. After Cambridge Analytica there has been a lot of press about the ethics around using your friends’ data to learn more about you. If it is not without consent, the company has stood by not being able to leverage that data, which I very much agree with.

    3. User privacy and control – for me this all comes down to transparency and control. Do you as a user know what Facebook is collecting and how they are using it? And do you have the ability to control that? Largely, in my opinion, no. Companies attempting to disrupt this are focused on decentralizing the web in an attempt to restore power back to the individual. Your data is yours to own and I believe we should have the controls to make technology work for us, not the other way around.

    All of that being said, I still believe there is a lot of good Facebook has done via their use of AI and machine learning. Facebook’s ad platform has given way to millions of small business owners being able to earn a living in a way that wasn’t possible a couple of years ago. Through the acquisition of Oculus I’m hopeful that we can bring AI to underserved parts of the world and provide them with greater access to education. In places around the world where the free press may be under attack, those media outlets can hopefully still exist on Facebook and connect with their relevant audiences leveraging their sophisticated technology.

    Great take on this and loved reading through everyone’s comments!


  11. Thanks for this interesting take. On one hand, I agree about the value that Facebook can bring to society via partnering with the government (or other organizations), particularly to flag and identify risk situations. Given how much of the world is on Facebook, and how much data is shared there, it’s a natural next step for the platform to be used for near-real-time intelligence. The rest of the web can be monitored in a similar way, and I think that informed users are generally aware that their online activities could be public. However, Facebook itself is a private experience; a user must have an account and be logged in to access the community. That creates a perception that anything happening within the community is private and shared only among friends. The majority of users certainly don’t follow privacy policy updates over time and therefore might not be aware of how their data is being used at all. In that case, does having an account really mean they are giving their consent? And is there really any line between using user data “for good” vs. “for business,” or even something darker? If user data is shared, how closely is that controlled? For me, this is a huge grey area and I think that, unfortunately, users and the agencies that are supposed to protect them are always the last to learn about misuse. It’s hard to deny some of the social benefits that you highlighted, but I can’t help but feel that these activities are not the place of a company like Facebook at all.

  12. In my opinion, I’d prefer a machine learning algorithm to look at my posts and messages on Facebook rather than a real person, as it used to be. The algorithm is a tool, and it judges based on the assumptions built into the model. So, for me, it’s more like a search engine combing through posts and messages in a hunt for violations of community values, which is in some sense better than having a real person look through my posts. Moreover, the value it could generate is enormous; even if it only ever helps save one life, it is still worth it.

  13. I think this is a great topic, and raises very valid concerns. However, one of the key challenge here is to identify the potential action that Facebook is going to take on the piece of information collected, and the associated judgement involved. Further, when users log onto Facebook, the inherent assumption is that of a private platform, and it worries me when my private information is being snooped upon. One could argue that the areas talked about in the post above are around suicides and terrorism, but who is to control the boundaries of where all Facebook starts snooping and sharing information. The on-going debate around the use of data is naturally very valid, and it would be interesting to see how these companies (not just Facebook) deal with this. The next question could revolve around the (in)ability of us humans to get off the social media bandwagon?

  14. Thanks for the thought-provoking article! Facebook has access to a lot of information which allows it to utilize machine learning to draw insights on its users. I’d always thought that this was a dangerous imbalance of power, but I liked how you brought out the potential benefits. I think that Facebook has the potential to do a lot of really good things with their data (as you laid out) and I think that if they explained how they could use the data, people would be more comfortable sharing. In my mind, it may not be ethical for people to withhold information, if that causes many others in the world to die. Thanks for the great food for thought!

  15. If we were to assume that the vast majority of users were willing to make this trade-off in the interest of international security and counterterrorism, Facebook would go public with their decision and alert their massive user base of the change. Even if Facebook did not send a push notification to every user to communicate this use of data, given its controversial nature, the media coverage would be overwhelming. At that point, I have to imagine that terrorist operatives with any level of sophistication would 1) disseminate false information / decoy tactics through Facebook or 2) abandon Facebook as a communication channel altogether. While this seems like a beneficial albeit controversial application of Facebook technology, I ultimately worry about its intelligence-gathering practicality more than its polarizing privacy implications.
