“Creepy”: When Big Tech Personalization Goes Too Far

As machine learning drives advances in personalization, do firms have the controls in place to protect consumer privacy? Can this be done without stifling innovation?

Digital personalization, which started with Facebook newsfeeds and Netflix movie recommendations, has gone from a differentiator to an expectation. But when personalization goes too far, customers get “creeped out” and the offending company risks losing their trust. This privacy risk is outsized for companies that rely on advances such as machine learning to drive personalization and inform product development. It is greatest of all for big tech: the firms with the most data and talent, operating at the bleeding edge of artificial intelligence.

Why personalization matters

Personalization is no longer a nice-to-have in the world of marketing; customers demand it. According to Salesforce, 52% of customers will go elsewhere if email content isn’t personalized. [1] Infosys found that 86% of customers say personalization influences what they purchase, with 25% saying it significantly influences their purchase decisions. [2] The good news for companies: there is a return on this customer-experience investment. According to Harvard Business Review, in partnership with McKinsey, “personalization can deliver five to eight times the ROI on marketing spend and can lift sales by 10% or more.” [3] And from Gartner: “we expect that by 2018, organizations that have fully invested in all types of online personalization will outsell companies that have not by more than 30%.” [4] To remain competitive and relevant to customers, firms need to invest in personalization. Those that do will be rewarded.

Big tech on the bleeding edge

Facebook, Google, and Amazon (among others) are all engaged in developing ever-increasing levels of personalization. Much of this recent progress is driven by machine learning: algorithms that leverage vast amounts of customer data to predict what is most relevant to you. Machine learning enables these companies to deploy personalization at scale, delivering a unique, individualized experience to each of their millions of users. This personalization appears in many forms. How can Facebook generate a unique newsfeed for each of its 2 billion users? How do Gmail’s suggested responses sound more like you now than they did before? How are Amazon’s product-recommendation emails hyper-relevant to you? The answer to all of these is machine learning, and this personalization is just the tip of the iceberg.

As machine learning advances, big tech is finding new ways to leverage personalization to create new products and experiences. Facebook’s ad targeting is so accurate that some customers are uneasy when the exact product they viewed a couple of hours earlier appears in Instagram. Some customers even suspect Facebook is using phone microphones to listen and target ads based on what it hears. [5] Real or not, this perception alone is enough to damage a brand. Google Clips, a camera, leverages machine learning to recognize people and take spontaneous photos. While some may find this convenient and interesting, others’ “first instinct was: Holy s*** this is creepy.” [6] Amazon’s new store, Amazon Go, leverages computer vision and machine learning to track shoppers in the store and determine what they purchase. This “Just Walk Out” technology eliminates the need for a cashier, but some shoppers can’t get over the “Big Brother aspect of their shopping trip.” [7]
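To make the mechanics a bit more concrete, below is a minimal sketch of item-to-item collaborative filtering, one classic family of techniques behind “recommended for you” experiences. The data, names, and output here are illustrative assumptions, not any of these companies’ actual systems:

```python
# Minimal item-to-item collaborative filtering sketch (illustrative only).
import numpy as np

# Rows = users, columns = products; 1 means the user viewed or bought the item.
interactions = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
    [0, 1, 1, 1],
])

def item_similarity(matrix: np.ndarray) -> np.ndarray:
    """Cosine similarity between every pair of item columns."""
    norms = np.linalg.norm(matrix, axis=0, keepdims=True)
    normalized = matrix / np.clip(norms, 1e-9, None)
    return normalized.T @ normalized

def recommend(user: int, matrix: np.ndarray, top_n: int = 2) -> list:
    """Score unseen items by their similarity to the user's history."""
    scores = item_similarity(matrix) @ matrix[user]
    scores[matrix[user] > 0] = -np.inf  # never re-recommend items already seen
    return list(np.argsort(scores)[::-1][:top_n])

print(recommend(user=0, matrix=interactions))  # [2, 3]
```

Production systems layer many more signals (clicks, dwell time, location, and so on) onto this same core idea, which is precisely why the results can feel uncannily personal.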

Innovative or creepy? A fine line

It can be argued that each of these products is an innovation that creates new benefits for users. But is the benefit enough to outweigh the privacy concerns? How does a firm decide whether a personalized product is appropriate? These are difficult questions to answer, but a few strategies can help mitigate the risk. First, operators should ask themselves a simple question: is the benefit of this new product enough to justify the resulting loss of privacy? This is difficult to measure, but it can be done through surveys, focus groups, or user testing, and it is worth the effort to avoid a PR disaster. At a minimum, this simple heuristic ensures operators are thinking through the privacy risk of their product. Second, firms need to put boundaries in place so product teams don’t launch a product that carries reputational risk. If teams have these privacy conversations in silos, mistakes will happen. Organizations need to develop clear rules for teams to operate under, so they can continue to innovate while keeping customers’ best interests in mind. These strategies are by no means an all-encompassing solution; every firm engaging in personalization will need to test and iterate strategies to protect consumer privacy.
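As one rough illustration of the benefit-versus-privacy heuristic, a sketch like the following could turn user-testing survey results into a launch gate. The field names and thresholds are illustrative assumptions, not an established standard:

```python
# Hypothetical launch gate built on "benefit vs. privacy cost" survey data.
from dataclasses import dataclass

@dataclass
class SurveyResponse:
    perceived_benefit: int  # 1-5: "this feature is useful to me"
    privacy_concern: int    # 1-5: "this feature makes me uncomfortable"

def passes_privacy_review(responses: list,
                          min_net_score: float = 1.0,
                          max_creeped_out_share: float = 0.10) -> bool:
    """Gate a launch on two signals: average net benefit, and the share of
    respondents reporting strong discomfort (a 4 or 5)."""
    net = sum(r.perceived_benefit - r.privacy_concern for r in responses) / len(responses)
    creeped_out = sum(r.privacy_concern >= 4 for r in responses) / len(responses)
    return net >= min_net_score and creeped_out <= max_creeped_out_share

responses = [SurveyResponse(5, 2), SurveyResponse(4, 1), SurveyResponse(3, 5)]
print(passes_privacy_review(responses))  # False: one in three is creeped out
```

Where exactly these thresholds sit is the kind of boundary an organization should set centrally, rather than leaving each product team to decide in a silo.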

Personalization is evolving, and there are no easy answers. We’re left with important questions to think about and debate. As machine learning continues to advance, how will big tech know where to draw the line? Are the proper internal controls in place to prevent inadvertent “creepiness”? What other strategies might mitigate privacy risk without stifling innovation? If operators aren’t already thinking through these questions, they should be.


References

[1] Miller, B. (2017). 3 Reasons to Personalize Every Email. [online] Salesforce Blog. Available at: https://www.salesforce.com/blog/2017/08/personalize-every-email.html [Accessed 12 Nov. 2018].

[2] Infosys.com. (2013). Rethinking Retail. [online] Available at: https://www.infosys.com/newsroom/press-releases/Documents/genome-research-report.pdf [Accessed 12 Nov. 2018].

[3] Ariker, M., Heller, J., Diaz, A. and Perrey, J. (2015). How Marketers Can Personalize at Scale. [online] Harvard Business Review. Available at: https://hbr.org/2015/11/how-marketers-can-personalize-at-scale [Accessed 12 Nov. 2018].

[4] Elkin, N. (2017). The Long and Winding Road to Real-Time Marketing. [online] Blogs.gartner.com. Available at: https://blogs.gartner.com/noah-elkin/the-long-and-winding-road-to-real-time-marketing/ [Accessed 2 Nov. 2018].

[5] Langone, A. (2018). [online] Time Money. Available at: http://time.com/money/5219041/how-to-turn-off-phone-microphone-facebook-spying/ [Accessed 12 Nov. 2018].

[6] Ehrenkranz, M. (2017). [online] Gizmodo.com. Available at: https://gizmodo.com/creepiness-wont-kill-the-google-clips-camera-1819181113 [Accessed 12 Nov. 2018].

[7] Denn, R. (2018). I thought Amazon’s new cashier-free store was creepy. My teenage son couldn’t care less. [online] Washington Post. Available at: https://www.washingtonpost.com/lifestyle/food/i-thought-amazons-new-cashier-free-store-was-creepy-my-teenage-son-couldnt-care-less/2018/01/25/1f805838-020c-11e8-9d31-d72cf78dbeee_story.html?noredirect=on&utm_term=.fff0fcc00fbd [Accessed 12 Nov. 2018].

 


Student comments on “Creepy”: When Big Tech Personalization Goes Too Far

  1. Thank you for this article. It is really interesting to see that the entire industry is grappling with this problem, but that the most visible players (Facebook, Google) are at the highest risk of damaging their reputations. In response to your questions about how companies can ensure they are not crossing the line, I propose that consumers could be made more a part of the process. Could they more easily choose whether they want personalized ads? Could there always be a clear path to opt out, or perhaps even a requirement that consumers opt into these services? I also wonder if these large companies will ever come together to create a set of standards to which every company will hold itself accountable. Will that kind of partnership among competitors ever be possible? Will they ever share learnings so as to better understand what crosses the line between innovation and creepiness?

    1. Thanks for the comment! Great thought on the opt-in/opt-out solution. In fact, this is something my team considered when we rolled out a personalization feature on Amazon.com and in the app, and I agree it can be a great way to mitigate the risk. To address your other point, I’m skeptical of firms creating a shared set of standards; it will take either a major privacy-loss event or government regulation to spur these companies to collaborate on a solution.
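    As a minimal, hypothetical sketch (not Amazon’s actual design), an opt-in gate could be as simple as defaulting personalization to off for anyone without an explicit consent flag:

```python
# Hypothetical opt-in consent gate for a personalized feed (names illustrative).
consent = {"alice": True, "bob": False}  # persisted per-user consent flags

def personalized_feed(user_id: str) -> list:
    return [f"picked-for-{user_id}"]  # stand-in for an ML-ranked feed

def generic_feed() -> list:
    return ["editors-choice"]  # non-personalized fallback

def get_feed(user_id: str) -> list:
    if consent.get(user_id, False):  # unknown users default to OFF (opt-in)
        return personalized_feed(user_id)
    return generic_feed()

print(get_feed("alice"), get_feed("bob"), get_feed("carol"))
```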

  2. Excellent article – thank you, Dominic. There is indeed a very fine line between innovation and creepiness. However, I agree with T123 that consumers are not reacting to the problem. Although concern about privacy is growing among consumers, they seem to continue using the same services. Consumers probably think that the benefit they get from big tech providers (e.g., Google, Facebook, Amazon) is greater than the risk of sharing their data with them and allowing them to use that data freely.

    However, in my opinion, this could be an unstable situation, one that could change quickly. What if there is a massive data leak at any of these companies? What if individual workers at any of these companies use consumers’ data for inappropriate personal purposes? What if this data is sold to third parties? Any of these could disrupt the relationship between consumers and big tech providers. Would such an event spell the end of today’s degree of personalization?

    1. Thanks for the comment! I agree that consumers are part of the problem; I read about this in my research for the article. There is an age gap in the way consumers think about privacy: Millennials and younger generations give away their data freely and don’t see it as much of an issue, while older generations are much more hesitant. This raises the question: as our population ages, will we as a society be too trusting of these large organizations owning and controlling all of our data? If that’s the case, what checks and balances will hold companies accountable? Lots to think about!

  3. Dominic – top shelf work here. One question I’m grappling with is whether limited rollouts (similar to Uber) could be used as market tests to gauge user reactions to the services you described above. While beta tests and focus groups are very effective for many industries (Cineplanet), the issue I see here is that the “tail risk” is still catastrophic. Even if 99.999% of users have positive experiences as a result of this personalization, one negative experience (such as when Target inadvertently revealed a teen girl was pregnant before her family was aware) could lead to a massive loss in users.

    1. Thanks for the comment! Great point. I had not thought of a limited rollout as a way to test the risk, but I love it; I think it would be more effective than user testing alone, for the reasons you mention. The tail risk is another layer to think about: even if a company does all the right things, it still needs an action plan in the event something like the Target example happens.
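    As a hypothetical illustration of that limited rollout, one common approach is to deterministically bucket users so only a small, stable cohort sees the new feature while reactions are measured:

```python
# Hypothetical staged-rollout gate: hash user IDs into stable buckets.
import hashlib

def in_rollout(user_id: str, feature: str, percent: float) -> bool:
    """Stable assignment: the same user always gets the same answer."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map hash to [0, 1]
    return bucket < percent / 100.0

print(in_rollout("user-42", "personalized-feed", percent=1.0))
```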

  4. I laughed reading the microphone-paranoia part. Many of us have had this conversation, and I am guessing people have also asked you about the Echo and whether it listens all the time. One of the issues with these technologies, especially if the APIs are open, is that hackers can get their hands on them; even if companies don’t use them for their own profit, you could still run the risk of having your home watched, recorded, and visited by uninvited guests. Facebook and Alexa are great, but it only takes one incident, as the anonymous commenter noted above.
