Is Amazon Planning on Using NLP to Control Employee Interaction?
Amazon’s new project raises some interesting questions.
According to recent reports, Amazon is developing a new workplace social media platform where employees can acknowledge each other’s work and send encouraging messages in posts called “Shout Outs.” Dave Clark, Amazon’s head of worldwide consumer business, describes the platform as a way to improve employee happiness and increase retention. Assuming Amazon drew on prior research, or conducted an internal statistical study, indicating that greater employee recognition increases retention, the next step is logical: let’s build a platform that encourages employees to give each other “Shout Outs!” Sounds great, right? Here is a perfect example of applying People Analytics to facilitate change.
While the “Shout Out” platform sounds nice in theory, it becomes less attractive considering Amazon’s plan for implementation. A document summarizing the program states, “We want to lean towards being restrictive on the content that can be posted to prevent a negative associate experience.” In practice, this means that managers will have the authority to report any “Shout Out” they deem offensive. Additionally, Amazon plans to implement NLP technology to detect negative sentiments and the presence of trigger words to help filter inappropriate “Shout Outs.” This is where our notion of a perfect application of People Analytics breaks down, and our ethical conflict arises.
Within their data-driven solution, Amazon plans to suppress employee interaction that is meant to be one-to-one (Clark suggested the experience would be more like a dating app than an open Slack forum). Although the censorship of any content is an ongoing topic of debate, things get strange fast with Amazon’s “Shout Out” platform. First, among the list of trigger words reported by The Intercept are “Union,” “Living Wage,” “Representation,” “Ethics,” “Fairness,” and other similar terms. Second, “Shout Outs” are planned to be tied into Amazon’s gamification program, which incentivizes employees with digital rewards, like stars and badges, for engaging more with their work and increasing efficiency. I might be breaking out my tinfoil hat, or I might be a reasonable skeptic. However, this technology would give Amazon the capability to use the gamification program to encourage employees to send Amazon-approved “Shout Outs” and compete for stars, while using the new platform to identify and suppress employees who advocate for social and work-related issues Amazon disapproves of.
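Part of what makes the reported trigger-word list so unsettling is how trivial this kind of filter is to build. Here is a toy sketch in Python, purely illustrative and not Amazon’s actual system: the function name, word list handling, and matching logic are all assumptions on my part (a real moderation pipeline would also layer on sentiment models and human review):

```python
# Toy illustration of trigger-phrase filtering for "Shout Out"-style posts.
# The phrase list comes from The Intercept's reporting; everything else
# (function name, matching logic) is an assumption, not Amazon's code.

TRIGGER_PHRASES = {"union", "living wage", "representation", "ethics", "fairness"}

def flag_shout_out(text: str) -> bool:
    """Return True if the post contains any trigger phrase (case-insensitive)."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in TRIGGER_PHRASES)

print(flag_shout_out("Great job on the product launch!"))           # False
print(flag_shout_out("Thanks for leading our union organizing!"))   # True
```

A dozen lines of substring matching is all it takes to surface every post mentioning “union” or “living wage” to a manager, which is exactly why the choice of what goes on the list matters far more than the technology itself.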
Is Amazon a wild, Orwellian corporation dead set on crushing unionizers and squeezing every ounce of work out of its employees? Probably not. The previously linked articles have plenty of quotes from Amazon spokespeople providing better context on “Shout Out” suppression, and emphasizing that the project is still in the planning stages and could get scrapped altogether. However, the entire situation raises some interesting questions about how privacy in the workplace and analytic techniques (like NLP) interact. Additionally, how did we get from our perfect application of People Analytics to an evil surveillance platform? I’d argue that the disconnect between sound statistical findings and sound implementation arises in the separation between the analyst and the solution. As practitioners of People Analytics, we ought to put more thought into the second- and third-order effects of statistical findings, and anticipate ethical issues that could arise during implementation. In my mind, Amazon’s situation reflects exactly why it probably isn’t enough to hand “the solution team” a stats report without some solid ethical consideration beforehand.
Hi Nico! This is a really interesting discussion – I agree with you that I’m skeptical. I love the concept of “shoutouts,” but I’m more inclined to believe that this type of encouragement is more impactful when done in person. This leads me to question – is opening this can of worms worth it to begin with? Further, within an organization’s culture, the ability to dissent is very important, so I worry about the message Amazon is sending if they restrict the types of comments people can leave.
Hi Nico, thanks for sharing! I really like your post, and I think it sets a perfect example of how a good intention heads south after bad execution – you simply cannot have a platform with 100% positive reviews on one hand, and a platform free from surveillance on the other. Amazon certainly chose to sacrifice the latter in order to build the former, and NLP became their handy tool for achieving this. My thinking is that they identified the right “WHAT needs to be solved,” which is the need to promote a culture of providing positive feedback. However, they chose the wrong “HOW to solve it.” Looking into their current feedback system and structure and inserting some nudges may be a better choice than implementing such a big initiative.
Interesting post! This reminded me of my company’s previous initiative of letting employees pass “thank-you” cards to colleagues every week for better engagement – though the company did not check what we thanked them for! I just hope that Amazon won’t ruin the positive side of people analytics. There is some silver lining in the last part of the original article…
“Our teams are always thinking about new ways to help employees engage with each other,” Amazon spokesperson Barbara Agrait said in a statement to The Verge. “This particular program has not been approved yet and may change significantly or even never launch at all. If it does launch at some point down the road, there are no plans for many of the words you’re calling out to be screened. The only kinds of words that may be screened are ones that are offensive or harassing, which is intended to protect our team”
This is a perfect example of the potential terrors of AI in the workplace. I also found this interesting considering Amazon’s history of interfering in employee interactions (e.g., when they changed traffic light timing to interfere with union organizing), and their extremely strict monitoring of their employees (e.g., their monitoring of employee time use, and their physical organization of factory floors to discourage employee interaction). Amazon’s paradox is that they want to foster connection between their employees, but at the same time are terrified of employees organizing together. This new idea seems poised to solve the problem of employee connection, but leans too far in the direction of digital Taylorism to be effective. It makes me wonder whether any initiative that gamifies recognition could work, or whether the simple act of gamification renders such recognition insincere and thus fruitless.
I love this post! It’s a great example of trying to use technology to solve a people problem. If communication and recognition are the issue, some digital high fives in dead-silent cubicle zones won’t solve the general issue. People crave authentic connection and engagement, so this is trying to solve a human issue digitally. It is especially problematic, as you allude to, given the way that censorship and intervention by Amazon managers/NLP/moderators could stifle the recognition, making it even less authentic than digital recognition can already feel. I think People Analytics at its best buttresses or better informs real-life human engagement, and at its worst attempts to replace it. Sadly, I think this falls squarely in the latter, misguided category.
Hi Nico,
Thanks for writing this post about Amazon! It brings up really interesting questions about the FORMAT recognition should take — in our last class, we discussed the power of super-recognizers, and employees who felt they did not receive recognition.
If the platform were to be widely adopted — and believed in — by Amazon employees, despite the competitive nature of Amazon as a whole, would it be a net positive? Or is it unrealistic to expect managers to really give these “Shout Outs” to employees? If the accolades aren’t genuine, the program isn’t much use.
How do you think Amazon might be able to ensure that the feedback is authentic and meaningful?
Interesting post! I liked the way you framed criticism around this important issue, as the boundaries between control and structure seem to be blurred in Amazon’s initiative.