According to recent reports, Amazon is developing a new workplace social media platform where employees can acknowledge each other’s work and send encouraging messages in posts called “Shout Outs.” Dave Clark, Amazon’s head of worldwide consumer business, describes the platform as a way to improve employee happiness and increase retention. Assuming Amazon drew on prior research, or an internal statistical study, indicating that greater employee recognition improves retention, the next step is logical: build a platform that encourages employees to give each other “Shout Outs!” Sounds great, right? Here is a perfect example of applying People Analytics to facilitate change.
While the “Shout Out” platform sounds nice in theory, it becomes less attractive considering Amazon’s plan for implementation. A document summarizing the program states, “We want to lean towards being restrictive on the content that can be posted to prevent a negative associate experience.” In practice, this means that managers will have the authority to report any “Shout Out” they deem offensive. Additionally, Amazon plans to implement NLP technology to detect negative sentiments and the presence of trigger words to help filter inappropriate “Shout Outs.” This is where our notion of a perfect application of People Analytics breaks down, and our ethical conflict arises.
Within this data-driven solution, Amazon plans to suppress employee interaction that is meant to be one-to-one (Clark suggested the experience would be more like a dating app than an open Slack forum). Although the censorship of any content is an ongoing topic of debate, things get strange fast with Amazon’s “Shout Out” platform. First, among the trigger words reported by The Intercept are “Union,” “Living Wage,” “Representation,” “Ethics,” “Fairness,” and other similar terms. Second, “Shout Outs” are planned to be tied into Amazon’s gamification program, which incentivizes employees with digital rewards, like stars and badges, for engaging more with their work and increasing efficiency. I might be breaking out my tinfoil hat, or I might be a reasonable skeptic. However, this technology would give Amazon the capability to use the gamification program to encourage employees to send Amazon-approved “Shout Outs” and compete for stars, while using the new platform to identify and suppress employees who advocate for social or work-related issues Amazon does not approve of.
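To see why this kind of filtering raises red flags, it helps to notice how blunt the mechanism is. Below is a minimal, hypothetical sketch of a trigger-word plus sentiment filter. The trigger words come from The Intercept’s reporting; everything else (the function name, the toy sentiment lexicon, the scoring logic) is invented for illustration and says nothing about how Amazon’s actual system works.

```python
import re

# Terms reported by The Intercept as planned trigger words.
TRIGGER_WORDS = {"union", "living wage", "representation", "ethics", "fairness"}

# A toy negative-sentiment lexicon. A real system would use an NLP model,
# but the flagging mechanics are the same: match, then suppress.
NEGATIVE_WORDS = {"unfair", "terrible", "angry", "overworked"}

def flag_shout_out(message: str) -> dict:
    """Report which trigger words and negative-sentiment words a message contains."""
    text = message.lower()
    triggers = sorted(w for w in TRIGGER_WORDS if w in text)
    negatives = sorted(w for w in NEGATIVE_WORDS if re.search(rf"\b{w}\b", text))
    return {
        "blocked": bool(triggers or negatives),
        "trigger_words": triggers,
        "negative_words": negatives,
    }

# A perfectly positive message still gets flagged because it mentions "union".
print(flag_shout_out("Great job organizing the union meeting!"))
print(flag_shout_out("Congrats on shipping the new feature!"))
```

The sketch makes the ethical point concrete: a keyword filter has no notion of intent, so a congratulatory message about a union meeting is suppressed just as readily as a hostile one. The judgment is baked into the wordlist, not the analysis.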
Is Amazon a wild, Orwellian corporation dead set on crushing unionizers and squeezing every ounce of work out of its employees? Probably not. The previously linked articles have plenty of quotes from Amazon spokespeople providing better context on “Shout Out” suppression and emphasizing that the project is still in the planning stages and could get scrapped altogether. However, the entire situation raises some interesting questions about how workplace privacy and analytic techniques (like NLP) interact. Additionally, how did we get from our perfect application of People Analytics to an evil surveillance platform? I’d argue that the disconnect between sound statistical findings and sound implementation arises in the separation between the analyst and the solution. As practitioners of People Analytics, we ought to put more thought into the second- and third-order effects of statistical findings, and anticipate ethical issues that could arise during solution implementation. In my mind, Amazon’s situation reflects exactly why it probably isn’t enough to hand “the solution team” a stats report without some solid ethical consideration beforehand.