The Ethics of People Analytics

A Review of “How to Ethically Secure People Analytics” by Andy Hames, Senior Vice President North America, People First

In his article, “How to Ethically Secure People Analytics,” Andy Hames does a commendable job of presenting both the advantages and potential pitfalls of applying people analytics within a company, but he fails to fully capture the role of ethics in this field.

Hames begins by highlighting how crucial it is for a company to understand which of its employees are thriving, which might be struggling, and which might be ready for the next step in their careers. The better you understand the people who make up your team, the better you can support them and promote a healthy, engaged, and productive working environment. With better information, you can also more effectively leverage and retain a diverse range of talent. I believe that most people can get on board with this application of people analytics. People helping people!

At the same time, as with any form of data collection, there are pitfalls to be aware of. Hames provides a cogent example from The Daily Telegraph, which in 2016 piloted desk sensors to monitor how its office space was being used and whether it was being used effectively. While the intentions may have been innocent, it is likely no surprise that the decision met with backlash from employees who feared they were being surveilled by their employer. As Hames points out, increased data collection (especially in the form of surveillance) can leave employees feeling coerced to behave in a certain way, which can limit creativity, collaboration, and overall team productivity.

While I broadly support how Hames lays out the field of people analytics, I take issue with his concluding arguments about the ethics of decision making and what he terms “pragmatic people analytics” (or ethical people analytics).

First, Hames claims that algorithms can never replace human intuition when it comes to making the right, moral decision. But the implicit claim that humans reliably make the right, moral decision is a far cry from the truth. There is much research showing that judges are more lenient in their rulings after they have eaten. A study published in the Proceedings of the National Academy of Sciences evaluated over 1,100 judicial rulings and found that, even when controlling for variation among judges using a fixed effects model, there was a statistically significant increase in the likelihood of a favorable ruling after a food break. If judges, who are typically viewed as exemplars of impartiality and rationality, cannot keep extraneous factors from influencing their decisions, how can we even begin to imagine that others, whose roles are not defined by impartiality and rationality, can be trusted to make the right, moral decision?
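To make the statistical point concrete, here is a minimal sketch of what such a fixed effects analysis might look like. The data is simulated, and the variable names (judge, hours_since_break, favorable) are hypothetical stand-ins; this is not the study’s actual code or data.

```python
# A minimal sketch (simulated data, not the PNAS authors' code) of testing
# whether rulings grow less favorable as time since a food break increases,
# while controlling for per-judge differences with fixed effects.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1100
judges = rng.integers(0, 8, n)                  # 8 hypothetical judges
hours = rng.uniform(0, 3, n)                    # hours since last food break
judge_leniency = rng.normal(0, 0.5, 8)[judges]  # per-judge baseline leniency

# Simulate the studied effect: favorability falls as hours since break grow.
logit_p = 0.5 + judge_leniency - 0.8 * hours
favorable = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))
df = pd.DataFrame({"judge": judges, "hours_since_break": hours,
                   "favorable": favorable})

# C(judge) adds one dummy per judge -- the "fixed effects" -- so the
# hours_since_break coefficient is estimated within judges, not across them.
model = smf.logit("favorable ~ hours_since_break + C(judge)", data=df).fit()
print(model.summary())
```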

It is important to note here that I do not believe algorithms should replace human decision making either. In many instances, the data used to train an algorithm is inherently biased, and any algorithm trained on biased data will produce results that only further perpetuate those biases. For example, if we tried to identify which attributes of a person charged with a crime best predict their risk of recidivism, our training data would be limited to those who were released after being charged, since there is no way to measure recidivism for someone who remains detained. However, as the example above shows, who gets detained is itself a biased outcome: someone might have been the recipient of a less favorable pre-lunch decision and been detained, meaning they will never appear in our training set and we can never learn how their characteristics relate to recidivism. We can therefore never truly understand the relationship between our inputs and our output.
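To illustrate this selection problem, here is a minimal sketch on simulated data. Everything in it (the observed attribute x, the unobserved factor u, the coefficients) is hypothetical; the point is only that when the release decision depends on a factor that also drives recidivism, a model trained on released individuals alone misestimates risk for the full population.

```python
# A minimal sketch (simulated, hypothetical data -- not a real recidivism
# dataset) of how a biased release decision distorts a model trained only
# on the people we can actually observe.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 50_000
x = rng.normal(0, 1, n)  # observed attribute of each person
u = rng.normal(0, 1, n)  # unobserved factor that judges react to

# True recidivism depends on both the observed and unobserved factors.
p_recid = 1 / (1 + np.exp(-(-0.5 + 0.8 * x + 0.8 * u)))
recid = rng.binomial(1, p_recid)

# Biased release decision: people with high u are more often detained
# (think of the "less favorable pre-lunch decision"), so they vanish from
# our training data even though u also drives recidivism.
p_release = 1 / (1 + np.exp(-(1.0 - 1.5 * u)))
released = rng.binomial(1, p_release).astype(bool)

# We can only train on the released group, whose u is systematically low.
model = LogisticRegression().fit(x[released].reshape(-1, 1), recid[released])

true_rate = recid.mean()
pred_rate = model.predict_proba(x.reshape(-1, 1))[:, 1].mean()
print(f"true recidivism rate (everyone):     {true_rate:.3f}")
print(f"model's estimate from released only: {pred_rate:.3f}")
```

Running this, the model trained on the released group systematically underestimates risk for the population as a whole, because the detained (and unobserved) group carried the very factor driving recidivism.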

The second argument Hames makes is that in “pragmatic people analytics,” companies can retain the trust of their employees by being transparent. While I completely agree that transparency is key, I do not agree that transparency can be equated with trust. A simple example demonstrates the point: a company alerts its employees that it is conducting a sentiment analysis of all emails sent from the sales team to external clients, in order to better understand which types of email exchanges are associated with successful sales pitches. There is clear transparency here, but it is not likely to instill much trust in employees.
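For concreteness, here is a minimal sketch of the kind of email sentiment analysis described in that example, using NLTK’s off-the-shelf VADER scorer; the emails and field names are hypothetical. Disclosing that something like this runs is transparent, but it does nothing by itself to make the monitoring feel trustworthy.

```python
# A minimal sketch (hypothetical emails and field names) of scoring
# outbound sales emails for sentiment with NLTK's VADER analyzer.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

emails = [  # hypothetical outbound sales emails
    {"rep": "A", "won_deal": True,
     "text": "Great speaking with you, excited to move forward!"},
    {"rep": "B", "won_deal": False,
     "text": "Per my last email, we still have not received a response."},
]

for e in emails:
    # compound ranges from -1 (most negative) to +1 (most positive)
    score = sia.polarity_scores(e["text"])["compound"]
    print(f"rep={e['rep']} won={e['won_deal']} sentiment={score:+.2f}")
```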

Hames clearly and succinctly outlines the field of people analytics. However, in his analysis of its ethical considerations, his seemingly blind trust in human-led decision making and in the power of transparency is concerning.

Sources:

Hames, Andy. “How to Ethically Secure People Analytics.” HR Technologist. https://www.hrtechnologist.com/articles/hr-analytics/how-to-ethically-secure-people-analytics/

Danziger, S., Levav, J., & Avnaim-Pesso, L. (2011). “Extraneous factors in judicial decisions.” Proceedings of the National Academy of Sciences, 108(17), 6889–6892. https://www.pnas.org/content/pnas/108/17/6889.full.pdf


Student comments on The Ethics of People Analytics

  1. Thanks for sharing this article. I particularly enjoyed your perspective that “Any algorithm that is trained with biased data will produce results that only further perpetuate those biases.”

    Overall, I agree with your argument. However, as more and more of our lives are tracked with data, I wonder if people are going to become increasingly comfortable sharing personal information. Or rather, I wonder if they will simply come to care less about data privacy. And if people care less about their own data privacy, then perhaps there is less of an ethical concern?

  2. Thanks for sharing! This is an interesting topic, and I definitely agree with your concerns about trust in human-led decision making and the impact of transparency. As you mention, there is an inherent “Catch-22” in algorithms and the data fed to them: humans are inherently biased, which leads us to rely on algorithms; yet we also know the data can be biased, which leads us to modify the data we feed into them. We therefore doubt human-decided outcomes, yet at the same time tweak algorithms until they produce outcomes closer to what we want or expect. Additionally, while transparency does not inherently produce trust in employees, I would argue that a lack of transparency creates far more distrust. In the office example, people may be wary of desk sensors, but I think they would be far more upset about the sensors if they were hidden.

  3. Very interesting article!

    I couldn’t agree with you more on your stance on the use of algorithms – it definitely seems like a mistake to insist that algorithms could never replace human intuition in moral decision making when we know so much about how flawed human decision making can be. Your example of the judicial rulings makes apparent how even important human decisions can be influenced by outside factors that aren’t related to the decision at hand. Algorithms could definitely be leveraged to aid in these decisions.

    On the point about transparency and trust, I also agree that the two are not necessarily synonymous. However, I do think that being transparent is the first step in gaining the trust of employees. Not only do organizations need to let their employees know their data is being tracked, but they should also be clear about the analyses and outcomes the data is being used for. It might also be helpful to repeatedly assure employees that their data isn’t being used for any other purposes, that it’s truly anonymous, and that it won’t be used to single out any employees. By being transparent about the entire process and how the data will be used, I do think that it’s possible for organizations to collect data from their employees and still retain their trust.
