People Analytics is considered “the next big thing” by many. The ability to use data to better understand and positively influence decisions on recruitment, performance, wellbeing, retention, and promotions holds great allure for a variety of companies. Hilton, for example, uses AI to conduct follow-up interviews with call center job applicants and to schedule successful candidates’ final offer-extension calls. Especially in the United States, the limits of what’s possible are defined by the limits of technology. Rarely does anyone ask in advance how far the influence of technology should go. Regulation and legislative frameworks in the US are usually created reactively, once a problem has become so apparent that stakeholders demand the intervention of legislators. An example of this is Amazon building an AI for hiring decisions that eventually had to be scrapped because of gender bias.
Europe takes a different approach to regulating technology. The General Data Protection Regulation (GDPR), which came into effect in May 2018, severely restricts employers’ ability to use employee data for people analytics. It does so in a preventive manner, i.e., it anticipates possible data-privacy problems rather than reacting to them.
To understand why regulation differs so much between Europe and the US, one needs to look at European history in more depth: all of Eastern Europe was under communist rule in the second half of the 20th century. In countries like the Baltic states, Poland, former Yugoslavia, and the German Democratic Republic (today part of the Federal Republic of Germany), authoritarian governments used the information they held on their citizens to spy on them, eliminate political dissidents, and, in the worst cases, deport, torture, and kill people who acted against the government-dictated ideology and regime. Many people living in today’s Germany, Poland, the Czech Republic, etc. witnessed these practices themselves and are therefore suspicious of large organizations with authority over them using their data for opaque or unwanted purposes.
GDPR, as a consequence, incorporates seven principles intended to mitigate the risk of misuse of personal data, including employee data. The principles most relevant for people analytics are transparency, purpose limitation, data minimization, and storage limitation. All of them significantly increase the cost of performing people analytics. Transparency, for example, requires that every individual whose data is used in people analytics be informed about that use; where consent is the legal basis for processing, it must be given explicitly. Concretely, if a people analytics team wants to use a person’s salary data or performance reviews, they need to ask the individual for explicit, written permission. The data minimization principle limits processing to the data necessary for the purposes of the analysis, which makes it hard to incorporate additional individuals or additional variables.
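To make these principles concrete, a hypothetical sketch of how a people analytics pipeline might enforce explicit consent, purpose limitation, and data minimization before any analysis runs. All names (the record type, fields, and purpose label) are illustrative assumptions, not a real API or any specific company's implementation:

```python
# Hypothetical sketch: enforcing GDPR-style consent checks, purpose limitation,
# and data minimization before a people-analytics query runs.
from dataclasses import dataclass, field

@dataclass
class EmployeeRecord:
    employee_id: str
    salary: int
    performance_score: float
    home_address: str                 # irrelevant to the analysis below
    consented_purposes: set = field(default_factory=set)

# Purpose limitation: the analysis declares its purpose up front.
PURPOSE = "pay-equity-analysis"
# Data minimization: only the fields strictly needed for that purpose.
ALLOWED_FIELDS = {"employee_id", "salary", "performance_score"}

def minimized_dataset(records):
    """Return only consenting employees, stripped to the allowed fields."""
    return [
        {f: getattr(r, f) for f in ALLOWED_FIELDS}
        for r in records
        if PURPOSE in r.consented_purposes   # explicit consent check
    ]

employees = [
    EmployeeRecord("e1", 52000, 3.8, "Elm St 1", {"pay-equity-analysis"}),
    EmployeeRecord("e2", 61000, 4.1, "Oak St 2", set()),  # no consent given
]

print(minimized_dataset(employees))
```

The design point is that the filter sits in front of the analysis: the analyst never sees non-consenting individuals or out-of-scope fields like the home address, rather than being trusted to ignore them afterwards.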
This piece of legislation illustrates a key conflict and trade-off that the field of people analytics faces: how much data privacy are we willing to give up in exchange for insight and impact?
The answer to this question cannot be a general one; it will depend on historical context, collective attitudes and emotions, and other regional differences. The field of people analytics can best contribute to working out this trade-off through maximum transparency and documentation: it should strengthen the insightfulness of its analyses while proactively applying the greatest possible care to the treatment of data.