Balancing robots and humans in talent decisions

AI in hiring is here to stay. How do we find the right balance between robots and humans making talent decisions?

Ever since I heard Frida Polli speak about Pymetrics in one of my classes earlier in the semester, I could not help but wonder whether using AI for hiring decisions would truly debias the process. A week later, I heard from Kate Glazebrook, who spoke about Applied. Kate firmly believes that algorithms have no place in talent acquisition; instead, she uses a mix of behavioral science and analytics, with screenings and decisions always conducted by a human being. Once again, I was left with more questions than answers.

AI-powered hiring companies seem to grow on trees these days, and each one claims to “make sure there’s never any bias creeping into the system,” as Scoutible’s CEO put it in a 2019 Wall Street Journal article. When a colleague asked Frida Polli about the human decisions behind Pymetrics’ algorithms, she listed all the reasons why hers was better and more precise than others’. The same happened in our class with Noah Zandan from Quantified. The fact that neither of these smart individuals questioned the system they had created, or felt their algorithms needed more probing, was, in my eyes, problematic.

Artificial intelligence is not going anywhere. Companies are demanding faster, more cost-effective practices that allow them to screen piles of candidates in a more systematic and less time-consuming way. Recently, companies have gone beyond using algorithms to screen résumés or make decisions based on a particular skill set; it seems that “technology is displacing human interviews,” according to a 2020 Financial Times article. Gamifying the selection process is capturing the interest of recruiters, who use real-life scenarios to assess a candidate’s cognitive or problem-solving skills. Some companies are willing to embrace the innovation but remain skeptical of AI. EY, for example, still prefers that the decision-making be done by a person: “We felt we wanted a blend of smart people and smart machines working together.”

However, not all HR managers are raving about incorporating AI into their processes, and not for the reasons I initially thought. Beyond the algorithm aversion we frequently talked about in class, HR managers see AI as a threat to their jobs, as People Management reported in a 2022 article. AI seems to be forcing the HR profession through a rapid upskilling process. Yet courses like People Analytics are not offered at every university across the world, and HR degrees tend to include few analytical courses that would prepare practitioners to be part of this new world.

Between the fear of bias, the fear of admitting that algorithms do not have all the answers, and the fear of algorithms stealing HR jobs, we are less ready than I thought to embrace this technological revolution. Organizational psychologist Tomas Chamorro-Premuzic has argued that “AI should be used to take care of predictable, repetitive, and low-level tasks, leaving humans to provide the creativity, curiosity, and empathy that AI cannot provide.” Maybe finding a balance between robots and humans in the hiring process is just about buying all of us some extra time.

Student comments on Balancing robots and humans in talent decisions

  1. Great analysis, Sofia. I love how you tied the things we’ve discussed in this class to multiple articles and to things you’ve learned in other classes. One thing I would add to your analysis is the potential for hiring algorithms to be susceptible to historical bias. Even if an algorithm is fair when it is built, that does not mean it will still be fair a year later. As many companies turn toward algorithmic talent acquisition, we are likely to see a dramatic change in how candidates approach an application. Candidates will try to learn how an algorithm evaluates their application and adapt to optimize their chances of getting a position. Meanwhile, the data the algorithm was trained on will be old and outdated and will not reflect these changes in the candidate pool. This could lead to an inaccurate and possibly unfair algorithm. I think there’s great potential for algorithmic talent acquisition, but there are many issues, and I agree that a balance between humans and algorithms is needed.

  2. I completely agree with the sentiments you present here, as I also worry about the role technology can or should play in “fixing” the hiring process. One thing that stood out to me in particular, however, is the role of current HR managers in this equation. Aside from not typically having formal people analytics training, I think there’s a deeper disconnect as well: many people in HR were drawn to the field by the person-to-person interaction. It is very common for HR professionals to be less tech-aware than other departments within the same company. I experienced a similar trend when working in behavioral health technology. We often worked with social workers, another field where much of the workforce is interested in personal relationships rather than technical skills. We struggled to get this cohort of our customers to adopt the technology due to a lack of know-how and interest, and I could foresee a similar set of obstacles as the HR field becomes more demanding about the people analytics skills it requires.

  3. Great insights, Sofía. I liked how you built an argument for caution about an overly accelerated shift to AI in hiring when HR still needs more data analysis training, in a field that is widely seen as dependent on soft skills and empathy.
