K Shen

Activity Feed

Great post! A couple of thoughts:

BetterUp’s claimed 97% match accuracy/satisfaction may not be all that meaningful on its own. To really understand what it means, we need a baseline of how satisfied employees would be without the algorithm – e.g., if they manually picked their own coach, or if coaches were assigned by hand. Given how stringent BetterUp’s hiring process for coaches seems to be, I suspect the baseline satisfaction rate would be quite high to begin with, so 97% isn’t that impressive.
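To make the baseline point concrete, here is a quick sketch of the arithmetic. Both numbers below are purely hypothetical (neither is a BetterUp figure), as is the sample size:

```python
from math import sqrt

# Hypothetical numbers for illustration only -- neither figure is from BetterUp.
algo_rate, baseline_rate = 0.97, 0.95  # assumed satisfaction with/without the algorithm
n = 500                                # assumed sample size per group

# Two-proportion z-test: is the 2-point lift distinguishable from noise?
pooled = (algo_rate + baseline_rate) / 2
se = sqrt(2 * pooled * (1 - pooled) / n)
z = (algo_rate - baseline_rate) / se

print(f"lift = {algo_rate - baseline_rate:.2%}, z = {z:.2f}")
# z is about 1.61 here, below the 1.96 threshold: even a 2-point lift
# over a strong baseline wouldn't be statistically significant at this
# sample size, which is why the 97% headline alone tells us little.
```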

Completely agree that companies should be thoughtful about algorithm use. Employees need to be treated as partners in the use of these algorithms, and that means respecting their privacy in coaching sessions. To fully develop, employees need an environment of psychological safety to discuss and learn from mistakes, and a company collecting data on what’s said in the sessions would undermine that.

On April 12, 2020, K Shen commented on The risks of collecting employee health data:

Very nice post! A few thoughts:

Completely agree on the concerns around data privacy and security. Personally, as someone working in the healthcare space, I’m absolutely shocked that the data in this case was not anonymized. Not only are there the issues you cited around layoff decisions, but there are also potential interpersonal effects in the workplace: it’s easy to imagine someone with access to the un-anonymized data bullying or even blackmailing a colleague with that knowledge.
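As a minimal sketch of what even basic pseudonymization could look like here (the key and ID are invented, and a keyed hash is just one possible scheme):

```python
import hmac
import hashlib

# Hypothetical secret held only by the data processor, never by the employer.
SECRET_KEY = b"rotate-me-regularly"

def pseudonymize(employee_id: str) -> str:
    """Replace a raw employee ID with a keyed hash, so records can be
    linked over time without revealing which employee they belong to."""
    return hmac.new(SECRET_KEY, employee_id.encode(), hashlib.sha256).hexdigest()

# The stored record carries only the opaque token, not "E12345".
record = {"employee_id": pseudonymize("E12345"), "resting_hr": 62}
print(record)
```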

Also agree on the points around regulation. One potential “solution” might be to have a third-party company handle all the health data, so that the client itself can’t see any of it (except perhaps some metadata). That third party would obviously need high integrity and top-level cybersecurity. Admittedly, this is a little like the Equifax-TransUnion-Experian triumvirate that handles credit reports for everyone, and those companies have been pretty bad on both integrity and cybersecurity. But putting the thought out there.
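A toy sketch of that separation, with every class and field name invented for illustration: the third-party processor holds the raw records, and the client’s entire view is aggregate metadata:

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class HealthDataProcessor:
    """Hypothetical third party: holds raw records, exposes only aggregates."""
    _records: list = field(default_factory=list)  # raw data never leaves this class

    def ingest(self, employee_id: str, resting_hr: int) -> None:
        self._records.append({"id": employee_id, "resting_hr": resting_hr})

    def client_metadata(self) -> dict:
        # The employer's entire view: counts and averages, no individual rows.
        return {
            "n_employees": len(self._records),
            "avg_resting_hr": round(mean(r["resting_hr"] for r in self._records), 1),
        }

proc = HealthDataProcessor()
proc.ingest("E1", 60)
proc.ingest("E2", 72)
print(proc.client_metadata())  # {'n_employees': 2, 'avg_resting_hr': 66.0}
```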

Very interesting article! Some thoughts:

A fair amount of AI has a “black box” problem: we cram data in one side and get results out the other, but can’t explain what happened in between. As predictive analytics gets more complex (e.g., using neural networks), it gets harder to explain exactly which factors contributed to the results, whereas interpretation is easy with, say, a linear regression.
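To make the contrast concrete, here is a small sketch on synthetic data (scikit-learn assumed available): a linear model’s coefficients directly state each feature’s contribution, which is exactly the per-feature statement a neural network doesn’t give you:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))  # e.g., three candidate features
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.1, size=200)

model = LinearRegression().fit(X, y)
# Each coefficient is directly readable as "effect of feature i on the outcome".
print(dict(zip(["feature_0", "feature_1", "feature_2"], model.coef_.round(2))))
# A neural network fit to the same data offers no such per-feature statement
# without extra tooling (e.g., SHAP) -- that is the "black box" problem.
```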

It seems that HireVue’s analytics fall into this black box category, since it’s stated that the company doesn’t always know how its system makes its decisions. However, in this particular case that may actually be an advantage for HireVue itself. Often, hiring companies won’t tell candidates the reason for their rejection, in case it leads to a discrimination lawsuit. With this black box AI, they literally can’t tell candidates why they were accepted or rejected.

Completely agree that HireVue AI can lead to less diversity. However, simply removing the AI wouldn’t solve the problem – the human recruiters likely share the same biases. I believe what’s needed is a very intentional effort on companies’ part to use AI to diversify their talent pools as well as to predict performance. For instance, AI could be used to select interviewers with dissimilar backgrounds, who would provide very different perspectives on a given candidate, as in the sketch below.
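As a sketch of that last idea, with all interviewer data and the distance metric purely hypothetical: a greedy farthest-point selection over interviewer “background” vectors picks a maximally dissimilar panel:

```python
import numpy as np

# Hypothetical interviewer "background" vectors (e.g., encoded function,
# tenure, education, geography). Purely illustrative numbers.
interviewers = {
    "A": np.array([0.0, 1.0, 0.2]),
    "B": np.array([0.1, 0.9, 0.3]),
    "C": np.array([1.0, 0.0, 0.8]),
    "D": np.array([0.5, 0.5, 0.5]),
}

def pick_diverse_panel(pool: dict, k: int) -> list:
    """Greedy farthest-point selection: start anywhere, then repeatedly
    add the interviewer farthest from everyone already chosen."""
    names = list(pool)
    panel = [names[0]]
    while len(panel) < k:
        best = max(
            (n for n in names if n not in panel),
            key=lambda n: min(np.linalg.norm(pool[n] - pool[p]) for p in panel),
        )
        panel.append(best)
    return panel

print(pick_diverse_panel(interviewers, 2))  # ['A', 'C'] -- the most dissimilar pair
```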