Hanyin Cheng

Activity Feed

On April 15, 2020, Hanyin Cheng commented on AI in Predicting Candidate Potential:

Katherine – thank you for this interesting post.

Your example of bias in Amazon’s hiring practices made me think about whether companies have a legal responsibility to ensure that their algorithms do not discriminate against any of the protected classes (gender, age, country of origin, etc.). I would like to say yes, but I also see how that could be very constraining for companies trying to leverage more analytics. Your specific example of certain verbs being used more often by men is interesting because it is non-obvious unless you have specifically investigated the correlation. Do companies need to exhaustively look for all bias across all variables and try to eliminate them? Is this even possible? Or can we tolerate some level of bias? If a certain data pattern is discovered to be a statistically significant predictor of performance and is 40% more prevalent among a specific age group, and a company uses it to make hiring decisions, should that count as discrimination? What if it’s 10% more prevalent…? Where do we draw the line?
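For what it’s worth, regulators have drawn one version of this line already: under the EEOC’s “four-fifths rule”, a selection procedure is flagged for adverse impact when one group’s selection rate falls below 80% of the highest group’s rate. Here is a minimal sketch of that check, with made-up numbers purely for illustration:

```python
# A minimal sketch of the EEOC "four-fifths rule" check.
# All group names and counts below are invented for illustration.

def selection_rate(hired: int, applicants: int) -> float:
    """Fraction of applicants from a group who were hired."""
    return hired / applicants

# Hypothetical applicant pools for two age groups.
rates = {
    "under_40": selection_rate(hired=30, applicants=100),  # 0.30
    "over_40":  selection_rate(hired=18, applicants=100),  # 0.18
}

# Adverse impact ratio: lowest selection rate vs. highest selection rate.
impact_ratio = min(rates.values()) / max(rates.values())

print(f"impact ratio = {impact_ratio:.2f}")  # 0.60 here
if impact_ratio < 0.8:
    # Below four-fifths: treated as prima facie evidence of adverse impact.
    print("selection procedure would be flagged for adverse impact")
```

Of course, a bright-line ratio like this only catches bias on the variables you already thought to check, which is exactly the exhaustiveness problem above.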

Thanks again

On April 15, 2020, Hanyin Cheng commented on Wisdom of the Crowd: Interviewing Your Network with Searchlight.ai:

Hi John – Thanks for this interesting post. Their mission to “counteract prestige bias” is similar to Eightfold (the company I blogged about), which is trying to help companies identify non-traditional candidates who are just as qualified as traditional ones.

My problem with Searchlight’s approach is that the reference checks candidates offer up are typically biased to be very positive, so I wonder how helpful the collected data really is. It reminds me of the testimonials feature on LinkedIn, which helps add color but doesn’t provide a balanced review of the candidate.

Overall, I like the idea of helping companies “counteract prestige bias”. I see it as a win-win for both employers and job candidates.

On April 15, 2020, Hanyin Cheng commented on BetterUp: Finding the best coach for you at the right moment:

Hi Miho,

Thank you for this interesting post! I also wonder how much value their personalization and data are adding, or whether the measured improvements are simply a function of employees engaging with a mentor (any mentor). In other words, does the AI matching really improve the outcome?
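One way to answer that would be a holdout experiment: match some employees to coaches with the algorithm, assign others a coach at random, and compare outcomes. Here is a minimal sketch of that comparison, with invented scores and a plain two-sample t-test (nothing here reflects BetterUp’s actual data or methods):

```python
# Hypothetical holdout comparison: does AI matching beat random matching?
# Outcome scores are simulated purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated post-coaching improvement scores for the two arms.
ai_matched = rng.normal(loc=0.60, scale=0.20, size=200)    # algorithm-matched coach
random_match = rng.normal(loc=0.55, scale=0.20, size=200)  # any coach

# Two-sample t-test: is the difference in means distinguishable from noise?
t_stat, p_value = stats.ttest_ind(ai_matched, random_match)
print(f"mean lift from AI matching: {ai_matched.mean() - random_match.mean():.3f}")
print(f"p-value: {p_value:.3f}")
```

If the “any mentor” arm improves nearly as much as the AI-matched arm, the matching algorithm itself is adding little value.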

I also think BetterUp’s human + algorithm coaching approach is better than Quantified Communications’ fully automated approach. With QC, I thought the analytics were interesting but hard to internalize. I think it would have been helpful to have a human coach to help me interpret the data and elaborate on ways I could improve.

Thanks again!