trishim11's Profile
Activity Feed
Interesting read! The issue most hiring teams face is confusing correlation with causation, as you rightly pointed out. I really like the idea of making the tool more candidate-focused, so that candidates can identify gaps in their skill set and improve to stay relevant in a changing world. The bigger issue I see with the product concerns soft skills. While hard skills are easy to measure and to pinpoint in job descriptions, evaluating soft skills risks perpetuating biases. Scoring candidates against past hiring history could mean the algorithm constantly surfaces individuals who resemble those already in the organization. As the world comes to understand the value of diversity, changing a culture requires new people and new ways of thinking, and those people would be lost with Workday's approach. Also, as applicants learn which skills get them to the top of the list, it will become easier to pass the first screen simply by tweaking resumes. This can be mitigated only if Workday continues to overlay human judgment on top of the system's results. As with LinkedIn, it is easy to game the system by getting friends to vouch for skills, which makes endorsements a very unreliable measure.
When I have been involved in hiring team members, a common thread I have noticed is that we all tend to pick every skill that seems appealing, so the required skill set becomes a superset of what is actually needed to get the job done. All too often, people end up hiring the best candidate rather than the right one for the job. These overqualified hires become disappointed after joining, which leads to either attrition or dissatisfaction in the workplace. The value I see in this product is surfacing the right set of skills needed for a role, and nothing more, so that we don't over-evaluate candidates and instead focus on what's truly important.
Great read, Sarah! The initial question you raised has been studied and debated extensively. The fact is that, as humans, we are quite forgiving toward human error and not at all toward machine error. Rationally, the point at which we should start preferring machines is when their error rate falls below the combined error rate of the existing healthcare system. But people still expect a machine to be 100% accurate, so for the near future most people will prefer algorithms to be used as a complement to human judgment.
Unfortunately, since algorithms are built by humans who carry bias, removing that bias is a very hard problem to solve. I do like the idea of stress-testing with data scientists. I also wonder whether, instead of removing race as a field altogether, we could use it to our advantage. Using such fields to bucket the data into groups would let us analyze each group individually: it could reveal which factors are significant within specific groups, help identify underlying causes that affect certain groups more than others, and, ideally, surface bias when the results are compared across all buckets.
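(A minimal sketch of the "bucket by group, then compare" idea above. The records and group labels are entirely made up for illustration; in practice they would come from the system's historical predictions and outcomes.)

```python
from collections import defaultdict

# Hypothetical records: (group, model_prediction, actual_outcome).
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 0), ("A", 1, 1),
    ("B", 0, 1), ("B", 0, 0), ("B", 1, 1), ("B", 0, 1),
]

# Instead of dropping the sensitive field, use it to bucket the data.
buckets = defaultdict(list)
for group, pred, actual in records:
    buckets[group].append((pred, actual))

# Compare simple metrics across buckets; large gaps flag potential bias.
for group, rows in sorted(buckets.items()):
    accuracy = sum(p == a for p, a in rows) / len(rows)
    positive_rate = sum(p for p, _ in rows) / len(rows)
    print(f"group {group}: accuracy={accuracy:.2f}, "
          f"positive_rate={positive_rate:.2f}")
```

Real disaggregated audits would use larger samples and richer metrics (false-positive rates, selection rates, and so on), but the structure is the same: per-bucket evaluation followed by a cross-bucket comparison.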
Well articulated, Elisa! I share your concern about the privacy of the data, but I wanted to dig deeper into which use cases they have looked at. While I don't agree with this approach at first glance, I do think there is merit in framing this tool as a productivity enhancer. The device could be modified to help individuals understand their mistakes and find more efficient ways to work. That would require a clear contract and understanding with leadership: workers should be able to see their personal data for self-improvement, while management should only get an aggregate overview to act on changes that improve the whole workforce's effectiveness. Also, in response to the previous comment, even investments in robotics have been viewed negatively, since they tend to replace front-line workers. In contrast, investing to help employees increase their worth within the organization, by freeing up time spent correcting mistakes and by upskilling, could be a positive innovation.