Paula Álvarez's Profile
Activity Feed
Thank you for sharing that article, Aurora! It's so eloquently written, and I think it captures my sentiments after writing this post. The notion of transparency he discusses is, I think, the critical aspect here, and it's what I had in mind when I wrote about the need for checks and balances and some kind of auditing body at the very end of the post.
Thanks again!
Rocio, thank you for the thoughtful breakdown of how the vital COVID-19 contact tracing effort shapes our thinking about acceptable limits on data privacy. I have been thinking quite a bit about this issue. It can be tempting to write off the danger of relinquishing privacy protections in favor of improved surveillance and data access; that danger can feel like a concern for the future as we navigate the immediate challenges of the pandemic. But you highlighted key problems with such a short-sighted approach: the long-term consequences of relaxing privacy safeguards, and the history of data abuses and breaches that should make us all wary of handing over our data.
I see another reason that safeguarding data privacy must be a central pillar in the adoption of contact tracing: the crucial need to maintain public trust*. Containing the spread of the virus will require overwhelming buy-in from the populace. Many countries are talking about implementing a software development kit or an app like Singapore's TraceTogether, which you mentioned, to track cases and interrupt chains of infection by notifying people when they have recently come into contact with someone who tested positive and suggesting that they isolate (PEPP-PT in Europe seems to be a great example: https://www.politico.eu/article/europe-cracks-code-for-coronavirus-warning-app/).
The simple fact is that for a tool like this to be effective, a majority of the population has to use it (the article above predicts at least 40-60%). Many of the most vulnerable older citizens among us do not have smartphones, or do not carry them everywhere, so we are starting from behind. Undocumented immigrants and members of marginalized communities might be particularly reluctant to report location and health data. The general public must feel secure that participating in this global effort isn't going to put them at risk in other ways.
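To put rough numbers on that, here is a back-of-the-envelope sketch (my own illustration, not from the article): since both parties to a contact generally need to be running the app for the contact to be logged, coverage scales roughly with the square of the adoption rate.

```python
# Rough model (my assumption): a contact is only detected if BOTH people
# involved run the app, so coverage is roughly the adoption rate squared.
for adoption in (0.2, 0.4, 0.6, 0.8):
    print(f"{adoption:.0%} adoption -> ~{adoption ** 2:.0%} of contacts covered")
```

Even at the 60% upper end of that prediction, only about a third of contacts would actually be covered, which underscores how costly every excluded group is.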
The good news is that this reasoning implies that public safety and data privacy actually align in this case! The better we protect privacy, the more people buy in, and the safer we all are.
—————
*Specific actions for keeping public trust: 1) anonymizing all personal identifiers, 2) only saving epidemiologically relevant proximity history, and 3) erasing data once it is no longer useful for contact tracing.
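For the curious, here is a minimal sketch of what those three actions could look like in code. It is purely illustrative; the rotating-ID scheme, the 14-day retention window, and all names are my assumptions, not how TraceTogether or PEPP-PT actually work.

```python
import hashlib
import os
import time

RETENTION_SECONDS = 14 * 24 * 3600  # assumption: 14-day epidemiological window

class ContactLog:
    """Toy on-device contact log; every design choice here is illustrative."""

    def __init__(self):
        self._secret = os.urandom(32)  # stable identity never leaves the device
        self._contacts = []            # (pseudonymous id seen, unix timestamp)

    def rotating_id(self, epoch: int) -> str:
        # 1) Anonymize personal identifiers: broadcast a hash that changes
        #    every epoch instead of any stable ID.
        digest = hashlib.sha256(self._secret + epoch.to_bytes(8, "big"))
        return digest.hexdigest()[:16]

    def record_contact(self, seen_id: str) -> None:
        # 2) Save only epidemiologically relevant proximity history:
        #    a pseudonymous ID and a timestamp; no names, no GPS location.
        self._contacts.append((seen_id, time.time()))

    def prune(self) -> None:
        # 3) Erase data once it is no longer useful for contact tracing.
        cutoff = time.time() - RETENTION_SECONDS
        self._contacts = [c for c in self._contacts if c[1] >= cutoff]
```

Even a sketch this small suggests the safeguards are cheap to implement, which strengthens the case that trust and utility are not at odds here.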
Really interesting, John! Thanks for sharing!
A few reflections from my end:
i) I agree with the efficiency argument in favor of Searchlight over reference calls: it has the potential to save HR money and time. However, I'm not sure I agree that it filters out less serious applicants. You point out that "by giving applicants the ability to retain references on the platform, it helps applicants continually add to their profile over time". If that's the case, and I'm understanding correctly, once I put in the upfront work to get my references on the platform, wouldn't it take very little incremental effort for me to apply to other jobs using those exact same references?
ii) The second thing I wanted to touch on is the idea that the platform might help mitigate unconscious biases. On their website, Searchlight says:
“Counteract prestige bias with a more equitable hiring practice and objective reference data. Using Searchlight, 80% of our partners have hired more top performers from underrepresented backgrounds.”
There are two things that influence whether your algorithm effectively eliminates or perpetuates biases: (1) the bias in the data you input, and (2) the design of the algorithm itself. Regarding (1), I can see how a well-defined survey might be effective in collecting data in an objective way (e.g., it is well documented that recommenders use different adjectives when recommending a woman versus a man, and the survey can be designed to mitigate that; a toy example is sketched below). If you manage to gather less biased data, your algorithm is less likely to produce biased results. I would like to know more about how the algorithm addresses (2).
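As a toy illustration of (1), a survey pipeline could flag gender-coded adjectives in free-text answers before they feed any model. The word list below is hypothetical; a real tool would use a validated lexicon from the research literature on recommendation letters.

```python
import re

# Hypothetical (non-validated) examples of gender-coded adjectives.
CODED_ADJECTIVES = {"warm", "helpful", "pleasant", "brilliant", "assertive"}

def flag_coded_language(reference_text: str) -> list[str]:
    """Return any coded adjectives found in a free-text reference answer."""
    words = set(re.findall(r"[a-z']+", reference_text.lower()))
    return sorted(words & CODED_ADJECTIVES)

print(flag_coded_language("She is warm, helpful, and a pleasant colleague."))
# -> ['helpful', 'pleasant', 'warm']
```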
iii) Unconscious bias is just one subset of bias; the algorithm's design might still perpetuate others. For example, I wonder whether Searchlight weighs references from a manager at an SMB the same way it weighs those from a manager at a big tech firm. Furthermore, I worry that its results might disproportionately benefit those with larger networks. Without Searchlight, a recruiter might be willing to do a few calls; with Searchlight, they can gather as many reference points as possible for the same amount of effort. Hence, the disadvantage faced by a candidate who has worked for a few years at a small company, compared with one who has worked at a Google or a McKinsey across different teams and managers, might be exacerbated (see the sketch below).
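To make the concern concrete, here is a hypothetical comparison (every score and weight is invented): the same two candidates can rank differently depending on whether references are averaged, summed, or prestige-weighted.

```python
# Hypothetical reference scores on a 1-5 scale (all numbers invented).
smb_candidate = [4.8, 4.7, 4.8]                 # 3 references, SMB managers
big_candidate = [4.5, 4.6, 4.4, 4.7, 4.5, 4.6]  # 6 references, big-firm managers

def mean(xs):
    return sum(xs) / len(xs)

# Averaging per reference: the SMB candidate looks stronger.
print(f"mean: SMB {mean(smb_candidate):.2f} vs big firm {mean(big_candidate):.2f}")

# Rewarding the *volume* of positive signals: the larger network wins,
# even though each individual reference is weaker.
print(f"sum:  SMB {sum(smb_candidate):.1f} vs big firm {sum(big_candidate):.1f}")

# Up-weighting big-firm references (a 1.2x prestige weight, purely an
# assumption) flips the ranking even on a per-reference basis.
print(f"weighted mean: big firm {mean([s * 1.2 for s in big_candidate]):.2f}")
```

If the aggregation rewards volume or prestige even slightly, the small-company candidate falls behind despite stronger individual references.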