Caroline Stedman

  • Section 1
  • Alumni

Activity Feed

On April 13, 2020, Caroline Stedman commented on Humans versus robots: When we take performance tracking too far:

I really appreciate your thoughts here, and completely agree that some form of data anonymization is necessary (assuming the anonymization process is robust enough that employers are not able to re-identify the individuals). I also agree that having a human in the loop is a critical aspect that is missing!

Your post also made me wonder how successful this tracking was in terms of increasing the percentage of quotas met (which I assume was their initial goal). In a class I took at the Kennedy School, we talked about the gift-exchange game, which attempts to model the relationship between employers and employees. Experiments on this game typically consist of three treatments:

(1) Trust treatment: the employer offers a wage, w, and expects an effort, e, in return from the employee. There is no enforcement of the expected effort.

(2) Penalty treatment: the employer offers a wage, w, and expects an effort, e, in return, but can monitor the employee’s effort and impose a penalty for shirking.

(3) Bonus treatment: I will not go into the details here, as they are not relevant to my point.

Results of these experiments show no statistically significant difference in the effort employees exert under the trust and penalty treatments. In other words, monitoring employees does not increase their effort or output. I am sure there are instances where this does not hold, but I found it interesting to think about in this context!
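
For anyone curious about the incentive structure, here is a minimal sketch of the payoffs in the trust and penalty treatments described above. All of the specific numbers (value of effort, effort-cost schedule, monitoring probability, fine) are illustrative assumptions, not the parameters used in the actual experiments:

    # Rough sketch of the gift-exchange payoffs (illustrative numbers only).
    COST = {0.1: 0, 0.2: 1, 0.4: 2, 0.6: 4, 0.8: 7, 1.0: 10}  # effort -> cost to employee

    def employer_payoff(wage, effort, value_per_effort=120):
        # The employer benefits from effort and pays the wage.
        return value_per_effort * effort - wage

    def employee_payoff_trust(wage, effort):
        # Trust treatment: no monitoring, the employee simply bears the effort cost.
        return wage - COST[effort]

    def employee_payoff_penalty(wage, effort, desired_effort, monitor_prob=0.33, fine=13):
        # Penalty treatment: shirking is detected with some probability and fined.
        expected_fine = monitor_prob * fine if effort < desired_effort else 0
        return wage - COST[effort] - expected_fine

    wage, desired = 40, 0.6
    for e in sorted(COST):
        print(e, employee_payoff_trust(wage, e),
              round(employee_payoff_penalty(wage, e, desired), 2))

Under pure self-interest, an employee in the trust treatment would always choose the lowest effort; the experimental result referenced above is that observed effort under trust is, in practice, no lower than under penalty.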

On April 13, 2020, Caroline Stedman commented on The risks of collecting employee health data:

Thanks for putting together this article, Amina. I enjoyed reading it!

Your comment about the potentially invasive and personal insights an employer could gain by linking health data with other information they hold on their employees really struck me. While I believe that anonymization of the health data is key, I worry that simply removing names would not fully solve the issue. I keep coming back to the research of Latanya Sweeney, who showed that 87% of people in the United States are uniquely identified by {date of birth, gender, ZIP}. It would likely be fairly simple for an employer to re-identify employees even in the absence of a name, given the wealth of information they hold outside of the health data. I would therefore hope that employers consider a more rigorous anonymization approach (perhaps differential privacy), even if it means slightly less precise analyses of the data.
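
To make the differential-privacy suggestion a bit more concrete, here is a minimal sketch of a differentially private counting query using the Laplace mechanism. The employee records, field names, and epsilon value are all hypothetical; the point is simply that analysts would see noisy aggregates rather than exact values that could be cross-referenced against {date of birth, gender, ZIP}:

    import math
    import random

    def laplace_noise(scale):
        # Draw Laplace(0, scale) noise via inverse transform sampling.
        u = random.random() - 0.5
        return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

    def dp_count(records, predicate, epsilon=0.5):
        # A counting query has sensitivity 1 (adding or removing one person
        # changes the count by at most 1), so Laplace(1/epsilon) noise
        # gives epsilon-differential privacy.
        true_count = sum(1 for r in records if predicate(r))
        return true_count + laplace_noise(1.0 / epsilon)

    # Hypothetical employee health records.
    employees = [
        {"zip": "02138", "gender": "F", "resting_hr": 62},
        {"zip": "02139", "gender": "M", "resting_hr": 78},
        {"zip": "02138", "gender": "F", "resting_hr": 90},
    ]

    print(dp_count(employees, lambda r: r["resting_hr"] > 75))

The trade-off is exactly the one mentioned above: each released statistic is slightly noisier, in exchange for a formal guarantee that no single employee’s record can be inferred from the output.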

On April 13, 2020, Caroline Stedman commented on Locked in by Algorithms?:

Thank you for laying out all sides of this issue, Paula.

On top of the fact that the COMPAS model was trained on data that is likely rife with historical bias, I believe a huge issue with this model is that it was deployed in a context where there is no ground truth against which to validate it. The model is trying to predict recidivism, but recidivism is never observed for those who remain detained. How, then, can we ever know the true relationship between a person’s characteristics and their rate of recidivism? I find it very difficult to trust an algorithm that cannot truly be validated against any metric or ground truth.