James Eckfeldt

  • Section J
  • Section 1
  • Alumni

Activity Feed

On April 12, 2020, James Eckfeldt commented on The risks of collecting employee health data :

Thanks for a very interesting read, Amina. I believe you hit the nail on the head with your closing statement that employers have an ethical obligation to put in place robust education, security, and consent mechanisms to protect employees’ rights. Since regulation protecting employees is lagging, it is crucial that employers take this step proactively. However, how many will do so? More often than not, employees will sign whatever is put in front of them in order to get certain benefits, and we seldom read the fine print. I’m all for the benefits that come with employers and employees tracking health data, but I’m very concerned about regulation lagging behind.

On April 12, 2020, James Eckfeldt commented on Locked in by Algorithms? :

Thank you for a very interesting read, Paula. I had no idea that so many states in the US judicial system are using algorithms to assess risk and set bail. In general terms, I’m all for the use of algorithms in this regard. Even though the drawbacks are pretty clear (as in almost all use cases for algorithms we’ve seen so far in class), I do believe that the human–machine combo ends up delivering a better outcome than the old, human-only system. Yes, historical data fed into the algorithm carries a risk of bias, but at least we minimize human bias by arming the judge with an “objective” tool that can guide their decision and avoid “anomalies” in setting bail. Perhaps one way to enhance the outcome of these algorithms would be to adjust bail outcomes for the parts of the population that have historically faced negative bias.

On April 12, 2020, James Eckfeldt commented on Books and Movies in the era of AI :

Thanks for a great article, Haerin! I think your AI-generated blog post was better than your actual one, so you might want to reconsider using that one instead (just kidding). My view is that AI will not come close to replacing humans in content creation until we develop AI that is fully capable of simulating human understanding of language and meaning. Since we’re not there yet, almost any “creative” content written by AI ends up being nonsense. Once we cross the chasm of self-aware AI, the possibilities would be endless and the machines would take over (Ex Machina style). I do find that using AI to guide which type of content to write will inevitably limit the diversity of our content, since machines will steer most writers toward scripts that please “the majority”.