Oliver Badenhorst


Activity Feed

On April 15, 2020, Oliver Badenhorst commented on Don’t forget the ‘H’ in HR: The Ethics of People Analytics:

Thanks for the thought-provoking post.

I was struck as I read it by the similarity to one of our cases in Reimagining Capitalism, “When Technology Gets Ahead of Society,” where we examined Facebook’s attempt to launch Libra. There, Facebook’s sustained lack of goodwill with broader stakeholders, combined with its overreach in rushing a nascent technology to market as a broad blockchain-based payments network, provoked a hostile reaction to the Libra whitepaper; in a way, the company walked into a buzzsaw of its own construction.

While I agree with your prescription that firms be open, I think there is a spectrum of openness worth discussing. Simple transparency, publicly releasing your data and methodologies, is one thing; active engagement, educating users, subjects, and broader political or societal stakeholders, is another. In ReCap we discussed how firms, when they find themselves outside the well-understood lines of existing technological and moral frameworks, have a responsibility to help co-create new rules. Such co-creation requires them to move more slowly than technological limitations alone would dictate, but if their goal is to introduce meaningful, sustained change without provoking a backlash, they will find that prudence valuable.

On April 15, 2020, Oliver Badenhorst commented on What if companies could monitor your concentration?:

Well, this is terrifying! I was already worried enough that my future children will be exposed to advertising or other attention-sapping stimuli while at school; I hadn’t even thought about the possibility of in-classroom monitoring technologies being used here.

In addition to your concerns about the conceptualization of concentration as a matter of individual will, I also worry about the assumed relationship between concentration and better educational outcomes. While I wouldn’t endorse a student goofing off constantly, I don’t think the relationship is as simple as more concentration equals better outcomes. It’s easy to think of the artists or scientists who had a breakthrough idea in a moment of absent daydreaming, or of a student whose imagination, while running wild, inspires them to pursue a new line of study.

While I agree that part of schooling is to teach students willpower and self-discipline, I don’t believe that’s all it is meant to do. I worry that if we start measuring variables like concentration in isolation from the full picture of educational goals, we will solve for what we can see at the expense of what we cannot.

On April 15, 2020, Oliver Badenhorst commented on Locked in by Algorithms?:

Thanks for the interesting post, Paula. As a reformed law student, I find it cool to see the beginnings of some cross-pollination between the data analytics space and the notoriously conservative legal world.

Generally, I’m optimistic about the use of algorithms in legal decisions as a means of structuring and standardising what was previously a highly subjective process. As you point out, though, there are risks. Like Aurora in her comment above, I worry about transparency, but I worry just as much about legitimacy. Law students are taught that it’s not enough for justice to be done; justice must be seen to be done. We need the public to have confidence in the system, and for that to be true, they need to understand how it works.

Algorithms can help with transparency and auditability, but their potential for understandability is mixed. Legal decisions are not simple; they must reflect the complexities of the human experience and our desire to balance a host of competing objectives such as punishment, deterrence, rehabilitation, and protection. Every variable added to an algorithm makes it more complex, and even if it remains transparent and someone trained in data science can interrogate how a decision was reached, it may quickly become “too much” for a regular observer to understand.

We must balance our desire for a more perfect algorithmic result against the need for the public to understand what is going on. If we fail to do so, we risk replacing judges sitting atop their ivory tower with algorithms buried inside an ivory box.