Machine learning algorithms are now used to automate routine tasks and to guide high-stakes decisions, but, if not carefully designed, they can exacerbate inequities.
Sharad Goel of Harvard Kennedy School will start his talk by describing an evaluation of automated speech recognition (ASR) tools, which power popular virtual assistants, facilitate automated closed captioning, and enable digital dictation platforms for health care. He found that five state-of-the-art ASR systems — developed by Amazon, Apple, Google, IBM, and Microsoft — exhibited substantial racial disparities, making twice as many errors for Black speakers as for white speakers, a gap he traces back to a lack of diversity in the audio data used to train the models.
He’ll then describe recent attempts to mathematically formalize algorithmic fairness. He’ll argue that some of the most popular definitions, when used as a design principle, can, perversely, harm the very groups they were created to protect. He’ll conclude by describing a general, consequentialist paradigm for designing equitable algorithms that aims to mitigate the limitations of the dominant approaches to building fair machine learning systems.
This talk is part of the Digital Seminar, a D^3 Assembly series that is open to faculty, doctoral students, and academic researchers.
Email us at firstname.lastname@example.org for information on attending this seminar.