The following insights are derived from a recent Assembly Talk featuring Dr. Broderick Turner, a researcher and Business in Global Society Fellow at Harvard Business School. Dr. Turner is known for his work in the Technology, Race, and Prejudice (T.R.A.P.) Lab, which examines how marketing, technology, racism, and emotion interact.
Dr. Turner explains that AI itself is not inherently racist; it is merely a tool. Because the data it is trained on and the rules it follows are both created by humans, however, it can be used in ways that produce and reinforce racist outcomes.
He stresses the importance of understanding the human element at every stage of an AI system, from data collection and categorization to user feedback, and how the biases people bring to each stage shape the results.
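To make this concrete, here is a minimal, hypothetical sketch in Python (not from the talk; the scenario, group names, and numbers are all invented for illustration). A classifier is trained on synthetic "historical" loan decisions in which equally qualified applicants from one group were approved less often; the model faithfully learns and reproduces that disparity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Both groups have the same qualification distribution.
group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B (hypothetical)
score = rng.normal(0.0, 1.0, n)      # standardized qualification score

# Biased historical labels: group B needed a higher score to be approved.
threshold = np.where(group == 1, 0.8, 0.0)
approved = (score > threshold).astype(int)

# Train on the biased labels, exactly as they were recorded.
X = np.column_stack([group, score])
model = LogisticRegression().fit(X, approved)

# The model reproduces the human bias baked into its training data.
pred = model.predict(X)
for g, name in [(0, "group A"), (1, "group B")]:
    print(f"{name} predicted approval rate: {pred[group == g].mean():.1%}")
```

Nothing in the algorithm is "racist"; it simply optimizes fit to labels that humans produced, which is exactly Dr. Turner's point about the human element.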
How is the data collected? Dr. Turner points out that much of it is sourced from the Internet, especially its freely accessible sections, and he discusses the limitations and biases that arise as a result: a corpus built from whatever is free to scrape is not a representative sample of everyone.
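That sampling problem can be illustrated with a short, hypothetical simulation (again not from the talk; the group names and rates are assumptions). If some communities publish more freely accessible content than others, a corpus scraped from the open web overrepresents them no matter how large it grows.

```python
import numpy as np

rng = np.random.default_rng(1)

# A population split evenly between two hypothetical groups.
population = rng.choice(["group A", "group B"], size=100_000, p=[0.5, 0.5])

# Assumed rates at which each group's content is freely accessible online.
scrape_rate = np.where(population == "group A", 0.60, 0.15)
is_scraped = rng.random(population.size) < scrape_rate

corpus = population[is_scraped]
for g in ["group A", "group B"]:
    print(f"{g}: {np.mean(population == g):.0%} of population, "
          f"{np.mean(corpus == g):.0%} of scraped corpus")
```

Collecting more data from the same sources only hardens the skew; the fix has to happen at the sampling stage, not the volume stage.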
Another factor lies in the motivations and constraints of algorithms. Dr. Turner explains that algorithms are designed to optimize specific objectives while being bound by the mechanical limits of the mathematics underneath. He cites an example involving facial images in social media feeds: users' emotional states and behavior shifted depending on the facial expressions their feeds displayed, and he suggests that companies may design ranking algorithms to exploit exactly this effect.
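A hypothetical sketch of such an objective follows (the engagement model and numbers are assumptions, not anything presented in the talk). A feed ranked purely to maximize predicted engagement will surface emotionally charged posts whenever they reliably earn more clicks, because nothing in the objective penalizes emotional side effects on users.

```python
from dataclasses import dataclass
import random

@dataclass
class Post:
    text: str
    emotional_intensity: float  # 0.0 = neutral, 1.0 = highly charged

def predicted_engagement(post: Post) -> float:
    # Assumed engagement model: charged content earns more clicks on average.
    return 0.2 + 0.7 * post.emotional_intensity + random.uniform(0.0, 0.1)

def rank_feed(posts: list[Post], k: int) -> list[Post]:
    # The objective is engagement alone; emotional impact never enters it.
    return sorted(posts, key=predicted_engagement, reverse=True)[:k]

random.seed(42)
posts = [Post(f"post {i}", random.random()) for i in range(100)]
feed = rank_feed(posts, k=10)

avg_all = sum(p.emotional_intensity for p in posts) / len(posts)
avg_feed = sum(p.emotional_intensity for p in feed) / len(feed)
print(f"avg intensity, all posts: {avg_all:.2f}; top of feed: {avg_feed:.2f}")
```

The disparity between the two averages is the whole story: the algorithm is doing exactly what it was told, and what it was told reflects a human design choice.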
During the talk, a participant raises a question about the politics of regulating AI and the responsibilities that fall to individuals, pointing to the European Union's efforts to address data scraping and copyright law in the context of AI. Dr. Turner acknowledges that companies wield significant power and that their self-policing is limited, and he underscores the importance of user feedback and of holding companies accountable.