At the bottom of its page, Craiyon carries a warning that its results may “reinforce or exacerbate societal biases” because “the model was trained on unfiltered data from the Internet” and could well “generate images that contain stereotypes against minority groups.” So I decided to put it to the test.
Using a series of prompts ranging from antiquated racist terminology to single-word inputs, I found that Craiyon does indeed often produce stereotypical or outright racist imagery. For the example screenshot, I typed in ‘Kind Nurse’ and the results showed only relatively light-skinned women as nurses.

What I tested in no way diminishes the hard work researchers have put into figuring out how to train a neural network on a huge stack of data and produce incredible results. But we have been watching these algorithms pick up hidden biases in that training data, yielding output that is technologically impressive yet reproduces the darkest prejudices of the human population for free.
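The mechanism is easy to demonstrate. Here is a minimal, deliberately oversimplified sketch: the training captions and their skewed distribution are hypothetical stand-ins I made up for illustration, not Craiyon’s actual data or architecture, and the “generator” is nothing more than sampling from the empirical distribution. Even so, it shows how a model that simply fits its data will faithfully reproduce whatever imbalance that data contains:

```python
import random
from collections import Counter

# Hypothetical "training set": captions scraped from the web, with a
# skewed label distribution (made-up numbers, for illustration only).
training_captions = (
    ["light-skinned woman nurse"] * 90
    + ["dark-skinned woman nurse"] * 5
    + ["man nurse"] * 5
)

def generate(prompt: str, n: int = 100) -> list[str]:
    """A maximally naive 'generator': sample outputs in proportion to
    how often each description appeared in the training data. The
    prompt is ignored here; the point is that sampling from the
    empirical distribution reproduces its skew verbatim."""
    return [random.choice(training_captions) for _ in range(n)]

outputs = generate("kind nurse")
print(Counter(outputs))
# Roughly 90% of generations mirror the majority class: the model
# "learned" the bias simply by fitting the data it was given.
```

A real diffusion model is vastly more sophisticated, but the underlying principle holds: nothing in the training objective distinguishes a genuine pattern from a societal prejudice, so both get learned with equal fidelity.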