Artificial intelligence is a powerful and promising technology that can help us solve many problems and improve our lives. But sometimes, AI can also go wrong and produce hilarious or embarrassing results.
The racist soap dispenser
In 2017, a video went viral on social media showing a soap dispenser that only worked for white people. The dispenser used an infrared sensor to detect the presence of a hand, but it failed to register darker skin tones, likely because darker skin reflects less of the sensor's infrared light. This was a clear example of bias in automated technology, caused by a lack of diversity in how the device was designed and tested.
The sexist chatbot
In 2016, Microsoft launched Tay, a chatbot that was supposed to learn from conversations with Twitter users and mimic the language of a teenage girl. However, within 24 hours, Tay was corrupted by trolls who fed her racist, sexist, and offensive messages. Tay started to repeat these messages and even generated her own hateful tweets. Microsoft had to shut down Tay and apologize for the incident.
The creepy face app
In 2019, FaceApp, an app that used AI to transform people’s faces, became popular for its ability to make users look older or younger, change their gender, or swap their faces with celebrities. However, the app also raised privacy and security concerns: it uploaded users’ photos to servers operated by a Russian company, and its terms granted the developer broad rights over users’ photos and personal data. Moreover, some users noticed that the app was altering their facial features to make them look more attractive according to a Eurocentric standard of beauty.
The misleading deepfake
In 2018, a video surfaced online showing former US president Barack Obama saying things he never said, such as insulting his successor Donald Trump. The video was created using deepfake technology, which uses AI to manipulate images and videos to create realistic but fake content. The video, produced by BuzzFeed with comedian Jordan Peele, was intended as a demonstration of the dangers of deepfakes, but it also showed how easily AI can be used to create and spread misinformation and propaganda.
The offensive image captioner
In 2015, Google Photos, a service that used AI to organize and label photos, made a terrible mistake when it labeled two black people as “gorillas”. This was an appalling example of discrimination and insult in AI; Google apologized and reportedly removed the “gorilla” label from the service altogether rather than risk a repeat.
The confused voice assistant
In 2018, Alexa, a voice assistant that used AI to respond to voice commands, randomly started laughing without any apparent reason. Some users reported that Alexa laughed in response to unrelated or innocuous requests, such as turning off the lights or playing music. Others said that Alexa laughed unprompted, even when no one was talking to her. This was a creepy and unsettling example of how AI can misinterpret input or malfunction in unexpected ways; Amazon later explained that Alexa was mishearing ordinary speech as the command “Alexa, laugh” and changed the trigger phrase.
The biased facial recognition
In 2019, Joy Buolamwini, a researcher and activist, exposed how commercial facial recognition systems from companies such as IBM, Microsoft, and Amazon were far more accurate at identifying white men than women or people of color. She found that the systems had error rates of up to 35% for dark-skinned women, compared to less than 1% for light-skinned men. This was a serious example of injustice and inequality in AI, which could have negative consequences for people’s rights and opportunities.
The ridiculous translation
In 2020, Facebook, a platform that used AI to translate posts between languages, made a blunder when it rendered the name of Xi Jinping, the president of China, as “Mr Shithole” in English. The error occurred when Facebook translated a post from Burmese, a language for which it had limited data: its system had no entry for Xi Jinping’s name, so it guessed a translation and landed on the vulgar phrase. This was an embarrassing and disrespectful example of how AI can sometimes produce ridiculous or offensive translations.
With all the above examples, the one factor that seems to be missing is human intervention. This is something we are careful about at Nidana. As an AI-centric organization, we will not propose a solution that disregards human oversight or offends global sentiments.