
AI: A Little Too Human, But Not Human Enough

Because AI is so passively absorptive of the data fed into it, it cannot help but reflect the biases – conscious or oblivious – lodged in the hearts and minds of its human developers.

eetimes.com, Dec. 30, 2019

Current artificial intelligence (AI) algorithms for facial recognition appear to have the scientific rigor and social sensibility of a Victorian eugenics quack. However, that judgment would not do justice to the army of eugenics quacks who operated (often literally, tying tubes and performing lobotomies) from the 1890s well into the 1930s: they, at least, were driven by deep-seated racist bigotry.

The National Institute of Standards and Technology recently completed a massive study, finding that state-of-the-art facial-recognition systems falsely identified African-American and Asian faces ten to a hundred times more often than Caucasian faces. "The technology," noted the New York Times, "also had more difficulty identifying women than men." It proved especially hostile to senior citizens, falsely fingering older folks up to ten times more often than middle-aged adults.

Up against the wall, Grandma!

An earlier study at the Massachusetts Institute of Technology found that facial-recognition software marketed by Amazon misidentified darker-skinned women as men 31 percent of the time. It thought Michelle Obama was a guy.
