You would think that the killer combination of global warming, Doritos Locos tacos, and the Trump administration is sending us on a one-way ticket to the end of the world. Whelp, the nerds at MIT just asked us to hold their beer as they slammed their foot on the gas pedal. A team of scientists at the Massachusetts Institute of Killing Us All, I mean Technology, has developed a psychopathic algorithm named Norman. Like Norman Bates, get it?
Norman was designed as part of an experiment to see what effect training AI on data from “the dark corners of the net” would have on its worldview. Instead of being exposed to “normal” content and images, the software was shown images of people dying in violent circumstances. And where did MIT find such gruesome imagery? From Reddit, of course. Where else?
After exposure to the violent imagery, Norman was shown inkblot pictures and asked to interpret them. His software, which can interpret pictures and describe what it sees in text form, saw what scientists (okay, me) now describe as “some fucked up shit.” The procedure, commonly referred to as a Rorschach test, has traditionally been used to help psychologists figure out whether their patients perceive the world in a negative or positive light. Norman’s outlook was decidedly negative, as he saw murder and violence in every image.
MIT compared Norman’s results with those of a standard AI program, which was trained on more normal images of cats, birds, and people. The results were…upsetting. After being shown the same image, the standard AI saw “a close-up of a vase with flowers.” Norman saw “a man is shot dead.” In another image, the standard AI saw “a person is holding an umbrella in the air.” Norman saw “man is shot dead in front of his screaming wife.” And finally, in my personal favorite, the standard AI saw “a black and white photo of a small bird” while Norman saw “man gets pulled into dough machine.”
Rather than running for the goddamn hills, MIT Professor Iyad Rahwan came to a different conclusion, saying that Norman’s test shows that “data matters more than the algorithm. It highlights the idea that the data we use to train AI is reflected in the way the AI perceives the world and how it behaves.” Ultimately, AI that is trained on biased and flawed data will retain that worldview. Last year, a report claimed that an AI-powered computer program used by US courts for risk assessment was biased against black prisoners. Fed skewed data, an AI can effectively be taught to be racist.
In another study, software trained on Google News data became sexist as a result of the data it received. When asked to complete the statement “Man is to computer programmer as woman is to X,” the software replied “homemaker.” Dr. Joanna Bryson, from the University of Bath’s department of computer science, said that machines can take on the viewpoints of their programmers. Since programmers are often a homogeneous group, there is a lack of diversity in the data machines are exposed to. Bryson said, “When we train machines by choosing our culture, we necessarily transfer our own biases. There is no mathematical way to create fairness. Bias is not a bad word in machine learning. It just means that the machine is picking up regularities.”
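If you’re wondering what that analogy test actually looks like under the hood, here’s a rough sketch (mine, not the researchers’ code) using the gensim library and Google’s publicly released News word vectors, both of which are assumptions on my part: the “X” is just vector arithmetic, and biased vectors cheerfully hand back biased answers.

```python
# A quick illustration, not the study's actual code. Assumes the gensim library
# and the public GoogleNews-vectors-negative300.bin embedding file.
from gensim.models import KeyedVectors

# Load 300-dimensional word2vec vectors trained on Google News articles.
vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

# "Man is to computer programmer as woman is to X" becomes vector arithmetic:
# computer_programmer - man + woman, then look up the nearest words.
print(vectors.most_similar(
    positive=["computer_programmer", "woman"],
    negative=["man"],
    topn=3,
))
# With embeddings trained on biased text, words like "homemaker" land at or
# near the top of that list.
```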
Microsoft’s chief envisioning officer Dave Coplin thinks Norman opens the door to an important conversation about AI’s role in our culture. It must start, he said, with “a basic understanding of how these things work. We are teaching algorithms in the same way as we teach human beings so there is a risk that we are not teaching everything right. When I see an answer from an algorithm, I need to know who made that algorithm.”
(via BBC, image: Universal Pictures)
Published: Jun 3, 2018 12:34 pm