Listen, some ideas are so bad that you have to wonder if the people proposing them have ever seen a movie, because they seem ripped straight from the screen. Case in point: the idea that AI should control US nuclear launch codes and missiles. Has no one here seen a movie??
In the Bulletin of the Atomic Scientists, Matt Field outlines the dangers of an AI-controlled nuclear arsenal, and I just keep thinking about every bad movie about artificial intelligence I’ve ever seen. Skynet is still very much a part of pop culture, with a new Terminator movie coming out, and yet we’re theorizing about letting AI take control of our nuclear arsenal to prepare retaliatory attacks, or even to have control over the missiles in the first place.
It’s like these idiots have missed basically all of science fiction for the last 50 years. https://t.co/7j9a2yUswF
— Stonekettle (@Stonekettle) September 3, 2019
The fact that an AI would rely only on simulated data means that human instinct can’t come into play, for better or worse. Field interviewed fellow Bulletin columnist Michael Horowitz on the subject, writing, “The algorithms that power artificial intelligence systems are usually trained on huge datasets, which simply don’t exist when it comes to nuclear weapons launches. ‘There have not been nuclear missile attacks, country against country. And so, training an algorithm for early warning means that you’re relying entirely on simulated data,’ Horowitz says. ‘I would say, based on the state of the art in the development of algorithms, that generates some risks.’”
Field also points out how similar the idea is to the “Doomsday Machine” in Dr. Strangelove, which again proves that if your plan sounds like it came out of a sci-fi story about nuclear annihilation, then maybe it is a terrible plan. In Dr. Strangelove, Russia develops a device as a deterrent, one designed to go off automatically should the United States attack, but since the Soviets never tell anyone about it, the deterrent is useless and the world ends up destroyed anyway.
Thankfully, this is just a theory posed by US deterrence experts and not actual policy. But seriously, have these people never watched a movie? There is no way an AI could successfully run a nuclear arsenal without the potential for mass death and human extinction, and really, don’t we all have enough to worry about without wondering whether WarGames is going to start happening in real life too?
Watch a damn movie or read some science fiction. To paraphrase a non-nuclear sci-fi film, scientists are so focused on whether they can that they forget to think about whether they should, and this is no exception. While we might joke about wanting the world to end to put us out of the misery of the twenty-four-hour news cycle, let’s also consider that maybe, just maybe, we don’t want the world to end in nuclear destruction because Skynet came to life. Please.
(via Bulletin of the Atomic Scientists, image: Columbia Pictures)