Could AI Start Nuclear War?

Authored by James Rickards via DailyReckoning.com,

I’ve covered a wide variety of potential crises over the years.

These include natural disasters, pandemics, social unrest and financial collapse. That’s a daunting list.

One thing I haven’t covered is the greatest potential calamity of all: nuclear war. For the reasons explained below, now is the time to consider it.

Nuclear warfighting is back in the air. The subject is receiving more attention today than at any time since the Cuban Missile Crisis of 1962 and its aftermath. There are three reasons for this.

The first is American accusations that Russia would escalate to use nuclear weapons as it grew more desperate in its conduct of the war in Ukraine. These accusations were always false and are risible now that Russia is clearly winning the war with conventional arms.

Still, the threats and counter-threats were enough to put the topic in play.

The second reason is the war between Israel and Hamas. Again, escalation is the concern. One not implausible scenario has Hezbollah in southern Lebanon opening a second front on Israel’s northern border with intensive missile bombardment.

Houthi rebels in Yemen would join the attack. Since Hezbollah and the Houthis are both Shia Muslims and Iranian proxies, Israel could attack Iran as the source of the escalation.

Israel is a nuclear power. With a U.S. aircraft carrier battle group and a nuclear attack submarine in the region, and with nuclear powers Russia and Pakistan standing by to assist Iran, the prospect of escalation to a nuclear exchange is real.

The escalating tensions between Iran and Pakistan just this week add even more fuel to the fire.

The third reason is artificial intelligence and GPT-style systems. Although AI can provide profitable opportunities for investors in many sectors of the market, AI/GPT may also be the greatest driver of nuclear escalation risk, because it runs on an internal logic that’s inconsistent with the human logic that has kept the nuclear peace for nearly 80 years.

I’ve covered Ukraine and Israel extensively, and they’re widely covered in the news. But today I’m addressing the risks of nuclear war from AI/GPT. It’s a threat you’re hearing almost nothing about, but it needs to be addressed.

Let’s start with a fictional movie. The paradigmatic portrayal of an accidental nuclear war is the 1964 film Fail Safe. In the film, U.S. radar detects an intrusion into U.S. airspace by an unidentified but potentially hostile aircraft.

The U.S. Air Force soon determines that the aircraft is an off-course civilian airliner. In the meantime, a computer responding to the intrusion erroneously orders a U.S. strategic bomber group led by Col. Jack Grady to commence a nuclear attack on Moscow.

U.S. efforts to rescind the order and recall the bombers fail because of Soviet jamming of radio channels. The president orders the military to shoot down the bombers, and fighter jets are scrambled for that purpose.

The fighters use afterburners in an attempt to catch the bombers but fail, and the extra fuel burned leaves them unable to return; they plunge into the Arctic Ocean.

The president next communicates with the Soviet premier, who agrees to stop the jamming. The president speaks with the attack bomber group leader to call off the attack, but the crew has been trained to disregard such pleas as a Soviet ploy.

The U.S. then offers the Soviets technical assistance in shooting down the bombers. The planes are almost all shot down, but one makes it through. The president puts Col. Grady’s wife on the radio; Grady hesitates but is soon preoccupied with evading Soviet missiles. He then decides his wife’s voice is another deception.

Anticipating the worst and seeking to avoid a full-scale nuclear war, the president orders a U.S. nuclear bomber to fly over New York City knowing the first lady is in New York.

In the end, Moscow is destroyed by a U.S. nuclear weapon, and the president orders a nuclear bomb to be dropped on New York City with the Empire State Building as ground zero. The expectation is that the sacrifice of New York in exchange for Moscow will end the escalation, but that is not portrayed in the film.

The next step is left in doubt.

Although Fail Safe is 60 years old, the issues it raises and some of its plot twists are strikingly contemporary. The computer error that caused the attack in the film is never explained technically, but that hardly matters.

Computer errors occur all the time in critical infrastructure and can cause real harm, including power blackouts and train wrecks. Such computer errors are the essence of the debate over AI in strategic systems today.

Read on to see why…

Could AI Start a Nuclear War?

AI in a command-and-control context can either malfunction and issue erroneous orders as in Fail Safe or, more likely, function as designed yet issue deadly prescriptions based on engineering errors, skewed training sets or strange emergent properties from correlations that humans can barely perceive.
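To see how a system can “function as designed” and still point toward catastrophe, consider a toy sketch in Python. Every number and name below is invented for illustration; nothing here resembles any real command-and-control system. The detector’s code is identical in both runs; only the base rate it was trained with changes, and that skew alone flips the output from “stand down” to a recommendation to escalate:

```python
# Toy sketch, purely hypothetical: a detector that works exactly as designed
# can still recommend escalation if the prior it was trained on is skewed.
# Nothing here resembles any real command-and-control system.

def attack_probability(sensor_hits: int, base_rate: float) -> float:
    """Naive score: each sensor hit multiplies the odds of an attack by a
    fixed likelihood ratio. base_rate is the prior the system was trained
    with; that is where a skewed training set does its damage."""
    likelihood_ratio = 20.0  # how strongly one hit favors "attack"
    odds = base_rate / (1.0 - base_rate)
    for _ in range(sensor_hits):
        odds *= likelihood_ratio
    return odds / (1.0 + odds)

THRESHOLD = 0.95  # recommend escalation above this confidence

# Same five sensor hits, two different training priors:
for prior in (1e-9, 1e-4):  # realistic vs. skewed base rate of attack
    p = attack_probability(5, prior)
    action = "RECOMMEND ESCALATION" if p > THRESHOLD else "stand down"
    print(f"prior={prior:.0e}  p(attack)={p:.6f}  -> {action}")
```

The point is not the arithmetic but the failure mode: no component malfunctioned, yet a bad training assumption quietly became a bad recommendation.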

Perhaps most familiar to contemporary audiences are the failed efforts of the president and Col. Grady’s wife to convince the bomber commander to call off the attack. Grady had been trained to expect such efforts and to treat them as deceptions.

Today, such deceptions would be carried out with deepfake video and audio transmissions. Presumably, the commander’s training and dismissal of the pleas would be the same despite the more sophisticated technology behind them. Technology advances, yet aspects of human behavior are unchanged.
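In principle there is a well-understood defense against this class of deception: authenticate the message rather than the voice. Below is a minimal sketch using Python’s standard hmac module; the key, the order text, and the whole scheme are hypothetical, and real nuclear command authentication is classified and assuredly different. A deepfake can imitate a president’s voice, but it cannot forge a keyed tag without the shared secret:

```python
# Hypothetical illustration: a recall order authenticated with an HMAC tag.
# A deepfake can spoof audio or video, but altering or inventing an order
# invalidates the tag unless the attacker holds the shared secret key.
import hashlib
import hmac

SHARED_KEY = b"pre-positioned secret held by crew and command"  # hypothetical

def sign(order: bytes) -> bytes:
    """Compute the keyed tag that accompanies a transmitted order."""
    return hmac.new(SHARED_KEY, order, hashlib.sha256).digest()

def verify(order: bytes, tag: bytes) -> bool:
    """Constant-time check that the order matches its tag."""
    return hmac.compare_digest(sign(order), tag)

order = b"RECALL: abort strike"
tag = sign(order)

print(verify(order, tag))                   # True: genuine recall order
print(verify(b"PROCEED WITH STRIKE", tag))  # False: spoofed or altered order
```

Of course, as the film shows, the binding constraint is human: a crew trained to treat every recall as a ploy will ignore an authentic one too.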

Another misunderstanding, this one real rather than fictional, came close to causing a nuclear war: the 1983 episode surrounding a NATO exercise codenamed Able Archer.

The roots of Able Archer go back to May 1981 when then General Secretary of the Communist Party of the Soviet Union Leonid Brezhnev and KGB head Yuri Andropov (later general secretary) disclosed to senior Soviet leaders their view that the U.S. was secretly preparing to launch a nuclear strike on the Soviet Union.

Andropov then announced a massive intelligence collection effort to track the people who would be responsible for launching and implementing such an attack along with their facilities and communications channels.

At the same time, the Reagan administration began a series of secret military operations that aggressively probed Soviet waters with naval assets and flew directly toward Soviet airspace with strategic bombers that backed away only at the last instant.

These probes were ostensibly meant to test Soviet defenses but had the effect of confirming Soviet perceptions that the U.S. was planning a nuclear attack.

Analysts agree that the greatest risk of escalation and actual nuclear war arises when perceptions of the two sides vary in such a way as to make rational assessment of the escalation dynamic impossible. The two sides are on different paths making different calculations.

Tensions rose further in 1983 when the U.S. Navy flew F-14 Tomcat fighter jets over a Soviet military base in the Kuril Islands, and the Soviets responded by flying over Alaska’s Aleutian Islands. On Sept. 1, 1983, Soviet fighter jets shot down Korean Air Lines Flight 007 over the Sea of Japan. A U.S. congressman was on board.

On November 4, 1983, the U.S. and NATO allies commenced an extensive war game codenamed Able Archer. This was intended to simulate a nuclear attack on the Soviet Union following a series of escalations.

The problem was that the escalations were written out in the war game briefing books but never actually played out; the only thing participants simulated was the transition from conventional warfare to nuclear war.

This came at a time when the Soviets and the KGB were actively looking for signs of a nuclear attack. The simulations involving NATO Command, Control and Communications protocols were highly realistic, including participation by German Chancellor Helmut Kohl and UK Prime Minister Margaret Thatcher. The Soviets plausibly believed that the war game was actually cover for a real attack.

In the belief that the U.S. was planning a nuclear first strike, the Soviets determined that their only course for survival was to launch a preemptive first strike of their own. They ordered nuclear warheads to be placed on Soviet Air Army strategic bombers and put nuclear attack aircraft in Poland and East Germany on high alert.

This real-life brush with nuclear war had a backstory that is even more chilling. The Soviets had previously built a satellite-based early warning system, codenamed Oko, whose computer linkages amounted to a primitive kind of AI.

On September 26, 1983, some six weeks before Able Archer, the system malfunctioned and reported five incoming ICBMs from the United States. Oko alarms sounded and the computer screen flashed “LAUNCH.” Under the protocols, the LAUNCH display was not a warning but a computer-generated order to retaliate.

Lt. Col. Stanislav Petrov of the Soviet Air Defense Forces saw the computer order and had to choose immediately between treating it as a computer malfunction and alerting his senior officers, who would likely commence a nuclear counterattack.

Petrov was a co-developer of Oko and knew the system made mistakes. He also estimated that if the attack were real, the U.S. would use far more than five missiles. Petrov was right. The computer had misread the sun’s reflection off some clouds as incoming missiles.

Given the tensions of the day and the KGB’s belief that a nuclear attack could come at any time, Petrov risked the future of the Soviet Union to override the Oko system. He relied on a combination of inference, experience, and gut instinct to disable the kill-chain.
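Petrov’s “far more than five missiles” argument is, in effect, Bayesian reasoning: a genuine first strike arriving as exactly five tracks is wildly unlikely, while a sensor fault producing five tracks is not. Here is a toy rendering in Python; the probabilities are invented for illustration, and this is not a model of Oko or of Petrov’s actual thinking:

```python
# Toy Bayesian version of Petrov's inference. All numbers are assumptions
# chosen only to illustrate the shape of the reasoning.
p_attack = 0.01             # assumed prior: an attack is underway at all
p_five_given_attack = 1e-4  # a real U.S. first strike with only five missiles
p_five_given_fault = 0.05   # a sensor fault producing five tracks

# Bayes' rule: P(attack | five tracks)
numerator = p_five_given_attack * p_attack
posterior = numerator / (numerator + p_five_given_fault * (1 - p_attack))
print(f"P(attack | five tracks) = {posterior:.6f}")  # ~0.000020: stand down
```

The machine reported a launch; the human asked how probable a launch that looked like this one actually was. That gap between detection and judgment is exactly the role the article argues must remain human.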

The incident remained secret until well after the end of the Cold War. In time, Petrov was praised as “The Man Who Saved the World.”

The threat of nuclear war due to AI comes not just from the nuclear-armed powers but from third parties and non-state actors using AI to create what are called catalytic nuclear disasters. The term comes from chemistry, where a catalyst triggers or accelerates a reaction among other compounds without being consumed by it.

As applied in international relations, it refers to agents who might prompt a nuclear war among the great powers without themselves being involved in the war. That could leave the weak agent in a relatively strong position once the great powers had destroyed themselves.

AI/GPT systems have already found their way into the nuclear warfighting process. It will be up to humans to keep their role marginal and data-oriented, not decision-oriented. Given the history of technology in warfare, from bronze spears to hypersonic missiles, it’s difficult to conclude that AI/GPT will be so contained. If not, we will all pay the price.

Ukraine, Gaza, and AI all raise the odds of a nuclear war considerably. The financial implications of this for investors are simple. In case of nuclear war, stocks, bonds, cash and other financial assets will be worthless. Exchanges and banks will be closed. The only valuable assets will be land, gold and silver.

It’s a good idea to have all three — just in case.

10 Comments
Cpt_Obviuos
January 26, 2024 10:02 pm

Twice the fear pr0n in one article! Nicely done.

And the answer to the question: NO. A.I. can’t start fake wars.

That’s Israel’s job.

NORAD WOPR
January 26, 2024 10:13 pm

[image]

Cricket
  NORAD WOPR
January 27, 2024 12:06 am

Right? Did no one watch WarGames and think turning over national security to a computer might be a problem? And did no one watch The Terminator a year later and think letting autonomous computers run weapons systems might also be a problem? Finally… did you not see The Matrix?

WTF… what if Hollywood/the CIA’s predictive programming didn’t tell the masses exactly what those who think they should rule over us plan to do? 🙄

If you haven’t noticed, America hasn’t won a war since WWII, and even then, they showed up at 2 minutes to midnight when all other friends and foes had already suffered terrible casualties and had little left to keep fighting. Meanwhile, since WWII, those who want endless wars have killed and maimed millions of people around the world in the name of ‘we have to kill them over there so they don’t do it here’, all while encouraging and supporting millions of people without the skills to exist in Western society to invade our countries.

Until you accept that easily corrupted psychopaths who seek government office and currently have power really do want to burn our countries to the ground so they can rule over the ashes, this will never get better.

The Central Scrutinizer
  Cricket
January 27, 2024 11:17 am

Science is like Socialism… “It just hasn’t been done right yet!”

I’m sort of ashamed I even had to type that, but it’s the truth.

As I see it, our situation has not improved since the Expulsion from the Garden.

pro stoae
January 26, 2024 10:19 pm

The title and stock photo of robots certainly don’t help dispel the first impression that this article is more AI fear porn, but after reading it I can concede that this is a plausible scenario. Warning systems and prediction algorithms are exactly what you use this stuff for in the big boy world.

But everyone in data knows you must always pair technical systems with a “domain expertise” human who gets a veto, i.e., Stanislav Petrov.

AI will never start anything. But humans full of hubris who forgot to include an adult in the room certainly will.

B_MC
  pro stoae
January 27, 2024 7:34 am

AI will never start anything.

Correct. A few days ago, Clif High went into the details of how AI actually works.

Starts a little slowly, but then really gets going…

AI & Bullshit


Anonymous
January 27, 2024 12:38 am

Well, if nukes were actually real and not films of “yellowcake powder, fuel-air bombs” ignited by lightning bolts triggered into the fuel/air mass by conductor-trailing rockets.

Anonymous
January 27, 2024 1:50 am

Nukes are a big threat/deterrent, but using them is a much bigger liability. It’s like living in a suburb and firing artillery at a neighbor. Yeah, you might kill them, but once everyone else catches their breath, they are going to burn you out and salt the earth. It’s better to have it and not use it. I question any country actually being prepared to use them for any reason short of a multi-year buildup for WW*

That being said, I wouldn’t be surprised if they were used because some tard somewhere got deepfaked into it.

Anonymous
January 27, 2024 7:37 am

“Will” would be a better term than “could.”

Tlate
January 27, 2024 7:45 am

Of course it could; didn’t you see WarGames and The Terminator? More foreshadowing by Hollywood. Musk implies it is already too late to stop. However, people do not give AI enough credit: better to wipe out humans by taking over the power grid, launching a fatal virus, etc. You do not want to destroy computers and control systems with a nuclear war if you are AI. Self-destruction is a human trait I do not think AI will copy.