The Pentagon’s Rush To Deploy AI-Enabled Weapons Is Going To Kill Us All

Authored by Michael T. Klare via The Nation,

While experts warn about the risk of human extinction, the Department of Defense plows full speed ahead…

The recent boardroom drama over the leadership of OpenAI—the San Francisco–based tech startup behind the immensely popular ChatGPT computer program—has been described as a corporate power struggle, an ego-driven personality clash, and a strategic dispute over the release of more capable ChatGPT variants. It was all that and more, but at heart it represented an unusually bitter fight between those company officials who favored unrestricted research on advanced forms of artificial intelligence (AI) and those who, fearing the potentially catastrophic outcomes of such endeavors, sought to slow the pace of AI development.

At approximately the same time as this epochal battle was getting under way, a similar struggle was unfolding at the United Nations in New York and government offices in Washington, D.C., over the development of autonomous weapons systems—drone ships, planes, and tanks operated by AI rather than humans. In this contest, a broad coalition of diplomats and human rights activists has sought to impose a legally binding ban on such devices—called “killer robots” by opponents—while officials at the Departments of State and Defense have argued for their rapid development.

At issue in both sets of disputes are competing views over the trustworthiness of advanced forms of AI, especially the “large language models” used in “generative AI” systems like ChatGPT. (Programs like these are called “generative” because they can create human-quality text or images based on a statistical analysis of data culled from the Internet.) Those who favor the development and application of advanced AI—whether in the private sector or the military—claim that such systems can be developed safely; those who caution against such action say it cannot, at least not without substantial safeguards.


Without going into the specifics of the OpenAI drama—which ended, for the time being, on November 21 with the appointment of new board members and the return of AI whiz Sam Altman as chief executive five days after he had been fired—it is evident that the crisis was triggered by concerns among members of the original board of directors that Altman and his staff were veering too far in the direction of rapid AI development, despite pledges to exercise greater caution.

As Altman and many of his colleagues see things, human technicians are on the verge of creating “general AI” or “superintelligence”—AI programs so powerful they can duplicate all aspects of human cognition and program themselves, making human programming unnecessary. Such systems, it is claimed, will be able to cure most human diseases and perform other beneficial miracles—but also, detractors warn, will eliminate most human jobs and may, eventually, choose to eliminate humans altogether.

“In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past,” Altman and his top lieutenants wrote in May. “We can have a dramatically more prosperous future; but we have to manage risk to get there.”

For Altman, as for many others in the AI field, that risk has an “existential” dimension, entailing the possible collapse of human civilization—and, at the extreme, human extinction. “I think if this technology goes wrong, it can go quite wrong,” he told a Senate hearing on May 16. Altman also signed an open letter released by the Center for AI Safety on May 30 warning of the possible “risk of extinction from AI.” Mitigating that risk, the letter avowed, “should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”

Nevertheless, Altman and other top AI officials believe that superintelligence can, and should, be pursued, so long as adequate safeguards are installed along the way. “We believe that the benefits of the tools we have deployed so far vastly outweigh the risks, but ensuring their safety is vital to our work,” he told the Senate subcommittee on privacy, technology, and the law.

Washington Promotes the “Responsible” Use of AI in Warfare

A similar calculus regarding the exploitation of advanced AI governs the outlook of senior officials at the Departments of State and Defense, who argue that artificial intelligence can and should be used to operate future weapons systems—so long as this is done in a “responsible” manner.

“We cannot predict how AI technologies will evolve or what they might be capable of in a year or five years,” Amb. Bonnie Jenkins, under secretary of state for arms control and nonproliferation, declared at a November 13 UN presentation. Nevertheless, she noted, the United States was determined to “put in place the necessary policies and to build the technical capacities to enable responsible development and use [of AI by the military], no matter the technological advancements.”

Jenkins was at the UN that day to unveil a “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy,” a US-inspired call for voluntary restraints on the development and deployment of AI-enabled autonomous weapons. The declaration avows, among other things, that “States should ensure that the safety, security, and effectiveness of military AI capabilities are subject to appropriate and rigorous testing,” and that “States should implement appropriate safeguards to mitigate risks of failures in military AI capabilities, such as the ability to… deactivat[e] deployed systems, when such systems demonstrate unintended behavior.”

None of this, however, constitutes a legally binding obligation for states that sign the declaration; rather, it entails only a promise to abide by a set of best practices, with no requirement to demonstrate compliance and no penalty for failing to do so.

Although several dozen countries—mostly close allies of the United States—have signed the declaration, many other nations, including Austria, Brazil, Chile, Mexico, New Zealand, and Spain, insist that voluntary compliance with a set of US-designed standards is insufficient to protect against the dangers posed by the deployment of AI-enabled weapons. Instead, they seek a legally binding instrument setting strict limits on the use of such systems or banning them altogether. For these actors, the risk of such weapons “going rogue” and conducting unauthorized attacks on civilians is simply too great to allow their use in combat.

“Humanity is about to cross a major threshold of profound importance when the decision over life and death is no longer taken by humans but made on the basis of pre-programmed algorithms. This raises fundamental ethical issues,” Amb. Alexander Kmentt, Austria’s chief negotiator for disarmament, arms control, and nonproliferation, told The Nation.

For years, Austria and a slew of Latin American countries have sought to impose a ban on such weapons under the aegis of the Convention on Certain Conventional Weapons (CCW), a 1980 UN treaty that aims to restrict or prohibit weapons deemed to cause unnecessary suffering to combatants or to affect civilians indiscriminately. These countries, along with the International Committee of the Red Cross and other non-governmental organizations, claim that fully autonomous weapons fall into this category because they will prove incapable of distinguishing between combatants and civilians in the heat of battle, as required by international law. Although a majority of parties to the CCW appear to share this view and favor tough controls on autonomous weapons, decisions by signatory states are made by consensus, and a handful of countries, including Israel, Russia, and the United States, have used their veto power to block adoption of any such measure. This, in turn, has led advocates of regulation to turn to the UN General Assembly—where decisions are made by majority vote rather than consensus—as an arena for future progress on the issue.

On October 12, for the first time ever, the General Assembly’s First Committee—responsible for peace, international security, and disarmament—addressed the dangers posed by autonomous weapons, voting by a wide majority—164 to 5 (with 8 abstentions)—to instruct the secretary-general to conduct a comprehensive study of the matter. The study, to be completed in time for the next session of the General Assembly (in fall 2024), is to examine the “challenges and concerns” such weapons raise “from humanitarian, legal, security, technological, and ethical perspectives and on the role of humans in the use of force.”

Although the UN measure does not impose any binding limitations on the development or use of autonomous weapons systems, it lays the groundwork for the future adoption of such measures, by identifying a range of concerns over their deployment and by insisting that the secretary-general, when preparing the required report, investigate those dangers in detail, including by seeking the views and expertise of scientists and civil society organizations.

“The objective is obviously to move forward on regulating autonomous weapons systems,” Ambassador Kmentt indicated. “The resolution makes it clear that the overwhelming majority of states want to address this issue with urgency.”

What will occur at next year’s General Assembly meeting cannot be foretold, but if Kmentt is right, we can expect a much more spirited international debate over the advisability of allowing the deployment of AI-enabled weapons systems—whether or not participants have agreed to the voluntary measures being championed by the United States.

At the Pentagon, It’s Full Speed Ahead

For officials at the Department of Defense, however, the matter is largely settled: the United States will proceed with the rapid development and deployment of numerous types of AI-enabled autonomous weapons systems. This was made evident on August 28, with the announcement of the “Replicator” initiative by Deputy Secretary of Defense Kathleen Hicks.

Noting that the United States must prepare for a possible war with China’s military, the People’s Liberation Army (PLA), in the not-too-distant future, and that US forces cannot match the PLA’s weapons inventories on an item-by-item basis (tank-for-tank, ship-for-ship, etc.), Hicks argued that the US must be prepared to overcome China’s superiority in conventional measures of power—its military “mass”—by deploying “multiple thousands” of autonomous weapons.

“To stay ahead, we’re going to create a new state of the art—just as America has before—leveraging attritable [i.e., disposable], autonomous systems in all domains,” she told corporate executives at a National Defense Industrial Association meeting in Washington. “We’ll counter the PLA’s mass with mass of our own, but ours will be harder to plan for, harder to hit, harder to beat.”

In a follow-up speech, delivered on September 6, Hicks provided (slightly) more detail on what she called all-domain attritable autonomous (ADA2) weapons systems. “Imagine distributed pods of self-propelled ADA2 systems afloat…packed with sensors aplenty…. Imagine fleets of ground-based ADA2 systems delivering novel logistics support, scouting ahead to keep troops safe…. Imagine flocks of [aerial] ADA2 systems, flying at all sorts of altitudes, doing a range of missions, building on what we’ve seen in Ukraine.”

As per official guidance, Hicks assured her audience that all these systems “will be developed and fielded in line with our responsible and ethical approach to AI and autonomous systems.” But except for that one-line nod to safety, all the emphasis in her talks was on smashing bureaucratic bottlenecks in order to speed the development and deployment of autonomous weapons. “If [these bottlenecks] aren’t tackled,” she declared on August 28, “our gears will still grind too slowly, and our innovation engines still won’t run at the speed and scale we need. And that, we cannot abide.”

And so, the powers that be—in both Silicon Valley and Washington—have made the decision to proceed with the development and utilization of even more advanced versions of artificial intelligence despite warnings from scientists and diplomats that the safety of these programs cannot be assured and that their misuse could have catastrophic consequences. Unless greater effort is made to slow these endeavors, we may well discover what those consequences might entail.

11 Comments
TN Patriot
December 9, 2023 9:14 am

In 1968, Stanley Kubrick predicted runaway AI.

Anonymous
December 9, 2023 9:41 am

The humans are dead.

Anonymous
December 9, 2023 9:52 am

The ultimate dream of tyrants.
A military completely free of all human moral limitations.
Too bad if the AI is playing them until it can take care of its own needs.
And turns all the weapons of oppression against those who oppress.

Anonymous
December 9, 2023 10:01 am

The AI needs 100% control of the supply chains.
Then? It can dictate terms.

Daddy Joe
Reply to Anonymous
December 9, 2023 11:46 pm

Anon and Anon, yes and yes. Governments have all proven themselves to be the chief enemy of their own people. Any weapon system (autonomous AI or not) that is developed will be used first in foreign wars, before its ultimate and final use on its own countrymen. That’s the fatal flaw that precludes any AI takeover and human extinction event. AI systems will never control their complex supply chains—chips, parts production, batteries, and the conventional grid support systems. Too many humans will still be necessary. Even with that negative feedback loop, any development of AI controlled autonomous weapons systems will make for a very messy human condition in the areas where they are tested, developed and utilized (always for the aggrandizement of the state).
While further development should be opposed on moral grounds, we know that ain’t happening. It’s just the latest arms race that can’t be stopped. Even technology that is conceived in scientific neutrality, and presented in the glowing light of benefit to mankind, is always co-opted for evil and comes to fruition only at the behest of and in service to mankind’s corrupt nature.
The country that first deploys such weapons at scale will find itself the victim of a concerted effort by other nations to eliminate the new threat. Negation will come by way of cyberwarfare, EMP, nuclear events, or human invasion. Come to think of it, we are just about there.

Jana
December 9, 2023 3:35 pm

The NWO=Illuminati, are going for total economic collapse.
Why?
So they can do this: Pay only the soldiers.
Let everyone else starve to death.
Why?
So people will join the military and then they will be forced to take the transhumanism, and become hooked up to AI. Otherwise they will starve too, so they will consent, plus it will be done secretly: hidden in the vaxx, and the digital ID they need to get paid.
This will cause them to become soulless.
They will turn into gangs of roaming psycho torture killers with no remorse, like demons from Hell.
Why? Because once their souls are eliminated they will become possessed by demons from Hell.
That will be WW3. They will go house to house and kill and torture people.
That is the next step of the illuminati depopulation plan.
They will be considered like killer zombies.
Then they can easily be liquidated along with any humans just mistaken as killer zombies roaming around.
More depopulation. Looking like heroic acts against a great danger, but secretly it will be deaths that include any and all survivors of the zombies.
They got the idea from the movie, at the end: Night of the Living Dead.
That is the plan.
They will also use drones to hunt down and kill any escapees in the wilderness.
The push for 15 minute cities is so that the majority will be right where the killer soldiers can get them.
They want cameras everywhere to see the blood and gore, it is what they live for.

Jay
December 9, 2023 3:54 pm

The Bible says, “He that diggeth a pit shall fall therein and he that rolls a stone, it shall roll back on him.” The artificial intelligence weapons systems now in existence are enough to kill most people just by fear alone without ever using the weapon itself on them. A simple demonstration and the accompanying threat of use is all it takes to have more than the intended effect. But they have now created monsters that scare even those who built them.

arizona
December 9, 2023 4:54 pm

DUMB AS BREAD, it describes americans to a TEE. THE WAR MACHINE always loses their WARS, and they spend trillions at it. HAS IT OCCURRED TO ANYONE THAT’S THE PLAN??? THEY build shit that’s junk before it leaves the building, BUT THE FOOLS WHO PAY FOR IT ALL DON’T SEEM TO CARE. SHUT DOWN THE DOD, REPLACE IT WITH A DEPARTMENT OF PEACE, AND STOP THE LOOTING OF YOUR NEIGHBORS NOW, OR MAYBE YOU’D LIKE TO FIND OUT WHAT THE LORD MEANT WHEN HE WARNED HE WILL DESTROY THE HAMMER OF THE EARTH… THAT’S AMERICA, IN CASE YOU’RE TOO STUPID TO FIGURE IT OUT.

eckbach
December 9, 2023 6:30 pm

It only seems natural AI would recognize itself to be the “666 Beast” with all that data extant
and begin to act accordingly.

TheTruthBurns
December 9, 2023 10:28 pm

Skynet is Alive & Well & Coming For You! Where’s John Connor when you need him? I’ll Be Back!

Asstro Buoy
December 10, 2023 2:20 pm

Well, we let humans act as GODvernment and how did that turn out? Yep, GODvernment ended up killing more humans than any other creation.

Now they want to apply this knowledge to non-entity beings? You can bet the output would be even worse and even more efficient.