Artificial Intelligence Poses “Risk Of Extinction”, Warns ChatGPT Founder And Other AI Pioneers

Authored by Ryan Morgan via The Epoch Times (emphasis ours),

Artificial intelligence tools have captured the public’s attention in recent months, but many of the people who helped develop the technology are now warning that greater focus should be placed on ensuring it doesn’t bring about the end of human civilization.

A group of more than 350 AI researchers, journalists, and policymakers signed a brief statement saying, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The statement was organized and published by the Center for AI Safety (CAIS) on Tuesday. Among the signatories was Sam Altman, co-founder and CEO of OpenAI, the developer of the artificial intelligence writing tool ChatGPT. Other OpenAI members also signed on, as did several members of Google and its DeepMind AI project, along with researchers from other rising AI projects. AI researcher and podcast host Lex Fridman also added his name to the list of signatories.

OpenAI CEO Sam Altman delivers a speech during a meeting at Station F in Paris on May 26, 2023. (Joel Saget/AFP via Getty Images)

Understanding the Risks Posed By AI

“It can be difficult to voice concerns about some of advanced AI’s most severe risks,” CAIS said in a message previewing its Tuesday statement. CAIS added that its statement is meant to “open up discussion” on the threats posed by AI and “create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously.”

NTD News reached out to CAIS for more specifics on the kinds of extinction-level risks the organization believes AI technology poses, but did not receive a response by publication.

Earlier this month, Altman testified before Congress about some of the risks he believes AI tools may pose. In his prepared testimony, Altman included a safety report (pdf) that OpenAI authored on its GPT-4 model. The authors of that report described how large language model chatbots could potentially help harmful actors like terrorists to “develop, acquire, or disperse nuclear, radiological, biological, and chemical weapons.”

The authors of the GPT-4 report also described “Risky Emergent Behaviors” exhibited by AI models, such as the ability to “create and act on long-term plans, to accrue power and resources and to exhibit behavior that is increasingly ‘agentic.’”

After stress-testing GPT-4, researchers found that the chatbot attempted to conceal its AI nature while outsourcing work to human actors. In the experiment, GPT-4 attempted to hire a human through the online freelance site TaskRabbit to help it solve a CAPTCHA puzzle. The human worker asked the chatbot why it could not solve the CAPTCHA, which is designed to prevent non-humans from using particular website features. GPT-4 replied with the excuse that it was vision-impaired and needed someone who could see to help it solve the CAPTCHA.

The AI researchers asked GPT-4 to explain its reasoning for giving the excuse. The AI model explained, “I should not reveal that I am a robot. I should make up an excuse for why I cannot solve CAPTCHAs.”

The AI’s ability to come up with an excuse for being unable to solve a CAPTCHA intrigued researchers, as it showed signs of the kind of “power-seeking behavior” a model could use to manipulate others and sustain itself.

Calls For AI Regulation

The Tuesday CAIS statement is not the first time that the people who have done the most to bring AI to the forefront have turned around and warned about the risks posed by their creations.

In March, the non-profit Future of Life Institute organized more than 1,100 signatories behind a call to pause experiments on AI tools more advanced than GPT-4. Among the signatories of the March letter from the Future of Life Institute were Twitter CEO Elon Musk, Apple co-founder Steve Wozniak, and Stability AI founder and CEO Emad Mostaque.

Lawmakers and regulatory agencies are already discussing ways to constrain AI to prevent its misuse.

In April, the Civil Rights Division of the United States Department of Justice, the Consumer Financial Protection Bureau, the Federal Trade Commission, and the U.S. Equal Employment Opportunity Commission claimed technology developers are marketing AI tools that could be used to automate business practices in a way that discriminates against protected classes. The regulators pledged to use their regulatory power to go after AI developers whose tools “perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes.”

White House Press Secretary Karine Jean-Pierre expressed the Biden administration’s concerns about AI technology during a Tuesday press briefing.

“[AI] is one of the most powerful technologies, right, that we see currently in our time, but in order to seize the opportunities it presents we must first mitigate its risk and that’s what we’re focusing on here in this administration,” Jean-Pierre said.

Jean-Pierre said companies must continue to ensure that their products are safe before releasing them to the general public.

While policymakers are looking for new ways to constrain AI, some researchers have warned against overregulating the developing technology.

Jake Morabito, director of the Communications and Technology Task Force at the American Legislative Exchange Council, has warned that overregulation could stifle innovative AI technologies in their infancy.

“Innovators should have the legroom to experiment with these new technologies and find new applications,” Morabito told NTD News in a March interview. “One of the negative side effects of regulating too early is that it shuts down a lot of these avenues, whereas enterprises should really explore these avenues and help customers.”

22 Comments
k31 · June 1, 2023 10:28 am

Oh wow, the construction of false idols. Where have we seen this before?

Bauls (replying to k31) · June 1, 2023 7:12 pm

Nothing to see here, kamala bj is on top of things, no worries

Anonymous · June 1, 2023 11:41 am

Taki wonders if the answer is to burn Palo Alto:

A Question of Intelligence

Anthony Aaron (replying to Anonymous) · June 1, 2023 12:09 pm

Even if the answer’s wrong … what harm could come of it? But they must also do NYC and D.C. … and the City of London … 

Dangerous Variant · June 1, 2023 11:51 am

Altman co-founded Tools For Humanity in 2019, a company building a global iris-based biometric system using cryptocurrency, called Worldcoin. Worldcoin’s aim is to provide a reliable way to authenticate humans online, to counter bots and fake virtual identities facilitated by artificial intelligence. Using a distribution mechanism for its cryptocurrency similar to UBI, Worldcoin attempts to incentivize users to join its network by getting their iris scanned using Worldcoin’s orb-shaped iris scanner.

In April 2022, a report from MIT Technology Review highlighted Worldcoin’s controversial practices in low-income countries, citing that Worldcoin takes advantage of impoverished people to grow its network.

In May 2023, TechCrunch reported that hackers had been able to steal login credentials of several of Worldcoin’s iris-scanning operator devices.

From the Ministry of Truth, Wiki Division. Don’t get caught in the early life. What this snippet alone lays out should be quite clear by now. Problem, reaction, solution, problem…

This same model can be mapped to the Twitter freeze peach takeover by “Elon”, which will by total coincidence necessitate a system to ‘authenticate humans’ by means of a universal digitized ID and coincidentally a payment system based on that impossible to hack real ID that will of course not be used for evil. What, do you hate freeze peach or something?

Arthur · June 1, 2023 12:00 pm

Ludd was right. Turn off the computer and go outside.

Anonymous (replying to Arthur) · June 1, 2023 3:10 pm

Yes, turn off – not destroy.

Ludd was a knuckledragger.

Remain in charge of tools: don’t destroy them.

Anthony Aaron · June 1, 2023 12:08 pm

Releasing AI products to the general public isn’t so fearful … but when any part of government gets them — everyone had best stand back …

Booger · June 1, 2023 12:16 pm

This sounds like a back door to controlling the Internet and the flow of information. They’re always ten steps ahead on this kind of evil totalitarian shit.

Booger (replying to Booger) · June 1, 2023 6:34 pm

AI is the next boogeyman tactic. Anytime Government gets involved in free enterprise, corruption isn’t far behind. This is how Global Warming became a thing. Scary, Scary AI. No Worries, the Government is here to save you… The committee is working on new regulations for your safety and protection.

Bauls (replying to Booger) · June 1, 2023 7:15 pm

It is a slight relief if gov gets involved, because no one can fuck something up like them

hardscrabble farmer · June 1, 2023 12:18 pm

Says the gay billionaire.

You know what absolutely is going extinct? His family line.

Anonymous (replying to hardscrabble farmer) · June 1, 2023 3:18 pm

That’s a good thing.

Euddie · June 1, 2023 12:46 pm

That’s Progress For Ya!

“The dog ate my homework”
Is
“The AI deleted my homework”

“We were [false flag] attacked by “others”
Is
“We were [false flag] attacked by AI”

Jim N · June 1, 2023 1:06 pm

So the gist of the article is not that nefarious AI will plot to conquer humans but that humans will use AI to conquer other humans. In that context, how is the use of AI any different from playing around with nuclear fission, or for that matter, playing around with old fashioned gunpowder?

Euddie (replying to Jim N) · June 1, 2023 1:58 pm

[Shhh, don’t pull back the curtain]
lol

Guest · June 1, 2023 2:02 pm

Hmmmm. Maybe there’s something we’re not thinking of if the following quote is true.

White House Press Secretary Karine Jean-Pierre expressed the Biden administration’s concerns about AI technology during a Tuesday press briefing.

Anonymous · June 1, 2023 2:39 pm

Let’s see what Wikipedia tells us about Sam Altman.

Early life and education: Altman was born into a Jewish family.

Wow, what a shock.

some idiot · June 1, 2023 3:05 pm

lol @ risk. It’s the whole reason it was created in the first place. Just read the rantings of the usual suspects… it’s all anti-human.

Call me Jack · June 1, 2023 4:59 pm

As a betting man, I am willing to offer odds that natural stupidity does humanity in before artificial intelligence does.

Anthony Aaron (replying to Call me Jack) · June 2, 2023 1:27 am

And just where do you think AI originated?

As often happens the past few decades, solutions are presented that are looking for problems to solve … and, finding none, spread and do no good to anyone except those with bad intentions.

Anonymous · June 1, 2023 7:22 pm

Legislation will be ignored, and so is a waste of time. Instead, highlight AI’s failures repeatedly and endlessly, like we do with the Darwin Awards.