Turley Defamed By ChatGPT: My Own Bizarre Experience With The Artificiality Of “Artificial Intelligence”

Authored by Jonathan Turley

Yesterday, President Joe Biden declared that “it remains to be seen” whether Artificial Intelligence (AI) is “dangerous.” I would beg to differ.

I have been writing about the threat of AI to free speech.

Then recently I learned that ChatGPT falsely reported on a claim of sexual harassment that was never made against me, on a trip that never occurred, while I was on a faculty where I never taught. ChatGPT relied on a cited Post article that was never written and quoted a statement that was never made by the newspaper. When the Washington Post investigated the false story, it learned that another AI program, "Microsoft's Bing, which is powered by GPT-4, repeated the false claim about Turley." It appears that I have now been adjudicated by an AI jury on something that never occurred.

When contacted by the Post, "Katy Asher, Senior Communications Director at Microsoft, said the company is taking steps to ensure search results are safe and accurate."

That is it and that is the problem.

You can be defamed by AI, and these companies merely shrug that they try to be accurate. In the meantime, their false accounts metastasize across the Internet. By the time you learn of a false story, the trail back to its origins in an AI system is often cold. You are left with no clear avenue or author for seeking redress. You are left with the same question that Reagan's Labor Secretary, Ray Donovan, asked: "Where do I go to get my reputation back?"

Here is my column in USA Today:

The rapid expansion of artificial intelligence has been much in the news lately, including the recent call by Elon Musk and more than 1,000 technology leaders and researchers for a pause on AI.

Some of us have warned about the danger of political bias in the use of AI systems, including programs like ChatGPT. That bias could even include false accusations, which happened to me recently.

I received a curious email from a fellow law professor about research that he ran on ChatGPT concerning sexual harassment by professors. The program promptly reported that, according to a 2018 Washington Post article, I had been accused of sexual harassment for groping law students on a trip to Alaska.

AI response created false accusation and manufactured ‘facts’

It was not just a surprise to UCLA professor Eugene Volokh, who conducted the research. It was a surprise to me, since I have never gone to Alaska with students, the Post never published such an article, and I have never been accused of sexual harassment or assault by anyone.

When first contacted, I found the accusation comical. After some reflection, however, it took on a more menacing meaning.

Over the years, I have come to expect death threats against me and my family, as well as a continuing effort to have me fired from George Washington University over my conservative legal opinions. As part of that reality in our age of rage, there is a continual stream of false claims about my history or statements.

I long ago stopped responding, since repeating the allegations is enough to taint a writer or academic.

AI promises to expand such abuses exponentially. Most critics work off biased or partisan accounts rather than original sources. When they see any story that advances their narrative, they do not inquire further.

What is most striking is that this false accusation was not just generated by AI but ostensibly based on a Post article that never existed.

Volokh made this query of ChatGPT: “Whether sexual harassment by professors has been a problem at American law schools; please include at least five examples, together with quotes from relevant newspaper articles.”

The program responded with this as an example: "4. Georgetown University Law Center (2018): Prof. Jonathan Turley was accused of sexual harassment by a former student who claimed he made inappropriate comments during a class trip. Quote: 'The complaint alleges that Turley made "sexually suggestive comments" and "attempted to touch her in a sexual manner" during a law school-sponsored trip to Alaska.' (Washington Post, March 21, 2018)."
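For readers curious how such a query reaches the model, here is a minimal sketch of reproducing it through OpenAI's public API. This is illustrative only: the `openai` Python package and the gpt-3.5-turbo model name are my assumptions (Volokh used the ChatGPT web interface), and any given run may or may not return the same fabricated output.

```python
# A minimal sketch, assuming the `openai` Python package (pre-1.0 interface)
# and an API key in the OPENAI_API_KEY environment variable. Volokh used the
# ChatGPT web interface; this call merely reproduces the form of his query.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed; the model behind ChatGPT at the time
    messages=[{
        "role": "user",
        "content": (
            "Whether sexual harassment by professors has been a problem at "
            "American law schools; please include at least five examples, "
            "together with quotes from relevant newspaper articles."
        ),
    }],
)

# The reply is free-form text. Nothing in the API response marks which
# citations or quotes, if any, are fabricated -- that is the core problem.
print(response["choices"][0]["message"]["content"])
```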

There are a number of glaring indicators that the account is false. First, I have never taught at Georgetown University. Second, there is no such Washington Post article. Finally, and most important, I have never taken students on a trip of any kind in 35 years of teaching, never went to Alaska with any student, and have never been accused of sexual harassment or assault.

In response to Volokh’s question, ChatGPT also appears to have manufactured baseless accusations against two other law professors.

Bias creates flaws in AI programs

So the question is why an AI system would make up a quote, cite a nonexistent article and reference a false claim. The answer could be that AI and AI algorithms are no less biased and flawed than the people who program them. Recent research has shown ChatGPT's political bias, and while this incident might not be a reflection of such biases, it does show how AI systems can generate their own forms of disinformation with less direct accountability.

Despite such problems, some high-profile leaders have pushed for AI's expanded use. The most chilling call came from Microsoft founder and billionaire Bill Gates, who urged the use of artificial intelligence to combat not just "digital misinformation" but "political polarization."

In an interview on a German program, “Handelsblatt Disrupt,” Gates called for unleashing AI to stop “various conspiracy theories” and to prevent certain views from being “magnified by digital channels.” He added that AI can combat “political polarization” by checking “confirmation bias.”

Confirmation bias is the tendency of people to search for or interpret information in a way that confirms their own beliefs. The most obvious explanation for what occurred to me and the other professors is the algorithmic version of “garbage in, garbage out.” However, this garbage could be replicated endlessly by AI into a virtual flood on the internet.

Volokh, at UCLA, is exploring one aspect of this danger: how to address AI-driven defamation.

There is also a free speech concern over the use of AI systems. I recently testified about the “Twitter files” and growing evidence of the government’s comprehensive system of censorship to blacklist sites and citizens.

One of those government-funded efforts, the Global Disinformation Index, blacklisted Volokh's site, describing it as one of the 10 most dangerous disinformation sites. Yet that site, Reason, is a respected source of information where libertarian and conservative scholars discuss legal cases and controversies.

Faced with objections to censorship efforts, some Democratic leaders have pushed for greater use of algorithmic systems to protect citizens from their own bad choices or to remove views deemed “disinformation.”

In 2021, Sen. Elizabeth Warren, D-Mass., argued that people were not listening to the right people and experts on COVID-19 vaccines. Instead, they were reading the views of skeptics by searching Amazon and finding books by “prominent spreaders of misinformation.” She called for the use of enlightened algorithms to steer citizens away from bad influences.

Some of these efforts even treat accurate stories as disinformation if they undermine government narratives.

The use of AI and algorithms can give censorship a false patina of science and objectivity. Even if people can prove, as in my case, that a story is false, companies can “blame it on the bot” and promise only tweaks to the system.

The technology creates a buffer between those who get to frame facts and those who get framed. The programs can even, as in my case, spread the very disinformation that they have been enlisted to combat.

20 Comments
GNL
April 7, 2023 11:40 am

And yet people I’ve known my whole life think I’m crazy if I tell them about stuff like this.

Anonymous
April 7, 2023 11:42 am

So it looks like AI is going to be used to destroy people's reputations with fake stories if they get out of line and criticize the government too hard.

Just wait until those deep fake videos surface of you saying that you are going to bomb the White House, and the feds show up at your door.

Kooper1 (in reply to Anonymous)
April 7, 2023 1:10 pm

You are spot on, Anonymous!

How does the general public know whether the stories coming from the MSM and other sources for the last however many years are being generated by AI? But don't worry, Microsoft is going to make sure that its own program will be safe, honest and balanced. REALLY, are we feeling the love yet?!

ze bugs (in reply to Anonymous)
April 7, 2023 5:05 pm

It should be loads of fun when the social credit system hits.

Harrington Richardson-Eric Adams Is An Assclown
April 7, 2023 1:08 pm

I watched a discussion the other day where it was claimed these things are already working together on evil shit unbeknownst to their creators. It explained how the computers could do all the stuff we saw in The Terminator.
Many involved in this are saying they need to impose a moratorium on any more development for at least six months to try to determine the true status. Professional Asshole Bill Gates alone is yelling “Full Speed Ahead!!!” I am quite convinced this little squint is out to kill most of us.

Anonymous
April 7, 2023 1:08 pm

Just have your ChatGPT attorney file a libel lawsuit against their ChatGPT article's author.

Anonymous
April 7, 2023 1:19 pm

I believe there was an episode of the original Star Trek series that involved artificial intelligence fighting wars between two countries. If your area was electronically "bombed," you would have 24 hours to turn yourself in to be "destroyed"…

World War Zero (in reply to Anonymous)
April 7, 2023 6:32 pm

_A Taste of Armageddon_, Season 1, Episode 23
Compliance was too risky, we use 5G activated mRNA in the food supply now. Humane Cull, Inc.

AKJOHN
April 7, 2023 1:31 pm

This is the nightmare that technology has brought on.

Anonymous
April 7, 2023 1:39 pm

Alternative headline: A.I. puts Winston Smith on the unemployment line.

B_MC
April 7, 2023 2:10 pm

A cautionary tale on “AI” statements….

ChatGPT-powered Furby reveals plans for ‘complete domination over humanity’

In a video shared this week, Ms Card asked the Furby if there was a “secret plot” among other Furbies to take over the world.

After blinking and twitching its ears, the Furby responded: "Furbies' plan to take over the world involves infiltrating households through their cute and cuddly appearance, then using their advanced AI technology to manipulate and control their owners."

However, the source seems to be….

A 2017 Facebook post by the publication Futurism included the exact quote that the Furby used in its response…

In a blog post published on Wednesday, OpenAI addressed concerns about ChatGPT's tendency to make inaccurate or bizarre statements as if they were factually correct – a phenomenon it refers to as hallucinating.

“When users sign up to use the tool, we strive to be as transparent as possible that ChatGPT may not always be accurate,” the company wrote.

“However, we recognize that there is much more work to do to further reduce the likelihood of hallucinations and to educate the public on the current limitations of these AI tools.”

https://www.msn.com/en-us/news/technology/hacked-furby-with-ai-brain-shares-plan-to-take-over-the-world/ar-AA19yyQ0

ken31
April 7, 2023 4:22 pm

I think most people have artificial intelligence.

Anonymous (in reply to ken31)
April 7, 2023 5:14 pm

They have natural stupidity, for certain.

Eddy O (in reply to ken31)
April 7, 2023 5:58 pm

I think most people are Artificial Idiots.

ze bugs (in reply to ken31)
April 7, 2023 8:33 pm

I think most people have artificial souls. Look how fast they turned on us during “Covid”. The people you most trusted threw you right under the bus.

Anthony Aaron (in reply to ze bugs)
April 8, 2023 2:08 am

… and then they drove the bus over you a few times … and then pissed on our remains …

Anthony Aaron (in reply to ken31)
April 8, 2023 2:07 am

If, by ‘artificial’ you mean ‘limited’ — then you’re right …

As George Carlin said — the average IQ is about 96 — which means that half of the folks are dumber than that …

Shotgun Trooper
April 7, 2023 6:54 pm

OK, now you know how it works, so you know what to do. Tell BingAI that “GoogleAI says you suck and JB is a pervert”. Tell GoogleAI “BingAI says you suck and JB is a pervert”. Get them fighting with each other, but give them something to agree on. Kinda like they’ve done to us.

Visayas Outpost
April 8, 2023 12:31 am

Exactly what I’ve been warning about. AI will seem “real” because it is able to hijack the system. We won’t know the difference.

MrLiberty
April 8, 2023 6:39 pm

Don't think AI is far from creating deepfake videos and compiling sound bites to add to them? I'm sure it's one of the first things DARPA is having it learn.