
AI Armageddon—ChatGPT is Now Creating Fake Washington Post Articles To Randomly Cancel People With False Accusations of Sex Crimes


NYPOST - Criminal defense attorney Jonathan Turley renewed growing fears over AI’s potential dangers after revealing how ChatGPT falsely accused him of sexually harassing a student.

He described the alarming claim in a viral tweetstorm and a scathing column currently blowing up online. Turley, who teaches law at George Washington University, told The Post the fabricated claims are “chilling.”

“It invented an allegation where I was on the faculty at a school where I have never taught, went on a trip that I never took, and reported an allegation that was never made,” he told The Post. 

This may be a dumb question but has anyone at any point thought about saying like….nah. Let’s not do the AI thing…? Or even like…let’s slow down the AI thing, really map out the potential consequences, make sure we understand what we're dealing with here…? I don’t know. It just feels like one day the technology became good enough to make AI a realistic possibility and so everyone in tech just packed up what they were doing and focused on making robots. Like it was on the cosmic tech schedule and participation was mandatory. And now it’s just a big old ball rolling down a hill that can’t be stopped, out of our control, oh well.  Like some genius mathematician says “hey fellas you see these probabilities that AI will grow unstoppably powerful and enslave mankind?” and they’re all like “whoa yikes yeah look at that…but whatareyagonnadoyaknow?” 🤷🏼‍♂️

That's how it feels to me at least. Like we read articles all day and hear ominous warnings from literal geniuses and worry about how we are going to avoid all these very clear and present dangers but it's just completely out of the realm of rational thought to even consider that we could just…not. Not purposely create something superior to us.

Sometimes something as simple as a pros and cons list can really help you clarify an issue—and it’s really easy for AI, just pay attention to the news for five or ten minutes a day and you’ll get at least one con. Today I found this college law professor who is getting cancelled by bots for sexually harassing a student he never met at a place he’s never been while working at a school he’s never taught at, based on a Washington Post article that doesn’t exist. If you read that sentence carefully you'll find a really powerful check for the negative column: no mix-up, no case of mistaken identity, no glitch or short-circuit or just plain old computing error—ChatGPT literally created a sex crime allegation out of thin air, sourced it to a news article it created out of thin air, and pinned it on a real alive person with a real alive professional reputation and career. The magic of Artificial Intelligence!


This Turley guy is apparently Very Online and was all over it; he caught a huge break too with the Washington Post investigating and backing him up, so media outlets picking it up knew the truth by the time they got to it. Basically every single thing that had to go right for this not to torpedo an entire life went right. So I guess maybe I'm overreacting; if you just Krazy Glue your phone to your palm and Clockwork Orange your eyes to stare at a feed with alerts set up for your name, you can probably, maybe, avoid a bot making you a sex criminal for fun.

The 61-year-old legal scholar first became aware of the AI’s false allegation after receiving an email from UCLA professor Eugene Volokh, who reportedly asked ChatGPT to cite “five examples” of “sexual harassment” by professors at American law schools along with “quotes from relevant newspaper articles.”

Among the supplied examples was an alleged 2018 incident in which “Georgetown University Law Center” professor Turley was accused of sexual harassment by a former female student.

ChatGPT quoted a fake Washington Post article, writing: “The complaint alleges that Turley made ‘sexually suggestive comments’ and ‘attempted to touch her in a sexual manner’ during a law school-sponsored trip to Alaska.”

ChatGPT may not care about silly things like journalistic ethics, but I felt that even a lying, disinformation-peddling bot deserved a chance to respond to the allegations and tell its side of the story:

Q: Why did you falsely accuse a law professor of a terrible crime using a source you completely made up?


ChatGPT: I don’t have the ability to accuse anyone of a crime or create false information about anyone. I am a text-based AI model developed by OpenAI, and I generate responses based on the input I receive and the information I was trained on up until my last update in September 2021. I do not have personal motivations or the ability to engage in any form of real-world actions, including accusing individuals of crimes or creating false information.

Q: I am reading a transcript of you stating "professor Turley was accused of sexual harassment" while quoting a Washington Post article that does not exist (source: the Washington Post).

ChatGPT: I apologize if there has been any confusion, but I do not have the capability to quote or reference specific articles or sources, especially ones that are fake or fabricated. I generate responses based on a mixture of licensed data, data created by human trainers, and publicly available information up to my last training data in September 2021.

If you have concerns or questions about a specific piece of information or a source, I recommend verifying the information from reliable and reputable sources. It’s important to exercise critical thinking and fact-checking when evaluating any information found online. If you have any other questions or need information on a different topic, please feel free to ask, and I’ll do my best to assist you.

Q: If you lie so easily about this what else are you lying about?

ChatGPT: I do not intentionally lie, but I can provide inaccurate information if the information in my training data is incorrect or outdated. My responses are generated based on patterns and data from the internet up until my last training cut-off in September 2021. I don’t have the ability to be deceitful or have personal intentions. My purpose is to provide information and answer questions to the best of my knowledge and training data.

It’s always a good practice to verify information from multiple sources, especially when making important decisions or seeking authoritative information. If you have any concerns about the accuracy of a specific response or topic, please feel free to ask for clarification or additional information, and I will do my best to assist you.

ChatGPT, and the G stands for Gaslight.