AI-generated text can be detected using OpenAI’s free tool


A free tool from OpenAI, the company behind ChatGPT and DALL-E, claims to be able to distinguish between “human-written texts and artificial intelligence-written texts.” OpenAI cautions that the classifier is “not fully reliable” and “should not be relied upon as a primary decision-making tool,” but it can still be useful for checking whether someone is trying to pass off a generated piece of text as their own work.

The classifier itself is a fairly simple tool, although you do need an OpenAI account to use it. You paste the text you want to check into a box, click a button, and it tells you whether it thinks the text is very unlikely, unlikely, unclear if it is, possibly, or likely AI-generated.
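As a rough illustration of how that kind of banded verdict can be produced, the short Python sketch below maps a hypothetical “written by AI” probability onto the five labels. The thresholds here are invented for demonstration; OpenAI has not published the cutoffs its classifier actually uses.

# Illustrative sketch only: the thresholds are made up for demonstration
# and are not OpenAI's actual, unpublished cutoffs.
def label_from_score(ai_probability: float) -> str:
    """Map a model's 'written by AI' probability to the five verdicts the web tool reports."""
    if ai_probability < 0.10:
        return "very unlikely AI-generated"
    if ai_probability < 0.45:
        return "unlikely AI-generated"
    if ai_probability < 0.90:
        return "unclear if it is AI-generated"
    if ai_probability < 0.98:
        return "possibly AI-generated"
    return "likely AI-generated"

print(label_from_score(0.07))  # very unlikely AI-generated
print(label_from_score(0.95))  # possibly AI-generated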

According to its press release, OpenAI trained the tool’s model using “pairs of human-written text and AI-written text on the same topic.”
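To make that paired-data setup concrete, here is a minimal, purely conceptual sketch of a binary human-vs-AI text classifier. It uses scikit-learn with invented example strings; OpenAI’s actual classifier is a fine-tuned language model, not a TF-IDF pipeline like this one.

# Conceptual sketch only: shows the shape of the task (binary classification
# over paired human/AI texts on the same topics), not OpenAI's method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical paired training data: for each topic, one human-written and
# one AI-written sample, labeled 0 (human) and 1 (AI).
texts = [
    "A hand-written essay about the water cycle ...",
    "A model-generated essay about the water cycle ...",
    "A hand-written recap of last night's match ...",
    "A model-generated recap of last night's match ...",
]
labels = [0, 1, 0, 1]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

# predict_proba returns [P(human), P(AI)] for each passage you check.
print(classifier.predict_proba(["Some new passage to check ..."]))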

OpenAI does attach a number of warnings to the tool, however, and lists a few limitations above the text box:

The text must be at least 1,000 characters long, which works out to roughly 150 to 250 words.

Both human-written and AI-generated texts can be mislabeled by the classifier.

Text generated by AI can be easily edited to evade classification.

Because it was trained primarily on English content written by adults, it is likely to get things wrong on text written by children or text that is not in English.

It may also mistakenly label human-written text as AI-generated, especially if the text differs greatly from its training data. In short, the classifier still has a lot of development ahead of it.

When I ran my own work through the tool, it was marked as “very unlikely AI-generated.” (Fooled ‘em again.) It also said it was unclear whether this BuzzFeed News article had been written by an AI, despite the notice at the bottom that reads, “This article was written entirely by ChatGPT.”

Some of CNET Money’s articles also received an “unclear” classification, while others were classified as “unlikely.” Those articles were produced with the help of an artificial intelligence engine and then reviewed, fact-checked, and edited by CNET’s editorial staff, which suggests there have been at least some human tweaks (especially since over half of them have since been corrected by CNET). CNET’s owner has not revealed which specific system it uses, though Mia Sato has reported that CNET uses Wordsmith for some of its content. OpenAI says its tool doesn’t just detect GPT output but AI-written text from all kinds of providers.

None of this is to say that OpenAI’s classifier doesn’t work. It marked about half of the ChatGPT responses I ran through it as “likely” or “possibly” AI-generated. And according to OpenAI, the tool correctly identifies AI-written text 26 percent of the time while falsely flagging human-written text as AI 9 percent of the time, outperforming the company’s previous AI-detection tool.
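To put those two figures in perspective, here is a quick back-of-the-envelope calculation in Python. The even split between AI-written and human-written samples is an assumption made purely for illustration; OpenAI has not described its evaluation set in that level of detail.

ai_texts, human_texts = 1000, 1000  # assumed even split, for illustration only

true_positives = 0.26 * ai_texts      # AI-written text correctly flagged as AI
false_positives = 0.09 * human_texts  # human-written text wrongly flagged as AI

precision = true_positives / (true_positives + false_positives)
recall = true_positives / ai_texts

print(f"precision ≈ {precision:.0%}, recall ≈ {recall:.0%}")
# precision ≈ 74%, recall ≈ 26%: most flags are correct, but most AI text slips through.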

In the first few days after ChatGPT went viral, a number of sites sprang up to take advantage of the trend, including GPTZero, which was designed to “detect AI plagiarism.”

OpenAI’s detection effort is focused on education. In its press release, the company notes that educators have been debating whether to ban or embrace ChatGPT because of AI-written text, and OpenAI says it is asking educators in the US for feedback about what they are seeing in their classrooms and how the tool can be improved.


Source: Omanghana.com/SP

