
OpenAI removes tool meant to warn teachers of plagiarism due to ‘low rate of accuracy’

WATCH: U.S. Congress holds hearing on risks, regulation of AI – ‘Humanity has taken a back seat’ – May 16, 2023

OpenAI, the company behind ChatGPT, has removed its AI classifier tool, which was meant to tell users whether a piece of text was created by artificial intelligence (AI), citing its “low rate of accuracy.”

The company made the announcement in a July 20 update to its blog, saying the tool is no longer available. OpenAI said it remains committed to helping users determine whether content is AI-generated and will draw on feedback and further research to achieve that.

The classifier tool was originally released in late January, as ChatGPT was beginning to gain popularity. At the time, OpenAI noted some of the concerns being raised about the fast-growing technology.

“We believe good classifiers can inform mitigations for false claims that AI-generated text was written by a human: for example, running automated misinformation campaigns, using AI tools for academic dishonesty, and positioning an AI chatbot as a human,” the blog post read.


ChatGPT and other text generators have sparked widespread worry among teachers that students could use them to plagiarize.


School administrators are still debating what to make of ChatGPT and whether its use should be banned in classrooms. New York City’s public school system banned the application in January but rescinded the ban in May, opting instead to use it as a tool to help with education.

WATCH: ChatGPT not the cheating wingman you need, Manitoba colleges, universities warn

ChatGPT uses generative AI: artificial intelligence that produces its own content by repeatedly predicting which word is most likely to come next in a sequence. Such models are trained on text from the internet and humanity’s literary history to improve the content they generate.
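To make the next-word-prediction idea concrete, here is a minimal toy sketch in Python. It is not OpenAI’s actual model; the tiny corpus, the simple word-pair counting, and the function names are illustrative assumptions only, meant to show how a generator can produce text by sampling the most likely next word.

```python
# Toy sketch of next-word prediction (NOT OpenAI's model):
# count how often each word follows another in a tiny corpus,
# then sample the next word in proportion to those counts.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Build word-pair counts: for each word, how often each following word appears.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Sample the next word with probability proportional to its observed count."""
    counts = following[word]
    words = list(counts.keys())
    weights = list(counts.values())
    return random.choices(words, weights=weights, k=1)[0]

# Generate a short continuation starting from "the".
word = "the"
generated = [word]
for _ in range(6):
    word = predict_next(word)
    generated.append(word)
print(" ".join(generated))
```

Real systems such as ChatGPT work on the same principle but predict over tens of thousands of word fragments using neural networks trained on vastly larger text collections, rather than simple pair counts.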

Lawmakers are still wrestling with how exactly to regulate the new technology, while some of the people behind its creation are calling for stronger limits to stave off its potential risks, up to and including threats to humanity itself.


Major tech companies, including Microsoft, Amazon, Google and Meta, recently agreed to voluntary “safeguards” set by the White House that include digital watermarks on AI-generated images. The UN Security Council also held its first meeting on AI last week.
