OpenAI developed a tool to detect AI-written essays but hasn't released it, considering alternatives and potential impacts
The Wall Street Journal recently reported that OpenAI has a tool ready to spot essays written by ChatGPT with high accuracy.
But despite being ready to go, the company hasn’t released it yet. Why? OpenAI shared some insight into this decision, highlighting its ongoing research into text watermarking.
In a blog update, OpenAI explained that they've developed a text watermarking method, but they’re still considering other options. This method is part of a bigger effort to track the origins of written content.
They said, “Our teams have developed a text watermarking method that we continue to consider as we research alternatives.”
While effective in many cases, text watermarking has its challenges.
It is less robust when the text is run through a translation system, reworded by another AI model, or generated with special characters between words that are then stripped out.
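The insert-then-strip trick is easiest to see against a toy watermark. The sketch below uses a hash-based "green list" scheme in the spirit of published watermarking proposals; it is purely illustrative and is not OpenAI's actual method, and every name in it is invented. The "model" always picks a next word whose hash against the previous token is "green", so a detector scoring the fraction of green words sees a strong signal. If generation inserts a zero-width character after each word and the attacker later deletes it, the detector hashes different strings than the generator did, and the signal collapses to chance:

```python
import hashlib
import random

VOCAB = ["alpha", "beta", "gamma", "delta", "echo",
         "fox", "golf", "hotel", "india", "juliet"]
ZWSP = "\u200b"  # zero-width space, invisible when rendered

def green(prev, word):
    # A word is "green" if a keyed hash of (previous token, word) is even.
    h = hashlib.sha256((prev + "|" + word).encode()).digest()
    return h[0] % 2 == 0

def generate(n, marker=""):
    # Toy "watermarked" generation: always emit a word that is green
    # relative to the previous token *as emitted* (marker included).
    rng = random.Random(0)
    out, prev = [], "<s>"
    for _ in range(n):
        choices = [w for w in VOCAB if green(prev, w + marker)]
        w = rng.choice(choices) if choices else rng.choice(VOCAB)
        out.append(w + marker)
        prev = w + marker
    return " ".join(out)

def detect(text):
    # Fraction of words that are green given their predecessor.
    words = text.split()
    prev, hits = "<s>", 0
    for w in words:
        hits += green(prev, w)
        prev = w
    return hits / len(words)
```

A plainly generated text scores near 1.0, while text generated with the marker and then laundered (`generate(200, marker=ZWSP).replace(ZWSP, "")`) scores near 0.5, indistinguishable from human writing under this toy detector. Real schemes operate on model token probabilities rather than word hashes, but they are brittle to the same kind of surface perturbation.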
Plus, OpenAI pointed out that watermarking could unfairly impact some groups, like non-native English speakers who use AI tools to help with writing.
OpenAI is also looking into other solutions, such as classifiers and metadata, as part of their extensive research.
An OpenAI spokesperson told TechCrunch, “We are taking a deliberate approach due to the complexities involved and its likely impact on the broader ecosystem beyond OpenAI.”
Instead of rushing, OpenAI is prioritising the release of tools to verify audiovisual content first. This deliberate pace suggests the company wants to roll out AI provenance technology in a way that is fair and beneficial across the board.
Thoroughly assess potential impacts before releasing new technologies, especially those affecting diverse user groups
Consider multiple approaches to solving complex problems, weighing pros and cons of each method
Prioritise fairness and ecosystem health over rapid product releases, even when facing market pressure