ChatGPT Text Won’t Be Watermarked by OpenAI Because Users Might Be Caught

According to The Wall Street Journal, OpenAI has had a system for watermarking text generated by ChatGPT and a corresponding detection tool ready for about a year. However, the company is internally divided on whether to release it. While releasing the tool seems like a responsible decision, it could potentially impact the company’s profits.

OpenAI’s watermarking method subtly adjusts how the model selects the next word, producing a statistical pattern that a detection tool can later recognize. That is a simplified description; Google’s explanation of Gemini’s text watermarking covers a similar technique in more detail.
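To make the general idea concrete, here is a minimal, self-contained sketch of one publicly documented approach to statistical text watermarking, the "green list" token-biasing scheme described in academic research (e.g., Kirchenbauer et al., 2023). This is not OpenAI's undisclosed method; the toy vocabulary, the `SECRET_KEY`, and the `bias` parameter are illustrative assumptions. The point is only that generation slightly favors a keyed subset of tokens, and a detector holding the same key can measure that skew.

```python
import hashlib
import random

# Hypothetical illustration of a "green list" watermark, NOT OpenAI's method.
# At each step, the previous token seeds a PRNG that splits the vocabulary
# into "green" and "red" halves; generation favors green tokens, and a
# detector counts how often green tokens appear.

VOCAB = [f"tok{i}" for i in range(1000)]  # toy vocabulary
SECRET_KEY = b"watermark-key"             # shared by generator and detector


def green_list(prev_token: str) -> set:
    """Pseudorandomly select half the vocabulary, seeded by the previous token."""
    seed = hashlib.sha256(SECRET_KEY + prev_token.encode()).hexdigest()
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, k=len(VOCAB) // 2))


def generate_watermarked(length: int = 200, bias: float = 0.9) -> list:
    """Toy 'model' that picks tokens uniformly but prefers green tokens."""
    rng = random.Random(42)
    text = ["tok0"]
    for _ in range(length):
        greens = green_list(text[-1])
        if rng.random() < bias:
            text.append(rng.choice(sorted(greens)))
        else:
            text.append(rng.choice(VOCAB))
    return text


def detect(tokens: list) -> float:
    """Fraction of tokens drawn from the green list; ~0.5 for unwatermarked text."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:]) if tok in green_list(prev))
    return hits / (len(tokens) - 1)


if __name__ == "__main__":
    watermarked = generate_watermarked()
    unmarked = ["tok0"] + [random.Random(7).choice(VOCAB) for _ in range(200)]
    print(f"green-token rate, watermarked: {detect(watermarked):.2f}")  # well above 0.5
    print(f"green-token rate, unmarked:    {detect(unmarked):.2f}")     # near 0.5
```

Because the skew is purely statistical, the text reads normally, but only someone holding the key can run the detector; that asymmetry is what makes a watermark useful to teachers or platforms without being visible to readers.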


A tool to detect AI-generated text could be beneficial for teachers trying to prevent students from using AI for writing assignments. The Journal reports that watermarking does not affect the quality of the chatbot’s text output. A company-commissioned survey revealed that people worldwide supported the idea of an AI detection tool by a margin of four to one.

Following the Journal’s report, OpenAI confirmed in a blog post that it has been working on text watermarking. The company stated that its method is highly accurate (99.9% effective, according to documents seen by the Journal) and resistant to localized tampering, such as paraphrasing. However, more global techniques, like rewording the entire text with another model, can easily circumvent the watermark. The company also expressed concern that watermarking could stigmatize the use of AI tools as a writing aid for non-native English speakers.

OpenAI is also concerned that watermarking could drive users away: almost 30 percent of surveyed ChatGPT users said they would use the software less if watermarking were implemented.

Despite these concerns, some employees still believe watermarking is effective. In response to user sentiment, the Journal reports, employees have suggested exploring methods that might be less controversial but remain unproven. In its blog post, OpenAI said it is “in the early stages” of exploring embedding metadata instead. It is too early to judge how well that approach will work, but because the metadata would be cryptographically signed, the company noted it would produce no false positives.
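As a rough illustration of why signed metadata behaves differently from a statistical watermark, here is a minimal sketch under stated assumptions. The HMAC scheme, the key, and the field names are all hypothetical, not OpenAI's design; the relevant property is that verification either matches exactly or fails, so ordinary human-written text can never be misidentified as signed AI output.

```python
import hmac
import hashlib
import json

# Hypothetical sketch of cryptographically signed provenance metadata,
# not OpenAI's actual design. The generator attaches an HMAC over the text
# plus metadata; verification is a yes/no check rather than a statistical
# guess, which is why this approach has no false positives.

SECRET_KEY = b"provenance-signing-key"  # held by the provider


def sign(text: str, metadata: dict) -> str:
    """Compute an HMAC-SHA256 tag over the text and its metadata."""
    payload = json.dumps({"text": text, "meta": metadata}, sort_keys=True).encode()
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()


def verify(text: str, metadata: dict, signature: str) -> bool:
    """Return True only if the signature matches this exact text and metadata."""
    expected = sign(text, metadata)
    return hmac.compare_digest(expected, signature)


if __name__ == "__main__":
    text = "An essay drafted with an AI assistant."
    meta = {"model": "example-model", "generated": True}
    tag = sign(text, meta)

    print(verify(text, meta, tag))              # True: provenance confirmed
    print(verify(text + " edited", meta, tag))  # False: signature no longer matches
```

The trade-off is the mirror image of watermarking: a signature gives certainty when it verifies, but any edit (or simply stripping the metadata) breaks the link, whereas a statistical watermark can survive light edits but only ever yields a probabilistic answer.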
