Text from AI systems like ChatGPT may already be advanced enough that an OpenAI researcher believes it is time to watermark AI-generated text to ensure the safety of such systems.
As reported by TechCrunch, an OpenAI guest researcher is working on developing a way to "statistically watermark the outputs of a text [AI system]."
In a lecture, computer science professor Scott Aaronson said that whenever a system such as ChatGPT generates text, the tool would incorporate an "unnoticeable secret signal" revealing where the content came from.
According to Aaronson, OpenAI has a working prototype of the watermarking system, which was designed by OpenAI developer Hendrik Kirchner. He stated that it appears to function fairly well. In the prototype, a few hundred tokens seemed sufficient to generate a reasonable signal that a sentence came from GPT.
Putting Limit on AI Texts
In case you are not aware, AI startup OpenAI is one of the few firms to have successfully developed an AI text generator. OpenAI calls its text model GPT, and it is known for producing some of the most advanced writing AI can currently generate.
The AI system is built on a massive neural network with about 175 billion parameters, an architecture Aaronson notes is known as a transformer model.
The OpenAI chatbot prototype uses artificial intelligence (AI) and has become quite popular for giving thorough, human-like answers to queries. It can write functional code and draft agreements between two parties.
The bigger news is that it could fundamentally change how people use search engines: instead of simply offering links for users to browse, it resolves intricate issues and provides in-depth responses.
The program is still in the testing phase, and despite its novel approach, OpenAI admits that its moderation could be better and that its answers are not always accurate.
And despite its numerous applications and breakthrough successes, the system presents clear ethical concerns. ChatGPT, like many text-generation systems before it, might be used to craft high-quality phishing emails, write destructive malware, or enable cheating in schools.
As a recently launched AI system, ChatGPT can at times be unreliable as a question-answering tool. Because of this issue, the programming Q&A site Stack Overflow has banned ChatGPT-generated replies until further notice.
What Happens Next
GPT's input and output take the form of strings of "tokens," which can be words, parts of words, or even punctuation.
Based on the string of previous tokens, GPT continually produces a probability distribution over the next token. After the neural net generates that distribution, the OpenAI server samples a token from it.
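The sampling step described above can be sketched in a few lines. This is a toy illustration, not OpenAI's code: the vocabulary and probabilities below are made up, and a real model's distribution spans tens of thousands of tokens.

```python
import random

# Hypothetical next-token distribution over a tiny vocabulary
# (illustrative values only, not real GPT probabilities).
next_token_probs = {"the": 0.5, "a": 0.3, "dog": 0.15, ".": 0.05}

def sample_token(probs):
    """Draw one token at random, weighted by its probability."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

token = sample_token(next_token_probs)
```

Run repeatedly, this picks "the" about half the time and "." rarely, mirroring how the server ordinarily samples from the model's distribution.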
Watermarking the AI's text entails selecting the next token pseudorandomly rather than randomly, using a cryptographic pseudorandom function whose key is known only to OpenAI.
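A minimal sketch of that idea follows, assuming an HMAC-based pseudorandom function and the same toy distribution as before; the key, function names, and scoring rule shown are illustrative choices (one distribution-preserving variant Aaronson has described picks the token maximizing r ** (1/p)), not OpenAI's actual prototype.

```python
import hashlib
import hmac

SECRET_KEY = b"known-only-to-the-provider"  # hypothetical key

def prf(key, context, token):
    """Map (context, token) to a pseudorandom value in (0, 1) via HMAC-SHA256."""
    msg = (context + "\x00" + token).encode()
    digest = hmac.new(key, msg, hashlib.sha256).digest()
    return (int.from_bytes(digest[:8], "big") + 1) / (2**64 + 2)

def watermarked_choice(probs, context, key=SECRET_KEY):
    """Pick the token maximizing r ** (1/p).

    Over many random keys this matches the model's distribution, yet for a
    fixed secret key the choice is deterministic, so the key holder can later
    score a text for the telltale bias.
    """
    return max(probs, key=lambda t: prf(key, context, t) ** (1.0 / probs[t]))

probs = {"the": 0.5, "a": 0.3, "dog": 0.2}
token = watermarked_choice(probs, "Once upon a time")
```

Because anyone holding the key can recompute the scores, a few hundred tokens of output give a statistical signal that the text was machine-generated, which is the detection property the article describes.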
Aaronson stated in the lecture that if OpenAI can demonstrate that watermarking works and has no effect on the quality of the generated text, it has the potential to become an industry standard.