A recent report has revealed notable shifts inside OpenAI's operations.
TechCrunch identified a concerning trend in OpenAI's GPT Store, the company's official GPT marketplace. The abundance of odd and potentially copyright-infringing GPTs on the platform points to lax moderation practices at OpenAI.
During the company's first developer conference in November of last year, CEO Sam Altman unveiled GPTs, personalized chatbots powered by OpenAI's generative AI models. He praised them as multipurpose tools that could provide everything from fitness advice to programming help. However, some users have managed to circumvent the rules governing these AI-powered tools.
The report identified GPTs in the OpenAI GPT Store that purport to create artwork in the style of Disney and Marvel properties, serve as gateways to third-party paid services, and even advertise their ability to evade AI content detection systems.
The analysis suggests that OpenAI's moderation methods, which combine automated systems, human review, and user reports, may not be enough to stop fraudulent submissions and copyright infringement. OpenAI's GPT Builder tool lets developers create GPTs without writing code, which has led to a rapid proliferation of GPTs in the store.
Especially troubling are GPTs inspired by well-known media franchises, such as "Monsters, Inc." and "Star Wars," created without permission from the respective rights holders. Further copyright complexity arises from GPTs that let users interact with trademarked characters, such as Nintendo's Wario and Aang from "Avatar: The Last Airbender."
(Photo: Sebastien Bozon/AFP via Getty Images) Illustration photograph taken on October 30, 2023, showing the logo of ChatGPT, the language model-based chatbot developed by OpenAI, on a smartphone in Mulhouse, eastern France.
Unlawful GPTs Put Academic Integrity At Risk
Furthermore, the investigation found that some GPTs threaten academic integrity by facilitating academic dishonesty, claiming they can bypass AI content detectors. Several are thinly disguised gateways to premium services offering sophisticated rephrasing and "humanization," which may undermine plagiarism detection.
The OpenAI GPT Store also contains numerous GPTs impersonating prominent people, such as Donald Trump and Elon Musk, even though OpenAI's policies prohibit this. While some may be intended as humorous parodies, others pose as authorities on certain subjects, raising concerns about trademark infringement and impersonation.
Moreover, another report from News18 noted that developers have complained about OpenAI's lack of assistance in helping them gather data and statistics on GPT Store users and their interactions with different AI chatbots.
The spread of unauthorized GPTs raises further questions about the integrity of OpenAI's platform and moderation procedures. As it aspires to commercialize GPTs and share revenue with developers, the company faces quality-control challenges alongside legal and ethical difficulties. If ignored, these concerns could damage the legitimacy and effectiveness of OpenAI's GPT ecosystem.
Sam Altman Shares Some Info on AGI Progress
Amid these concerns about the OpenAI GPT Store, OpenAI CEO Sam Altman disclosed some details on the company's outlook for artificial general intelligence (AGI).
In a recent conversation with Lex Fridman, a widely known podcaster, Altman discussed several aspects of AGI and OpenAI's role in advancing it. He stressed that the pursuit of AGI should be seen not as the end goal but as the start of a transformative period, as previously reported by TechTimes.
According to Altman's rough timeline, AGI may become a reality by the end of the decade or sooner. He cautioned, however, against treating the development of AGI as just a benchmark, stressing its importance in bringing significant changes to society.
In January, Altman told a Bloomberg panel at the World Economic Forum in Davos, Switzerland, that AGI may be possible in the "reasonably close-ish future."