Moderation concerns arise as OpenAI’s GPT Store inundated with spam and copyright-infringing GPTs
OpenAI’s GPT Store is currently dealing with a flood of spam and copyright problems. The platform, built for custom chatbots powered by OpenAI’s generative AI models, is struggling to keep up with an influx of unusual bots that may infringe on copyrights, highlighting a lack of strict moderation.
Among the problematic bots are those that claim to produce art based on Disney and Marvel properties, act as conduits for paid third-party services, and tout the ability to circumvent AI plagiarism detection tools, TechCrunch reports.
Cases of bots simulating conversations with public figures without permission and attempts to “hack” OpenAI’s models to increase their permissiveness have also been identified.
OpenAI’s position
According to an OpenAI spokesperson, GPTs designed for academic dishonesty, including cheating, violate the company’s policy, as do GPTs that impersonate individuals or organizations without their consent or legal right.
Despite these policies, however, numerous bots on the GPT Store imitate public figures such as Elon Musk and Donald Trump without permission, raising questions about where impersonation ends and parody begins.
Copyright issues
Several GPT Store bots appear to draw on popular franchises such as Star Wars, Monsters, Inc. and Avatar: The Last Airbender, raising potential copyright issues.
Despite OpenAI’s terms prohibiting the promotion of academic dishonesty, there are bots in the store that claim they can bypass plagiarism detectors, while others attempt to “jailbreak” OpenAI’s models, albeit with limited success.
Originally envisioned as a curated collection of productivity tools, the GPT Store is now filled with spam and legally dubious bots, posing significant challenges as OpenAI prepares to let developers monetize their creations.
OpenAI’s response
An OpenAI spokesperson acknowledged the issues and said the company uses a combination of automated and human review to identify policy violations, which can lead to warnings, restrictions, or removal from the store.