Government drops permission requirement for untested artificial intelligence models, providing relief to IT platforms.
In a fresh advisory on Artificial Intelligence (AI), the government has dropped the requirement to seek permission for untested AI models but has stressed the importance of labeling AI-generated content.
Instead of requiring permission for AI models under development, the advisory issued by the Ministry of Electronics and IT on Friday evening fine-tuned the compliance requirements under the 2021 IT rules.
“The advisory is issued in supersession of the advisory dated March 1, 2024,” it said.
According to the new advisory, IT companies and platforms have often neglected the due diligence obligations set out in the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
The government has asked companies to label content produced by their artificial intelligence software or platforms and to inform users about the possible inaccuracy or unreliability of output generated with the help of their AI tools.
“If any intermediary, through its software or any other computer resource, permits or facilitates the synthetic creation, generation or modification of text, audio, image or audio-visual information in such a manner that the information may potentially be used as misinformation or a deepfake, it is advised that such information created, generated or modified through its software or any other computer resource is labelled… that such information was created, generated or modified using the intermediary’s computer resource,” the advisory says.
If a user makes changes, the metadata should be configured so that the user or computer resource that made the change can be identified, it added.
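The advisory does not prescribe any specific technical mechanism for this. Purely as an illustration, a platform could attach a provenance record to each piece of AI-generated or AI-modified content. The Python sketch below is a hypothetical example of such a record: the field names, platform identifier and user identifier are assumptions for illustration, not anything mandated by the Ministry.

```python
import json
import uuid
from datetime import datetime, timezone

def build_provenance_record(platform_id: str, user_id: str, modified: bool = False) -> dict:
    """Build an illustrative metadata record identifying the computer resource
    (and, for user edits, the user) involved in creating or modifying content.
    Field names are hypothetical, not taken from the advisory."""
    return {
        "label": "AI-generated content",            # visible label for AI-generated output
        "created_with": platform_id,                # identifies the intermediary's computer resource
        "modified_by_user": user_id if modified else None,  # identifies the user who made changes, if any
        "record_id": str(uuid.uuid4()),             # unique identifier for traceability
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

# Example: content generated on a hypothetical platform and later edited by a user.
record = build_provenance_record("example-ai-platform/v1", "user-1234", modified=True)
print(json.dumps(record, indent=2))
```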
Following the controversy over responses from Google’s AI platform to queries related to Prime Minister Narendra Modi, the government had on March 1 issued an advisory asking social media and other platforms to label under-trial AI models and to prevent unlawful content from being hosted.
In that advisory to intermediaries and platforms, the Ministry of Electronics and Information Technology had warned of criminal action if the rules were not followed.
The previous advisory had asked entities to seek government approval before deploying under-trial or unreliable artificial intelligence (AI) models, and to deploy them only after labeling them for the “possible and inherent fallibility or unreliability” of the output they generate.