MeitY issues advisory asking platforms to seek permission before launching AI models in India, emphasizing labeling of fallibility and compliance with IT Rules. (PTI)

Tech firms must obtain approval from the government before releasing ‘unreliable’ AI tools

The Ministry of Electronics and Information Technology (MeitY) has released a second advisory instructing platforms and intermediaries to obtain prior approval from the government before deploying Artificial Intelligence (AI) models for testing in India.

The directive was issued on Friday night, more than two months after the ministry issued an advisory to social media platforms in December last year, urging them to follow existing IT rules to deal with the problem of deepfakes.

“The use and availability of under-tested/untrusted AI models/LLM/Generative AI, software or algorithms to Internet users in India shall be subject to the express permission of the Government of India, and shall be deployed only after the potential and inherent fallibility or unreliability of the generated output is appropriately flagged. In addition, a ‘consent pop-up’ mechanism may be used to explicitly inform users of the potential and inherent fallibility or unreliability of the output generated,” the advisory reads.

The advisory added that the ministry recently became aware that intermediaries or platforms are neglecting the due diligence obligations outlined in the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (IT Rules).

“Every intermediary or platform shall ensure that the use of artificial intelligence model(s)/LLM/Generative AI, software or algorithms on or through its computer resource does not permit its users to host, display, upload, modify, publish, transmit, store, update or share any unlawful content under Rule 3(1)(b) of the IT Rules or violate any other provision of the IT Act,” it stated.

“All intermediaries or platforms shall ensure that their computer resources do not permit any form of bias or discrimination or threat to the integrity of the electoral process, including through the use of AI model(s)/LLM/Generative AI, software(s) or algorithm(s),” it advises.

“All users must be clearly informed, including in the intermediary’s or platform’s terms of service and user agreements, of the consequences of dealing with unlawful information on its platform, including disabling access to or removing such information, suspension or termination, as the case may be, of the user’s access to or usage rights of their user account, along with punishment under applicable law,” it added.

“If any intermediary, through its software or any other computer resource, permits or facilitates the synthetic creation, generation or modification of text, audio, visual or audio-visual information in such a manner that the information may potentially be used as misinformation or a deepfake, it is advised that such information created, generated or modified through its software or any other computer resource is labelled or embedded with a permanent unique metadata or identifier, by whatever name called, such that the label, metadata or identifier can be used to identify that the information has been created, generated or modified using the intermediary’s computer resource, or to identify the user of the software or computer resource, the intermediary through whose software or computer resource the information was created, generated or modified, and the creator or first originator of such misinformation or deepfake,” the advisory added.

“It is reiterated that non-compliance with the provisions of the Information Technology Act and/or the IT Rules would result in potential penal consequences for intermediaries or platforms or their users when identified, including but not limited to prosecution under the Information Technology Act and several other criminal statutes,” the advisory added.

“All intermediaries are requested to ensure compliance with the above with immediate effect and submit a status report on the measures taken to the ministry within 15 days of this advisory,” the union ministry added.
