According to Meity's recent advisory on artificial intelligence technology, the Indian government has dropped the requirement for permits for unproven AI models but has emphasized that AI-generated content must be labeled.
The updated advisory, released on Friday night by the Ministry of Electronics and IT (Meity), adjusted the compliance criteria rather than requiring clearance for AI models that are still in development. “The advisory is issued in suppression of an advisory…dated 1st March 2024,” the notice stated.
Intermediary Guidelines by Meity Mandate AI Transparency and Accountability
In the new advisory, Meity notes that IT companies and platforms frequently neglect the due diligence requirements outlined in the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. Meity has mandated that companies label material produced by their AI platforms or software and notify users of any potential inherent fallibility or unreliability in the output their AI tools generate.
“Where any intermediary through its software or any other computer resource permits or facilitates synthetic creation, generation or modification of a text, audio, visual or audio-visual information, in such a manner that such information may be used potentially as misinformation or deepfake, it is advised that such information created generated or modified through its software or any other computer resource is labeled….that such information has been created generated or modified using the computer resource of the intermediary,” the advisory stated.
It further said that if a user makes any modifications, the metadata should be configured so that the person or computer resource responsible for the change can be identified. Following the uproar over a response from Google’s AI platform to questions about Prime Minister Narendra Modi, the government on March 1 had advised social media and other platforms to label AI models still on trial and to stop displaying unlawful content.
In that advisory to platforms and intermediaries, the Ministry of Electronics and Information Technology threatened criminal penalties for noncompliance. Under the earlier recommendation, companies had to apply for government clearance before deploying artificial intelligence (AI) models that were still under trial or that had been deemed to have “possible and inherent fallibility or unreliability of the output generated.”