India is preparing to regulate AI-generated deepfakes and misinformation, with new rules that could impose financial penalties on both the creators of such content and the platforms that share it. Ashwini Vaishnaw, Minister for Information Technology and Telecom, plans to work with stakeholders over the next ten days to develop strategies to detect, prevent, and report deepfakes. Stressing the threat they pose to societal trust and democratic values, Vaishnaw underlined the urgent need for decisive action.
Consultations with major technology companies highlighted the pressing need to curb the rapid spread of deepfakes across social media platforms.
Because deepfake content spreads quickly and causes immediate, widespread harm, Vaishnaw said social media platforms must act proactively. Upcoming discussions will focus on four areas: detecting deepfakes, preventing their spread, strengthening reporting mechanisms, and raising public awareness through joint government-industry efforts.
At the heart of the proposal are deepfakes, synthetic media created with AI. The regulations could take the form of amendments to India's IT rules or an entirely new law.
The Minister also stated that the 'safe harbor' immunity platforms enjoy under the IT Act would not apply unless they take swift action against harmful content.
Discussions also addressed concerns regarding AI bias, discrimination, and potential modifications to existing reporting mechanisms. Recent notices issued to social media platforms were in response to reports of deepfake content targeting public figures like Prime Minister Narendra Modi and actor Katrina Kaif, amplifying concerns.
Industry stakeholders are optimistic that deepfake creators can be effectively identified and penalized. They discussed technological measures such as watermarks or labels embedded in altered content to alert users to the associated risks. Citing the compliance challenges faced by platforms hosting non-English content, the industry favors a balanced approach to penalties and compliance timelines, similar to that of the Digital Personal Data Protection Act.