Google collaborates with Indian government to address deepfakes

The Indian government has been taking a tough stance against AI-generated fake content, particularly deepfakes. In response, Google has said that its collaboration with the Indian government on a multi-stakeholder discussion is in line with its commitment to addressing this challenge together. Through this partnership, they aim to ensure a responsible approach to AI.
"By embracing a multi-stakeholder approach and fostering responsible AI development, we can ensure that AI's transformative potential continues to serve as a force for good in the world," said Michaela Browning, VP, Government Affairs & Public Policy, Google Asia Pacific. "There is no silver bullet to combat deepfakes and AI-generated misinformation. It requires a collaborative effort, one that involves open communication, rigorous risk assessment, and proactive mitigation strategies," Browning added.
The company said it is pleased to have the opportunity to partner with the government and to continue the dialogue, including through its upcoming engagement at the Global Partnership on Artificial Intelligence (GPAI) Summit. "As we continue to incorporate AI, and more recently, generative AI, into more Google experiences, we know it's imperative to be bold and responsible together," said Browning.
Last week, the Indian government gave social media platforms a seven-day deadline to bring their policies in line with Indian regulations in order to tackle the spread of deepfakes on their platforms. Minister of State for Electronics and IT, Rajeev Chandrasekhar, stated that deepfakes could be subject to action under the current IT Rules, particularly Rule 3(1)(b), which mandates the removal of 12 types of content within 24 hours of receiving user complaints.
The government will also take action on 100 per cent of such violations under the IT Rules in the future. According to Google, it is looking to help address potential risks in multiple ways. "One important consideration is helping users identify AI-generated content and empowering people with knowledge of when they're interacting with AI-generated media," said the tech giant.
In the coming months, YouTube will require creators to disclose realistic altered or synthetic content, including content made using AI tools. "We will inform viewers about such content through labels in the description panel and video player," said Google. "In the coming months, on YouTube, we'll make it possible to request the removal of AI-generated or other synthetic or altered content that simulates an identifiable individual, including their face or voice, using our privacy request process," it added.
Google recently updated its election advertising policies to require advertisers to disclose when their election ads include material that has been digitally altered or generated. "We also actively engage with policymakers, researchers, and experts to develop effective solutions. We have invested $1 million in grants to the Indian Institute of Technology, Madras, to establish the first-of-its-kind multidisciplinary centre for Responsible AI," Browning noted.
Source: IANS