Editorial: Flip side of technology

Deepfakes have the potential not just to cause personal injury but also to damage national security and trust in institutions

Published: 11:45 PM, Thursday, 9 November 2023


Two recent developments — a morphed video of popular film star Rashmika Mandanna going viral across India and protests by Hollywood actors against the unauthorised use of images generated by artificial intelligence (AI) — have brought the spotlight on the dangers of deepfakes and the need for regulatory mechanisms to catch up with rapidly changing technologies. The South Indian actor’s doctored video, created from an original clip of British-Indian social media influencer Zara Patel, went viral because it was a deepfake of a female celebrity. Hollywood celebrities too are worried about the impact of deepfakes, the digital alteration of videos to spread false and misleading information, on the future of their careers. They fear that they could be replaced by their ‘digital doubles’.

Deepfakes have the potential not just to cause personal injury but also to damage national security and trust in institutions. It is widely suspected that Russia used deepfakes last year to justify its invasion of Ukraine. The technology can also be used to impersonate friends or loved ones and trick individuals into sending money to scammers. Importantly, the technology behind deepfakes and other invasive tools has evolved faster than the ability of governments to understand and regulate it. The potential of deepfakes to violate privacy is alarming. Morphing tools can be used to commit crimes, harm reputations, influence polls and undermine trust in democratic institutions. There have been instances of criminals using morphed clips to create non-consensual intimate images of women. A report says adult content, targeting mostly women, accounts for 98% of all deepfake videos online.

India is listed as the sixth most vulnerable country in this regard. Rashmika’s morphed viral footage has evoked massive outrage and triggered calls for legal action against the culprits. The Centre has responded promptly by instructing social media platforms to remove the offensive content within 24 hours of receiving a complaint. Those affected by the menace have been advised to get FIRs registered and inform the platforms. The Information Technology Act, 2000, provides for a jail term of up to three years and a fine of up to Rs 1 lakh for anyone who uses a communication device or a computer resource to cheat by impersonation.

What is needed to tackle this menace is a holistic approach to regulation, one that focuses on the interplay between platform and AI regulation and on ways to build safeguards for emerging technologies more broadly. Several countries have been proactive in addressing the problem. The UK’s Online Safety Act criminalises the sharing of deepfake porn. China has banned the production of deepfakes without user consent. South Korea has made it illegal to distribute deepfakes that could harm the public interest. India, too, must scrutinise the existing provisions to mitigate the risks. While solutions must be found to counter the misuse of technology tools, overreach by state authorities and invasion of privacy must be avoided.

