What are the commonalities between Taylor Swift, Rahul Gandhi, Mamata Banerjee, and PM Modi?

They have all been victims of ‘deep fake’ videos. Clearly, the battle for the integrity of electoral systems has now been taken into the digital world.

What do Prime Minister Narendra Modi, West Bengal Chief Minister Mamata Banerjee, Opposition leader Rahul Gandhi, singer Taylor Swift and actor Anil Kapoor have in common? They have all been victims of “deep fake” videos, generated using artificial intelligence (AI).

The Merriam-Webster dictionary defines a “deep fake” as an image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said. And real life has already begun to imitate art when it comes to AI. Recently, the actor Scarlett Johansson alleged that her voice from the 2013 film Her was used without her consent by OpenAI for the voice called ‘Sky’ in its chatbot. The film’s protagonist falls in love with his phone’s AI, voiced by Johansson. In 2024, fiction has turned into fact, with some changes to the plot line. When it comes to artistes and deep fakes, the causes and consequences in law revolve around the ownership of proprietary material: one’s reputation, fame, voice and character being used without permission or for malicious reasons.

However, given that this is election season in India and Delhi goes to the polls today, I would like to focus on the consequences, or lack thereof, of “deep fakes” for elections. The safety and integrity of the electoral process has historically been premised on the integrity of the ballot box, the independence of the Election Commission of India (ECI) and the accurate counting of every vote cast. Since 1951-52, when India held its first general election, this has been the focus of efforts to keep the process pristine. Now there is a further challenge: the use of AI to influence the outcome. One facet of the use of AI is the phenomenon of “deep fakes”.

On May 6, the ECI issued an advisory to political parties on the “responsible and ethical use of social media in election campaigning”. It asked political parties to take down fake content within three hours of it coming to their notice.

The legal provisions available to deal with such deployment of deep fakes include the Information Technology Act, 2000, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 and the Indian Penal Code, 1860.

Let us begin with the oldest of these legal instruments, the Indian Penal Code, which provides three conventional remedies. One is Section 468, which deals with the forgery of a document or electronic record for the purpose of cheating. Another is Section 505, relating to the making, publishing and circulation of any statement, rumour or report with the intent to cause fear or alarm to the public. Both provisions have been used to deal with alleged deep fakes purporting to be of the Chief Minister of Uttar Pradesh, Yogi Adityanath. Further, Section 416 of the Code criminalises cheating by personation, including when a person pretends to be some other person, knowingly substitutes one person for another, or represents that he or any other person is someone other than who he really is.

The Information Technology Act, 2000 has the potential to provide some redressal against deep fakes. Section 66(C) provides that the sending of any e-mail or message for the purpose of causing annoyance, or of deceiving or misleading the recipient, may be punished with a jail term of up to three years. Further, the Act, through Sections 66 and 67, also punishes cheating by personation, the violation of privacy and the transmission of visual images or publication of images of a “private area” with imprisonment of up to three years. These legal provisions, while helpful, do not necessarily provide comprehensive protection against the use of AI to generate misinformation, such as deep fakes.

The present legal regime also offers no remedy for attempts by adversarial countries to influence electoral outcomes. In 2024, over half the planet goes to the polls, including major democracies like India, the US and the UK. The Independent reports that British Home Secretary James Cleverly had warned in February that adversaries like Iran or Russia could generate content to sway voters in the elections scheduled to be held later this year in Britain. He said that “increasingly, the battle of ideas and policies takes place in the ever-changing and expanding digital sphere…The landscape it is inserted into needs its rules, transparency and safeguards for its users.”

In April, just before the commencement of the Indian general elections, the Microsoft Threat Analysis Centre (MTAC) had warned that China will “at a minimum, create and amplify AI-generated content to benefit its interests” in elections in India, South Korea and the US. Last week, Forbes reported that Russia is looking to influence US opinion against Ukraine and NATO. It relies on MTAC analysis that found “at least 70 Russian actors using both traditional media and social media to spread Ukraine-related disinformation over the last two months” as a prelude to the upcoming presidential elections in the US. This AI-related campaign includes the use of deep fake videos.

The battle for the integrity of electoral systems and the formation of informed public opinion has now been taken into the “digital” world. This will necessarily entail a new legal understanding of what amounts to impersonation and misinformation. Europe’s Artificial Intelligence Act, 2024, which will come into force in June (discussed earlier in ‘A penal code for AI’, IE, March 16), offers some ideas on how to think about a new legal regime to address offences that include the generation of deep fakes whose goal is to “manipulate human behaviour”. Law reformers in India need to use the existing legal regime as a foundation to thoughtfully craft new laws that will address AI and deep fakes that look to influence electoral outcomes.
