Democracy Put to the Test by Deepfakes: India's 2024 Elections Confront Synthetic Media

_The 2024 Indian general election, the largest democratic exercise in history, unfolded under a new shadow: that of deepfakes. These AI-generated audiovisual manipulations flooded social media, depicting leading candidates in fabricated situations. This phenomenon, growing in both scale and sophistication, calls into question the resilience of electoral processes in the face of technological disinformation and forces democracies worldwide to rethink how they defend their information ecosystems._
A Titanic Election in the Age of Digital Doubt
With nearly 970 million voters called to the polls over a six-week period, the 2024 Indian elections were an event of considerable magnitude [6]. This enormous logistical and human undertaking was compounded by a new complexity: the massive circulation of information, and increasingly of disinformation, on digital platforms. At the heart of this issue are deepfakes, synthetic audio or video content of striking realism created using deep learning algorithms. Once confined to research labs and special effects studios, the technology is now accessible to the general public, allowing anyone's image and voice to be manipulated, including those of the most prominent political figures. The proliferation of easy-to-use mobile applications has democratized the creation of this content, drastically lowering the technical skill required to produce a convincing manipulation.
The power of these tools lies in their ability to exploit human cognitive biases. A video, even of poor quality, is often perceived as more tangible proof than text. When it depicts an authority figure or a political candidate in a compromising position or making shocking statements, the emotional impact can be immediate and powerful, short-circuiting critical analysis. The speed of dissemination on social media, particularly via encrypted messaging apps like WhatsApp, which has over 500 million users in India, makes verification and rebuttal extremely difficult. Once a deepfake goes viral, the damage is done: doubt is sown in the voter's mind, and trust in information and institutions erodes. The 2024 elections thus served as a large-scale testing ground for this new weapon in the disinformation arsenal, posing a fundamental question: how can a democracy function if its citizens can no longer believe what they see and hear?
Modi and Gandhi, Targets of Synthetic Campaigns
During the election campaign, the country's two main political figures, incumbent Prime Minister Narendra Modi and his main opponent Rahul Gandhi, were prime targets of these manipulations. Videos showing them in entirely fabricated situations circulated widely. For example, audio clips imitating the voices of senior officials were used to spread false information about alleged electoral fraud. Immensely popular Bollywood actors were also victims of deepfakes, their images used without their consent in videos criticizing the Modi government. The objectives of these synthetic campaigns are multiple: to directly harm a candidate's reputation by attributing to them words they never said or actions they never took; to sow confusion by disseminating false announcements or promises; or to demobilize part of the electorate by spreading generalized cynicism towards the political class.
These deepfakes are not always crude. Some are designed to be subtle, inserting a piece of false information into an otherwise authentic speech, or modifying a facial expression to suggest a particular emotion. The growing sophistication of these techniques makes detection by the naked eye almost impossible for the average citizen. The impact is all the stronger because this content is often shared within trusted groups (family, friends), which lends it additional credibility. The extreme political polarization that characterizes the current Indian media landscape provides fertile ground for the spread of these manipulations. Each side is tempted to believe the content that confirms its own opinions and to reject that which contradicts them, creating filter bubbles where factual truth no longer has a place. The 2024 campaign thus illustrated how deepfake technology can be instrumentalized to exacerbate tensions and transform democratic debate into an informational battlefield where anything goes.
The Indian Legislative Response and Its Deadlocks
Faced with this growing threat, the Indian government has reacted by proposing to amend its information technology law, the IT Act of 2000. The main idea is to compel digital platforms, such as Facebook, YouTube, or X (formerly Twitter), to clearly identify and label content generated or modified by artificial intelligence [1]. This measure sounds like common sense, but it runs into considerable obstacles that limit its practical scope. The first is technical: how can all manipulated content be reliably identified at scale? Detection tools exist, but they often lag behind creation techniques in a permanent game of cat and mouse. The latest deepfake generators incorporate countermeasures designed to thwart detection systems, making the task even harder.
The second obstacle is legal and constitutional. Placing the responsibility for detection and labeling on platforms raises complex questions regarding freedom of expression. Where should the line be drawn between combating disinformation and censorship? A platform could, for fear of sanctions, err on the side of caution and remove or label legitimate content, such as parodies or artistic creations. Moreover, the immense volume of content posted every second in India, in a multitude of languages and dialects, makes human and even automated moderation extremely complicated. The analysis published by The Diplomat emphasizes that while the government's intentions are laudable, implementing these new rules is proving very difficult in practice [1]. Without strong international cooperation and common technical standards, a purely national response is likely to remain largely symbolic, leaving the field open to creators of malicious content.
A Global Phenomenon, Contrasting Responses
India is not an isolated case. The threat of deepfakes to democratic processes has become a global concern, and other countries have also faced this phenomenon, with varied responses. In Indonesia, during the presidential election of February 14, 2024, the campaign was marked by the appearance of a deepfake of the former dictator Suharto, who died in 2008, seemingly endorsing a party [2]. Other videos showed candidates speaking perfect Arabic, a communication asset in the world's largest Muslim country. In Thailand, a doctored video showing a candidate admitting to vote-buying had to be debunked by fact-checkers, who estimated a 92.8% probability that it was a manipulation [3].
Faced with this threat, some countries are trying to organize. In the Philippines, lawmakers have established a National Deepfake Task Force, and companies like Microsoft have launched initiatives to help protect the integrity of elections [4]. In Europe, the Digital Services Act (DSA) is a first attempt at large-scale regulation. Without specifically targeting deepfakes, it requires very large platforms (those with more than 45 million monthly active users in the EU) to implement measures to analyze and mitigate systemic risks, which includes information manipulation [5]. This approach, based on the responsibility of the most powerful actors in the digital ecosystem, could serve as a model. However, the heterogeneity of national responses and the lack of an international consensus on how to regulate these technologies show that the global community is still struggling to grasp the scale of the problem. The international comparison reveals a shared awareness, but also a fragmentation of strategies, while the technology itself knows no borders.
Education, the Keystone of Democratic Resilience
While technological and legislative responses are necessary, they will remain insufficient if not accompanied by a massive effort in citizen education and awareness. The best defense against disinformation remains a critical public, capable of questioning the information it receives and verifying its sources. This involves developing media and information literacy at all levels of the school system, as well as public awareness campaigns. It is about instilling simple reflexes: being wary of content that provokes a strong emotional reaction, cross-referencing sources, and checking the origin of a video or image before sharing it. Initiatives like the "FactShala" program in India, which aims to train hundreds of thousands of people in digital literacy, are encouraging examples but need to be scaled up.
The fight against deepfakes cannot be won by experts or governments alone. It is everyone's business. Journalists have a responsibility to be extra vigilant and adopt new tools to authenticate their sources. Tech platforms, which profit from the engagement generated by this content, must invest massively in moderation and research, and collaborate more closely with fact-checkers and authorities. Civil society organizations have a crucial role to play in monitoring abuses and alerting the public. Finally, every citizen has the power, and the responsibility, to break the chain of disinformation by adopting rigorous informational hygiene.
The case of the 2024 Indian elections is a warning for all democracies. It shows that technology can be a formidable tool for progress and communication, but it can also, if we are not careful, undermine the very foundations of the democratic pact: trust in a shared reality and in the possibility of a rational public debate. The challenge is not to reject technological innovation, but to master it and put it at the service of the democratic ideal rather than letting it subvert it. The future of democracy in the digital age will depend on our collective ability to build a new form of social contract, one that integrates the realities of the digital world and reaffirms the primacy of truth and reason in the public sphere. The battle has just begun.
Sources
- [1] The Diplomat. India's New Rules to Tackle Deepfakes, thediplomat.com
- [2] The Conversation. Deepfakes and disinformation ahead of Indonesian election, theconversation.com
- [3] Thai PBS Verify. Natthaphong's viral admission proved to be deepfake, thaipbs.or.th
- [4] Microsoft. AI & Cybersecurity: Safeguarding Philippine Elections, news.microsoft.com
- [5] European Commission. The Digital Services Act, digital-strategy.ec.europa.eu
- [6] Wikipedia. 2024 Indian general election, en.wikipedia.org


