By James Jimenez
The Commission on Elections recently promulgated rules meant to curb the abuse of Artificial Intelligence for political campaign purposes. Word has it that Commissioner Nelson Celis – who, apart from being a COMELEC Commissioner, is also a renowned expert in the field of information technology – played a key role in writing up those rules. If nothing else, that information should greatly reassure the public, voting or otherwise, that great care and thought have gone into the COMELEC’s attempt to rein in AI.
The Problem with AI
The COMELEC’s new rules on AI are indeed a welcome addition to the legal framework of the 2025 National and Local Elections – and, hopefully, of all future elections as well. While AI does hold great promise if judiciously applied to various aspects of election management (more on that later), its immediate – and most obvious – impact comes from its ability to generate audio-visual content of stunning realism.
AI can generate and spread fake news stories – images, video, and audio – that misrepresent candidates or issues, leading to public misinformation and confusion. Of particular concern is the proliferation of what are known as deepfakes – hyperrealistic but artificially generated media featuring fabricated faces, voices, or movements of real people. With deepfakes, any candidate can be made to appear to say or do almost anything, effectively placing them in compromising but entirely fabricated circumstances.
While this is bad enough, AI can be used in subtler, less apparent ways. For instance, AI can be used to manipulate the algorithms of the most popular social media platforms currently in use by nearly everyone. These socmed platforms can then be ginned up to misrepresent public opinion by underrepresenting minority voices or by signal-boosting false information, making it appear that such information has more widespread acceptance and support than it actually does.
Used in this way, AI could easily amplify polarization by facilitating the emergence of echo chambers or exacerbating existing divisions through targeted content, inevitably leading to an increase in social discord.
If voters are allowed to believe, therefore, that AI is playing an outsized role in manipulating the information environment of the elections, without any active intervention from the election management body, public trust in the electoral process would soon go the way of the ill-fated dodo – extinct. Good thing, then, that the COMELEC undertook the effort to lay down the guidelines it did.
More than campaigns
However, AI presents a greater challenge than just its known impact on the campaigning environment.
Because AI is so new to the awareness of the general public – because it is such an unknown quantity – it is easy to leverage even just the very idea of AI, without actually using any artificially intelligent tech.
When the term “AI” burst onto the scene, it was not long before products started introducing AI into their marketing strategies. All of a sudden, product packaging would feature the letters AI prominently; marketing and advertising copy would wax lyrical over how AI was used to improve a product or a service; and futuristic graphic designs soon became the accepted aesthetic.
To be very clear, none of this meant that AI was actually present or used in those products and services. It was simply that the public was encouraged to believe that it was, with the corresponding goodwill that mistaken belief generated. The same can be done with respect to elections.
People are easy to fool
During the run-up to the 2022 National and Local Elections – the last electoral exercise that I took an active part in – our most significant learning was that people are easy to fool when they don’t know what’s being talked about.
Thus, from 2013 onwards, people kept getting fooled by scammers who peddled promises of “pre-programmed election results from the counting machines,” and who swore up and down that electronically transmitted results could be hijacked through ‘man-in-the-middle’ schemes. These stories have run rampant for more than a decade and yet, not once have they been proven true. But the public – and more importantly, politicians eager for any edge – don’t care about the absence of proof. All they know – and they cling to this with ferocity – is that the whole idea of doctored automated election results makes intuitive sense.
I don’t imagine that things will be any different, now that these scammers have been given a new bridge to sell: AI. Where they used to promise connections inside the COMELEC – a disgruntled technician or a dirty official – scammers are now pushing the idea that they have AI tools that can be used to hack even the most secure election systems.
That fiction can even be used to facilitate vote buying.
In the last elections, in several jurisdictions, I had to do battle with vote buyers who scared people into accepting their money by claiming that, because the system was automated, they would know whom voters actually voted for. Now imagine that line being supercharged by attaching the concept of AI to it. People who don’t have an in-depth understanding of what AI is and does would fall for it easily.
Unfortunately, this is the aspect of AI’s impact on elections that COMELEC has yet to address.