Since 2016, election campaigns in the Philippines, Europe, and the U.S. have shown us the undeniable power—and peril—of social media and artificial intelligence (AI).
While these tools can amplify a candidate’s message, they also offer pathways for disinformation, misinformation, and the creation of divisive narratives.
As we approach the 2025 elections, the Commission on Elections’ (Comelec) new regulations on social media and AI come at a crucial time.
These regulations are a lesson learned from elections marred by fake news, bots, and deepfakes, all used to manipulate voter perceptions.
Since the 2016 Philippine elections, social media platforms such as Facebook and Twitter have been central to campaigns. That election also marked the beginning of widespread misinformation, with voters caught between legitimate political discourse and cleverly disguised disinformation campaigns.
In the 2016 U.S. elections, the influence of fake news, bots, and AI-driven disinformation campaigns had a profound impact on voters, with false narratives fueling division.
The European Union (EU) has since implemented stricter regulations, emphasizing transparency and accountability from social media platforms to prevent the manipulation of information. These measures are necessary to prevent elections from devolving into battles over manipulated, sensationalized content, often devoid of real discussion about programs and policies.
The challenge lies not only in identifying disinformation but in holding those responsible accountable. Comelec’s Task Force KKK, which will monitor AI and social media usage, offers a much-needed safeguard.
The task force’s power to require political candidates and parties to register their online platforms is a step toward transparency. Candidates are also required to disclose when AI is used to create campaign content, ensuring voters are aware when they are engaging with machine-generated material.
This move isn’t just about curbing fake news; it’s about promoting thoughtful discourse based on policy, not propaganda. As a nation, we must reject shallow, divisive tactics in favor of meaningful discussions on programs and policies.
The use of AI in political campaigns can be powerful and positive, but only when it enhances informed decision-making.
Candidates should focus on presenting clear, comprehensive policies on key issues such as healthcare, education, and economic reform, instead of relying on emotionally charged content designed to manipulate voters.
The integrity of the democratic process depends on voters making informed choices based on policies, not on who can best exploit AI or social media algorithms to capture attention.
However, regulation alone is not enough. Voters must be vigilant and critical of the information they consume. Platforms, too, must strengthen their role in detecting and taking down malicious content. The success of these regulations will depend not only on Comelec’s enforcement but also on the collaboration between tech companies and civil society to promote digital literacy and responsible online behavior.
The 2025 elections offer an opportunity for the Philippines to prove that technology can be used ethically and responsibly to enhance democratic participation, not subvert it.
The Comelec’s initiative to regulate AI and social media in elections is a necessary safeguard in the digital age. It should serve as a model for other democracies seeking to protect the integrity of their elections from the rapid rise of misinformation.
To foster a truly democratic election, we must move beyond shallow digital campaigns and focus on the policies that will shape the future of the nation.