NAMFREL opposes AI ban in elections, advocates ethical use


By Francis Allan L. Angelo

The National Citizens’ Movement for Free Elections (NAMFREL) has expressed its opposition to a proposed ban on the use of artificial intelligence (AI) in elections, advocating instead for the establishment of a Code of Conduct to ensure ethical practices.

The position comes in response to the Commission on Elections' (COMELEC) call for legislation banning AI and deepfakes ahead of the 2025 midterm elections, following instances of AI misuse in elections abroad.

Recent elections in countries like Indonesia, Turkey, and South Korea have seen a rise in AI-generated deepfake videos, raising concerns about their impact on the electoral process.

In Indonesia, AI-generated content and deepfakes gained prominence ahead of the presidential election in February 2024.

Similarly, Turkey’s March 2024 elections and South Korea’s April 2024 National Assembly Elections saw significant use of deepfake technology, with 129 election-related deepfakes uncovered in South Korea alone.

COMELEC’s call for a legislative ban on AI in elections aims to prevent such abuses.

However, NAMFREL argues that such a ban could stifle technological innovation and infringe on freedom of speech and expression.

According to NAMFREL, banning or regulating the use of AI in elections may curtail technological innovation and limit the benefits of AI in enhancing electoral processes. Given the rapid evolution of AI technologies, such regulations may also quickly become ineffective.

Moreover, such a ban could infringe on freedom of speech and expression, and COMELEC may face challenges in enforcement, requiring new expertise and skills to implement and uphold the law effectively.

Instead of a ban, NAMFREL recommends the creation of a Code of Conduct encompassing six ethical principles.

These principles were discussed in a roundtable on June 26, 2024, at the University of Asia and the Pacific (UA&P), with participation from election monitoring organizations, the IT industry, AI experts, the academe, and COMELEC representatives.

Principle 1: Transparency

The use of AI in the generation of election-related content, including political advertisements, must be disclosed, and such material must be appropriately marked. The disclosure must include funding sources, expenditures, the AI technology used, data about the target audience, and the source of such data. Transparency should extend across the AI ecosystem, from content creation to audience targeting, with social media platforms actively participating and adhering to a Code of Conduct.

Principle 2: Respect for Human Rights

AI-generated content must not infringe on the suffrage, digital, and privacy rights of individuals. While harmful AI use may be penalized, a balance with free speech is essential, supported by mechanisms to address AI grievances promptly and inform people about potential rights violations by AI-generated content.

Principle 3: Accountability

Candidates and political parties should register their intention to use AI in campaigns and be open to audits of their AI-generated content. Legal liabilities and penalties should apply to candidates, political parties, campaign teams, and the public relations and advertising firms that commission or produce AI-generated content. Shared accountability is crucial because detection and monitoring cannot be handled by election management bodies alone.

Principle 4: Truthfulness

AI-generated content must uphold data integrity, with social media platforms actively moderating election-related content. Ensuring truthfulness involves candidates, political parties, and media, supported by a clear source of truth and mechanisms for fact-checking information.

Principle 5: Fairness and Non-discrimination

AI-generated content must be subject to review to detect discrimination based on race, gender, age, socio-economic status, religion, or other protected characteristics, with safeguards in place to prevent such biases. Any AI-generated content that exhibits such discrimination must not be published.

Principle 6: Oversight by COMELEC

The COMELEC, in the exercise of its oversight function, may establish a committee or task force to monitor the use of AI in the generation of election-related content, including AI-generated political advertising, with a focus on detecting misinformation, disinformation, and deepfakes. COMELEC should implement a reporting and complaint process and regulate AI-generated election paraphernalia (AI-GEP). COMELEC can also encourage candidates, political parties, and other stakeholders to adopt self-regulation mechanisms for AI use in elections and election-related activities.

NAMFREL said it has submitted this position paper to the COMELEC En Banc, and hopes that the COMELEC will consider implementing the recommendations in time for the 2025 National and Local Elections.