By Francis Allan L. Angelo
Artificial intelligence has taken another leap forward—and journalism may be left in its digital dust.
A recent study by the Tow Center for Digital Journalism, published in the Columbia Journalism Review (CJR), casts a glaring spotlight on the performance of eight popular AI-powered search engines. The verdict? They are uniformly bad at citing news sources, and worse, many of them get the facts wrong.
This matters—not just for reporters, editors, and publishers—but for every citizen who relies on accurate information to understand the world.
AI tools routinely misrepresent news
According to the study, the eight platforms collectively gave incorrect answers to more than 60 percent of the test queries. Grok 3, a product of Elon Musk’s xAI, had a staggering 94 percent error rate.
Even worse, many of these tools fabricate citations, linking to articles that do not exist or directing users to syndicated versions instead of original sources. This siphons off traffic—and revenue—from the actual newsrooms that broke the story in the first place.
This is not just sloppy. It’s systemic disinformation cloaked in authority.
Confidence without competence
AI tools not only mislead—they do so with unwavering confidence. The study found that AI responses were often presented with a tone of certainty, rarely flagging knowledge gaps or ambiguous facts.
This faux-authoritativeness is dangerous.
Unlike a traditional search engine that presents a list of links, these generative AI platforms summarize information in conversational language. Readers can easily mistake these summaries for definitive answers, especially if they aren’t cross-checking against primary sources.
In effect, users are lulled into a false sense of informational security.
Ethics ignored: scraping content without consent
The study also revealed that some AI tools ignored the “robots.txt” protocol, the long-standing web standard that lets publishers tell crawlers which parts of a site they may not access. Compliance is voluntary, which is precisely the problem.
By bypassing this consent mechanism, these AI engines not only flout the stated wishes of publishers, they also betray the trust and exploit the labor of news organizations.
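For readers who have never seen one, robots.txt is simply a plain-text file placed at the root of a website. A publisher that wants to opt out of AI crawling might include entries like the following (a minimal sketch; GPTBot and CCBot are the publicly documented identifiers for OpenAI’s and Common Crawl’s crawlers, and each company publishes its own):

    # Ask OpenAI's crawler to stay out of the entire site
    User-agent: GPTBot
    Disallow: /

    # Ask Common Crawl's crawler to do the same
    User-agent: CCBot
    Disallow: /

Each block names a crawler and asks it to avoid the whole site. Nothing enforces the request; honoring it is purely a matter of convention, which is exactly why the study’s finding that some tools ignore it is so troubling.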
In a country like the Philippines, where media entities already struggle with funding, harassment, and state pressure, this kind of digital trespassing only worsens the crisis.
Journalism is being robbed—and blamed
This shift isn’t just about faulty tech. It’s a systemic challenge that undercuts the sustainability of journalism. When AI platforms lift information without attribution, they erode two critical resources: trust and income.
Fewer clicks mean less ad revenue. Less visibility means fewer subscribers. And the more AI-generated text dominates search results, the harder it becomes for real journalism to get noticed.
Even worse, when people encounter misinformation or errors, they often blame “the media,” even when the faulty content was generated by an AI tool rather than a newsroom.
The risk is double-edged: journalists are penalized for errors they didn’t make, and the public becomes more skeptical of all information.
What journalists can do now
The Tow Center study makes one thing clear: the rise of AI search tools is not just a technological issue—it is a media survival issue. But it also opens doors for journalism to innovate and reclaim its authority.
Here are five ways forward:
- Be transparent: Newsrooms must show their work. Explaining how stories were sourced, verified, and written can rebuild public confidence. Transparency is a journalist’s best defense against AI’s algorithmic opacity.
- Educate readers: Journalists should start treating digital literacy as part of their mission. News outlets can publish guides on spotting AI-generated content and teach users how to verify what they read.
- Push for ethical AI: Journalists and publishers must lobby tech companies and lawmakers to adopt ethical standards for AI. This includes honoring robots.txt exclusions, requiring clear source citations, and flagging AI-generated content.
- Use AI—but wisely: AI is not inherently the enemy. Newsrooms can use it for transcription, summarization, or data scraping—as long as the editorial process stays human-led. Responsible use of AI can free up reporters to focus on deeper, investigative work.
- Forge partnerships with tech developers: Media organizations must stop being passive victims of technology and start shaping it. Partnering with AI developers to build attribution-friendly search engines could provide long-term solutions.
The local stakes are just as high
In the Philippine context, these global findings are not abstract. They hit home.
Local journalism—especially in provinces like Iloilo and other parts of Western Visayas—already struggles with visibility in a Manila-centric media environment. If AI search engines fail to surface regional news, or worse, misattribute it, then the public loses access to crucial information about their own communities.
We must also guard against a future where AI rewrites our history, distorts our current events, or becomes the default interpreter of news, unfiltered and unaccountable.
News without journalists is not journalism
The CJR study should be a wake-up call. The internet was once hailed as a democratizer of information. Now, it risks becoming a labyrinth of recycled content, where original reporting is buried and facts are optional.
AI search engines are here to stay. But they should not replace the journalistic values of verification, attribution, and accountability. The path forward requires vigilance, collaboration, and a refusal to cede truth to machines.
For journalism to survive—and thrive—in the age of AI, it must do what AI cannot: tell the full story, with context, care, and a human conscience.
This piece was inspired by the findings of the Tow Center’s report on AI search engines and their impact on journalism, published by the Columbia Journalism Review.