Issue #10: The Cost of AI-Generated Misinformation for Businesses
The double-edged sword of AI in journalism
AI is increasingly used to generate or summarize news content, but its mistakes are affecting business credibility, finances, and public trust.
Factual inaccuracies and misattributed sources may seem like small slips, but they have an outsized effect on how people perceive both the news and the brands behind it.
Undoubtedly, AI-generated errors in news reporting have real consequences.
This week we explore how companies are grappling with reputation damage, legal risks, and regulatory scrutiny as they adopt AI assistants for content creation and information delivery.
Prevalence of AI Errors in News Content
Leading AI assistants frequently misrepresent news content. A recent international study by the European Broadcasting Union (EBU) and BBC examined 3,000 news-related queries across platforms including ChatGPT, Microsoft’s Copilot, Google’s Bard/Gemini, and Perplexity.
The findings were striking: 45% of AI responses contained at least one significant issue, and 81% had some form of problem. The issues ranged from factual inaccuracies to sourcing errors, with missing or misleading attributions in about one-third of responses.
Gemini, Google’s assistant, was especially problematic: 72% of its answers showed serious sourcing issues, compared with under 25% for the other assistants. About 20% of all AI responses contained factual errors or outdated information.
In one cited case, Gemini incorrectly stated changes to a law (on disposable vapes) that hadn’t actually occurred, while ChatGPT referred to Pope Francis as still alive months after his death.
Separate BBC research in early 2025 similarly found that over 51% of AI-generated answers had “significant” errors, and 91% had at least minor inaccuracies or misrepresentations. These errors included wrong statistics, false facts, and even made-up quotes, often presented in a confident tone that could mislead readers.
Not only do AI assistants get facts wrong, they also struggle with distinguishing opinion from fact and maintaining impartiality. The BBC study noted instances of chatbots injecting biased language or presenting opinions as if they were BBC facts, even attributing sentiments to news sources that never appeared in the original articles.
Such distortions turn a news summary into what BBC analysts called a “confused cocktail” of truth and error, which can erode public trust in news media. As the EBU’s media director warned, when people can’t tell what’s accurate, “they end up trusting nothing at all,” undermining audience confidence in both the news and democratic discourse.
Real-world examples of AI news inaccuracies and business fallout
These aren’t cautionary tales about some hypothetical future; they’ve already hit the headlines. From market-shaking mistakes to high-profile editorial blunders, here’s a look at how AI-driven news misfires have impacted major companies in the real world.
1. Google Bard’s Costly Error
In February 2023, Google unveiled its Bard AI with an ad that contained a factual mistake. Bard claimed the James Webb Space Telescope “took the very first pictures of a planet outside our solar system.” In truth, another telescope had done so years earlier.
The fallout was swift. Alphabet’s stock plunged by nearly 9%, losing $100 billion in value. Google had to pull back Bard’s launch and recommit to more rigorous testing.
2. Gizmodo’s AI Article Fiasco
In July 2023, Gizmodo published an AI-written article listing Star Wars films chronologically. It was riddled with errors. At least 18 mistakes were flagged by staff. The article ran without clear AI disclosure, sparking backlash internally and externally. Gizmodo’s credibility took a hit, and its leadership faced internal revolt over how AI was deployed.
3. CNET’s Quiet Corrections
CNET used a proprietary AI to write finance articles, many of which contained serious factual and mathematical errors. A post-publication review revealed over half of the articles needed corrections. This led to a pause in the AI program, public editor’s notes, and promises of better oversight moving forward.
4. Men’s Journal Publishes Dangerous Health Advice
An AI-generated health article in Men’s Journal included at least 18 false or misleading claims about testosterone. After expert review, the publication had to revise the article and issue a note, all while defending the experiment as a “work in progress.”
5. Gannett’s Embarrassing Sports Coverage
Gannett used AI to automate high school sports recaps, but the stories included bizarre language, factual gaps, and even unfilled placeholders. The backlash was so severe that Gannett suspended the tool and revised the articles, promising more editorial control.
6. AI-Fueled Market Volatility
An AI-generated fake image of an explosion at the Pentagon briefly caused a dip in U.S. markets before being debunked. Other examples include fabricated legal claims against public figures by AI chatbots, prompting legal threats and defamation suits. These incidents spotlight the legal, financial, and social risks when AI tools hallucinate or fabricate sensitive news.
Key Business Risks from AI News Errors
The ripple effects of AI-generated misinformation are already shaping boardroom decisions, brand strategies, and compliance frameworks across industries.
As generative AI tools take on a more visible role in content creation, the consequences spill over into public communication.
Companies must now confront a new category of risk: errors they didn’t author, but are still accountable for. Below are the five most pressing business risks stemming from flawed AI-generated news and content, and why leaders can’t afford to ignore them.
1. Credibility and Brand Integrity
When companies publish or rely on AI-generated content with errors, their credibility takes a hit. Readers may disengage, staff morale may fall, and recovery often requires public mea culpas and policy overhauls.
2. Financial and Market Exposure
AI mistakes can move markets. Google’s Bard misstep alone wiped billions in market value. Meanwhile, AI content that bypasses source traffic (as seen with search engines summarizing news) threatens publisher revenue models.
3. Legal and Compliance Threats
Libel lawsuits, FTC penalties, and regulatory action are all on the table when AI content causes harm. Whether it’s a fabricated legal case or inaccurate financial summaries, companies could face reputational and legal fallout.
4. Operational Overhead
If AI content requires as much human editing as it saves in production time, the value proposition crumbles. Additionally, flawed AI output can mislead internal decision-makers, affecting strategy and execution.
5. Sector-Specific Risks
News organizations face immediate reputational and commercial harm. Financial institutions risk market-moving misinformation. Health companies must guard against life-threatening inaccuracies. Legal and professional services require airtight accuracy, while tech firms face brand scrutiny based on the reliability of their own AI tools.
How Companies Are Responding
Facing mounting pressure from users, regulators, and their own reputational missteps, companies aren’t standing still. Across industries, organizations are building guardrails around AI.
Editorial Oversight
Many publishers now mandate human review for all AI-generated content. The Associated Press, CNET, and others emphasize journalistic accountability and ban certain AI uses outright (like altering multimedia).
Disclosure Policies
Readers deserve transparency. Organizations now label AI-written articles, issue public corrections, and use editor’s notes to acknowledge AI involvement and errors.
Governance and Training
AI ethics guidelines, editorial standards, and AI-specific training for staff are on the rise. Teams are also trained in prompt crafting, error detection, and content review.
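Part of that error detection can be automated. The sketch below is a minimal, hypothetical illustration (the function names and workflow are assumptions, not any publisher’s actual tooling) of one simple check an editorial team might run: flagging direct quotes in an AI draft that never appear verbatim in the cited source material.

```python
import re

def extract_quotes(text: str) -> list[str]:
    """Pull direct quotes of at least 10 characters out of a draft."""
    return re.findall(r'"([^"]{10,})"', text)

def flag_unverified_quotes(draft: str, source_texts: list[str]) -> list[str]:
    """Return quotes from the AI draft that never appear in any source text.

    This only narrows an editor's search; a human still reviews every flag.
    """
    return [
        quote
        for quote in extract_quotes(draft)
        if not any(quote in source for source in source_texts)
    ]

# Hypothetical example: one genuine quote, one fabricated quote.
source = 'The minister said "no changes to the law are planned this year".'
draft = ('The minister said "no changes to the law are planned this year", '
         'adding that "the ban will take effect in June".')
print(flag_unverified_quotes(draft, [source]))
# -> ['the ban will take effect in June']
```

A check like this catches fabricated quotes but not subtler distortions or missing context, which is why human sign-off remains the backstop.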
Technical Safeguards
Some AI companies integrate real-time information retrieval, add fact-checking layers, or restrict model behavior in sensitive topics. Model accuracy improvements are ongoing, but far from complete.
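What a fact-checking layer looks like varies by vendor. The sketch below is a simplified, hypothetical illustration of the general pattern (check whether each claim in a draft is backed by a retrieved source, and escalate to a human when it isn’t); the names, threshold, and routing labels are assumptions, not any specific company’s implementation.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    supporting_source: str | None  # URL of a retrieved source, or None if ungrounded

def review_gate(claims: list[Claim], min_grounded_ratio: float = 1.0) -> str:
    """Route an AI-drafted summary based on how many of its claims are grounded.

    Any shortfall sends the draft to a human editor instead of auto-publishing;
    the threshold can be kept at 1.0 for sensitive topics like health or finance.
    """
    if not claims:
        return "escalate-to-editor"
    grounded = sum(1 for c in claims if c.supporting_source is not None)
    if grounded / len(claims) >= min_grounded_ratio:
        return "auto-publish"
    return "escalate-to-editor"

# Hypothetical example: one claim lacks any retrieved source, so the draft is escalated.
claims = [
    Claim("The law on disposable vapes changed in June.", None),
    Claim("The EBU/BBC study examined 3,000 news-related queries.", "https://example.org/ebu-study"),
]
print(review_gate(claims))  # -> escalate-to-editor
```

In practice, a gate like this would sit on top of the real-time retrieval mentioned above, with stricter thresholds for the sensitive topics where model behavior is already restricted.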
Incident Response Playbooks
Organizations are preparing response plans for AI content failures — including takedowns, corrections, retractions, and public communications. Some even limit AI’s role in high-risk workflows until accuracy improves.
Trust, Regulation, and the Future
AI’s role in content is growing, but public trust is fragile. If errors continue, trust in both AI and the media may erode, especially among younger users. Regulation is coming: the EU’s AI Act and U.S. FTC actions are early signs.
Companies that self-regulate (with transparency, governance, and editorial rigor) are more likely to thrive.
The real power of AI lies not just in automation, but in accountability. Companies that get the balance right will win trust and the future.



