Edison explains: AI slop – when fake news costs real money

Written by Neil Shah, Executive Director, Market Strategist

What is AI slop and why should investors care?

AI slop refers to low-quality AI-generated content that floods the internet with plausible-sounding but often inaccurate or misleading information. The term, originally coined to describe generic AI-generated text and images cluttering the web, has taken on particular significance in financial markets. For retail investors, AI slop represents a growing threat to informed decision-making.

The problem has escalated dramatically since late 2022 when generative AI tools became widely accessible. Unlike traditional misinformation, which required human effort to create and distribute, AI slop can be produced at industrial scale with minimal cost or expertise. A recent article from Institutional Investor noted that ‘the failure of current detection methods to reliably identify AI slop introduces systemic risk into any strategy relying on web-scraped data’. For retail investors who increasingly rely on online forums, social media and free research platforms, this means the information landscape has become significantly more treacherous.

How does AI slop spread inaccurate financial information?

The mechanics of AI slop in financial markets are particularly insidious. Generative AI tools can produce convincing-looking analysis, complete with charts, tables and technical jargon, without any underlying expertise or fact-checking. These synthetic reports then flood investment forums, social media platforms and even legitimate-looking websites. The North American Securities Administrators Association explicitly warns that ‘state securities regulators expect an uptick in 2025 of bad actors using AI to generate professional graphics, videos and content that create the illusion of legitimacy.’

Stephen Clapham of Behind the Balance Sheet recently documented how a LinkedIn post from a strategist at a respected company made dramatic claims about AI hyperscaler depreciation, suggesting $2.5tn in AI assets would generate $500bn in annual depreciation expense by the decade’s end. As Clapham demonstrated, the calculation was fundamentally flawed: it failed to account for assets purchased before 2025 being fully depreciated by 2030, incorrectly applied a uniform 20% depreciation rate across all AI hyperscalers and ignored that total capex includes land and buildings with far longer useful lives. The post nonetheless spread widely, illustrating how plausible-sounding analysis — whether AI-generated or not — can mislead investors who lack the accounting expertise to spot the errors.
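The flaw Clapham identified can be checked with a back-of-the-envelope vintage schedule. The sketch below uses hypothetical capex figures, chosen only so that the purchase-year vintages sum to the post's $2.5tn asset base; the five-year asset life is implied by the post's flat 20% depreciation rate. It is an illustration of the reasoning, not a forecast.

```python
# Sketch of the vintage-level check implied by Clapham's critique.
# All capex figures are hypothetical and sum to the post's $2.5tn asset base.

total_assets = 2.5e12                 # $2.5tn projected AI assets (from the post)
naive_dep_2030 = total_assets * 0.20  # the post's flat-rate claim: $500bn

# Hypothetical capex by purchase year (sums to $2.5tn):
capex_by_year = {
    2021: 150e9, 2022: 200e9, 2023: 300e9, 2024: 400e9, 2025: 450e9,
    2026: 400e9, 2027: 300e9, 2028: 200e9, 2029: 100e9,
}

LIFE = 5  # years of straight-line depreciation, starting the year after purchase

# An asset bought in year y depreciates over years y+1 .. y+LIFE, so it
# contributes to 2030 depreciation only if 2025 <= y <= 2029. Anything
# bought before 2025 is already fully written off by 2030.
vintage_dep_2030 = sum(
    capex / LIFE
    for year, capex in capex_by_year.items()
    if year + 1 <= 2030 <= year + LIFE
)
```

On these illustrative figures the vintage calculation gives roughly $290bn of 2030 depreciation, well below the $500bn headline, even before adjusting for land and buildings with longer useful lives.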

The problem compounds in the AI world in two ways: first, through the sheer volume of content that can be generated; second, when AI-generated content gets recycled. One hallucinated earnings figure or misquoted guidance statement can be picked up by other AI systems scraping the web, creating a cascade of misinformation. Research by Betty Liu of Indiana University and Austin Moss of the University of Colorado Boulder found that ‘fake news authors are less likely to target firms with more robust accounting information’, suggesting fraudsters deliberately exploit information gaps where verification is more challenging. For retail investors researching smaller companies or emerging sectors, such as those that Edison covers, this creates a minefield of potentially false data points that can lead to costly investment mistakes.

Exhibit 1: Most Americans are concerned about people getting inaccurate information from AI

Source: Pew Research Center

During NPR’s podcast All Things Considered, business and economics journalist Wailin Wong stated that ‘generative AI makes it super easy for bad actors to manufacture misinformation. With a few keystrokes, anyone can make a fake news article.’ The velocity and volume of this content mean that even diligent investors can struggle to separate signal from noise, particularly when AI-generated content mimics the style and format of legitimate analysis.

Can AI slop actually move markets?

The evidence suggests AI-generated misinformation can trigger real market movements, often with devastating consequences for retail investors, who typically react too slowly once the market recognises that misinformation was involved. The World Economic Forum reports that ‘disinformation – including fake news, hacked accounts and deepfakes – has caused billions of dollars in market losses and led to poor financial decisions.’ Two high-profile incidents illustrate the scale of the problem.

In May 2023, an AI-generated image purporting to show an explosion at the Pentagon briefly circulated on social media, causing a sharp drop in US equity markets before the image was debunked. More recently, in January 2024, hackers compromised the US Securities and Exchange Commission’s (SEC’s) X account to falsely announce the approval of a Bitcoin exchange-traded fund. The announcement sent the Bitcoin price swinging wildly, and retail investors who bought at the peak or sold at the trough suffered real losses before the SEC clarified that the account had been hacked.

These incidents highlight a particular vulnerability for retail investors. Algorithmic trading systems and institutional investors can often reverse positions quickly once misinformation is identified, but individual investors typically lack the speed or resources to react.

How are regulators responding to AI slop?

UK regulators have begun addressing the threat, though enforcement remains challenging. The Bank of England acknowledges that while ‘there are market conduct regulations to guard against market manipulation,’ the scale and speed of AI-generated content presents novel challenges. Furthermore, the Financial Conduct Authority (FCA) has issued stark warnings about fraudsters impersonating the regulator itself. In the first six months of 2025, the FCA received almost 5,000 reports of fake FCA scams, demonstrating how AI tools are being weaponised to create convincing forgeries of official communications. For retail investors, this means even regulatory warnings and official-looking documents require verification.

How can retail investors identify and avoid AI slop?

Protecting yourself from AI slop requires scepticism, verification and source discipline:

  1. Scrutinise the provenance of any research or analysis. Legitimate investment research firms employ named analysts with professional credentials and regulatory oversight, so be immediately suspicious of anonymous sources making specific price predictions or urgent calls to action.
  2. Verify key data points independently by cross-referencing claims against official company filings, regulatory announcements and established financial data providers.
  3. Watch for tell-tale signs of AI generation such as overly formal or generic language, lack of specific attribution, repetitive phrasing and an absence of genuine insight.
  4. Stick to established, reputable sources for investment research. If professional institutional investors pay substantial sums for verified research, free AI-generated content is unlikely to provide comparable value.

Exhibit 2: An AI-generated picture showing smoke billowing near the Pentagon

Source: Original source unknown, copy of image taken from Rani et al, Dialogues with AI Reduce Beliefs in Misinformation but Build No Lasting Discernment Skills

Edison’s view

Unlike previous waves of misinformation, which required human effort and could be combated through education and regulation, AI-generated content operates at a scale and speed that overwhelms traditional defences. The asymmetry is stark: institutional investors have compliance teams, verified data feeds and sophisticated tools to filter noise, while retail investors often rely on free online sources that are increasingly contaminated with AI slop. This information disadvantage translates directly into investment risk.

The solution requires both regulatory action and investor discipline. Regulators must develop new frameworks for AI-generated content in financial markets, while investors must adopt a more rigorous approach to source verification. In an environment where over 90% of consumers express concern about AI spreading misinformation, scepticism is not paranoia but prudence. The investors who thrive in this new landscape will be those who recognise that in an age of abundant information, the scarcest resource is reliable intelligence.

What should investors do now?

The rise of AI slop is not a temporary phenomenon but likely a permanent feature of modern financial markets. Retail investors must adapt by treating information verification as a core investment skill, not an optional extra. Start by auditing your current information sources and eliminating any that cannot demonstrate clear editorial standards, named authorship and verifiable track records. Build a curated list of trusted sources and resist the temptation to chase novel analysis from unproven platforms, no matter how compelling it appears. In an era where AI can generate unlimited quantities of plausible-sounding nonsense, the ability to distinguish signal from slop may be the difference between investment success and costly mistakes.
