The Synthetic Media Crisis
Why events are becoming a systemically critical bastion of trust
Across the business events industry, we have long argued that trust is built through physical presence. Until recently, that claim was mostly anecdotal. But in 2026, it has become an empirical necessity. As digital environments are subsumed by synthetic media, trust is no longer a given; it is a premium asset that requires verification.
I realised the gravity of this shift while working on the global trends section for our JWC Global Industry Performance Review (GIPR), which is out now.
The internet has hit a tipping point: human interaction is now the minority, and bots and AI-generated content dominate. The result is what has become known as the “Synthetic Media Crisis”.
The Data: A Minority Interest in the Digital Square
The “human-centric” internet effectively ended in 2024. That year, automated traffic surpassed human activity for the first time, with bots accounting for 51% of global web traffic. Within that, “bad bots” (those used for fraud, scraping, and account takeovers) hit 37%, completing a six-year growth trend.
The social platforms that we rely on to a large degree for professional networking have become automated forests:
X (formerly Twitter): Independent audits estimate 64% of all accounts are bots. Between a quarter and a third of daily active users are automated entities.
LinkedIn: The “professional” narrative is increasingly artificial. Over 54% of long-form posts in 2025 were identified as likely AI-generated.
Commercial Reliability: Between 30% and 47% of online reviews are fake. AI-generated reviews on platforms like Zillow have scaled from 3.6% in 2019 to nearly 24% today.
The Scale of Synthetic Production
We are facing a volume of media that human production cannot possibly match. By late 2024, AI became the internet’s “leading author,” producing more daily articles than manually written ones. In recommendation-heavy feeds like Pinterest or YouTube Shorts, up to 90% of visual content is now “AI slop” optimized for engagement over accuracy.
The Copenhagen Institute for Futures Studies (CIFS) warns that if AI agents continue to operate in unstructured environments, 99.9% of all online media could be synthetic by 2030.
This leads to a structural “Turing Test Failure.” In 2025, a UC San Diego study showed that humans identified GPT-4.5 as “human” 73% of the time, statistically more often than they identified actual human participants. Conversational realism has overtaken provenance as the primary signal of authenticity.
The Institutional Response: “TrustOps”
Accordingly, corporate strategies are shifting to meet this “trust recession.” Companies are no longer just producing content; they are deploying multimodal monitoring tools to defend against “full-stack” synthetic media, which includes everything from fabricated CEO messages to AI-stitched reviews.
Analysts predict half of all enterprises will invest in “disinformation security” and “TrustOps” by 2027. On the regulatory side, the EU AI Act mandates that as of August 2, 2026, all AI-generated media must carry visible warnings and machine-readable technical markings.