Terri is a thought leader in Digital Journal’s Insight Forum.
A comment under a viral video sums up the feeling of an era: “I miss the old Internet, when fake was fake and real was real.”
That sense of loss runs deep these days.
It captures a turning point in human experience: the reality before AI and the AI reality in which we are now immersed.
The phrase “coin of the realm” once referred to the official currency recognized as genuine and valuable. It was proof of authenticity, the benchmark of what could be believed. In this era, that benchmark is trust.
In the past two years, AI has become capable of generating sound, image, and motion so convincing that our eyes, ears, and instincts can no longer tell the difference.
The acceleration has been staggering. A politician appears to sharply and uncharacteristically criticize an opponent. A celebrity endorses a product they’ve never seen. A son’s voice calls asking for money, but it isn’t him.
Truth itself has become unstable.
As generative AI floods every channel with realistic content, the next frontier isn’t more creativity; it’s more credibility.
The most valuable commodity in the coming decade will not be data, talent, or technology. It will be a form of trust that cannot be faked.
Living through the collapse of certainty
“We’re living in an era of a massive lowering of trust.”
– Jimmy Wales, cofounder of Wikipedia
We’ve lived through disruption before, but never one that rewired our perception of reality so quickly.
AI systems now generate more than 15 billion synthetic images a month. Deepfake videos have increased ninefold since 2022, and tools like OpenAI’s Voice Engine can clone a person’s voice from a 15-second audio sample.
The effect is twofold. First, epistemic confusion: the inability to know what’s true. Second, authenticity inflation: when everything looks perfect, we start trusting imperfection instead. We begin to equate roughness, flaws, and hesitation with humanity.
This erosion of certainty is reshaping both private perception and public life around the world. In one MIT Media Lab study, when people were told that a photo might be AI-generated, their trust in all images dropped by over 40%.
The collapse of trust compounds. Once people stop believing their senses, they stop believing each other. It’s an erosion that can hardly be quantified.
Today’s economics of trust
The Edelman Trust Barometer for 2025 found that global trust in information sources is at record lows. Only 37% trust social media, 51% trust traditional media, and 42% trust corporate communications. Yet the companies that sustain high trust outperform their peers. PwC’s CEO Survey showed that high-trust firms grew revenue at two and a half times the industry average.
The implication is stark. In a synthetic world, trust becomes not a moral virtue but an economic differentiator. It functions like capital: earned, stored, and spent.
Over the next decade, businesses will begin to treat credibility as a measurable asset class. Auditors, insurers, and investors will all want to quantify it. The winners will not be those who can produce the most persuasive message, but those whose truth can be proven and elevated above the noise.
When everything can be faked
Three domains show how far the problem already goes.
Politics
In politics, deepfakes are shaping elections. According to analysis by Steptoe & Johnson, synthetic videos and audio clips were detected in all thirty countries that held national elections between July 2023 and July 2024. The targets ranged from sitting presidents to local candidates, and many of the fake videos amassed millions of views before being debunked.
Financial services
AI-generated voice and video deepfakes are fueling a surge in financial fraud, with increasingly sophisticated scams targeting corporate systems.
In one high-profile case, a finance employee at the Hong Kong office of British engineering firm Arup was tricked by a multi-person deepfake video conference. Believing he was speaking to the firm’s CFO and other real colleagues, he authorized fraudulent transfers of about $25 million across 15 transactions.
In 2024, over 845,000 imposter fraud incidents were reported, many involving voice cloning attacks where criminals replicated executives’ voices using just seconds of audio, leading to unauthorized transactions and identity breaches.
These incidents highlight how AI is weaponizing trust and reshaping risk in global finance.
Marketing
In digital marketing, AI-generated influencers are blurring the line between reality and simulation.
Lil Miquela, a virtual model created by Brud, has amassed millions of followers and collaborated with brands like Prada and Calvin Klein. Similarly, Imma, a Japanese AI influencer, has appeared in campaigns for IKEA and Puma, captivating audiences with her hyper-realistic aesthetic.
Even newer entrants like Noonoouri, a digital fashion icon, have signed deals with Dior and Versace, proving that audiences often embrace synthetic personas as long as the content resonates. These examples show how brands are leveraging AI avatars beyond novelty.
As a result, the new consumer malaise is a mixture of fatigue, apathy, and anxiety, where audiences distrust the genuine as much as the artificial. Others are pushed past caring at all.
The real cost of a synthetic world is not deception, it’s indifference and avoidance.
A race for authenticity
The good news is that a counter-movement is forming.
The Coalition for Content Provenance and Authenticity (C2PA), a consortium including Adobe, Microsoft, and OpenAI, is creating technical standards for verifying digital media at the source. Each image or file carries a “fingerprint” that records when, where, and by whom it was made.
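To make the “fingerprint” idea concrete, here is a minimal sketch of binding when-, where-, and by-whom metadata to a file through a content hash and a signature. This is an illustration only, not the C2PA specification itself, which defines signed binary manifests backed by X.509 certificates; the HMAC key here stands in for a real asymmetric signing key, and all names are hypothetical.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Stand-in for a real signing credential; C2PA uses certificate-based
# signatures, not a shared secret.
SIGNING_KEY = b"demo-secret-key"

def make_manifest(content: bytes, author: str, location: str) -> dict:
    """Bind who/when/where metadata to a content hash and sign the record."""
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "author": author,
        "location": location,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_manifest(content: bytes, record: dict) -> bool:
    """Check the signature, and that the content still matches its hash."""
    claimed = dict(record)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())

photo = b"...image bytes..."
manifest = make_manifest(photo, author="Jane Doe", location="London")
print(verify_manifest(photo, manifest))            # True: untampered
print(verify_manifest(b"edited bytes", manifest))  # False: content changed
```

The key property is that the metadata and the pixels are cryptographically bound: edit either one and verification fails, which is what lets a fingerprint travel with a file across platforms.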
Other firms are pushing further. Startups like Truepic and Numbers Protocol use blockchain to anchor metadata and ownership proofs. Intel’s FakeCatcher can detect deepfakes by analyzing subtle blood-flow patterns in human faces.
None of these solutions are perfect. Detection rates still fall below 85%, but they point to an emerging infrastructure of verification.
The irony is inescapable. We are now using AI to prove what is real, in a world where AI makes it impossible to know what is real.
The brand imperative
For companies, the implications go well beyond communications.
Forrester’s 2024 research found that 62% of consumers would permanently abandon a brand involved in a verified deepfake incident, and consumers are getting ahead of the issue themselves: 71% said they now fact-check brand claims against third-party sources.
Marketing, once about storytelling, is now about provenance. Every asset must be traceable. Transparency is becoming more persuasive than perfection, and brands will compete to display their authenticity tags, their AI disclosures, their creative supply chains.
The brands that thrive will be those that show their process, not just their polish.
Embedding proof of reality
A new trust architecture is emerging.
Camera manufacturers are embedding watermark standards directly into devices. AI platforms are adding authenticity APIs to label verified outputs automatically. Identity protocols such as HumanID and Worldcoin are experimenting with proof-of-personhood tokens that confirm a human origin for any digital act.
We are moving toward a world where every piece of media carries a provenance trail as detailed as a shipping manifest. Authenticity will be programmable, and truth will come with metadata.
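The “shipping manifest” metaphor can be sketched as a hash-linked trail: each step in a media file’s life (capture, edit, publish) is chained to the hash of the previous step, so rewriting history anywhere breaks the chain. This is a simplified, hypothetical model of how provenance systems like Numbers Protocol chain records; it is not any vendor’s actual format.

```python
import hashlib
import json

def add_step(trail: list, action: str, actor: str) -> list:
    """Append a provenance entry linked to the previous entry's hash."""
    prev_hash = trail[-1]["entry_hash"] if trail else "genesis"
    entry = {"action": action, "actor": actor, "prev_hash": prev_hash}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["entry_hash"] = hashlib.sha256(payload).hexdigest()
    return trail + [entry]

def trail_is_intact(trail: list) -> bool:
    """Recompute every link; any edit to past entries breaks the chain."""
    prev = "genesis"
    for entry in trail:
        claimed = {k: v for k, v in entry.items() if k != "entry_hash"}
        if claimed["prev_hash"] != prev:
            return False
        payload = json.dumps(claimed, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != entry["entry_hash"]:
            return False
        prev = entry["entry_hash"]
    return True

trail = []
trail = add_step(trail, "captured", "camera:XYZ-123")
trail = add_step(trail, "color-corrected", "editor:jane")
trail = add_step(trail, "published", "outlet:example-news")
print(trail_is_intact(trail))   # True

trail[1]["actor"] = "editor:impostor"  # attempt to rewrite history
print(trail_is_intact(trail))   # False
```

That recomputation step is the “friction” the article describes: a verifier has to walk the trail before extending belief, rather than taking the media at face value.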
This won’t stop deception, but it will reintroduce friction, the pause before belief that modern information systems stripped away.
An ethical fault line
Of course, there’s a danger in all of this. If authenticity systems determine what counts as verified, who decides what’s true?
Governments could weaponize verification to suppress dissent. Platforms could tilt algorithms toward “official” truth while demoting alternative narratives.
The fight for verified truth might save us from chaos or create new information empires that profit from it. Power will shift to whoever controls the verification layer.
Leadership in the new trust economy
For executives and policymakers, the question isn’t whether AI will distort reality. It already has. The question is whether your organization can become a trusted signal in the noise.
Leadership in the new trust economy will depend on three disciplines:
1. Transparency. Tell the market how AI is used, when it’s used, and why. Hidden automation breeds suspicion.
The companies earning trust today are those showing their work, explaining the reasoning behind an AI decision, or labeling generated content so users can see the hand behind the machine. Transparency doesn’t slow innovation, it builds confidence in it.
2. Provenance. Build systems that track and verify the source of every piece of content you produce. Treat authenticity as infrastructure, not compliance.
Forward-thinking brands are already embedding metadata trails and content credentials so audiences know what’s real, where it came from, and how it was created. Provenance turns proof into policy.
3. Consistency. Every action either reinforces or erodes credibility. In an age of infinite information, trust is cumulative behavior. A leader who says one thing and does another—online or off—undermines the entire organization’s integrity. Consistency transforms credibility from a slogan into a strategy.
Leaders who master these disciplines will manage belief. They’ll recognize that in a world shaped by algorithms, trust itself is now a competitive advantage.
The future of credibility
Perhaps people are right to miss the old Internet, when fake was obvious and truth felt tangible.
But nostalgia won’t bring it back.
What we can build instead is a world where truth has a lineage, where each digital act carries proof of origin and intent. And maybe that’s an even better Internet.
AI will force us to rebuild the systems that protect truth. The institutions, companies, and leaders who succeed in this new era will be those who treat authenticity as a first principle. The future of credible leadership will belong to those who treat truth as something embedded and traceable in every product, policy, and post.
Because when everything can be faked, what matters most isn’t what looks real, but what can be proven as real.
