Deepfakes and distrust: how human provenance can rebuild digital confidence


In recent years the digital landscape has changed dramatically. What once seemed like science fiction, such as video forgeries, voice cloning and real-time face swaps, is now part of everyday life.

The scale of manipulation is accelerating fast. The UK government projects that eight million AI-generated deepfakes will be shared this year, up from 500,000 in 2023.


What once required Hollywood budgets and specialized teams now sits in the hands of anyone with a smartphone and an internet connection.

If we can no longer tell what is real and what is fake, how do we restore digital confidence?

The scale of the threat

The threat is expanding across every aspect of society. Scammers now use deepfaked celebrity endorsements to steal millions through investment schemes on social media.

After natural disasters, fraudsters deploy AI-generated video pleas using the faces of aid workers. Political deepfakes spread rapidly, showing candidates saying things they never said.

Financial institutions face an especially severe challenge. Deepfake-related fraud rose by 3,000% in 2023, and average losses per incident reached around $500,000.

Behind these statistics are real human consequences. Employees are tricked into authorizing payments to fraudulent accounts. Consumers are lured into scams through fake endorsements or cloned voices. Elderly users, often less digitally confident, are particularly exposed.

Why traditional defenses fall short

The instinctive response has been to expand detection, moderation and Know Your Customer (KYC) checks. Yet these measures are proving inadequate.

Detection tools are locked in an arms race with generative AI. Moderation at scale is costly and controversial. KYC escalation adds friction for consumers without solving the difficulty of spotting fakes in real time.

KYC also increases the risk that user-identifying content (such as selfies and document images) might be leaked or stolen, accelerating the ability of AI to impersonate real people.

As AI-generated content becomes harder to distinguish, the very idea of spotting the fake is collapsing: only 34% of people believe it is easy to tell AI content from user-generated material. This is not only a fraud problem. It is about the future of trust in digital interactions and the credibility of AI itself.

Proof of humanness as a smarter safeguard

If detecting the fake is failing, the smarter approach is proving the real. Proof of humanness means verifying that a genuine person is behind an interaction, without storing or sharing sensitive biometric data.
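To make this concrete, here is a minimal sketch of what the relying party's side of such a check could look like. It assumes a hypothetical attestation issuer, and the names (HumannessProof, is_verified_human) are illustrative rather than any real product's API. The point to notice is what the verifier handles: a signed, session-scoped token, never a selfie, document image or biometric template.

# Minimal sketch of a proof-of-humanness check (hypothetical API, not a
# real product's SDK). The verifier sees only a signed, session-scoped
# token; biometric data never leaves the attestation issuer.
from dataclasses import dataclass

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


@dataclass
class HumannessProof:
    session_id: str   # binds the proof to this one interaction
    nullifier: str    # pseudonymous ID: one person cannot claim many accounts
    signature: bytes  # issuer's signature over session_id and nullifier


def is_verified_human(proof: HumannessProof, issuer_key: Ed25519PublicKey) -> bool:
    """Return True if the issuer vouches that a real person is present."""
    message = f"{proof.session_id}:{proof.nullifier}".encode()
    try:
        issuer_key.verify(proof.signature, message)
        return True
    except InvalidSignature:
        return False

A production system would additionally check that the session_id matches the current interaction and that the nullifier has not been seen before, so a stolen token cannot be replayed elsewhere.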

The uses are clear. Banks could apply proof-of-human checks when opening accounts or authorizing transactions. Video conferencing platforms could block deepfaked executives from tricking colleagues.

Customer service teams could separate genuine callers from AI-driven scams. Rather than waiting for regulation, businesses can lead on consumer protection.
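As a purely illustrative usage of the sketch above, a bank might gate high-value transfers on that check; the threshold and the fall-back to manual review are assumptions made for the example, not industry standards.

# Hypothetical policy gate built on is_verified_human() from the sketch above.
HIGH_VALUE_THRESHOLD = 10_000  # illustrative cut-off, not a regulatory figure


def authorize_transfer(amount: float, proof: HumannessProof,
                       issuer_key: Ed25519PublicKey) -> bool:
    """Auto-approve a transfer only when a verified human is behind it."""
    if amount >= HIGH_VALUE_THRESHOLD and not is_verified_human(proof, issuer_key):
        return False  # route to manual review rather than auto-approving
    return True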

Embedding provenance and proof of human verification into digital systems demonstrates responsibility, transparency and care. It also builds public confidence by showing that technology can be used to protect, not exploit, the people it serves.

Cost and confidence: why provenance must beat deepfakes

The financial argument is equally strong. Prevention is cheaper than reimbursement and far less damaging to consumer trust. Financial institutions spend billions each year reimbursing fraud victims, while regulators demand greater compensation. Building systemic safeguards reduces those costs at the source.

At the same time, trust in digital systems is eroding: 67% of UK adults say they trust the internet less than ever, and nearly half (49%) trust less than half of what they see online.

This decline undermines digital commerce, slows the adoption of innovative services and fuels resistance to AI-powered tools that could otherwise deliver real productivity gains.

As digital assets and cryptocurrencies move further into mainstream finance, the need for strong provenance is even sharper. Without confidence in who is transacting, or whether they are human at all, stability is at risk.

Choosing the right path

We are at a crossroads. One path leads to doubt and suspicion in every interaction. The other leads to renewed trust, built on safeguards that prove humanness at the moments that matter most.

Proof of humanness should be seen as the missing piece of digital infrastructure, the foundation that allows people, businesses and governments to interact with confidence in the age of AI. The challenge is not spotting the fake, but proving the real.

Extending this vision requires recognizing that trust cannot be retrofitted after systems fail; it must be embedded by design. Organizations that act now will not only reduce fraud and operational risk but also differentiate themselves in a market where digital confidence is rapidly becoming a competitive advantage.

Consumers are already demanding stronger assurances that the people and services they interact with are authentic.

As deepfakes proliferate, provenance will shift from a niche security feature to a universal expectation. By embracing verifiable humanness as part of core digital architecture, we can create an environment where innovation thrives because users feel protected, not exposed, by the technologies shaping their daily lives.


This article was produced as part of TechRadar Pro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadar Pro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Adrian Ludwig is Chief Architect and CISO at Tools for Humanity.
