
In a world increasingly shaped by artificial intelligence, the line between what is real and what is fabricated has never been thinner. For any organisation that depends on digital identity, therefore, trust can no longer be assumed. It must be verified.
Not long ago, I spoke with someone responsible for fraud prevention at a retail bank. The threat landscape has shifted sharply. Their team, once confident in spotting fake IDs almost instinctively, now faces documents that pass multiple checks and appear entirely authentic until closer inspection reveals they are not.
Since 2021, digital forgeries have surged by more than 1,600%, a rise that almost perfectly mirrors the growth of generative AI tools. Where once counterfeits required specialist skills, they can now be produced by anyone with an internet connection.
What the new face of fraud looks like
Financial services remain on the front line of the fight against this scourge, but any sector that depends on digital verification now faces the same challenge.
Analysis from Experian shows a 60% rise in false identity cases in the UK in 2024 compared with 2023, with synthetic identities now making up nearly one third of all identity fraud cases. Yet only a quarter of UK organisations feel confident tackling synthetic or AI-driven fraud. As AI blurs the line between genuine and fabricated, trust is shifting from visual inspection to proof of origin.
Trust has always been the bedrock of secure interaction, whether between a consumer and their bank or a resident and their local council. Yet digital deception targets that foundation directly. When falsified credentials or cloned content pass through verification, confidence erodes across the entire system.
Making trust visible to everyone
The problem is not a lack of investment. Traditional verification systems were never designed for AI-generated deception, and criminals now use the same machine learning techniques as defenders to bypass them.
Only last month, I was helping a family member work out whether an email from their bank was genuine. At first glance it looked real: the spelling, font and branding were flawless. A closer look at the sender’s address, however, revealed a single misplaced letter that gave it away. We confirmed through the bank’s official contact channels that the message was fake, and a check of the sender’s verified logo, published via email authentication standards such as BIMI and DMARC, showed it was not the bank’s registered mark. But how many consumers have friends or family who work in this industry and can go to that level of detail to spot something fake? Not many, I would imagine.
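For readers curious what that logo check actually involves, the sketch below shows the public DNS lookups behind DMARC and BIMI. It assumes the dnspython package, and "examplebank.co.uk" is a hypothetical stand-in for a real sending domain:

```python
# Minimal sketch of the DNS lookups behind DMARC and BIMI checks.
# Requires dnspython (pip install dnspython); "examplebank.co.uk"
# is a hypothetical domain used purely for illustration.
import dns.resolver

def fetch_txt(name: str) -> list[str]:
    """Return the TXT records published at a DNS name, or [] if none."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
        return [b"".join(rdata.strings).decode() for rdata in answers]
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []

domain = "examplebank.co.uk"

# The DMARC policy lives at _dmarc.<domain>; "p=reject" instructs
# receivers to refuse mail that fails SPF/DKIM alignment.
print(fetch_txt(f"_dmarc.{domain}"))

# The BIMI record lives at <selector>._bimi.<domain> and points to the
# brand's verified logo (l=) and its verification certificate (a=).
print(fetch_txt(f"default._bimi.{domain}"))
```

A mail client performs these lookups automatically; the point is that the trust signal comes from published, verifiable records rather than from how the message looks.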
That moment brought home how fraud is no longer confined to stolen documents or forged IDs. It extends into every digital touchpoint, from email attachments to deepfake content. And while technology has created that complexity, it can also provide the solution by making authenticity provable.
The next generation of trust
Most of us have been taught to look for spelling mistakes or poor design as signs of a scam, but those cues do not always work. What people need are clearer signals that confirm when something or someone is genuine.
Younger generations are particularly exposed because so much of their lives now takes place online, from mobile banking and social media to simply doing their homework. According to LexisNexis Risk Solutions, 85% of fraudulent identities linked to younger customers now evade detection by third-party models. Personal information is constantly being shared and replicated across countless platforms. How can any parent realistically monitor that level of activity? They cannot do it alone; they need help.
That is why trust must be built into the digital world from the start, with protection at every stage. Embedding trust at scale is no small task, but as the next generation grows up entirely online, it is the only sustainable way for organisations to protect and retain the confidence of their customers and their employees.
Organisations must move towards cryptographic trust to make authenticity verifiable at the data level. By embedding PKI-based digital signatures and certificates into documents, transactions and communications, they can create a chain of authenticity that reaches back to a trusted Certificate Authority.
These protections are mathematically secure and reveal any tampering. Even if a message appears genuine, it cannot be trusted without its digital proof, which is a vital safeguard. This approach aligns with Europe’s eIDAS 2.0 regulation and the UK’s regulatory focus on resilience, transparency and strong authentication set by the Financial Conduct Authority and the Bank of England.
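As a concrete illustration of that tamper-evidence, here is a minimal sketch using Python's cryptography package. A locally generated ECDSA key stands in for a CA-issued certificate, so treat this as an outline of the principle rather than a production design:

```python
# Minimal sketch of signature-based tamper detection.
# Requires the "cryptography" package (pip install cryptography).
# In production the private key would live in an HSM and the public key
# would be distributed in an X.509 certificate chaining to a trusted CA.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

document = b"Payment instruction: transfer 100 GBP to account 12345678"
signature = private_key.sign(document, ec.ECDSA(hashes.SHA256()))

# Verification succeeds only for the exact bytes that were signed.
public_key.verify(signature, document, ec.ECDSA(hashes.SHA256()))
print("original document: signature valid")

# Flip a single digit and verification fails loudly.
tampered = document.replace(b"12345678", b"12345679")
try:
    public_key.verify(signature, tampered, ec.ECDSA(hashes.SHA256()))
except InvalidSignature:
    print("tampered document: signature rejected")
```

The verifier never has to judge whether the document "looks" authentic; the mathematics either checks out or it does not.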
Restoring confidence in what’s real
After conversations with fraud teams, colleagues, and even my own family, one thing is clear: the challenge of trust touches everyone. It is not just about protecting institutions or meeting compliance standards but about helping people feel confident in a world where authenticity is harder to judge.
AI has undoubtedly made deception easier, but it has also allowed us to build confidence in new ways. If we use technology to strengthen integrity rather than undermine it, we can create systems that prove what is real, not just detect what is fake.
Trust is not a one-time check or a tick-box exercise. It is something that must be earned and maintained every day, through transparency, accountability, and the right digital foundations. If organisations across every sector commit to this, we can all help rebuild the confidence that the digital world depends on.
Paul Holt is the GVP of EMEA at DigiCert