

AI fraud is hitting Canadian companies’ bottom lines

AI can now imitate voices, colleagues, and job candidates. Canadian companies are discovering that trust signals are easier than ever to fake.

Photo by Getty Images on Unsplash

The call sounded urgent.

A senior executive’s voice, familiar and slightly impatient, asked for a wire transfer to close a time-sensitive deal, so the finance team moved quickly. The money left the account. 

But the executive never made that call. By the time anyone realized something was wrong, the funds were gone.

And now incidents like this are showing up in financial results.

New data released this week by KPMG shows that among Canadian companies that experienced fraud in the past year, 72% say they lost between 1% and 5% of annual profits to it, and 81% say AI was involved in the incidents.

Seven in 10 were targeted more than once. 

At the same time, 94% of leaders say they are concerned about AI-driven fraud, but only 26% have a tested incident response plan that specifically addresses deepfakes and synthetic impersonation.

For decades, organizations relied on human recognition as a control. If a familiar voice requested a transfer or a colleague appeared on a video call, the interaction carried built-in credibility.

But AI is starting to break that assumption. 

Engineering firm Arup confirmed that in January 2024, a finance employee in its Hong Kong office transferred $25.6 million USD after joining what appeared to be a routine internal video conference. The CFO and several colleagues were visible on screen. The request aligned with company activity.

But every participant in that meeting was an AI-generated deepfake.

The Arup incident demonstrated that authority and identity can both be simulated. KPMG’s findings suggest Canadian companies are already absorbing the financial impact.

Identity becomes the new attack surface

Generative AI tools now allow attackers to automate impersonation at a scale that was previously impossible.

These systems can replicate writing style, make a report look like it came from a trusted colleague in accounting, and synthesize voices from publicly available clips. A convincing internal email no longer requires weeks of reconnaissance. A cloned voice message can be generated in minutes.

Security firm Huntress reports that 23.2% of applicants for certain technical roles were flagged as potential fraud risks, with red flags including AI-generated resumes and manipulated identities. It also found that 17% of hiring managers surveyed have encountered candidates who appeared to be using deepfake technology during interviews.

An artificial job candidate who passes screening gains legitimate credentials and internal access. At that point the intrusion looks like a normal employee.

Gartner expects that by 2028 one in four job candidate profiles globally will be completely fake. Hiring processes are becoming security checkpoints.

Executive communication, recruitment, vendor onboarding, and financial approvals all depend on signals of identity that AI can now replicate.

Canada’s own security agencies are tracking the shift. 

Cybercrime is now considered the top threat to Canadian organizations, with emerging technologies increasing both the scale and sophistication of attacks. Criminal actors are using automation and widely available digital tools to improve success rates while reducing effort.

An organization can have policies in place and still lose money if its decision-making system can’t keep pace with AI-driven manipulation.

The question is whether verification systems can keep pace with the tools targeting them.

Designing verification into decision making

AI-driven fraud is forcing organizations to rethink how trust works inside operational systems.

Many internal processes were designed for speed. Executives approve transfers quickly, and hiring managers move fast when talent is scarce. Speed became a design goal.

Synthetic impersonation exploits that design.

When a request arrives from a familiar voice or account, people act. That reflex worked for decades. AI-generated identity removes the safeguard.

Financial approvals increasingly require confirmation through a separate communication channel, a practice known as out-of-band authentication. 

Some companies require a phone call or in-person verification before large transfers move forward. Others are building layered identity checks into hiring processes and vendor onboarding.

Security teams are experimenting with continuous identity validation, behavioral monitoring, and systems that detect whether a real person is present during authentication.

KPMG’s findings suggest many Canadian organizations are still early in this transition. Nearly all leaders surveyed say they are concerned about AI-driven fraud, yet only 26% report having a tested response plan that addresses deepfakes and synthetic impersonation.

Many companies already recognize the risk. The challenge now is redesigning systems that were built for an era when identity could not easily be fabricated.

Canada’s exposure extends beyond company walls

The systems Canadian companies rely on (cloud platforms, AI tools, and payment networks) operate across jurisdictions by default. Fraud moves through the same channels.

Concerns about digital trust are growing across governments and industry, with cybercrime and digital trust breakdown listed as some of the biggest risks facing businesses and economies in the coming years.

Canada’s cyber defence agency is also warning that geopolitical tensions could amplify digital threats. Following recent U.S. and Israeli strikes against Iran, the Canadian Centre for Cyber Security said Tehran will “very likely” use cyber operations as part of its response and urged organizations to remain vigilant. 

While Canada is unlikely to be a primary target, officials say Canadian networks could still be affected through broader campaigns aimed at North American infrastructure.

Canadian companies often rely on international vendors for AI tools and cloud infrastructure. Data may be stored in one jurisdiction, processed in another, and then accessed by distributed teams.

When fraud exploits those systems, investigations may require cross-border coordination, regulatory reporting, and clarity about who controls the data and infrastructure involved.

Canada’s legal framework for AI is still evolving. Bill C-27, which included the proposed Artificial Intelligence and Data Act, died on the order paper when Parliament prorogued in January 2025, leaving existing privacy and digital governance laws to address emerging AI risks.

Some provinces have introduced updated requirements. Quebec’s Law 25 strengthened obligations around personal data handling and accountability. The broader national landscape continues to develop.

At the same time, questions of AI sovereignty are moving into operational conversations. Canadian organizations are increasingly examining where their AI systems are hosted, who can access the data, and which jurisdictions govern that access.

The same digital infrastructure that enables innovation also shapes how fraud travels, how incidents are investigated, and how responsibility is determined. 

For companies expanding their use of AI, governance and infrastructure decisions now shape security outcomes.

For a long time, a request from a familiar voice was enough.

Inside most organizations, that was how work moved forward. Someone asked, someone approved, the transfer went through. Rinse, repeat.

Increasingly, that may be the moment that requires the most scrutiny.

Final shots

  • The same tools companies are adopting to automate work are also making impersonation easier.
  • Trust inside organizations now requires design. Verification protocols are becoming part of performance strategy.
  • Canada’s innovation ambitions depend on governance that keeps pace with technology. Speed without resilience is not an advantage.
Jennifer Friesen is Digital Journal's associate editor and Calgary Bureau lead.
