AI Deepfakes Are Forcing Companies to Rebuild Trust

Artificial intelligence has blurred the line between what’s real and what’s not, and that line is vanishing fast.

Resemble AI’s Q3 2025 Deepfake Report confirms what many in security and compliance have suspected: Deepfakes have gone corporate. More than 2,000 verified incidents were recorded last quarter, nearly half targeting businesses. The attacks are faster, more realistic and harder to detect, marking what the company calls an “inflection point” in synthetic deception.

“This quarter marks an inflection point where we’re seeing industrial-scale operations, not isolated experiments,” Zohaib Ahmed, co-founder and CEO of Resemble AI, a Toronto-based company that develops generative voice and deepfake-detection technology, told Newsweek.

“The barrier to entry has completely collapsed. Anyone with basic access to generative tools can create highly convincing audio or video in minutes.”

How Deepfakes Became a New Corporate Threat

The report paints a picture of deepfake fraud maturing into a repeatable business model. Enterprises are now prime targets: executives impersonated on video calls, cloned voices authorizing wire transfers, fabricated investor announcements designed to move markets.

Ahmed said companies offer richer and more scalable targets than individual consumers because “there are more assets, more authority and more trust to exploit.” He pointed to incidents in Singapore and Australia that highlight how fast the threat is evolving.

In one case, a company director was tricked into wiring almost half a million dollars after a Zoom meeting where all executives—including the CFO—were AI-generated. In another, criminals used synthetic voices to convince Qantas call-center staff to release internal credentials, exposing the data of 6 million customers.

“The biggest takeaway from both is that it’s now possible to weaponize any available personal or professional voice or video data,” Ahmed said, adding that the expansion of deepfakes across video, audio and documentation means “defense requires continuous employee training, additional layers of authentication and using deepfake-detection technology.”

When Visual Trust Breaks

For corporate leaders, misplaced confidence can be catastrophic. Deepfake-driven fraud accounted for nearly 40 percent of all incidents in the study, and 385 cases led to direct financial loss. Yet the damage often extends beyond the balance sheet.

Ahmed noted that “most businesses, especially B2C companies, have worked hard to build trust with their audiences,” and that “trust can now be lost in minutes or even seconds, eliminating the goodwill companies have worked so hard to build.”

Depending on the sophistication of the scam, a single deepfake can cost a company anywhere from thousands to millions of dollars.

Analysts agree. KPMG recently warned that deepfakes are eroding the basic trust people place in what they see and hear online, while Deloitte projects continued acceleration of AI-driven fraud across financial services.

“When stakeholders start to question if a CEO’s message or a board announcement is real, you’ve lost credibility,” Ahmed said. “That’s what makes this such a business problem. It’s not a cybersecurity niche anymore; it’s a corporate integrity issue.”

Turning Detection Into Infrastructure

The report argues that deepfake detection can no longer be a defensive add-on. It must become part of a company’s digital foundation.

Ahmed said detection now has to “evolve into infrastructure,” embedded into how organizations authenticate media and identity. “Every organization that handles video, audio or imagery will eventually need systems that authenticate reality before it can be trusted,” he added.
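
One way to read “authenticate reality” is cryptographic provenance: media is signed when it is captured or published, and anything that fails verification is treated as untrusted by default. The sketch below is a deliberately simplified stand-in for standards such as C2PA content credentials, using only Python’s standard library; the shared-secret HMAC and hard-coded key are illustrative assumptions (real deployments use public-key signatures and managed keys).

```python
# Simplified provenance check: sign media bytes at publication time and
# verify before trusting. Real systems use public-key signatures and
# standards like C2PA; the shared-secret HMAC here is an illustrative
# simplification.
import hashlib
import hmac

SIGNING_KEY = b"example-key-held-in-a-kms-in-practice"  # assumption

def sign_media(media_bytes: bytes) -> str:
    """Produce a hex signature stored or shipped alongside the media."""
    return hmac.new(SIGNING_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, signature: str) -> bool:
    """Return True only if the media matches the signature it shipped with."""
    expected = hmac.new(SIGNING_KEY, media_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)  # constant-time compare

if __name__ == "__main__":
    clip = b"...raw bytes of a recorded CEO announcement..."
    tag = sign_media(clip)
    print(verify_media(clip, tag))            # True: untampered
    print(verify_media(clip + b"edit", tag))  # False: treat as unverified
```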

He compared the moment to the rise of antivirus software in the 1990s: optional at first, then indispensable. But he cautioned that technology alone won’t solve the problem.

“Tools matter, but culture matters more,” Ahmed said. “Executives have to train people to slow down, verify and build doubt into their decision-making. Doubt is not a weakness; it’s a safety feature.”

A New Compliance Frontier

Governments are beginning to catch up. In the U.S., proposed measures like the NO FAKES Act would codify rights over the use of someone’s likeness, while China’s AI labeling rules are already setting standards for disclosure. Ahmed said these developments “place the onus on corporations to ensure their brands and people aren’t hijacked with deepfake technology.”

He noted that early on, companies treated deepfakes as someone else’s problem. “The mindset was, ‘These fakes aren’t our fault; that’s just the nature of the Internet; people will always use technology to game the system,’” he said. “Now, we’re seeing real harm from this technology, especially when it comes to things like revenge porn.”

That shift, he added, is forcing platforms to recognize their responsibility to users. “Whereas before, as the old saying went, ‘the users are the product,’ now deepfake technology threatens to drive users off platforms lacking regulation.”

China’s standards, he said, help people “recognize when they’re not looking [at] or listening to something authentic,” and could become a global model for how societies adapt to synthetic media.

Building a Corporate Playbook for Deepfake Defense

For leaders beginning to take this threat seriously, Ahmed said the first step is awareness. “Staff and executives need regular training to spot deepfakes in voice, video and documents,” he said.

From there, organizations should “bring in technology,” implementing multi-factor verification for sensitive transactions and deploying detection tools that specialize in voice and video verification, email scanning and document authenticity.
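
To make the verification step concrete, here is a minimal sketch of a transaction gate in Python; the function names, thresholds and the DetectionResult shape are illustrative assumptions rather than any vendor’s actual API. The structural point is that a convincing voice alone never authorizes money movement: detection and an independent confirmation channel both have to pass.

```python
# Hypothetical gate for sensitive transactions: a voice request is honored
# only if it passes a deepfake detector AND is confirmed out of band.
from dataclasses import dataclass

@dataclass
class DetectionResult:
    synthetic_probability: float  # 0.0 = likely authentic, 1.0 = likely synthetic

def analyze_voice_sample(audio_path: str) -> DetectionResult:
    # Stand-in for a real detector; production code would call a vendor API
    # or an in-house model here.
    return DetectionResult(synthetic_probability=0.12)

def confirm_out_of_band(requester_id: str) -> bool:
    # Stand-in for a callback to a known-good number or an authenticator
    # prompt, i.e. a channel the attacker does not control.
    return True

def approve_wire_transfer(requester_id: str, amount: float, audio_path: str) -> bool:
    """Approve only when detection and independent confirmation both pass."""
    SYNTHETIC_THRESHOLD = 0.5    # illustrative cutoff; tune per risk tolerance
    OUT_OF_BAND_FLOOR = 10_000   # illustrative: all large transfers need a callback

    result = analyze_voice_sample(audio_path)
    if result.synthetic_probability >= SYNTHETIC_THRESHOLD:
        return False  # in practice, route to manual review rather than fail silently
    if amount >= OUT_OF_BAND_FLOOR and not confirm_out_of_band(requester_id):
        return False
    return True

if __name__ == "__main__":
    print(approve_wire_transfer("cfo-jane-doe", 480_000.00, "request.wav"))
```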

Policy updates are also key. “Incident-response protocols should explicitly address deepfake scenarios,” Ahmed said, adding that companies should engage with industry networks, share intelligence on emerging attack trends and “run periodic security audits that test your team’s ability to recognize and respond to deepfake attacks.”

The Future of Reality Verification

Looking ahead, Ahmed believes real preparedness will require “concerted effort from all kinds of actors and entities.”

Enterprises will need to build “substantive internal safeguards, both technological and behavioral,” while regulators and communications platforms must do their part. He cited Resemble’s forecast that deepfake-enabled fraud will rise to $40 billion globally by 2027.

He also drew parallels to the history of cybersecurity. “Every meaningful advance in communications technology has come with an associated risk,” Ahmed said. “Throughout history, those risks have been managed through the diligent, intentional construction of safeguards.”

Over time, he believes, organizations will develop instinctive resilience. “Eventually, deepfakes will be as common as cyberattacks, but no less destructive,” he said. “We’re not immune to cyberattacks, but we’ve developed muscle memory in protecting ourselves. When we get a new computer, the first thing we do is install antivirus software. That’s how the world will approach deepfake attacks.”

As companies rethink identity and reputation in the age of synthetic media, Ahmed said the fundamentals of trust remain unchanged.

“Maintaining brand trust has always been about staying watchful, understanding how your company is represented, both officially and unofficially,” he said. “The rise of deepfakes doesn’t change that math, but it does raise the stakes.”
