AI impersonations and scams are on the rise

As AI scams and deepfake impersonation become more realistic, complacency is no longer an option.

The world is witnessing a sharp escalation in AI-driven impersonation scams, with global financial losses from fraud reaching US$1 trillion in 2024 alone, according to a study by the Global Anti-Scam Alliance.

Many of these losses are tied to deepfakes, voice cloning, and synthetic identities, the study found. The situation puts business leaders, governments, and individuals under a growing and urgent threat as attackers harness generative AI to convincingly impersonate executives, officials, and loved ones for illicit gain.

Nearly half of global businesses reported falling victim to deepfake fraud last year, while Business Email Compromise (BEC) scams – now often powered by generative AI – targeted 64% of companies.

In Singapore, scam-related losses reached a record US$822 million in 2024, with cryptocurrency and deepfake-enabled attacks accounting for a rising share. In the Philippines, cybercrime complaints tripled year-on-year, with emerging threats linked to voice cloning and fake job listings.

What AI impersonation looks like today

AI impersonation scams rely on a mix of powerful tools – deepfakes, voice cloning, synthetic text, and fake documents – to convincingly mimic real people or institutions. Once a niche concern, these techniques are now alarmingly common and often indistinguishable from the real thing.

Attackers typically use deepfaked video and audio to trick employees into transferring millions in corporate funds. One widely cited case involved scammers impersonating a CFO during a video call, resulting in a US$25 million theft.

In Singapore, a finance director was deceived through a deepfake call showing fake visuals of his CEO and colleagues.

What makes these scams especially dangerous is their ability to bypass traditional defences.

AI-generated emails are now almost flawless and emotionally persuasive, crafted using publicly scraped data to sound like a trusted colleague or executive.

Voice cloning tools require just a few seconds of audio – easily obtained from social media – to recreate someone’s voice with shocking accuracy.

But it’s not just executives being targeted. AI impersonation now spans a wide range of victims – from elderly individuals receiving distress calls in the cloned voice of a “grandchild,” to jobseekers lured into fraudulent interviews via fake company profiles and recruiters.

In some cases, attackers even forge synthetic identities complete with AI-generated photos and documents to secure jobs, open bank accounts, or bypass Know Your Customer checks.

The speed and scale of these attacks are growing, thanks to the accessibility of cheap or free AI tools that turn publicly available audio and video into convincing clones within minutes.

Deepfake-as-a-service offerings on the dark web further lower the barrier to entry, making these advanced scams accessible to novice attackers.

Who’s being targeted, and why it’s working

The tactics may differ, but the strategy remains the same: exploit trust. Whether it’s the voice of a CEO, a message from a familiar email address, or the face of a political figure in a fake video, AI impersonation scams manipulate people by faking familiarity.

C-suite executives are a prime target – not always as victims, but as tools. Their public-facing roles provide ample voice and video data that scammers can use to impersonate them convincingly.

One study found that 75% of known deepfake frauds featured impersonated senior executives.

Employees in finance, HR, and IT departments are also heavily targeted. Many receive fake emails or deepfake calls requesting urgent wire transfers or confidential information. New hires are particularly vulnerable due to their limited familiarity with internal protocols.

Beyond the enterprise, individuals from all age groups are at risk. Elderly citizens face AI voice scams involving fake emergencies, while Gen Z users are increasingly being tricked by fake influencer accounts, bogus crypto giveaways, and deepfake-generated endorsements on social media.

Southeast Asia in the crosshairs

The rise in AI impersonation scams is not evenly distributed across the globe. Southeast Asia has emerged as a hotspot, with both the Philippines and Singapore experiencing major surges in fraud cases – and in very different ways.

In the Philippines, the Cybercrime Investigation and Coordinating Center (CICC) logged over 10,000 cybercrime complaints in 2024. Losses reached US$3.4 million, largely driven by job scams, investment fraud, and consumer deception. Financial fraud alone accounted for nearly 1,000 cases, with digital wallet platforms like GCash frequently used in scam transactions.

Public anxiety over deepfakes is also mounting. The country’s Commission on Elections has formally proposed banning AI-generated content in the 2025 campaign period. The national government launched Hotline 1326 for reporting deepfake scams, and plans are underway to establish a dedicated deepfake task force.

Singapore, meanwhile, has seen losses on a much larger scale. But the government has already taken several steps to respond, introducing new anti-scam laws, launching the ScamShield app, deploying deepfake detection advisories, and rolling out enhanced seller verification measures on online marketplaces. Still, the sheer volume and complexity of scams highlight how even well-prepared nations are struggling to keep up.

Fighting back: Can cyberdefences keep pace?

Despite the growing threat, many organisations and individuals remain underprepared. While 90% of executives surveyed in 2024 expressed confidence in their ability to spot deepfakes or AI-powered scams, nearly the same percentage of companies reported falling victim to such attacks.

Experts argue that fighting AI impersonation requires a multi-layered strategy. Technological solutions, like AI-driven detection systems, deepfake analysis tools, and advanced identity verification, play a key role.

Behavioural analytics, real-time monitoring, and anomaly detection can help spot irregular patterns before damage is done.
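
To make this concrete, here is a minimal sketch of one such anomaly check: holding any payment request whose amount falls far outside the requester's past behaviour. The PaymentRequest class, the is_anomalous function, and the three-sigma threshold are illustrative assumptions, not any vendor's actual detection system.

```python
# Minimal sketch of amount-based anomaly detection for payment requests.
# All names and the 3-sigma threshold are illustrative assumptions.
from dataclasses import dataclass
from statistics import mean, stdev


@dataclass
class PaymentRequest:
    requester: str
    beneficiary: str
    amount: float


def is_anomalous(request: PaymentRequest, history: list[float],
                 sigma_threshold: float = 3.0) -> bool:
    """Flag a request whose amount deviates sharply from past behaviour."""
    if len(history) < 2:
        return True  # no baseline yet: hold for manual review
    mu, sd = mean(history), stdev(history)
    if sd == 0:
        return request.amount != mu
    return abs(request.amount - mu) / sd > sigma_threshold


# Example: a US$25M transfer against a history of ~US$10k payments
history = [9_500.0, 11_200.0, 10_050.0, 9_900.0, 10_400.0]
request = PaymentRequest("finance-clerk", "unknown-account", 25_000_000.0)
if is_anomalous(request, history):
    print("Hold transfer pending out-of-band verification")
```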

But tech alone isn’t enough. Employee training remains one of the most effective safeguards. Regular awareness sessions that include exposure to deepfake samples, phishing simulations, and real-world case studies can help staff recognise and react appropriately to suspicious activity.

Protocols like “verify then trust” are critical. For instance, any urgent financial request, especially one made via video or voice call, should be verified through a secondary, known communication channel.
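
A rough sketch of how such a gate might sit in an internal payments workflow follows; the channel registry, function name, and placeholder contact details are illustrative assumptions rather than a prescribed implementation.

```python
# Sketch of a "verify then trust" gate: any urgent request arriving over a
# spoofable channel (video call, email, voice) is held until it is confirmed
# over a second, pre-registered channel.

# Callback contacts maintained out-of-band, never taken from the incoming
# message itself, which an attacker controls.
KNOWN_CHANNELS = {
    "cfo@example.com": "desk line on file",
}


def release_funds(requester: str, amount: float,
                  confirmed_out_of_band: bool) -> str:
    if requester not in KNOWN_CHANNELS:
        return "REJECT: requester has no registered callback channel"
    if not confirmed_out_of_band:
        return (f"HOLD: confirm the {amount:,.0f} transfer via the "
                f"{KNOWN_CHANNELS[requester]} before release")
    return "RELEASE: request confirmed on a second channel"


print(release_funds("cfo@example.com", 25_000_000, confirmed_out_of_band=False))
```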

Organisations are also encouraged to adopt internal controls such as dual authorisations and separation of duties to prevent unilateral actions.
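
As a simple illustration of that control (with an assumed approval threshold and role names), a large transfer could require two approvers, neither of whom may be the person who raised the request:

```python
# Sketch of dual authorisation with separation of duties: transfers above a
# threshold need two independent approvers, and self-approval is excluded.
DUAL_APPROVAL_THRESHOLD = 50_000.0  # illustrative assumption


def can_execute(requester: str, amount: float, approvers: set[str]) -> bool:
    independent = approvers - {requester}  # separation of duties
    if amount < DUAL_APPROVAL_THRESHOLD:
        return len(independent) >= 1
    return len(independent) >= 2


assert not can_execute("alice", 25_000_000, {"alice"})        # self-approval blocked
assert not can_execute("alice", 25_000_000, {"bob"})          # one approver not enough
assert can_execute("alice", 25_000_000, {"bob", "carol"})     # two independent approvers
```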

On the regulatory side, governments are being urged to speed up legislation around AI misuse. Many tools still lack built-in safeguards to prevent abuse, and global cooperation is essential given the borderless nature of cybercrime.

Public-private partnerships, stronger enforcement, and platform accountability are increasingly being called for.

As AI impersonation becomes more realistic, scalable, and profitable, complacency is no longer an option. The ability to fake trust has never been more powerful – and the price of falling for it is growing by the day.
