Key takeaways

AI-powered tools are transforming how scammers write email attacks. In fact, by April 2025, 51% of spam emails were AI-generated. These AI-crafted messages tend to be unusually polished, more formal, and practically free of typos, making them convincingly professional.

Attackers even run AI-powered ‘A/B tests,’ churning out many message variations to see which wording fools the most people. In practice, the urgency and common phishing tactics remain the same, but AI polishes the message, making it much harder to spot.

In this article, I’ll discuss the threat AI-powered email phishing poses to you and your customers, the duty of care your brand has to help customers avoid email marketing scams, and best practices for safeguarding them.

The Escalating Threat of AI-Driven Email Phishing Scams and Their Impact on Customers

Traditional phishing was once labor-intensive and limited in scope, but AI has changed the game. Modern attackers can scrape vast amounts of personal data (from social media, corporate websites, and public records) to build highly credible, tailored narratives. With that information, they can fire off thousands of highly personalized phishing emails in seconds. 

And it’s not just email: AI now powers multi-channel assaults. For example, scammers can clone a CEO’s voice for follow-up phone calls, and even craft realistic deepfake video conferences to back up those requests. This multi-pronged approach lets many threats slip past standard security filters with alarming effectiveness.

This is just the beginning. There are reports of a 1,265% increase in malicious phishing emails since ChatGPT’s debut. Scammers are inventing new tricks: ‘scam-yourself’ schemes use fake AI influencers or cloned voices to trick people into infecting their own devices. In the cryptocurrency world, one series of deepfake videos (the CryptoCore campaign) duped victims into transferring nearly $4 million in fraudulent transactions. 

Even legitimate tools can be misused: researchers showed that Google’s Gemini email summarizer can be manipulated to insert fake security alerts, prompting users to call bogus support numbers. 

Your Brand’s Duty of Care to Safeguard Customers Against AI-Driven Email Phishing

As AI scams proliferate, legitimate brands are caught in the crossfire. Scammers frequently impersonate well-known companies, exploiting consumer trust in familiar logos and email formats. This forces brands to step up as defenders of their customers. For example, the UK has established anti-fraud centers and an Online Fraud Charter to promote cross-industry cooperation. In practice, this means brands must act as vigilant watchdogs for their customers’ inboxes.

Detection and Prevention

Leading companies are already investing in advanced detection. PayPal uses an AI-powered system that analyzes billions of transaction signals in real time to flag and even automatically block suspicious friends-and-family payments. 

Amazon maintains global teams of machine learning scientists, software engineers, and fraud investigators whose sole job is to find and shut down phishing websites and phone numbers impersonating Amazon. 

In short, brands must invest in similarly sophisticated defenses – advanced AI and machine-learning tools that can detect AI-generated phishing patterns, unusual user behaviors, or other subtle indicators of fraud before customers are harmed.
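
To make this concrete, here is a minimal, hypothetical Python sketch of the kind of text-classification component such a detection system might include. The toy dataset, feature choices, and model are my own illustrative assumptions and do not reflect any particular vendor’s pipeline; production systems combine many more signals (headers, URLs, sender reputation, behavioral data).

```python
# Minimal sketch of a phishing-email text classifier, assuming a labeled
# dataset of message bodies (1 = phishing, 0 = legitimate).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data; in practice this would be thousands of labeled emails.
emails = [
    "Your account is suspended. Verify your payment details immediately.",
    "Hi team, attached are the meeting notes from Tuesday.",
    "Urgent: confirm your password within 24 hours or lose access.",
    "Your invoice for last month's subscription is attached as usual.",
]
labels = [1, 0, 1, 0]

# TF-IDF features feeding a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

# Score a new message: higher probability means more phishing-like.
incoming = ["Immediate action required: update your billing information now."]
print(model.predict_proba(incoming)[0][1])
```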

Making Sure Customers Know When Communication Is Legitimate

Brands must also make genuine communication unmistakable. Implementing secure email practices (SPF, DKIM, DMARC) is essential. DMARC, in particular, should be the first line of defense against email spoofing. Major providers like Gmail and Yahoo now require strong DMARC policies from bulk senders, helping to ensure legitimate emails get through while fakes are rejected. Equally important is setting clear expectations with customers. 
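
As an illustration, here is a minimal Python sketch that checks whether a domain publishes SPF and DMARC records. It assumes the third-party dnspython package, and “example.com” is a placeholder; swap in your own sending domain.

```python
# Minimal sketch: check whether a domain publishes SPF and DMARC records.
# Assumes the dnspython package (pip install dnspython).
import dns.resolver

def get_txt_records(name):
    try:
        return [r.to_text().strip('"') for r in dns.resolver.resolve(name, "TXT")]
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []

domain = "example.com"  # placeholder domain

# SPF lives in a TXT record on the domain itself.
spf = [r for r in get_txt_records(domain) if r.startswith("v=spf1")]

# DMARC policy lives at _dmarc.<domain>; a p=reject policy tells receivers
# to drop mail that fails authentication.
dmarc = [r for r in get_txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]

print("SPF:", spf or "none published")
print("DMARC:", dmarc or "none published")
```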

When it comes to the communication itself, Microsoft sets a great example. The tech giant explicitly tells users it will never ask for payment via cryptocurrency or gift cards, and urges them to download software only from official sources. 

These plain-language policies help customers spot impostor messages. But you also need to make it simple for your customers to report fraud and scams they receive, so you can share intel and take action to protect other customers from similar email scams. Amazon provides a self-service reporting tool in over 20 languages, recognizing that customer tips are crucial to identifying and stopping scammers.

Collaboration

No single company can fight scams alone. Brands should collaborate with law enforcement and industry groups to share intelligence and take joint action. Amazon works with the Better Business Bureau and the Anti-Phishing Working Group, and in 2023 it even partnered with Microsoft and India’s CBI to dismantle more than 70 tech support scam centers.

Recovery

Duty of care extends beyond prevention: companies should plan how to help customers who fall victim. It’s an unfortunate fact of modern life that no safeguard or technology is completely immune to AI-driven email scams. However, brands can provide emotional or even financial support to help customers recover quickly.

This helps maintain the relationship with customers who might have lost faith in your brand due to the falsified communications that duped them into a scam in the first place.

Best Practices to Empower Your Customers Against AI-Driven Email Phishing

Security isn’t just about software: it’s about people and habits. Brands should continuously educate both employees and subscribers about evolving threats. While you can’t necessarily require customers to take part in cybersecurity training programs, you can provide materials that keep them up to date on the latest threats, with realistic examples of AI-generated phishing emails, voice-cloning scenarios, and deepfake lures.

You can also teach customers practical habits to stay safe. First and foremost, teach them to be skeptical of anything urgent or too good to be true. They should learn that a polished email demanding immediate action should raise eyebrows. Your communications should instruct them to verify all requests through official channels: never click a link or call a number in a suspicious message. Instead, they should look up your company’s official contact information on its website and confirm independently.

Teach them to pay close attention to the sender’s address and domain, and to watch for anything slightly off (for example, ‘tectharget.com’ instead of ‘techtarget.com’). You can also teach them to double-check links before clicking, for instance by hovering over URLs to see where they actually lead.
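
This look-alike check can also be automated on the brand’s side. Below is a minimal sketch using only Python’s standard library; the trusted-domain list and similarity threshold are illustrative assumptions, not a production rule set.

```python
# Minimal sketch of a look-alike domain check using Python's standard library.
from difflib import SequenceMatcher

# Illustrative list of domains the brand actually sends from.
TRUSTED_DOMAINS = ["techtarget.com", "paypal.com", "amazon.com"]

def looks_like_impostor(sender_domain, threshold=0.85):
    """Flag domains that closely resemble, but do not match, a trusted domain."""
    for trusted in TRUSTED_DOMAINS:
        if sender_domain == trusted:
            return False  # exact match: genuine domain
        similarity = SequenceMatcher(None, sender_domain, trusted).ratio()
        if similarity >= threshold:
            return True   # near match: likely a look-alike
    return False

print(looks_like_impostor("tectharget.com"))  # True: two letters swapped
print(looks_like_impostor("techtarget.com"))  # False: genuine domain
```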

The same caution applies to QR codes. Their explosion in popularity during the pandemic means consumers are now accustomed to scanning them for information, so customers need to be kept in the loop about how QR codes are used in email scams to deliver malware.

You can also provide information to help customers keep their operating systems, browsers, and antivirus software up to date to protect against known vulnerabilities.

Beyond personal habits, you can also provide or recommend additional safeguards. Multi-factor authentication (MFA) should be enabled on customer accounts wherever possible, so even if scammers steal a password, MFA can block most break-in attempts. You can also recommend a reputable password manager for storing credentials securely. 
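
To illustrate why MFA blunts a stolen password, here is a minimal sketch of a time-based one-time password (TOTP) flow. It assumes the third-party pyotp package and is not tied to any particular vendor’s implementation.

```python
# Minimal sketch of how time-based one-time passwords (TOTP) work.
# Assumes the pyotp package (pip install pyotp).
import pyotp

# The shared secret is provisioned once, e.g. via the QR code shown at enrollment.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The customer's authenticator app computes the same 6-digit code every 30 seconds.
code = totp.now()
print("Current code:", code)

# The service verifies the code at login; a stolen password alone is not enough.
print("Valid:", totp.verify(code))
```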

And finally, make sure they’re aware of reporting channels to flag any suspicious emails or scams, whether that’s through your own channels or to government bodies like the FTC. These reports help authorities track down and shut down fraudsters.

AI-driven phishing is a significant threat in the email threat landscape, but it also clarifies the path forward. Brands must recognize their duty of care: deploying advanced AI defenses, authenticating communications, debunking fraudulent ploys, and teaming up with law enforcement. At the same time, companies should foster a vigilant community by continuously educating customers and employees. 

Combining robust technology with smart habits, such as verifying urgent requests, using MFA, and promptly reporting suspicious messages, enables us to preserve customer trust and keep people safe in an increasingly AI-filled inbox.

Meet Lee Li Feng
Lee is a project manager and B2B copywriter currently based in Singapore. She has a decade of experience in the Chinese fintech startup space as a PM for TaoBao, Meituan, and DouYin (now TikTok).