For many years, cybersecurity has operated on a simple assumption: attacks repeat themselves. If a phishing email works once, it will work again. Catch it, study it, write a signature, update the model, and block the next wave.

What if there is no next wave? What happens when every malicious email is now uniquely written by AI, personalised at scale, and never seen before?

In a recent episode of The Security Strategist podcast, host Richard Stiennon spoke with Alan LeFort, CEO of StrongestLayer, and Eric Sanchez, CISO at Orrick, about how generative AI is reshaping email security, and why many traditional defences may already be obsolete.

Why Is Email the Open Door to Attacks?

Stiennon raises a question many security leaders quietly ask: if most enterprises run on Microsoft's ecosystem, why does a separate email security market even exist?

LeFort responds that attackers are economically rational: they go where entry is cheapest and easiest. For decades, email has been that open door.

The industry has evolved, however. First came secure email gateways built on rules and regex. Then came machine learning systems trained to distinguish "normal" from "abnormal." Both improved detection rates, and both reduced risk.

But both depend on historical data. They need to have seen an attack before to stop it again.

Generative AI is believed to have changed that. It enables attackers to create perfectly written, highly personalised phishing emails at near-zero cost. According to a study from the Harvard Kennedy School, AI-generated phishing achieved a 54% click rate among trained employees — more than four times the baseline. Even more concerning, the cost of crafting those emails dropped from roughly $15–$20 in labour to just a few cents.

That economic shift is seismic. When every email can be unique, there is no pattern to spot, no signature to update, and no previous attack to learn from.

Is Alert Fatigue the Hidden Crisis?

While breach headlines dominate the industry, Sanchez spotlights a quieter operational threat: alert fatigue.

At Orrick, a global law firm handling hundreds of thousands of emails each month, traditional security tools generate a steady stream of alerts. Many turn out to be benign. Analysts triage, close, repeat. Sanchez shared that, over time, the burden compounds: security teams spend less time stopping real attacks and more time managing noisy systems.

LeFort argues that false positives are not merely tuning problems — they are architectural problems. Most detection systems rely on a single scoring threshold. If something crosses the line, it’s flagged. If it doesn’t, it passes.

A key insight to note is that deception alone isn’t malicious intent. Marketing emails are persuasive and sometimes manipulative, yet harmless. A credential-harvesting email, on the other hand, carries real risk. Treating both on the same scoring axis inevitably creates noise.
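To make the distinction concrete, here is a minimal sketch of the problem LeFort describes. The scores, thresholds, and emails below are invented for illustration and do not reflect StrongestLayer's actual scoring; the point is only that a single axis cannot tell persuasive-but-benign apart from persuasive-and-dangerous.

```python
# Hypothetical illustration: a single "suspicion" score conflates benign
# persuasion with malicious intent. All numbers here are invented.

def single_axis_verdict(score: float, threshold: float = 0.7) -> bool:
    """Traditional gateway: one score, one threshold."""
    return score >= threshold

def two_axis_verdict(deception: float, harm: float) -> bool:
    """Flag only when deception is paired with real potential harm."""
    return deception >= 0.7 and harm >= 0.5

# A pushy marketing email: highly persuasive, but harmless if clicked.
marketing = {"deception": 0.8, "harm": 0.1}
# A credential-harvesting email: just as persuasive, and dangerous.
phishing = {"deception": 0.8, "harm": 0.9}

# On a single axis both emails score 0.8, so both get flagged: noise.
assert single_axis_verdict(marketing["deception"])
assert single_axis_verdict(phishing["deception"])

# Separating the axes clears the marketing email and keeps the phish.
assert not two_axis_verdict(**marketing)
assert two_axis_verdict(**phishing)
```

The false positive in the single-axis case is not a tuning error; no threshold on one number can separate the two emails, which is the architectural point.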

From Pattern Matching to Reasoning

StrongestLayer’s approach, as described by LeFort, moves away from pure pattern recognition and toward reasoning. Instead of asking, “Does this match something bad we’ve seen before?” the system evaluates multiple dimensions: What harm would occur if this succeeds? Is it anomalous for this recipient? What is the sender’s likely intent? How much deception is present?

Crucially, it weighs evidence of innocence alongside evidence of guilt — akin to how opposing arguments are weighed in a courtroom.
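The courtroom analogy can be sketched as netting exculpatory evidence against incriminating evidence, rather than accumulating risk points only. The signal names and weights below are hypothetical, not StrongestLayer's actual features:

```python
# Hypothetical sketch: positive values argue guilt, negative values argue
# innocence; the verdict is the net balance of opposing arguments.

def weigh(evidence: dict) -> float:
    """Net the incriminating and exculpatory signals."""
    return sum(evidence.values())

suspicious_but_known = {
    "urgent_language": +0.4,           # incriminating
    "link_to_new_domain": +0.3,        # incriminating
    "long_sender_history": -0.5,       # exculpatory: years of benign mail
    "reply_in_existing_thread": -0.4,  # exculpatory
}

# Guilt-only scoring would flag this email (0.4 + 0.3 = 0.7); netting in
# the evidence of innocence drops it well below a 0.5 alert threshold.
assert weigh(suspicious_but_known) < 0.5
```

In this toy model the same urgent email from an unknown sender, with no exculpatory signals, would still cross the threshold and be flagged.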


Such a multi-dimensional analysis, LeFort believes, dramatically reduces false positives while still catching novel threats. For Sanchez, the operational benefit is tangible. He describes scenarios where traditional gateways failed to detect unusual phishing techniques, including Unicode-based obfuscation. A reasoning-driven system flagged the anomaly not because it recognised a known signature, but because the structure and context “didn’t make sense.”
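The Unicode trick Sanchez mentions typically swaps visually identical characters from other scripts into a familiar word. A minimal, standard-library sketch of detecting one such pattern, mixed scripts within a single word, might look like this (a real system would cover far more cases):

```python
import unicodedata

def flag_mixed_scripts(text: str) -> bool:
    """Flag any word that mixes Latin and Cyrillic letters, a common
    homoglyph obfuscation trick (illustrative check, not exhaustive)."""
    for word in text.split():
        scripts = set()
        for ch in word:
            if ch.isalpha():
                name = unicodedata.name(ch, "")
                if name.startswith("CYRILLIC"):
                    scripts.add("CYRILLIC")
                elif name.startswith("LATIN"):
                    scripts.add("LATIN")
        if len(scripts) > 1:
            return True
    return False

# "p\u0430ypal" hides a Cyrillic 'а' (U+0430) among Latin letters; it
# renders identically to "paypal" but is a different string entirely.
assert flag_mixed_scripts("verify your p\u0430ypal account")
assert not flag_mixed_scripts("verify your paypal account")
```

A signature matcher comparing raw bytes never sees "paypal" in the obfuscated string at all, which is why such emails sail past pattern-based gateways while a structural check catches the inconsistency.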

That distinction is critical. AI-generated attacks do not need to repeat. They only need to work once.

What Key Challenges Will Security Teams Face Within Two Years?

All speakers agree that over the next 12 to 24 months, security teams face a dual challenge of sophistication and scale. AI lowers the cost of creating attacks and automates personalisation, so volume, precision, and speed all increase at once.

LeFort emphasises that organisations evaluating AI security tools should look beyond detection rates. Automation matters just as much. Does the system eliminate operational drag? Does it allow analysts to focus on strategic threats rather than inbox noise?

The consensus is that email remains the most common entry point into organisations. What has changed is the attacker’s economics. When personalisation costs pennies and sophistication is automated, defenders must respond in kind.

The question is no longer whether AI will influence email security; it already does, across enterprises. The real question is whether an enterprise's defences are still waiting to see the attack twice.

Key Takeaways

  • AI-generated attacks break detection models that rely on past patterns.
  • Email remains the easiest and most economical entry point for attackers.
  • Traditional tools force security teams into a reactive cycle.
  • Effective AI defence must evaluate context, not just rules.
  • Automation is now as critical as detection accuracy.
  • Stopping the first and only attack is the new security standard.

#EmailSecurity #AICybersecurity #GenerativeAI #Phishing #B2BSecurity #EnterpriseSecurity #CyberAttack #SecurityStrategist #StrongestLayer #AlertFatigue #CISO #TechPodcast #InfoSec #CyberDefence