As businesses approach the holiday season, security teams feel the pressure while online activity increases. At the same time, AI is quickly changing how attacks are launched and how organisations function daily.

In the recent episode of The Security Strategist podcast, host Richard Stiennon, Chief Research Analyst at IT-Harvest, sits down with Pascal Geenens, VP of Threat Intelligence at Radware, to discuss why CISOs need to rethink their long-held beliefs about attackers, users, and what “web traffic” really means in an AI-driven world.

They talk about the dual nature of AI in cybercrime, the emergence of new tools that facilitate attacks, and the importance of automated pen testing as a defence strategy. The conversation also highlights vulnerabilities associated with AI assistants, such as indirect prompt injection, and emphasises the need for organisations to adopt best practices to safeguard against these threats.

Also Watch: From Prompt Injection to Agentic AI: The New Frontier of Cyber Threats

AI Attacks Lower the Barrier for Cybercrime

Geenens tells Stiennon that AI’s biggest effect on security is not a new type of futuristic attack but rather its scale and accessibility. Tools like WormGPT, FraudGPT, and advanced platforms like Xanthorox AI provide reconnaissance, exploit development, data analysis, and phishing as subscription-based services. For a few hundred dollars each month, attackers can access AI-assisted tools that cover the entire cyber kill chain.

This “vibe hacking” model resembles vibe coding. Attackers describe their goals in natural language, and the AI generates scripts, reconnaissance workflows, or data extraction logic. While these tools have not fully automated attacks from start to finish, they significantly lower the skills needed to engage in cybercrime. As Geenens explains, attackers can now target hundreds or thousands of organisations simultaneously, a task that once required large teams.

Attackers can now afford to fail repeatedly as part of their learning process, while defenders cannot. Even flawed AI-generated exploits speed up scanning, vulnerability detection, and phishing at levels that security teams find challenging to handle. The result is a threat landscape that uses familiar techniques but operates with greater speed and intensity.

Also Watch: How Do You Stop an Encrypted DDoS Attack? How to Overcome HTTPS Challenges

AI Assistants & Browsers Creating Invisible Data Leak Risks

The second, and more alarming, change that the VP of Threat Intelligence emphasises occurs within companies themselves. As organisations use AI assistants and AI-powered browsers, they delegate authority along with convenience. These tools require access to emails, documents, and business systems to be effective, and this access creates new risks.

Indirect prompt injection, shadow leaks, and echo leaks turn normal workflows into potential attack vectors. For instance, an AI assistant summarising emails may unintentionally process hidden commands within a message. These commands can lead the model to inadvertently leak sensitive information without the user clicking any links or noticing anything unusual.

In some cases, the data doesn't even leave the endpoint; it exits directly from the AI provider's cloud infrastructure, completely bypassing established data loss prevention and network monitoring.

Meanwhile, Geenens points to a fundamental shift in traffic patterns. The web is moving from human-to-website interactions to machine-to-machine communications. AI agents browse, conduct transactions, and query on behalf of users.

Bot traffic is growing rapidly, surpassing human traffic, and traditional controls, such as CAPTCHA or login challenges, are no longer effective. Defenders must now focus on behaviour rather than identity—understanding what a machine is trying to do and whether that behaviour matches business intent.

For CISOs, the message is straightforward: AI is unavoidable, but it needs to be used with proper governance, monitoring, and behavioural security measures. Understand what data AI assistants can access, log their activities, and get ready for a future where most traffic is automated. Attackers have already adapted.

Also Watch: Can You Stop an API Business Logic Attack?

Takeaways

  • The holiday season sees an increase in cyber threats.
  • AI tools like Worm GPT and Fraud GPT are changing the threat landscape.
  • Automated pen testing can help organisations defend against AI-driven attacks.
  • Indirect prompt injection poses significant risks to data security.
  • Organisations must monitor AI assistant interactions closely.
  • Vibe hacking is a new trend that lowers the barrier to entry for cybercriminals.
  • Behavioural analysis is crucial as machine-to-machine communication increases.
  • Pen testing remains essential to identify vulnerabilities before attackers do.
  • AI can automate parts of attacks, but is not fully autonomous yet.
  • CISOs need to implement strict controls when deploying AI technologies.

Chapters

  • 00:00 Introduction to Cybersecurity Threats During Holidays
  • 02:37 AI's Role in Evolving Cyber Threats
  • 05:45 The Impact of AI Tools on Cybercrime
  • 08:59 Automated Pen Testing and AI's Defensive Role
  • 11:45 Indirect Prompt Injection and AI Vulnerabilities
  • 14:37 Best Practices for CISOs in the Age of AI
  • 21:39 The Future of Cybersecurity: Machine-to-Machine Communication