Podcast: The Security Strategist podcast
Guest: Eric Schwake, Director of Cybersecurity Strategy, Salt Security
Host: Shubhangi Dua, Podcast Producer and B2B Tech Journalist
Adopting enterprise AI is often seen as a productivity boost. However, a subtler change is happening behind the scenes, one that security leaders are still trying to understand. Enterprises are not only optimising AI tools but also bringing autonomous agents into their workplaces.
“We would call AI agents an additional workforce that enterprises are deploying,” says Eric Schwake, Director of Cybersecurity Strategy at Salt Security.
The description is more literal than it seems. These agents can access systems, interact with data, and perform multi-step tasks with little human input. Unlike employees, they lack intuition and caution.
In a recent episode of The Security Strategist podcast, Schwake sat down with Shubhangi Dua, Podcast Producer and B2B Tech Journalist, to discuss how AI agents, shadow AI, and API security challenges are transforming enterprise cybersecurity. Schwake explains how to secure autonomous AI systems at scale today.
Has AI Moved Beyond Experimentation Across Enterprises?
AI is no longer in the experimental stage. Leadership teams across industries are actively promoting its use to boost innovation. Executives like Jensen Huang, Founder, President & CEO of NVIDIA, are highlighting a larger trend where enterprises are measuring, incentivising, and expecting AI adoption.
This urgency creates a familiar tension. Speed provides a competitive edge, but it also shortens the time available for governance. “You want them to use this innovation to do their work,” Schwake tells Dua. “But you don't want sensitive data leaking and getting into the wrong hands.”
Where the Real AI Risk Lives
Current discussions about AI security often focus on models and outputs. Yet, a more significant risk may be found deeper within how AI systems operate.
Every decision made by an AI agent leads to interactions at the system level. These interactions typically involve internal tools, third-party services, and experimental infrastructure that change weekly. The outcome is a rapidly growing, highly connected environment that is hard to monitor completely.
Spotlighting a rising challenge, Schwake notes that enterprises are deploying AI faster than they can observe it.
AI differs from previous technologies not only in capability but also in speed. Tasks that used to take hours can now be completed instantly and repeatedly. “If you think about it as a worker who works 10 times faster, that makes the problem 10 times worse,” Schwake explains.
In practical terms, this means minor issues—like misconfigurations, too many permissions, and unintended data access—can evolve into significant risks much faster than in the past. Since many of these systems work autonomously, problems may not appear until after damage occurs.
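Schwake's "10 times faster" framing suggests one practical control: watch the rate at which an identity acts, not just what it touches. The sketch below, with entirely hypothetical names and thresholds, flags any actor whose per-minute action count far exceeds a human baseline, so a misconfigured or over-permissioned agent surfaces before the damage compounds.

```python
# Minimal sketch (hypothetical names/thresholds): flag actors whose action
# rate far exceeds a human baseline, surfacing runaway agents early.
from collections import Counter

HUMAN_BASELINE_PER_MIN = 5   # assumed typical human action rate
ESCALATION_FACTOR = 10       # Schwake's "10 times faster" heuristic

def flag_fast_actors(events):
    """events: iterable of (actor_id, minute_bucket) tuples from an audit log."""
    rates = Counter(events)  # actions per actor per minute bucket
    threshold = HUMAN_BASELINE_PER_MIN * ESCALATION_FACTOR
    return sorted({actor for (actor, _), n in rates.items() if n > threshold})

events = [("agent-7", 0)] * 120 + [("alice", 0)] * 4
print(flag_fast_actors(events))  # → ['agent-7']
```

The point is not the specific numbers but the shape of the control: velocity itself becomes a detection signal when the "worker" can act at machine speed.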
What Blind Spots Are Enterprises Missing Internally?
Not all risks come from outside threats. More often, they start within the enterprise itself. Teams testing AI frequently link new tools to existing data sources without formal oversight. These informal setups, sometimes called shadow AI, create areas of activity outside traditional security controls.
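One common first step against shadow AI is an inventory diff: compare the clients actually seen hitting internal APIs (from gateway or proxy logs) against the list of sanctioned integrations. The sketch below uses invented client names purely for illustration.

```python
# Hypothetical sketch: diff observed API clients against a sanctioned
# inventory to surface unapproved "shadow AI" integrations.
SANCTIONED = {"crm-assistant", "support-summariser"}

def find_shadow_clients(observed_clients):
    """Return unsanctioned client IDs seen in traffic, deduplicated."""
    return sorted(set(observed_clients) - SANCTIONED)

logged = ["crm-assistant", "notebook-llm-test",
          "support-summariser", "notebook-llm-test"]
print(find_shadow_clients(logged))  # → ['notebook-llm-test']
```

A real deployment would feed this from continuous API discovery rather than a static list, but the principle is the same: you cannot govern an integration you have never enumerated.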
“How are we going to ensure that what this personal AI system is doing stays secure?” Schwake asks. It’s a concern that many enterprises are just starting to tackle.
How Are Autonomous Systems Changing Enterprise Operations?
The move toward autonomous systems is pushing companies to rethink long-held security beliefs. If AI agents act like workers, they must be managed similarly. “We have to treat those as if they are employees,” Schwake says. This involves defining limits, monitoring actions, and ensuring that access matches intent.
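Treating an agent like an employee concretely means an explicit allow-list per agent, a decision gate in front of every action, and an audit trail. A minimal sketch of that pattern, with hypothetical agent and permission names, might look like this:

```python
# Hypothetical policy gate: each agent gets an explicit allow-list, every
# requested action is checked and logged, mirroring how an employee's
# access would be scoped and audited.
AGENT_POLICIES = {
    "invoice-agent": {"read:invoices", "create:payment_draft"},
}

audit_log = []

def authorise(agent_id, action):
    """Allow only actions on the agent's allow-list; record every decision."""
    allowed = action in AGENT_POLICIES.get(agent_id, set())
    audit_log.append((agent_id, action, "allow" if allowed else "deny"))
    return allowed

print(authorise("invoice-agent", "read:invoices"))  # → True
print(authorise("invoice-agent", "delete:ledger"))  # → False
```

Default-deny for unknown agents and actions is the design choice doing the work here: access matches declared intent, and everything else is refused and recorded.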
However, unlike human employees, AI agents don’t pick up on nuances. They follow instructions consistently—with no hesitation.
Why AI Regulation Won't Fix the Problem
As frameworks like the EU AI Act take shape, they provide initial clues about how governance might change. Still, regulation alone won't fix the problem. “There needs to be a more measured approach,” Schwake notes. “We can’t just roll this stuff out and hope it’s secure.”
For many enterprises, the difficulty isn’t a shortage of tools but rather balancing speed with oversight in a way that keeps up with innovation. Most enterprises are still in the early stages of understanding how AI alters their risk landscape. The complexity is as much operational as technical.
Visibility, context, and control will shape the next phase of enterprise AI adoption. However, how these elements will come together—and what gaps may still exist—remains to be seen.
For more insights on how security leaders are navigating this change, follow Salt Security on their website, YouTube, and LinkedIn, where discussions about agentic security continue to evolve.
Key Takeaways
- AI agents behave like employees and need the same level of security oversight.
- Most AI risk sits in the API layer where actions actually happen.
- Faster AI systems can turn small security gaps into major threats.
- Unmonitored “shadow AI” tools are quietly exposing sensitive data.
- Continuous visibility is the foundation of securing any AI ecosystem.
Chapters
- 00:00 Introduction to AI and Cybersecurity
- 02:43 Insights from RSA Conference
- 06:30 The Role of AI Agents in Security
- 08:30 Transitioning from Discovery to Governance
- 12:03 Protecting Sensitive Data in AI Systems
- 15:21 Identifying Weak Points in AI Security
- 18:54 The Need for Measured Security Approaches
- 20:38 CISO Strategies for API Security
- 23:22 The Future of AI in Cybersecurity
- 25:14 Visibility as a Key Security Measure
For more information, please visit em360tech.com and salt.security.
To learn more about Salt Security, AI, and API security, follow:
Salt Security LinkedIn: Salt Security
Salt Security X: @SaltSecurity
Salt Security YouTube: @SaltSecurity
EM360Tech YouTube: @enterprisemanagement360
EM360Tech LinkedIn: @EM360Tech
EM360Tech X: @EM360Tech