In the era of AI, it is no longer a question of whether we should use it, but of how to use it effectively, says Sam Curry, Chief Information Security Officer (CISO) at Zscaler. He believes the growth of agentic AI is not meant to replace human security teams; rather, it aims to improve the industry as a whole.
In this episode of The Security Strategist podcast, host Richard Stiennon, an author and the Chief Research Analyst at IT-Harvest, speaks with Curry about the need to shift to a security model grounded in authenticity, the role of agentic AI in security operations, and the importance of awareness in adapting to the changes AI brings.
The conversation also touches on the necessity of establishing trust and accountability in AI systems, as well as the implications for cybersecurity professionals in an increasingly automated world.
AI Allows Easy Transition to Complex & Strategic Work
The cybersecurity industry is in a constant battle with malicious actors. As attackers become more skilled, especially now that AI is in the picture, security professionals must sharpen their own skills just to keep pace. Rather than taking away jobs, AI frees security experts from repetitive manual tasks, allowing them to focus on more complex and strategic work.
"We spend a lot of our time in the SOC doing manual tasks repetitively and trying to glue things together," Curry says. "When you manage not to think about the tools, your ability to perform a task improves drastically."
AI adoption brings other changes that help IT teams do their jobs better: they move from simple detection and response to a more proactive approach to security. Curry believes that in this new environment there will still be plenty of jobs; they will just be more engaging and valuable.
Ethics & Logic are Crucial to Work With AI
For universities and educational institutions, the rise of AI in cybersecurity poses a significant challenge. The traditional emphasis on technical certifications like Certified Ethical Hacker and Security+ is no longer adequate. Future jobs will demand a deeper understanding of fundamental principles.
"They're going to have to walk over to the philosophy department," Curry explains. "They'll probably need to engage with the social sciences department. Understanding ethics and logic is crucial because they have to work with AI and assess whether the information it provides is logical."
Coding and running scripts still matter, but the key skill is learning to collaborate with AI as a partner. That requires a boost in education to help cybersecurity professionals grasp the principles of logic, ethics, and sociology. This kind of awareness will help IT teams navigate the complex relationships between humans and AI.
As agentic AI becomes more common, we are shifting away from traditional security models. Authentication and authorisation are no longer sufficient. The new reality calls for a focus on authenticity.
Takeaways
- The rise of agentic AI necessitates a new security model based on authenticity.
- AI is not just a tool for attacks; it can enhance defensive strategies.
- Organisations must consider privacy and data handling when implementing AI.
- The role of cybersecurity professionals will evolve, focusing on more complex tasks.
- Education in cybersecurity must adapt to include ethics and logic.
- AI can help automate repetitive tasks, allowing for more interesting work.
- Trust and accountability are crucial in the deployment of AI systems.
- Consumption metrics can provide deeper insights into product value.
- Understanding user engagement is more important than just satisfaction surveys.
- The future of cybersecurity will involve continuous adaptation to new threats.
Chapters
- 00:00 The Rise of Agentic AI in Cybersecurity
- 09:05 The Future of Cybersecurity Jobs
- 12:45 The Role of Education in Cybersecurity
- 19:42 Establishing Trust in Agentic AI