What the Cybersecurity and AI Tango Means for the Enterprise

AI is no longer the stuff of science fiction; it’s happening right before our eyes. Next Move Strategy Consulting reports that the AI market is expected to see a whopping twenty-fold increase over the next decade, reaching almost $2 trillion by 2030, up from about $100 billion in 2021. From DALL-E to ChatGPT to Bard and beyond, the wild ride of AI, particularly generative AI, is sweeping across the enterprise ecosystem at breakneck speed. The never-sleeping, always-on world of cybersecurity has also been touched by this AI renaissance.

BlackBerry’s recent research found that “the majority (82%) of IT decision-makers plan to invest in AI-driven cybersecurity in the next two years and almost half (48%) plan to invest before the end of 2023.” In other words, the cybersecurity and AI tango is already on. The big question, however, is: for how long? Industry experts like Funso Richard, information security officer at Ensemble, say the dance will continue for a long time. Today’s article uncovers why the cybersecurity-AI union is here to stay and what it means for the enterprise.

AI for security

Richard told BrainBox in a telephone interview that AI is already changing the cybersecurity landscape in two opposing yet profound ways: Threat actors are using generative AI to optimize cyberattacks, while cybersecurity professionals are leveraging the same potential to protect organizations.

More and more, cybersecurity teams and service providers want to bolster their ability to detect threats and bounce back from incidents at speed, a sentiment shared by Kfir Kimhi, CEO of data protection and security company ITsMine.

“The security department is as much the security operations center in any organization. When an attack happens, or before it happens, they need to know what to investigate first,” Kimhi said in a video interview with BrainBox. “This is where AI helps a lot: to reduce the number of tickets that security teams should handle, helping them prioritize what’s important,” he continued.

While AI-enabled cybersecurity solutions can help automate threat detection, streamline threat hunting, and analyze large amounts of data, Ron Moritz, cybersecurity expert and venture partner at venture capital firm OurCrowd, argues that AI cannot always capture or comprehend the nuances and contexts of complex IT environments. “AI may make recommendations that are suboptimal in experts’ eyes,” said Moritz, who added that “we can improve the effectiveness of AI models with ongoing human feedback.” As I wrote in this article for ITProToday back in 2022, “automated tools must allow for human intervention at critical points.”
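
To make the triage idea concrete, here is a minimal sketch, assuming a made-up alert schema, weighting scheme, and thresholds (none of this is drawn from ITsMine’s or any other vendor’s product), of how an AI-assisted pipeline might rank a backlog of alerts and reserve the ambiguous middle band for a human analyst:

```python
# Minimal sketch of AI-assisted alert triage with a human-in-the-loop gate.
# The alert fields, weights, and thresholds are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str              # e.g. "EDR", "firewall", "cloud audit log"
    severity: int            # 1 (low) .. 5 (critical), as reported by the sensor
    anomaly_score: float     # 0.0 .. 1.0 from an upstream ML model (assumed)
    asset_criticality: int   # 1 (lab box) .. 5 (crown-jewel system)

def triage_score(alert: Alert) -> float:
    """Blend model output with asset context into a single priority score."""
    return (0.5 * alert.anomaly_score
            + 0.3 * (alert.severity / 5)
            + 0.2 * (alert.asset_criticality / 5))

def route(alert: Alert, auto_threshold: float = 0.8, review_threshold: float = 0.5) -> str:
    """Escalate high-confidence detections automatically; send the murky middle to a human."""
    score = triage_score(alert)
    if score >= auto_threshold:
        return "auto-escalate"
    if score >= review_threshold:
        return "human-review"
    return "deprioritize"

if __name__ == "__main__":
    backlog = [
        Alert("EDR", 5, 0.92, 5),
        Alert("firewall", 2, 0.61, 3),
        Alert("cloud audit log", 1, 0.15, 2),
    ]
    for a in sorted(backlog, key=triage_score, reverse=True):
        print(f"{a.source:<16} score={triage_score(a):.2f} -> {route(a)}")
```

The explicit review band is the point Moritz makes about ongoing human feedback: the model ranks the queue, but analysts still get the final word on the calls it is least sure about.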

Security for AI

As the AI fire grows wilder, so do concerns about its safety. On March 22, 2023, Elon Musk, Steve Wozniak, Gary Marcus, and several thousand other AI experts across the globe signed an open letter (published by the nonprofit Future of Life Institute) calling for an “immediate six-month pause on the training of AI models more powerful than GPT-4.” Why? Because, according to the letter, “AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs.”

Talk about the dangers of AI has only grown louder over the last few months. However, some believe the letter “further fuels AI hype and makes it harder to tackle real, already occurring AI harms,” as reported in this article by VentureBeat’s Sharon Goldman. But whether you lean toward AI doomerism or side with those who vehemently chide the pessimists, there is industry consensus on the need to safely harness AI’s massive potential.

“There is AI for security and there’s security for AI,” Moritz enthused. While it’s relatively easy to understand how to use AI to improve security products, the trickier problem, according to Moritz, is trust: Can I trust the AI to begin with? This is why he believes business leaders must prioritize the security of the AI tools into which they pour their investment dollars.

Getting your AI security investment right

For Richard, IT decision-makers must evaluate the need for such an AI investment to get the right value from it. They must ask these questions: What part of the business will benefit from AI-enabled security solutions? What benefit are we looking for? A few other things to consider before investing are people, processes, technology, and risk reduction, he added.

“Cybersecurity professionals will need training and upskilling to use AI-enabled cybersecurity solutions. Manual processes may have to be replaced by automation. Technology spending should, at minimum, address threat detection and prevention, cyber resiliency, user behavior analytics, identity management, vulnerability scanning, and network and cloud security.”

The future is ‘like a cocktail of chaos and calm’

When I asked Richard if he thinks the current AI wave is just another tech hype that would fizzle out soon, like many others before it, he replied pretty strongly, “No, it is not likely to fizzle out. In fact, it will continue to grow in the years to come.” He said some reasons for this include increased availability of data, lower cost of computing power, and more demand for AI-powered solutions. 

Richard explained that “recent advancements in large language models (LLMs), as manifested in the latest releases of ChatGPT and other generative AI tools, point to a strong future for AI and cybersecurity.” However, he noted that in a world driven by AI, the future of cybersecurity is like a cocktail of chaos and calm.

We will be experiencing an AI-versus-AI battle, he said: more use of AI-powered tools for sophisticated cyberattacks, with a corresponding increase in the adoption of a new generation of AI-enabled security solutions.

“As AI continues to evolve, cybersecurity will constantly disrupt to stay ahead of emerging threats and safeguard critical digital assets. There is also a possibility that one of the major generative AI tools will be compromised to distribute malware on a scale only imagined. One important trend worth mentioning is the shift from data confidentiality to protecting data integrity and provenance as the proliferation of synthetic media will threaten confidence in data, making it challenging for businesses to fully rely on data available to them.”
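
Richard’s point about integrity and provenance can be illustrated with a small sketch: tagging data at its source and verifying the tag before it is trusted downstream. Everything here, from the shared secret to the record format, is an illustrative assumption; it simply shows the flavor of integrity checking he alludes to, using Python’s standard-library HMAC support:

```python
# Minimal sketch of data-provenance checking: the producer signs content with a
# shared secret, and consumers verify the tag before trusting the data.
# Key management, the record format, and the HMAC scheme are illustrative assumptions.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-managed-secret"  # assumption: fetched from a key-management service in practice

def sign(data: bytes) -> str:
    """Produce an integrity/provenance tag for a blob of data."""
    return hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()

def verify(data: bytes, tag: str) -> bool:
    """Constant-time check that the data still matches its provenance tag."""
    return hmac.compare_digest(sign(data), tag)

if __name__ == "__main__":
    record = b'{"customer_id": 42, "churn_risk": 0.17}'
    tag = sign(record)              # stored or transmitted alongside the record
    print(verify(record, tag))      # True: data is intact and came from the key holder
    tampered = record.replace(b"0.17", b"0.97")
    print(verify(tampered, tag))    # False: the integrity check fails
```

In practice this would be backed by managed keys or digital signatures, but the takeaway matches Richard’s: provenance becomes something you verify, not something you assume.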

The threats are real, and business leaders must keep the most consequential developments in AI and cybersecurity top of mind. While AI will help cybersecurity become faster and more responsive, cybersecurity must also help secure AI, creating a balance of sorts.

“IT decision-makers and business leaders will need to develop an AI strategy that aligns with their business goals and culture. It is not enough to leverage AI because it is the shiny new toy in town. This is not just about responsible AI, which is an important approach to AI development, deployment, and adoption. It is more about strategic-value AI. It is important that the conversation about AI-enabled solutions shifts from focusing solely on cybersecurity to the business need for AI. Value is crucial to ensuring the right investment is made and returns are guaranteed. AI adoption must be about optimizing operations, maximizing value, empowering the workforce, and pursuing social good,” said Richard.
