As more organisations adopt AI in their operations, a new layer of data risk is emerging.
In a recent episode of The Security Strategist Podcast, guest Gidi Cohen, CEO and Co-Founder of Bonfy.AI, sat down with host Richard Stiennon, Chief Research Analyst at IT Harvest, to discuss why traditional data loss prevention (DLP) systems fail. Cohen stressed that understanding data context is now crucial for securing AI-driven enterprises.
What Happens When “Trusted” AI Tools Become the Biggest Risk
The rise of generative AI, from chatbots to embedded assistants in SaaS platforms, has created a complex web of data interactions that many organisations do not fully grasp. Cohen argues that this new reality has rendered legacy DLP technologies largely obsolete.
“Even before generative AI, DLP never really worked well,” he told Stiennon. “It relied on static classification and outdated detection models that created noise and false positives. Now, with dynamic content generated and shared instantly — and humans often out of the loop — those tools can’t keep up.”
While “shadow AI” applications have gained much attention, Cohen believes the larger threat lies in the trusted tools organisations already use. “We’re using Microsoft 365, Google Workspace, Salesforce — all of which now embed AI models,” the Bonfy CEO explains. “They process vast amounts of sensitive data every day. Yet most companies have no control or visibility over how that data is accessed, transformed, or shared.”
This lack of visibility creates a perfect storm for data exposure. “You might use an LLM to summarise a customer meeting, which is fine,” Cohen says. “But if that summary is later shared with the wrong client or synced with another app, you’ve just leaked confidential information — and no one will even notice.”
The main issue, he adds, isn’t about whether AI vendors misuse data. “The model itself isn’t the main problem. It’s what happens afterwards — how the data and outputs move through the organisation.”
How to Create a New Model for AI-Aware Data Security
Cohen’s solution to this growing complexity is what he calls a context-driven, multichannel architecture, an approach that treats data protection as an ecosystem rather than a single checkpoint.
“The flows are too complex for simple guardrails,” he explains. “You can’t just block uploads of credit card numbers and call it a day. You need to understand the context — who’s sharing the information, through what channel, for what purpose, and whether it’s leaving the organisation.”
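To make the contrast concrete, a context-aware check weighs who is sharing, over which channel, and where the data is going, rather than matching a single content pattern. The sketch below is a minimal, hypothetical illustration of that idea; the names, tags, and rules are invented for this example and are not Bonfy’s actual logic:

```python
from dataclasses import dataclass

INTERNAL_DOMAIN = "example.com"  # hypothetical organisation domain

@dataclass
class ShareEvent:
    sender_role: str       # e.g. "sales", "finance"
    channel: str           # e.g. "email", "api", "ai_agent"
    recipient_domain: str  # where the content is headed
    content_tags: set      # labels attached to the content

def assess_risk(event: ShareEvent) -> str:
    """Score a sharing event by its context, not just content patterns."""
    external = event.recipient_domain != INTERNAL_DOMAIN
    sensitive = bool(event.content_tags & {"customer-data", "financial"})
    if sensitive and external:
        return "block"   # confidential data leaving the organisation
    if sensitive and event.channel == "ai_agent":
        return "review"  # an autonomous agent handling sensitive data
    return "allow"

# A meeting summary tagged with customer data, emailed to an outside domain:
event = ShareEvent("sales", "email", "client.org", {"customer-data"})
print(assess_risk(event))
```

The same content yields different decisions depending on recipient and channel, which is the point Cohen makes: a static pattern filter would treat all three cases identically.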
Bonfy’s approach looks across multiple communication layers, from email and file sharing to APIs, AI agents, and web traffic, creating a complete view of how information moves. Cohen says this is essential for spotting risky behaviour, whether it comes from a careless employee or an autonomous AI agent working in the background.
As organisations start using multimodal AI — incorporating text, images, audio, and video — this overall visibility becomes even more important. Browser extensions or regex-based filters, he notes, simply won’t catch everything. “An AI agent isn’t using a browser. It’s running somewhere on your network, processing sensitive data on its own. You need a system that can detect that.”
Alluding to turning points in cybersecurity, Cohen explained: “What happened with firewalls 20 years ago or endpoint protection 15 years ago is happening again with data security. We’re entering a new generation.”
“AI is moving faster than defenders. Waiting to figure it out later is not an option. The risks are growing rapidly — and the time to act is now.”
Ultimately, as AI changes how organisations create, share, and use information, traditional data security methods are struggling to keep up. The Bonfy co-founder believes the future belongs to tools that understand context — not just content — and that can protect data wherever and however AI is used.
Takeaways
- The rise of LLMs introduces significant new risks to data security.
- Traditional DLP solutions are inadequate for modern data flows.
- Shadow AI poses hidden risks that organisations often overlook.
- Embedded models in SaaS applications increase data leakage risks.
- Contextual understanding is crucial for effective data protection.
- Organisations must adapt to the dynamic nature of data sharing.
- Guardrails are essential to prevent data misuse and leaks.
- Data security controls need to evolve with technological advancements.
- The integrity of data can be compromised by AI hallucinations.
- Trust in AI tools must be balanced with robust security measures.
Chapters
- 00:00 Introduction to Cybersecurity Challenges
- 02:47 The Rise of LLMs and Data Risks
- 06:04 Shadow AI and Embedded Models
- 08:59 Guardrails and Data Protection Strategies
About Bonfy.AI
Bonfy.AI empowers enterprises to safely adopt GenAI and productivity platforms, such as Microsoft 365 and Copilot, without sacrificing security, compliance, or innovation. Legacy DLP and DSPM are increasingly inadequate against the risks of prompt injection, shadow AI, and over-permissioned access inherited by AI agents. Bonfy.AI’s solutions go beyond static detection, offering a context-driven, multi-channel data security architecture that understands not just content, but the full context of data flows within the modern enterprise.
Bonfy Adaptive Content Security™ (Bonfy ACS™) enables real-time monitoring, automated labeling, and risk-based remediation of AI-generated content across email, file sharing, cloud applications, and enterprise collaboration platforms. Deep integrations with Microsoft 365 ensure full visibility and policy enforcement in critical channels (Mail, SharePoint, Entra, Purview, and Copilot outputs) while adaptive knowledge graphs and behavioral analytics detect and prevent ten times more real-world risk scenarios with far fewer false positives.
Bonfy ensures audit-readiness and regulatory compliance for frameworks such as GDPR, HIPAA, PCI, and CCPA by blocking unauthorized AI access, labeling confidential data in real time, and providing security teams with actionable visibility across all information flows. Organizations in regulated industries, including financial services, healthcare, insurance, and technology, leverage Bonfy.AI to enable secure AI adoption, prevent costly data leaks, and achieve balanced productivity gains without introducing compliance crises or reputational risk. To learn more, visit bonfy.ai and follow us on LinkedIn.