Enterprises didn’t suddenly lose control of AI security. The control plane shifted quietly.

Most security teams have done what they were supposed to do. They’ve tightened identity. They’ve rolled out single sign-on (SSO), conditional access, and multi-factor authentication (MFA). They’ve put guardrails around approved tools. They’ve trained staff to spot obvious scams. And yet, a growing share of AI risk isn’t happening inside the AI platform at all.

It’s happening in the browser.

AI productivity tools have turned the browser into a high-value workspace. That matters because browser extensions sit inside that workspace, often with permissions that let them read what users see, copy, paste, and upload. When that layer is unmanaged or poorly governed, it becomes the easiest way to bypass controls that were designed for a different era of enterprise computing.

Recent campaigns involving fake ChatGPT extensions make the point painfully clear. They’re not the whole story. They’re the warning light on the dashboard.

The Enterprise Assumption That No Longer Holds

There’s an assumption baked into many enterprise security models: if you can trust identity, device posture, and the application, you can trust the session.

That worked when most high-risk activity happened in clearly defined places, like endpoints, email, or corporate applications. The browser was “just” the thing people used to access those applications.

Now the browser is where the work happens.

That shift breaks the old mental model in a specific way. Identity-first security tends to focus on logins and access decisions. Once the user is authenticated, the session is treated as legitimate by default. Extensions exploit that trust. They don’t need to steal a password or defeat MFA if they can ride along inside an already-authenticated session.

This is where browser extension security stops being a hygiene issue and becomes an enterprise AI security risk. Extensions can inherit trust without inheriting scrutiny. They live close enough to the user to observe behaviours that most security tools don’t consistently see, like what gets copied into a prompt, what gets uploaded, and which tabs are open.

AI didn’t create this gap, but it did widen it.

Why AI Changes the Risk Profile of Browser Extensions

The browser has always been messy. What’s different now is the value of what passes through it.

Enterprise AI usage is not limited to “ask a chatbot a quick question.” In real teams, AI gets used to draft customer communications, summarise meetings, analyse documents, interpret logs, help with coding, and speed up research. That means the inputs often include:

  • Sensitive prompts, like internal processes or product roadmaps.
  • Copied snippets of code or configuration data.
  • Drafts of legal or commercial language.
  • Customer context, incident notes, or operational details.

None of this has to be dramatic to be damaging. A single prompt can carry enough detail to expose intellectual property or create regulatory complications, especially if employees paste content they would never email to an external address.

Extensions with broad permissions change what “data leakage risk” looks like. They can be positioned to capture content from a page, watch what’s typed, intercept what’s copied, or access cookies that represent an authenticated session. If the goal is account access, attackers can also skip the login step entirely by targeting session tokens instead of credentials. That’s why these campaigns are enterprise-relevant even when download numbers aren’t in the millions.

The risk isn’t that “everyone” is using an AI extension. It’s that the wrong person is using the wrong one.

LayerX’s Enterprise Browser Extension Security Report 2025 puts some hard numbers behind the scale of extension exposure in enterprise environments: 99% of enterprise users have at least one extension installed, and more than half have over 10 extensions installed. The same report says 53% of enterprise users have installed extensions with “high” or “critical” permission scopes, meaning they can access sensitive browser data such as cookies, passwords, and browsing activity.

That’s the attack surface. AI simply raises the value of what’s sitting on it.
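
For security teams that want to see that surface directly, the sketch below shows one way to triage installed extensions by permission scope. It’s a minimal sketch, assuming a default Linux Chrome profile path (adjust for your OS) and an illustrative, non-exhaustive risk list; the Extensions/<id>/<version>/manifest.json layout and the permissions and host_permissions manifest keys are standard Chrome conventions.

```python
"""Sketch: flag locally installed Chrome extensions with high-risk permissions.

Assumes the default Linux profile path; adjust CHROME_PROFILE for your OS.
"""
import json
from pathlib import Path

CHROME_PROFILE = Path.home() / ".config" / "google-chrome" / "Default"

# Scopes that warrant extra scrutiny (illustrative, not exhaustive).
HIGH_RISK = {"cookies", "webRequest", "clipboardRead", "tabs",
             "scripting", "history", "management", "<all_urls>"}

def risky_permissions(manifest: dict) -> set:
    """Declared permissions (MV2 and MV3 keys) that match the risk list."""
    declared = set(manifest.get("permissions", []))
    declared |= set(manifest.get("host_permissions", []))  # Manifest V3
    return declared & HIGH_RISK

for manifest_path in (CHROME_PROFILE / "Extensions").glob("*/*/manifest.json"):
    try:
        manifest = json.loads(manifest_path.read_text(encoding="utf-8"))
    except (json.JSONDecodeError, OSError):
        continue  # skip unreadable or partially written manifests
    flagged = risky_permissions(manifest)
    if flagged:
        ext_id = manifest_path.parts[-3]  # Extensions/<id>/<version>/manifest.json
        name = manifest.get("name", "?")  # may be a __MSG_*__ locale placeholder
        print(f"{ext_id}  {name}: {sorted(flagged)}")
```

At enterprise scale the same triage would come from managed-browser telemetry rather than per-machine scripts, but the signal is the same: permission scope, not popularity.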

Fake ChatGPT Extensions Are a Warning Sign, Not the Core Threat

In late January 2026, LayerX disclosed a coordinated campaign involving 16 malicious Chrome extensions masquerading as helpful ChatGPT tools. The purpose was straightforward: hijack ChatGPT accounts by stealing session data, giving attackers access without needing credentials.

This is the part that matters for enterprise leaders: it’s not an authentication failure. It’s a session trust failure.

If an attacker can steal a session token or otherwise capture the active session, MFA doesn’t get a vote. The user already passed the checks. The session is already “trusted.” From the platform’s perspective, the attacker looks like the user.
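
To make that concrete, here’s a minimal sketch of what a replayed session looks like from the attacker’s side. The endpoint, cookie name, and token are all hypothetical; the point is what’s absent: no password, no MFA prompt, no login event at all.

```python
"""Sketch: why a stolen session token sidesteps login controls entirely.

The endpoint, cookie name, and token below are hypothetical, for
illustration only.
"""
import requests

stolen_token = "eyJhbGciOi..."  # captured session cookie; no login ever happened

resp = requests.get(
    "https://ai-platform.example.com/api/conversations",  # hypothetical API
    cookies={"__session": stolen_token},  # the only "credential" required
    timeout=10,
)

# Server-side, this request rides an already-authenticated session: same
# token, same entitlements, indistinguishable from the legitimate user.
print(resp.status_code)
```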

That’s why the “but how many people installed it?” question misses the point. Enterprise impact isn’t measured by infection rate alone. It’s measured by who was compromised and what they had access to.

If one senior engineer, analyst, legal professional, or executive has a hijacked AI session, the attacker doesn’t just get an account. They can potentially see sensitive conversations, infer business context, and capture ongoing work.

So yes, these fake extensions are important. But they’re important because they show how modern attacks are increasingly built around hijacking trust, not breaking locks.

The Browser Blind Spot in Enterprise Security

Browsers sit in an awkward place in the stack. They are not quite an endpoint. They are not quite an application. They are the gateway through which people do most of their work, across dozens of software-as-a-service (SaaS) platforms.

That’s a blind spot many organisations still underinvest in closing.

Google’s Chrome Enterprise “The Security Blindspot” paper makes the point in enterprise terms: most work happens in browsers, employees handle sensitive data through browser-based SaaS, and traditional security tools have not kept pace with protecting sensitive data inside the browser.

The same paper also highlights malicious browser extensions as an increasingly sophisticated threat that can evade detection and bypass endpoint protections, and it argues for browser telemetry and granular controls as part of modern security architecture.

That’s the real takeaway: enterprise controls often focus on the wrong visibility layer. You can have strong identity controls and still have poor visibility into what happens after authentication.

“Official store” trust signals add another trap. People assume that if something is listed in a major extension store, it’s been meaningfully vetted. In reality, store controls reduce some risk, but they don’t eliminate it. Attackers can also play a long game: an extension behaves normally for months or years, then changes through an update.

Malwarebytes documented a “sleeper” extension campaign where seemingly legitimate extensions later turned into spyware on millions of devices. The lesson is uncomfortable but simple: historical trust doesn’t guarantee current safety.
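
One practical response to that lesson is to treat an extension’s permission set as something to re-verify over time, not a one-off approval. Here’s a minimal sketch of that idea, reusing the same manifest layout as the earlier audit sketch: snapshot declared permissions, then flag any widening on the next run. The profile path and baseline file are assumptions to adapt.

```python
"""Sketch: flag extensions whose permissions widen after an update.

The profile path (Linux default shown) and baseline file are assumptions;
the Extensions/<id>/<version>/manifest.json layout is standard Chrome.
"""
import json
from pathlib import Path

EXTENSIONS_DIR = Path.home() / ".config" / "google-chrome" / "Default" / "Extensions"
BASELINE_FILE = Path("extension_baseline.json")  # snapshot from the previous audit

def snapshot() -> dict:
    """Map extension id -> sorted declared permissions (MV2 and MV3 keys)."""
    result = {}
    for mf in EXTENSIONS_DIR.glob("*/*/manifest.json"):
        try:
            data = json.loads(mf.read_text(encoding="utf-8"))
        except (json.JSONDecodeError, OSError):
            continue  # skip unreadable or partially written manifests
        perms = set(data.get("permissions", [])) | set(data.get("host_permissions", []))
        result[mf.parts[-3]] = sorted(perms)  # parts[-3] is the extension id
    return result

current = snapshot()
if BASELINE_FILE.exists():
    baseline = json.loads(BASELINE_FILE.read_text(encoding="utf-8"))
    for ext_id, perms in current.items():
        gained = set(perms) - set(baseline.get(ext_id, []))
        if gained:
            # An update quietly widened this extension's reach: re-review it.
            print(f"{ext_id} gained permissions since baseline: {sorted(gained)}")

BASELINE_FILE.write_text(json.dumps(current, indent=2))  # becomes the new baseline
```

Run on a schedule, this turns “we approved it once” into “we re-verify what it can do”, which is the only posture that survives sleeper updates.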

Why “Just Block Extensions” Is Not a Realistic Answer

Some organisations try to solve this by banning extensions outright. It’s understandable, and it often fails.

Extensions are tied to productivity. People use them for password managers, accessibility, translation, note-taking, and workflow tools. AI add-ons are part of the same story. If you don’t offer a controlled path, users will still look for shortcuts. They’ll install something on an unmanaged profile, a personal device, or a browser you’re not actively governing.

Bans can also create a blind spot of a different kind. They push behaviour into places that are harder to see, harder to audit, and harder to support. In practice, that increases risk rather than reducing it.

Enterprise constraints matter here. Security controls that ignore how work gets done don’t stick. The goal isn’t to eliminate extensions. It’s to make extension use visible, deliberate, and governed like any other high-risk software category.

What Security Leaders Need to Rethink Now

This is where the conversation needs to mature. Extensions can’t be treated as harmless add-ons anymore, especially when they touch AI workflows.

A more realistic posture starts with a mindset shift: treat AI browser extensions as high-risk software, not casual productivity tools. That changes what “approval” means.

A practical enterprise approach usually includes:

Managed browser policies, not just endpoint policies. If you manage devices but don’t manage the browser layer, you’re leaving a gap where extensions can operate with too much freedom. Chrome Enterprise guidance points toward making the browser a first-class security surface, with telemetry and controls that can feed into existing security operations tooling.

Extension governance based on permissions, not popularity. Reviews and star ratings are not security signals. Permission scopes are. Extensions that can access cookies, read page content, or interact with sessions should trigger additional scrutiny and stricter controls.

Session monitoring, not just login monitoring. Identity controls are necessary, but they’re not sufficient. Watch for session anomalies, token reuse patterns, and unexpected access behaviours, especially around AI platforms and high-value SaaS. A minimal detection sketch follows this list.

A tighter link between AI governance and browser governance. If your AI policy focuses on “approved AI tools” but ignores the extensions that connect to them, you’re governing the brand name and missing the access path.
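
As promised above, here’s a minimal sketch of session-level anomaly detection: flag any token seen from more than one client fingerprint. The log record fields are assumptions; adapt them to whatever your identity provider or proxy actually emits.

```python
"""Sketch: flag session tokens reused across conflicting client fingerprints.

The event fields (token, ip, user_agent) are assumptions for illustration.
"""
from collections import defaultdict

# Hypothetical parsed auth/proxy log records.
events = [
    {"token": "sess-91ac", "ip": "10.0.4.17", "user_agent": "Chrome/126 macOS"},
    {"token": "sess-91ac", "ip": "203.0.113.50", "user_agent": "curl/8.5"},
    {"token": "sess-7f02", "ip": "10.0.4.22", "user_agent": "Chrome/126 Windows"},
]

# Group every distinct (ip, user_agent) pair seen per session token.
fingerprints = defaultdict(set)
for e in events:
    fingerprints[e["token"]].add((e["ip"], e["user_agent"]))

for token, seen in fingerprints.items():
    if len(seen) > 1:
        # One authenticated session, multiple distinct clients: a classic
        # hijack signal that login-time checks alone will never surface.
        print(f"{token}: {len(seen)} distinct clients {sorted(seen)}")
```

In production this logic belongs in your SIEM or identity provider’s risk engine, but the principle holds at any scale: judge the session by its behaviour, not its login.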

None of these are silver bullets. But together they shift security from “trust the session by default” to “verify what the session is doing.”

Where This Leaves Enterprise AI Strategy

Secure AI adoption depends on visibility more than restriction.

Enterprises that treat this as an “extension problem” will chase symptoms. Enterprises that treat it as a control-plane shift will build a more resilient AI posture.

The bigger idea is organisational maturity. If your AI programme assumes risk lives inside the AI platform, you’ll keep strengthening the platform and wondering why sensitive data still leaks. If you recognise that risk also lives in the browser, you start governing the environment where AI is actually used.

That’s the difference between AI adoption that scales and AI adoption that stays fragile. The same organisation can have strong identity controls and still have weak AI security if browser behaviour is unmanaged. Extensions are one of the clearest examples of how that gap forms.

Enterprises don’t need to panic. They do need to update the map.

Final Thoughts: AI Security Breaks Down Where Trust Goes Unchecked

The fake ChatGPT extension campaign didn’t reveal a brand-new threat. It revealed an old habit that no longer fits modern enterprise work: trusting sessions too easily, because authentication looks strong on paper.

The next phase of enterprise AI security is going to be shaped by browser governance, extension controls, and the ability to see what happens after the user is logged in. That’s where trust either holds or quietly collapses.

EM360Tech covers these grey areas where technology adoption moves faster than security models, with analysis that stays grounded in how enterprises actually work and what security leaders can realistically change before the next control gap becomes the next incident.