Enterprise AI leadership rarely fails because people lack vision. It fails because execution gets messy faster than governance, security, and operating discipline can keep up.

Most organisations are trying to do several hard things at once. They’re modernising data foundations, hardening security, rethinking operating models, and proving value that holds up under cost scrutiny. At the same time, they’re being asked to move faster than their risk posture is ready for.

That’s why “AI strategy” is no longer a neat plan you present once and then deliver. It’s a living set of trade-offs: speed versus control, innovation versus defensibility, autonomy versus accountability.


The analysts on this list matter because they help leaders separate signal from noise. They don’t make the complexity disappear. They make it easier to name the real problem, pick the next sensible move, and avoid the kind of architectural and governance debt that shows up later as reputational risk.

Why Enterprise AI Leaders Still Rely on Analysts

Enterprise leaders don’t rely on analysts to tell them what AI is, or which vendors exist. They rely on them to pressure-test decisions when the stakes are real, the org chart is complicated, and the consequences land in more than one place.

That’s because enterprise AI isn’t just a tooling shift. It’s an operating model shift. Models change how work gets done, how decisions get made, and who has to sign their name at the bottom when systems influence outcomes at scale.

The tension sits in plain sight. Everyone wants speed, but governance pressure is tightening. “Move fast” stops being a motivational line the moment your AI touches regulated data, impacts customers, or automates actions people can’t easily rewind.

AI adoption also forces uncomfortable coupling across domains that used to pretend they were separate. Data quality affects model reliability. Security affects access and control. Architecture affects latency and cost. Value measurement affects whether programmes survive budget season. When visibility is fragmented across teams, the organisation can ship something impressive and still fail to ship something sustainable.

The analysts enterprise leaders keep coming back to are the ones who stay close to execution. They help leaders frame trade-offs early, before those decisions harden into structural mistakes that are expensive to unwind.

How This List Was Curated

This list focuses on analysts who have earned enterprise credibility over time: senior roles held, long-running research themes, industry influence, and work that consistently lands in real-world environments.

Each analyst is closely associated with a clear AI-related problem space, such as secure AI adoption, AI governance, agentic systems reliability, AI-ready data foundations, or AI-driven operating change.

The final filter is trust earned through consistency. Plenty of people talk about AI. Fewer can stay coherent across shifting hype cycles and still help leaders make defensible decisions.

The Analysts Enterprise Leaders Should Be Following

The goal is simple: help enterprise leaders decide whose thinking fits the challenges they’re trying to solve, whether that challenge is AI security, AI value, agent reliability, AI data foundations, or AI applied in a specific operational domain.

Note: Names are listed alphabetically, not ranked. The point isn’t to compare them as if they all do the same job; AI is such a broad field that everyone brings a different lens to the conversation. But each analyst maps to the real problems enterprise teams are dealing with right now.

Aparna Sundararajan

Profile card of Aparna Sundararajan, Practice Lead for AI and Business Transformation at MIT xPRO Insight, specialising in responsible AI at scale, based in Sydney, Australia.

Aparna Sundararajan is a strategic advisor focused on secure AI adoption, with experience across enterprise-facing roles that include major advisory and transformation environments. Her credibility comes from sitting in the overlap between strategy, security, and enterprise delivery. 

Her core problem space is secure AI adoption and governance that holds up in the real world, especially when organisations want agentic workflows and fast deployment without losing control of risk, data exposure, or accountability.

What they are known for and why they matter to enterprise AI leaders

Sundararajan is most useful when leaders feel the pressure to “scale AI” but know the security story won’t survive first contact with reality.

Her value is in translating security into the language enterprise leaders actually need: what has to be true for AI to be safe enough, trusted enough, and operational enough to run inside the business. That’s not theoretical. It’s a leadership-level question about what you can defend.

The consequence of getting that wrong usually isn’t a dramatic failure on day one. It’s slow erosion: controls become inconsistent, access expands faster than oversight, and AI becomes a source of new attack surface rather than a capability boost.

She’s also very clear that security can’t be treated as a blocker. It has to become part of how teams build, ship, and operate, especially when AI systems and automations increase the speed at which mistakes propagate.

Where their insights are most valuable

Sundararajan’s insights are strongest when organisations are trying to put structure around AI risk without killing momentum.

Across her recent work, a few themes show up consistently: real-time visibility, unified “whole environment” security thinking, and the need to design controls that work in dynamic cloud contexts rather than static legacy assumptions.

A good example is the Security Strategist episode Unified Defences: Why CDR Matters, where she explores cloud detection and response as a practical answer to expanding cloud attack surface, real-time threats, and the need for unified security posture. The takeaways emphasise visibility and context, integrating security into development, and using automation to improve security team efficiency.

That episode is one lens, not her whole identity. The broader through-line is that enterprise AI security has to be designed as an enabling system: real-time enough to matter, unified enough to reduce blind spots, and practical enough that teams don’t route around it.

Her perspective tends to resonate with leaders who need to scale AI in environments where security and governance are board-level concerns, not check-box activities.

Check out Aparna Sundararajan on LinkedIn.

Dana Gardner

Profile card of Dana Gardner, President and Principal Analyst at Interarbor Solutions and lecturer at Northeastern University, based in New Hampshire, United States.

Dana Gardner has built a long-running platform around enterprise tech analysis and moderated insight, with decades of experience producing and distributing enterprise IT content. He’s also the founder of Interarbor Solutions and the host of BriefingsDirect, a series that began in 2005 and has produced hundreds of episodes.

His problem space, in an AI context, is scaling AI as part of enterprise systems: how AI lands in real architecture, real ops, and real constraints, especially when organisations are trying to modernise platforms without adding fragile complexity.

What they are known for and why they matter to enterprise AI leaders

Gardner is most useful when leaders need clarity on how AI affects the full system, not just the model layer.

He frames AI adoption as an engineering and operating reality: architecture bottlenecks, integration decisions, and the practical sequencing that makes AI scalable instead of flashy. That matters because many AI programmes don’t stall due to lack of ambition. They stall because the underlying system isn’t designed to support what the AI now needs.

His credibility is also tied to repetition and pattern recognition: thousands of moderated conversations across shifting waves of enterprise tech, which lets him spot when organisations are about to rebuild the same problem with newer labels.

Where their insights are most valuable

Gardner’s insights are strongest when organisations are trying to operationalise AI in environments where architecture decisions become product decisions.

A recurring theme in his recent AI-facing work is that scale comes from simplifying, not piling on more moving parts. In the Don’t Panic, It’s Just Data episode How To Scale AI in Digital Commerce Effectively, the conversation is anchored in a very practical set of blockers: architectural bottlenecks, the inefficiency of splitting search and recommendation systems, the need for real-time personalisation, and the reality that migration has to be phased.

The value of that example is not “digital commerce is the only place AI matters”. It’s the pattern: AI complicates systems that were already fragmented, and the path forward is often to design for cohesion and latency, not bolt-on cleverness.
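For a rough sense of what that cohesion point looks like in practice, here’s a minimal sketch in which search and recommendations share one embedding index rather than two separately maintained systems. The catalogue, vectors, and scoring below are invented placeholders, not a reference to any platform discussed in the episode.

```python
# Minimal sketch: one shared embedding index serving both search and
# recommendations, instead of two separately maintained systems.
# The catalogue and vectors are hypothetical placeholders.
import math

CATALOGUE = {
    "trail-shoe": [0.9, 0.1, 0.2],
    "road-shoe": [0.8, 0.3, 0.1],
    "rain-jacket": [0.1, 0.9, 0.4],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def rank(query_vector, top_k=2):
    # Both entry points rank against the same index, so relevance logic
    # and latency characteristics stay consistent across use cases.
    scored = sorted(CATALOGUE.items(),
                    key=lambda kv: cosine(query_vector, kv[1]), reverse=True)
    return [name for name, _ in scored[:top_k]]

def search(query_vector):
    return rank(query_vector)

def recommend(history_vectors):
    # Recommendations reuse the same ranker on an averaged history vector.
    avg = [sum(vals) / len(vals) for vals in zip(*history_vectors)]
    return rank(avg)

print(search([0.85, 0.2, 0.15]))                          # query resembling shoes
print(recommend([[0.9, 0.1, 0.2], [0.8, 0.3, 0.1]]))      # history of shoe views
```

The design choice the sketch illustrates is the one the episode points at: when both workloads share one index, you maintain one relevance model and one latency profile instead of reconciling two drifting systems.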

His thinking tends to resonate with enterprise leaders responsible for performance, reliability, and cost, especially when they’re pushing AI into customer-facing or revenue-critical environments where “close enough” isn’t acceptable.

Check out Dana Gardner on LinkedIn.

Edosa Odaro

Profile card of Edosa Odaro, MIT-affiliated AI strategist and author focused on AI value, governance, and human-centred leadership, based in the United Kingdom.

Edosa Odaro is an AI value advisor and speaker, and he’s also the author of The Values of Artificial Intelligence, which is explicitly framed around connecting AI value to human values and leadership judgment, not just capability. 

He works directly on a problem many enterprise leaders feel but struggle to name: AI value that can be defended, measured, and aligned to what the organisation actually cares about. His problem space is AI value, governance, and the leadership trade-offs that determine whether AI helps people and the business, or quietly undermines trust.

What they are known for and why they matter to enterprise AI leaders

Odaro is most useful when an organisation can build AI, but can’t consistently explain why it matters, how it creates value, and what guardrails prevent harm.

He argues that many AI failures are not “model failures” first. They’re design failures in shared ownership: technical delivery separated from business context, and governance treated as an add-on rather than a decision system.

That matters to enterprise leaders because the hardest AI questions are no longer “can we do this?” They’re “should we do this?”, “can we defend it?”, and “what are we optimising for without noticing?” If leadership can’t answer those questions, AI becomes a risk multiplier.

His credibility is anchored in a consistent framework-led stance: AI value has to be connected to human outcomes, and leaders have to make trade-offs explicit rather than hiding behind tool performance.

Where their insights are most valuable

Odaro’s insights are strongest when organisations are trying to move AI conversations out of hype and into accountable leadership.

A clear recent example is his piece How Do We Make AI Valuable - Without Losing What Makes Us Human?, where he challenges the “AI is a race for technology” framing and instead positions the real race as values and the choices organisations embed into systems. He explicitly ties the question of AI value to decisions about who and what we value, and how that shapes outcomes at scale.

That example is one thread, not his entire scope. The deeper pattern is that AI value is fragile when measurement incentives are wrong, when governance is reactive, and when leaders treat ethics as a slogan instead of an operating discipline.

His perspective tends to resonate with executives who need to justify AI investment under scrutiny, leaders building governance that goes beyond compliance, and organisations trying to align AI success metrics with real human and business outcomes.

Check out Edosa Odaro on LinkedIn.

Jon Arnold

Profile card of Jon Arnold, technology analyst and founder of J Arnold & Associates, specialising in collaboration technology, AI, and digital transformation, based in Toronto, Canada.

Jon Arnold has been an independent analyst for decades, with a technology focus that includes unified communications, cloud communications platforms, and AI applied to workplace productivity and customer engagement.

His problem space sits where AI meets communications: contact centres, speech technologies, customer experience, and how conversational interaction becomes measurable operational signal.

What they are known for and why they matter to enterprise AI leaders

Arnold is most useful when leaders need to understand AI in the context of customer interaction systems, not abstract “AI transformation”.

In contact centres and communications-heavy environments, AI isn’t being adopted because it’s trendy. It’s being adopted because organisations are bleeding value through missed interactions, inconsistent service, and poor visibility into what actually happens in calls, chats, and follow-ups.

That matters because enterprise AI success is often decided in the operational layer. If AI can’t improve response, consistency, and real-time decision support, it becomes another tool that teams tolerate instead of rely on.

Arnold’s analysis helps leaders translate AI capability into customer experience outcomes: what changes in behaviour, what becomes measurable, and what becomes automatable without losing quality.

Where their insights are most valuable

Arnold’s insights are strongest when organisations want to turn communications data into action, especially in environments where speed and service quality are revenue-linked.

A recent example is the Tech Transformed episode How AI and Analytics Are Transforming Automotive Call Tracking and Repair Orders. The episode frames a concrete operational gap, including the impact of unanswered dealership calls, and then looks at how AI-driven call monitoring and real-time analytics can help improve engagement, reduce missed opportunity, and support better decision-making.

That example doesn’t define his work as “automotive only”. It shows the broader pattern of AI in communications: capturing interaction, extracting signal, and giving teams the ability to intervene and improve outcomes while the work is happening.

His perspective tends to resonate with leaders responsible for customer experience, contact centre modernisation, and any AI initiative that depends on speech, conversation, or real-time human interaction as a key data source.

Check out Jon Arnold on LinkedIn.

Jonathan Care

Profile card of Jonathan Care, cybersecurity and AI analyst at KuppingerCole and former Gartner analyst, specialising in AI risk and identity security, based in Portugal.

Jonathan Care is a senior analyst at KuppingerCole Analysts AG and a former Gartner analyst. His background also includes hands-on industry experience across security engineering and fraud and risk research.

That long-term work in cybersecurity and fraud detection anchors his credibility. His problem space is enterprise security and data risk in a world where AI increases automation, expands access patterns, and creates new accountability questions for CISOs.

What they are known for and why they matter to enterprise AI leaders

Care is most useful when leaders are trying to reduce risk without becoming the department that slows everything down.

That focus matters because AI systems don’t live in isolation. They rely on data access across cloud platforms, SaaS tools, collaboration environments, and identity layers. If security and access control aren’t designed to match how the business actually works, AI adoption either gets blocked, or it slips through uncontrolled side channels.

His work consistently pulls security out of policy land and into operating reality: what data is actually being accessed, how permissions accumulate, how risk expands in practice, and how controls can be applied dynamically rather than through blanket restriction.

Where their insights are most valuable

Care’s insights are strongest where AI-related autonomy, data sprawl, and security accountability collide.

A good example is How CISOs Can Reduce Enterprise Data Risk Without Slowing the Business, which makes a very practical argument: most enterprise data is dormant, and securing everything equally is a losing strategy. The episode stresses permission hygiene, monitoring, and dynamic controls that reduce exposure without breaking workflows. It also explicitly connects that operating reality to the rise of agentic AI, where autonomous systems increasingly access data on behalf of users.
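For a rough sense of what permission hygiene can mean in practice, here’s a minimal sketch that flags grants nobody has exercised recently, so review effort goes to real exposure instead of securing everything equally. The record fields and the 90-day window are illustrative assumptions, not Care’s prescription.

```python
# Minimal sketch: flag permission grants that have not been exercised
# recently, so review effort targets real exposure rather than trying
# to secure all data equally. Field names and thresholds are illustrative.
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=90)  # assumption: a review window, not a standard

grants = [
    {"principal": "svc-report-bot", "dataset": "payroll", "last_used": datetime(2024, 1, 10)},
    {"principal": "jane.doe",       "dataset": "crm",     "last_used": datetime(2025, 6, 2)},
]

def stale_grants(grants, now=None):
    now = now or datetime.now()
    # A grant that exists but is never exercised is accumulated risk:
    # it widens what an attacker or a misbehaving agent could touch.
    return [g for g in grants if now - g["last_used"] > STALE_AFTER]

for g in stale_grants(grants):
    print(f"review: {g['principal']} -> {g['dataset']} (last used {g['last_used'].date()})")
```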

That example doesn’t make him an “agentic AI analyst” and nothing more. It shows why AI security is now inseparable from data security and identity reality. As organisations introduce AI-driven automation, access patterns change, and oversight has to evolve with them.

His perspective tends to resonate with CISOs, security leaders, and enterprise risk owners who need pragmatic control models that scale alongside AI adoption rather than fighting it.

Check out Jonathan Care on LinkedIn.

Kristen Kehrer

Profile card of Kristen Kehrer, host of the Mavens of Data podcast and co-author of Machine Learning Upgrade, based in Greater Boston, United States.

Kristen Kehrer operates at the intersection of hands-on AI building and practical education. She’s a live event producer and host at Maven Analytics, and her work spans computer vision, semantic search, vector databases, and chatbot building with mainstream LLM tooling.

Her problem space is trustworthy, practical AI: how people build working systems, how those systems earn trust, and how governance and data discipline make “AI at scale” possible.

What they are known for and why they matter to enterprise AI leaders

Kehrer is most useful when teams need grounded guidance on what “trustworthy AI” actually means once the demos stop.

In her writing on trustworthy AI at scale, she frames a key enterprise shift: moving from isolated tools and models to governed agent ecosystems that can reason, act, and adapt. She also highlights an enterprise blind spot that keeps recurring: unstructured data is rich, but it’s hard to use safely without lineage, governance, and policy enforcement.

That matters because enterprise adoption isn’t just an accuracy problem. It’s a trust problem. Leaders need to know where answers came from, whether data use complies with policy, and whether systems can be monitored without turning every interaction into an incident.
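As a toy illustration of that traceability point, here’s a sketch that gates retrieved documents on lineage and policy tags before they reach a model. The document schema and the tags are invented for illustration, not drawn from Kehrer’s work.

```python
# Minimal sketch: gate retrieved documents on lineage and policy tags
# before they reach a model, so answers stay traceable and compliant.
# The schema and tags below are invented for illustration.
docs = [
    {"id": "d1", "text": "Q3 churn summary", "lineage": "warehouse.churn_v4", "tags": {"internal"}},
    {"id": "d2", "text": "Pasted customer email", "lineage": None, "tags": {"pii"}},
]

BLOCKED_TAGS = {"pii"}  # assumption: org policy forbids PII in prompts

def usable(doc):
    # No lineage means no audit trail for where an answer came from.
    if not doc["lineage"]:
        return False
    return not (doc["tags"] & BLOCKED_TAGS)

context = [d for d in docs if usable(d)]
audit_log = [(d["id"], d["lineage"]) for d in context]  # who-used-what record
print(context)
print(audit_log)
```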

Her credibility is anchored in translating those issues into language that feels workable, not academic, while still taking governance seriously.

Where their insights are most valuable

Kehrer’s insights are strongest when organisations are trying to build AI that feels reliable, auditable, and safe enough to embed into daily work.

Across her recent themes, a few threads stand out: the shift from automation to orchestration, extending governance into new data types, and starting with use cases where the organisation has more control and faster feedback loops.

A good example is her podcast episode Responsible AI at the Federal Level: Leadership, Policy, and Data, which frames responsible AI through high-stakes public-sector realities. The focus points she highlights include lessons from AI and data strategy work across federal environments, the need for a centralised AI framework, leadership lessons, and the link between AI work and societal impact.

That example is one slice of a broader pattern: responsible AI is not just “ethics language”. It’s leadership design under consequence, and it forces clarity about governance, accountability, and what outcomes the system is allowed to optimise for.

Her perspective tends to resonate with teams trying to operationalise trustworthy AI, leaders building AI literacy alongside governance, and organisations working with unstructured data where traceability is non-negotiable.

Check out Kristen Kehrer on LinkedIn.

Mike Ferguson

Profile card of Mike Ferguson, CEO of Intelligent Business Strategies and industry analyst specialising in data management, analytics, and AI, based in Stockport, United Kingdom.

Mike Ferguson is a long-standing analyst and consultant in data management and analytics, and the CEO of Intelligent Business Strategies. He’s also closely tied to major industry event leadership through his role as conference chair for Big Data LDN.

His problem space is AI-ready data foundations at enterprise scale: architecture, governance, metadata, data products, and the practical trends that shape how organisations industrialise AI.

What they are known for and why they matter to enterprise AI leaders

Ferguson is most useful when leaders need to connect AI ambition back to the foundations that make AI dependable.

He consistently frames AI as something that changes data management itself. It’s not just “use the data platform to support AI”. It’s “data platforms are being reshaped by AI agents, orchestration, and new governance demands”.

That matters because many organisations are discovering that scaling AI isn’t limited by model availability. It’s limited by the ability to find trusted data, define meaning consistently, govern access, and make AI workflows repeatable.

His credibility is anchored in long-running, architecture-led analysis that ties trends to implementation reality, rather than treating AI as an overlay on yesterday’s operating model.

Where their insights are most valuable

Ferguson’s insights are strongest when organisations are trying to industrialise AI and avoid building fragile systems that collapse under scale.

In his trends recap drawn from his keynote speech at Big Data LDN, he identifies a very specific direction of travel: AI agents becoming a standard interface, agentic data management and analytics growing fast, and the merging of structured and unstructured data through AI search. He also emphasises knowledge graphs and enterprise ontology as context layers for AI, and highlights the growing importance of unified governance and shared metadata.

Those themes show up again in his focus on metadata as a “big data problem” in its own right, and the idea that data products and internal marketplaces will increasingly include not just datasets, but prompts, models, and AI agents as reusable enterprise assets.
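One way to picture that marketplace idea is a registry entry that treats prompts, models, and agents as first-class assets alongside datasets. The schema below is a hypothetical sketch, not Ferguson’s framework or any published standard.

```python
# Minimal sketch: an internal marketplace entry where prompts, models,
# and agents are registered as reusable assets alongside datasets.
# The schema is hypothetical, not a published standard.
from dataclasses import dataclass, field

@dataclass
class DataProduct:
    name: str
    kind: str                 # "dataset" | "prompt" | "model" | "agent"
    owner: str
    ontology_terms: list[str] = field(default_factory=list)  # shared meaning
    governance_policy: str = "default"

catalog = [
    DataProduct("customer_360", "dataset", "data-platform", ["customer"]),
    DataProduct("refund_triage_prompt", "prompt", "cx-team", ["refund", "order"]),
    DataProduct("churn_model_v2", "model", "ds-team", ["customer", "churn"]),
    DataProduct("invoice_agent", "agent", "finance-eng", ["invoice"]),
]

# Consumers discover assets through shared metadata rather than tribal knowledge.
matches = [p.name for p in catalog if "customer" in p.ontology_terms]
print(matches)
```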


As an example lens, his Big Data LDN framing is useful because it doesn’t pretend AI success is magical. It treats AI as something that forces organisations to raise their standards for data meaning, orchestration, governance, and repeatability.

His perspective tends to resonate with CDOs, enterprise architects, and data platform leaders who need to build foundations that let AI scale without turning reliability into a constant firefight.

Check out Mike Ferguson on LinkedIn.

Paige Roberts

Profile card of Paige Roberts, independent data industry analyst and global analytics thought leader at Strigid Insight, based in Hamilton, Texas, United States.

Paige Roberts has worked as an independent consultant in data management and analytics for decades, founded Strigid Insight, and has an extensive writing and publishing history. She brings a rare mix of long-term data platform experience and direct, no-nonsense analysis of what agentic AI means once it hits real enterprise systems.

Her problem space is agentic AI and the architecture that makes AI agents usable in production: latency, data location, data meaning, and the difference between a demo and an operational system.

What they are known for and why they matter to enterprise AI leaders

Roberts is most useful when leaders want a clear explanation of agentic AI that doesn’t drift into vague futurism.

In her writing, agentic AI isn’t described as “better chatbots”. It’s described as multi-step workflows where generation is only one step, and other steps include planning, revision, checking, testing, tool use, and parallel agent collaboration.

That matters because enterprise risk and value often sit in the steps around the generation. Planning and tool access create new control points. Checking and testing determine reliability. Workflow design determines whether the system can be trusted.
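To make that concrete, here’s a minimal sketch of a multi-step agent loop in which generation is only one step among plan, check, and revise. All of the helpers are stand-in stubs, not any particular framework’s API.

```python
# Minimal sketch of an agentic loop: plan, generate, check, revise.
# Generation is one step among several; the checking and revision steps
# are where reliability is won or lost. All helpers are stand-in stubs.
def plan(task):
    return [f"draft answer for: {task}", "verify against checks"]

def generate(step):
    return f"DRAFT({step})"          # stand-in for a model call

def check(output):
    # Stand-in validation: reject anything still marked as a draft.
    return "DRAFT" not in output

def revise(output):
    return output.replace("DRAFT", "REVISED")

def run_agent(task, max_rounds=3):
    steps = plan(task)               # planning creates a control point
    output = generate(steps[0])
    for _ in range(max_rounds):      # bounded retries keep behaviour auditable
        if check(output):
            return output
        output = revise(output)
    raise RuntimeError("agent could not produce a passing output")

print(run_agent("summarise Q3 incidents"))
```

Even in this toy form, the control points she describes are visible: the plan, the check, and the bounded retry loop are all places where an enterprise can attach policy, logging, and accountability.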

Her credibility is anchored in tying agentic capability back to engineering realities: what has to be true for response to be fast, data to be findable, and outcomes to be consistent enough for real work.

Where their insights are most valuable

Roberts’ insights are strongest when organisations are trying to build AI agents that work under real constraints: huge datasets, moving data, and users who don’t behave like test cases.

Her Medium piece Agentic AI is the New Internet gives a practical definition of agentic AI, then connects it to the future of software creation, including the idea that requirements quality becomes a core skill when AI can generate, test, and iterate code as part of normal workflow.

She also covers the infrastructure side. In Architecting a Modern Data Stack for AI Agents, she frames the core production challenge as fundamentals: scattered data, slow response, and trust in the retrieved information. She argues that if you want AI agents to respond fast at scale, you can’t ignore the data foundation and network latency realities.
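As a toy sketch of the fundamentals she names, here’s what a latency-conscious lookup might look like: serve from data placed near the agent, and flag fetches that blow the budget. The timings, stores, and threshold are invented for illustration.

```python
# Toy sketch: serve agents from a nearby cache when the remote fetch
# would blow the latency budget. Timings and stores are invented.
import time

LATENCY_BUDGET_S = 0.05   # assumption: an agent step must answer in 50 ms
cache = {"orders:123": "shipped"}

def fetch_remote(key):
    time.sleep(0.2)        # simulated cross-region round trip
    return f"remote:{key}"

def lookup(key):
    # Fast path: data placed close to the agent.
    if key in cache:
        return cache[key]
    start = time.monotonic()
    value = fetch_remote(key)
    if time.monotonic() - start > LATENCY_BUDGET_S:
        # Signal that placement, not the model, is the bottleneck.
        print(f"warn: {key} exceeded latency budget; consider replication")
    cache[key] = value
    return value

print(lookup("orders:123"))   # cache hit, fast
print(lookup("orders:456"))   # slow path, flags the placement problem
```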

Those examples don’t trap her into a single “message”. They show the broader lane: agentic AI becomes valuable when you treat it like an operational system, not a clever interface.

Her perspective tends to resonate with enterprise architects, platform leaders, and anyone responsible for making agentic AI reliable enough to ship into workflows that matter.

Check out Paige Roberts on LinkedIn.

Ravit Jain

Profile card of Ravit Jain, founder and host of The Ravit Show and data and AI community builder, based in San Francisco, California, United States.

Ravit Jain operates as a public-facing analyst and convener in the data and AI space. He’s the founder and host of The Ravit Show and has also been brought into advisor ecosystems where enterprise audiences expect signal rather than noise.

His problem space centres on making AI and data trends legible, especially around responsible AI, data ethics, and the practical realities leaders face as AI becomes normalised across work.

What they are known for and why they matter to enterprise AI leaders

Jain is most useful when organisations need a reality-led view of where AI and data trends are heading, and how leaders should interpret them without overreacting.

His value is strongly tied to convening the right conversations. That includes surfacing practical debate around governance, ethics, data quality, and the human impact of AI adoption, not just “what’s new”.

For enterprise leaders, that matters because responsible AI doesn’t live in a single team. It spans data governance, business measurement, workforce impact, and executive accountability, often at the same time.

Where their insights are most valuable

Jain’s insights are strongest when leaders need current, multi-lens discussion on responsible AI foundations: data ethics, governance, and the metrics that separate value from vanity.

A good example is the Meeting of the Minds episode Data Experts Question: Is Data Infrastructure Ready for Responsible AI? The discussion is framed around ethical issues in AI use, the need for data governance guidelines, and the role data quality plays in whether AI succeeds. It also highlights the need to measure AI value through KPIs that balance technical achievement with business results.

The same episode also captures recurring themes that keep showing up for enterprise leaders: regulatory pressure shaping governance, the risks introduced by generative AI, and workforce realities like retraining, AI literacy, and how people adapt to probabilistic systems.
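As a toy illustration of that KPI point, here’s a sketch of a scorecard that refuses to report technical wins without paired business results. The metric names and values are invented examples, not KPIs from the episode.

```python
# Toy sketch: a scorecard that refuses to report technical KPIs without
# paired business KPIs. Metric names and values are invented examples.
technical = {"model_accuracy": 0.91, "p95_latency_ms": 180}
business = {"handle_time_change_pct": -12.0, "csat_change_pts": 0.4}

def scorecard(technical, business):
    if not business:
        # Technical wins alone are vanity metrics in budget season.
        raise ValueError("no business KPIs paired with technical KPIs")
    return {**technical, **business}

print(scorecard(technical, business))
```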

That example doesn’t define him as “only responsible AI”. It shows how he operates: by pulling together signals from multiple expert lenses and translating them into leadership-relevant framing.

His perspective tends to resonate with enterprise leaders who want to track AI direction without falling into vendor-driven narratives, especially teams building governance and AI literacy at the same time as they scale adoption.

Check out Ravit Jain on LinkedIn.

Tom Croll

Profile card of Tom Croll, AI security leader and former Gartner analyst specialising in cloud security and CNAPP strategy, based in London, United Kingdom.

Tom Croll is the CTO of Secure Cloud Services and has made research contributions in cloud-native security, along with a broader focus that includes AI systems security and multi-cloud security concerns.

He brings credibility from deep cybersecurity and cloud security work, with an explicit focus on AI systems security and modern cloud operating realities.

His problem space is where AI adoption meets cloud complexity: multi-cloud fragmentation, interoperability, latency, security posture, and the governance implications of operating AI across distributed environments.

What they are known for and why they matter to enterprise AI leaders

Croll is most useful when organisations are trying to scale AI in environments where cloud architecture and security decisions are inseparable.

Multi-cloud and AI sound like separate programmes until you try to operationalise both. AI needs unified, reliable data access and predictable performance. Multi-cloud often introduces silos, inconsistent interfaces, and policy drift. Leaders need someone who can connect those dots without pretending it’s easy.

That matters because organisations often discover too late that “AI at scale” depends on the boring stuff: data placement, interoperability, latency, and consistent security controls across environments.

Croll’s lens helps leaders see how cloud decisions become AI decisions, and how security becomes part of performance and reliability rather than a separate conversation.

Where their insights are most valuable

Croll’s insights are strongest when organisations are trying to make AI work across fragmented cloud estates without losing control of data and risk.

A strong example is the Tech Transformed episode Multi-Cloud & AI: Are You Ready for the Next Frontier? The discussion highlights data fragmentation as a major blocker, along with interoperability and tooling issues caused by inconsistent APIs and lack of standardisation. It also points to performance realities like latency sensitivity for inference, and the compounding complexity of compliance across jurisdictions.

Just as importantly, it frames AI as both driver and remedy: using AI for intelligent orchestration, workload placement, policy automation, and abstracting complexity through AI agents so that operators and developers can move faster without guesswork.
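As a toy sketch of that orchestration idea, here’s a placement helper that treats compliance as a hard constraint and latency as the optimisation. The regions, rules, and numbers are invented for illustration, not drawn from the episode.

```python
# Toy sketch: policy-aware placement for an AI inference workload.
# Residency constraints are filtered first, then latency decides.
# Regions, rules, and numbers are invented for illustration.
regions = [
    {"name": "eu-west",  "cloud": "A", "jurisdiction": "EU", "latency_ms": 28},
    {"name": "us-east",  "cloud": "B", "jurisdiction": "US", "latency_ms": 12},
    {"name": "eu-north", "cloud": "B", "jurisdiction": "EU", "latency_ms": 35},
]

def place(workload, regions):
    # Compliance is a hard constraint; performance is an optimisation.
    allowed = [r for r in regions
               if r["jurisdiction"] in workload["allowed_jurisdictions"]]
    if not allowed:
        raise ValueError("no region satisfies residency policy")
    return min(allowed, key=lambda r: r["latency_ms"])

workload = {"name": "inference", "allowed_jurisdictions": {"EU"}}
print(place(workload, regions))   # -> eu-west: compliant and fastest
```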

That episode is one snapshot. The broader pattern is that AI security, cloud architecture, and data coherence will keep converging in 2026. Leaders who treat them as separate domains will keep paying for that separation in rework and risk.

His perspective tends to resonate with cloud, security, and data leaders trying to operationalise AI across multi-cloud environments while keeping governance, performance, and accountability intact.

Check out Tom Croll on LinkedIn.

Final Thoughts: Trusted AI Insight Is a Strategic Advantage

Enterprise AI in 2026 isn’t about proving AI works. It’s about proving it holds up under scrutiny.

That scrutiny shows up in governance, reliability, and value. It shows up when teams need to defend why a system made a call, what data it used, how risk was controlled, and whether the outcome was worth the cost.

Trusted analysts don’t replace internal strategy. They sharpen it. They give leaders language for trade-offs that are otherwise hard to explain, and they help organisations avoid the kind of structural mistakes that only become obvious when it’s painful to change course.

The organisations that win with AI over the next few years won’t be the ones that move the fastest in a straight line. They’ll be the ones that build AI capability with enough operating discipline that speed doesn’t come at the cost of trust. And EM360Tech will be here with the insights that help you do exactly that.