The EM360Tech Q1 2026 Impact Index recognises the content that created meaningful enterprise impact across our podcasts, campaigns, analysts, and wider thought leadership ecosystem.
The Vanguard Award has a specific role in that Index. It recognises the most innovative and forward-thinking podcast of the quarter, with emphasis on the strength of the insight, the quality of the perspective, and the usefulness of the conversation. It isn’t just about reach. It’s about whether the episode moved the discussion forward.
For Q1 2026, the Vanguard Award goes to Vespa.ai for its episode of Don’t Panic, It’s Just Data, titled How To Scale AI In Digital Commerce Effectively.
The episode stood out because it treated AI in digital commerce as an architecture problem, not a marketing slogan.
The Conversation Behind the Win
The winning episode focused on one of the most difficult problems in modern e-commerce: how to make AI-driven search, ranking, and personalisation work at scale.
Hosted by Dana Gardner, Principal Analyst at Interarbor Solutions, the conversation featured Jürgen Obermann, Senior GTM Leader EMEA at Vespa.ai, and Piotr Kobziakowski, Senior Principal Solutions Architect at Vespa.ai.
Together, they looked at why digital commerce teams often struggle to turn AI ambition into real customer impact. Most teams understand the value of better search and personalisation. They know customers expect fast, relevant, adaptive experiences. But the systems behind those experiences are often fragmented, fragile, and slow to change.
The discussion covered several practical blockers, including legacy search stacks, disconnected recommendation systems, duplicated data, slow experimentation, and the operational cost of running search, vector retrieval, ranking, and inference across separate platforms.
That gave the episode its strength. It didn’t treat personalisation as a vague customer experience goal. It showed what has to change underneath the platform before real-time relevance can work.
Why the Topic Mattered
Digital commerce teams are under pressure to make every customer interaction feel more relevant. Search results can’t just match keywords anymore. Product rankings can’t rely only on static rules. Recommendations can’t depend on stale nightly batches when customer behaviour changes in the moment.
That pressure is growing because AI has raised expectations on both sides of the business.
Customers expect systems to understand intent more naturally. Business leaders expect AI investments to improve revenue, conversion, and customer experience. Technical teams are expected to deliver those improvements without breaking the systems the business already depends on.
That’s where the real problem sits.
Many e-commerce platforms were built over years of incremental decisions. A search tool here. A recommendation engine there. A feature store somewhere else. A separate inference platform bolted on later because the AI roadmap needed one. Each piece may make sense on its own, but together they create latency, duplicated data, and operational drag.
For enterprise leaders, this matters because fragmented architecture limits business agility. If a simple relevance change takes months to deliver, the business can’t respond quickly to seasonal demand, customer behaviour, or competitive pressure.
AI doesn’t remove that problem. In many cases, it exposes it.
Where the Conversation Moved Beyond the Expected
Most conversations about AI in commerce stay close to familiar ground. They talk about personalisation, better recommendations, smarter search, and improved customer journeys. All of that matters, but it can become a little weightless if no one explains what has to happen behind the scenes.
This episode went deeper.
Instead of positioning AI as something organisations can simply add to existing systems, the discussion challenged that assumption. Jürgen and Piotr made it clear that many legacy environments aren’t struggling because teams lack ideas. They’re struggling because the architecture can’t support the speed, flexibility, and data access AI requires.
Piotr’s explanation of tensors was a strong example of this. Vectors are often discussed in AI search because they help systems understand similarity and meaning. Tensors go further. In simple terms, they can represent richer relationships, such as different user preferences across categories, behaviours, or product types.
That matters because a customer doesn’t have one flat preference profile. Someone may prefer one kind of car, another kind of clothing, and a completely different set of priorities when buying technology. A more advanced system needs to understand those patterns without forcing everything into a single, oversimplified signal.
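The idea of per-category preferences can be sketched in a few lines. This is a minimal NumPy illustration of the concept, not Vespa's actual tensor API: the categories, embedding dimension, and all the numbers are hypothetical, chosen only to show how a tensor keeps one preference slice per category instead of one flat vector.

```python
import numpy as np

# Hypothetical categories and embedding dimension (illustrative only).
categories = ["cars", "clothing", "technology"]
dim = 4

# A tensor keeps one preference vector per category:
# shape (num_categories, dim), rather than a single flat profile.
preference_tensor = np.array([
    [0.9, 0.1, 0.0, 0.2],   # cars: one taste profile
    [0.2, 0.8, 0.3, 0.0],   # clothing: a different one entirely
    [0.0, 0.3, 0.9, 0.5],   # technology: yet another signal
])

def score(product_embedding: np.ndarray, category: str) -> float:
    """Score a product against the preference slice for its own category."""
    idx = categories.index(category)
    return float(preference_tensor[idx] @ product_embedding)

# The same product embedding scores differently depending on category context.
suv = np.array([1.0, 0.0, 0.0, 0.0])
print(score(suv, "cars"))        # scored against the cars slice
print(score(suv, "technology"))  # scored against the technology slice
```

A flat profile would average those three rows into one vector, blurring exactly the distinctions the episode argued a modern system needs to preserve.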
The conversation also reframed search itself. Search wasn’t presented as a standalone function. It was treated as part of a broader decision system that includes product data, user signals, business logic, machine learning models, and real-time ranking.
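That framing can be made concrete with a small sketch. Everything here is hypothetical (the signal names, the weights, the stand-in model); the point is only the shape of the idea: the final ranking is one decision over lexical match, semantic retrieval, business logic, and a real-time user signal, rather than the output of a standalone search box.

```python
from dataclasses import dataclass

@dataclass
class Product:
    name: str
    text_score: float      # lexical/keyword match from the search layer
    vector_score: float    # semantic similarity from embedding retrieval
    margin_boost: float    # business logic, e.g. promote high-margin items

def user_affinity(product: Product) -> float:
    """Stand-in for a real-time ML model over user behaviour signals."""
    return 0.5  # constant here; a real model would use live features

def rank(products: list[Product]) -> list[Product]:
    # One ranking expression over all signals, evaluated per query.
    def final_score(p: Product) -> float:
        return (0.4 * p.text_score
                + 0.4 * p.vector_score
                + 0.1 * p.margin_boost
                + 0.1 * user_affinity(p))
    return sorted(products, key=final_score, reverse=True)

results = rank([
    Product("keyword hit", text_score=0.9, vector_score=0.2, margin_boost=0.1),
    Product("semantic hit", text_score=0.3, vector_score=0.95, margin_boost=0.4),
])
print([p.name for p in results])
```

When these signals live in separate systems, a change to any one weight means coordinating several platforms; when they sit in one ranking expression, it is a single, testable edit.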
That’s the difference between adding AI features and building an AI-native commerce platform.
The Enterprise Takeaway
The main takeaway for enterprise teams is simple: AI in digital commerce won’t scale properly if the underlying systems stay disconnected.
Technical leaders need to look beyond individual tools and ask how search, ranking, recommendation, personalisation, and inference work together. If every capability depends on a separate system, every improvement becomes slower, riskier, and harder to test.
That doesn’t mean organisations should rip out legacy systems overnight. The episode made a more practical case for phased migration. Start where value can be proven. Build from there. Use personalisation in category pages as an entry point. Then extend into search once the data model and ranking logic are stronger.
That’s a useful message because it respects the reality of enterprise environments. Large e-commerce teams can’t afford reckless transformation. But they also can’t keep treating AI as a layer that sits politely on top of systems that were never built for it.
The shift has to be architectural.
For business leaders, the lesson is just as important. Asking teams to “do AI” isn’t enough. The better question is whether the organisation has the data access, platform flexibility, and operational model needed to make AI useful in real time.
AI only creates value when the system around it can act on what it learns.
Final Thoughts: The Role of the Vanguard Award in the Impact Index
The Vanguard Award plays an important role in the Q1 2026 Impact Index because it recognises a particular kind of enterprise impact.
Some awards recognise scale. Some recognise analyst influence. Some recognise campaign strength. The Vanguard Award recognises a conversation that shifts how a topic is understood.
Vespa.ai earned that recognition by bringing clarity to a problem that’s often buried under AI language. The episode showed that better digital commerce experiences don’t begin with another feature announcement. They begin with systems that can support real-time relevance without slowing the business down.
That’s the kind of thinking the Impact Index is built to recognise.
As enterprise AI matures, the strongest conversations won’t be the loudest ones. They’ll be the ones that help leaders understand what actually has to change.