Podcast: Tech Transformed

Guests: Maxim Fateev, Co-Founder and CTO, Temporal Technologies and Cornelia Davis, Developer Advocate, Temporal Technologies

Host: Kevin Petrie, VP of Research at BARC

Artificial Intelligence (AI) models have broken new ground over the last three years, with platforms like OpenAI, Anthropic, and Google's Gemini racing to boost capabilities month by month. However, for many enterprises, the main challenge is not creating AI prototypes; it's ensuring they can reliably support real business processes.

In a recent episode of the Tech Transformed podcast, Kevin Petrie, VP of Research at BARC, hosted a discussion with Maxim Fateev, Co-Founder and CTO of Temporal Technologies, and Cornelia Davis, Developer Advocate at Temporal Technologies. They talked about why enterprises find it hard to transition AI from experimentation to production and how infrastructure must change to support autonomous systems.

Why AI Demos Break in the Real World

According to Davis, many organisations make a common mistake: they focus on the "happy path" during experiments and overlook real-world operational challenges. “We have always ignored the non-functional requirements until we go to production, at our peril,” Davis said. “A lot of our experimentation is so focused on the models that we forget about the non-functional requirements.”

This means developers often prioritise model performance but neglect reliability, scaling, and system resilience. Agent frameworks used in experiments—usually lightweight Python or TypeScript libraries—add to the issue.

“What you’re really building is a highly distributed system that’s calling Large Language Models (LLMs) that will be rate-limited… networks are going to go down,” Davis explained. “When we move into production, we haven’t considered scale or instability.”

As enterprises expand AI into their workflows, these overlooked details become critical. A single outage, rate limit, or infrastructure failure can disrupt a complex workflow that involves multiple AI steps.

Also Watch: Developer Productivity 5X to 10X: Is Durable Execution the Answer to AI Orchestration Challenges?

What Risks Are Surfacing with the Rise of Agentic Systems?

The transition from simple AI workflows to autonomous agents adds a new layer of complexity. Traditional AI applications have predictable flows—such as summarising documents, tagging data, or creating recommendations. In contrast, agentic systems choose tools and decide on actions dynamically.

“When we move from non-agentic to agentic, we introduce unpredictability,” Davis said. “The tools and the order they run in are unpredictable. Whether we go through the agentic loop once or a hundred times is unpredictable.”

Such unpredictability creates new governance and compliance challenges, especially in regulated industries. “Enterprises are still responsible for predictable outcomes,” Davis noted. “We need stronger audit trails to understand why the agent made the decisions it did.”

For enterprises, this means AI systems must ensure traceability, accountability, and compliance, even when decision paths differ from one interaction to another.
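The audit-trail requirement Davis raises can be illustrated with a short sketch (plain Python, not Temporal's API; all class and tool names here are hypothetical): even when the agent's tool choices are unpredictable, recording each decision as a structured entry leaves a deterministic trail that compliance teams can review.

```python
import json
import time
from dataclasses import dataclass, field, asdict

@dataclass
class AuditEntry:
    """One agent decision: which tool ran, with what inputs, and what it returned."""
    step: int
    tool: str
    inputs: dict
    output: str
    timestamp: float = field(default_factory=time.time)

class AgentAuditLog:
    """Append-only record of an agent's tool calls, for later review."""
    def __init__(self):
        self.entries = []

    def record(self, tool, inputs, output):
        self.entries.append(AuditEntry(len(self.entries), tool, inputs, output))

    def to_json(self):
        """Serialise the trail so it can be stored or shipped to an audit system."""
        return json.dumps([asdict(e) for e in self.entries], indent=2)

# The agentic loop may vary run to run, but every run leaves the same kind of trail.
log = AgentAuditLog()
log.record("search_docs", {"query": "refund policy"}, "Policy doc v3 found")
log.record("summarize", {"doc": "Policy doc v3"}, "Refunds allowed within 30 days")
print(log.to_json())
```

In a real system the log would be persisted durably rather than held in memory, but the principle is the same: the decision path is reconstructable after the fact.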

Why Is Temporal's Durable Execution the New Foundation for Enterprise AI?

Fateev argues that to manage these emerging risks, enterprises need a new architectural layer focused on reliability. His concept, “Durable Execution,” aims to ensure that complex workflows keep running even when infrastructure fails.

“You write code as if failures don’t exist,” Fateev explained. “If a process crashes, we recover all the state and continue executing.” In practical terms, Durable Execution allows long-running AI workflows to survive interruptions—from network outages to system crashes—without losing progress or data.
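The idea Fateev describes can be sketched conceptually (this is a simplified illustration of durable execution, not Temporal's actual SDK; the checkpoint file and step names are assumptions): persist state after each completed step, so a restarted process resumes where it left off rather than redoing work.

```python
import json
import os

STATE_FILE = "workflow_state.json"  # hypothetical checkpoint location

def load_state():
    """Recover progress from the last checkpoint, if one exists."""
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)
    return {"completed": [], "results": {}}

def checkpoint(state):
    """Persist progress so a crash or restart does not lose completed work."""
    with open(STATE_FILE, "w") as f:
        json.dump(state, f)

def run_workflow(steps):
    """Execute steps in order, skipping any that already completed."""
    state = load_state()
    for name, fn in steps:
        if name in state["completed"]:
            continue  # finished before a crash; don't repeat the side effect
        state["results"][name] = fn()
        state["completed"].append(name)
        checkpoint(state)
    return state["results"]

# Example: a three-step AI workflow. If the process dies after "extract",
# a rerun resumes at "summarize" instead of calling the model again.
steps = [
    ("extract", lambda: "raw text"),
    ("summarize", lambda: "summary"),
    ("file_order", lambda: "order #1 submitted"),
]
results = run_workflow(steps)
print(results)
```

A production system like Temporal goes much further (event-sourced history, deterministic replay, distributed workers), but the checkpoint-and-resume pattern is the core of why long-running workflows can survive interruptions without losing progress.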

This is essential as agents start interacting with real systems and taking real actions. “The moment agents start acting on the external world—changing files, submitting orders—you absolutely don’t want those things to get lost,” Fateev said.

The Temporal co-founder further emphasised that enterprise AI will not completely replace traditional software systems.

“You will always have deterministic code,” he said. “You can’t imagine banks dynamically deciding what a money transfer means.”

Instead, the future architecture will combine deterministic software with agents that interact through controlled tools and reliable communication layers.

Also Watch: How Do You Make AI Agents Reliable at Scale?

Key Takeaways

  • AI projects fail in production when non-functional requirements are ignored.
  • Agentic systems bring unpredictability, making governance, traceability, and auditability essential.
  • Lightweight experimentation frameworks aren't suited for enterprise workloads.
  • Durable execution enables reliable AI workflows, ensuring processes continue despite infrastructure failures.
  • Enterprise AI will blend deterministic software with agents.

Chapters

  • 00:00 Introduction to AI's Impact on Business
  • 03:53 Challenges in Integrating AI into Business Workflows
  • 13:00 Understanding Non-Functional Requirements in AI
  • 19:14 The Role of Orchestration in AI Systems
  • 24:26 Exploring Durable Execution in AI Workflows
  • 30:28 Future Architectures for Autonomous AI Systems
  • 36:05 Key Takeaways for Executives in AI Implementation

For more information, please visit em360tech.com and temporal.io.

To learn more about Temporal and Durable Execution, follow:

Temporal LinkedIn: Temporal Technologies

Temporal X: @Temporalio

Temporal YouTube: @Temporalio

EM360Tech YouTube: @enterprisemanagement360

EM360Tech LinkedIn: @EM360Tech

EM360Tech X: @EM360Tech