Podcast: Tech Transformed Podcast

Guest: Manesh Tailor, EMEA Field CTO, New Relic 

Host: Shubhangi Dua, B2B Tech Journalist, EM360Tech

AI-driven development has become pervasive, with vibe-coding growing more common and accelerating innovation at an unprecedented rate. This speed, however, is also leading to a substantial increase in costly outages. Many organisations do not fully grasp the repercussions until their customers are affected.

In this episode of the Tech Transformed Podcast, EM360Tech’s Podcast Producer and B2B Tech Journalist, Shubhangi Dua, spoke with Manesh Tailor, EMEA Field CTO at New Relic, about how vibe-coding (AI-generated code), rapid prototyping, and a relentless focus on speed create dangerous gaps. They also discussed why full-stack observability is now crucial for operational resilience in 2026 and beyond.

AI Vibe-Coding Prioritises Speed over Stability

AI has changed how software is built. Problems are solved faster, prototypes are created in hours, and proofs-of-concept (POCs) swiftly reach production. But this speed comes with drawbacks.

“These prototypes, these POCs, make it to production very readily,” Tailor explained. “Because they work—and they work very quickly.”

In the past, the time needed to design and implement a solution served as a natural filter. That filter has now disappeared.

Tailor tells Dua: “The problem occurs, the solution is quick, and these things get out into production super, super fast. Now you’ve got something that wasn’t necessarily designed well.”

The outcome is that the new systems work but do not scale. They lack operational resilience and greatly increase the cognitive load on engineering teams.

New Relic's research indicates that in EMEA alone:

  • The median annual cost of high-impact IT outages for EMEA businesses is $102 million
  • Downtime costs EMEA businesses an average of $2 million per hour
  • More than a third (37%) of EMEA businesses experience high-impact outages weekly or more often

Essentially, AI-driven development heightens risks and increases blind spots. “There are unrealised problems that take longer to solve—and they occur more often,” Tailor noted. This is because many AI-generated solutions overlook operability, scaling, or long-term maintenance.

Modern architectures were already complex before AI came along. Microservices, SaaS dependencies, and distributed systems scatter visibility across the stack.

“We’ve got more solutions, more technology, more unknowns, all moving faster,” he tells Dua. “That’s generated more data, more noise—and more blind spots.”

Traditional monitoring tools were built for known issues—predefined components, predictable dependencies, and static systems. “Monitoring was about what you already understood,” Tailor explained. “Observability is about the unknown unknowns.”

AI-generated code complicates the situation because teams often lack detailed knowledge of how that code was created, how components interact, or how dependencies change over time.

This is where full-stack observability becomes essential—not as a standalone tool, but as a coordinated capability that connects signals across applications, infrastructure, data, and AI systems in real time.
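
To make that idea concrete, here is a minimal sketch of what instrumenting a service for observability can look like, using the open-source OpenTelemetry SDK for Python (a common instrumentation standard that platforms such as New Relic can ingest). The service name, span names, and attributes are illustrative assumptions, not details from the episode, and a real deployment would export telemetry to a backend rather than the console.

```python
# Minimal OpenTelemetry tracing sketch (illustrative; names are assumptions).
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Tag every signal with the emitting service so traces, metrics, and logs
# from different layers of the stack can be correlated later.
provider = TracerProvider(resource=Resource.create({"service.name": "checkout-api"}))
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

def handle_checkout(order_id: str) -> None:
    # One span per request; attributes record the dependencies involved,
    # so a single incident can be followed across application and infrastructure.
    with tracer.start_as_current_span("checkout") as span:
        span.set_attribute("order.id", order_id)
        span.set_attribute("peer.service", "payments-gateway")  # assumed downstream dependency
        # ... business logic would run here ...

if __name__ == "__main__":
    handle_checkout("order-123")
```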

Also Watch: How Do AI and Observability Redefine Application Performance?

Reactive to Proactive: The Role of AI in Observability

Ironically, the same AI that increases complexity is also necessary to manage it. According to New Relic data, 96 per cent of organisations plan to adopt AI monitoring and 84 per cent plan to implement AIOps by 2028.

However, Tailor stresses that success relies on using AI to enhance—rather than replace—human expertise. “We have to leverage AI to establish baselines much faster,” he said. “But humans still bring experience and judgment that machines don’t have.”

AI allows teams to shift from responding to known patterns to proactively spotting anomalies before they turn into customer-facing incidents.
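
As a rough illustration of that shift (a generic sketch of the technique, not New Relic’s algorithm), an anomaly detector can learn a baseline from recent telemetry and flag deviations before they become customer-facing. The window size, threshold, and latency values below are assumptions made for the example.

```python
# Sketch of "learn a baseline, then flag anomalies" (illustrative values only).
from collections import deque
from statistics import mean, stdev

class LatencyBaseline:
    """Rolling baseline over the most recent samples."""

    def __init__(self, window: int = 100, threshold_sigmas: float = 3.0):
        self.samples = deque(maxlen=window)
        self.threshold_sigmas = threshold_sigmas

    def observe(self, latency_ms: float) -> bool:
        """Record a sample and return True if it deviates from the baseline."""
        anomalous = False
        if len(self.samples) >= 10:  # need enough history to form a baseline
            mu, sigma = mean(self.samples), stdev(self.samples)
            if sigma > 0 and abs(latency_ms - mu) > self.threshold_sigmas * sigma:
                anomalous = True
        self.samples.append(latency_ms)
        return anomalous

# Example: steady ~120 ms latencies, then a spike worth surfacing proactively
baseline = LatencyBaseline()
for value in [118, 122, 119, 121, 120, 117, 123, 119, 120, 122, 480]:
    if baseline.observe(value):
        print(f"Anomaly: {value} ms deviates from the learned baseline")
```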

Beyond uptime and performance, observability is becoming a regulatory requirement. “If it’s not observed, then it’s rogue,” Tailor warned.

Emerging requirements such as the EU AI Act and the ISO 42001 standard will require organisations to demonstrate visibility into AI systems, decision-making processes, and operational behaviour. “You won’t be allowed to operate AI solutions without the right level of observability,” he added.
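
What that visibility might look like in practice is sketched below: a hypothetical audit wrapper that records which model produced a decision, what it was asked, what it returned, and how long it took. The function names and fields are assumptions for illustration, not a compliance recipe.

```python
# Hypothetical audit trail for AI decisions (fields are illustrative assumptions).
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("ai.audit")

def audited_inference(model_name: str, prompt: str, run_model) -> str:
    """Run `run_model(prompt)` and emit a structured, reviewable audit record."""
    started = time.time()
    output = run_model(prompt)
    audit_log.info(json.dumps({
        "decision_id": str(uuid.uuid4()),   # unique id for later review
        "model": model_name,
        "prompt": prompt,
        "output": output,
        "latency_ms": round((time.time() - started) * 1000, 1),
    }))
    return output

# Hypothetical model call, stubbed out for demonstration
result = audited_inference("demo-model-v1", "Approve this refund?", lambda p: "approved")
```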

The 2026 Takeaway: Observability is Essential for AI

As AI-driven development becomes the norm, Tailor’s message to CIOs, CTOs, and CDOs is: “Observability isn’t an option. Without it, your AI strategy simply won’t work.”

Organisations that neglect to invest in centralised, full-stack observability risk more than outages—they risk compliance failures, security issues, and rising operational costs.

“Otherwise,” Tailor stated, “you will limit the ability to benefit from your AI strategy.”

To learn more, visit NewRelic.com or listen to the full episode of the Tech Transformed podcast at EM360Tech.com.

Also Watch: How Can AI Bridge the Gap from Observability to Understandability?

Takeaways

  • If you don't get your observability house in order, all the grand plans with AI may be at risk.
  • Speed has been favoured over good governance and engineering standards.
  • Observability is about understanding the relationship between components, not just monitoring known issues.
  • AI can help establish baselines faster in a rapidly changing environment.
  • Without observability, you can't make your AI strategy work.

Chapters

  • 00:00 Introduction to AI and Observability
  • 01:11 The Risks of Rapid Software Development
  • 04:21 Understanding the Cost of Outages
  • 06:30 Blind Spots in AI-Driven Systems
  • 11:29 Transitioning to Full-Stack Observability
  • 13:58 Moving from Reactive to Proactive Monitoring
  • 18:54 Real-World Applications of AI Monitoring
  • 19:51 The Future of AI and Observability

#Observability #AIOps #AIDrivenDevelopment #FullStackObservability #ITOutages #VibeCoding #AIinProduction #DevOps #NewRelic #TechPodcast