Based on the number of coding courses I've started over the years, I should be a master developer by now. I learned the structures, the practices, the syntax, but I never made it much past the 50-line mark before retreating to the comfort of SQL and data. I understood the principles better than I could execute them.


Then AI coding arrived, and it felt like being handed a superpower. Suddenly I'm building things with an ease I could only have dreamed of just months ago. But despite that ease, the experience makes the fundamentals of coding feel more important, not less. The quality of what AI produces depends directly on how well you apply structure, naming conventions, testing, and version control. The people who will get the most lift out of AI coding tools are the ones who know enough to guide them.



That same dynamic is playing out across the data and analytics landscape.


"As-code" architectures have been well established among technical practitioners for years, but they struggled to achieve broader adoption. Most data professionals aren't developers, and asking them to write and maintain code felt like an unnecessary step away from the self-service tools already in place.


When no-code was the answer

For years, the dominant narrative was that democratization meant removing code from the workflow. Self-service BI platforms, visual data prep tools, and drag-and-drop pipeline builders all sought to enable business users to operate with minimal training.


But along with self-service adoption came more ungoverned metrics, inconsistent definitions, and shadow reporting. Organizations discovered that tools optimized for ease of use were difficult to audit, test, or govern at scale. Self-service delivered speed but sacrificed discipline. Governance failures in no-code environments became impossible to ignore.
Meanwhile, dbt grew from niche to mainstream by arguing that SQL and version control weren't barriers to democratization but prerequisites for trust.


Then LLMs arrived and changed what "easy" means. If AI can write and reason about code, the argument for removing code from the workflow disappears. Instead, the argument becomes: keep code as the foundation, and use AI to make it accessible.
The "as-code" resurgence is a recognition that declarative, version-controllable structures were always the right foundation. What changed is who (and what) writes them.


AI changes the equation


When agents can generate, edit, and reason about code, the accessibility barrier drops. But the structural requirements don't. If anything, they intensify.


Infrastructure-as-code established the precedent: if you want systems that are repeatable, auditable, and scalable, you define them in text. That logic now applies to nearly every layer of the modern data platform: transformation (dbt, SQLMesh), observability (Elementary, Soda), analytics (Hex, Evidence), semantic layers (Cube, dbt Semantic Layer), and orchestration (Dagster).


As AI moves into production workflows, systems need to be expressible in structured, machine-interpretable form. If they aren't, they can't participate in automated pipelines, agent-driven operations, or governed decision-making.


Large language models are strong at generating and editing structured syntax, following declarative constraints, and operating within explicit schemas. They are weak at navigating opaque GUIs and interpreting undocumented workflows.
This limitation is baked into how the models work. LLMs operate on text. Systems defined in text are inherently more AI-compatible than systems defined by clicks in a UI. If your analytics platform, your governance rules, or your data quality expectations live only inside a GUI, they are invisible to the AI systems you are trying to deploy.
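To make that concrete, here is a sketch of what data quality expectations look like when they live in text rather than behind clicks, using dbt's schema test syntax (the model and column names here are hypothetical):

```yaml
# models/schema.yml -- a hypothetical dbt model with declarative tests.
# Because these expectations are plain text, they can be version-controlled,
# diffed, audited, and read (or written) by an AI agent.
version: 2

models:
  - name: orders
    description: One row per customer order.
    columns:
      - name: order_id
        tests:
          - unique
          - not_null
      - name: status
        tests:
          - accepted_values:
              values: ['placed', 'shipped', 'returned']
```

The same rules expressed as checkboxes in a GUI would be real to the human who clicked them, but invisible to any system that operates on text.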


The "as-code" structure becomes more important when you consider the industry is heading toward agent-driven operations. AI agents need to know what they're allowed to do, what they've done, and (perhaps most critically) how to undo it if they got it wrong.


Data trust requires ownership, versioning, quality thresholds, reproducible transformations, and observable lineage. Declarative systems provide all of these naturally. They represent the only medium that gives machines, and the humans overseeing them, the transparency they require. It’s how the guardrails are built.
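A minimal sketch of what that reversibility looks like in practice, assuming definitions live in a git repository (the file name, queries, and commit messages are illustrative): an agent's edit is just a commit, so undoing it is a first-class, auditable operation.

```shell
# Sketch: when a transformation is defined in text and versioned in git,
# an agent's change can be inspected and rolled back like any other commit.
git init -q demo && cd demo
git config user.email "agent@example.com" && git config user.name "agent"

echo "select * from raw.orders" > orders.sql
git add orders.sql && git commit -qm "baseline model"

echo "select * from raw.orders_broken" > orders.sql
git add orders.sql && git commit -qm "agent edit"

git revert --no-edit HEAD   # undo the agent's change, with a paper trail
cat orders.sql              # back to the baseline query
```

Nothing equivalent exists for a change made by clicking through a GUI: there is no diff to review and no commit to revert.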


Looking ahead


Organizations still want business-friendly tools and self-service. But AI-native systems require structured definitions, machine-readable constraints, and version control. These are prerequisites for safe, governed AI integration.
The winning vendors will resolve this by hiding code from casual users while preserving it underneath, exposing programmable surfaces for AI and technical users. Vendors who treat "no-code" and "as-code" as contradictory will lose to those who treat them as complementary layers of the same platform.
Declarative architectures are becoming AI control surfaces: the place where intent is expressed, constraints are enforced, and trust is verified. This is true across the full data lifecycle: transformation, quality, observability, governance, analytics, and increasingly the agents themselves.
This is an architectural constraint that enterprises are already hitting. And increasingly, it's the line between data systems that can participate in AI-driven workflows and those that can't.