
Infrastructure as Code is gone. Infrastructure as Conversation is here

While this is not news in itself, since natural language has been used to generate Infrastructure as Code for the last 3+ years, the title reflects a reality taking shape alongside, and in sync with, the evolution of foundation models. It's true that foundation models, specifically Large Language Models (LLMs), including the recent reasoning-focused LLMs, are still not quite ready to produce production-ready code on their own. However, the AI coding assistant industry has improved output quality dramatically in recent months by using several hybrid techniques: augmenting foundation models with fine-tuned, domain-specific or domain-adapted models, and orchestrating workflows that leverage a plethora of tools. Some scenarios even combine non-Transformer AI architectures with custom machine learning (ML) workflows for training and inference. As more effort is devoted to R&D rather than hype, we're seeing incredible progress, including great results from our own R&D lab, which we've been showcasing in our products.

Infrastructure as Code (IaC) tools and their underlying languages, or structured formats (like JSON and YAML), are an area that is very well suited to AI coding assistants: they are much simpler and smaller than general-purpose programming languages, and their low-code, declarative style is far easier to generate reliably than full-fledged programming or scripting languages.
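To make that concrete, here is a minimal Bicep sketch (the resource name, parameters and API version are purely illustrative) showing how little text it takes to declare a complete, deployable resource. This compactness and regularity is exactly what makes IaC formats a comfortable target for LLMs:

```bicep
// main.bicep: a complete, deployable unit of IaC in roughly a dozen lines.
param location string = resourceGroup().location
param storageName string = 'stdemo${uniqueString(resourceGroup().id)}' // illustrative name

resource storage 'Microsoft.Storage/storageAccounts@2023-01-01' = {
  name: storageName
  location: location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}

output blobEndpoint string = storage.properties.primaryEndpoints.blob
```

Achieving the same result in a general-purpose language through a cloud SDK would require considerably more code, plus error handling, which is why these declarative formats are lower-hanging fruit for AI generation.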

While discussing IaC in this post, we're often going to reference HashiCorp's Terraform as the main example of a platform-agnostic solution, simply because HashiCorp is still the knight in shining armor in this space, and Terraform still represents the gold standard that everyone tries to emulate (or copy). We will also refer only to Azure as the example of a hyperscaler cloud provider, and to its Azure Resource Manager (ARM) templates and Bicep as the first-party cloud IaC example, mainly because Azure's native IaC story is more mature than that of other hyperscalers like AWS and GCP. It is worth noting that Bicep gets compiled into an ARM template before deployment.
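For readers who haven't seen that last step, the Bicep-to-ARM compilation is simple and mechanical. As a rough sketch (the file names are illustrative):

```bash
# Compile a Bicep file into the ARM JSON template that is actually submitted to Azure.
az bicep build --file main.bicep --outfile main.json
```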

One concern on almost everyone's mind is the potential for job loss that the emergence of AI coding assistants brings. Just as with general software development, we believe that human engineers are still needed. In the hands of experienced engineers, AI coding assistants can easily have a multiplicative effect: the tools will gradually turn experienced engineers who were once called 10x performers into 100x performers, because those engineers can focus on the complex work that AI cannot yet tackle reliably.

Finally, in terms of job functions, the trend we're seeing today is that the line between DevOps and Site Reliability Engineering (SRE) is becoming more and more blurred. This might very well be the precursor to some convergence, or at least closer collaboration, between these two important roles. We're seeing early indications of that in the way Moka is being used today by both DevOps and SRE engineers in similar, and sometimes overlapping, ways. We'll elaborate on this below.

Is IaC really gone?

Not yet, but the wheels are in motion. We are not trying to stir controversy here. While things are not going to change overnight (this is not an ON/OFF switch), the current trends, both in DevOps and IaC and in AI, show clear indications that we are heading towards a world where general-purpose, platform-agnostic IaC will no longer be needed, because the problem it tries to solve disappears.

But before we elaborate, let's first start with a bit of history and define the problem that IaC is trying to solve.

One of the biggest hurdles with IaC is not the IaC tool itself, or its language or file structure. Declarative structures that are not easily readable (mainly JSON- and YAML-based) have always been a point of contention, from AWS CloudFormation to Azure's ARM templates, and Terraform was the most prominent solution to simplify the declarative structure and make it far more human-readable.

However, anyone who has used Terraform, or any other IaC tool for that matter (even the ones using an imperative style), knows that composing the IaC assets is only half the battle. You still need to deploy them, and the deployment process depends on "runtime" characteristics of the target environment (whether on-premises or cloud-hosted) as they relate to the "resources" declared in the template. These characteristics cannot be emulated by IaC tools at composition time, and they can vary from one day to the next based on environment events, such as loss of capacity in a target cloud location or unavailability of a certain resource type with specific attributes (for example, a VM size in high demand can run out of capacity in a given cloud location when a large event causes a spike in consumption).
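This is the kind of check engineers end up running out-of-band, because no template can encode it. As a rough sketch (the region and VM size are illustrative), the Azure CLI can report per-region SKU restrictions, but only for the moment you run it:

```bash
# List the availability of a specific VM size in a region; the Restrictions column
# shows entries such as NotAvailableForSubscription when the SKU cannot be deployed there.
az vm list-skus --location westeurope --size Standard_D4s_v5 --output table
```

Even then, the answer only reflects that point in time; capacity can look different by the time the deployment actually runs.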

Is platform-agnostic really agnostic?

Platform-agnostic IaC, like Terraform, promises a consistent experience: learn the syntax once and apply it to any cloud for which a provider (adapter) exists. At least that's the promise; the reality is a bit different. Due to configuration differences and the implementation details of each cloud provider's resources, the declaration of the same resource type (e.g. a virtual machine) can look quite different across AWS, GCP and Azure. So developers or DevOps engineers tasked with maintaining these templates must also have a good knowledge of the underlying cloud environment, which dilutes the perceived benefits of the platform-agnostic promise, especially in large architectures where the configuration is most of the work. In reality, what is platform-agnostic is the common structure, or language features, for defining parameters, variables, modules, references, documentation and other general artifacts, not the actual platform-specific bits.
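As an abridged sketch of that point (using object storage rather than a virtual machine to keep the snippets short, with provider configuration omitted and names invented for illustration), the "same" resource takes a noticeably different shape per provider, even though all three are written in HCL:

```hcl
# AWS: an object store is a standalone bucket.
resource "aws_s3_bucket" "assets" {
  bucket = "contoso-assets-prod"
}

# Azure: the equivalent requires a resource group plus account tier and replication settings.
resource "azurerm_resource_group" "assets" {
  name     = "rg-assets-prod"
  location = "westeurope"
}

resource "azurerm_storage_account" "assets" {
  name                     = "stcontosoassets"
  resource_group_name      = azurerm_resource_group.assets.name
  location                 = azurerm_resource_group.assets.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

# Google: a different shape again, with location and storage class set on the bucket itself.
resource "google_storage_bucket" "assets" {
  name          = "contoso-assets-prod"
  location      = "EU"
  storage_class = "STANDARD"
}
```

The HCL grammar is shared, but the arguments, naming rules and supporting resources are not; that knowledge remains per-cloud.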

While it is hard to have truly platform-agnostic IaC, the benefits of a common base still help and can save time compared to maintaining IaC files in their cloud-native form for each provider (e.g. ARM templates for Azure, CloudFormation for AWS, etc.).

A quick point on idempotency and why it matters

One key feature of first-party cloud IaC that works differently in platform-agnostic IaC is idempotency. Idempotency means that a deployment remembers its last state, so the same template can be deployed many times with only the deltas applied each time: new resources are created, and existing resources are updated (not torn down and re-created). There are ways to emulate that behavior through APIs, but the experience can vary based on the resource's resource provider. Terraform uses its own state management to keep track of changes; it does not use Azure's built-in deployment state. Some prefer this platform-independent way of tracking deployment history, while others prefer the cloud platform's built-in support for idempotent deployments, since the single source of truth then resides in the cloud platform itself.
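For readers less familiar with how Terraform tracks this, a typical setup (all names below are illustrative) stores the state file outside of ARM's own deployment history, for example in a blob container:

```hcl
terraform {
  # Terraform's record of what it deployed lives in this state file,
  # not in Azure's built-in deployment history for the target scope.
  backend "azurerm" {
    resource_group_name  = "rg-tfstate"
    storage_account_name = "sttfstateprod"
    container_name       = "tfstate"
    key                  = "network.terraform.tfstate"
  }
}
```

By contrast, ARM records each deployment against the target scope itself, which can be inspected with a command such as az deployment group list.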

The important thing to know when comparing native, built-in state storage (as in Azure's ARM) to externally managed state (as in Terraform's state file) is that mixing deployment methods can invalidate the externally managed state and is likely to cause unexpected errors or failures. For example, a change applied directly through the Azure Portal is not reflected in the Terraform state file, and it shows up as drift the next time Terraform runs.

What about pre- and post-deployment capabilities?

As mentioned earlier, traditional IaC only deals with the structure of the deployment and stops there. It does not deal with the pre- or post-deployment nuances, like making sure the target environment is suitable for the resources defined in a template. "What-if" functionality in some tools, like the Azure CLI, only helps with previewing the deltas against the current resource state; it has no way to collect health, telemetry or capacity information from the live Azure environment.
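For reference, and assuming an existing resource group and a Bicep file named main.bicep (both illustrative), the preview step looks something like this; it diffs the template against the current resource state but says nothing about regional health or capacity:

```bash
# Preview which resources would be created, changed or deleted,
# without actually deploying anything.
az deployment group what-if \
  --resource-group rg-assets-prod \
  --template-file main.bicep
```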

In summary, IaC tools, both first-party (like ARM templates) and third-party (like Terraform and other vendors' tools), live in a silo, limiting themselves to the deployment of resources to a target environment and scope (in Azure, a scope can be a Tenant, a Management Group, a Subscription, or a Resource Group). Knowledge of the health, capacity or reliability of that target environment, although critical to the success or failure of many deployments, is completely ignored. This is an area where modern AI coding assistants like Maestro Studio AI shine, providing a solution that treats deployments as an end-to-end process that is larger than the mere deployment assets. We'll talk about this below.

True platform-agnostic templates are templates you don't have to write!

That's always been the promise: that you don't have to author the templates, or at least that you author them once and then rinse and repeat. In practice, that was never the case, and the ability has only recently become feasible.

The increased quality and maturity of generative AI's LLMs has provided a great vehicle for delivering on the platform-agnostic promise, using nothing more than natural language. This has been a game changer for IaC, not only because it makes it easier to compose IaC templates, but also because it makes platform-agnostic IaC tools obsolete. If a developer or DevOps engineer can engage in a conversational session with an AI coding assistant and have the resulting template created in the cloud environment's native language, why would they need a platform-agnostic product, like Terraform, anymore? The answer so far has been: "because the files generated by the AI assistant still need manual tweaking, testing and debugging due to the tendency of LLMs to hallucinate, often making up things that don't really exist."

This limitation remains the only real deterrent, and it still exists in all AI coding assistants.

All except one: Maestro Studio AI, with its built-in AI coding assistant, Moka, which specializes in Azure deployment templates.

What are Maestro Studio AI and its AI coding assistant, Moka?

Cloud engineers spend too much time chasing failing deployments and tweaking templates. What if you could type a request in plain English (or another language) and instantly get a fully tested, deployment-ready solution, complete with architecture diagrams, documentation, CLI commands, and one-click portal deployment? What if those assets were updated alongside every change made to the deployment, so they're always in sync? That's exactly what Maestro Studio AI's Moka delivers.

The following 90-second video demonstrates Maestro Studio AI's features and benefits:

To learn more about Maestro Studio AI, read the following blog posts: Build, test, and deploy reliable Azure infrastructure with Maestro Studio AI's Moka and How Moka beats the Azure Portal at its own game.

What makes Moka different?

Maestro Studio AI's Moka is a true game changer. Rather than being a wrapper around a foundation model, Moka relies on domain-adapted LLMs that are specifically fine-tuned for Azure deployments. In addition, Moka leverages agentic AI and communicates with a mesh of AI agents deployed across all Azure regions, as explained in this blog post. This enables Moka not only to generate a deployment template that matches the user's request, but also to test it across all Azure regions to find out whether there are Azure locations where the deployment might fail. The downside is that this increases the response time; the upside of providing a reliable answer is well worth that small penalty. No other AI coding assistant can do that today.

Moka also suggests the next questions to ask in order to refine the generated deployment template and add or remove parts of its content. This is similar to the well-known process of vibe coding; we like to think of it as preemptive vibe coding. However, Moka attempts to keep vibe coding to a minimum, and only for refining the solution, not for correcting syntax errors, since Moka performs exhaustive testing (using its AI agents) to ensure the deployment template can be deployed successfully to the regions it marks as safe for deployment. Moka will also flag regions where issues would prevent the deployment from completing successfully. This saves developers and DevOps engineers precious time, time that is typically wasted troubleshooting and debugging deployment failures, and cleaning up deployed resources, when vibe coding with other AI coding assistants and AI IDEs.

While it is not a 100% error-free experience yet, our tests at the time of this article (late July 2025) have shown around a 90% deployment success rate on the Basic subscription tier (the paid tier that is backed by our SLA and support plan). As we continue to improve the product, the success rate will continue to increase; we expect it to be close to 99% by early 2026.

What is the future of Infrastructure as Code?

Infrastructure as Conversation (IaC) is where we see the current infrastructure tooling heading.

This, however, does not mean that developers or DevOps engineers will no longer be needed, or that they won't need experience with the platform. Public cloud platforms are still complex, brittle, and volatile, and they tend to experience performance and quality degradation events at short notice, or even with no notice, which can suddenly break a deployment template that previously worked flawlessly.

But what is crystal clear right now is that there will be less need to know the platform's technical details well. AI assistants will be able to manage all that for you in the future. Moka can do that today, and we believe others will follow suit.

💡 There is no syntax that is easier than the conversational, natural language syntax.

We posit that 3-5 years from now, platform-agnostic tools like Terraform will no longer be needed. Moka (and, at some point, others) will manage all the work on the target platform for you, through conversational AI using natural language, any language, not just English. The AI agent only needs to know the base, foundational IaC and to transform users' requests into deployment assets built against that base, first-party IaC. Even first-party IaC like Bicep, which was created to provide an easier syntax and simplify Azure deployments without having to deal with the archaic JSON format of ARM templates, will eventually have to retire, leaving ARM templates as the only option. Bicep's simplified syntax will no longer be needed, since there is no syntax that is easier than the conversational, natural language syntax.

And remember the important idempotent deployment feature we mentioned above? With ARM becoming the single source of truth, idempotency comes automatically from ARM's deployment state, which eliminates the risk of mismatched, externally stored state or mixed state management (and the various issues associated with them).
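Concretely, re-running the same deployment against the same scope is safe by design (the resource group and file names below are illustrative): ARM's default Incremental mode creates what's missing and updates what changed, with no external state file to keep in sync.

```bash
# Running this command repeatedly with an unchanged template is a no-op for
# resources that already match; only deltas are applied.
az deployment group create \
  --resource-group rg-assets-prod \
  --template-file main.bicep \
  --mode Incremental
```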

The role of these IaC tools will likely shift to building agentic interfaces and chat-based conversational systems and running orchestrations, just like Moka does today.

The future of IaC is really not as grim as it might first seem. There are many benefits to moving to a single, native IaC: from easier and quicker maintenance to integrated security, policies and governance. This lowers the risks associated today with a myriad of tools, tool variants, and the lack of live deployment pre-testing. Essentially, this will be a win-win for everyone: engineers get to focus on non-mundane work (they still need to know the platform, but they will likely spend far less time dealing with it than before).

Conclusion

While tools like Terraform and Bicep are here to stay for the next few years, we think that in about 3-5 years such tools, if they continue to exist, will not exist in their current form, but rather in a form that is compatible with conversational AI and that utilizes agentic AI to perform their work in a manner more closely tied to the target deployment platform. At some point, cloud providers will future-proof their control planes to provide built-in agentic AI that next-gen AI tools can leverage, so they don't have to build that agentic AI universe themselves, like Maestro Studio AI had to. In order to maintain their relevance in an agentic AI world, these tools will have to support more of the deployment lifecycle and gradually move towards SRE, just like Moka does today. DevOps and SRE will move closer until, very likely, they eventually become one.

This perspective should not be too shocking. Mainstream software development is going to evolve as AI coding assistants mature and stabilize, and IaC is already following the same trend. This wave of changes will impact scripting and configuration languages, including IaC, as much as it will impact (and already is impacting) full-fledged programming languages and software engineering processes and tools.

For SRE, agentic AI is already replacing static monitoring tools with AI agents that can predict and fix issues, rather than only flagging anomalies and sending notifications for human engineers to act on. Autonomous AI agents are becoming more mature and more capable, and we're already using them to fix issues before they turn into outages. This is a key feature of the actors that make up our AI agents distributed across all Azure regions.

AI coding assistants are also a great way to onboard new DevOps engineers and get them started on existing code bases of IaC files and supporting scripts. One Moka feature that new DevOps engineers are finding especially useful is the documentation it generates with each prompt and every change, along with the list of suggested next steps it proposes to help engineers continue improving (and, for newcomers, learning) their current code bases and systems.

IaC is being reimagined, and the gradual evolution of the tools will bring big benefits and a lot of productivity, not job loss. With the mundane tasks that cripple the progress of many teams out of the way, opportunities open up for doing more productive work and building more features. If anything, this should realistically require more headcount.

Unfortunately, there is a lot of hype, and FOMO is driving many decision makers to make premature decisions that they often regret later. Keeping a cool head, investing in understanding the new wave of AI tools, and evaluating them to experience first-hand where they're at and what they can and cannot do, is important in order to adapt to the new reality. Engineers who invest early in evaluating AI coding tools, and other specialized AI tools, have a much better chance of absorbing the impact of both the tools and the hype. Note that chat tools like ChatGPT are a good start but not enough for specialists and experienced engineers. Most AI coding assistants utilize the latest reasoning foundation models, and the experience (and results) are quite different from those offered by general-purpose AI chat tools.

How to get started

Ready to stop wrestling with ARM, Bicep, Terraform, and the rest? Sign up for Maestro Studio AI, launch Moka, and in minutes you'll be generating and deploying robust Azure infrastructure.

Get started at https://stratuson.ai.

Feedback?

A lot of the content shared in this post, especially the predictions, may seem controversial, and perhaps even naive. If you feel that way, or have other thoughts that you think make more sense than what we have presented, we would love to hear from you. Do contact us using our Contact Us page. We always review the feedback and thoughts shared with us and get back to the folks who share them. We often learn quite a lot from such thoughts, especially the ones that challenge ours.
