
3 Ways Conversational AI is Being Used to Build Azure Infrastructure

Conversational AI has gone mainstream, and AI assistants are everywhere. There is the good, but there is also the bad and the ugly. AI assistants are getting much better as the underlying AI models improve at a fast rate, and a lot of effort is going into making these assistants more reliable and consistent. They are amazing when they work, and that's most of the time, but then comes the occasional hallucination that spoils things. Depending on the problem space, companies are figuring out ways to create guardrails that mitigate most of these problems.

3 Use Cases

Within our problem space of accelerating digital transformation on the Azure cloud by streamlining how Azure infrastructure is built and deployed, we've seen conversational AI applied in many ways. Here are three key use cases where it is being used successfully. In a future blog article, we'll share another three use cases that are maturing rapidly and moving closer to mainstream.

1. Start from scratch

In this scenario, cloud architects and engineers interact with an AI assistant to build a cloud environment from scratch based on a natural language description of the desired solution. This scenario is probably the most common, and it's also the most reliable, producing a high level of success and accuracy depending on the fine-tuning and customization employed.

It is worth noting, based on our observations, that using the out-of-the-box assistants for this scenario has a high rate of hallucinations. A non-trivial amount of work has to be put into making them produce reliable results. We'll share a working example that solves this problem and has a high rate of success and reliability.

2. Finish my work

This is the bring-your-own-template (BYOT) case: you share your Bicep or Azure Resource Manager (ARM) template with the AI assistant and ask it to finish it for you, taking it to its final state. This also comes up in typical DevOps, where Azure environments continue to change and evolve along with the software products, requiring constant tuning. Automatic generation of the Infrastructure-as-Code (IaC) artifacts is one way this is being solved today, using a promising technique called Infrastructure-from-Code (IfC).
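To make BYOT concrete, here is a hypothetical partial Bicep template of the kind you might hand to an assistant (the resource name, address space, and API version are illustrative, not from any specific product):

```bicep
// Partial template handed to the assistant. Names and address ranges are illustrative.
// The assistant is asked to "finish" it: for example, add subnets, an NIC,
// and a virtual machine attached to this VNet.
param location string = resourceGroup().location
param vnetName string = 'app-vnet'

resource vnet 'Microsoft.Network/virtualNetworks@2023-09-01' = {
  name: vnetName
  location: location
  properties: {
    addressSpace: {
      addressPrefixes: [
        '10.0.0.0/16'
      ]
    }
    // TODO (for the assistant): subnets, NSG associations, and the compute that consumes them.
  }
}
```

The assistant's job is to fill in the TODO while keeping the existing parameters and naming conventions intact.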

Ultimately, this use case seems headed toward Agentic AI, where an AI agent monitors the environment and makes the required changes automatically, but that is not yet reliable enough to be practical; human intervention is still needed. The transitional step today is a chat session with an AI assistant to make the changes incrementally and validate them, while also building test cases that can run in a CI/CD pipeline.
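As a sketch of what that pipeline validation step can look like, the Azure CLI's built-in `az deployment group validate` and `az deployment group what-if` commands can gate assistant-generated changes before they merge (the resource group and file names below are placeholders):

```yaml
# Hypothetical GitHub Actions step: validate assistant-generated IaC changes.
# Resource group and template file names are placeholders.
- name: Validate Bicep changes
  run: |
    az deployment group validate \
      --resource-group my-app-rg \
      --template-file main.bicep
    # Preview what would actually change in Azure, without deploying:
    az deployment group what-if \
      --resource-group my-app-rg \
      --template-file main.bicep
```

The what-if preview is what lets a human reviewer confirm the assistant's incremental change does only what was asked before the pipeline proceeds.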

3. Make it stand out

This is a newer scenario and pertains mainly to Azure cloud solutions that need to be shared and/or sold. Sharing raw deployment artifacts (Bicep, ARM templates, other IaC) is not appropriate in these cases, as it is highly error prone: it requires manually plugging in values that may not be intuitive, which increases the chance of mistakes and typos and leads to the vicious cycle of repeating your deployment until you figure out the correct, safe combination of inputs.

In this scenario, an AI assistant takes a deployment template and builds an Azure UI, akin to what is used today in the Azure Portal, that can be handed over to a customer, partner, or team to deploy reliably. The UI ensures that most input is not typed in manually but rather picked from UI controls.

For example, if your solution allows your customer to pick their own Virtual Network (VNet), showing them the standard VNet UI is far better and more productive than, say, asking them to type in the name of the VNet and the Resource Group in which it resides. This dramatically increases productivity and reduces the support needed to address error cases. Its use cases today are mainly around publishing to the Azure Marketplace, but it can be used outside the Marketplace as well. The solutions are packaged into what is called an Azure Application.
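In an Azure Application, that VNet picker maps to the built-in Microsoft.Network.VirtualNetworkCombo element in the application's createUiDefinition.json. A minimal sketch (labels, subnet names, and prefix sizes are illustrative):

```json
{
  "name": "vnet",
  "type": "Microsoft.Network.VirtualNetworkCombo",
  "label": {
    "virtualNetwork": "Virtual network",
    "subnets": "Subnets"
  },
  "constraints": {
    "minAddressPrefixSize": "/24"
  },
  "options": {
    "hideExisting": false
  },
  "subnets": {
    "appSubnet": {
      "label": "Application subnet",
      "defaultValue": {
        "name": "app-subnet",
        "addressPrefixSize": "/26"
      },
      "constraints": {
        "minAddressPrefixSize": "/26"
      }
    }
  }
}
```

With a control like this, the customer picks an existing VNet or creates a new one from a familiar Portal UI, instead of typing names and address ranges by hand.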

AI solutions rely on this use case to share their Azure-based products in a reusable way (inside or outside the Azure Marketplace). Many SaaS solutions cannot be leveraged in certain industries where regulations involving customer data apply and compliance is mandatory. Fear that customer data will be used to train AI models and leak into other users' or customers' AI interactions is causing large enterprises to shy away from multi-tenant SaaS solutions.

Luckily, in Azure, AI vendors can provide their technology in the form of an Azure Application that their customers can deploy in their own tenant, where they have full control over their data. This gives customers peace of mind and lets vendors continue selling their solutions, albeit through a different method of packaging and presentation.

Where do we fit in all this?

At StratusOn, we've invested in all 3 use cases described above, and in another 3 use cases we'll be sharing soon in a future blog post.

For use case #1, we've spent a lot of time building a reliable solution that lets Azure architects and engineers describe their desired end-state Azure environment to an AI assistant (we call it MOKA) and get, within seconds, a solution they can deploy, or share with their teammates, customers, or partners, at the click of a button! The solution is called Maestro Studio AI, and the following YouTube video provides a quick overview (about a minute long) - you can also read this walkthrough blog post:

As for scenario #3, it is actually what started it all for us, as we were building our Maestro Studio ENSEMBLE product for building Azure Applications (and easily publishing them to the Azure Marketplace). You can learn more about it here. You can also watch the following video showing how its embedded AI assistant helps compose and visually "edit" the UI of an Azure Application.

We're currently working on a solution that represents use case #2, and we plan to open it up for private preview very soon. Stay tuned.

AI is much more than hype

Perhaps Generative AI and Large Language Models (LLMs) are not a panacea and don't represent the future of AI. We agree, but one thing is for sure: they are a great transitional step in the right direction.

The bottom line: yes, conversational AI and Agentic AI are not perfect today, but they are getting better. They will continue to improve, get faster and more reliable, and, most importantly, eventually become more efficient in terms of the underlying energy costs. Let's not forget that the powerful processing in today's smart watches started with computers that needed entire rooms to house them. They evolved to where we are today, and so will AI. It's evolving, and we're seeing that every single day. The possibilities are endless, and the potential is barely starting to be realized.

There are certainly scenarios where using AI assistants is dodgy, but as demonstrated above, there are also scenarios where they are making a huge difference and helping companies earn a decent ROI. And the evolution will continue.

More information

Check out the following resources for more information on the space and, more specifically, what we are doing to make things better, faster, cheaper, and more reliable:

Give us feedback. Let us know what you think. Drop us an email at support@stratuson.com or use our Contact Form.

But most importantly, keep leveraging AI where it makes sense, and don't let the current drawbacks discourage you. There is a lot of positive in the midst of all the negativity.
