A customer of mine recently began their journey to the public cloud, laying out their design for new and existing application deployments to Microsoft Azure. This shift toward a new horizon has opened up many new possibilities for them. But it has also come with a new set of challenges, as any disruptive technology does.

As an ISV (independent software vendor), automated deployment of infrastructure and applications is a core value for them. However, most of their existing automation was developed around private cloud infrastructure, cobbling together disparate languages and API types unique to each provider. In contrast, Azure exposes a single API to provision all the infrastructure required for applications, including networks, firewalls, virtual machines, storage, and load balancers, just to name a few. ARM (Azure Resource Manager) templates allow an administrator to define an entire collection of these resources in a single JSON file and provision them all at one time, with prerequisite resources provisioned first and dependent resources last. All of this is possible when you leverage a 100% software-defined infrastructure platform. It’s a very powerful process!
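
To make that concrete, here is a minimal sketch of what such a template can look like: a storage account plus a blob container that depends on it. The parameter name, container name, and API versions below are illustrative placeholders, not a prescription for any particular environment.

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": {
      "type": "string",
      "metadata": { "description": "Globally unique name for the storage account" }
    }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2022-09-01",
      "name": "[parameters('storageAccountName')]",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2"
    },
    {
      "type": "Microsoft.Storage/storageAccounts/blobServices/containers",
      "apiVersion": "2022-09-01",
      "name": "[concat(parameters('storageAccountName'), '/default/appdata')]",
      "dependsOn": [
        "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]"
      ]
    }
  ]
}
```

Because the container declares a dependsOn reference to the storage account, Resource Manager provisions the account first and the container second, which is exactly the ordering behavior described above.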

Additionally, the code that comprises these templates can be centrally stored in an on-prem or hosted repository, allowing a multitude of team members to contribute to the templates together, and it can be versioned for tracking. The templates can be modularized into individual components and made generic enough to be reused every time an application is deployed or updated. Templates can be strung together as a group, bound as a release, and then deployed in one fell swoop (known as a “pipeline” in the CI/CD process). In many cases, customers completely redeploy all the infrastructure and application code any time they upgrade or patch.
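
As an example of that modular approach, a parent template can reference a reusable module stored in a versioned repository through a linked deployment resource. The repository URL, module name, and parameter value below are hypothetical, purely to show the shape of the pattern; the snippet would sit inside the parent template's resources array.

```json
{
  "type": "Microsoft.Resources/deployments",
  "apiVersion": "2021-04-01",
  "name": "networkModule",
  "properties": {
    "mode": "Incremental",
    "templateLink": {
      "uri": "https://raw.githubusercontent.com/contoso/iac-templates/main/network.json",
      "contentVersion": "1.0.0.0"
    },
    "parameters": {
      "vnetName": { "value": "app-vnet" }
    }
  }
}
```

Releasing then becomes a matter of tagging the repository and pointing the parent template (or the pipeline that deploys it) at the desired version of each module.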

To Operationalize . . . that is the challenge.

There are so many advantages to managing Infrastructure as Code. But with great power comes, well . . . . you know.

While deploying coded infrastructure is the coolest (I wouldn’t want to do it any other way now!), operationalizing that process comes with its own set of challenges that every organization will have to face. The two biggest questions my customer pondered regarding managing Infrastructure as Code were:

  • How do we get our entire operations team to be experts at understanding and writing code?
  • How do we manage that at scale?

This is a mission organizations must choose to accept if they go down the road of coded infrastructure. But this challenge isn’t as hard as you might think. The answer is for organizations to approach Infrastructure as Code like a Products Company, complete with Order Fulfillment and Product Development divisions (similar to the manufacturing line concept from The Phoenix Project).

Order Fulfillment and Development of Widgets

Imagine ABC Company, which manufactures widgets based on customer needs, takes orders for the widgets online, packages them up for shipping, and then delivers them to customers. ABC Company employs workers with three basic skillsets:

  1. Warehouse Pickers – Workers who download customer orders from the Internet, pick those specific items from the warehouse shelves, and assemble the orders into boxes or containers.
  2. Shipping Auditors – Responsible for reviewing assembled orders for accuracy and completeness prior to shipping, and for sending orders back to pickers if they need to be fixed or updated.
  3. Product Developers – Create new widgets based on customer feedback and needs, then populate warehouse shelves with the newly created widgets.

The diagram below outlines how this Order Fulfillment cycle would flow, from the product developers who design and populate the warehouse, to pickers who assemble customer orders, to the auditors who validate customer orders before they are closed up and shipped out.

Order Fulfillment and Development of Code

You may already see the parallels between how ABC Company handles Product Development and Order Fulfillment and how a company could operationalize Infrastructure as Code deployments. Engineers would need to be organized into three basic teams comprised of the following skillsets:

  1. Level 1 Code Engineers – Responsible for receiving new infrastructure deployment orders, identifying the prebuilt templates required for fulfillment, and making small modifications to meet the parameters of the order. Code Engineers need only a basic level of coding experience, and in many cases can be trained with no previous experience (a sample parameter file of the kind they would fill in follows this list).
  2. Level 2 Code Auditors – Verify that assembled templates and parameters meet the requirements of the order, approving configurations that meet those requirements and rejecting or revising configurations that need correction. In most cases, the auditor is the last checkpoint before infrastructure code goes live and is provisioned. Auditors have a more advanced level of coding skill, can understand the ramifications of multiple templates implemented as a whole, and can provide basic template and deployment troubleshooting.
  3. Level 3 Code Developers – Develop new code modules (in the case of Azure, ARM templates) that meet the needs and requirements of new types of infrastructure or services. Code Developers have a deep understanding of coding techniques and can take basic vendor-supplied templates and modify them to meet the specific needs of the organization. They also review and update existing code modules for efficiency and assess how new platform updates will impact the current code base.
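
In practice, the “order” a Level 1 Code Engineer assembles is often little more than a small parameter file paired with an existing template, along the lines of this hypothetical example (the account name is made up for illustration):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": { "value": "contosoapp01sa" }
  }
}
```

A Level 2 Code Auditor then reviews a short, readable file like this against the original order before it is promoted, rather than re-reading the entire template library.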

An updated diagram depicting Code Development and Fulfillment might look like this.

As you can see, there are varying levels of skill needed to operationalize code-driven infrastructure. In many cases, fewer highly skilled engineers are needed for template development and auditing than for simple assembly of prebuilt template modules and parameters. Managing this does not usually require a complete overhaul of your operations teams! There can be many more roles and skill levels involved in a fully operational IaC deployment cycle, but these three types of engineers are generally the minimum required to meet the demand.

Tooling

Note that the two conveyor belts for assembled code represent the CI and CD portions of a deployment pipeline. The first belt represents the Continuous Integration process, where engineers regularly integrate updates into the code to match evolving requirements. The second belt represents either Continuous Deployment or Continuous Delivery, depending on whether your organization deploys production code automatically or in stages. A code repository (such as GitHub, Bitbucket, or GitLab) is where template code is uploaded, warehoused, modified, and versioned. Repositories can be hosted onsite or in the cloud.

Accepting the Challenge

Every company is different; the needs and requirements for coded infrastructure will vary. Your organization may need to organize teams slightly differently to accomplish your IaC goals. There are also many different tools that can be used to build a complete code promotion process. You can DIY a tailored solution, or, in the case of Azure DevOps, the entire process can be hosted for you: you just bring your code and engineers. No matter which route you take, the organization of your teams and their skills into this process will help define how quickly you can reap the benefits of code-driven infrastructure. And this process isn’t as daunting as it may sound at first. Every new venture is an opportunity to grow your IT team into a truly world-class delivery machine.

Don’t forget . . . code rules!

