Provision AWS Resources with Azure DevOps and Terraform - Part I
Introduction
In this article, we will walk through the steps to establish a standard CI/CD pipeline that provisions AWS resources with Terraform and Azure DevOps while adhering to best practices. Having worked with CI/CD methodologies in software development, I was curious whether the same principles could be applied to provisioning cloud resources with Terraform. This series consists of three parts: the first covers prerequisites, the second focuses on configuration, and the final part addresses the CI/CD implementation. If you're interested in my previous article on CI/CD with Azure DevOps, feel free to check it out. A foundational understanding of Azure DevOps Services, AWS, and Terraform is assumed. Let's explore how this can be accomplished.
Prerequisites
- Azure DevOps Services
- AWS Account (including an S3 bucket and IAM roles)
- Docker
- Container Registry
- Kubernetes Cluster
Architecture Overview
Azure Repos and Azure Pipelines are integral components of Azure DevOps Services. The Terraform configuration for provisioning S3 resources is stored in a Git repository within Azure Repos. The Azure Pipelines agents run as pods on a private Kubernetes cluster that uses Docker as its container runtime. A custom container image, bundling Terraform, the Azure CLI, and Terrascan, is used to execute each pipeline stage. On the AWS side, an IAM user has been created with an assume-role policy, along with a role that grants read and write access to S3, covering both the S3 remote backend and the new S3 resources to be provisioned.
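To make the backend and assume-role setup concrete, here is a minimal sketch of the Terraform provider and backend configuration; the bucket name, region, state key, and role ARN are placeholders, not the values used in this series:

```hcl
terraform {
  backend "s3" {
    bucket   = "tf-remote-backend-demo"          # placeholder bucket name
    key      = "s3-demo/terraform.tfstate"
    region   = "us-east-1"
    role_arn = "arn:aws:iam::123456789012:role/terraform-s3-role"  # role assumed for state access
  }
}

provider "aws" {
  region = "us-east-1"

  # The IAM user's credentials only allow assuming this role,
  # which holds the actual S3 read/write permissions.
  assume_role {
    role_arn = "arn:aws:iam::123456789012:role/terraform-s3-role"
  }
}
```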
The first pipeline stage, Initialization (Init), retrieves all necessary providers and caches them in the working directory. During this task, the IAM user will assume a role that permits reading and writing to S3 resources. I have leveraged the caching functionality of Azure Pipelines to prevent the need to re-download providers at each pipeline stage. The next stage involves linting the Terraform code and scanning it for security vulnerabilities using Terrascan, with the results published to Azure Pipelines. Following this, a plan will be created and exported to the local directory. The plan will undergo a review process in the subsequent stage, which includes a peer approval mechanism. Finally, once peer review is completed, the apply stage will be triggered to provision the resources defined in the plan.
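As an illustration of the stage layout described above, here is a hedged skeleton of such an azure-pipelines.yml; the pool name, container image, and task inputs are assumptions, and a real pipeline would also need to hand the exported plan file from the Plan stage to the Apply stage (for example via the same cache or a pipeline artifact):

```yaml
trigger:
  - main

resources:
  containers:
    - container: tf_tools
      image: ghcr.io/<your-user>/terraform-agent:latest   # assumed custom image

pool:
  name: k8s-agent-pool        # assumed agent pool backed by the cluster pods

stages:
  - stage: Init
    jobs:
      - job: init
        container: tf_tools
        steps:
          - task: Cache@2     # cache provider plugins so later stages skip the download
            inputs:
              key: 'terraform | "$(Agent.OS)" | .terraform.lock.hcl'
              path: $(System.DefaultWorkingDirectory)/.terraform
          - script: terraform init
  - stage: Scan
    jobs:
      - job: lint_and_scan
        container: tf_tools
        steps:
          - script: terraform fmt -check -recursive && terraform validate
          - script: terrascan scan -i terraform -o junit-xml > terrascan-results.xml
          - task: PublishTestResults@2
            inputs:
              testResultsFiles: terrascan-results.xml
  - stage: Plan
    jobs:
      - job: plan
        container: tf_tools
        steps:
          - script: terraform plan -out=tfplan
  - stage: Review
    jobs:
      - job: approval
        pool: server          # ManualValidation must run as an agentless job
        steps:
          - task: ManualValidation@0
            inputs:
              instructions: Review the exported Terraform plan before applying
  - stage: Apply
    jobs:
      - job: apply
        container: tf_tools
        steps:
          - script: terraform apply -input=false tfplan
```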
Setup
I. Azure DevOps Services
To begin, create an organization within Azure DevOps (the sign-up link is in the references below). After setting up your account, create a project and generate a Personal Access Token (PAT) for programmatic authentication with Azure DevOps and its agents; instructions for both are also linked in the references. Lastly, set up an Agent Pool, which the agent pods will register against. Make sure to note the Agent Pool name and the PAT, as both will be needed when configuring the Azure DevOps agents later.
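If you prefer to script these steps, the project can be created with the Azure DevOps CLI extension, while agent pools are created through the REST API; the pool name, request body, and api-version below are assumptions to verify against the current docs:

```bash
# Create the project with the Azure DevOps CLI extension
az extension add --name azure-devops
az devops project create \
  --name terraform-aws-demo \
  --organization https://dev.azure.com/<your-org>

# Agent pools have no dedicated az command; use the REST API with the PAT
curl -u :"<PAT>" \
  -H "Content-Type: application/json" \
  -d '{"name": "k8s-agent-pool", "autoProvision": true}' \
  "https://dev.azure.com/<your-org>/_apis/distributedtask/pools?api-version=7.1-preview.1"
```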
II. AWS Account — S3 Bucket and IAM Roles
You will need an AWS account to proceed with the subsequent steps. Once you have access, create a new IAM user with minimal permissions and policies, along with an IAM role that has read and write access to S3. The IAM user will assume this role through AWS's assume-role mechanism to perform all tasks required by our Terraform pipeline. The user, the role, and a separate S3 bucket that serves as the Terraform remote backend can all be created with the AWS CLI. Below are code snippets for the policies and a script that creates the user, role, and bucket.
IAM User Policy
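A minimal sketch of this policy: the user's only permission is to assume the S3 role, with a placeholder account ID and role name.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAssumeTerraformRole",
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::123456789012:role/terraform-s3-role"
    }
  ]
}
```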
IAM Role Trust Policy
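A sketch of the matching trust policy attached to the role, allowing the IAM user to assume it; the names are again placeholders.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:user/terraform-pipeline-user"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```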
Bash Script for User and Role Creation
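A hedged sketch of such a script; the user, role, and bucket names are assumptions, and iam-role-s3-policy.json stands in for a standard S3 read/write policy document not shown here.

```bash
#!/usr/bin/env bash
set -euo pipefail

USER_NAME="terraform-pipeline-user"   # placeholder names throughout
ROLE_NAME="terraform-s3-role"
BUCKET_NAME="tf-remote-backend-demo"

# Create the user and an access key (note the key ID and secret in the output)
aws iam create-user --user-name "$USER_NAME"
aws iam create-access-key --user-name "$USER_NAME"

# Attach the inline policy that lets the user assume the role
aws iam put-user-policy --user-name "$USER_NAME" \
  --policy-name assume-terraform-role \
  --policy-document file://iam-user-policy.json

# Create the role with the trust policy, then grant it S3 read/write
aws iam create-role --role-name "$ROLE_NAME" \
  --assume-role-policy-document file://iam-role-trust-policy.json
aws iam put-role-policy --role-name "$ROLE_NAME" \
  --policy-name s3-read-write \
  --policy-document file://iam-role-s3-policy.json

# Bucket for the Terraform remote backend
aws s3api create-bucket --bucket "$BUCKET_NAME" --region us-east-1

# Print the role ARN needed by the Terraform configuration
aws iam get-role --role-name "$ROLE_NAME" --query 'Role.Arn' --output text
```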
Please take note of the ACCESS KEY ID, SECRET ACCESS KEY, and the ARN of the created role, as these will be essential later.
III. Docker
Docker will serve as the container runtime for the Kubernetes cluster. I have installed Docker Engine and Docker CLI version 20 as part of the setup. Since I’m using a self-managed Kubernetes cluster provisioned with kubeadm, the installation has been carried out on a Linux VM running Ubuntu.
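For reference, a minimal installation on Ubuntu using Docker's convenience script looks like the sketch below; pin to the 20.x packages if you need to match the version mentioned above.

```bash
# Install Docker Engine and CLI via the convenience script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Allow the current user to run docker without sudo (log out/in afterwards)
sudo usermod -aG docker "$USER"

# Confirm the installed engine and CLI versions
docker version
```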
IV. Container Registry
You will need a container registry, such as Docker Hub or GitHub Container Registry, to push your custom image for the container job in Azure Pipelines. Alternatively, you can use my image published on GitHub Container Registry. If you wish to publish a new image on GitHub, please refer to the article linked here.
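For illustration, here is a hedged sketch of what such an image and its publish steps could look like; the base image, tool versions, and image name are assumptions, so check each tool's release page for current versions and asset names.

```dockerfile
# Sketch of a custom agent image bundling Terraform, Azure CLI, and Terrascan
FROM ubuntu:20.04

RUN apt-get update && apt-get install -y curl unzip ca-certificates

# Terraform (pinned version is an assumption)
RUN curl -fsSLo terraform.zip \
      https://releases.hashicorp.com/terraform/1.0.11/terraform_1.0.11_linux_amd64.zip \
 && unzip terraform.zip -d /usr/local/bin && rm terraform.zip

# Azure CLI via Microsoft's install script
RUN curl -sL https://aka.ms/InstallAzureCLIDeb | bash

# Terrascan (verify the exact release asset name for your version)
ARG TERRASCAN_VERSION=1.18.3
RUN curl -fsSL "https://github.com/tenable/terrascan/releases/download/v${TERRASCAN_VERSION}/terrascan_${TERRASCAN_VERSION}_Linux_x86_64.tar.gz" \
      | tar -xz -C /usr/local/bin terrascan
```

Building and pushing to GitHub Container Registry would then follow the usual pattern:

```bash
docker build -t ghcr.io/<your-user>/terraform-agent:latest .
echo "<GITHUB_PAT>" | docker login ghcr.io -u <your-user> --password-stdin
docker push ghcr.io/<your-user>/terraform-agent:latest
```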
V. Kubernetes Cluster
The final requirement is to create a Kubernetes cluster. I provisioned a cluster using a VM set up with kubeadm, ensuring all Docker prerequisites were met. You may opt for any cloud service provider—Azure, AWS, or GCP—to provision your cluster as well. I have documented the process for setting up a self-managed cluster, and you can find that article here.
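For orientation, a minimal single-node bootstrap with kubeadm might look like the following; the pod CIDR, the Flannel CNI choice, and the taint name (which differs on older Kubernetes versions) are assumptions.

```bash
# Initialize the control plane (CIDR chosen to match Flannel's default)
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Set up kubectl for the current user
mkdir -p "$HOME/.kube"
sudo cp /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"

# Install a CNI plugin (Flannel here) so pods can be scheduled
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

# On a single-node cluster, allow workloads on the control-plane node
kubectl taint nodes --all node-role.kubernetes.io/control-plane- || true
```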
Once all prerequisites are established, the next step will involve configuring the resources and services created, which I will detail in Part II of this series. Stay tuned for more information. Thank you for reading!
References
- Azure DevOps: Sign Up, Project Creation, PAT Generation, and Agent Pool Setup
- AWS IAM Role Creation
For any questions, feel free to reach out through the links below:
- Medium