Creating a Highly Available Two-Tier Architecture on AWS with Terraform
Chapter 1: Introduction to Two-Tier Architecture
In this guide, we will construct a two-tier architecture on AWS utilizing Terraform. Our focus will be on ensuring high availability and modularity to enhance code reuse and organization.
This tutorial will use Visual Studio Code as the integrated development environment (IDE), but you may opt for Cloud9 or any other IDE that you prefer. Please note that the instructions might vary slightly depending on your chosen IDE.
Prerequisites
Before we begin, ensure you have the following:
- An AWS Account
- Appropriate permissions for your user
- Terraform installed on your IDE
- AWS CLI installed and properly configured on your IDE
Section 1.1: Setting Up the Environment
To deploy a two-tier architecture, we will need several resources. In this project, we will set up a Virtual Private Cloud (VPC) that spans two availability zones (AZs), with one public and one private subnet in each AZ. Each public subnet will host an EC2 instance, and an application load balancer will distribute traffic between them. The private subnets will be reserved for an RDS database.
First, create a new project folder from your terminal with mkdir and navigate into it with cd. Next, create a subfolder for the modules by running mkdir modules, and then change into the modules folder.
Inside the modules folder, create a main.tf file to house our Terraform code. The specific commands may vary if you are using a different terminal. Open this file to begin writing our code.
Below is sample code for the module's main.tf file. Throughout this project, I frequently referred to the Terraform Registry at registry.terraform.io for the individual resources. The initial block specifies AWS as the provider and references the variable that defines the region. I then define the VPC, the subnets with their CIDR blocks and availability zones, an internet gateway, and an internet-facing application load balancer that routes traffic across the two EC2 instances. The final section is a data block that uses the AWS Systems Manager Parameter Store to retrieve the latest Amazon Linux AMI; because AMI IDs change frequently, this keeps the Terraform code adaptable.
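A minimal sketch of such a module is shown below. The resource labels, CIDR ranges, and listener port are placeholders of my own choosing rather than the exact original code, and the SSM parameter path assumes Amazon Linux 2; adjust these to your own naming and network plan.

```hcl
# modules/main.tf -- illustrative sketch; labels, CIDR blocks, and ports are placeholders
provider "aws" {
  region = var.region
}

# Look up the AZs available in the chosen region
data "aws_availability_zones" "available" {}

resource "aws_vpc" "two_tier_vpc" {
  cidr_block = "10.0.0.0/16"
}

# One public and one private subnet in each of the first two AZs
resource "aws_subnet" "public_1" {
  vpc_id                  = aws_vpc.two_tier_vpc.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = data.aws_availability_zones.available.names[0]
  map_public_ip_on_launch = true
}

resource "aws_subnet" "public_2" {
  vpc_id                  = aws_vpc.two_tier_vpc.id
  cidr_block              = "10.0.2.0/24"
  availability_zone       = data.aws_availability_zones.available.names[1]
  map_public_ip_on_launch = true
}

resource "aws_subnet" "private_1" {
  vpc_id            = aws_vpc.two_tier_vpc.id
  cidr_block        = "10.0.3.0/24"
  availability_zone = data.aws_availability_zones.available.names[0]
}

resource "aws_subnet" "private_2" {
  vpc_id            = aws_vpc.two_tier_vpc.id
  cidr_block        = "10.0.4.0/24"
  availability_zone = data.aws_availability_zones.available.names[1]
}

# Internet gateway plus a public route table so the public subnets can reach the internet
resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.two_tier_vpc.id
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.two_tier_vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }
}

resource "aws_route_table_association" "public_1" {
  subnet_id      = aws_subnet.public_1.id
  route_table_id = aws_route_table.public.id
}

resource "aws_route_table_association" "public_2" {
  subnet_id      = aws_subnet.public_2.id
  route_table_id = aws_route_table.public.id
}

# Internet-facing application load balancer spanning both public subnets
resource "aws_lb" "web_alb" {
  name               = "two-tier-alb"
  internal           = false
  load_balancer_type = "application"
  subnets            = [aws_subnet.public_1.id, aws_subnet.public_2.id]
}

resource "aws_lb_target_group" "web_tg" {
  name     = "two-tier-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.two_tier_vpc.id
}

resource "aws_lb_listener" "web_http" {
  load_balancer_arn = aws_lb.web_alb.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.web_tg.arn
  }
}

# Retrieve the latest Amazon Linux 2 AMI ID from the SSM Parameter Store
data "aws_ssm_parameter" "linux_ami" {
  name = "/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2"
}
```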
While I could have organized the main.tf file into separate modules for each component (VPC, EC2, and database), I chose to keep everything within one module for simplicity in this project.
Section 1.2: Capturing Outputs
Next, we will create an outputs.tf file in the modules folder to capture crucial values required to finalize our infrastructure, including subnet IDs and the AMI ID. The structure will resemble the following:
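Something along these lines works; the output names are my own and line up with the module sketch above:

```hcl
# modules/outputs.tf -- output names are illustrative and match the module sketch above
output "public_subnet_1_id" {
  value = aws_subnet.public_1.id
}

output "public_subnet_2_id" {
  value = aws_subnet.public_2.id
}

output "private_subnet_1_id" {
  value = aws_subnet.private_1.id
}

output "private_subnet_2_id" {
  value = aws_subnet.private_2.id
}

output "ami_id" {
  # the provider flags SSM parameter values as sensitive, so the output is marked as well
  value     = data.aws_ssm_parameter.linux_ami.value
  sensitive = true
}
```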
The final task in the modules folder is to define the variable we referenced in the main.tf file for the region. We will name this file variables.tf.
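A minimal version of that file might look like this; the us-east-1 default is just an assumption, so set it to whichever region you deploy to:

```hcl
# modules/variables.tf -- the default region is an assumption
variable "region" {
  description = "AWS region for the deployment"
  type        = string
  default     = "us-east-1"
}
```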
Returning to our root Two-Tier-Architecture folder, we will create another main.tf file that calls the module and also creates the EC2 instances and the database instance. Below, I reference the subnet IDs from the module outputs to place the instances. For the DB subnet group resource, I do not enable Multi-AZ, but I do specify both private subnets to host the database, which allows the deployment to be converted to Multi-AZ later. Note that Multi-AZ for MySQL databases is not included in the AWS Free Tier.
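Here is a sketch of that root main.tf, using the module and output names from the sketches above; the instance type, storage size, and engine settings are illustrative rather than prescriptive.

```hcl
# main.tf (root) -- a sketch; module output names match the module sketch above.
# The AWS provider block lives inside the module, so root-level resources fall back
# to the region configured for your AWS CLI.
module "two_tier" {
  source = "./modules"
  region = var.region
}

# One EC2 instance in each public subnet, using the AMI looked up by the module
resource "aws_instance" "web_1" {
  ami           = module.two_tier.ami_id
  instance_type = "t2.micro"
  subnet_id     = module.two_tier.public_subnet_1_id
}

resource "aws_instance" "web_2" {
  ami           = module.two_tier.ami_id
  instance_type = "t2.micro"
  subnet_id     = module.two_tier.public_subnet_2_id
}

# The subnet group spans both private subnets so the database could later be
# switched to a Multi-AZ deployment
resource "aws_db_subnet_group" "db_subnets" {
  name = "two-tier-db-subnets"
  subnet_ids = [
    module.two_tier.private_subnet_1_id,
    module.two_tier.private_subnet_2_id,
  ]
}

resource "aws_db_instance" "database" {
  allocated_storage    = 20
  engine               = "mysql"
  engine_version       = "8.0"
  instance_class       = "db.t3.micro"
  db_name              = "twotierdb"
  username             = var.db_username
  password             = var.db_password
  db_subnet_group_name = aws_db_subnet_group.db_subnets.name
  multi_az             = false
  skip_final_snapshot  = true
}
```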
Next, we will create a root variables.tf file. The code above references variables for the database instance's username and password, so we need to define them here, along with the primary region variable that is passed to the module and used by its provider and VPC resources. Below is the structure of the root variables.tf file:
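For example (the names db_username and db_password are my own and match the root main.tf sketch above):

```hcl
# variables.tf (root) -- variable names are illustrative; "sensitive" keeps the
# credentials out of Terraform's console output
variable "region" {
  description = "Primary AWS region"
  type        = string
  default     = "us-east-1"
}

variable "db_username" {
  description = "Master username for the RDS instance"
  type        = string
  sensitive   = true
}

variable "db_password" {
  description = "Master password for the RDS instance"
  type        = string
  sensitive   = true
}
```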
To keep the database username and password confidential, I will mark them as sensitive, which prevents Terraform from displaying their values in the output during plan, apply, or destroy. I will store the actual values in a secret.tfvars file.
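A secret.tfvars file along these lines supplies the values; the entries below are placeholders, and the file should be kept out of version control:

```hcl
# secret.tfvars -- placeholder values; do not commit this file
db_username = "adminuser"
db_password = "replace-with-a-strong-password"
```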
Lastly, we will create an outputs.tf file in the root Terraform folder. This file will instruct Terraform to display the private IP address of each EC2 instance, allowing us to verify their creation in AWS.
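For instance, matching the instance labels used in the root main.tf sketch:

```hcl
# outputs.tf (root) -- prints each instance's private IP so we can check them in the console
output "web_1_private_ip" {
  value = aws_instance.web_1.private_ip
}

output "web_2_private_ip" {
  value = aws_instance.web_2.private_ip
}
```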
Before proceeding, I like to run terraform fmt -recursive to ensure my formatting is accurate and the code is tidy. Your project directory should resemble this structure:
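With the file names used above, the tree should look roughly like this:

```
Two-Tier-Architecture/
├── main.tf
├── variables.tf
├── outputs.tf
├── secret.tfvars
└── modules/
    ├── main.tf
    ├── variables.tf
    └── outputs.tf
```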
Section 1.3: Initializing Terraform
Next, ensure you are in the root Terraform folder for the project (my folder is named Two-Tier-Architecture) and run the command terraform init.
I also recommend running terraform validate to confirm that your code configuration is correct.
Since we've stored the database username and password in the secret.tfvars file, we need to pass that file to the plan, apply, and destroy commands with the -var-file flag, as follows:
terraform plan -var-file="secret.tfvars"
You will see a screen full of plan output; at the bottom it should report 11 resources to add, along with the outputs that will be displayed after the apply.
If everything appears satisfactory, you can execute:
terraform apply -var-file="secret.tfvars"
Type "yes" to confirm, and the process will take a few moments as Terraform sets up all your resources in AWS. Once finished, you should receive a confirmation message.
Our state file, terraform.tfstate, has now been generated. This file maps real-world resources to your configuration and maintains metadata.
Section 1.4: Verifying the Setup
Now, let’s navigate to AWS to inspect our newly established architecture. In the EC2 console, you should see two running instances. By clicking on each instance, you can verify that their private IP addresses match the outputs received from Terraform. Additionally, confirm that the EC2 instances are operating within the designated VPC and subnet.
Next, check the load balancers section in the left-hand menu. You should see an active load balancer along with the two public subnets housing your EC2 instances.
Moving on to the VPC console, under the VPC we created, navigate to the subnets. You should find four subnets correctly named within the specified CIDR blocks, alongside a route table and internet gateway.
Finally, let’s visit the RDS service. By clicking on Databases, you should locate the one created by Terraform and confirm the presence of the two specified subnets.
Congratulations, you have successfully built a two-tier architecture using Terraform!
Don't forget to execute terraform destroy -var-file="secret.tfvars" to remove all AWS resources when you're finished.
Thank you for reading this tutorial!
Chapter 2: Exploring Video Resources
In this section, we will delve deeper into two informative YouTube videos that enhance your understanding of deploying a two-tier architecture on AWS using Terraform.
The first video titled "Project 3 - Deploy A 2-tier Application On AWS Using Terraform" offers an in-depth look at the deployment process and best practices.
The second video, "Deploy Two Tier Architecture in AWS using Terraform," provides additional insights and practical examples to solidify your learning.