Create a Three Tier Architecture on AWS using Terraform

Authors
  • Tamilarasu Gurusamy

Objective

Setup Terraform with AWS

  • To set up Terraform so that it can interact with AWS, we need to create a user account dedicated to Terraform in the IAM console.
  • Navigate to IAM
    • Click on Users
    • Click on Create user
    • Enter a name for the user
    • Click on Attach policies directly and choose the AdministratorAccess policy ( not recommended in production settings )
    • Click on Create user
  • Next, we need to generate access keys for the user so that Terraform can use them
  • Navigate to the newly created user in IAM
    • Click on Security credentials
    • Scroll down to Access keys and click on Create access key
    • Choose CLI, check the confirmation box and click on Next
    • Enter tags if you need and click Create access key
    • Copy the Access key and Secret access key and then click on Done
  • Create a file named credentials under the .aws directory in your home directory ( ~/.aws/credentials )
  • Paste the following contents
    [terraform]
    aws_access_key_id=<access_key>
    aws_secret_access_key=<secret access key>
    region=<your-aws-region>
    
    
  • Remember the location of this file; we need to reference it in the Terraform config
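  • To confirm the profile works, you can optionally query the caller identity with the AWS CLI ( a sketch, assuming the AWS CLI is installed; terraform is the profile name from the file above )

```shell
# Verify that the [terraform] profile can authenticate against AWS.
# On success this prints the account ID, user ARN and user ID.
aws sts get-caller-identity --profile terraform
```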
  • Now navigate to the directory where the terraform files will be stored, e.g. three-tier-arch-terraform
  • You can find the full source code here
  • Create a new file named provider.tf and paste the following config
    terraform {
      required_providers {
        aws = {
          source  = "hashicorp/aws"
          version = "5.92.0"
        }
      }
    
      backend "s3" {
        bucket = "your-bucket-name"
        key = "three-tier-arch.tfstate"
        region = "your-aws-region"
    
        dynamodb_table = "terraform-state-locks"
        encrypt = true
      }
    }
    
    provider "aws" {
      shared_credentials_files = ["~/.aws/credentials"]
      profile = "terraform"
    }
    
  • Here we have referenced the file we created using the shared_credentials_files argument. We also need to set the profile; it refers to the name of the section that we defined in the ~/.aws/credentials file.
  • For example, in our case:
    [terraform]
    
  • Before we run terraform init, we need to create the S3 bucket and the DynamoDB table mentioned in the above config
  • We do this so that the state file, which stores the current state of the resources managed by Terraform, lives in a remote backend instead of the default local storage, which does not allow collaboration and is prone to corruption or accidental deletion
  • Navigate to S3
    • Click on Create bucket
    • Give the bucket a name; it must be globally unique across all AWS buckets
    • Keep the rest as default and click on Create bucket
  • Next navigate to DynamoDB
    • Click on Create table
    • Give a name for the table
    • Enter Partition key as LockID and set the type to String
    • Next we need to customize the Read/write capacity settings
    • Set it to Provisioned
    • Set Auto scaling to Off and Provisioned capacity units to 5 for both Read and Write capacity
    • Keep the rest as default and Click Create table
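  • Alternatively, the bucket and the lock table can be created from the AWS CLI ( a sketch, assuming the AWS CLI is configured with sufficient permissions; substitute your own bucket name and region — bucket names must be globally unique )

```shell
# Create the S3 bucket for the remote state (regions other than us-east-1
# require an explicit LocationConstraint).
aws s3api create-bucket \
  --bucket your-bucket-name \
  --region ap-south-1 \
  --create-bucket-configuration LocationConstraint=ap-south-1

# Create the DynamoDB table Terraform uses for state locking.
# The partition key must be named LockID with type String.
aws dynamodb create-table \
  --table-name terraform-state-locks \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5
```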
  • Now we can initialize the project using the command
    terraform init
    

Create a VPC with the required networking components for Each Tier

Create VPC

  • Before we create any resources, we will define some tags that will be applied to every resource created in this project; for that, we use locals in Terraform

    locals {
      common_tags = {
        dev = "true"
        managed_by = "terraform"
      }
    }
    
  • Next, we will merge each resource's own tags with these common tags

  • To create the VPC, we use the aws_vpc resource with cidr_block as the argument

    resource "aws_vpc" "tta_vpc" {
      cidr_block = var.cidr_block // eg: "10.2.0.0/16"
    
      tags = merge(local.common_tags,{
        Name = var.vpc_name
      })
    }
    
  • var.cidr_block and var.vpc_name are variables that need to be set in the terraform.tfvars file before applying the configuration

  • Later we will have a detailed look at the terraform.tfvars and variables.tf files; for now, just remember these are variables that can be set later

Create Public Subnets

  • Now, to create subnets, we use the aws_subnet resource with four arguments
  • First we will create the public subnets, where the two EC2 instances and the Load Balancer that belong to the Web Tier will reside, in a file named public-subnets.tf
  • We create each subnet in a different availability zone, so that if one AZ goes down, traffic can be served from the other
    resource "aws_subnet" "pub-subnet-1" {
      vpc_id = aws_vpc.tta_vpc.id
      cidr_block = var.public_sub_1_cidr_block
      availability_zone = var.subnet1-az // eg: ap-south-1a
      map_public_ip_on_launch = true
    }
    
    resource "aws_subnet" "pub-subnet-2" {
      vpc_id = aws_vpc.tta_vpc.id
      cidr_block = var.public_sub_2_cidr_block
      availability_zone = var.subnet2-az // eg: ap-south-1b
      map_public_ip_on_launch = true
    }
    
  • The map_public_ip_on_launch argument is optional, since requests will only be served via the load balancer. I have enabled it to make it easy to SSH in during setup.
  • If you decide not to assign public IPs to the instances and still need shell access, you can use SSM. More on this feature later.

Create and Configure Public Route Table

  • We need to create a route table that specifies routes for destinations such as the internet.
  • A subnet remains private by default; to make it public, we add a route to the internet through an Internet Gateway.
  • First, to add an internet gateway, create a file internet-gateway.tf and paste the following
    resource "aws_internet_gateway" "tta_igw" {
      vpc_id = aws_vpc.tta_vpc.id
    
      tags = merge(local.common_tags,{
        Name = var.igw_name
      })
    }
    
  • Now we can create a route to internet using the route block with internet gateway
  • Create a file public-route-table.tf
    resource "aws_route_table" "public_access_rt" {
      vpc_id = aws_vpc.tta_vpc.id
    
      route {
        cidr_block = "0.0.0.0/0"
        gateway_id = aws_internet_gateway.tta_igw.id
      }
    
      tags = merge(local.common_tags,{
        Name = var.public_rt_name
      })
    }
    
    resource "aws_route_table_association" "rta-1" {
      subnet_id = aws_subnet.pub-subnet-1.id
      route_table_id = aws_route_table.public_access_rt.id
    }
    
    resource "aws_route_table_association" "rta2" {
      subnet_id = aws_subnet.pub-subnet-2.id
      route_table_id = aws_route_table.public_access_rt.id
    }
    
  • aws_route_table needs two arguments: the VPC to which it is associated and the routes that need to be added. Since 0.0.0.0/0 means the entire internet, we route it through the internet gateway.
  • Once we have created the route that lets traffic flow through the internet gateway, we associate the subnets with the route table using aws_route_table_association, so that instances or other resources in the subnets know how to reach the internet

Create Private Subnets

  • Now we will create private subnets which will also have internet access, but limited: the instances can reach the internet, but nothing on the internet can initiate a connection to them.
  • We can't grant full internet access as in the public subnets, since that increases security risks, but we also can't remove internet access entirely, since updates and other functions may require it.
  • We can achieve this using a NAT Gateway
  • Create 4 private subnets with a file named private-subnet.tf
    resource "aws_subnet" "private_subnet_1" {
      vpc_id = aws_vpc.tta_vpc.id
      cidr_block = var.private_subnet_1_cidr_block
      availability_zone = var.subnet1-az
      map_public_ip_on_launch = false
    
      tags = merge(local.common_tags,{
        Name = "backend-subnet-1"
      })
    }
    
    resource "aws_subnet" "private_subnet_2" {
      vpc_id = aws_vpc.tta_vpc.id
      cidr_block = var.private_subnet_2_cidr_block
      availability_zone = var.subnet2-az
      map_public_ip_on_launch = false
    
      tags = merge(local.common_tags,{
        Name = "backend-subnet-2"
      })
    }
    
    resource "aws_subnet" "private_subnet_3" {
      vpc_id = aws_vpc.tta_vpc.id
      cidr_block = var.private_subnet_3_cidr_block
      availability_zone = var.subnet1-az
      map_public_ip_on_launch = false
    
      tags = merge(local.common_tags,{
        Name = "database-subnet-1"
      })
    }
    
    resource "aws_subnet" "private_subnet_4" {
      vpc_id = aws_vpc.tta_vpc.id
      cidr_block = var.private_subnet_4_cidr_block
      availability_zone = var.subnet2-az
      map_public_ip_on_launch = false
    
      tags = merge(local.common_tags,{
        Name = "database-subnet-2"
      })
    }
    
  • 2 subnets are for the Application Tier and 2 subnets are for the Database Tier
  • We need an Elastic IP for the NAT Gateway to work; we can create it using the aws_eip resource
    resource "aws_eip" "eip_for_nat_gateway" {
      domain = "vpc"
    }
    
  • Now create a NAT gateway using the aws_nat_gateway resource in a file named nat-gateway.tf
    resource "aws_nat_gateway" "tta_nat_gateway" {
      subnet_id = aws_subnet.pub-subnet-1.id
      allocation_id = aws_eip.eip_for_nat_gateway.id
    
      tags = merge(local.common_tags,{
        Name = "tta-nat-gateway"
      })
    
      depends_on = [ aws_internet_gateway.tta_igw ]
    }
    
  • The Elastic IP is linked to the NAT Gateway using the allocation_id argument
  • Since the NAT Gateway relies on the Internet Gateway to forward traffic to the internet, we ensure the Internet Gateway is created first by using the depends_on argument

Create and Configure Private Routes

  • It will be the same as the public route table, the only difference being that internet traffic is routed to the NAT Gateway instead of the Internet Gateway
  • Create a file private-route-table.tf
    resource "aws_route_table" "private_route_table" {
      vpc_id = aws_vpc.tta_vpc.id
    
      route {
        cidr_block = "0.0.0.0/0"
        nat_gateway_id = aws_nat_gateway.tta_nat_gateway.id
      }
    
      tags = merge(local.common_tags,{
        Name = "private-route-table"
      })
    }
    
    resource "aws_route_table_association" "rt-3" {
      subnet_id = aws_subnet.private_subnet_1.id
      route_table_id = aws_route_table.private_route_table.id
    }
    
    resource "aws_route_table_association" "rt-4" {
      subnet_id = aws_subnet.private_subnet_2.id
      route_table_id = aws_route_table.private_route_table.id
    }
    

Create Security Groups for all the Resources

Create Security group for Web Tier Application Load Balancer

  • We can create a security group using the aws_security_group resource, with ingress and egress blocks specifying the inbound and outbound rules for that security group
  • Since it's a public load balancer, we allow inbound traffic on port 80 from anywhere and outbound traffic to anywhere
  • Create a file lb-sg.tf
    resource "aws_security_group" "lb_sg" {
      name = var.lb_sg_name
      description = "Allow http and https"
      vpc_id = aws_vpc.tta_vpc.id
    
      ingress {
        description = "http traffic"
        from_port = 80
        to_port = 80
        protocol = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
      }
    
      egress {
        from_port = 0
        to_port = 0
        protocol = "-1"
        cidr_blocks = ["0.0.0.0/0"]
        ipv6_cidr_blocks = ["::/0"]
      }
    
      tags = merge(local.common_tags,{
        Name = "allow_http"
      })
    }
    

Create Security group for Web Tier EC2 Instances

  • Inbound rules for the Web Tier EC2 instances should allow port 4173 ( used by the frontend application ) from the security group of the Application Load Balancer, and SSH from anywhere, since we will use it for setup
  • Outbound rule can be to anywhere
  • Create a file public-ec2-sg.tf
    resource "aws_security_group" "ec2_public_sg" {
      name        = var.ec2_public_sg_name
      description = "Allow tls for inbound traffic"
      vpc_id = aws_vpc.tta_vpc.id
    
    
      ingress {
        description      = "SSH from anywhere"
        from_port        = 22
        to_port          = 22
        protocol         = "tcp"
        cidr_blocks      = ["0.0.0.0/0"]
      }
    
      ingress {
        description      = "App traffic from the load balancer"
        from_port        = 4173
        to_port          = 4173
        protocol         = "tcp"
        security_groups = [aws_security_group.lb_sg.id]
      }
    
      egress {
        from_port        = 0
        to_port          = 0
        protocol         = "-1"
        cidr_blocks      = ["0.0.0.0/0"]
        ipv6_cidr_blocks = ["::/0"]
      }
      
      tags = {
        Name = "allow_http"
      }
    }
    

Create Security Group for Internal Load Balancer

  • Inbound rules for the Internal Load Balancer allow port 80 from the security group of the public EC2 instances, and outbound traffic to anywhere
  • Create a file private-lb-sg.tf
    resource "aws_security_group" "private_lb_sg" {
      name = var.private_lb_sg_name
      description = "Allow http and https"
      vpc_id = aws_vpc.tta_vpc.id
    
      ingress {
        description = "http traffic"
        from_port = 80
        to_port = 80
        protocol = "tcp"
        security_groups = [ aws_security_group.ec2_public_sg.id ]
      }
    
      egress {
        from_port = 0
        to_port = 0
        protocol = "-1"
        cidr_blocks = ["0.0.0.0/0"]
        ipv6_cidr_blocks = ["::/0"]
      }
    
      tags = merge(local.common_tags,{
        Name = "allow_http"
      })
    }
    

Create Security Group for Private EC2 Instances

  • Inbound rules for the private EC2 instances allow port 3000 from the security group of the internal load balancer, and outbound traffic to anywhere
  • Create a file ec2-sg.tf
    resource "aws_security_group" "ec2_private_sg" {
      name        = var.ec2_private_sg_name
      description = "Allow tls for inbound traffic"
      vpc_id = aws_vpc.tta_vpc.id
    
      ingress {
        description      = "App traffic from the internal load balancer"
        from_port        = 3000
        to_port          = 3000
        protocol         = "tcp"
        # cidr_blocks      = ["0.0.0.0/0"]
        security_groups = [aws_security_group.private_lb_sg.id]
      }
    
      egress {
        from_port        = 0
        to_port          = 0
        protocol         = "-1"
        cidr_blocks      = ["0.0.0.0/0"]
        ipv6_cidr_blocks = ["::/0"]
      }
    
      tags = {
        Name = "allow_http"
      }
    }
    

Create Security Group for RDS Instance

  • Inbound rules for the RDS instance allow port 5432 ( Postgres ) from the security group of the private EC2 instances ( Application Tier ), and outbound traffic to anywhere
  • Create a file db-sg.tf
    resource "aws_security_group" "db-sg" {
      name = var.db_sg_name
      description = "allow traffic for db"
      vpc_id = aws_vpc.tta_vpc.id
    
      ingress {
        from_port = 5432
        to_port = 5432
        protocol = "tcp"
        security_groups = [aws_security_group.ec2_private_sg.id]
      }
    
      egress {
        from_port = 0
        to_port = 0
        protocol = "-1"
        cidr_blocks = ["0.0.0.0/0"]
        ipv6_cidr_blocks = ["::/0"]
      }
    
      tags = merge(local.common_tags,{
        Name = var.db_sg_name
      })
    }
    

Create a Database Tier with an Amazon RDS instance

  • Now we will create an RDS instance with Postgres as the database engine
  • We will configure the database with the following arguments
    • Storage = 10 GB
    • Instance Class = db.t3.micro ( Free Tier )
    • Storage type = gp2
    • Name of the RDS instance ( identifier ) = tta-db
    • Many more general database settings
  • Create a file rds.tf
    resource "aws_db_instance" "tta_db" {
      allocated_storage = 10
      engine = "postgres"
      instance_class = "db.t3.micro"
      username = var.db_username
      password = var.db_password
      skip_final_snapshot = true
      storage_type = "gp2"
      identifier = "tta-db"
      db_name = var.db_name
    
      vpc_security_group_ids = [aws_security_group.db-sg.id]
      db_subnet_group_name = aws_db_subnet_group.db_subnet_group.name
    }
    
    resource "aws_db_subnet_group" "db_subnet_group" {
      name = "db_subnet_group"
      subnet_ids = [aws_subnet.private_subnet_3.id, aws_subnet.private_subnet_4.id]
    
      tags = merge(local.common_tags,{
        Name = "DB Subnet Group"
      })
    }
    
  • We need a subnet group, so we create one first using aws_db_subnet_group with the 2 private subnets created for RDS, and associate it with the RDS instance
  • We also associate the security group created for the RDS instance, which only allows traffic from the private EC2 instances
  • Within the arguments we specify the username, the password and the name of the database that should be created once the instance is up

Create an Application Tier with an Application Load Balancer

Create 4 EC2 Instances

  • To create the ec2 instances, we need to use aws_instance resource with the following arguments
    • AMI
    • Instance Type
    • Subnet
    • Security Group
    • SSH Key Name
    • User Data ( Optional )
  • Create a file named ec2.tf
    resource "aws_instance" "instance-1" {
        ami = var.ami-id
        instance_type = var.instance-type
        subnet_id = aws_subnet.pub-subnet-1.id
        vpc_security_group_ids = [aws_security_group.ec2_public_sg.id]
        key_name = var.key-name
    
        tags = merge(local.common_tags,{
            Name = var.instance_1_name
        })
    
        depends_on = [ aws_lb.tta_internal_lb, aws_lb.tta_lb ]
    
    }
    
    resource "aws_instance" "instance-2" {
        ami = var.ami-id
        instance_type = var.instance-type
        subnet_id = aws_subnet.pub-subnet-2.id
        vpc_security_group_ids = [aws_security_group.ec2_public_sg.id]
        key_name = var.key-name
    
    
        tags = merge(local.common_tags,{
            Name = var.instance_2_name
        })
    
        depends_on = [ aws_lb.tta_internal_lb, aws_lb.tta_lb ]
    }
    
    resource "aws_instance" "instance-3" {
        ami = var.ami-id
        instance_type = var.instance-type
        subnet_id = aws_subnet.private_subnet_1.id
        vpc_security_group_ids = [aws_security_group.ec2_private_sg.id]
        key_name = var.key-name
        iam_instance_profile = "ec2-ssm"
    
        tags = merge(local.common_tags,{
            Name = var.instance_3_name
        })
    }
    
    resource "aws_instance" "instance-4" {
        ami = var.ami-id
        instance_type = var.instance-type
        subnet_id = aws_subnet.private_subnet_2.id
        vpc_security_group_ids = [aws_security_group.ec2_private_sg.id]
        key_name = var.key-name
        iam_instance_profile = "ec2-ssm"
    
        tags = merge(local.common_tags,{
            Name = var.instance_4_name
        })
    }
    
  • Instances 3 and 4 cannot be accessed directly since they do not have public IPs, but they do have internet access through the NAT Gateway, so we will use AWS Systems Manager to access their shells.
  • To do that, we need to create a role with the AmazonSSMManagedInstanceCore policy.
  • Follow this guide to create that role
  • I have used the name ec2-ssm for that role and attached it to instances 3 and 4 using the iam_instance_profile argument.
  • I have already uploaded a public key to AWS, so I will reference the name of that SSH key wherever key_name is used
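  • Once the ec2-ssm role is attached and the SSM agent has registered, a shell on a private instance can be opened with Session Manager ( a sketch; the instance ID below is a placeholder, and the Session Manager plugin for the AWS CLI must be installed )

```shell
# i-0123456789abcdef0 is a placeholder -- use the real ID of instance 3 or 4,
# e.g. looked up in the EC2 console or via aws ec2 describe-instances.
aws ssm start-session --target i-0123456789abcdef0
```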

Create Private Target Group

  • A target group is a group of EC2 instances, containers or other resources that can respond to certain traffic
  • We need to create two target groups: one will contain the 2 private EC2 instances and the other will contain the 2 public EC2 instances, with the following arguments
    • Listening port
    • Protocol
    • Name of the target group
    • VPC ID
  • After creating the target groups, the appropriate instances need to be attached to their respective target group using the aws_lb_target_group_attachment resource with the following arguments
    • Port on the EC2 instance
    • Target ID ( the EC2 instance ID )
    • Target Group ARN
  • These target groups will then be associated with the 2 load balancers that will be created
  • Create a file named private-target-group.tf
    resource "aws_lb_target_group" "tta_lb_private_target_group" {
      name = var.lb_private_target_group_name
      port = 80
      protocol = "HTTP"
      vpc_id = aws_vpc.tta_vpc.id
    }
    
    resource "aws_lb_target_group_attachment" "tg-attachment-3" {
      target_group_arn = aws_lb_target_group.tta_lb_private_target_group.arn
      target_id = aws_instance.instance-3.id
      port = 3000
    }
    
    resource "aws_lb_target_group_attachment" "tg-attachment-4" {
      target_group_arn = aws_lb_target_group.tta_lb_private_target_group.arn
      target_id = aws_instance.instance-4.id
      port = 3000
    }
    

Create an Application Tier Load Balancer

  • Now we will create a load balancer that will reside in the Private Subnets and forward traffic to the Private EC2 instances using the following arguments
    • Name of the load balancer
    • Whether the load balancer is internal or not ( if internal, it is assigned a private IP rather than a public IP )
    • Load Balancer Type ( Application for HTTP traffic, Network for TCP Traffic )
    • Security group
    • Subnets
  • We also need to create a listener, which listens for traffic on a certain port and forwards it to the required target group
  • Create a file named internal_lb.tf
    resource "aws_lb" "tta_internal_lb" {
      name = "tta-internal-lb"
      internal = true
      load_balancer_type = "application"
      security_groups = [aws_security_group.private_lb_sg.id]
      subnets = [aws_subnet.private_subnet_1.id, aws_subnet.private_subnet_2.id]
    
      tags = merge(local.common_tags,{
        Environment = "production"
      })
    }
    
    resource "aws_lb_listener" "tta_internal_lb_listener" {
      load_balancer_arn = aws_lb.tta_internal_lb.arn
      port = "80"
      protocol = "HTTP"
    
      default_action {
        type = "forward"
        target_group_arn = aws_lb_target_group.tta_lb_private_target_group.arn
      }
    }
    

Create a Web Tier with an Application Load Balancer

Create Public Target Group

  • The procedure is the same as for the private target group, except the instance ports change
  • To create a target group, we need to use aws_lb_target_group resource. Create a file named target-group.tf
    resource "aws_lb_target_group" "tta_lb_target_group" {
      name = var.lb_target_group_name
      port = 80
      protocol = "HTTP"
      vpc_id = aws_vpc.tta_vpc.id
    }
    
    resource "aws_lb_target_group_attachment" "tg-attachment-1" {
      target_group_arn = aws_lb_target_group.tta_lb_target_group.arn
      target_id = aws_instance.instance-1.id
      port = 4173
    }
    
    resource "aws_lb_target_group_attachment" "tg-attachment-2" {
      target_group_arn = aws_lb_target_group.tta_lb_target_group.arn
      target_id = aws_instance.instance-2.id
      port = 4173
    }
    

Create Web Tier Load Balancer

  • We will utilise the public target group that we created earlier for this load balancer
  • Create a file named alb.tf
    resource "aws_lb" "tta_lb" {
      name = "tta-lb"
      internal = false
      load_balancer_type = "application"
      security_groups = [aws_security_group.lb_sg.id]
      subnets = [aws_subnet.pub-subnet-1.id, aws_subnet.pub-subnet-2.id]
    
      tags = merge(local.common_tags,{
        Environment = "production"
      })
    }
    
    resource "aws_lb_listener" "tta_lb_listener" {
      load_balancer_arn = aws_lb.tta_lb.arn
      port = "80"
      protocol = "HTTP"
    
      default_action {
        type = "forward"
        target_group_arn = aws_lb_target_group.tta_lb_target_group.arn
      }
    }
    

Variables and Outputs

Variables

  • In order to define the name and data type of all the variables used, we need to create a separate file containing them.

  • Create a file named variables.tf

    variable "cidr_block" {
      type = string
      description = "CIDR Block for the VPC"
    }
    
    variable "vpc_name" {
      type = string
      description = "Name of the VPC"
    }
    
    variable "public_sub_1_cidr_block" {
      type = string
      description = "CIDR block for Public Subnet 1"
    }
    
    variable "public_sub_2_cidr_block" {
      type = string
      description = "CIDR block for Public Subnet 2"
    }
    
    variable "subnet1-az" {
      type = string
      description = "AZ for subnet 1"
    }
    
    variable "subnet2-az" {
      type = string
      description = "AZ for subnet 2"
    }
    
    variable "public_rt_name" {
      type = string
      description = "Name for Public route table"
    }
    
    variable "igw_name" {
      type = string
      description = "Name for the Internet Gateway"
    }
    
    variable "private_subnet_1_cidr_block" {
      type = string
      description = "CIDR Block for Private Subnet 1"
    }
    
    variable "private_subnet_2_cidr_block" {
      type = string
      description = "CIDR Block for Private Subnet 2"
    }
    
    variable "private_subnet_3_cidr_block" {
      type = string
      description = "CIDR Block for Private Subnet 3"
    }
    
    variable "private_subnet_4_cidr_block" {
      type = string
      description = "CIDR Block for Private Subnet 4"
    }
    
    variable "db_sg_name" {
      type = string
      description = "Security group name for DB"
    }
    
    variable "lb_sg_name" {
      type = string
      description = "Security group name for LB"
    }
    
    variable "private_lb_sg_name" {
      type = string
      description = "Security group name for LB"
    }
    
    variable "ec2_public_sg_name" {
      type = string
      description = "Security group name for EC2"
    }
    
    variable "ec2_private_sg_name" {
      type = string
      description = "Security group name for EC2"
    }
    
    variable "lb_target_group_name" {
      type = string
      description = "Name for the LB target group"
    }
    
    variable "lb_private_target_group_name" {
      type = string
      description = "Name for the LB target group"
    }
    
    variable "ami-id" {
      type = string
      description = "AMI ID for EC2"
    }
    
    variable "instance-type" {
      type = string
      description = "EC2 instance type"
    }
    
    variable "key-name" {
      type = string
      description = "SSH Key name"
    }
    
    variable "instance_1_name" {
      type = string
      description = "Name for instance 1"
    }
    
    variable "instance_2_name" {
      type = string
      description = "Name for instance 2"
    }
    
    variable "instance_3_name" {
      type = string
      description = "Name for instance 3"
    }
    
    variable "instance_4_name" {
      type = string
      description = "Name for instance 4"
    }
    
    variable "db_username" {
      type = string
      description = "Username for the rds database"
    }
    
    variable "db_password" {
      type = string
      description = "Password for the rds database"
    }
    
    variable "db_name" {
      type = string
      description = "Name of the DB to be created"
    }
    
  • I have fetched the AMI ID from the Create EC2 instance page. In this case I have used Ubuntu

  • Next, to define the actual values for all the variables defined above, we create another file named terraform.tfvars. This file is generally excluded from git since it contains sensitive information

    cidr_block = "10.3.0.0/16"
    vpc_name = "tta_vpc"
    public_sub_1_cidr_block = "10.3.1.0/24"
    public_sub_2_cidr_block = "10.3.2.0/24"
    subnet1-az = "ap-south-1a"
    subnet2-az = "ap-south-1b"
    public_rt_name = "public-route-table"
    igw_name = "tta-igw"
    private_subnet_1_cidr_block = "10.3.3.0/24"
    private_subnet_2_cidr_block = "10.3.4.0/24"
    private_subnet_3_cidr_block = "10.3.5.0/24"
    private_subnet_4_cidr_block = "10.3.6.0/24"
    db_sg_name = "db-sg"
    lb_sg_name = "lb-sg"
    private_lb_sg_name = "private-lb-sg"
    ec2_public_sg_name = "ec2-public-sg"
    ec2_private_sg_name = "ec2-private-sg"
    ami-id = "ami-0e35ddab05955cf57"
    lb_target_group_name = "tta-lb-tg"
    lb_private_target_group_name = "tta-internal-lb-tg"
    instance-type = "t2.micro"
    key-name = "ssh-key-name"
    instance_1_name = "instance-1"
    instance_2_name = "instance-2"
    instance_3_name = "instance-3"
    instance_4_name = "instance-4"
    db_username = "your-username"
    db_password = "your-password"
    db_name = "postgres"
    
  • Feel free to modify the values in this file, but make sure that the resources referenced here, like the SSH key ( key-name ) and the AMI ( ami-id ), exist before you run the terraform apply command

  • If you change the database engine, you also need to update the rules where port 5432 is mentioned to suit your needs
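  • Before applying, it is worth sanity-checking that every subnet CIDR falls inside the VPC CIDR and that no two subnets overlap. A small sketch using Python's standard ipaddress module, with the example values from terraform.tfvars above:

```python
import ipaddress
from itertools import combinations

# Example values from terraform.tfvars above
vpc = ipaddress.ip_network("10.3.0.0/16")
subnets = {
    "public-1":   "10.3.1.0/24",
    "public-2":   "10.3.2.0/24",
    "backend-1":  "10.3.3.0/24",
    "backend-2":  "10.3.4.0/24",
    "database-1": "10.3.5.0/24",
    "database-2": "10.3.6.0/24",
}
nets = {name: ipaddress.ip_network(cidr) for name, cidr in subnets.items()}

# Every subnet must be contained in the VPC block ...
for name, net in nets.items():
    assert net.subnet_of(vpc), f"{name} is outside the VPC CIDR"

# ... and no two subnets may overlap.
for (a, na), (b, nb) in combinations(nets.items(), 2):
    assert not na.overlaps(nb), f"{a} overlaps {b}"

print("subnet plan OK")
```

If you change any CIDR in terraform.tfvars, re-running this check catches overlaps before AWS rejects the subnet creation.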

Outputs

  • Outputs are used to display values related to resources, e.g. the public IP of an EC2 instance or the URL of a load balancer
  • We need the following set of values so that we can work with the infrastructure that is created
    • Public IP of Web Tier Instances
    • Url of Public and Private Load Balancer
    • Url of the RDS Instance
  • Create a file named outputs.tf
    output "db_endpoint" {
      value = aws_db_instance.tta_db.address
    }
    
    output "instance-1-ip" {
      value = aws_instance.instance-1.public_ip
    }
    
    output "instance-2-ip" {
      value = aws_instance.instance-2.public_ip
    }
    
    output "public-loadbalancer-url" {
      value = aws_lb.tta_lb.dns_name
    }
    
    output "private-loadbalancer-url" {
      value = aws_lb.tta_internal_lb.dns_name
    }
    

Terraform Apply

  • Once all the files are created, we can run terraform plan to verify what resources will be created
  • This command lists all the changes that will be made and can be considered a dry run
  • Once the changes are verified, we run terraform apply
  • It will prompt for approval; type yes and press Enter, then wait until all the resources are created
  • Once the resources are created, we will be presented with the output values defined in the outputs.tf file
  • With this, we have successfully deployed a three tier architecture on AWS using Terraform
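  • The full workflow described above, as commands ( a sketch; terraform destroy tears everything down when you are finished, to avoid ongoing charges )

```shell
terraform init      # configure the S3 backend and download the AWS provider
terraform plan      # dry run: list every resource that would be created
terraform apply     # create the resources; type "yes" at the prompt
terraform output    # re-print the values from outputs.tf at any time
terraform destroy   # tear everything down once you are done
```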
