Deploy Node.js on ECS Fargate using Terraform, with support for Blue/Green deployment using CodeDeploy.

Arnav
6 min read · May 8, 2021


With serverless architecture being the hot new topic, I decided to shift from our EC2 servers to ECS Fargate. Serverless lowers administrative overhead, takes server maintenance off developers’ plates, and cuts server costs.

Terraform is an infrastructure orchestration tool (also known as “infrastructure as code” (IaC)). Using Terraform, you declare every single piece of your infrastructure once, in static files, which lets you deploy and destroy cloud infrastructure easily, make incremental changes, do rollbacks, version your infrastructure, and so on. This minimizes human interaction and thus reduces the risk of things going wrong when done manually.

I followed a modular approach, dividing each AWS component into its own module and using a main file to call them. This way you can easily add or remove any module, and it also improves code readability.

This is how each module folder looks, with a main.tf and a variables.tf. The main file contains the code, and the variables file contains the variables I am passing to the main file externally.
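Roughly, the repository ends up looking like this (the folder names match the module source paths used in the root main.tf later in the article):

.
├── main.tf
├── variables.tf
├── terraform.tfvars
├── vpc
│   ├── main.tf
│   └── variables.tf
├── security-groups
│   ├── main.tf
│   └── variables.tf
├── alb
│   ├── main.tf
│   └── variables.tf
└── ecs
    ├── main.tf
    └── variables.tf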

So let’s get started by building the VPC first, as most of our services will live inside it.

What’s happening here is that we create a VPC called main, pass a CIDR block to it, and enable DNS support. Tags are just extra key-value pairs that uniquely identify your resource and distinguish it from other similar resources. We then create an IGW, passing it the VPC ID, so that resources inside the VPC can reach the internet. We create Elastic IP addresses (aws_eip) and attach them to NAT gateways placed in the public subnets. Then we create public and private subnets, one of each per availability zone, and one public and one private route table to map traffic inside the respective subnets. The public route table routes out through the IGW, while the private route table sends traffic from the private subnets out through the NAT gateways. At the end we create a CloudWatch log group to monitor the traffic flow. The output values at the end are the ones we will reference in other modules.
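As a minimal sketch of what the vpc module can look like (resource names are illustrative, and the CloudWatch flow log is left out for brevity; the inputs and outputs match what the root main.tf shown later expects):

# vpc/main.tf -- minimal sketch, not the full module
resource "aws_vpc" "main" {
  cidr_block           = var.cidr
  enable_dns_support   = true
  enable_dns_hostnames = true
  tags                 = { Name = "${var.name}-vpc-${var.environment}" }
}

resource "aws_internet_gateway" "main" {
  vpc_id = aws_vpc.main.id
}

resource "aws_subnet" "public" {
  count                   = length(var.public_subnets)
  vpc_id                  = aws_vpc.main.id
  cidr_block              = element(var.public_subnets, count.index)
  availability_zone       = element(var.availability_zones, count.index)
  map_public_ip_on_launch = true
}

resource "aws_subnet" "private" {
  count             = length(var.private_subnets)
  vpc_id            = aws_vpc.main.id
  cidr_block        = element(var.private_subnets, count.index)
  availability_zone = element(var.availability_zones, count.index)
}

# One Elastic IP and NAT gateway per AZ so the private subnets can reach the internet
resource "aws_eip" "nat" {
  count = length(var.private_subnets)
  vpc   = true
}

resource "aws_nat_gateway" "main" {
  count         = length(var.private_subnets)
  allocation_id = element(aws_eip.nat.*.id, count.index)
  subnet_id     = element(aws_subnet.public.*.id, count.index)
}

# Public subnets route out through the IGW, private ones through the NAT gateways
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.main.id
  }
}

resource "aws_route_table" "private" {
  count  = length(var.private_subnets)
  vpc_id = aws_vpc.main.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = element(aws_nat_gateway.main.*.id, count.index)
  }
}

resource "aws_route_table_association" "public" {
  count          = length(var.public_subnets)
  subnet_id      = element(aws_subnet.public.*.id, count.index)
  route_table_id = aws_route_table.public.id
}

resource "aws_route_table_association" "private" {
  count          = length(var.private_subnets)
  subnet_id      = element(aws_subnet.private.*.id, count.index)
  route_table_id = element(aws_route_table.private.*.id, count.index)
}

# Outputs referenced by the other modules
output "id"              { value = aws_vpc.main.id }
output "cidr"            { value = aws_vpc.main.cidr_block }
output "public_subnets"  { value = aws_subnet.public.*.id }
output "private_subnets" { value = aws_subnet.private.*.id }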

Let’s move on to creating the security groups for our ECS tasks and the Application Load Balancer.

The security group for the ALB is quite straightforward: we allow incoming TCP on ports 80 and 443 and all outgoing traffic. For the ECS tasks we allow all incoming traffic from within the VPC. And as usual, we export these two for later use.
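A minimal sketch of sg.tf under those assumptions (resource names are illustrative; the alb and ecs_tasks outputs are what the root main.tf references):

# security-groups/main.tf -- minimal sketch
resource "aws_security_group" "alb" {
  name   = "${var.name}-sg-alb-${var.environment}"
  vpc_id = var.vpc_id

  ingress {
    protocol    = "tcp"
    from_port   = 80
    to_port     = 80
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    protocol    = "tcp"
    from_port   = 443
    to_port     = 443
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_security_group" "ecs_tasks" {
  name   = "${var.name}-sg-task-${var.environment}"
  vpc_id = var.vpc_id

  # Only traffic originating inside the VPC can reach the tasks
  ingress {
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    cidr_blocks = [var.vpc_cidr]
  }

  egress {
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]
  }
}

output "alb"       { value = aws_security_group.alb.id }
output "ecs_tasks" { value = aws_security_group.ecs_tasks.id }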

Let’s move on to one of our main modules, the Application Load Balancer.

Since this load balancer is public facing, we give it access only to the public subnets. You can see we have also passed it the security group we exported from our sg.tf module. The next part is a bit tricky: it is how we support blue/green deployment on the LB side.

We create two target groups with IP targeting on port 80. Then we create two listeners: one on port 80, which will be the main listener for our first target group, and a second listener on port 8080 (or any other port), which will be our test listener. The code for using SSL is also there, but for simplicity we are going to use HTTP instead of HTTPS.
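Here is a minimal sketch of the alb module (the SSL listener is omitted and resource names are illustrative; the two target groups and the extra test listener are the parts CodeDeploy needs):

# alb/main.tf -- minimal sketch of the blue/green listener setup
resource "aws_lb" "main" {
  name               = "${var.name}-alb-${var.environment}"
  internal           = false
  load_balancer_type = "application"
  security_groups    = var.alb_security_groups
  subnets            = var.subnets
}

# Two identical target groups; CodeDeploy shifts traffic between them
resource "aws_alb_target_group" "main" {
  count       = 2
  name        = "${var.name}-tg-${count.index}-${var.environment}"
  port        = 80
  protocol    = "HTTP"
  vpc_id      = var.vpc_id
  target_type = "ip"

  health_check {
    path = var.health_check_path
  }
}

# Production listener on port 80 points at the first target group
resource "aws_alb_listener" "http" {
  load_balancer_arn = aws_lb.main.id
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_alb_target_group.main[0].arn
  }
}

# Test listener on port 8080, used by CodeDeploy to verify the new task set
resource "aws_alb_listener" "test" {
  load_balancer_arn = aws_lb.main.id
  port              = 8080
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_alb_target_group.main[1].arn
  }
}

output "aws_alb_target_group" { value = aws_alb_target_group.main.*.arn }
output "aws_alb_listener"     { value = aws_alb_listener.http.arn }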

Now that we have the skeleton ready for our app, we finally need to deploy the ECS container. To understand how ECS Fargate works you can take a look at the AWS whitepapers, but to explain it briefly: running tasks are containers, so if there is one task running, that means one instance of your container is running. For a detailed explanation, refer to this article.

These are the things happening, starting from the top (condensed sketches of the code follow the list):

  1. We create IAM roles: a task execution role, which ECS uses to spin up the task, and a task role, which the running container uses. We also create a policy that allows access to any env variables stored in AWS Parameter Store, then attach these policies to the roles.
  2. We then create a task definition for FARGATE, where we define the CPU and memory and attach the task role and execution role we made earlier. Inside the container definition we can use the environment block to pass any environment variables used by the app, fetching them from AWS SSM. Finally, a log group is attached for logging the application.
  3. We then create the service, passing in the security group we created earlier and the private subnets the service will run in. We also pass the ALB target group we created, along with the port of the Docker container our application runs on. The deployment_controller is set to CODE_DEPLOY, which will handle our zero-downtime blue/green deployment.
  4. Since we want our app to scale according to the load on it, we define an auto scaling target and policies that monitor CPU and RAM usage and scale accordingly.
  5. After this we set up an IAM policy for CodeDeploy and grant it a role to manage ECS tasks and the ALB configuration. Then we create the CodeDeploy app. Inside this app we create a deployment group, which will handle our ECS deployments every time we push an updated version of our container to ECR or any other container registry.
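A condensed sketch of steps 1 to 4 (the Parameter Store policy is omitted, and one role is reused as both task and execution role for brevity; the real module keeps them separate):

# ecs/main.tf -- condensed sketch of steps 1 to 4
resource "aws_ecs_cluster" "main" {
  name = "${var.name}-cluster-${var.environment}"
}

# Step 1: the execution role; the real module also creates a separate
# task role and a policy for reading env variables from Parameter Store
resource "aws_iam_role" "execution" {
  name = "${var.name}-task-execution-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "ecs-tasks.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "execution" {
  role       = aws_iam_role.execution.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}

# Step 2: the Fargate task definition with the container definition inline
resource "aws_ecs_task_definition" "main" {
  family                   = "${var.name}-task-${var.environment}"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = var.container_cpu
  memory                   = var.container_memory
  execution_role_arn       = aws_iam_role.execution.arn
  task_role_arn            = aws_iam_role.execution.arn # separate role in the real module

  container_definitions = jsonencode([{
    name         = "${var.name}-container-${var.environment}"
    image        = "${var.container_image}:${var.tag}"
    essential    = true
    environment  = var.container_environment
    portMappings = [{
      protocol      = "tcp"
      containerPort = var.container_port
      hostPort      = var.container_port
    }]
  }])
}

# Step 3: the service, with deployments handed over to CodeDeploy
resource "aws_ecs_service" "main" {
  name            = "${var.name}-service-${var.environment}"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.main.arn
  desired_count   = var.service_desired_count
  launch_type     = "FARGATE"

  deployment_controller {
    type = "CODE_DEPLOY"
  }

  network_configuration {
    security_groups = var.ecs_service_security_groups
    subnets         = var.subnets
  }

  load_balancer {
    target_group_arn = element(var.aws_alb_target_group_arn, 0)
    container_name   = "${var.name}-container-${var.environment}"
    container_port   = var.container_port
  }

  # CodeDeploy swaps the task definition on every deployment
  lifecycle {
    ignore_changes = [task_definition]
  }
}

# Step 4: scale on average CPU; a memory policy works the same way
resource "aws_appautoscaling_target" "ecs" {
  max_capacity       = 4
  min_capacity       = var.service_desired_count
  resource_id        = "service/${aws_ecs_cluster.main.name}/${aws_ecs_service.main.name}"
  scalable_dimension = "ecs:service:DesiredCount"
  service_namespace  = "ecs"
}

resource "aws_appautoscaling_policy" "cpu" {
  name               = "${var.name}-cpu-autoscaling"
  policy_type        = "TargetTrackingScaling"
  resource_id        = aws_appautoscaling_target.ecs.resource_id
  scalable_dimension = aws_appautoscaling_target.ecs.scalable_dimension
  service_namespace  = aws_appautoscaling_target.ecs.service_namespace

  target_tracking_scaling_policy_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ECSServiceAverageCPUUtilization"
    }
    target_value = 80
  }
}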
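And a sketch of step 5. The deployment group wires the ECS service to the two target groups and the production listener; the target group names here are assumed to match the ones created in the alb module:

# ecs/deployment.tf -- sketch of the CodeDeploy pieces
resource "aws_iam_role" "codedeploy" {
  name = "${var.name}-codedeploy-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "codedeploy.amazonaws.com" }
    }]
  })
}

# AWS managed policy that lets CodeDeploy manage ECS task sets and the ALB
resource "aws_iam_role_policy_attachment" "codedeploy" {
  role       = aws_iam_role.codedeploy.name
  policy_arn = "arn:aws:iam::aws:policy/AWSCodeDeployRoleForECS"
}

resource "aws_codedeploy_app" "main" {
  compute_platform = "ECS"
  name             = "${var.name}-deploy-${var.environment}"
}

resource "aws_codedeploy_deployment_group" "main" {
  app_name               = aws_codedeploy_app.main.name
  deployment_group_name  = "${var.name}-deploy-group-${var.environment}"
  deployment_config_name = "CodeDeployDefault.ECSAllAtOnce"
  service_role_arn       = aws_iam_role.codedeploy.arn

  auto_rollback_configuration {
    enabled = true
    events  = ["DEPLOYMENT_FAILURE"]
  }

  blue_green_deployment_config {
    deployment_ready_option {
      action_on_timeout = "CONTINUE_DEPLOYMENT"
    }
    terminate_blue_instances_on_deployment_success {
      action                           = "TERMINATE"
      termination_wait_time_in_minutes = 5
    }
  }

  deployment_style {
    deployment_option = "WITH_TRAFFIC_CONTROL"
    deployment_type   = "BLUE_GREEN"
  }

  ecs_service {
    cluster_name = aws_ecs_cluster.main.name
    service_name = aws_ecs_service.main.name
  }

  load_balancer_info {
    target_group_pair_info {
      prod_traffic_route {
        listener_arns = [var.aws_alb_listener]
      }
      # Must match the two target groups created in the alb module
      target_group {
        name = "${var.name}-tg-0-${var.environment}"
      }
      target_group {
        name = "${var.name}-tg-1-${var.environment}"
      }
    }
  }
}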

Now let’s call all these modules from one main module.

In the root of our folder, create these files. Terraform will use the main file as the starting point; the rest of the files are just variable files used for storing some default values.

This is my tfvars file, where I have stored the subnet CIDRs and the ECR repo link.

name                   = "backend"
availability_zones     = ["ap-south-1a", "ap-south-1b"]
private_subnets        = ["10.0.0.0/20", "10.0.32.0/20"]
public_subnets         = ["10.0.16.0/20", "10.0.48.0/20"]
aws_ecr_repository_url = "xxxxxxxxxxx.dkr.ecr.ap-south-1.amazonaws.com/xxxxxxx"

This is what the main file looks like:

provider "aws" {shared_credentials_file = "$HOME/.aws/credentials"region     = var.aws-regionversion    = "~> 3.37.0"}data "local_file" "google_key" {filename = "${path.module}/service.json"}terraform {backend "s3" {bucket  = "configfilebucket"key     = "testterraform.tfstate"region  = "ap-south-1"}}module "vpc" {source             = "./vpc"name               = var.namecidr               = var.cidrprivate_subnets    = var.private_subnetspublic_subnets     = var.public_subnetsavailability_zones = var.availability_zonesenvironment        = var.environment}module "security_groups" {source         = "./security-groups"name           = var.namevpc_id         = module.vpc.idenvironment    = var.environmentcontainer_port = var.container_portvpc_cidr       = module.vpc.cidr}module "alb" {source              = "./alb"name                = var.namevpc_id              = module.vpc.idsubnets             = module.vpc.public_subnetsenvironment         = var.environmentalb_security_groups = [module.security_groups.alb]#alb_tls_cert_arn    = var.tsl_certificate_arnhealth_check_path   = var.health_check_path}module "ecs" {source                      = "./ecs"name                        = var.nameenvironment                 = var.environmentregion                      = var.aws-regionsubnets                     = module.vpc.private_subnetsaws_alb_target_group_arn    = module.alb.aws_alb_target_groupaws_alb_listener            = module.alb.aws_alb_listenerecs_service_security_groups = [module.security_groups.ecs_tasks]container_port              = var.container_portcontainer_cpu               = var.container_cpucontainer_memory            = var.container_memoryservice_desired_count       = var.service_desired_counttag                         = var.build_tagcontainer_environment = [{ name = "LOG_LEVEL",value = "DEBUG" },{ name = "PORT",value = var.container_port }]container_image = var.aws_ecr_repository_url}

To run this, just hit terraform init and then terraform apply -var example_var=value. I usually pass most of the variables through the CLI.
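For example, using the variable names from the main file above (the values here are placeholders):

terraform init
terraform apply \
  -var="build_tag=latest" \
  -var="aws-region=ap-south-1"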

Now, to trigger a blue/green deployment, first go to the service and update it with force new deployment. You will see a deployment created in CodeDeploy. Copy the YAML code (the appspec) of that deployment and upload it to the S3 bucket you are using for the config. Then, whenever you push an updated image to your ECR repo, you just have to create a new deployment against that appspec.
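For reference, an ECS appspec looks roughly like this (the task definition ARN, container name, and port below are placeholders for your own values):

version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: "arn:aws:ecs:ap-south-1:<account-id>:task-definition/<task-family>:<revision>"
        LoadBalancerInfo:
          ContainerName: "<container-name>"
          ContainerPort: <container-port>

The deployment itself is then created with the aws CLI: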

aws deploy create-deployment \
  --application-name dev-test-deploy \
  --deployment-config-name CodeDeployDefault.ECSAllAtOnce \
  --deployment-group-name example-deploy-group \
  --s3-location bucket=configfilebucket,bundleType=yaml,key=appspec.yaml

This will trigger a new deployment with the latest image.

Thank you for reading this. If I helped you in any way, do leave a clap or a comment.
