AWS DevOps Blog

Use AWS CodeDeploy to Deploy to Amazon EC2 Instances Behind an Elastic Load Balancer

AWS CodeDeploy is a new service that makes it easy to deploy application updates to Amazon EC2 instances. CodeDeploy is targeted at customers who manage their EC2 instances directly, rather than those who use an application management service such as AWS Elastic Beanstalk or AWS OpsWorks, which have their own built-in deployment features. CodeDeploy allows developers and administrators to centrally control and track their application deployments across their different development, testing, and production environments.
 
Let’s assume you have an application architecture designed for high availability that includes an Elastic Load Balancer in front of multiple application servers belonging to an Auto Scaling group. Elastic Load Balancing enables you to distribute incoming traffic over multiple servers, and Auto Scaling allows you to scale your EC2 capacity up or down automatically according to your needs. In this blog post, we will show how you can use CodeDeploy to avoid downtime when updating the code running on your application servers in such an environment. We will use the CodeDeploy rolling updates feature so that a minimum amount of capacity is always available to serve traffic, and a simple set of scripts to take each EC2 instance out of the load balancer while new code is deployed on it.
 
So let’s get started. We are going to:
  1. Set up the environment described above
  2. Create your artifact bundle, which includes the deployment scripts, and upload it to Amazon S3
  3. Create an AWS CodeDeploy application and a deployment group
  4. Start the zero-downtime deployment
  5. Monitor your deployment
 

1. Set up the environment

Let’s get started by setting up some AWS resources.
 
To simplify the setup process, you can use a sample AWS CloudFormation template that sets up the following resources for you:
  • An Auto Scaling group and its launch configuration. By default, the Auto Scaling group launches three Amazon EC2 instances. The AWS CloudFormation template installs Apache on each of these instances to run a sample website. It also installs the AWS CodeDeploy Agent, which performs the deployments on the instance. The template creates a service role that grants AWS CodeDeploy access to add deployment lifecycle event hooks to your Auto Scaling group so that it can kick off a deployment whenever Auto Scaling launches a new Amazon EC2 instance.
  • The Auto Scaling group spins up Amazon EC2 instances and monitors their health for you. The Auto Scaling group spans all Availability Zones within the region for fault tolerance.
  • An Elastic Load Balancing load balancer, which distributes the traffic across all of the Amazon EC2 instances in the Auto Scaling group.
 
Simply execute the following command using the AWS Command Line Interface (AWS CLI), or you can create an AWS CloudFormation stack with the AWS Management Console by using the value of the --template-url option shown here:
 
aws cloudformation create-stack \
  --stack-name "CodeDeploySampleELBIntegrationStack" \
  --template-url "http://s3.amazonaws.com/aws-codedeploy-us-east-1/templates/latest/CodeDeploy_SampleCF_ELB_Integration.json" \
  --capabilities "CAPABILITY_IAM" \
  --parameters "ParameterKey=KeyName,ParameterValue=<my-key-pair>"
 

Note: AWS CloudFormation will change your AWS account’s security configuration by adding two roles. These roles enable AWS CodeDeploy to perform actions on your AWS account’s behalf, such as identifying Amazon EC2 instances by their tags or Auto Scaling group names and deploying applications from Amazon S3 buckets to instances. For more information, see the AWS CodeDeploy service role and IAM instance profile documentation.
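
The stack takes a few minutes to create. If you want to script the setup end to end, you can wait for the stack to finish and then list its outputs (the website URL, the AWS CodeDeploy service role ARN, and the Auto Scaling group name, all of which are used later in this post):

aws cloudformation wait stack-create-complete \
  --stack-name "CodeDeploySampleELBIntegrationStack"

aws cloudformation describe-stacks \
  --stack-name "CodeDeploySampleELBIntegrationStack" \
  --output table \
  --query 'Stacks[0].Outputs'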

 

2. Create your artifact bundle, which includes the deployment scripts, and upload it to Amazon S3

You can use the following sample artifact bundle in Amazon S3, which includes everything you need: the Application Specification (AppSpec) file, deployment scripts, and a sample web page:

s3://aws-codedeploy-us-east-1/samples/latest/SampleApp_ELB_Integration.zip

This artifact bundle contains the deployment artifacts and a set of scripts that call the Auto Scaling EnterStandby and ExitStandby APIs to deregister an Amazon EC2 instance from the load balancer and register it again.
 
The installation scripts and deployment artifacts are bundled together with a CodeDeploy AppSpec file. The AppSpec file must be placed in the root of your archive and describes where to copy the application and how to execute installation scripts. 
 
Here is the appspec.yml file from the sample artifact bundle:
 
version: 0.0
os: linux
files:
  - source: /html
    destination: /var/www/html
hooks:
  BeforeInstall:
    - location: scripts/deregister_from_elb.sh
      timeout: 400
    - location: scripts/stop_server.sh
      timeout: 120
      runas: root
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 120
      runas: root
    - location: scripts/register_with_elb.sh
      timeout: 120
 
The commands defined in the AppSpec file are executed in the following order (see AWS CodeDeploy AppSpec File Reference for more details):
  • BeforeInstall deployment lifecycle event
    First, it deregisters the instance from the load balancer (deregister_from_elb.sh). I have increased the timeout for the deregistration script to 400 seconds, above the 300 seconds that the load balancer waits by default for all connections to close when connection draining is enabled.
    After that, it stops the Apache web server (stop_server.sh).
  • Install deployment lifecycle event
    Next, the host agent copies the HTML pages defined in the ‘files’ section from the ‘/html’ folder in the archive to ‘/var/www/html’ on the server.
  • ApplicationStart deployment lifecycle event
    It starts the Apache Web Server (start_server.sh).
    It then registers the instance with the load balancer (register_with_elb.sh).
In case you are wondering why I used the BeforeInstall deployment lifecycle event instead of ApplicationStop: the ApplicationStop event always executes the scripts from the previous deployment bundle. If you are deploying with AWS CodeDeploy for the first time, there is no previous bundle, so the instance would not get deregistered from the load balancer.
 
 
Here’s what the deregister script does, step by step:
  • The script gets the instance ID (and AWS region) from the Amazon EC2 metadata service.
  • It checks if the instance is part of an Auto Scaling group.
  • After that the script deregisters the instance from the load balancer by putting the instance into standby mode in the Auto Scaling group.
  • The script keeps polling the Auto Scaling API every second until the instance is in standby mode, which means it has been deregistered from the load balancer.
  • The deregistration might take a while if connection draining is enabled, because the server has to finish processing ongoing requests before the deployment can continue.
 
For example, the following is the section of the deregister_from_elb.sh sample script that removes the Amazon EC2 instance from the load balancer:
 
# Get this instance's ID
INSTANCE_ID=$(get_instance_id)
if [ $? != 0 -o -z "$INSTANCE_ID" ]; then
  error_exit "Unable to get this instance's ID; cannot continue."
fi
   
msg "Checking if instance $INSTANCE_ID is part of an AutoScaling group"
asg=$(autoscaling_group_name $INSTANCE_ID)
if [ $? == 0 -a -n "$asg" ]; then
  msg "Found AutoScaling group for instance $INSTANCE_ID: $asg"
  
  msg "Attempting to put instance into Standby"
  autoscaling_enter_standby $INSTANCE_ID $asg
  if [ $? != 0 ]; then
      error_exit "Failed to move instance into standby"
  else
      msg "Instance is in standby"
      exit 0
  fi
fi
 
The ‘autoscaling_enter_standby’ function is defined in the common_functions.sh sample script as follows:
 
autoscaling_enter_standby() {
  local instance_id=$1
  local asg_name=$2
  
  msg "Putting instance $instance_id into Standby"
  $AWS_CLI autoscaling enter-standby \
      --instance-ids $instance_id \
      --auto-scaling-group-name $asg_name \
      --should-decrement-desired-capacity
  if [ $? != 0 ]; then
      msg "Failed to put instance $instance_id into standby for ASG $asg_name."
      return 1
  fi

  msg "Waiting for move to standby to finish."
  wait_for_state "autoscaling" $instance_id "Standby"
  if [ $? != 0 ]; then
      local wait_timeout=$(($WAITER_INTERVAL * $WAITER_ATTEMPTS))
      msg "$instance_id did not make it to standby after $wait_timeout seconds"
      return 1
  fi

  return 0
}
 
The register_with_elb.sh sample script works in a similar way. It calls the ‘autoscaling_exit_standby’ function from the common_functions.sh sample script to put the instance back in service behind the load balancer.
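
For reference, here is a minimal sketch of what that exit-standby helper could look like. It mirrors ‘autoscaling_enter_standby’ and reuses the same msg and wait_for_state helpers; see common_functions.sh in the sample bundle for the actual implementation.

autoscaling_exit_standby() {
  local instance_id=$1
  local asg_name=$2

  msg "Moving instance $instance_id out of Standby"
  $AWS_CLI autoscaling exit-standby \
      --instance-ids $instance_id \
      --auto-scaling-group-name $asg_name
  if [ $? != 0 ]; then
      msg "Failed to move instance $instance_id out of standby for ASG $asg_name."
      return 1
  fi

  msg "Waiting for exit from standby to finish."
  wait_for_state "autoscaling" $instance_id "InService"
  if [ $? != 0 ]; then
      local wait_timeout=$(($WAITER_INTERVAL * $WAITER_ATTEMPTS))
      msg "$instance_id did not make it to InService after $wait_timeout seconds"
      return 1
  fi

  return 0
}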
 
The register and deregister scripts are executed on each Amazon EC2 instance in your fleet. The instances must be able to call the Auto Scaling API to put themselves into standby and back into service. Your Amazon EC2 instance role needs the following permissions:
 
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:Describe*",
        "autoscaling:EnterStandby",
        "autoscaling:ExitStandby",
        "cloudformation:Describe*",
        "cloudformation:GetTemplate",
        "s3:Get*"
      ],
      "Resource": "*"
    }
  ]
}
 
If you use the provided CloudFormation template, an IAM instance role with the necessary permissions is automatically created for you.
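
If you manage the instance profile yourself instead of using the template, one way to grant these permissions is to attach the policy above as an inline policy to the role behind your instance profile (the role name below is a placeholder):

# Save the policy document above as instance-policy.json, then attach it
# to the role associated with your instances' IAM instance profile.
aws iam put-role-policy \
  --role-name "<my-instance-role>" \
  --policy-name "CodeDeployStandbyPermissions" \
  --policy-document file://instance-policy.json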
 
For more details on how to create a deployment archive, see Prepare a Revision for AWS CodeDeploy.
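
If you want to deploy your own application instead of the sample, a minimal sketch of bundling and uploading a revision looks like this (the bucket name is a placeholder, and the archive layout must match the paths referenced in appspec.yml):

# Bundle the AppSpec file, deployment scripts, and web content into an archive.
zip -r my-app-revision.zip appspec.yml scripts/ html/

# Upload the revision to an S3 bucket that CodeDeploy and your instances can read from.
aws s3 cp my-app-revision.zip s3://<my-artifact-bucket>/my-app-revision.zip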
 

3. Create an AWS CodeDeploy application and a deployment group

The next step is to create the AWS CodeDeploy resources and configure the roll-out strategy. The following commands tell AWS CodeDeploy where to deploy your artifact bundle (all instances of the given Auto Scaling group) and how to deploy it (OneAtATime). The deployment configuration ‘OneAtATime’ is the safest way to deploy, because only one instance of the Auto Scaling group is updated at a time.
 
# Create a new AWS CodeDeploy application.
aws deploy create-application --application-name "SampleELBWebApp"

# Get the AWS CodeDeploy service role ARN and Auto Scaling group name
# from the AWS CloudFormation template.
output_parameters=$(aws cloudformation describe-stacks \
  --stack-name "CodeDeploySampleELBIntegrationStack" \
  --output text \
  --query 'Stacks[0].Outputs[*].OutputValue')
service_role_arn=$(echo $output_parameters | awk '{print $2}')
autoscaling_group_name=$(echo $output_parameters | awk '{print $3}')

# Create an AWS CodeDeploy deployment group that uses
# the Auto Scaling group created by the AWS CloudFormation template.
# Set up the deployment group so that it deploys to
# only one instance at a time.
aws deploy create-deployment-group \
  --application-name "SampleELBWebApp" \
  --deployment-group-name "SampleELBDeploymentGroup" \
  --auto-scaling-groups "$autoscaling_group_name" \
  --service-role-arn "$service_role_arn" \
  --deployment-config-name "CodeDeployDefault.OneAtATime"
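
Before you deploy, you can confirm that the deployment group points at the expected Auto Scaling group and deployment configuration:

aws deploy get-deployment-group \
  --application-name "SampleELBWebApp" \
  --deployment-group-name "SampleELBDeploymentGroup"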
 

4. Start the zero-downtime deployment

Now you are ready to start your rolling, zero-downtime deployment. 
 
aws deploy create-deployment \
  --application-name "SampleELBWebApp" \
  --s3-location "bucket=aws-codedeploy-us-east-1,key=samples/latest/SampleApp_ELB_Integration.zip,bundleType=zip" \
  --deployment-group-name "SampleELBDeploymentGroup"
 

5. Monitor your deployment

You can see how your instances are taken out of service and back into service with the following command:
 
watch -n1 aws autoscaling describe-scaling-activities \
   --auto-scaling-group-name "$autoscaling_group_name" \
   --query 'Activities[*].Description'
 
Every 1.0s: aws autoscaling describe-scaling-activities [...]
[
    "Moving EC2 instance out of Standby: i-d308b93c",
    "Moving EC2 instance to Standby: i-d308b93c",
    "Moving EC2 instance out of Standby: i-a9695458",
    "Moving EC2 instance to Standby: i-a9695458",
    "Moving EC2 instance out of Standby: i-2478cade",
    "Moving EC2 instance to Standby: i-2478cade",
    "Launching a new EC2 instance: i-d308b93c",
    "Launching a new EC2 instance: i-a9695458",
    "Launching a new EC2 instance: i-2478cade"
]
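
You can also track the deployment itself. The create-deployment command returns a deployment ID, which you can use to check the overall deployment status from the CLI, for example:

# List recent deployments for the deployment group.
aws deploy list-deployments \
  --application-name "SampleELBWebApp" \
  --deployment-group-name "SampleELBDeploymentGroup"

# Check the status of a specific deployment using an ID from the list above.
aws deploy get-deployment \
  --deployment-id "<deployment-id>" \
  --query 'deploymentInfo.status'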
 
The URL output parameter of the AWS CloudFormation stack contains the link to the website so that you are able to watch it change. The following command returns the URL of the load balancer:
 
# Get the URL output parameter of the AWS CloudFormation template.
aws cloudformation describe-stacks \
  --stack-name "CodeDeploySampleELBIntegrationStack" \
  --output text \
  --query 'Stacks[0].Outputs[?OutputKey==`URL`].OutputValue'
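
One simple way to watch the website change from the command line is to fetch the page repeatedly while the deployment runs:

# Fetch the page once per second; the URL is the load balancer address
# returned by the command above.
url=$(aws cloudformation describe-stacks \
  --stack-name "CodeDeploySampleELBIntegrationStack" \
  --output text \
  --query 'Stacks[0].Outputs[?OutputKey==`URL`].OutputValue')
watch -n1 "curl -s $url"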
 
There are a few other points to consider in order to achieve zero-downtime deployments:
  • Graceful shut-down of your application
    You do not want to kill a process that is still handling requests. Make sure that running threads have enough time to finish their work before the application shuts down.
  • Connection draining
    The AWS CloudFormation template sets up an Elastic Load Balancing load balancer with connection draining enabled. The load balancer does not send any new requests to the instance when the instance is deregistering, and it waits until any in-flight requests have finished executing. (For more information, see Enable or Disable Connection Draining for Your Load Balancer.)
  • Sanity test
    It is important to check that the instance is healthy and the application is running before the instance is added back to the load balancer after the deployment (see the sketch after this list).
  • Backward-compatible changes (for example, database changes)
    Both application versions must work side by side until the deployment finishes, because only a part of the fleet is updated at the same time.
  • Warming of the caches and service
    This is so that no request suffers degraded performance after the deployment.
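
For the sanity test mentioned above, one option is to add a small health-check script to the bundle and run it in the ApplicationStart hook after start_server.sh and before register_with_elb.sh, so that an unhealthy instance is never put back into service. The script below is a hypothetical example and is not part of the sample bundle:

#!/bin/bash
# scripts/check_health.sh (hypothetical) -- poll the local web server for up to
# 30 seconds and fail the deployment if it never returns HTTP 200.
for attempt in $(seq 1 30); do
  status=$(curl -s -o /dev/null -w "%{http_code}" http://localhost/)
  if [ "$status" -eq 200 ]; then
    echo "Application is healthy (HTTP $status)."
    exit 0
  fi
  sleep 1
done
echo "Application did not become healthy in time (last HTTP status: $status)."
exit 1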
 
This example should help you get started toward improving your deployment process. I hope that this post makes it easier to reach zero-downtime deployments with AWS CodeDeploy and lets you ship your changes continuously in order to provide a great customer experience.