How to create an automation process using AWS, IAAS, IAAC.

03. Four members of the fellowship – Packer, Terraform, Ansible and Docker
    Packer is an open-source tool for building machine images across different providers, for example AWS AMIs. 

    Packer is lightweight and easy to configure. What is more, Packer lets you use a configuration management tool such as Ansible to install packages and required software on a machine image during its creation. 

    Thanks to that, when the image is prepared, we do not have to worry about installing everything from scratch on the operating system. 

    In our case, all the servers are based on Ubuntu 18.04 LTS, and this exact Linux distribution was used in the Packer configuration as well. 

    Packer’s configuration consists of two main parts:

    • packer_template.json
    • the vars directory 

    The template file looks as follows:

    {
      "variables": {
        "home": "{{env `HOME`}}",
        "build_number": "{{env `BUILD_NUMBER`}}"
      },
      "builders": [
        {
          "type": "amazon-ebs",
          "name": "{{user `hostname`}}_{{user `os_name`}}",
          "ami_name": "{{user `hostname`}}_{{user `os_name`}}",
          "profile": "{{user `aws_profile`}}",
          "region": "{{user `region`}}",
          "source_ami_filter": {
            "filters": {
              "virtualization-type": "hvm",
              "name": "{{user `image_query`}}",
              "root-device-type": "ebs"
            },
            "owners": ["{{user `image_owner`}}"],
            "most_recent": true
          },
          "instance_type": "{{user `instance_type`}}",
          "iam_instance_profile": "{{user `iam_instance_profile`}}",
          "ssh_username": "{{user `ssh_username`}}",
          "ami_description": "{{user `git_branch`}}",
          "launch_block_device_mappings": [{
            "device_name": "/dev/sda1",
            "volume_size": "{{user `root_size`}}",
            "volume_type": "gp2",
            "delete_on_termination": true
          }],
          "ami_block_device_mappings": [
            {
              "device_name": "/dev/sda1",
              "volume_size": "{{user `root_size`}}",
              "volume_type": "gp2",
              "delete_on_termination": true
            }
          ],
          "tags": {
            "Name": "packer_{{user `hostname`}}_{{user `os_name`}}",
            "created_by": "packer",
            "environment": "{{user `environment`}}"
          },
          "run_tags": {
            "Name": "packer_{{user `hostname`}}_{{user `os_name`}}",
            "created_by": "packer",
            "environment": "{{user `environment`}}"
          },
          "force_deregister": "true",
          "force_delete_snapshot": "true",
          "spot_price": "auto",
          "spot_price_auto_product": "Linux/UNIX (Amazon VPC)",
          "security_group_id": "{{user `security_group_id`}}",
          "subnet_id": "{{user `subnet_id`}}",
          "ssh_keypair_name": "{{user `ssh_keypair_name`}}",
          "ssh_private_key_file": "{{user `home`}}/.ssh/{{user `ssh_keypair_name`}}",
          "associate_public_ip_address": true
        }
      ],
      "provisioners": [
        {
          "type": "shell",
          "inline": "{{user `install_ansible_script`}}"
        },
        {
          "type": "ansible-local",
          "playbook_dir": ".",
          "clean_staging_directory": "true",
          "playbook_file": "{{user `ansible_playbook`}}",
          "inventory_file": "{{user `ansible_inventory`}}"
        }
      ]
    }
    
    

    Not to put you to sleep at this point: this file contains all the parameters Packer needs to work, plus some extras. 

    Basic values are defined in common.json in Packer’s vars directory. 

    This file defines an AWS profile, root size of the EBS volume, region, instance type, and a bit more:

    {
      "aws_profile": "packer-profile",
      "root_size": "10",
      "region": "eu-central-1",
      "instance_type": "t3.medium",
      "iam_instance_profile": "ci",
      "ssh_keypair_name": "ssh-key",
      "security_group_id": "sg-0459xxxx",
      "subnet_id": "subnet-0797xxxxx",
      "ansible_inventory": "packer_inventory.ini"
    }
    
    
    

    As we can see, besides the basic inputs (per the Packer docs), there are also settings for Spot Instances. Each time Packer builds an image, it has to launch an EC2 instance to configure the required components and execute the Ansible roles needed for that particular image. We used Spot Instances to cut the EC2 costs in our overall bill. When Packer finishes, the AWS AMI is created and the Spot Instance is terminated. 

    Another extra part is at the end of the template file – Ansible. As mentioned above, Packer allows us to use a configuration management tool, and this is where Ansible comes into play. 

    For each host/server we have defined particular Ansible roles to be applied to the image. The hosts that roles apply to are described in the inventory_file, which in this case is packer_inventory.ini and looks, for example, like this:

    xwiki ansible_host=127.0.0.1 ansible_python_interpreter=/usr/bin/python3

    ansible_host is set to localhost because of the “ansible-local” provisioner type (seen in the code snippet above). Of course, a remote address is possible as well. 
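With the regular (remote) ansible provisioner, the same inventory entry would instead point at the build machine's address – for example (all values here are hypothetical):

```
web1 ansible_host=10.0.1.15 ansible_user=ubuntu ansible_python_interpreter=/usr/bin/python3
```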

    In the Packer vars, we have two modes: build and ci. For building machine images, the build mode is used. This mode references the Ansible playbook file with the specific roles to be applied, for example on the xwiki host.

    # ROLES FOR XWIKI
    - hosts: xwiki
      any_errors_fatal: true
      roles:
        - configure_os
        - os-update
        - base-packages
        - docker
        - ssh
        - fail2ban
        - aws-cli
        - xwiki
      become: yes
    
    
    

    So each time an image for Xwiki is built, Packer applies the above roles to the AWS AMI. Thanks to that, when a new server with Xwiki is spawned (a new EC2 instance), it already contains all the necessary configuration, settings, and files. 

    The last part regarding Packer is a base image definition (ubuntu-18.04.json), and it goes as follows:

    {
      "os_name": "ubuntu18.04",
      "image_query": "ubuntu/images/*ubuntu-bionic-18.04-amd64-server-*",
      "image_owner": "099720109477",
      "ssh_username": "ubuntu",
      "install_ansible_script": "until [ -f /var/lib/cloud/instance/boot-finished ]; do sleep 1; done,sudo DEBIAN_FRONTEND=noninteractive apt-get update,sudo DEBIAN_FRONTEND=noninteractive apt-get -y install python3 python3-setuptools python3-dev python3-pip; sudo ln -s /usr/bin/python3 /usr/bin/python; sudo ln -s /usr/bin/pip3 /usr/bin/pip; sudo pip install ansible==2.8.3"
    }
    
    
    

    We base our AWS AMIs on Ubuntu 18.04, and during the build process Packer uses the image version matched by image_query. This value can be found in the AWS EC2 console when spawning a new instance manually. 

    install_ansible_script – this parameter defines the instructions to be executed on the image before any others. We use Python 3 as the default interpreter, so it has to be installed and set as the default by creating symlinks to the proper binaries. Ansible also had to be installed here, since it does not come pre-installed with the OS. 
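The one-line value above is hard to read; split on its separators, the same script amounts to:

```
# Wait for cloud-init to finish so apt isn't racing the boot process.
until [ -f /var/lib/cloud/instance/boot-finished ]; do sleep 1; done

sudo DEBIAN_FRONTEND=noninteractive apt-get update
sudo DEBIAN_FRONTEND=noninteractive apt-get -y install python3 python3-setuptools python3-dev python3-pip

# Make python/pip point at the Python 3 binaries Ansible expects.
sudo ln -s /usr/bin/python3 /usr/bin/python
sudo ln -s /usr/bin/pip3 /usr/bin/pip

sudo pip install ansible==2.8.3
```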

    Finally, to build an AWS AMI for Xwiki, we have to execute (adjusting paths to the actual location of these files):

    packer build -var-file=common.json -var-file=os/ubuntu-18.04.json -var-file=mode/build.json -var-file=host/xwiki.json packer_template.json
    
    
    
    
    build.json:
    
    {
      "ansible_playbook": "internal_hosts.yml",
      "ci_mode": "True"
    }
    
    
    
    xwiki.json:
    
    {
      "hostname": "xwiki",
      "environment": "dev"
    }
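Putting the pieces together, a small helper script can compose the full command from the var files above. Note that `packer build` takes exactly one template argument, with additional variable files passed via `-var-file`; the helper itself is our sketch, not part of the original setup:

```shell
#!/bin/sh
# Sketch: compose the packer build command for a given host (default: xwiki).
# Assumes the vars layout described above: common.json, os/, mode/, host/.
host="${1:-xwiki}"
cmd="packer build"
for f in common.json os/ubuntu-18.04.json mode/build.json "host/${host}.json"; do
  cmd="$cmd -var-file=$f"
done
cmd="$cmd packer_template.json"
echo "$cmd"
# eval "$cmd"   # uncomment to actually run the build
```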
    
    
    

    …please do not faint now, and do not fly away to /dev/null with your brain.
    …there are yet more waters to be discovered.
    …prevail over your doubts and be rewarded with great knowledge.

    Terraform is a tool for creating, changing, and managing infrastructure defined in configuration files. It works with well-known cloud providers such as AWS, Azure, or Google Cloud Platform. 

    After every change to the configuration, Terraform can show an execution plan presenting what will change in the infrastructure, and then execute it to reach the desired state. 

    When a small modification is made, Terraform applies only that change, without rebuilding everything from scratch. This is possible because Terraform tracks the infrastructure in state files. 
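In day-to-day use, that plan/apply cycle looks roughly like this (standard Terraform CLI; the plan file name is our choice):

```
terraform init              # download providers and set up the backend
terraform plan -out=tfplan  # show exactly what would change, save the plan
terraform apply tfplan      # apply the reviewed plan, nothing more
```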

    Terraform has well-written documentation with examples and code snippets that can easily be adjusted to your needs. 

    As we chose AWS as our major cloud provider, we started by migrating the existing, manually created resources – IAM, S3 buckets, VPC, Route53, etc. – to Terraform configuration files using the terraform import command. 
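As a sketch of that import flow (resource and bucket names here are hypothetical): you first write a resource block matching the manually created resource, then import it into state.

```
# Minimal resource block for an S3 bucket that already exists in AWS.
resource "aws_s3_bucket" "assets" {
  bucket = "my-existing-bucket"
}
```

Running `terraform import aws_s3_bucket.assets my-existing-bucket` then records the existing bucket in the state file, so subsequent plans show no spurious creation.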

    That was just the tip of the iceberg. Many resources – all the servers spread across hosting providers, Auto Scaling Groups, Launch Templates, EBS, ALB and more – had to be written from scratch to get our infrastructure as code. 

    All of this gave us the mobility to use the Terraform code from anywhere, whether from laptops or Jenkins. Even if one or more components were taken by the evil forces, we were able to recreate them easily and flawlessly. 

    This process took over six months and is actually still ongoing, but now we are just adding new bricks to the wall, without worrying that it is about to fall on our heads.
