
    How to create an automation process using AWS, IaaS, and IaC.


    This article is aimed at readers with some experience with the tools mentioned, so I will not cover basics such as AWS CLI configuration or how to install Terraform.

    Previously, when I described Packer, I based all the examples on Xwiki, so let's stick with that in this part as well.

    In this section, I will show you how we built the infrastructure for it using the IaC approach, and how that fits into the entire orchestration process.

    During brainstorming sessions within the DevOps team, one of the conclusions we came to was to ensure High Availability and self-healing mechanisms for all of Scalac's applications.

    AWS provides such solutions natively in the EC2 service, and they are called:

    • Launch Template
    • Auto Scaling Group

    The configuration for the first one, as used for Xwiki, is as follows:


    data "aws_ami" "ubuntu" {
      most_recent = true

      filter {
        name   = "name"
        values = ["xwiki_ubuntu18.04"]
      }

      filter {
        name   = "virtualization-type"
        values = ["hvm"]
      }

      filter {
        name   = "root-device-type"
        values = ["ebs"]
      }

      owners = ["self"]
    }

    resource "aws_launch_template" "xwiki-lt" {
      name_prefix = "lt-xwiki"
      image_id    = data.aws_ami.ubuntu.id

      block_device_mappings {
        device_name = "/dev/sda1"
        ebs {
          volume_size = 50
          volume_type = "gp2"
        }
      }

      instance_type = "t3.small"
      key_name      = data.terraform_remote_state.global_variables.outputs.keypair-scalac
      user_data     = base64encode(data.template_file.user_data.rendered)

      network_interfaces {
        device_index                = 0
        associate_public_ip_address = true
        security_groups             = [aws_security_group.sg-xwiki.id]
      }

      iam_instance_profile {
        name = aws_iam_instance_profile.xwiki-instance-profile.name
      }

      tag_specifications {
        resource_type = "volume"
        tags = {
          Name        = "xwiki-root-volume"
          environment = "prod"
        }
      }
    }

    This config creates a Launch Template and all resources related to it. 

    xwiki_ubuntu18.04 defines which AWS AMI will be used. It was created by Packer, as shown earlier in the Packer section.

    Next, the AWS EBS volume is configured, along with the instance type on which Xwiki will run, the IAM instance profile to be attached to the EC2 instance, security groups, and tags.

    user_data describes pre-configured actions to be taken on such an instance, such as checking whether the EBS volume is mounted, always associating the same Elastic IP, and creating a proper filesystem on the partitions.
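As an illustration, the skeleton of such a user_data boot script could look like the sketch below. The device name, mount point, and the commented-out AWS CLI call are hypothetical stand-ins, not Scalac's actual script:

```shell
#!/bin/sh
# Hypothetical sketch of a user_data boot script; device and mount point
# are assumptions, not real values from the article.

DEVICE="/dev/xvdf"
MOUNT_POINT="/data"

# Is the given mount point already present in /proc/mounts?
is_mounted() {
  grep -qs " $1 " /proc/mounts
}

if is_mounted "$MOUNT_POINT"; then
  echo "$MOUNT_POINT already mounted, nothing to do"
else
  echo "would create an ext4 filesystem on $DEVICE and mount it at $MOUNT_POINT"
  # mkfs.ext4 "$DEVICE"
  # mkdir -p "$MOUNT_POINT" && mount "$DEVICE" "$MOUNT_POINT"
fi

# Re-associate the same Elastic IP on every boot (left commented out here;
# the allocation ID would come from the EIP resource):
# INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
# aws ec2 associate-address --instance-id "$INSTANCE_ID" --allocation-id <eip-alloc-id>
```

Because Terraform's `base64encode` wraps the rendered template, the script arrives on the instance exactly as written and runs once at first boot via cloud-init.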

    A Launch Template is strictly connected to an Auto Scaling Group, so the config goes as follows:

    resource "aws_autoscaling_group" "xwiki" {
      name             = "xwiki"
      min_size         = "1"
      max_size         = "1"
      desired_capacity = "1"
      force_delete     = false

      launch_template {
        id      = aws_launch_template.xwiki-lt.id
        version = "$Latest"
      }

      vpc_zone_identifier = [
        # subnet IDs omitted
      ]

      tag {
        key                 = "Name"
        value               = "xwiki"
        propagate_at_launch = true
      }

      tag {
        key                 = "environment"
        value               = "prod"
        propagate_at_launch = true
      }

      tag {
        key                 = "project"
        value               = "xwiki"
        propagate_at_launch = true
      }
    }

    The ASG determines how many instances will be spawned and which Launch Template will be used; in our example, it also adds some tags describing the resources.

    The ASG also serves here as a self-healing mechanism, which is a big keyword for the orchestration process: if an instance somehow gets terminated, it will be recreated based on the above settings.

    The previously mentioned resources regarding Xwiki are not the only ones we have in the Terraform config files. We also have Route53, Security Groups, and EIPs defined as IaC, but it would be boring to show you all of it, as these are the same as in the documentation, only adjusted to our internal needs.
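For illustration only, an EIP plus a Route53 record defined as IaC might look roughly like the sketch below; the resource names and the hosted zone ID are hypothetical, not our actual config:

```hcl
resource "aws_eip" "xwiki" {
  vpc = true
}

resource "aws_route53_record" "xwiki" {
  zone_id = "Z0123456789EXAMPLE" # hypothetical hosted zone ID
  name    = "wiki.scalac.io"
  type    = "A"
  ttl     = 300
  records = [aws_eip.xwiki.public_ip]
}
```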

    Orchestration means having an infrastructure that is safe and sound, and also ready to be resurrected from the dead when required, without any human interaction, or at least with less effort.

    Ansible is a configuration management and application-deployment solution designed by Red Hat. It works in an agentless model and requires only an established SSH connection and Python as an interpreter on the target machine.

    Similarly to Terraform, it also has a declarative language that is used to describe roles and tasks to be executed. 

    Ansible can be run against one target or many simultaneously.

    In our case, the structure presents as follows:

    • host_vars – variables used in roles and tasks for specific hosts.
    • group_vars – variables used for all hosts belonging to a group.
    • internal_hosts.yml – Ansible playbook.
    • inventory.ini – file containing all targets on which we execute Ansible.
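The inventory file itself is not shown in the article; a minimal hypothetical inventory.ini for a layout like ours could look like this (group and host names are illustrative):

```ini
[wiki]
xwiki

[internal]
jenkins
monitoring

; group_vars/internal would then apply to jenkins and monitoring,
; while host_vars/xwiki applies only to the xwiki host
```

With such an inventory, adding `--limit xwiki` to the ansible-playbook command would restrict a run to the xwiki host only.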

    So, to launch this tool, we run the following command in a terminal:

    ansible-playbook -i inventory.ini internal_hosts.yml

    On each target, it executes the roles and tasks described in internal_hosts.yml over an SSH connection.

    If a certain host needs some extra variables included in a role or task, they are defined in the host_vars directory.

    Since I based my previous examples on Xwiki, I will do the same here.

    For example, host_vars/xwiki looks like:

    hostname: xwiki
    xwiki_test_pass: !vault |
              $ANSIBLE_VAULT;1.1;AES256
              633033383465346139386238643638366633613439356332333334623338643536616631363566653939353765343
              61333533396438335343166313237613034653862316635336663623636313264386
      - wiki.scalac.io
      - { domain_name: 'wiki.scalac.io', dest: 'http://localhost:8080', cert: 'wildcard', action: 'proxy_pass' }

    The hostname is matched against the SSH config; if the entry is there, Ansible starts the connection; if not, it throws an error – hostname does not match – and stops execution.

    At this point, I will explain the !vault entry in the above code, as this is something we use very often in variable files.

    To protect important and crucial data, such as passwords, tokens, and access and secret keys, Ansible provides a tool called Ansible Vault. It is integrated into Ansible, so there is no need to install it separately.

    All we have to do is put the sensitive data in plaintext in some temporary file and then run:

    ansible-vault encrypt this_temp_file

     and enter the Vault password at the terminal prompt. Black magic happens and the data is encrypted.

    To decrypt:

    ansible-vault decrypt this_temp_file

     and enter the Vault password that was used to encrypt it.
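Apart from encrypting whole files, ansible-vault can also produce inline-vaulted values like the xwiki_test_pass entry shown earlier. A hedged sketch (the password file path, the secret, and the variable name are just examples):

```shell
# Write a throwaway vault password to a file so the command is non-interactive.
printf 'example-vault-password' > /tmp/vault_pass

# Produce a ready-to-paste "xwiki_test_pass: !vault |" YAML snippet.
# Guarded so the sketch is a no-op when Ansible is not installed.
if command -v ansible-vault >/dev/null 2>&1; then
  ansible-vault encrypt_string 'SuperSecretPass' \
    --name 'xwiki_test_pass' \
    --vault-password-file /tmp/vault_pass
fi
```

The printed snippet can be pasted straight into a host_vars file, which is how inline values like the one above end up next to plain variables.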

    We use Ansible roles for Nginx and Certbot; their directives are also included in the host_vars/xwiki file.

    The Packer part already showed what internal_hosts.yml looks like for Xwiki, so there is no need to repeat it here.

    Information, in the form of tasks to be executed on Xwiki hosts, is defined in the Xwiki role. For this service, there is actually only one:

    - name: Run xwiki container
      docker_container:
        name: xwiki
        image: xwiki:11.3-postgres-tomcat
        state: started
        pull: true
        restart_policy: always
        ports:
          - 8080:8080
        volumes:
          - /data/xwiki/xwiki:/usr/local/xwiki
        env:
          DB_USER: xwiki-test
          DB_PASSWORD: "{{ xwiki_test_pass }}"
          DB_DATABASE: xwiki-test
          DB_HOST: testenv-postgres.fnijfgtbdlyn.eu-central-1.rds.amazonaws.com

    It runs the built-in Docker container module with all the needed parameters, pulls the image, and finally launches the Xwiki service, a.k.a. wiki.scalac.io, in a container with mapped ports on the designated target. The database credentials (encrypted by Ansible Vault) are injected as environment variables.
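For comparison, that task corresponds roughly to the docker CLI invocation below. This is only a sketch: it is guarded so running it is a no-op, and DB_PASSWORD is assumed to be exported in the environment rather than coming from the vault:

```shell
# Set RUN_XWIKI=1 to actually start the container; by default this sketch
# does nothing, so it can be read (or executed) safely.
if [ "${RUN_XWIKI:-0}" = "1" ]; then
  docker run -d --name xwiki --restart always \
    -p 8080:8080 \
    -v /data/xwiki/xwiki:/usr/local/xwiki \
    -e DB_USER=xwiki-test \
    -e DB_PASSWORD="$DB_PASSWORD" \
    -e DB_DATABASE=xwiki-test \
    -e DB_HOST=testenv-postgres.fnijfgtbdlyn.eu-central-1.rds.amazonaws.com \
    xwiki:11.3-postgres-tomcat
fi
```

The advantage of the Ansible task over the raw command is idempotency: rerunning the playbook leaves an already-running, up-to-date container untouched.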

    Besides the above role, we also have other ones, including roles:

    • for configuring the OS – setting up hostnames in /etc/hosts and disabling system services
    • for keeping the OS always up to date
    • for installing commonly used packages like gzip, htop, git, etc.
    • for installing Docker on every target
    • for installing and configuring Jenkins and its components
    • for distributing SSH keys and configs to every host
    • for installing and configuring fail2ban – a server security tool
    • for installing and configuring an NTP server on every host
    • for deploying Prometheus exporters on every host
    • for installing and configuring the AWS CLI
    • for deploying backup scripts and cron configs on every server
    • for installing Certbot, setting it up, and generating certs
    • for installing and configuring Nginx
    • for installing Filebeat (for ELK) on every host

    In this entire AWS orchestration process, Ansible plays a big role. Having infrastructure as code (thanks to Terraform) does not solve all the problems. Without a configuration management tool, we would still be doing OS-related work manually, and this would consequently lead to repetitive mistakes.

    ..eventually it had to happen; this journey is almost over, but two more stories remain to be unfolded. Brace yourself as we are sailing into the wide waters that every brave man can conquer. Only you can decide.

    Docker was invented and developed to facilitate the entire deployment process. A software maker can easily package up an application with all its components, such as libraries, into one container and ship it out to the desired target.

    Containers are isolated from each other, and thanks to this you are able to run multiple applications on one server without any dependency problems.

    During the discovery phase on the old infrastructure, we decided to package every possible application into Docker containers. Having done that, we were able to build, ship, and deploy them onto AWS EC2 instances, using Docker Hub as a harbor for the container images.

    To prepare an application's package as a Docker image, we had to create Dockerfiles, which are required in the building phase.
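None of those Dockerfiles appear in this article; purely as a hypothetical sketch, a Dockerfile for a small service packaged this way might look like this (the base image, package, and paths are all assumptions):

```dockerfile
# Hypothetical example, not one of Scalac's actual Dockerfiles.
FROM ubuntu:18.04

# Install the runtime the service needs.
RUN apt-get update && \
    apt-get install -y --no-install-recommends openjdk-8-jre && \
    rm -rf /var/lib/apt/lists/*

# Package the application and its libraries into the image.
COPY app/ /opt/app/

EXPOSE 8080
CMD ["/opt/app/run.sh"]
```

An image built from such a file with `docker build` can then be pushed to Docker Hub and pulled onto any EC2 instance.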

    This process is strictly related to Jenkins, as he is the captain of this orchestration ship, so it will be described in a bit more detail in the Jenkins section, where we are heading right now.