Playbook
Now it's time to provision our application using Ansible. This is the final step before we get to look at some cat memes.
Application Package
We need to move the application package into the correct location so that Ansible can find it. We previously created an `httpcats.zip` file, which Ansible will upload and unzip into the correct location for us.
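A single `mv` is enough here; something along these lines (both paths are placeholders):

```
mv /path/to/httpcats.zip /path/to/ansible_repository/files/
```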
There are two important things to note here:

- You have to change `/path/to/` to correctly represent where you've created `httpcats.zip`
- You have to change `/path/to/ansible_repository/` to correctly represent where you've been managing your Ansible repository for this book
Once you've moved the file into place, your Ansible repository's local structure should look like this:
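Assuming a Playbook called `playbook.yml` and an inventory file called `inventory` (yours may be named differently), the layout is roughly:

```
.
├── files
│   └── httpcats.zip
├── inventory
└── playbook.yml
```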
Note the presence of `httpcats.zip` in the `files/` directory.
Executing
We need to run Ansible with a few flags, but before that we need to accept the SSH host keys of the remote EC2 Instances we previously created using Terraform. If we fail to do this, Ansible may be unable to connect to our servers, preventing us from configuring them.
SSH Keys
Using the `ssh-keyscan` command we can scan the remote hosts for their SSH keys and then add them to our `known_hosts` file:
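Assuming your two instances are reachable at the placeholder names `www1.example.com` and `www2.example.com`, the commands look like this:

```
ssh-keyscan www1.example.com >> ~/.ssh/known_hosts
ssh-keyscan www2.example.com >> ~/.ssh/known_hosts
```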
We're providing the DNS records that point to our instances so that the tool knows which servers we want the keys for.
At the end of each command we're using a redirect, `>>`, to append the output from the `ssh-keyscan` command to the file `~/.ssh/known_hosts`. This file is used by the SSH utility, and anything that uses it like Ansible, to determine whether we trust the remote system. By adding the SSH host keys (whose hashes are known as fingerprints) for the remote hosts to our `known_hosts` file, we're saying we trust those hosts and that connecting to them is acceptable.
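For reference, each line that `ssh-keyscan` appends is a single host key entry, roughly of this shape (the key material here is truncated and purely illustrative):

```
www1.example.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA...
```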
Running Ansible
Once we've accepted the SSH host keys of our servers, we can now run our Ansible Playbook:
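Putting together the flags discussed below, and assuming the same `inventory` and `playbook.yml` file names as before, the command looks like this:

```
ansible-playbook -u ubuntu --private-key ~/.ssh/deployment_key -i inventory playbook.yml
```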
Let's break this down and then look at the results from the run.
We're using the `-u` flag to set the SSH username we want to connect as: `ubuntu`. That's because we're using the officially maintained Ubuntu AMI from AWS/Ubuntu. The only user that can SSH into a newly created EC2 Instance based on this AMI is the `ubuntu` user.
Next we use `--private-key` to specify the exact SSH private key we want to use. When I created the private key I stored it in my personal `.ssh/` directory in my home folder. I called it `deployment_key`, which means the full path to it is `~/.ssh/deployment_key`.
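One caveat worth noting: SSH will refuse to use a private key file whose permissions are too open, so if you've copied `deployment_key` into place, make sure only your user can read it:

```
chmod 600 ~/.ssh/deployment_key
```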
Finally we use the `-i` flag to indicate which inventory file to use to find the servers we want to provision. Without an inventory file, Ansible would have no idea which servers to target and operate against.
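As a reminder, an inventory can be as simple as a flat list of the instances' DNS names (placeholder names again); ungrouped hosts still fall under Ansible's implicit `all` group, which is what our Play targets:

```
www1.example.com
www2.example.com
```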
Output
Let's review the output you should see from running the above Playbook against your new systems.
Note
The output has been edited slightly, but only to remove the excess `*`s that follow the title of each section being executed.
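Your host names and task names will differ, but the overall shape of the output resembles this heavily abridged sketch (the task title shown is a made-up example, not one from your Playbook):

```
PLAY [all]

TASK [Gathering Facts]
ok: [www1.example.com]
ok: [www2.example.com]

TASK [Upload httpcats.zip]
changed: [www1.example.com]
changed: [www2.example.com]

...

PLAY RECAP
www1.example.com : ok=7 changed=6 unreachable=0 failed=0
www2.example.com : ok=7 changed=6 unreachable=0 failed=0
```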
We can see quite a bit of information here. Let's go over some of it.
Firstly we see `PLAY [all]`. This is telling us which Play inside the Playbook is being executed (Playbooks are just YAML documents made up of multiple Plays). It's also telling us the host matching pattern that we're using in this Play. In this case, `all`, meaning: target everything.
Secondly we have a default, built-in (but configurable) task called "Gathering Facts": `TASK [Gathering Facts]`. When Ansible connects to a host it gathers information (facts) about the host, which can then be used in your Ansible code to make decisions about what to do. This behaviour can be turned off.
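If you don't need facts, fact gathering can be disabled per Play. A minimal sketch (the task here is just a stand-in) looks like this:

```yaml
- hosts: all
  # Skip the implicit "Gathering Facts" task for this Play
  gather_facts: false
  tasks:
    - name: Check the hosts are reachable
      ping:
```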
Next we start going through more `TASK` headers, and you should recognise them: they're the tasks you wrote into your Playbook previously.
Finally we have the `PLAY RECAP`, which gives us a summary of what happened:
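Using the same placeholder host names as before, the recap looks something like this:

```
www1.example.com : ok=7 changed=6 unreachable=0 failed=0
www2.example.com : ok=7 changed=6 unreachable=0 failed=0
```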
From this we can determine that we had no issues with the Ansible Playbook, as we have `0`s in the `unreachable` and `failed` statuses.
Overall, seven tasks were executed: our six plus the task that gathered facts. All of them ran correctly and to completion, with six of them resulting in changes being made to the remote hosts.