Ansible is a powerful automation tool for Linux. It operates on the philosophy that anything you can do via SSH, you can automate with Ansible. It comes ‘batteries included’ with hundreds of pre-built modules which allow the overwhelmed administrator to accomplish nearly any task across a bevy of remote machines (including Windows machines if necessary). And the best part is that it’s agentless: as long as your target can be reached over SSH, you can begin running Ansible plays from your controlling node without needing to install anything further on your target machines.
The full power and capability of this wonderful automation tool are best learned via the project documentation located at docs.ansible.com, which includes an installation guide, a user guide, and an index of the hundreds of pre-built modules mentioned earlier.
In Part I of this post I walk through a very typical setup for an Ansible Control Node and explain how to configure it to automate administration for the two virtual machines in my network. In Part II, I’ll provide a scenario that will give the reader a taste of Ansible’s automation capabilities.
Let’s get started:
-
Install Ansible:
If you’re Fedora / RHEL / CentOS oriented like me, installation is as simple as:
Fedora
sudo dnf install ansible
Red Hat / CentOS
sudo yum install ansible
-
If you’re not similarly oriented, the installation guide recommends:
Ubuntu
$ sudo apt-get update
$ sudo apt-get install software-properties-common
$ sudo apt-add-repository --yes --update ppa:ansible/ansible
$ sudo apt-get install ansible
Debian
$ echo "deb http://ppa.launchpad.net/ansible/ansible/ubuntu trusty main" | sudo tee -a /etc/apt/sources.list
$ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 93C4A3FD7BB9C367
$ sudo apt-get update
$ sudo apt-get install ansible
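Whichever route you took, a quick way to confirm the installation succeeded (and to see which version and configuration file are in use) is:
$ ansible --version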
-
Create Local Ansible Working Directory:
A fresh installation of Ansible will place default configuration files in /etc/ansible; however, it’s encouraged that the admin create an ansible folder in his or her home directory to modify with site-specific details. The ansible executable uses a search hierarchy similar to the one your shell uses when searching for a file along the user’s $PATH. Specifically, the ansible executable will first check whether the ANSIBLE_CONFIG environment variable is set; if it is absent, it will then look for an ansible.cfg in $PWD; failing this, it will look for a .ansible.cfg in the user’s home directory before finally defaulting to /etc/ansible/ansible.cfg.
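That first option is handy if you ever juggle multiple configurations; you can point Ansible at a specific file for the current shell session like so:
$ export ANSIBLE_CONFIG=~/ansible/ansible.cfg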
The following commands will set up an ansible working environment in your home directory:
$ mkdir ~/ansible && cd ~/ansible
$ cp /etc/ansible/ansible.cfg .
$ touch hosts first_playbook.yml
A quick explanation of the directories and files we’ve created so far:
- ~/ansible: your Ansible working directory
- ~/ansible/ansible.cfg: your local Ansible configuration
- ~/ansible/hosts: an inventory file grouping together the machines your Ansible plays will operate on
- ~/ansible/first_playbook.yml: a YAML-formatted instruction set for Ansible to follow (a minimal example follows this list)
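We’ll put first_playbook.yml to work in Part II, but as a quick preview of the format, a playbook is just a YAML list of plays, each naming an inventory group and the tasks to run against it. Here’s a bare-bones sketch; the ‘web’ group matches the inventory we’ll define shortly, and the play simply runs the same ping test module we’ll use ad hoc later:
---
# A play names the hosts to target and the tasks to run there
- name: My first play
  hosts: web
  tasks:
    - name: Confirm connectivity
      ping: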
-
Edit local ansible.cfg:
Your local ansible.cfg file will tell Ansible where to look for its inventory list (~/ansible/hosts) and how to handle privilege escalation for plays that need to be run as a privileged user.
$ vi ~/ansible/ansible.cfg
Since we copied the default ansible.cfg, you’ll be able to see the full breadth of configuration options available here as key=value pairs. For now, we’re concerned with the location of our inventory, at or about line 14.
Change that line to read: inventory=./hosts
Next, locate a section with the header [privilege_escalation]. Uncomment the lines beginning with become (there should be 4) and change them to read:
become=False
become_method=sudo
become_user=root
become_ask_pass=True
To recap, we’ve pointed any plays run from our ansible working directory to an inventory located at ~/ansible/hosts (./hosts, relatively speaking) and chosen to sudo to root, after asking for a password, when required. We’ve left become=False for the time being, opting to set it to True at the playbook level whenever necessary.
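For example, a play that genuinely needs root, say installing a package, can opt in with become: true while leaving the global default alone. A minimal sketch (the httpd package here is just a placeholder):
- name: Install a web server
  hosts: web
  become: true          # overrides become=False from ansible.cfg for this play only
  tasks:
    - name: Ensure httpd is present
      yum:
        name: httpd
        state: present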
-
Edit local inventory:
Ansible chooses targets for its plays from the inventory we specify in the hosts file. You can call this file anything you like as long as it matches the name specified in your local ansible.cfg. It is a simple INI file where you may specify hostnames or IP addresses. If you specify hostnames, they need to resolve, either from your /etc/hosts file or from configured DNS. In my sandbox I have three targets I want to operate on: localhost, and two web servers, host1 and host2. I have all three of these in my /etc/hosts file and will use square brackets to create two groups, one to contain localhost and another to contain host1 and host2.
My hosts file looks like this:
$ cat hosts
[local]
localhost

[web]
host1
host2
Ansible also supports numeric ranges in inventory files, so I could have specified the hosts for the web group as:
[web]
host[1:2]
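Ranges also accept leading zeros, so a larger, hypothetical fleet of zero-padded hostnames could be grouped as:
[web]
host[01:20]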
-
Check your setup:
The next step is to use Ansible’s ad hoc functionality to check our configuration. With Ansible, plays are commonly run from a playbook; however, it’s also possible to run tasks ad hoc via the command line. I will use the ping module to reach out to the hosts I configured in my hosts file. Note that this is not an ICMP ping, but rather a trivial test module that will confirm our hosts file is configured correctly and that a usable version of Python is installed on each remote target.
$ /usr/bin/ansible all -m ping
The details of this command are: run ‘/usr/bin/ansible’ against ‘all’ hosts in our inventory using the module (‘-m’) named ‘ping’. A successful return will look like this:
localhost | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
host2 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
host1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
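While we’re here, note that ad hoc commands aren’t limited to the ping module, and you can target any inventory group instead of ‘all’. For example, to check the uptime of just the web group:
$ ansible web -m command -a 'uptime'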
If you followed along and received PONG responses back from all of your configured hosts, congrats! You’ve successfully configured an Ansible Control Node. If you received some error text, it might be necessary to review your configuration. The first thing I would check is the ability to send a regular ICMP ping to the hosts listed in your inventory file, exactly as they are specified there. For instance, I am able to send both an ICMP and an Ansible ping to ‘host1’ because I’ve configured that alias in /etc/hosts, like:
192.168.1.42 host1 webserver1.example.com
Next, I would check that I am able to SSH to one of the hosts in the inventory.
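Since Ansible rides on SSH, key-based authentication to each target makes life easier. If it isn’t set up yet, something like the following usually clears things up (this assumes the same username exists on the target; adjust to user@host1 if not):
$ ssh host1           # should log you in without errors
$ ssh-copy-id host1   # push your public key if the login prompts for a password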