Ansible: Up and Running, 2nd Edition

Note: This is a detailed synopsis of the book Ansible: Up and Running, 2nd Edition, meant for quick revisiting and review.

Could we remove major architectural components from the IT automation stack? Eliminating management daemons and relying instead on OpenSSH meant the system could start managing a computer fleet immediately, without having to set up anything on the managed machines.

The “Making Ansible Go Even Faster” chapter now covers asynchronous tasks, and the “Debugging Ansible Playbooks” chapter now covers the debugger that was introduced in version 2.1.

We are all slowly turning into system engineers.

Chapter 1. Introduction

We also need to make sure we have the appropriate redundancies in place, so that when failures happen (and they will), our software systems will handle these failures gracefully. Then there are the secondary services that we also need to deploy and maintain, such as logging, monitoring, and analytics, as well as third-party services we need to interact with, such as infrastructure-as-a-service (IaaS) endpoints for managing virtual machine instances.

When we talk about configuration management, we are typically talking about writing some kind of state description for our servers, and then using a tool to enforce that the servers are, indeed, in that state: the right packages are installed, configuration files contain the expected values and have the expected permissions, the right services are running, and so on.
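As a rough sketch of what such a state description looks like in Ansible (the nginx package, config file, and service names here are placeholders, not examples from the book), each task below declares a desired state rather than a command to run:

- name: ensure nginx is installed
  apt: name=nginx state=present
- name: ensure the config file has the expected content and permissions
  copy: src=nginx.conf dest=/etc/nginx/nginx.conf owner=root group=root mode=0644
- name: ensure nginx is running
  service: name=nginx state=started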

Ansible is a great tool for deployment as well as configuration management. Using a single tool for both configuration management and deployment makes life simpler for the folks responsible for operations.

Some people talk about the need for orchestration of deployment. This is where multiple remote servers are involved, and things have to happen in a specific order.

How Ansible works

In Ansible, a script is called a playbook. A playbook describes which hosts (Ansible’s term for remote servers) to configure, and an ordered list of tasks to perform on those hosts.

Ansible will make SSH connections in parallel to web1, web2, and web3. It will execute the first task on the list on all three hosts simultaneously.
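For instance, a minimal playbook along these lines (the webservers group and the Apache tasks are illustrative, not taken from the book) would run the first task on every host in the group before moving on to the second:

- name: configure web servers
  hosts: webservers    # an inventory group containing web1, web2, and web3
  become: True
  tasks:
    - name: install apache
      apt: name=apache2 update_cache=yes
    - name: make sure apache is running
      service: name=apache2 state=started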

To manage a remote server with Ansible, the server needs to have SSH and Python 2.5 or later installed, or Python 2.4 with the Python simplejson library installed. There’s no need to preinstall an agent or any other software on the host.

The control machine (the one that you use to control remote machines) needs to have Python 2.6 or later installed.

Ansible is push-based. It has been used successfully in production with thousands of nodes, and it has excellent support for environments where servers are dynamically added and removed.

Ansible modules are declarative; you use them to describe the state you want the server to be in. Modules are also idempotent.
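To see idempotence in action, you can run the same module twice against a host (a sketch assuming a Debian/Ubuntu host named testserver, which is set up later in this chapter; git is just a stand-in package):

ansible testserver -b -m apt -a "name=git state=present"
## first run installs the package and reports a change
ansible testserver -b -m apt -a "name=git state=present"
## second run changes nothing, because the desired state already holds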

Ansible has excellent support for templating, as well as defining variables at different scopes. Anybody who thinks Ansible is equivalent to working with shell scripts has never had to maintain a nontrivial program written in shell. I’ll always choose Ansible over shell scripts for config management tasks if given a choice.

To be productive with Ansible, you need to be familiar with basic Linux system administration tasks. Ansible makes it easy to automate your tasks, but it’s not the kind of tool that “automagically” does things that you otherwise wouldn’t know how to do.

Ansible uses the YAML file format and the Jinja2 templating language, so you’ll need to learn some YAML and Jinja2 to use Ansible, but both technologies are easy to pick up.
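As a tiny taste of both (a hypothetical snippet, not from the book), a variable defined in YAML can be referenced from a Jinja2 template with double curly braces:

# somewhere in the playbook
vars:
  server_name: example.com

# templates/nginx.conf.j2 (a hypothetical template file)
server_name {{ server_name }};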

If you prefer not to spend the money on a public cloud, I recommend you install Vagrant on your machine. Vagrant is an excellent open source tool for managing virtual machines. You can use Vagrant to boot a Linux virtual machine inside your laptop, and you can use that as a test server.

Vagrant needs the VirtualBox virtualizer to be installed on your machine. Download VirtualBox and then download Vagrant.
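For example, on macOS with Homebrew, one way to install both is via the standard casks shown below; on other platforms, use the installers from the projects’ download pages:

brew install --cask virtualbox
brew install --cask vagrant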

mkdir playbooks
cd playbooks
vagrant init ubuntu/trusty64
vagrant up

The first time you use vagrant up, it will download the virtual machine image file, which might take a while, depending on your internet connection.

You should be able to SSH into your new Ubuntu 14.04 virtual machine by running the following:

vagrant ssh

This approach lets us interact with the shell, but Ansible needs to connect to the virtual machine by using the regular SSH client, not the vagrant ssh command.

Tell Vagrant to output the SSH connection details by typing the following:

vagrant ssh-config
Based on that output, you can connect with the regular SSH client like this:

ssh vagrant@127.0.0.1 -p 2222 -i /Users/lorin/dev/ansiblebook/ch01/playbooks/.vagrant/machines/default/virtualbox/private_key

The matching entry in the hosts inventory file looks like this:

testserver ansible_host=127.0.0.1 ansible_port=2222 ansible_user=vagrant ansible_private_key_file=.vagrant/machines/default/virtualbox/private_key


Ansible supports the ssh-agent program, so you don’t need to explicitly specify SSH key files in your inventory files. See “SSH Agent” for more details if you haven’t used ssh-agent before.
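If you’d rather go the ssh-agent route, a minimal sketch using the Vagrant-generated key from above looks like this:

eval $(ssh-agent)
ssh-add .vagrant/machines/default/virtualbox/private_key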

If Ansible did not succeed, add the -vvvv flag to see more details about the error:

ansible testserver -i hosts -m ping -vvvv
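When the connection is working, the (non-verbose) check succeeds and the ping module answers with pong; the output looks roughly like this, though exact formatting varies by Ansible version:

ansible testserver -i hosts -m ping
testserver | SUCCESS => {
    "changed": false,
    "ping": "pong"
}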

Simplifying with the ansible.cfg file

Ansible has several ways to set default options; we’ll use one such mechanism, the ansible.cfg file, to set some defaults so we don’t need to type as much.

Ansible looks for an ansible.cfg file in the following places, in this order:

  • File specified by the ANSIBLE_CONFIG environment variable
  • ./ansible.cfg (ansible.cfg in the current directory)
  • ~/.ansible.cfg (.ansible.cfg in your home directory)
  • /etc/ansible/ansible.cfg

I typically put ansible.cfg in the current directory, alongside my playbooks. That way, I can check it into the same version-control repository that my playbooks are in.

[defaults]
inventory = hosts
remote_user = vagrant
private_key_file = .vagrant/machines/default/virtualbox/private_key
host_key_checking = False

Disables SSH host-key checking. Otherwise, we would need to edit our ~/.ssh/known_hosts file every time we destroy and re-create a node.

Ansible uses /etc/ansible/hosts as the default location for the inventory file. However, I never use this because I like to keep my inventory files version-controlled alongside my playbooks.

The command module is so commonly used that it’s the default module, so we can omit it:

ansible testserver -a uptime
## if the command contains spaces, quote it
ansible testserver -a "tail /var/log/dmesg"
## -b tells Ansible to become the root user (via sudo)
ansible testserver -b -a "tail /var/log/syslog"
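Any other module has to be named explicitly with -m; for example (the path below is just a placeholder):

ansible testserver -m file -a "path=/tmp/ansible-test state=touch"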

Chapter 2. Playbooks: A Beginning
