We are going to follow this Docker Swarm tutorial, make it work on VirtualBox with Vagrant instead of AWS, and script the configuration using Ansible.
You can find the final result on this repository.
Let’s get started.
I’m on an Ubuntu computer, so this guide is going to be based on that.
VirtualBox can be installed with apt:
apt install virtualbox
Vagrant can also be installed using the package manager:
apt install vagrant
Ansible version 2.2.0 was not yet available on apt. The good thing is that it can be installed directly from the development branch using pip.
sudo pip install git+git://github.com/ansible/ansible.git@devel
Yes, we are going to need Redis; we’ll see why afterwards. Luckily, it is extremely easy to install. You can follow these steps.
Scripting the swarm creation
Spin up the instances
Vagrant is going to create the 5 instances (1 Consul, 2 Managers, 2 Nodes).
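A minimal Vagrantfile for this step might look like the following sketch. The instance names, box, and private IPs are assumptions for illustration; a CentOS box matches the yum-based provisioning used later.

```ruby
# Sketch of a Vagrantfile that spins up the 5 instances.
# Box name and private IPs are assumptions, adjust to taste.
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"

  instances = {
    "consul0"  => "192.168.77.10",
    "manager0" => "192.168.77.21",
    "manager1" => "192.168.77.22",
    "node0"    => "192.168.77.31",
    "node1"    => "192.168.77.32",
  }

  # Define one VM per entry, each on the private network.
  instances.each do |name, ip|
    config.vm.define name do |machine|
      machine.vm.hostname = name
      machine.vm.network "private_network", ip: ip
    end
  end
end
```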
That is all we need to initialize the instances. We can run
vagrant up and have 5 instances ready for us.
But that is obviously not enough, and we are not going to install what is missing manually.
Install docker on all instances
Enter the playbook!
What’s happening here:
- The yum repository is added to the instances.
- The docker engine is installed.
- A template is copied to configure the docker daemon.
- The docker service is started.
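The steps above could be sketched as a playbook like this. The repository URL, package name, and template name are assumptions for illustration; the full version is in the linked repository.

```yaml
# playbook.yml — sketch of the four steps above.
# Repo URL, package and template names are assumptions.
- hosts: all
  become: true
  tasks:
    - name: Add the docker yum repository
      yum_repository:
        name: docker
        description: Docker repository
        baseurl: https://yum.dockerproject.org/repo/main/centos/7/
        gpgcheck: yes
        gpgkey: https://yum.dockerproject.org/gpg

    - name: Install the docker engine
      yum:
        name: docker-engine
        state: present

    - name: Configure the docker daemon
      template:
        src: docker.j2
        dest: /etc/sysconfig/docker

    - name: Start the docker service
      service:
        name: docker
        state: started
        enabled: yes
```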
What about Vagrant?
The Ansible playbook is ready to install docker but Vagrant still has no idea about it. Let’s make it aware of this.
Vagrant is going to look for playbook.yml in the same directory where the Vagrantfile is located and execute it.
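Wiring the playbook in takes a few lines in the Vagrantfile, using Vagrant’s Ansible provisioner:

```ruby
# Inside the Vagrantfile: provision every instance with Ansible.
config.vm.provision "ansible" do |ansible|
  ansible.playbook = "playbook.yml"
end
```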
Now we are ready to let Ansible provision our instances. For that, run
vagrant reload --provision.
Time to start the containers
Ansible allows us to group instances to select on which instances a task should be executed. We can define the groups in the Vagrantfile and pass them to Ansible. This will be the last change we make to the Vagrantfile.
This is the final Vagrantfile:
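A sketch of that final Vagrantfile follows: the same 5 instances as before, plus the three Ansible groups passed through the provisioner. Group, instance, and IP values are assumptions for illustration.

```ruby
# Sketch of the final Vagrantfile: 5 instances plus Ansible groups.
# Names and IPs are assumptions, adjust to taste.
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"

  instances = {
    "consul0"  => "192.168.77.10",
    "manager0" => "192.168.77.21",
    "manager1" => "192.168.77.22",
    "node0"    => "192.168.77.31",
    "node1"    => "192.168.77.32",
  }

  instances.each do |name, ip|
    config.vm.define name do |machine|
      machine.vm.hostname = name
      machine.vm.network "private_network", ip: ip
    end
  end

  # Pass the inventory groups to Ansible.
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbook.yml"
    ansible.groups = {
      "consul"   => ["consul0"],
      "managers" => ["manager0", "manager1"],
      "nodes"    => ["node0", "node1"],
    }
  end
end
```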
As you can see, we have defined 3 groups and we are passing them to Ansible. Now we are going to put these groups to use.
Start the consul container.
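A sketch of this task, targeting only the consul group; the image name follows the standalone Swarm tutorials of that era and is an assumption here:

```yaml
# Sketch: start the Consul container on the consul group only.
# Image name is an assumption.
- hosts: consul
  become: true
  tasks:
    - name: Start the consul container
      shell: docker run -d -p 8500:8500 --name consul progrium/consul -server -bootstrap
```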
Start the swarm managers.
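A sketch of the manager task. The consul0 address is scavenged from its facts via hostvars, and eth1 (the VirtualBox private network interface) plus the port numbers are assumptions:

```yaml
# Sketch: start a replicated swarm manager on each manager instance.
# eth1 and the ports are assumptions.
- hosts: managers
  become: true
  tasks:
    - name: Start the swarm manager container
      shell: >
        docker run -d -p 4000:4000 swarm manage
        -H :4000 --replication
        --advertise {{ ansible_eth1.ipv4.address }}:4000
        consul://{{ hostvars['consul0']['ansible_eth1']['ipv4']['address'] }}:8500
```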
Start the nodes.
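And a sketch of the node task, joining each node to the swarm through Consul (same assumptions about the interface and ports as above):

```yaml
# Sketch: join each node to the swarm through Consul.
# eth1 and the ports are assumptions.
- hosts: nodes
  become: true
  tasks:
    - name: Start the swarm join container
      shell: >
        docker run -d swarm join
        --advertise={{ ansible_eth1.ipv4.address }}:2375
        consul://{{ hostvars['consul0']['ansible_eth1']['ipv4']['address'] }}:8500
```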
If you execute the provision right now, it is not going to work: the manager and node tasks need facts from the consul0 instance, and by default Ansible only has the facts of the hosts in the current play. You can find a perfect explanation in this blog post.
We need a config file to tell Ansible to store the instance facts in Redis.
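That config file is an ansible.cfg enabling Ansible’s Redis fact cache; the timeout and connection values below are typical defaults, adjust if your Redis runs elsewhere:

```ini
# ansible.cfg — cache facts in Redis so plays can read
# facts gathered for other hosts.
[defaults]
fact_caching = redis
fact_caching_timeout = 86400
fact_caching_connection = localhost:6379:0
```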
If you have done your homework, you should have Redis installed already. We can start it with
redis-server and leave it running.
Now we can complete the provisioning (
vagrant reload --provision), or, if we wanted, reload everything from scratch with
vagrant destroy; vagrant up.
The playbook works, but there are a few things that could be much improved and are not going to be covered in this post.
Everything was put together in the same playbook, and roles are not being used.
By using roles we could define a role for each group (manager, consul and node), and each of them could depend on a common docker role.
If you execute the provision more than once, you’ll see that it is going to fail, because it will try to run the containers again even if they are already running; or, if it succeeds, we’ll end up with more than one container running on the instance.
There is already a docker module in Ansible that can be used instead of
shell, as we did in our playbook; it makes sure that the container is not already created, and starts it if it’s stopped. The problem with this module is that it does not currently support passing parameters to the container execution.
To register the managers and nodes in Consul, we are retrieving the IP by scavenging its value from the facts of the
consul0 instance. It would be nicer to have DNS in place and reference a hostname instead.
Thanks to Vagrant and Ansible, we are able to reproduce and validate the orchestration of our environment on our development machine, allowing faster iterations.