OpenStack-Ansible machinectl image management
With recent changes in OpenStack-Ansible, we're now able to import and export container images using the built-in systemd tooling via machinectl. This gives us the ability to rapidly provision or replace containers as needed from a pre-seeded source. In this post, I'll cover how an operator might export a container and then import one from both a local cache and a remote one.
The Setup
Assume you have an OpenStack-Ansible deployment running the latest release of Pike and have set the container storage options to use the new machinectl backend. The available storage options are covered within the role documentation as found here. With lxc_container_backing_store set to machinectl, all of the containers will be stored and managed by systemd. The machinectl toolchain provides a lot of capabilities; however, in this post we're only going to cover the export and import of a container image.
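For reference, here's a minimal sketch of how that setting might be applied, assuming the standard /etc/openstack_deploy/user_variables.yml overrides file and a deployment whose containers have not yet been created:
# echo 'lxc_container_backing_store: machinectl' >> /etc/openstack_deploy/user_variables.yml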
Imagine the environment
Within a container host, we can log in and list all of the available container images.
# machinectl list-images
NAME TYPE RO USAGE CREATED MODIFIED
compute1_aodh_container-f08d2204 subvolume no 1.1G Thu 2017-10-26 17:03:13 CDT n/a
compute1_ceilometer_central_container-56318e0e subvolume no 1.0G Thu 2017-10-26 17:03:19 CDT n/a
compute1_cinder_api_container-86552654 subvolume no 1.2G Thu 2017-10-26 17:39:03 CDT n/a
compute1_cinder_scheduler_container-2c0c6061 subvolume no 1.2G Thu 2017-10-26 17:03:08 CDT n/a
compute1_galera_container-53877d98 subvolume no 1.1G Thu 2017-10-26 17:03:18 CDT n/a
compute1_glance_container-78b73e1a subvolume no 2.1G Thu 2017-10-26 17:39:00 CDT n/a
compute1_gnocchi_container-9a4b182b subvolume no 432.0M Thu 2017-10-26 17:39:04 CDT n/a
compute1_heat_apis_container-c973ef5a subvolume no 1.1G Thu 2017-10-26 17:03:18 CDT n/a
compute1_heat_engine_container-ae51062c subvolume no 1.1G Thu 2017-10-26 17:03:12 CDT n/a
compute1_horizon_container-4148c753 subvolume no 1.3G Thu 2017-10-26 17:39:05 CDT n/a
compute1_keystone_container-7a0a3834 subvolume no 1.3G Thu 2017-10-26 17:39:09 CDT n/a
compute1_memcached_container-782a6588 subvolume no 495.7M Thu 2017-10-26 17:03:11 CDT n/a
compute1_neutron_agents_container-de8a4d37 subvolume no 1.1G Thu 2017-10-26 17:03:13 CDT n/a
compute1_neutron_server_container-219f00f7 subvolume no 1.1G Thu 2017-10-26 17:03:07 CDT n/a
compute1_nova_api_metadata_container-9a8fe9ae subvolume no 1.4G Thu 2017-10-26 17:03:17 CDT n/a
compute1_nova_api_os_compute_container-2a4faa2c subvolume no 1.4G Thu 2017-10-26 17:39:03 CDT n/a
compute1_nova_api_placement_container-42904e4c subvolume no 1.4G Thu 2017-10-26 17:39:05 CDT n/a
compute1_nova_conductor_container-5109b386 subvolume no 1.4G Thu 2017-10-26 17:03:07 CDT n/a
compute1_nova_console_container-cf223830 subvolume no 1.4G Thu 2017-10-26 17:39:00 CDT n/a
compute1_nova_scheduler_container-832bf438 subvolume no 1.4G Thu 2017-10-26 17:03:08 CDT n/a
compute1_rabbit_mq_container-652f0bda subvolume no 842.2M Thu 2017-10-26 17:39:08 CDT n/a
compute1_repo_container-754d214c subvolume no 1.3G Thu 2017-10-26 17:03:12 CDT n/a
compute1_swift_proxy_container-fb47a052 subvolume no 1.0G Thu 2017-10-26 17:03:07 CDT n/a
ubuntu-xenial-amd64 subvolume no 369.6M Mon 2017-10-16 11:17:19 CDT n/a
These entries show each image's name and size as well as its type. The names of these container images all correspond to our running LXC containers, as seen with a simple list.
# lxc-ls -f
NAME STATE AUTOSTART GROUPS IPV4
compute1_aodh_container-f08d2204 RUNNING 1 onboot, openstack 10.0.3.118, 172.16.26.75
compute1_ceilometer_central_container-56318e0e RUNNING 1 onboot, openstack 10.0.3.134, 172.16.26.190
compute1_cinder_api_container-86552654 RUNNING 1 onboot, openstack 10.0.3.34, 172.16.26.86
compute1_cinder_scheduler_container-2c0c6061 RUNNING 1 onboot, openstack 10.0.3.66, 172.16.26.153
compute1_galera_container-53877d98 RUNNING 1 onboot, openstack 10.0.3.137, 172.16.26.208
compute1_glance_container-78b73e1a RUNNING 1 onboot, openstack 10.0.3.199, 172.16.26.241
compute1_gnocchi_container-9a4b182b RUNNING 1 onboot, openstack 10.0.3.225, 172.16.26.219
compute1_heat_apis_container-c973ef5a RUNNING 1 onboot, openstack 10.0.3.253, 172.16.26.236
compute1_heat_engine_container-ae51062c RUNNING 1 onboot, openstack 10.0.3.8, 172.16.26.132
compute1_horizon_container-4148c753 RUNNING 1 onboot, openstack 10.0.3.194, 172.16.26.49
compute1_keystone_container-7a0a3834 RUNNING 1 onboot, openstack 10.0.3.133, 172.16.26.146
compute1_memcached_container-782a6588 RUNNING 1 onboot, openstack 10.0.3.18, 172.16.26.126
compute1_neutron_agents_container-de8a4d37 RUNNING 1 onboot, openstack 10.0.3.150, 172.16.26.220
compute1_neutron_server_container-219f00f7 RUNNING 1 onboot, openstack 10.0.3.87, 172.16.26.57
compute1_nova_api_metadata_container-9a8fe9ae RUNNING 1 onboot, openstack 10.0.3.101, 172.16.26.170
compute1_nova_api_os_compute_container-2a4faa2c RUNNING 1 onboot, openstack 10.0.3.49, 172.16.26.116
compute1_nova_api_placement_container-42904e4c RUNNING 1 onboot, openstack 10.0.3.158, 172.16.26.115
compute1_nova_conductor_container-5109b386 RUNNING 1 onboot, openstack 10.0.3.61, 172.16.26.101
compute1_nova_console_container-cf223830 RUNNING 1 onboot, openstack 10.0.3.139, 172.16.26.123
compute1_nova_scheduler_container-832bf438 RUNNING 1 onboot, openstack 10.0.3.92, 172.16.26.199
compute1_rabbit_mq_container-652f0bda RUNNING 1 onboot, openstack 10.0.3.173, 172.16.26.54
compute1_repo_container-754d214c RUNNING 1 onboot, openstack 10.0.3.185, 172.16.26.80
compute1_swift_proxy_container-fb47a052 RUNNING 1 onboot, openstack 10.0.3.41, 172.16.26.247
compute1_utility_container-cbd7b73e RUNNING 1 onboot, openstack 10.0.3.97, 172.16.26.94
To exercise an export and import operation, I'm using the compute1_utility_container-cbd7b73e container; however, the steps covered within this post can apply to ANY container on the system.
Container Export
Exporting a container's image is simple. There's nothing you need to do on the host to initiate the process, and there's no setup within the container to prepare for export. To perform a basic export, simply run the following command:
# machinectl export-tar compute1_utility_container-cbd7b73e utility_container.tar.gz
The command will create a compressed archive of the running container. This task will take a minute or two, resulting in a local tarball which can be used later.
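If you plan to serve the archive over HTTP later, it's worth generating a checksum file alongside it. As a sketch, assuming the export above, a SHA256SUMS file is what machinectl's checksum verification expects to find next to the image (see the notice about remote imports further down):
# sha256sum utility_container.tar.gz > SHA256SUMS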
Notice About Container Exports
Container exports WILL NOT create an archive of anything bind mounted within the container runtime. This means externally mounted bits, such as databases and repository caches, will not be part of the archive. The point of a container export is to provide the building blocks of a service; it is not a replacement for backups.
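To see exactly which paths are bind mounted, and therefore excluded from an export, you can inspect the container's LXC configuration on the host. This is a quick sketch assuming the default LXC path of /var/lib/lxc:
# grep lxc.mount.entry /var/lib/lxc/compute1_utility_container-cbd7b73e/config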
Container Import
In this example, I'm going to replace a running utility container with a different storage backend. Before importing a container image into the environment, we need to make sure the container is not running. Once we've ensured the container is STOPPED, we'll clean up the old image and replace it using one of two methods: import-tar or pull-tar. This operation keeps the existing LXC container definition, making it possible to run these commands within a live environment without having to rerun any of the OSA playbooks.
Stopping the container
To stop the container, we'll use the lxc-stop command.
# lxc-stop -n compute1_utility_container-cbd7b73e
Now verify the container is STOPPED.
# lxc-info -n compute1_utility_container-cbd7b73e
Name: compute1_utility_container-cbd7b73e
State: STOPPED
Remove the old container storage
Validate the old image exists.
# machinectl show-image compute1_utility_container-cbd7b73e
Remove the image.
# machinectl remove compute1_utility_container-cbd7b73e
After the image has been removed, you can validate it's really gone with the show-image subcommand again, or you can move on to importing the image.
Import from a local file
Assuming the new image is on the local machine, run the following command to import the container image.
# machinectl import-tar utility_container.tar.gz compute1_utility_container-cbd7b73e
After a few seconds, the container image will be ready for use.
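To confirm the import landed, you can inspect the new image with the same show-image subcommand used earlier:
# machinectl show-image compute1_utility_container-cbd7b73e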
Import from a remote file
Assuming the new image is stored on a remote server which provides access to the images via HTTP(S), run the following command to import the container image.
# machinectl --verify=no pull-tar http://lab-lb01:8181/images/ubuntu/pike/amd64/utility_container.tar.gz compute1_utility_container-cbd7b73e
After a minute or two, the container image will be ready for use.
Notice About Remote Import
When downloading an image from a remote server, the image can be verified using either a SHA256 sum, a GPG signature, or both. In my example I've disabled verification; however, this is not ideal, especially in production. You can read more about the validation options here.
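As a sketch of what a verified pull looks like: machinectl supports --verify=checksum, which expects a SHA256SUMS file published alongside the image (like the one generated in the export section above), and --verify=signature, which additionally expects a GPG signature of that checksum file.
# machinectl --verify=checksum pull-tar http://lab-lb01:8181/images/ubuntu/pike/amd64/utility_container.tar.gz compute1_utility_container-cbd7b73e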
Restarting the container
Now that the container storage has been replaced, all that's left to do is restart the container. To do this, we'll simply use the lxc-start command on the same container we stopped earlier.
# lxc-start -dn compute1_utility_container-cbd7b73e
Once started, verify the container is active and online using the lxc-info command.
# lxc-info -n compute1_utility_container-cbd7b73e
Name: compute1_utility_container-cbd7b73e
State: RUNNING
PID: 27243
IP: 10.0.3.97
IP: 172.16.26.94
CPU use: 0.74 seconds
BlkIO use: 68.00 KiB
Memory use: 22.69 MiB
KMem use: 8.99 MiB
Link: cbd7b73e_eth0
TX bytes: 520 bytes
RX bytes: 637 bytes
Total bytes: 1.13 KiB
Link: cbd7b73e_eth1
TX bytes: 488 bytes
RX bytes: 446 bytes
Total bytes: 934 bytes
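For a deeper check than network reachability, you can ask the container's init system whether any units failed to start; this assumes a systemd-based container image, which the stock OpenStack-Ansible Ubuntu images are:
# lxc-attach -n compute1_utility_container-cbd7b73e -- systemctl --failed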
If everything started and is accessible over the network, you're done.
Recap & Looking forward
Now, this simple example is great for running a simple update or replacement; however, this same basic concept could also be applied to a greenfield deployment, making the initial setup and configuration of containerized systems faster.
Imagine populating the OSA inventory and then dropping a complete set of container images from a remote source before running openstack-ansible setup-everything.yml.
Here's a pseudo playbook to gather facts on our container hosts and then pre-seed all of the container images; debug is used to illustrate the command without actually running anything. With something as simple as this playbook, we could ingest all of the required container images. Once the playbook completed, we'd simply run the rest of the deployment like normal.
- name: Gather lxc container host facts
  hosts: "lxc_hosts"
  gather_facts: true

- name: Pull container images into the deployment
  hosts: "all_containers"
  gather_facts: false
  tasks:
    # Debug used to illustrate the command we'd run.
    - name: Pre-seed container images
      debug:
        msg: >-
          machinectl --verify=no pull-tar
          http://lab-lb01:8181/images/{{ physical_distro }}/pike/{{ physical_arch }}/{{ inventory_hostname.split('_', 1)[-1].split('-')[0] }}.tar.gz
          {{ inventory_hostname }}
      delegate_to: "{{ physical_host }}"
      vars:
        physical_distro: "{{ hostvars[physical_host]['ansible_distribution'] }}"
        physical_arch: "{{ hostvars[physical_host]['ansible_architecture'] }}"
Having the ability to pre-seed container images should help ensure a consistent initial deployment and massively improve the speed of greenfield deployments. Best of all, we wouldn't have to sacrifice any capabilities. Almost everything OpenStack-Ansible deploys lives within a container, and with pre-seeded images those bits would already be on disk, making the playbooks simply a means to reconfigure an environment.
Just as we can import a series of containers, we can also archive all of the containers from a running deployment. Since all we need are the base container images, we could use something as simple as an All-In-One and archive its running containers. The resulting archive would then serve as the basis for all version-controlled deployments (AIO or Multi-Node). Here's a pseudo playbook to archive all of the running containers; debug is used to illustrate the command without actually running anything.
- name: Archive all containers from a deployment
  hosts: "all_containers"
  gather_facts: false
  tasks:
    - name: Archive container images
      debug:
        msg: >-
          machinectl export-tar {{ inventory_hostname }}
          {{ inventory_hostname.split('_', 1)[-1].split('-')[0] }}.tar.gz
      delegate_to: "{{ physical_host }}"
The capabilities covered within this post do very little for the upgrade story at this point. That said, our more advanced storage options will give us new capabilities to explore and should help shape the conversation within the community so we can make informed decisions on how containerization and images can work for us.
That's all folks
It's my hope that this simple post shines a light on some of the core technologies OpenStack-Ansible deployers already have access to, and on how our project is evolving as we move forward as a community.