Set up an LXC host with Ansible   March 30th, 2016

This is my first go at Ansible. Ansible uses SSH to set up servers with a desired configuration. The playbooks can be run again and again, and only apply the things that have changed since the previous run.
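The idempotency idea can be sketched in plain shell: apply a change only if it has not been applied yet, so repeated runs are safe. This mirrors the `creates=` guards used in the playbook below; `run_once` and the marker file are illustrative, not part of Ansible.

```shell
# run_once: perform a task only if its marker file does not exist yet,
# so running the same "play" twice changes nothing the second time
run_once() {
  marker="$1"; shift
  if [ -e "$marker" ]; then
    echo "skipped"             # work was already done on an earlier run
  else
    "$@" && touch "$marker"    # do the work once, then record it
    echo "changed"
  fi
}

marker="$(mktemp -u)"
run_once "$marker" true        # first run: changed
run_once "$marker" true        # second run: skipped
rm -f "$marker"
```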

Please also see this good tutorial which helped me with my first steps:

My environment is like this: my workstation is Fedora 23. The host that I want to configure is a CentOS7 server.

On my Fedora 23 workstation:

# Fedora 23 currently ships Ansible 1.9, but that will soon be Ansible 2.0
dnf install ansible
sudo vi /etc/ansible/ansible.cfg
  remote_user = root
# is just an example IP address of my CentOS7 server.
sudo vi /etc/ansible/hosts
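As an inventory sketch, a group matching the playbook used later in this post could look like this (the IP is a documentation placeholder, not my real server):

```ini
# /etc/ansible/hosts
[lxc_host_centos7]
192.0.2.10
```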

I need a private and a public SSH key. The public key has been installed on the target CentOS7 machine, in /root/.ssh/authorized_keys.

Loading the private ssh key on Fedora 23:


As a first test, I run:

ansible all -m ping

Some modules are not part of Ansible 1.9 in Fedora; see also

git clone
mkdir -p /usr/share/my_modules/
cp ansible-modules-extras/packaging/os/ /usr/share/my_modules/
cp ansible-modules-extras/system/ /usr/share/my_modules/
sudo vi /etc/ansible/ansible.cfg
  library     = /usr/share/my_modules/

By the way, here are the links to the modules that I am using:

Here is my playbook for installing the lxc scripts:

- hosts: lxc_host_centos7
  vars:
    containerpwd: secretPWD
  tasks:
    - name: Configure the EPEL repo
      yum: name=epel-release state=installed
    - name: Configure the repo lbs-tpokorra-lbs
      yum_repository: name=lbs-tpokorra-lbs description="lxc scripts" baseurl=
    - name: Install the public key for the signed lxc-scripts package
      shell: rpm --import ""
    - name: Install LXC host on CentOS7
      yum: name=lxc-scripts state=installed
    - name: Enable and start libvirtd
      service: name=libvirtd state=started enabled=yes
    - name: Set up symbolic link
      shell: ln -s /usr/share/lxc-scripts /root/scripts creates=/root/scripts
    - name: Create an SSH key pair for the containers
      shell: ssh-keygen -t rsa -C "root@localhost" -f /root/.ssh/id_rsa -N "{{ containerpwd }}" creates=/root/.ssh/id_rsa
    - name: Create a new, unique Diffie-Hellman group
      shell: mkdir -p /var/lib/certs && openssl dhparam -out /var/lib/certs/dhparams.pem 2048 creates=/var/lib/certs/dhparams.pem
    - name: Init LXC
      shell: ( ./ && ./ ) > /root/lxc.installed chdir=/root/scripts creates=/root/lxc.installed
    - name: Install nginx
      yum: name=nginx state=installed
    - name: Enable and start nginx
      service: name=nginx state=started enabled=yes
    - name: Configure firewall port 80 for nginx
      iptables: chain=IN_public_allow protocol=tcp match=tcp destination_port=80 ctstate=NEW jump=ACCEPT
    - name: Configure firewall port 443 for nginx
      iptables: chain=IN_public_allow protocol=tcp match=tcp destination_port=443 ctstate=NEW jump=ACCEPT
    - name: Store the iptables rules
      shell: iptables-save > /etc/sysconfig/iptables

This is how I run the playbook:

ansible-playbook lxc.yml --extra-vars "containerpwd=topsecret"
Posted in Hosting, Software Development | Comments Closed

After quite a lot of refactoring, the latest LightBuildServer release 0.2.2 is now available, cleanly packaged for Fedora 22.

The most important improvements are:

  • now runs with the uwsgi server and nginx
  • uses sqlite to keep persistent state, instead of global variables
  • a cronjob triggers the processing of the build queue

For the OS that hosts the build containers, I currently recommend CentOS7 with LXC 1.0.x.

Here is a short tutorial on how to set up a server that runs the LightBuildServer on Jiffybox. This should also work on similar offerings from Rackspace or DigitalOcean.

I have created a Jiffybox with CentOS 7. In the Jiffybox settings, make sure to change the kernel to pvgrub64, so that the latest features of the CentOS7 default kernel are available. Otherwise creating LXC containers might not work, because the default Jiffybox kernel does not support SquashFS.

On the CentOS7 machine, I will now install the LXC scripts. These are useful scripts for creating LXC containers, supporting various guest operating systems like CentOS, Fedora, Ubuntu and Debian. For more details, see

yum install yum-utils epel-release
yum-config-manager --add-repo
yum install lxc-scripts
# set up the bridge for networking with the LXC containers
systemctl enable libvirtd
systemctl start libvirtd
# create a symbolic link in the root directory, so that you can reach the scripts more quickly
ln -s /usr/share/lxc-scripts scripts
cd scripts
# we need nginx as a proxy to redirect requests to the containers
yum install nginx
systemctl enable nginx
systemctl start nginx
# make sure the firewall allows requests on port 80 (http) and 443 (https)
iptables -A IN_public_allow -p tcp -m tcp --dport 80 -m conntrack --ctstate NEW -j ACCEPT
iptables -A IN_public_allow -p tcp -m tcp --dport 443 -m conntrack --ctstate NEW -j ACCEPT
iptables-save > /etc/sysconfig/iptables

The next step is to create a Fedora 22 container, which will run the LightBuildServer control server and Web UI:

cd ~/scripts
# 50: this is the container ID, which is also used to generate the container's IP address
./ 50
# configure the nginx proxy for the website
# if /var/lib/certs/ and exist,
# it will be configured for https, otherwise just for http
./ 50
# start the container
lxc-start -d -n
# see the IP address
# and ssh into the container, using the password for the key you generated earlier when running
ssh root@

Now you can install the LightBuildServer inside the Fedora 22 container:

dnf install 'dnf-command(config-manager)'
dnf config-manager --add-repo
dnf install lightbuildserver
# initialize the server
# this will enable and start the services nginx, uwsgi and crond

The configuration of the LightBuildServer happens in the file /etc/lightbuildserver/config.yml. You can configure an SMTP account for the notification emails to be sent to you.
You should also define the LBSUrl and the DownloadUrl (probably the same) for your server.
You can also define your own GitHub or GitLab account, both public and private. See for examples.
You can define your own projects and packages as well.
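As an illustration, a minimal config.yml could look roughly like this. Apart from LBSUrl and DownloadUrl, which are mentioned above, the key names and values here are assumptions, not the real schema:

```yaml
# /etc/lightbuildserver/config.yml -- rough sketch only, not the real schema
LBSUrl: https://lbs.example.org
DownloadUrl: https://lbs.example.org
```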

Finally, you need to define the host for building your packages. We can use the CentOS7 host here, so replace build01.localhost with
You need to add a line to the /etc/hosts file on the LBS container:

# on the LBS container.
# use the IP that is the gateway for the container to the host
echo "" >> /etc/hosts
# we changed config.yml and need to restart the LBS website:
systemctl restart uwsgi

You also need to copy the public key to the host, so that the LBS container can create build machines on the host. For production use, the LBS server should obviously not have root access to the host system; you should add a separate host for building.

# on the CentOS7 host.
# make sure there is a newline at the end of the file
echo >> /root/.ssh/authorized_keys
cat /var/lib/lxc/ >> /root/.ssh/authorized_keys

Now, from inside the LBS container, test whether you have access to the host, and accept the host key:

# on the LBS container:
ssh -i /etc/lightbuildserver/container/container_rsa

Now you should be able to log in on the web interface, with user demo and password demo. Try building a Debian or Fedora package, or a CentOS or Ubuntu package!

Posted in Software Development | Comments Closed

This post shows how to set up a workstation for various Linux distributions.

This uses the lxc scripts described in this blog post:

Inside a container, you can install an LXDE or XFCE desktop and the X2Go server. If you route traffic to port 22 of the container, you can connect to your workstation with the x2goclient.

For Ubuntu 14.04:

apt-get install lxde-core
apt-get install software-properties-common
apt-add-repository ppa:x2go/stable
apt-get update
apt-get install x2goserver

For Fedora 20:

yum install @lxde-desktop google-droid-*-fonts
yum install x2goserver

For CentOS 7:

yum -y groupinstall "Xfce"
rpm -ivh
yum install x2goserver
Posted in Software Development | Comments Closed

The situation: you have rented a big server, and you want to utilize it better. But you don’t want to install all services together; rather, you want to separate the various services into containers.

LXC is very useful for this purpose.

My LXC scripts help to make setting up a machine even easier.

The scripts are available at Github:

These scripts have been tested on Ubuntu 14.04, which is what I recommend for this exercise.

To install the LXC scripts together with LXC 1.0.x, you can install an Ubuntu package; see

apt-get install apt-transport-https
echo 'deb /' >> /etc/apt/sources.list
apt-get update
apt-get install lbslxcscripts

The scripts now live in /root/scripts.

There are several scripts to create a virtual machine:

cd /root/scripts
# ./   <release, default is precise> <arch, default is amd64> <autostart, default is 1>
./ 10-UbuntuDesktop 10 trusty
./ 20-FedoraDesktop 20
./ 30-CentosDesktop 30 7
./ 40-DebianMachine 40 wheezy

Please note: I have not looked into creating unprivileged containers yet!

These commands are useful for working with the containers:

# start the container
lxc-start -d -n 30-CentosDesktop
# list all containers
lxc-ls -f
# list all containers and their Linux distribution
# login to the container
ssh root@
# stop the container
lxc-stop -n 30-CentosDesktop
# destroy the container
lxc-destroy -n 30-CentosDesktop

I want fixed IP addresses for my virtual machines. The IP address for the container with ID 40 will be

To make a port available from the outside, you can call this:

# This will forward port 2010 of the host machine to the container running at IP, port 22.
./ 10 22
# This will forward port 8010 of the host machine to the container running at IP, port 80.
./ 10 80
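One plausible reading of the numbering above is that the forwarded host port is a per-service base plus the container ID (2010 = 2000 + 10 for ssh, 8010 = 8000 + 10 for http). A sketch of that mapping; `host_port` is an illustrative helper, not one of the lxc scripts:

```shell
# host_port: map a container ID and service port to the forwarded host port,
# following the apparent convention in the examples above
host_port() {
  id="$1"; port="$2"
  case "$port" in
    22) echo $((2000 + id)) ;;   # ssh ports live at 2000 + container ID
    80) echo $((8000 + id)) ;;   # http ports live at 8000 + container ID
    *)  echo "no convention for port $port" >&2; return 1 ;;
  esac
}

host_port 10 22   # 2010
host_port 10 80   # 8010
```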

For websites, I use Nginx on the host machine, to manage http and https (SSL) websites on a single IP:

# the host will listen for and forward all traffic to port 80
# SSL will be setup if these files exist: /var/lib/certs/ and
./ 10

There is a script that backs up the LXC settings, iptables rules, and Nginx configuration of your containers:

./ myusername

This script upgrades the host and all containers, depending on their Linux distribution. You can run it every night with a cronjob:
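A matching nightly cron entry could look like this; the script name and path are assumptions, since the upgrade script is not named above:

```
# /etc/cron.d/lxc-upgrade -- sketch; the script name is assumed
0 3 * * * root /root/scripts/upgrade-all.sh
```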

Posted in Software Development | Comments Closed

Using Flockport with Jiffybox   December 18th, 2014

I am interested in the idea of Flockport: providing ready-built LXC containers for download. So I wanted to see how I can actually download a Flockport container and install it on a Jiffybox (the German equivalent of the American Linode…).

See also my old post: LXC Linux containers on JiffyBox running CentOS on Ubuntu

So, I install a Jiffybox with Ubuntu 14.04 64 bit operating system, 2 GB RAM.

Install lxc (currently 1.0.6) from the default Ubuntu repositories:

apt-get install lxc

Now on to Flockport: check out these pages; my next steps are based on the instructions there:

I chose the WordPress container as an example. See

I wanted to download the Debian Wheezy 64 bit package called wordpress.tar.xz, but that does not work directly, because you need to be logged in.

This is where the Flockport utility comes into play. It is currently in alpha, but it works fine, even on Ubuntu, though it is only advertised for Debian:

To install the Flockport utility, follow these steps:

apt-key add flockport.gpg.key
echo "deb wheezy main" > /etc/apt/sources.list.d/flockport.list
apt-get update
apt-get install flockport

Note that I am not installing lxc from the Flockport repository, but only the Flockport utility.

Some useful commands with the Flockport utility:

# shows all Flockport containers available
flockport list
# login with your username and password for
flockport login
# download a container, the names were displayed by the list command above
flockport get wordpress

The Flockport utility will download the container, and extract it to /var/lib/lxc.

# shows the new container
lxc-ls -f
# start the container
lxc-start -d -n wordpress 
# now show the running container, and the currently used IP address:
lxc-ls -f 
#wordpress RUNNING  -     NO
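You can also script against the lxc-ls -f table to pull out a container's IPv4 address. A sketch against sample output; the sample text and the IP in it are invented, and the real column layout can differ between LXC versions, so check your own lxc-ls -f output first:

```shell
# extract a container's IPv4 from lxc-ls -f style output
# (the sample table below is invented for illustration)
sample='NAME       STATE    IPV4        IPV6  AUTOSTART
wordpress  RUNNING  10.0.3.165  -     NO'

ip=$(printf '%s\n' "$sample" | awk '$1 == "wordpress" { print $3 }')
echo "$ip"
```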

To make this container accessible from the outside, you can use iptables:

containerIP=`lxc-ls -f -F name,ipv4 | grep wordpress | awk '{ print $2 }'`
interface=`cat /etc/network/interfaces | grep "auto" | grep -v "auto lo" | awk '{ print $2 }'`
HostIP=`ifconfig ${interface} | grep "inet addr" | awk '{ print $2 }' | awk -F ':' '{ print $2 }'`
iptables -t nat -A PREROUTING -d ${HostIP}/32 -i ${interface} -p tcp -m tcp --dport 80 -j DNAT --to-destination ${containerIP}:80
echo "make sure that resolves to this HostIP: " ${HostIP}

If resolves to the IP of your Jiffybox, then you can visit the WordPress installation by browsing to

To change the domain name from to the actual domain name that you want to use, you have to first go into, log in with username admin and password flockport, change the password, and in General Settings change the WordPress URL and the Site URL to your desired domain name, e.g.

You also have to change the Nginx configuration inside the container, replacing the domain name there with the actual domain name of your website:

# switch inside the container
lxc-attach -n wordpress
# unfortunately vi is not available, but nano will do as well:
nano /etc/nginx/sites-available/
# add or your domain name to the 4th line:
#    server_name;
# leave nano with Ctrl-X, and don't forget to save...
# reload nginx for the change of configuration to take effect
service nginx reload

Now you can reach the server on your own domain name, that points to your Jiffybox!

Posted in Software Development | Comments Closed

I thought it would be good to create an OpenSUSE container on my Ubuntu LXC machine.

I am using the existing OpenSUSE template in /usr/lib/lxc/templates/lxc-opensuse, and some SUSE packages built by Thomas-Karl Pietrowski for Ubuntu from, and the build scripts from

This is how you do it (tested with Ubuntu 12.04):

apt-get install python-software-properties
add-apt-repository ppa:thopiekar/zypper 
apt-get update
apt-get install rpm zypper libsolv-tools
wget -O obs-build.tar.gz
tar xzf obs-build.tar.gz
mkdir -p /usr/lib/build
mv obs-build-master/* /usr/lib/build
# small fixes for the template to avoid some errors
sed -i 's/--non-interactive --gpg-auto-import-keys/--gpg-auto-import-keys/g' /usr/lib/lxc/templates/lxc-opensuse
sed -i 's#chpasswd#/usr/sbin/chpasswd#g' /usr/lib/lxc/templates/lxc-opensuse
lxc-create -t opensuse -n demoOpenSuse
lxc-start -n demoOpenSuse
# login with root and password root

Things still to be done:

The creation of the image is interactive, since there seem to be unknown keys:
File ‘content’ from repository ‘repo-oss’ is signed with an unknown key ”. Continue? [yes/no] (no): y
File ‘repomd.xml’ from repository ‘update’ is signed with an unknown key ”. Continue? [yes/no] (no): y

/usr/lib/lxc/templates/lxc-opensuse: line 110: patch: command not found

If there are problems with the root password, try this:
echo "root:root2" | chroot /var/lib/lxc/demoOpenSuse/rootfs /usr/sbin/chpasswd

I think the network is not established yet.

But hopefully this is a start…

Posted in Software Development | Comments Closed

This post covers several topics at once:

I have got some experience with OpenVZ, and was looking at how LXC could satisfy the requirements that I am used to, especially how to install several Linux distributions on one LXC host. I will show how to install CentOS, Ubuntu, and Debian Wheezy containers on an Ubuntu LXC host.

The other issue is that I wanted to play with a virtual machine called JiffyBox provided by DomainFactory.

Let’s look first at how to configure the JiffyBox, before we configure the LXC containers:

The blog post Linux Containers (lxc) in Linode (xen) helped me to understand how to get LXC working on a JiffyBox.
With the default kernel from JiffyBox, lxc-checkconfig shows that some requirements for LXC are missing.
JiffyBox allows you to boot from a custom kernel, so I was looking for a fitting kernel to install.
I have no idea how to do that on CentOS, but on the mentioned blog I found that on Ubuntu you can easily install a kernel that works fine for virtualization.

So you install a 64 bit Ubuntu JiffyBox, and then run these commands to setup LXC:

apt-get install lxc linux-virtual

After that, in the JiffyBox admin website, select the custom kernel Bootmanager 64 Bit (pvgrub64); see also the German JiffyBox FAQ. Then restart your JiffyBox.

After the restart, uname -a should show something like:

Linux 3.2.0-58-virtual #88-Ubuntu SMP Tue Dec 3 17:58:13 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

Now we will install some virtual machines:

First an Ubuntu 12.04 (precise) machine, which should work without any problems:

lxc-create -t ubuntu -n demo1
lxc-start -n demo1

You can login with username ubuntu, and password ubuntu. To get out of the machine, you type shutdown -h now.

To make the machine start at boot time, and to attach to its console once it is running, type this:

ln -s /var/lib/lxc/demo1/config /etc/lxc/auto/demo1.conf
lxc-console -n demo1
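On LXC 1.0 and later, the same autostart behaviour can alternatively be configured in the container's config file instead of the symlink:

```
# in /var/lib/lxc/demo1/config (LXC >= 1.0)
lxc.start.auto = 1
```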

By the way: you find the templates in /var/cache/lxc/, and the containers in /var/lib/lxc/.

For Debian, there is a template in the directory /usr/lib/lxc/templates, but it is for Debian 6 (Squeeze). You need to modify the template slightly so that Debian 7 (Wheezy) is installed instead. See my gist for that: Debian Wheezy template file

mv lxc-debian-wheezy /usr/lib/lxc/templates/lxc-debian-wheezy
chmod a+x /usr/lib/lxc/templates/lxc-debian-wheezy
lxc-create -t debian-wheezy -n demo2

You can login with username root and password root.

For CentOS, I have modified an existing gist so that it works for the latest CentOS 6.5. You might want to check the latest differences in the revision history, to adjust the script to future releases of CentOS. Have a look at my gist for lxc-centos.

mv lxc-centos /usr/lib/lxc/templates/lxc-centos
chmod a+x /usr/lib/lxc/templates/lxc-centos
apt-get install yum
lxc-create -t centos -n demo3

You can login with username root and password password.