Tutorial: lxc

Ubuntu (in its latest releases) and LXC are nice partners and play well together.
The web-based GUI is done the right way.
But you do not need the GUI, and you do not need Ubuntu, to use LXC.

My tutorial today will show the basic, console-only Debian way to work with LXC.

So what is LXC? They say:

Current LXC uses the following kernel features to contain processes:

  • Kernel namespaces (ipc, uts, mount, pid, network and user)
  • Apparmor and SELinux profiles
  • Seccomp policies
  • Chroots (using pivot_root)
  • Kernel capabilities
  • Control groups (cgroups)

As such, LXC is often considered as something in the middle between
a chroot on steroids and a full fledged virtual machine.
The goal of LXC is to create an environment as close as possible to a
standard Linux installation but without the need for a separate kernel.


LXC is free software, most of the code is released under the terms of the
GNU LGPLv2.1+ license, some Android compatibility bits are released
under a standard 2-clause BSD license and some binaries and templates
are shipped under the GNU GPLv2 license.

And how can I work with it under Debian on a KVM?

  1. Install linux headers

    apt-get install linux-headers-$(uname -r)
  2. Install the LXC packages

    apt-get install lxc bridge-utils

    Optionally install libvirt-bin as well - you only need it if you want to use the libvirt bridge.

Next thing is enabling the cgroups:

nano /etc/fstab
#Add this line at the end
cgroup  /sys/fs/cgroup  cgroup  defaults  0   0
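With the fstab entry above in place, the cgroup filesystem can be mounted right away, without a reboot (a minimal sketch; the path matches the fstab line):

```shell
# Create the mount point and mount the cgroup filesystem from /etc/fstab
mkdir -p /sys/fs/cgroup
mount /sys/fs/cgroup

# Verify that it is mounted
mount | grep cgroup
```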

After that, the following command should show that everything is fine:

lxc-checkconfig
Output should look like this:

--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: enabled
Multiple /dev/pts instances: enabled

--- Control groups ---
Cgroup: enabled
Cgroup clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled

--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
File capabilities: enabled

Next thing is to add and configure networking:

You can use LXC, libvirt or /etc/network/interfaces to create the bridge which connects the containers to your local network.

  1. LXC

    nano /etc/default/lxc

    Add the following content:

    # Leave USE_LXC_BRIDGE as "true" if you want to use lxcbr0 for your
    # containers.  Set to "false" if you'll use virbr0 or another existing
    # bridge, or mavlan to your host's NIC.
    # If you change the LXC_BRIDGE to something other than lxcbr0, then
    # you will also need to update your /etc/lxc/lxc.conf as well as the
    # configuration (/var/lib/lxc//config) for any containers
    # already created using the default config to reflect the new bridge
    # name.
    # If you have the dnsmasq daemon installed, you'll also have to update
    # /etc/dnsmasq.d/lxc and restart the system wide dnsmasq daemon.

    This is a copy of the default configuration from the Ubuntu package.
    I stick with these defaults, but you can set up the network on your own.
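    For reference, below the comment block the Ubuntu package ships settings roughly like these (the usual lxcbr0 defaults; adjust them to your network):

```shell
USE_LXC_BRIDGE="true"
LXC_BRIDGE="lxcbr0"
LXC_ADDR="10.0.3.1"
LXC_NETMASK="255.255.255.0"
LXC_NETWORK="10.0.3.0/24"
LXC_DHCP_RANGE="10.0.3.2,10.0.3.254"
LXC_DHCP_MAX="253"
```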

  2. libvirt
    Second way to configure networking: libvirt (libvirt-bin).
    Define the network, start it, and enable autostart:

    #First line not needed for Debian 7!
    virsh -c lxc:/// net-define /etc/libvirt/qemu/networks/default.xml
    virsh -c lxc:/// net-start default
    virsh -c lxc:/// net-autostart default

    Output is:

    ~# virsh -c lxc:/// net-define /etc/libvirt/qemu/networks/default.xml
    error: Failed to define network from /etc/libvirt/qemu/networks/default.xml
    error: operation failed: network 'default' already exists with uuid 7b950023-411a-5a72-b969-9568bc68908b
    ~# virsh -c lxc:/// net-start default
    Network default started
    ~# virsh -c lxc:/// net-autostart default
    Network default marked as autostarted

    We can look at the libvirt network config:

    cat /var/lib/libvirt/network/default.xml
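    The file typically looks something like this (only a sketch of the stock "default" NAT network; the UUID and the exact addresses are machine-specific):

```xml
<network>
  <name>default</name>
  <bridge name="virbr0" />
  <forward mode="nat"/>
  <ip address="192.168.122.1" netmask="255.255.255.0">
    <dhcp>
      <range start="192.168.122.2" end="192.168.122.254" />
    </dhcp>
  </ip>
</network>
```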
  3. network
    Third way to configure networking:

    nano /etc/network/interfaces
    #Bridge setup - add at the bottom of the file
    auto br0
    iface br0 inet static
      # add your address, netmask and gateway lines here
      bridge_ports eth0
      bridge_fd 0
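After saving the file, the bridge can be brought up (be careful when you are connected over SSH via eth0 - the interface becomes a port of the bridge):

```shell
# Bring the new bridge up; eth0 becomes a port of br0
ifup br0

# Check the result: br0 should list eth0 as an interface
brctl show br0
```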

For me the third way is the easiest - a straightforward bridge for the LXC containers.
But it is also fine to use the bridge that LXC generates itself (it does the same job).

If you are already running KVM you can use the bridged network from libvirt too.

Now we need some iptables magic to enable the lxc containers to get internet access:

 iptables -t filter -A INPUT -i lxcbr0 -j ACCEPT
 iptables -t filter -A OUTPUT -o lxcbr0 -j ACCEPT
 iptables -t filter -A FORWARD -i lxcbr0 -j ACCEPT
 # 10.0.3.0/24 is the default lxcbr0 subnet - adjust it if yours differs
 iptables -A FORWARD -s 10.0.3.0/24 -o eth0 -j ACCEPT
 iptables -A FORWARD -d 10.0.3.0/24 -o lxcbr0 -j ACCEPT

 iptables -t filter -A INPUT -i virbr0 -j ACCEPT
 iptables -t filter -A OUTPUT -o virbr0 -j ACCEPT
 iptables -t filter -A FORWARD -i virbr0 -j ACCEPT
 # 192.168.122.0/24 is the default virbr0 subnet - adjust it if yours differs
 iptables -A FORWARD -s 192.168.122.0/24 -o eth0 -j ACCEPT
 iptables -A FORWARD -d 192.168.122.0/24 -o virbr0 -j ACCEPT

 # One MASQUERADE rule is enough for both bridges
 iptables -A POSTROUTING -t nat -j MASQUERADE
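These rules are gone after a reboot. A minimal way to persist them, assuming the common iptables-save/iptables-restore workflow:

```shell
# Save the current rules to a file
iptables-save > /etc/iptables.rules

# Restore them at boot, e.g. via a pre-up hook in /etc/network/interfaces:
#   pre-up iptables-restore < /etc/iptables.rules
```

Alternatively the iptables-persistent package can take care of saving and restoring for you.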

The other way round is to route ports from the host to one container - e.g. for a VestaCP instance:

  • -i eth0: the interface of your host that should listen
  • --to-destination: IP (and optionally port) of the LXC container as target

#192.168.122.10 is only an example container IP - replace it with yours
iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 20 -j DNAT --to-destination 192.168.122.10
iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 21 -j DNAT --to-destination 192.168.122.10
iptables -t nat -A PREROUTING -i eth0 -p udp -m udp --dport 53 -j DNAT --to-destination 192.168.122.10
iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.122.10
iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 25 -j DNAT --to-destination 192.168.122.10
iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 143 -j DNAT --to-destination 192.168.122.10
iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 587 -j DNAT --to-destination 192.168.122.10

Now - finally - the time to create our first container:

I will call it "vnc".

lxc-create -n vnc -t debian

Manpage for lxc-create: http://lxc.sourcefor...lxc-create.html

You will be asked quite a lot of things, but the important ones are the Debian version,
the package sources, and the root password.

Creating the first container might take quite a bit of time (it has to download all the files).

We then should take a look at the container configuration:

nano /var/lib/lxc/vnc/config

Add the following lines:

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = virbr0
#Fill in the container IP and the host's bridge IP as gateway, e.g.:
#lxc.network.ipv4 = 192.168.122.10/24
#lxc.network.ipv4.gateway = 192.168.122.1
lxc.network.ipv4 =
lxc.network.ipv4.gateway =

So this time we use the host interface "virbr0", give the container a static IP, and use the host's bridge address as gateway to get access to the internet.

Next step: Enable autostart of the container:

ln -s /var/lib/lxc/vnc/config /etc/lxc/auto/vnc

And start/stop the container:

lxc-start -n vnc -d
lxc-stop -n vnc
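To check whether the container is actually running, the standard LXC tools help (a quick sketch):

```shell
# Show the state (RUNNING/STOPPED) and the PID of the container
lxc-info -n vnc
```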

"lxc-list" will list all containers:




You can enter the console of the container by:

lxc-console -n vnc

Remember the following shortcuts:

Type Ctrl+a q to exit the console, Ctrl+a Ctrl+a to enter Ctrl+a itself.

Well, the console should show up ... if the Debian template package were not broken.
The fix is already available for Ubuntu, but on Debian you might have to wait for it.
There are patches available, but looking at the problem itself ... the ttys are simply missing.

But that can be fixed easily:

chroot /var/lib/lxc/vnc/rootfs
mknod -m 666 /dev/tty1 c 4 1
mknod -m 666 /dev/tty2 c 4 2
mknod -m 666 /dev/tty3 c 4 3
exit

Next issue might be the resolv.conf:

nano /var/lib/lxc/vnc/rootfs/etc/resolv.conf

Just ensure that the dns servers are correct.
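A minimal example (the nameservers here are just common public resolvers; use the ones from your own network):

```
# /var/lib/lxc/vnc/rootfs/etc/resolv.conf
nameserver 8.8.8.8
nameserver 8.8.4.4
```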

So back to the console:

lxc-console -n vnc

And everything is working again:

Debian GNU/Linux 7 vnc tty1

vnc login:

So log in as root and reinstall the SSH server:

apt-get update && apt-get install --reinstall openssh-server

The next time you restart the container, you can log in to the LXC container via SSH:


You can even forward the ssh port to one of the LXC containers.
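A hedged sketch of such a forward (host port 2222 to SSH port 22 in a container; the container IP is again only an example):

```shell
# Forward host port 2222 to SSH (port 22) of a container at 192.168.122.10
# (example IP - use your container's address)
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 2222 \
  -j DNAT --to-destination 192.168.122.10:22

# Then connect from outside:
#   ssh -p 2222 root@<host-ip>
```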

That's it.

For me LXC is a good tool to separate services.

It is easy to try control panels like VestaCP because you do not have to reinstall your main KVM guest.

Just start a container and install whatever you want - it cannot harm your main VPS.

You can even install different versions of a lib or server, each in its own instance.