For this virtualization guide, a KVM hypervisor obviously needs to be installed. As you may have noticed, there is a link to an article in the opening paragraph which describes how to do it, and this is the only prerequisite required to continue with this guide. If you have KVM in place and running, let's start with the customization.
KVM Customization – Configuration files
There are three files we can modify, though none of the changes are mandatory for general KVM operation. Virtualization features will not be affected by any changes made to those files, but there are some other useful benefits.
KVM Configuration files – /etc/default/libvirt-guests
First we will modify the /etc/default/libvirt-guests file. Open the file in a text editor of your choice and read through the options. They are nicely commented and should be pretty much self-explanatory. The one option I always change is ON_SHUTDOWN. It defines how guest machines (virtual machines) behave when the physical host (hypervisor) shuts down or restarts. With the default options, when the physical host is shut down or rebooted, all the guest machines are also gracefully shut down. I want my guests to suspend, so I will modify the ON_SHUTDOWN option accordingly:
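A minimal sketch of the change, assuming the stock libvirt-guests defaults file:

```shell
# /etc/default/libvirt-guests
# Suspend (managed save) all running guests when the host goes down
ON_SHUTDOWN=suspend
```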
In case of a major update, like a new kernel install, where the physical host is required to reboot, the suspend option gives me the possibility to reboot or shut down the physical host without affecting the state of the guest machines.
KVM Configuration files – /etc/libvirt/libvirtd.conf
Next file on the list is /etc/libvirt/libvirtd.conf. Open the file in a text editor and modify the following options:
listen_tls = 0
listen_tcp = 1
listen_addr = "IP_OF_YOUR_PHY_HOST"
log_level = 3
log_outputs="1:file:/var/log/libvirt/libvirtd.log"
Let's say you run a hosting company providing a virtual hosting service, with several hosts running KVM. As a good system administrator, you have a scheduled maintenance window in which you perform various system updates, upgrades or faulty hardware replacements, and every now and then the affected host needs to be rebooted or shut down. By enabling the listen_tcp and listen_addr options, you gain the ability to migrate virtual machines from the malfunctioning host to any other running host before you start maintenance. This allows you to remedy any and all problems on the affected host and later return the virtual machines which were present on it. Just make sure the listen_addr option defines an IP address which is reachable by the other hosts. Also make sure you are running your hosts on a trusted network, since the listen_tcp option opens an unencrypted TCP/IP socket. The other two options define what gets logged for the libvirtd daemon and where. The log_outputs option defines the location of the log file, and log_level = 3 means all WARNING and ERROR messages will be logged.
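With the listener enabled on both hosts, such a migration can be sketched like this (the guest name vm01 and host name host2 are hypothetical examples, not from this guide):

```shell
# Live-migrate guest "vm01" from the current host to "host2" over plain TCP
virsh migrate --live vm01 qemu+tcp://host2/system

# Verify the guest is now running on the destination host
virsh -c qemu+tcp://host2/system list
```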
KVM Configuration files – /etc/default/libvirt-bin
Last file to modify is /etc/default/libvirt-bin. This file defines which options will be passed to the libvirtd daemon when it starts. Since we enabled the TCP listener socket in /etc/libvirt/libvirtd.conf, we also have to tell the daemon to start with the listen option. To do it, open the file in a text editor and modify the following option:
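The relevant variable is libvirtd_opts; assuming the stock defaults file (where the daemon flag -d is already present), adding the -l listen flag would look like this:

```shell
# /etc/default/libvirt-bin
# -d: run as a daemon, -l: listen for TCP/IP connections
libvirtd_opts="-d -l"
```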
If you are using Nagios monitoring and you enabled the listener, there is a big chance your libvirtd log will be flooded with the following messages:
1703: error : virNetSocketReadWire:1613 : End of file while reading data: Input/output error
The error is caused by Nagios tcp_socket checks, for reasons still unknown to me. But do not worry, this will not affect any virtualization features nor the functionality of KVM itself.
KVM Customization – Networks
There are several ways you can create virtual networks for KVM core virtualization. The most common ones are a bridged network, an isolated virtual network, and a NAT network which forwards traffic to the physical interface of the host machine. The NAT network is also the most common one and comes as the default when KVM is installed. I will explain how to create all three in this section of the guide.
KVM Networks – default network
When you first install KVM, it comes with a default NAT network. There is nothing wrong with it, and you can use it for your projects, but I prefer to remove it and define my own networks. To remove it, execute the following in a terminal:
virsh net-list

 Name                 State      Autostart     Persistent
----------------------------------------------
 default              active     yes           yes

virsh net-destroy default
Network default destroyed

virsh net-undefine default
Network default has been undefined
KVM Networks – network bridge
If you want to use IP addresses available on your local physical network (usually provided by a dedicated DHCP server or home broadband router), you need to define a network bridge to your host's physical interface. First make sure you have the bridge-utils package installed:
apt-cache policy bridge-utils
bridge-utils:
  Installed: 1.5-9ubuntu1
  Candidate: 1.5-9ubuntu1
  Version table:
 *** 1.5-9ubuntu1 500
        500 http://archive.ubuntu.com/ubuntu xenial/main amd64 Packages
        100 /var/lib/dpkg/status
To define a bridge, we need to change the network interface configuration by editing the file /etc/network/interfaces. Open it in a text editor. For a static configuration it will look something like this:
# The primary network interface
auto eth0
iface eth0 inet static
    address XXX.XXX.XXX.XXX
    netmask XXX.XXX.XXX.XXX
    network XXX.XXX.XXX.XXX
    broadcast XXX.XXX.XXX.XXX
    gateway XXX.XXX.XXX.XXX
    # dns-* options are implemented by the resolvconf package, if installed
    dns-nameservers XXX.XXX.XXX.XXX
For a dynamic (DHCP) configuration it will look like this:
# The primary network interface
auto eth0
iface eth0 inet dhcp
To define a bridge, we need to set the primary interface to manual mode and define a new br0 interface which will represent our bridge. For a static configuration, modify it to look like this:
# The primary network interface
auto eth0
iface eth0 inet manual

# Bridge interface
auto br0
iface br0 inet static
    address XXX.XXX.XXX.XXX
    netmask XXX.XXX.XXX.XXX
    network XXX.XXX.XXX.XXX
    broadcast XXX.XXX.XXX.XXX
    gateway XXX.XXX.XXX.XXX
    dns-nameservers XXX.XXX.XXX.XXX
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0
For a dynamic (DHCP) configuration, modify it to look like this:
# The primary network interface
auto eth0
iface eth0 inet manual

# Bridge interface
auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0
Now a few settings in /etc/sysctl.conf need to be added. Open the file in text editor and add the following settings to the end of the file:
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
If you are running Ubuntu, you need to enable the br_netfilter module before you can apply these settings. On Debian, it will work out of the box. On Ubuntu, the module was removed from the core by this commit:
commit 34666d467cbf1e2e3c7bb15a63eccfb582cdd71f
Author: Pablo Neira Ayuso <email@example.com>
Date:   Thu Sep 18 11:29:03 2014 +0200

    netfilter: bridge: move br_netfilter out of the core

$ git describe 34666d467cbf1e2e3c7bb15a63eccfb582cdd71f
v3.17-rc4-777-g34666d4
To enable it again, execute the following:
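Assuming the module is available in the running kernel, loading it now and making it persistent across reboots might look like this:

```shell
# Load the br_netfilter module now
modprobe br_netfilter

# Load it automatically on every boot
echo br_netfilter >> /etc/modules
```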
Now you can apply those settings by executing the following:
sysctl -p /etc/sysctl.conf
Unfortunately, due to a bug in Debian-based systems, those settings do not seem to get loaded on boot. The bug is more than 10 years old (can you believe that s***!!!), and it seems it still affects systems today. The workaround is adding the following line to /etc/rc.local, just before the exit 0 line:
...
/sbin/sysctl -p /etc/sysctl.conf

exit 0
It is also recommended to add a specific firewall rule to the same file, due to possible Path MTU Discovery issues with MSS clamping. Edit /etc/rc.local so it looks like this:
...
/sbin/sysctl -p /etc/sysctl.conf

iptables -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu

exit 0
Now reboot the physical host for all the settings to take effect.
KVM Networks – NAT network
Creating a NAT network for KVM core virtualization is pretty simple and straightforward. You will need an XML file with the configuration definition for the NAT network and the virsh command to put it all in place. You can define a NAT network with a DHCP pool, where IP addresses are automatically allocated to your virtual machines, or a static NAT network, where you will have to configure the network manually inside your virtual machines. I will explain how to create both types.
For a DHCP-enabled NAT network, create an XML file with the following content:
<network>
  <name>nat-dhcp</name>
  <forward mode='nat'/>
  <bridge name='virbr100' stp='on' delay='0'/>
  <ip address='172.16.0.1' netmask='255.255.255.0'>
    <dhcp>
      <range start="172.16.0.2" end="172.16.0.254"/>
    </dhcp>
  </ip>
</network>
Save the file as nat-dhcp.xml and execute the following to enable this network:
virsh net-define /path-to-file/nat-dhcp.xml
virsh net-autostart nat-dhcp
virsh net-start nat-dhcp
For a static NAT network, the XML definition file is almost the same as the DHCP one. We just need to exclude the <dhcp> block:
<network>
  <name>nat-static</name>
  <forward mode='nat'/>
  <bridge name='virbr101' stp='on' delay='0'/>
  <ip address='10.0.0.1' netmask='255.255.255.0'>
  </ip>
</network>
Save the file as nat-static.xml and execute the following to enable this network:
virsh net-define /path-to-file/nat-static.xml
virsh net-autostart nat-static
virsh net-start nat-static
No reboot of the physical host is required here.
KVM Networks – Isolated network
This type of network provides a completely isolated private network for guests (virtual machines). Guests are able to communicate with each other and with the physical host (hypervisor), but are unable to access the “outside world”, like your LAN or the internet, and vice versa, due to the lack of a forward mode option in the XML definition file.
To create such a network, create an XML definition file with the following content:
<network>
  <name>isolated-dhcp</name>
  <bridge name='virbr102' stp='on' delay='0'/>
  <ip address="192.168.0.1" netmask="255.255.255.0">
    <dhcp>
      <range start="192.168.0.2" end="192.168.0.254"/>
    </dhcp>
  </ip>
</network>
Save the file as isolated-dhcp.xml and execute the following to enable this network:
virsh net-define /path-to-file/isolated-dhcp.xml
virsh net-autostart isolated-dhcp
virsh net-start isolated-dhcp
This example has a DHCP pool defined; to create an isolated network with static IP allocation, create the same XML definition file and omit the <dhcp> block.
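Such a static isolated network definition could be sketched like this (the isolated-static name and virbr103 bridge are my own example choices, following the pattern of the definitions above):

```xml
<network>
  <name>isolated-static</name>
  <bridge name='virbr103' stp='on' delay='0'/>
  <ip address="192.168.1.1" netmask="255.255.255.0">
  </ip>
</network>
```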
Now you have several network options to choose from for your KVM core virtualization. It may seem a bit confusing right now, why so many networks, but believe me, when you start working on your virtualization projects it will make much more sense 🙂
KVM Customization – Storage
Upon KVM core virtualization installation, a default storage pool is defined on the local system. This may be OK if you have enough space to create the number of virtual machines you want without affecting other services on your system. If you have limited space, we will consider some other storage options here, like a dedicated disk or an array of disks, and an LVM setup.
KVM Storage – Dedicated disk(s)
As mentioned in the previous paragraph, having the storage pool on the same disk where your operating system is installed can have a serious impact on system performance. To remedy this, a separate dedicated disk or array of disks is added to act as the storage pool. If you decide on an array of disks, RAID-10 is recommended: the obvious gain is performance, while redundancy is also ensured. Also, if you can afford it, use SSD drives, which will ensure even better performance. To put all this in place, let's see what the defaults look like. Execute the following in your terminal:
virsh pool-list

 Name                 State      Autostart
-------------------------------------------
 default              active     yes
To check the details of this default pool, execute the following in your terminal:
virsh pool-dumpxml default

<pool type='dir'>
  <name>default</name>
  ...
  <target>
    <path>/var/lib/libvirt/images</path>
From this information we can see that the default pool type is dir, which means it supports disk image files (I will talk about block devices a bit later in this article). We can also see the path where images will be stored: /var/lib/libvirt/images.
Now let's say an additional disk is added and identified by the system as /dev/sdb. To define a storage pool on it, a file system needs to be created first. In this example I will define a simple ext4 file system:
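A minimal sketch (note that this destroys any existing data on /dev/sdb):

```shell
# Create an ext4 file system on the whole new disk
mkfs.ext4 /dev/sdb
```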
While this may be OK for a single disk, when defining an ext4 file system on a RAID array you will have to pass the correct parameters (stride and stripe-width) based on the RAID level, number of disks in the array, RAID chunk size, number of filesystem blocks, etc.
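As a worked example with assumed values (RAID-10 over 4 disks, 512 KiB chunk size, 4 KiB filesystem blocks; /dev/md0 is a hypothetical array device): stride is the chunk size divided by the block size, and stripe-width is stride multiplied by the number of data-bearing disks, which for RAID-10 over 4 disks is 2:

```shell
CHUNK_KB=512    # RAID chunk size in KiB (assumed)
BLOCK_KB=4      # ext4 block size in KiB
DATA_DISKS=2    # RAID-10 over 4 disks stripes data across 2 disks

STRIDE=$((CHUNK_KB / BLOCK_KB))
STRIPE_WIDTH=$((STRIDE * DATA_DISKS))
echo "stride=$STRIDE stripe-width=$STRIPE_WIDTH"
# stride=128 stripe-width=256

# The resulting mkfs call (commented out; /dev/md0 is hypothetical):
# mkfs.ext4 -E stride=$STRIDE,stripe-width=$STRIPE_WIDTH /dev/md0
```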
You can define a new storage pool now, or you can place the default storage pool on the newly added disk. To define a new storage pool, we need a mount point. First create a directory where you will mount the new disk:
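Using the /mnt/kvm-images path from the rest of this section:

```shell
mkdir /mnt/kvm-images
```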
Now mount the disk:
mount /dev/sdb /mnt/kvm-images
Modify /etc/fstab to make this mount permanent by adding the following to the end of the file:
# KVM Images Pool
UUID=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX /mnt/kvm-images ext4 noatime 0 2
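To find the UUID of the new file system for the fstab entry, you can use blkid:

```shell
blkid /dev/sdb
```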
Now define and start a new pool by executing the following:
virsh pool-define-as --name kvm-images --type dir --target /mnt/kvm-images
virsh pool-autostart kvm-images
virsh pool-start kvm-images
If you would like to place the existing default storage pool on this new dedicated disk, you also need to create a file system first; the same command as above applies here. Then we need to undefine the existing default storage pool:
virsh pool-destroy default
virsh pool-undefine default
Mount the disk to an existing target:
mount /dev/sdb /var/lib/libvirt/images
Also add an /etc/fstab entry to make it permanent:
# KVM Images Pool
UUID=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX /var/lib/libvirt/images ext4 noatime 0 2
Now we can define default storage pool by executing the following:
virsh pool-define-as --name default --type dir --target /var/lib/libvirt/images
virsh pool-autostart default
virsh pool-start default
Just a little notice here: when adding a permanent mount definition to /etc/fstab, make sure to add the discard option along with noatime if you are using SSD drives which support it.
KVM Storage – LVM
Implementing LVM is the easiest way to gain block device storage for your guest machines. The advantage over disk image files is a considerable performance gain. You will also need a dedicated disk or disk array, on which we will create a pre-defined LVM volume group. We will use that group to define a block device storage pool for our KVM core virtualization.
I will also use /dev/sdb for this example. First we need to check whether our system has the logical volume manager package installed:
apt-cache policy lvm2
lvm2:
  Installed: 2.02.168-2
  Candidate: 2.02.168-2
  Version table:
 *** 2.02.168-2 500
        500 http://ftp.hr.debian.org/debian stable/main amd64 Packages
        100 /var/lib/dpkg/status
If it is not installed, execute the following:
aptitude install -y -R lvm2
First define a physical volume:
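Assuming the whole /dev/sdb disk is used:

```shell
pvcreate /dev/sdb
```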
Now we will define a volume group which we will use for our storage pool:
vgcreate kvm-block-storage /dev/sdb
At this point we can define a storage pool:
virsh pool-define-as --name kvm-block-storage --type logical --target /dev/kvm-block-storage
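The pool still needs to be autostarted and started; following the same pattern as the directory-based pools earlier, that would look like this:

```shell
virsh pool-autostart kvm-block-storage
virsh pool-start kvm-block-storage
```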
You can also undefine the default storage pool and recreate it so it uses the pre-defined LVM group.
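Following the same pattern as in the dedicated disk section, a sketch of that procedure:

```shell
# Remove the existing default pool
virsh pool-destroy default
virsh pool-undefine default

# Recreate it on top of the LVM volume group
virsh pool-define-as --name default --type logical --target /dev/kvm-block-storage
virsh pool-autostart default
virsh pool-start default
```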
This is just a little piece concerning the storage options you can use for your KVM core virtualization. If you are interested in other options, please check this article.
Missed the introductory article?
Virtualization with KVM – Installation and configuration