Virtualization with KVM – Creating Guest Machines


This is the third article in the series on virtualization with KVM, and this time the focus is on how to create your first guest machines (virtual machines). So far I have covered the installation of the KVM core virtualization infrastructure itself, and I talked a bit about post-install settings and configuration. Please check those two articles to get acquainted with the topic at hand.

Prerequisites

Now that we have the KVM core virtualization infrastructure installed and configured, we can start building our first guest machines. On Ubuntu, there are several tools that can help us with that task. The most common is virt-install, a Python-based script developed by Red Hat. If you opted for a desktop setup and prefer a GUI over the CLI, you can use virt-manager. There is also the ubuntu-vm-builder CLI tool developed by Canonical; to be honest, I have never used it, so I will skip it in this guide. If you followed my previous guides, all the tools should be in place. Let us get started and build our first guest machine.

KVM Virtualization Guests – Tools

Along with the tools I mentioned in the prerequisites, virsh and qemu-img are also going to be used. Virsh is the standard tool for managing our guest domains (virtual machines). Qemu-img will help us create and manage disk image files. Depending on the disk image format, qemu-img can also be used to create and manage snapshots.
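
For instance, with a qcow2 image, snapshots can be handled straight from the qemu-img CLI. A minimal sketch, assuming the image path we create later in this guide and a made-up snapshot name (the guest should be shut off while you do this):

qemu-img snapshot -c clean-install /var/lib/libvirt/images/disk-image-001.qcow2
qemu-img snapshot -l /var/lib/libvirt/images/disk-image-001.qcow2
qemu-img snapshot -a clean-install /var/lib/libvirt/images/disk-image-001.qcow2
qemu-img snapshot -d clean-install /var/lib/libvirt/images/disk-image-001.qcow2

The -c flag creates an internal snapshot, -l lists them, -a reverts to one, and -d deletes it.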

KVM Virtualization Guests – Additional ISO pool

We need an installation source from which we can build a virtual machine. When we install an operating system on a physical machine, we use some sort of installation media, usually a CD/DVD or a USB stick. When building a virtual machine, we can use the same sources, but an ISO image is much more practical.

For that reason, I like to create an additional storage pool specifically for holding ISO images. Just to make it clear, this is an optional step; I just like to have things organized 🙂 I will use the virsh CLI to define my dedicated ISO storage pool:

mkdir /var/lib/libvirt/iso-images
virsh pool-define-as --name iso-images --type dir --target /var/lib/libvirt/iso-images
virsh pool-autostart iso-images
virsh pool-start iso-images
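
If you want to confirm the pool is defined, running, and set to autostart, virsh can report that back; a quick check, nothing more:

virsh pool-list --all
virsh pool-info iso-images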

Easy as that. I will also download an Ubuntu 16.04 minimal ISO, which I will use later:

cd /var/lib/libvirt/iso-images
wget http://archive.ubuntu.com/ubuntu/dists/xenial/main/installer-amd64/current/images/netboot/mini.iso
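
Because the ISO was downloaded straight into the pool directory, it is worth refreshing the pool so libvirt registers the file as a volume; a quick sanity check:

virsh pool-refresh iso-images
virsh vol-list iso-images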

KVM Virtualization Guests – Creating a disk image

It is quite easy to create a virtual disk image now that we have everything prepared. We are going to use the qemu-img tool to create a qcow2 image. The qcow2 format will allow us to create snapshots later. To create such an image, execute the following in your terminal:

qemu-img create -f qcow2 -o preallocation=metadata /var/lib/libvirt/images/disk-image-001.qcow2 10G
...
Formatting '/var/lib/libvirt/images/disk-image-001.qcow2', fmt=qcow2 size=10737418240 encryption=off cluster_size=65536 preallocation=metadata lazy_refcounts=off refcount_bits=16

Notice the metadata pre-allocation option I used here. There are three pre-allocation modes available: along with metadata, there are falloc and full. You can, of course, create a disk image without pre-allocation, but its performance will suffer considerably compared with the other three.

Both metadata and falloc create a sparse disk. Metadata allocates the space required for the image metadata but does not allocate space for the data itself. Falloc allocates space for both metadata and data but marks the data blocks as unallocated. Full, on the other hand, allocates space for metadata and data up front, consuming all the physical space you assign (not sparse); all of the empty allocated space is written out as zeros.

Based on my extensive use of KVM and qcow2 images, and the material I managed to find covering the topic, metadata is my choice. The gain is a sparse disk image without the runtime cost of allocating and managing metadata on first write.
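
If you want to see the sparseness for yourself, compare the virtual size with the space actually consumed on the host. Something along these lines (the exact numbers will vary per system):

qemu-img info /var/lib/libvirt/images/disk-image-001.qcow2
ls -lh /var/lib/libvirt/images/disk-image-001.qcow2
du -h /var/lib/libvirt/images/disk-image-001.qcow2

Here qemu-img info reports the 10G virtual size, ls -lh shows the apparent file size, and du -h shows the blocks actually allocated on disk.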

KVM Virtualization Guests – A guest machine

With our disk image ready, we can finally create our first virtual machine. I will use the virt-install CLI to create one, attached to one of the networks I created in my previous tutorial. Execute the following in a terminal:

virt-install \
--connect qemu:///system \
-n Machine-001 \
--memory 1024 \
--vcpus 1 \
--cpu host \
--cdrom /var/lib/libvirt/iso-images/mini.iso \
--os-type linux \
--boot cdrom,hd \
--disk path=/var/lib/libvirt/images/disk-image-001.qcow2,bus=scsi,cache=none,format=qcow2 \
--controller type=scsi,model=virtio-scsi \
-w network=nat-dhcp,model=virtio \
--graphics vnc,listen=0.0.0.0,keymap=local \
--virt-type kvm \
--video=cirrus \
--memballoon virtio \
--noautoconsole

Starting install...
Creating domain... | 0 B 00:00:00 
Domain installation still in progress. You can reconnect to 
the console to complete the installation process.
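
While the installer runs in the background, you can confirm the domain state from the host; just a quick check:

virsh list --all
virsh dominfo Machine-001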

As you can see, I used several options to build the guest machine; some are self-explanatory, while others deserve a bit of description. Let's start from the beginning:

  • connect – the location and type of hypervisor to connect to. In the example, a connection is made to the local system libvirt instance.
  • n – sets the name of the guest machine. --name can also be used.
  • memory – the amount of memory assigned to the instance, in MiB.
  • vcpus – the number of virtual CPUs assigned to the guest machine.
  • cpu – the CPU model and features exposed to the guest machine. In the example (host), the hypervisor's own CPU features are exposed to the guest machine.
  • cdrom – the source from which the guest machine will be installed.
  • os-type – the type of operating system being installed.
  • boot – defines the boot order for the guest machine.
  • disk – specifies the media to use as storage for the guest machine, with various options. In the example, the bus, cache mode and format of the disk image are defined.
  • controller – the type and model of a controller device attached to the guest machine.
  • w – the name and model of the virtual network used by the guest machine. --network can also be used.
  • graphics – defines how the guest machine's graphical display can be accessed.
  • virt-type – the hypervisor to install on, based on the output of the virsh capabilities command.
  • video – the video device model attached to the guest machine.
  • memballoon – the virtual memory balloon device attached to the guest machine.
  • noautoconsole – overrides the default behaviour of running virt-viewer or virsh console and does not automatically try to connect to the guest machine console.

These are most of the options I use when defining a guest machine. Of course, there are many more options that can be used to fine-tune your guest machine; check the man page for the virt-install command to see what else is available.

Be careful when setting the cpu option. If you choose host and plan to do migrations, you will have to make sure that all the hypervisor machines (physical hosts) involved have exactly the same CPU model.
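
If you expect to migrate across mixed hardware, a common alternative is to pass a specific named CPU model instead of host. You can list the models libvirt knows about and pick the lowest common denominator of your hosts; Westmere below is just an example, not a recommendation:

virsh cpu-models x86_64
virt-install ... --cpu Westmere ...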

To finish the installation, you will need to connect to the guest machine's graphical console. Based on the options passed to virt-install, you will need a VNC viewer, or you can use virt-manager. Once connected, you can finish the installation just like you would on a regular physical machine.
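
If you are not sure which display the guest was given, virsh can tell you; the viewer command and hostname below are just placeholders for whatever VNC client and host you use:

virsh vncdisplay Machine-001
vncviewer kvm-host.example.com:0

A display of :0 means the guest listens on TCP port 5900 of the hypervisor, :1 on 5901, and so on.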

KVM Virtualization Guests – Discard with guest machines

Earlier in this guide, when we created the virtual disk image for our guest machine, we actually created a sparse qcow2 file. This is great, as it is space efficient, and since we only allocated metadata, the create operation itself was almost instantaneous. But over time the qcow2 disk image slowly expands as data is written, moved and deleted inside the guest, and the freed space is never returned to the host. This causes a degradation in performance.

Now, with more recent versions of libvirt (KVM) and the newer machine types they introduce, we have an option to re-sparsify those images. The mechanism is the same one used with SSDs (solid state drives): discard. Discard support was introduced soon after the first SSDs appeared on the market, allowing the operating system to tell the SSD which blocks can be cleaned and reused, to preserve performance and extend drive lifetime.

To enable this feature for our guest machines, we need to make sure the disk bus is scsi and the controller model is virtio-scsi. Since we defined both during the guest machine build, we can carry on: shut down the guest machine and then the libvirtd service as well:

virsh shutdown Machine-001
systemctl stop libvirt-bin.service
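
One caveat: virsh shutdown only sends a graceful ACPI request, so between the two commands above it is worth making sure the guest has actually reached the shut off state:

virsh domstate Machine-001

It should report "shut off" before you stop the service and touch the XML.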

Now open the /etc/libvirt/qemu/Machine-001.xml file in a text editor and locate the following line:

<driver name='qemu' type='qcow2' cache='none'/>

Modify it to look like the one below:

<driver name='qemu' type='qcow2' cache='writeback' discard='unmap'/>

Save the file, then start the libvirt service and the guest machine:

systemctl start libvirt-bin.service
virsh start Machine-001
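
Once the guest is back up, you can verify the new driver attributes made it into the live domain definition; a quick grep is enough:

virsh dumpxml Machine-001 | grep -i discard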

Connect to the guest machine and modify the fstab file by adding the discard option (right after noatime, comma separated) for all partitions on the disk except swap (if present). Reboot the guest machine to apply the changes. You can also run the fstrim tool periodically by executing the "fstrim -a" command.
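
As an illustration, an ext4 root entry in the guest's /etc/fstab might end up looking like the line below; the UUID is a placeholder for your own:

UUID=<your-root-uuid> / ext4 noatime,discard,errors=remount-ro 0 1

If you prefer not to pay the small cost of mount-time discard, skipping the fstab change and running fstrim -a from a periodic cron job achieves much the same result.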

Thank you for reading.


Missed the other "Virtualization with KVM" articles?
Virtualization with KVM – Installation and configuration
Virtualization with KVM – Extended Customization


Author: Zack

Zack can cook and administer systems. He likes to discuss both and share knowledge and wisdom with everyone interested. So go ahead and say hello.
