Tag Archives: kvm

Difference between virtio-blk and virtio-scsi?

Both virtio-blk and virtio-scsi are types of para-virtualized devices, so what exactly is the difference between them? I had this question in my mind for some time. As readers may already know, by default OpenStack assigns a virtio-blk disk to an instance, which is why it shows up inside the instance as vd*. We also have the option to assign a SCSI disk to an instance by setting metadata properties on the Glance image used to spawn it; once the instance is spawned, the disk name shows up as sd*.
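As a sketch, the Glance properties that request a virtio-scsi disk bus look like this (property names as documented in the OpenStack image metadata reference; the image ID is a placeholder):

```shell
# Hypothetical image ID; hw_scsi_model/hw_disk_bus per OpenStack image metadata.
glance image-update \
    --property hw_scsi_model=virtio-scsi \
    --property hw_disk_bus=scsi \
    <image-id>
```

Instances spawned from that image afterwards should see their disks as sd* instead of vd*.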

The major advantage of virtio-scsi over virtio-blk is that it supports multiple block devices per virtual SCSI adapter. That said, virtio-scsi is not a replacement for virtio-blk; development work on virtio-blk is still going on.


  • Three types of storage can be attached to a guest machine using virtio-blk:
    • File
    • Disk
    • LUN

Let's understand the I/O path for virtio-blk and what improvements are coming to it in the near future.

Guest:

App –> VFS/Filesystem –> Generic Block Layer –> IO scheduler –> virtio-blk.ko

Host:

QEMU (user space) –> VFS/Filesystem –> Generic Block Layer –> IO Scheduler –> Block Device Layer –> Hard Disk.

We can see that in the above flow two I/O schedulers come into the picture, which doesn't make sense for all kinds of I/O patterns. Hence, in the guest flow the scheduler is going to be replaced with a BIO-based virtio-blk path. The scheduler option will still be available, in case some applications take advantage of it.

Eventually it would be like :

  • struct request based [Using IO scheduler in guest]
  • struct bio based [Not using IO scheduler in guest]

This was merged in kernel 3.7, but it is not enabled by default.

Add 'virtio_blk.use_bio=1' to the kernel command line of the guest; no change is needed on the host machine.
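As a quick demo of that change, here is a sketch against a scratch copy of a GRUB-legacy config (the path and kernel line are illustrative; on a grub2 guest you would edit GRUB_CMDLINE_LINUX instead):

```shell
# Work on a scratch copy of a GRUB-legacy config (illustrative content,
# not a real guest's file).
cat > /tmp/grub.conf <<'EOF'
title RHEL
        kernel /vmlinuz-2.6.32 ro root=/dev/vda1
EOF

# Append the module parameter to every kernel line.
sed -i '/^[[:space:]]*kernel /s/$/ virtio_blk.use_bio=1/' /tmp/grub.conf

# Show the modified line; after a reboot the guest would pick it up.
grep 'virtio_blk.use_bio=1' /tmp/grub.conf
```

After rebooting a real guest, the parameter should be visible in /proc/cmdline.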

Kernel developers are planning to make this feature smarter by enabling it automatically depending on the underlying device, choosing the best I/O path according to the workload.


Host-side virtio-blk implementations include:

  1. QEMU (current) : a global mutex is the main source of the bottleneck, because only one thread can submit an I/O at a time.
  2. QEMU data plane : each virtio-blk device has a dedicated thread to handle requests. Requests are processed without going through the QEMU block layer, using Linux AIO directly.
  3. vhost-blk : an in-kernel virtio-blk device accelerator, similar to vhost-net. It skips host user-space involvement, which helps avoid context switches.

virtio-blk mainly lacks the following capabilities because it is not based on the SCSI model:

  • Thin-provisioned Storage for manageability
  • HA cluster on virtual machines.
  • Backup server for reliability.

Because it is not based on the SCSI protocol, virtio-blk lacks capabilities like SCSI Persistent Reservation, which is required when disks attached to VMs are shared in a cluster environment; it helps avoid data corruption on shared devices.

Exception : with virtio-blk, SCSI commands work when storage is attached to the guest as a LUN.
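For illustration, on a guest where the disk does speak SCSI (a LUN passthrough or a virtio-scsi disk), persistent reservations can be inspected with sg_persist from the sg3_utils package; a hedged sketch, where the device name is only an example:

```shell
# Read the registered reservation keys on a shared disk (example device).
sg_persist --in --read-keys /dev/sda

# Read the current reservation holder, if any.
sg_persist --in --read-reservation /dev/sda
```

On a plain virtio-blk disk (vd*) these commands have no SCSI target to talk to, which is exactly the limitation described above.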


  • virtio-scsi has mainly three kinds of configurations:
    • Qemu [User space target]
      • File
      • Disk
      • LUN   << SCSI command compatible.
    • LIO target [Kernel space target]
    • libiscsi [User Space iscsi initiator]  << SCSI command compatible.

It can support thousands of disks per PCI device, and it presents true SCSI devices. As the naming convention in the guest shows sd*, it is also good for p2v/v2v migration.
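As a sketch, a guest with a virtio-scsi controller and a SCSI disk can be defined with virt-install; the name, path, and sizes are examples, and the flags are assumptions based on virt-install(1):

```shell
virt-install \
    --name scsi-guest \
    --memory 2048 \
    --controller type=scsi,model=virtio-scsi \
    --disk path=/var/lib/libvirt/images/scsi-guest.qcow2,size=10,bus=scsi \
    --import
```

Inside such a guest the disk then appears as sda rather than vda.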


How to use virt-make-fs to make data available inside a VM?

Today I got a requirement to share a large amount of data with a virtual machine. I came to know that the virt-make-fs command is suitable for this task. In this article, I am going to show the usage of that command.

From the man page of virt-make-fs, which is provided by libguestfs-tools:

virt-make-fs – Make a filesystem from a tar archive or files

Step 1 : I created an image named test.img from the data.tar file which I want to access from the VM.

[root@host Downloads]# virt-make-fs data.tar /home/host/VirtualMachines/test.img

[root@host Downloads]# cd /home/host/VirtualMachines/

[root@host VirtualMachines]# ll test.img
-rw-r--r--. 1 root root 69550694 Dec 29 21:35 test.img

Step 2 : Attached the same image to the virtual machine.

[root@host VirtualMachines]# virsh list
Id    Name                           State
14    idmserver1                     running

[root@host VirtualMachines]# virsh domblklist 14
Target     Source
vda        /home/host/VirtualMachines/rhel6.6.1451397281
hdb        /home/host/VirtualMachines/test.img
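For reference, the attachment itself can be done with virsh attach-disk; a sketch using the names from the listing above (option syntax per virsh(1)):

```shell
# Attach the generated image to the running guest as hdb.
virsh attach-disk idmserver1 /home/host/VirtualMachines/test.img hdb --persistent
```

The --persistent flag also updates the domain XML, so the disk survives a guest restart.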

Step 3 : Inside the virtual machine I can see that the new image is detected.

[root@guest ]# file /dev/sda
/dev/sda: block special

Created a loopback device to mount that image inside the VM.

[root@guest ]# losetup /dev/loop0 /dev/sda

[root@guest ]# losetup -a
/dev/loop0: [0005]:6211 (/dev/sda)

Mounted it successfully.

[root@guest ]# mount /dev/loop0 /mnt

[root@guest ]# df -Ph /mnt
Filesystem      Size  Used Avail Use% Mounted on
/dev/loop0       65M   62M     0 100% /mnt

I am able to access the contents of the tar file from inside the VM.

[root@guest ~]# cd /mnt
[root@guest mnt]# ls

By default the filesystem created is of type ext2, but you can specify the filesystem type while creating the image. Please refer to the man page for details.
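For example, the filesystem type, extra free space, and image format can all be chosen at creation time; a hedged sketch, with option names taken from virt-make-fs(1):

```shell
# ext4 filesystem, 200MB of extra free space, qcow2 output format.
virt-make-fs --type=ext4 --size=+200M --format=qcow2 data.tar test.qcow2
```

The +200M form adds free space on top of what the tar contents need, which avoids the 100% full filesystem seen in the df output above.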

How to modify the size of a cloud image?

Recently, while working in a cloud environment, I encountered the need to expand an existing cloud image. I had a RHEL 7 qcow2 image with me, hence the task was to resize that image to meet some requirements.

Thanks to the ample set of utilities provided by the "libguestfs-tools" package, which made my task possible.

Step 1 : Checking the size of the RHEL 7 qcow2 image.

# qemu-img info rhel-guest-image-7.0-20140930.0.x86_64.qcow2
image: rhel-guest-image-7.0-20140930.0.x86_64.qcow2
file format: qcow2
virtual size: 10G (10737418240 bytes)
disk size: 417M
cluster_size: 65536
Format specific information:
compat: 0.10

We can see that the virtual disk size is 10GB.

Step 2 : The filesystems inside that image are taking only 6GB.

# virt-filesystems --long -h --all -a rhel-guest-image-7.0-20140930.0.x86_64.qcow2
Name       Type        VFS  Label  MBR  Size  Parent
/dev/sda1  filesystem  xfs  -      -    6.0G  -
/dev/sda1  partition   -    -      83   6.0G  /dev/sda
/dev/sda   device      -    -      -    10G   -

Step 3 : My requirement is to have a filesystem of 15GB, so I created one more image of 15GB.

# qemu-img create -f qcow2 rhel7-guest.qcow2 15G
Formatting 'rhel7-guest.qcow2', fmt=qcow2 size=16106127360 encryption=off cluster_size=65536 lazy_refcounts=off

Step 4 : Install the “libguestfs-xfs” package to expand the xfs filesystem.

# yum -y install libguestfs-xfs

Step 5 : Issued the below command to perform the expansion. Make sure that this operation ends with "no errors". Also, as suggested at the end of the command output, try to launch an instance using the new image before deleting the old one.

# virt-resize --expand /dev/sda1 rhel-guest-image-7.0-20140930.0.x86_64.qcow2 rhel7-guest.qcow2
Examining rhel-guest-image-7.0-20140930.0.x86_64.qcow2 …

Summary of changes:

/dev/sda1: This partition will be resized from 6.0G to 15.0G.  The
filesystem xfs on /dev/sda1 will be expanded using the 'xfs_growfs'
method.

Setting up initial partition table on rhel7-guest.qcow2 …
Copying /dev/sda1 …
100% ⟦▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒⟧ 00:00
Expanding /dev/sda1 using the 'xfs_growfs' method ...

Resize operation completed with no errors.  Before deleting the old disk,
carefully check that the resized disk boots and works correctly.

Step 6 : Let's check the size information of the new image.

# qemu-img info rhel7-guest.qcow2
image: rhel7-guest.qcow2
file format: qcow2
virtual size: 15G (16106127360 bytes)
disk size: 976M
cluster_size: 65536
Format specific information:
compat: 1.1
lazy refcounts: false

Step 7 : I can see that the filesystem size in the new image increased from 6GB to 15GB.

# virt-filesystems --long -h --all -a rhel7-guest.qcow2
Name       Type        VFS  Label  MBR  Size  Parent
/dev/sda1  filesystem  xfs  -      -    15G   -
/dev/sda1  partition   -    -      83   15G   /dev/sda
/dev/sda   device      -    -      -    15G   -
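Before deleting the old image, the resized one can be boot-tested directly; a minimal sketch with qemu-kvm (the memory size is arbitrary):

```shell
# Boot the resized image on a serial console; Ctrl-A X quits.
qemu-kvm -m 1024 -drive file=rhel7-guest.qcow2,if=virtio -nographic
```

If the guest boots cleanly and df inside it shows the grown filesystem, the old image can be removed.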