Commit 745c0496 authored by Pawel Wodkowski, committed by Daniel Verkamp

doc: update vhost doc



Remove experimental attribute from multiqueue section, add description
of QEMU num_queues parameter and expand example.

Change-Id: I13d67ece0734aa8c57eeb07eabc34442ca532d47
Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-on: https://review.gerrithub.io/386501


Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Dariusz Stojaczyk <dariuszx.stojaczyk@intel.com>
Reviewed-by: Paweł Niedźwiecki <pawelx.niedzwiecki@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
parent b1e850cd
# vhost {#vhost}

# Getting Started Guide {#vhost_getting_started}

The Storage Performance Development Kit vhost application is named `vhost`.
This application extends SPDK to present virtio storage controllers to QEMU-based
@@ -108,12 +108,12 @@ pick one core from `ReactorMask` masked by `Cpumask`. `Cpumask` must be a subset

Userspace vhost-scsi adds the following command line option for QEMU:
~~~
-device vhost-user-scsi-pci,id=scsi0,chardev=char0[,num_queues=N]
~~~

Userspace vhost-blk adds the following command line option for QEMU:
~~~
-device vhost-user-blk-pci,logical_block_size=4096,size=512M,chardev=char0[,num_queues=N]
~~~
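
In both cases the `chardev=` value must name a `-chardev socket` whose `path` points at the UNIX domain socket created by the SPDK vhost application. A minimal pairing sketch, assuming the socket is at `./spdk/vhost.0` and four queues are requested:
~~~
-chardev socket,id=char0,path=./spdk/vhost.0 \
-device vhost-user-scsi-pci,id=scsi0,chardev=char0,num_queues=4
~~~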

To start QEMU with vhost, you need to specify the following options:
@@ -140,29 +140,9 @@ A full list of command line arguments to vhost can be obtained by:
app/vhost/vhost -h
~~~


## Multi-Queue Block Layer (blk-mq)
For best performance, use the Linux kernel block multi-queue feature with vhost.
To enable it on Linux, modify the kernel options inside the virtual machine.

@@ -174,7 +154,88 @@ Instructions below for Ubuntu OS:
4. Reboot virtual machine
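
For reference, on Ubuntu the kernel option change above typically amounts to adding `scsi_mod.use_blk_mq=1` to the kernel command line. A sketch, assuming a GRUB-based guest (run `sudo update-grub` afterwards, then reboot):
~~~
# /etc/default/grub (assumption: GRUB-based Ubuntu guest)
GRUB_CMDLINE_LINUX="scsi_mod.use_blk_mq=1"
~~~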

To achieve better performance, make sure to increase the number of cores
assigned to the VM and add the `num_queues` parameter to the QEMU `-device` option. Setting
`num_queues=4` should be enough to saturate the physical device. Adding too many queues might
degrade SPDK vhost performance if many vhost devices are used, because each device adds
`num_queues` more queues that must be polled.
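
As a back-of-envelope illustration of that scaling (the device and queue counts below are hypothetical, not from a real deployment):
~~~
# Each vhost device adds num_queues queues for the reactors to poll.
num_devices=8
num_queues=4
total_queues=$((num_devices * num_queues))
echo "queues polled in total: $total_queues"
~~~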

# Example {#vhost_example}

Run SPDK vhost with two controllers: Virtio SCSI and Virtio block.

Virtio SCSI controller with two LUNs:

- SCSI target 1, LUN 0 backed by Malloc0 bdev
- SCSI target 5, LUN 0 backed by Malloc1 bdev

Virtio block device backed by Malloc2 bdev.

For better performance, use 4 vCPUs (`-smp 4`) and 4 queues for each controller
(`num_queues=4`). Assume that QEMU and SPDK are located in the `qemu` and `spdk` directories, respectively.
~~~
host: $ cd spdk
host: $ cat vhost.conf
[Rpc]
  Enable Yes
  Listen 127.0.0.1:5261

[Malloc]
  NumberOfLuns 3
  LunSizeInMb 128
  BlockSize 512

[VhostScsi0]
  Name vhost_scsi0_socket
  Dev 1 Malloc0
  Dev 5 Malloc1

[VhostBlk0]
  Name vhost_blk0_socket
  Dev Malloc2

host: $ sudo ./app/vhost/vhost -c vhost.conf -s 1024 -m 1 &
[ DPDK EAL parameters: vhost -c 1 -m 1024 --file-prefix=spdk_pid191213 ]
EAL: Detected 48 lcore(s)
EAL: Probing VFIO support...
EAL: VFIO support initialized
<< REMOVED CONSOLE LOG >>
VHOST_CONFIG: bind to vhost_scsi0_socket
vhost.c: 592:spdk_vhost_dev_construct: *NOTICE*: Controller vhost_scsi0_socket: new controller added
vhost_scsi.c: 840:spdk_vhost_scsi_dev_add_dev: *NOTICE*: Controller vhost_scsi0_socket: defined device 'Dev 1' using lun 'Malloc0'
vhost_scsi.c: 840:spdk_vhost_scsi_dev_add_dev: *NOTICE*: Controller vhost_scsi0_socket: defined device 'Dev 5' using lun 'Malloc1'
VHOST_CONFIG: vhost-user server: socket created, fd: 65
VHOST_CONFIG: bind to vhost_blk0_socket
vhost.c: 592:spdk_vhost_dev_construct: *NOTICE*: Controller vhost_blk0_socket: new controller added
vhost_blk.c: 720:spdk_vhost_blk_construct: *NOTICE*: Controller vhost_blk0_socket: using bdev 'Malloc2'

host: $ cd ..
host: $ sudo ./qemu/build/x86_64-softmmu/qemu-system-x86_64 --enable-kvm -m 1024 \
  -cpu host -smp 4 -nographic \
  -object memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on -numa node,memdev=mem \
  -drive file=guest_os_image.qcow2,if=none,id=disk \
  -device ide-hd,drive=disk,bootindex=0 \
  -chardev socket,id=spdk_vhost_scsi0,path=./spdk/vhost_scsi0_socket \
  -device vhost-user-scsi-pci,id=scsi0,chardev=spdk_vhost_scsi0,num_queues=4 \
  -chardev socket,id=spdk_vhost_blk0,path=./spdk/vhost_blk0_socket \
  -device vhost-user-blk-pci,logical_block_size=512,size=128M,chardev=spdk_vhost_blk0,num_queues=4

<< LOGIN TO GUEST OS >>
guest: ~$ lsblk --output "NAME,KNAME,MODEL,HCTL,SIZE,VENDOR,SUBSYSTEMS"
NAME   KNAME MODEL            HCTL        SIZE VENDOR   SUBSYSTEMS
fd0    fd0                                  4K          block:platform
sda    sda   QEMU HARDDISK    1:0:0:0      80G ATA      block:scsi:pci
  sda1 sda1                                80G          block:scsi:pci
sdb    sdb   Malloc disk      2:0:1:0     128M INTEL    block:scsi:virtio:pci
sdc    sdc   Malloc disk      2:0:5:0     128M INTEL    block:scsi:virtio:pci
vda    vda                                128M 0x1af4   block:virtio:pci

guest: $ sudo poweroff
host: $ fg
<< CTRL + C >>
vhost.c:1006:session_shutdown: *NOTICE*: Exiting
~~~

We can see that `sdb` and `sdc` are SPDK vhost-scsi LUNs, and `vda` is the SPDK vhost-blk disk.
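
The `HCTL` column in the `lsblk` output encodes `host:channel:target:lun`, which is how the disks map back to the `Dev` entries in `vhost.conf`: `sdb` (target 1) is `Dev 1`/Malloc0 and `sdc` (target 5) is `Dev 5`/Malloc1. A quick decoding sketch using the two HCTL strings from the listing:
~~~
# Extract the SCSI target id (third HCTL field) for the two vhost-scsi disks.
for hctl in "2:0:1:0" "2:0:5:0"; do
    target=$(echo "$hctl" | cut -d: -f3)
    echo "HCTL $hctl -> SCSI target $target"
done
~~~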

# Known bugs and limitations {#vhost_bugs}