Commit b4ce4e37 authored by Changpeng Liu's avatar Changpeng Liu Committed by Daniel Verkamp

doc/vhost: add vhost-nvme target description

parent ef4a5bc9
@@ -79,6 +79,15 @@ the following command to confirm your QEMU supports userspace vhost-blk.
qemu-system-x86_64 -device vhost-user-blk-pci,help
~~~

The userspace vhost-nvme target was added as an experimental feature in the
SPDK 18.04 release; the QEMU patches it requires are available only in SPDK's
QEMU repository.

Run the following command to confirm your QEMU supports userspace vhost-nvme.

~~~{.sh}
qemu-system-x86_64 -device vhost-user-nvme,help
~~~

# Starting SPDK vhost target {#vhost_start}

First, run the SPDK setup.sh script to set up hugepages for the SPDK vhost target
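As a rough sketch (the hugepage amount, socket directory, and cpumask below
are assumptions; adjust them for your system), the setup and vhost target
launch might look like:

~~~{.sh}
# Reserve hugepages for SPDK (HUGEMEM is in megabytes); requires root.
sudo HUGEMEM=2048 scripts/setup.sh

# Start the vhost target, placing vhost-user sockets under /var/tmp
# and pinning the reactor threads to cores 0-1 (cpumask 0x3).
sudo app/vhost/vhost -S /var/tmp -m 0x3
~~~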
@@ -176,6 +185,26 @@ extra `-r` or `--readonly` parameter.
scripts/rpc.py construct_vhost_blk_controller --cpumask 0x1 -r vhost.1 Malloc0
~~~

### Vhost-NVMe (experimental)

The following RPCs create a vhost-nvme controller and attach the Malloc0 bdev
to it; Malloc0 will appear as Namespace 1 of the vhost.2 controller. The
`--cpumask` parameter specifies which cores may be used for this controller.
The maximum number of I/O queues supported by the controller must be given at
creation time, and each controller requires at least one namespace.

~~~{.sh}
scripts/rpc.py construct_vhost_nvme_controller --cpumask 0x1 vhost.2 16
scripts/rpc.py add_vhost_nvme_ns vhost.2 Malloc0
~~~

The following command removes the controller; all block devices attached to
the controller's namespaces are removed automatically.

~~~{.sh}
scripts/rpc.py remove_vhost_controller vhost.2
~~~

## QEMU {#vhost_qemu_config}

Now the virtual machine can be started with QEMU.  The following command-line
@@ -214,6 +243,13 @@ Finally, specify the SPDK vhost devices:
-device vhost-user-blk-pci,id=blk0,chardev=char1
~~~

### Vhost-NVMe (experimental)

~~~{.sh}
-chardev socket,id=char2,path=/var/tmp/vhost.2
-device vhost-user-nvme,id=nvme0,chardev=char2,num_io_queues=4
~~~
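Because vhost-user requires the guest's memory to be shared with the SPDK
target, the device lines above must be combined with a shared hugepage memory
backend. A sketch of a complete invocation (the image path, memory size, and
socket path are assumptions) might look like:

~~~{.sh}
qemu-system-x86_64 --enable-kvm -m 1024 \
  -object memory-backend-file,id=mem0,size=1024M,mem-path=/dev/hugepages,share=on \
  -numa node,memdev=mem0 \
  -drive file=guest_os_image.qcow2,if=none,id=disk0 \
  -device ide-hd,drive=disk0,bootindex=0 \
  -chardev socket,id=char2,path=/var/tmp/vhost.2 \
  -device vhost-user-nvme,id=nvme0,chardev=char2,num_io_queues=4
~~~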

## Example output {#vhost_example}

This example uses an NVMe bdev alongside Malloc bdevs. The SPDK vhost application is started
@@ -376,6 +412,12 @@ scripts/rpc.py delete_bdev Malloc0

# Known bugs and limitations {#vhost_bugs}

## Vhost-NVMe (experimental) requires a recent Linux guest kernel

The vhost-NVMe target relies on the Doorbell Buffer Config admin command, a
feature added in the NVMe 1.3 specification for emulated NVMe controllers
only. Linux gained support for this command in kernel 4.12, so a guest kernel
of version 4.12 or later is required to test this feature.
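A quick way to check whether the guest kernel is new enough is to compare
`uname -r` against 4.12 using version sort (a minimal sketch; the message
strings are illustrative):

~~~{.sh}
# Compare the running kernel version against the 4.12 minimum
# required for Doorbell Buffer Config support in the NVMe driver.
required=4.12
current=$(uname -r | cut -d- -f1)
if [ "$(printf '%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]; then
    echo "kernel $current: Doorbell Buffer Config supported"
else
    echo "kernel $current: too old, 4.12 or later required"
fi
~~~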

## Windows virtio-blk driver before version 0.1.130-1 only works with 512-byte sectors

The Windows `viostor` driver before version 0.1.130-1 is buggy and does not