Commit 3d2c44ba authored by Szulik, Maciej; committed by Tomasz Zawadzki

examples/nvme/perf: fix async create IO queue pairs for PCIe transport



Before this patch we always used async mode to create IO queue pairs,
regardless of their number or the transport used. With the PCIe
transport this could fail when we wanted more queues than there were
available admin requests, i.e. we submitted commands faster than they
could be processed and eventually ran out of free request objects.

The number of requests allocated on the admin qpair is controlled by
the admin_queue_size controller option. This parameter is also used to
size the actual admin queue, so it cannot exceed 4K. We cannot simply
increase admin_queue_size, because we may exceed what the controller
supports.

One solution could be to introduce a separate admin_queue_requests
option next to admin_queue_size, as is done for IO queues, and rely on
queueing. However, that would introduce additional complexity, and we
need such a large number of requests only once, when creating the IO
queues.

It is simpler to just detect the described situation and use sync mode
instead.

An alternative would be to create the queues asynchronously in batches
sized by admin_queue_size, but that would also add code complexity.

Signed-off-by: Szulik, Maciej <maciej.szulik@intel.com>
Change-Id: Ieb7ebdabc7d4a40c98bdbc02edf13bd476d59d57
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/19114


Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
parent 8d5980a8
@@ -996,6 +996,7 @@ nvme_verify_io(struct perf_task *task, struct ns_entry *entry)
 static int
 nvme_init_ns_worker_ctx(struct ns_worker_ctx *ns_ctx)
 {
+	const struct spdk_nvme_ctrlr_opts *ctrlr_opts;
 	struct spdk_nvme_io_qpair_opts opts;
 	struct ns_entry *entry = ns_ctx->entry;
 	struct spdk_nvme_poll_group *group;
@@ -1016,7 +1017,11 @@ nvme_init_ns_worker_ctx(struct ns_worker_ctx *ns_ctx)
 	}
 	opts.delay_cmd_submit = true;
 	opts.create_only = true;
-	opts.async_mode = true;
+
+	ctrlr_opts = spdk_nvme_ctrlr_get_opts(entry->u.nvme.ctrlr);
+	opts.async_mode = !(spdk_nvme_ctrlr_get_transport_id(entry->u.nvme.ctrlr)->trtype ==
+			    SPDK_NVME_TRANSPORT_PCIE
+			    && ns_ctx->u.nvme.num_all_qpairs > ctrlr_opts->admin_queue_size);
 
 	ns_ctx->u.nvme.group = spdk_nvme_poll_group_create(ns_ctx, NULL);
 	if (ns_ctx->u.nvme.group == NULL) {