examples/nvme/perf: fix async creation of IO queue pairs for PCIe transport
Before this patch we always used async mode to create IO queues, regardless of their number or the transport used. With the PCIe transport this could fail when more queues were requested than there were available request objects: we submitted commands faster than they could be processed and eventually ran out of free requests.

The number of requests allocated on the admin queue pair is controlled by the admin_queue_size controller option. This parameter also configures the actual queue size, so it cannot exceed 4K. We cannot simply increase admin_queue_size, because we might exceed what the controller supports. One solution would be to introduce a separate admin_queue_requests option next to admin_queue_size, as is done for IO queues, and rely on queueing. However, that would add complexity, and such a large number of requests is needed only once, when creating the IO queues. It is simpler to detect the described situation and fall back to sync mode. Another alternative, creating the queues asynchronously in batches sized by admin_queue_size, would also complicate the code.

Signed-off-by: Szulik, Maciej <maciej.szulik@intel.com>
Change-Id: Ieb7ebdabc7d4a40c98bdbc02edf13bd476d59d57
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/19114
Tested-by:
SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
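
The fallback described above reduces to a capacity check: async creation submits one command per queue pair up front, so it only works when the admin queue has enough free request objects for all of them. A minimal sketch of that condition, using hypothetical names (use_async_qpair_creation, num_io_queues, admin_queue_requests) rather than the actual perf code:

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Illustrative sketch, not the actual SPDK perf implementation.
 * Async queue-pair creation submits one CREATE IO SQ/CQ command per
 * queue pair up front, so it is only safe when the admin queue has
 * enough free request objects to hold all of them at once; otherwise
 * fall back to creating the queues one at a time in sync mode.
 */
static bool
use_async_qpair_creation(uint32_t num_io_queues, uint32_t admin_queue_requests)
{
	return num_io_queues <= admin_queue_requests;
}
```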