Commit 0441dce4 authored by Michael Haeuptle's avatar Michael Haeuptle Committed by Tomasz Zawadzki

nvmf/rdma: qpairs not distributed evenly across pgs



This fixes issue #3137, where IO qpairs are not distributed evenly
across poll groups during concurrent connections from multiple
kernel initiators.

The code incorrectly advances pg_current before the check that
determines the poll group with the minimum qpair count, so pg_min
ends up pointing at the poll group *after* the true minimum.
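The corrected ordering can be illustrated with a minimal standalone model of the selection loop (the names pick_min_pg, counts, and start are illustrative, not SPDK's): scan the poll groups circularly from the starting position, and only advance the cursor *after* comparing against the current minimum.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical model: "counts" holds the IO qpair count of each poll
 * group arranged in a ring; return the index of the group with the
 * fewest qpairs, scanning circularly from "start". */
static size_t
pick_min_pg(const int *counts, size_t n, size_t start)
{
	size_t cur = start;
	size_t min_idx = start;
	int min_value = counts[start];

	do {
		/* Compare BEFORE advancing: "cur" must still refer to the
		 * group whose count was just read. Advancing first (the
		 * original bug) records the group *after* the minimum. */
		if (counts[cur] < min_value) {
			min_value = counts[cur];
			min_idx = cur;
		}
		cur = (cur + 1) % n; /* advance only after the comparison */
	} while (cur != start);

	return min_idx;
}
```

For example, with counts {3, 1, 2} the loop returns index 1; the buggy ordering would have associated the minimum count with index 2 instead.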

Change-Id: Ieb6a055fd4d38fd06d739b34e2e8c82cb339e192
Signed-off-by: Michael Haeuptle <michael.haeuptle@hpe.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/19941


Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Community-CI: Mellanox Build Bot
parent 11d4c508
+5 −5
@@ -4112,16 +4112,16 @@ nvmf_rdma_get_optimal_poll_group(struct spdk_nvmf_qpair *qpair)
 		min_value = nvmf_poll_group_get_io_qpair_count(pg_current->group.group);
 
 		while ((count = nvmf_poll_group_get_io_qpair_count(pg_current->group.group)) > 0) {
-			pg_current = TAILQ_NEXT(pg_current, link);
-			if (pg_current == NULL) {
-				pg_current = TAILQ_FIRST(&rtransport->poll_groups);
-			}
-
 			if (count < min_value) {
 				min_value = count;
 				pg_min = pg_current;
 			}
 
+			pg_current = TAILQ_NEXT(pg_current, link);
+			if (pg_current == NULL) {
+				pg_current = TAILQ_FIRST(&rtransport->poll_groups);
+			}
+
 			if (pg_current == pg_start) {
 				break;
 			}