Commit 30c8b17f authored by Jim Harris, committed by Konrad Sztyber

nvmf/rdma: account for unassociated qpairs when picking pg



If a lot of qpairs are connected all at once, the
RDMA optimal_poll_group logic does not balance them
correctly, because it only accounts for qpairs that
have already received their CONNECT capsule.  Now that
we have a counter for a poll group's unassociated
qpairs, use that value to supplement the current io
qpair count.

We can just assume for now that all of these unassociated
qpairs are io qpairs.  That won't always be true, but
for purposes of picking the optimal poll group it is
sufficient.

Note that for RDMA, we could increment the counters
based on the RDMA qpair ID in the private data in the
rdmacm connect, but to keep the code simpler and common
across all transports, we defer the accounting until
after receiving the CONNECT command, so that it is
the same for all transports.

Fixes issue #2800.

Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I5897d6ebac23d3b78b100e3fef5a7f9fb5304820
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15695


Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
parent 30020c2f
+18 −2
@@ -3494,6 +3494,22 @@ nvmf_rdma_poll_group_create(struct spdk_nvmf_transport *transport,
 	return &rgroup->group;
 }
 
+static uint32_t
+nvmf_poll_group_get_io_qpair_count(struct spdk_nvmf_poll_group *pg)
+{
+	uint32_t count;
+
+	/* Just assume that unassociated qpairs will eventually be io
+	 * qpairs.  This is close enough for the use cases for this
+	 * function.
+	 */
+	pthread_mutex_lock(&pg->mutex);
+	count = pg->stat.current_io_qpairs + pg->current_unassociated_qpairs;
+	pthread_mutex_unlock(&pg->mutex);
+
+	return count;
+}
+
 static struct spdk_nvmf_transport_poll_group *
 nvmf_rdma_get_optimal_poll_group(struct spdk_nvmf_qpair *qpair)
 {
@@ -3518,9 +3534,9 @@ nvmf_rdma_get_optimal_poll_group(struct spdk_nvmf_qpair *qpair)
 		pg_min = *pg;
 		pg_start = *pg;
 		pg_current = *pg;
-		min_value = pg_current->group.group->stat.current_io_qpairs;
+		min_value = nvmf_poll_group_get_io_qpair_count(pg_current->group.group);
 
-		while ((count = pg_current->group.group->stat.current_io_qpairs) > 0) {
+		while ((count = nvmf_poll_group_get_io_qpair_count(pg_current->group.group)) > 0) {
 			pg_current = TAILQ_NEXT(pg_current, link);
 			if (pg_current == NULL) {
 				pg_current = TAILQ_FIRST(&rtransport->poll_groups);