Commit 8e8f0434 authored by Alexey Marchuk, committed by Tomasz Zawadzki

nvmf/rdma: Start processing pending_buf_queue on idle poll



In some cases we may not receive any CQE, but we may still have
pending IO requests waiting for a resource
(e.g. a WR from the data_wr_pool).
We need to start processing such requests when no CQE is reaped.
This issue has a small chance of occurring; a scenario with
large IO (1MiB), a multi-threaded spdk_tgt instance (> 16 cores),
and multi-SGL payloads increases the chance.

Signed-off-by: Alexey Marchuk <alexeymar@nvidia.com>
Signed-off-by: Allen Zhu <allenzhu@nvidia.com>
Change-Id: I4e5ac2a90495b5a6a57e8640510dddc4b7f6ed52
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/22834


Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <jim.harris@samsung.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: LiYankun <845245370@qq.com>
parent 7dab13c0
+22 −0
@@ -3378,6 +3378,21 @@ nvmf_rdma_qpair_process_pending(struct spdk_nvmf_rdma_transport *rtransport,
	}
}

static void
nvmf_rdma_poller_process_pending_buf_queue(struct spdk_nvmf_rdma_transport *rtransport,
		struct spdk_nvmf_rdma_poller *rpoller)
{
	struct spdk_nvmf_request *req, *tmp;
	struct spdk_nvmf_rdma_request *rdma_req;

	STAILQ_FOREACH_SAFE(req, &rpoller->group->group.pending_buf_queue, buf_link, tmp) {
		rdma_req = SPDK_CONTAINEROF(req, struct spdk_nvmf_rdma_request, req);
		if (nvmf_rdma_request_process(rtransport, rdma_req) == false) {
			break;
		}
	}
}

static inline bool
nvmf_rdma_can_ignore_last_wqe_reached(struct spdk_nvmf_rdma_device *device)
{
@@ -4758,6 +4773,13 @@ nvmf_rdma_poller_poll(struct spdk_nvmf_rdma_transport *rtransport,
		return -1;
	}

	if (reaped == 0) {
		/* In some cases we may not receive any CQE but we still may have pending IO requests waiting for
		 * a resource (e.g. a WR from the data_wr_pool).
		 * We need to start processing of such requests if no CQE reaped */
		nvmf_rdma_poller_process_pending_buf_queue(rtransport, rpoller);
	}

	/* submit outstanding work requests. */
	_poller_submit_recvs(rtransport, rpoller);
	_poller_submit_sends(rtransport, rpoller);