Commit c3d0a833 authored by Shuhei Matsumoto, committed by Tomasz Zawadzki

nvme/rdma: Move post WRs on send/recv queue after poll CQ



If nvme_rdma_qpair_submit_sends() returns -ENOMEM,
nvme_rdma_qpair_process_completions() returns immediately without
polling the CQ.

However, nvme_rdma_qpair_process_completions() can poll the CQ even
when there is no free slot in the SQ.

Hence, move the calls to nvme_rdma_qpair_submit_sends() and
nvme_rdma_qpair_submit_recvs() after the loop that polls the CQ.

nvme_rdma_qpair_submit_sends() and nvme_rdma_qpair_submit_recvs()
already log an error on failure, so checking their return codes is
unnecessary; this patch removes the check.
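The reordering above can be sketched with a minimal, self-contained model. This is not the real SPDK code: the helpers below are hypothetical stand-ins that only mimic the relevant behavior (a full SQ makes submit fail with -ENOMEM, and reaping a completion frees an SQ slot), to show why polling the CQ before posting deferred WRs lets a stalled queue pair make progress.

```c
#include <errno.h>

/* Illustrative state, not SPDK's: the SQ starts full and two
 * completions are waiting on the CQ. */
static int sq_free_slots = 0;
static int pending_completions = 2;

/* Stand-in for nvme_rdma_qpair_submit_sends(): fails while the SQ
 * is full, as the real helper does when ibv_post_send() lacks room. */
static int submit_sends(void)
{
	return sq_free_slots > 0 ? 0 : -ENOMEM;
}

/* Stand-in for the CQ-polling loop: each reaped completion frees
 * an SQ slot. */
static int poll_cq(void)
{
	int reaped = 0;

	while (pending_completions > 0) {
		pending_completions--;
		sq_free_slots++;
		reaped++;
	}
	return reaped;
}

/* Old ordering: a full SQ makes the function bail out before the CQ
 * is polled, so no slot is ever freed and the qpair cannot recover. */
static int process_completions_old(void)
{
	if (submit_sends() != 0) {
		return -1;
	}
	return poll_cq();
}

/* New ordering (this patch): poll the CQ first, then post deferred
 * WRs; the submit return code is not checked because the helper
 * already logs errors internally. */
static int process_completions_new(void)
{
	int reaped = poll_cq();

	submit_sends();
	return reaped;
}
```

With the old ordering the model returns -1 forever; with the new ordering it reaps the pending completions, which frees SQ slots so the deferred sends can be posted.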

This fixes part of GitHub issue #1271.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Icf22879c69c3f84e6b1d91dc061b6f44237eedd1
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/1342


Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
parent a4a8080f
+3 −5
@@ -2005,11 +2005,6 @@ nvme_rdma_qpair_process_completions(struct spdk_nvme_qpair *qpair,
 	struct spdk_nvme_rdma_req	*rdma_req;
 	struct nvme_rdma_ctrlr		*rctrlr;
 
-	if (spdk_unlikely(nvme_rdma_qpair_submit_sends(rqpair) ||
-			  nvme_rdma_qpair_submit_recvs(rqpair))) {
-		return -1;
-	}
-
 	if (max_completions == 0) {
 		max_completions = rqpair->num_entries;
 	} else {
@@ -2082,6 +2077,9 @@ nvme_rdma_qpair_process_completions(struct spdk_nvme_qpair *qpair,
 		}
 	} while (reaped < max_completions);
 
+	nvme_rdma_qpair_submit_sends(rqpair);
+	nvme_rdma_qpair_submit_recvs(rqpair);
+
 	if (spdk_unlikely(rqpair->qpair.ctrlr->timeout_enabled)) {
 		nvme_rdma_qpair_check_timeout(qpair);
 	}