Commit 28a4e1bd authored by Alexey Marchuk, committed by Tomasz Zawadzki

nvmf/rdma: Prioritize partial RDMA_READ operations



The RDMA transport was updated to submit only part of a request's
RDMA_READ operations when needed, in order to fully utilize QP
capacity. However, requests that had submitted only some of their
RDMA_READ operations were appended to the end of the pending queue,
so their latency could increase greatly. To avoid this, insert
partially completed requests at the head of the queue, i.e. give
them priority over new requests.

Signed-off-by: Alexey Marchuk <alexeymar@nvidia.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/23302


(cherry picked from commit 5b333e40)
Change-Id: Ib27783f5c6ec368f591b859c211a42d2965841ff
Signed-off-by: Marek Chomnicki <marek.chomnicki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/23774


Reviewed-by: Jim Harris <jim.harris@samsung.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
parent a404bd08
+3 −2
@@ -4723,10 +4723,11 @@ nvmf_rdma_poller_poll(struct spdk_nvmf_rdma_transport *rtransport,
 				rqpair->current_read_depth--;
 				/* wait for all outstanding reads associated with the same rdma_req to complete before proceeding. */
 				if (rdma_req->num_outstanding_data_wr == 0) {
-					if (rdma_req->num_remaining_data_wr) {
+					if (spdk_unlikely(rdma_req->num_remaining_data_wr)) {
 						/* Only part of RDMA_READ operations was submitted, process the rest */
 						nvmf_rdma_request_reset_transfer_in(rdma_req, rtransport);
-						STAILQ_INSERT_TAIL(&rqpair->pending_rdma_read_queue, rdma_req, state_link);
+						/* Prioritize partially handled request over others to avoid latency increase */
+						STAILQ_INSERT_HEAD(&rqpair->pending_rdma_read_queue, rdma_req, state_link);
 						rdma_req->state = RDMA_REQUEST_STATE_DATA_TRANSFER_TO_CONTROLLER_PENDING;
 						nvmf_rdma_request_process(rtransport, rdma_req);
 						break;