Max Inbound RDMA_READ Queue Depth (IRD) is sent by the host in responder_resources. The target must limit its ORD (Outbound RDMA_READ Queue Depth) so that it does not exceed the initiator's IRD; refer to chapter 3.1.4 "Fabric Dependent Settings" of the RDMA transport specification.

When connecting, the initiator must set responder_resources to the maximum number of outstanding RDMA_READ operations its local HCA supports as a target (i.e. the maximum number of incoming operations), which is max_qp_rd_atom.

When the connection request is delivered, the kernel CM module swaps the two fields: event.initiator_depth = req.responder_resources and event.responder_resources = req.initiator_depth. See cm_req_handler() in drivers/infiniband/core/cm.c and the rdma_get_cm_event man page.

When accepting, the target must limit initiator_depth (that is, the responder_resources set by the initiator, i.e. the number of incoming RDMA_READ operations the initiator supports) by the local HCA's max_qp_init_rd_atom. A value of 0 is not acceptable according to the NVMe-oF spec. At the same time, event.responder_resources (the initiator_depth set by the initiator, i.e. the number of incoming RDMA_READ operations the target would have to support) must be 0, since the initiator is not allowed to issue RDMA operations according to the NVMe-oF spec.

Signed-off-by: Alexey Marchuk <alexeymar@nvidia.com>
Change-Id: I66aa015f808471d1d055d9553b8adb8350671f83
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/19118
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Michael Haeuptle <michaelhaeuptle@gmail.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <jim.harris@gmail.com>