Apply the recently introduced TCP fixes to the RDMA transport. For
details see:
- nvme/tcp: fix qpair poll_status memleak
- nvme: fix qpair's fabric_poll_status leak
- nvme: use explicit fabric poll cleanup

This patch also introduces a behavioral change for TCP: the cleanup is
no longer done immediately after disconnect but when disconnect_done is
invoked. This ensures that the cleanup occurs only after the requests
queued on the generic layer have been aborted, which is mandatory for
RDMA, as it does not abort requests immediately upon disconnect.

While here, also removed the redundant nvme_fabric_qpair_auth_cleanup
in spdk_nvme_ctrlr_free_io_qpair, as it is now covered by
nvme_transport_ctrlr_disconnect_qpair_done. Also moved the assertion
from nvme_tcp_ctrlr_delete_io_qpair to nvme_qpair_deinit to cover both
TCP and RDMA; poll/auth cleanup should already be done in the
_disconnect_done callback at this stage, and _abort_reqs should be a
nop.

Change-Id: I1aa955f8e1470b78f57f657ae9c70ec3920c00cb
Signed-off-by: Jacek Kalwas <jacek.kalwas@nutanix.com>
Reviewed-on: https://review.spdk.io/c/spdk/spdk/+/26806
Reviewed-by: Konrad Sztyber <ksztyber@nvidia.com>
Tested-by: SPDK Automated Test System <spdkbot@gmail.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <jim.harris@nvidia.com>