_nvmf_qpair_destroy first calls spdk_nvmf_poll_group_remove, which sets the
qpair->group pointer to NULL, and only then calls nvmf_transport_qpair_fini.
At that point we can no longer use the poll group pointer to clean up the
list of requests awaiting a buffer in the queue maintained by iobuf.

To resolve this, nvmf_tcp_cleanup_all_states is now split into two steps:

1. In nvmf_tcp_poll_group_remove, abort all requests awaiting a buffer and
   mark them as completed, so that nvmf_tcp_req_process won't try to abort
   them again. Do not process the requests, so the overall cleanup order is
   unchanged.
2. In nvmf_tcp_cleanup_all_states, drain the "completed" requests. There is
   no need to drain the TCP_REQUEST_STATE_NEED_BUFFER state: at this point,
   all requests in that state have already been aborted and marked as
   completed.

This fixes GitHub issue 3627.

Extend the existing test with a dirty flow that mimics the GitHub issue.

Change-Id: I1c5cc20fe758797ce790b596f39af0338189d868
Signed-off-by: Krzysztof Goreczny <krzysztof.goreczny@dell.com>
Reviewed-on: https://review.spdk.io/c/spdk/spdk/+/25785
Reviewed-by: Jacek Kalwas <jacek.kalwas@nutanix.com>
Tested-by: SPDK Automated Test System <spdkbot@gmail.com>
Reviewed-by: Jim Harris <jim.harris@nvidia.com>
Reviewed-by: Ben Walker <ben@nvidia.com>
Community-CI: Mellanox Build Bot