nvme_tcp_req_complete_safe caches values on the request so that we can free the request *before* completing it. This allows the just-completed request to be reused at full queue depth if the callback function submits a new I/O. Do the same in nvme_tcp_req_complete, to make all of the completion paths identical. The paths that previously called nvme_tcp_req_complete are all non-fast-path, so the extra overhead is not important. This also allows nvme_tcp_req_complete_safe to call nvme_tcp_req_complete, reducing code duplication, so do that in this patch as well.

Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I876cea5ea20aba8ccc57d179e63546a463a87b35
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13521
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>