The nvmf generic transport code creates a mempool of I/O buffers, as well as its own per-thread cache of those buffers. The mempool was being created with a non-zero mempool cache, effectively duplicating work: we had one cache in the mempool and another in the transport layer. Patch 019cbb93 therefore removed the mempool cache, but the tcp transport was significantly affected by this change. Its default per-thread cache of 32 buffers is very small, so in practice it was relying mostly on the mempool cache (which was 512). Performance regression tests caught the problem, and Karol verified that specifying a higher buf_cache_size fixed it.

So change both the tcp and rdma transports to specify UINT32_MAX as the default buf_cache_size. If the user does not override this when creating the transport, the cache will be sized dynamically based on the size of the buffer pool and the number of poll groups.

Fixes: 019cbb93 ("nvmf: disable data buf mempool cache")
Fixes issue #2934.

Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Idd43e99312d59940ca68402299e264cc187bfccd
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17203
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Community-CI: Mellanox Build Bot