Under our CI we have phy nodes with multiple nvmes, some with 3 or more. This is how the debug output (introduced via 9dad1237) looks under the sw_hotplug suite on a node with 3 nvmes (per test):

[2023-12-12T18:51:34.632Z] remove_attach_helper took 102.18s to complete (handling 3 nvme drive(s))
[2023-12-12T18:53:54.349Z] remove_attach_helper took 121.37s to complete (handling 3 nvme drive(s))
[2023-12-12T18:55:54.476Z] remove_attach_helper took 120.45s to complete (handling 3 nvme drive(s))

And this is how it looks on a node with a single nvme:

[2023-12-12T21:41:46.306Z] remove_attach_helper took 31.52s to complete (handling 1 nvme drive(s))
[2023-12-12T21:42:46.143Z] remove_attach_helper took 49.47s to complete (handling 1 nvme drive(s))
[2023-12-12T21:43:35.301Z] remove_attach_helper took 49.29s to complete (handling 1 nvme drive(s))

The difference is quite significant: the first case needs over 6m to finish, while the second needs ~2.3m (counting the total time the entire sw_hotplug suite took to complete). The former case started triggering timeouts under the nvme-phy-autotest job at its very end, so those couple of minutes actually matter.

To mitigate, limit the number of nvme drives picked for this test to save some time.

Change-Id: Ia55715df055662344a04f5cf45245c80d5086745
Signed-off-by: Michal Berger <michal.berger@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/21029
Reviewed-by: Karol Latecki <karol.latecki@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <jim.harris@samsung.com>
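The mitigation described in the message could be sketched in bash as below. This is only an illustration: the `nvme_devs` list, the `max_nvmes` cap of 1, and the echo format are assumptions, not SPDK's actual helper code.

```shell
#!/usr/bin/env bash
# Sketch of the mitigation: cap how many nvme drives the hotplug test picks.
# The device list and the cap value are illustrative assumptions.
max_nvmes=1

# Pretend the node exposed three controllers, as on the 3-nvme CI node.
nvme_devs=(nvme0 nvme1 nvme2)

# Keep only the first $max_nvmes entries to bound the suite's runtime.
nvme_devs=("${nvme_devs[@]:0:max_nvmes}")

echo "Using ${#nvme_devs[@]} nvme drive(s): ${nvme_devs[*]}"
```

On a real node the array would instead be populated from the detected devices; the `${array[@]:offset:length}` slice does the capping regardless of how many drives were found.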