By default, SPDK's hotplug operates on EVERY nvme ctrl it finds.
This breaks the tgt_run_hotplug() test, since the count of available
bdevs after ctrls are removed from the bus doesn't match what we
expect to see. Fix this by making sure only the selected nvmes are
moved to userspace.
Also, remove the unnecessary calls to bdev_nvme_attach_controller().
Also, also, timing_cmd() is "fixed" to make sure it returns the exit
status of the command whose execution time it reports. It was
accidentally masking the status of remove_attach_helper() due to the
way it was being called. This is done here because, with timing_cmd()
"fixed" separately, sw_hotplug would simply start failing.
In short, the problem was here:
$ foo=$(timing_cmd bar) <- this executes in a subshell; errexit
                           is not inherited
# timing_cmd() executes bar and ignores its exit status, simply
# returning its execution time. Due to that, the entire execution
# of remove_attach_helper() down the stack is not controlled by
# errexit.
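The pitfall above can be reproduced with a minimal sketch. Note that
broken_timing_cmd()/fixed_timing_cmd()/helper() below are hypothetical
stand-ins for illustration, not SPDK's actual timing_cmd() or
remove_attach_helper(); the key bash behavior is that, by default,
command substitution unsets errexit in its subshell (see the
inherit_errexit shopt):

```shell
#!/usr/bin/env bash
# Hypothetical stand-ins, not SPDK's actual helpers.
set -e

# Original behavior: swallows the command's exit status and only
# prints the elapsed time, so the caller never sees the failure.
broken_timing_cmd() {
	local start=$SECONDS
	"$@" || true # failure of "$@" is ignored here
	echo $((SECONDS - start))
}

# "Fixed" behavior: still prints the elapsed time, but propagates
# the command's exit status to the caller.
fixed_timing_cmd() {
	local start=$SECONDS rc=0
	"$@" || rc=$?
	echo $((SECONDS - start))
	return "$rc"
}

helper() {
	false                    # would abort under errexit, but errexit
	                         # is unset inside $(...) by default
	echo "still running" >&2 # so execution keeps going regardless
	false                    # final status: failure
}

# Masked: the subshell ignores errexit and broken_timing_cmd ignores
# the status, so the script happily continues.
elapsed=$(broken_timing_cmd helper) && echo "failure was masked"

# Propagated: fixed_timing_cmd returns helper's nonzero status, so
# the assignment fails and errexit can finally see it.
elapsed=$(fixed_timing_cmd helper) || echo "failure was caught"
```

Enabling `shopt -s inherit_errexit` would let errexit reach into the
subshell as well, but it would still not help while timing_cmd()
discarded the status of the command it ran.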
The above scenario was organically fixed by 7a8d3990, but since that
change also exposed an issue with the underlying sw_hotplug test, it
affected the entire per-patch run, together with 1826c4dc, which
increased the number of sw_hotplug instances.
The next patch in the series brings back 7a8d3990, but the "fix" for
timing_cmd() should remain, as it addresses the root cause of the
false positives across the whole sw_hotplug suite in the first place.
Change-Id: Iaed5b1080c547a86e262a895b5d2ccf10691e7bc
Signed-off-by: Michal Berger <michal.berger@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/23170
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Jim Harris <jim.harris@samsung.com>