Commit ea72b28f authored by Shuhei Matsumoto, committed by Jim Harris

bdev/nvme: Cache current for non-optimized path in active-passive

The patch
https://github.com/spdk/spdk/commit/20f78694d4ad49f8cdc8905cefec5dc1383bf6fd
changed nvme_io_path_is_current() to return true if io_path is the first
available path on the list in active-passive mode.

This caused a problem when the disable-auto-failback feature was tested.

When disable-auto-failback is enabled, the currently active path should
stay current even after the preferred path becomes available again.

However, current changed anyway: after the cache was reset, no path was
marked as in use, and the preferred path sits at the head of the list, so
it was selected as the first available path.

The original purpose of the patch was to handle ANA optimized/non-optimized
states.

Whenever an ANA event is received, the I/O path cache is always cleared.
Hence, we can safely cache an ANA non-optimized path in active-passive mode
as well.

Do that instead. This fixes the strange behavior of the
disable-auto-failback feature while keeping the original purpose intact.

Change-Id: I12bee2e104b9518ea0c19639164ceca20646248c
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-on: https://review.spdk.io/c/spdk/spdk/+/25884


Community-CI: Mellanox Build Bot
Reviewed-by: Ankit Kumar <ankit.kumar@samsung.com>
Reviewed-by: Jim Harris <jim.harris@nvidia.com>
Tested-by: SPDK Automated Test System <spdkbot@gmail.com>
Reviewed-by: Ben Walker <ben@nvidia.com>
parent 539fbe74
+6 −22
@@ -1175,12 +1175,11 @@ _bdev_nvme_find_io_path(struct nvme_bdev_channel *nbdev_ch)
 		io_path = nvme_io_path_get_next(nbdev_ch, io_path);
 	} while (io_path != start);
 
-	if (nbdev_ch->mp_policy == BDEV_NVME_MP_POLICY_ACTIVE_ACTIVE) {
-		/* We come here only if there is no optimized path. Cache even non_optimized
-		 * path for load balance across multiple non_optimized paths.
-		 */
-		nbdev_ch->current_io_path = non_optimized;
-	}
+	/* We come here only if there is no optimized path. Cache even non_optimized
+	 * path. If any path becomes optimized, ANA event will be received and
+	 * cache will be cleared.
+	 */
+	nbdev_ch->current_io_path = non_optimized;
 
 	return non_optimized;
 }
@@ -9152,22 +9151,7 @@ nvme_io_path_is_current(struct nvme_io_path *io_path)
 		current = (io_path->nvme_ns->ana_state == SPDK_NVME_ANA_OPTIMIZED_STATE) ||
 			  (optimized_io_path == NULL);
 	} else {
-		if (nbdev_ch->current_io_path) {
-			current = (io_path == nbdev_ch->current_io_path);
-		} else {
-			struct nvme_io_path *first_path;
-
-			/* We arrived here as there are no optimized paths for active-passive
-			 * mode. Check if this io_path is the first one available on the list.
-			 */
-			current = false;
-			STAILQ_FOREACH(first_path, &nbdev_ch->io_path_list, stailq) {
-				if (nvme_io_path_is_available(first_path)) {
-					current = (io_path == first_path);
-					break;
-				}
-			}
-		}
+		current = (io_path == nbdev_ch->current_io_path);
 	}
 
 	return current;
+2 −2
@@ -7834,9 +7834,9 @@ test_io_path_is_current(void)
 
 	CU_ASSERT(nvme_io_path_is_current(&io_path2) == true);
 
-	/* active/passive: current is true if it is the first one when there is no optimized path. */
+	/* active/passive: current is true if it is available. We do not care even if it is not ANA optimized. */
 	nbdev_ch.mp_policy = BDEV_NVME_MP_POLICY_ACTIVE_PASSIVE;
-	nbdev_ch.current_io_path = NULL;
+	nbdev_ch.current_io_path = &io_path1;
 
 	CU_ASSERT(nvme_io_path_is_current(&io_path1) == true);
 	CU_ASSERT(nvme_io_path_is_current(&io_path2) == false);