Commit 2f2ffd98 authored by paul luse, committed by Jim Harris

test: remove test plan .md files



Over a year ago a sub-team agreed that there was not enough value
in checking test plans into the repo: they require maintenance,
and their primary purpose is not to document how the module is tested
but to facilitate discussion during test development. It was agreed
that we would use the review system to iterate on test plans, but that once
the actual tests were developed the plan would not get checked in.

This patch just removes those that were likely in the repo before that
discussion.

Signed-off-by: paul luse <paul.e.luse@intel.com>
Change-Id: I75dcdd8b4754b7ecb4a21079b251c707557a3280
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/467394


Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Karol Latecki <karol.latecki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
parent e3366470

test/blobfs/test_plan.md

deleted 100644 → 0
+0 −67
# SPDK BlobFS Test Plan

## Current Tests

### Unit tests (asynchronous API)

- Tests BlobFS w/ Blobstore with no dependencies on SPDK bdev layer or event framework.
  Uses simple DRAM buffer to simulate a block device - all block operations are immediately
  completed so no special event handling is required.
- Current tests include:
	- basic fs initialization and unload
	- open non-existent file fails if SPDK_BLOBFS_OPEN_CREATE not specified
	- open non-existent file creates the file if SPDK_BLOBFS_OPEN_CREATE is specified
	- closing a file fails if there are still open references
	- closing a file with no open references fails
	- files can be truncated up and down in length
	- three-way rename
	- operations for inserting and traversing buffers in a cache tree
	- allocating and freeing I/O channels

### Unit tests (synchronous API)

- Tests BlobFS w/ Blobstore with no dependencies on SPDK bdev layer or event framework.
  The synchronous API requires a separate thread to handle any asynchronous handoffs such as
  I/O to disk.
	- basic read/write I/O operations
	- appending to a file whose cache has been flushed and evicted

# RocksDB

- Tests BlobFS as the backing store for a RocksDB database.  BlobFS uses the SPDK NVMe driver
  through the SPDK bdev layer as its block device.  Uses RocksDB db_bench utility to drive
  the workloads.  Each workload (after the initial sequential insert) reloads the database
  which validates metadata operations completed correctly in the previous run via the
  RocksDB MANIFEST file.  RocksDB also runs checksums on key/value blocks read from disk,
  verifying data integrity.
	- initialize BlobFS filesystem on NVMe SSD
	- bulk sequential insert of up to 500M keys (16B key, 1000B value)
	- overwrite test - randomly overwrite one of the keys in the database (driving both
	  flush and compaction traffic)
	- readwrite test - one thread randomly overwrites a key in the database, up to 16
	  threads randomly read a key in the database.
	- writesync - same as overwrite, but enables a WAL (write-ahead log)
	- randread - up to 16 threads randomly read a key in the database

## Future tests to add

### Unit tests

- Corrupt data in DRAM buffer, and confirm subsequent operations such as BlobFS load or
  opening a blob fail as expected (no panics, etc.)
- Test synchronous API with multiple synchronous threads.  May be implemented separately
  from existing synchronous unit tests to allow for more sophisticated thread
  synchronization.
- Add tests for out of capacity (no more space on disk for additional blobs/files)
- Pending addition of BlobFS superblob, verify that BlobFS load fails with missing or
  corrupt superblob
- Additional tests to reach 100% unit test coverage

### System/integration tests

- Use fio with BlobFS fuse module for more focused data integrity testing on individual
  files.
- Pending directory support (via an SPDK btree module), use BlobFS fuse module to do
  things like a Linux kernel compilation.  Performance may be poor but this will heavily
  stress the mechanics of BlobFS.
- Run RocksDB tests with varying amounts of BlobFS cache

test/iscsi_tgt/test_plan.md

deleted 100644 → 0
+0 −41
# SPDK iscsi_tgt test plan

## Objective
The purpose of these tests is to verify correct behavior of the SPDK iSCSI
target feature.
These tests are run either per-commit or as nightly tests.

## Configuration
All tests share the same basic configuration file for SPDK iscsi_tgt to run.
Static configuration from the config file consists of setting the number of
per-session queues and enabling RPC for further configuration via RPC calls.
RPC calls used for dynamic configuration consist of:
- creating Malloc backend devices
- creating Null Block backend devices
- creating Pmem backend devices
- constructing iSCSI subsystems
- deleting iSCSI subsystems

### Tests

#### Test 1: iSCSI namespace on a Pmem device
This test configures a SPDK iSCSI subsystem backed by pmem
devices and uses FIO to generate I/Os that target those subsystems.
Test steps:
- Step 1: Start SPDK iscsi_tgt application.
- Step 2: Create 10 pmem pools.
- Step 3: Create pmem bdevs on pmem pools.
- Step 4: Create iSCSI subsystems with 10 pmem bdevs namespaces.
- Step 5: Connect to iSCSI subsystems with kernel initiator.
- Step 6: Run FIO with workload parameters: blocksize=4096, iodepth=64,
	workload=randwrite; the verify flag is enabled so that
	FIO reads back and verifies the data written to the pmem device.
	The run time is 10 seconds for a quick test and 10 minutes
	for the longer nightly test.
- Step 7: Run FIO with workload parameters: blocksize=128kB, iodepth=4,
	workload=randwrite; the verify flag is enabled so that
	FIO reads back and verifies the data written to the pmem device.
	The run time is 10 seconds for a quick test and 10 minutes
	for the longer nightly test.
- Step 8: Disconnect kernel initiator from iSCSI subsystems.
- Step 9: Delete iSCSI subsystems from configuration.
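The FIO parameters in Steps 6 and 7 map naturally onto a job file. A minimal sketch for the Step 6 workload, assuming the kernel initiator exposes the LUN as /dev/sdX (a placeholder) and that crc32c is an acceptable verify method (the plan only says the verify flag is enabled):

```ini
; Hypothetical job file for Step 6; Step 7 would use bs=128k, iodepth=4 instead.
[global]
ioengine=libaio
direct=1
rw=randwrite
verify=crc32c     ; "verify flag is enabled" - exact checksum type is an assumption
time_based=1
runtime=10        ; 10 s for the quick test; 600 for the nightly variant

[iscsi-pmem-lun]
filename=/dev/sdX ; placeholder for the device exposed by the kernel initiator
bs=4k
iodepth=64
```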

test/nvmf/test_plan.md

deleted 100644 → 0
+0 −95
# SPDK nvmf_tgt test plan

## Objective
The purpose of these tests is to verify correct behavior of the SPDK NVMe-oF
target feature.
These tests are run either per-commit or as nightly tests.

## Configuration
All tests share the same basic configuration file for SPDK nvmf_tgt to run.
Static configuration from the config file consists of setting the number of
per-session queues and enabling RPC for further configuration via RPC calls.
RPC calls used for dynamic configuration consist of:
- creating Malloc backend devices
- creating Null Block backend devices
- constructing NVMe-oF subsystems
- deleting NVMe-oF subsystems

### Tests

#### Test 1: NVMe-oF namespace on a Logical Volumes device
This test configures a SPDK NVMe-oF subsystem backed by logical volume
devices and uses FIO to generate I/Os that target those subsystems.
The logical volume bdevs are backed by malloc bdevs.
Test steps:
- Step 1: Assign IP addresses to RDMA NICs.
- Step 2: Start SPDK nvmf_tgt application.
- Step 3: Create malloc bdevs.
- Step 4: Create logical volume stores on malloc bdevs.
- Step 5: Create 10 logical volume bdevs on each logical volume store.
- Step 6: Create NVMe-oF subsystems with logical volume bdev namespaces.
- Step 7: Connect to NVMe-oF subsystems with kernel initiator.
- Step 8: Run FIO with workload parameters: blocksize=256k, iodepth=64,
workload=randwrite; the verify flag is enabled so that FIO reads back and verifies
the data written to the logical volume device. The run time is 10 seconds for a
quick test and 10 minutes for the longer nightly test.
- Step 9: Disconnect kernel initiator from NVMe-oF subsystems.
- Step 10: Delete NVMe-oF subsystems from configuration.
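Step 7 above is typically scripted. A minimal sketch that builds one kernel-initiator `nvme connect` command line per subsystem; the NQN prefix, target address, and port are illustrative assumptions (SPDK tests conventionally use `nqn.2016-06.io.spdk:cnodeN`), not values taken from this plan:

```python
# Sketch of Step 7: one `nvme connect` invocation per NVMe-oF subsystem.
# NQN prefix, target address and port are assumptions for illustration.

def connect_cmds(num_subsystems, addr="192.168.100.8", port=4420):
    """Build the kernel-initiator connect command line for each subsystem."""
    cmds = []
    for i in range(1, num_subsystems + 1):
        nqn = f"nqn.2016-06.io.spdk:cnode{i}"  # conventional SPDK test NQN
        cmds.append(f"nvme connect -t rdma -n {nqn} -a {addr} -s {port}")
    return cmds

for cmd in connect_cmds(2):
    print(cmd)
```

Step 9's disconnect is the mirror image (`nvme disconnect -n <nqn>` per subsystem).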

### Compatibility testing

- Verify functionality of SPDK `nvmf_tgt` with Linux kernel NVMe-oF host
  - Exercise various kernel NVMe host parameters
    - `nr_io_queues`
    - `queue_size`
  - Test discovery subsystem with `nvme` CLI tool
    - Verify that discovery service works correctly with `nvme discover`
    - Verify that large responses work (many subsystems)

### Specification compliance

- NVMe base spec compliance
  - Verify all mandatory admin commands are implemented
    - Get Log Page
    - Identify (including all mandatory CNS values)
      - Identify Namespace
      - Identify Controller
      - Active Namespace List
      - Allocated Namespace List
      - Identify Allocated Namespace
      - Attached Controller List
      - Controller List
    - Abort
    - Set Features
    - Get Features
    - Asynchronous Event Request
    - Keep Alive
  - Verify all mandatory NVM command set I/O commands are implemented
    - Flush
    - Write
    - Read
  - Verify all mandatory log pages
    - Error Information
    - SMART / Health Information
    - Firmware Slot Information
  - Verify all mandatory Get/Set Features
    - Arbitration
    - Power Management
    - Temperature Threshold
    - Error Recovery
    - Number of Queues
    - Write Atomicity Normal
    - Asynchronous Event Configuration
  - Verify all implemented commands behave as required by the specification
- Fabric command processing
  - Verify that Connect commands with invalid parameters are failed with correct response
    - Invalid RECFMT
    - Invalid SQSIZE
    - Invalid SUBNQN, HOSTNQN (too long, incorrect format, not null terminated)
    - QID != 0 before admin queue created
    - CNTLID != 0xFFFF (static controller mode)
  - Verify that non-Fabric commands are only allowed in the correct states
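The invalid-Connect checks listed above can be summarized in code. A hedged sketch of the validation logic (the 223-byte NQN limit and "RECFMT must be 0h" rule are from the NVMe-oF spec as I understand it; this is not SPDK's actual implementation):

```python
# Sketch of the Connect-parameter checks from the list above
# (static controller mode, per the plan's CNTLID rule).

NQN_MAX_LEN = 223  # maximum NQN length in bytes per the NVMe-oF spec

def connect_cmd_valid(recfmt, sqsize, subnqn, qid, admin_queue_created, cntlid):
    if recfmt != 0:                # only record format 0h is defined
        return False
    if sqsize == 0:                # SQSIZE is a 0's-based value; 0h is invalid
        return False
    if not subnqn.startswith("nqn.") or len(subnqn) > NQN_MAX_LEN:
        return False
    if qid != 0 and not admin_queue_created:  # I/O queue before admin queue
        return False
    if cntlid != 0xFFFF:           # the plan's static-controller-mode rule
        return False
    return True

# A well-formed admin-queue Connect passes; a too-long SUBNQN fails.
print(connect_cmd_valid(0, 31, "nqn.2016-06.io.spdk:cnode1", 0, False, 0xFFFF))
print(connect_cmd_valid(0, 31, "nqn." + "x" * 300, 0, False, 0xFFFF))
```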

### Configuration and RPC

- Verify that invalid NQNs cannot be configured via conf file or RPC

test/pmem/test_plan.md

deleted 100644 → 0
+0 −310
# PMEM bdev feature test plan

## Objective
The purpose of these tests is to verify the possibility of using a pmem bdev
configuration in SPDK by running functional tests and FIO traffic verification
tests.

## Configuration
Configuration in tests is to be done using the example stub application
(spdk/example/bdev/io/bdev_io).
All possible management is done using RPC calls, with the exception of
split bdevs, which have to be configured in the .conf file.

Functional tests are executed as scenarios - sets of smaller test steps
in which the results and return codes of RPC calls are validated.
Some configuration calls may additionally be validated
by use of "get" (e.g. get_bdevs) RPC calls, which provide additional
information for verifying results.
In some steps additional write/read operations will be performed on
PMEM bdevs in order to check correct I/O path behavior.

FIO traffic verification tests will serve as integration tests and will
be executed to confirm correct behavior of the PMEM bdev when working with the
vhost, nvmf_tgt and iscsi_tgt applications.
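The validation pattern described above (issue an RPC, check its return code, cross-check with a "get" call) can be sketched as a small helper. `scripts/rpc.py` is SPDK's RPC client; the helper names, the pool path, and the argument layout here are illustrative assumptions:

```python
import subprocess

RPC = "scripts/rpc.py"  # path to SPDK's RPC client; adjust for your tree

def build_cmd(method, *args):
    """Assemble one rpc.py invocation for a test step (hypothetical helper)."""
    return [RPC, method, *map(str, args)]

def run_step(method, *args):
    """Run an RPC step; tests validate the return code (and "get" call output)."""
    proc = subprocess.run(build_cmd(method, *args), capture_output=True, text=True)
    return proc.returncode, proc.stdout

# e.g. the create/verify pair used throughout the scenarios below
# (arguments are placeholders, not the RPC's documented signature):
# run_step("bdev_pmem_create_pool", "/mnt/pmem/pool", 256, 512)
# run_step("bdev_pmem_get_pool_info", "/mnt/pmem/pool")
```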

## Functional tests

### bdev_pmem_get_pool_info

#### bdev_pmem_get_pool_info_tc1
Negative test for checking pmem pool file.
Call with missing path argument.
Steps & expected results:
- Call bdev_pmem_get_pool_info with missing path argument
- Check that return code != 0 and error code =

#### bdev_pmem_get_pool_info_tc2
Negative test for checking pmem pool file.
Call with a non-existent path argument.
Steps & expected results:
- Call bdev_pmem_get_pool_info with a path argument that points to a non-existing file.
- Check that return code != 0 and error code = ENODEV

#### bdev_pmem_get_pool_info_tc3
Negative test for checking pmem pool file.
Call with other type of pmem pool file.
Steps & expected results:
- Using the pmem utility tools, create a pool of OBJ type instead of BLK
(if the needed utility tools are not available, create a random file in the filesystem)
- Call bdev_pmem_get_pool_info and point to file created in previous step.
- Check that return code != 0 and error code = ENODEV

#### bdev_pmem_get_pool_info_tc4
Positive test for checking pmem pool file.
Call with existing pmem pool file.
Steps & expected results:
- Call bdev_pmem_get_pool_info with path argument that points to existing file.
- Check that return code == 0

### bdev_pmem_create_pool
From the libpmemblk documentation:
- The PMEM block size has to be at least 512 bytes; if a lower value
is used then the PMEM library will silently round it up to 512, which is defined
in the pmem/libpmemblk.h file as PMEMBLK_MIN_BLK.
- The total pool size cannot be less than 16MB, which is defined in the
pmem/libpmemblk.h file as PMEMBLK_MIN_POOL.
- The total number of blocks in a PMEM pool file cannot be less than 256.
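These constraints drive several test cases below (tc6-tc8). As a sanity check on the expected results, they can be expressed directly; the constants mirror PMEMBLK_MIN_BLK and PMEMBLK_MIN_POOL, and the 256-block minimum comes from the text above:

```python
# The libpmemblk pool constraints from the list above, as a checker.

PMEMBLK_MIN_BLK = 512                 # block sizes below this are rounded up
PMEMBLK_MIN_POOL = 16 * 1024 * 1024   # 16MB minimum total pool size
MIN_BLOCKS = 256                      # minimum number of blocks in a pool

def effective_block_size(bsize):
    """libpmemblk silently rounds block sizes below 512 up to 512."""
    return max(bsize, PMEMBLK_MIN_BLK)

def pool_params_valid(bsize, pool_size):
    if pool_size < PMEMBLK_MIN_POOL:                 # tc7's failure case
        return False
    return pool_size // effective_block_size(bsize) >= MIN_BLOCKS  # tc8's case

print(effective_block_size(1))                          # 512 (tc6 round-up)
print(pool_params_valid(512, 256 * 1024 * 1024))        # True  (tc3's parameters)
print(pool_params_valid(128 * 1024, 30 * 1024 * 1024))  # False (tc8: only 240 blocks)
```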

#### bdev_pmem_create_pool_tc1
Negative test case for creating a new pmem pool.
Call bdev_pmem_create_pool with missing arguments.
Steps & expected results:
- call bdev_pmem_create_pool without path argument
- call return code != 0
- call bdev_pmem_get_pool_info and check that pmem pool file was not created
- call return code != 0
- call bdev_pmem_create_pool with path but without size and block size arguments
- call return code != 0
- call bdev_pmem_get_pool_info and check that pmem pool file was not created
- call return code != 0
- call bdev_pmem_create_pool with path and size but without block size arguments
- call return code != 0
- call bdev_pmem_get_pool_info and check that pmem pool file was not created
- call return code != 0

#### bdev_pmem_create_pool_tc2
Negative test case for creating a new pmem pool.
Call bdev_pmem_create_pool with a non-existing path argument.
Steps & expected results:
- call bdev_pmem_create_pool with path that does not exist
- call return code != 0
- call bdev_pmem_get_pool_info and check that pmem pool file was not created
- call return code != 0

#### bdev_pmem_create_pool_tc3
Positive test case for creating a new pmem pool on disk space.
Steps & expected results:
- call bdev_pmem_create_pool with correct path argument,
blocksize=512 and total size=256MB
- call return code = 0
- call bdev_pmem_get_pool_info and check that pmem file was created
- call return code = 0
- call bdev_pmem_delete_pool on previously created pmem
- return code = 0 and no error code

#### bdev_pmem_create_pool_tc4
Positive test case for creating a new pmem pool in RAM space.
# TODO: Research test steps for creating a pool in RAM!!!
Steps & expected results:
- call bdev_pmem_create_pool with correct path argument,
blocksize=512 and total size=256MB
- call return code = 0
- call bdev_pmem_get_pool_info and check that pmem file was created
- call return code = 0
- call bdev_pmem_delete_pool on previously created pmem
- return code = 0 and no error code

#### bdev_pmem_create_pool_tc5
Negative test case for creating two pmems with same path.
Steps & expected results:
- call bdev_pmem_create_pool with correct path argument,
blocksize=512 and total size=256MB
- call return code = 0
- call bdev_pmem_get_pool_info and check that pmem file was created
- call return code = 0
- call bdev_pmem_create_pool with the same path argument as before,
blocksize=4096 and total size=512MB
- call return code != 0, error code = EEXIST
- call bdev_pmem_get_pool_info and check that the first pmem pool file is still
available and not modified (block size and total size stay the same)
- call return code = 0
- call bdev_pmem_delete_pool on first created pmem pool
- return code = 0 and no error code

#### bdev_pmem_create_pool_tc6
Positive test case for creating pmem pool file with various block sizes.
Steps & expected results:
- call bdev_pmem_create_pool with correct path argument, total size=256MB
with different block size arguments - 1, 511, 512, 513, 1024, 4096, 128k and 256k
- call bdev_pmem_get_pool_info on each of created pmem pool and check if it was created;
For pool files created with block size <512 their block size should be rounded up
to 512; other pool files should have the same block size as specified in create
command
- call return code = 0; block sizes as expected
- call bdev_pmem_delete_pool on all created pool files

#### bdev_pmem_create_pool_tc7
Negative test case for creating pmem pool file with total size of less than 16MB.
Steps & expected results:
- call bdev_pmem_create_pool with correct path argument, block size=512 and
total size less than 16MB
- return code !=0 and error code !=0
- call bdev_pmem_get_pool_info to verify pmem pool file was not created
- return code = 0

#### bdev_pmem_create_pool_tc8
Negative test case for creating pmem pool file with less than 256 blocks.
Steps & expected results:
- call bdev_pmem_create_pool with correct path argument, block size=128k and
total size=30MB
- return code !=0 and error code !=0
- call bdev_pmem_get_pool_info to verify pmem pool file was not created
- return code = 0

### bdev_pmem_delete_pool

#### bdev_pmem_delete_pool_tc1
Negative test case for deleting a pmem pool.
Call bdev_pmem_delete_pool on a non-existing pmem pool.
Steps & expected results:
- call bdev_pmem_delete_pool on non-existing pmem.
- return code !=0 and error code = ENOENT

#### bdev_pmem_delete_pool_tc2
Negative test case for deleting a pmem pool.
Call bdev_pmem_delete_pool on a file of the wrong type.
Steps & expected results:
- Using the pmem utility tools, create a pool of OBJ type instead of BLK
(if the needed utility tools are not available, create a random file in the filesystem)
- Call bdev_pmem_delete_pool and point to file created in previous step.
- return code !=0 and error code = ENOTBLK

#### bdev_pmem_delete_pool_tc3
Positive test case for creating and deleting a pmem pool.
Steps & expected results:
- call bdev_pmem_create_pool with correct arguments
- return code = 0 and no error code
- using bdev_pmem_get_pool_info check that pmem was created
- return code = 0 and no error code
- call bdev_pmem_delete_pool on previously created pmem
- return code = 0 and no error code
- using bdev_pmem_get_pool_info check that pmem no longer exists
- return code !=0 and error code = ENODEV

#### bdev_pmem_delete_pool_tc4
Negative test case for creating and deleting a pmem pool.
Steps & expected results:
- run scenario from test case 3
- call bdev_pmem_delete_pool on already deleted pmem pool
- return code !=0 and error code = ENODEV

### bdev_pmem_create

#### bdev_pmem_create_tc1
Negative test for constructing new pmem bdev.
Call bdev_pmem_create with missing argument.
Steps & expected results:
- Call bdev_pmem_create with missing path argument.
- Check that return code != 0

#### bdev_pmem_create_tc2
Negative test for constructing new pmem bdev.
Call bdev_pmem_create with a non-existing path argument.
Steps & expected results:
- call bdev_pmem_create with an incorrect (non-existing) path
- call return code != 0 and error code = ENODEV
- using get_bdevs check that no pmem bdev was created

#### bdev_pmem_create_tc3
Negative test for constructing pmem bdevs with a random file instead of a pmemblk pool.
Steps & expected results:
- using a system tool (like dd) create a random file
- call bdev_pmem_create with path pointing to that file
- return code != 0, error code = ENOTBLK

#### bdev_pmem_create_tc4
Negative test for constructing pmem bdevs with a pmemobj pool instead of a pmemblk pool.
Steps & expected results:
- Using the pmem utility tools, create a pool of OBJ type instead of BLK
(if the needed utility tools are not available, create a random file in the filesystem)
- call bdev_pmem_create with path pointing to that pool
- return code != 0, error code = ENOTBLK

#### bdev_pmem_create_tc5
Positive test for constructing pmem bdev.
Steps & expected results:
- call bdev_pmem_create_pool with correct arguments
- return code = 0, no errors
- call bdev_pmem_get_pool_info and check that the pmem file exists
- return code = 0, no errors
- call bdev_pmem_create with correct arguments to create a pmem bdev
- return code = 0, no errors
- using get_bdevs check that pmem bdev was created
- delete pmem bdev using bdev_pmem_delete
- return code = 0, no error code
- delete previously created pmem pool
- return code = 0, no error code

#### bdev_pmem_create_tc6
Negative test for constructing pmem bdevs twice on the same pmem.
Steps & expected results:
- call bdev_pmem_create_pool with correct arguments
- return code = 0, no errors
- call bdev_pmem_get_pool_info and check that the pmem file exists
- return code = 0, no errors
- call bdev_pmem_create with correct arguments to create a pmem bdev
- return code = 0, no errors
- using get_bdevs check that pmem bdev was created
- call bdev_pmem_create again on the same pmem file
- return code != 0, error code = EEXIST
- delete pmem bdev using bdev_pmem_delete
- return code = 0, no error code
- delete previously created pmem pool
- return code = 0, no error code

### bdev_pmem_delete

#### delete_bdev_tc1
Positive test for deleting pmem bdevs using bdev_pmem_delete call.
Steps & expected results:
- construct malloc and aio bdevs (also NVMe if possible)
- all calls - return code = 0, no errors; bdevs created
- call bdev_pmem_create_pool with correct path argument,
block size=512, total size=256M
- return code = 0, no errors
- call bdev_pmem_get_pool_info and check if pmem file exists
- return code = 0, no errors
- call bdev_pmem_create and create a pmem bdev
- return code = 0, no errors
- using get_bdevs check that pmem bdev was created
- delete pmem bdev using bdev_pmem_delete
- return code = 0, no errors
- using get_bdevs confirm that pmem bdev was deleted and other bdevs
were unaffected.

#### bdev_pmem_delete_tc2
Negative test for deleting pmem bdev twice.
Steps & expected results:
- call bdev_pmem_create_pool with correct path argument,
block size=512, total size=256M
- return code = 0, no errors
- call bdev_pmem_get_pool_info and check if pmem file exists
- return code = 0, no errors
- call bdev_pmem_create and create a pmem bdev
- return code = 0, no errors
- using get_bdevs check that pmem bdev was created
- delete pmem bdev using bdev_pmem_delete
- return code = 0, no errors
- using get_bdevs confirm that pmem bdev was deleted
- delete pmem bdev using bdev_pmem_delete second time
- return code != 0, error code = ENODEV


## Integration tests
Description of integration tests which run FIO verification traffic against
pmem_bdevs used in vhost, iscsi_tgt and nvmf_tgt applications can be found in
test directories for these components:
- spdk/test/vhost
- spdk/test/nvmf
- spdk/test/iscsi_tgt

test/vhost/hotplug/test_plan.md

deleted 100644 → 0
+0 −86
# Vhost hotattach and hotdetach test plan

## Objective
The purpose of these tests is to verify that SPDK vhost remains stable during
hot-attach and hot-detach operations performed on SCSI controller devices.
Hot-attach is a scenario where a device is added to a controller already in use
by a guest VM, while in hot-detach a device is removed from a controller while
it is in use.

## Test Cases Description
1. FIO I/O traffic is run during hot-attach and hot-detach operations.
By default FIO uses the default_integrity*.job config files located in the
test/vhost/hotfeatures/fio_jobs directory.
2. FIO mode of operation is random write (randwrite) with verification enabled,
which results in read operations also being performed.
3. Test case descriptions below contain manual steps for testing.
Automated tests are located in test/vhost/hotfeatures.

### Hotattach, Hotdetach Test Cases prerequisites
1. Run vhost with 8 empty controllers. Prepare 16 NVMe disks.
If you don't have 16 disks, use split bdevs.
2. In the test cases, fio status is checked after every run for any errors.
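For the "use split" prerequisite, a legacy .conf fragment along these lines would carve 16 disks out of 8; the section layout follows SPDK's legacy config style, but the bdev names are placeholders, not values from this plan:

```
# Hypothetical legacy config fragment: split each NVMe bdev in two
# to reach the 16 disks required above (repeat for Nvme2n1..Nvme7n1).
[Split]
  Split Nvme0n1 2
  Split Nvme1n1 2
```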

### Hotattach Test Cases prerequisites
1. Run vms, first with ctrlr-1 and ctrlr-2 and second one with ctrlr-3 and ctrlr-4.

## Test Case 1
1. Attach NVMe to Ctrlr 1
2. Run fio integrity on attached device

## Test Case 2
1. Run fio integrity on attached device from test case 1
2. During fio attach another NVMe to Ctrlr 1
3. Run fio integrity on both devices

## Test Case 3
1. Run fio integrity on attached devices from previous test cases
2. During fio attach NVMe to Ctrl2
3. Run fio integrity on all devices

## Test Case 4
2. Run fio integrity on attached device from previous test cases
3. During fio attach NVMe to Ctrl3/VM2
4. Run fio integrity on all devices
5. Reboot VMs
6. Run fio integrity again on all devices


### Hotdetach Test Cases prerequisites
1. Run vms, first with ctrlr-5 and ctrlr-6 and second with ctrlr-7 and ctrlr-8.

## Test Case 1
1. Run fio on all devices
2. Detach NVMe from Ctrl5 during fio
3. Check vhost or VMs did not crash
4. Check that detached device is gone from VM
5. Check that fio job run on detached device stopped and failed

## Test Case 2
1. Attach NVMe to Ctrlr 5
2. Run fio on 1 device from Ctrl 5
3. Detach NVMe from Ctrl5 during fio traffic
4. Check vhost or VMs did not crash
5. Check that fio job run on detached device stopped and failed
6. Check that detached device is gone from VM

## Test Case 3
1. Attach NVMe to Ctrlr 5
2. Run fio with integrity on all devices, except one
3. Detach NVMe without traffic during fio running on other devices
4. Check vhost or VMs did not crash
5. Check that fio jobs did not fail
6. Check that detached device is gone from VM

## Test Case 4
1. Attach NVMe to Ctrlr 5
2. Run fio on 1 device from Ctrl 5
3. Run separate fio with integrity on all other devices (all VMs)
4. Detach NVMe from Ctrl1 during fio traffic
5. Check vhost or VMs did not crash
6. Check that fio job run on detached device stopped and failed
7. Check that other fio jobs did not fail
8. Check that detached device is gone from VM
9. Reboot VMs
10. Check that detached device is gone from VM
11. Check that all other devices are in place
12. Run fio integrity on all remaining devices