Add end-to-end throughput benchmarks for S3 Express (#3446)
## Description

See the [README](https://github.com/smithy-lang/smithy-rs/blob/b172a1ea4c76cf8f820d8ef4ae827d02451bddcd/aws/sdk/benchmarks/s3-express/README.md).

## Testing

Here are performance numbers measured on `Amazon Linux 2 x86_64 5.10` with a `c5.4xlarge` instance in `us-west-2`. The benchmarks were run with `NUMBER_OF_ITERATIONS=1` to allow for quick execution.

```
For 64KB objects:
PUT:          [11.750 ms 12.025 ms 12.291 ms] / [67.737 ms 76.367 ms 85.531 ms]
GET:          [16.071 ms 16.270 ms 16.461 ms] / [19.941 ms 22.622 ms 26.167 ms]
PUT + DELETE: [21.492 ms 22.132 ms 22.755 ms] / [75.608 ms 86.056 ms 98.329 ms]

For 1MB objects:
PUT:          [37.400 ms 39.130 ms 40.769 ms] / [144.30 ms 160.93 ms 180.93 ms]
GET:          [14.968 ms 15.193 ms 15.408 ms] / [24.872 ms 28.417 ms 32.984 ms]
PUT + DELETE: [52.106 ms 54.875 ms 57.503 ms] / [172.38 ms 185.00 ms 200.28 ms]
```

In each row, the tuple to the left of the `/` is the execution time against an S3 Express bucket and the tuple to the right is against a regular S3 bucket. Each tuple is a 99% confidence interval of the form `[<lower bound> <mean> <upper bound>]`.
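For context on how to read the numbers: the `[<lower> <mean> <upper>]` tuples are Criterion-style confidence-interval reports from end-to-end calls against real buckets. The sketch below is a minimal, hypothetical example of what a single 64KB PUT benchmark of this kind could look like using `aws-sdk-s3` with Criterion's async support; the `BENCH_BUCKET` environment variable, the object key, and the overall setup are illustrative assumptions, not the actual harness, which lives under `aws/sdk/benchmarks/s3-express` (see the README linked above).

```rust
// Illustrative sketch only. Assumed dependencies: `criterion` (with the
// `async_tokio` feature), `tokio`, `aws-config`, and `aws-sdk-s3`.
use aws_sdk_s3::{primitives::ByteStream, Client};
use criterion::{criterion_group, criterion_main, Criterion};

/// Upload a single object and wait for the response (one end-to-end PUT).
async fn put_object(client: &Client, bucket: &str, key: &str, body: Vec<u8>) {
    client
        .put_object()
        .bucket(bucket)
        .key(key)
        .body(ByteStream::from(body))
        .send()
        .await
        .expect("failed to upload object");
}

fn put_benchmark(c: &mut Criterion) {
    let rt = tokio::runtime::Runtime::new().unwrap();

    // Assumed environment variable name; the real benchmark reads its bucket
    // names from its own configuration (see the README).
    let bucket = std::env::var("BENCH_BUCKET").expect("BENCH_BUCKET must be set");

    // Build the client once, outside the measured loop.
    let client = rt.block_on(async {
        let config = aws_config::load_from_env().await;
        Client::new(&config)
    });

    // 64KB payload, matching the first table above.
    let object = vec![0u8; 64 * 1024];

    c.bench_function("put-object-64KB", |b| {
        b.to_async(&rt).iter(|| async {
            put_object(&client, &bucket, "test-key", object.clone()).await;
        })
    });
}

criterion_group!(benches, put_benchmark);
criterion_main!(benches);
```

To compare the two bucket types, the same routine would be pointed at an S3 Express bucket and a regular S3 bucket and the resulting confidence intervals compared, which is the shape of the `left / right` tuples reported above.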
----

_By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice._

---------

Co-authored-by: John DiSanti <jdisanti@amazon.com>
Co-authored-by: AWS SDK Rust Bot <aws-sdk-rust-primary@amazon.com>
Co-authored-by: AWS SDK Rust Bot <97246200+aws-sdk-rust-ci@users.noreply.github.com>
Co-authored-by: Zelda Hessler <zhessler@amazon.com>
Co-authored-by: Russell Cohen <rcoh@amazon.com>