re-use http checksums on retry attempts (#4200)
## Description

A change was needed for the S3 Flexible Checksum SEP to add guidance for reusing checksums during retry attempts to prevent data corruption. When a request fails after checksum calculation is complete, SDKs must save and reuse the checksum for retry attempts rather than recalculating it. This prevents inconsistencies when the payload content might change between retries, ensuring data durability in S3.

* Adds a simple cache to the checksum crate that favors a cached checksum from a prior attempt if one is set (a minimal sketch of the idea appears at the end of this description)
* Adds new integration tests to verify retry behavior and reuse of checksums

NOTE: If a user actually replaces the file contents between attempts with content of a different length, the Rust SDK uses the original content length set on the ByteStream. That length is captured early, when we create the ByteStream, either from the user-provided content length or by calculating it from the file; we never attempt to recalculate it, and I see no great way of doing so. The result is a client-side failure about a stream length mismatch rather than sending the request to the server with the original checksum.

## Checklist

<!--- If a checkbox below is not applicable, then please DELETE it rather than leaving it unchecked -->

- [x] For changes to the AWS SDK, generated SDK code, or SDK runtime crates, I have created a changelog entry Markdown file in the `.changelog` directory, specifying "aws-sdk-rust" in the `applies_to` key.

----

_By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice._
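---

For illustration only, here is a minimal, self-contained sketch of the "favor the cached checksum from a prior attempt" idea. The names (`ChecksumCache`, `get_or_compute`) are hypothetical and are not the actual `aws-smithy-checksums` API; the real implementation hooks into the request retry machinery rather than a closure.

```rust
// Hypothetical sketch: a shared slot that stores the checksum computed on the
// first attempt and hands it back on retries instead of recomputing it over a
// possibly-changed payload.
use std::sync::{Arc, Mutex};

/// Shared, interior-mutable slot for a base64-encoded checksum value.
#[derive(Clone, Default)]
struct ChecksumCache {
    cached: Arc<Mutex<Option<String>>>,
}

impl ChecksumCache {
    fn new() -> Self {
        Self::default()
    }

    /// Return the checksum from a prior attempt if one was stored,
    /// otherwise compute it now and remember it for later attempts.
    fn get_or_compute<F>(&self, compute: F) -> String
    where
        F: FnOnce() -> String,
    {
        let mut slot = self.cached.lock().unwrap();
        match slot.as_ref() {
            Some(existing) => existing.clone(),
            None => {
                let value = compute();
                *slot = Some(value.clone());
                value
            }
        }
    }
}

fn main() {
    let cache = ChecksumCache::new();

    // First attempt: nothing cached yet, so the checksum is calculated.
    let first = cache.get_or_compute(|| "checksum-from-attempt-1".to_string());

    // Retry attempt: the cached value wins, even if recomputation would differ.
    let retry = cache.get_or_compute(|| "checksum-from-attempt-2".to_string());

    assert_eq!(first, retry);
    println!("checksum reused on retry: {retry}");
}
```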