This change replaces the current SSE-C key derivation scheme. The 'old'
scheme derives a unique object encryption key from the client-provided key.
This key derivation is not invertible: a client cannot change
its key without also changing the object encryption key.
AWS S3 allows users to update their SSE-C keys by executing an SSE-C COPY with
source == destination. AWS probably updates just the metadata (which is a very
cheap operation). The old key derivation scheme would require a complete copy
of the object, because the minio server would not be able to derive the same
object encryption key from a different client-provided key (without breaking
the cryptographic hash function).
This change makes the key derivation invertible.
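As an illustration, one invertible design generates a random per-object key and seals it under a key-encryption-key (KEK) derived from the client key; changing the client key then only re-seals the object key. The following is a minimal sketch under that assumption, not the actual scheme; the names, the HMAC-based KEK derivation, and the use of ChaCha20-Poly1305 (vendored below) are all illustrative:
```go
package sse

import (
	"crypto/hmac"
	"crypto/rand"
	"crypto/sha256"
	"errors"

	"golang.org/x/crypto/chacha20poly1305"
)

// deriveKEK derives a key-encryption-key from the client-provided key and
// a random IV. (Hypothetical construction, for illustration only.)
func deriveKEK(clientKey, iv []byte) []byte {
	mac := hmac.New(sha256.New, clientKey)
	mac.Write(iv)
	return mac.Sum(nil) // 32 bytes, a valid ChaCha20-Poly1305 key
}

// sealObjectKey encrypts the random per-object key under the KEK and
// returns nonce||ciphertext for storage in the object metadata.
func sealObjectKey(kek, objectKey []byte) ([]byte, error) {
	aead, err := chacha20poly1305.New(kek)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, aead.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	return aead.Seal(nonce, nonce, objectKey, nil), nil
}

// unsealObjectKey recovers the per-object key. Rotating the client key is
// unseal-with-old-KEK + seal-with-new-KEK: a pure metadata update, while
// the object data encrypted under objectKey stays untouched.
func unsealObjectKey(kek, sealed []byte) ([]byte, error) {
	aead, err := chacha20poly1305.New(kek)
	if err != nil {
		return nil, err
	}
	if len(sealed) < aead.NonceSize() {
		return nil, errors.New("sealed key too short")
	}
	nonce, box := sealed[:aead.NonceSize()], sealed[aead.NonceSize():]
	return aead.Open(nil, nonce, box, nil)
}
```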
This change adds server-side-encryption support for HEAD, GET and PUT
operations. This PR only addresses single-part PUTs, and GETs without
HTTP ranges.
Further, this change adds the concept of reserved object metadata, which is required
to make encrypted objects tamper-proof and to provide API compatibility with AWS S3.
This PR adds the following reserved metadata entries:
- X-Minio-Internal-Server-Side-Encryption-Iv ('guarantees' tamper-proof property)
- X-Minio-Internal-Server-Side-Encryption-Kdf (makes Key-MAC computation negotiable in future)
- X-Minio-Internal-Server-Side-Encryption-Key-Mac (provides AWS S3 API compatibility)
The prefix `X-Minio-Internal` specifies an internal metadata entry which must not
be sent to clients. All client requests containing a metadata key starting with `X-Minio-Internal`
must also be rejected. This is implemented by a generic handler, sketched below.
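A minimal sketch of what such a generic handler could look like; the names and the error response are illustrative, not the actual code:
```go
package handlers

import (
	"net/http"
	"strings"
)

const reservedMetadataPrefix = "X-Minio-Internal-"

// filterReservedMetadata rejects any client request that tries to set a
// reserved metadata key, so X-Minio-Internal-* entries can only ever be
// written by the server itself.
func filterReservedMetadata(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		for key := range r.Header {
			// Keys in r.Header are already canonicalized by net/http.
			if strings.HasPrefix(key, reservedMetadataPrefix) {
				http.Error(w, "reserved internal metadata keys are not allowed", http.StatusBadRequest)
				return
			}
		}
		next.ServeHTTP(w, r)
	})
}
```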
This PR implements SSE-C separately from client-side encryption (CSE), so
server-side-encrypted objects cannot be decrypted on the client side. However,
clients can encrypt the same object with both CSE and SSE-C.
This PR does not address:
- SSE-C Copy and Copy part
- SSE-C GET with HTTP ranges
- SSE-C multipart PUT
- SSE-C Gateway
Each point must be addressed in a separate PR.
Added to vendor dir:
- x/crypto/chacha20poly1305
- x/crypto/poly1305
- github.com/minio/sio
It is possible that x-amz-content-sha256 is set through
the query params in the case of presigned PUT calls; make sure
that we validate the incoming x-amz-content-sha256 properly.
The current code simply allows this without honoring the
set x-amz-content-sha256; fix it.
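A rough sketch of the intended validation, with illustrative helper names; presigned requests carry the checksum in the query string, all others in the header:
```go
package handlers

import (
	"crypto/sha256"
	"encoding/hex"
	"io"
	"net/http"
)

// clientContentSha256 returns the checksum the client declared: presigned
// PUTs put it in the query string, regular PUTs in the header.
func clientContentSha256(r *http.Request) string {
	if sum := r.URL.Query().Get("X-Amz-Content-Sha256"); sum != "" {
		return sum
	}
	return r.Header.Get("X-Amz-Content-Sha256")
}

// validContentSha256 hashes the body and compares it against the declared
// checksum instead of silently accepting whatever was sent.
func validContentSha256(r *http.Request) (bool, error) {
	declared := clientContentSha256(r)
	if declared == "" || declared == "UNSIGNED-PAYLOAD" {
		return true, nil // nothing to verify for unsigned payloads
	}
	h := sha256.New()
	if _, err := io.Copy(h, r.Body); err != nil {
		return false, err
	}
	return hex.EncodeToString(h.Sum(nil)) == declared, nil
}
```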
Mint is the functional testing platform for the Minio server, so it is
important that each PR to the Minio server repository is checked for
added or updated Mint test cases.
Previously, the ID/ETag from the backend service was used as-is, which causes
failures with s3cmd-like tools that use the ETag as a checksum to
validate data. This is fixed by prepending "-1".
Refer minio/mint#193, minio/mint#201
When MINIO_TRACE_DIR is provided, create a new log file and store all
HTTP request + response data; bodies are excluded to reduce memory
consumption. MINIO_HTTP_TRACE=1 enables logging. Use non-memory-consuming
HTTP request/response recorders; the maximum is about 32k per request.
This logs to STDOUT, and body logging is disabled for PutObject, PutObjectPart
and GetObject.
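A minimal sketch of such a bounded recorder; the type name and the exact cap handling are illustrative:
```go
package handlers

import (
	"bytes"
	"net/http"
)

const maxTraceBody = 32 << 10 // ~32k recorded per request

// traceRecorder captures the status code and at most maxTraceBody bytes
// of the response, so tracing never buffers a whole object payload.
type traceRecorder struct {
	http.ResponseWriter
	status int
	body   bytes.Buffer
}

func (rec *traceRecorder) WriteHeader(code int) {
	rec.status = code
	rec.ResponseWriter.WriteHeader(code)
}

func (rec *traceRecorder) Write(p []byte) (int, error) {
	if room := maxTraceBody - rec.body.Len(); room > 0 {
		if len(p) < room {
			room = len(p)
		}
		rec.body.Write(p[:room]) // record a bounded prefix only
	}
	return rec.ResponseWriter.Write(p)
}
```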
Ever since commit 5db1e9f3dd the
default region has been '', instead of 'us-east-1'. Update the
documentation to reflect this, in particular the documentation about
notifications.
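For example, a notification target ARN now carries an empty region field; an illustrative webhook target with ID 1, assuming the arn:minio:sqs:<region>:<id>:<type> layout:
```
arn:minio:sqs::1:webhook
```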
Verify() was being called by the caller after the data
had been successfully read, after io.EOF. This disconnect
opens a race under concurrent access to such an object.
Verification is not necessary outside of the Read() call;
we can simply do the checksum verification right inside
Read() at io.EOF.
This approach simplifies the usage.
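A minimal sketch of that pattern, assuming a plain SHA-256 checksum stands in for whatever checksum the storage layer actually records:
```go
package storage

import (
	"bytes"
	"crypto/sha256"
	"errors"
	"hash"
	"io"
)

var errChecksumMismatch = errors.New("checksum verification failed")

// verifyingReader folds verification into Read() itself: the checksum is
// checked exactly when io.EOF is reached, so no separate Verify() call
// can race with concurrent readers.
type verifyingReader struct {
	src      io.Reader
	hash     hash.Hash
	expected []byte
}

func newVerifyingReader(src io.Reader, expected []byte) *verifyingReader {
	return &verifyingReader{src: src, hash: sha256.New(), expected: expected}
}

func (r *verifyingReader) Read(p []byte) (int, error) {
	n, err := r.src.Read(p)
	r.hash.Write(p[:n])
	if err == io.EOF && !bytes.Equal(r.hash.Sum(nil), r.expected) {
		return n, errChecksumMismatch
	}
	return n, err
}
```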
In some cases, the cache manager returns an ErrCacheFull error when creating a
new cache buffer, but the code still sends object data to the nil cache buffer.
Don't print the errFileNotFound error, as it is expected that concurrent
complete-multipart-upload or abort-multipart-upload operations would have deleted
the file, and the file may not be found.
Fixes: https://github.com/minio/minio/issues/5056
Every so often we get requirements for creating
directories/prefixes, and we end up rejecting
such requirements. This PR implements this and
allows empty directories, without any new file
addition to the backend.
Existing lower APIs themselves are leveraged to provide
this behavior. Only the FS backend supports this for
the time being, as desired.
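As a usage sketch (assuming the minio-go v3-era PutObject signature), a zero-byte object whose key ends in "/" creates just the prefix:
```go
package main

import (
	"bytes"
	"log"

	minio "github.com/minio/minio-go"
)

func main() {
	client, err := minio.New("localhost:9000", "ACCESS-KEY", "SECRET-KEY", false)
	if err != nil {
		log.Fatal(err)
	}
	// An empty body plus a trailing-slash key yields an empty directory
	// on the FS backend, with no file written under it.
	if _, err := client.PutObject("mybucket", "empty-dir/", bytes.NewReader(nil), "application/octet-stream"); err != nil {
		log.Fatal(err)
	}
}
```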
s3cmd cli fails when trying to upload a file to the azure gateway.
Previous fixes in azure to handle client-side encryption alone
did not completely address the problem.
We need to possibly convert all the x-amz-meta-<name> keys,
i.e. specifically <name> should be converted into a
C# identifier as mentioned in the docs for `put-blob`
(a sketch of such a conversion follows the log below).
https://docs.microsoft.com/en-us/rest/api/storageservices/put-blob
```
s3cmd put README.md s3://myanis/
upload: 'README.md' -> 's3://myanis/README.md' [1 of 1]
4598 of 4598 100% in 0s 47.24 kB/s done
upload: 'README.md' -> 's3://myanis/README.md' [1 of 1]
4598 of 4598 100% in 0s 50.47 kB/s done
ERROR: S3 error: 400 (InvalidArgument): Your metadata headers are not supported.
```
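A rough sketch of one such conversion; the function name and the plain underscore substitution are assumptions, not the gateway's exact mapping:
```go
package azure

import "unicode"

// s3MetaToAzureKey rewrites the <name> part of x-amz-meta-<name> into a
// valid C# identifier, as Azure's put-blob API requires: letters, digits
// and underscores only, and no leading digit.
func s3MetaToAzureKey(name string) string {
	out := []rune(name)
	for i, r := range out {
		if !unicode.IsLetter(r) && !unicode.IsDigit(r) && r != '_' {
			out[i] = '_' // e.g. "x-amz-key" becomes "x_amz_key"
		}
	}
	if len(out) > 0 && unicode.IsDigit(out[0]) {
		return "_" + string(out)
	}
	return string(out)
}
```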
There is a separate issue with s3cmd after this fix is applied, where
the ETag is wrongly validated: https://github.com/s3tools/s3cmd/issues/880.
But that is an upstream s3cmd problem which wrongly interprets the ETag
to be the md5sum of the content that was uploaded.