XML parsers in clients such as the AWS SDK for Java and the AWS CLI are
unable to handle `\r\n` characters; to avoid these errors, send the XML
header first and write whitespace characters instead (see the sketch below).
Also handle cases to avoid double WriteHeader calls.
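A minimal sketch of the idea, assuming a hypothetical `keepAliveWriter` wrapper
around the handler's ResponseWriter (uses net/http and encoding/xml; not the
actual handler code):
```
// keepAliveWriter tracks whether the status line and XML header were
// already sent, so WriteHeader is never called twice and the stream
// never starts with a bare "\r\n".
type keepAliveWriter struct {
	w             http.ResponseWriter
	headerWritten bool
}

func (k *keepAliveWriter) keepAlive() {
	if !k.headerWritten {
		k.w.WriteHeader(http.StatusOK)
		k.w.Write([]byte(xml.Header)) // <?xml version="1.0" encoding="UTF-8"?>
		k.headerWritten = true
	}
	k.w.Write([]byte(" ")) // whitespace instead of "\r\n"
	if f, ok := k.w.(http.Flusher); ok {
		f.Flush()
	}
}
```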
Currently we send errors in the Select binary format, which is
incompatible with AWS S3 behavior: binary errors are sent after the
HTTP status code is already 200 OK, i.e. the failure happens during
the evaluation of the record reader.
Different gateway implementations, because of differing backend
API errors, might return different unsupported errors at
our handler layer. The current code posed a problem because
this information was lost and we converted it to InternalError;
in that situation all S3 clients end up retrying the request.
To avoid this unexpected situation, implement a way to support
this cleanly such that the underlying information returned by
the gateway is not lost.
This PR also adds some comments and simplifies
the code. The primary handling ensures
that we honor the cached buffer.
Added unit tests as well.
Fixes #7141
- New parser written from scratch, allows easier and complete parsing
of the full S3 Select SQL syntax. Parser definition is directly
provided by the AST defined for the SQL grammar.
- Bring support to parse and interpret SQL involving JSON path
expressions; evaluation of JSON path expressions will be
subsequently added.
- Bring automatic type inference and conversion for untyped
  values (e.g. CSV data), as sketched below.
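A minimal illustration of that inference, assuming a hypothetical `inferValue`
helper (uses strconv; not the parser's actual API):
```
// inferValue converts a raw CSV field into the most specific Go type
// it can: int64, then float64, then bool, with string as the fallback.
func inferValue(raw string) interface{} {
	if i, err := strconv.ParseInt(raw, 10, 64); err == nil {
		return i
	}
	if f, err := strconv.ParseFloat(raw, 64); err == nil {
		return f
	}
	if b, err := strconv.ParseBool(raw); err == nil {
		return b
	}
	return raw
}
```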
This PR adds support for IAM and bucket policies to have
policy variable replacements in resource and
condition key values.
For example (see the sketch below):
- ${aws:username}
- ${aws:userid}
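For illustration, an IAM-style policy using `${aws:username}` in the resource
(bucket name and prefix are hypothetical):
```
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:GetObject", "s3:PutObject"],
    "Resource": ["arn:aws:s3:::mybucket/home/${aws:username}/*"]
  }]
}
```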
When auto-encryption is turned on, we proactively add the SSEHeader
for all PUT and POST operations. This is unusual for V2 signature
calculation because the V2 signature doesn't have a pre-defined set
of signed headers in the request like the V4 signature does; according
to V2 we should canonicalize all incoming supported HTTP headers.
Make sure to validate signatures before we mutate HTTP headers.
Before this change the CopyObjectHandler and the CopyObjectPartHandler
both looked for a `versionId` parameter on the `X-Amz-Copy-Source` URL,
but they did so on the URL-unescaped version of the header. This meant
that object names containing question marks were truncated after the
question mark, so files with `?` in their names could not be
server-side copied.
After this change the URL unescaping is done during the parsing of the
`versionId` parameter, which fixes the problem (as sketched below).
This change also introduces the same logic for the
`X-Amz-Copy-Source-Version-Id` header field, which was previously
ignored, namely returning an error if it is present and not `null`,
since minio does not currently support versions.
S3 Docs:
- https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html
- https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPartCopy.html
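A minimal sketch of the fixed parsing, assuming a hypothetical
`parseCopySource` helper (uses net/url and strings; not the actual handler code):
```
// parseCopySource extracts the raw versionId from an X-Amz-Copy-Source
// value of the form "bucket/object?versionId=xyz" before unescaping, so
// an escaped '?' (%3F) inside the object name is never mistaken for the
// query separator, and then unescapes the remaining object path.
func parseCopySource(raw string) (objectPath, versionID string, err error) {
	if i := strings.LastIndex(raw, "?versionId="); i >= 0 {
		versionID = raw[i+len("?versionId="):]
		raw = raw[:i]
	}
	objectPath, err = url.PathUnescape(raw)
	return objectPath, versionID, err
}
```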
When the source is an encrypted multipart object and the parts are not
evenly divisible by the DARE package block size, the target's encrypted
size will not necessarily be the same as that of the encrypted source object.
This PR adds pass-through, single encryption at the gateway, and double
encryption support (gateway encryption with pass-through of SSE
headers to the backend).
If a KMS is set up (either with Vault as KMS or using
MINIO_SSE_MASTER_KEY), the gateway automatically performs
single encryption. If MINIO_GATEWAY_SSE is set up in addition to
Vault KMS, double encryption is performed. When neither a KMS nor
MINIO_GATEWAY_SSE is set, requests are passed through to the backend.
When double encryption is specified, MINIO_GATEWAY_SSE can be set to
"C" for SSE-C encryption at gateway and backend, "S3" for SSE-S3
encryption at gateway/backend, or both to support more than one option
(see the example below).
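A hedged sketch of the configuration; the master-key format and the way
multiple MINIO_GATEWAY_SSE values are combined are illustrative assumptions:
```
# single encryption at the gateway: a KMS is configured, e.g. a static master key
export MINIO_SSE_MASTER_KEY="my-minio-key:77c9a5d8e9a0b37a05dd2a2a6e0b7e3b1c6d1a2b3c4d5e6f7a8b9c0d1e2f3a4b"
# double encryption: additionally encrypt at the gateway and pass SSE headers to the backend
export MINIO_GATEWAY_SSE="S3"   # or "C", or both
minio gateway s3
```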
Fixes #6323, #6696
minio-java tests were failing in multiple places when
auto-encryption was turned on; handle all the cases properly.
This PR fixes
- CopyObject should decrypt ETag before it does if-match
- CopyObject should not try to preserve metadata of source
when rotating keys, unless explicitly asked by the user.
- We should not try to decrypt a compressed object's ETag; the
  potential case is when a user sets encryption headers along
  with compression enabled.
This commit adds an auto-encryption feature which allows
the Minio operator to ensure that uploaded objects are
always encrypted.
This change adds the `autoEncryption` configuration option
as part of the KMS configuration and the environment variable
`MINIO_SSE_AUTO_ENCRYPTION:{on,off}`.
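For example (assuming a KMS is already configured; data path is illustrative):
```
export MINIO_SSE_AUTO_ENCRYPTION=on
minio server /data
```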
It also updates the KMS documentation according to the
changes.
Fixes #6502
This PR implements one of the pending items in issue #6286:
in the S3 API a user can request CSV output for a JSON document
and JSON output for a CSV document. This PR refactors
the code a little bit to bring in this feature.
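For example, with the AWS CLI (bucket, key and serialization settings are
illustrative), a JSON-lines document can be queried and returned as CSV:
```
aws s3api select-object-content \
  --bucket mybucket --key logs.json \
  --expression "SELECT * FROM S3Object s" \
  --expression-type SQL \
  --input-serialization '{"JSON": {"Type": "LINES"}}' \
  --output-serialization '{"CSV": {}}' \
  /dev/stdout
```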
This refactor brings a change which allows
targets to be added in a cleaner way, and
audit is now moved out.
This PR also simplifies the logger dependency for auditing.
To conform with the AWS S3 spec on ETags for SSE-S3 encrypted objects,
encrypt the client-sent MD5Sum and store it on the backend as the ETag.
Extend this behavior to SSE-C encrypted objects.
This improves the performance of certain queries dramatically,
such as 'count(*)' etc.
Without this PR
```
~ time mc select --query "select count(*) from S3Object" myminio/sjm-airlines/star2000.csv.gz
2173762
real 0m42.464s
user 0m0.071s
sys 0m0.010s
```
With this PR
```
~ time mc select --query "select count(*) from S3Object" myminio/sjm-airlines/star2000.csv.gz
2173762
real 0m17.603s
user 0m0.093s
sys 0m0.008s
```
Almost a 2.5x improvement in performance. This PR avoids a lot of type
conversions and instead relies on raw sequences of data and interprets
them lazily.
```
benchcmp old new
benchmark old ns/op new ns/op delta
BenchmarkSQLAggregate_100K-4 551213 259782 -52.87%
BenchmarkSQLAggregate_1M-4 6981901985 2432413729 -65.16%
BenchmarkSQLAggregate_2M-4 13511978488 4536903552 -66.42%
BenchmarkSQLAggregate_10M-4 68427084908 23266283336 -66.00%
benchmark old allocs new allocs delta
BenchmarkSQLAggregate_100K-4 2366 485 -79.50%
BenchmarkSQLAggregate_1M-4 47455492 21462860 -54.77%
BenchmarkSQLAggregate_2M-4 95163637 43110771 -54.70%
BenchmarkSQLAggregate_10M-4 476959550 216906510 -54.52%
benchmark old bytes new bytes delta
BenchmarkSQLAggregate_100K-4 1233079 1086024 -11.93%
BenchmarkSQLAggregate_1M-4 2607984120 557038536 -78.64%
BenchmarkSQLAggregate_2M-4 5254103616 1128149168 -78.53%
BenchmarkSQLAggregate_10M-4 26443524872 5722715992 -78.36%
```
In many situations while testing we encounter
ErrInternalError. To reduce logging we have
removed logging from quite a few places, which
is acceptable, but when ErrInternalError occurs
we should have a facility to log the corresponding
error; this helps to debug the Minio server.
This commit adds key-rotation for SSE-S3 objects.
To execute a key-rotation an SSE-S3 client must (see the example below the list):
- specify the `X-Amz-Server-Side-Encryption: AES256` header
  for the destination
- use source == destination for the COPY operation.
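For example, with the AWS CLI (bucket and key names are illustrative):
```
aws s3api copy-object \
  --bucket mybucket --key mykey \
  --copy-source mybucket/mykey \
  --server-side-encryption AES256
```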
Fixes #6754
This PR adds support for:
- Request query params
- Request headers
- Response headers
AuditLogEntry is exported and versioned as well
starting with this PR.
The Execute method in the s3Select package makes a response.WriteHeader call.
Do not call it again in the SelectObjectContentHandler function when the
s3Select.Execute call returns an error.
On a heavily loaded server, getBucketInfo() becomes slow;
one can easily observe that deleting an object causes many
additional network calls.
This PR is to let the underlying call return the actual
error and write it back to the client.
Current master didn't support CopyObjectPart when the source
was encrypted; this PR fixes that by allowing ranged
CopySource decryption at different sequence numbers.
Fixes #6698
This PR fixes
- The target object should be compressed even if the
source object is not compressed.
- The actual size for an encrypted object should be the
`decryptedSize`
This commit fixes a wrong assignment to `actualPartSize`.
The `actualPartSize` for an encrypted src object is not `srcInfo.Size`
because that's the encrypted object size which is larger than the
actual object size. So the actual part size for an encrypted
object is the decrypted size of `srcInfo.Size`.
Without this fix we have room for two different types of
errors.
- Source is encrypted and we didn't provide any source encryption keys.
  This results in an IncompleteBody error returned to the client,
  since the source is encrypted and we handed the reader as-is to the
  object layer, whose length corresponded to the decrypted value,
  leading to "IncompleteBody".
- Source is not encrypted and we provided source encryption keys.
  This results in a corrupted object on the destination which is
  considered encrypted but cannot be read by the server and returns
  the following error.
```
<Error><Code>XMinioObjectTampered</Code><Message>The requested object
was modified and may be compromised</Message><Resource>/id-platform-gamma/
</Resource><RequestId>155EDC3E86BFD4DA</RequestId><HostId>3L137</HostId>
</Error>
```
The CopyObject handler forgot to remove the multipart-encryption flag from
metadata when the source is an encrypted multipart object and the target is
also encrypted but is a single-part object.
This PR also simplifies the code to facilitate review.
This PR brings an additional logger implementation
called AuditLog which logs to HTTP targets.
The intention is to use AuditLog to log all incoming
requests; this is used as a mechanism by external log-collection
entities for processing Minio requests.
This PR introduces two new features
- AWS STS compatible STS API named AssumeRoleWithClientGrants
```
POST /?Action=AssumeRoleWithClientGrants&Token=<jwt>
```
This API endpoint returns temporary access credentials. Access
token signature types supported by this API:
- RSA keys
- ECDSA keys
It fetches the required public keys from the JWKS endpoints and
provides them as RSA or ECDSA public keys.
- External policy engine support, in this case OPA policy engine
- Credentials are stored on disks
The new call combines GetObjectInfo and GetObject, and returns an
object with a ReadCloser interface.
Also adds a number of end-to-end encryption tests at the handler
level.
* Revert "Encrypted reader wrapped in NewGetObjectReader should be closed (#6383)"
This reverts commit 53a0bbeb5b.
* Revert "Change SelectAPI to use new GetObjectNInfo API (#6373)"
This reverts commit 5b05df215a.
* Revert "Implement GetObjectNInfo object layer call (#6290)"
This reverts commit e6d740ce09.
This combines calling GetObjectInfo and GetObject while returning an
io.ReadCloser for the object's body. This allows the two operations to
be under a single lock, fixing a race between getting object info and
reading the object body.
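A minimal sketch of what the combined call looks like conceptually (type and
parameter names are illustrative, not the exact ObjectLayer signature;
ObjectInfo is the existing minio metadata type):
```
// GetObjectReader bundles the object's metadata with a ReadCloser over
// its body; closing it releases any lock taken for the read.
type GetObjectReader struct {
	ObjInfo ObjectInfo
	io.ReadCloser
}

type ObjectLayer interface {
	// GetObjectNInfo returns metadata and a body reader together,
	// acquired under a single lock, so the info cannot change between
	// the stat and the read.
	GetObjectNInfo(ctx context.Context, bucket, object string) (*GetObjectReader, error)
	// ... other object-layer methods elided
}
```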
This commit adds error handling for SSE-KMS requests to
HEAD, GET, PUT and COPY operations. The server responds
with `not implemented` if a client sends an SSE-KMS
request.
Add support for SSE-S3 encryption with Vault as the KMS.
Also refactor code to make use of headers and functions defined in
the crypto package and clean up duplicated code.
When an S3 client sends a GET Object request with a range header, a 206 HTTP
code is returned indicating success; however, the object layer's GetObject()
call inside the handler can still return an error, which would lead
to writing an XML error message. That is obviously wrong since
we already sent the 206 HTTP code. So in this case we just stop sending
data to the S3 client; the client can still detect the failure by
comparing the received data with the Content-Length header
in the GET Object response.
Currently, the requestid field in logEntry is not populated, because the
requestid field gets set at the very end.
It is now set before the regular handler functions run. This is also
useful in setting it as part of the XML error response.
The Travis build for ppc64le has been quite inconsistent and stays queued
most of the time. Removing this build from .travis.yml for
the time being.
With CoreDNS now supporting etcdv3 as the DNS backend, we
can update our federation target to etcdv3. Users will now be
able to use etcdv3 server as the federation backbone.
Minio will update bucket data in etcdv3 and CoreDNS can pick
that data up and serve it as a bucket-style DNS path.
This commit fixes the size calculation for multipart
objects. The decrypted size of an encrypted multipart
object is the sum of the decrypted part sizes.
Also fixes the key derivation in CopyObjectPart.
Instead of using the same object-encryption-key for each
part, a unique per-part key is now derived.
Updates #6139
This commit fixes a weakness of the key-encryption-key
derivation for SSE-C encrypted objects. Before this
change the key-encryption-key was not bound to / didn't
depend on the object path. This allowed an attacker to
replace objects - encrypted with the same
client key - with each other.
This change fixes the issue by updating the
key-encryption-key derivation to include (see the sketch below):
- the domain (in this case SSE-C)
- a canonical object path representation
- the encryption & key-derivation algorithm
Changing the object path now causes the KDF to derive a
different key-encryption-key such that the object-key
unsealing fails.
Including the domain (SSE-C) and the encryption & key-derivation
algorithm is not strictly necessary for this
fix. However, both will be included for the SSE-S3 KDF,
so they are included here to avoid updating the KDF
again when we add SSE-S3.
The legacy KDF 'DARE-SHA256' is only used for existing
objects and never for new objects / key rotation.
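A minimal, illustrative sketch of a path-bound key-encryption-key derivation
(uses crypto/hmac, crypto/sha256 and path; the domain and algorithm strings are
placeholders, not the actual implementation in the crypto package):
```
// deriveKEK binds the key-encryption-key to the domain, the KDF/cipher
// algorithm and the canonical object path, so moving the ciphertext to
// a different path makes object-key unsealing fail.
func deriveKEK(clientKey []byte, bucket, object string) []byte {
	mac := hmac.New(sha256.New, clientKey)
	mac.Write([]byte("SSE-C"))                   // domain
	mac.Write([]byte("DAREv2-HMAC-SHA256"))      // encryption & KDF algorithm
	mac.Write([]byte(path.Join(bucket, object))) // canonical object path
	return mac.Sum(nil)
}
```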
This PR adds CopyObject support for objects residing in buckets
in different Minio instances (where Minio instances are part of
a federated setup).
Also, added support for multiple Minio domain IPs. This is required
for distributed deployments, where one deployment may have multiple
nodes, each with a different public IP.
- remove old bucket policy handling
- add new policy handling
- add new policy handling unit tests
This patch brings support for bucket policies to have more control, not
limited to anonymous access. The bucket owner controls whether to allow
or deny any REST API.
For example, server-side encryption can be controlled by allowing PUT/GET
of objects only with encryption, including for the bucket owner
(see the sketch below).
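For illustration, a bucket policy that denies object uploads unless they
request server-side encryption, applying to all principals including the
bucket owner (bucket name and condition value are illustrative):
```
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:PutObject",
    "Resource": "arn:aws:s3:::mybucket/*",
    "Condition": {
      "StringNotEquals": {"s3:x-amz-server-side-encryption": "AES256"}
    }
  }]
}
```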
This PR adds disk-based edge caching support for the minio server.
Cache settings can be configured in config.json with a list of disk drives,
a cache expiry in days, and file patterns to exclude from the cache, or via
the environment variables MINIO_CACHE_DRIVES, MINIO_CACHE_EXCLUDE and
MINIO_CACHE_EXPIRY (see the example below).
Design assumes that Atime support is enabled and the list of cache drives is
fixed.
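For example (drive paths, exclude patterns and list separators are illustrative):
```
export MINIO_CACHE_DRIVES="/mnt/cache1;/mnt/cache2"
export MINIO_CACHE_EXCLUDE="*.pdf;mybucket/*"
export MINIO_CACHE_EXPIRY=90   # days
minio server /export
```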
- Objects are cached on both GET and PUT/POST operations.
- Expiry is used as hint to evict older entries from cache, or if 80% of cache
capacity is filled.
- When object storage backend is down, GET, LIST and HEAD operations fetch
object seamlessly from cache.
Current Limitations
- Bucket policies are not cached, so anonymous operations are not supported in
offline mode.
- Objects are distributed using deterministic hashing among the list of cache
  drives specified. If one or more drives go offline, or the cache drive
  configuration is altered, performance could degrade to a linear lookup.
Fixes #4026
This is a trivial fix to support server-level WORM. The feature comes
with an environment variable `MINIO_WORM`.
Usage:
```
$ export MINIO_WORM=on
$ minio server endpoint
```
Fix a compatibility issue with AWS S3 where, to do key rotation,
we need to replace an existing object's metadata. In such a
scenario the "REPLACE" metadata directive is not necessary.
Current code didn't implement the logic to decrypt multiple
encrypted parts; this PR fixes that by supporting copying of
encrypted multipart objects.
*) Add Put/Get support of multipart in encryption
*) Add GET Range support for encryption
*) Add CopyPart encrypted support
*) Support decrypting of large single PUT object
Refactor such that metadata and etag are
combined into a single argument `srcInfo`.
This is a precursor change for #5544 making
it easier for us to provide encryption/decryption
functions.
Since we do not encrypt directories we don't need to send
errors with encryption headers when the directory doesn't
have encryption metadata.
Continuation PR from 4ca10479b5
This change adds an object size check such that the server does not
encrypt empty objects (typically folders) for SSE-C. The server still
returns SSE-C headers but the object is not encrypted since there is no
point in encrypting such objects.
Fixes #5493
x-amz-content-sha256 can be optional for any AWS Signature V4
request; make sure to skip sha256 calculation when the payload
checksum is not set.
Here is the overall expected behavior (sketched in code after the list):
**Signed request**
- X-Amz-Content-Sha256 is set to 'empty' or some 'value' (i.e. not
  'UNSIGNED-PAYLOAD') - use it to validate the incoming payload.
- X-Amz-Content-Sha256 is set to 'UNSIGNED-PAYLOAD' - skip checksum verification.
- X-Amz-Content-Sha256 is not set - use emptySHA256.
**Presigned request**
- X-Amz-Content-Sha256 is set to 'empty' or some 'value' (i.e. not
  'UNSIGNED-PAYLOAD') - use it to validate the incoming payload.
- X-Amz-Content-Sha256 is set to 'UNSIGNED-PAYLOAD' - skip checksum verification.
- X-Amz-Content-Sha256 is not set - use 'UNSIGNED-PAYLOAD'.
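A minimal sketch of that decision (hypothetical helper using net/http, not the
actual signature-verification code):
```
const (
	unsignedPayload = "UNSIGNED-PAYLOAD"
	emptySHA256     = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
)

// payloadChecksum picks the expected payload checksum for signed vs.
// presigned V4 requests, or reports that verification should be skipped.
func payloadChecksum(r *http.Request, presigned bool) (sum string, skipVerify bool) {
	v := r.Header.Get("X-Amz-Content-Sha256")
	switch {
	case v == unsignedPayload:
		return "", true // explicitly unsigned - skip checksum verification
	case v != "":
		return v, false // use the provided value to validate the payload
	case presigned:
		return "", true // unset on a presigned request => UNSIGNED-PAYLOAD
	default:
		return emptySHA256, false // unset on a signed request => empty SHA256
	}
}
```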
Fixes #5339
- Add storage class metadata validation for request header
- Change storage class header values to be consistent with AWS S3
- Refactor internal method to take only the required argument
- Adds a metadata argument to the CopyObjectPart API to facilitate
implementing encryption for copying APIs too.
- Update vendored minio-go - this version implements the
CopyObjectPart client API for use with the S3 gateway.
Fixes #4885
This change introduces following simplified steps to follow
during config migration.
```
// Steps to move from version N to version N+1
// 1. Add new struct serverConfigVN+1 in config-versions.go
// 2. Set configCurrentVersion to "N+1"
// 3. Set serverConfigCurrent to serverConfigVN+1
// 4. Add new migration function (ex. func migrateVNToVN+1()) in config-migrate.go
// 5. Call migrateVNToVN+1() from migrateConfig() in config-migrate.go
// 6. Make changes in config-current_test.go for any test change
```
This change brings public data types such that
we can ask projects to implement gateway projects
externally rather than maintaining them in our repo.
All publicly exported structs are maintained in object-api-datatypes.go
completePart --> CompletePart
uploadMetadata --> MultipartInfo
All other exported errors are in object-api-errors.go
This change adds server-side-encryption support for HEAD, GET and PUT
operations. This PR only addresses single-part PUTs and GETs without
HTTP ranges.
Further this change adds the concept of reserved object metadata which is required
to make encrypted objects tamper-proof and provide API compatibility to AWS S3.
This PR adds the following reserved metadata entries:
- X-Minio-Internal-Server-Side-Encryption-Iv ('guarantees' tamper-proof property)
- X-Minio-Internal-Server-Side-Encryption-Kdf (makes Key-MAC computation negotiable in future)
- X-Minio-Internal-Server-Side-Encryption-Key-Mac (provides AWS S3 API compatibility)
The prefix `X-Minio-Internal` specifies an internal metadata entry which must not
be sent to clients. All client requests containing a metadata key starting with
`X-Minio-Internal` must also be rejected. This is implemented by a generic
handler (see the sketch below).
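A minimal sketch of that check (helper name is illustrative; uses net/http and strings):
```
// reservedMetadataPrefix marks internal metadata entries that must never
// be accepted from, or returned to, clients.
const reservedMetadataPrefix = "X-Minio-Internal-"

// containsReservedMetadata reports whether an incoming request tries to
// set a reserved metadata key; such requests are rejected by the
// generic handler before reaching the API handlers.
func containsReservedMetadata(header http.Header) bool {
	for key := range header {
		if strings.HasPrefix(http.CanonicalHeaderKey(key), reservedMetadataPrefix) {
			return true
		}
	}
	return false
}
```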
This PR implements SSE-C separately from client-side encryption (CSE). This cannot decrypt
server-side-encrypted objects on the client side. However, clients can encrypt the same
object with both CSE and SSE-C.
This PR does not address:
- SSE-C Copy and Copy part
- SSE-C GET with HTTP ranges
- SSE-C multipart PUT
- SSE-C Gateway
Each point must be addressed in a separate PR.
Added to vendor dir:
- x/crypto/chacha20poly1305
- x/crypto/poly1305
- github.com/minio/sio
It is possible that x-amz-content-sha256 is set through
the query params in the case of presigned PUT calls; make sure
that we validate the incoming x-amz-content-sha256 properly.
The current code simply allows this without honoring the
set x-amz-content-sha256 - fix it.
Verify() was being called by the caller after the data
had been successfully read, i.e. after io.EOF. This disconnect
opens a race under concurrent access to such an object.
Verification is not necessary outside of the Read() call;
we can simply do the checksum verification right inside
Read() at io.EOF.
This approach simplifies the usage.
This change refactors the ObjectLayer PutObject and PutObjectPart
functions. Instead of passing an io.Reader and a size to PUT operations,
ObjectLayer expects a HashReader.
A HashReader verifies the MD5 sum (and SHA256 sum if required) of the
object, as sketched below.
This change updates all PutObject(Part) calls and removes unnecessary code
in all ObjectLayer implementations.
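A minimal sketch of the idea behind such a reader (illustrative, not the actual
minio type; uses encoding/hex, errors, hash and io), verifying the digests as
soon as the wrapped stream returns io.EOF:
```
// hashReader wraps the request body and verifies the expected MD5
// (and, when present, SHA256) right inside Read() at io.EOF, so the
// object layer never stores data with a bad digest.
type hashReader struct {
	src                       io.Reader
	md5h, sha256h             hash.Hash
	wantMD5Hex, wantSHA256Hex string
}

func (r *hashReader) Read(p []byte) (int, error) {
	n, err := r.src.Read(p)
	r.md5h.Write(p[:n])
	if r.sha256h != nil {
		r.sha256h.Write(p[:n])
	}
	if err == io.EOF {
		if hex.EncodeToString(r.md5h.Sum(nil)) != r.wantMD5Hex {
			return n, errors.New("bad digest: md5 mismatch")
		}
		if r.sha256h != nil && hex.EncodeToString(r.sha256h.Sum(nil)) != r.wantSHA256Hex {
			return n, errors.New("content sha256 mismatch")
		}
	}
	return n, err
}
```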
Fixes #4923
Fixed header-to-metadata extraction. The extractMetadataFromHeader function should return an error if the http.Header contains a non-canonicalized key. The reason is that the keys can be set manually (through a map access), which can lead to ugly bugs.
Also return an InternalError if a non-canonicalized key is found in an http.Header, and log the error.
The fix in commit f44f2e341c
was incomplete and we still had presigned URLs printing
query strings in the wrong fashion.
This PR fixes this properly. Avoid double-encoding
percent-encoded strings such as
`s3%!!(MISSING)A(MISSING)`
and print them properly as JSON-encoded:
`s3%3AObjectCreated%3A%2A`
This PR also changes the backend format from 1.0.0
to 1.0.1. Backward-compatible changes are still
kept to read the 'md5Sum' key, but all new objects
will be stored with the same details under 'etag'.
Fixes #4312
CopyObjectHandler() was incorrectly performing a comparison
between destination and source object paths, which sometimes
leads to a lock race. This PR simplifies the comparison and adds
one test case.
We can't use Content-Encoding to verify whether `aws-chunked` is set
or not; just use the 'streaming' signature header instead.
While this is considered mandatory, aws-sdk-java on the contrary
doesn't set this value:
http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html
```
Set the value to aws-chunked.
```
We will relax it and behave appropriately. Also this PR supports
saving a custom encoding after trimming off the `aws-chunked`
parameter (sketched below).
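A minimal sketch of trimming the parameter while keeping any custom encodings
(hypothetical helper; uses strings):
```
// trimAwsChunked drops the "aws-chunked" token from a Content-Encoding
// value and preserves whatever custom encodings the client supplied.
func trimAwsChunked(contentEnc string) string {
	var kept []string
	for _, enc := range strings.Split(contentEnc, ",") {
		if e := strings.TrimSpace(enc); e != "" && !strings.EqualFold(e, "aws-chunked") {
			kept = append(kept, e)
		}
	}
	return strings.Join(kept, ",")
}
```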
Fixes #3983
This change adds information like host, port and user-agent of the
client whose request triggered an event notification.
For example, if someone uploads an object to a bucket using mc and notifications
are configured on that bucket, the host, port and user-agent of mc
would be sent as part of the event notification data.
Sample output:
```
"source": {
"host": "127.0.0.1",
"port": "55808",
"userAgent": "Minio (linux; amd64) minio-go/2.0.4 mc ..."
}
```
This change is a cleanup of the postPolicyHandler code,
primarily to address the flow, and it also converts
certain critical parts into self-contained functions.
startOffset was re-assigned to '0' so it would end up
copying the wrong content, ignoring the requested startOffset.
This also fixes the corruption issue we observed while
using docker registry.
Fixes https://github.com/docker/distribution/issues/2205
Also fixes #3842 - incorrect routing.