This commit adds an auto-encryption feature which allows
the Minio operator to ensure that uploaded objects are
always encrypted.
This change adds the `autoEncryption` configuration option
as part of the KMS configuration and the environment variable
`MINIO_SSE_AUTO_ENCRYPTION:{on,off}`.
It also updates the KMS documentation accordingly.
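A minimal sketch of reading this environment variable at startup; the helper name and its wiring below are illustrative assumptions, not the actual Minio implementation:
```
package main

import (
	"fmt"
	"os"
	"strings"
)

// autoEncryptionEnabled is a hypothetical helper that treats
// MINIO_SSE_AUTO_ENCRYPTION=on as enabled and anything else
// (including unset) as disabled.
func autoEncryptionEnabled() bool {
	return strings.ToLower(os.Getenv("MINIO_SSE_AUTO_ENCRYPTION")) == "on"
}

func main() {
	if autoEncryptionEnabled() {
		fmt.Println("auto-encryption: on - unencrypted uploads will be encrypted via the KMS")
	} else {
		fmt.Println("auto-encryption: off")
	}
}
```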
Fixes #6502
This refactor allows logger targets to be added in a
cleaner way and moves audit logging out of the core logger.
This PR also simplifies the logger dependency for auditing.
To conform with the AWS S3 spec on ETag for SSE-S3 encrypted objects,
encrypt the client-sent MD5Sum and store it on the backend as ETag. Extend
this behavior to SSE-C encrypted objects.
In many situations while testing we encounter
ErrInternalError. To reduce logging we have
removed logging from quite a few places, which
is acceptable, but when ErrInternalError occurs
we should have a facility to log the corresponding
error; this helps to debug the Minio server.
This PR adds support for:
- Request query params
- Request headers
- Response headers
AuditLogEntry is exported and versioned as well
starting with this PR.
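A rough sketch of a versioned audit entry carrying these fields; the struct and field names here are illustrative, not the exported AuditLogEntry itself:
```
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// AuditEntry is an illustrative, versioned audit record that captures
// request query parameters, request headers and response headers.
type AuditEntry struct {
	Version    string            `json:"version"`
	Time       time.Time         `json:"time"`
	API        string            `json:"api"`
	ReqQuery   map[string]string `json:"requestQuery,omitempty"`
	ReqHeader  map[string]string `json:"requestHeader,omitempty"`
	RespHeader map[string]string `json:"responseHeader,omitempty"`
}

func main() {
	entry := AuditEntry{
		Version:    "1",
		Time:       time.Now().UTC(),
		API:        "PutObject",
		ReqQuery:   map[string]string{"uploadId": "abc"},
		ReqHeader:  map[string]string{"User-Agent": "mc"},
		RespHeader: map[string]string{"X-Amz-Request-Id": "157C6C4FBAA7B0A8"},
	}
	b, _ := json.MarshalIndent(entry, "", "  ")
	fmt.Println(string(b))
}
```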
This PR brings an additional logger implementation
called AuditLog which logs to HTTP targets.
The intention is to use AuditLog to log all incoming
requests; this is used as a mechanism by external log
collection entities for processing Minio requests.
Add support for SSE-S3 encryption with Vault as the KMS.
Also refactor code to make use of headers and functions defined in
the crypto package and clean up duplicated code.
The current code for deleting 1000 objects simultaneously
causes significant random I/O, which on slower drives
leads to servers disconnecting in a distributed setup.
Simplify this by serially deleting and reducing the
chattiness of this operation.
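As a sketch, the serialized approach looks roughly like this; deleteObject stands in for the backend's single-object delete call and is an assumption, not the real signature:
```
package example

import "context"

// deleteObjectsSerially deletes the given objects one after the other
// instead of firing all deletes concurrently, trading a little latency
// for far less random I/O on slower drives.
func deleteObjectsSerially(ctx context.Context, bucket string, objects []string,
	deleteObject func(ctx context.Context, bucket, object string) error) []error {
	errs := make([]error, len(objects))
	for i, object := range objects {
		errs[i] = deleteObject(ctx, bucket, object)
	}
	return errs
}
```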
Currently, the requestid field in logEntry is not populated, as the
requestid field gets set at the very end.
It is now set before the regular handler functions run. This is also
useful in setting it as part of the XML error response.
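A simplified sketch of assigning the request id up front in a wrapper handler, so both the log entry and any error response can carry it; the context key and id format below are illustrative:
```
package example

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

type ctxKey string

const requestIDKey ctxKey = "requestID"

// withRequestID sets a request id before the regular handler runs,
// stores it in the request context and mirrors it into the response
// headers so error responses can include it as well.
func withRequestID(h http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		reqID := fmt.Sprintf("%X", time.Now().UTC().UnixNano())
		w.Header().Set("X-Amz-Request-Id", reqID)
		h.ServeHTTP(w, r.WithContext(context.WithValue(r.Context(), requestIDKey, reqID)))
	})
}
```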
The Travis build for ppc64le has been quite inconsistent and stays queued
most of the time. Remove this build from .travis.yml for
the time being.
With CoreDNS now supporting etcdv3 as the DNS backend, we
can update our federation target to etcdv3. Users will now be
able to use an etcdv3 server as the federation backbone.
Minio will write bucket data to etcdv3 and CoreDNS can pick
that data up and serve it as a bucket-style DNS path.
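Roughly, publishing a bucket record into etcdv3 for CoreDNS (with its etcd plugin) to serve might look like the following; the SkyDNS-style key layout, domain and value format are illustrative assumptions, not Minio's exact schema:
```
package main

import (
	"context"
	"log"
	"time"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"http://localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Publish bucket "mybucket" under domain "example.com" as a host
	// record pointing at this Minio server, so CoreDNS can answer
	// mybucket.example.com queries.
	_, err = cli.Put(ctx, "/skydns/com/example/mybucket",
		`{"host":"192.168.1.10","port":9000}`)
	if err != nil {
		log.Fatal(err)
	}
}
```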
This commit fixes a weakness of the key-encryption-key
derivation for SSE-C encrypted objects. Before this
change the key-encryption-key was not bound to / didn't
depend on the object path. This allowed an attacker to
replace objects - encrypted with the same
client-key - with each other.
This change fixes this issue by updating the
key-encryption-key derivation to include:
- the domain (in this case SSE-C)
- a canonical object path representation
- the encryption & key derivation algorithm
Changing the object path now causes the KDF to derive a
different key-encryption-key such that the object-key
unsealing fails.
Including the domain (SSE-C) and the encryption & key
derivation algorithm is not directly necessary for this
fix. However, both will be included for the SSE-S3 KDF.
So they are included here to avoid updating the KDF
again when we add SSE-S3.
The legacy KDF 'DARE-SHA256' is only used for existing
objects and never for new objects / key rotation.
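A simplified sketch of such a derivation, binding the key-encryption-key to the domain, the canonical object path and the algorithm via HMAC-SHA256; the algorithm identifier and canonicalization shown here are assumptions, not the exact values used by the crypto package:
```
package example

import (
	"crypto/hmac"
	"crypto/sha256"
	"path"
)

// deriveKeyEncryptionKey derives a 256-bit key-encryption-key from the
// client-provided key, bound to the SSE-C domain, a derivation
// algorithm identifier and a canonical bucket/object path. Changing
// the object path yields a different key, so unsealing a moved or
// swapped object fails.
func deriveKeyEncryptionKey(clientKey [32]byte, bucket, object string) [32]byte {
	mac := hmac.New(sha256.New, clientKey[:])
	mac.Write([]byte("SSE-C"))                   // domain
	mac.Write([]byte("DAREv2-HMAC-SHA256"))      // illustrative algorithm identifier
	mac.Write([]byte(path.Join(bucket, object))) // canonical object path
	var kek [32]byte
	mac.Sum(kek[:0])
	return kek
}
```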
This commit limits the amount of memory allocated by the
S3 Multi-Object-Delete-API. The server used to allocate as
many bytes as provided by the client using Content-Length.
S3 specifies that the S3 Multi-Object-Delete-API can delete
at most 1000 objects using a single request.
(See: https://docs.aws.amazon.com/AmazonS3/latest/API/multiobjectdeleteapi.html)
Since the maximum S3 object name is limited to 1024 bytes the
XML body sent by the client can only contain up to 1000 * 1024
bytes (excluding XML format overhead).
This commit limits the size of the parsed XML for the S3
Multi-Object-Delete-API to 2 MB. This fixes a DoS
vulnerability since authenticated clients, MitM adversaries
(without TLS) and unauthenticated users accessing buckets that allow
multi-delete by policy could kill the server.
This behavior is similar to the AWS-S3 implementation.
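A sketch of enforcing such a cap before parsing the delete request body; the 2 MiB constant matches the limit described above, while the handler wiring and struct are illustrative:
```
package example

import (
	"encoding/xml"
	"io"
	"net/http"
)

const maxDeleteXMLSize = 2 << 20 // 2 MiB: enough for 1000 keys of up to 1024 bytes

type deleteObjectsRequest struct {
	XMLName xml.Name `xml:"Delete"`
	Objects []struct {
		Key string `xml:"Key"`
	} `xml:"Object"`
}

// parseDeleteRequest reads at most maxDeleteXMLSize bytes of the body,
// regardless of the Content-Length the client claims, before decoding.
func parseDeleteRequest(r *http.Request) (*deleteObjectsRequest, error) {
	var req deleteObjectsRequest
	body := io.LimitReader(r.Body, maxDeleteXMLSize)
	if err := xml.NewDecoder(body).Decode(&req); err != nil {
		return nil, err
	}
	return &req, nil
}
```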
This PR adds CopyObject support for objects residing in buckets
in different Minio instances (where Minio instances are part of
a federated setup).
Also, added support for multiple Minio domain IPs. This is required
for distributed deployments, where one deployment may have multiple
nodes, each with a different public IP.
Buckets already present on a Minio server before it joins a
bucket-federated deployment will now be added to etcd during
startup. In case of a bucket name collision, the admin is informed
via a Minio server console message.
Added configuration migration for configuration stored in etcd
backend.
Also, environment variables are updated and ListBucket path-style
requests are no longer forwarded.
- remove old bucket policy handling
- add new policy handling
- add new policy handling unit tests
This patch brings support for bucket policies to have more control, not
limited to anonymous access. The bucket owner can allow/deny any REST
API.
For example, server-side encryption can be controlled by only allowing
PUT/GET of objects with encryption, applying to the bucket owner as well.
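For instance, a policy along these lines (shown as a Go raw string; the exact statement, bucket name and condition are illustrative) would deny PutObject for any principal, bucket owner included, unless the request asks for server-side encryption:
```
package example

// examplePolicy is an illustrative bucket policy that denies PutObject
// for any principal unless the request carries the
// x-amz-server-side-encryption header.
const examplePolicy = `{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": {"AWS": ["*"]},
      "Action": ["s3:PutObject"],
      "Resource": ["arn:aws:s3:::mybucket/*"],
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "AES256"
        }
      }
    }
  ]
}`
```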
This PR adds disk-based edge caching support for the Minio server.
Cache settings can be configured in config.json to take a list of disk drives,
cache expiry in days and file patterns to exclude from the cache, or via the
environment variables MINIO_CACHE_DRIVES, MINIO_CACHE_EXCLUDE and MINIO_CACHE_EXPIRY.
The design assumes that atime support is enabled and the list of cache drives is
fixed.
- Objects are cached on both GET and PUT/POST operations.
- Expiry is used as a hint to evict older entries from the cache; eviction also
kicks in when 80% of cache capacity is filled.
- When the object storage backend is down, GET, LIST and HEAD operations fetch
objects seamlessly from the cache.
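A rough sketch of this GET-side fallback; the cache/backend function type and the backend-down check are illustrative stand-ins, not the actual cache layer interface:
```
package example

import (
	"context"
	"io"
)

type objectGetter func(ctx context.Context, bucket, object string) (io.ReadCloser, error)

// getWithCacheFallback tries the object storage backend first and, if
// the backend is unreachable, serves the object from the local disk
// cache instead. isBackendDown is an illustrative check that separates
// network errors from object-level errors such as "not found".
func getWithCacheFallback(ctx context.Context, backend, cache objectGetter,
	isBackendDown func(error) bool, bucket, object string) (io.ReadCloser, error) {
	rc, err := backend(ctx, bucket, object)
	if err != nil && isBackendDown(err) {
		return cache(ctx, bucket, object)
	}
	return rc, err
}
```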
Current Limitations
- Bucket policies are not cached, so anonymous operations are not supported in
offline mode.
- Objects are distributed using deterministic hashing among the list of cache
drives specified. If one or more drives go offline, or the cache drive
configuration is altered, performance could degrade to a linear lookup.
Fixes #4026
This is a trivial fix to support server-level WORM. The feature comes
with an environment variable `MINIO_WORM`.
Usage:
```
$ export MINIO_WORM=on
$ minio server endpoint
```
This PR implements an object layer which
combines input erasure sets of XL layers
into a unified namespace.
This object layer extends the existing
erasure coded implementation. It is assumed
in this design that providing > 16 disks is
a static configuration as well, i.e. if you started
the setup with 32 disks as 4 sets of 8 disks per
pack then you would need to provide 4 sets always.
Some design details and restrictions:
- Objects are distributed using consistent ordering
to a unique erasure coded layer.
- Each pack has its own dsync so locks are synchronized
properly at the pack (erasure set) level.
- Each pack still has a maximum of 16 disks
requirement; you can start with multiple
such sets statically.
- Sets are a static set of disks and cannot be
changed; there is no elastic expansion allowed.
- Sets are a static set of disks and cannot be
changed; there is no elastic removal allowed.
- ListObjects() across sets can be noticeably
slower since List happens on all servers,
and is merged at this sets layer.
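A simplified sketch of routing an object to one of the sets deterministically; the crc32-based hash and the per-set lock placeholder illustrate the idea, not the exact implementation:
```
package example

import (
	"hash/crc32"
	"sync"
)

// erasureSet stands in for one 16-disk erasure-coded layer with its
// own lock namespace.
type erasureSet struct {
	mu    sync.Mutex // placeholder for the per-set dsync locker
	disks []string
}

// setForObject picks the erasure set responsible for an object by
// hashing the object name, so the same name always maps to the same
// set as long as the number of sets stays fixed.
func setForObject(sets []*erasureSet, object string) *erasureSet {
	h := crc32.ChecksumIEEE([]byte(object))
	return sets[h%uint32(len(sets))]
}
```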
Fixes #5465 Fixes #5464 Fixes #5461 Fixes #5460 Fixes #5459 Fixes #5458 Fixes #5460 Fixes #5488 Fixes #5489 Fixes #5497 Fixes #5496
This change introduces following simplified steps to follow
during config migration.
```
// Steps to move from version N to version N+1
// 1. Add new struct serverConfigVN+1 in config-versions.go
// 2. Set configCurrentVersion to "N+1"
// 3. Set serverConfigCurrent to serverConfigVN+1
// 4. Add new migration function (ex. func migrateVNToVN+1()) in config-migrate.go
// 5. Call migrateVNToVN+1() from migrateConfig() in config-migrate.go
// 6. Make changes in config-current_test.go for any test change
```
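As an illustration of steps 4 and 5, a migration function typically carries fields forward into the new struct and bumps the version string; the version numbers and fields below are hypothetical:
```
package example

// Hypothetical config versions used only to illustrate the migration
// pattern; the real structs live in config-versions.go.
type serverConfigV27 struct {
	Version string `json:"version"`
	Region  string `json:"region"`
}

type serverConfigV28 struct {
	Version string `json:"version"`
	Region  string `json:"region"`
	Worm    string `json:"worm"` // example of a newly added field
}

// migrateV27ToV28 follows the documented pattern: copy the old config,
// set the new version and add defaults for new fields.
func migrateV27ToV28(old *serverConfigV27) *serverConfigV28 {
	return &serverConfigV28{
		Version: "28",
		Region:  old.Region,
		Worm:    "off",
	}
}
```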
The S3 spec requires that a MethodNotAllowed error be returned if an object name is part
of the URL.
Fix postpolicy related unit tests to not set object name as part of target URL.
Fixes #5141
Verify() was being called by the caller after the data
had been successfully read, i.e. after io.EOF. This disconnect
opens a race under concurrent access to such an object.
Verification is not necessary outside of the Read() call;
we can simply do checksum verification right inside
the Read() call at io.EOF.
This approach simplifies the usage.
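In essence the reader verifies itself at the end of the stream; a minimal sketch of the idea (not the actual HashReader implementation):
```
package example

import (
	"bytes"
	"crypto/md5"
	"errors"
	"hash"
	"io"
)

// verifyingReader wraps a reader and checks the expected MD5 sum as
// soon as the underlying stream returns io.EOF, so callers never need
// a separate Verify() step after reading.
type verifyingReader struct {
	r        io.Reader
	h        hash.Hash
	expected []byte
}

func newVerifyingReader(r io.Reader, expectedMD5 []byte) *verifyingReader {
	return &verifyingReader{r: r, h: md5.New(), expected: expectedMD5}
}

func (v *verifyingReader) Read(p []byte) (int, error) {
	n, err := v.r.Read(p)
	v.h.Write(p[:n])
	if err == io.EOF && !bytes.Equal(v.h.Sum(nil), v.expected) {
		return n, errors.New("md5 checksum mismatch")
	}
	return n, err
}
```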
This change refactors the ObjectLayer PutObject and PutObjectPart
functions. Instead of passing an io.Reader and a size to PUT operations,
ObjectLayer expects a HashReader.
A HashReader verifies the MD5 sum (and SHA256 sum if required) of the object.
This change updates all PutObject(Part) calls and removes unnecessary code
in all ObjectLayer implementations.
Fixes #4923
This PR fixes the issue of cleaning up in-memory state
properly. Without this PR we can end up in security
situations where a new bucket would inherit wrong
permissions and expose objects erroneously.
Fixes #4714
Fixed header-to-metadata extraction. The extractMetadataFromHeader function should return an error if the http.Header contains a non-canonicalized key. The reason is that the keys can be manually set (through a map access) which can lead to ugly bugs.
Also fixed header-to-metadata extraction. Return an InternalError if a non-canonicalized key is found in an http.Header. Also log the error.
This PR also does a backend format change from 1.0.0
to 1.0.1. Backward-compatible changes are still
kept to read the 'md5Sum' key, but all new objects
will be stored with the same details under 'etag'.
Fixes #4312
Separate out validating vs. parsing logic in
isValidLocationConstraint() into parseLocationConstraint()
and isValidLocation().
Additionally, set `X-Amz-Bucket-Region` as part of the
common headers for the clients to fall back on in case of any
region-related errors.
This change adds information like host, port and user-agent of the
client whose request triggered an event notification.
For example, if someone uploads an object to a bucket using mc and notifications
are configured on that bucket, the host, port and user-agent of mc
would be sent as part of the event notification data.
Sample output:
```
"source": {
"host": "127.0.0.1",
"port": "55808",
"userAgent": "Minio (linux; amd64) minio-go/2.0.4 mc ..."
}
```
This change is a cleanup of the postPolicyHandler code,
primarily to address the flow and also to convert
certain critical parts into self-contained functions.
The globalMaxObjectSize limit exists in the S3 spec perhaps
due to certain limitations of the S3 infrastructure. Minio
doesn't have such limitations and can stream a larger file
instead.
So we are going to bump this limit to 16 GiB.
Fixes #3825
Add missing protection when deleting multiple objects
in parallel. Currently we are deleting objects without
proper locking through this API.
This can cause a significant number of races.
Resource strings and paths are case-insensitive on Windows
deployments, but if a user happens to use upper case instead of
lower case for certain configuration params like bucket
policies and bucket notification config, we might not honor
them, which leads to wrong behavior on Windows.
This is Windows-only behavior; for all other platforms case
is still kept sensitive.
Avoid passing size = -1 to the PutObject API by requiring the content-length
header in the POST request (as AWS S3 does) and in the Upload web handler.
The POST handler is modified to completely store the multipart file to know
its size before sending it to PutObject().
This is a consolidation effort, avoiding usage
of naked strings in the codebase. Whenever possible,
use constants which can be repurposed elsewhere.
This also fixes issues reported by `goconst ./...`.
`principalId`, i.e. the user identity, is kept as the AccessKey in
accordance with the S3 spec.
Additionally, responseElements{} are added, starting with:
`x-amz-request-id` - a hexadecimal of the event time itself in nanoseconds.
`x-minio-origin-server` - points to the server generating the event.
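For example, a request id of that shape can be produced directly from the event time; a sketch, not the exact helper in the codebase:
```
package main

import (
	"fmt"
	"time"
)

func main() {
	eventTime := time.Now().UTC()
	// Hexadecimal of the event time in nanoseconds, usable as
	// x-amz-request-id in the notification's responseElements.
	requestID := fmt.Sprintf("%X", eventTime.UnixNano())
	fmt.Println(requestID)
}
```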
Fixes #3556
The Golang HTTP client automatically detects content-type, but
for S3 clients this content-type might be incorrect or
might misbehave.
For example:
```
Content-Type: text/xml; charset=utf-8
```
Should be
```
Content-Type: application/xml
```
Allow this to be set properly.
success_action_redirect in the submitted form means that the server needs to return 303 in addition to a well-specified redirection URL; this commit adds this feature.
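A minimal sketch of honoring the field in the POST policy handler; form parsing, upload validation and the construction of redirectURL are assumed to happen in the caller:
```
package example

import "net/http"

// redirectOnSuccess sends the 303 See Other response expected when the
// POST form carries a success_action_redirect field. redirectURL is
// assumed to be built from the form value plus bucket/key/etag query
// parameters.
func redirectOnSuccess(w http.ResponseWriter, r *http.Request, redirectURL string) {
	http.Redirect(w, r, redirectURL, http.StatusSeeOther) // 303
}
```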
This is implemented so that issues like the
following flow don't affect the behavior of the operation.
```
GetObjectInfo()
.... --> Time window for mutation (no lock held)
.... --> Time window for mutation (no lock held)
GetObject()
```
This happens when two simultaneous uploads are made
to the same object; the object ends up returning wrong
info to the client.
Another classic example is "CopyObject" API itself
which reads from a source object and copies to
destination object.
Fixes #3370 Fixes #2912
The content-length-range policy in the postPolicy API was
not working properly; handle it. The reflection
strategy used has changed in recent versions of Go:
any free-form interface{} holding an integer is treated
as `float64`. This caused a bug where content-length-range
parsing failed to provide any value.
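A sketch of the decoding concern: JSON numbers landing in an interface{} come out of encoding/json as float64, so the parser has to convert them explicitly (policy structure simplified here):
```
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	policy := []byte(`{"conditions": [["content-length-range", 1048576, 10485760]]}`)

	var doc map[string]interface{}
	if err := json.Unmarshal(policy, &doc); err != nil {
		panic(err)
	}
	cond := doc["conditions"].([]interface{})[0].([]interface{})

	// cond[1] and cond[2] are float64, not int - convert explicitly.
	min := int64(cond[1].(float64))
	max := int64(cond[2].(float64))
	fmt.Println(min, max) // 1048576 10485760
}
```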
Fixes #3295
- Fix distributed branch to be able to run FS version.
- Fix distributed branch to be able to run XL local disks.
- Ignore initialization failures of notification and bucket
policies; the codepath should load whatever is possible.
From the S3 layer, after PutObject we were calling GetObjectInfo for bucket notification. This can
be avoided if PutObject returns ObjectInfo.
Fixes #2567
This change initializes RPC servers associated with disks that are
local. It makes object layer initialization on demand, namely on the
first request to the object layer.
Also adds a lock RPC service and vendors minio/dsync.
- Fixes a couple of error strings that were reported as mismatching.
- Fixes an HTTP error status that was wrong.
- Remove usage of the deprecated PostResponse; contrary
to its documentation there is no response body in
PostPolicy.
If the location was invalid, it would write an error response but then
continue to attempt to make the bucket. Whether or not that succeeded,
it would attempt to call response.WriteHeader twice in a row, which
would cause a message to be logged to the server console (bad).
Here is the relevant Go code:
c80e0d374b/src/net/http/server.go (L878-L881)
Current master has a regression: 'mc policy <policy-type> alias/bucket/prefix'
does not work anymore, due to the way the new minio-go changes do JSON marshalling.
This led to a regression on the server side: when a ``prefix`` is provided,
the policy is rejected as malformed by the server, which is not the case with
AWS S3.
This patch uses the new ``minio-go/pkg/set`` package to address the
unmarshalling problems.
Fixes #2503