Handle the case where the site being removed is missing replication config. This can
happen when a new deployment is brought up in place of a site that
was lost/destroyed and the old deployment needs to be delinked from site
replication.
The console logging peer API was broken: it would
time out after 15 minutes, never really worked
beyond that value, and failed to provide
the streaming "log" functionality that was expected
from this implementation.
Also fix the convoluted channel handling by keeping things
simple; this part is rewritten.
Do not modify opts.UserDefined after the object handler
has set all the necessary values; any mutation needed
should be done on a copy of this value, not directly.
Since other pieces of code access opts.UserDefined
concurrently, mutating it in place becomes problematic.
fixes #14856
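As an illustration of the copy-before-mutate pattern described above (a minimal Go sketch; the map contents and helper name are illustrative, not the actual handler code):
```
package main

import "fmt"

// cloneUserDefined returns a shallow copy of a metadata map so callers can
// mutate the copy without racing with concurrent readers of the original.
func cloneUserDefined(src map[string]string) map[string]string {
	dst := make(map[string]string, len(src))
	for k, v := range src {
		dst[k] = v
	}
	return dst
}

func main() {
	original := map[string]string{"X-Amz-Meta-Example": "a"}
	clone := cloneUserDefined(original)
	clone["X-Amz-Meta-Example"] = "b" // mutate the copy, never the shared map
	fmt.Println(original, clone)
}
```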
When a decommission task has completed successfully, failed, or been canceled,
this commit allows restarting the decommission. Restarting is not
allowed while a decommission task is ongoing.
this PR introduces a few changes such as
- sessionPolicyName is no longer reused in its extracted form
to apply policies for incoming authenticated calls;
instead, a different key is used to designate this
information for the callers.
- this differentiation is needed to ensure that service
account updates do not accidentally store the JSON representation
instead of the base64 equivalent on disk.
- relax requirements for deleting a service account: allow
deleting a service account that might be unreadable, i.e.
a situation where the user might have removed the session policy,
which now carries a JSON representation, making it unparsable.
- introduce some constants to reuse instead of strings.
fixes #14784
If an invalid status code is generated from an error, we risk panicking. Even if there
are no known problems at the moment, we should prevent this in the future.
Add safeguards against this.
Sample trace:
```
May 02 06:41:39 minio[52806]: panic: "GET /20180401230655.PDF": invalid WriteHeader code 0
May 02 06:41:39 minio[52806]: goroutine 16040430822 [running]:
May 02 06:41:39 minio[52806]: runtime/debug.Stack(0xc01fff7c20, 0x25c4b00, 0xc0490e4080)
May 02 06:41:39 minio[52806]: runtime/debug/stack.go:24 +0x9f
May 02 06:41:39 minio[52806]: github.com/minio/minio/cmd.setCriticalErrorHandler.func1.1(0xc022048800, 0x4f38ab0, 0xc0406e0fc0)
May 02 06:41:39 minio[52806]: github.com/minio/minio/cmd/generic-handlers.go:469 +0x85
May 02 06:41:39 minio[52806]: panic(0x25c4b00, 0xc0490e4080)
May 02 06:41:39 minio[52806]: runtime/panic.go:965 +0x1b9
May 02 06:41:39 minio[52806]: net/http.checkWriteHeaderCode(...)
May 02 06:41:39 minio[52806]: net/http/server.go:1092
May 02 06:41:39 minio[52806]: net/http.(*response).WriteHeader(0xc0406e0fc0, 0x0)
May 02 06:41:39 minio[52806]: net/http/server.go:1126 +0x718
May 02 06:41:39 minio[52806]: github.com/minio/minio/internal/logger.(*ResponseWriter).WriteHeader(0xc032fa3ea0, 0x0)
May 02 06:41:39 minio[52806]: github.com/minio/minio/internal/logger/audit.go:116 +0xb1
May 02 06:41:39 minio[52806]: github.com/minio/minio/internal/logger.(*ResponseWriter).WriteHeader(0xc032fa3f40, 0x0)
May 02 06:41:39 minio[52806]: github.com/minio/minio/internal/logger/audit.go:116 +0xb1
May 02 06:41:39 minio[52806]: github.com/minio/minio/internal/logger.(*ResponseWriter).WriteHeader(0xc002ce8000, 0x0)
May 02 06:41:39 minio[52806]: github.com/minio/minio/internal/logger/audit.go:116 +0xb1
May 02 06:41:39 minio[52806]: github.com/minio/minio/cmd.writeResponse(0x4f364a0, 0xc002ce8000, 0x0, 0xc0443b86c0, 0x1cb, 0x224, 0x2a9651e, 0xf)
May 02 06:41:39 minio[52806]: github.com/minio/minio/cmd/api-response.go:736 +0x18d
May 02 06:41:39 minio[52806]: github.com/minio/minio/cmd.writeErrorResponse(0x4f44218, 0xc069086ae0, 0x4f364a0, 0xc002ce8000, 0x0, 0x0, 0x0, 0x0, 0x0, 0xc00656afc0)
May 02 06:41:39 minio[52806]: github.com/minio/minio/cmd/api-response.go:798 +0x306
May 02 06:41:39 minio[52806]: github.com/minio/minio/cmd.objectAPIHandlers.getObjectHandler(0x4b73768, 0x4b73730, 0x4f44218, 0xc069086ae0, 0x4f82090, 0xc002d80620, 0xc040e03885, 0xe, 0xc040e03894, 0x61, ...)
May 02 06:41:39 minio[52806]: github.com/minio/minio/cmd/object-handlers.go:456 +0x252c
```
Space characters at the beginning or end of a name can cause
confusion in various UI elements when differentiating the
actual name of a policy, user, or group. To avoid this behavior,
from this PR onwards we reject such inputs for new entries.
Existing saved entries will behave as-is and remain
operable until they are removed or renamed to something more
meaningful.
- When using multiple providers, claim-based providers are not allowed. All
providers must use role policies.
- Update markdown config to allow `details` HTML element
this is allowed as long as the order is preserved as-is
on an existing setup; the new command line is updated
in `pool.bin` to facilitate future decommissions on
these pools.
introduce x-minio-force-create environment variable
to force create a bucket and its metadata as required,
it is useful in some situations when bucket metadata
needs recovery.
improvements in this PR include
- decommission objects that have the __XLDIR__ suffix
- decommission objects that have a `null` version on
a versioned bucket
- make sure to look for any "decom" failures to ensure
that we do not wrongly conclude decommission as complete without
all files getting copied over
- break out eagerly upon the first error for objects with
multiple versions; leave the object as-is to support
debugging and analysis
heal bucket metadata and IAM entries for
sites participating in site replication from
the site with the most updated entry.
Co-authored-by: Harshavardhana <harsha@minio.io>
Co-authored-by: Aditya Manthramurthy <aditya@minio.io>
The site replication status call was using a loop iteration variable directly
inside goroutines instead of passing it as an argument. Since the
variable is updated by the loop, previously launched goroutines do not
necessarily use the value it had at the time they were launched.
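The underlying Go pattern, shown as a minimal sketch (the slice contents are illustrative): passing the loop variable as an argument snapshots its value at launch time, whereas capturing it directly can observe a later iteration.
```
package main

import (
	"fmt"
	"sync"
)

func main() {
	deployments := []string{"site-a", "site-b", "site-c"}
	var wg sync.WaitGroup
	for _, d := range deployments {
		wg.Add(1)
		// Pass the loop variable as an argument: the goroutine gets the value
		// at launch time. Capturing `d` directly could observe a later
		// iteration's value by the time the goroutine actually runs.
		go func(d string) {
			defer wg.Done()
			fmt.Println("fetching status of", d)
		}(d)
	}
	wg.Wait()
}
```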
This PR fixes two issues
- The first fix is a regression from #14555, the fix itself in #14555
is correct but the interpretation of that information by the
object layer code for "replication" was not correct. This PR
tries to fix this situation by making sure the "Delete" replication
works as expected when "VersionPurgeStatus" is already set.
Without this fix, there is a DELETE marker created incorrectly on
the source where the "DELETE" was triggered.
- The second fix is for a perhaps older problem, present since we started
inlining data on disk for small objects: CopyObject() incorrectly inlines
non-inlined data. This is due to the fact that we have code that
reads `part.1` under certain conditions where the size of
`part.1` is less than a specific "threshold".
This eventually causes problems when we are "deleting" data that
is only inlined, which means the dataDir is ignored, leaving such a
dataDir on disk and making the content look inconsistent in
the namespace.
fixes #14767
It is wasteful to allow parallel upgrades of the MinIO server. Doing so also produces
weird errors from the selfupdate module when it happens, such as:
'rename /opt/bin/.minio.old /opt/bin/..minio.old.old'
currently filterPrefix, which would filter out entries when needed
when `prefix` doesn't end with `/`, was never used or set - this
often leads to objects getting Walked() and Healed()
that were never requested by the caller.
Without this wait there is a potential for objects
that are actively being decommissioned to be canceled,
while the decommission status wrongly concludes
the operation as "Complete".
To avoid this, make sure to add waitgroups on the parallel
workers, allowing parallel copies to complete fully before
we return.
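A minimal sketch of the wait described above, with illustrative names; the point is simply that the parallel copy workers are joined before the decommission status for the set is computed:
```
package main

import (
	"fmt"
	"sync"
)

// decommissionPool copies entries using a fixed number of parallel workers and
// returns only once every worker has drained the queue, so the caller cannot
// conclude "Complete" while copies are still in flight.
func decommissionPool(entries []string, workers int) {
	jobs := make(chan string)
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for e := range jobs {
				fmt.Println("copying", e) // stand-in for the actual object copy
			}
		}()
	}
	for _, e := range entries {
		jobs <- e
	}
	close(jobs)
	wg.Wait() // without this wait, returning early could strand in-flight copies
}

func main() {
	decommissionPool([]string{"obj1", "obj2", "obj3"}, 2)
}
```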
In previous releases, mc admin user list would return the list of users
that have policies mapped in the IAM database. That behavior was removed,
but this commit brings it back until we revamp it.
- This change switches to a new parquet library
- SelectObjectContent now takes a single lock at the beginning and holds it
during the operation. Previously the operation took a lock every time the
parquet library performed a Seek on the underlying object stream.
- Add basic support for LogicalType annotations for timestamps.
Execute the object, drive and net speedtests as part of the healthinfo
(if requested by the client), and include their result in the response.
The options for the speedtests have been picked from the default values
used by `mc support perf` command.
This commit improves the listing of encrypted objects:
- Use `etag.Format` and `etag.Decrypt`
- Detect SSE-S3 single-part objects in a single iteration
- Fix batch size to `250`
- Pass request context to `DecryptAll` to not waste resources
when a client cancels the operation.
Signed-off-by: Andreas Auernhammer <hi@aead.dev>
This commit adds two new functions to the
internal `etag` package:
- `ETag.Format`
- `Decrypt`
The `Decrypt` function decrypts an encrypted
ETag using a decryption key. It returns
non-encrypted / multipart ETags unmodified.
The `Decrypt` function is mainly used when
handling SSE-S3 encrypted single-part objects.
In particular, the ETag of an SSE-S3 encrypted
single-part object needs to be decrypted since
S3 clients expect that this ETag is equal to the
content MD5.
The `ETag.Format` method also covers SSE ETag handling.
MinIO encrypts all ETags of SSE single part objects.
However, only the ETag of SSE-S3 encrypted single part
objects needs to be decrypted.
The ETag of an SSE-C or SSE-KMS single part object
does not correspond to its content MD5 and can be
a random value.
The `ETag.Format` function formats an ETag such that
it is an AWS S3 compliant ETag. In particular, it
returns non-encrypted ETags (single / multipart)
unmodified. However, for encrypted ETags it returns
the trailing 16 bytes as ETag. For encrypted ETags
the last 16 bytes will be a random value.
The main purpose of `Format` is to format ETags
such that clients accept them as well-formed AWS S3
ETags.
It differs from the `String` method since `String`
will return string representations for encrypted
ETags that are not AWS S3 compliant.
Signed-off-by: Andreas Auernhammer <hi@aead.dev>
The deployment id was being written to the health report towards the end
of the handler. Because of this, if there was a timeout in any of the
data fetching, the deployment id was not getting written at all. Upload
of such reports fails on SUBNET as deployment id is the unique
identifier for a cluster in subnet.
Fixed by writing the deployment id at the beginning of the processing.
This commit simplifies the ETag decryption and size adjustment
when listing object parts.
When listing object parts, MinIO has to decrypt the ETag of all
parts if and only if the object, and therefore its parts, is encrypted using
SSE-S3.
In the case of SSE-KMS and SSE-C, MinIO returns a pseudo-random ETag.
This is in line with AWS S3 behavior.
Further, MinIO has to adjust the size of all encrypted parts due to
the encryption overhead.
ListObjectParts specifically does not use the KMS bulk decryption
API (4d2fc530d0), since the ETags of all
parts are encrypted using the same object encryption key. Therefore,
MinIO only has to connect to the KMS once, even if there are multiple
parts and ETags; it can simply reuse the same object encryption key.
Signed-off-by: Andreas Auernhammer <hi@aead.dev>
This commit adds support for encrypted KES
client private keys.
Now, it is possible to encrypt the KES client
private key (`MINIO_KMS_KES_KEY_FILE`) with
a password.
For example, KES CLI already supports the
creation of encrypted private keys:
```
kes identity new --encrypt --key client.key --cert client.crt MinIO
```
To decrypt an encrypted private key, the password
needs to be provided:
```
MINIO_KMS_KES_KEY_PASSWORD=<password>
```
Signed-off-by: Andreas Auernhammer <hi@aead.dev>
This commit optimises the ETag decryption when
listing objects.
When MinIO lists objects, it has to decrypt the
ETags of single-part SSE-S3 objects.
It does not need to decrypt ETags of
- plaintext objects => Their ETag is not encrypted
- SSE-C objects => Their ETag is not the content MD5
- SSE-KMS objects => Their ETag is not the content MD5
- multipart objects => Their ETag is not encrypted
Hence, MinIO only needs to make a call to the KMS
when it needs to decrypt a single-part SSE-S3 object.
It can resolve the ETags of all other object types
locally.
This commit implements the above semantics by
processing an object listing in batches.
If the batch contains no single-part SSE-S3 object,
then no KMS calls will be made.
If the batch contains at least one single-part
SSE-S3 object we have to make at least one KMS call.
Now, we first filter all single-part SSE-S3 objects
such that we only request the decryption keys for
these objects.
Once we know which objects and ETags require a
decryption key, MinIO either uses the KES bulk
decryption API (if supported) or decrypts each
ETag serially.
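A schematic of that filtering step, with hypothetical field and helper names (not the actual listing types): only single-part SSE-S3 entries are collected for the KMS round-trip; everything else is resolved locally.
```
package main

import "fmt"

// entry is a stand-in for a listing entry; the fields are illustrative.
type entry struct {
	ETag        string
	SSES3       bool
	IsMultipart bool
}

// needsKMS returns the indices of entries whose ETags require a decryption
// key: single-part SSE-S3 objects only.
func needsKMS(batch []entry) []int {
	var idx []int
	for i, e := range batch {
		if e.SSES3 && !e.IsMultipart {
			idx = append(idx, i)
		}
	}
	return idx
}

func main() {
	batch := []entry{{ETag: "a", SSES3: true}, {ETag: "b"}, {ETag: "c", SSES3: true, IsMultipart: true}}
	fmt.Println(needsKMS(batch)) // [0]
}
```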
This commit is a significant improvement compared
to the previous listing code. Before, a single
non-SSE-S3 object caused MinIO to fall back to
serial ETag decryption.
For example, if a batch consisted of 249 SSE-S3
objects and one single SSE-KMS object, MinIO would
send 249 requests to the KMS.
Now, MinIO will send a single request for exactly
those 249 objects and skip the one SSE-KMS object
since it can handle its ETag locally.
Further, MinIO would request decryption keys
for SSE-S3 multipart objects in the past - even
though multipart ETags are not encrypted.
So, if a bucket contained only multipart SSE-S3
objects, MinIO would make totally unnecessary
requests to the KMS.
Now, MinIO simply skips these multipart objects
since it can handle the ETags locally.
Signed-off-by: Andreas Auernhammer <hi@aead.dev>
In bulk ETag decryption, do not rely on the ETag to check whether it is
encrypted when deciding whether to set the actual object size in
ObjectInfo, because multipart object ETags are not encrypted.
Always get the actual object size in that case.
This commit adds support for bulk ETag
decryption for SSE-S3 encrypted objects.
If KES supports a bulk decryption API, then
MinIO will check whether its policy grants
access to this API. If so, MinIO will use
a bulk API call instead of sending encrypted
ETags serially to KES.
Note that MinIO will not use the KES bulk API
if its client certificate is an admin identity.
MinIO will process object listings in batches.
A batch has a configurable size that can be set
via `MINIO_KMS_KES_BULK_API_BATCH_SIZE=N`.
It defaults to `500`.
This env. variable is experimental and may be
renamed / removed in the future.
Signed-off-by: Andreas Auernhammer <hi@aead.dev>
ListObjects and ListObjectsV2 calls are heavily taxed when
there are many versions of objects left over from a previous
release, or when ILM was never set up to clean them up. Instead
of being absolutely correct at resolving the exact latest
version of an object, we simply rely on the topmost
version and resolve the rest.
Once we have obtained the topmost version for the
ListObjects or ListObjectsV2 call, we break out.
For ListObjects and ListObjectsV2 perform lifecycle checks on
all objects before returning. This will filter out objects that are
pending lifecycle expiration.
Bonus: Cheaper server pool conflict resolution by not converting to FileInfo.
When reloading a dynamic config allow the request pool to scale both ways.
Existing requests hold on to the previous pool, so they will pop the elements from that.
Currently an ongoing decommission, during a server
restart, might block the startup sequence for relatively
long periods; instead, start the decommission lazily in the
background.
This commit fixes two bugs in the `PutObjectPartHandler`.
First, `PutObjectPart` should return SSE-KMS headers
when the object is encrypted using SSE-KMS.
Before, this was not the case.
Second, the ETag should always be a 16 byte hex string,
perhaps followed by a `-X` (where `X` is the number of parts).
However, `PutObjectPart` used to return the encrypted ETag
in case of SSE-KMS. This leaks MinIO internal etag details
through the S3 API.
The combination of both bugs causes clients that use SSE-KMS
to fail when trying to validate the ETag. Since `PutObjectPart`
did not send the SSE-KMS response headers, the response looked
like a plaintext `PutObjectPart` response. Hence, the client
tries to verify that the ETag is the content-md5 of the part.
This could never be the case, since MinIO used to return the
encrypted ETag.
Therefore, clients behaving as specified by the S3 protocol
tried to verify the ETag in a situation they should not.
Signed-off-by: Andreas Auernhammer <hi@aead.dev>
When more than 2 disks are unavailable for listing, the same disk will be used for fallback.
This makes quorum calculations incorrect since the same disk will have multiple entries.
This PR keeps track of which fallback disks have been handed out and only ever returns a disk once.
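A minimal sketch of handing each fallback disk out at most once, using a set of already-used indices (names are illustrative):
```
package main

import "fmt"

func main() {
	fallbacks := []string{"disk4", "disk5", "disk6"}
	used := make(map[int]bool)

	// nextFallback hands out each fallback disk at most once, so quorum
	// calculations never see the same disk twice.
	nextFallback := func() (string, bool) {
		for i, d := range fallbacks {
			if !used[i] {
				used[i] = true
				return d, true
			}
		}
		return "", false
	}

	for {
		d, ok := nextFallback()
		if !ok {
			break
		}
		fmt.Println("using fallback", d)
	}
}
```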
Avoid creating a new transport for each `isServerResolvable`
request; instead, re-use the available global transport and do
not try to forcibly close connections, to avoid TIME_WAIT
build-up on large clusters.
Never use httpClient.CloseIdleConnections() since that can have
a drastic effect on existing connections in the transport pool.
Remove it everywhere.
- GetObject() with vid should return 405
- GetObject() without vid should return 404
- ListObjects() should ignore this object if this is the "latest" version of the object
- ListObjectVersions() should list this object as "DELETE marker"
- Remove data parts before sync'ing the version pending purge
Changing root credentials makes service accounts
inoperable; this PR changes the way sessionToken
is generated for service accounts.
It changes service account behavior to generate
sessionToken claims from the account's own secret instead
of using the global root credential.
Existing credentials will still be supported by
falling back to verification using the root credential.
fixes #14530
```
tmp = buf[want:]
```
This would potentially crash when `buf` is truncated for some reason
and does not have the expected bytes. This is of course not normal
and is an odd situation, but we do not need to crash
here; instead, allow errors to be returned and let callers handle
them.
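The general fix pattern, as a small sketch with an illustrative error value: validate the length before re-slicing and return an error instead of letting the runtime panic.
```
package main

import (
	"errors"
	"fmt"
)

var errTruncated = errors.New("buffer shorter than expected")

// skip returns buf[want:] but converts an out-of-range slice into an error
// the caller can handle, instead of a runtime panic.
func skip(buf []byte, want int) ([]byte, error) {
	if want < 0 || want > len(buf) {
		return nil, errTruncated
	}
	return buf[want:], nil
}

func main() {
	tmp, err := skip([]byte("abc"), 10)
	fmt.Println(tmp, err)
}
```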
This PR simply adds a warning message when it detects older kernel
versions and warns users about potential performance issues on such
kernels.
The issue can be seen only with parallel I/O across all drives
on denser setups such as 90-drive or 45-drives-per-server configurations.
This type of code is not necessary; reads of all
metadata content at `.minio.sys/config` automatically
trigger healing when necessary in the GetObjectNInfo()
call-path.
Having this code is not useful and it also adds to
the overall startup time of MinIO when there are lots
of users and policies.
The main goal of this PR is to solve the situation where disks stop
responding to operations. This generally causes an FD build-up and
eventually will crash the server.
This adds detection of hung disks, where calls on disk get stuck.
We add functionality to `xlStorageDiskIDCheck` where it keeps
track of the number of concurrent requests on a given disk.
A total of 100 concurrent operations is allowed per disk. If this limit is reached,
we will block (but not reject) new requests, and we will monitor the
state of the disk.
If no requests have been completed or updated within a 15-second
window, we mark the disk as offline. Requests that are blocked will be
unblocked and return an error as "faulty disk".
New requests will be rejected until the disk is marked OK again.
Once a disk has been marked faulty, a check will run every 5 seconds that
will attempt to write and read back a file. As long as this fails the disk will
remain faulty.
To prevent lots of long-running requests to mark the disk faulty we
implement a callback feature that allows updating the status as parts
of these operations are running.
We add a reader and writer wrapper that will update the status of each
successful read/write operation. This should allow fine enough granularity
that a slow, but still operational disk will not reach 15 seconds where
50 operations have not progressed.
Note that errors themselves are not enough to mark a disk faulty.
A nil (or io.EOF) error will mark a disk as "good".
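A rough sketch of the tracking idea (not the actual xlStorageDiskIDCheck implementation): an in-flight counter plus a last-progress timestamp, so a watcher can declare the disk faulty when nothing progresses inside the window.
```
package main

import (
	"sync/atomic"
	"time"
)

// diskHealth tracks in-flight calls and the time of the last observed
// progress; field names and thresholds are illustrative.
type diskHealth struct {
	inflight     int64
	lastProgress int64 // unix nanoseconds of the last observed progress
}

func (d *diskHealth) begin() { atomic.AddInt64(&d.inflight, 1); d.touch() }
func (d *diskHealth) end()   { atomic.AddInt64(&d.inflight, -1); d.touch() }
func (d *diskHealth) touch() { atomic.StoreInt64(&d.lastProgress, time.Now().UnixNano()) }

// stuck reports whether calls are in flight but nothing has progressed
// within the given window, e.g. 15 seconds.
func (d *diskHealth) stuck(window time.Duration) bool {
	if atomic.LoadInt64(&d.inflight) == 0 {
		return false
	}
	last := time.Unix(0, atomic.LoadInt64(&d.lastProgress))
	return time.Since(last) > window
}

func main() {
	var d diskHealth
	d.begin()
	_ = d.stuck(15 * time.Second) // false right after progress was recorded
	d.end()
}
```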
* Make concurrent disk setting configurable via `_MINIO_DISK_MAX_CONCURRENT`.
* de-couple IsOnline() from disk health tracker
The purpose of IsOnline() is to ensure that we
reconnect the drive only when the "drive" was
- disconnected from network we need to validate
if the drive is "correct" and is the same drive
which belongs to this server.
- drive was replaced we have to format it - we
support hot swapping of the drives.
IsOnline() is not meant for taking the drive offline
when it is hung; that is not useful. We can let the
drive stay online and instead "return" errors for the relevant
calls.
* return errFaultyDisk for DiskInfo() call
Co-authored-by: Harshavardhana <harsha@minio.io>
Possible future Improvements:
* Unify the REST server and local xlStorageDiskIDCheck. This would also improve stats significantly.
* Allow reads/writes to be aborted by the context.
* Add usage stats, concurrent count, blocked operations, etc.
Data usage does not always contain tiering info even if the data usage
information is valid. Avoid a crash in that case.
(e.g. the scanner scanned the namespace, the user enables tiering,
prometheus scrapes the server before the scanner gets a chance to
update the data usage with new tiering information)
Healing decisions would align with skipped folder counters. This can lead to files
never being selected for heal checks on "clean" paths.
Use different hashing methods and take objectHealProbDiv into account when
calculating the cycle.
Found by @vadmeste
This is a side effect of the optimization done in PR #13544: a
certain type of delete operation on given object versions
can cause the lastVersion indication to be skipped, which leads to
an `xl.meta` where the Versions[] slice is empty while the entire
file is intact by itself.
This PR tries to ensure that such files are visible and deletable
by regular means of listing, as a null 'delete-marker', and also
avoids the situation where this potential issue might arise.
When scanning using normal mode, HealObject() can report an
error saying that it found a corrupted part. This doesn't happen
when HealObject() is called with the bitrot scan flag. However, when
this happens, we can still restart HealObject() with the bitrot scan.
This is also important because otherwise the scanner and the
new-disks healer will not be able to heal an object that doesn't
exist on a specific disk and has corruption on another disk.
Also, without this PR, the mc admin heal command without bitrot will report
an error.
This commit removes some duplicate code that
converts KES API errors.
This code was added since KES `0.18.0` changed
some exported API errors. However, the KES SDK
handles this error conversion itself.
Therefore, it is not necessary to duplicate this
behavior in MinIO.
See: 21555fa624/error.go (L94)
Signed-off-by: Andreas Auernhammer <hi@aead.dev>
- Updating KES dependency to v.0.18.0
- Fixing incompatibility issue when checking for errors during KES key creation
Signed-off-by: Lenin Alevski <alevsk.8772@gmail.com>
In a distributed setup, a DiskInfo REST call to an unformatted disk
returns an error with no disk information, such as the disk endpoint
URL, which is unexpected.
Metadata headers can be present without values
as per the AWS S3 spec; however, we need to skip
certain headers that do not have values, so that
potentially empty values do not get set.
Small setups do not return appropriate errors when the speedtest
cannot run on such tiny setups; allow the tests to fail
more proactively with an appropriate error.
Many users bring toy setups, so this PR simply returns an error
in such situations.
Healing disks take active I/O, so it is possible
that deleted objects stay in the .trash
folder for a really long time until the drive
is fully healed.
This PR changes this so that we make sure
to purge the active content written to these
disks as well.
- speedtest logs calls that were canceled
spuriously, in situations where they should
be ignored.
- all errors of interest are always sent back
to the client; there is no need to log them
on the server console.
- PUT failures should negate the increments
so that GET is not attempted for unsuccessful
calls.
- do not attempt MRF on speedtest objects.
In the testing mode, reformatting disks will fail because the healing
code will complain if one disk is in root mode. This commit will
automatically set all disks as non-root if MINIO_CI_CD is set.
Currently, when applying any dynamic config, the system reloads and
re-applies the config of all the dynamic sub-systems.
This PR refactors the code in such a way that changing config of a given
dynamic sub-system will work on only that sub-system.
An online disk means it is a valid disk, but it may be a
re-connected disk; this PR verifies that based on LastConn()
so that only MRF is triggered. The current code would again re-load the
disk's 'format.json', which is not necessary and is perhaps an
unnecessary call.
A potential side effect of this is closing perfectly online
disks, which then get re-replaced by reloading 'format.json'.
This PR tries to avoid this situation by making sure MRF
is triggered but 'format.json' is not reloaded because of MRF.
Only the first `listAndHeal` would ever be able to write to errCh, blocking all others indefinitely.
Instead, read all errors but return the first non-nil one, if any.
The intention appears to be that this should cancel on any error,
so that part is kept.
Regression from #13990
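The general shape of the fix, as a sketch: buffer the channel to the number of writers so none of them blocks on send, drain all results, and return the first non-nil error.
```
package main

import (
	"errors"
	"fmt"
)

func firstError(tasks []func() error) error {
	errCh := make(chan error, len(tasks)) // buffered: no sender can block
	for _, t := range tasks {
		go func(t func() error) { errCh <- t() }(t)
	}
	var first error
	for range tasks {
		if err := <-errCh; err != nil && first == nil {
			first = err
		}
	}
	return first
}

func main() {
	err := firstError([]func() error{
		func() error { return nil },
		func() error { return errors.New("heal failed") },
	})
	fmt.Println(err)
}
```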
When more than one gateway reads and writes to the same mount point
and a load balancer points to those gateways, each gateway
will try to create its own temporary append file but fails to clean it up later
when it is no longer needed.
This commit creates a routine that checks all upload IDs saved in the
multipart directory and removes any stale entry with the same upload ID
from memory and from the temporary background-append folder as well.
Enabled with `mc admin config set alias/ api gzip_objects=on`
Standard filtering applies (1K response minimum, not compressed content
type, not range request, gzip accepted by client).
The current code considers a pool with all root disks to be part
of a testing environment even if there are other pools with mounted
disks. This results in illegitimate writes to root disks.
Fix this by simplifying the logic: require MINIO_CI_CD in order to skip
the root disk check.
MinIO configuration is loaded after the initialization of the server
handlers, which will miss the initialization of the bucket forwarder
handler.
Though federation is deprecated, let's fix this for the time being.
S3 spec returns x-amz-restore header in HEAD/GET object with the
following format:
```
x-amz-restore: ongoing-request="false", expiry-date="Fri, 21 Dec 2012
00:00:00 GMT"
```
This commit adds the quotes, as the current code does not support them. It will
also support the old format saved on disk (in xl.meta) for backward
compatibility.
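A small sketch of emitting the header in the quoted format above; the values are illustrative and the time layout is the standard HTTP date format:
```
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	ongoing := false
	expiry := time.Date(2012, time.December, 21, 0, 0, 0, 0, time.UTC)

	// Quote both values to match the wire format shown above.
	restore := fmt.Sprintf(`ongoing-request=%q, expiry-date=%q`,
		fmt.Sprintf("%t", ongoing), expiry.Format(http.TimeFormat))
	fmt.Println("x-amz-restore:", restore)
}
```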
A regression removed support for federation in gateway mode.
Enable it again.
Federation has been deprecated for a while, but let's fix this for the time being.
Deleting bulk objects had an issue: the relevant versionID
was not passed through the layers to ensure that the dangling
object purge actually works cleanly.
This is a continuation of quorum related error returned by
multi-object delete API from #14248
This PR ensures that we pass down correct information as
well as extend the scope of dangling object detection.
When setting a config of a particular sub-system, validate the existing
config and notification targets of only that sub-system, so that
existing errors related to one sub-system (e.g. notification target
offline) do not result in errors for other sub-systems.
Some users running MinIO claim that their system became slow. One
way to investigate is to look at the Prometheus history of the number of
requests reaching the server. The existing current S3 requests metric
is not enough, because it can also increase when the system really becomes slow,
due to disk issues for example.
Startup speed-up: currently getFormatErasureInQuorum()
can spend up to 2-3 seconds when there are, for example,
3000+ drives in a setup; simplify this implementation
to use drive counts.
DeleteMarkers do not have a default quorum, i.e. it is possible that
DeleteMarkers were created with n/2+1 quorum as well; to satisfy
such situations we need to make sure delete
markers only expect n/2 read quorum.
Additionally, we should also look at additional metadata on the
actual objects that might have been "erasure" upgraded with new
parity when disks are down.
In such a scenario, do not default to the standard storage class
parity; instead use the parityBlocks present in the FileInfo to
ensure that we are dealing with the correct quorum for READs and
DELETEs.
Retry listings, when no next marker is returned and the result isn't truncated.
This can happen when an object is queued, but no info can be fetched.
Fixes #14190
The healing code repeatedly tries to heal a root disk when it is empty;
the reason is that connectEndpoint() returns errUnformattedDisk even
if the disk is a root disk. Changing that to return a different error
will avoid queueing the disk to the healing code in each connect-disks
iteration.
some upgraded objects might not get listed due
to different quorum ratios across objects.
make sure to list all objects that satisfy the
maximum possible quorum.
This change allows the MinIO server to lookup users in different directory
sub-trees by allowing specification of multiple search bases separated by
semicolons.
This PR removes an unnecessary state that gets
passed around for DiskIDs, which is not needed
since each disk knows exactly which pool and which
set it belongs to on a running system.
Currently cached DiskIDs won't work properly
because the code always ends up skipping offline disks
and never runs healing when disks are offline, as
it expects all the cached diskIDs to be present
always. This also made things somewhat inflexible
with respect to, for instance, a new diskID for `format.json`
(however, this is not a big issue).
This is an unnecessary requirement: healing
via the scanner does not need all drives to be online.
Instead, healing should trigger even when only some nodes
and drives are available; this ensures that we
keep the SLA intact on objects when disks
are offline for a prolonged period of time.
Wrong resource is being fetched, since idx is incremented, but mapID is reused.
Regression caused by #13454 - that part didn't optimize anything anyway.
Publish storage functions latency to help compare the performance
of different disks in a single deployment.
e.g.:
```
minio_node_disk_latency_us{api="storage.WalkDir",disk="/tmp/xl/1",server="localhost:9001"} 226
minio_node_disk_latency_us{api="storage.WalkDir",disk="/tmp/xl/2",server="localhost:9002"} 1180
minio_node_disk_latency_us{api="storage.WalkDir",disk="/tmp/xl/3",server="localhost:9003"} 1183
minio_node_disk_latency_us{api="storage.WalkDir",disk="/tmp/xl/4",server="localhost:9004"} 1625
```
- create internal erasure volumes only if the disk is unformatted
- return a copy of format data in xlStorage.ReadAll
- parse env vars only once, to be re-used by xl-storage
This speed-up is intended for faster startup times
for almost all MinIO operations. Changes here are
- Drives are not re-read for 'format.json' on a regular
basis; the value read during init is remembered and refreshed
at 5-second intervals.
- Do not do O_DIRECT tests on drives with existing 'format.json'
only fresh setups need this check.
- Parallelize initializing erasureSets for multiple sets.
- Avoid re-reading format.json when migrating 'format.json'
from really old V1->V2->V3
- Keep a copy of local drives for any given server in memory
for a quick lookup.
this helps in caching the resolved values early on, and avoids
further resolution for individual nodes when the
object layer comes online.
This can speed up our startup time during upgrades etc. by
an order of magnitude.
Additional changes in connectLoadInitFormats() parallelize
all calls that might potentially block.
- Site replication was missing replicating users,
groups when an empty site was added.
- Add site replication for groups and users when they
are disabled and enabled.
- Add support for replicating bucket quota config.
When calculating signatures, empty part ETags were not discarded, leading
to a different signature compared to freshly created ones.
This would mean that after a heal the signature of the healed metadata would be
different. Fixing the calculation of the signature makes these consistent.
Furthermore, for inconsistent entries with zero version ID and the same
mod times but different signatures, the one with the lowest signature would
be picked for the quorum check. Since this is 50/50, we now fall back to a simple
quorum count on all signatures.
Each of these fixes by themselves will lead to quorum. Tests were added
for regressions and expected outcomes.
When the replication rule is based on tag matches, the replication process
should pick up targets matching the tags specified in the replication
rule.
Fixing regression due to #12880
Repeated reads of single large objects in HPC-like
workloads need the following option to disable
O_DIRECT for more effective usage of the kernel
page cache.
However, this option should only be used in very specific
situations and shouldn't be enabled on all
servers.
NVMe servers always benefit from keeping O_DIRECT on.
Map labels might have been referenced elsewhere; this
can lead to concurrent access at lower layers.
Avoid this by copying the information while
concurrently serving the metrics.
Do not allow mutation of the pool command line when there are
unfinished decommissions in place; disallow such scenarios
to avoid user mistakes.
Also add test cases to cover all relevant scenarios.
When reading input for PutObject or PutObjectPart, add a readahead buffer for big inputs.
This makes network reads + hashing run asynchronously with erasure coding and writes, which reduces overall latency in distributed setups where the input comes from upstream and writes go to other servers.
We read 2 buffers ahead, meaning one is always ready/waiting while the other is currently being read from.
This improves PutObject and PutObjectPart for these cases.
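A toy illustration of the idea, not the implementation MinIO uses: a goroutine reads ahead into buffers handed over a channel with capacity 2, so the producer can fill the next buffer while the consumer hashes or erasure-codes the previous one.
```
package main

import (
	"fmt"
	"io"
	"strings"
)

func readAhead(r io.Reader, bufSize int) <-chan []byte {
	out := make(chan []byte, 2) // up to two chunks buffered ahead of the consumer
	go func() {
		defer close(out)
		for {
			buf := make([]byte, bufSize)
			n, err := r.Read(buf)
			if n > 0 {
				out <- buf[:n]
			}
			if err != nil {
				return
			}
		}
	}()
	return out
}

func main() {
	for chunk := range readAhead(strings.NewReader("some object data"), 4) {
		fmt.Printf("%s ", chunk) // stand-in for hashing / erasure coding
	}
	fmt.Println()
}
```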
When deleting multiple versions, the code "gives up" with an errFileVersionNotFound if
a version cannot be found. This effectively skips deleting other versions
sent in the same request.
This can happen with inconsistent objects. We should ignore errFileVersionNotFound
and continue with the others.
We already ignore these at the caller level; this PR is a continuation of 54a9877
This PR simplifies a few things
- Multipart parts are renamed and, upon failure, unrenamed(); keep this
multipart-specific behavior, it is needed and works fine.
- AbortMultipart should blindly delete once the lock is acquired instead
of re-reading metadata and calculating quorum; abort is a delete()
operation and the client has no business looking for errors on this.
- Skip Access() calls to folders that are operating on
`.minio.sys/multipart` folder as well.
Large clusters with multiple sets, or multi-pool setups, at times might
fail and report unexpected "file not found" errors. This can become
a problem during the startup sequence when some files need to be created
at multiple locations.
- This PR ensures that we nil the erasure writers such that they
are skipped in the RenameData() call.
- RenameData() doesn't need "Access()" calls for `.minio.sys`
folders; they always exist.
- Make sure PutObject() never returns ObjectNotFound{} for any
errors, make sure it always returns "WriteQuorum" when renameData()
fails with ObjectNotFound{}. Return appropriate errors for all
other cases.
Currently tag removal leaves replication state as `PENDING`
because the `HEAD` api returns just a tag count but not the
actual tags, and this is treated as a no-op
```
λ mc admin decommission start alias/ http://minio{1...2}/data{1...4}
```
```
λ mc admin decommission status alias/
┌─────┬─────────────────────────────────┬──────────────────────────────────┬────────┐
│ ID │ Pools │ Capacity │ Status │
│ 1st │ http://minio{1...2}/data{1...4} │ 439 GiB (used) / 561 GiB (total) │ Active │
│ 2nd │ http://minio{3...4}/data{1...4} │ 329 GiB (used) / 421 GiB (total) │ Active │
└─────┴─────────────────────────────────┴──────────────────────────────────┴────────┘
```
```
λ mc admin decommission status alias/ http://minio{1...2}/data{1...4}
Progress: ===================> [1GiB/sec] [15%] [4TiB/50TiB]
Time Remaining: 4 hours (started 3 hours ago)
```
```
λ mc admin decommission status alias/ http://minio{1...2}/data{1...4}
ERROR: This pool is not scheduled for decommissioning currently.
```
```
λ mc admin decommission cancel alias/
┌─────┬─────────────────────────────────┬──────────────────────────────────┬──────────┐
│ ID │ Pools │ Capacity │ Status │
│ 1st │ http://minio{1...2}/data{1...4} │ 439 GiB (used) / 561 GiB (total) │ Draining │
└─────┴─────────────────────────────────┴──────────────────────────────────┴──────────┘
```
> NOTE: A canceled decommission will not make the pool active again, since we might have
> potentially partial duplicate content on the other pools. To avoid this scenario, be
> very sure to start decommissioning as a planned activity.
```
λ mc admin decommission cancel alias/ http://minio{1...2}/data{1...4}
┌─────┬─────────────────────────────────┬──────────────────────────────────┬────────────────────┐
│ ID │ Pools │ Capacity │ Status │
│ 1st │ http://minio{1...2}/data{1...4} │ 439 GiB (used) / 561 GiB (total) │ Draining(Canceled) │
└─────┴─────────────────────────────────┴──────────────────────────────────┴────────────────────┘
```
In a multi-pool setup when disks are coming up, or in a single-pool
setup with, say, 100s of erasure sets and a slow network,
it's possible that when healing is attempted on the `.minio.sys/config`
folder, it unexpectedly deletes some policy
files as dangling, due to a mistake in understanding when `isObjectDangling`
is considered to be 'true'.
This issue happened in commit 30135eed86
when we assumed the validMeta with empty ErasureInfo is considered
to be fully dangling. This implementation issue gets exposed when
the server is starting up.
This is most easily seen with multiple-pool setups because of the
disconnected fashion in which pools come up. The decision to purge the
object as dangling is taken incorrectly before the correct state
is achieved on each pool: when the corresponding drive, say,
returns 'errDiskNotFound', a 'delete' is triggered. At that point
the 'drive' comes online, because this is part of the startup sequence
and drives can come online lazily.
This kind of situation exists because we allow (totalDisks/2) number
of drives to be online when the server is being restarted.
Implementation made an incorrect assumption here leading to policies
getting deleted.
Added tests to capture the implementation requirements.
- This allows site-replication to be configured when using OpenID or the
internal IDentity Provider.
- Internal IDP IAM users and groups will now be replicated to all members of the
set of replicated sites.
- When using OpenID as the external identity provider, STS and service accounts
are replicated.
- Currently this change dis-allows root service accounts from being
replicated (TODO: discuss security implications).
It is possible that GetLock() remembers a previously
failed releaseAll() when there are networking issues, and
this state can have side effects.
This PR tries to avoid this side effect by making sure
to initialize NewNSLock() for each GetLock() attempt,
so that no prior state in memory can
interfere with new lock grants.
The current practice of assuming a `default` parity of `4` is not correct
for all objects stored on MinIO; objects in .minio.sys have maximum
parity, and healing won't trigger on these objects due to incorrect
verification of quorum.
The AddUser() API endpoint was accepting a policy field.
This API is used to update a user's secret key and account
status, and allows a regular user to update their own secret key.
The policy update is also applied, though it does not appear to
be used by any existing client-side functionality.
This fix changes the accepted request body type and removes
the ability to apply policy changes as that is possible via the
policy set API.
NOTE: Changing passwords can be disabled as a workaround
for this issue by adding an explicit "Deny" rule to disable the API
for users.
This PR is an attempt to make this configurable,
as not all situations have the same level of tolerable
delta; disks may be replaced days, or even hours,
apart.
There is also a possibility that nodes have drifted
in time when NTP is not configured on the system.
data shards were wrong due to a healing bug
reported in #13803, mainly with unaligned object
sizes.
This PR is an attempt to automatically avoid
these shards, using the available information about
the `xl.meta` and the actual disk mtime.
- When using MinIO's internal IDP, STS credential permissions did not check the
groups of a user.
- Also fix bug in policy checking in AccountInfo call
Also log all the missed events and logs instead of silently
swallowing the events.
Bonus: Extend the logger webhook to support mTLS
similar to audit webhook target.
- r.ulock was not locked when r.UsageCache was being modified
Bonus:
- simplify code by removing some unnecessary clone methods - we can
do this because go arrays are values (not pointers/references) that are
automatically copied on assignment.
- remove some unnecessary map allocation calls
Data structures were repeatedly initialized;
this causes GC pressure. Instead, re-use the
collectors.
Initialize collectors in `init()`, and also make
sure to honor the cache semantics for performance
requirements.
Avoid a global map and a global lock for metrics
lookup; instead, let lookups be lock-free unless
the cache is being invalidated.
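A generic sketch of such a lock-free read path (illustrative names, not the MinIO collectors): readers load a cached snapshot through atomic.Value and only the rare invalidation path stores a new one.
```
package main

import (
	"fmt"
	"sync/atomic"
)

type snapshot struct {
	metrics map[string]float64
}

var cache atomic.Value // always holds a *snapshot

// load returns the current snapshot without taking any lock.
func load() *snapshot { return cache.Load().(*snapshot) }

// invalidate rebuilds the snapshot; this is the only writer.
func invalidate(m map[string]float64) { cache.Store(&snapshot{metrics: m}) }

func main() {
	invalidate(map[string]float64{"minio_up": 1})
	fmt.Println(load().metrics["minio_up"])
}
```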
When STS credentials are created for a user, a unique (hopefully stable) parent
user value exists for the credential, which corresponds to the user for whom the
credentials are created. The access policy is mapped to this parent-user and is
persisted. This helps ensure that all STS credentials of a user have the same
policy assignment at all times.
Before this change, for an OIDC STS credential, when the policy claim changes in
the provider (when not using RoleARNs), the change would not take effect on
existing credentials, but only on new ones.
To support existing STS credentials without parent-user policy mappings, we
lookup the policy in the policy claim value. This behavior should be deprecated
when such support is no longer required, as it can still lead to stale
policy mappings.
Additionally this change also simplifies the implementation for all non-RoleARN
STS credentials. Specifically, for AssumeRole (internal IDP) STS credentials,
policies are picked up from the parent user's policies; for
AssumeRoleWithCertificate STS credentials, policies are picked up from the
parent user mapping created when the STS credential is generated.
AssumeRoleWithLDAP already picks up policies mapped to the virtual parent user.
A corner case can occur where the delete-marker was propagated
but the metadata could not be updated on the primary. Sending
a RemoveObject call with the Delete marker version would end
up permanently deleting the version on target. Instead, perform
a Stat on the delete-marker version on target and redo replication
only if the delete-marker is missing on target.
After the introduction of Refresh logic in locks, the data scanner can
quit when the data scanner lock is not able to get refreshed. In that
case, the context of the data scanner will get canceled and
runDataScanner() will quit. Another server would pick up the scanning
routine, but after some time all nodes can end up with their scanning
routines aborted, as described above.
This fix will just run the data scanner in a loop.
- Allow proper SRError to be propagated to
handlers and converted appropriately.
- Make sure to enable object locking on buckets
when requested in MakeBucketHook.
- When DNSConfig is enabled attempt to delete it
first before deleting buckets locally.
- Rename MaxNoncurrentVersions tag to NewerNoncurrentVersions
Note: We apply overlapping NewerNoncurrentVersions rules such that
we honor the highest among the applicable limits, e.g. if 2 overlapping rules
are configured to retain 2 and 3 noncurrent versions, we
will retain 3.
- Expire newer noncurrent versions after noncurrent days
- MinIO extension: allow noncurrent days to be zero, allowing expiry
of noncurrent version as soon as more than configured
NewerNoncurrentVersions are present.
- Allow NewerNoncurrentVersions rules on object-locked buckets
- No x-amz-expiration when NewerNoncurrentVersions configured
- ComputeAction should skip rules with NewerNoncurrentVersions > 0
- Add unit tests for lifecycle.ComputeAction
- Support lifecycle rules with MaxNoncurrentVersions
- Extend ExpectedExpiryTime to work with zero days
- Fix all-time comparisons to be relative to UTC
- This introduces a new admin API with a query parameter (v=2) to return a
response with the timestamps
- Older API still works for compatibility/smooth transition in console
- allow any regular user to change their own password
- allow STS credentials to create users if permissions allow
Bonus: do not allow changes to sts/service account credentials (via add user API)
ListObjects() should never list a delete-marked folder
if latest is delete marker and delimiter is not provided.
ListObjectVersions() should list a delete-marked folder
even if latest is delete marker and delimiter is not
provided.
Enhance further versioning listing on the buckets
request.Form uses less memory allocation and avoids gorilla mux matching
with weird characters in parameters such as '\n'
- Remove Queries() to avoid matching
- Ensure r.ParseForm is called to populate fields
- Add a unit test for object names with '\n'
Delete-marked objects should not be considered
for listing when listing is delimited; this issue
was introduced in PR #13804, which was mainly to
address listing of directories when
delimited.
This PR fixes this properly and adds tests to
ensure that we behave in accordance with how
an S3 API behaves for ListObjects() without
versions.
Save part.1 for writebacks in a separate folder
and move it to cache dir atomically while saving
the cache metadata. This is to avoid GC mistaking
part.1 as orphaned cache entries and purging them.
This PR also fixes object size being overwritten during
retries for write-back mode.
- deleting policies was deleting all LDAP
user mapping, this was a regression introduced
in #13567
- deleting of policies is properly sent across
all sites.
- remove unexpected errors and instead embed the real
errors as part of the 500 error response.
- deleteBucket() should be called for cleanup
if the client abruptly disconnects.
- out-of-disk errors should be sent to the client
properly and should also cancel the calls.
- limit concurrency to the available MAXPROCS, not
32, for auto-tuned setups; if procs are beyond
32 then continue normally. This is to handle
smaller setups.
fixes #13834
Return errors at once when untar fails.
The current error handling was quite a mess: errors are written
to the stream, but processing continues.
Instead, return errors when they occur and transform
internal errors into bad request errors, since the problem is likely
with the input.
Fixes #13832
Sometimes we see a confusing error message like "Server expects 'storage' API
version 'v41', instead found 'v41'". Show a more generic error message
that includes the path of the REST call instead.