Use Walk(), which is a recursive listing with versioning, to check if
the bucket has any objects before it is removed. This is beneficial
because the bucket can contain multiple dangling objects on multiple
drives.
Also, this prevents a bug where a bucket is deleted in a deployment
that has many erasure sets even though the bucket still contains one or
a few objects that are not spread across enough erasure sets.
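A minimal sketch of the check, assuming a Walk-style listing API; ObjectInfo, WalkOptions and ObjectLister below are stand-ins for the server's internal types:
```
import "context"

// ObjectInfo, WalkOptions and ObjectLister are hypothetical stand-ins for the
// internal listing API.
type ObjectInfo struct{ Name string }

type WalkOptions struct{ Versioned bool }

type ObjectLister interface {
	Walk(ctx context.Context, bucket, prefix string, results chan<- ObjectInfo, opts WalkOptions) error
}

// bucketIsEmpty walks the bucket recursively with versioning enabled, so a
// dangling object present on only a few drives is still detected before the
// bucket is removed.
func bucketIsEmpty(ctx context.Context, store ObjectLister, bucket string) (bool, error) {
	ctx, cancel := context.WithCancel(ctx)
	defer cancel() // stop the walk as soon as we have an answer

	results := make(chan ObjectInfo)
	if err := store.Walk(ctx, bucket, "", results, WalkOptions{Versioned: true}); err != nil {
		return false, err
	}
	for range results {
		return false, nil // at least one version exists, bucket is not empty
	}
	return true, nil
}
```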
Currently, retrying the healing of a new drive does not reset
HealedBuckets, which means the next healing retry will skip those
buckets. This commit fixes that behavior.
Also, the skipped-objects counter will now include objects uploaded
after the healing has started.
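A rough sketch of the intended reset; healingTracker here is a hypothetical stand-in for the per-drive state persisted in .healing.bin:
```
// healingTracker is a hypothetical stand-in for the per-drive healing state.
type healingTracker struct {
	HealedBuckets  []string // buckets already healed in the current attempt
	ObjectsSkipped uint64   // now also counts objects uploaded after healing started
}

// resetForRetry clears HealedBuckets so a healing retry revisits every bucket
// instead of skipping the ones recorded by the previous attempt.
func (h *healingTracker) resetForRetry() {
	h.HealedBuckets = h.HealedBuckets[:0]
}
```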
sftp sends local requests to the S3 port while passing the session-token
header when the account corresponds to a service account. However, this
is not permitted and throws an error: "The security token included in the
request is invalid".
This commit avoids passing the session token to the upper layer that
initializes the MinIO client, so the error no longer occurs.
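A hedged sketch of the idea with the minio-go SDK; the surrounding sftp plumbing is omitted and the function name is illustrative:
```
import (
	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

// newLocalS3Client builds the client used for local S3 requests. For service
// accounts the session token must be dropped, otherwise the S3 port rejects
// the request with "The security token included in the request is invalid".
func newLocalS3Client(endpoint, accessKey, secretKey string) (*minio.Client, error) {
	return minio.New(endpoint, &minio.Options{
		// Empty session token instead of the service-account token.
		Creds:  credentials.NewStaticV4(accessKey, secretKey, ""),
		Secure: false, // loopback connection in this sketch
	})
}
```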
Sometimes we need historical information in .healing.bin, such as the
number of expired objects that healing skipped, which can create
drive-usage disparity within the same erasure set. For that reason,
this commit no longer removes .healing.bin; instead it gains a new
field called Finished so we know healing is finished on that drive.
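A minimal sketch of the new tracker shape; only the Finished field comes from this description, the other field and the persistence details are assumptions:
```
// healingTracker is a hypothetical stand-in for the contents of .healing.bin.
type healingTracker struct {
	Finished       bool   // true once healing completed on this drive
	ObjectsExpired uint64 // expired objects that healing intentionally skipped
}

// finish marks healing as done and keeps .healing.bin on disk instead of
// removing it, so the historical counters remain available.
func (h *healingTracker) finish() {
	h.Finished = true
	// persist the tracker here (persistence elided in this sketch)
}
```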
Services are unfrozen before `initBackgroundReplication` is finished. This means that
the globalReplicationStats write is racy. Switch to an atomic pointer.
Provide the `ReplicationPool` with the stats, so it doesn't have to be grabbed
from the atomic pointer on every use.
All other loads are nil-checked, and calls return empty values while the
stats haven't been initialized yet.
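A small sketch of the pattern; ReplicationStats and the accessor are simplified stand-ins:
```
import "sync/atomic"

// ReplicationStats is a stand-in for the real stats type.
type ReplicationStats struct{ queued int64 }

// globalReplicationStats is stored once initBackgroundReplication finishes;
// the atomic pointer removes the race with requests unfrozen earlier.
var globalReplicationStats atomic.Pointer[ReplicationStats]

func queuedReplicationCount() int64 {
	stats := globalReplicationStats.Load()
	if stats == nil {
		return 0 // stats not initialized yet, return an empty value
	}
	return stats.queued
}
```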
* Allow a maximum of 10 seconds to start profiling operations.
* Download up to 16 profiles concurrently, but only allow 10 seconds for
each (write time not included); see the sketch after this list.
* Add cluster info as the first operation.
* Ignore remote download errors.
* Stop remote profiles if the request is terminated.
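A rough sketch of the download step using x/sync/errgroup for the concurrency limit; the fetch callback stands in for the real per-node profile download:
```
import (
	"context"
	"time"

	"golang.org/x/sync/errgroup"
)

// downloadProfiles fetches one profile per node, at most 16 in flight and no
// more than 10 seconds per download (write time excluded). Remote errors are
// ignored so a single bad node does not abort the whole operation, and a
// terminated request cancels ctx, stopping the remote profiles.
func downloadProfiles(ctx context.Context, nodes []string, fetch func(ctx context.Context, node string) error) {
	g, ctx := errgroup.WithContext(ctx)
	g.SetLimit(16)
	for _, node := range nodes {
		node := node
		g.Go(func() error {
			dctx, cancel := context.WithTimeout(ctx, 10*time.Second)
			defer cancel()
			_ = fetch(dctx, node) // ignore remote download errors
			return nil
		})
	}
	_ = g.Wait()
}
```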
If site replication is enabled and the code tries to extract JWT
claims while the site-replication service account credentials are
not loaded yet, the code enters an infinite loop, causing
high CPU usage.
Another way to hit the infinite loop is having service accounts
created by an old deployment version, where the service account JWT was
signed with the root credentials; this is no longer the case.
This commit removes the possibility of the infinite loop in the code
and adds a root-credential fallback to extract claims from old service
accounts.
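A hedged sketch of the fallback using github.com/golang-jwt/jwt/v4; the server's actual claim-extraction code differs and the helper name is an assumption:
```
import (
	"errors"

	jwtgo "github.com/golang-jwt/jwt/v4"
)

// extractClaims parses the service-account JWT with the account's own secret
// first and, if that fails, falls back to the root credentials. Old service
// accounts were signed with the root secret, so without the fallback their
// tokens would never validate.
func extractClaims(token, accountSecret, rootSecret string) (jwtgo.MapClaims, error) {
	for _, secret := range []string{accountSecret, rootSecret} {
		claims := jwtgo.MapClaims{}
		_, err := jwtgo.ParseWithClaims(token, claims, func(*jwtgo.Token) (interface{}, error) {
			return []byte(secret), nil
		})
		if err == nil {
			return claims, nil
		}
	}
	return nil, errors.New("unable to extract claims with account or root credentials")
}
```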
Move away from map[string]interface{} to map[string]string
to simplify the audit and provide concise information.
This avoids large allocations under load and reduces the amount
of audit information generated, as the current implementation
was a bit free-form; instead, all data structures must be
flattened.
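A small illustration of the flattening; the tag names and helper are hypothetical:
```
import "strconv"

// flattenObjectTags renders every value to a string up front instead of
// attaching a nested map[string]interface{} to the audit entry, which keeps
// allocations small and the output predictable.
func flattenObjectTags(bucket, object string, size int64, versioned bool) map[string]string {
	return map[string]string{
		"bucket":    bucket,
		"object":    object,
		"size":      strconv.FormatInt(size, 10),
		"versioned": strconv.FormatBool(versioned),
	}
}
```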
Previously, we checked whether we had a quorum on the DataDir value.
We are removing this check, which allows reading objects whose DataDir
values differ on a few drives (due to a rebalance-stop race bug),
provided their ETags or ModTimes match.
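A sketch of the relaxed comparison; FileInfo here is a stand-in for the per-drive object metadata:
```
import "time"

// FileInfo is a hypothetical stand-in for per-drive object metadata.
type FileInfo struct {
	DataDir string
	ETag    string
	ModTime time.Time
}

// sameObjectVersion no longer requires the DataDir values to agree; drives
// left inconsistent by a rebalance-stop race are still readable as long as
// the ETags or the modification times match.
func sameObjectVersion(a, b FileInfo) bool {
	if a.ETag != "" && a.ETag == b.ETag {
		return true
	}
	return a.ModTime.Equal(b.ModTime)
}
```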
Since a lot of operations load from storage or make remote calls, add a 10-second timeout to each operation.
This should make `mc admin info` return values even under extreme conditions.
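The per-operation timeout is the usual context pattern; the wrapper below is illustrative:
```
import (
	"context"
	"time"
)

// withInfoTimeout bounds a single `mc admin info` sub-operation, e.g. a
// storage read or a remote peer call, to 10 seconds so the overall response
// still returns when a node or drive is unresponsive.
func withInfoTimeout(ctx context.Context, op func(ctx context.Context) error) error {
	ctx, cancel := context.WithTimeout(ctx, 10*time.Second)
	defer cancel()
	return op(ctx)
}
```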
- optimize writing part.N.meta by writing both part.N
  and its meta in sequence, without a network component.
- remove part.N.meta and part.N that were only partially
  successful, in quorum-loss situations during renamePart().
- allow for a strict read-quorum check arbitrated via ETag
  for the given part number; this makes the final commit
  doubly safe (a sketch follows below).
- return an appropriate error when read quorum is missing,
  instead of returning InvalidPart{}, which is a non-retryable
  error. This kind of situation can happen when many
  nodes go offline in rotation; an example of such
  restart() behavior is StatefulSet updates in k8s.
fixes #20091
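A minimal sketch of the ETag arbitration mentioned above; the real part-metadata handling is more involved and this helper is an assumption:
```
// partETagHasQuorum reports whether the most frequently observed ETag for a
// given part number is present on at least readQuorum drives. If not, the
// caller should return a retryable quorum error instead of InvalidPart{}.
func partETagHasQuorum(etags []string, readQuorum int) bool {
	counts := make(map[string]int, len(etags))
	for _, etag := range etags {
		if etag == "" {
			continue // drive is offline or missing this part
		}
		counts[etag]++
	}
	for _, n := range counts {
		if n >= readQuorum {
			return true
		}
	}
	return false
}
```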
During rebalance stop, it can happen that Put() races by
overwriting the same object again. If that overwrite
"succeeds", the rebalancer can then proceed to delete the
object from the source pool, causing data loss.
This PR enhances #20233 to handle more scenarios such
as these.
Rebalance-stop can race with ongoing rebalance operations. This change
prevents these operations from overwriting objects by checking that the
source and destination pool indices are different.
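The guard itself boils down to a pool-index comparison; this helper is only a sketch:
```
// okToDeleteFromSourcePool: the rebalanced object may only be removed from
// its source pool if it was written to a different pool; otherwise a racing
// Put() into the same pool would be deleted, losing data.
func okToDeleteFromSourcePool(srcPoolIdx, dstPoolIdx int) bool {
	return srcPoolIdx != dstPoolIdx
}
```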
This commit replaces the LDAP client TLS config and
adds a custom list of TLS cipher suites which support
RSA key exchange (RSA kex).
Some LDAP server connections experience a significant slowdown
when these cipher suites are not available. The Go TLS stack
disables them by default (they can be enabled via GODEBUG=tlsrsakex=1).
fixes https://github.com/minio/minio/issues/20214
With a custom list of TLS ciphers, Go can pick the TLS RSA key-exchange
cipher. Ref:
```
if c.CipherSuites != nil {
	return c.CipherSuites
}
if tlsrsakex.Value() == "1" {
	return defaultCipherSuitesWithRSAKex
}
```
Ref: https://cs.opensource.google/go/go/+/refs/tags/go1.22.5:src/crypto/tls/common.go;l=1017
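A hedged sketch of a client TLS config that keeps RSA key-exchange suites available; the exact suite list used by the commit may differ:
```
import "crypto/tls"

// ldapTLSConfig sets CipherSuites explicitly, so Go takes the first branch in
// the snippet above and does not silently drop the RSA key-exchange suites.
func ldapTLSConfig(serverName string) *tls.Config {
	return &tls.Config{
		ServerName: serverName,
		MinVersion: tls.VersionTLS12,
		CipherSuites: []uint16{
			// Modern ECDHE suites first.
			tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
			tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
			// RSA key-exchange suites kept for LDAP servers that need them.
			tls.TLS_RSA_WITH_AES_128_GCM_SHA256,
			tls.TLS_RSA_WITH_AES_256_GCM_SHA384,
		},
	}
}
```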
Signed-off-by: Andreas Auernhammer <github@aead.dev>
This allows de-duplicating the callers when called
concurrently, so bucket metadata reads collapse into a
single call. All concurrent callers will get the same data
as the first one.
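A sketch of the pattern with x/sync/singleflight; BucketMetadata and the read callback are stand-ins for the real types:
```
import (
	"context"

	"golang.org/x/sync/singleflight"
)

// BucketMetadata is a stand-in for the real metadata type.
type BucketMetadata struct{ Name string }

var bucketMetaGroup singleflight.Group

// loadBucketMetadata de-duplicates concurrent callers: only the first caller
// performs the actual read, everyone else waits for and receives its result.
func loadBucketMetadata(ctx context.Context, bucket string, read func(ctx context.Context, bucket string) (BucketMetadata, error)) (BucketMetadata, error) {
	v, err, _ := bucketMetaGroup.Do(bucket, func() (interface{}, error) {
		return read(ctx, bucket)
	})
	if err != nil {
		return BucketMetadata{}, err
	}
	return v.(BucketMetadata), nil
}
```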