In a specific corner case, when only dangling objects with a single shard are left over, we end up in a situation where healing is unable to list the dangling object to purge it. The listing logic expected only `len(disks)/2+1` drives, and with that choice the drive where the object is present may not be part of the expected disks list, causing the object to never be listed and to be ignored in perpetuity.
Change the logic so that HealObjects() can listAndHeal() per set properly on all of its drives, since there is really no other way to do this cleanly. However, instead of listing on all erasure sets simultaneously, we list on 3 at a time, so in a large enough cluster this is fairly staggered.
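A minimal sketch of the staggering idea, assuming a hypothetical `erasureSet` type and `listAndHeal` helper (not the actual MinIO code); a buffered channel acts as the 3-slot semaphore:

```go
package heal

import (
	"context"
	"sync"
)

// erasureSet is an illustrative stand-in for one erasure set; listAndHeal is
// assumed to list every drive in the set and heal/purge what it finds.
type erasureSet struct{ id int }

func (s *erasureSet) listAndHeal(ctx context.Context) { /* ... */ }

// healAllSets walks every erasure set but keeps at most 3 listings in
// flight, so a large cluster never lists all sets simultaneously.
func healAllSets(ctx context.Context, sets []*erasureSet) {
	sem := make(chan struct{}, 3) // at most 3 concurrent listings
	var wg sync.WaitGroup
	for _, set := range sets {
		wg.Add(1)
		go func(s *erasureSet) {
			defer wg.Done()
			sem <- struct{}{}        // acquire a slot
			defer func() { <-sem }() // release it
			s.listAndHeal(ctx)
		}(set)
	}
	wg.Wait()
}
```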
To make sure that no objects were skipped for any reason,
decommissioning does a second phase of listing to check if there
are some objects that need to be decommissioned. However, the code
forgot to skip orphan delete markers, which the decom code itself already
skips.
Make the code ignore delete markers in the verification phase.
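A rough sketch of the verification-phase filter, with an illustrative entry type rather than the real listing types:

```go
package decom

// objectEntry is an illustrative stand-in for one listed object version.
type objectEntry struct {
	Name         string
	DeleteMarker bool
}

// needsDecommission reports whether the verification pass should flag an
// entry as "left behind". Delete markers are skipped here because the
// decommission pass itself already skips (orphan) delete markers.
func needsDecommission(e objectEntry) bool {
	return !e.DeleteMarker
}
```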
Co-authored-by: Anis Eleuch <anis@min.io>
This is a security incident fix. It would seem that since
the implementation of the unsigned payload trailer on PUTs,
we do not validate the signature of the incoming request.
The signature can be invalid and is ignored entirely; this
in turn allows any arbitrary secret to upload objects as long
as the user has "WRITE" permissions on the bucket. Since the
access key is, in general, public information, this exposes
users with WRITE on the bucket to impersonation: any arbitrary
client can make a fake request to MinIO, and the signature
under the Authorization: header is totally ignored.
A test has been added to cover this scenario and fail
appropriately.
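For illustration, the header-level check boils down to recomputing the Signature V4 signature from the secret key and comparing it with what the client sent; the canonical string-to-sign construction is elided here and none of these names are MinIO's internal code:

```go
package sigv4

import (
	"crypto/hmac"
	"crypto/sha256"
	"crypto/subtle"
	"encoding/hex"
)

func hmacSHA256(key, data []byte) []byte {
	h := hmac.New(sha256.New, key)
	h.Write(data)
	return h.Sum(nil)
}

// signingKey derives the standard AWS Signature V4 signing key.
func signingKey(secret, date, region, service string) []byte {
	k := hmacSHA256([]byte("AWS4"+secret), []byte(date))
	k = hmacSHA256(k, []byte(region))
	k = hmacSHA256(k, []byte(service))
	return hmacSHA256(k, []byte("aws4_request"))
}

// validSignature recomputes the request signature and compares it, in
// constant time, with the signature from the Authorization header. A PUT
// using an unsigned payload trailer must still pass this check.
func validSignature(secret, date, region, service, stringToSign, clientSig string) bool {
	sig := hex.EncodeToString(hmacSHA256(signingKey(secret, date, region, service), []byte(stringToSign)))
	return subtle.ConstantTimeCompare([]byte(sig), []byte(clientSig)) == 1
}
```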
- Move VersionPurgeStatus into replication package
- ilm: Evaluate policy w/ obj retention/replication
- lifecycle: Use Evaluator to enforce ILM in scanner
- Unit tests covering ILM, replication and retention
- Simplify NewEvaluator constructor
When decommissioning is started, the list of buckets to decommission is
calculated, however, a bucket can be removed before decommissioning reaches
it. This causes an infinite loop of listing errors complaining about
the non-existence of the bucket. This commit ignores
errVolumeNotFound to skip the removed bucket.
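A small sketch of the skip, assuming an illustrative `list` callback; only the `errVolumeNotFound` name comes from the codebase:

```go
package decom

import "errors"

// errVolumeNotFound stands in for the storage-layer "bucket is gone" error.
var errVolumeNotFound = errors.New("volume not found")

// decommissionBucket lists and drains one bucket; if the bucket was deleted
// after decommissioning started, treat it as done instead of retrying forever.
func decommissionBucket(bucket string, list func(string) error) error {
	if err := list(bucket); err != nil {
		if errors.Is(err, errVolumeNotFound) {
			return nil // bucket removed mid-decommission: skip it
		}
		return err
	}
	return nil
}
```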
Enforce a bucket count limit on metrics for v2 calls.
If people hit this limit, they should move to v3, as certain calls explode with a high bucket count.
Reviewers: This *should* only affect v2 calls, but the complexity is overwhelming.
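An illustrative guard for the limit; the constant value and error wording here are assumptions, not the shipped ones:

```go
package metrics

import "fmt"

// maxV2Buckets is an illustrative cap, not the value MinIO actually uses.
const maxV2Buckets = 100

// checkV2BucketLimit refuses to expand per-bucket v2 metrics once the
// cluster has too many buckets; callers are expected to move to v3.
func checkV2BucketLimit(bucketCount int) error {
	if bucketCount > maxV2Buckets {
		return fmt.Errorf("v2 metrics limited to %d buckets (%d found); use metrics v3", maxV2Buckets, bucketCount)
	}
	return nil
}
```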
If a user attempts to authenticate with a key but does not have an
sshpubkey attribute in LDAP, the server allows the connection, which
means the server trusted the key without reason. This is now fixed,
and a test has been added for validation.
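The fix amounts to treating a missing attribute as a rejection; a sketch with a hypothetical lookup helper (real code would compare parsed SSH key fingerprints, not raw strings):

```go
package ldapauth

import "errors"

// authorizedKeysFor returns the sshpubkey values stored on the user's LDAP
// entry; an illustrative lookup, not the actual LDAP plumbing.
func authorizedKeysFor(user string) ([]string, error) { return nil, nil }

// acceptPublicKey trusts a presented key only if the LDAP entry carries an
// sshpubkey attribute that matches it; no attribute means no trust.
func acceptPublicKey(user, presentedKey string) error {
	keys, err := authorizedKeysFor(user)
	if err != nil {
		return err
	}
	if len(keys) == 0 {
		return errors.New("no sshpubkey attribute for user: rejecting key auth")
	}
	for _, k := range keys {
		if k == presentedKey {
			return nil
		}
	}
	return errors.New("presented key does not match any sshpubkey attribute")
}
```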
This commit adds the `MINIO_KMS_REPLICATE_KEYID` env. variable.
By default - if not specified or not set to `off` - MinIO will
replicate the KMS key ID of an object.
If `MINIO_KMS_REPLICATE_KEYID=off`, MinIO does not include the
object's KMS Key ID when replicating an object. However, it always
sets the SSE-KMS encryption header. This ensures that the object
gets encrypted using SSE-KMS. The target site chooses the KMS key
ID that gets used based on the site and bucket config.
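A sketch of the resulting header selection, using the standard S3 SSE-KMS header names; the function and wiring are illustrative:

```go
package replicate

import "os"

// Standard S3 SSE-KMS header names.
const (
	sseKMSHeader      = "X-Amz-Server-Side-Encryption"
	sseKMSKeyIDHeader = "X-Amz-Server-Side-Encryption-Aws-Kms-Key-Id"
)

// replicationSSEHeaders builds the encryption headers for a replicated
// object. With MINIO_KMS_REPLICATE_KEYID=off the key ID is dropped, letting
// the target site/bucket config pick the KMS key, while the SSE-KMS header
// itself is always set so the copy stays KMS-encrypted.
func replicationSSEHeaders(objectKeyID string) map[string]string {
	h := map[string]string{sseKMSHeader: "aws:kms"}
	if os.Getenv("MINIO_KMS_REPLICATE_KEYID") != "off" && objectKeyID != "" {
		h[sseKMSKeyIDHeader] = objectKeyID
	}
	return h
}
```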
Signed-off-by: Andreas Auernhammer <github@aead.dev>
This commit allows clients to provide a set of intermediate CA
certificates (up to `MaxIntermediateCAs`) that the server will
use as intermediate CAs when verifying the trust chain from the
client leaf certificate up to one trusted root CA.
This is required if the client leaf certificate is not issued by
a trusted CA directly but by an intermediate CA. Without this commit,
MinIO rejects such certificates.
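Conceptually, the verification uses the client-supplied certificates as the intermediates pool; a minimal sketch with the standard library, not MinIO's handler code:

```go
package mtls

import "crypto/x509"

// verifyClientChain checks the client leaf certificate against the trusted
// roots, using the intermediates the client sent along in the handshake.
// Without the intermediates pool, a leaf issued by an intermediate CA
// (rather than directly by a root) fails verification.
func verifyClientChain(leaf *x509.Certificate, intermediates []*x509.Certificate, roots *x509.CertPool) error {
	pool := x509.NewCertPool()
	for _, cert := range intermediates {
		pool.AddCert(cert)
	}
	_, err := leaf.Verify(x509.VerifyOptions{
		Roots:         roots,
		Intermediates: pool,
		KeyUsages:     []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	})
	return err
}
```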
Signed-off-by: Andreas Auernhammer <github@aead.dev>
If an object is uploaded with tags, the internal tagging-timestamp tracked
for replication will be missing. Default to ModTime in such cases to
allow tags to be synced correctly.
Also fixes a regression in fetching tags and tag comparison.
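The fallback itself is a one-liner; sketched here with illustrative names:

```go
package replication

import "time"

// taggingTimestamp picks the timestamp used to decide whether tags need to
// be synced: objects uploaded with tags have no internal tagging-timestamp,
// so fall back to the object's ModTime.
func taggingTimestamp(tagTS, modTime time.Time) time.Time {
	if tagTS.IsZero() {
		return modTime
	}
	return tagTS
}
```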
* fix: proxy requests to honor global transport
Load the globalProxyEndpoint properly
Also, currently, proxy requests for batch cancel fail silently
even when the proxy errors out; instead, properly send the corresponding error
back for such proxy failures when opted in.
* pass the transport to the GetProxyEnpoints function
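A sketch of the intent using the standard library reverse proxy; the function name and error text are illustrative:

```go
package proxy

import (
	"net/http"
	"net/http/httputil"
	"net/url"
)

// newBatchProxy builds a reverse proxy to a peer that reuses the globally
// configured transport (TLS settings, timeouts) instead of the default one,
// and surfaces proxy failures to the caller instead of dropping them.
func newBatchProxy(target *url.URL, transport http.RoundTripper) *httputil.ReverseProxy {
	p := httputil.NewSingleHostReverseProxy(target)
	p.Transport = transport // honor the global transport
	p.ErrorHandler = func(w http.ResponseWriter, r *http.Request, err error) {
		// Do not fail silently: report the proxy error back to the client.
		http.Error(w, "proxy error: "+err.Error(), http.StatusBadGateway)
	}
	return p
}
```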
---------
Co-authored-by: Praveen raj Mani <praveen@minio.io>
Reject new lock requests immediately when 1000 goroutines are queued
for the local lock mutex.
We do not reject unlocking, refreshing, or maintenance; they add to the count.
The limit is set to allow for bursty behavior but prevent requests from
overloading the server completely.
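The mechanism can be pictured as a waiter counter in front of the mutex; an illustrative sketch, not the actual locker implementation:

```go
package lock

import (
	"errors"
	"sync"
	"sync/atomic"
)

// maxLockWaiters mirrors the 1000-goroutine cap described above.
const maxLockWaiters = 1000

var errTooManyWaiters = errors.New("too many goroutines waiting for lock")

type localLocker struct {
	mu      sync.Mutex
	waiters atomic.Int32
}

// acquire takes the local mutex. New lock requests (canReject=true) are
// turned away once too many goroutines are queued; unlock, refresh and
// maintenance callers pass canReject=false, so they are never rejected but
// still add to the count while waiting.
func (l *localLocker) acquire(canReject bool) error {
	if l.waiters.Add(1) > maxLockWaiters && canReject {
		l.waiters.Add(-1)
		return errTooManyWaiters
	}
	defer l.waiters.Add(-1)
	l.mu.Lock()
	return nil
}
```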
Currently, DeleteObjects() tries to find the object's pool before
sending a delete request. This does not work well when an object has
multiple versions in different pools, since looking up the pool does
not consider the version-id: when an S3 client wants to
remove a version-id that exists in pool 2, the delete request can be
directed to pool 1 because it holds another version of the same object.
This commit removes the pool-lookup logic and sends a delete
request to all pools in parallel. This should not cause any performance
regression in most cases, since the object will most likely exist
in only one pool, and the performance cost is then similar to
getPoolIndex().
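A sketch of the fan-out, with an illustrative `pool` interface standing in for a server pool; error classification (e.g. ignoring "not found" from pools that never held the version) is left out:

```go
package pools

import (
	"context"
	"sync"
)

// pool is an illustrative stand-in for one server pool.
type pool interface {
	DeleteObject(ctx context.Context, bucket, object, versionID string) error
}

// deleteFromAllPools sends the delete to every pool in parallel instead of
// guessing which pool holds the requested version-id.
func deleteFromAllPools(ctx context.Context, pools []pool, bucket, object, versionID string) error {
	errs := make([]error, len(pools))
	var wg sync.WaitGroup
	for i, p := range pools {
		wg.Add(1)
		go func(i int, p pool) {
			defer wg.Done()
			errs[i] = p.DeleteObject(ctx, bucket, object, versionID)
		}(i, p)
	}
	wg.Wait()
	for _, err := range errs {
		if err != nil {
			return err
		}
	}
	return nil
}
```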
Earlier, cluster and bucket metrics were named
`minio_usage_last_activity_nano_seconds`.
The bucket-level metric is now named
`minio_bucket_usage_last_activity_nano_seconds`
Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
When compression is enabled, the final object size is not calculated; in
that case, we need to make sure that the provided buffer is always
larger than the shard size, because bitrot always calculates
the hash of blocks of shard size, except for the last block.
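To illustrate why the buffer must cover a full shard: bitrot hashing consumes fixed, shard-sized blocks, and only the final block may be shorter (SHA-256 is used here purely for illustration, not as MinIO's bitrot algorithm):

```go
package bitrot

import (
	"crypto/sha256"
	"io"
)

// hashShards reads fixed, shard-sized blocks from r and hashes each block
// separately; only the last block may be shorter than shardSize.
func hashShards(r io.Reader, shardSize int) ([][]byte, error) {
	buf := make([]byte, shardSize)
	var sums [][]byte
	for {
		n, err := io.ReadFull(r, buf)
		if n > 0 {
			h := sha256.Sum256(buf[:n])
			sums = append(sums, h[:])
		}
		if err == io.EOF || err == io.ErrUnexpectedEOF {
			return sums, nil // final (possibly short) block handled
		}
		if err != nil {
			return nil, err
		}
	}
}
```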
Before https://github.com/minio/minio/pull/20575, files could pick up indices
from unrelated files if no index was added.
This would result in these files not being consistent across a set.
When loading, search for the compression indicators and check if they
are within the problematic date range, and clean up any parts that have
an index but shouldn't.
The test validates that the signature matches the one in files stored without an index.
Bumps xlMetaVersion, so this check doesn't have to be made for future versions.
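The load-time cleanup, very roughly; every type, field and parameter here is hypothetical, and the actual check keys off the compression indicators and xlMetaVersion described above:

```go
package xlmeta

import "time"

// part is an illustrative slice of the per-part metadata relevant here.
type part struct {
	Index []byte // compression index, if any
}

// cleanupSpuriousIndices drops part indices that should not be present:
// the object is not compressed, the metadata predates the fixed version,
// and its mod-time falls inside the problematic window.
func cleanupSpuriousIndices(parts []part, compressed bool, metaVersion, fixedVersion int, modTime, rangeStart, rangeEnd time.Time) {
	if compressed || metaVersion >= fixedVersion {
		return
	}
	if modTime.Before(rangeStart) || modTime.After(rangeEnd) {
		return
	}
	for i := range parts {
		parts[i].Index = nil
	}
}
```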