This commit adds the `MINIO_KMS_REPLICATE_KEYID` environment variable.
By default, i.e. if the variable is unset or not set to `off`, MinIO
replicates the KMS key ID of an object.
If `MINIO_KMS_REPLICATE_KEYID=off`, MinIO does not include the
object's KMS key ID when replicating an object. However, it always
sets the SSE-KMS encryption header, which ensures that the object
gets encrypted using SSE-KMS; the target site then chooses the KMS
key ID based on its site and bucket configuration.
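A minimal sketch of the decision, assuming illustrative function and header-constant usage (not MinIO's actual identifiers):

```go
package main

import (
	"net/http"
	"os"
)

// setReplicationSSEHeaders sketches how the replication path could decide
// which SSE-KMS headers to forward; the function name is illustrative.
func setReplicationSSEHeaders(h http.Header, srcKMSKeyID string) {
	// Always request SSE-KMS so the object stays encrypted on the target.
	h.Set("X-Amz-Server-Side-Encryption", "aws:kms")

	// Forward the source key ID unless key-ID replication is turned off;
	// otherwise the target site picks a key from its own config.
	if os.Getenv("MINIO_KMS_REPLICATE_KEYID") != "off" && srcKMSKeyID != "" {
		h.Set("X-Amz-Server-Side-Encryption-Aws-Kms-Key-Id", srcKMSKeyID)
	}
}
```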
Signed-off-by: Andreas Auernhammer <github@aead.dev>
This commit allows clients to provide a set of intermediate CA
certificates (up to `MaxIntermediateCAs`) that the server will
use as intermediate CAs when verifying the trust chain from the
client leaf certificate up to one trusted root CA.
This is required if the client leaf certificate is not issued by
a trusted CA directly but by an intermediate CA. Without this commit,
MinIO rejects such certificates.
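A minimal sketch of the verification using the standard `crypto/x509` chain building; the cap value and wrapper function are illustrative:

```go
package main

import (
	"crypto/x509"
	"errors"
)

// MaxIntermediateCAs caps how many client-provided intermediates are
// accepted (the value here is illustrative).
const MaxIntermediateCAs = 10

// verifyClientChain verifies the leaf (peerCerts[0]) against the trusted
// roots, treating the remaining peer certificates as intermediate CAs.
func verifyClientChain(roots *x509.CertPool, peerCerts []*x509.Certificate) error {
	if len(peerCerts) == 0 {
		return errors.New("no client certificate provided")
	}
	if len(peerCerts)-1 > MaxIntermediateCAs {
		return errors.New("too many intermediate CA certificates")
	}
	intermediates := x509.NewCertPool()
	for _, cert := range peerCerts[1:] {
		intermediates.AddCert(cert)
	}
	_, err := peerCerts[0].Verify(x509.VerifyOptions{
		Roots:         roots,
		Intermediates: intermediates,
		KeyUsages:     []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	})
	return err
}
```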
Signed-off-by: Andreas Auernhammer <github@aead.dev>
If an object is uploaded with tags, the internal tagging timestamp
tracked for replication will be missing. Default to ModTime in such
cases so that tags are synced correctly.
Also fixes a regression in fetching tags and in tag comparison.
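A minimal sketch of the fallback, with illustrative names:

```go
package main

import "time"

// tagSyncTime returns the timestamp used for tag replication comparison:
// the recorded tagging timestamp when present, otherwise the object's
// ModTime (e.g. when tags were supplied at upload time). Names are
// illustrative, not MinIO's actual fields.
func tagSyncTime(tagTimestamp, modTime time.Time) time.Time {
	if tagTimestamp.IsZero() {
		return modTime
	}
	return tagTimestamp
}
```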
* fix: proxy requests to honor the global transport
Load the globalProxyEndpoint properly.
Also, currently, proxy requests for batch cancel fail silently even
when the proxy fails; instead, properly send the corresponding error
back for such proxy failures if opted in.
* pass the transport to the GetProxyEndpoints function, as sketched below
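A minimal sketch of threading the shared transport through endpoint construction; the type and signature here are illustrative stand-ins, not MinIO's actual ones:

```go
package main

import "net/http"

// ProxyEndpoint pairs a target host with the transport used to reach it
// (illustrative type).
type ProxyEndpoint struct {
	Host      string
	Transport http.RoundTripper
}

// GetProxyEndpoints builds one endpoint per host, reusing the shared
// transport instead of constructing a default one per call.
func GetProxyEndpoints(hosts []string, tr http.RoundTripper) []ProxyEndpoint {
	eps := make([]ProxyEndpoint, 0, len(hosts))
	for _, h := range hosts {
		eps = append(eps, ProxyEndpoint{Host: h, Transport: tr})
	}
	return eps
}
```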
---------
Co-authored-by: Praveen raj Mani <praveen@minio.io>
Reject new lock requests immediately when 1000 goroutines are queued
for the local lock mutex.
We do not reject unlock, refresh, or maintenance operations; they still add to the count, though.
The limit is set to allow for bursty behavior but prevent requests from
overloading the server completely.
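A minimal sketch of the admission check, assuming an atomic counter of queued waiters (names illustrative):

```go
package main

import "sync/atomic"

// maxLockWaiters mirrors the 1000-goroutine queue limit.
const maxLockWaiters = 1000

var lockWaiters atomic.Int64 // goroutines queued for the local lock mutex

// tryQueueLock reports whether a new lock request may queue; requests
// over the limit are rejected immediately instead of piling up.
func tryQueueLock() bool {
	if lockWaiters.Add(1) > maxLockWaiters {
		lockWaiters.Add(-1)
		return false
	}
	return true
}

// dequeueLock must be called once the caller stops waiting.
func dequeueLock() { lockWaiters.Add(-1) }
```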
Currently, DeleteObjects() tries to find the object's pool before
sending a delete request. This does not work well when an object has
multiple versions in different pools, since the pool lookup does not
consider the version ID. When an S3 client wants to
remove a version ID that exists in pool 2, the delete request may be
directed to pool 1 because it holds another version of the same object.
This commit removes the pool-lookup logic and sends a delete
request to all pools in parallel. This should not cause any performance
regression in most cases, since an object will most likely exist
in only one pool, and the performance price is then similar to
getPoolIndex().
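A minimal sketch of the parallel fan-out, with an illustrative pool interface:

```go
package main

import (
	"context"
	"sync"
)

// Pool abstracts a single erasure pool for this sketch (illustrative).
type Pool interface {
	DeleteObjectVersion(ctx context.Context, bucket, object, versionID string) error
}

// deleteFromAllPools fans the delete out to every pool in parallel,
// instead of resolving a single pool first, and returns the first error.
// A real implementation would ignore "not found" errors from pools that
// do not hold the requested version.
func deleteFromAllPools(ctx context.Context, pools []Pool, bucket, object, versionID string) error {
	var wg sync.WaitGroup
	errs := make([]error, len(pools))
	for i, p := range pools {
		wg.Add(1)
		go func(i int, p Pool) {
			defer wg.Done()
			errs[i] = p.DeleteObjectVersion(ctx, bucket, object, versionID)
		}(i, p)
	}
	wg.Wait()
	for _, err := range errs {
		if err != nil {
			return err
		}
	}
	return nil
}
```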
Earlier, both the cluster- and bucket-level metrics were named
`minio_usage_last_activity_nano_seconds`.
The bucket-level metric is now named
`minio_bucket_usage_last_activity_nano_seconds`.
Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
When compression is enabled, the final object size is not known up
front. In that case, we need to make sure that the provided buffer is
never smaller than the shard size: bitrot always computes the
hash over shard-size blocks, except for the last block.
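A minimal sketch of the sizing rule (names illustrative):

```go
package main

// pickBufferSize ensures the staging buffer is never smaller than one
// erasure shard, so bitrot hashing always sees full shard-size blocks
// (except the final, shorter one).
func pickBufferSize(blockSize, shardSize int64) int64 {
	if blockSize < shardSize {
		return shardSize
	}
	return blockSize
}
```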
Before https://github.com/minio/minio/pull/20575, files could pick up indices
from unrelated files if no index was added.
This would result in these files not being consistent across a set.
When loading, search for the compression indicators, check whether the
file falls within the problematic date range, and clean up any parts
that have an index but shouldn't.
The test validates that the signature matches the one in files stored without an index.
Bumps xlMetaVersion, so this check doesn't have to be made for future versions.
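A minimal sketch of the cleanup pass, with illustrative types; the real check keys off the stored xlMetaVersion and the problematic date range:

```go
package main

// Part mirrors just the fields this sketch needs (illustrative).
type Part struct {
	Compressed bool   // whether this part was actually compressed
	Index      []byte // compression index, if any
}

// scrubBogusIndices drops indices from parts that should not carry one.
// inAffectedRange would be derived from the file's modification time and
// metadata version against the problematic window.
func scrubBogusIndices(parts []Part, inAffectedRange bool) {
	if !inAffectedRange {
		return
	}
	for i := range parts {
		if !parts[i].Compressed && parts[i].Index != nil {
			parts[i].Index = nil
		}
	}
}
```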
It is possible that a delete marker was received on an old pool while
a decommission move was in progress. This PR allows the decommission
retry to ensure these delete markers are moved to the new pool so that
the decommission can be completed.
Fixes #20819
Add a workaround for a potential profiling crash.
Using admin traces could potentially crash the server (or, more likely, the handler) due to an upstream divide by 0: https://github.com/felixge/fgprof/pull/34
Ensure the profile always runs at least 100ms before stopping, so the sample count isn't 0 (the default sample rate is ~10ms/sample, but allow for CPU starvation).
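A minimal sketch of the minimum-duration guard, using the real `fgprof` API but an illustrative wrapper:

```go
package main

import (
	"bytes"
	"time"

	"github.com/felixge/fgprof"
)

// minProfileTime guards against a zero sample count: fgprof samples
// roughly every 10ms and divides by the count when the profile stops.
const minProfileTime = 100 * time.Millisecond

// captureProfile runs an fgprof profile for at least minProfileTime.
func captureProfile(d time.Duration) ([]byte, error) {
	if d < minProfileTime {
		d = minProfileTime // also covers CPU starvation
	}
	var buf bytes.Buffer
	stop := fgprof.Start(&buf, fgprof.FormatPprof)
	time.Sleep(d)
	if err := stop(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}
```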
If an object has many parts where all parts are readable overall but
some parts are missing from some drives, the object can sometimes be
un-healable, which is wrong.
This commit avoids reading from drives that have a missing, corrupted,
or outdated xl.meta. It also checks whether any part is unreadable, to
avoid healing in that case.
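A minimal sketch of the safety check, with illustrative types:

```go
package main

// driveMeta summarizes what one drive reports for the object (illustrative).
type driveMeta struct {
	BadXLMeta bool   // xl.meta missing, corrupted, or outdated
	Parts     []bool // true if the part at that index is readable here
}

// canHeal reports whether healing is safe: drives with a bad xl.meta are
// ignored entirely, and healing is skipped if any part is unreadable on
// every remaining drive.
func canHeal(drives []driveMeta, numParts int) bool {
	for p := 0; p < numParts; p++ {
		readable := false
		for _, d := range drives {
			if d.BadXLMeta {
				continue // never read from this drive
			}
			if p < len(d.Parts) && d.Parts[p] {
				readable = true
				break
			}
		}
		if !readable {
			return false // an unreadable part: skip healing
		}
	}
	return true
}
```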
We do not need to hold the read locks at the higher layer before
reading the body; instead, hold the read locks properly at the time of
renamePart(), protecting against racy part overwrites competing with a
concurrent completeMultipart().
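A minimal sketch of the narrower locking scope, with a plain RWMutex standing in for the namespace lock:

```go
package main

import "sync"

// nsLock stands in for the per-object namespace lock (illustrative).
var nsLock sync.RWMutex

// putObjectPart reads the body with no lock held, then takes the read
// lock only around renamePart() so the rename is protected against a
// concurrent completeMultipart() (which would take the write lock).
func putObjectPart(readBody func() ([]byte, error), renamePart func([]byte) error) error {
	data, err := readBody() // no lock while reading the request body
	if err != nil {
		return err
	}
	nsLock.RLock()
	defer nsLock.RUnlock()
	return renamePart(data)
}
```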