If a user attempted to authenticate with a key but had no sshpubkey
attribute in LDAP, the server still allowed the connection, i.e. it trusted
the key without any reason to. This is now fixed, and a test has been added
to validate the behavior.
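A minimal sketch of the corrected check, not the actual server code; `lookupSSHPublicKeys` is a hypothetical stand-in for the LDAP query that reads the user's sshpubkey attribute:

```go
package sftpauth

import (
	"crypto/subtle"
	"errors"

	"golang.org/x/crypto/ssh"
)

// lookupSSHPublicKeys stands in for the LDAP lookup of the user's sshpubkey
// attribute; the real server resolves this against the directory.
var lookupSSHPublicKeys func(user string) ([]ssh.PublicKey, error)

// authenticatePublicKey rejects the login when the user has no sshpubkey
// attribute instead of silently trusting the presented key.
func authenticatePublicKey(conn ssh.ConnMetadata, key ssh.PublicKey) (*ssh.Permissions, error) {
	keys, err := lookupSSHPublicKeys(conn.User())
	if err != nil {
		return nil, err
	}
	if len(keys) == 0 { // previously this case fell through and the key was trusted
		return nil, errors.New("no ssh public key associated with user")
	}
	want := key.Marshal()
	for _, k := range keys {
		if subtle.ConstantTimeCompare(k.Marshal(), want) == 1 {
			return &ssh.Permissions{}, nil
		}
	}
	return nil, errors.New("public key does not match any key stored in LDAP")
}
```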
Allow multiple private keys and extract all files from streams.
Place files in the folder with `.enc` removed.
Do basic checks so streams cannot traverse outside of the folder.
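The traversal check amounts to something like the sketch below; `destPath` and its parameters are illustrative, not the actual extraction code:

```go
package extract

import (
	"fmt"
	"path/filepath"
	"strings"
)

// destPath maps an entry inside the stream to a path on disk: the output
// folder is the stream name with the ".enc" suffix removed, and every
// extracted file must resolve to a location inside that folder.
func destPath(stream, name string) (string, error) {
	folder := strings.TrimSuffix(filepath.Base(stream), ".enc")
	// Cleaning the name rooted at "/" neutralizes any ".." components.
	p := filepath.Join(folder, filepath.Clean("/"+name))
	if !strings.HasPrefix(p, folder+string(filepath.Separator)) {
		return "", fmt.Errorf("entry %q would escape folder %q", name, folder)
	}
	return p, nil
}
```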
This commit adds the `MINIO_KMS_REPLICATE_KEYID` environment variable.
By default, i.e. if the variable is not specified or not set to `off`,
MinIO replicates the KMS key ID of an object.
If `MINIO_KMS_REPLICATE_KEYID=off`, MinIO does not include the
object's KMS Key ID when replicating an object. However, it always
sets the SSE-KMS encryption header. This ensures that the object
gets encrypted using SSE-KMS. The target site chooses the KMS key
ID that gets used based on the site and bucket config.
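A rough illustration of the resulting behavior; the function and parameter names are hypothetical, only the headers and the `off` semantics come from the description above:

```go
package replication

import "net/http"

// setReplicationSSEHeaders always requests SSE-KMS on the target, and only
// forwards the source key ID when MINIO_KMS_REPLICATE_KEYID is not "off".
func setReplicationSSEHeaders(hdr http.Header, srcKeyID string, replicateKeyID bool) {
	// Always set the SSE-KMS header so the replicated object stays encrypted.
	hdr.Set("X-Amz-Server-Side-Encryption", "aws:kms")

	// Without the key ID, the target site picks a key based on its own
	// site and bucket configuration.
	if replicateKeyID && srcKeyID != "" {
		hdr.Set("X-Amz-Server-Side-Encryption-Aws-Kms-Key-Id", srcKeyID)
	}
}
```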
Signed-off-by: Andreas Auernhammer <github@aead.dev>
This commit allows clients to provide a set of intermediate CA
certificates (up to `MaxIntermediateCAs`) that the server uses as
intermediate CAs when verifying the trust chain from the
client leaf certificate up to a trusted root CA.
This is required if the client leaf certificate is not issued by
a trusted CA directly but by an intermediate CA. Without this commit,
MinIO rejects such certificates.
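Conceptually, the client-supplied certificates are used only to build the chain, never as trust anchors; a sketch with illustrative names (the constant value is a placeholder, not MinIO's actual limit):

```go
package tlsauth

import (
	"crypto/x509"
	"errors"
)

// maxIntermediateCAs caps how many client-supplied intermediates are accepted.
const maxIntermediateCAs = 10 // placeholder value

// verifyClientCert verifies the leaf against the trusted roots, using the
// intermediates provided by the client only as chain-building material.
func verifyClientCert(leaf *x509.Certificate, intermediates []*x509.Certificate, roots *x509.CertPool) error {
	if len(intermediates) > maxIntermediateCAs {
		return errors.New("too many intermediate CA certificates")
	}
	pool := x509.NewCertPool()
	for _, cert := range intermediates {
		pool.AddCert(cert)
	}
	_, err := leaf.Verify(x509.VerifyOptions{
		Roots:         roots,
		Intermediates: pool,
		KeyUsages:     []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	})
	return err
}
```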
Signed-off-by: Andreas Auernhammer <github@aead.dev>
If an object is uploaded with tags, the internal tagging-timestamp tracked
for replication will be missing. Default to ModTime in such cases so that
tags are synced correctly.
Also fixes a regression in fetching tags and in tag comparison.
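The fallback boils down to something like this sketch (names are illustrative):

```go
package replication

import "time"

// tagTimestampFor returns the recorded tagging-timestamp, falling back to the
// object's ModTime when none was tracked (e.g. tags supplied at upload time),
// so tag comparison during replication still has a usable timestamp.
func tagTimestampFor(taggingTimestamp, modTime time.Time) time.Time {
	if taggingTimestamp.IsZero() {
		return modTime
	}
	return taggingTimestamp
}
```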
* fix: proxy requests to honor global transport
Load the globalProxyEndpoint properly
Also, batch cancel proxy requests currently fail silently even when the
proxy fails; instead, properly send the corresponding error back for such
proxy failures when opted in.
* pass the transport to the GetProxyEndpoints function
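A simplified sketch of both points, with stand-in names rather than the actual proxy code: the client reuses the globally configured transport, and a proxy failure is returned to the caller instead of being dropped:

```go
package cmd

import (
	"fmt"
	"net/http"
)

// proxyBatchCancel forwards a batch-cancel request to the peer that owns the
// job. transport is assumed to be the globally configured transport.
func proxyBatchCancel(transport http.RoundTripper, target string, req *http.Request) error {
	client := &http.Client{Transport: transport} // honor the global transport
	resp, err := client.Do(req)
	if err != nil {
		return fmt.Errorf("proxying batch cancel to %s failed: %w", target, err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusNoContent {
		// Previously a failure here was silently ignored; now it is propagated.
		return fmt.Errorf("proxying batch cancel to %s failed: %s", target, resp.Status)
	}
	return nil
}
```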
---------
Co-authored-by: Praveen raj Mani <praveen@minio.io>
Reject new lock requests immediately when 1000 goroutines are queued
for the local lock mutex.
We do not reject unlocking, refreshing, or maintenance; they add to the count.
The limit is set to allow for bursty behavior but prevent requests from
overloading the server completely.
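Conceptually, the admission check looks roughly like this; all names are illustrative, only the 1000 limit comes from the description above:

```go
package lockserver

import (
	"errors"
	"sync/atomic"
)

const maxLockWaiters = 1000 // limit on goroutines queued for the local lock mutex

var lockWaiters atomic.Int64

var errLockQueueFull = errors.New("too many goroutines waiting for the local lock mutex")

// acquire rejects a new lock request once the queue is full. Unlock, refresh
// and maintenance paths skip this check; they only add to the waiter count.
func acquire(lockFn func()) error {
	if lockWaiters.Load() > maxLockWaiters {
		return errLockQueueFull
	}
	lockWaiters.Add(1)
	defer lockWaiters.Add(-1)
	lockFn()
	return nil
}
```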
Currently, DeleteObjects() tries to find the object's pool before
sending a delete request. This does not work well when an object has
multiple versions in different pools, since the pool lookup does not
consider the version-id. When an S3 client wants to
remove a version-id that exists in pool 2, the delete request is
directed to pool 1 because pool 1 holds another version of the same object.
This commit removes the pool-lookup logic and sends a delete
request to all pools in parallel. This should not cause any performance
regression in most cases, since the object is unlikely to exist
in only one pool, and in that case the performance cost is similar to
getPoolIndex().
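A simplified sketch of the parallel delete; the `poolDeleter` interface is a stand-in for the per-pool object layer, and a real implementation would presumably ignore not-found errors from pools that never held the version:

```go
package cmd

import (
	"context"
	"sync"
)

// poolDeleter is a stand-in for the per-pool object layer.
type poolDeleter interface {
	DeleteObject(ctx context.Context, bucket, object, versionID string) error
}

// deleteFromAllPools sends the delete to every pool in parallel instead of
// resolving the object's pool first (which ignores the version-id).
func deleteFromAllPools(ctx context.Context, pools []poolDeleter, bucket, object, versionID string) error {
	errs := make([]error, len(pools))
	var wg sync.WaitGroup
	for i, pool := range pools {
		wg.Add(1)
		go func(i int, pool poolDeleter) {
			defer wg.Done()
			errs[i] = pool.DeleteObject(ctx, bucket, object, versionID)
		}(i, pool)
	}
	wg.Wait()
	for _, err := range errs {
		if err != nil {
			return err // a real implementation would skip "not found" here
		}
	}
	return nil
}
```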
Earlier, both the cluster- and bucket-level metrics were named
`minio_usage_last_activity_nano_seconds`.
The bucket-level metric is now named
`minio_bucket_usage_last_activity_nano_seconds`.
Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
When compression is enabled, the final object size is not calculated in
advance. In that case, we need to make sure that the provided buffer is
always larger than the shard size, because bitrot always calculates the
hash of blocks of exactly shard size, except for the last block.
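In sketch form, the sizing rule is simply the following (an illustrative helper, not the actual code):

```go
package cmd

// bitrotBufferSize grows the read buffer to at least one full shard: with
// compression the final object size is unknown, and every block except the
// last is hashed at exactly shardSize bytes.
func bitrotBufferSize(bufSize, shardSize int64) int64 {
	if bufSize < shardSize {
		return shardSize
	}
	return bufSize
}
```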
Before https://github.com/minio/minio/pull/20575, files could pick up indices
from unrelated files if no index was added.
This would result in these files not being consistent across a set.
When loading, search for the compression indicators, check whether the file
falls within the problematic date range, and clean up any parts that carry
an index but shouldn't.
The test validates that the signature matches the one in files stored without an index.
Bumps xlMetaVersion, so this check doesn't have to be made for future versions.
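A sketch of the cleanup condition; the names are hypothetical and the problematic window is left as parameters, since the exact dates are defined in the code:

```go
package cmd

import "time"

// cleanStrayIndex drops an index from a part that is not compressed but was
// written inside the problematic window, since such a part should never have
// carried an index in the first place.
func cleanStrayIndex(modTime, rangeStart, rangeEnd time.Time, compressed bool, index []byte) []byte {
	if compressed || len(index) == 0 {
		return index // compressed parts legitimately carry an index
	}
	if modTime.After(rangeStart) && modTime.Before(rangeEnd) {
		return nil // part has an index but shouldn't: clean it up
	}
	return index
}
```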