* fix: proxy requests to honor global transport
Load the globalProxyEndpoint properly
Also, proxy requests for batch cancel currently fail silently even if
the proxy fails; instead, properly send the corresponding error back
for such proxy failures if opted in.
* pass the transport to the GetProxyEndpoints function
---------
Co-authored-by: Praveen raj Mani <praveen@minio.io>
Reject new lock requests immediately when 1000 goroutines are queued
for the local lock mutex.
We do not reject unlocking, refreshing, or maintenance; they add to the count.
The limit is set to allow for bursty behavior but prevent requests from
overloading the server completely.
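A minimal sketch of the admission check, using an illustrative localLocker
wrapper and maxQueuedLockRequests constant rather than MinIO's actual lock
implementation:
```
package main

import (
	"errors"
	"fmt"
	"sync"
	"sync/atomic"
)

// Cap chosen to tolerate bursty traffic without letting lock requests
// overload the server completely.
const maxQueuedLockRequests = 1000

var errLockQueueFull = errors.New("too many queued lock requests, rejecting")

type localLocker struct {
	mu     sync.Mutex
	queued atomic.Int32 // goroutines currently waiting on mu for Lock()
}

// Lock is the only operation that can be rejected: if too many callers are
// already queued, fail fast instead of piling on more goroutines.
func (l *localLocker) Lock() error {
	if l.queued.Load() >= maxQueuedLockRequests {
		return errLockQueueFull
	}
	l.queued.Add(1)
	defer l.queued.Add(-1)
	l.mu.Lock()
	return nil
}

// Unlock (and, in the same spirit, refresh/maintenance) is never rejected,
// but it still adds to the count while it runs.
func (l *localLocker) Unlock() {
	l.queued.Add(1)
	defer l.queued.Add(-1)
	l.mu.Unlock()
}

func main() {
	var l localLocker
	if err := l.Lock(); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("lock acquired")
	l.Unlock()
}
```
The counter only tracks callers queued on the mutex, so short bursts above
the limit are absorbed as soon as the waiters drain.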
Currently, DeleteObjects() tries to find the object's pool before
sending a delete request. This does not work well when an object has
multiple versions in different pools, since looking for the pool does
not consider the version-id. When an S3 client wants to
remove a version-id that exists in pool 2, the delete request will be
directed to pool 1 because it has another version of the same object.
This commit removes the pool lookup logic and sends the delete request
to all pools in parallel. This should not cause any performance
regression in most cases: the object will most likely exist in only one
pool, and in that case the cost of the parallel deletes is similar to
getPoolIndex().
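A hedged sketch of the fan-out, with a hypothetical pool type and
DeleteObject method standing in for the real per-pool delete:
```
package main

import (
	"context"
	"errors"
	"fmt"
	"sync"
)

var errObjectNotFound = errors.New("object version not found")

// pool is a stand-in for a server pool; DeleteObject is assumed to return
// errObjectNotFound when this pool never held the requested version.
type pool struct{ id int }

func (p *pool) DeleteObject(ctx context.Context, bucket, object, versionID string) error {
	// ... delete from this pool's erasure sets ...
	return errObjectNotFound
}

// deleteFromAllPools fans the delete out to every pool in parallel instead
// of locating the owning pool first; "not found" everywhere still counts
// as a successful delete.
func deleteFromAllPools(ctx context.Context, pools []*pool, bucket, object, versionID string) error {
	errs := make([]error, len(pools))
	var wg sync.WaitGroup
	for i, p := range pools {
		wg.Add(1)
		go func(i int, p *pool) {
			defer wg.Done()
			errs[i] = p.DeleteObject(ctx, bucket, object, versionID)
		}(i, p)
	}
	wg.Wait()
	for _, err := range errs {
		if err != nil && !errors.Is(err, errObjectNotFound) {
			return err // a real failure on some pool
		}
	}
	return nil
}

func main() {
	pools := []*pool{{id: 1}, {id: 2}}
	fmt.Println(deleteFromAllPools(context.Background(), pools, "bucket", "object", "vid"))
}
```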
Earlier, both the cluster and bucket metrics were named
`minio_usage_last_activity_nano_seconds`.
The bucket-level metric is now named
`minio_bucket_usage_last_activity_nano_seconds`
Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
When compression is enabled, the final object size is not calculated up
front. In that case, we need to make sure that the provided buffer is
always larger than the shard size: bitrot always calculates the hash of
blocks of shard size, except for the last block.
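A minimal sketch of shard-sized hashing, not MinIO's bitrot
implementation; it only illustrates why the staging buffer must be at
least shardSize bytes:
```
package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"strings"
)

// hashShards hashes the stream in blocks of exactly shardSize bytes; only
// the last block may be shorter. The buffer must therefore be at least
// shardSize bytes, even when the final object size is unknown up front
// (as with compressed uploads).
func hashShards(r io.Reader, shardSize int) ([][]byte, error) {
	buf := make([]byte, shardSize)
	var sums [][]byte
	for {
		n, err := io.ReadFull(r, buf)
		if n > 0 {
			sum := sha256.Sum256(buf[:n]) // last block may be short
			sums = append(sums, sum[:])
		}
		switch err {
		case nil:
			continue
		case io.EOF, io.ErrUnexpectedEOF:
			return sums, nil
		default:
			return nil, err
		}
	}
}

func main() {
	sums, err := hashShards(strings.NewReader("example payload"), 8)
	fmt.Println(len(sums), err) // 2 blocks: 8 bytes + 7 bytes
}
```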
Before https://github.com/minio/minio/pull/20575, files could pick up indices
from unrelated files if no index was added.
This would result in these files not being consistent across a set.
When loading, search for the compression indicators, check whether they
fall within the problematic date range, and clean up any parts that have
an index but shouldn't have one.
The test validates that the signature matches the one in files stored without an index.
Bumps xlMetaVersion, so this check doesn't have to be made for future versions.
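A hedged sketch of the load-time cleanup; the struct fields, the bumped
version number and the date window are hypothetical placeholders, not
MinIO's actual xl.meta layout or the real problematic range:
```
package main

import (
	"fmt"
	"time"
)

type partInfo struct {
	Number int
	Index  []byte // should be empty when the object is not compressed
}

type objectMeta struct {
	MetaVersion int  // bumped by the fix so newer metadata skips the check
	Compressed  bool // derived from the compression indicators
	ModTime     time.Time
	Parts       []partInfo
}

// Hypothetical values: the real bumped xlMetaVersion and the real
// problematic date range are defined by the referenced PR, not here.
var (
	fixedMetaVersion = 3
	fixStart         = time.Date(2024, 10, 1, 0, 0, 0, 0, time.UTC)
	fixEnd           = time.Date(2025, 1, 1, 0, 0, 0, 0, time.UTC)
)

// cleanSpuriousIndices drops part indices from uncompressed objects written
// in the problematic window; newer metadata versions are trusted as-is.
func cleanSpuriousIndices(m *objectMeta) (cleaned int) {
	if m.MetaVersion >= fixedMetaVersion || m.Compressed {
		return 0
	}
	if m.ModTime.Before(fixStart) || m.ModTime.After(fixEnd) {
		return 0
	}
	for i := range m.Parts {
		if len(m.Parts[i].Index) > 0 {
			m.Parts[i].Index = nil
			cleaned++
		}
	}
	return cleaned
}

func main() {
	m := &objectMeta{
		MetaVersion: 2,
		ModTime:     time.Date(2024, 11, 15, 0, 0, 0, 0, time.UTC),
		Parts:       []partInfo{{Number: 1, Index: []byte{0x01}}},
	}
	fmt.Println("indices removed:", cleanSpuriousIndices(m))
}
```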
It is possible that a delete marker was received on the old pool while a
decommission move was in progress. This PR allows the decommission retry
to ensure these delete markers are moved to the new pool so that
decommission can be completed.
Fixes #20819
Add workaround for a potential profiling crash
Using admin traces could potentially crash the server (or, more likely, the handler) due to an upstream divide by 0: https://github.com/felixge/fgprof/pull/34
Ensure the profile always runs for at least 100ms before stopping, so the sample count isn't 0 (the default sample rate is ~10ms/sample, but allow for CPU starvation)
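A minimal sketch of the minimum-runtime guard, shown with the standard
runtime/pprof CPU profiler for illustration rather than fgprof:
```
package main

import (
	"bytes"
	"fmt"
	"runtime/pprof"
	"time"
)

// The default CPU sample rate is roughly one sample per 10ms, so anything
// shorter than ~100ms (or a CPU-starved run) can end with zero samples.
const minProfileTime = 100 * time.Millisecond

func profileFor(d time.Duration) ([]byte, error) {
	var buf bytes.Buffer
	start := time.Now()
	if err := pprof.StartCPUProfile(&buf); err != nil {
		return nil, err
	}
	time.Sleep(d) // stand-in for the requested profiling window
	// Never stop before the minimum runtime, so the sample count can't be 0.
	if elapsed := time.Since(start); elapsed < minProfileTime {
		time.Sleep(minProfileTime - elapsed)
	}
	pprof.StopCPUProfile()
	return buf.Bytes(), nil
}

func main() {
	out, err := profileFor(10 * time.Millisecond)
	fmt.Println(len(out), err)
}
```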
If an object has many parts where all parts are readable but some parts
are missing from some drives, this object can sometimes be un-healable,
which is wrong.
This commit will avoid reading from drives that have missing, corrupted or
outdated xl.meta. It will also check if any part is unreadable to avoid
healing in that case.
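A hedged sketch of the selection logic; driveMeta and the error values
are illustrative, not MinIO's xl.meta handling:
```
package main

import (
	"errors"
	"fmt"
)

var (
	errMetaMissing  = errors.New("xl.meta missing")
	errMetaCorrupt  = errors.New("xl.meta corrupted")
	errMetaOutdated = errors.New("xl.meta outdated")
)

type driveMeta struct {
	Drive string
	Err   error        // non-nil when xl.meta is missing, corrupted or outdated
	Parts map[int]bool // partNumber -> readable on this drive
}

// selectHealSources drops drives with unusable metadata and reports whether
// every part is still readable from at least one remaining drive; if not,
// healing should be skipped rather than producing a wrong result.
func selectHealSources(metas []driveMeta, partCount int) (sources []driveMeta, healable bool) {
	for _, m := range metas {
		if m.Err != nil {
			continue // never read from a drive with bad xl.meta
		}
		sources = append(sources, m)
	}
	for part := 1; part <= partCount; part++ {
		readable := false
		for _, m := range sources {
			if m.Parts[part] {
				readable = true
				break
			}
		}
		if !readable {
			return sources, false
		}
	}
	return sources, true
}

func main() {
	metas := []driveMeta{
		{Drive: "d1", Parts: map[int]bool{1: true, 2: true}},
		{Drive: "d2", Err: errMetaCorrupt},
		{Drive: "d3", Parts: map[int]bool{1: true}},
	}
	sources, ok := selectHealSources(metas, 2)
	fmt.Println("sources:", len(sources), "healable:", ok)
}
```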
We do not need to hold the read locks at the higher layer before
reading the body; instead, hold the read locks properly at the time of
renamePart(), to protect against racy part overwrites competing with a
concurrent completeMultipart().
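A hedged sketch of the locking shape, with multipartUpload, renamePart and
completeMultipart as illustrative stand-ins rather than the real handlers:
```
package main

import (
	"fmt"
	"sync"
)

type multipartUpload struct {
	rw    sync.RWMutex // read-locked around renamePart, write-locked by completeMultipart
	parts sync.Map     // partNumber -> committed path; safe for concurrent renames
}

// renamePart is called only after the part body has been fully read and
// written to tmpPath; the read lock is held just for the commit, not for
// the (potentially slow) upload itself.
func (u *multipartUpload) renamePart(partNumber int, tmpPath string) {
	u.rw.RLock()
	defer u.rw.RUnlock()
	u.parts.Store(partNumber, tmpPath)
}

// completeMultipart takes the write lock, so no renamePart can slip in a
// racy part overwrite while the parts are being assembled.
func (u *multipartUpload) completeMultipart() []string {
	u.rw.Lock()
	defer u.rw.Unlock()
	var out []string
	u.parts.Range(func(_, v any) bool {
		out = append(out, v.(string))
		return true
	})
	return out
}

func main() {
	u := &multipartUpload{}
	u.renamePart(1, "/tmp/upload/part.1")
	fmt.Println(u.completeMultipart())
}
```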
A CheckParts call can take time to verify 10k parts of a single object
on a single drive. To avoid the internal one-minute deadline of the
single handler RPC, this commit switches to a streaming RPC instead.
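A hedged sketch of the streaming shape (generic Go channels, not MinIO's
grid RPC API): the server emits one result per part, so no single reply
has to beat a fixed handler deadline:
```
package main

import "fmt"

type partResult struct {
	Part int
	OK   bool
}

// checkPartsStreaming verifies parts one by one and streams each result as
// soon as it is known, instead of returning a single response at the end.
func checkPartsStreaming(parts []string, verify func(string) bool) <-chan partResult {
	out := make(chan partResult)
	go func() {
		defer close(out)
		for i, p := range parts {
			out <- partResult{Part: i + 1, OK: verify(p)}
		}
	}()
	return out
}

func main() {
	parts := []string{"part.1", "part.2", "part.3"}
	for res := range checkPartsStreaming(parts, func(string) bool { return true }) {
		fmt.Printf("part %d ok=%v\n", res.Part, res.OK)
	}
}
```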
This API was missing permission checks, allowing a user to change
their own policy mapping by:
1. Crafting an iam-info.zip file: update their own user permission in
user_mappings.json
2. Uploading it via `mc admin cluster iam import nobody iam-info.zip`
Here `nobody` can be a user with pretty much any kind of permission (but
not anonymous), and this ends up working.
Some more detailed steps - start from a fresh setup:
```
./minio server /tmp/d{1...4} &
mc alias set myminio http://localhost:9000 minioadmin minioadmin
mc admin user add myminio nobody nobody123
mc admin policy attach myminio readwrite nobody nobody123
mc alias set nobody http://localhost:9000 nobody nobody123
mc admin cluster iam export myminio
mkdir /tmp/x && mv myminio-iam-info.zip /tmp/x
cd /tmp/x
unzip myminio-iam-info.zip
echo '{"nobody":{"version":1,"policy":"consoleAdmin","updatedAt":"2024-08-13T19:47:10.1Z"}}' > \
iam-assets/user_mappings.json
zip -r myminio-iam-info-updated.zip iam-assets/
mc admin cluster iam import nobody ./myminio-iam-info-updated.zip
mc admin service restart nobody
```
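A hedged sketch of the kind of authorization gate the import path was
missing; isAllowedAdminAction and the action name are hypothetical, not
MinIO's internal policy API:
```
package main

import (
	"errors"
	"fmt"
)

var errAccessDenied = errors.New("access denied")

type caller struct {
	User     string
	Policies []string
}

// isAllowedAdminAction is a stand-in for a real policy evaluation; here it
// simply requires an admin-grade policy on the caller.
func isAllowedAdminAction(c caller, action string) bool {
	for _, p := range c.Policies {
		if p == "consoleAdmin" {
			return true
		}
	}
	return false
}

// importIAM applies an exported IAM bundle only after checking that the
// caller is actually authorized for the import action, rather than merely
// being an authenticated user.
func importIAM(c caller, zipData []byte) error {
	if !isAllowedAdminAction(c, "admin:ImportIAM") { // action name illustrative
		return errAccessDenied
	}
	// ... unpack zipData and apply user_mappings.json, policies, etc. ...
	return nil
}

func main() {
	nobody := caller{User: "nobody", Policies: []string{"readwrite"}}
	fmt.Println(importIAM(nobody, nil)) // rejected: access denied
}
```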
`mc admin heal ALIAS/bucket/object` does not have any flag to heal the
object's noncurrent versions; this commit makes healing of the object's
noncurrent versions implicitly requested.
This also fixes `mc admin heal ALIAS/bucket/object` not working
correctly when the bucket is versioned. This has been broken since Apr 2023.
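A hedged sketch of what "implicitly requested" amounts to; listVersions
and healVersion are hypothetical helpers, not the actual heal API:
```
package main

import "fmt"

type objVersion struct {
	VersionID string
	IsLatest  bool
}

// healAllVersions heals every version of the object, not only the latest,
// so noncurrent versions are included without any extra flag.
func healAllVersions(bucket, object string,
	listVersions func(bucket, object string) []objVersion,
	healVersion func(bucket, object, versionID string) error) error {
	for _, v := range listVersions(bucket, object) {
		if err := healVersion(bucket, object, v.VersionID); err != nil {
			return fmt.Errorf("heal %s/%s@%s: %w", bucket, object, v.VersionID, err)
		}
	}
	return nil
}

func main() {
	list := func(bucket, object string) []objVersion {
		return []objVersion{{VersionID: "v2", IsLatest: true}, {VersionID: "v1"}}
	}
	heal := func(bucket, object, versionID string) error { return nil }
	fmt.Println(healAllVersions("bucket", "object", list, heal))
}
```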
Go's http.Server calls SetReadDeadline, overwriting the deadline that the
custom listener code sets after accepting a new connection. Therefore,
--idle-timeout was not correctly respected.
Make the http.Server read/write timeouts match --idle-timeout.
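A minimal sketch of wiring an --idle-timeout style value into http.Server,
so the server's own deadline handling carries the timeout instead of the
custom listener:
```
package main

import (
	"log"
	"net/http"
	"time"
)

func main() {
	idleTimeout := 30 * time.Second // e.g. the value passed via --idle-timeout

	srv := &http.Server{
		Addr:         ":9000",
		ReadTimeout:  idleTimeout, // http.Server manages SetReadDeadline itself,
		WriteTimeout: idleTimeout, // so express the timeout here, not on the Conn
		IdleTimeout:  idleTimeout,
	}
	log.Fatal(srv.ListenAndServe())
}
```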