We do not need to hold the read locks at the higher layer before reading
the body; instead, hold the read locks at the time of renamePart() to
protect against racy part overwrites competing with a concurrent
completeMultipart().
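A minimal sketch of the intent, with hypothetical names (the real code uses
MinIO's own locking primitives, not a bare sync.RWMutex): the read lock wraps
only renamePart(), while a concurrent completeMultipart() would take the
corresponding write lock.
```go
package multipart

import (
	"context"
	"sync"
)

// Illustrative only: uploadLock stands in for the real per-upload lock.
func renamePartProtected(ctx context.Context, uploadLock *sync.RWMutex, renamePart func(context.Context) error) error {
	// Hold the read lock only around renamePart(), so a racy part overwrite
	// cannot interleave with a concurrent completeMultipart(), which would
	// take the write lock on the same upload.
	uploadLock.RLock()
	defer uploadLock.RUnlock()
	return renamePart(ctx)
}
```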
A CheckParts call can take a long time to verify 10k parts of a single object on a single drive.
To avoid the internal one-minute deadline of the single-shot handler RPC, this commit
switches to a streaming RPC instead.
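A rough sketch of the shape of the change under assumed names (checkPartsStreaming,
verify, and the channel-based transport are illustrative, not MinIO's actual RPC layer):
results are streamed one part at a time, so no single unary response has to complete
within the one-minute handler deadline.
```go
package storage

import "context"

// Illustrative only: verify is a stand-in for the per-part check, and out is
// the streaming side of the RPC. Each part produces its own result instead of
// one blocking response covering all 10k parts.
func checkPartsStreaming(ctx context.Context, parts []string, verify func(string) error, out chan<- error) {
	defer close(out)
	for _, part := range parts {
		select {
		case <-ctx.Done():
			return
		case out <- verify(part):
		}
	}
}
```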
This API had missing permissions checking, allowing a user to change
their policy mapping by:
1. Craft iam-info.zip file: Update own user permission in
user_mappings.json
2. Upload it via `mc admin cluster iam import nobody iam-info.zip`
Here `nobody` can be a user with pretty much any kind of permission (but
not anonymous), and the import still ends up working.
Some more detailed steps - start from a fresh setup:
```
./minio server /tmp/d{1...4} &
mc alias set myminio http://localhost:9000 minioadmin minioadmin
mc admin user add myminio nobody nobody123
mc admin policy attach myminio readwrite nobody nobody123
mc alias set nobody http://localhost:9000 nobody nobody123
mc admin cluster iam export myminio
mkdir /tmp/x && mv myminio-iam-info.zip /tmp/x
cd /tmp/x
unzip myminio-iam-info.zip
echo '{"nobody":{"version":1,"policy":"consoleAdmin","updatedAt":"2024-08-13T19:47:10.1Z"}}' > \
iam-assets/user_mappings.json
zip -r myminio-iam-info-updated.zip iam-assets/
mc admin cluster iam import nobody ./myminio-iam-info-updated.zip
mc admin service restart nobody
```
`mc admin heal ALIAS/bucket/object` does not have any flag to heal an
object's noncurrent versions; this commit makes healing of the object's
noncurrent versions implicit.
This also fixes `mc admin heal ALIAS/bucket/object` not working
correctly when the bucket is versioned. This has been broken since Apr 2023.
Go's http.Server calls SetReadDeadline, overwriting the deadline that the
custom listener code sets after accepting a new connection. As a result,
--idle-timeout was not correctly respected.
Set the http.Server read/write timeouts to match --idle-timeout.
The code assigns the corrupted state to a drive for any unexpected error,
which is confusing for users. This change assigns the corrupted state
only for corrupted parts or xl.meta, and uses the unknown state with an
explanation for any other unexpected error, such as canceled or deadline
errors, drive timeouts, etc.
Also make sure to return the bucket/object name when the object is not
found, or is marked not found by the heal dangling code.
* Add the policy name to the audit log tags when doing policy-based API calls
* Audit log the retention settings requested in the API call
* Audit log retention settings on the PutObjectRetention API path too
The experimental functions are now available in the standard library in
Go 1.23 [1].
[1]: https://go.dev/doc/go1.23#new-unique-package
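For context, here is a minimal example of the Go 1.23 unique package referenced above:
```go
package main

import (
	"fmt"
	"unique"
)

func main() {
	// unique.Make canonicalizes ("interns") comparable values: equal values
	// share one handle, so comparison is a cheap pointer-sized check.
	a := unique.Make("myminio/bucket/object")
	b := unique.Make("myminio/bucket/object")
	fmt.Println(a == b)    // true: same canonical handle
	fmt.Println(a.Value()) // prints the original string
}
```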
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
The RemoveUser API only removes internal users, and it reports success
even when it didn't find an internal user account to delete. When provided
with a service account, it should not report success, as that is misleading.
Manual heal can return XMinioHealInvalidClientToken if the manual
healing is started on the first node and the next mc call to get the
heal status lands on another node. The reason is that redirection
based on the token ID fails to redirect requests to the first node
due to a typo.
This also affects the batch cancel command: if the batch is running on
the first node, the user will never be able to cancel it due to the same
bug.
HTTP likes to slap an infinite read deadline on a connection and
do a blocking read while the response is being written.
This effectively means that the read deadline becomes the
request-response deadline.
Instead of enforcing our timeout, we pass the deadline through, and the
"infinite deadline" stays sticky on the connection.
However, we still record when reads are aborted, so we never overwrite that.
The HTTP server should have `ReadTimeout` and `IdleTimeout` set for the deadline to be effective.
Use --idle-timeout for incoming connections.
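A minimal sketch of the server-side settings this relies on; the 30-second
value is a placeholder for whatever --idle-timeout is configured to:
```go
package main

import (
	"net/http"
	"time"
)

func main() {
	idleTimeout := 30 * time.Second // assumed; wired from --idle-timeout in practice

	srv := &http.Server{
		Addr:        ":9000",
		Handler:     http.NewServeMux(),
		ReadTimeout: idleTimeout, // bounds reads of a single request
		IdleTimeout: idleTimeout, // closes idle keep-alive connections
	}
	_ = srv.ListenAndServe()
}
```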
This PR fixes a regression introduced in https://github.com/minio/minio/pull/19797
by restoring the healing ability of transitioned objects.
Bonus: transitioned objects now carry the original object name, for
future reverse lookups if necessary.
Also fix the parity calculation for tiered objects to use n/2 (i.e. parity == n/2).
The existing implementation runs the IAM purge routines for expired LDAP and
OIDC accounts with a probability of 0.25 after every IAM refresh. This
change ensures that they run once every hour.
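A sketch of the new cadence under assumed names (runPurgeLoop and
purgeExpiredAccounts are illustrative):
```go
package iam

import (
	"context"
	"time"
)

// Illustrative only: run the expired LDAP/OIDC account purge once per hour
// instead of with probability 0.25 after every IAM refresh.
func runPurgeLoop(ctx context.Context, purgeExpiredAccounts func(context.Context)) {
	t := time.NewTicker(time.Hour)
	defer t.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-t.C:
			purgeExpiredAccounts(ctx)
		}
	}
}
```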
xlStorage.Healing() returns nil if there is an error reading
.healing.bin or if the file is empty. The healingTracker.update()
call returns early if .healing.bin is empty; hence, no further update
of .healing.bin is possible.
A .healing.bin can end up empty if opening it with O_TRUNC succeeds
but the subsequent Write returns an error.
To avoid this situation, healingTracker.update() no longer returns
early when .healing.bin is empty; instead it writes the tracker again.
This commit also fixes a wrong error log printed when an object is
healed on another drive in the same erasure set but not on the drive
being actively healed by the fresh-drive healing code. Currently, it prints
<nil> instead of the actual error.
* heal: Scan .minio.sys metadata only during site-wide heal (#137)
mc admin heal always invokes .minio.sys healing, but .minio.sys sometimes
contains a lot of data (many service accounts, STS accounts, etc.), which
makes the mc admin heal command very slow.
Only invoke .minio.sys healing when no bucket is specified in the `mc admin
heal` command.
Healing a large object with the normal scan mode, where no part reads
are involved, can still fail after 30 seconds if the object has too many
parts, mainly when hard disks are being used.
The reason is that there was a single overall deadline covering the check
of all parts; apply a deadline per part instead.
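A sketch of the per-part deadline idea, with assumed names and an assumed
timeout value:
```go
package heal

import (
	"context"
	"time"
)

// Illustrative only: give each part its own deadline instead of one overall
// deadline, so objects with many parts on slow disks are not cut off at 30s.
func checkAllParts(ctx context.Context, parts []string, check func(context.Context, string) error) error {
	const perPartTimeout = 5 * time.Second // assumed value for illustration
	for _, part := range parts {
		pctx, cancel := context.WithTimeout(ctx, perPartTimeout)
		err := check(pctx, part)
		cancel()
		if err != nil {
			return err
		}
	}
	return nil
}
```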
When listing objects with metadata, avoid returning an "expires" time
metadata value when its value is the zero time, as this means that no
expires value is set on the object.
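A minimal sketch of the guard, with illustrative names:
```go
package listing

import (
	"net/http"
	"time"
)

// expiresHeader returns the formatted "expires" value only when one is set;
// the zero time means no expiry was configured on the object, so omit it.
func expiresHeader(expires time.Time) (string, bool) {
	if expires.IsZero() {
		return "", false
	}
	return expires.UTC().Format(http.TimeFormat), true
}
```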
The condition was incorrect, as we were comparing the filter
value against the object's modification time the wrong way around.
For example, if the created-after filter date is after the object's
modification time, the object was created before the filter time and
should be skipped during replication, because per the filter we only
want objects created after the filter date.
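A minimal sketch of the corrected comparison, with illustrative names:
```go
package replication

import "time"

// skipByCreatedAfter reports whether an object should be skipped for
// replication: the filter asks for objects created after createdAfter, so an
// object whose modification time is before that date is skipped.
func skipByCreatedAfter(objModTime, createdAfter time.Time) bool {
	return !createdAfter.IsZero() && objModTime.Before(createdAfter)
}
```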
Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
When the Keycloak vendor is set, the code starts to clean up service
accounts whose parent users no longer exist. However, the code also
looks for the parent user of site-replicator-0, MINIO_ROOT_USER, which
obviously does not exist in Keycloak; therefore, site-replicator-0
is removed automatically.
This commit avoids cleaning up service accounts generated from
the root user.
The verify file handler response format was changed from gob to msgp
two months ago, but we forgot to update the verify handler client.
VerifyFile is only called during a heal deep scan (bitrot check).
In that case, HealObject() fails, marks all disks as corrupted, and
returns early (the object is treated as unrecoverable, but it is also
not removed).
It is somewhat rare for HealObject to be called with the deep scan flag:
it happens when a HealObject with a normal scan (e.g. new drive healing)
detects bitrot corruption. Therefore, healing objects with detected
bitrot corruption will fail.
In cases where we cannot possibly know a way to read and
construct the object, i.e. when it is impossible to achieve any form of
quorum via xl.meta even though we have sufficient responses from
all the drives, we should return object not found for:
- PutObjectMetadata()
- PutObjectTags()
- DeleteObjectTags()
- TransitionObject()
- RestoreTransitionObject()
Also improve the behavior of the multipart code across
pool locks; hold locks only once per upload ID for:
- CompleteMultipartUpload()
- AbortMultipartUpload()
- ListObjectParts() (read-lock)
- GetMultipartInfo() (read-lock)
- PutObjectPart() (read-lock)
This avoids pointless lock attempts across pools, whose cost grows
O(n) when there are n pools (see the sketch below).
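A sketch of the locking shape described above, using a plain sync.RWMutex as
a stand-in for MinIO's actual lock: the upload-ID lock is taken once at the
entry point and the per-pool work runs under it, instead of taking lock
attempts inside every pool.
```go
package multipart

import "sync"

// Illustrative only: hold the upload-ID lock once, then visit each pool,
// rather than acquiring a lock inside every pool along the way.
func completeAcrossPools(uploadIDLock *sync.RWMutex, poolOps []func() error) error {
	uploadIDLock.Lock() // single write lock per upload ID
	defer uploadIDLock.Unlock()
	for _, op := range poolOps {
		if err := op(); err != nil {
			return err
		}
	}
	return nil
}
```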