This PR fixes a regression introduced in https://github.com/minio/minio/pull/19797
by restoring the healing ability of transitioned objects
Bonus: support for transitioned objects to carry the original object
name, for future reverse lookups if necessary.
Also fix the parity calculation for tiered objects to n/2 where n/2 == parity.
Existing implementation runs IAM purge routines for expired LDAP and
OIDC accounts with a probability of 0.25 after every IAM refresh. This
change ensures that they run once every hour.
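Under the hood this is a simple fixed-interval loop. A minimal sketch of the hourly schedule, assuming a hypothetical purgeExpiredAccounts helper (not MinIO's actual function names):

```go
package sketch

import (
	"context"
	"time"
)

// purgeExpiredAccounts stands in for the LDAP/OIDC purge routines.
func purgeExpiredAccounts(ctx context.Context) {
	// ... delete expired LDAP and OIDC accounts ...
}

// runPurgeLoop runs the purge once every hour instead of with
// probability 0.25 after each IAM refresh.
func runPurgeLoop(ctx context.Context) {
	t := time.NewTicker(time.Hour)
	defer t.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-t.C:
			purgeExpiredAccounts(ctx)
		}
	}
}
```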
xlStorage.Healing() returns nil if there is an error reading
.healing.bin or if the latter is empty, and healingTracker.update()
returns early if .healing.bin is empty; hence, no further update
of .healing.bin is possible.
A .healing.bin can be empty if os.OpenFile() with O_TRUNC succeeds
but the next Write returns an error.
To avoid this situation, healingTracker.update() no longer returns
early when .healing.bin is empty; it writes the tracker again.
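A minimal sketch of the failure mode; the helper name is illustrative, not MinIO's actual code:

```go
package sketch

import "os"

// saveTracker shows how .healing.bin can end up empty: opening with
// O_TRUNC empties the file immediately, so if the subsequent Write
// fails (e.g. a disk I/O error), a zero-length file is left behind.
func saveTracker(path string, data []byte) error {
	f, err := os.OpenFile(path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o644)
	if err != nil {
		return err
	}
	defer f.Close()
	if _, err := f.Write(data); err != nil {
		return err // file already truncated: it now exists, but empty
	}
	return f.Sync()
}
```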
This commit also fixes wrong error logging when an object is
healed on another drive in the same erasure set, but not on the drive
that is actively being healed by the fresh-drive healing code. Currently,
it prints <nil> instead of the actual error.
* heal: Scan .minio.sys metadata only during site-wide heal (#137)
mc admin heal always invokes healing of .minio.sys, but the latter
sometimes contains a lot of data (many service accounts, STS accounts,
etc.), which makes the mc admin heal command very slow.
Only invoke .minio.sys healing when no bucket is specified in the
`mc admin heal` command.
Healing a large object in normal scan mode (where no part reads are
involved) can still fail after 30 seconds if the object has too many
parts, mainly when hard disks are used.
The reason is that a single overall deadline covers the checks for all
parts; instead, apply a deadline per part (see the sketch below).
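A minimal sketch of per-part deadlines, with hypothetical part and checkPart types standing in for the actual healing code:

```go
package sketch

import (
	"context"
	"time"
)

type part struct{ number int }

// checkPart stands in for stat'ing/verifying one part on disk.
func checkPart(ctx context.Context, p part) error { return nil }

// checkAllParts applies one deadline per part, so objects with many
// parts on slow disks no longer exhaust a single global budget.
func checkAllParts(ctx context.Context, parts []part) error {
	for _, p := range parts {
		pctx, cancel := context.WithTimeout(ctx, 30*time.Second)
		err := checkPart(pctx, p)
		cancel()
		if err != nil {
			return err
		}
	}
	return nil
}
```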
When listing objects with metadata, avoid returning an "expires"
metadata value when it is the zero time, as this means that no
expires value is set on the object.
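A minimal sketch of the zero-time check, with an illustrative helper name:

```go
package sketch

import (
	"net/http"
	"time"
)

// expiresValue returns the formatted expires metadata and whether it
// should be included at all; a zero time means "never set".
func expiresValue(expires time.Time) (string, bool) {
	if expires.IsZero() {
		return "", false // omit the field instead of emitting the zero time
	}
	return expires.UTC().Format(http.TimeFormat), true
}
```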
The condition was incorrect: we were comparing the filter value
against the object's modification time.
For example, if the created-after filter date is after the modification
time of the object, the object was created before the filter time and
should be skipped during replication, because per the filter we need
only the objects created after the filter date (a sketch follows below).
Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
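A minimal sketch of the corrected comparison (names are illustrative):

```go
package sketch

import "time"

// shouldReplicate applies a "created after" filter: only objects whose
// modification time is after the filter date qualify for replication.
func shouldReplicate(objModTime, createdAfter time.Time) bool {
	if !createdAfter.IsZero() && objModTime.Before(createdAfter) {
		return false // object predates the filter date; skip it
	}
	return true
}
```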
When the Keycloak vendor is set, the code starts cleaning up service
accounts whose parent users no longer exist. However, the code also
looks for the parent user of site-replicator-0, MINIO_ROOT_USER, which
obviously does not exist in Keycloak. Therefore, site-replicator-0
would be removed automatically.
This commit avoids cleaning up service accounts generated from
the root user.
The verify file handler response format was changed from gob to msgp
two months ago, but we forgot to update the verify handler client.
VerifyFile is only called during a heal deep scan (bitrot check).
HealObject() will fail in that case: it will mark all disks corrupted
and return early (the object is treated as unrecoverable, but it is
also not removed).
It is a bit rare for HealObject to be called with the deep scan flag; it
happens when HealObject with a normal scan (e.g. new drive healing)
detects bitrot corruption. Therefore, healing objects with detected
bitrot corruption will fail.
In cases where we cannot possibly know a way to read and
construct the object, i.e. it is impossible to achieve any form of
quorum via xl.meta even though we have sufficient responses from
all the drives, we should return object-not-found for:
- PutObjectMetadata()
- PutObjectTags()
- DeleteObjectTags()
- TransitionObject()
- RestoreTransitionObject()
Also improve the behavior of multipart code across
pool locks: hold locks only once per upload ID for
- CompleteMultipartUpload()
- AbortMultipartUpload()
- ListObjectParts() (read-lock)
- GetMultipartInfo() (read-lock)
- PutObjectPart() (read-lock)
This avoids needless lock attempts across pools, whose cost grows
O(n) with n pools; see the sketch below.
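A minimal sketch of the idea, holding a single lock keyed by upload ID rather than locking per pool; the locker type and names are hypothetical, not MinIO's actual locking API:

```go
package sketch

import "sync"

// uploadLocker hands out one lock per upload ID.
type uploadLocker struct {
	mu    sync.Mutex
	locks map[string]*sync.RWMutex
}

func (u *uploadLocker) lockFor(uploadID string) *sync.RWMutex {
	u.mu.Lock()
	defer u.mu.Unlock()
	if u.locks == nil {
		u.locks = make(map[string]*sync.RWMutex)
	}
	l, ok := u.locks[uploadID]
	if !ok {
		l = &sync.RWMutex{}
		u.locks[uploadID] = l
	}
	return l
}

func completeMultipartUpload(u *uploadLocker, uploadID string, pools []string) {
	// Acquire once, up front, keyed by upload ID ...
	l := u.lockFor(uploadID)
	l.Lock()
	defer l.Unlock()
	// ... then consult every pool without re-locking per pool, avoiding
	// O(n) lock attempts across n pools.
	for range pools {
		// find and complete the upload in the owning pool
	}
}
```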
Multi-object deletion may or may not compete with locks
granted to other callers, causing concurrent operations
to incorrectly succeed over each other.
A continuation of the PR https://github.com/minio/minio/pull/20356
When custom authorization via a plugin is enabled, the console will now
render the UI as if all actions are allowed. Since the server cannot
determine the exact policy allowed for a user via the plugin, this is
acceptable. If a particular action is actually not allowed by the
plugin, the call will result in an error.
Previously the server was evaluating a policy even when custom authZ is
enabled; this is fixed now.
The "unlimited" value on PPC isn't exactly the same as on amd64;
instead, compare against an "unreasonably big" value.
The old comparison would cause OOM in anything using the concurrent
request limit.
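A minimal sketch of the threshold comparison; the constant and helper are illustrative assumptions, not MinIO's exact values:

```go
package sketch

const maxReasonableLimit = 1 << 42 // assumption: anything above is "unlimited"

// concurrentRequestLimit normalizes a platform-reported limit. On some
// platforms (e.g. ppc64le) the sentinel for "unlimited" differs from
// amd64, so compare against a huge threshold rather than one sentinel.
func concurrentRequestLimit(reported uint64) uint64 {
	if reported >= maxReasonableLimit {
		return 0 // 0 means "no limit" in this sketch
	}
	return reported
}
```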
Add TTFB for all requests in metrics-v3, in addition to the existing
GetObject metric. Also, for requests that do not return a body in the
response, record TTFB as the moment the HTTP status code and headers
are sent.
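A minimal sketch of recording TTFB at header-write time by wrapping http.ResponseWriter; the metric sink recordTTFB is a hypothetical stand-in for the metrics-v3 exporter:

```go
package sketch

import (
	"net/http"
	"time"
)

type ttfbWriter struct {
	http.ResponseWriter
	start   time.Time
	written bool
}

func (w *ttfbWriter) WriteHeader(code int) {
	if !w.written {
		w.written = true
		// For bodyless responses this is the moment the status code
		// and headers go out, so record TTFB here.
		recordTTFB(time.Since(w.start))
	}
	w.ResponseWriter.WriteHeader(code)
}

func (w *ttfbWriter) Write(b []byte) (int, error) {
	if !w.written {
		w.WriteHeader(http.StatusOK)
	}
	return w.ResponseWriter.Write(b)
}

func recordTTFB(d time.Duration) { /* export to metrics-v3 */ }
```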
A batch job will fail if the retry attempts are not provided. The
reason is that the code mistakenly reads the retry attempts from the
job status rather than from the job yaml file.
This will also set a default empty prefix for batch expiration.
It also avoids trimming the prefix: the yaml decoder already trims
plain (unquoted) values, and we should not trim when quotes were
provided and the user deliberately included a leading or trailing space.
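For reference, the YAML scalar behavior this relies on, shown with gopkg.in/yaml.v3 (struct names are illustrative): plain scalars are trimmed by the decoder, quoted scalars keep their spaces:

```go
package sketch

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

type expireJob struct {
	Prefix string `yaml:"prefix"`
}

func demo() {
	var plain, quoted expireJob
	// Plain scalar: the decoder itself strips surrounding whitespace.
	yaml.Unmarshal([]byte(`prefix:   logs/   `), &plain)
	// Quoted scalar: the spaces are intentional and must be kept, so
	// the server must not trim the value again after decoding.
	yaml.Unmarshal([]byte(`prefix: " logs/ "`), &quoted)
	fmt.Printf("%q %q\n", plain.Prefix, quoted.Prefix) // "logs/" " logs/ "
}
```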
Tests if imported service accounts have
required access to buckets and objects.
Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
Co-authored-by: Harshavardhana <harsha@minio.io>
- PutObject() for multi-pool setups was holding large
region locks, which was not necessary. This affected
almost all slowpoke clients and lengthy uploads.
- Re-arrange locks for CompleteMultipartUpload and PutObject
to be closer to the rename() call.
Currently, it is not possible to remove a tier if it is not accessible
or contains some data. Add a force flag to make the removal succeed
in those cases.
When encryption and compression are both enabled, the
server avoids compressing the data for no apparent reason.
This commit enables it and updates the unit tests.