Manual heal can return XMinioHealInvalidClientToken if the manual
healing is started on the first node and the next mc call to fetch the
heal status lands on another node. The reason is that redirection
based on the token ID fails to route requests back to the first node
due to a typo.
This also affects the batch cancel command: if the batch job is running on
the first node, the user will never be able to cancel it, due to the same
bug.
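As a rough sketch of the routing idea only (the hashing scheme and names below are illustrative, not MinIO's actual client-token format): follow-up status requests have to be proxied to whichever node owns the heal sequence identified by the token, and a typo in that lookup kept requests from reaching the first node.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// nodeForToken maps a client token to the node that should serve the
// request, so heal-status calls landing elsewhere can be proxied there.
func nodeForToken(token string, nodes []string) string {
	h := fnv.New32a()
	h.Write([]byte(token))
	return nodes[int(h.Sum32())%len(nodes)]
}

func main() {
	nodes := []string{"node-0", "node-1", "node-2"}
	fmt.Println(nodeForToken("heal-client-token", nodes))
}
```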
HTTP likes to slap an infinite read deadline on a connection and
do a blocking read while the response is being written.
This effectively means that the read deadline becomes the
request-response deadline.
Instead of enforcing our timeout, we pass it through and keep
the "infinite deadline" sticky on connections.
However, we still "record" when reads are aborted, so we never overwrite that.
The HTTP server should have `ReadTimeout` and `IdleTimeout` set for the deadline to be effective.
Use --idle-timeout for incoming connections.
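For illustration, a minimal sketch of feeding such a timeout into the stdlib server fields named above (the flag value and listen address are placeholders):

```go
package main

import (
	"net/http"
	"time"
)

func main() {
	idleTimeout := 30 * time.Second // e.g. taken from --idle-timeout

	srv := &http.Server{
		Addr:        ":9000",
		Handler:     http.DefaultServeMux,
		ReadTimeout: idleTimeout, // bounds blocked request reads
		IdleTimeout: idleTimeout, // closes idle keep-alive connections
	}
	_ = srv.ListenAndServe()
}
```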
Since DeadlineConn would send deadline updates directly upstream,
it would race with Read/Write operations. The stdlib will perform a read,
but do an async SetReadDeadline(unix(1)) to cancel the Read in
`abortPendingRead`. In this case, the Read may override the
deadline intended to cancel the read.
Stop updating deadlines once a deadline in the past has been seen or Close has been called.
A mutex now protects all upstream deadline calls to avoid races.
This should fix the short-term buildup of...
```
365 @ 0x44112e 0x4756b9 0x475699 0x483525 0x732286 0x737407 0x73816b 0x479601
# 0x475698 sync.runtime_notifyListWait+0x138 runtime/sema.go:569
# 0x483524 sync.(*Cond).Wait+0x84 sync/cond.go:70
# 0x732285 net/http.(*connReader).abortPendingRead+0xa5 net/http/server.go:729
# 0x737406 net/http.(*response).finishRequest+0x86 net/http/server.go:1676
# 0x73816a net/http.(*conn).serve+0x62a net/http/server.go:2050
```
AFAICT, this only affects internode calls that create a connection (non-grid).
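A rough sketch of the wrapper behavior described above (a hypothetical type, not MinIO's actual DeadlineConn): deadline updates are serialized with a mutex and stop being forwarded once an abort (a deadline in the past) or Close has been seen.

```go
package main

import (
	"net"
	"sync"
	"time"
)

type deadlineConn struct {
	net.Conn

	mu      sync.Mutex
	stopped bool // set once a past deadline or Close has been observed
}

func (c *deadlineConn) SetReadDeadline(t time.Time) error {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.stopped {
		// The abort deadline is sticky; never overwrite it.
		return nil
	}
	if !t.IsZero() && t.Before(time.Now()) {
		// A deadline in the past means "cancel the pending read"; remember it.
		c.stopped = true
	}
	return c.Conn.SetReadDeadline(t)
}

func (c *deadlineConn) Close() error {
	c.mu.Lock()
	c.stopped = true
	c.mu.Unlock()
	return c.Conn.Close()
}

func main() {
	a, b := net.Pipe()
	defer b.Close()

	dc := &deadlineConn{Conn: a}
	_ = dc.SetReadDeadline(time.Now().Add(time.Second)) // forwarded
	_ = dc.SetReadDeadline(time.Unix(1, 0))             // abort: forwarded, becomes sticky
	_ = dc.SetReadDeadline(time.Now().Add(time.Second)) // dropped, abort stays in effect
	_ = dc.Close()
}
```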
These checks are needed to make the functions un-crashable with any input
given to `msgUnPath` (tested with fuzzing).
Both conditions would otherwise result in a crash; these checks prevent that. Some
additional upstream checks are still needed.
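Purely as an illustration (a hypothetical decoder, not the real `msgUnPath`), the kind of length checks that keep such a parser from panicking on arbitrary fuzzed input:

```go
package main

import (
	"errors"
	"fmt"
)

// unPath reads a length-prefixed path from a message payload.
func unPath(msg []byte) (string, error) {
	if len(msg) < 1 {
		return "", errors.New("short message: missing length byte")
	}
	n := int(msg[0])
	if len(msg) < 1+n {
		return "", errors.New("short message: truncated path")
	}
	return string(msg[1 : 1+n]), nil
}

func main() {
	fmt.Println(unPath([]byte{4, 'a', 'b', 'c', 'd'}))
	fmt.Println(unPath([]byte{9, 'x'})) // without the check, this would slice out of range
}
```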
Fixes #20610
This PR fixes a regression introduced in https://github.com/minio/minio/pull/19797
by restoring the healing ability of transitioned objects.
Bonus: support for transitioned objects to carry the original object name.
The object name is kept for future reverse lookups if necessary.
Also fix the parity calculation for tiered objects to n/2 when n/2 == parity.
The existing implementation runs the IAM purge routines for expired LDAP and
OIDC accounts with a probability of 0.25 after every IAM refresh. This
change ensures that they run once every hour.
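A minimal sketch of the scheduling change, with illustrative names: track the last purge time and run the purge once per hour instead of with probability 0.25 on every refresh.

```go
package main

import (
	"fmt"
	"time"
)

type iamSys struct {
	lastPurge time.Time
}

func (s *iamSys) refresh() {
	// ... regular IAM refresh work ...
	if time.Since(s.lastPurge) >= time.Hour {
		s.purgeExpiredAccounts()
		s.lastPurge = time.Now()
	}
}

func (s *iamSys) purgeExpiredAccounts() {
	fmt.Println("purging expired LDAP/OIDC accounts")
}

func main() {
	s := &iamSys{}
	s.refresh() // first refresh purges; later refreshes wait until an hour has passed
}
```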
xlStorage.Healing() returns nil if there is an error reading
.healing.bin or if the latter is empty. The healingTracker update()
call returns early if .healing.bin is empty; hence, no further update
of .healing.bin is possible.
A .healing.bin can be empty if opening the file with O_TRUNC succeeds
but the next Write returns an error.
To avoid this situation, healingTracker.update() no longer returns
early when .healing.bin is empty, so the file is written again.
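A rough sketch of the update() change, using a stand-in tracker type and JSON purely for illustration (the real tracker uses a different on-disk format):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type healingTracker struct {
	Path        string `json:"-"`
	Disk        string `json:"disk"`
	ItemsHealed int    `json:"itemsHealed"`
}

// update always rewrites the tracker. The previous behavior returned early
// when the on-disk file was empty:
//
//	if b, _ := os.ReadFile(h.Path); len(b) == 0 { return nil }
//
// which meant an empty .healing.bin (truncate succeeded, write failed)
// could never be repaired.
func (h *healingTracker) update() error {
	data, err := json.Marshal(h)
	if err != nil {
		return err
	}
	return os.WriteFile(h.Path, data, 0o644)
}

func main() {
	h := &healingTracker{Path: ".healing.bin", Disk: "disk-1", ItemsHealed: 42}
	fmt.Println(h.update())
}
```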
This commit also fixes incorrect error logging when an object is
healed on another drive in the same erasure set, but not on the drive
that the fresh-drive healing code is actively healing. Currently, it prints
<nil> instead of the actual error.
* heal: Scan .minio.sys metadata only during site-wide heal (#137)
mc admin heal always invokes healing of .minio.sys, but the latter sometimes
contains a lot of data (many service accounts, STS accounts, etc.), which
makes the mc admin heal command very slow.
Only invoke .minio.sys healing when no bucket was specified in the `mc admin
heal` command.
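A minimal sketch of that condition, with hypothetical names: include the internal .minio.sys metadata only when the heal request is site-wide, i.e. no bucket was given.

```go
package main

import "fmt"

// bucketsToHeal returns the buckets a heal sequence should cover.
func bucketsToHeal(requested string, all []string) []string {
	if requested != "" {
		return []string{requested} // bucket-scoped heal: skip .minio.sys
	}
	// Site-wide heal: include the internal metadata bucket as well.
	return append([]string{".minio.sys"}, all...)
}

func main() {
	fmt.Println(bucketsToHeal("", []string{"photos", "logs"}))
	fmt.Println(bucketsToHeal("photos", []string{"photos", "logs"}))
}
```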
Healing a large object in normal scan mode, where no part reads are
involved, can still fail after 30 seconds if the object has too many
parts, mainly when hard disks are used.
The reason is that a single overall deadline covers the checks for all
parts; this change applies a deadline per part instead.
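A minimal sketch of the per-part deadline idea, with illustrative names and a stand-in part check:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

func checkPart(ctx context.Context, part int) error {
	select {
	case <-time.After(100 * time.Millisecond): // stand-in for the real per-part work
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

// healParts applies a fresh 30-second deadline to each part instead of one
// overall deadline covering every part of the object.
func healParts(ctx context.Context, parts int) error {
	for p := 1; p <= parts; p++ {
		pctx, cancel := context.WithTimeout(ctx, 30*time.Second)
		err := checkPart(pctx, p)
		cancel()
		if err != nil {
			return fmt.Errorf("part %d: %w", p, err)
		}
	}
	return nil
}

func main() {
	fmt.Println(healParts(context.Background(), 3))
}
```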
When listing objects with metadata, avoid returning an "expires"
metadata value when its value is the zero time, as this means that no
expires value is set on the object.
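A minimal sketch of the check, assuming the zero time is used as the "not set" sentinel:

```go
package main

import (
	"fmt"
	"time"
)

// expiresMetadata only emits the "expires" entry when a real value is stored;
// the zero time means no expires value is set on the object.
func expiresMetadata(expires time.Time) map[string]string {
	md := map[string]string{}
	if !expires.IsZero() {
		md["expires"] = expires.UTC().Format(time.RFC1123)
	}
	return md
}

func main() {
	fmt.Println(expiresMetadata(time.Time{}))                                 // no "expires" key
	fmt.Println(expiresMetadata(time.Date(2025, 1, 1, 0, 0, 0, 0, time.UTC))) // "expires" key present
}
```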
The condition was incorrect, as we were comparing the filter
value against the object's modification time.
For example, if the "created after" filter date is after the modification
time of the object, that means the object was created before the filter
time and should be skipped during replication, because per the
filter we only need the objects created after the filter date.
Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
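A minimal sketch of the corrected comparison described above (names are illustrative):

```go
package main

import (
	"fmt"
	"time"
)

// qualifies reports whether an object passes a "created after" replication
// filter: only objects modified after the filter date should be replicated.
func qualifies(modTime, createdAfter time.Time) bool {
	return modTime.After(createdAfter)
}

func main() {
	createdAfter := time.Date(2024, 6, 1, 0, 0, 0, 0, time.UTC)
	fmt.Println(qualifies(time.Date(2024, 7, 1, 0, 0, 0, 0, time.UTC), createdAfter)) // true: replicate
	fmt.Println(qualifies(time.Date(2024, 5, 1, 0, 0, 0, 0, time.UTC), createdAfter)) // false: skip
}
```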
When the Keycloak vendor is set, the code starts to clean up service
accounts whose parent users no longer exist. However, the code also
looks for the parent user of site-replicator-0, MINIO_ROOT_USER, which
obviously does not exist in Keycloak. Therefore, site-replicator-0
would be removed automatically.
This commit avoids cleaning up service accounts generated from
the root user.
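A rough sketch of the cleanup guard, with hypothetical names: service accounts whose parent is the root user are never treated as orphaned, regardless of what the external IDP reports.

```go
package main

import "fmt"

type svcAccount struct {
	Name   string
	Parent string
}

// shouldDelete decides whether a service account is orphaned. Accounts
// parented by the root user are skipped, since the root user never exists
// in Keycloak (or any external IDP).
func shouldDelete(acc svcAccount, rootUser string, existsInIDP func(string) bool) bool {
	if acc.Parent == rootUser {
		return false
	}
	return !existsInIDP(acc.Parent)
}

func main() {
	inIDP := func(u string) bool { return u == "alice" }
	fmt.Println(shouldDelete(svcAccount{"site-replicator-0", "rootadmin"}, "rootadmin", inIDP)) // false: kept
	fmt.Println(shouldDelete(svcAccount{"svc-1", "bob"}, "rootadmin", inIDP))                   // true: orphaned
}
```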
The verify file handler response format was changed from gob to msgp
two months ago, but we forgot to update the verify handler client.
VerifyFile is only called during a heal deep scan (bitrot check).
HealObject() will fail in that case, mark all disks corrupted, and
return early (the object is reported as unrecoverable but is also not
removed).
It is a bit rare for HealObject to be called with the deep scan flag. It
is called when a HealObject with a normal scan (e.g. new drive healing)
detects bitrot corruption; therefore, healing objects with detected
bitrot corruption will fail.
In cases where we cannot possibly know a way to read and
construct the object, and it is impossible to achieve any form of
quorum via xl.meta even though we have sufficient responses from
all the drives, we should return object-not-found for:
- PutObjectMetadata()
- PutObjectTags()
- DeleteObjectTags()
- TransitionObject()
- RestoreTransitionObject()
Also improve the behavior of the multipart code across
pool locks: hold locks only once per upload ID for
- CompleteMultipartUpload()
- AbortMultipartUpload()
- ListObjectParts() (read-lock)
- GetMultipartInfo() (read-lock)
- PutObjectPart() (read-lock)
This avoids lock attempts across pools for no
reason; those attempts grow as O(n) when there are n pools.
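A rough sketch of the locking pattern, with illustrative names: one lock keyed by upload ID, taken once per call, instead of lock attempts spread across every pool.

```go
package main

import (
	"fmt"
	"sync"
)

type uploadLocks struct {
	mu    sync.Mutex
	locks map[string]*sync.RWMutex
}

// forUpload returns the single lock guarding one upload ID.
func (u *uploadLocks) forUpload(uploadID string) *sync.RWMutex {
	u.mu.Lock()
	defer u.mu.Unlock()
	if u.locks == nil {
		u.locks = make(map[string]*sync.RWMutex)
	}
	l, ok := u.locks[uploadID]
	if !ok {
		l = &sync.RWMutex{}
		u.locks[uploadID] = l
	}
	return l
}

func main() {
	ul := &uploadLocks{}
	l := ul.forUpload("upload-123")

	// Write lock once for CompleteMultipartUpload / AbortMultipartUpload.
	l.Lock()
	fmt.Println("completing upload under a single write lock")
	l.Unlock()

	// Read lock for ListObjectParts / GetMultipartInfo / PutObjectPart.
	l.RLock()
	fmt.Println("listing parts under a read lock")
	l.RUnlock()
}
```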
Multi-object deletion may or may not compete with locks
granted to other callers, causing concurrent operations
to succeed on top of each other.
A continuation of the PR https://github.com/minio/minio/pull/20356