A CheckParts call can take a long time to verify the 10k parts of a single object on a single drive.
To avoid hitting the internal one-minute deadline of a single-shot handler RPC, this commit
switches to a streaming RPC instead.
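The shape of the change, as a minimal sketch: stream one result per part so the peer sees continuous progress instead of a single reply racing a one-minute deadline. All names below are illustrative, not MinIO's actual grid types:

```go
package example

import (
	"context"
	"fmt"
)

// verifyPart is a stand-in for the per-part consistency check.
func verifyPart(idx int) error { return nil }

// checkPartsStreaming verifies parts one at a time and sends a result per
// part on out, so the caller receives progress continuously instead of one
// response after all (potentially 10k) parts are done.
func checkPartsStreaming(ctx context.Context, numParts int, out chan<- string) error {
	defer close(out)
	for i := 0; i < numParts; i++ {
		err := verifyPart(i)
		select {
		case out <- fmt.Sprintf("part %d: err=%v", i+1, err):
		case <-ctx.Done():
			return ctx.Err()
		}
	}
	return nil
}
```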
Go's http.Server calls SetReadDeadline, overwriting the deadline that the custom
listener code configures after accepting a new connection. As a result,
--idle-timeout was not correctly respected.
Set the http.Server read/write timeouts to match --idle-timeout.
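Concretely, the stdlib fields involved look like this; `idleTimeout` stands in for the parsed --idle-timeout value:

```go
package example

import (
	"net/http"
	"time"
)

// newServer aligns the server's own timeouts with --idle-timeout so the
// deadlines the stdlib sets per request don't undercut the listener's
// configuration.
func newServer(handler http.Handler, idleTimeout time.Duration) *http.Server {
	return &http.Server{
		Handler:      handler,
		ReadTimeout:  idleTimeout,
		WriteTimeout: idleTimeout,
		IdleTimeout:  idleTimeout,
	}
}
```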
* Add the policy name to the audit log tags when doing policy-based API calls
* Audit log the retention settings requested in the API call
* Audit log the retention settings on the PutObjectRetention API path too
Previously, not setting http.Config.HTTPTimeout for the logger webhook
resulted in a timeout of 0, causing "deadline exceeded" errors in the
log webhook.
This change introduces a new env variable for configuring the log webhook
timeout and, more importantly, sets it when the config is initialized.
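A sketch of the intended behavior; the env variable name and default below are placeholders, not the actual config keys:

```go
package example

import (
	"net/http"
	"os"
	"time"
)

// webhookTimeout parses a duration from the environment and falls back to a
// non-zero default, so a missing setting can no longer translate into a
// zero timeout (and thus an already-expired deadline on every request).
func webhookTimeout(envKey string, def time.Duration) time.Duration {
	if v := os.Getenv(envKey); v != "" {
		if d, err := time.ParseDuration(v); err == nil {
			return d
		}
	}
	return def
}

func newWebhookClient() *http.Client {
	return &http.Client{Timeout: webhookTimeout("LOGGER_WEBHOOK_HTTP_TIMEOUT", 5*time.Second)}
}
```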
HTTP likes to slap an infinite read deadline on a connection and
do a blocking read while the response is being written.
This effectively turns the read deadline into the
request-response deadline.
Instead of enforcing our timeout, we pass it through and keep the
"infinite deadline" sticky on connections.
However, we still record when reads are aborted, so we never overwrite that.
The HTTP server should have `ReadTimeout` and `IdleTimeout` set for the deadline to be effective.
Use --idle-timeout for incoming connections.
Since DeadlineConn sent deadline updates directly upstream,
they raced with Read/Write operations. The stdlib will perform a read,
but issues an async SetReadDeadline(unix(1)) from `abortPendingRead`
to cancel the Read. In this case, the Read may override the
deadline that was meant to cancel it.
Stop updating deadlines once a deadline in the past has been seen, and once Close has been called.
A mutex now protects all upstream deadline calls to avoid races.
This should fix the short-term buildup of...
```
365 @ 0x44112e 0x4756b9 0x475699 0x483525 0x732286 0x737407 0x73816b 0x479601
# 0x475698 sync.runtime_notifyListWait+0x138 runtime/sema.go:569
# 0x483524 sync.(*Cond).Wait+0x84 sync/cond.go:70
# 0x732285 net/http.(*connReader).abortPendingRead+0xa5 net/http/server.go:729
# 0x737406 net/http.(*response).finishRequest+0x86 net/http/server.go:1676
# 0x73816a net/http.(*conn).serve+0x62a net/http/server.go:2050
```
AFAICT this only affects internode calls that create a connection (non-grid).
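A minimal sketch of the guarded wrapper; names are illustrative, not the actual DeadlineConn code:

```go
package example

import (
	"net"
	"sync"
	"time"
)

// guardedConn serializes deadline updates and makes a past deadline sticky:
// once someone sets a deadline in the past (e.g. abortPendingRead's unix(1))
// or the conn is closed, later updates are dropped instead of overriding it.
type guardedConn struct {
	net.Conn
	mu     sync.Mutex
	frozen bool
}

func (c *guardedConn) SetReadDeadline(t time.Time) error {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.frozen {
		return nil // a cancellation deadline is already in effect; keep it
	}
	if !t.IsZero() && t.Before(time.Now()) {
		c.frozen = true // a past deadline is a cancellation; never overwrite
	}
	return c.Conn.SetReadDeadline(t)
}

func (c *guardedConn) Close() error {
	c.mu.Lock()
	c.frozen = true // no deadline updates after Close either
	c.mu.Unlock()
	return c.Conn.Close()
}
```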
These checks are needed so the functions cannot crash on any input
given to `msgUnPath` (tested with fuzzing).
Either condition would result in a crash; this change prevents that. Some
additional upstream checks are still needed.
Fixes #20610
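With Go's native fuzzing, the regression test looks roughly like this; `msgUnPath`'s signature is assumed for illustration (stubbed below):

```go
package example

import "testing"

// msgUnPath is a stub standing in for the real function under test;
// the actual signature may differ.
func msgUnPath(b []byte) (path string, rest []byte) { return "", b }

// FuzzMsgUnPath feeds arbitrary bytes into msgUnPath; the fuzzer flags any
// input that panics (e.g. out-of-range slicing on short inputs).
func FuzzMsgUnPath(f *testing.F) {
	f.Add([]byte{})        // seed: empty input
	f.Add([]byte("a/b/c")) // seed: a plausible path
	f.Fuzz(func(t *testing.T, data []byte) {
		// Must never panic, whatever the input.
		_, _ = msgUnPath(data)
	})
}
```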
Add TTFB for all requests in metrics-v3, in addition to the existing
GetObject metric. Also, for requests that do not return a body in the
response, record TTFB at the moment the HTTP status code and headers are
sent.
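One common way to do this is a ResponseWriter wrapper that timestamps the first header or byte written; a sketch, not the actual metrics-v3 code:

```go
package example

import (
	"net/http"
	"time"
)

// ttfbWriter records the instant the status code and headers go out, which
// for body-less responses is also the time to first byte.
type ttfbWriter struct {
	http.ResponseWriter
	start time.Time
	ttfb  time.Duration
}

func (w *ttfbWriter) WriteHeader(code int) {
	if w.ttfb == 0 { // only the first write counts
		w.ttfb = time.Since(w.start)
	}
	w.ResponseWriter.WriteHeader(code)
}

func (w *ttfbWriter) Write(p []byte) (int, error) {
	if w.ttfb == 0 { // implicit 200 OK path
		w.ttfb = time.Since(w.start)
	}
	return w.ResponseWriter.Write(p)
}

func withTTFB(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		tw := &ttfbWriter{ResponseWriter: w, start: time.Now()}
		next.ServeHTTP(tw, r)
		_ = tw.ttfb // report to the metrics subsystem here
	})
}
```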
Tests if imported service accounts have
required access to buckets and objects.
Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
Co-authored-by: Harshavardhana <harsha@minio.io>
this cache will be honored only when `prefix=""` while
performing the ListMultipartUploads() operation.
This is mainly to satisfy applications like alluxio
for their underfs implementation and tests.
replaces https://github.com/minio/minio/pull/20181
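The guard itself is simple; a sketch with hypothetical types:

```go
package example

import "sync"

type multipartCache struct {
	mu      sync.Mutex
	uploads map[string][]string // bucket -> cached upload IDs
}

// get consults the cache only for the unfiltered listing; any non-empty
// prefix bypasses it so filtered results stay authoritative.
func (c *multipartCache) get(bucket, prefix string) ([]string, bool) {
	if prefix != "" {
		return nil, false // cache honored only when prefix == ""
	}
	c.mu.Lock()
	defer c.mu.Unlock()
	u, ok := c.uploads[bucket]
	return u, ok
}
```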
AFAICT we send a canceled context to unlock (and thereby releaseAll), which causes those network calls to fail.
Instead, use a background context with a 30s timeout.
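That is, roughly:

```go
package example

import (
	"context"
	"time"
)

// releaseLocks detaches the unlock from the (possibly already canceled)
// request context so the release calls can still reach the other nodes,
// while a 30s timeout keeps them from hanging forever.
func releaseLocks(unlock func(context.Context)) {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	unlock(ctx)
}
```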
The current implementation retries forever, until the
log buffer is full and we start dropping events.
This PR lets you set a value after which we give
up on existing audit/logger batches and proceed to
process the new ones (see the sketch after the bonus notes).
Bonus:
- do not blow up buffers beyond batchSize value
- do not leak the ticker if the worker returns
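A sketch of the give-up behavior; the fixed backoff below is an assumption, not the configured retry policy:

```go
package example

import (
	"context"
	"errors"
	"time"
)

// sendWithGiveUp retries delivery of one batch, but abandons it after
// maxWait so newer batches aren't starved behind a dead endpoint.
func sendWithGiveUp(send func(context.Context) error, maxWait time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), maxWait)
	defer cancel()
	for {
		if err := send(ctx); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return errors.New("gave up on batch; moving on to newer entries")
		case <-time.After(time.Second): // simple fixed backoff for the sketch
		}
	}
}
```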
Items will be saved per target batch and will
be committed to the queue store when the batch is full.
Also, periodically commit the batched items to the queue store
based on the configured commit_timeout; the default is 30s.
Bonus: compress queue store multi-writes
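The worker loop behind both rules, sketched with hypothetical types (the real queue store API differs):

```go
package example

import "time"

type store interface{ commit(batch [][]byte) }

// batchWorker commits when the batch fills up, and also on every tick of
// commitTimeout (default 30s) so a slow trickle of items isn't held forever.
func batchWorker(in <-chan []byte, st store, batchSize int, commitTimeout time.Duration) {
	batch := make([][]byte, 0, batchSize) // never grows beyond batchSize
	t := time.NewTicker(commitTimeout)
	defer t.Stop() // don't leak the ticker when the worker returns
	flush := func() {
		if len(batch) > 0 {
			st.commit(batch)
			batch = batch[:0]
		}
	}
	for {
		select {
		case item, ok := <-in:
			if !ok {
				flush()
				return
			}
			batch = append(batch, item)
			if len(batch) >= batchSize {
				flush()
			}
		case <-t.C:
			flush()
		}
	}
}
```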
The S3 spec does not accept an ILM XML document containing both <Filter>
and <Prefix> tags, even if both are empty. That is why we added
a 'set' field to some lifecycle structures to decide when to show a tag
and when not to. However, we forgot to disallow marshaling of Filter when
'set' is false.
This will fix ILM document replication in a site replication
configuration in some cases.
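The fix boils down to a custom marshaler honoring the 'set' flag; illustrative types, not the actual lifecycle structs:

```go
package example

import "encoding/xml"

type Filter struct {
	set    bool // true only when <Filter> appeared in the parsed document
	Prefix string
}

// MarshalXML emits nothing when the filter was never set, so an empty
// <Filter> can't sneak into a document that already carries <Prefix>.
func (f Filter) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
	if !f.set {
		return nil
	}
	return e.EncodeElement(struct {
		Prefix string `xml:"Prefix,omitempty"`
	}{f.Prefix}, start)
}
```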
Services are unfrozen before `initBackgroundReplication` is finished, which makes
the globalReplicationStats write racy. Switch to an atomic pointer.
Provide the `ReplicationPool` with the stats, so they don't have to be grabbed
from the atomic pointer on every use.
All other loads check for nil, and calls return empty values while the stats
haven't been initialized yet.
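The pattern, sketched with Go's generic atomic.Pointer:

```go
package example

import "sync/atomic"

type ReplicationStats struct{ workers int }

var globalReplicationStats atomic.Pointer[ReplicationStats]

// activeWorkers tolerates being called before initBackgroundReplication has
// published the stats: it returns an empty value instead of racing on a
// plain global.
func activeWorkers() int {
	st := globalReplicationStats.Load()
	if st == nil {
		return 0 // not initialized yet
	}
	return st.workers
}

// initBackgroundReplication publishes the stats exactly once, atomically.
func initBackgroundReplication() {
	globalReplicationStats.Store(&ReplicationStats{})
}
```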
move away from map[string]interface{} to map[string]string
to simplify the audit and provide concise information.
This avoids large allocations under load(), and reduces the amount
of audit information generated; the current implementation
was a bit free-form. Instead, all data structures must be
flattened.
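Flattening in the spirit of the change; a hypothetical helper, not the actual audit code:

```go
package example

import (
	"fmt"
	"strconv"
)

// flatten turns a nested audit payload into one-level string tags,
// e.g. {"retention": {"days": 30}} -> {"retention.days": "30"}.
func flatten(prefix string, v interface{}, out map[string]string) {
	switch t := v.(type) {
	case map[string]interface{}:
		for k, val := range t {
			key := k
			if prefix != "" {
				key = prefix + "." + k
			}
			flatten(key, val, out)
		}
	case string:
		out[prefix] = t
	case float64:
		out[prefix] = strconv.FormatFloat(t, 'f', -1, 64)
	default:
		out[prefix] = fmt.Sprint(t)
	}
}
```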