Add TTFB metrics for all requests in metrics-v3, in addition to the
existing GetObject metric. For requests that do not return a body in
the response, calculate TTFB as the moment the HTTP status code and
headers are sent.
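A minimal sketch of one way to capture this, where `recordTTFB` and the wrapper are illustrative stand-ins rather than the actual metrics-v3 code: wrap the `http.ResponseWriter` and take the sample when `WriteHeader` fires, which is exactly when the status code and headers go out.

```go
package sketch

import (
	"net/http"
	"time"
)

// recordTTFB is a hypothetical metrics hook standing in for the actual
// metrics-v3 collection.
func recordTTFB(d time.Duration) {}

// ttfbWriter records time-to-first-byte when the status code and headers
// are written, so responses without a body still get a TTFB sample.
type ttfbWriter struct {
	http.ResponseWriter
	start    time.Time
	recorded bool
}

func (w *ttfbWriter) WriteHeader(code int) {
	if !w.recorded {
		w.recorded = true
		recordTTFB(time.Since(w.start)) // status + headers are being sent now
	}
	w.ResponseWriter.WriteHeader(code)
}

func (w *ttfbWriter) Write(p []byte) (int, error) {
	if !w.recorded {
		w.recorded = true // implicit 200 OK: Write sends headers first
		recordTTFB(time.Since(w.start))
	}
	return w.ResponseWriter.Write(p)
}
```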
A batch job will fail if the retry attempts are not provided. The reason
is that the code mistakenly reads the retry attempts from the job status
rather than from the job yaml file.
This will also set a default empty prefix for batch expiration.
This also avoids trimming the prefix: the yaml decoder already trims
unquoted values, and we must not trim quoted values where the user
deliberately included a leading or trailing space.
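A small demonstration of the decoder behavior this relies on, using gopkg.in/yaml.v3 (the field name here is illustrative): plain scalars are trimmed by the decoder itself, while quoted scalars keep their intentional whitespace, so a second trim in the server would corrupt them.

```go
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

type source struct {
	Prefix string `yaml:"prefix"`
}

func main() {
	var plain, quoted source

	// Plain scalar: the YAML decoder already strips surrounding whitespace.
	_ = yaml.Unmarshal([]byte("prefix:   logs/  "), &plain)
	fmt.Printf("%q\n", plain.Prefix) // "logs/"

	// Quoted scalar: the whitespace is deliberate and must be preserved.
	_ = yaml.Unmarshal([]byte(`prefix: " logs/ "`), &quoted)
	fmt.Printf("%q\n", quoted.Prefix) // " logs/ "
}
```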
Tests whether imported service accounts have the required access to
buckets and objects.
Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
Co-authored-by: Harshavardhana <harsha@minio.io>
- PutObject() for multi-pooled setups was holding large
region locks, which was not necessary. This affected
almost all slowpoke clients and lengthy uploads.
- Re-arrange locks for CompleteMultipart and PutObject
to sit close to rename()
Currently, it is not possible to remove a tier if it is unreachable or
contains some data. Add a force flag to make the removal succeed in
those cases.
When encryption and compression are both enabled, the server avoids
compressing the data for no apparent reason.
This commit enables compression in that case and updates the unit tests.
postUpload() incorrectly saves the actual size as '-1';
we should save the correct size whenever possible.
Bonus: fix the PutObjectPart() write locker: instead
of holding the lock before we read the client stream,
hold it only when we need to commit the parts.
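A sketch of the reordering, with a plain mutex and local files standing in for the distributed locker and the erasure backend: the client stream is drained with no lock held, and the lock covers only the short commit step.

```go
package sketch

import (
	"io"
	"os"
	"sync"
)

var partLock sync.Mutex // stand-in for the distributed write locker

// putObjectPartSketch drains the client stream with no lock held, then
// takes the lock only around the commit that makes the part visible.
func putObjectPartSketch(r io.Reader, dst string) error {
	tmp, err := os.CreateTemp("", "part-*")
	if err != nil {
		return err
	}
	defer os.Remove(tmp.Name()) // no-op after a successful rename

	// A slow client spends its time here, outside the critical section.
	if _, err := io.Copy(tmp, r); err != nil {
		tmp.Close()
		return err
	}
	if err := tmp.Close(); err != nil {
		return err
	}

	partLock.Lock() // held only for the short commit step
	defer partLock.Unlock()
	return os.Rename(tmp.Name(), dst)
}
```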
This cache will be honored only when `prefix=""` while
performing the ListMultipartUploads() operation.
This is mainly to satisfy applications like alluxio
for their underfs implementation and tests.
replaces https://github.com/minio/minio/pull/20181
AFAICT we send a canceled context to unlock (and thereby releaseAll), which causes the network calls to fail.
Instead, use a background context and add a 30s timeout.
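A sketch of the fix under a hypothetical unlocker interface: give the unlock RPCs their own context with a bounded budget so a canceled request context cannot fail them.

```go
package sketch

import (
	"context"
	"time"
)

// unlocker is a stand-in for the distributed lock client.
type unlocker interface {
	Unlock(ctx context.Context)
}

// releaseLocks deliberately ignores the request context, which may already
// be canceled, and gives the unlock network calls their own 30s budget.
func releaseLocks(lk unlocker) {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	lk.Unlock(ctx)
}
```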
Rebalance metadata is only good to have;
if it cannot be loaded when starting MinIO
for some reason, we can ignore it,
move on, and let the user start a rebalance
again if needed.
readParts requires that both part.N and part.N.meta files be present.
This change addresses an issue with how the error to return to the upper
layers was picked across drives when an UploadPart operation
had failed.
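A sketch of the general pattern, not the actual reduction code: return the error reported by the largest number of drives rather than whichever drive happened to answer first.

```go
package sketch

// reduceErrs sketches quorum-style error reduction: pick the error that
// the most drives agree on, so one outlier drive cannot decide the
// response returned to the upper layers.
func reduceErrs(errs []error) error {
	counts := make(map[string]int)
	maxCount := 0
	var picked error
	for _, err := range errs {
		if err == nil {
			continue
		}
		counts[err.Error()]++
		if counts[err.Error()] > maxCount {
			maxCount = counts[err.Error()]
			picked = err
		}
	}
	return picked
}
```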
By default, even if MINIO_BROWSER=off is set, the code tries to find a
free port for the console.
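A minimal sketch of the intended guard (MinIO's actual env handling goes through its own helpers): skip port discovery entirely when the console is disabled.

```go
package sketch

import "os"

// consolePortNeeded sketches the check: when the console is disabled via
// MINIO_BROWSER=off, free-port discovery should not run at all.
func consolePortNeeded() bool {
	return os.Getenv("MINIO_BROWSER") != "off"
}
```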
Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
When the prefix field is not provided in the remote source of a yaml
replication job, the code fails to list anything yet still reports the
replication as successful. This commit fixes that.
This change adds a consistent nonce to ensure
that multipart uploads are deterministic on a
per-part basis.
Thanks to @klauspost for the work here: minio/sio@3cd3734
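A hypothetical sketch of the idea (the real derivation lives in minio/sio): mix the part number into a base nonce, so the nonce for a given part is deterministic and re-encrypting the same part with the same key yields the same ciphertext.

```go
package sketch

import "encoding/binary"

// partNonce sketches a deterministic per-part nonce: XOR the part number
// into the first bytes of a base nonce, giving each part a distinct but
// reproducible nonce.
func partNonce(base [24]byte, partNumber uint32) [24]byte {
	nonce := base
	binary.LittleEndian.PutUint32(nonce[:4],
		binary.LittleEndian.Uint32(nonce[:4])^partNumber)
	return nonce
}
```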
Locks handed out by different pools would not compete with each other
for a multi-object delete request, which is wrong for obvious
reasons.
The new locking implementation and revamp will rewrite the multi-object
lock anyway; this is a workaround for now.
Currently, bucket events and replication targets are only reloaded
for buckets that failed to load during the first cluster startup.
This is wrong: if a bucket change is made on one node but that node
is unable to notify the other nodes, the other nodes will reload the
bucket metadata config but fail to set the events and bucket targets
in memory.
When a hung drive is hot-unplugged, the server might go
into a loop where the previous `format.json` is somehow
still accessible to the process; we try to re-init() the drives,
but that seems to leave a previous goroutine hanging around,
since it is not canceled when the drive is closed.
Bonus: add a deadline for the immediate purge routine, to unblock
it if the drive is blocking mutations.
If a user policy is found, avoid reading from the drives
for missing group mappings; group mappings are optional
and conditional.
This PR restores the older behavior while making sure that
if a direct user policy is not found, we still attempt
to load the group mappings from the drives.
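A sketch of the restored lookup order, with hypothetical helpers: a direct user policy short-circuits the drive reads for group mappings, and only on a miss do we fall back to the groups.

```go
package sketch

import "errors"

var errNoSuchPolicy = errors.New("no such policy")

// lookupPolicy sketches the order: a direct user policy skips the optional
// group-mapping reads; only on a miss do we fall back to the drives.
func lookupPolicy(user string) ([]string, error) {
	if policies, err := loadUserPolicy(user); err == nil {
		return policies, nil // direct user policy found, skip group reads
	}
	return loadGroupPolicies(user) // conditional fallback to the drives
}

// Stubs for illustration only.
func loadUserPolicy(user string) ([]string, error)    { return nil, errNoSuchPolicy }
func loadGroupPolicies(user string) ([]string, error) { return nil, nil }
```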
This commit simplifies and optimizes the decryption of large (multipart)
objects. This PR does two things:
- Re-write the init logic for the decryption reader
- Reduce the number of OEK decryptions
Before, the init logic copied some SSE HTTP request headers to
parse them later. This is simplified to parsing them right away. This
removes some fields from the decryption reader struct.
Further, the decryption reader decrypted the OEK using the client-provided
key (SSE-C) or the KMS (SSE-S3 / SSE-KMS) for each part. This is redundant
since the OEK is the same for all parts. In particular, a KMS call might be a
network request. Now, the OEK is decrypted once for the entire multipart object.
This should improve latency when reading encrypted multipart objects
and reduce requests to the KMS.
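A sketch of the new shape of the loop, with hypothetical types and helpers standing in for the server's crypto code: the OEK is unsealed once for the object, and each part only needs a locally derived key.

```go
package sketch

// Hypothetical types and helpers, standing in for the server's crypto code.
type part struct{ ciphertext []byte }

func unsealOEK(sealed []byte) ([32]byte, error)  { return [32]byte{}, nil } // SSE-C key or one KMS call
func deriveKey(oek [32]byte, id uint32) [32]byte { return oek }             // pure local KDF
func decryptPart(p *part, key [32]byte) error    { return nil }             // per-part decryption

// decryptParts unseals the OEK once for the whole object; every part then
// uses a locally derived key, so no extra KMS round trips occur.
func decryptParts(parts []part, sealedKey []byte) error {
	oek, err := unsealOEK(sealedKey)
	if err != nil {
		return err
	}
	for i := range parts {
		if err := decryptPart(&parts[i], deriveKey(oek, uint32(i+1))); err != nil {
			return err
		}
	}
	return nil
}
```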
Signed-off-by: Andreas Auernhammer <github@aead.dev>
Use Walk(), which is a recursive listing with versioning, to check
whether the bucket has any objects before it is removed. This is
beneficial because the bucket can contain multiple dangling objects
across multiple drives.
Also, this prevents a bug where a bucket is deleted in a deployment
with many erasure sets even though the bucket contains one or a few
objects not spread across enough erasure sets.
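A sketch of the check with hypothetical types (the real Walk() signature differs): the walk is canceled as soon as the first entry appears, since a single dangling object on any drive or in any erasure set is enough to block the removal.

```go
package sketch

import "context"

type objectInfo struct{ Name string }

// walker is a stand-in for the object layer's recursive, versioned Walk().
type walker interface {
	Walk(ctx context.Context, bucket, prefix string, results chan<- objectInfo) error
}

// bucketIsEmpty stops at the very first walked entry: one dangling object
// on any drive, or an object in only one erasure set, blocks the removal.
func bucketIsEmpty(ctx context.Context, objAPI walker, bucket string) (bool, error) {
	ctx, cancel := context.WithCancel(ctx)
	defer cancel() // stops the walk early once we have our answer

	results := make(chan objectInfo)
	if err := objAPI.Walk(ctx, bucket, "", results); err != nil {
		return false, err
	}
	for range results {
		return false, nil // first entry: bucket is not empty
	}
	return true, nil
}
```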
Currently, retrying the healing of a new drive does not reset
HealedBuckets, which means the next healing retry will skip those
buckets. This commit fixes that behavior.
Also, the skipped objects counter will include objects
uploaded after the healing started.