Previously, the erasure backend's `listDirFactory` could return errors
which were explicitly ignored. With this change, it returns nil instead.
Superfluous checks at higher layers for ignored errors are removed as well.
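A minimal sketch of the new contract, assuming a closure shape like the one below (the pared-down `StorageAPI` and all names are illustrative, not the actual signatures):
```go
// StorageAPI here is an illustrative stand-in for the storage interface.
type StorageAPI interface {
	ListDir(volume, dirPath string) ([]string, error)
}

type listDirFunc func(bucket, prefix string) []string

// listDirFactory returns a closure that tries each disk in turn and
// swallows per-disk errors, returning nil when no disk succeeds.
func listDirFactory(disks ...StorageAPI) listDirFunc {
	return func(bucket, prefix string) []string {
		for _, disk := range disks {
			entries, err := disk.ListDir(bucket, prefix)
			if err != nil {
				continue // errors are no longer surfaced to callers
			}
			return entries
		}
		return nil // callers no longer need a superfluous error check
	}
}
```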
It was possible to upload a big file that exceeded the minimum free
disk space limit in XL. PrepareFile was actually checking for disk
space, but we weren't checking its returned error. This patch fixes
this behavior.
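A hedged sketch of the fix, assuming a `StorageAPI` with a `PrepareFile(volume, path string, size int64) error` method (names are illustrative):
```go
// prepareFileOnDisks pre-allocates the object on every online disk and
// now propagates PrepareFile's error instead of dropping it.
func prepareFileOnDisks(onlineDisks []StorageAPI, volume, path string, size int64) error {
	for _, disk := range onlineDisks {
		if disk == nil {
			continue // offline disk, skip
		}
		if err := disk.PrepareFile(volume, path, size); err != nil {
			return err // e.g. not enough free disk space; abort the upload
		}
	}
	return nil
}
```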
startOffset was re-assigned to '0', so it would end up
copying the wrong content, ignoring the requested startOffset
(see the sketch below).
This also fixes the corruption issue we observed while
using docker registry.
Fixes https://github.com/docker/distribution/issues/2205
Also fixes #3842 - incorrect routing.
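An illustrative before/after of the offset handling (names assumed):
```go
// Before: startOffset was clobbered before the read, so every copy
// began at byte 0 regardless of what the caller requested.
//   startOffset = 0
//   err := objAPI.GetObject(bucket, object, startOffset, length, writer)

// After: the requested startOffset is passed through untouched.
err := objAPI.GetObject(bucket, object, startOffset, length, writer)
if err != nil {
	return err
}
```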
Ignore a disk that wasn't able to successfully perform an action, to
avoid inconsistencies when the disk comes back online in the middle of
a write operation.
This PR is a readability cleanup:
- getOrderedDisks as shuffleDisks
- getOrderedPartsMetadata as shufflePartsMetadata
Distribution is now a second argument instead of being the
primary input argument, for brevity.
Also change the usage of the type-casted int64(0); instead,
rely on direct type declaration as `var variable int64` everywhere.
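A sketch of the renamed helper with distribution as the second argument (shape assumed from the description above):
```go
// shuffleDisks reorders disks per the erasure distribution; a nil
// distribution means the order is already correct.
func shuffleDisks(disks []StorageAPI, distribution []int) []StorageAPI {
	if distribution == nil {
		return disks
	}
	shuffled := make([]StorageAPI, len(disks))
	for index, blockIndex := range distribution {
		// distribution entries are 1-based block positions.
		shuffled[blockIndex-1] = disks[index]
	}
	return shuffled
}
```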
Existing objects are renamed to a temp location in
completeMultipart before being overwritten. We make sure
that we delete the temp copy even if subsequent calls fail.
Additionally, move the check that the parent dir is a
file earlier, to fail the entire operation sooner.
Ref #3784
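A hedged sketch of the ordering (helper names such as renameObject/deleteObject are assumed):
```go
// Fail early: a parent "directory" that is actually a file makes the
// whole operation invalid, so verify it before any renames.
if isParentDirAFile(bucket, object) {
	return errParentIsFile
}

// Move the existing object out of the way, then guarantee the temp
// copy is removed even if a subsequent step fails.
uniqueID := mustGetUUID()
if err := renameObject(bucket, object, minioMetaTmpBucket, uniqueID); err != nil {
	return err
}
defer deleteObject(minioMetaTmpBucket, uniqueID)
```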
Make sure to skip reserved bucket names in `ListBuckets()`;
the current code didn't skip them properly. Also generalize
this behavior for both XL and FS.
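An illustrative version of the filter now shared by both backends (the predicate name is an assumption):
```go
bucketInfos := make([]BucketInfo, 0, len(entries))
for _, entry := range entries {
	if isReservedBucketName(entry.Name) {
		continue // reserved buckets such as ".minio.sys" are never listed
	}
	bucketInfos = append(bucketInfos, entry)
}
return bucketInfos, nil
```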
Current implementation didn't honor quorum properly and didn't
handle the errors generated properly. This patch addresses that
and also moves common code `cleanupMultipartUploads` into xl
specific private function.
Fixes #3665
This is implemented so that issues like the one in the
following flow don't affect the behavior of the operation.
```
GetObjectInfo()
.... --> Time window for mutation (no lock held)
.... --> Time window for mutation (no lock held)
GetObject()
```
This happens when two simultaneous uploads are made
to the same object: the object ends up returning wrong
info to the client.
Another classic example is "CopyObject" API itself
which reads from a source object and copies to
destination object.
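A hedged sketch of the intent (the lock API shape is assumed): hold a single read lock across both calls so no mutation can slip into the window shown above.
```go
objectLock := nsMutex.NewNSLock(bucket, object)
objectLock.RLock()
defer objectLock.RUnlock()

objInfo, err := objAPI.GetObjectInfo(bucket, object)
if err != nil {
	return err
}
// No writer can have mutated the object between the two calls.
return objAPI.GetObject(bucket, object, startOffset, objInfo.Size, writer)
```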
Fixes #3370
Fixes #2912
This change brings in changes at multiple places
- Reuse buffers at almost all locations ranging
from rpc, fs, xl, checksum etc.
- Change caching behavior to disable itself
under low memory conditions, i.e. < 8 GB of RAM.
- Only objects of size up to 1/10th of the cache
are cached; for example, if 4 GB is the cache size,
the maximum object size which will be cached
is 400 MB. This change is an
optimization to cache more objects rather
than a few larger objects.
- If the object cache is enabled, the default GC
percent is reduced to 20%, in line with the newly
observed behavior of the GC. If the cache
utilization reaches 75% of the maximum value,
the GC percent is reduced to 10% to make GC
more aggressive.
- Do not use *bytes.Buffer* due to its growth
requirements. For every allocation, *bytes.Buffer*
allocates an additional buffer for its internal
purposes. This is undesirable for us, so we
implemented a new cappedWriter which is capped to a
desired size; beyond this, all writes are rejected
(see the sketch after this list).
Possible fix for #3403.
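A minimal sketch of the cappedWriter idea, under the assumptions above (not the exact implementation):
```go
import "errors"

// errWriteCapExceeded is returned instead of growing the buffer the
// way bytes.Buffer would.
var errWriteCapExceeded = errors.New("write exceeds writer capacity")

// cappedWriter writes into a buffer allocated once up front; anything
// past the cap is rejected.
type cappedWriter struct {
	buf []byte // backing storage, allocated once
	n   int    // bytes written so far
}

func newCappedWriter(capacity int64) *cappedWriter {
	return &cappedWriter{buf: make([]byte, capacity)}
}

func (c *cappedWriter) Write(p []byte) (int, error) {
	if c.n+len(p) > len(c.buf) {
		return 0, errWriteCapExceeded // reject; never reallocate
	}
	copy(c.buf[c.n:], p)
	c.n += len(p)
	return len(p), nil
}
```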
XL multipart fails to remove temporary files when an error occurs during upload. This case covers the scenario where an upload is canceled manually by the client in the middle of the job.
- abstract out instrumentation information.
- use a separate lockInstance type that encapsulates the nsMutex, volume,
path and opsID as the frontend or top-level lock object.
- Reads and writes of uploads.json in XL now use quorum for
newMultipart, completeMultipart and abortMultipart operations.
- Each disk's `uploads.json` file is read and updated independently for
adding or removing an upload id from the file. Quorum is used to
decide if the high-level operation actually succeeded (see the sketch
after this list).
- Refactor FS code to simplify the flow, and fix a bug while reading
uploads.json.
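A hedged sketch of the quorum decision (names are assumed): each disk's update reports its own error, and the error slice decides the outcome.
```go
import "errors"

// errXLWriteQuorum is an illustrative sentinel for quorum failure.
var errXLWriteQuorum = errors.New("write quorum not met")

// reduceWriteQuorumErrs decides whether enough per-disk uploads.json
// updates succeeded for the high-level operation to count as a success.
func reduceWriteQuorumErrs(errs []error, writeQuorum int) error {
	okCount := 0
	for _, err := range errs {
		if err == nil {
			okCount++
		}
	}
	if okCount >= writeQuorum {
		return nil
	}
	return errXLWriteQuorum
}
```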
opsID, a variable on the stack, changes over the course of the
CompleteMultipartUpload function in xl-v1-multipart.go. It was
being used in a function closure which was passed to a defer
statement. Variables used in a closure are evaluated only when the
deferred function actually runs, which makes the behaviour
indeterminate. It is incorrect to depend on the values of stack
variables at the end of the function, when deferred functions are
executed.
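The pitfall and its fix, in miniature (surrounding names assumed):
```go
opsID := getOpsID()

// Buggy: the closure captures the opsID *variable*; by the time the
// deferred call runs at function exit, opsID may have been reassigned.
defer func() {
	nsMutex.Unlock(bucket, object, opsID)
}()

// Fixed: pass opsID as an argument, so its *value* is evaluated at the
// defer statement itself.
defer func(opsID string) {
	nsMutex.Unlock(bucket, object, opsID)
}(opsID)
```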
In a multipart upload scenario, disks going down and coming back up
can lead to certain parts missing on the disk/server which was
going down. This is a valid case, since these blocks can be
missing and should be healed through the heal operation. But we are
not supposed to fail prematurely, since we have enough data on
the other disks within read-quorum.
This fix relaxes the previous assumption and fixes a major corruption
issue reproduced by @vadmeste.
Fixes #2976
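A hedged sketch of the relaxed check (names assumed): a part missing from some disks is tolerated as long as read-quorum disks still have it.
```go
availableCount := 0
for index, disk := range onlineDisks {
	if disk == nil || errs[index] != nil {
		continue // part missing here; the heal operation fixes it later
	}
	availableCount++
}
if availableCount < readQuorum {
	return errXLReadQuorum // truly unrecoverable, fail the read
}
// Otherwise proceed: erasure coding reconstructs from readQuorum disks.
```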
These messages are based on our prep stage during XL startup
and print more informative messages regarding
drive information.
This change also does some much-needed refactoring.
- Using gjson for constructing xlMetaV1{} in realXLMeta.
- Test for parsing constructing xlMetaV1{} using gjson.
- Changes made since benchmarks showed 30-40% improvement in speed.
- Follow up comments in issue https://github.com/minio/minio/issues/2208
for more details.
- gjson parsing of parts from xl.json for listParts (see the sketch
after this list).
- gjson parsing of statInfo from xl.json for getObjectInfo.
- Vendorizing gjson dependency.
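A hedged sketch of the gjson usage (field paths assumed to mirror xl.json's layout):
```go
import "github.com/tidwall/gjson"

// parsePartSizes pulls just the part sizes out of an xl.json buffer
// without unmarshalling the whole document.
func parsePartSizes(xlMetaBuf []byte) []int64 {
	var sizes []int64
	gjson.GetBytes(xlMetaBuf, "parts").ForEach(func(_, part gjson.Result) bool {
		sizes = append(sizes, part.Get("size").Int())
		return true // continue iterating over the parts array
	})
	return sizes
}
```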
- Instrumentation for locks.
- Detailed test coverage.
- Adding RPC control handler to fetch lock instrumentation.
- RPC control handlers suite tests with a test RPC server.
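An illustrative shape for the per-lock record returned by the control handler (field names are assumptions, not the actual struct):
```go
import "time"

// OpsLockState describes one outstanding lock for instrumentation.
type OpsLockState struct {
	OperationID string    // unique ID for the operation holding the lock
	LockOrigin  string    // caller file:line that requested the lock
	LockType    string    // "read" or "write"
	Status      string    // "blocked" or "running"
	Since       time.Time // when the lock entered this status
}
```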