Gateway interface implementations of GetBucketInfo() under
the azure and s3 gateways did not perform any bucket name
input validation, resulting in incorrect responses where the
tests expect InvalidBucketName.
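A minimal sketch of the missing check, assuming minio's
BucketNameInvalid error type (the exact type name is an
assumption here); both gateway GetBucketInfo() implementations
should reject bad names up front instead of passing them to the
backend:
```
import "regexp"

// Bucket names: 3-63 chars, lowercase letters, digits, dots and
// hyphens, starting and ending with a letter or digit.
var validBucketName = regexp.MustCompile(`^[a-z0-9][a-z0-9.\-]{1,61}[a-z0-9]$`)

// checkBucketName rejects bad names before any backend call;
// BucketNameInvalid maps to the InvalidBucketName API error.
func checkBucketName(bucket string) error {
	if !validBucketName.MatchString(bucket) {
		return BucketNameInvalid{Bucket: bucket}
	}
	return nil
}
```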
Fixes#4983
When running `make test` in docker, two test cases cause hanging.
This Patch fixes the problem by removing those test cases.
Thanks to @ws141 for identifying the problem.
The reedsolomon library now avoids allocations during reconstruction.
This change exploits that to reduce memory allocations and GC pressure
during healing and reading.
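A minimal sketch of the pattern being exploited, using
github.com/klauspost/reedsolomon: a missing shard passed as a
zero-length slice with spare capacity lets Reconstruct reuse that
buffer instead of allocating a new one (shard counts and the helper
name are illustrative, not from the minio codebase):
```
import "github.com/klauspost/reedsolomon"

// reconstructShard rebuilds one missing shard in place.
func reconstructShard(shards [][]byte, missing int) error {
	enc, err := reedsolomon.New(8, 4) // 8 data + 4 parity shards
	if err != nil {
		return err
	}
	// Zero length but full capacity: Reconstruct fills the existing
	// backing array instead of allocating a fresh buffer.
	shards[missing] = shards[missing][:0]
	return enc.Reconstruct(shards)
}
```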
Previously we were wrongly adding `?` as part
of the resource name; add a test case to check
that this is handled properly.
Thanks to @kannappanr for reproducing this.
Without this change, a presigned URL generated with the following
command would fail with a signature mismatch.
```
aws s3 presign s3://testbucket/functional-tests.sh
```
On *NIX platforms the statfs(2) system call returns a struct containing both the
free blocks in the filesystem (Statfs_t.Bfree) and the free blocks available to
unprivileged, non-superuser processes (Statfs_t.Bavail).
The `Bfree` and `Bavail` fields (with `Bfree >= Bavail`) will be set to
different values on filesystems such as ext4 that reserve a certain
percentage of the filesystem blocks which may only be allocated by processes
with administrative privileges.
The calculation for the `Total` disk space needs to subtract the difference
between the `Bfree` and `Bavail` fields for it to correctly show the total
storage space available to unprivileged users.
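A minimal sketch of the corrected calculation on Linux (field names
are from syscall.Statfs_t; the helper name is an assumption):
```
import "syscall"

func diskUsage(path string) (total, free, used uint64, err error) {
	var s syscall.Statfs_t
	if err = syscall.Statfs(path, &s); err != nil {
		return
	}
	reserved := s.Bfree - s.Bavail // blocks only root may allocate
	total = uint64(s.Bsize) * (s.Blocks - reserved)
	free = uint64(s.Bsize) * s.Bavail
	used = total - free // now consistent for unprivileged users
	return
}
```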
This implicitly fixes a bug where the `Used = Total - Free` calculation yielded
different (and also incorrect) results for identical contents stored when only
the sizes of the disks or backing volumes differed (as can be witnessed in the
`Used:` value displayed in the Minio browser).
See:
- https://wiki.archlinux.org/index.php/ext4#Reserved_blocks
- http://man7.org/linux/man-pages/man2/statfs.2.html
- https://man.openbsd.org/statfs
- http://lingrok.org/xref/coreutils/src/df.c#893
It can happen that an incoming PutObject() request
has inputs of the following form, e.g.:
- bucketName is 'testbucket'
- objectName is '/'
bucketName exists and was previously created, but there
are no other objects in this bucket. In a situation like
this, parentDirIsObject() goes into an infinite loop.
Verifying whether '/' is an object fails on both backends,
but `path.Dir('/')` returns `'/'`, which causes
the closure to loop onto itself.
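A minimal sketch of the failure and the fix; isObject here is a
hypothetical stand-in for the backend existence checks:
```
import "path"

func parentDirIsObject(bucket, parent string) bool {
	// path.Dir("/") == "/", so we must stop at the root explicitly
	// or the walk never terminates.
	for parent != "." && parent != "/" {
		if isObject(bucket, parent) {
			return true
		}
		parent = path.Dir(parent)
	}
	return false
}
```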
Fixes#4940
This change removes the ReadFileWithVerify function from the
StorageAPI. ReadFile was basically a redirection to ReadFileWithVerify.
This change removes the redirection and moves the logic of
ReadFileWithVerify directly into ReadFile.
This removes a lot of unnecessary code in all StorageAPI implementations.
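A sketch of the merged method; the exact verifier type is an
assumption, but the shape of the change is that the optional
verification argument now lives on ReadFile itself:
```
// ReadFile now carries the optional verifier, so a separate
// ReadFileWithVerify method is no longer part of the interface.
type StorageAPI interface {
	ReadFile(volume, path string, offset int64, buf []byte, verifier *BitrotVerifier) (int64, error)
	// ... remaining methods unchanged ...
}
```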
Fixes#4946
With https://github.com/minio/minio/pull/4869, the maximum size of a single
multipart upload part is no longer restricted to 100MB. The 1TB maximum
object size limitation no longer applies either.
Previously, init multipart upload stored metadata of an object which is
used at complete multipart. This patch makes the azure gateway store the
metadata of the init multipart object in azure under the name
'minio.sys.tmp/multipart/v1/<UPLOAD-ID>/meta.json' and use this
information at complete multipart.
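An illustrative sketch of persisting that metadata under the
documented temporary object name (the struct layout and the
putAzureBlob helper are assumptions, not the gateway's actual code):
```
import (
	"bytes"
	"encoding/json"
	"path"
)

func saveMultipartMeta(uploadID string, meta map[string]string) error {
	buf, err := json.Marshal(meta)
	if err != nil {
		return err
	}
	name := path.Join("minio.sys.tmp", "multipart", "v1", uploadID, "meta.json")
	// putAzureBlob is hypothetical: it writes the JSON to the container
	// so complete multipart can read the metadata back.
	return putAzureBlob(name, bytes.NewReader(buf))
}
```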
This change refactors the ObjectLayer PutObject and PutObjectPart
functions. Instead of passing an io.Reader and a size to PUT operations,
ObjectLayer now expects a HashReader.
A HashReader verifies the MD5 sum (and SHA256 sum if required) of the object.
This change updates all PutObject(Part) calls and removes unnecessary code
in all ObjectLayer implementations.
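A minimal sketch of the idea, not the exact minio type: a reader that
hashes the stream as it passes through and fails the final Read if the
computed MD5 doesn't match the expected one (SHA256 handling would
follow the same pattern):
```
import (
	"crypto/md5"
	"encoding/hex"
	"errors"
	"hash"
	"io"
)

type hashReader struct {
	src    io.Reader
	md5Hex string // expected MD5, hex encoded
	sum    hash.Hash
}

func newHashReader(src io.Reader, md5Hex string) *hashReader {
	return &hashReader{src: src, md5Hex: md5Hex, sum: md5.New()}
}

func (r *hashReader) Read(p []byte) (int, error) {
	n, err := r.src.Read(p)
	if n > 0 {
		r.sum.Write(p[:n]) // hash.Hash.Write never fails
	}
	if err == io.EOF && r.md5Hex != "" {
		if hex.EncodeToString(r.sum.Sum(nil)) != r.md5Hex {
			return n, errors.New("content md5 mismatch")
		}
	}
	return n, err
}
```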
Fixes#4923
This is an improvement upon the existing implementation,
avoiding transfer of access and secret keys over the
network. This change only exchanges JWT tokens generated
by an rpc client. Even if the JWT can be traced over the
network on a non-TLS connection, this change makes sure
that we never really expose the secret key over the
network.
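An illustrative sketch using github.com/dgrijalva/jwt-go (the JWT
library minio vendored at the time; the claim layout and expiry are
assumptions): the secret key only signs the token locally, and only
the signed token crosses the wire:
```
import (
	"time"

	jwt "github.com/dgrijalva/jwt-go"
)

func newAuthToken(accessKey, secretKey string) (string, error) {
	token := jwt.NewWithClaims(jwt.SigningMethodHS512, jwt.StandardClaims{
		ExpiresAt: time.Now().UTC().Add(15 * time.Minute).Unix(),
		Subject:   accessKey,
	})
	// The secret key never leaves this process; the server verifies
	// the signature with its own copy of the same secret.
	return token.SignedString([]byte(secretKey))
}
```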
Previously, the minio gateway returned an invalid bucket name error for
invalid metadata. This is fixed by returning BadRequest with 'Unsupported
metadata' in the response.
Fixes#4891
When servers are started simultaneously across multiple
nodes, or when simulating a local setup, one of the
servers in the setup can reach a situation where it
observes:
- Some servers are formatted
- Some servers are unformatted
- Some servers are offline
The current state machine doesn't handle this correctly. In this
situation, with a mix of unformatted disks, formatted disks and
offline disks, we do not decisively know the course of action,
so we wait for the offline disks to change their state.
Once the offline disks change their state to one of the following,
we can decisively move forward (see the sketch after the list):
- nil (formatted disk)
- errUnformattedDisk
- Or any other error such as errCorruptedDisk.
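A minimal sketch of that wait-and-retry decision; all helper names
and the retry interval are assumptions, not the actual state machine:
```
import "time"

func waitForFormat(disks []StorageAPI) error {
	for {
		errs := loadFormatStatus(disks) // hypothetical: one entry per disk
		if anyOffline(errs) {
			// Indecisive mix: wait for the offline disks to report
			// a decisive state before choosing a course of action.
			time.Sleep(retryInterval)
			continue
		}
		// Every disk now reports nil, errUnformattedDisk, or a decisive
		// error such as errCorruptedDisk; act on that.
		return decideFormatAction(errs)
	}
}
```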
Fixes#4903
The default timeout of 30 seconds is not enough for high latency
environments; change these values to use 15 minutes instead.
With 30 seconds, I/O timeouts are quite common, which leads
pretty much all SDKs and clients to reconnect. This in turn
causes significant performance problems. On a slow
interconnect it can be quite challenging to transfer large
amounts of data. Setting this value to 15 minutes covers
pretty much all known cases.
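For illustration, the same values expressed as net/http server
timeouts; minio's actual plumbing sets per-connection deadlines, so
treat this as a sketch of the numbers, not the mechanism:
```
import (
	"net/http"
	"time"
)

func newServer() *http.Server {
	return &http.Server{
		Addr: ":9000",
		// 30s proved far too aggressive on slow links; 15 minutes
		// covers pretty much all known cases.
		ReadTimeout:  15 * time.Minute,
		WriteTimeout: 15 * time.Minute,
	}
}
```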
This PR was tested with `wondershaper <NIC> 20000 20000` by
limiting the network bandwidth to 20Mbit/sec. The default timeout
caused a significant amount of I/O timeouts, leading to
constant retries from the client. This seems to be more common
with tools like rclone and restic, which have high concurrency set
by default. Once the value was raised to 15 minutes, I/O timeouts
stopped and the client could steadily upload data to the server
even while saturating the network.
Fixes#4670
Previously, if any multipart part of size > 100MiB was uploaded, the
azure gateway returned an error.
This patch fixes the issue by splitting a given multipart part into
sub-parts of 100MiB each. On complete multipart, it fetches all uploaded
azure block IDs for each part and performs completion.
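A minimal sketch of the splitting loop (the constant and helpers are
assumptions; the real gateway must also encode part and sub-part
numbers into the block IDs it later reassembles):
```
import "io"

const maxAzureBlockSize = 100 * 1024 * 1024 // azure's per-block ceiling

func putSubParts(partID int, r io.Reader, size int64) error {
	for subPart := 0; size > 0; subPart++ {
		n := int64(maxAzureBlockSize)
		if size < n {
			n = size
		}
		blockID := encodeBlockID(partID, subPart) // hypothetical encoder
		if err := putBlock(blockID, io.LimitReader(r, n), n); err != nil {
			return err
		}
		size -= n
	}
	return nil
}
```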
Fixes#4868
- Region handling can now use region endpoints directly.
- All uploads are streaming; no more large buffers needed.
- Major API overhaul for CopyObject(dst, src)
  - Fixes bugs present in existing code for the metadata
    replace directive in CopyObject
- PutObjectPart doesn't require md5Sum and sha256
All `net/rpc` requests go to `/minio`, so the existing
generic handler for the reserved bucket check would
erroneously send errors, leading distributed setups to
wait indefinitely.
For `net/rpc` requests alone we should skip this check and
allow resource bucket names under `/minio`.
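An illustrative sketch of the skip; the handler type, error writer and
helper names are assumptions, not the actual middleware:
```
import (
	"net/http"
	"strings"
)

// minioReservedBucketPath mirrors the /minio prefix used by net/rpc.
const minioReservedBucketPath = "/minio"

func (h reservedBucketHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	// Enforce the reserved bucket check only for regular S3 API
	// requests; internal RPC traffic rooted at /minio passes through.
	if !strings.HasPrefix(r.URL.Path, minioReservedBucketPath) &&
		isReservedBucket(r.URL.Path) {
		writeErrorResponse(w, errReservedBucket) // hypothetical error writer
		return
	}
	h.next.ServeHTTP(w, r)
}
```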