The current code was using io.ReadAll() on an fd whose offset might
have moved underneath it due to a concurrent read operation; a
subsequent read would then return EOF. We should always seek back and
read again. pread() is available on all platforms, so use
io.SectionReader to read from the beginning of the file.
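A minimal sketch of the pattern, using a hypothetical helper (not the
actual MinIO code):
```go
import (
	"io"
	"os"
)

// readAllFromStart reads the whole file from offset 0 regardless of
// the fd's current offset. io.SectionReader issues ReadAt() calls,
// which map to pread() and are therefore safe even when another
// goroutine moves the shared fd's offset concurrently.
func readAllFromStart(f *os.File) ([]byte, error) {
	fi, err := f.Stat()
	if err != nil {
		return nil, err
	}
	sr := io.NewSectionReader(f, 0, fi.Size())
	return io.ReadAll(sr)
}
```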
Fixes #4842
Bcrypt is not necessary and was not used properly. This change
replaces the whole bcrypt hash computation with a constant-time
comparison and removes bcrypt from the code base.
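A minimal sketch of the constant-time comparison, assuming credentials
are compared as plain strings (identifiers are illustrative):
```go
import "crypto/subtle"

// credentialsMatch compares the given secret against the expected one
// in constant time, avoiding a timing side channel without paying for
// a bcrypt hash computation on every request.
func credentialsMatch(given, expected string) bool {
	return subtle.ConstantTimeCompare([]byte(given), []byte(expected)) == 1
}
```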
Fixes #4813
If a TopicConfiguration element or CloudFunction element is found in
the configuration submitted to the PutBucketNotification API, a
BadRequest error is returned.
S3 only allows HTTP headers with a total size of 8 KB and user-defined
metadata with a size of 2 KB. This change adds a new API error and
returns this error to clients that send too-large HTTP requests.
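A rough sketch of the check; the error value and helper name here are
hypothetical, not the actual API error added by this change:
```go
import (
	"errors"
	"net/http"
	"strings"
)

var errHeaderTooLarge = errors.New("http headers exceed S3 size limits") // hypothetical

// validateHeaderSizes enforces the S3 limits: at most 8 KB of headers
// overall and at most 2 KB of user-defined (x-amz-meta-*) metadata.
func validateHeaderSizes(h http.Header) error {
	var total, meta int
	for key, vals := range h {
		for _, v := range vals {
			n := len(key) + len(v)
			total += n
			if strings.HasPrefix(strings.ToLower(key), "x-amz-meta-") {
				meta += n
			}
		}
	}
	if total > 8*1024 || meta > 2*1024 {
		return errHeaderTooLarge
	}
	return nil
}
```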
Fixes #4634
We don't need to typecast identifiers from their base type to the
same type again. This is not a bug, and the compiler is fine with
skipping it, but it is better to avoid the conversion when it is not
needed.
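An illustrative example of the kind of redundant conversion being
removed (the type is made up):
```go
type BackendType int

// describe returns its argument; the explicit conversion is the kind
// of no-op this change removes.
func describe(t BackendType) BackendType {
	return BackendType(t) // redundant: t is already a BackendType
}
```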
This change provides new implementations of the XL backend operations:
- create file
- read file
- heal file
Furthermore, this change adds table-based tests for all three
operations. This also affects the bitrot algorithm integration.
Algorithms are now integrated in an idiomatic way (like crypto.Hash).
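A minimal sketch of the crypto.Hash-style integration; the identifiers
are illustrative, not the exact ones in the codebase:
```go
import (
	"crypto/sha256"
	"hash"

	"golang.org/x/crypto/blake2b"
)

// BitrotAlgorithm identifies a bitrot checksum algorithm, analogous
// to crypto.Hash.
type BitrotAlgorithm uint

const (
	SHA256 BitrotAlgorithm = iota + 1
	BLAKE2b512
)

// New returns a new hash.Hash computing the given bitrot checksum.
func (a BitrotAlgorithm) New() hash.Hash {
	switch a {
	case SHA256:
		return sha256.New()
	case BLAKE2b512:
		h, _ := blake2b.New512(nil) // error is nil for a nil key
		return h
	default:
		panic("unsupported bitrot algorithm")
	}
}
```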
Fixes #4696
Fixes #4649
Fixes #4359
* Hardcode snap version while a store bug is fixed
This is a workaround for a failure in the store, which limits the version to 32 characters.
https://forum.snapcraft.io/t/versions-can-be-at-most-32-characters/1642
* Capitalize the summary
* Add the snapcraft dirs to .gitignore and bring back settings.json to the ignore list
It was decided that we will be deprecating ARM support for Minio
builds. ARM users should simply compile from source. Additionally,
the 32-bit versions of Linux and Windows, as well as FreeBSD (64-bit),
are deprecated.
Wait for remote hosts to resolve instead of failing on the first host
resolution error when running in a Kubernetes or Docker environment.
Note that:
- Waiting is based on an exponential back-off mechanism
- If run as a binary, the server fails if a remote host is not resolvable
This is needed because in orchestration platforms like Kubernetes,
remote hosts are started sequentially and not all the hosts are up
initially, though they are expected to come up within a short time
frame. It is difficult to identify a cap on the waiting time due to
the non-deterministic nature of infrastructure platforms, so the
server waits indefinitely for the hosts to come up, while logging the
error messages to the console.
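A minimal sketch of the waiting loop, assuming standard library DNS
resolution (the helper name is hypothetical):
```go
import (
	"log"
	"net"
	"time"
)

// waitForHost blocks until host resolves, retrying with exponential
// back-off and logging each failure to the console.
func waitForHost(host string) {
	backoff := time.Second
	for {
		_, err := net.LookupHost(host)
		if err == nil {
			return
		}
		log.Printf("Host %s is not resolvable yet: %v", host, err)
		time.Sleep(backoff)
		if backoff < time.Minute {
			backoff *= 2 // exponential back-off, stop doubling near a minute
		}
	}
}
```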
Fixes: https://github.com/minio/minio/issues/4669
Swarm routes traffic only to containers that report healthy status,
while Minio in distributed mode needs to talk to its other peers
before it can respond to a healthcheck probe. Since the Minio
containers are not able to talk to each other, distributed Minio does
not get started on Docker Swarm.
With this PR, the Minio healthcheck reports healthy status for the
initial 120s, which is enough for distributed Minio to start. After
that, the normal healthcheck resumes. Also changed the healthcheck
method name in accordance with the Google shell style guide.
Fixes: https://github.com/minio/minio/issues/4761
Since go1.8, os.RemoveAll and os.MkdirAll both support long path
names, i.e. UNC paths on Windows. The code we were carrying was
directly borrowed from the `pkg/os` package and doesn't need to be in
our repo anymore. As a side effect this also addresses our code
coverage issue.
Refer #4658
Refer #4759.
Fix this to avoid issues like the one below:
```
docker run --rm minio/minio:edge-armhf version
minio: <ERROR> .... is not compiled by Go >= 1.8.3. ... recompile...
```
* Prevent unnecessary verification of parity blocks while reading an
erasure-coded file.
* Update klauspost/reedsolomon and reconstruct only data blocks while
reading (prevent unnecessary parity block reconstruction); see the
sketch after this list.
* Remove verification of (all) reconstructed data and parity blocks,
since in our case we are protected by bitrot protection. And even if
the verification were to fail (essentially impossible), there is no
way to definitively say whether the data is still correct or not, so
this call makes no sense for our use case.
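A minimal sketch of the data-only reconstruction with
klauspost/reedsolomon (the surrounding wiring is illustrative):
```go
import "github.com/klauspost/reedsolomon"

// decodeDataOnly reconstructs only the missing data shards (nil
// entries in shards); missing parity shards are left untouched and no
// post-reconstruction verification is performed, since bitrot
// protection already guards integrity.
func decodeDataOnly(shards [][]byte, dataShards, parityShards int) error {
	enc, err := reedsolomon.New(dataShards, parityShards)
	if err != nil {
		return err
	}
	return enc.ReconstructData(shards)
}
```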
Implement an offline mode for remote storage that caches the offline
status of a node in order to prevent network calls that are bound to
fail. After a time interval, an attempt will be made to restore the
connection and mark the node as online if successful.
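A minimal sketch of the caching idea; the names and the retry interval
are illustrative, not the actual implementation:
```go
import (
	"sync"
	"time"
)

type networkStorage struct {
	mu         sync.Mutex
	offlineAt  time.Time     // zero value means the node is online
	retryAfter time.Duration // e.g. 5 * time.Second
}

// isOffline reports whether calls should be short-circuited. Once the
// retry interval has elapsed, one attempt is allowed through so the
// node can be marked online again if it has recovered.
func (n *networkStorage) isOffline() bool {
	n.mu.Lock()
	defer n.mu.Unlock()
	if n.offlineAt.IsZero() {
		return false
	}
	if time.Since(n.offlineAt) > n.retryAfter {
		n.offlineAt = time.Time{} // allow a reconnection attempt
		return false
	}
	return true
}

// markOffline records the time at which a network call failed.
func (n *networkStorage) markOffline() {
	n.mu.Lock()
	defer n.mu.Unlock()
	n.offlineAt = time.Now()
}
```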
Fixes #4183
At times, due to a typo, a user who intended distributed mode might
end up starting standalone erasure mode, causing confusion. Add code
to check for this based on some standard heuristic guesswork and
report an error to the user.
Fixes #4686
Under the call flow
```
Readdir
+
|
|
| path-entry
|
|
v
StatDir
```
The existing code was written in a manner where, if say a bucket or
top-level directory was deleted between Readdir() and StatDir(), we
would ignore certain errors. This is not a plausible situation and
would almost never happen in practice. We do not have to look for or
interpret these errors returned by StatDir(); instead we can just
collect the successful values and return them to the client. We do
not need to prematurely decide on bucket access; we just let the
filesystem decide subsequently for real I/O operations.
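A rough sketch of the resulting flow (helper names are hypothetical):
```go
import (
	"os"
	"path/filepath"
)

// listDir stats each entry returned by Readdir and keeps only the
// successful values; entries that vanish in between are skipped
// instead of failing the whole listing.
func listDir(dirPath string, entries []string) []os.FileInfo {
	var infos []os.FileInfo
	for _, entry := range entries {
		fi, err := os.Stat(filepath.Join(dirPath, entry))
		if err != nil {
			continue // entry deleted after Readdir; ignore it
		}
		infos = append(infos, fi)
	}
	return infos
}
```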
Refer #4658
This is in preparation for the updated admin heal API.
* Improve case analysis of healFormatXL() - fixes a case where disks
could have unhandled errors.
* Simplify healFormatXLFreshDisks() and healFormatXLCorruptedDisks()
to share more code and handle fewer cases for improved simplicity
and reduced code repetition.
* Fix test cases.
This commit changes posix's deleteFile() to not upstream errors from
removing parent directories. This fixes a race condition.
The race condition occurs when multiple deleteFile()s are called on the
same parent directory, but different child files. Because deleteFile()
recursively removes parent directories if they are empty, but
deleteFile() errors if the selected deletePath does not exist, there was
an opportunity for a race condition. The two processes would remove the
child directories successfully, then depend on the parent directory
still existing. In some cases this is an invalid assumption, because
other processes can remove the parent directory beforehand. This commit
changes deleteFile() to not upstream an error if one occurs, because the
only required error should be from the immediate deletePath, not from a
parent path.
In the specific bug report, multiple CompleteMultipartUpload requests
would launch multiple deleteFile() calls. Because they chain up on
parent directories, ultimately, at the end, there would be multiple
removal attempts on the ultimate parent directory,
.minio.sys/multipart/{bucket}. Because only one can succeed and the
others will fail, an error would be upstreamed saying that the file
does not exist, and the CompleteMultipartUpload code interpreted this
as NoSuchKey, i.e. that the object/part id doesn't exist. This was
faulty behavior and is now fixed.
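A simplified sketch of the fix; the real posix deleteFile() is
recursive, so this iterative version only illustrates the error
handling:
```go
import (
	"os"
	"path/filepath"
)

func deleteFile(basePath, deletePath string) error {
	// Only the immediate deletePath's error is upstreamed.
	if err := os.Remove(deletePath); err != nil {
		return err
	}
	// Prune now-empty parents up to basePath, ignoring their errors:
	// a concurrent deleteFile may already have removed a parent, and
	// a non-empty parent simply stops the pruning.
	dir := filepath.Dir(deletePath)
	for dir != basePath && dir != filepath.Dir(dir) {
		if err := os.Remove(dir); err != nil {
			break
		}
		dir = filepath.Dir(dir)
	}
	return nil
}
```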
The added test fails before this change and passes after this change.
Fixes: https://github.com/minio/minio/issues/4727
Peek can fail legitimately when clients abruptly close the
connection. So, io.EOF and network timeout errors are not logged,
while all other errors are logged.
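A minimal sketch of the filtering (the helper name and log message are
illustrative):
```go
import (
	"io"
	"log"
	"net"
)

// logPeekError logs only unexpected Peek failures: io.EOF and network
// timeouts are normal when a client abruptly closes the connection.
func logPeekError(err error) {
	if err == io.EOF {
		return
	}
	if nerr, ok := err.(net.Error); ok && nerr.Timeout() {
		return
	}
	log.Printf("Error while peeking at connection: %v", err)
}
```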
This commit adds a new test for isDirEmpty (for code coverage) and
reworks the error conditional. Previously, there was a `return nil`
statement that would only be triggered under a race condition and
would trip up our test coverage for no real reason. With the new
error conditional, there is no awkward 'else'-esque condition, which
means test coverage will not change between runs for no reason in
this specific test. It also reads more cleanly.