Upon errors to acquire the lock, the context would still
leak, since cancel would never be called because the lock
is never acquired - proactively cancel it before returning.
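A minimal sketch of the fix, with hypothetical names: cancel the
timeout context on the failed-acquisition path, so it is never
leaked when the lock was never acquired.

```
package sketch

import (
	"context"
	"errors"
	"time"
)

var errLockTimedOut = errors.New("lock acquisition timed out")

// tryAcquire is a stand-in for the real locker call.
func tryAcquire(ctx context.Context) bool {
	select {
	case <-ctx.Done():
		return false
	default:
		return true
	}
}

func getLock(ctx context.Context, timeout time.Duration) (context.Context, context.CancelFunc, error) {
	lkCtx, cancel := context.WithTimeout(ctx, timeout)
	if !tryAcquire(lkCtx) {
		cancel() // proactively clear the context; it would leak otherwise
		return nil, nil, errLockTimedOut
	}
	return lkCtx, cancel, nil
}
```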
- write with O_DSYNC instead of O_DIRECT for smaller
  objects (< 128KiB) to avoid the unaligned double
  Write() situations that may arise for them
- avoid fallocate(), since we no longer use Append()
  semantics; fallocate is not useful for streaming
  I/O, so we can save on a syscall
- createFile() doesn't need to validate the `bucket`
  name with an Lstat() call, since createFile() is only
  used to write at `minioTmpBucket`
- use io.Copy() for unaligned writes to allow usage
  of ReadFrom() on *os.File, providing zero-buffer
  writes (see the sketch after this list)
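A minimal sketch of the unaligned-write path: io.Copy detects that
*os.File implements io.ReaderFrom and delegates to ReadFrom, avoiding
an extra intermediate buffer where the platform supports it.

```
package sketch

import (
	"io"
	"os"
)

// writeUnaligned streams the remaining unaligned tail of an object;
// io.Copy will use w.ReadFrom(r) under the hood.
func writeUnaligned(w *os.File, r io.Reader) (int64, error) {
	return io.Copy(w, r)
}
```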
The current root-disk detection had issues where root
disk partitions getting modified might race and produce
incorrect results; to avoid this, let's rely again on
the DeviceID and match it instead.

In case of containers, `/data` is one such extra entity
that needs to be verified as a root disk, due to how the
'overlay' filesystem works: 'overlay' presents a completely
different device ID. Using `/data` as another fallback
entity helps because our containers declare a 'VOLUME'
parameter that allows containers to automatically have a
virtual `/data` pointing to the container root path, which
can be either at `/` or `/var/lib/` (on a different partition).
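A minimal sketch of the device-ID check on Linux (build tags and
container specifics elided): a path is treated as being on the root
disk when its stat device ID matches that of `/` (or of `/data`, as
the fallback described above).

```
package sketch

import "syscall"

// sameDisk reports whether both paths live on the same device.
func sameDisk(path1, path2 string) (bool, error) {
	var st1, st2 syscall.Stat_t
	if err := syscall.Stat(path1, &st1); err != nil {
		return false, err
	}
	if err := syscall.Stat(path2, &st2); err != nil {
		return false, err
	}
	return st1.Dev == st2.Dev, nil
}
```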
The code inside the `hotfix` target was overriding the values
set at the beginning of the Makefile, affecting other make
targets as well. For example, running `TAG=mytag make docker`
also ended up tagging the Docker image as a hotfix instead of
`mytag`. Using the `eval` function inside the `hotfix` target
fixes this, as sketched below.
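A minimal sketch of the fix, with assumed variable names: `$(eval ...)`
inside a recipe runs only when that target is invoked, so the global
`VERSION` and `TAG` values stay untouched for every other target.

```
hotfix:
	$(eval VERSION := $(shell git describe --tags --abbrev=0).hotfix)
	$(eval TAG := minio/minio:$(VERSION))
	@echo "building hotfix $(TAG)"
```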
During expansion we need to validate that
- a new deployment is expanded with the newer constraints
- an existing deployment is expanded with the older constraints
- multiple server pools are rejected if they have a different
  deploymentID or distribution algo
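A minimal sketch of the pool validation, with hypothetical types: all
pools must agree on the deployment ID and the distribution algorithm.

```
package sketch

import "fmt"

type poolMeta struct {
	DeploymentID     string
	DistributionAlgo string
}

func validatePools(pools []poolMeta) error {
	if len(pools) == 0 {
		return fmt.Errorf("no server pools provided")
	}
	ref := pools[0]
	for i, p := range pools[1:] {
		if p.DeploymentID != ref.DeploymentID {
			return fmt.Errorf("pool %d: deploymentID mismatch", i+1)
		}
		if p.DistributionAlgo != ref.DistributionAlgo {
			return fmt.Errorf("pool %d: distribution algo mismatch", i+1)
		}
	}
	return nil
}
```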
This refactor is done for a few reasons, listed below
- to avoid deadlocks in scenarios when the number
  of nodes is smaller than the actual erasure stripe
  count, wherein N participating local lockers
  can lead to deadlocks across systems.
- avoids expiry routines running 1000s of separate
  network operations and routes per disk, whereas
  each of them is still accessing one single
  local entity.
- it is ideal to have a single globalLockServer
  per instance.
- in a 32-node deployment, each server
  group is still concentrated towards the
  same set of lockers that participate during
  the write/read phase; unlike the previous minio/dsync
  implementation, this potentially avoids sending
  32 requests - instead we still send at most as many
  requests as there are unique nodes participating in a
  write/read phase.
- reduces overall chattiness on smaller setups.
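A minimal sketch of the routing idea, with hypothetical names: lock
requests for an object go only to the lockers of the erasure set the
object hashes into, instead of to every node in the deployment.

```
package sketch

import "hash/crc32"

// locker is a stand-in for a lock RPC client.
type locker struct{ endpoint string }

// lockersFor picks the lockers of the erasure set that the object
// hashes into; sets holds one locker list per erasure set.
func lockersFor(object string, sets [][]locker) []locker {
	idx := crc32.ChecksumIEEE([]byte(object)) % uint32(len(sets))
	return sets[idx]
}
```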
The hotfix target will fetch the release tag prior to the latest commit
and create a binary with the same release tag plus a '.hotfix' suffix,
e.g. RELEASE.2020-12-03T05-49-24Z.hotfix
`mc admin info` on busy setups will not move HDD
heads unnecessarily for repeated calls, providing
better responsiveness for the call overall.

Bonus change: allow listTolerancePerSet to be N-1
for good entries, to avoid skipping entries when
for some reason one of the disks goes offline.
Always check that the auto-generated code is still compatible with the
existing hand-written code, to catch a forgotten regeneration or an
unintentional change.
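A minimal sketch of such a check (the target name is assumed):
regenerate everything and fail if the generated files drift from
what is committed.

```
check-gen:
	@go generate ./...
	@git diff --exit-code || (echo "generated code is out of date" && exit 1)
```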
Enable linting using golangci-lint across the
codebase to run a bunch of linters together;
we shall enable new linters as we fix more
things in the codebase.
This PR fixes the first stage of this
cleanup.
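A minimal sketch of wiring golangci-lint into the Makefile (target
name assumed):

```
lint:
	@golangci-lint run --timeout=5m
```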
Use the reference format to initialize lockers
during startup; also handle `nil` for NetLocker
in dsync and remove the *errorLocker* implementation
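A minimal sketch of tolerating a `nil` NetLocker, with a hypothetical
interface: a nil locker is simply skipped and counted as not granted,
instead of being replaced by an errorLocker.

```
package sketch

// NetLocker is a stand-in for the dsync network locker interface.
type NetLocker interface {
	Lock(resource string) (bool, error)
}

// lockAll broadcasts a lock request, skipping nil (offline) lockers.
func lockAll(lockers []NetLocker, resource string) (granted int) {
	for _, l := range lockers {
		if l == nil {
			continue // offline locker, counts as not granted
		}
		if ok, err := l.Lock(resource); err == nil && ok {
			granted++
		}
	}
	return granted
}
```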
Add further tuning parameters such as
- DialTimeout is now 15 seconds, down from 30 seconds
- KeepAlive timeout is now 20 seconds, 5 seconds
  more than the default 15 seconds
- ResponseHeaderTimeout is set to 10 seconds
- ExpectContinueTimeout is reduced to 3 seconds
- DualStack is enabled by default, so remove setting
  it to `true`
- Reduce IdleConnTimeout to 30 seconds from
  1 minute to avoid idle connection build-up

Fixes #8773
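A minimal sketch of a transport carrying the tunings above (the values
mirror the list; everything else is illustrative):

```
package sketch

import (
	"net"
	"net/http"
	"time"
)

func newTransport() *http.Transport {
	dialer := &net.Dialer{
		Timeout:   15 * time.Second, // dial timeout
		KeepAlive: 20 * time.Second, // keep-alive period
	}
	return &http.Transport{
		DialContext:           dialer.DialContext,
		ResponseHeaderTimeout: 10 * time.Second,
		ExpectContinueTimeout: 3 * time.Second,
		IdleConnTimeout:       30 * time.Second,
		// DualStack is enabled by default in net.Dialer,
		// so it is no longer set explicitly.
	}
}
```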
There are multiple possibilities for running MinIO within
a container, e.g. a configurable address, a non-root user, etc.
This makes it difficult to identify the actual IP/port to
use to check the healthcheck status from within a container.

It is simpler to use external healthcheck mechanisms,
like the healthcheck command in docker-compose, to check
for MinIO health status. This is similar to how checks
work in Kubernetes as well.

This PR removes the healthcheck script used inside the
Docker container and adds documentation on how to
use a docker-compose based healthcheck mechanism.
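A minimal sketch of such a docker-compose healthcheck, assuming the
default port and the standard MinIO liveness endpoint:

```
healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
  interval: 30s
  timeout: 20s
  retries: 3
```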
There is no reliable way to handle fallbacks for
MinIO deployments, due to the various command line
options and the multiple locations which require
access inside the container.

Parsing command line options to figure out which
one is the backend disk etc. is tricky; we did try
to fix this in implementations of check-user.go,
but it wasn't complete and introduced more bugs.

This PR simplifies the entire approach: rather than
always running the Docker container as non-root by
default, it allows users to opt in, such that they
are aware that this is what they are planning to do.
In fact, there are other ways Docker containers can
be run as regular users, without modifying our
internal behavior and adding more complexity.
This commit fixes a privilege escalation issue against
the S3 and web handlers. An authenticated IAM user
can:
- Read from or write to the internal '.minio.sys'
  bucket by simply sending a properly signed
  S3 GET or PUT request.
- Read from or write to the internal '.minio.sys'
  bucket using the 'Upload'/'Download'/'DownloadZIP'
  APIs by sending a "browser" request authenticated
  with its JWT token.
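A minimal sketch of the kind of guard the fix implies (names assumed):
reject any request, S3 or browser, that targets the internal bucket.

```
package sketch

import "errors"

const minioMetaBucket = ".minio.sys"

var errAccessDenied = errors.New("access denied")

// checkBucketAccess denies user requests against the internal bucket.
func checkBucketAccess(bucket string) error {
	if bucket == minioMetaBucket {
		return errAccessDenied
	}
	return nil
}
```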
Simplify the cmd/http package overall by removing
the custom plaintext vs. TLS connection detection,
migrating to go1.12 and choosing go1.12 as the
minimum version.
Also remove all the vendored deps, since they
are not useful anymore.
A Go script makes it easy to read/maintain. Also updated the timeout
in Dockerfiles from 5s to the default 30s and the test interval to 1m.
A higher timeout makes sense as the server may sometimes respond slowly
when under high load, as reported in #6974.

Fixes #6974
Also add a cross-compile script to always test cross
compilation for some well-known platforms and architectures;
we support out-of-the-box compilation of these platforms even
if we don't make an official release build.
This script is meant to avoid regressions in this area when we
add platform-dependent code.
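A minimal sketch of such a script (the platform list is assumed):
build the main package for each target and discard the binary.

```
#!/bin/sh
set -e
for osarch in linux/amd64 linux/arm64 darwin/amd64 windows/amd64; do
    GOOS=${osarch%/*} GOARCH=${osarch#*/} go build -o /dev/null .
done
```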
This PR simplifies the process of developer builds of local
Docker containers using `make docker`.
You need to provide a TAG, e.g.
```
TAG=y4m4/minio:exp make docker
```
This commit ditches running verifiers automatically when just building
the server. It retains the verifiers when running tests.
There is very little point in running the verifiers each time a
developer builds the library but has no intent of running the tests.
They're expensive in time; this commit halves the build time on my
system, from 57 seconds to 29 seconds. This is because the verifiers
update the libraries from GitHub each time, which is slightly wasteful.
Additionally, computing cyclomatic complexity is computationally
expensive and isn't necessary to build the library.
Additionally, this allows the library to be built offline; it no
longer requires internet access to run make.
Sending envVars along with the access and secret
keys exposes the entire MinIO server's sensitive
information. This will be an unexpected
situation for all users.

If at all we need to look for things like whether
credentials are set through the env, we should
have access to only that piece of information,
not the entire set of system envs.
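A minimal sketch of exposing only a boolean instead of the raw env
values (env var names assumed):

```
package sketch

import "os"

// credsFromEnv reports whether credentials were supplied via the
// environment, without revealing their values or any other env vars.
func credsFromEnv() bool {
	_, accessSet := os.LookupEnv("MINIO_ACCESS_KEY")
	_, secretSet := os.LookupEnv("MINIO_SECRET_KEY")
	return accessSet && secretSet
}
```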
This is needed to validate that `format.json` indeed exists
when a fresh node is brought online.

This wrapped implementation also connects to the remote node
by attempting a re-login. Subsequently, after a successful
connect, `format.json` is validated as well.

Fixes #3207
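A minimal sketch of the wrapper idea, with hypothetical names: on
(re)connect, log in to the remote node first and then verify that
`format.json` exists.

```
package sketch

import "errors"

// nodeClient is a stand-in for the authenticated storage RPC client.
type nodeClient interface {
	Login() error
	StatFile(volume, path string) error
}

var errUnformattedNode = errors.New("format.json missing on node")

func connectAndValidate(c nodeClient) error {
	if err := c.Login(); err != nil { // attempt a re-login
		return err
	}
	// After a successful connect, validate that format.json exists.
	if err := c.StatFile(".minio.sys", "format.json"); err != nil {
		return errUnformattedNode
	}
	return nil
}
```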
Golang 1.6 is the default version for the build now.
Additionally, set 'GODEBUG=cgocheck=0' for now, until
we fix the erasure coding package.
Read more here: https://tip.golang.org/doc/go1.6#cgo