A remote error is not required to be passed back to the
client - this is mostly because we have healing that should
eventually catch up on this and heal the bucket.
Ensure delete marker replication success, especially since the
recent optimizations to heal on HEAD, LIST and GET can force
replication attempts on a delete marker before the underlying object
version could have synced.
Move to using the `xl.meta` data structure to keep temporary partInfo;
this allows for a future change where we move different parts to
different drives.
PUT shall only proceed if the pre-conditions are met. The new
code uses
- x-minio-source-mtime
- x-minio-source-etag
to verify whether the object indeed needs to be replicated,
allowing us to avoid a StatObject() call.
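A minimal sketch of such a pre-condition check; the header names come from the description above, while the helper name, the `objectInfo` fields, and the mtime wire format are assumptions for illustration:
```
package replication

import (
	"net/http"
	"time"
)

// objectInfo is a minimal stand-in for the metadata the target
// already holds for this object version.
type objectInfo struct {
	ETag    string
	ModTime time.Time
}

// alreadyReplicated reports whether the target copy already matches the
// source, based on the x-minio-source-* pre-condition headers, so the
// replication PUT (and an extra StatObject call) can be skipped.
// The mtime wire format is assumed to be RFC3339Nano here.
func alreadyReplicated(hdr http.Header, oi objectInfo) bool {
	srcETag := hdr.Get("x-minio-source-etag")
	srcMTime := hdr.Get("x-minio-source-mtime")
	if srcETag == "" || srcMTime == "" {
		return false // no pre-conditions supplied, replicate as usual
	}
	mtime, err := time.Parse(time.RFC3339Nano, srcMTime)
	if err != nil {
		return false
	}
	return oi.ETag == srcETag && oi.ModTime.Equal(mtime)
}
```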
When limiting the listing, do not count delete markers, since they may be discarded.
Extend the limit, since we may be discarding the forward-to marker.
Fix directories always being sent to resolve, since they didn't return as a match.
On occasion this test fails:
```
2022-09-12T17:22:44.6562737Z === RUN TestGetObjectWithOutdatedDisks
2022-09-12T17:22:44.6563751Z erasure-object_test.go:1214: Test 2: Expected data to have md5sum = `c946b71bb69c07daf25470742c967e7c`, found `7d16d23f07072af1a809707ba101ae07`
```
Theory: Both objects are written with the same timestamp due to the lower timer resolution on Windows. This falls back to a secondary resolution, which is deterministic but effectively random.
Solution: Instead of hacking in a wait, request the specific version we want. This should still keep the test relevant.
Bonus: Remote action dependency for vulncheck
If the replication config could not be read from the bucket metadata for some
reason, issue a panic so that unexpected replication outcomes can
be avoided for replicated buckets.
For similar reasons, also panic while fetching the object-lock config
if it fails for a reason other than the non-existence of the config.
Avoid relying on scanner-calculated replication metrics; this will
improve the accuracy of the replication stats reported.
This PR also adds on to #15556 by handing off replication
traffic that could not be queued by the available workers to the
MRF queue, so that entries in `PENDING` status are healed faster.
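A tiny sketch of that hand-off, assuming a channel feeding the replication workers and a separate MRF channel; all names here are illustrative:
```
package replication

// task is a stand-in for a queued replication entry.
type task struct{ bucket, object, versionID string }

// queueReplicaTask hands the task to a free replication worker if one is
// available; otherwise it is routed to the MRF queue so the PENDING entry
// is retried and healed sooner instead of waiting for a scanner visit.
func queueReplicaTask(t task, workerCh, mrfCh chan<- task) {
	select {
	case workerCh <- t: // a worker picks this up immediately
	default:
		mrfCh <- t // all workers busy: defer to MRF
	}
}
```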
500k is a reasonable limit for any single MinIO
cluster deployment; in the future we may increase this
value.
However, for now we are going to keep this limit.
When healing is parallelized by setting the `_MINIO_HEAL_WORKERS`
environment variable, multiple goroutines may race while updating the disk's
healing tracker. This change serializes only these concurrent updates using a
channel. Note that the healing tracker is still not concurrency safe in other contexts.
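A minimal sketch of the serialization idea: one goroutine owns the tracker writes and every heal worker sends its update through a channel. The tracker fields and names are illustrative, not the actual implementation:
```
package heal

// trackerUpdate is a stand-in for a single healing-tracker mutation.
type trackerUpdate struct {
	itemsHealed, itemsFailed uint64
}

// tracker is a stand-in for the on-disk healing tracker.
type tracker struct {
	ItemsHealed, ItemsFailed uint64
}

// serializeUpdates starts a single writer goroutine; concurrent heal
// workers send their updates on the returned channel instead of
// mutating the tracker directly, so the updates never race.
func serializeUpdates(t *tracker) chan<- trackerUpdate {
	ch := make(chan trackerUpdate, 16)
	go func() {
		for u := range ch {
			t.ItemsHealed += u.itemsHealed
			t.ItemsFailed += u.itemsFailed
			// a real tracker would also persist itself here
		}
	}()
	return ch
}
```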
This PR is a continuation of the previous change: instead
of returning an error, trigger a spot heal on the
'xl.meta' and return only after the healing is complete.
This allows future GETs on the same resource to be
consistent for any version of the object.
xl.meta gets written and never rolled back; however,
we definitely need to validate the state that is
persisted on the disk. If there are inconsistencies
- beyond write quorum, we should return an error
to the client
- if write quorum was achieved but there are
inconsistent xl.meta's, we should simply trigger
an MRF on them
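A sketch of that decision, assuming we can summarize each drive's xl.meta by a checksum: fail the request if agreement is below write quorum, otherwise queue the disagreeing drives for MRF. Types and names are illustrative:
```
package meta

import "errors"

var errWriteQuorum = errors.New("xl.meta inconsistent beyond write quorum")

// checkXLMetaConsistency inspects the xl.meta checksum seen on each drive.
// metaHashes[i] == "" means the drive returned no (or unreadable) metadata.
func checkXLMetaConsistency(metaHashes []string, writeQuorum int) (mrfDrives []int, err error) {
	counts := make(map[string]int)
	for _, h := range metaHashes {
		if h != "" {
			counts[h]++
		}
	}
	// find the most commonly agreed-upon xl.meta
	var best string
	for h, n := range counts {
		if n > counts[best] {
			best = h
		}
	}
	if counts[best] < writeQuorum {
		// inconsistencies beyond write quorum: surface the error to the client
		return nil, errWriteQuorum
	}
	// write quorum reached: the remaining inconsistent drives go to MRF
	for i, h := range metaHashes {
		if h != best {
			mrfDrives = append(mrfDrives, i)
		}
	}
	return mrfDrives, nil
}
```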
The `clusterInfo` struct in admin-handlers is the same as
madmin.ClusterRegistrationInfo, except for small differences in field
names.
Removing it and using madmin.ClusterRegistrationInfo in its place will
help in the following ways:
- The JSON payload generated by mc for cluster registration will
be consistent (same keys) with the cluster.info generated by minio as part
of the profile and inspect zip
- health-analyzer can parse cluster.info using the same struct and
won't have to define its own
Currently, there is only a short time window in which the code is allowed
to save the status of a replication resync:
`now.Sub(st.EndTime) <= resyncTimeInterval`. Also, any failure to
write to the backend disks is not retried.
Refactor the code a little bit to rely on the timestamp of the last
successful write of the resync status of any given bucket to the
backend disks.
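A sketch of the idea under those assumptions: remember the last successful write per bucket, persist again only once the interval has elapsed, and let a failed write be retried on the next attempt. Names and the interval value are illustrative:
```
package resync

import (
	"sync"
	"time"
)

const resyncTimeInterval = time.Minute // illustrative value

// statusSaver tracks, per bucket, when the resync status was last
// successfully written to the backend drives.
type statusSaver struct {
	mu        sync.Mutex
	lastSaved map[string]time.Time
}

// maybeSave persists the status only if the last successful write is
// older than resyncTimeInterval; a failed write leaves lastSaved
// untouched, so the next call retries instead of silently skipping.
func (s *statusSaver) maybeSave(bucket string, persist func() error) error {
	s.mu.Lock()
	last := s.lastSaved[bucket]
	s.mu.Unlock()
	if time.Since(last) < resyncTimeInterval {
		return nil
	}
	if err := persist(); err != nil {
		return err
	}
	s.mu.Lock()
	if s.lastSaved == nil {
		s.lastSaved = make(map[string]time.Time)
	}
	s.lastSaved[bucket] = time.Now()
	s.mu.Unlock()
	return nil
}
```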
When replication is enabled on a particular bucket, the listing sends
objects to bucket replication, but for non-recursive listings it also
sends prefixes, which is useless and produces a lot of error logs.
This commit ignores prefixes.
Under some sequence of events, the following code would
reach an infinite loop:
```
idx1, idx2 := 0, 1
for ; idx2 != idx1; idx2++ {
	fmt.Println(idx2)
}
```
fixes #15639
A lot of warning messages are printed in CI/CD failures generated by `go
test`. Avoid that by requiring at least the Error level for logging when
running `go test`.
Inlined data is often bigger than the allowed
O_DIRECT alignment, so potentially we can write
'xl.meta' without O_DSYNC and instead rely on
O_DIRECT + fdatasync().
This PR allows O_DIRECT on inlined data, gaining
the benefits of O_DIRECT while eventually
performing an fdatasync() at the end.
A performance boost can be observed for small
objects < 128KiB. The boost is mainly seen on
HDDs, and is marginal on NVMe setups.
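A rough, Linux-only sketch of such a write path (not the actual MinIO code): open with O_DIRECT instead of O_DSYNC, write an aligned buffer, then fdatasync once at the end. The alignment size is an assumption, and the buffer is presumed to be allocated and padded to that alignment:
```
//go:build linux

package direct

import (
	"os"

	"golang.org/x/sys/unix"
)

const alignSize = 4096 // typical O_DIRECT alignment; device dependent

// writeMetaDirect writes buf (assumed allocated and padded to a multiple
// of alignSize) using O_DIRECT and makes it durable with a single
// fdatasync, instead of paying the cost of O_DSYNC on every write.
func writeMetaDirect(path string, buf []byte) error {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY|unix.O_DIRECT, 0o644)
	if err != nil {
		return err
	}
	defer f.Close()
	if _, err = f.Write(buf); err != nil {
		return err
	}
	// one flush at the end instead of per-write O_DSYNC
	return unix.Fdatasync(int(f.Fd()))
}
```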
When a node finds a change in the other replication cluster and applies
it to itself, it already notifies the other peers. There is no need for
all nodes in a given cluster to do site replication healing; one node is
sufficient.
This PR improves replication failure healing by persisting the
most recent failures to disk and re-queuing them until the replication
is successful.
While this does not eliminate the need for healing during a full scan,
queuing to MRF vastly improves the ETA for keeping replicated buckets
in sync, as it does not wait for a scanner visit to detect unreplicated
object versions.
With competing calls on the same object in a versioned bucket,
mutating calls on the same object may unexpectedly see
higher delays.
This can be reproduced with a replicated bucket by repeatedly
overwriting and deleting the same object.
For longer locks, such as the scanner's, keep the 1sec interval.
This PR fixes possible leaks that may emanate from not
listening for context cancellation or timeouts.
```
goroutine 60957610 [chan send, 16 minutes]:
github.com/minio/minio/cmd.(*erasureServerPools).Walk.func1.1.1(...)
github.com/minio/minio/cmd/erasure-server-pool.go:1724 +0x368
github.com/minio/minio/cmd.listPathRaw({0x4a9a740, 0xc0666dffc0},...
github.com/minio/minio/cmd/metacache-set.go:1022 +0xfc4
github.com/minio/minio/cmd.(*erasureServerPools).Walk.func1.1()
github.com/minio/minio/cmd/erasure-server-pool.go:1764 +0x528
created by github.com/minio/minio/cmd.(*erasureServerPools).Walk.func1
github.com/minio/minio/cmd/erasure-server-pool.go:1697 +0x1b7
```
The bottom line is that delete markers are a nuisance;
most applications are not version aware, and delete markers
have simply complicated version management.
AWS S3 introduced an unnecessary complication overhead
for customers: they now need to manage these
markers by applying ILM settings and cleaning
them up on a regular basis.
To make matters worse, all these delete markers
get replicated as well in a replicated setup,
requiring two ILM settings on each site.
This PR is an attempt to address this inferior
implementation by deviating MinIO towards an
idempotent delete marker implementation, i.e.
MinIO will never create more than a single
consecutive delete marker.
This significantly reduces operational overhead
by making versioning more useful for real data.
This is an S3 spec deviation for pragmatic reasons.
Queue failed/pending replication for healing during listing and GET/HEAD
API calls. This includes healing of existing objects that were never
replicated or those in the middle of a resync operation.
This PR also fixes a bug in ListObjectVersions where lifecycle filtering
should be applied.
When the object speedtest is running, keep writing the
previous speedtest result back to the client until
we have a new result - this avoids sending back
blank entries in between results when the speedtest
is running in 'autotune' mode.
```
commit 7bdaf9bc50
Author: Aditya Manthramurthy <donatello@users.noreply.github.com>
Date: Wed Jul 24 17:34:23 2019 -0700
Update on-disk storage format for users system (#7949)
```
Bonus: fixes a bug when etcd keys were being re-encrypted.
Currently, the code doesn't check whether the user creating a bucket with
the locking feature has the bucket locking and versioning permissions;
add this check in accordance with the S3 spec.
https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucket.html
Object Lock - If ObjectLockEnabledForBucket is set to true in your CreateBucket request,
s3:PutBucketObjectLockConfiguration and s3:PutBucketVersioning permissions are required.
Capture average, p50, p99, p999 response times
and ttfb values. These are needed for latency
measurements and overall understanding of our
speedtest results.
listConfigItems creates a goroutine, but sometimes callers exit
without properly asking listAllIAMConfigItems() to stop sending
results, hence a goroutine leak.
Create a new context and cancel it for each listAllIAMConfigItems
call.
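The fix follows the usual Go pattern sketched below: derive a child context per call and cancel it when the caller returns, so the producer goroutine can stop sending. The consumer shown here is illustrative, not the actual IAM code:
```
package iamcfg

import "context"

// loadSomeItems consumes only part of the listing; the deferred cancel
// tells the listing goroutine to stop sending, preventing a leak even
// when the caller exits early.
func loadSomeItems(ctx context.Context, list func(context.Context) <-chan string) []string {
	ctx, cancel := context.WithCancel(ctx)
	defer cancel() // always release the producer goroutine

	var out []string
	for item := range list(ctx) {
		out = append(out, item)
		if len(out) >= 10 { // caller may stop early
			break
		}
	}
	return out
}
```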
This commit adds support for automatically reloading
the MinIO client certificate for authentication to KES.
The client certificate will now be reloaded:
- when the private key / certificate file changes
- when a SIGHUP signal is received
- every 15 minutes
Fixes #14869
Signed-off-by: Andreas Auernhammer <hi@aead.dev>
The path is marked dirty automatically when healObject() is called, which is
wrong. HealObject() is called during self-healing, and this will lead to
an increase in the bloom filter's false positives.
Also move NSUpdated() out of renameData() and call it directly in
CompleteMultipart and PutObject; this is not a functional change, but
it will make the code less prone to errors in the future.
Smaller setups may have fewer drives per server; choose
the concurrency based on the number of local drives, and let
the MinIO server adjust the overall concurrency as
necessary.
It is possible for anyone with admin access to read
the content of any arbitrary OS location by simply
providing the file with `mc admin update alias/ /etc/passwd`.
A workaround is to disable the 'admin:ServiceUpdate' action. Everyone
is advised to upgrade to this patch.
Thanks to @alevsk for finding this bug.
This has been observed in multiple environments
where the setups are small: `speedtest` naturally
fails with the default '10s', and a concurrency
of '32' is too big for such clusters.
Choose a smaller value, i.e. equal to the number of
drives in such clusters, and let 'autotune'
increase the concurrency instead.
fixes #15334
- re-use the net/url parsed value for http.Request{}
- remove gosimple, structcheck and unused due to https://github.com/golangci/golangci-lint/issues/2649
- unwrapErrs up to leafErr to ensure that we store exactly the correct errors
"consoleAdmin" was used as the policy for root derived accounts, but this
lead to unexpected bugs when an administrator modified the consoleAdmin
policy
This change avoids evaluating a policy for root derived accounts as by
default no policy is mapped to the root user. If a session policy is
attached to a root derived account, it will be evaluated as expected.
This PR changes the handling of bucket deletes for site
replicated setups to hold on to deleted bucket state until
it syncs to all the clusters participating in site replication.
Currently, if one server in a distributed setup fails to upgrade
for any reason, it is not possible to upgrade again unless the
nodes are restarted.
To fix this, split the upgrade process into two steps:
- download the new binary on all servers
- if successful, overwrite the old binary with the new one
This commit replaces `ioutil.TempDir` with `t.TempDir` in tests. The
directory created by `t.TempDir` is automatically removed when the test
and all its subtests complete.
Prior to this commit, temporary directories created using `ioutil.TempDir`
needed to be removed manually by calling `os.RemoveAll`, which is omitted
in some tests. The error handling boilerplate e.g.
	defer func() {
		if err := os.RemoveAll(dir); err != nil {
			t.Fatal(err)
		}
	}()
is also tedious, but `t.TempDir` handles this for us nicely.
Reference: https://pkg.go.dev/testing#T.TempDir
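For comparison, a minimal test using `t.TempDir` (illustrative test name):
```
package example

import "testing"

func TestSomething(t *testing.T) {
	dir := t.TempDir() // removed automatically when the test and its subtests finish
	_ = dir            // ... use dir ...
}
```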
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
Add cluster info to the inspect and profiling archives.
In addition to the existing data generated for both inspect and profiling,
a cluster.info file is added. The latter contains some information about the
cluster. The generation of cluster.info is done as the last step, and it can
fail if it exceeds 10 seconds.
Creating a/b/c/d/ where `a/b/c/` already exists results in additional
syscalls, such as an Lstat() call, to verify that `a/b/c/` exists
and is a directory.
We do not need to do this in MinIO: if the parent prefixes
exist, we can simply return success without spending
additional syscalls.
This implementation also attempts to simply use Access() calls
to avoid os.Stat() calls, since the latter does memory allocation
for things we do not need to use.
Access() is simpler since we have a predictable structure on
the backend and we know exactly how our path structures are laid out.
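A sketch of such an existence check via Access(2) rather than os.Stat(), which avoids allocating a FileInfo we never use; Unix-only, and the helper name is illustrative:
```
//go:build unix

package fsutil

import "golang.org/x/sys/unix"

// prefixExists reports whether the parent prefix already exists on the
// backend without allocating an os.FileInfo; with a predictable on-disk
// layout, an F_OK access check is all we need.
func prefixExists(path string) bool {
	return unix.Access(path, unix.F_OK) == nil
}
```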
A huge number of goroutines would build up from various monitors.
When creating test filesystems, provide a context so they can shut down when no longer needed.
Do completely independent multipart uploads.
In distributed mode, a lock was held to merge each multipart
upload as it was added. This lock was highly contested, and
retries are expensive (time-wise) in distributed mode.
Instead, each part adds its metadata information uniquely.
This eliminates the per-object lock required for each merge.
The metadata is read back and merged by "CompleteMultipartUpload"
without locks when constructing the final object.
Co-authored-by: Harshavardhana <harsha@minio.io>
This commit adds a `context.Context` to the
KMS `{Stat, CreateKey, GenerateKey}` API
calls.
The context will be used to terminate external calls
as soon as the client request gets canceled.
A follow-up PR will add a `context.Context` to
the remaining `DecryptKey` API call.
Signed-off-by: Andreas Auernhammer <hi@aead.dev>
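Conceptually, the change gives the KMS calls a shape like the sketch below; the interface and types here are illustrative stand-ins, not the actual MinIO KMS package:
```
package kms

import "context"

// KeyManager sketches the calls that now accept a context, so external
// KMS requests can be aborted when the client request is canceled.
type KeyManager interface {
	Stat(ctx context.Context) (Status, error)
	CreateKey(ctx context.Context, keyID string) error
	GenerateKey(ctx context.Context, keyID string, assoc []byte) (DEK, error)

	// DecryptKey gains a context in a follow-up change.
	DecryptKey(keyID string, ciphertext, assoc []byte) ([]byte, error)
}

// Status and DEK are stand-ins for the real return types.
type Status struct{ Endpoint string }
type DEK struct{ Plaintext, Ciphertext []byte }
```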