With a missing NextMarker in delimiter-based listing, top-level prefixes
beyond 4500 (or beyond the requested max-keys value) wouldn't be sent back
for the client to ask for the next batch.
Reproduced at a customer deployment; create prefixes
as shown below:
```
for year in $(seq 2017 2020); do
  for month in {01..12}; do
    for day in {01..31}; do
      mc -q cp file myminio/testbucket/dir/day_id=$year-$month-$day/
    done
  done
done
```
Then perform
```
aws s3api --profile minio --endpoint-url http://localhost:9000 list-objects \
--bucket testbucket --prefix dir/ --delimiter / --max-keys 1000
```
You will see that NextMarker is missing; this prevents listing beyond the requested
max-keys and beyond 4500 (maxKeyObjectList) prefixes, because the client has no
NextMarker to continue from.
This PR addresses the situation properly by making the implementation
more spec-compatible, i.e. NextMarker can in fact be either an object or a prefix
(with a delimiter), depending on the input operation.
This issue was introduced after the list caching changes and has been present
for a while.
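For illustration, a minimal sketch of the idea (not the server's actual code;
the helper and names are hypothetical): the marker for the next page is simply
the lexicographically last entry returned on the current page, whether that
entry is an object key or a delimiter-rolled-up common prefix.
```
package main

import "fmt"

// nextMarker picks the marker for the next listing page: the lexicographically
// last entry returned, which may be an object key or a common prefix.
// Hypothetical helper for illustration only.
func nextMarker(objects, commonPrefixes []string, truncated bool) string {
	if !truncated {
		return ""
	}
	last := ""
	if n := len(objects); n > 0 {
		last = objects[n-1]
	}
	if n := len(commonPrefixes); n > 0 && commonPrefixes[n-1] > last {
		last = commonPrefixes[n-1]
	}
	return last
}

func main() {
	// With --delimiter / the page may end on a common prefix, and that prefix
	// must be returned as NextMarker so the client can continue listing.
	fmt.Println(nextMarker(nil, []string{"dir/day_id=2017-01-01/", "dir/day_id=2017-01-02/"}, true))
}
```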
The regression was introduced by cce5d7152a,
which implemented the tests incorrectly and fixed the wrong requirement;
CompleteMultipart should never succeed in these tests.
The issue was introduced in #11106 by the following pattern:
```
<-t.C // timer fired
if !t.Stop() {
    <-t.C // timer hangs
}
```
This hangs at the last `t.C` receive: once a timer has fired and its channel
has been drained, it cannot be Stopped() anymore, so `t.Stop()` returns `false`,
leading to a confusing usage state. Refactor the code to use timers
appropriately for the exact requirements in place.
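As a minimal sketch of the safe pattern (illustrative only, not the refactored
code itself): once a value has been received from `t.C`, the channel is known to
be empty, so the timer can be re-armed with `Reset` directly and no
`Stop()`/drain dance is needed.
```
package main

import (
	"fmt"
	"time"
)

func main() {
	t := time.NewTimer(10 * time.Millisecond)
	defer t.Stop()

	for i := 0; i < 3; i++ {
		<-t.C // the timer fired and its channel is now drained
		fmt.Println("tick", i)

		// Safe to Reset here: no pending value is left in t.C, so calling
		// Stop() (which would return false) and draining again is
		// unnecessary and would block forever.
		t.Reset(10 * time.Millisecond)
	}
}
```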
Both the Azure and S3 gateways fetch object information before returning
the object stream; however, the object content/length could be modified in
the meantime, which means a corrupted object could be returned.
Use the ETag to ensure that the object was not modified during the GET call.
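For illustration, a minimal sketch of the idea using aws-sdk-go (endpoint,
bucket and key are placeholders; this is not the gateway's actual code): pin
the GET to the ETag observed while fetching the object info, so a concurrent
overwrite fails the precondition instead of returning mixed content.
```
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{
		Endpoint:         aws.String("http://localhost:9000"),
		Region:           aws.String("us-east-1"),
		S3ForcePathStyle: aws.Bool(true),
	}))
	svc := s3.New(sess)

	bucket, key := "testbucket", "dir/object"

	// Stat first to learn the current ETag.
	head, err := svc.HeadObject(&s3.HeadObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
	})
	if err != nil {
		panic(err)
	}

	// GET with If-Match: the backend rejects the request if the object
	// changed between the stat and the GET.
	out, err := svc.GetObject(&s3.GetObjectInput{
		Bucket:  aws.String(bucket),
		Key:     aws.String(key),
		IfMatch: head.ETag,
	})
	if err != nil {
		panic(err)
	}
	defer out.Body.Close()
	fmt.Println("content-length:", aws.Int64Value(out.ContentLength))
}
```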
Currently, the test neither completes nor aborts the multipart upload
after testing the list operation. This caused `cleanup()` to fail since
the object cannot be deleted, which in turn causes the bucket deletion to
fail.
This PR adds a `CompleteMultipartUpload` call to commit the object so
the cleanup can successfully delete the object and bucket.
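A minimal sketch of the added step (aws-sdk-go, with placeholder names; the
actual test code may differ): complete the upload with the ETags of the parts
that were uploaded, so the object materializes and can be deleted by the
cleanup.
```
package example

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
)

// completeUpload commits a multipart upload so the object exists and can be
// removed by the test cleanup. uploadID and parts come from the earlier
// CreateMultipartUpload/UploadPart calls in the test (placeholders here).
func completeUpload(svc *s3.S3, bucket, key, uploadID string, parts []*s3.CompletedPart) error {
	_, err := svc.CompleteMultipartUpload(&s3.CompleteMultipartUploadInput{
		Bucket:   aws.String(bucket),
		Key:      aws.String(key),
		UploadId: aws.String(uploadID),
		MultipartUpload: &s3.CompletedMultipartUpload{
			Parts: parts,
		},
	})
	return err
}
```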
The crawler should call ListBuckets only once, not once per serverPool;
buckets are the same across all pools and sets, and ListBuckets
always returns a unified view. Once ListBuckets returns,
sort the result by creation time to scan the latest buckets first,
on the assumption that newer buckets have less content than older
buckets, allowing them to be scanned faster
and also providing a view closer to the latest state.
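For illustration, a minimal sketch of the newest-first ordering (the
`BucketInfo` struct here is a hypothetical stand-in for the real type):
```
package main

import (
	"fmt"
	"sort"
	"time"
)

// BucketInfo is a hypothetical stand-in for the real bucket metadata type.
type BucketInfo struct {
	Name    string
	Created time.Time
}

func main() {
	buckets := []BucketInfo{
		{Name: "old-logs", Created: time.Now().Add(-720 * time.Hour)},
		{Name: "new-uploads", Created: time.Now().Add(-1 * time.Hour)},
	}

	// Newest buckets first: they are expected to hold less content, so they
	// finish scanning quickly and keep the reported view close to current.
	sort.Slice(buckets, func(i, j int) bool {
		return buckets[i].Created.After(buckets[j].Created)
	})

	for _, b := range buckets {
		fmt.Println(b.Name)
	}
}
```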
When searching the caches, don't copy the IDs; instead, inline the loop.
```
Benchmark_bucketMetacache_findCache-32 19200 63490 ns/op 8303 B/op 5 allocs/op
Benchmark_bucketMetacache_findCache-32 20338 58609 ns/op 111 B/op 4 allocs/op
```
Add a reasonable, but still simplistic, benchmark.
Bonus: nicer zero-alloc logging.
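For illustration, a minimal sketch of the inlined lookup (the types are
simplified stand-ins, not the real metacache structures): range over the cache
map directly instead of first copying the IDs into a slice, which removes the
per-call allocation seen in the benchmark.
```
package main

import "fmt"

// metacache and bucketMetacache are simplified stand-ins for illustration.
type metacache struct{ id, root string }

type bucketMetacache struct {
	caches map[string]metacache
}

// findCache iterates the map in place rather than building a slice of IDs
// first, avoiding an allocation on every lookup.
func (b *bucketMetacache) findCache(root string) (metacache, bool) {
	for _, c := range b.caches {
		if c.root == root {
			return c, true
		}
	}
	return metacache{}, false
}

func main() {
	b := &bucketMetacache{caches: map[string]metacache{
		"c1": {id: "c1", root: "dir/"},
	}}
	c, ok := b.findCache("dir/")
	fmt.Println(c.id, ok)
}
```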
With the new bucket healing refactor, healing a bucket (including its
metadata) happens automatically, so there is no need to redundantly
heal buckets in ListBucketsHeal as well; remove it.
This optimization mainly avoids listing the entire
`.minio.sys/buckets/.minio.sys` directory, which
can get really large and gets in the way of startup
routines; the contents of `.minio.sys/buckets/.minio.sys`
are rather transient and do not need to be healed.
Test environments (go test or manual testing) should always consider
the passed disks to be root disks and should not rely on the disk.IsRootDisk()
function, because the latter can return a false negative
when called on a busy system. Such a false negative would
only occur in a testing environment and not in production, so we can
accept this trade-off for now.
Perform cleanup operations on copied data. This avoids read-locking the
data while determining which caches to keep.
Also, reduce the log(N*N) operation to log(N*M), where M is the number of
caches with the same root or below, when checking potential replacements.
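For illustration, a minimal sketch of the copy-then-clean pattern (the types
here are hypothetical stand-ins): hold a short read lock only to copy the
cache map, then run the slower keep/remove decision on the copy without
blocking readers.
```
package main

import (
	"fmt"
	"sync"
	"time"
)

// cacheEntry and bucketCaches are hypothetical stand-ins for illustration.
type cacheEntry struct {
	root    string
	lastUse time.Time
}

type bucketCaches struct {
	mu     sync.RWMutex
	caches map[string]cacheEntry
}

// cleanup copies the map under a short read lock, then performs the
// (potentially slow) decision of which caches to drop on the copy, so
// readers are not blocked while the scan runs.
func (b *bucketCaches) cleanup(maxAge time.Duration) []string {
	b.mu.RLock()
	copied := make(map[string]cacheEntry, len(b.caches))
	for id, c := range b.caches {
		copied[id] = c
	}
	b.mu.RUnlock()

	var remove []string
	for id, c := range copied {
		if time.Since(c.lastUse) > maxAge {
			remove = append(remove, id)
		}
	}
	return remove
}

func main() {
	b := &bucketCaches{caches: map[string]cacheEntry{
		"a": {root: "dir/", lastUse: time.Now().Add(-2 * time.Hour)},
		"b": {root: "dir/sub/", lastUse: time.Now()},
	}}
	fmt.Println("stale caches:", b.cleanup(time.Hour))
}
```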
This refactor is done for a few reasons, listed below:
- to avoid deadlocks in scenarios where the number
  of nodes is smaller than the actual erasure stripe
  count, in which N participating local lockers
  can lead to deadlocks across systems.
- to avoid expiry routines running 1000s of separate
  network operations and routes per disk, whereas
  each of them is still accessing one single
  local entity.
- it is ideal to have a single globalLockServer
  per instance.
- in a 32-node deployment, however, each server
  group is still concentrated towards the
  same set of lockers that participate during
  the write/read phase; unlike the previous minio/dsync
  implementation, this potentially avoids sending
  32 requests, and instead we send at most as many
  requests as there are unique nodes participating in a
  write/read phase.
- it reduces overall chattiness on smaller setups.
The bucket forwarding handler considers MakeBucket to always be local, but
it mistakenly treats a PUT bucket lifecycle request as a MakeBucket call.
Fix the MakeBucket check by ensuring that the query string is empty
in the PUT URL.
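For illustration, a rough sketch of the intended check (function and variable
names are hypothetical, not the handler's actual code): a MakeBucket request
is a PUT on the bare bucket path with no query parameters, while bucket
sub-resource PUTs such as `?lifecycle` carry a query string.
```
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// isMakeBucketRequest is a hypothetical helper: MakeBucket is a PUT on the
// bucket itself (no object, no sub-resource), so the query must be empty.
func isMakeBucketRequest(r *http.Request, bucket, object string) bool {
	return r.Method == http.MethodPut &&
		bucket != "" && object == "" &&
		r.URL.RawQuery == "" // e.g. PUT ?lifecycle must not match
}

func main() {
	mk := httptest.NewRequest(http.MethodPut, "http://localhost:9000/testbucket", nil)
	lc := httptest.NewRequest(http.MethodPut, "http://localhost:9000/testbucket?lifecycle", nil)

	fmt.Println(isMakeBucketRequest(mk, "testbucket", "")) // true
	fmt.Println(isMakeBucketRequest(lc, "testbucket", "")) // false
}
```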
Until now, we used to match the inode number of the root
drive against the drive path MinIO would use; if they matched,
we knew it was a root disk.
This may not hold in all situations, such as when running
inside a container environment where the container might
be mounted from a different partition altogether, in which case root
disk detection might fail.
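For illustration, a rough sketch of that kind of heuristic (this compares the
stat device identity of the drive path with that of `/`; the exact field the
old code compared is an assumption here, and the paths are placeholders):
```
package main

import (
	"fmt"
	"os"
	"syscall"
)

// sameFilesystem reports whether two paths live on the same filesystem by
// comparing their stat device IDs. Inside a container whose data path is
// mounted from a different partition, this comparison no longer identifies
// the host root disk correctly.
func sameFilesystem(path1, path2 string) (bool, error) {
	fi1, err := os.Stat(path1)
	if err != nil {
		return false, err
	}
	fi2, err := os.Stat(path2)
	if err != nil {
		return false, err
	}
	st1, ok1 := fi1.Sys().(*syscall.Stat_t)
	st2, ok2 := fi2.Sys().(*syscall.Stat_t)
	if !ok1 || !ok2 {
		return false, fmt.Errorf("stat not supported on this platform")
	}
	return st1.Dev == st2.Dev, nil
}

func main() {
	// "/data" is a placeholder for the drive path MinIO would use.
	isRoot, err := sameFilesystem("/data", "/")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("looks like a root disk:", isRoot)
}
```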