The resync status may not be up-to-date by
the time the resync is over, due to how
the timer is triggered.
The diff is sufficient to know whether
replication happened or not.
`GetParityForSC` has a value receiver, so Config is copied before the lock is obtained.
Make it a pointer receiver.
Fixes:
```
WARNING: DATA RACE
Read at 0x0000079cdd10 by goroutine 190:
github.com/minio/minio/cmd.(*erasureServerPools).BackendInfo()
github.com/minio/minio/cmd/erasure-server-pool.go:579 +0x6f
github.com/minio/minio/cmd.(*erasureServerPools).LocalStorageInfo()
github.com/minio/minio/cmd/erasure-server-pool.go:614 +0x3c6
github.com/minio/minio/cmd.(*peerRESTServer).LocalStorageInfoHandler()
github.com/minio/minio/cmd/peer-rest-server.go:347 +0x4ea
github.com/minio/minio/cmd.(*peerRESTServer).LocalStorageInfoHandler-fm()
...
```
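A minimal sketch of the pitfall (the real Config has more fields; configLock here stands in for the actual lock): with a value receiver, the whole struct is copied at call time, before any lock can be taken, so the copy itself races with concurrent writers.
```
package storageclass

import "sync"

// configLock is an assumption for this sketch; the point is only that
// the lock lives outside the struct being copied.
var configLock sync.RWMutex

// Config is a trimmed stand-in for the storage-class configuration.
type Config struct {
	Standard int
	RRS      int
}

// Racy: a value receiver copies the whole Config at call time, *before*
// configLock is acquired, so the copy is an unsynchronized read.
func (c Config) getParityForSCRacy(sc string) int {
	configLock.RLock()
	defer configLock.RUnlock()
	if sc == "REDUCED_REDUNDANCY" {
		return c.RRS
	}
	return c.Standard
}

// Fixed: a pointer receiver defers all reads until the lock is held.
func (c *Config) GetParityForSC(sc string) int {
	configLock.RLock()
	defer configLock.RUnlock()
	if sc == "REDUCED_REDUNDANCY" {
		return c.RRS
	}
	return c.Standard
}
```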
Since relaxing the quorum error across pools
for ListBuckets() and GetBucketInfo(), we hit a
situation where loading IAM could potentially
return a "server not initialized" error for the
second pool.
We need to handle this: let the pool come online
and retry transparently; this PR fixes that.
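A minimal sketch of the retry, assuming a sentinel error for the not-yet-online pool (names hypothetical):
```
package main

import (
	"context"
	"errors"
	"time"
)

// errServerNotInitialized stands in for the error an uninitialized
// pool returns; the helper name is hypothetical.
var errServerNotInitialized = errors.New("server not initialized")

// loadIAMWithRetry retries transparently while a pool comes online
// instead of surfacing the transient initialization error.
func loadIAMWithRetry(ctx context.Context, load func() error) error {
	for {
		err := load()
		if err == nil || !errors.Is(err, errServerNotInitialized) {
			return err
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(time.Second): // pool not ready yet; retry
		}
	}
}
```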
x-amz-signed-headers is meant for HTTP headers only,
not for query params; using it to verify anything
further can lead to failure.
The generated presigned URL with custom metadata
is already kosher (tamper-proof).
fixes #18281
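For context, a minimal minio-go sketch (endpoint, credentials, and bucket/object names are placeholders): the extra query parameters are protected by the signature over the canonical query string, while X-Amz-SignedHeaders lists only the HTTP headers that were signed (typically just "host").
```
package main

import (
	"context"
	"fmt"
	"log"
	"net/url"
	"time"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	client, err := minio.New("play.min.io", &minio.Options{
		Creds:  credentials.NewStaticV4("ACCESS-KEY", "SECRET-KEY", ""),
		Secure: true,
	})
	if err != nil {
		log.Fatal(err)
	}

	// Query parameters are signed as part of the canonical query
	// string itself, not via X-Amz-SignedHeaders.
	reqParams := url.Values{}
	reqParams.Set("response-content-type", "application/octet-stream")

	u, err := client.PresignedGetObject(context.Background(),
		"mybucket", "myobject", 15*time.Minute, reqParams)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(u) // ...&X-Amz-SignedHeaders=host&X-Amz-Signature=...
}
```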
`resourceMetricsMap` has no protection against concurrent reads and writes.
Add a mutex and don't use maps from the last iteration.
Bug introduced in #18057. Fixes #18271.
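A minimal sketch of the shape of the fix (names hypothetical): guard the shared map with a mutex and publish a fresh map each iteration instead of mutating the one readers may still be using.
```
package main

import "sync"

var (
	resourceMetricsMu  sync.RWMutex
	resourceMetricsMap map[string]float64
)

// updateResourceMetrics builds a brand-new map each collection cycle;
// the previous iteration's map is never reused or mutated.
func updateResourceMetrics(latest map[string]float64) {
	fresh := make(map[string]float64, len(latest))
	for k, v := range latest {
		fresh[k] = v
	}
	resourceMetricsMu.Lock()
	resourceMetricsMap = fresh
	resourceMetricsMu.Unlock()
}

// readResourceMetric takes the read lock so lookups never race the writer.
func readResourceMetric(key string) (float64, bool) {
	resourceMetricsMu.RLock()
	defer resourceMetricsMu.RUnlock()
	v, ok := resourceMetricsMap[key]
	return v, ok
}
```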
globalDeploymentID was being read while it was being set.
Fixes race:
```
WARNING: DATA RACE
Write at 0x0000079605a0 by main goroutine:
github.com/minio/minio/cmd.connectLoadInitFormats()
github.com/minio/minio/cmd/prepare-storage.go:269 +0x14f0
github.com/minio/minio/cmd.waitForFormatErasure()
github.com/minio/minio/cmd/prepare-storage.go:294 +0x21d
...
Previous read at 0x0000079605a0 by goroutine 105:
github.com/minio/minio/cmd.newContext()
github.com/minio/minio/cmd/utils.go:817 +0x31e
github.com/minio/minio/cmd.adminMiddleware.func1()
github.com/minio/minio/cmd/admin-router.go:110 +0x96
net/http.HandlerFunc.ServeHTTP()
net/http/server.go:2136 +0x47
github.com/minio/minio/cmd.setBucketForwardingMiddleware.func1()
github.com/minio/minio/cmd/generic-handlers.go:460 +0xb1a
net/http.HandlerFunc.ServeHTTP()
net/http/server.go:2136 +0x47
...
```
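One way to fix such a race, as a sketch (accessor names hypothetical): publish the global through an atomic pointer, so the one-time write during startup and the reads in request handlers are both synchronized.
```
package main

import "sync/atomic"

var globalDeploymentIDPtr atomic.Pointer[string]

// setDeploymentID is called once by startup code.
func setDeploymentID(id string) {
	globalDeploymentIDPtr.Store(&id)
}

// deploymentID is safe to call from any request handler.
func deploymentID() string {
	if p := globalDeploymentIDPtr.Load(); p != nil {
		return *p
	}
	return ""
}
```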
Currently the default for all drives is 512, which
is a lot for HDDs; recent testing has revealed that
moving this to 32 for HDDs seems like a fair value.
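Illustratively (a sketch only; the values are from the message, while the detection and the setting this default feeds are assumptions):
```
package main

// defaultDriveConcurrency picks a per-drive default based on media type.
func defaultDriveConcurrency(rotational bool) int {
	if rotational {
		return 32 // HDDs: a much smaller queue avoids long busy periods
	}
	return 512 // previous default, still fine for SSD/NVMe
}
```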
Introducing a new version of the healthinfo struct
for adding this info is not correct. It needs to be
implemented differently, without adding a new version.
This reverts commit 8737025d940f80360ed4b3686b332db5156f6659.
There is a fundamental race condition in `newErasureServerPools`, where setObjectLayer is
called before the poolMeta has been loaded/populated.
We add a placeholder value to this field but disable all saving of the value, so we don't risk
overwriting the value on disk. Once the value has been loaded or created, it is replaced with
the proper value, which will also be saved.
Also fixes various accesses of `poolMeta` that were done without locks.
We make `poolMeta.IsSuspended` return false for out-of-range
indexes, even though, with the locking fixed, we shouldn't
risk out-of-bounds reads anymore.
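A minimal sketch of the placeholder-then-replace approach (field and method names hypothetical):
```
package main

import (
	"context"
	"sync"
)

type poolStatus struct{ Suspended bool }

type poolMeta struct {
	mu       sync.RWMutex
	Pools    []poolStatus
	dontSave bool // placeholder: saving disabled until the real meta loads
}

// IsSuspended bounds-checks under the lock and reports false rather
// than panicking for an index the placeholder does not know about yet.
func (p *poolMeta) IsSuspended(idx int) bool {
	p.mu.RLock()
	defer p.mu.RUnlock()
	if idx < 0 || idx >= len(p.Pools) {
		return false
	}
	return p.Pools[idx].Suspended
}

// save refuses to persist the placeholder, so it can never overwrite
// the value already on disk.
func (p *poolMeta) save(ctx context.Context) error {
	p.mu.RLock()
	defer p.mu.RUnlock()
	if p.dontSave {
		return nil
	}
	// ... persist to disk ...
	return nil
}
```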
If the target went offline while MinIO was down, error
once while trying to send the message. If the target goes
offline while the MinIO server is running, that is already
caught by the ping() call, which errors out if the target
is offline.
Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
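A sketch of the intended behavior (types and names hypothetical): report the failure once for a target that was already offline at startup; targets that drop later are caught by ping().
```
package main

import (
	"log"
	"sync"
)

type target struct {
	mu          sync.Mutex
	name        string
	errReported bool
}

// send reports the offline error once, not once per queued message.
func (t *target) send(msg []byte, write func([]byte) error) error {
	err := write(msg)
	if err == nil {
		return nil
	}
	t.mu.Lock()
	defer t.mu.Unlock()
	if !t.errReported {
		t.errReported = true
		log.Printf("target %s offline: %v", t.name, err)
	}
	return err
}
```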
If an erasure upgrade is needed, rely on the in-memory
values instead of performing a "DiskInfo()" call.

https://brendangregg.com/blog/2016-09-03/sudden-disk-busy.html

For HDDs these calls are problematic; let's avoid them,
because there is no value in being absolutely strict here
in terms of parity. We are okay to increase parity based
on the in-memory online/offline ratio.
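A minimal sketch of the idea (names and thresholds hypothetical): derive the parity upgrade from counters we already hold in memory, never from a fresh DiskInfo() round trip.
```
package main

// parityForWrite bumps parity when drives are offline, using only
// in-memory online/offline counts, capped at half the stripe as is
// usual for erasure coding.
func parityForWrite(onlineDrives, totalDrives, defaultParity int) int {
	offline := totalDrives - onlineDrives
	if offline == 0 {
		return defaultParity
	}
	parity := defaultParity + offline
	if parity > totalDrives/2 {
		parity = totalDrives / 2
	}
	return parity
}
```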