Mirror of https://github.com/minio/minio.git
Avoid synchronizing usage writes (#11560)
If the periodic `case <-t.C:` save gets held up for a long time, it ends up synchronizing all disk writes for saving the caches.

Add jitter to the per-set writes so they don't line up, and don't hold a lock for the write, since it isn't needed anyway. If an outage prevents writes for a long while, also add individual waits for each disk in case a queue has built up.

Furthermore, limit the number of buffers kept to roughly 2 GiB, since this could get huge in large clusters. This is not a hard limit, but it should be enough for normal operation.
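As a rough illustration of the jitter idea (this is a sketch, not minio's scanner code; `saveCache`, `setID`, and the one-second interval are hypothetical), each set can offset its periodic save by a random fraction of the interval so writes don't line up across sets, and the save itself runs without holding any shared lock:

```go
package main

import (
	"context"
	"fmt"
	"math/rand"
	"time"
)

// saveCache stands in for persisting one erasure set's usage cache to disk.
// It is a placeholder, not minio's implementation.
func saveCache(setID int) {
	fmt.Printf("set %d: cache saved at %s\n", setID, time.Now().Format(time.RFC3339))
}

// periodicSave saves a set's cache on a fixed interval, but shifts each set's
// schedule by a random initial jitter so the per-set writes do not synchronize.
func periodicSave(ctx context.Context, setID int, interval time.Duration) {
	// Random initial offset in [0, interval) keeps sets out of lock-step.
	jitter := time.Duration(rand.Int63n(int64(interval)))
	select {
	case <-ctx.Done():
		return
	case <-time.After(jitter):
	}

	t := time.NewTicker(interval)
	defer t.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-t.C:
			// No shared lock is held here; the cache snapshot is written as-is.
			saveCache(setID)
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	for i := 0; i < 4; i++ {
		go periodicSave(ctx, i, time.Second)
	}
	<-ctx.Done()
	time.Sleep(100 * time.Millisecond) // let the final prints flush
}
```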
@@ -357,9 +357,14 @@ func newErasureSets(ctx context.Context, endpoints Endpoints, storageDisks []Sto
 	mutex := newNSLock(globalIsDistErasure)
 
+	// Number of buffers, max 2GB.
+	n := setCount * setDriveCount
+	if n > 100 {
+		n = 100
+	}
 	// Initialize byte pool once for all sets, bpool size is set to
 	// setCount * setDriveCount with each memory upto blockSizeV1.
-	bp := bpool.NewBytePoolCap(setCount*setDriveCount, blockSizeV1, blockSizeV1*2)
+	bp := bpool.NewBytePoolCap(n, blockSizeV1, blockSizeV1*2)
 
 	for i := 0; i < setCount; i++ {
 		s.erasureDisks[i] = make([]StorageAPI, setDriveCount)
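For a sense of what the cap bounds: with the buffer count limited to 100 and each pooled buffer allowed to grow up to twice the erasure block size, the pool's worst case stays around 2 GB no matter how many sets and drives the cluster has. The sketch below only works through that arithmetic; the cluster dimensions are made up, and the 10 MiB value for `blockSizeV1` is an assumption about minio's constants, not something stated in this diff.

```go
package main

import "fmt"

const (
	mib         = 1 << 20
	blockSizeV1 = 10 * mib // assumed erasure block size, not part of this diff
	maxPoolBufs = 100      // cap applied in the change above
)

func main() {
	// Buffer count before and after the cap, for a hypothetical large cluster.
	setCount, setDriveCount := 32, 16
	n := setCount * setDriveCount
	if n > maxPoolBufs {
		n = maxPoolBufs
	}

	// Each pooled buffer may grow up to the pool's cap width (blockSizeV1*2),
	// so the worst-case memory held by the pool is n * 2 * blockSizeV1.
	worstCase := n * 2 * blockSizeV1
	fmt.Printf("buffers: %d (uncapped would be %d)\n", n, setCount*setDriveCount)
	fmt.Printf("worst-case pool memory: %d MiB (~2 GB)\n", worstCase/mib)
}
```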