properly reload a fresh drive when found in a failed state during startup (#20145)

When a single-node, multiple-drive deployment is started while one of its
drives is in a failed state, a fresh disk swapped in as a replacement is
not properly healed unless the user restarts the node.

Fix this by always adding the fresh disk to globalLocalDrivesMap. Also
remove the globalLocalDrives slice for simplification; a map is
sufficient for storing the node's local drives, since the order of a
node's local drives is not defined anyway.
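
The core of the fix, as a minimal sketch: whenever a fresh disk comes
online (server startup, a disk reconnect, or HealFormat), it is
registered in globalLocalDrivesMap while holding globalLocalDrivesMu.
The helper name and the endpoint-string key below are assumptions for
illustration, not code taken from this commit:

    // registerLocalDrive is a hypothetical helper illustrating the fix:
    // a freshly formatted or reconnected disk is always recorded in
    // globalLocalDrivesMap, so later lookups see it without a restart.
    func registerLocalDrive(disk StorageAPI) {
    	globalLocalDrivesMu.Lock()
    	defer globalLocalDrivesMu.Unlock()
    	if globalLocalDrivesMap == nil {
    		globalLocalDrivesMap = make(map[string]StorageAPI)
    	}
    	// Key by the drive endpoint, assumed stable per local drive.
    	globalLocalDrivesMap[disk.Endpoint().String()] = disk
    }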
Author: Anis Eleuch
Date:   2024-07-25 00:30:33 +01:00
Committer: GitHub
Parent: 33c101544d
Commit: b7f319b62a

9 changed files with 23 additions and 34 deletions

@@ -414,10 +414,9 @@ var (
 	globalServiceFreezeCnt int32
 	globalServiceFreezeMu  sync.Mutex // Updates.
-	// List of local drives to this node, this is only set during server startup,
-	// and is only mutated by HealFormat. Hold globalLocalDrivesMu to access.
-	globalLocalDrives []StorageAPI
-	globalLocalDrivesMap = make(map[string]StorageAPI)
+	// Map of local drives to this node, this is set during server startup,
+	// disk reconnect and mutated by HealFormat. Hold globalLocalDrivesMu to access.
+	globalLocalDrivesMap map[string]StorageAPI
 	globalLocalDrivesMu    sync.RWMutex
 	globalDriveMonitoring = env.Get("_MINIO_DRIVE_ACTIVE_MONITORING", config.EnableOn) == config.EnableOn
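
With the slice removed, callers that need all local drives can snapshot
the map under the read lock. Go map iteration order is unspecified,
which is acceptable here because, per the commit message, drive order
carries no meaning. A minimal sketch, with the helper name assumed
rather than taken from this commit:

    // localDriveList returns a point-in-time copy of the node's local
    // drives. The returned slice has no defined order.
    func localDriveList() []StorageAPI {
    	globalLocalDrivesMu.RLock()
    	defer globalLocalDrivesMu.RUnlock()
    	drives := make([]StorageAPI, 0, len(globalLocalDrivesMap))
    	for _, drive := range globalLocalDrivesMap {
    		drives = append(drives, drive)
    	}
    	return drives
    }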