mirror of
https://github.com/minio/minio.git
synced 2025-11-20 09:56:07 -05:00
Assume local endpoints appropriately in k8s deployments (#8375)
On Kubernetes/Docker setups, DNS can resolve inconsistently: the same endpoints with multiple disks may come online with some of them reported as local and some as not. This situation cannot happen on static deployments; it is only possible in orchestrated deployments with dynamic DNS. The following change ensures that if any endpoint reports itself as local for a given host, then all endpoints for that host are treated as local. This assumption holds in all scenarios and is safe to make for a given host. This PR also adds validation so that we do not crash the server during dsync initialization if there are bugs in the endpoints list. Thanks to Daniel Valdivia <hola@danielvaldivia.com> for reproducing this; this fix is needed as part of the https://github.com/minio/m3 project.
commit 36e12a6038
parent 42531db37e
committed by kannappanr
@@ -276,7 +276,11 @@ func serverMain(ctx *cli.Context) {
 	// Set nodes for dsync for distributed setup.
 	if globalIsDistXL {
-		globalDsync, err = dsync.New(newDsyncNodes(globalEndpoints))
+		clnts, myNode, err := newDsyncNodes(globalEndpoints)
+		if err != nil {
+			logger.Fatal(err, "Unable to initialize distributed locking on %s", globalEndpoints)
+		}
+		globalDsync, err = dsync.New(clnts, myNode)
 		if err != nil {
 			logger.Fatal(err, "Unable to initialize distributed locking on %s", globalEndpoints)
 		}
 	}
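The diff above splits node construction from `dsync.New` so that a broken endpoints list surfaces as an error instead of a crash. A minimal sketch of that validation pattern, with hypothetical simplified types (the real `newDsyncNodes` builds RPC lock clients from `globalEndpoints`), is:

```go
package main

import (
	"errors"
	"fmt"
)

// lockClient is a hypothetical stand-in for a dsync lock RPC client.
type lockClient struct {
	host    string
	isLocal bool
}

// newDsyncNodes sketches the validation pattern from the commit: rather
// than assuming a local node exists in the endpoints list, it returns an
// error that the caller can report via logger.Fatal instead of panicking
// on a bad index.
func newDsyncNodes(endpoints []lockClient) (clnts []lockClient, myNode int, err error) {
	myNode = -1
	for i, ep := range endpoints {
		clnts = append(clnts, ep)
		// Remember the first endpoint that resolved as local.
		if ep.isLocal && myNode == -1 {
			myNode = i
		}
	}
	if myNode == -1 {
		// Bug in the endpoints list: no endpoint points to this host.
		return nil, -1, errors.New("no endpoint pointing to the local host was found")
	}
	return clnts, myNode, nil
}

func main() {
	eps := []lockClient{{"minio-1:9000", false}, {"minio-0:9000", true}}
	clnts, myNode, err := newDsyncNodes(eps)
	fmt.Println(len(clnts), myNode, err)
}
```

The caller then checks the returned error before passing `clnts` and `myNode` on, which is exactly the shape of the guarded call sequence in the diff.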