mirror of https://github.com/minio/minio.git
4550ac6fff
This refactor is done for a few reasons:

- to avoid deadlocks in scenarios where the number of nodes is smaller than the actual erasure stripe count, in which case N participating local lockers can lead to deadlocks across systems.
- to avoid expiry routines running thousands of separate network operations and routes per disk, when each of them is still accessing one single local entity.
- it is ideal to have a single globalLockServer per instance.
- in a 32-node deployment, each server group is still concentrated on the same set of lockers that participate during the write/read phase, unlike the previous minio/dsync implementation - this avoids sending 32 requests; instead we send at most as many requests as there are unique nodes participating in a write/read phase.
- it reduces overall chattiness on smaller setups.
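The "unique nodes" point above can be sketched as a simple deduplication step: instead of contacting one locker per drive, a write/read phase contacts at most one locker per participating node. The `Disk` type and `lockersForStripe` function below are hypothetical names for illustration, not the actual MinIO API:

```go
package main

import "fmt"

// Disk is a hypothetical drive descriptor: each drive lives on some node.
type Disk struct {
	Node string // node endpoint, e.g. "http://node1:9000" (illustrative)
	Path string // drive path on that node
}

// lockersForStripe returns one locker endpoint per unique node in the
// stripe, rather than one per drive. This mirrors the idea in the commit:
// send at most one lock request per participating node, not per disk.
func lockersForStripe(disks []Disk) []string {
	seen := make(map[string]bool)
	var lockers []string
	for _, d := range disks {
		if !seen[d.Node] {
			seen[d.Node] = true
			lockers = append(lockers, d.Node)
		}
	}
	return lockers
}

func main() {
	// 8 drives spread across 2 nodes: a per-drive scheme would contact
	// 8 lockers; the per-node scheme contacts only 2.
	var disks []Disk
	for i := 0; i < 4; i++ {
		disks = append(disks, Disk{Node: "http://node1:9000", Path: fmt.Sprintf("/disk%d", i)})
		disks = append(disks, Disk{Node: "http://node2:9000", Path: fmt.Sprintf("/disk%d", i)})
	}
	fmt.Println(len(lockersForStripe(disks))) // prints 2
}
```

This is only a sketch of the routing idea; the actual implementation keeps the locker clients on the server side and integrates with the erasure set topology.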
.gitignore
drwmutex.go
drwmutex_test.go
dsync-server_test.go
dsync.go
dsync_test.go
rpc-client-impl_test.go
rpc-client-interface.go