Added clear subcommand for control lock (#3013)

Added a clear subcommand for control lock with the following options:

```
  3. Clear lock named 'bucket/object' (exact match).
    $ minio control lock clear http://localhost:9000/bucket/object

  4. Clear all locks with names that start with 'bucket/prefix' (wildcard match).
    $ minio control lock --recursive clear http://localhost:9000/bucket/prefix

  5. Clear all locks older than 10 minutes.
    $ minio control lock --older-than=10m clear http://localhost:9000/

  6. Clear all locks with names that start with 'bucket/a' and that are older than 1 hour.
    $ minio control lock --recursive --older-than=1h clear http://localhost:9000/bucket/a
```
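
The `--recursive` and `--older-than` filters compose: a lock is cleared only if its name matches (exactly, or by prefix when recursive) and it is at least as old as the given duration. As a rough illustration, here is a minimal sketch of such a filter in Go; `lockInfo`, `shouldClear`, and their fields are hypothetical stand-ins, not MinIO's actual types:

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// lockInfo is a hypothetical record of an active lock; a name and an
// acquisition time are enough to illustrate the clear filters above.
type lockInfo struct {
	name       string    // e.g. "bucket/object"
	acquiredAt time.Time // when the lock was granted
}

// shouldClear reports whether a lock matches the clear criteria:
// exact name match (or prefix match when --recursive is set) and,
// optionally, a minimum age as given by --older-than.
func shouldClear(l lockInfo, pattern string, recursive bool, olderThan time.Duration) bool {
	if recursive {
		if !strings.HasPrefix(l.name, pattern) {
			return false
		}
	} else if l.name != pattern {
		return false
	}
	return time.Since(l.acquiredAt) >= olderThan
}

func main() {
	// Values like "10m" or "1h" parse with the standard library.
	age, err := time.ParseDuration("10m")
	if err != nil {
		panic(err)
	}
	l := lockInfo{name: "bucket/prefix/object", acquiredAt: time.Now().Add(-time.Hour)}
	fmt.Println(shouldClear(l, "bucket/prefix", true, age)) // true
}
```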
Author: Frank (2016-10-20 22:15:28 +02:00)
Committed by: Harshavardhana
Commit: 0e2cd1a64d (parent: 6274727b71)
5 changed files with 244 additions and 7 deletions


```
@@ -251,6 +251,20 @@ func (n *nsLockMap) ForceUnlock(volume, path string) {
	n.lockMapMutex.Lock()
	defer n.lockMapMutex.Unlock()
	// Clarification on operation:
	// - In case of FS or XL we call ForceUnlock on the local nsMutex
	//   (since there is only a single server) which will cause the 'stuck'
	//   mutex to be removed from the map. Existing operations for this
	//   will continue to be blocked (and timeout). New operations on this
	//   resource will use a new mutex and proceed normally.
	//
	// - In case of a distributed setup (using dsync), there is no need to call
	//   ForceUnlock on the server where the lock was acquired and is presumably
	//   'stuck'. Instead dsync.ForceUnlock() will release the underlying locks
	//   that participated in granting the lock. Any pending dsync locks that
	//   are blocking can now proceed as normal and any new locks will also
	//   participate normally.
	if n.isDist { // For distributed mode, broadcast ForceUnlock message.
		dsync.NewDRWMutex(pathutil.Join(volume, path)).ForceUnlock()
	}
```
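
To make the FS/XL branch of the comment above concrete, the following is a minimal sketch of a local force-unlock, assuming a simplified map of per-resource mutexes; `lockMap`, `nsLock`, and `forceUnlock` are hypothetical names, not the actual `nsLockMap` internals (which also track additional state such as reference counts):

```go
package main

import (
	"fmt"
	"path"
	"sync"
)

// nsLock wraps the mutex guarding one namespace entry.
type nsLock struct {
	sync.RWMutex // the possibly 'stuck' mutex
}

// lockMap is a simplified stand-in for the server's lock registry.
type lockMap struct {
	mtx   sync.Mutex
	locks map[string]*nsLock
}

// forceUnlock drops the entry for volume/p from the map. Goroutines already
// blocked on the old mutex stay blocked (and eventually time out), while new
// operations on the same resource allocate a fresh mutex and proceed
// normally, mirroring the single-server behaviour described above.
func (m *lockMap) forceUnlock(volume, p string) {
	m.mtx.Lock()
	defer m.mtx.Unlock()
	delete(m.locks, path.Join(volume, p))
}

func main() {
	m := &lockMap{locks: map[string]*nsLock{"bucket/object": {}}}
	m.forceUnlock("bucket", "object")
	fmt.Println(len(m.locks)) // 0: the stuck entry is gone
}
```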