By default, Go's net.Listen only listens on the first IP when the
host resolves to multiple IPs.
This change addresses a problem where, for example, your `/etc/hosts`
has entries like the following:
```
127.0.1.1 minio1
192.168.1.10 minio1
```
Trying to start minio as
```
minio server --address "minio1:9001" ~/Photos
```
Causes the minio server to bind only to "127.0.1.1", which
is incorrect behavior since we are generally interested in
`192.168.1.10` as well.
This patch addresses the issue: if the hostname is resolvable,
we look up the list of addresses associated with it and bind
on all of them, which is the expected behavior.
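A minimal sketch of the idea, assuming we split the host and port, resolve
the host, and open one listener per returned address (the helper below is
illustrative, not the actual MinIO code):
```go
package main

import (
	"fmt"
	"net"
)

// listenOnAllIPs is an illustrative helper: resolve the host part of
// hostPort and open a TCP listener on every address it maps to.
func listenOnAllIPs(hostPort string) ([]net.Listener, error) {
	host, port, err := net.SplitHostPort(hostPort)
	if err != nil {
		return nil, err
	}
	ips, err := net.LookupIP(host)
	if err != nil {
		// Host is not resolvable - fall back to the default behavior.
		l, lerr := net.Listen("tcp", hostPort)
		if lerr != nil {
			return nil, lerr
		}
		return []net.Listener{l}, nil
	}
	var listeners []net.Listener
	for _, ip := range ips {
		l, err := net.Listen("tcp", net.JoinHostPort(ip.String(), port))
		if err != nil {
			return nil, err
		}
		listeners = append(listeners, l)
	}
	return listeners, nil
}

func main() {
	// With the /etc/hosts entries above, this would yield listeners on
	// both 127.0.1.1:9001 and 192.168.1.10:9001.
	listeners, err := listenOnAllIPs("minio1:9001")
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, l := range listeners {
		fmt.Println("listening on", l.Addr())
	}
}
```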
- The benchmark initialization function was not taking the instance
type (FS/XL) into account and was using the XL ObjectLayer even for FS
benchmarks.
- This was leading to incorrect benchmark results for FS-related
benchmarks.
- The fix takes the instance type (FS/XL) into account and correctly
returns the FS backend for FS benchmarks, as sketched below.
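A rough sketch of the intent; ObjectLayer and the prepare* helpers below
are hypothetical stand-ins for the real benchmark utilities:
```go
package main

import "fmt"

// ObjectLayer, prepareFSBackend and prepareXLBackend are illustrative
// stand-ins for the real types and helpers used by the benchmarks.
type ObjectLayer interface{}

func prepareFSBackend() (ObjectLayer, error) { return "fs-backend", nil }
func prepareXLBackend() (ObjectLayer, error) { return "xl-backend", nil }

// prepareBenchmarkBackend returns the backend matching the requested
// instance type instead of unconditionally building the XL backend.
func prepareBenchmarkBackend(instanceType string) (ObjectLayer, error) {
	switch instanceType {
	case "FS":
		return prepareFSBackend()
	case "XL":
		return prepareXLBackend()
	default:
		return nil, fmt.Errorf("unknown instance type %q", instanceType)
	}
}

func main() {
	obj, err := prepareBenchmarkBackend("FS")
	fmt.Println(obj, err) // fs-backend <nil>
}
```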
Do not attempt to fetch volume/drive information on every i/o
operation. We were doing this in all calls in `posix.go`, which in
turn created a terrible situation on Windows. The issue does not
affect the i/o path on Unix platforms, since statvfs calls take on
the order of microseconds there.
This verification is only needed during startup, so we now let things
fail at a later stage on Windows.
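A minimal sketch of a startup-only check, assuming a Linux build (names
are hypothetical; the real code differs and handles Windows separately):
```go
package main

import (
	"errors"
	"fmt"
	"syscall"
)

// checkDiskFree is a hypothetical startup-time verification: confirm
// the volume backing diskPath has usable space once, instead of
// repeating the statfs on every read/write call.
func checkDiskFree(diskPath string) error {
	var stat syscall.Statfs_t
	if err := syscall.Statfs(diskPath, &stat); err != nil {
		return err
	}
	if stat.Bavail*uint64(stat.Bsize) == 0 {
		return errors.New("disk full: " + diskPath)
	}
	return nil
}

func main() {
	// Run once during initialization; the i/o path skips this entirely.
	if err := checkDiskFree("/tmp"); err != nil {
		fmt.Println("startup check failed:", err)
		return
	}
	fmt.Println("disk ok")
}
```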
- Reads and writes of uploads.json in XL now use quorum for
newMultipart, completeMultipart and abortMultipart operations.
- Each disk's `uploads.json` file is read and updated independently
when adding or removing an upload ID from the file. Quorum is used to
decide whether the high-level operation actually succeeded (a rough
sketch follows this list).
- Refactor FS code to simplify the flow, and fix a bug while reading
uploads.json.
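A rough, hypothetical sketch of the quorum idea: the per-disk update runs
independently on every disk, and the high-level operation succeeds only if
at least quorum disks succeeded (helper names are illustrative, not the
real XL code):
```go
package main

import (
	"errors"
	"fmt"
)

// applyWithQuorum runs a per-disk uploads.json update on every disk and
// reports success only if at least quorum disks succeeded.
func applyWithQuorum(disks []string, quorum int, update func(disk string) error) error {
	success := 0
	for _, disk := range disks {
		if err := update(disk); err != nil {
			continue // individual disk failures are tolerated
		}
		success++
	}
	if success < quorum {
		return errors.New("write quorum not met for uploads.json")
	}
	return nil
}

func main() {
	disks := []string{"disk1", "disk2", "disk3", "disk4"}
	err := applyWithQuorum(disks, len(disks)/2+1, func(disk string) error {
		fmt.Println("updating uploads.json on", disk)
		return nil // pretend the update succeeded
	})
	fmt.Println("result:", err)
}
```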
For command line arguments we currently follow
- <node-1>:/path ... <node-n>:/path
This patch changes this to
- http://<node-1>/path ... http://<node-n>/path
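A small sketch of how the URL-style endpoints can be split with the
standard library (the helper name is illustrative):
```go
package main

import (
	"fmt"
	"net/url"
)

// splitEndpoint is an illustrative parser for the new URL-style
// endpoint arguments, e.g. "http://node-1/path".
func splitEndpoint(arg string) (host, path string, err error) {
	u, err := url.Parse(arg)
	if err != nil {
		return "", "", err
	}
	return u.Host, u.Path, nil
}

func main() {
	host, path, err := splitEndpoint("http://node-1/path")
	fmt.Println(host, path, err) // node-1 /path <nil>
}
```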
In FS or single-node XL mode, there is no need to save listener
configuration to persistent storage. As there is only one server, if it
is restarted, any connected listenBucketAPI clients are disconnected
and will have to reconnect - so there is nothing to actually store.
This incidentally solves #3052 by avoiding the problem.
- When modifying notification configuration
- When modifying listener configuration
- When modifying policy configuration
With this change we also drop the early check of whether the bucket
exists, since that check takes a read lock and deadlocks against the
outer write lock.
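The deadlock is the usual nested-lock pattern; a minimal illustration
with sync.RWMutex (not the actual namespace-lock code):
```go
package main

import "sync"

var mu sync.RWMutex

// bucketExists mirrors the early existence check: it takes a read lock.
func bucketExists() bool {
	mu.RLock()
	defer mu.RUnlock()
	return true
}

func main() {
	mu.Lock() // outer write lock held while modifying configuration
	defer mu.Unlock()

	// Taking a read lock on the same lock while the write lock is held
	// never returns - this is why the early bucket-exists check was removed.
	_ = bucketExists() // deadlocks here
}
```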
opsID, a variable on the stack, changes over the course of the
CompleteMultipartUpload function in xl-v1-multipart.go. It was being
used in a closure passed to a defer statement. The variables captured
by such a closure are evaluated only when the deferred function runs,
not when the defer statement is declared, so it is incorrect to depend
on the values stack variables happen to have at the end of the
function, when deferred functions are executed.
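A minimal illustration of the pitfall, separate from the MinIO code
itself: a deferred closure sees the value the variable has when the
function returns, while passing the value as an argument captures it at
the point of the defer statement:
```go
package main

import "fmt"

func main() {
	opsID := "op-1"

	// Buggy pattern: the closure captures opsID by reference, so it
	// prints the value opsID has when main returns ("op-2").
	defer func() { fmt.Println("captured:", opsID) }()

	// Correct pattern: opsID is evaluated here and passed as an
	// argument, so the deferred call prints "op-1".
	defer func(id string) { fmt.Println("argument:", id) }(opsID)

	opsID = "op-2"
}
```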
Added a clear subcommand for control lock with the following options:
```
3. Clear lock named 'bucket/object' (exact match).
$ minio control lock clear http://localhost:9000/bucket/object
4. Clear all locks with names that start with 'bucket/prefix' (wildcard match).
$ minio control lock --recursive clear http://localhost:9000/bucket/prefix
5. Clear all locks older than 10 minutes.
$ minio control lock --older-than=10m clear http://localhost:9000/
6. Clear all locks with names that start with 'bucket/a' and that are older than 1 hour.
$ minio control lock --recursive --older-than=1h clear http://localhost:9000/bucket/a
```
This makes sure that when SSL is enabled (in FS/single-node mode),
the server address is picked up from the --address option (which needs
to include the hostname for SSL verification and has to be provided
appropriately by the user), instead of just using ":<port>".
In a distributed setup, the server should not perform any operation
on the storage layer after it is exported via RPC. For example,
cleaning up temporary directories under .minio.sys/tmp may interfere
with ongoing PUT objects being served by the distributed setup.
In a multipart upload scenario, disks going down and coming back up
can lead to certain parts missing on the disk/server that went down.
This is a valid case, since those blocks can be missing and should be
healed through the heal operation. But we are not supposed to fail
prematurely, since we still have enough data on the other disks to
satisfy read quorum.
This fix relaxes the previous assumption and fixes a major corruption
issue reproduced by @vadmeste.
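A hypothetical sketch of the relaxed check: instead of failing on the
first missing part, count the disks that still have the part and fail
only when fewer than dataBlocks (the read quorum) remain:
```go
package main

import (
	"errors"
	"fmt"
)

// canReadPart is illustrative: with an erasure set of len(partPresent)
// disks and dataBlocks data shards, a part is still readable as long as
// at least dataBlocks disks report the part as present.
func canReadPart(partPresent []bool, dataBlocks int) error {
	available := 0
	for _, ok := range partPresent {
		if ok {
			available++
		}
	}
	if available < dataBlocks {
		return errors.New("read quorum not available for part")
	}
	return nil // enough disks; missing copies can be healed later
}

func main() {
	// 4+4 setup where two disks were briefly down during the upload.
	present := []bool{true, true, true, true, true, true, false, false}
	fmt.Println(canReadPart(present, 4)) // <nil> - still within read quorum
}
```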
Fixes #2976
Don't close the socket while re-initializing notify-listeners, as the
RPC client object is shared between notify-listeners and peer clients.
Also, improves SendRPC() readability by using GetPeerClient().
Fixes a serialization bug - encoding/gob does not directly support
serializing `map[string]interface{}`, so we serialize to JSON and send
a byte array in the RPC call, then deserialize and apply the update on
the receiver.
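A sketch of the workaround with a hypothetical RPC argument type: the map
is marshalled to JSON on the sender and unmarshalled on the receiver, so
only a byte slice crosses the gob-encoded RPC boundary:
```go
package main

import (
	"encoding/json"
	"fmt"
)

// jsonPayloadArgs is a hypothetical RPC argument carrying the update as
// pre-serialized JSON, since encoding/gob cannot encode
// map[string]interface{} values directly.
type jsonPayloadArgs struct {
	Payload []byte // JSON-encoded map[string]interface{}
}

func main() {
	// Sender side: serialize the map to JSON before making the RPC call.
	update := map[string]interface{}{"accessKey": "minio", "version": 2}
	buf, err := json.Marshal(update)
	if err != nil {
		panic(err)
	}
	args := jsonPayloadArgs{Payload: buf}

	// Receiver side: deserialize and apply the update.
	var received map[string]interface{}
	if err := json.Unmarshal(args.Payload, &received); err != nil {
		panic(err)
	}
	fmt.Println(received) // map[accessKey:minio version:2]
}
```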