Commit Graph

1593 Commits

Author SHA1 Message Date
Harshavardhana 3831cc9e3b
fix: [fs] CompleteMultipart use trie structure for partMatch (#10522)
performance improves by around 100x or more

```
go test -v -run NONE -bench BenchmarkGetPartFile
goos: linux
goarch: amd64
pkg: github.com/minio/minio/cmd
BenchmarkGetPartFileWithTrie
BenchmarkGetPartFileWithTrie-4          1000000000               0.140 ns/op           0 B/op          0 allocs/op
PASS
ok      github.com/minio/minio/cmd      1.737s
```

fixes #10520
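
A minimal standalone sketch of the idea (not the code from this PR): build a prefix trie over the directory entries once, then look up each part by its prefix instead of scanning the entries linearly for every part.

```
package main

import "fmt"

// node is a minimal prefix-trie node keyed by byte.
type node struct {
	children map[byte]*node
	entry    string // full entry name stored at the terminal node
}

func newNode() *node { return &node{children: make(map[byte]*node)} }

// insert adds an entry (e.g. "00001.deadbeef") to the trie.
func (n *node) insert(entry string) {
	cur := n
	for i := 0; i < len(entry); i++ {
		next, ok := cur.children[entry[i]]
		if !ok {
			next = newNode()
			cur.children[entry[i]] = next
		}
		cur = next
	}
	cur.entry = entry
}

// prefixMatch returns one entry stored under the given prefix, if any.
func (n *node) prefixMatch(prefix string) (string, bool) {
	cur := n
	for i := 0; i < len(prefix); i++ {
		next, ok := cur.children[prefix[i]]
		if !ok {
			return "", false
		}
		cur = next
	}
	// Descend to the first stored entry below this prefix.
	for cur.entry == "" && len(cur.children) > 0 {
		for _, child := range cur.children {
			cur = child
			break
		}
	}
	return cur.entry, cur.entry != ""
}

func main() {
	root := newNode()
	for _, e := range []string{"00001.deadbeef", "00002.cafebabe", "00010.feedface"} {
		root.insert(e)
	}
	fmt.Println(root.prefixMatch("00002.")) // "00002.cafebabe" true
}
```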
2020-09-21 01:18:13 -07:00
Harshavardhana 1cf322b7d4
change leader locker only for crawler (#10509) 2020-09-18 11:15:54 -07:00
poornas 00555c747e
Strip standard ports off remote target url (#10498) 2020-09-17 11:09:50 -07:00
Harshavardhana d616d8a857
serialize replication and feed it through task model (#10500)
this allows for eventually controlling the concurrency
of replication and overall control of throughput
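
A hedged sketch of the task-model approach, with illustrative names only: replication work is queued onto a bounded channel drained by a fixed pool of workers, so concurrency and throughput can be tuned in one place.

```
package main

import (
	"context"
	"fmt"
	"sync"
)

// replicationTask is an illustrative unit of replication work, not
// MinIO's actual type.
type replicationTask struct {
	bucket, object string
}

// replicationPool feeds tasks through a bounded channel to a fixed
// number of workers, so replication concurrency (and therefore
// throughput) is controlled in one place.
type replicationPool struct {
	tasks chan replicationTask
	wg    sync.WaitGroup
}

func newReplicationPool(ctx context.Context, workers, queueDepth int) *replicationPool {
	p := &replicationPool{tasks: make(chan replicationTask, queueDepth)}
	for i := 0; i < workers; i++ {
		p.wg.Add(1)
		go func() {
			defer p.wg.Done()
			for {
				select {
				case <-ctx.Done():
					return
				case t, ok := <-p.tasks:
					if !ok {
						return
					}
					// placeholder for the actual replication call
					fmt.Println("replicating", t.bucket+"/"+t.object)
				}
			}
		}()
	}
	return p
}

// queue hands a task to the workers; callers never replicate inline.
func (p *replicationPool) queue(t replicationTask) { p.tasks <- t }

// close stops accepting tasks and waits for the workers to drain the queue.
func (p *replicationPool) close() { close(p.tasks); p.wg.Wait() }

func main() {
	pool := newReplicationPool(context.Background(), 4, 128)
	for i := 0; i < 10; i++ {
		pool.queue(replicationTask{bucket: "mybucket", object: fmt.Sprintf("obj-%d", i)})
	}
	pool.close()
}
```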
2020-09-16 16:04:55 -07:00
Anis Elleuch 8ea55f9dba
obd: Add console log to OBD output (#10372) 2020-09-15 18:02:54 -07:00
Harshavardhana 0ee9678190
fix: add missing delete marker created filter (#10481) 2020-09-14 21:32:52 -07:00
Harshavardhana 0104af6bcc
delayed locks until we have started reading the body (#10474)
This is to ensure that Go contexts work properly. After some
interesting experiments I found that Go net/http doesn't
cancel the context when the Body is non-zero and hasn't been
read till EOF.

The following gist explains this; it can lead to a pile-up
of go-routines on the server which will never be canceled
and will only die at a much later point in time, which can
simply overwhelm the server.

https://gist.github.com/harshavardhana/c51dcfd055780eaeb71db54f9c589150

To avoid this, refactor the locking such that we take locks after we
have started reading from the body, and only take locks when needed.

Also, remove contextReader as it's not useful and doesn't work as
expected: the context is not canceled until the body reaches EOF, so
there is no point in wrapping the body with a context and putting a
`select {` on it, which only adds unnecessary CPU overhead.

We will still use the context to cancel the lockers etc.
An additional simplification in the locker code avoids timers:
re-using them is a complicated ordeal, so avoid them in the hot path.
Since locking is very common, this may avoid lots of allocations.
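
A hedged sketch of the ordering described above (not MinIO's actual handlers): start consuming the request body before acquiring any namespace lock, so a client that never sends data cannot pin a lock.

```
package main

import (
	"io"
	"log"
	"net/http"
	"sync"
)

// nsLock is a stand-in for MinIO's namespace locker.
var nsLock sync.Mutex

// putObjectHandler sketches the ordering described in the commit: start
// consuming the request body first, then take the lock only once data is
// actually flowing, and hold it only around the part that needs it.
func putObjectHandler(w http.ResponseWriter, r *http.Request) {
	// Begin reading the body before any locking, so a client that never
	// sends data (or disappears) cannot pin a lock indefinitely.
	buf := make([]byte, 32*1024)
	n, err := io.ReadFull(r.Body, buf)
	if err != nil && err != io.EOF && err != io.ErrUnexpectedEOF {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	// Lock only around the section that mutates shared state.
	nsLock.Lock()
	defer nsLock.Unlock()

	_ = buf[:n] // placeholder: persist the data to its final location
	w.WriteHeader(http.StatusOK)
}

func main() {
	http.HandleFunc("/put", putObjectHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```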
2020-09-14 15:57:13 -07:00
Andreas Auernhammer 224daee391
fix nats TLS unit tests (#10476)
This commit fixes the nats TLS tests by generating new certificates
(root CA, server and client) - each valid for 10y. The new certificates
don't have a common name (deprecated by X.509) but SANs instead.

Since Go 1.15 the Go `crypto/x509` package rejects certificates that
only have a common name and no SAN. See: https://golang.org/doc/go1.15#commonname
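
A hedged sketch of generating such a certificate with Go's `crypto/x509`, relying on SANs and leaving the CommonName empty; this is an illustration, not the script used to produce the test fixtures.

```
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"test"}}, // no CommonName on purpose
		// SANs are what Go 1.15+ actually verifies against.
		DNSNames:  []string{"localhost", "nats.example.com"},
		NotBefore: time.Now(),
		NotAfter:  time.Now().AddDate(10, 0, 0), // valid for 10 years
		KeyUsage:  x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{
			x509.ExtKeyUsageServerAuth, x509.ExtKeyUsageClientAuth,
		},
	}

	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```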
2020-09-14 13:19:46 -07:00
Harshavardhana 48919de301
fix: for defer'ed deleteObject use internal context (#10463) 2020-09-11 06:39:19 -07:00
Anis Elleuch af88772a78
lifecycle: NoncurrentVersionExpiration considers noncurrent version age (#10444)
From https://docs.aws.amazon.com/AmazonS3/latest/dev/intro-lifecycle-rules.html#intro-lifecycle-rules-actions

```
When specifying the number of days in the NoncurrentVersionTransition
and NoncurrentVersionExpiration actions in a Lifecycle configuration,
note the following:

It is the number of days from when the version of the object becomes
noncurrent (that is, when the object is overwritten or deleted), that
Amazon S3 will perform the action on the specified object or objects.

Amazon S3 calculates the time by adding the number of days specified in
the rule to the time when the new successor version of the object is
created and rounding the resulting time to the next day midnight UTC.
For example, in your bucket, suppose that you have a current version of
an object that was created at 1/1/2014 10:30 AM UTC. If the new version
of the object that replaces the current version is created at 1/15/2014
10:30 AM UTC, and you specify 3 days in a transition rule, the
transition date of the object is calculated as 1/19/2014 00:00 UTC.
```
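
A hedged sketch of that date arithmetic in Go (the function name is illustrative, not MinIO's): add the configured days to the time the version became noncurrent, then round up to the next midnight UTC.

```
package main

import (
	"fmt"
	"time"
)

// noncurrentExpiryDate sketches the AWS rule quoted above: add the
// configured number of days to the time the version became noncurrent
// (i.e. when its successor was created), then round up to the next
// midnight UTC.
func noncurrentExpiryDate(successorModTime time.Time, days int) time.Time {
	t := successorModTime.UTC().AddDate(0, 0, days)
	// Round to the next day's midnight UTC.
	return time.Date(t.Year(), t.Month(), t.Day(), 0, 0, 0, 0, time.UTC).AddDate(0, 0, 1)
}

func main() {
	successor := time.Date(2014, 1, 15, 10, 30, 0, 0, time.UTC)
	// Matches the AWS example: 3 days -> 2014-01-19 00:00:00 UTC.
	fmt.Println(noncurrentExpiryDate(successor, 3))
}
```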
2020-09-09 18:11:24 -07:00
Nitish Tiwari eaaf05a7cc
Add Kubernetes operator webook server as DNS target (#10404)
This PR adds a DNS target that updates an entry in the
Kubernetes operator when a bucket is created or deleted.

See minio/operator#264 for details.

Co-authored-by: Harshavardhana <harsha@minio.io>
2020-09-09 12:20:49 -07:00
Klaus Post 0987069e37
select: Fix integer conversion overflow (#10437)
Do not convert a float value to an integer if it will over/underflow.

The comparison cannot be `<=` since rounding may overflow it.

Fixes #10436
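
A hedged sketch of such a guard (not the exact code in the PR): reject the conversion when the float falls outside the representable int64 range, using a strict upper bound because `math.MaxInt64` rounds up to 2^63 as a float64.

```
package main

import (
	"fmt"
	"math"
)

// floatToInt64 converts f to int64 only when the conversion cannot
// over/underflow. The accepted range uses a strict `<` on the upper
// bound: math.MaxInt64 is not exactly representable as a float64 and
// rounds up to 2^63, so an `<=` check would let 2^63 slip through and
// overflow the conversion. math.MinInt64 (-2^63) is exact, so `>=` is
// safe on the lower bound.
func floatToInt64(f float64) (int64, bool) {
	if math.IsNaN(f) || f < math.MinInt64 || f >= math.MaxInt64 {
		return 0, false
	}
	return int64(f), true
}

func main() {
	fmt.Println(floatToInt64(1.9))        // 1 true  (truncated toward zero)
	fmt.Println(floatToInt64(1e300))      // 0 false (would overflow)
	fmt.Println(floatToInt64(math.NaN())) // 0 false
}
```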
2020-09-08 15:56:11 -07:00
Harshavardhana c13afd56e8
Remove MaxConnsPerHost settings to avoid potential hangs (#10438)
MaxConnsPerHost can potentially hang a call without any
way to time out; we do not need this setting for our proxy
and gateway implementations, the IdleConn settings are
good enough.

Also ensure to use NewRequestWithContext and make sure to
take the disks offline only for network errors.

Fixes #10304
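
A hedged sketch of this kind of transport configuration, with illustrative values rather than the PR's exact settings: bound resources with idle-connection limits and timeouts instead of MaxConnsPerHost, and attach a context to every outgoing request.

```
package main

import (
	"context"
	"net"
	"net/http"
	"time"
)

func newProxyTransport() *http.Transport {
	return &http.Transport{
		// No MaxConnsPerHost: when that per-host limit is hit, callers
		// block with no timeout of their own. Idle-connection limits
		// plus dial/header timeouts bound resource usage instead.
		MaxIdleConns:          256,
		MaxIdleConnsPerHost:   16,
		IdleConnTimeout:       60 * time.Second,
		ResponseHeaderTimeout: 30 * time.Second,
		DialContext: (&net.Dialer{
			Timeout:   10 * time.Second,
			KeepAlive: 30 * time.Second,
		}).DialContext,
	}
}

func main() {
	client := &http.Client{Transport: newProxyTransport()}

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// NewRequestWithContext makes the call cancellable.
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, "http://example.com/health", nil)
	if err != nil {
		panic(err)
	}
	resp, err := client.Do(req)
	if err != nil {
		// A network-level error here is the kind of failure that should
		// mark a remote disk offline; other errors should not.
		return
	}
	defer resp.Body.Close()
}
```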
2020-09-08 14:22:04 -07:00
Andreas Auernhammer fbd1c5f51a
certs: refactor cert manager to support multiple certificates (#10207)
This commit refactors the certificate management implementation
in the `certs` package such that multiple certificates can be
specified at the same time. Therefore, the following layout of
the `certs/` directory is expected:
```
certs/
 │
 ├─ public.crt
 ├─ private.key
 ├─ CAs/          // CAs directory is ignored
 │   │
 │    ...
 │
 ├─ example.com/
 │   │
 │   ├─ public.crt
 │   └─ private.key
 └─ foobar.org/
     │
     ├─ public.crt
     └─ private.key
   ...
```

However, directory names like `example.com` are just for human
readability/organization and don't have any meaning w.r.t whether
a particular certificate is served or not. This decision is made based
on the SNI sent by the client and the SAN of the certificate.

***

The `Manager` will pick a certificate based on the client trying
to establish a TLS connection. In particular, it looks at the client
hello (i.e. SNI) to determine which host the client tries to access.
If the manager can find a certificate that matches the SNI it
returns this certificate to the client.

However, the client may choose not to send an SNI or may try to access
the server directly via IP (`https://<ip>:<port>`). In this case, we
cannot use the SNI to determine which certificate to serve. However,
we also should not pick "the first" certificate that would be accepted
by the client (based on crypto parameters - like a signature algorithm)
because it may be an internal certificate that contains internal hostnames.
We would disclose internal infrastructure details by doing so.

Therefore, the `Manager` returns the "default" certificate when the
client does not specify an SNI. The default certificate is the top-level
`public.crt` - i.e. `certs/public.crt`.

This approach has some consequences:
 - It's the operator's responsibility to ensure that the top-level
   `public.crt` does not disclose any information (e.g. hostnames)
   that is not publicly visible. However, this was already the case
   in the past.
 - Any other `public.crt` - except for the top-level one - must not
   contain any IP SAN. The reason for this restriction is that the
   Manager cannot match an SNI to an IP b/c the SNI is the server host
   name. The entire purpose of SNI is to indicate which host the client
   tries to connect to when multiple hosts run on the same IP. So, a
   client will not set the SNI to an IP.
   If we allowed IP SANs in a lower-level `public.crt`, a user would
   expect that it is possible to connect to MinIO directly via IP address
   and that the MinIO server would pick "the right" certificate. However,
   the MinIO server cannot determine which certificate to serve, and
   therefore always picks the "default" one. This may lead to all sorts
   of confusing errors like:
   "It works if I use `https://instance.minio.local` but not when I use
   `https://10.0.2.1`."

These consequences/limitations should be pointed out / explained in our
docs in an appropriate way. However, the support for multiple
certificates should not have any impact on how deployments with a single
certificate function today.
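
A hedged sketch of the selection logic described above (an illustration, not the `certs` package itself): a `tls.Config.GetCertificate` callback that matches the client hello's ServerName against the loaded certificates and falls back to the default certificate when no SNI is sent.

```
package main

import "crypto/tls"

// certStore is an illustrative stand-in for the Manager described above:
// one default certificate plus any number of additional ones.
type certStore struct {
	defaultCert tls.Certificate   // certs/public.crt + certs/private.key
	certs       []tls.Certificate // certs/<dir>/public.crt + private.key
}

// getCertificate picks a certificate based on the SNI in the client hello.
// Without an SNI (e.g. the client connected via IP) the default certificate
// is returned rather than "the first acceptable" one, so hostnames in the
// other certificates are never disclosed.
func (s *certStore) getCertificate(hello *tls.ClientHelloInfo) (*tls.Certificate, error) {
	if hello.ServerName == "" {
		return &s.defaultCert, nil
	}
	for i := range s.certs {
		// SupportsCertificate checks, among other things, that the
		// certificate's SAN matches the requested server name.
		if hello.SupportsCertificate(&s.certs[i]) == nil {
			return &s.certs[i], nil
		}
	}
	return &s.defaultCert, nil
}

func main() {
	store := &certStore{} // certificates would be loaded from the certs/ layout above
	_ = &tls.Config{GetCertificate: store.getCertificate}
}
```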

Co-authored-by: Harshavardhana <harsha@minio.io>
2020-09-03 23:33:37 -07:00
Harshavardhana 1c6781757c
add missing ListBucketVersions from policy actions (#10414) 2020-09-03 18:25:06 -07:00
Harshavardhana 8a291e1dc0
Cluster healthcheck improvements (#10408)
- do not fail the healthcheck if heal status
  was not obtained from one of the nodes;
  if many nodes fail, report this as a
  catastrophic error.
- add "x-minio-write-quorum" value to match
  the write tolerance supported by the server
  (see the sketch below).
- admin info now states if a drive is healing,
  where madmin.Disk.Healing is set to true
  and madmin.Disk.State is "ok"
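
A hedged sketch of how a client might read the new header; the `/minio/health/cluster` path is assumed here to be the cluster healthcheck endpoint.

```
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// checkClusterHealth queries the cluster healthcheck endpoint (assumed
// here to be /minio/health/cluster) and reports the write quorum the
// server advertises via the x-minio-write-quorum header.
func checkClusterHealth(ctx context.Context, endpoint string) error {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, endpoint+"/minio/health/cluster", nil)
	if err != nil {
		return err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	if q := resp.Header.Get("x-minio-write-quorum"); q != "" {
		fmt.Println("write quorum advertised by the cluster:", q)
	}
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("cluster not healthy: %s", resp.Status)
	}
	return nil
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	_ = checkClusterHealth(ctx, "http://localhost:9000")
}
```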
2020-09-02 22:54:56 -07:00
飞雪无情 2d96940826
fix: adminTrace to show any errors when the server is shut down. (#10370) 2020-08-28 10:04:54 -07:00
Anis Elleuch 9acdeab73d
lifecycle: Accept document without expiration (#10348) 2020-08-25 12:38:59 -07:00
KevinSmile 5f7bd2b1da
fix: lifecycle-expiration validation bug (#10327) 2020-08-24 13:56:50 -07:00
Harshavardhana caad314faa
add ruleguard support, fix all the reported issues (#10335) 2020-08-24 12:11:20 -07:00
Praveen raj Mani d0c910a6f3
Support https and basic-auth for elasticsearch notification target (#10332) 2020-08-23 09:43:48 -07:00
Tobias Nygren 052b5262ff
use statvfs(2) for disk.GetInfo on NetBSD (#10257) 2020-08-20 20:13:06 -07:00
Krishnan Parthasarathi ccd967e3be
Add ExpiresAt to LicenseInfo (#10293) 2020-08-19 19:21:04 -07:00
Harshavardhana c8b84a0e9e
Add nancy vulnerability scanner (#10289) 2020-08-19 14:25:21 -07:00
Harshavardhana 74116204ce
handle fresh setup with mixed drives (#10273)
in fresh drive setups, when one of the drives is
a root drive, we should ignore such a root
drive and not proceed to format it.

This PR handles this properly by marking
the disks which are root disks so they are
taken offline.
2020-08-18 14:37:26 -07:00
Klaus Post adca28801d
feat: disable Parquet by default (breaking change) (#9920)
I have built a fuzz test and it crashes heavily in seconds and will OOM shortly after.
It seems like supporting Parquet is basically a completely open way to crash the 
server if you can upload a file and run s3 select on it.

Until Parquet is more hardened it is DISABLED by default since hostile 
crafted input can easily crash the server.

If you are in a controlled environment where it is safe to assume no hostile
content can be uploaded to your cluster you can safely enable Parquet.

To enable Parquet set the environment variable `MINIO_API_SELECT_PARQUET=on`
while starting the MinIO server.

Furthermore, we guard Parquet with recover functions.
2020-08-18 10:23:28 -07:00
Harshavardhana ede86845e5
docs: Add policy variables for resource and conditions (#10278)
Bonus fix adds LDAP policy variable and clarifies the
usage of policy variables for temporary credentials.

fixes #10197
2020-08-17 17:39:55 -07:00
Harshavardhana 83a82d818e
allow lock tolerance to match storage-class drive tolerance (#10270) 2020-08-14 18:17:14 -07:00
Krishnan Parthasarathi 4e00b47b52
licverifier: fail verify if accountId is missing in license metadata (#10258) 2020-08-13 17:05:24 -07:00
Harshavardhana 30da442a85
rootDisk on containers can have different device Id (#10259)
use `/etc/hosts` instead of `/` to check for a common
device id: if the device id is the same for `/etc/hosts`
and the --bind mount, the drive is detected as a root disk.

Bonus: enhance healthcheck logging by adding maintenance
tags to all messages.
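
A hedged sketch of that device-id comparison (Linux-only, the helper name is illustrative): stat both paths and compare the device numbers.

```
//go:build linux

package main

import (
	"fmt"
	"syscall"
)

// sameDevice reports whether two paths live on the same device by
// comparing the device IDs from stat(2).
func sameDevice(path1, path2 string) (bool, error) {
	var st1, st2 syscall.Stat_t
	if err := syscall.Stat(path1, &st1); err != nil {
		return false, err
	}
	if err := syscall.Stat(path2, &st2); err != nil {
		return false, err
	}
	return st1.Dev == st2.Dev, nil
}

func main() {
	// In containers "/" may have a different device id than the host
	// root, so compare against /etc/hosts (bind-mounted from the host)
	// to decide whether a drive path sits on the root disk.
	isRoot, err := sameDevice("/etc/hosts", "/data")
	if err != nil {
		panic(err)
	}
	fmt.Println("on root disk:", isRoot)
}
```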
2020-08-13 15:21:20 -07:00
Krishnan Parthasarathi ab43804efd
licverifier: Validate JWT token expiry (#10253)
With this change the expiry is validated for the license key JWT
2020-08-12 21:31:52 -07:00
Harshavardhana 34253aa595
feat: cache env value in-case network is not reachable (#10251) 2020-08-12 16:53:15 -07:00
Harshavardhana 900eebb9a4
use jwt instead of basicAuth for webEnv (#10246) 2020-08-11 16:09:34 -07:00
Harshavardhana 0dd3a08169
move the certPool loader function into pkg/certs (#10239) 2020-08-11 08:29:50 -07:00
Krishnan Parthasarathi 76b6dc0112
Add licverifier package (#10237)
The license verification package implements a simple library to
verify MinIO Subnet license keys.
2020-08-10 13:30:12 -07:00
Harshavardhana 1e2ebc9945
feat: time to bring back http2.0 support (#10230)
Bonus move our CI/CD to go1.14
2020-08-10 09:02:29 -07:00
Harshavardhana 6c6137b2e7
add cluster maintenance healthcheck drive heal affinity (#10218) 2020-08-07 13:22:53 -07:00
Anis Elleuch 433c2831ae
fix: typo in parsing non remote env variables (#10223) 2020-08-07 09:57:20 -07:00
Harshavardhana 77509ce391
Support looking up environment remotely (#10215)
adds a feature where we can fetch the MinIO command-line
environment remotely. This is primarily meant to add some
stateless nature to MinIO deployments in k8s environments:
the MinIO operator would run a webhook service endpoint which
can be used to fetch any environment value in a generalized way.
2020-08-06 18:03:16 -07:00
poornas adcaa6f9de
fix: Change ListBucketTargets handler (#10217)
to list all targets across a tenant.
Also fixing some validations.
2020-08-06 17:10:21 -07:00
poornas 121164db56
fix: relax some replication validations (#10210)
Also inherit the storage class from the source object
if the replication configuration does not specify a storage
class for the destination bucket.
2020-08-05 20:01:20 -07:00
poornas 3acc0ebb81
fix: Change service name in Arn for replication (#10205) 2020-08-05 00:43:18 -07:00
Harshavardhana d61eac080b
fix: connection_string should override other params (#10180)
closes #9965
2020-08-03 09:16:00 -07:00
poornas a8dd7b3eda
Refactor replication target management. (#10154)
Generalize replication target management so
that remote targets for a bucket can be
managed with ARNs. The `mc admin bucket remote`
command will be used to manage targets.
2020-07-30 19:55:22 -07:00
Harshavardhana fe157166ca
fix: Pass context all the way down to the network call in lockers (#10161)
Context timeouts might race with each other when timeouts are low,
i.e. when two lock attempts happen very quickly on the same resource
while the servers are still trying to establish quorum.

This situation can lead to locks being held which would never be
unlocked, so subsequent lock attempts would fail.

This would require a complete server restart. This issue is most
likely to occur when a server is booting up and we are trying to
hold a 'transaction.lock' in quick bursts with short timeouts.
2020-07-29 23:15:34 -07:00
poornas b46ab7e921
Rename replication target handler (#10142)
Rename replication target handler to a generic bucket target handler
2020-07-28 11:50:47 -07:00
Harshavardhana f200a7fb6a
fix: speed up OBD tests avoid unnecessary memory allocation (#10141)
replace dummy buffer with nullReader{} instead,
to avoid large memory allocations in memory
constrainted environments. allows running
obd tests in such environments.
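
A hedged sketch of the nullReader idea: an io.Reader that produces zero bytes on demand instead of holding a large pre-allocated buffer.

```
package main

import (
	"fmt"
	"io"
	"io/ioutil"
)

// nullReader serves an endless stream of zero bytes without holding a
// large buffer in memory, unlike a pre-allocated dummy []byte.
type nullReader struct{}

func (nullReader) Read(p []byte) (int, error) {
	for i := range p {
		p[i] = 0
	}
	return len(p), nil
}

func main() {
	// Drive a fixed amount of "data" through a writer for a throughput
	// test while allocating nothing beyond the caller's copy buffer.
	n, err := io.Copy(ioutil.Discard, io.LimitReader(nullReader{}, 1<<30)) // 1 GiB
	if err != nil {
		panic(err)
	}
	fmt.Println("bytes written:", n)
}
```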
2020-07-27 14:51:59 -07:00
Praveen raj Mani b800541fbe
fix: a typo in the NSQ notification target environment key (#10118)
fixes #10100
2020-07-23 12:19:36 -07:00
Anis Elleuch 1340281cb8
Fix marshaling expiration field in lifecycle (#10117) 2020-07-23 08:01:25 -07:00
Anis Elleuch 456b2ef6eb
Avoid healing to be stuck with many concurrent event listeners (#10111)
If there are many listeners to bucket notifications or to the trace
subsystem, healing fails to work properly since it suspends itself when
the number of concurrent connections is above a certain threshold.

These connections are also long-lived and not costly (*no disk access*),
so it is okay to just ignore them in waitForLowHTTPReq().
2020-07-22 13:16:55 -07:00