Commit Graph

4567 Commits

Author SHA1 Message Date
Harshavardhana
1cd6713e24
copy query values before update to preserve the expected keys (#15310)
in success_action_redirect we were missing required
query params as per the S3 spec - updated tests.
2022-07-15 15:04:48 -07:00
Harshavardhana
1b339ea062
allow force delete on decom pool (#15302)
Bonus:

- skip suspended pools from being
  considered for multipart uploads

- add more context for decomErrors()
2022-07-14 20:44:22 -07:00
Harshavardhana
236ef03dbd
fix: skip objects expired via lifecycle rules during decommission (#15300) 2022-07-14 16:47:09 -07:00
Poorna
7e32a17742
fix: site replication healing of missing buckets (#15298)
fixes a regression from #15186

- Adding tests to cover healing of buckets.
- Also dereference quota in SiteReplicationStatus only when non-nil
2022-07-14 14:27:47 -07:00
Krishnan Parthasarathi
1d42133d44
listing: Expire object versions past expiry (#15287)
We skip object versions which are past their ILM expiry. This change schedules
them for expiry while at it.
2022-07-14 07:21:26 -07:00
Poorna
b4f6901903
resync: Avoid concurrent access/write on map (#15286)
fixes a crash

```
fatal error: concurrent map iteration and map write
minio[19309]: goroutine 18640 [running]:
minio[19309]: runtime.throw({0x27a3399?, 0x1785?})
minio[19309]: runtime/panic.go:992 +0x71 fp=0xc0062f1c80 sp=0xc0062f1c50 pc=0x438671
minio[19309]: runtime.mapiternext(0xc0062f1e90?)
minio[19309]: runtime/map.go:871 +0x4eb fp=0xc0062f1cf0 sp=0xc0062f1c80 pc=0x41002b
minio[19309]: github.com/minio/minio/cmd.(*ReplicationPool).periodicResyncMetaSave(0xc0056c00c0, {0x4d06a48, 0xc0005b2480}, {0x4d22fc0, 0xc0015ea0
```
2022-07-13 16:29:10 -07:00
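The crash above is Go's fatal "concurrent map iteration and map write": a plain map was iterated (to persist resync metadata) while another goroutine wrote to it. Below is a minimal sketch of the general fix pattern, guarding the map with a sync.RWMutex and iterating over a snapshot; the types and names are illustrative, not the actual ReplicationPool code.

```go
package main

import (
	"fmt"
	"sync"
)

// resyncTracker is a hypothetical stand-in for a resync metadata holder.
type resyncTracker struct {
	mu     sync.RWMutex
	status map[string]string // bucket -> resync state
}

// set updates one bucket's state under the write lock.
func (r *resyncTracker) set(bucket, state string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.status[bucket] = state
}

// snapshot copies the map under the read lock so callers can iterate
// (e.g. to periodically save resync metadata) without racing writers.
func (r *resyncTracker) snapshot() map[string]string {
	r.mu.RLock()
	defer r.mu.RUnlock()
	out := make(map[string]string, len(r.status))
	for k, v := range r.status {
		out[k] = v
	}
	return out
}

func main() {
	r := &resyncTracker{status: make(map[string]string)}
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			r.set(fmt.Sprintf("bucket-%d", i), "ongoing")
		}(i)
	}
	wg.Wait()
	fmt.Println("tracked buckets:", len(r.snapshot()))
}
```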
Klaus Post
0149382cdc
Add padding to compressed+encrypted files (#15282)
Add up to 256 bytes of padding for compressed+encrypted files.

This will obscure the obvious cases of extremely compressible content 
and leave a similar output size for a very wide variety of inputs.

This does *not* mean the compression ratio doesn't leak information 
about the content, but the outcome space is much smaller, 
so often *less* information is leaked.
2022-07-13 07:52:15 -07:00
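A rough sketch of the padding idea: after compression, append a random amount of extra data (up to 256 bytes) so the encrypted output size no longer maps one-to-one to the compressed size. This is purely illustrative; how MinIO actually encodes and later strips the padding is not shown here.

```go
package main

import (
	"crypto/rand"
	"fmt"
	"math/big"
)

const maxPadding = 256

// padCompressed appends 0..maxPadding-1 random bytes to the compressed
// payload, obscuring the exact compressed size before encryption.
// Recording and stripping the padding length is intentionally omitted.
func padCompressed(compressed []byte) ([]byte, error) {
	n, err := rand.Int(rand.Reader, big.NewInt(maxPadding))
	if err != nil {
		return nil, err
	}
	pad := make([]byte, n.Int64())
	if _, err := rand.Read(pad); err != nil {
		return nil, err
	}
	return append(compressed, pad...), nil
}

func main() {
	out, err := padCompressed([]byte("highly-compressible-data"))
	if err != nil {
		panic(err)
	}
	fmt.Println("padded length:", len(out))
}
```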
Klaus Post
697c9973a7
Upgrade compression package (#15284)
Includes mitigation for CVE-2022-30631 (Go should still be updated)

Remove functions now available upstream.
2022-07-13 07:48:14 -07:00
Harshavardhana
788fd3df81
preserve incoming query params in success_action_redirect (#15280)
fixes #15274
2022-07-13 07:46:44 -07:00
Anis Elleuch
996cac5fed
Avoid listing buckets from a suspended pool (#15283)
Make sure bucket creation requests sent after decommissioning has started
do not create the bucket in a suspended pool. Therefore, listing buckets
should skip suspended pools as well.
2022-07-13 07:44:50 -07:00
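A minimal sketch of the listing-side guard this describes: skip pools marked suspended when aggregating the bucket list. The pool struct and fields below are hypothetical, not the actual serverPools types.

```go
package main

import "fmt"

type pool struct {
	suspended bool
	buckets   []string
}

// listBuckets aggregates buckets across pools, skipping pools that are
// suspended (i.e. being decommissioned), so their contents never surface.
func listBuckets(pools []pool) []string {
	var out []string
	for _, p := range pools {
		if p.suspended {
			continue
		}
		out = append(out, p.buckets...)
	}
	return out
}

func main() {
	pools := []pool{
		{suspended: false, buckets: []string{"photos", "logs"}},
		{suspended: true, buckets: []string{"old-archive"}},
	}
	fmt.Println(listBuckets(pools)) // [photos logs]
}
```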
Harshavardhana
0a8b78cb84
fix: simplify passing auditLog eventType (#15278)
Rename Trigger -> Event to be a more appropriate
name for the audit event.

Bonus: fixes a bug in AddMRFWorker(): it did not
mark the waitgroup as done, leading to waitgroup leaks.
2022-07-12 10:43:32 -07:00
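The waitgroup part of that fix is the standard Go pattern: any worker counted with Add must signal the waitgroup when it exits, usually via defer. A small illustration of the pattern; the pool and job names are hypothetical, not the actual MRF/replication code.

```go
package main

import (
	"fmt"
	"sync"
)

type workerPool struct {
	wg   sync.WaitGroup
	jobs chan string
}

// addWorker starts one worker; defer wg.Done() guarantees the waitgroup
// is decremented however the worker exits.
func (p *workerPool) addWorker() {
	p.wg.Add(1)
	go func() {
		defer p.wg.Done() // without this, Wait() would block forever (a leak)
		for job := range p.jobs {
			fmt.Println("processed", job)
		}
	}()
}

func main() {
	p := &workerPool{jobs: make(chan string)}
	p.addWorker()
	p.jobs <- "heal-object"
	close(p.jobs)
	p.wg.Wait()
}
```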
Harshavardhana
b4eb74f5ff
allow custom speedtest bucket (#15271)
this allows for specifying existing buckets with

- object replication enabled
- object encryption enabled
- object versioning enabled
- object locking enabled
2022-07-12 10:12:47 -07:00
Anis Elleuch
57d1f31054
Do not log erasure read failure when disk goes offline (#15277)
Avoid printing the following log

```
API: SYSTEM
Time: Fri Jul 08 2022 11:48:40 GMT+0100
Error: Error(disk not found) reading erasure shards at...

Backtrace:
0: internal/logger/logger.go:278:logger.LogIf()
1: cmd/bitrot-streaming.go:156:cmd.(*streamingBitrotReader).ReadAt()
2: cmd/erasure-decode.go:165:cmd.(*parallelReader).Read.func1()
```
2022-07-12 09:56:56 -07:00
Klaus Post
9f02f51b87
Add 4K minimum compressed size (#15273)
There is no point in compressing very small files: the effective size on disk
will typically be the same due to disk blocks, so don't waste resources on them.

We don't apply the check to multipart uploads: 1) we don't know the final size
up front, and 2) such an object is very likely big anyway.
2022-07-12 07:42:04 -07:00
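The rule itself is just a size threshold consulted before enabling compression. A hedged sketch of that decision follows; the constant and helper names are illustrative, not the actual MinIO identifiers.

```go
package main

import "fmt"

// minCompressibleSize mirrors the idea of a 4K lower bound: files smaller
// than a typical filesystem block gain nothing from compression.
const minCompressibleSize = 4 << 10 // 4 KiB

// shouldCompress decides based on the declared size; a negative size
// (unknown, e.g. streaming/multipart) is treated as "large enough".
func shouldCompress(size int64) bool {
	if size < 0 {
		return true // size unknown; very likely a big object anyway
	}
	return size >= minCompressibleSize
}

func main() {
	fmt.Println(shouldCompress(1024))  // false: fits in a disk block regardless
	fmt.Println(shouldCompress(-1))    // true: multipart/unknown size
	fmt.Println(shouldCompress(1<<20)) // true: 1 MiB object
}
```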
Klaus Post
911a17b149
Add compressed file index (#15247) 2022-07-11 17:30:56 -07:00
Poorna
3d969bd2b4
fix: ignore missing targets/replication config during site removal (#15269) 2022-07-11 14:11:46 -07:00
Andreas Auernhammer
f800cee4fa
metric: add KMS-related metrics (#15258)
This commit adds a minimal set of KMS-related metrics:
```
 # HELP minio_cluster_kms_online Reports whether the KMS is online (1) or offline (0)
 # TYPE minio_cluster_kms_online gauge
 minio_cluster_kms_online{server="127.0.0.1:9000"} 1
 # HELP minio_cluster_kms_request_error Number of KMS requests that failed with a well-defined error
 # TYPE minio_cluster_kms_request_error counter
 minio_cluster_kms_request_error{server="127.0.0.1:9000"} 16790
 # HELP minio_cluster_kms_request_success Number of KMS requests that succeeded
 # TYPE minio_cluster_kms_request_success counter
 minio_cluster_kms_request_success{server="127.0.0.1:9000"} 348031
```

Currently, we report whether the KMS is available and how many requests
succeeded/failed. However, KES exposes many more metrics that can be
surfaced if necessary. See: https://pkg.go.dev/github.com/minio/kes#Metric

Signed-off-by: Andreas Auernhammer <hi@aead.dev>
2022-07-11 09:17:28 -07:00
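For context, metrics of this shape correspond to one gauge and two counters labelled by server. The sketch below models them with the standard Prometheus Go client; it is a generic illustration, not the metrics plumbing MinIO uses internally.

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var (
	kmsOnline = prometheus.NewGaugeVec(prometheus.GaugeOpts{
		Name: "minio_cluster_kms_online",
		Help: "Reports whether the KMS is online (1) or offline (0)",
	}, []string{"server"})

	kmsRequestError = prometheus.NewCounterVec(prometheus.CounterOpts{
		Name: "minio_cluster_kms_request_error",
		Help: "Number of KMS requests that failed with a well-defined error",
	}, []string{"server"})

	kmsRequestSuccess = prometheus.NewCounterVec(prometheus.CounterOpts{
		Name: "minio_cluster_kms_request_success",
		Help: "Number of KMS requests that succeeded",
	}, []string{"server"})
)

func main() {
	prometheus.MustRegister(kmsOnline, kmsRequestError, kmsRequestSuccess)

	// Record sample observations for one server endpoint.
	kmsOnline.WithLabelValues("127.0.0.1:9000").Set(1)
	kmsRequestSuccess.WithLabelValues("127.0.0.1:9000").Inc()

	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":2112", nil))
}
```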
Praveen raj Mani
b49fc33cb3
purge objects immediately with x-minio-force-delete in DeleteObject and DeleteBucket API (#15148) 2022-07-11 09:15:54 -07:00
Klaus Post
37a6b2da67
Allow compaction at bucket top level. (#15266)
If more than 1M folders (objects or prefixes) are found at the top level in a bucket allow it to be compacted.

While very suboptimal structure we should limit memory usage at some point.
2022-07-11 07:59:03 -07:00
Harshavardhana
913e977c8d
remove auto-port warning for console-address (#15260) 2022-07-08 13:36:41 -07:00
Harshavardhana
c2ddcb3b40
do not recreate deprecated delete-journal.bin, only read it (#15185)
simplify deprecated code, re-enable hot-swap disk replacement
2022-07-08 12:17:02 -07:00
Anis Elleuch
ed0cbfb31e
fix: rootdisk detection by not using cached value when GetDiskInfo() errors out (#15249)
GetDiskInfo() uses timedValue to cache the disk info for one second.

timedValue behavior was recently changed to return an old cached value
when calculating a new value returns an error.

When a mount point is empty, GetDiskInfo() will return errUnformattedDisk,
and timedValue will return cached disk info with an unexpected IsRootDisk value,
e.g. false if the mount point belongs to a root disk. Therefore, the mount
point will be considered a valid disk and will be formatted as well.

This commit also adds more defensive code when marking root disks:
always mark a disk offline for any GetDiskInfo() error except
errUnformattedDisk. The server will still try to reconnect to those
disks every 10 seconds.
2022-07-07 17:05:23 -07:00
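The underlying mechanism is a small TTL cache whose refresh function can fail; the regression came from silently serving the stale value on refresh errors. Below is a minimal sketch of such a cache that surfaces the error instead. It is illustrative only, not the actual timedValue implementation.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

// timedCache caches the result of update() for ttl. If update() fails,
// Get returns the error instead of silently falling back to a stale value.
type timedCache struct {
	mu      sync.Mutex
	ttl     time.Duration
	lastSet time.Time
	value   interface{}
	update  func() (interface{}, error)
}

func (c *timedCache) Get() (interface{}, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if time.Since(c.lastSet) < c.ttl && c.value != nil {
		return c.value, nil
	}
	v, err := c.update()
	if err != nil {
		// Do NOT reuse c.value here: callers (e.g. root-disk detection)
		// must see the failure rather than an outdated answer.
		return nil, err
	}
	c.value, c.lastSet = v, time.Now()
	return v, nil
}

func main() {
	calls := 0
	c := &timedCache{ttl: time.Second, update: func() (interface{}, error) {
		calls++
		if calls > 1 {
			return nil, errors.New("disk not available")
		}
		return "disk-info", nil
	}}
	fmt.Println(c.Get()) // disk-info <nil>
	time.Sleep(1100 * time.Millisecond)
	fmt.Println(c.Get()) // <nil> disk not available
}
```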
Harshavardhana
32b2f6117e
fix: do not pass around sync.Map (#15250)
It is not safe to pass around sync.Map
through pointers, as it may be concurrently
updated by different callers.

This PR simplifies by avoiding sync.Map
altogether; we do not need sync.Map
to keep the object->erasureMap association.

This PR fixes a crash when this value is
used concurrently while audit logs are
configured.

```
fatal error: concurrent map iteration and map write

goroutine 247651580 [running]:
runtime.throw({0x277a6c1?, 0xc002381400?})
        runtime/panic.go:992 +0x71 fp=0xc004d29b20 sp=0xc004d29af0 pc=0x438671
runtime.mapiternext(0xc0d6e87f18?)
        runtime/map.go:871 +0x4eb fp=0xc004d29b90 sp=0xc004d29b20 pc=0x41002b
```
2022-07-07 17:04:25 -07:00
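A safer shape than sharing a *sync.Map is to build a plain map once, before it is shared, and hand concurrent readers (such as the audit logger) their own copy. A hedged sketch of that pattern with illustrative names follows.

```go
package main

import "fmt"

// buildObjectSetMap computes object -> erasure set index once, before the
// result is shared. Because nothing mutates it afterwards, handing out a
// copy (or reading it directly) is race-free.
func buildObjectSetMap(objects []string, setCount int) map[string]int {
	m := make(map[string]int, len(objects))
	for i, obj := range objects {
		m[obj] = i % setCount // placeholder for the real hash-based placement
	}
	return m
}

// copyForAudit returns an independent copy so an audit logger can iterate
// it concurrently with request handling without a map race.
func copyForAudit(m map[string]int) map[string]int {
	out := make(map[string]int, len(m))
	for k, v := range m {
		out[k] = v
	}
	return out
}

func main() {
	m := buildObjectSetMap([]string{"a.txt", "b.txt", "c.txt"}, 4)
	fmt.Println(copyForAudit(m))
}
```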
Harshavardhana
ae92521310
remove unnecessary nAgreed value in partial() func (#15242) 2022-07-07 13:45:34 -07:00
Harshavardhana
5802df4365
retry and resume decom operation upon retriable failures (#15244)
It is possible in a k8s-like system that reading pool.bin
might not have quorum during startup; add
a way to retry after this failure.
2022-07-07 12:31:44 -07:00
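The retry can be a bounded loop with backoff around the quorum read. The sketch below shows the general shape only and is not the actual decommission startup code; errNoQuorum is a stand-in sentinel error.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

var errNoQuorum = errors.New("read quorum not available")

// readWithRetry retries fn on quorum failures with a simple linear backoff,
// mirroring the idea of resuming pool.bin reads during startup.
func readWithRetry(attempts int, fn func() ([]byte, error)) ([]byte, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		data, err := fn()
		if err == nil {
			return data, nil
		}
		if !errors.Is(err, errNoQuorum) {
			return nil, err // only quorum errors are considered retriable here
		}
		lastErr = err
		time.Sleep(time.Duration(i+1) * 100 * time.Millisecond)
	}
	return nil, lastErr
}

func main() {
	tries := 0
	data, err := readWithRetry(5, func() ([]byte, error) {
		tries++
		if tries < 3 {
			return nil, errNoQuorum
		}
		return []byte("pool.bin contents"), nil
	})
	fmt.Println(string(data), err)
}
```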
Anis Elleuch
8d98282afd
Better reporting of total/free usable capacity of the cluster (#15230)
The current code approximates capacity using a ratio. The approximation
can skew if we have multiple pools with different disk capacities.

Replace the algorithm with a simpler one which counts data
disks and ignores parity disks.
2022-07-06 13:29:49 -07:00
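Conceptually, the simpler accounting sums only data-drive capacity per erasure set, since parity drives store redundancy rather than user data. A simplified sketch with illustrative types:

```go
package main

import "fmt"

// erasureSet describes one erasure set's drives; only data drives
// contribute capacity usable for storing objects.
type erasureSet struct {
	driveSizeBytes uint64 // capacity of a single drive in the set
	dataDrives     int
	parityDrives   int
}

// usableCapacity sums data-drive capacity across all sets, ignoring parity.
func usableCapacity(sets []erasureSet) uint64 {
	var total uint64
	for _, s := range sets {
		total += s.driveSizeBytes * uint64(s.dataDrives)
	}
	return total
}

func main() {
	// Two pools with different drive sizes: a single ratio applied to the
	// whole cluster would skew, per-set accounting does not.
	sets := []erasureSet{
		{driveSizeBytes: 4 << 40, dataDrives: 12, parityDrives: 4},
		{driveSizeBytes: 8 << 40, dataDrives: 10, parityDrives: 6},
	}
	fmt.Printf("usable: %d TiB\n", usableCapacity(sets)>>40)
}
```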
Harshavardhana
3af6073576
no 'replicate status' without replication config (#15233)
'replicate status' shouldn't be displaying historic
values unless replication config is present on the
relevant bucket.
2022-07-06 09:53:33 -07:00
Harshavardhana
2518af5f9e
fix: allow certain mutations on objects during decommissioning (#15231)
Currently, by mistake, deletion of objects was skipped
if the object resided on the pool being decommissioned.

Deletes are okay to allow since decommission is
designed to run on a cluster with active I/O.
2022-07-06 09:53:16 -07:00
Harshavardhana
7b793d84c8
fix: calculate scanner metric paths for single drive (#15232)
Additionally use pathJoin() to avoid double `//`
in path names.
2022-07-06 07:48:38 -07:00
Aditya Manthramurthy
af9bc7ea7d
Add external IDP management Admin API for OpenID (#15152) 2022-07-05 18:18:04 -07:00
Klaus Post
ac055b09e9
Add detailed scanner metrics (#15161) 2022-07-05 14:45:49 -07:00
haslersn
df42914da6
Fix missing whitespace in error message for IncompleteBody (#15227) 2022-07-05 12:19:57 -07:00
Klaus Post
2471bdda00
fix: for DiskInfo call cache disk metrics (#15229)
Small uploads spend a significant amount of time (~5%) fetching disk info metrics. Also maps are allocated for each call.

Add a 100ms cache to disk metrics.
2022-07-05 11:02:30 -07:00
Harshavardhana
9d80ff5a05
fix: decommission delete markers for non-current objects (#15225)
Decommissioning of versioned buckets was not recreating the delete markers
present in the versioned stack of an object; this essentially
would stop decommission from succeeding.

This PR fixes creating such delete markers properly during
the decommissioning process, and adds tests as well.
2022-07-05 07:37:24 -07:00
Harshavardhana
b311abed31
decom IAM, Bucket metadata properly (#15220)
Current code incorrectly passed the
config asset object name while decommissioning;
make sure that we pass the right object name
to be hashed on the newer set of pools.

This PR fixes situations where, after a successful
decommission, users and policies might go
missing due to the wrong hashed set.
2022-07-04 14:02:54 -07:00
Harshavardhana
ce667ddae0
do not print errFileNotFound in entries.resolve() (#15216) 2022-07-04 06:40:46 -07:00
Harshavardhana
0fee993a4b
return appropriate error under 'decom status' (#15213)
fixes #15208
2022-07-01 16:21:23 -07:00
Poorna
0ea5c9d8e8
site healing: Skip stale iam asset updates from peer. (#15203)
Allow healing to apply IAM change only when peer
gave the most recent update.
2022-07-01 13:19:13 -07:00
Harshavardhana
63ac260bd5
Simplify Prometheus metrics gather (#15210) 2022-07-01 13:18:39 -07:00
Harshavardhana
f9a4ad7904
update banner with version+runtime (#15206) 2022-06-30 13:58:09 -07:00
Minio Trusted
e60b67d246 Revert "Tighten enforcement of object retention (#14993)"
This reverts commit 5e3010d455.

This commit causes a regression on object-locked buckets, causing
delete-markers to not be created.
2022-06-30 13:06:32 -07:00
Klaus Post
9004d69c6f
Make ReqInfo concurrency safe (#15204)
Some reads/writes of ReqInfo did not take the appropriate locks, leading to races.

Make sure reading and writing hold the appropriate locks.
2022-06-30 10:48:50 -07:00
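The fix pattern here is to guard every field of the shared struct with a mutex and expose accessor methods so no caller reads or writes fields directly. A minimal sketch in that spirit; this is not the actual ReqInfo definition.

```go
package main

import (
	"fmt"
	"sync"
)

// reqInfo is a simplified stand-in for per-request audit info that is
// mutated by handlers while loggers read it concurrently.
type reqInfo struct {
	mu         sync.RWMutex
	objectName string
	tags       []string
}

// SetObject updates the object name under the write lock.
func (r *reqInfo) SetObject(name string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.objectName = name
}

// AppendTag adds a tag under the write lock.
func (r *reqInfo) AppendTag(t string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.tags = append(r.tags, t)
}

// Snapshot returns copies of the fields under the read lock, so a logger
// never observes a partially written update.
func (r *reqInfo) Snapshot() (string, []string) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	return r.objectName, append([]string(nil), r.tags...)
}

func main() {
	ri := &reqInfo{}
	var wg sync.WaitGroup
	for i := 0; i < 50; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			ri.SetObject(fmt.Sprintf("object-%d", i))
			ri.AppendTag("retry")
		}(i)
	}
	wg.Wait()
	name, tags := ri.Snapshot()
	fmt.Println(name, len(tags))
}
```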
Harshavardhana
8856a2d77b
finalize startup-banner and remove unnecessary logs (#15202) 2022-06-29 16:32:04 -07:00
Anis Elleuch
54a061bdda
Save minio version information centrally (#15181) 2022-06-29 14:45:49 -07:00
Poorna
7cc9286e0f
site healing: Skip stale bucket metadata updates from peer (#15186)
Allow healing to apply bucket metadata change only when peer
gave the most recent update.
2022-06-28 18:09:20 -07:00
Harshavardhana
2f25639ea0
update banner to reflect the final agreed UI (#15192) 2022-06-28 16:37:40 -07:00
Harshavardhana
2070c215a2
handle missing funcNames for handlers (#15188)
also use designated names for internal
calls

- storageREST calls are storageR
- lockREST calls are lockR
- peerREST calls are just peer

Named in this fashion to facilitate wildcard matches
by having prefixes of the same name.

Additionally, enable funcNames for generic handlers
that return errors; currently we disable '<unknown>'.
2022-06-28 05:04:10 -07:00
Harshavardhana
9c605ad153
allow support for parity '0', '1' enabling support for 2,3 drive setups (#15171)
allows for more granular setups

- 2 drives (1 parity, 1 data)
- 3 drives (1 parity, 2 data)

Bonus: allows '0' parity as well.
2022-06-27 20:22:18 -07:00
Anis Elleuch
b7c7e59dac
Revert proxying requests with precondition errors (#15180)
In a replicated setup, when an object is updated in one cluster but
is still waiting to be replicated to the other cluster, GET requests with
if-match and range headers will likely fail. It is better to proxy
such requests instead.

Also, this commit avoids printing verbose logs about precondition &
range errors.
2022-06-27 14:03:44 -07:00
Harshavardhana
699cf6ff45
perform object sweep after enqueuing the latest CopyObject() (#15183)
keep it similar to PutObject/CompleteMultipart
2022-06-27 12:11:33 -07:00