Commit Graph

33 Commits

Author SHA1 Message Date
Harshavardhana
7da9e3a6f8
support encrypted/compressed objects properly during decommission (#15320)
fixes #15314
2022-07-16 19:35:24 -07:00
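
The gist of the fix above lends itself to a short sketch: an encrypted or compressed object's bytes on disk differ from its logical size, so a pool-to-pool copy during decommission must move the raw bytes. This is a minimal illustration only; the `objectInfo` struct and metadata keys below are simplified stand-ins, not MinIO's actual internals.

```go
package main

import "fmt"

// objectInfo is a hypothetical, simplified stand-in for an object's metadata.
type objectInfo struct {
	Size       int64             // logical (decrypted/decompressed) size
	ActualSize int64             // raw size as stored on disk
	Metadata   map[string]string // internal metadata
}

// transferSize picks the byte count to stream when moving the object to
// the new pool: encrypted/compressed objects must be copied raw, without
// decrypting or decompressing them in transit.
func transferSize(oi objectInfo) int64 {
	_, encrypted := oi.Metadata["X-Minio-Internal-Server-Side-Encryption-Iv"]
	_, compressed := oi.Metadata["X-Minio-Internal-compression"]
	if encrypted || compressed {
		return oi.ActualSize
	}
	return oi.Size
}

func main() {
	oi := objectInfo{
		Size:       10 << 20, // 10 MiB logical
		ActualSize: 4 << 20,  // 4 MiB on disk after compression
		Metadata:   map[string]string{"X-Minio-Internal-compression": "klauspost/compress/s2"},
	}
	fmt.Println("bytes to copy:", transferSize(oi)) // 4194304
}
```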
Harshavardhana
e7ac1ea54c
allow decommission to continue when healing (#15312)
Bonus:

- heal buckets in case the new pools are missing
  them during startup.
2022-07-15 21:03:23 -07:00
Harshavardhana
1b339ea062
allow force delete on decom pool (#15302)
Bonus:

- skip suspended pool from being
  considered for multipart uploads

- add more context for decomErrors()
2022-07-14 20:44:22 -07:00
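
A sketch of the bonus change above: choosing a pool for new multipart uploads while skipping suspended (draining) pools. The types and selection policy here are illustrative assumptions, not MinIO's actual pool-placement code.

```go
package main

import "fmt"

// pool is a hypothetical stand-in for a server pool's state.
type pool struct {
	index     int
	suspended bool  // true while the pool is being decommissioned
	free      int64 // free space, used here as a naive placement metric
}

// choosePoolForMultipart returns the pool to host a new multipart upload,
// never placing new data on a suspended pool.
func choosePoolForMultipart(pools []pool) (int, bool) {
	best, found := -1, false
	var bestFree int64 = -1
	for _, p := range pools {
		if p.suspended {
			continue // skip draining pools entirely
		}
		if p.free > bestFree {
			best, bestFree, found = p.index, p.free, true
		}
	}
	return best, found
}

func main() {
	pools := []pool{{index: 0, suspended: true, free: 900}, {index: 1, free: 500}}
	idx, ok := choosePoolForMultipart(pools)
	fmt.Println(idx, ok) // 1 true -- pool 0 is draining despite more free space
}
```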
Harshavardhana
236ef03dbd
fix: skip objects expired via lifecycle rules during decommission (#15300) 2022-07-14 16:47:09 -07:00
Harshavardhana
0a8b78cb84
fix: simplify passing auditLog eventType (#15278)
Rename Trigger -> Event to be a more appropriate
name for the audit event.

Bonus: fixes a bug in AddMRFWorker(): it did not
cancel the waitgroup, leading to waitgroup leaks.
2022-07-12 10:43:32 -07:00
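
The waitgroup bug in the bonus note is a classic Go pattern worth illustrating. The sketch below is not MinIO's AddMRFWorker(); it only shows the shape of the fix: every wg.Add(1) must be balanced by wg.Done() on every exit path, which a defer guarantees.

```go
package main

import (
	"fmt"
	"sync"
)

type mrfState struct {
	wg   sync.WaitGroup
	quit chan struct{}
}

func (m *mrfState) addWorker(work <-chan func()) {
	m.wg.Add(1)
	go func() {
		defer m.wg.Done() // the fix: Done() runs even on the early-return path
		for {
			select {
			case job, ok := <-work:
				if !ok {
					return // without the deferred Done(), this path leaked the waitgroup
				}
				job()
			case <-m.quit:
				return
			}
		}
	}()
}

func main() {
	work := make(chan func())
	m := &mrfState{quit: make(chan struct{})}
	m.addWorker(work)
	work <- func() { fmt.Println("handled") }
	close(work)
	m.wg.Wait() // now returns instead of blocking forever
}
```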
Harshavardhana
ae92521310
remove unnecessary nAgreed value in partial() func (#15242) 2022-07-07 13:45:34 -07:00
Harshavardhana
5802df4365
retry and resume decom operation upon retriable failures (#15244)
It is possible in a k8s-like system that reading pool.bin
might not have quorum during startup; add a way to retry
after this failure.
2022-07-07 12:31:44 -07:00
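
A hedged sketch of the retry-on-retriable-failure idea above; the function names and backoff policy are assumptions, not MinIO's actual startup code.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

var errQuorum = errors.New("read quorum not available")

// readPoolMeta simulates a quorum-dependent read that succeeds only once
// enough peers have come up (e.g. during a k8s rolling restart).
func readPoolMeta(attempt int) ([]byte, error) {
	if attempt < 2 {
		return nil, errQuorum
	}
	return []byte("pool.bin contents"), nil
}

func readWithRetry(maxRetries int) ([]byte, error) {
	var lastErr error
	for i := 0; i < maxRetries; i++ {
		data, err := readPoolMeta(i)
		if err == nil {
			return data, nil
		}
		if !errors.Is(err, errQuorum) {
			return nil, err // only retry retriable failures
		}
		lastErr = err
		time.Sleep(time.Duration(i+1) * 100 * time.Millisecond) // linear backoff
	}
	return nil, lastErr
}

func main() {
	data, err := readWithRetry(5)
	fmt.Println(string(data), err)
}
```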
Harshavardhana
9d80ff5a05
fix: decommission delete markers for non-current objects (#15225)
Decommission of versioned buckets was not recreating the
delete markers present in the versioned stack of an object,
which would essentially prevent decommission from succeeding.

This PR creates such delete markers properly during the
decommissioning process, and adds tests as well.
2022-07-05 07:37:24 -07:00
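
A minimal sketch of the fixed behavior: replaying an object's full version stack, delete markers included, onto the target pool. The types and replay function are illustrative, not MinIO's code.

```go
package main

import "fmt"

type objVersion struct {
	ID           string
	DeleteMarker bool
}

// replayVersion recreates one entry of the version stack on the new pool.
func replayVersion(v objVersion) {
	if v.DeleteMarker {
		fmt.Println("create delete marker", v.ID) // the fix: markers are recreated too
		return
	}
	fmt.Println("copy version data", v.ID)
}

func main() {
	// Oldest-to-newest replay preserves the versioned stack's order.
	stack := []objVersion{{"v1", false}, {"v2", true}, {"v3", false}}
	for _, v := range stack {
		replayVersion(v)
	}
}
```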
Harshavardhana
b311abed31
decom IAM, Bucket metadata properly (#15220)
Current code incorrectly passed the config asset
object name while decommissioning; make sure that
we pass the right object name to be hashed on the
newer set of pools.

This PR fixes situations where, after a successful
decommission, users and policies might go missing
because they were hashed to the wrong set.
2022-07-04 14:02:54 -07:00
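
Why the exact object name matters can be shown with a toy hash placement: objects land on an erasure set determined by hashing their name, so hashing a wrongly transformed name places (and later looks up) the metadata on the wrong set. The CRC-based hash below is a stand-in, not MinIO's actual placement hash.

```go
package main

import (
	"fmt"
	"hash/crc32"
)

// setIndex maps an object name to one of setCount erasure sets. CRC is a
// stand-in here; the principle is that placement is a pure function of
// the object name.
func setIndex(object string, setCount int) int {
	return int(crc32.ChecksumIEEE([]byte(object)) % uint32(setCount))
}

func main() {
	// Two spellings of the "same" config asset hash to (potentially)
	// different sets -- the class of bug this commit fixes.
	fmt.Println(setIndex("config/iam/policies/readwrite.json", 4))
	fmt.Println(setIndex("buckets/.minio.sys/config/iam/policies/readwrite.json", 4))
}
```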
Harshavardhana
0fee993a4b
return appropriate error under 'decom status' (#15213)
fixes #15208
2022-07-01 16:21:23 -07:00
Harshavardhana
30c9e50701
make sure to ignore expected errors and dirname deletes (#14945) 2022-05-18 17:58:19 -07:00
Krishna Srinivas
e34ca9acd1
retry each object decom up to 3 times in case of failure (#14861) 2022-05-11 11:37:32 -07:00
Krishnan Parthasarathi
ad8e611098
feat: implement prefix-level versioning exclusion (#14828)
Spark/Hadoop workloads that use the Hadoop MR
Committer v1/v2 algorithm upload objects to a
temporary prefix in a bucket. These objects are 
'renamed' to a different prefix on Job commit. 
Object storage admins are forced to configure 
separate ILM policies to expire these objects 
and their versions to reclaim space.

Our solution:

This can be avoided by simply marking objects 
under these prefixes to be excluded from versioning, 
as shown below. Consequently, these objects are 
excluded from replication, and don't require ILM 
policies to prune unnecessary versions.

-  MinIO Extension to Bucket Version Configuration
```xml
<VersioningConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/"> 
        <Status>Enabled</Status>
        <ExcludeFolders>true</ExcludeFolders>
        <ExcludedPrefixes>
          <Prefix>app1-jobs/*/_temporary/</Prefix>
        </ExcludedPrefixes>
        <ExcludedPrefixes>
          <Prefix>app2-jobs/*/__magic/</Prefix>
        </ExcludedPrefixes>

        <!-- .. up to 10 prefixes in all -->     
</VersioningConfiguration>
```
Note: `ExcludeFolders` excludes all folders in a bucket 
from versioning. This is required to prevent the parent 
folders from accumulating delete markers, especially
those which are shared across Spark workloads
spanning projects/teams.

- To enable version exclusion on a list of prefixes

```
mc version enable --excluded-prefixes "app1-jobs/*/_temporary/,app2-jobs/*/__magic/" --exclude-prefix-marker myminio/test
```
2022-05-06 19:05:28 -07:00
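
A rough sketch of how a single-level wildcard pattern like `app1-jobs/*/_temporary/` might be matched against object keys. `path.Match` is used here for brevity and only matches one path level per `*`; it is a simplified stand-in for MinIO's actual wildcard matching.

```go
package main

import (
	"fmt"
	"path"
)

// excludedFromVersioning reports whether key falls under any excluded
// prefix pattern; a trailing "*" is appended so the pattern covers the
// objects directly under the excluded prefix.
func excludedFromVersioning(key string, patterns []string) bool {
	for _, p := range patterns {
		if ok, _ := path.Match(p+"*", key); ok {
			return true
		}
	}
	return false
}

func main() {
	patterns := []string{"app1-jobs/*/_temporary/", "app2-jobs/*/__magic/"}
	fmt.Println(excludedFromVersioning("app1-jobs/2022/_temporary/part-0000", patterns)) // true
	fmt.Println(excludedFromVersioning("app1-jobs/2022/final/part-0000", patterns))      // false
}
```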
Anis Elleuch
44a3b58e52
Add audit log for decommissioning (#14858) 2022-05-04 00:45:27 -07:00
Anis Elleuch
46de9ac03e
Decom: Easily restart decommission when it is done (#14855)
When a decommission task has completed, failed, or been canceled,
this commit allows starting a decommission again. Restarting is
not allowed while a decommission task is in progress.
2022-05-03 13:36:08 -07:00
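
The restart rule reduces to a small state check, sketched below with illustrative state names (not MinIO's actual types):

```go
package main

import (
	"errors"
	"fmt"
)

type decomState int

const (
	idle decomState = iota
	running
	completed
	failed
	canceled
)

// canStart permits a new decommission in every terminal state, but never
// while one is in progress.
func canStart(s decomState) error {
	if s == running {
		return errors.New("decommission already in progress")
	}
	return nil
}

func main() {
	fmt.Println(canStart(completed)) // <nil> -- restart allowed
	fmt.Println(canStart(running))   // error -- must wait or cancel first
}
```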
Harshavardhana
424b44c247
allow changing server command line from http->https (#14832)
This is allowed as long as the pool order of the
existing setup is preserved as-is; the new command
line is updated in `pool.bin` to facilitate future
decommissions on these pools.
2022-04-28 16:27:53 -07:00
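
A sketch of the kind of comparison that lets an http→https switch pass the ordering check: compare endpoints by host and path, ignoring the scheme. The parsing below is simplified (real pool arguments use `{1...n}` ellipsis patterns) and is not MinIO's actual endpoint logic.

```go
package main

import (
	"fmt"
	"net/url"
)

// samePool treats two endpoints as the same pool entry if host and path
// match, so only the scheme may change between restarts.
func samePool(oldEP, newEP string) bool {
	ou, err1 := url.Parse(oldEP)
	nu, err2 := url.Parse(newEP)
	if err1 != nil || err2 != nil {
		return false
	}
	return ou.Host == nu.Host && ou.Path == nu.Path
}

func main() {
	fmt.Println(samePool("http://minio1/data1", "https://minio1/data1")) // true
	fmt.Println(samePool("http://minio1/data1", "https://minio2/data1")) // false
}
```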
Harshavardhana
c56a139fdc
fix: support decommissioning directory objects (#14822)
improvements in this PR include

- decommission objects that have __XLDIR__ suffix
- decommission objects that have `null` version on
  a versioned bucket.
- make sure to look for any "decom" failures to ensure
  that we do not wrongly conclude decom as complete without
  all files getting copied over.
- break out eagerly upon the first error for objects with
  multiple versions; leave the object as-is to support
  debugging and analysis.
2022-04-26 20:06:41 -07:00
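
The last bullet's eager-break behavior, sketched with illustrative types (not MinIO's code): the first failing version aborts the object's decommission and the object is left in place for inspection.

```go
package main

import (
	"errors"
	"fmt"
)

type version struct{ ID string }

func decommissionVersion(v version) error {
	if v.ID == "v2" { // simulate a failing copy
		return errors.New("copy failed")
	}
	return nil
}

// decommissionObject stops at the first failing version and leaves the
// object as-is so the partial state can be inspected later.
func decommissionObject(versions []version) error {
	for _, v := range versions {
		if err := decommissionVersion(v); err != nil {
			return fmt.Errorf("version %s: %w (object left as-is for debugging)", v.ID, err)
		}
	}
	return nil
}

func main() {
	err := decommissionObject([]version{{"v1"}, {"v2"}, {"v3"}})
	fmt.Println(err) // v3 was never attempted
}
```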
Harshavardhana
7e248fc0ba
wait on parallel decom to complete before returning (#14764)
Without this wait there is a potential that objects
actively being decommissioned would be canceled, yet the
decommission status might wrongly conclude this as
"Complete".

To avoid this, make sure to add waitgroups on the parallel
workers, allowing parallel copies to complete fully before
we return.
2022-04-18 13:26:29 -07:00
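
A minimal sketch of the fix's shape: fan out copies to parallel workers and Wait() on them before reporting, so status can never read "Complete" while copies are in flight. The worker body is a stand-in for the real cross-pool copy.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	objects := make(chan string)
	var wg sync.WaitGroup

	for i := 0; i < 4; i++ { // parallel decommission workers
		wg.Add(1)
		go func() {
			defer wg.Done()
			for obj := range objects {
				fmt.Println("copied", obj) // stand-in for the cross-pool copy
			}
		}()
	}

	for _, o := range []string{"a", "b", "c", "d", "e"} {
		objects <- o
	}
	close(objects)

	wg.Wait() // without this, the caller could mark the pool "Complete" early
	fmt.Println("decommission of this set is complete")
}
```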
Harshavardhana
8318aa0113
cancel active routine only after metadata has been saved (#14757)
Currently the updated pool.bin was not saved properly, which
made it impossible to remove a pool upon a successful decommission.

fixes #14756
2022-04-15 13:16:15 -07:00
Krishna Srinivas
5f94cec1e2
Allow parallel decom migration threads to exceed the number of erasure sets (#14733) 2022-04-12 10:49:53 -07:00
Krishna Srinivas
48594617b5
Parallelize decommissioning process (#14704) 2022-04-07 23:19:13 -07:00
Harshavardhana
ee49a23220
resume/start decommission on the first node of the pool under decommission (#14705)
Additionally fixes

- IsSuspended() can use read locks
- Avoid a double-cancel panic on the canceler
2022-04-06 23:42:05 -07:00
Krishna Srinivas
bdd816488d
Get the BackendInfo to fill the appropriate struct fields (#14660) 2022-03-30 10:48:35 -07:00
Krishna Srinivas
36dcfee2f7
Allow decommission of a pool even if a drive in it is down (#14656) 2022-03-29 22:51:31 -07:00
Harshavardhana
bd6f7b6d83
fix: make decommission restart non-blocking (#14591)
Currently an on-going decommission might block the
startup sequence for relatively long periods during a
server restart; instead, start the decommission lazily
in the background.
2022-03-20 14:46:43 -07:00
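
The non-blocking startup reduces to launching the resume in a goroutine, sketched here with hypothetical function names:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// resumeDecommission is a hypothetical stand-in for picking up a pending
// decommission task where it left off.
func resumeDecommission(ctx context.Context) {
	fmt.Println("decommission resumed in background")
}

func startup(ctx context.Context, pendingDecom bool) {
	if pendingDecom {
		go resumeDecommission(ctx) // do not block the startup sequence on it
	}
	fmt.Println("server ready to serve requests")
}

func main() {
	startup(context.Background(), true)
	time.Sleep(100 * time.Millisecond) // give the background task a moment
}
```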
Harshavardhana
e3071157f0
allow MakeBucketLocation to work for metaBucket (#14548)
Decommission would fail to start due to a
MakeBucketLocation() error on .minio.sys/ bucket
creation.

Allow these special buckets.
2022-03-14 11:25:24 -07:00
Harshavardhana
5d6f6d8d5b
create missing .minio.sys/config, .minio.sys/buckets during decommission (#14497) 2022-03-07 16:18:57 -08:00
Harshavardhana
aaea94a48d
update quorum requirement to list all objects (#14201)
Some upgraded objects might not get listed due
to different quorum ratios across objects.

Make sure to list all objects that satisfy the
maximum possible quorum.
2022-01-27 17:00:15 -08:00
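
A deliberately simplified sketch of the listing rule (MinIO's real quorum logic is more involved): accept an object for listing when enough drives agree at the most permissive valid quorum, so objects written under older parity settings still appear.

```go
package main

import "fmt"

// hasListQuorum reports whether enough drives agree on an object's
// metadata for it to be listed; "agreeing" counts matching copies.
func hasListQuorum(agreeing, totalDrives int) bool {
	// Use the most permissive valid quorum (half the drives) so objects
	// written under older, different parity settings still get listed.
	return agreeing >= totalDrives/2
}

func main() {
	fmt.Println(hasListQuorum(8, 16)) // true
	fmt.Println(hasListQuorum(5, 16)) // false
}
```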
Harshavardhana
0df31f63ab
reject changing pools when there are pending decommissions in-progress (#14102)
Do not allow mutation of the pool command line when there
are unfinished decommissions in place; disallow such
scenarios to avoid user mistakes.

Also add testcases to cover all relevant scenarios.
2022-01-14 10:32:35 -08:00
Harshavardhana
d50442da01
fix: simplify usage calculation and progress (#14086) 2022-01-11 18:48:43 -08:00
Harshavardhana
404b05a44c
fix: ignore drained pool in Healing, hold lock additionally (#14080) 2022-01-11 12:27:47 -08:00
Harshavardhana
737a3f0bad
fix: decommission bugfixes found during migration of .minio.sys/config (#14078) 2022-01-10 17:26:00 -08:00
Harshavardhana
76b21de0c6
feat: decommission feature for pools (#14012)
```
λ mc admin decommission start alias/ http://minio{1...2}/data{1...4}
```

```
λ mc admin decommission status alias/
┌─────┬─────────────────────────────────┬──────────────────────────────────┬────────┐
│ ID  │ Pools                           │ Capacity                         │ Status │
│ 1st │ http://minio{1...2}/data{1...4} │ 439 GiB (used) / 561 GiB (total) │ Active │
│ 2nd │ http://minio{3...4}/data{1...4} │ 329 GiB (used) / 421 GiB (total) │ Active │
└─────┴─────────────────────────────────┴──────────────────────────────────┴────────┘
```

```
λ mc admin decommission status alias/ http://minio{1...2}/data{1...4}
Progress: ===================> [1GiB/sec] [15%] [4TiB/50TiB]
Time Remaining: 4 hours (started 3 hours ago)
```

```
λ mc admin decommission status alias/ http://minio{1...2}/data{1...4}
ERROR: This pool is not scheduled for decommissioning currently.
```

```
λ mc admin decommission cancel alias/
┌─────┬─────────────────────────────────┬──────────────────────────────────┬──────────┐
│ ID  │ Pools                           │ Capacity                         │ Status   │
│ 1st │ http://minio{1...2}/data{1...4} │ 439 GiB (used) / 561 GiB (total) │ Draining │
└─────┴─────────────────────────────────┴──────────────────────────────────┴──────────┘
```

> NOTE: A canceled decommission will not make the pool active again, since we might
> have potentially partial duplicate content on the other pools. To avoid this
> scenario, be very sure to start decommissioning as a planned activity.

```
λ mc admin decommission cancel alias/ http://minio{1...2}/data{1...4}
┌─────┬─────────────────────────────────┬──────────────────────────────────┬────────────────────┐
│ ID  │ Pools                           │ Capacity                         │ Status             │
│ 1st │ http://minio{1...2}/data{1...4} │ 439 GiB (used) / 561 GiB (total) │ Draining(Canceled) │
└─────┴─────────────────────────────────┴──────────────────────────────────┴────────────────────┘
```
2022-01-10 09:07:49 -08:00