Commit Graph

72 Commits

Author SHA1 Message Date
Harshavardhana 9081346c40 fix: more regressions listing policy mappings (#18060)
also relax ListServiceAccounts() so it no longer returns an error
when no service accounts exist.
2023-09-19 15:23:18 -07:00
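
As a hedged illustration of the relaxed behavior in the commit above (the store shape and `ServiceAccount` type here are hypothetical stand-ins, not MinIO's actual code):

```go
package main

import "fmt"

// ServiceAccount is a hypothetical stand-in for the real type.
type ServiceAccount struct{ AccessKey string }

type iamStore struct {
	accountsByUser map[string][]ServiceAccount
}

// ListServiceAccounts sketches the relaxed behavior: a user with no
// service accounts yields a valid empty result instead of an error.
func (s *iamStore) ListServiceAccounts(user string) ([]ServiceAccount, error) {
	accounts, ok := s.accountsByUser[user]
	if !ok {
		return []ServiceAccount{}, nil // previously an error path
	}
	return accounts, nil
}

func main() {
	s := &iamStore{accountsByUser: map[string][]ServiceAccount{}}
	accts, err := s.ListServiceAccounts("alice")
	fmt.Println(len(accts), err) // 0 <nil>
}
```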
Harshavardhana 8c4561b8da
add all missing go.mod for debugging tools (#18049) 2023-09-18 13:47:03 -07:00
Harshavardhana fa6d082bfd
reduce all major allocations in replication path (#18032)
- remove targetClient from being passed around via replicationObjectInfo{}
- remove unnecessary cloning of object info
- remove objectInfo from replicationObjectInfo{} (keep only the necessary fields)
2023-09-16 02:28:06 -07:00
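
A rough sketch of the struct-trimming idea above, with hypothetical field names; the point is carrying only the fields replication needs instead of a full object-info clone plus a target client:

```go
package main

import "time"

// ObjectInfo stands in for the full object metadata type.
type ObjectInfo struct {
	Bucket, Name, VersionID, ETag string
	Size                          int64
	ModTime                       time.Time
	UserDefined                   map[string]string // large, often unneeded
}

// replicationObjectInfo carries only what replication needs
// (hypothetical fields), avoiding a copy of the whole ObjectInfo
// and the target client that used to ride along with it.
type replicationObjectInfo struct {
	Bucket, Name, VersionID string
	Size                    int64
	ModTime                 time.Time
}

// toReplicationObjectInfo extracts the needed fields without
// cloning the entire ObjectInfo.
func toReplicationObjectInfo(oi *ObjectInfo) replicationObjectInfo {
	return replicationObjectInfo{
		Bucket: oi.Bucket, Name: oi.Name, VersionID: oi.VersionID,
		Size: oi.Size, ModTime: oi.ModTime,
	}
}

func main() {
	oi := &ObjectInfo{Bucket: "b", Name: "o", Size: 42}
	_ = toReplicationObjectInfo(oi)
}
```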
Harshavardhana 8a9b886011
update grafana dashboard with disk -> drive rename (#17857) 2023-08-15 16:04:20 -07:00
Harshavardhana dfd7cca0d2
fix: allow cancel of decom only when its in progress (#17607) 2023-07-10 07:55:38 -07:00
Harshavardhana 4a425cbac1
cleanup scripts and apply shfmt (#17284) 2023-05-25 22:07:25 -07:00
Harshavardhana fb1492f531
check for quorum errors for DeleteBucket() (#16859) 2023-03-20 23:38:06 -07:00
Poorna d1e775313d
support decommissioning of tiered objects (#16751) 2023-03-16 07:48:05 -07:00
Daryl White d44f3526dc
Update links to documentation site (#15750) 2022-09-28 21:28:45 -07:00
Harshavardhana b6eb8dff64
Add decommission compression+encryption enabled tests (#15322)
update compression environment variables to follow
the expected sub-system style, while still supporting
a fallback mode.
2022-07-17 08:43:14 -07:00
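
A minimal sketch of the fallback lookup described above; the environment variable names below are illustrative placeholders, not MinIO's actual keys:

```go
package main

import (
	"fmt"
	"os"
)

// lookupWithFallback prefers the new sub-system-style key and
// falls back to the legacy key (both names here are hypothetical).
func lookupWithFallback(newKey, legacyKey string) (string, bool) {
	if v, ok := os.LookupEnv(newKey); ok {
		return v, true
	}
	return os.LookupEnv(legacyKey)
}

func main() {
	// Only the legacy-style key is set; the lookup still succeeds.
	os.Setenv("MINIO_COMPRESSION_LEGACY_EXAMPLE", "on")
	v, ok := lookupWithFallback("MINIO_COMPRESSION_ENABLE_EXAMPLE",
		"MINIO_COMPRESSION_LEGACY_EXAMPLE")
	fmt.Println(v, ok) // on true
}
```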
Harshavardhana 7da9e3a6f8
support encrypted/compressed objects properly during decommission (#15320)
fixes #15314
2022-07-16 19:35:24 -07:00
daniel-bogusz95 00e235a1ee
fix grammatical errors and minor rewrites (#15264)
Thank you @djwfyi for the help
2022-07-11 07:59:49 -07:00
Harshavardhana 9d80ff5a05
fix: decommission delete markers for non-current objects (#15225)
decommissioning versioned buckets was not recreating the delete markers
present in the versioned stack of an object, which essentially
prevented decommission from succeeding.

This PR creates such delete markers properly during
the decommissioning process and adds tests as well.
2022-07-05 07:37:24 -07:00
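
The gist of the fix above, as a hedged sketch: when walking an object's versioned stack during decommission, delete markers must be recreated on the destination pool rather than skipped. All types and callbacks here are hypothetical:

```go
package main

import "fmt"

// version is a hypothetical stand-in for one entry in an
// object's versioned stack.
type version struct {
	VersionID    string
	DeleteMarker bool
}

// decommissionVersions sketches the fix: copy data versions AND
// recreate delete markers, so the version stack stays intact on
// the destination pool.
func decommissionVersions(versions []version, copyData func(version) error,
	putDeleteMarker func(version) error) error {
	for _, v := range versions {
		if v.DeleteMarker {
			// Previously skipped, which broke decommission of
			// versioned buckets; now recreated on the new pool.
			if err := putDeleteMarker(v); err != nil {
				return err
			}
			continue
		}
		if err := copyData(v); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	vs := []version{{"v2", true}, {"v1", false}}
	err := decommissionVersions(vs,
		func(v version) error { fmt.Println("copy", v.VersionID); return nil },
		func(v version) error { fmt.Println("marker", v.VersionID); return nil })
	fmt.Println(err)
}
```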
Harshavardhana b311abed31
decom IAM, Bucket metadata properly (#15220)
The current code incorrectly passed the
config asset object name while decommissioning;
make sure that we pass the right object name
to be hashed onto the newer set of pools.

This PR fixes situations where, after a successful
decommission, users and policies might go
missing due to a wrongly hashed set.
2022-07-04 14:02:54 -07:00
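
A simplified sketch of why the object name matters in the commit above: the erasure set an object lands on is chosen by hashing its name, so hashing the wrong name places IAM/config assets on the wrong set in the new pools. The hash below is illustrative, not MinIO's actual algorithm:

```go
package main

import (
	"fmt"
	"hash/crc32"
)

// setIndex picks an erasure set for an object by hashing its name
// (illustrative; MinIO's real hashing differs).
func setIndex(objectName string, setCount int) int {
	return int(crc32.ChecksumIEEE([]byte(objectName))) % setCount
}

func main() {
	// Hashing a different name can land the same asset on a
	// different set, making users/policies "disappear" after
	// decommission. Object names here are hypothetical.
	fmt.Println(setIndex("config/iam/policies/readwrite.json", 4))
	fmt.Println(setIndex("readwrite.json", 4)) // likely a different set
}
```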
Harshavardhana 9c605ad153
allow support for parity '0', '1' enabling support for 2,3 drive setups (#15171)
allows for more granular setups

- 2 drives (1 parity, 1 data)
- 3 drives (1 parity, 2 data)

Bonus: allows '0' parity as well.
2022-06-27 20:22:18 -07:00
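
A small sketch of the arithmetic above, assuming a simple validity rule (parity between 0 and half the drives); this is not MinIO's full validation:

```go
package main

import "fmt"

// splitDrives returns the data/parity split for a drive count,
// validating the relaxed rule that now permits parity 0 and 1.
func splitDrives(drives, parity int) (data int, err error) {
	if parity < 0 || parity > drives/2 {
		return 0, fmt.Errorf("parity %d invalid for %d drives", parity, drives)
	}
	return drives - parity, nil
}

func main() {
	for _, c := range []struct{ drives, parity int }{{2, 1}, {3, 1}, {4, 0}} {
		data, err := splitDrives(c.drives, c.parity)
		fmt.Println(c.drives, "drives:", data, "data /", c.parity, "parity", err)
	}
}
```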
Harshavardhana f088e8960b
docs: turn-on more markdown rules and fix them (#14301) 2022-02-14 08:50:42 -08:00
Harshavardhana e3e0532613
cleanup markdown docs across multiple files (#14296)
enable markdown-linter
2022-02-11 16:51:25 -08:00
Harshavardhana cd7a5cab8a update docs for Decommission 2022-01-25 11:56:04 -08:00
Harshavardhana f30afa4956
docs: add decommission docs about pool removal (#14159) 2022-01-24 09:47:06 -08:00
Harshavardhana 8fb4ae916c update decommission docs 2022-01-21 18:34:06 -08:00
Harshavardhana 76b21de0c6
feat: decommission feature for pools (#14012)
```
λ mc admin decommission start alias/ http://minio{1...2}/data{1...4}
```

```
λ mc admin decommission status alias/
┌─────┬─────────────────────────────────┬──────────────────────────────────┬────────┐
│ ID  │ Pools                           │ Capacity                         │ Status │
│ 1st │ http://minio{1...2}/data{1...4} │ 439 GiB (used) / 561 GiB (total) │ Active │
│ 2nd │ http://minio{3...4}/data{1...4} │ 329 GiB (used) / 421 GiB (total) │ Active │
└─────┴─────────────────────────────────┴──────────────────────────────────┴────────┘
```

```
λ mc admin decommission status alias/ http://minio{1...2}/data{1...4}
Progress: ===================> [1GiB/sec] [15%] [4TiB/50TiB]
Time Remaining: 4 hours (started 3 hours ago)
```

```
λ mc admin decommission status alias/ http://minio{1...2}/data{1...4}
ERROR: This pool is not scheduled for decommissioning currently.
```

```
λ mc admin decommission cancel alias/
┌─────┬─────────────────────────────────┬──────────────────────────────────┬──────────┐
│ ID  │ Pools                           │ Capacity                         │ Status   │
│ 1st │ http://minio{1...2}/data{1...4} │ 439 GiB (used) / 561 GiB (total) │ Draining │
└─────┴─────────────────────────────────┴──────────────────────────────────┴──────────┘
```

> NOTE: A canceled decommission will not make the pool active again, since we might
> have potentially partial duplicate content on the other pools. To avoid this
> scenario, be very sure to start decommissioning as a planned activity.

```
λ mc admin decommission cancel alias/ http://minio{1...2}/data{1...4}
┌─────┬─────────────────────────────────┬──────────────────────────────────┬────────────────────┐
│ ID  │ Pools                           │ Capacity                         │ Status             │
│ 1st │ http://minio{1...2}/data{1...4} │ 439 GiB (used) / 561 GiB (total) │ Draining(Canceled) │
└─────┴─────────────────────────────────┴──────────────────────────────────┴────────────────────┘
```
2022-01-10 09:07:49 -08:00
Klaus Post acc452b7ce
Add more erasure codes on degraded systems. (#11852)
In cases where a cluster is degraded, we do not uphold our consistency
guarantee: we write fewer erasure codes and rely on healing
to recreate the missing shards.

In some cases, replacing known bad disks can in practice take days.
We want to change the behavior of a known degraded system to keep
the erasure code promise of the storage class for each object.

This will create the objects with the same confidence as a fully 
functional cluster. The tradeoff will be that objects created 
during a partial outage will take up slightly more space.

This means that when the storage class is EC:4, 4 parity
shards should always be written, even if some disks are unavailable.

When an object is created on a set, the disks are immediately
checked. If any disks are unavailable, additional parity shards
will be made for each offline disk, up to 50% of the number of disks.

We add an internal metadata field with the actual and intended
erasure code level; this can optionally be picked up later by
the scanner if we decide that data like this should be re-sharded.
2021-05-27 11:38:09 -07:00
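
A sketch of the parity bump described above: add one parity shard per offline disk, capped at half the disks in the set. This is simplified relative to the real implementation:

```go
package main

import "fmt"

// effectiveParity bumps the configured parity by one shard per
// offline disk, capped at 50% of the set size (simplified sketch).
func effectiveParity(configured, offline, totalDisks int) int {
	parity := configured + offline
	if limit := totalDisks / 2; parity > limit {
		parity = limit
	}
	return parity
}

func main() {
	// EC:4 on a 16-disk set with 2 disks offline writes 6 parity
	// shards, so 4 remain available once the bad disks are replaced.
	fmt.Println(effectiveParity(4, 2, 16)) // 6
	fmt.Println(effectiveParity(4, 8, 16)) // capped at 8
}
```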
Harshavardhana 3d9873106d
feat: distributed setup can start now with default credentials (#12303)
In view of new changes coming for the server command line, this
change deprecates the strict requirement for distributed setups
to provide root credentials.

Bonus: remove the MINIO_WORM warning from April 2020; it is time
for this warning to go.
2021-05-17 08:45:22 -07:00
Harshavardhana 7334247c98 update docs about NFS consistency model 2021-05-14 11:34:56 -07:00
WangYuMu c70240b893
fix incorrect values in sizing guide (#11583) 2021-02-19 10:05:04 -08:00
Harshavardhana f903cae6ff
Support variable server pools (#11256)
The current implementation requires server pools to have the
same erasure stripe sizes, to facilitate the same SLA
and expectations.

This PR allows server pools to be variadic, i.e. they
do not have to have the same erasure stripe sizes; instead
they must uphold the SLA for the parity ratio.

If the parity ratio cannot be guaranteed by the new
server pool, the deployment is rejected, i.e. server
pool expansion is not allowed.
2021-01-16 12:08:02 -08:00
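
A sketch of the acceptance check implied above: a new pool is rejected if it cannot guarantee the existing parity ratio. The `pool` shape and the ratio comparison are illustrative assumptions:

```go
package main

import "fmt"

// pool describes one server pool's stripe (hypothetical shape).
type pool struct{ stripeSize, parity int }

// acceptPool sketches the rule: stripe sizes may differ across
// pools, but the new pool's parity ratio must be at least the
// deployment's existing ratio, else expansion is rejected.
func acceptPool(existing, candidate pool) error {
	existingRatio := float64(existing.parity) / float64(existing.stripeSize)
	candidateRatio := float64(candidate.parity) / float64(candidate.stripeSize)
	if candidateRatio < existingRatio {
		return fmt.Errorf("parity ratio %.2f below required %.2f",
			candidateRatio, existingRatio)
	}
	return nil
}

func main() {
	fmt.Println(acceptPool(pool{16, 4}, pool{8, 2}))  // <nil>: same ratio
	fmt.Println(acceptPool(pool{16, 4}, pool{12, 2})) // rejected
}
```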
Harshavardhana b5d291ea88
fix: rename remaining zone -> pool (#11231) 2021-01-06 09:35:47 -08:00
Harshavardhana cb0eaeaad8
feat: migrate to ROOT_USER/PASSWORD from ACCESS/SECRET_KEY (#11185) 2021-01-05 10:22:57 -08:00
Harshavardhana b43906f6ee fix: docs typos and keywords 2020-12-23 11:59:20 -08:00
Harshavardhana 4ec45753e6 rename server sets to server pools 2020-12-01 13:50:33 -08:00
Harshavardhana 790833f3b2 Revert "Support variable server sets (#10314)"
This reverts commit aabf053d2f.
2020-12-01 12:02:29 -08:00
Harshavardhana aabf053d2f
Support variable server sets (#10314) 2020-11-25 16:28:47 -08:00
Harshavardhana ad726b49b4
rename zones to serverSets to avoid terminology conflict (#10679)
since we are bringing in availability zones, we should avoid
using "zones" for the server expansion concept.
2020-10-15 14:28:50 -07:00
Eco 92cd1eed45
Clarify zone example (#10374) 2020-08-28 14:03:29 -07:00
Harshavardhana 38eef5ce4c
fix: documentation fixes for docker ENV settings (#9975)
- update CREDITS file
- fix markdown links
- talk a bit more about upgrades
2020-07-06 06:42:34 -07:00
Ritesh H Shukla 920b863955
Fix formatting and wording in distributed docs (#9784) 2020-06-08 18:03:15 -07:00
Harshavardhana 0113035237
update distributed setup guide (#9566) 2020-05-11 09:19:10 -07:00
Harshavardhana 30707659b5
[feature] allow for an odd number of erasure packs (#9221)
Too many deployments come up with an odd number
of hosts or drives; to facilitate even distribution
among those setups, allow for packs based on odd
and prime numbers.
2020-03-31 09:32:16 -07:00
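
A sketch of the distribution idea above: pick a pack (set) size that evenly divides the drive count, now also allowing odd and prime sizes. The candidate range is illustrative, not MinIO's exact table:

```go
package main

import "fmt"

// chooseSetSize returns the largest size in [minSize, maxSize] that
// evenly divides the drive count; odd sizes are allowed.
func chooseSetSize(drives, minSize, maxSize int) (int, bool) {
	for size := maxSize; size >= minSize; size-- {
		if drives%size == 0 {
			return size, true
		}
	}
	return 0, false
}

func main() {
	fmt.Println(chooseSetSize(15, 4, 16)) // 15 true (odd pack)
	fmt.Println(chooseSetSize(9, 4, 16))  // 9 true
	fmt.Println(chooseSetSize(7, 4, 16))  // 7 true (prime)
}
```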
Harshavardhana 38cf263409 fix: docs remove goreportcard, its deprecated 2020-03-24 14:51:06 -07:00
Harshavardhana 60813bef29
Allow proper setCount SLAs across zones (#8752)
Fixes a scenario so that zones are appropriately
handled, along with support for overriding the set
count. The new fix also ensures that we handle
the various setup types properly.

Update documentation to properly indicate the
behavior.

Fixes #8750

Co-authored-by: Nitish Tiwari <nitish@minio.io>
2020-01-07 09:13:44 -08:00
Brian Candler 9f44fcd540 Clarify behaviour of erasure coding sets (#8745) 2020-01-05 13:00:11 -08:00
Harshavardhana dd311623df Update design doc with zone implementation details (#8738)
Fixes #8719
2020-01-02 16:46:16 -08:00
Harshavardhana c8d82588c2 Fix crash in console logger and also handle bucket DNS updates (#8654)
Also fix listenBucketNotification bugs seen by minio-js
listen bucket notification API.
2019-12-16 20:30:57 -08:00
Harshavardhana 347b29d059 Implement bucket expansion (#8509) 2019-11-19 17:42:27 -08:00
Harshavardhana 87e6533cf3 Add some design docs for distributed setup (#7950) 2019-07-23 07:48:10 +05:30
Harshavardhana a075015293 doc: Merge large bucket with distributed docs (#7761) 2019-06-11 13:44:33 -07:00
ebozduman 78be3f8947 Removes the incorrect coverage badge from the docs (#7651) 2019-05-16 12:11:49 +05:30
kannappanr 5ecac91a55
Replace Minio refs in docs with MinIO and links (#7494) 2019-04-09 11:39:42 -07:00
Eco 3457e504cf Spelling changes and fixed link (#6596) 2018-10-17 10:55:55 -07:00
Eco 2af0f11731 Update readme.md (#6568) 2018-10-05 16:25:22 -07:00