mirror of https://github.com/minio/minio.git
acc452b7ce
In cases where a cluster is degraded, we currently do not uphold our consistency guarantee: we write fewer erasure-code shards and rely on healing to recreate the missing ones. In practice, replacing known bad disks can take days. This change makes a known-degraded system keep the erasure-code promise of each object's storage class, so objects are created with the same confidence as on a fully functional cluster. The tradeoff is that objects created during a partial outage take up slightly more space. Concretely, when the storage class is EC:4, 4 parity shards should always be written, even if some disks are unavailable. When an object is created on a set, the disks are checked immediately; for each offline disk an additional parity shard is written, up to 50% of the total number of disks. An internal metadata field records the actual and intended erasure-code level, which the scanner can optionally pick up later if we decide that such data should be re-sharded.
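A minimal sketch of the parity-upgrade rule described above, assuming one extra parity shard per offline disk with a hard cap at half the set size; the function name `effectiveParity` and its signature are illustrative, not MinIO's actual API:

```go
package main

import "fmt"

// effectiveParity returns the number of parity shards to write for a new
// object, given the parity configured by the storage class and the disk
// status of the erasure set at write time. Illustrative sketch only: one
// extra parity shard per offline disk, capped at half the disks in the set.
func effectiveParity(configuredParity, offlineDisks, totalDisks int) int {
	parity := configuredParity + offlineDisks
	if limit := totalDisks / 2; parity > limit {
		parity = limit
	}
	return parity
}

func main() {
	// 16-disk erasure set, storage class EC:4, 2 disks offline:
	// 4 configured + 2 extra = 6 parity shards, within the 50% cap.
	fmt.Println(effectiveParity(4, 2, 16)) // 6

	// With 6 disks offline the upgrade would exceed the cap,
	// so parity is clamped to totalDisks/2 = 8.
	fmt.Println(effectiveParity(4, 6, 16)) // 8
}
```

With this rule, the written parity never falls below what the storage class promises, while the 50% cap preserves the invariant that data shards always outnumber or equal parity shards.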
bigdata
bucket
chroot
compression
config
debugging
deployment/kernel-tuning
disk-caching
distributed
docker
erasure
federation/lookup
gateway
integrations/veeam
kms
logging
metrics
multi-tenancy
multi-user
orchestration
screenshots
security
select
shared-backend
sts
throttle
tls
LICENSE
minio-limits.md