Commit Graph

281 Commits

Author SHA1 Message Date
Harshavardhana
6186d11761
handle the locks properly for multi-pool callers (#20495)
- PutObjectMetadata()
- PutObjectTags()
- DeleteObjectTags()
- TransitionObject()
- RestoreTransitionObject()

Also improve the behavior of multipart code across
pool locks, hold locks only once per upload ID for

- CompleteMultipartUpload()
- AbortMultipartUpload()
- ListObjectParts() (read-lock)
- GetMultipartInfo() (read-lock)
- PutObjectPart() (read-lock)

This avoids unnecessary lock attempts across pools;
with n pools, the cost of those attempts grows as O(n).
2024-09-29 15:40:36 -07:00
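A minimal sketch of the locking scheme described above, assuming a hypothetical `server` type and `lockUploadID` helper (not MinIO's actual implementation): the lock is keyed by upload ID and taken once, rather than once per pool, so the cost no longer grows with the number of pools.

```go
package main

import (
	"fmt"
	"sync"
)

// server holds one lock per upload ID, shared across all pools, instead of
// one lock per pool per upload.
type server struct {
	mu          sync.Mutex
	uploadLocks map[string]*sync.RWMutex
}

// lockUploadID returns the single lock guarding an upload ID, creating it on
// first use. Callers hold this once instead of locking every pool in turn.
func (s *server) lockUploadID(uploadID string) *sync.RWMutex {
	s.mu.Lock()
	defer s.mu.Unlock()
	l, ok := s.uploadLocks[uploadID]
	if !ok {
		l = &sync.RWMutex{}
		s.uploadLocks[uploadID] = l
	}
	return l
}

func main() {
	s := &server{uploadLocks: map[string]*sync.RWMutex{}}

	l := s.lockUploadID("upload-123")
	l.Lock() // e.g. CompleteMultipartUpload holds the write lock once
	fmt.Println("write-locked upload-123 across all pools")
	l.Unlock()

	l.RLock() // e.g. ListObjectParts or GetMultipartInfo takes only a read lock
	fmt.Println("read-locked upload-123")
	l.RUnlock()
}
```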
Harshavardhana
70d40083e9
remove windows CI/CD for now (#20441)
Windows is now community-supported only, while
remaining source compile-friendly.
2024-09-16 13:46:53 -07:00
Harshavardhana
5bf41aff17
hold granular locking for multi-pool PutObject() (#20434)
- PutObject() for multi-pool setups was holding large
  region locks, which was unnecessary. This affects
  almost all slow clients and lengthy uploads.

- Re-arrange locks for CompleteMultipart, PutObject
  to be close to rename()
2024-09-13 13:26:02 -07:00
Harshavardhana
8c9ab85cfa
Add multipart uploads cache for ListMultipartUploads() (#20407)
This cache is honored only when `prefix=""` during the
ListMultipartUploads() operation.

This is mainly to satisfy applications like alluxio
for their underfs implementation and tests.

replaces https://github.com/minio/minio/pull/20181
2024-09-09 09:58:30 -07:00
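A sketch of the cache guard described above, with a hypothetical `uploadsCache` type (the real MinIO cache differs): only an unfiltered listing, `prefix=""`, may be served from or stored into the cache; prefixed listings always rescan.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// uploadsCache caches the result of an unfiltered multipart-uploads listing
// for a short TTL. Prefixed listings bypass it entirely.
type uploadsCache struct {
	mu      sync.Mutex
	entries []string
	loaded  time.Time
	ttl     time.Duration
}

func (c *uploadsCache) list(prefix string, scan func() []string) []string {
	if prefix != "" {
		return scan() // prefixed listings are never cached
	}
	c.mu.Lock()
	defer c.mu.Unlock()
	if time.Since(c.loaded) < c.ttl {
		return c.entries // serve the cached, unfiltered listing
	}
	c.entries = scan()
	c.loaded = time.Now()
	return c.entries
}

func main() {
	c := &uploadsCache{ttl: time.Minute}
	scan := func() []string { return []string{"upload-a", "upload-b"} }
	fmt.Println(c.list("", scan))      // cached path
	fmt.Println(c.list("logs/", scan)) // always rescans
}
```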
Harshavardhana
6c746843ac
fix: keep locks on same pool for simplicity (#20356)
Locks handed out by different pools would not compete with
each other for a multi-object delete request, which is wrong
for obvious reasons.

The upcoming locking implementation revamp will rewrite
multi-object locking anyway; this is a workaround for now.
2024-08-30 19:26:49 -07:00
Anis Eleuch
73992d2b9f
s3: DeleteBucket to use listing before returning bucket not empty error (#20301)
Use Walk(), which is a recursive listing with versioning, to check if
the bucket has any objects before it is removed. This is beneficial
because the bucket can contain multiple dangling objects on multiple
drives.

Also, this prevents a bug where a bucket is deleted in a deployment
that has many erasure sets even though the bucket contains one or a
few objects not spread across enough erasure sets.
2024-08-22 14:57:20 -07:00
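A sketch of the emptiness check, assuming stand-in `walkFn` and `removeFn` callbacks in place of MinIO's Walk() and actual bucket removal: the recursive, versioned walk decides whether the bucket-not-empty error is returned.

```go
package main

import (
	"errors"
	"fmt"
)

var errBucketNotEmpty = errors.New("bucket not empty")

// deleteBucket refuses removal if the walk finds any object version
// anywhere, including dangling objects present on only a few drives.
func deleteBucket(bucket string, walkFn func(bucket string) (found bool), removeFn func(bucket string) error) error {
	if walkFn(bucket) {
		return errBucketNotEmpty
	}
	return removeFn(bucket)
}

func main() {
	walk := func(string) bool { return true } // pretend one dangling version exists
	remove := func(string) error { return nil }
	fmt.Println(deleteBucket("photos", walk, remove)) // bucket not empty
}
```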
Harshavardhana
cc0c41d216
remove region locks and make them simpler (#20268)
- the single-flight approach is now optional, instead of the default.
- parallelize the loaders up to 32 items per asset (more room for improvement possible)
2024-08-15 08:41:03 -07:00
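A sketch of both points under stated assumptions (the loader names are invented, but `errgroup` and `singleflight` from golang.org/x/sync are real): single-flight deduplication is applied only when the caller opts in, and loads run with a bound of 32 in flight.

```go
package main

import (
	"fmt"

	"golang.org/x/sync/errgroup"
	"golang.org/x/sync/singleflight"
)

// loadAssets runs loaders concurrently with at most 32 in flight, applying
// single-flight deduplication only when requested.
func loadAssets(keys []string, useSingleFlight bool, load func(string) (string, error)) error {
	var sf singleflight.Group
	var g errgroup.Group
	g.SetLimit(32) // up to 32 concurrent loads
	for _, key := range keys {
		key := key
		g.Go(func() error {
			if useSingleFlight {
				// Concurrent callers for the same key share one load.
				_, err, _ := sf.Do(key, func() (interface{}, error) {
					return load(key)
				})
				return err
			}
			_, err := load(key)
			return err
		})
	}
	return g.Wait()
}

func main() {
	load := func(k string) (string, error) { return "loaded:" + k, nil }
	err := loadAssets([]string{"policies", "users", "groups"}, false, load)
	fmt.Println("done, err =", err)
}
```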
Anis Eleuch
51b1f41518
heal: Persist MRF queue in the disk during shutdown (#19410) 2024-08-13 15:26:05 -07:00
Harshavardhana
909b169593
avoid source index to be same as destination index (#20238)
During rebalance stop, it can happen that Put() races
by overwriting the same object again.

If this race completes "successfully", it can then
proceed to delete the object from the pool, causing
data loss.

This PR enhances #20233 to handle more scenarios such
as these.
2024-08-09 19:30:44 -07:00
Krishnan Parthasarathi
4e67a4027e
Prevent overwrites due to rebalance-stop race (#20233)
Rebalance-stop can race with ongoing rebalance operations. This change
prevents these operations from overwriting objects by checking the source
and destination pool indices are different.
2024-08-08 19:05:14 -07:00
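A minimal sketch of the guard, with hypothetical names: refuse the move when the source and destination pool indices are equal, so a racing rebalance-stop can never overwrite an object and then delete it from the pool it actually lives in.

```go
package main

import (
	"errors"
	"fmt"
)

var errSelfOverwrite = errors.New("source and destination pool are the same; skipping")

// rebalanceObject moves an object between pools, but refuses a self-move
// that a rebalance-stop race could otherwise produce.
func rebalanceObject(srcPoolIdx, dstPoolIdx int, move func() error) error {
	if srcPoolIdx == dstPoolIdx {
		return errSelfOverwrite
	}
	return move()
}

func main() {
	move := func() error { return nil }
	fmt.Println(rebalanceObject(1, 1, move)) // refused
	fmt.Println(rebalanceObject(0, 1, move)) // proceeds
}
```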
Anis Eleuch
b7f319b62a
properly reload a fresh drive when found in a failed state during startup (#20145)
When a drive is in a failed state while a single-node multiple-drive
deployment is started, a fresh replacement disk will not be
properly healed unless the user restarts the node.

Fix this by always adding the new fresh disk to globalLocalDrivesMap. Also
remove globalLocalDrives for simplification; a map storing the local node
drives can still be used, since the order of a node's local drives is
not defined.
2024-07-24 16:30:33 -07:00
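A sketch of the map-based registration idea, with invented names mirroring globalLocalDrivesMap: keying by drive path works because the order of a node's local drives is not defined, and a replaced disk can simply be (re)registered at any time without a restart.

```go
package main

import (
	"fmt"
	"sync"
)

// localDrives tracks a node's drives by path, so a fresh replacement disk
// is picked up for healing as soon as it is registered. Illustrative only.
type localDrives struct {
	mu     sync.RWMutex
	drives map[string]string // drive path -> state
}

func (d *localDrives) register(path string) {
	d.mu.Lock()
	defer d.mu.Unlock()
	d.drives[path] = "online"
}

func main() {
	d := &localDrives{drives: map[string]string{}}
	d.register("/mnt/drive3") // replaced disk re-added on the fly
	fmt.Println(d.drives)
}
```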
Anis Eleuch
d9ee668b6d
s3: Fix wrong continuation token during listing with ILM enabled bucket (#20113) 2024-07-18 13:37:34 -07:00
Harshavardhana
7fcb428622
do not print unexpected logs (#20083) 2024-07-12 13:51:54 -07:00
Anis Eleuch
b35acb3dbc
heal: Add support of healing particular pool/set (#20024) 2024-07-01 15:02:25 -07:00
Anis Eleuch
4d7d008741
bootstrap: Speed up bucket metadata loading (#19969)
Currently, bucket metadata is loaded serially inside the ListBuckets
Object API. Fix that by loading bucket metadata in parallel, with
concurrency set to the number of erasure sets * 10, which is a good
approximation.
2024-06-21 15:22:24 -07:00
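A sketch of the parallel load under stated assumptions (names are illustrative; `errgroup` is real): concurrency is capped at the number of erasure sets * 10 rather than loading each bucket's metadata serially.

```go
package main

import (
	"fmt"

	"golang.org/x/sync/errgroup"
)

// loadBucketMetadata loads all buckets' metadata concurrently, bounded by
// numErasureSets * 10 in-flight loads.
func loadBucketMetadata(buckets []string, numErasureSets int, load func(string) error) error {
	var g errgroup.Group
	g.SetLimit(numErasureSets * 10)
	for _, b := range buckets {
		b := b
		g.Go(func() error { return load(b) })
	}
	return g.Wait()
}

func main() {
	load := func(b string) error { fmt.Println("loaded", b); return nil }
	_ = loadBucketMetadata([]string{"a", "b", "c"}, 4, load)
}
```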
Klaus Post
2d7a3d1516
Return error from mergeEntryChannels (#19970)
- Add the error from mergeEntryChannels to `results`.
- Make sure we check the context error before we close the channel.
2024-06-21 12:06:51 -07:00
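A sketch of the pattern behind both points, with invented names: the merge function reports the context error to its caller instead of silently performing a clean close of the output channel.

```go
package main

import (
	"context"
	"fmt"
)

// mergeEntries forwards entries and returns the context error, so a
// cancellation is reported to the consumer rather than being swallowed
// by the deferred close.
func mergeEntries(ctx context.Context, in <-chan string, out chan<- string) error {
	defer close(out)
	for entry := range in {
		select {
		case <-ctx.Done():
			return ctx.Err() // report cancellation instead of a clean close
		case out <- entry:
		}
	}
	return ctx.Err() // final context check before the deferred close
}

func main() {
	in := make(chan string, 2)
	in <- "obj1"
	in <- "obj2"
	close(in)
	out := make(chan string, 2)
	err := mergeEntries(context.Background(), in, out)
	for e := range out {
		fmt.Println(e)
	}
	fmt.Println("err =", err)
}
```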
Harshavardhana
2825294b7b
allow server startup to come online with READ success (#19957) 2024-06-19 22:21:31 -07:00
Harshavardhana
7a4b250c8b
avoid waiting for quorum health while debugging (#19955) 2024-06-19 10:12:20 -07:00
Harshavardhana
ee48f9f206
perform healthchecks before initializing everything fully (#19953)
adds more informative logs that provide details on which
erasure set is losing quorum etc.
2024-06-19 07:33:40 -07:00
Klaus Post
2f9018f03b
Do regular checks for healing status while scanning (#19946) 2024-06-18 09:11:04 -07:00
Harshavardhana
bbb64eaade
skip healing properly in the scanner when a drive is hotplugged (#19939)
Due to how the state is passed around, SkipHealing might
not always reflect the true state of the system, causing
a situation where the scanner might trigger healing on the
same drive that is already being healed. Because of this,
competing heals get triggered that slow each other down.
2024-06-17 16:39:11 -07:00
Harshavardhana
4af31e654b
avoid pre-populating buffers for deployments < 32GiB memory (#19839) 2024-05-30 04:58:12 -07:00
Harshavardhana
38d059b0ae
fix: single node multi-drive must register local drives properly (#19832)
Since #19688, a regression was introduced in drive
lookups for single-node multi-drive setups; drive replacement
would not work correctly without this PR.
2024-05-29 13:12:44 -07:00
Harshavardhana
64baedf5a4
fix: hide prefixes for Hadoop properly (#19821) 2024-05-28 15:53:15 -07:00
Aditya Manthramurthy
5f78691fcf
ldap: Add user DN attributes list config param (#19758)
This change uses the updated ldap library in minio/pkg (bumped
up to v3). A new config parameter is added for LDAP configuration to
specify extra user attributes to load from the LDAP server and to store
them as additional claims for the user.

A test is added in sts_handlers.go that shows how to access the LDAP
attributes as a claim.

This is in preparation for adding SSH pubkey authentication to MinIO's SFTP
integration.
2024-05-24 16:05:23 -07:00
Poorna
7d29030292
fix list results returned for spark max-keys=2 listing (#19791)
This PR continues the fix in #19725, covering some unhandled cases.
2024-05-22 16:16:34 -07:00
Shubhendu
7c7650b7c3
Add sufficient deadlines and countermeasures to handle hung node scenario (#19688)
Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
Signed-off-by: Harshavardhana <harsha@minio.io>
2024-05-22 16:07:14 -07:00
Harshavardhana
0b3eb7f218
add more deadlines and pass around context under most situations (#19752) 2024-05-15 15:19:00 -07:00
Klaus Post
b792b36495
Add Veeam storage class override (#19748)
Recent Veeam releases are very picky about storage class names. Add the `_MINIO_VEEAM_FORCE_SC` env var.

It will override the storage class returned by the storage backend if it is non-standard
and we detect a Veeam client by checking the User Agent.

Applies to HeadObject/GetObject/ListObject*
2024-05-15 11:04:16 -07:00
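A sketch of the override logic under stated assumptions: the `_MINIO_VEEAM_FORCE_SC` variable comes from the commit message, but the User-Agent detection string and function name here are guesses, not MinIO's actual code.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// overrideStorageClass replaces the backend's storage class in responses
// when the env var is set and the client looks like Veeam.
func overrideStorageClass(userAgent, backendSC string) string {
	forced := os.Getenv("_MINIO_VEEAM_FORCE_SC")
	if forced != "" && strings.Contains(userAgent, "Veeam") {
		return forced
	}
	return backendSC
}

func main() {
	os.Setenv("_MINIO_VEEAM_FORCE_SC", "STANDARD")
	fmt.Println(overrideStorageClass("Veeam/12.0", "MY_CUSTOM_SC"))    // STANDARD
	fmt.Println(overrideStorageClass("aws-sdk-go/1.44", "MY_CUSTOM_SC")) // unchanged
}
```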
Poorna
7752b03add
optimize max-keys=2 listing for spark workloads (#19725)
to return results appropriately for versioned buckets, especially
when underlying prefixes have been deleted
2024-05-13 07:57:42 -07:00
Harshavardhana
9a267f9270
allow caller context during reloads() to cancel (#19687)
Canceled callers might linger around longer and
can potentially overwhelm the system. Instead,
provide a caller context so that canceled callers
don't hold on to the reloads.

Bonus: we have no reason to cache errors; we should
never cache errors, otherwise we can have quorum
errors creeping in unexpectedly. We should let cache
invalidation hit the actual resources instead.
2024-05-08 17:51:34 -07:00
Klaus Post
847ee5ac45
Make WalkDir return errors (#19677)
If used, `opts.Marker` will cause many missed entries, since results are
returned unsorted and pools are serialized.

Switch to fully concurrent listing and merging across pools to return sorted entries.
2024-05-06 13:27:52 -07:00
Harshavardhana
da3e7747ca
avoid using 10MiB EC buffers in maxAPI calculations (#19665)
The max requests per node value is more conservative,
causing premature serialization of the calls; avoid it
for newer deployments.
2024-05-03 13:08:20 -07:00
Klaus Post
4afb59e63f
fix: walk missing entries with opts.Marker set (#19661)
`opts.Marker` causes many missed entries if used, since results are returned unsorted and pools are serialized.

Switch to do fully concurrent listing and merging across pools to return sorted entries.

Returning errors on listings is impossible with the current API, so document that.

Return an error at once if no drives are found instead of just returning an empty listing and no error.
2024-05-03 10:26:51 -07:00
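A simplified sketch of the strategy (MinIO merges sorted streams; collecting and sorting the union is enough to illustrate it): pools are listed concurrently and the results merged into one sorted listing, so marker-based continuation no longer skips entries.

```go
package main

import (
	"fmt"
	"sort"
)

// mergePools lists all pools concurrently, then merges the results into a
// single sorted listing.
func mergePools(pools [][]string) []string {
	type result struct{ entries []string }
	ch := make(chan result, len(pools))
	for _, p := range pools {
		p := p
		go func() { ch <- result{entries: p} }() // concurrent per-pool listing
	}
	var merged []string
	for range pools {
		merged = append(merged, (<-ch).entries...)
	}
	sort.Strings(merged)
	return merged
}

func main() {
	pools := [][]string{{"a/1", "c/3"}, {"b/2", "d/4"}}
	fmt.Println(mergePools(pools)) // [a/1 b/2 c/3 d/4]
}
```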
Anis Eleuch
135874ebdc
heal: Avoid marking a bucket as done when remote drives are offline (#19587) 2024-04-25 23:32:14 -07:00
Poorna
e7aa26dc29
fix: allow DeleteObject unversioned objects with insufficient read quorum (#19581)
Since the object is being permanently deleted, the lack of read quorum should not
matter as long as sufficient disks are online to complete the deletion with parity
requirements.

If several pools have the same object with insufficient read quorum, attempt to
delete the object from all the pools where it exists.
2024-04-25 17:31:12 -07:00
Klaus Post
ec816f3840
Reduce parallelReader allocs (#19558) 2024-04-19 09:44:59 -07:00
Anis Eleuch
95bf4a57b6
logging: Add subsystem to log API (#19002)
Create new code paths for multiple subsystems in the code. This will
make maintaining it easier later.

Also introduce bugLogIf() for errors that should not happen in the first
place.
2024-04-04 05:04:40 -07:00
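A minimal sketch of what a `bugLogIf()` helper could look like; the real one hooks into MinIO's logging subsystems, so only the intent is shown here: errors that should never happen are tagged loudly so they stand out from routine operational errors.

```go
package main

import (
	"context"
	"log"
)

// bugLogIf flags errors that indicate a programming bug rather than an
// expected operational failure. Illustrative sketch only.
func bugLogIf(ctx context.Context, err error) {
	if err != nil {
		log.Printf("BUG: unexpected error (please report): %v", err)
	}
}

func main() {
	bugLogIf(context.Background(), nil) // no-op on nil error
}
```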
jiuker
8222a640ac
fix: slice append loses data for NSScanner (#19373) 2024-03-28 08:13:36 -07:00
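The classic Go pitfall behind data-losing appends like this one, shown in isolation (illustrative values, not the actual NSScanner code): appending to a slice that shares backing storage silently overwrites data another slice still references; copying before append avoids the loss.

```go
package main

import "fmt"

func main() {
	base := []int{1, 2, 3, 4}
	head := base[:2]

	bad := append(head, 99) // reuses base's array: overwrites base[2]
	fmt.Println(base, bad)  // [1 2 99 4] [1 2 99]

	base = []int{1, 2, 3, 4}
	safe := append(append([]int{}, base[:2]...), 99) // fresh copy first
	fmt.Println(base, safe)                          // [1 2 3 4] [1 2 99]
}
```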
Anis Eleuch
b5e074e54c
list: Fix IsTruncated and NextMarker when encountering expired objects (#19290) 2024-03-19 13:23:12 -07:00
Harshavardhana
2cc4997d24
fix: crash on 32bit systems during pre-allocation (#19225) 2024-03-08 05:55:28 -08:00
Praveen raj Mani
df57bfcd6c
fix: cluster read health check to return proper values (#19203)
Fixes #19202
2024-03-05 10:25:49 -08:00
Harshavardhana
1b5f28e99b
fix: skip local disks properly in cluster health maintenance check (#19184) 2024-03-04 20:48:44 -08:00
Praveen raj Mani
d5656eeb65
fix: healthcheck to fail even if one erasure set doesn't have quorum (#19180)
fix: healthcheck to return false even if one erasure set doesn't have quorum
2024-03-04 08:34:14 -08:00
Krishnan Parthasarathi
a7577da768
Improve expiration of tiered objects (#18926)
- Use a shared worker pool for all ILM expiry tasks
- Free version cleanup executes in a separate goroutine
- Add a free version only if removing the remote object fails
- Add ILM expiry metrics to the node namespace
- Move tier journal tasks to expiryState
- Remove unused on-disk journal for tiered objects pending deletion
- Distribute expiry tasks across workers such that the expiry of versions of
  the same object is serialized
- Ability to resize worker pool without server restart
- Make scaling down of expiryState workers' concurrency safe; Thanks
  @klauspost
- Add error logs when expiryState and transition state are not
  initialized (yet)
* metrics: Add missed tier journal entry tasks
* Initialize the ILM worker pool after the object layer
2024-03-01 21:11:03 -08:00
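A sketch of how per-object serialization can be achieved, with a hypothetical `assignWorker` helper: hashing the object name (not the version) picks a stable worker, so all versions of one object queue behind each other on the same worker.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// assignWorker maps an object name to a worker index. All versions of the
// same object hash to the same worker, serializing their expiry.
func assignWorker(object string, numWorkers int) int {
	h := fnv.New32a()
	h.Write([]byte(object))
	return int(h.Sum32()) % numWorkers
}

func main() {
	for _, v := range []string{"bucket/obj@v1", "bucket/obj@v2"} {
		fmt.Println(v, "-> worker", assignWorker("bucket/obj", 8))
	}
}
```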
Harshavardhana
d7520f0ae6
fix: make sure maintenance=true is honored properly (#19156)
fixes a regression from #18700
2024-02-29 08:37:57 -08:00
Aditya Manthramurthy
62ce52c8fd
cachevalue: simplify exported interface (#19137)
- Also add cache options type
2024-02-28 09:09:09 -08:00
Harshavardhana
9a012a53ef
initialize the disk healer early on (#19143)
This PR fixes a bug that has perhaps been around for long,
with no visible workaround. In any deployment, if an entire
erasure set is deleted, the cluster has no way to recover.
2024-02-27 23:02:14 -08:00
Klaus Post
2b5e4b853c
Improve caching (#19130)
* Remove lock for cached operations.
* Rename "Relax" to `ReturnLastGood`.
* Add `CacheError` to allow caching values even on errors.
* Add NoWait that will return current value with async fetching if within 2xTTL.
* Make benchmark somewhat representative.

```
Before: BenchmarkCache-12       16408370                63.12 ns/op            0 B/op
After:  BenchmarkCache-12       428282187                2.789 ns/op           0 B/op
```

* Remove `storageRESTClient.scanning`. Nonsensical - RPC clients will not have any idea about scanning.
* Always fetch remote diskinfo metrics and cache them. Seems most calls are requesting metrics.
* Do async fetching of usage caches.
2024-02-26 10:49:19 -08:00
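A sketch combining the `ReturnLastGood` and `NoWait` ideas under stated assumptions (the real cachevalue package avoids locks on the cached path and has a different API; a mutex is used here for brevity): fresh values are served directly, stale values within 2xTTL are served while a background refresh runs, and a failed refresh keeps the last good value.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// cachedValue keeps one value with a TTL. Within TTL it serves the cached
// value; between TTL and 2xTTL it serves the stale value while refreshing
// asynchronously (NoWait); on refresh failure it keeps the last good value
// (ReturnLastGood). Illustrative only.
type cachedValue struct {
	mu         sync.Mutex
	val        string
	fetched    time.Time
	refreshing bool
	ttl        time.Duration
	fetch      func() (string, error)
}

func (c *cachedValue) get() string {
	c.mu.Lock()
	defer c.mu.Unlock()
	age := time.Since(c.fetched)
	if age < c.ttl {
		return c.val // fresh: serve directly
	}
	if age < 2*c.ttl && !c.refreshing {
		c.refreshing = true
		go func() { // stale but within 2xTTL: refresh in the background
			v, err := c.fetch()
			c.mu.Lock()
			defer c.mu.Unlock()
			c.refreshing = false
			if err == nil { // on error, the last good value survives
				c.val, c.fetched = v, time.Now()
			}
		}()
		return c.val
	}
	// Too stale (or first use): fetch synchronously.
	if v, err := c.fetch(); err == nil {
		c.val, c.fetched = v, time.Now()
	}
	return c.val
}

func main() {
	c := &cachedValue{
		ttl:   time.Second,
		fetch: func() (string, error) { return "disk-info", nil },
	}
	fmt.Println(c.get())
}
```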
Harshavardhana
a3ac62596c
move timedValue -> cachevalue package (#19114) 2024-02-23 13:28:14 -08:00