Commit Graph

110 Commits

Klaus Post
10099357b6
listcache: Wrap returned errors (#10882)
To give an indication of where they happen
2020-11-17 09:11:59 -08:00
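
Error wrapping of this kind is idiomatic Go. A minimal sketch of the pattern, with invented names (`readCachedList`, `readBlock`) rather than MinIO's actual functions, showing how `fmt.Errorf` with `%w` adds location context while keeping the original error matchable:

```go
package main

import (
	"errors"
	"fmt"
	"io"
)

// readCachedList stands in for a list-cache read; any error it returns
// is wrapped so callers can tell where it originated.
func readCachedList(bucket string) error {
	if err := readBlock(bucket); err != nil {
		// %w keeps the original error reachable via errors.Is/errors.As.
		return fmt.Errorf("readCachedList(%s): %w", bucket, err)
	}
	return nil
}

// readBlock simulates a lower-level failure.
func readBlock(bucket string) error {
	return io.ErrUnexpectedEOF
}

func main() {
	err := readCachedList("mybucket")
	fmt.Println(err)                                 // readCachedList(mybucket): unexpected EOF
	fmt.Println(errors.Is(err, io.ErrUnexpectedEOF)) // true
}
```
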
Klaus Post
b5a3d79bce
listobjectversions: Add shortcut for Veeam blocks (#10893)
Add shortcut for `APN/1.0 Veeam/1.0 Backup/10.0`

It requests unique blocks with a specific prefix. We skip 
scanning the parent directory for more objects matching the prefix.
2020-11-13 16:58:20 -08:00
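
A rough sketch of how such a shortcut might be shaped; the actual trigger is the `APN/1.0 Veeam/1.0 Backup/10.0` user agent above, and `isVeeamBlockPrefix`, `listPath`, and the `.blk` naming convention here are invented for illustration:

```go
package main

import (
	"fmt"
	"strings"
)

// isVeeamBlockPrefix is a hypothetical check: Veeam block objects have
// unique names under a well-known prefix, so a list for one of them can
// be answered directly instead of scanning the parent prefix.
func isVeeamBlockPrefix(prefix string) bool {
	return strings.Contains(prefix, "/blocks/") && strings.HasSuffix(prefix, ".blk")
}

func listPath(prefix string) []string {
	if isVeeamBlockPrefix(prefix) {
		// Shortcut: resolve the single object instead of walking the directory.
		return []string{prefix}
	}
	// Fallback: full scan of the parent directory (elided).
	return scanParent(prefix)
}

func scanParent(prefix string) []string { return nil }

func main() {
	fmt.Println(listPath("veeam/backup/blocks/0001.blk"))
}
```
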
Klaus Post
a3017c724e
Sort directory objects correctly (#10886)
Decode dir objects when listing and sort them correctly.
2020-11-12 13:09:34 -08:00
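
A toy illustration of why decoding matters before sorting, assuming a hypothetical `__dir__` marker suffix (MinIO's real encoding differs): the encoded directory entry sorts in the wrong place until it is decoded back to `name/`.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

const dirSuffix = "__dir__" // hypothetical directory marker

// decodeDirObject turns an encoded directory entry back into "name/".
func decodeDirObject(name string) string {
	if strings.HasSuffix(name, dirSuffix) {
		return strings.TrimSuffix(name, dirSuffix) + "/"
	}
	return name
}

func main() {
	entries := []string{"a.txt", "a" + dirSuffix, "a1.txt"}
	sort.Strings(entries)
	fmt.Println(entries) // [a.txt a1.txt a__dir__] -- directory sorts in the wrong place

	for i, e := range entries {
		entries[i] = decodeDirObject(e)
	}
	sort.Strings(entries)
	fmt.Println(entries) // [a.txt a/ a1.txt] -- correct listing order
}
```
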
Harshavardhana
ca88ca753c
ignore typed errors correctly in list cache layer (#10879)
Bonus: write the bucket metadata cache with enough quorum.

Possible fix for #10868
2020-11-12 09:28:56 -08:00
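
In Go, "typed errors" are usually matched with `errors.Is`/`errors.As`, which see through wrapped errors, so wrapping (as in #10882 above) does not break the check. A minimal sketch with an invented `errFileNotFound` sentinel:

```go
package main

import (
	"errors"
	"fmt"
	"io"
)

// errFileNotFound is a hypothetical typed error that the cache layer
// should treat as benign rather than as a failure.
var errFileNotFound = errors.New("file not found")

func loadCache() error {
	return fmt.Errorf("loadCache: %w", errFileNotFound)
}

func main() {
	err := loadCache()
	switch {
	case errors.Is(err, errFileNotFound), errors.Is(err, io.EOF):
		fmt.Println("benign, ignoring:", err)
	case err != nil:
		fmt.Println("real failure:", err)
	}
}
```
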
Klaus Post
b9277c8030
metacache: Add trashcan (#10820)
Add a trashcan that keeps recently updated lists after bucket deletion.
Previously, all caches were deleted once a bucket was deleted, so caches still running would report errors; now they are canceled instead.
Fix `.minio.sys` not being transient.
2020-11-03 12:47:52 -08:00
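
One common way to cancel in-flight work when its bucket goes away is to register a per-bucket `context.CancelFunc`; a self-contained sketch of that idea, with all names invented:

```go
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

// bucketCaches tracks cancel functions for listings per bucket.
type bucketCaches struct {
	mu      sync.Mutex
	cancels map[string][]context.CancelFunc
}

func (b *bucketCaches) startListing(bucket string) context.Context {
	ctx, cancel := context.WithCancel(context.Background())
	b.mu.Lock()
	b.cancels[bucket] = append(b.cancels[bucket], cancel)
	b.mu.Unlock()
	return ctx
}

func (b *bucketCaches) deleteBucket(bucket string) {
	b.mu.Lock()
	for _, cancel := range b.cancels[bucket] {
		cancel() // running listings observe ctx.Done() and stop cleanly
	}
	delete(b.cancels, bucket)
	b.mu.Unlock()
}

func main() {
	bc := &bucketCaches{cancels: map[string][]context.CancelFunc{}}
	ctx := bc.startListing("mybucket")
	go bc.deleteBucket("mybucket")
	select {
	case <-ctx.Done():
		fmt.Println("listing canceled:", ctx.Err())
	case <-time.After(time.Second):
		fmt.Println("timed out")
	}
}
```
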
Klaus Post
0a796505c1
metacache: Check only one disk for updates (#10809)
This will reduce IO while waiting for lists to finish.
2020-11-02 17:20:27 -08:00
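
A sketch of the idea under the assumption that every disk carries the same cache metadata, so polling a single one is enough while waiting; `cacheUpdated` and the disk names are invented:

```go
package main

import "fmt"

// cacheUpdated polls a single disk for list-cache progress instead of
// issuing one read per disk on every poll cycle.
func cacheUpdated(disks []string, checkDisk func(string) bool) bool {
	if len(disks) == 0 {
		return false
	}
	// Each disk holds the same cache metadata, so one answer suffices.
	return checkDisk(disks[0])
}

func main() {
	disks := []string{"disk1", "disk2", "disk3"}
	fmt.Println(cacheUpdated(disks, func(d string) bool {
		fmt.Println("polling", d) // only one disk gets IO
		return true
	}))
}
```
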
Klaus Post
6135f072d2
Fix invalidated metacaches (#10784)
* Fix caches having EOF marked as a failure.
* Simplify cache updates.
* Provide context for checkMetacacheState failures.
* Log 499 when the client disconnects (status mapping sketched after this entry).
2020-10-30 09:33:16 -07:00
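
A sketch of the error-to-status mapping implied by the first and last points: `io.EOF` means a stream ended normally, and a canceled request context maps to 499 (nginx's "client closed request" convention). `statusForErr` is invented for illustration:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"io"
)

// statusForErr treats EOF as success rather than failure, and reports
// a canceled request context as 499 instead of a server error.
func statusForErr(err error) int {
	switch {
	case err == nil, errors.Is(err, io.EOF):
		return 200
	case errors.Is(err, context.Canceled):
		return 499
	default:
		return 500
	}
}

func main() {
	fmt.Println(statusForErr(io.EOF))             // 200
	fmt.Println(statusForErr(context.Canceled))   // 499
	fmt.Println(statusForErr(errors.New("boom"))) // 500
}
```
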
Harshavardhana
4bf90ca67f
fix: handle a crash when AskDisks is set to -1 (#10777)
2020-10-29 09:25:43 -07:00
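
`AskDisks` controls how many disks a listing consults; a defensive sketch of normalizing the value before slicing, with `disksToQuery` invented for illustration (here -1 is taken to mean "ask all disks"):

```go
package main

import "fmt"

// disksToQuery clamps askDisks to a valid slice length; using -1
// directly as a bound would panic.
func disksToQuery(disks []string, askDisks int) []string {
	if askDisks <= 0 || askDisks > len(disks) {
		askDisks = len(disks)
	}
	return disks[:askDisks]
}

func main() {
	disks := []string{"d1", "d2", "d3", "d4"}
	fmt.Println(disksToQuery(disks, -1)) // [d1 d2 d3 d4]
	fmt.Println(disksToQuery(disks, 2))  // [d1 d2]
}
```
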
Klaus Post
bfc36aed89
Add update retry limit and compare error by string instead (#10776)
2020-10-28 13:19:53 -07:00
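
A sketch of both halves of that change, with the limit, message text, and `updateWithRetry` invented: retries are capped, and an error that crossed an RPC boundary is matched by its string because its concrete type does not survive serialization:

```go
package main

import (
	"errors"
	"fmt"
)

const maxUpdateRetries = 3 // hypothetical retry cap

func updateWithRetry(update func() error) error {
	var err error
	for i := 0; i < maxUpdateRetries; i++ {
		if err = update(); err == nil {
			return nil
		}
		// Typed comparison fails for errors serialized over the wire;
		// fall back to matching the message text.
		if err.Error() != "cache outdated" {
			return err
		}
	}
	return fmt.Errorf("giving up after %d retries: %w", maxUpdateRetries, err)
}

func main() {
	fmt.Println(updateWithRetry(func() error { return errors.New("cache outdated") }))
}
```
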
Klaus Post
a982baff27
ListObjects Metadata Caching (#10648)
Design: https://gist.github.com/klauspost/025c09b48ed4a1293c917cecfabdf21c

Gist of improvements:

* Caching and listing share the same data across servers and requests.
* Lists can be resumed from any point at constant speed (see the sketch after this entry).
* Metadata for all files scanned is stored for streaming retrieval.
* The existing bloom filters controlled by the crawler are used to validate caches.
* Concurrent requests for the same data (or parts of it) will not spawn additional walkers.
* Listing a subdirectory of an existing recursive cache will use the cache.
* All listing operations are fully streamable, so the number of objects in a bucket no
  longer dictates memory usage.
* Listings can be handled by any server within the cluster.
* Caches are cleaned up when out of date or superseded by a more recent one.
2020-10-28 09:18:35 -07:00
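
A greatly simplified sketch of the caching idea (not MinIO's implementation; all names are invented): one walker fills a shared sorted cache, and any request can stream a page from it, resuming at an arbitrary marker at constant cost:

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// metacache is a shared, sorted list of object names gathered by a
// single walker; concurrent list requests read from it instead of
// spawning additional walkers.
type metacache struct {
	mu      sync.Mutex
	entries []string
}

func (m *metacache) add(names ...string) {
	m.mu.Lock()
	m.entries = append(m.entries, names...)
	sort.Strings(m.entries)
	m.mu.Unlock()
}

// list streams up to maxKeys entries strictly after marker, so a client
// can resume anywhere without rescanning the bucket.
func (m *metacache) list(marker string, maxKeys int) []string {
	m.mu.Lock()
	defer m.mu.Unlock()
	i := sort.SearchStrings(m.entries, marker)
	for i < len(m.entries) && m.entries[i] <= marker {
		i++ // skip the marker itself
	}
	end := i + maxKeys
	if end > len(m.entries) {
		end = len(m.entries)
	}
	return m.entries[i:end]
}

func main() {
	c := &metacache{}
	c.add("a", "b", "c", "d")
	page1 := c.list("", 2)
	fmt.Println(page1)                          // [a b]
	fmt.Println(c.list(page1[len(page1)-1], 2)) // [c d], resumed from marker "b"
}
```
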