Commit Graph

7 Commits

Harshavardhana 35fafb837b
fix: issues with handling delete markers in metacache (#11150)
Additional cases handled

- address situations where healing is not
  triggered on failed writes and deletes.

- consider an object to exist during listing when
  its metadata can be successfully decoded (see the sketch below).
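
A minimal sketch of that second point, assuming hypothetical names (`metaCacheEntry`, `decodeMeta`) rather than MinIO's actual types: a listed entry only counts as existing if its metadata decodes cleanly.

```go
// Sketch only: illustrates "exists during listing when metadata decodes".
// metaCacheEntry and decodeMeta are assumed names, not MinIO internals.
package main

import "fmt"

type metaCacheEntry struct {
	name     string
	metadata []byte // raw xl.meta bytes captured while walking the disk
}

// decodeMeta stands in for the real metadata unmarshalling; it fails on
// empty or truncated buffers.
func decodeMeta(b []byte) error {
	if len(b) == 0 {
		return fmt.Errorf("empty metadata")
	}
	return nil
}

// exists reports whether the entry should be surfaced by a listing:
// only entries whose metadata decodes cleanly are considered present.
func (e metaCacheEntry) exists() bool {
	return decodeMeta(e.metadata) == nil
}

func main() {
	entries := []metaCacheEntry{
		{name: "bucket/ok.txt", metadata: []byte{0x01}},
		{name: "bucket/broken.txt", metadata: nil},
	}
	for _, e := range entries {
		fmt.Printf("%s exists=%v\n", e.name, e.exists())
	}
}
```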
2020-12-22 09:16:43 -08:00
Klaus Post 0422eda6a2
metacache: Always close block writer (#10973)
In some cases a writer could be left behind unclosed, leaking compression blocks.

Always close the writer, and set compression concurrency to 2, which should be enough to keep up.
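
A minimal sketch of the pattern described here, using `github.com/klauspost/compress/s2` with the concurrency capped at 2 and the writer closed on every path; the surrounding `compress` helper is an illustration, not the code changed by this commit.

```go
// Sketch: always close the s2 block writer so no compression blocks leak,
// and limit writer concurrency to 2.
package main

import (
	"bytes"
	"log"

	"github.com/klauspost/compress/s2"
)

func compress(payload []byte) ([]byte, error) {
	var buf bytes.Buffer
	w := s2.NewWriter(&buf, s2.WriterConcurrency(2))

	if _, err := w.Write(payload); err != nil {
		w.Close() // release internal blocks even on the error path
		return nil, err
	}
	// Close flushes the final block; skipping it both leaks compression
	// blocks and truncates the output.
	if err := w.Close(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

func main() {
	out, err := compress([]byte("hello metacache"))
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("compressed %d bytes", len(out))
}
```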
2020-11-25 09:37:30 -08:00
Klaus Post a75fafdbe2
Remove msgp workaround (#10964)
The error in `github.com/philhofer/fwd` was quickly fixed via
https://github.com/philhofer/fwd/pull/22, so update the dependency
and remove the workaround.
2020-11-24 11:58:10 -08:00
Klaus Post a58b7874ef
Temporary workaround for msgp skipping (#10960)
Due to https://github.com/philhofer/fwd/issues/20, when skipping a metadata entry that is >2048 bytes while the buffer is full (2048 bytes), the skip fails with `io.ErrNoProgress`.

Enlarge the buffer so that, for now, this becomes much less likely.

If it still happens, we will have to rewrite the skips as reads.

Fixes #10959
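
A hedged sketch of the workaround: construct the msgp reader with a buffer much larger than the default so a `Skip` over a large entry rarely hits the fwd issue. The 8 KiB size and the `skipEntry` helper are assumptions for illustration, not the exact values in the commit.

```go
// Sketch: use a larger msgp read buffer to make io.ErrNoProgress on Skip unlikely.
package main

import (
	"bytes"
	"log"

	"github.com/tinylib/msgp/msgp"
)

func skipEntry(raw []byte) error {
	// A read buffer larger than typical metadata entries makes it unlikely
	// that Skip stalls when an entry exceeds the buffer size.
	r := msgp.NewReaderSize(bytes.NewReader(raw), 8<<10)
	return r.Skip()
}

func main() {
	// msgp fixstr encoding of "hello": 0xa5 followed by the bytes.
	raw := append([]byte{0xa5}, []byte("hello")...)
	if err := skipEntry(raw); err != nil {
		log.Fatal(err)
	}
	log.Println("skipped one msgp entry")
}
```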
2020-11-23 18:51:59 -08:00
Harshavardhana ca88ca753c
ignore typed errors correctly in list cache layer (#10879)
Bonus: write the bucket metadata cache with enough quorum.

Possible fix for #10868
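
An illustrative sketch (not MinIO's actual code) of matching typed errors with `errors.As`, so wrapped errors from the list cache layer are recognized and ignored rather than compared by value; the `ObjectNotFound` name is an assumption for the example.

```go
// Sketch: ignore a typed error even when it arrives wrapped.
package main

import (
	"errors"
	"fmt"
)

// ObjectNotFound mirrors the kind of typed error the cache layer needs to
// tolerate; the name is assumed for illustration.
type ObjectNotFound struct{ Object string }

func (e ObjectNotFound) Error() string { return "object not found: " + e.Object }

// isIgnorable reports whether err is a typed "not found" error, even when it
// was wrapped via fmt.Errorf("...: %w", err).
func isIgnorable(err error) bool {
	var nf ObjectNotFound
	return errors.As(err, &nf)
}

func main() {
	err := fmt.Errorf("listing cache: %w", ObjectNotFound{Object: "bucket/.metacache"})
	fmt.Println("ignore:", isIgnorable(err)) // ignore: true
}
```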
2020-11-12 09:28:56 -08:00
Klaus Post d1e1205036
metacache: Always close the s2 writer (#10836)
The s2 writer could be leaked if there was an error.

Make sure it is always closed.
2020-11-05 07:30:14 -08:00
Klaus Post a982baff27
ListObjects Metadata Caching (#10648)
Design: https://gist.github.com/klauspost/025c09b48ed4a1293c917cecfabdf21c

Gist of improvements:

* Cross-server caching and listing will use the same data across servers and requests.
* Lists can be arbitrarily resumed at a constant speed (see the sketch after this list).
* Metadata for all files scanned is stored for streaming retrieval.
* The existing bloom filters controlled by the crawler are used to validate caches.
* Concurrent requests for the same data (or parts of it) will not spawn additional walkers.
* Listing a subdirectory of an existing recursive cache will use the cache.
* All listing operations are fully streamable, so the number of objects in a bucket no
  longer dictates the amount of memory required.
* Listings can be handled by any server within the cluster.
* Caches are cleaned up when out of date or superseded by a more recent one.
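
A conceptual sketch, under assumed names (`listCache`, `Entry`, `List`), of how a cached listing can be streamed in pages and resumed from a continuation token by any server; this illustrates the design above, not MinIO's actual implementation.

```go
// Sketch: a cache keyed by bucket+prefix that serves resumable listing pages.
package main

import (
	"fmt"
	"sort"
	"strconv"
)

type Entry struct {
	Name string
	Meta []byte // raw metadata stored for streaming retrieval
}

type listCache struct {
	entries map[string][]Entry // key: bucket + "/" + prefix
}

// List returns up to maxKeys entries starting at the continuation token,
// plus a token to resume from, so listings continue at constant cost.
func (c *listCache) List(bucket, prefix, token string, maxKeys int) ([]Entry, string) {
	all := c.entries[bucket+"/"+prefix]
	sort.Slice(all, func(i, j int) bool { return all[i].Name < all[j].Name })
	start, _ := strconv.Atoi(token) // empty token resumes from the start
	end := start + maxKeys
	if end > len(all) {
		end = len(all)
	}
	next := ""
	if end < len(all) {
		next = strconv.Itoa(end)
	}
	return all[start:end], next
}

func main() {
	c := &listCache{entries: map[string][]Entry{
		"bucket/photos/": {{Name: "photos/a.jpg"}, {Name: "photos/b.jpg"}, {Name: "photos/c.jpg"}},
	}}
	page, next := c.List("bucket", "photos/", "", 2)
	fmt.Println(len(page), "entries, resume at", next)
	page, next = c.List("bucket", "photos/", next, 2)
	fmt.Println(len(page), "entries, resume at", next)
}
```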
2020-10-28 09:18:35 -07:00