Commit Graph

51 Commits

Author SHA1 Message Date
jiuker
b4ab8e095a
fix: preserve bucket metric of data usage for replication info (#18895) 2024-01-29 08:54:20 -08:00
chienguo
8cd967803c
fix: a typo in storeDataUsageInBackend() comment (#18778) 2024-01-16 15:48:54 -08:00
Harshavardhana
d521c84d55
reduce logging during permission denied errors (#18641)
log them only once, if any
2023-12-12 16:11:17 -08:00
Harshavardhana
c3d70e0795
cache usage, prefix-usage, and buckets for AccountInfo up to 10 secs (#18051)
AccountInfo is called quite frequently by Console UI
login attempts; when many users are logging in, it is important
that we provide them with better responsiveness.

- ListBuckets information is cached every second
- Bucket usage info is cached for up to 10 seconds
- Prefix usage (optional) info is cached for up to 10 secs

Failure to update after cache expiration would still
allow login, which would end up providing the
previously cached information.

This allows for seamless responsiveness for the Console UI
logins, and overall responsiveness on a heavily loaded
system.
2023-09-18 22:13:03 -07:00
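The caching approach described above lends itself to a brief illustration. Below is a minimal, hypothetical Go sketch of such a time-bounded cache that serves stale data when a refresh fails; the `cachedValue` type and its fields are stand-ins, not MinIO's actual `timedValue` implementation:

```
package main

import (
	"fmt"
	"sync"
	"time"
)

// cachedValue refreshes its contents at most once per TTL; on refresh
// failure it serves the previously cached value, mirroring the
// "login still succeeds with stale usage info" behavior described above.
type cachedValue struct {
	mu      sync.Mutex
	ttl     time.Duration
	updated time.Time
	value   interface{}
	fetch   func() (interface{}, error)
}

func (c *cachedValue) Get() (interface{}, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if time.Since(c.updated) < c.ttl && c.value != nil {
		return c.value, nil // still fresh, no backend call
	}
	v, err := c.fetch()
	if err != nil {
		if c.value != nil {
			return c.value, nil // serve stale data on refresh failure
		}
		return nil, err
	}
	c.value, c.updated = v, time.Now()
	return v, nil
}

func main() {
	usage := &cachedValue{
		ttl:   10 * time.Second, // bucket usage cached up to 10 secs
		fetch: func() (interface{}, error) { return "usage-snapshot", nil },
	}
	v, _ := usage.Get()
	fmt.Println(v)
}
```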
Harshavardhana
a2aabfabd9
add backups for usage-caches to rely on upon error (#18029)
This allows the scanner to avoid lengthy scans, skip
things appropriately, and not lose metrics in
any manner.

Reduce the longer deadlines for usage-cache loads/saves
to match the disk timeout, which is now 2 minutes per
IOP.
2023-09-14 11:53:52 -07:00
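A minimal sketch of the backup-fallback idea from this commit, assuming a simple `.bkp` naming scheme; the helper names and file layout are hypothetical, not MinIO's actual code:

```
package main

import (
	"fmt"
	"os"
)

// loadUsageCache sketches the fallback described above: if the primary
// usage cache fails to load, read the ".bkp" copy instead of forcing a
// lengthy rescan.
func loadUsageCache(path string) ([]byte, error) {
	b, err := os.ReadFile(path)
	if err == nil {
		return b, nil
	}
	// Primary failed; fall back to the backup kept by the last save.
	return os.ReadFile(path + ".bkp")
}

// saveUsageCache preserves the current cache as the backup before
// writing fresh data, so a failed write never loses the last good state.
func saveUsageCache(path string, data []byte) error {
	if _, err := os.Stat(path); err == nil {
		if err := os.Rename(path, path+".bkp"); err != nil {
			return err
		}
	}
	return os.WriteFile(path, data, 0o644)
}

func main() {
	path := os.TempDir() + "/usage-cache.bin"
	_ = saveUsageCache(path, []byte("v1"))
	_ = saveUsageCache(path, []byte("v2")) // "v1" becomes the .bkp copy
	b, err := loadUsageCache(path)
	fmt.Println(string(b), err)
}
```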
Harshavardhana
b53376a3a4
change directory objects to never create new versions (#17109) 2023-05-02 16:09:33 -07:00
Harshavardhana
433b6fa8fe
upgrade golang-lint to the latest (#15600) 2022-08-26 12:52:29 -07:00
Anis Elleuch
16431d222c
heal: Enable periodic bitrot scan configuration (#14464) 2022-04-07 08:10:40 -07:00
Harshavardhana
3d7c1ad31d
ignore configNotFound error in AccountInfo() (#14082)
fixes #14081
2022-01-11 08:43:18 -08:00
Harshavardhana
001b77e7e1
use readConfig/saveConfig to simplify I/O on usage/tracker info (#14019) 2022-01-03 10:22:58 -08:00
Harshavardhana
f527c708f2
run gofumpt cleanup across code-base (#14015) 2022-01-02 09:15:06 -08:00
Harshavardhana
914bfb2d9c
fix: allow compaction on replicated buckets (#13711)
currently a getReplicationConfig() failure incorrectly
returns an error on unexpected buckets upon upgrade; we
should always calculate usage as much as possible.
2021-11-19 14:46:14 -08:00
Poorna Krishnamoorthy
c4373ef290
Add support for multi site replication (#12880) 2021-09-18 13:31:35 -07:00
Harshavardhana
14fe8ecb58
fix: decodeDirObject in prefix usage function (#13026)
prefixes created at the top level, such as

```
~ mc mb alias/bucket/prefix
```

The prefix/ incorrectly appears as prefix__XL_DIR__/
in the accountInfo output; make sure to trim '__XL_DIR__'
2021-08-22 16:46:45 -07:00
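A sketch of the trimming fix described above; `decodeDirObject` here is a simplified stand-in that only handles the `__XL_DIR__` form shown in the commit message:

```
package main

import (
	"fmt"
	"strings"
)

// dirSuffix mirrors the internal marker appended to directory objects;
// the "__XL_DIR__" literal is taken from the commit message above.
const dirSuffix = "__XL_DIR__"

// decodeDirObject maps the internal "prefix__XL_DIR__/" form back to
// the user-visible "prefix/" before it appears in accountInfo output.
func decodeDirObject(object string) string {
	if strings.HasSuffix(object, dirSuffix+"/") {
		return strings.TrimSuffix(object, dirSuffix+"/") + "/"
	}
	return object
}

func main() {
	fmt.Println(decodeDirObject("prefix__XL_DIR__/")) // prints "prefix/"
}
```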
Harshavardhana
27e07e80bc
fix: accountInfo() error FS mode (#12734) 2021-07-17 01:17:35 -07:00
Anis Elleuch
aa78505181
Add prefixes usage in Accounting Usage Info (#12687) 2021-07-13 10:42:11 -07:00
Harshavardhana
1f262daf6f
rename all remaining packages to internal/ (#12418)
This is to ensure that there are no projects
that try to import `minio/minio/pkg` into
their own repo. Any such common packages should
go to `https://github.com/minio/pkg`
2021-06-01 14:59:40 -07:00
Harshavardhana
b81fada834
use json unmarshal/marshal from jsoniter in hotpaths (#12269) 2021-05-11 02:02:32 -07:00
Harshavardhana
1aa5858543
move madmin to github.com/minio/madmin-go (#12239) 2021-05-06 08:52:02 -07:00
Harshavardhana
069432566f update license change for MinIO
Signed-off-by: Harshavardhana <harsha@minio.io>
2021-04-23 11:58:53 -07:00
Poorna Krishnamoorthy
47c09a1e6f
Various improvements in replication (#11949)
- collect real-time replication metrics for Prometheus.
- add pending_count, failed_count metrics for total pending/failed replication operations.

- add API to get replication metrics

- add MRF worker to handle spill-over replication operations

- fix multiple issues found with replication
- fix an issue where a client sends a bucket
 name with `/` at the end in the SetRemoteTarget
 API call; make sure to trim the bucket name to
 avoid any extra `/`.

- hold write locks in GetObjectNInfo during replication
  to ensure that the object version stack is not overwritten
  while reading the content.

- add additional protection during WriteMetadata() to
  ensure that we always write a valid FileInfo{} and avoid
  ever writing empty FileInfo{} to the lowest layers.

Co-authored-by: Poorna Krishnamoorthy <poorna@minio.io>
Co-authored-by: Harshavardhana <harsha@minio.io>
2021-04-03 09:03:42 -07:00
Harshavardhana
9171d6ef65
rename all references from crawl -> scanner (#11621) 2021-02-26 15:11:42 -08:00
Andreas Auernhammer
d4b822d697
pkg/etag: add new package for S3 ETag handling (#11577)
This commit adds a new package `etag` for dealing
with S3 ETags.

Even though the ETag is often viewed as the MD5 checksum of
an object, handling S3 ETags correctly is a surprisingly
complex task. While it is true that the ETag corresponds
to the MD5 for the most basic S3 API operations, there are
many exceptions in the case of multipart uploads or encryption.

Worse, some S3 clients expect very specific behavior when
it comes to ETags. For example, some clients expect that the
ETag is a double-quoted string and fail otherwise.
Non-AWS-compliant ETag handling has been a source of many bugs
in the past.

Therefore, this commit adds a dedicated `etag` package that provides
functionality for parsing, generating and converting S3 ETags.
Further, this commit removes the ETag computation from the `hash`
package. Instead, the `hash` package (i.e. `hash.Reader`) should
focus only on computing and verifying the content-sha256.

One core feature of this commit is to provide a mechanism to
communicate a computed ETag from a low-level `io.Reader` to
a high-level `io.Reader`.

This problem occurs when an S3 server receives a request and
has to compute the ETag of the content. However, the server
may also wrap the initial body with several other `io.Reader`,
e.g. when encrypting or compressing the content:
```
   reader := Encrypt(Compress(ETag(content)))
```
In such a case, the ETag should be accessible by the high-level
`io.Reader`.

The `etag` package provides a mechanism to wrap `io.Reader` implementations
such that the `ETag` can be accessed via a type-check.
This technique is applied to the PUT, COPY and Upload handlers.
2021-02-23 12:31:53 -08:00
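The type-check mechanism can be illustrated with a small, self-contained sketch. The `Tagger` interface and wrapper types below are hypothetical stand-ins for the `etag` package's actual API:

```
package main

import (
	"fmt"
	"io"
	"strings"
)

// Tagger is the type-check interface: any reader in the chain that can
// produce an ETag implements it.
type Tagger interface {
	ETag() string
}

// wrapReader layers an arbitrary reader (e.g. compression, encryption)
// on top of a content reader while forwarding ETag requests to it.
type wrapReader struct {
	io.Reader           // the high-level wrapped stream
	content   io.Reader // the low-level reader that knows the ETag
}

func (w wrapReader) ETag() string {
	if t, ok := w.content.(Tagger); ok {
		return t.ETag()
	}
	return ""
}

// etagReader is a stand-in for the low-level reader computing the ETag.
type etagReader struct {
	io.Reader
	etag string
}

func (r etagReader) ETag() string { return r.etag }

func main() {
	content := etagReader{strings.NewReader("hello"), `"5d41402abc4b2a76b9719d911017c592"`}
	// Mirrors reader := Encrypt(Compress(ETag(content))) from the commit
	// message; both layers here are pass-through stand-ins.
	compressed := wrapReader{Reader: content, content: content}
	encrypted := wrapReader{Reader: compressed, content: compressed}

	if t, ok := io.Reader(encrypted).(Tagger); ok {
		fmt.Println(t.ETag()) // ETag recovered through two wrapper layers
	}
}
```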
Krishnan Parthasarathi
b87fae0049
Simplify PutObjReader for plain-text reader usage (#11470)
This change moves away from a unified constructor for plaintext and encrypted
usage. NewPutObjReader is simplified for plain-text reader use. For
encrypted reader use, WithEncryption should be called on an initialized PutObjReader.

Plaintext:
func NewPutObjReader(rawReader *hash.Reader) *PutObjReader

The hash.Reader is used to provide payload size and md5sum to the downstream
consumers. This is different from the previous version in that there is no need
to pass nil values for unused parameters.

Encrypted:
func WithEncryption(encReader *hash.Reader,
key *crypto.ObjectKey) (*PutObjReader, error)

This method sets up the encrypted reader along with the key to seal the md5sum
produced by the plain-text reader (already set up when NewPutObjReader was
called).

Usage:
```
  pReader := NewPutObjReader(rawReader)
  // ... other object handler code goes here

  // Prepare the encrypted hashed reader
  pReader, err = pReader.WithEncryption(encReader, objEncKey)

```
2021-02-10 08:52:50 -08:00
Harshavardhana
8bb580abfc
fix: use getObjectNInfo to avoid bytes.Buffer usage (#11428)
a few places were still using the legacy call GetObject(),
which was mainly designed for the client response writer;
use GetObjectNInfo() for internal calls instead.
2021-02-05 09:57:30 -08:00
Harshavardhana
a4383051d9
remove/deprecate crawler disable environment (#11214)
with changes present to automatically throttle the crawler
at runtime, there is no need for an environment
variable to disable crawling. Crawling is a fundamental
piece for healing, lifecycle, and many other features;
there is no good reason anyone would need to disable
it on a production system.

* Apply suggestions from code review
2021-01-04 09:43:31 -08:00
Ritesh H Shukla
038bcd9079
Add replication capacity metrics support in crawler (#10786) 2020-12-07 13:47:48 -08:00
Klaus Post
a896125490
Add crawler delay config + dynamic config values (#11018) 2020-12-04 09:32:35 -08:00
Klaus Post
43d6e3ae06
merge object lifecycle checks into usage crawler (#9579) 2020-06-12 10:28:21 -07:00
Harshavardhana
53aaa5d2a5
Export bucket usage counts as part of bucket metrics (#9710)
Bonus fixes in quota enforcement to use the
new data structure; using timedValue to cache
a value and reload it automatically avoids one
more global variable.
2020-05-27 06:45:43 -07:00
Harshavardhana
6e0575a53d
Revert "Disable crawler in FS/NAS gateway mode (#9695)" (#9702)
This reverts commit eba423bb9d.

Additionally, address the FS crawler to properly
calculate the sizes for encrypted/compressed content.
2020-05-25 11:32:53 -07:00
Harshavardhana
eba423bb9d
Disable crawler in FS/NAS gateway mode (#9695)
No one really uses FS for large-scale accounting
usage, nor do we crawl in NAS gateway mode. It is
worthwhile to simply disable this feature since it's
not useful for anyone.

Bonus: disable bucket quota ops as well in FS
and gateway mode
2020-05-25 00:17:52 -07:00
Klaus Post
e25ace2151
Forward RPC errors from crawler (#9569)
The `keepHTTPResponseAlive` mechanism would cause errors to be
returned with status OK.

- Add '32' as a filler byte until a response is ready
- '0' to indicate the response is ready to be consumed
- '1' to indicate response has an error which needs
to be returned to the caller

Clear out 'file not found' errors from the dir walker, since the file may be
in a folder that has been deleted since it was scanned.
2020-05-11 20:41:38 -07:00
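A rough sketch of the framing described above, under the assumption that filler bytes are emitted on a timer while the result is pending; the helper and channel layout are illustrative, not the actual MinIO RPC code:

```
package main

import (
	"bytes"
	"fmt"
	"io"
	"time"
)

// keepAlive writes filler byte 32 while the crawler works so the HTTP
// connection stays alive, then 0 followed by the response body, or 1
// followed by the error text, matching the scheme in the commit message.
func keepAlive(w io.Writer, result <-chan []byte, errCh <-chan error) {
	for {
		select {
		case body := <-result:
			w.Write([]byte{0}) // 0: response is ready to be consumed
			w.Write(body)
			return
		case err := <-errCh:
			w.Write([]byte{1}) // 1: an error follows
			w.Write([]byte(err.Error()))
			return
		case <-time.After(10 * time.Millisecond):
			w.Write([]byte{32}) // filler until a response is ready
		}
	}
}

func main() {
	var buf bytes.Buffer
	result := make(chan []byte, 1)
	go func() {
		time.Sleep(25 * time.Millisecond) // simulate a slow crawl
		result <- []byte("usage-data")
	}()
	keepAlive(&buf, result, make(chan error))
	fmt.Printf("%q\n", buf.Bytes()) // leading 32s, then 0, then payload
}
```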
Harshavardhana
498389123e
avoid unnecessary logging on fresh/newly replaced drives (#9470)
the data usage tracker and crawler seem to be logging
non-actionable information on the console, which is not
useful and resolves on its own in almost all deployments;
let's keep this logging to a minimum.
2020-04-28 01:16:57 -07:00
Klaus Post
073aac3d92
add data update tracking using bloom filter (#9208)
By monitoring PUT/DELETE and heal operations, it is possible
to track changed paths and keep a bloom filter for this data.

This can help prioritize paths to scan. The bloom filter can identify
paths that have not changed, and the few collisions will only result
in a marginal extra workload. This can be implemented at bucket +
(one prefix level) granularity with reasonable performance.

The bloom filter is set to have a false positive rate of 1% at 1M
entries. A bloom table of this size is about ~2500 bytes when serialized.

To avoid forcing a full scan of all paths that have changed, several
cycles of bloom filters need to be kept, so we can guarantee that dirty
paths have been scanned within those cycle runs. Until enough cycle
bloom filters have been collected, all paths are considered dirty.
2020-04-27 10:06:21 -07:00
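A compact bloom-filter sketch showing how such path tracking can work; the sizes, hash scheme, and type names here are illustrative rather than MinIO's actual implementation:

```
package main

import (
	"fmt"
	"hash/fnv"
)

// bloom is a minimal bloom filter for path-change tracking.
type bloom struct {
	bits []uint64
	k    uint32 // number of hash probes per entry
}

func newBloom(m, k uint32) *bloom {
	return &bloom{bits: make([]uint64, (m+63)/64), k: k}
}

// hashes derives two probe seeds from one FNV-64a pass (double hashing).
func (b *bloom) hashes(s string) (h1, h2 uint32) {
	f := fnv.New64a()
	f.Write([]byte(s))
	sum := f.Sum64()
	return uint32(sum), uint32(sum >> 32)
}

func (b *bloom) Add(path string) {
	h1, h2 := b.hashes(path)
	m := uint32(len(b.bits) * 64)
	for i := uint32(0); i < b.k; i++ {
		pos := (h1 + i*h2) % m
		b.bits[pos/64] |= 1 << (pos % 64)
	}
}

// MayContain returning false guarantees the path was NOT modified,
// which is what lets the scanner skip unchanged prefixes.
func (b *bloom) MayContain(path string) bool {
	h1, h2 := b.hashes(path)
	m := uint32(len(b.bits) * 64)
	for i := uint32(0); i < b.k; i++ {
		pos := (h1 + i*h2) % m
		if b.bits[pos/64]&(1<<(pos%64)) == 0 {
			return false
		}
	}
	return true
}

func main() {
	b := newBloom(1<<20, 7) // roughly 1% false positives near 1M entries
	b.Add("bucket/prefix")  // record a PUT under this prefix
	fmt.Println(b.MayContain("bucket/prefix"), b.MayContain("bucket/other"))
}
```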
Harshavardhana
cfc9cfd84a
fix: various optimizations, idiomatic changes (#9179)
- acquire a single leader lock for all background operations
  - healing, crawling and applying lifecycle policies.

- simplify lifecycle to avoid network calls, which was a
  bug in the implementation - we should hold a leader and
  do everything from there; we have access to the entire
  namespace.

- make listing and walking not interfere by slowing themselves
  down like the crawler.

- effectively use the global context everywhere to ensure
  proper shutdown in cache, lifecycle, and healing

- don't read `format.json` for Prometheus metrics in
  the StorageInfo() call.
2020-03-22 12:16:36 -07:00
Harshavardhana
b1a2169dcc
fix: data usage crawler env handling, usage-cache.bin location (#9163)
canonicalize the ENVs such that we can bring these ENVs
in as part of the config values, as a subsequent change.

- fix location of per-bucket usage to `.minio.sys/buckets/<bucket_name>/usage-cache.bin`
- fix location of the overall usage in `json` at `.minio.sys/buckets/.usage.json`
  (avoids conflicts with a bucket named `usage.json`)
- fix location of the overall usage in `msgp` at `.minio.sys/buckets/.usage.bin`
  (avoids conflicts with a bucket named `usage.bin`)
2020-03-19 09:47:47 -07:00
Klaus Post
8d98662633
re-implement data usage crawler to be more efficient (#9075)
Implementation overview: 

https://gist.github.com/klauspost/1801c858d5e0df391114436fdad6987b
2020-03-18 16:19:29 -07:00
Anis Elleuch
75a0661213
data-usage: Fix the calculation of the next crawling round (#9096)
This commit fixes a simple typo that miscalculated the waiting time
until the next round of data crawling to compute the data usage.
2020-03-06 11:34:12 -08:00
kannappanr
d9be8bc693
Add env. variable to disable data usage crawling (#9086) 2020-03-04 15:51:03 -08:00
Klaus Post
b2db1e96e2
Remove crawler concurrency (#9023)
Only have one crawler per disk. Removes locking, but keeps
fastwalk itself able to run concurrently.
2020-02-21 20:50:16 +05:30
Harshavardhana
ab7d3cd508
fix: Speed up multi-object delete by taking bulk locks (#8974)
Change distributed locking to allow taking bulk locks
across objects, reducing what is usually 1000 calls to 1.

Also allows for situations where multiple clients send
delete requests for objects with the following names

```
{1,2,3,4,5}
```

```
{5,4,3,2,1}
```

will block on each other and ensure that we do not fail
each other's requests.
2020-02-21 11:29:57 +05:30
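A minimal sketch of the bulk-lock idea: acquiring all per-object locks in one call, in a canonical sorted order, is the classic way to keep overlapping bulk acquisitions deadlock-free. The `bulkLocker` type below is a hypothetical stand-in for MinIO's distributed locking:

```
package main

import (
	"fmt"
	"sort"
	"sync"
)

// bulkLocker hands out one mutex per object name.
type bulkLocker struct {
	mu    sync.Mutex
	locks map[string]*sync.Mutex
}

func (b *bulkLocker) lockFor(name string) *sync.Mutex {
	b.mu.Lock()
	defer b.mu.Unlock()
	if b.locks[name] == nil {
		b.locks[name] = &sync.Mutex{}
	}
	return b.locks[name]
}

// LockAll acquires every per-object lock in sorted order, so clients
// deleting {1,2,3,4,5} and {5,4,3,2,1} block on each other instead of
// deadlocking or failing midway.
func (b *bulkLocker) LockAll(objects []string) (unlock func()) {
	sorted := append([]string(nil), objects...)
	sort.Strings(sorted)
	for _, o := range sorted {
		b.lockFor(o).Lock()
	}
	return func() {
		for i := len(sorted) - 1; i >= 0; i-- {
			b.lockFor(sorted[i]).Unlock()
		}
	}
}

func main() {
	b := &bulkLocker{locks: map[string]*sync.Mutex{}}
	unlock := b.LockAll([]string{"5", "4", "3", "2", "1"})
	fmt.Println("bulk delete holds all five locks in one acquisition")
	unlock()
}
```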
Klaus Post
2165d45d3f
Time getSize and use to estimate latency (#8959)
Remove the random sleep. This is running in 4 goroutines,
so it was mostly doing nothing.

We use the getSize latency to estimate system load,
meaning when there is little load on the system and
we get the result fast, we sleep a little.

If it took a long time, the system is under high load, so
we back off longer.

We are sleeping inside the mutex, so this affects all
goroutines doing IO.
2020-02-07 09:05:55 +05:30
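A sketch of latency-derived throttling as described above, using a local `os.Stat` as a stand-in for the crawler's getSize call; the sleep factor is illustrative:

```
package main

import (
	"fmt"
	"os"
	"time"
)

// getSizeThrottled times the stat call and sleeps in proportion, so a
// fast (idle) disk gets short pauses and a slow (loaded) disk backs
// the crawler off longer.
func getSizeThrottled(path string) (int64, error) {
	start := time.Now()
	fi, err := os.Stat(path)
	elapsed := time.Since(start)
	// Sleep a multiple of the observed latency (the factor is a guess).
	time.Sleep(2 * elapsed)
	if err != nil {
		return 0, err
	}
	return fi.Size(), nil
}

func main() {
	size, err := getSizeThrottled("/etc/hosts")
	fmt.Println(size, err)
}
```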
Harshavardhana
49df290270 Add metadata parsing to be inside mutex to slow down (#8952)
Adding a mutex slows down the crawler to avoid large
spikes in CPU; also add millisecond-interval jitter
in the calculation of disk usage to smooth out the spikes
further.
2020-02-06 00:22:11 -08:00
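A minimal sketch of the smoothing described above, with hypothetical names; parsing inside the mutex serializes the crawler, and the millisecond jitter spreads successive operations apart:

```
package main

import (
	"math/rand"
	"sync"
	"time"
)

var crawlMu sync.Mutex

// parseEntryMetadata holds the mutex while parsing, which naturally
// serializes the crawler, then adds 0-10ms of jitter so disk-usage
// calculation cannot produce synchronized CPU spikes.
func parseEntryMetadata(parse func()) {
	crawlMu.Lock()
	parse()
	crawlMu.Unlock()
	time.Sleep(time.Duration(rand.Intn(10)) * time.Millisecond)
}

func main() {
	for i := 0; i < 3; i++ {
		parseEntryMetadata(func() { /* decode metadata, count usage */ })
	}
}
```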
Harshavardhana
2d295a31de
Avoid select inside a recursive function to avoid CPU spikes (#8923)
Additionally, allow a configurable number of goroutines
2020-02-03 16:45:59 -08:00
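One plausible reading of this change, sketched below with hypothetical code: in a hot recursive walk, a plain `ctx.Err()` check is cheaper than running a `select` on `ctx.Done()` at every recursion step:

```
package main

import (
	"context"
	"fmt"
)

// walk checks cancellation with ctx.Err() instead of a select statement,
// keeping the per-step cost of the recursive traversal low.
func walk(ctx context.Context, depth int) error {
	if err := ctx.Err(); err != nil { // cheap, non-blocking check
		return err
	}
	if depth == 0 {
		return nil
	}
	return walk(ctx, depth-1)
}

func main() {
	fmt.Println(walk(context.Background(), 1000))
}
```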
Harshavardhana
f14f60a487 fix: Avoid double usage calculation on every restart (#8856)
On every restart of the server, usage was being
calculated, which is not useful; instead, wait for a
sufficient time before starting the crawling routine.

This PR also avoids lots of double allocations
through strings, optimizes usage of string builders,
and avoids crawling through symbolic links.

Fixes #8844
2020-01-21 14:07:49 -08:00
Anis Elleuch
017067e11f data-usage: Avoid crawling duplicated call (#8843)
This fix also picks 3, not 4, disks from a single erasure set.
2020-01-17 09:59:37 -08:00
Harshavardhana
aa2e89bfe3 Use jsoniter whenever applicable instead of encoding/json (#8766)
This PR adds the jsoniter package to replace encoding/json
in places where faster JSON unmarshaling is necessary,
whenever the input JSON is large enough.

Some benchmark comparisons between jsoniter and encoding/json:

benchmark                            old MB/s     new MB/s     speedup
BenchmarkParseUnmarshal/N10-4        110.02       331.17       3.01x
BenchmarkParseUnmarshal/N100-4       125.74       524.09       4.17x
BenchmarkParseUnmarshal/N500-4       131.68       542.60       4.12x
BenchmarkParseUnmarshal/N1000-4      133.93       514.88       3.84x
BenchmarkParseUnmarshal/N5000-4      122.10       415.36       3.40x
BenchmarkParseUnmarshal/N10000-4     132.13       403.90       3.06x
2020-01-08 17:01:42 +05:30
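jsoniter is designed as a drop-in replacement for encoding/json, so the switch is mostly mechanical. A small usage sketch (the struct and field names are illustrative):

```
package main

import (
	"fmt"

	jsoniter "github.com/json-iterator/go"
)

// The compatible config mirrors encoding/json semantics while decoding
// large payloads faster, which is what makes it a drop-in swap.
var json = jsoniter.ConfigCompatibleWithStandardLibrary

// usageEntry is a hypothetical payload type for illustration.
type usageEntry struct {
	Bucket string `json:"bucket"`
	Size   int64  `json:"size"`
}

func main() {
	data := []byte(`{"bucket":"test","size":1048576}`)
	var u usageEntry
	if err := json.Unmarshal(data, &u); err != nil {
		panic(err)
	}
	fmt.Println(u.Bucket, u.Size)
}
```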
Harshavardhana
5f2318567e
Allow metadata updates on meta bucket even in WORM mode (#8657)
This ensures that:

- .minio.sys is updated for accounting/data usage purposes
- .minio.sys is updated to indicate if the backend is encrypted
  or not.
2019-12-17 10:13:12 -08:00
Harshavardhana
c8d82588c2 Fix crash in console logger and also handle bucket DNS updates (#8654)
Also fix listenBucketNotification bugs seen by the minio-js
listen bucket notification API.
2019-12-16 20:30:57 -08:00