Compare commits

...

608 Commits

Author SHA1 Message Date
dorman
3a0cc6c86e
fix doc 404 (#21670) 2025-10-26 19:47:37 -07:00
yangw
10b0a234d2
fix: update metric descriptions to specify current MinIO server instance (#21638)
Signed-off-by: yangw <wuyangmuc@gmail.com>
2025-10-23 21:06:31 -07:00
Raul-Mircea Crivineanu
18f97e70b1
Updates for conditional put read quorum issue (#21653) 2025-10-23 21:05:31 -07:00
Menno Finlay-Smits
52eee5a2f1
fix(api): Don't send multiple responses for one request (#21651)
fix(api): Don't send responses twice.

In some cases multiple responses are being sent for one request, causing
the API server to incorrectly drop connections.

This change introduces a ResponseWriter which tracks whether a
response has already been sent. This is used to prevent a response being
sent if something already has (e.g. by a preconditions check function).

Fixes #21633.

Co-authored-by: Menno Finlay-Smits <hello@menno.io>
2025-10-23 21:05:19 -07:00
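A minimal sketch of the tracking writer described in the entry above, assuming a plain net/http handler chain; the type and field names are illustrative, not MinIO's actual identifiers:
```go
package httpguard

import "net/http"

// trackingResponseWriter remembers whether a response has already been
// started, so a second write (e.g. after a precondition check already
// replied) becomes a no-op instead of corrupting the connection.
type trackingResponseWriter struct {
	http.ResponseWriter
	wroteHeader bool
}

func (w *trackingResponseWriter) WriteHeader(code int) {
	if w.wroteHeader {
		return // a response was already sent
	}
	w.wroteHeader = true
	w.ResponseWriter.WriteHeader(code)
}

func (w *trackingResponseWriter) Write(p []byte) (int, error) {
	if !w.wroteHeader {
		w.WriteHeader(http.StatusOK)
	}
	return w.ResponseWriter.Write(p)
}
```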
Rishabh Agrahari
c6d3aac5c4
Fix typo in entrypoint script path in README (#21657) 2025-10-23 08:10:39 -07:00
M Alvee
fa18589d1c
fix: Tagging in PostPolicy upload does not enforce policy tags (#21656) 2025-10-23 08:10:12 -07:00
Harshavardhana
05e569960a update scripts pointing to internal registry for community releases 2025-10-19 01:22:05 -07:00
Harshavardhana
9e49d5e7a6 update README.md and other docs to point to source only releases 2025-10-15 10:29:55 -07:00
Aditya Manthramurthy
c1a49490c7
fix: check sub-policy properly when present (#21642)
This fixes a security issue where sub-policy attached to a service
account or STS account is not properly validated under certain "own"
account operations (like creating new service accounts). This allowed a
service account to create new service accounts for the same user
bypassing the inline policy restriction.
2025-10-15 10:00:45 -07:00
Ravind Kumar
334c313da4
Change documentation link in README (#21636)
Updated documentation link to point to the GitHub project.
2025-10-10 12:00:53 -07:00
cduzer
1b8ac0af9f
fix: allow trailing slash in AWS S3 POST policies (#21612) 2025-10-10 11:57:35 -07:00
Mark Theunissen
ba3c0fd1c7
Bump Go version in toolchain directive to 1.24.8 (#21629) 2025-10-10 11:57:03 -07:00
Ravind Kumar
d51a4a4ff6
Update README with Docker and Helm installation instructions (#21627)
Added instructions for building Docker image and using Helm charts.

This closes the loop on supported methods for deploying MinIO with latest changes.
2025-10-09 15:10:11 -07:00
Harshavardhana
62383dfbfe
Fix formatting of features in README.md 2025-10-07 09:59:23 -07:00
Ravind Kumar
bde0d5a291
Updating readme for MinIO docs (#21625) 2025-10-06 22:36:26 -07:00
yangw
534f4a9fb1
fix: timeN function's returned final closure not being called (#21615) 2025-09-30 23:06:01 -07:00
Klaus Post
b8631cf531
Use new gofumpt (#21613)
Update tinylib. Should fix CI.

`gofumpt -w .&&go generate ./...`
2025-09-28 13:59:21 -07:00
jiuker
456d9462e5
fix: cancel will be empty after saveRebalanceStats (#21597) 2025-09-19 21:51:57 -07:00
jiuker
756f3c8142
fix: incorrect poolID when adding pools after decommission (#21590) 2025-09-18 04:47:48 -07:00
mosesdd
7a80ec1cce
fix: LDAP TLS handshake fails with StartTLS and tls_skip_verify=off (#21582)
Fixes #21581
2025-09-17 00:58:27 -07:00
M Alvee
ae71d76901
fix: remove unnecessary replication checks (#21569) 2025-09-08 10:43:13 -07:00
M Alvee
07c3a429bf
fix: conditional checks write for multipart (#21567) 2025-09-07 09:13:09 -07:00
Minio Trusted
0cde982902 Update yaml files to latest version RELEASE.2025-09-06T17-38-46Z 2025-09-07 05:14:10 +00:00
Ian Roberts
d0f50cdd9b
fix: use correct dummy ARN for claim-based OIDC provider when listing access keys (#21549)
fix: use correct dummy ARN for claim-based OIDC provider

When listing OIDC access keys, use the correct ARN when looking up the provider configuration for the claim-based provider.  Without this it was impossible to list access keys for a claim-based provider, only for a role-policy-based provider.

Fixes minio/minio#21548
2025-09-06 10:38:46 -07:00
WGH
da532ab93d
Fix support for legacy compression env variables (#21533)
Commit b6eb8dff649b0f46c12d24e89aa11254fb0132fa renamed compression
setting environment variables to follow consistent style.

Although it preserved backward compatibility for the most part (i.e. it
handled MINIO_COMPRESS_ALLOW_ENCRYPTION, MINIO_COMPRESS_EXTENSIONS, and
MINIO_COMPRESS_MIME_TYPES), MINIO_COMPRESS_ENABLE was left behind.

Additionally, due to incorrect fallback ordering, and DefaultKVS
containing enable=off allow_encryption=off (so kvs.Get should've been
tried last), that commit broke MINIO_COMPRESS_ALLOW_ENCRYPTION (even
though it appeared to be handled), and even older MINIO_COMPRESS, too.

The legacy MIME types and extensions variables take precedence over both
config and new variables, so they don't need fixing.
2025-09-06 10:37:10 -07:00
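The ordering problem is easiest to see in code. A hedged sketch of the intended precedence, assuming `MINIO_COMPRESSION_ENABLE` is the new-style name (the legacy names come from the commit message; the lookup helper is illustrative):
```go
package compresscfg

import "os"

// lookupEnable shows the intended precedence: new env variable first,
// then the legacy names, and only then the stored config value. Because
// DefaultKVS always carries enable=off, consulting kvs.Get too early
// silently masks the legacy variables, which is the bug described above.
func lookupEnable(kvsGet func(string) string) string {
	for _, name := range []string{
		"MINIO_COMPRESSION_ENABLE", // new-style name (assumed)
		"MINIO_COMPRESS_ENABLE",    // legacy name that was left behind
		"MINIO_COMPRESS",           // even older legacy form
	} {
		if v, ok := os.LookupEnv(name); ok {
			return v
		}
	}
	return kvsGet("enable") // DefaultKVS defaults this to "off"
}
```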
M Alvee
558fc1c09c
fix: return error on conditional write for non existing object (#21550) 2025-09-06 10:34:38 -07:00
Alex
9fdbf6fe83
Updated object-browser to the latest version v2.0.4 (#21564)
Signed-off-by: Benjamin Perez <benjamin@bexsoft.net>
2025-09-06 10:33:19 -07:00
jiuker
5c87d4ae87
fix: config file not found when saving the rebalanceStats (#21547) 2025-09-04 13:47:24 -07:00
Klaus Post
f0b91e5504
Run modernize (#21546)
`go run golang.org/x/tools/gopls/internal/analysis/modernize/cmd/modernize@latest -fix -test ./...` executed.

`go generate ./...` ran afterwards to keep generated.
2025-08-28 19:39:48 -07:00
Manuel Reis
3b7cb6512c
Revert dns.msgUnPath, fixes #21541 (#21542)
* Add more tests to UnPath function
* Revert implementation on dns.msgUnPath. Fixes: #21541
2025-08-28 10:31:12 -07:00
Mark Theunissen
4ea6f3b06b
fix: invalid checksum on site replication with conforming checksum types (#21535) 2025-08-22 07:15:21 -07:00
jiuker
86d9d9b55e
fix: use amqp.ParseURL to parse amqp url (#21528) 2025-08-20 21:25:07 -07:00
Denis Peshkov
5a35585acd
http/listener: fix bugs and simplify (#21514)
* Store `ctx.Done` channel in a struct instead of a `ctx`. See: https://go.dev/blog/context-and-structs
* Return from `handleListener` on `ctx` cancellation, preventing goroutine leaks
* Simplify `handleListener` by removing the `send` closure. The `handleListener` is inlined by the compiler
* Return the first error from `Close`
* Preallocate slice in `Addrs`
* Reduce duplication in handling `opts.Trace`
* http/listener: revert error propagation from Close()
* http/listener: preserve original listener address in Addr()
* Preserve the original address when calling Addr() with multiple listeners
* Remove unused listeners from the slice
2025-08-12 11:22:12 -07:00
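The first bullet above follows the pattern from the linked Go blog post; a minimal sketch of "store the done channel instead of the ctx" (types illustrative):
```go
package httplistener

import (
	"context"
	"net"
)

// listener keeps the ctx.Done channel captured at construction time,
// per https://go.dev/blog/context-and-structs, instead of storing the
// context itself in the struct.
type listener struct {
	inner net.Listener
	done  <-chan struct{}
}

func newListener(ctx context.Context, inner net.Listener) *listener {
	return &listener{inner: inner, done: ctx.Done()}
}

// forward hands an accepted connection to the consumer, but returns on
// cancellation so the accept goroutine cannot leak.
func (l *listener) forward(conns chan<- net.Conn, c net.Conn) {
	select {
	case conns <- c:
	case <-l.done:
		c.Close()
	}
}
```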
Daryl White
0848e69602
Update docs links throughout (#21513) 2025-08-12 11:20:36 -07:00
M Alvee
02ba581ecf
custom user-agent transport wrapper (#21483) 2025-08-08 10:51:53 -07:00
Ian Roberts
b44b2a090c
fix: when claim-based OIDC is configured, treat unknown roleArn as claim-based auth (#21512)
RoleARN is a required parameter in AssumeRoleWithWebIdentity, 
according to the standard AWS implementation, and the official 
AWS SDKs and CLI will not allow you to assume a role from a JWT 
without also specifying a RoleARN.  This meant that it was not 
possible to use the official SDKs for claim-based OIDC with Minio 
(minio/minio#21421), since Minio required you to _omit_ the RoleARN in this case.

minio/minio#21468 attempted to fix this by disabling the validation 
of the RoleARN when a claim-based provider was configured, but this had 
the side effect of making it impossible to have a mixture of claim-based 
and role-based OIDC providers configured at the same time - every 
authentication would be treated as claim-based, ignoring the RoleARN entirely.

This is an alternative fix, whereby:

- _if_ the `RoleARN` is one that Minio knows about, then use the associated role policy
- if the `RoleARN` is not recognised, but there is a claim-based provider configured, then ignore the role ARN and attempt authentication with the claim-based provider
- if the `RoleARN` is not recognised, and there is _no_ claim-based provider, then return an error.
2025-08-08 10:51:23 -07:00
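A sketch of the three-way decision described above (identifiers here are illustrative, not MinIO's actual ones):
```go
package oidcauth

import "errors"

var errUnknownRoleARN = errors.New("unknown RoleARN and no claim-based provider")

type policy struct{ Name string }

// resolve mirrors the three cases listed in the commit message.
func resolve(roleARN string, rolePolicies map[string]policy, claimBased bool) (policy, error) {
	if p, ok := rolePolicies[roleARN]; ok {
		return p, nil // known RoleARN: use the associated role policy
	}
	if claimBased {
		// unrecognised ARN but a claim-based provider exists: ignore the
		// ARN and authenticate using the policy claim from the JWT
		return policy{Name: "policy-from-jwt-claim"}, nil
	}
	return policy{}, errUnknownRoleARN
}
```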
dorman
c7d6a9722d
Modify permission verification type (#21505) 2025-08-08 02:47:37 -07:00
jiuker
a8abdc797e
fix: add name and description to ldap accesskey list (#21511) 2025-08-07 19:46:04 -07:00
M Alvee
0638ccc5f3
fix: claim based oidc for official aws libraries (#21468) 2025-08-07 19:42:38 -07:00
jiuker
b1a34fd63f
fix: errUploadIDNotFound will be ignored when err is from peer client (#21504) 2025-08-07 19:38:41 -07:00
Klaus Post
ffcfa36b13
Check legalHoldPerm (#21508)
The provided parameter should be checked before accepting legal hold
2025-08-07 19:38:25 -07:00
Aditya Kotra
376fbd11a7
fix(helm): do not suspend versioning by default for buckets, only set versioning if specified (21349) (#21494)
Signed-off-by: Aditya Kotra <kaditya030@gmail.com>
2025-08-07 02:47:02 -07:00
dorman
c76f209ccc
Optimize outdated commands in the log (#21498) 2025-08-06 16:48:58 -07:00
M Alvee
7a6a2256b1
imagePullSecrets consistent types for global, local (#21500) 2025-08-06 16:48:24 -07:00
Johannes Horn
d002beaee3
feat: add variable for datasource in grafana dashboards (#21470) 2025-08-03 18:46:49 -07:00
jiuker
71f293d9ab
fix: record extra skippedEntry for listObject (#21484) 2025-08-01 08:53:35 -07:00
jiuker
e3d183b6a4
bring more idempotent behavior to AbortMultipartUpload() (#21475)
fix #21456
2025-07-30 23:57:23 -07:00
Alex
752abc2e2c
Update console to v2.0.3 (#21474)
Signed-off-by: Benjamin Perez <benjamin@bexsoft.net>
Co-authored-by: Benjamin Perez <benjamin@bexsoft.net>
2025-07-30 10:57:17 -07:00
Minio Trusted
b9f0e8c712 Update yaml files to latest version RELEASE.2025-07-23T15-54-02Z 2025-07-23 18:28:46 +00:00
M Alvee
7ced9663e6
simplify validating policy mapping (#21450) 2025-07-23 08:54:02 -07:00
MagicPig
50fcf9b670
fix boundary value bug when objTime ends in whole seconds (without sub-second) (#21419) 2025-07-23 05:36:06 -07:00
Harshavardhana
64f5c6103f
wait for metadata reads on minDisks+1 for HEAD/GET when data==parity (#21449)
fixes a regression since #19741
2025-07-23 04:21:15 -07:00
Poorna
e909be6380 send replication requests to correct pool (#1162)
Fixes incorrect application of ilm expiry rules on versioned objects
when replication is enabled.

Regression from https://github.com/minio/minio/pull/20441 which sends
DeleteObject calls to all pools. This is a problem for replication + ilm
scenario since replicated version can end up in a pool by itself instead of
pool where remaining object versions reside.

For example, if the delete marker is set on pool1 and object versions exist on
pool2, the second rule below will cause the delete marker to be expired by ilm
policy since it is the single version present in pool1
```
{
  "Rules": [
   {
    "ID": "cs6il1ri2hp48g71mdjg",
    "NoncurrentVersionExpiration": {
     "NoncurrentDays": 14
    },
    "Status": "Enabled"
   },
   {
    "Expiration": {
     "ExpiredObjectDeleteMarker": true
    },
    "ID": "cs6inj3i2hp4po19cil0",
    "Status": "Enabled"
   }
  ]
}
```
2025-07-19 13:27:52 -07:00
jiuker
83b2ad418b
fix: restrict SinglePool by the minimum free drive threshold (#21115) 2025-07-18 23:25:44 -07:00
Loganaden Velvindron
7a64bb9766
Add support for X25519MLKEM768 (#21435)
Signed-off-by: Bhuvanesh Fokeer <fokeerbhuvanesh@cyberstorm.mu>
Signed-off-by: Nakul Baboolall <nkb@cyberstorm.mu>
Signed-off-by: Sehun Bissessur <sehun.bissessur@cyberstorm.mu>
2025-07-18 23:23:15 -07:00
Minio Trusted
34679befef Update yaml files to latest version RELEASE.2025-07-18T21-56-31Z 2025-07-18 23:28:59 +00:00
Harshavardhana
4021d8c8e2
fix: lambda handler response to match the lambda return status (#21436) 2025-07-18 14:56:31 -07:00
Burkov Egor
de234b888c
fix: admin api - SetPolicyForUserOrGroup avoid nil deref (#21400) 2025-07-01 09:00:17 -07:00
Mark Theunissen
2718d9a430
CopyObject must preserve checksums and encrypt them if required (#21399) 2025-06-25 08:08:54 -07:00
Alex
a65292cab1
Update Console to latest version (#21397)
Signed-off-by: Benjamin Perez <benjamin@bexsoft.net>
2025-06-24 17:33:22 -07:00
Minio Trusted
e0c79be251 Update yaml files to latest version RELEASE.2025-06-13T11-33-47Z 2025-06-23 20:28:38 +00:00
jiuker
a6c538c5a1
fix: honor renamePart's PathNotFound (#21378) 2025-06-13 04:33:47 -07:00
jiuker
e1fcaebc77
fix: ListMultipartUploads should filter cached results by bucket before appending (#21376) 2025-06-12 00:09:12 -07:00
Johannes Horn
21409f112d
add networkpolicy for job and add possibility to define egress ports (#20951) 2025-06-08 09:14:18 -07:00
Sung Jeon
417c8648f0
use provided region in tier configuration for S3 backend (#21365)
fixes #21364
2025-06-08 09:13:30 -07:00
ffgan
e2245a0b12
allow cross-compiling support for RISC-V 64 (#21348)
this is a minor PR that supports building on RISC-V 64;
this PR is for compilation only. There is no guarantee
that the code is tested or will work in production.
2025-06-08 09:12:05 -07:00
Shubhendu
b4b3d208dd
Add targetArn label for bucket replication metrics (#21354)
Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2025-06-04 13:45:31 -07:00
ILIYA
0a36d41dcd
modernizes for loop in cmd/, internal/ (#21309) 2025-05-27 08:19:03 -07:00
jiuker
ea77bcfc98
fix: panic for TestListObjectsWithILM (#21322) 2025-05-27 08:18:36 -07:00
jiuker
9f24ca5d66
fix: empty fileName causes nil Reader in PostPolicyBucketHandler (#21323) 2025-05-27 08:18:26 -07:00
VARUN SHARMA
816666a4c6
make some targeted updates to README.md (#21125) 2025-05-26 12:34:56 -07:00
Anis Eleuch
2c7fe094d1
s3: Fix early listing stopping when ILM is enabled (#472) (#21246)
S3 listing call is usually sent with a 'max-keys' parameter. This
'max-keys' will also be passed to WalkDir() call. However, when ILM is
enabled in a bucket and some objects are skipped, the listing can
return IsTruncated set to false even if there are more entries in
the drives.

The reason is that the drives stop feeding the listing code because it has
a max-keys parameter, and the listing code thinks the listing is finished
because it is no longer being fed.

Ask the drives not to stop listing, and rely on the context
cancellation to stop listing in the drives as fast as possible.
2025-05-26 00:06:43 -07:00
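Conceptually, the fix replaces a count-based stop at the drive with a cancellation-based one; a hedged sketch (names illustrative):
```go
package walklist

import "context"

// feed no longer stops after max-keys entries at the drive level; the
// caller cancels ctx once it has gathered enough results, so entries
// skipped by ILM cannot make the drive stop feeding prematurely.
func feed(ctx context.Context, entries <-chan string, out chan<- string) {
	for {
		select {
		case <-ctx.Done():
			return // caller has enough results
		case e, ok := <-entries:
			if !ok {
				return // drive exhausted
			}
			select {
			case out <- e:
			case <-ctx.Done():
				return
			}
		}
	}
}
```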
Harshavardhana
9ebe168782 add pull requests etiquette 2025-05-25 09:32:03 -07:00
Minio Trusted
ee2028cde6 Update yaml files to latest version RELEASE.2025-05-24T17-08-30Z 2025-05-24 21:37:47 +00:00
Frank Elsinga
ecde75f911
docs: use github-style-notes in the readme (#21308)
use notes in the readme
2025-05-24 10:08:30 -07:00
jiuker
12a6ea89cc
fix: Use mime encode for Non-US-ASCII metadata (#21282) 2025-05-22 08:42:54 -07:00
Anis Eleuch
63e102c049
heal: Avoid disabling scanner healing in single and dist erasure mode (#21302)
A typo disabled the scanner healing in erasure mode. Fix it.
2025-05-22 08:42:29 -07:00
Alex
160f8a901b
Update Console UI to latest version (#21294) 2025-05-21 08:59:37 -07:00
jiuker
ef9b03fbf5
fix: panic when unable to get net.Interface (#21277) 2025-05-16 07:28:04 -07:00
Andreas Auernhammer
1d50cae43d
remove support for FIPS 140-2 with boringcrypto (#21292)
This commit removes FIPS 140-2 related code for the following
reasons:
 - FIPS 140-2 is a compliance requirement, not a security requirement. Being
   FIPS 140-2 compliant has no security implication on its own.
   From a technical perspective, a FIPS 140-2 compliant implementation
   is not necessarily secure and a non-FIPS 140-2 compliant implementation
   is not necessarily insecure. It depends on the concrete design and
   crypto primitives/constructions used.
 - The boringcrypto branch used to achieve FIPS 140-2 compliance was never
   officially supported by the Go team and is now in maintenance mode.
   It is replaced by a built-in FIPS 140-3 module. It will be removed
   eventually. Ref: https://github.com/golang/go/issues/69536
 - FIPS 140-2 modules are no longer re-certified after Sep. 2026.
   Ref: https://csrc.nist.gov/projects/cryptographic-module-validation-program

Signed-off-by: Andreas Auernhammer <github@aead.dev>
2025-05-16 07:27:42 -07:00
Klaus Post
c0a33952c6
Allow FTPS to force TLS (#21251)
Fixes #21249

Example params: `-ftp=force-tls=true -ftp="tls-private-key=ftp/private.key" -ftp="tls-public-cert=ftp/public.crt"`

If MinIO is set up for TLS those certs will be used.
2025-05-09 13:10:19 -07:00
Alex
8cad40a483
Update UI console to the latest version (#21278)
Signed-off-by: Benjamin Perez <benjamin@bexsoft.net>
2025-05-09 13:09:54 -07:00
Harshavardhana
6d18dba9a2
return error for AppendObject() API (#21272) 2025-05-07 08:37:12 -07:00
jiuker
9ea14c88d8
cleanup: use NewWithOptions replace the Deprecated one (#21243) 2025-04-29 08:35:51 -07:00
jiuker
30a1261c22
fix: track object and bucket for expireAll (#21241) 2025-04-27 21:19:38 -07:00
Matt Lloyd
0e017ab071
feat: support nats nkey seed auth (#21231) 2025-04-26 21:30:57 -07:00
Harshavardhana
f14198e3dc update with newer pkger release 2025-04-26 17:44:22 -07:00
Burkov Egor
93c389dbc9
typo: return actual error from RemoveRemoteTargetsForEndpoint (#21238) 2025-04-26 01:43:10 -07:00
jiuker
ddd9a84cd7
allow concurrent aborts on active uploadParts() (#21229)
allow aborting active uploads in progress; however, fail these
uploads subsequently during the commit phase and return appropriate errors
2025-04-24 22:41:04 -07:00
Celis
b7540169a2
Add documentation for replication_max_lrg_workers (#21236) 2025-04-24 16:34:26 -07:00
Klaus Post
f01374950f
Use go mod tool to install tools for go generate (#21232)
Use go tool for generators

* Use go.mod tool section
* Install tools with go generate
* Update dependencies
* Remove madmin fork.
2025-04-24 16:34:11 -07:00
Taran Pelkey
18aceae620
Fix nil dereference in adding service account (#21235)
Fixes #21234
2025-04-24 11:14:00 -07:00
Andreas Auernhammer
427826abc5
update minio/kms-go/kms SDK (#21233)
Signed-off-by: Andreas Auernhammer <github@aead.dev>
2025-04-24 08:33:57 -07:00
Harshavardhana
2780778c10 Revert "Fix: Change TTFB metric type to histogram (#20999)"
This reverts commit 8d223e07fb7f8593ae56dfd2f4a0688fe1ee8a17.
2025-04-23 13:56:18 -07:00
Shubhendu
2d8ba15b9e
Correct spelling (#21225) 2025-04-23 08:13:23 -07:00
Minio Trusted
bd6dd55e7f Update yaml files to latest version RELEASE.2025-04-22T22-12-26Z 2025-04-22 22:34:07 +00:00
Matt Lloyd
0d7408fc99
feat: support nats tls handshake first (#21008) 2025-04-22 15:12:26 -07:00
jiuker
864f80e226
fix: batch expiry job doesn't report delete marker in batch-status (#21183) 2025-04-22 04:16:32 -07:00
Harshavardhana
0379d6a37f fix: permissions for docker-compose 2025-04-21 09:24:31 -07:00
Harshavardhana
43aa8e4259
support autogenerated credentials for KMS_SECRET_KEY properly (#21223)
we had a chicken-and-egg problem with this feature: even
when used with KES, the credentials generation would
not work in the correct sequence, causing setup/deployment
disruptions.

This PR streamlines all of this properly to ensure that
this functionality works as advertised.
2025-04-21 09:23:51 -07:00
Harshavardhana
e2ed696619 fix: docker-compose link since latest release 2025-04-20 10:05:30 -07:00
Klaus Post
fb3f67a597
Fix shared error buffer (#21203)
v.cancelFn(RemoteErr(m.Payload)) would use an already returned buffer.

Simplify code a bit as well by returning on errors.
2025-04-18 02:10:55 -07:00
dependabot[bot]
7ee75368e0
build(deps): bump github.com/nats-io/nats-server/v2 from 2.9.23 to 2.10.27 (#21191)
build(deps): bump github.com/nats-io/nats-server/v2

Bumps [github.com/nats-io/nats-server/v2](https://github.com/nats-io/nats-server) from 2.9.23 to 2.10.27.
- [Release notes](https://github.com/nats-io/nats-server/releases)
- [Changelog](https://github.com/nats-io/nats-server/blob/main/.goreleaser.yml)
- [Commits](https://github.com/nats-io/nats-server/compare/v2.9.23...v2.10.27)

---
updated-dependencies:
- dependency-name: github.com/nats-io/nats-server/v2
  dependency-version: 2.10.27
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-17 04:45:51 -07:00
dependabot[bot]
1d6478b8ae
build(deps): bump golang.org/x/net from 0.34.0 to 0.38.0 in /docs/debugging/s3-verify (#21199)
build(deps): bump golang.org/x/net in /docs/debugging/s3-verify

Bumps [golang.org/x/net](https://github.com/golang/net) from 0.34.0 to 0.38.0.
- [Commits](https://github.com/golang/net/compare/v0.34.0...v0.38.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-version: 0.38.0
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-17 04:45:33 -07:00
dependabot[bot]
0581001b6f
build(deps): bump golang.org/x/net from 0.37.0 to 0.38.0 (#21200)
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.37.0 to 0.38.0.
- [Commits](https://github.com/golang/net/compare/v0.37.0...v0.38.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-version: 0.38.0
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-17 04:45:15 -07:00
dependabot[bot]
479303e7e9
build(deps): bump golang.org/x/crypto from 0.32.0 to 0.35.0 in /docs/debugging/inspect (#21192) 2025-04-16 14:54:16 -07:00
Burkov Egor
89aec6804b
typo: fix return of checkDiskFatalErrs (#21121) 2025-04-16 08:20:41 -07:00
Taran Pelkey
eb33bc6bf5 Add New Accesskey Info and OpenID Accesskey List API endpoints (#21097) 2025-04-16 00:34:24 -07:00
dependabot[bot]
3310f740f0
build(deps): bump golang.org/x/crypto from 0.32.0 to 0.35.0 in /docs/debugging/s3-verify (#21185)
build(deps): bump golang.org/x/crypto in /docs/debugging/s3-verify

Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.32.0 to 0.35.0.
- [Commits](https://github.com/golang/crypto/compare/v0.32.0...v0.35.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-version: 0.35.0
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-15 07:00:14 -07:00
Burkov Egor
4595293ca0
typo: fix error msg for decoding XL headers (#21120) 2025-04-10 08:55:43 -07:00
Klaus Post
02a67cbd2a
Fix buffered streams missing final entries (#21122)
On buffered streams the final entries could be missing if a lot
are delivered when the stream ends.

Fixes end-of-stream cancelling the return of final entries by canceling
with the StreamEOF error.
2025-04-10 08:29:19 -07:00
Harshavardhana
2b34e5b9ae
move to go1.24 (#21114) 2025-04-09 07:28:39 -07:00
Minio Trusted
a6258668a6 Update yaml files to latest version RELEASE.2025-04-08T15-41-24Z 2025-04-08 19:37:51 +00:00
Krishnan Parthasarathi
d0cada583f
ilm: Expect objects with only free versions when scanning (#21112) 2025-04-08 08:41:24 -07:00
Harshavardhana
0bd8f06b62 fix: healing to list, purge dangling objects (#621)
in a specific corner case, when you only have dangling
objects with a single shard left over, we end up in a situation
where healing is unable to list this dangling object to
purge it, because the listing logic expected only
`len(disks)/2+1` drives - whereas when you make this choice, you
end up with a situation where the drive on which this object
is present is not part of your expected disks list, causing
it to never be listed and to be ignored into perpetuity.

change the logic such that HealObjects() is able
to listAndHeal() properly per set on all its drives, since
there is really no other way to do this cleanly; however,
instead of "listing" on all erasure sets simultaneously, we
list on '3' at a time. So in a large enough cluster this is
fairly staggered.
2025-04-04 06:49:12 -07:00
Harshavardhana
6640be3bed fix: listParts crash when partNumberMarker is expected (#620)
fixes https://github.com/minio/minio/issues/21098
2025-04-04 06:44:38 -07:00
Anis Eleuch
eafeb27e90
decom: Ignore orphan delete markers in verification stage (#21106)
To make sure that no objects were skipped for any reason,
decommissioning does a second phase of listing to check if there
are some objects that need to be decommissioned. However, the code
forgot to skip orphan delete markers, which the decom code already
skips.

Make the code ignore delete markers in the verification phase.

Co-authored-by: Anis Eleuch <anis@min.io>
2025-04-03 15:07:24 -07:00
Minio Trusted
f2c9eb0f79 Update yaml files to latest version RELEASE.2025-04-03T14-56-28Z 2025-04-03 18:57:40 +00:00
爱折腾的小竹同学
f2619d1f62
Fix description error in README (#21099)
There is a prefix in the JSON, but not in the equivalent command line. Although the role of the prefix has been explained in the previous example, I think it should be supplemented here as well.
2025-04-03 07:56:28 -07:00
Harshavardhana
8c70975283
make sure to validate signature unsigned trailer stream (#21103)
This is a security incident fix; it would seem that since
the implementation of the unsigned payload trailer on PUTs,
we do not validate the signature of the incoming request.

The signature can be invalid and is totally ignored. This
in turn allows any arbitrary secret to upload objects,
given the user has "WRITE" permissions on the bucket. Since
the access key is public information, this in general exposes
users with WRITE on the bucket to impersonation by any
arbitrary client making a fake request to MinIO, as the signature
under the Authorization: header is totally ignored.

A test has been added to cover this scenario and fail
appropriately.
2025-04-03 07:55:52 -07:00
Krishnan Parthasarathi
01447d2438
Fix evaluation of NewerNoncurrentVersions (#21096)
- Move VersionPurgeStatus into replication package
- ilm: Evaluate policy w/ obj retention/replication
- lifecycle: Use Evaluator to enforce ILM in scanner
- Unit tests covering ILM, replication and retention
- Simplify NewEvaluator constructor
2025-04-02 23:45:06 -07:00
Shubhendu
07f31e574c
Try reconnect IAM systems if failed initially (#20333)
Fixes: https://github.com/minio/minio/issues/20118

Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2025-04-02 10:29:33 -07:00
iamsagar99
8d223e07fb
Fix: Change TTFB metric type to histogram (#20999) 2025-04-01 22:48:58 -07:00
Harshavardhana
4041a8727c start publishing latest-cicd images 2025-04-01 20:53:54 -07:00
Klaus Post
5f243fde9a
Fix anonymous unsigned trailing headers (#21095)
Do not fail on anonymous requests with trailing headers.

Fixes #21005

With modified minio-go (will send PR):

```
<DEBUG> PUT /tbb/mc.exe HTTP/1.1
Host: 127.0.0.1:9001
User-Agent: MinIO (windows; amd64) minio-go/v7.0.90 mc/DEVELOPMENT.GOGET
Content-Length: 44301288
Accept-Encoding: zstd,gzip
Content-Encoding: aws-chunked
Content-Type: application/x-msdownload
X-Amz-Content-Sha256: STREAMING-UNSIGNED-PAYLOAD-TRAILER
X-Amz-Date: 20250401T150402Z
X-Amz-Decoded-Content-Length: 44295168
X-Amz-Trailer: x-amz-checksum-crc32

mc: <DEBUG> HTTP/1.1 200 OK
Content-Length: 0
Accept-Ranges: bytes
Date: Tue, 01 Apr 2025 15:04:02 GMT
Etag: "46273a30f232dc015ead1c0da8925c98"
Server: MinIO
Strict-Transport-Security: max-age=31536000; includeSubDomains
Vary: Origin
Vary: Accept-Encoding
X-Amz-Checksum-Crc32: wElc/A==
X-Amz-Id-2: 7987905dee74cdeb212432486a178e511309594cee7cb75f892cd53e35f09ea4
X-Amz-Request-Id: 18323A0F322B41C8
X-Content-Type-Options: nosniff
X-Ratelimit-Limit: 2478
X-Ratelimit-Remaining: 2478
X-Xss-Protection: 1; mode=block
```

Tested on multipart uploads as well.
2025-04-01 11:23:27 -07:00
Burkov Egor
a0e3f1cc18
internal: add handling of KVS config parse (#21079) 2025-04-01 08:28:26 -07:00
Name
b1bc641105
chore(all): replace map key deletion loop with clear() (#21082) 2025-04-01 08:28:06 -07:00
jiuker
e0c8738230
fix: token is invalid for admin heal when minio is distErasure on windows (#21092) 2025-04-01 08:21:33 -07:00
alingse
9aa24b1920
fix: call toAPIErrorCode with the checked error instead of a nil value (#21083)
the code checks lerr != nil but returns toAPIErrorCode(nil)

it should return toAPIErrorCode(lerr)
2025-03-31 13:31:15 -07:00
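Schematically, the bug and the fix; the helper below is a stand-in for MinIO's converter, not its actual signature:
```go
package apierr

// toAPIErrorCode stands in for MinIO's converter: with a nil error it
// reports success, which is exactly how the original bug masked failures.
func toAPIErrorCode(err error) string {
	if err == nil {
		return "Success"
	}
	return "InternalError"
}

func handle(lerr error) string {
	if lerr != nil {
		// was: return toAPIErrorCode(nil) -- always reported success
		return toAPIErrorCode(lerr) // convert the error actually checked
	}
	return toAPIErrorCode(nil)
}
```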
Taran Pelkey
53d40e41bc
Add new API endpoint to revoke STS tokens (#21072) 2025-03-31 11:51:24 -07:00
Taran Pelkey
e88d494775
Migrate golanglint-ci config to V2 (#21081) 2025-03-29 17:56:02 -07:00
dependabot[bot]
b67f0cf721
build(deps): bump github.com/golang-jwt/jwt/v4 from 4.5.1 to 4.5.2 (#21056)
Bumps [github.com/golang-jwt/jwt/v4](https://github.com/golang-jwt/jwt) from 4.5.1 to 4.5.2.
- [Release notes](https://github.com/golang-jwt/jwt/releases)
- [Changelog](https://github.com/golang-jwt/jwt/blob/main/VERSION_HISTORY.md)
- [Commits](https://github.com/golang-jwt/jwt/compare/v4.5.1...v4.5.2)

---
updated-dependencies:
- dependency-name: github.com/golang-jwt/jwt/v4
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-23 08:18:21 -07:00
Alexander Kalaj
46922c71b7
Updating Prom queries to include the tilde needed for them to work (#21054) 2025-03-22 08:22:29 -07:00
dependabot[bot]
670edb4fcf
build(deps): bump github.com/golang-jwt/jwt/v5 from 5.2.1 to 5.2.2 (#21055)
Bumps [github.com/golang-jwt/jwt/v5](https://github.com/golang-jwt/jwt) from 5.2.1 to 5.2.2.
- [Release notes](https://github.com/golang-jwt/jwt/releases)
- [Changelog](https://github.com/golang-jwt/jwt/blob/main/VERSION_HISTORY.md)
- [Commits](https://github.com/golang-jwt/jwt/compare/v5.2.1...v5.2.2)

---
updated-dependencies:
- dependency-name: github.com/golang-jwt/jwt/v5
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-22 08:21:04 -07:00
itsJohnySmith
42d4ab2a0a
fix(templates): replace dash with underscore (#19566) 2025-03-14 13:01:11 -07:00
Harshavardhana
5e2eb372bf update dependencies for CVE fix x/net 2025-03-12 22:29:51 -07:00
Minio Trusted
cccb37a5ac Update yaml files to latest version RELEASE.2025-03-12T18-04-18Z 2025-03-12 18:22:31 +00:00
Anis Eleuch
dbf31af6cb
decom: Ignore not found buckets (#509) (#21023)
When decommissioning is started, the list of buckets to decommission is
calculated, however, a bucket can be removed before decommissioning reaches
it. This will cause an infinite loop of listing error complaining about
the non-existence of the bucket. This commit will ignore
errVolumeNotFound to skip the not found bucket.
2025-03-12 11:04:18 -07:00
Klaus Post
93e40c3ab4
Disable unstable test (#20996)
Disable unstable test in vendored package. Only used for s3 select.
2025-03-12 10:26:50 -07:00
Aditya Manthramurthy
8aa0e9ff7c
Update ssh and jws libs for fixed CVEs (#21017)
- https://pkg.go.dev/vuln/GO-2025-3488
- https://pkg.go.dev/vuln/GO-2025-3487
2025-03-12 08:16:19 -07:00
Aditya Manthramurthy
bbd6f18afb
Update typos config (#21018) 2025-03-11 08:44:54 -07:00
Harshavardhana
2a3acc4f24 drive heal if we have enough success, do not error setList() (#516) 2025-03-10 19:57:24 -07:00
Klaus Post
11507d46da
Enforce a bucket limit of 100 to v2 metrics calls (#20761)
Enforce a bucket count limit on metrics for v2 calls.

If people hit this limit, they should move to v3, as certain calls explode with high bucket count.

Reviewers: This *should* only affect v2 calls, but the complexity is overwhelming.
2025-02-28 11:33:08 -08:00
Minio Trusted
f9c62dea55 Update yaml files to latest version RELEASE.2025-02-28T09-55-16Z 2025-02-28 18:16:28 +00:00
Klaus Post
8c2c92f7af
Fix healing probability for skipped folders (#20988)
We must update the heal probability when selectively skipping folders.
2025-02-28 01:55:16 -08:00
Aditya Manthramurthy
4c71f1b4ec
fix: SFTP auth bypass with no pub key in LDAP (#20986)
If a user attempts to authenticate with a key but does not have an
sshpubkey attribute in LDAP, the server allows the connection, which 
means the server trusted the key without reason. This is now fixed, 
and a test has been added for validation.
2025-02-27 10:43:32 -08:00
Poorna
6cd8a372cb
replication: set checksum type correctly (#20985)
Fixes: #20978
2025-02-26 15:17:28 -08:00
Anis Eleuch
953a3e2bbd
check for errors on bitrotWriter Close() (#20982) 2025-02-26 11:26:13 -08:00
Mark Theunissen
7cc0c69228
Allow disabling of all X-Forwarded-For header processing (#20977) 2025-02-26 11:25:49 -08:00
Anis Eleuch
f129fd48f2
Update golang.org/x/crypto to address govulncheck complaint (#20983) 2025-02-26 08:15:09 -08:00
TripleChecker
bc4008ced4
Fix typos (#20970) 2025-02-26 01:25:50 -08:00
dependabot[bot]
526053339b
build(deps): bump github.com/go-jose/go-jose/v4 from 4.0.4 to 4.0.5 (#20976)
Bumps [github.com/go-jose/go-jose/v4](https://github.com/go-jose/go-jose) from 4.0.4 to 4.0.5.
- [Release notes](https://github.com/go-jose/go-jose/releases)
- [Changelog](https://github.com/go-jose/go-jose/blob/main/CHANGELOG.md)
- [Commits](https://github.com/go-jose/go-jose/compare/v4.0.4...v4.0.5)

---
updated-dependencies:
- dependency-name: github.com/go-jose/go-jose/v4
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-02-26 01:25:19 -08:00
Taran Pelkey
62a35b3e77
Update SRSvcAccCreate with new type (#20974) 2025-02-24 17:43:59 -08:00
Taran Pelkey
39df134204
Fix importIAM issue with importing implied policies (#20956) 2025-02-19 10:10:53 -08:00
Minio Trusted
ad4cbce22d Update yaml files to latest version RELEASE.2025-02-18T16-25-55Z 2025-02-18 20:59:14 +00:00
Klaus Post
90f5e1e5f6
tests: Do not allow forced type asserts (#20905) 2025-02-18 08:25:55 -08:00
Klaus Post
aeabac9181
Test checksum types for invalid combinations (#20953) 2025-02-18 08:24:01 -08:00
Klaus Post
b312f13473
Extract all files from encrypted stream with inspect (#20937)
Allow multiple private keys and extract all files from streams.

Place files in the folder with `.enc` removed.

Do basic checks so streams cannot traverse outside of the folder.
2025-02-17 09:09:42 -08:00
Rodrigo dos Santos Felix
727a803bc0
fix(docs): update mc admin trace link to MinIO official docs (#20943) 2025-02-16 20:52:27 -08:00
Name
d0e443172d
chore: remove unused and incorrect IsEmpty method from TargetIDSet (#20939) 2025-02-16 08:43:15 -08:00
Jeeva Kandasamy
60446e7ac0
ftp: Enable trailing headers, just like sftp (#20938) 2025-02-15 02:32:09 -08:00
Harshavardhana
b8544266e5 fix: typo in queuestore.go 2025-02-15 02:31:50 -08:00
Ramon de Klein
437dd4e32a
Fix missing authorization check for PutObjectRetentionHandler (#20929) 2025-02-12 08:08:13 -08:00
Cesar N.
447054b841
Update console to 1.7.6 (#20925) 2025-02-11 15:43:04 -08:00
Harshavardhana
9bf43e54cd allow ARCH specific hotfixes 2025-02-11 14:33:31 -08:00
Manuel Reis
60f8423157
Quick patch for Snowball AutoExtract: #20883 (#20885)
* Checking allowance on empty prefix or Snowball-prefix - fixes #20883
* Check the policy for each object during Snowball auto-extraction
2025-02-10 15:52:59 -08:00
Klaus Post
4355ea3c3f
(s)ftp: Enable trailing headers for upload (#20914)
Since we always "connect" to minio, it is fine.
2025-02-10 08:35:49 -08:00
Klaus Post
e30f1ad7bd
Fix nil pointer deref in PeerPolicyMappingHandler (#20913)
The following lines will attempt to de-reference the nil value. Instead just return the error at once.
2025-02-10 08:35:13 -08:00
Minio Trusted
f00c8c4cce Update yaml files to latest version RELEASE.2025-02-07T23-21-09Z 2025-02-08 21:03:40 +00:00
Andreas Auernhammer
703f51164d
kms: add MINIO_KMS_REPLICATE_KEYID option (#20909)
This commit adds the `MINIO_KMS_REPLICATE_KEYID` env. variable.
By default - if not specified or not set to `off` - MinIO will
replicate the KMS key ID of an object.

If `MINIO_KMS_REPLICATE_KEYID=off`, MinIO does not include the
object's KMS Key ID when replicating an object. However, it always
sets the SSE-KMS encryption header. This ensures that the object
gets encrypted using SSE-KMS. The target site chooses the KMS key
ID that gets used based on the site and bucket config.

Signed-off-by: Andreas Auernhammer <github@aead.dev>
2025-02-07 15:21:09 -08:00
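A sketch of the documented behaviour; the header names are the standard S3 SSE-KMS headers, and the boolean stands in for the parsed env value:
```go
package kmsreplicate

// replicationSSEHeaders always requests SSE-KMS on the target, but
// forwards the source object's KMS key ID only when
// MINIO_KMS_REPLICATE_KEYID is not "off" (replicateKeyID == true).
// Otherwise the target site chooses its own key per site/bucket config.
func replicationSSEHeaders(keyID string, replicateKeyID bool) map[string]string {
	h := map[string]string{
		"X-Amz-Server-Side-Encryption": "aws:kms", // always set
	}
	if replicateKeyID && keyID != "" {
		h["X-Amz-Server-Side-Encryption-Aws-Kms-Key-Id"] = keyID
	}
	return h
}
```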
Klaus Post
b8dde47d4e
fix: multipart replication with single part objects (#20895)
x-amz-checksum-algorithm is not set, causing all multipart single-part objects
to fail to replicate when uploaded via sftp/FTP.
2025-02-05 15:06:02 -08:00
Andreas Auernhammer
7fa3e39f85
sts: allow client-provided intermediate CAs (#20896)
This commit allows clients to provide a set of intermediate CA
certificates (up to `MaxIntermediateCAs`) that the server will
use as intermediate CAs when verifying the trust chain from the
client leaf certificate up to one trusted root CA.

This is required if the client leaf certificate is not issued by
a trusted CA directly but by an intermediate CA. Without this commit,
MinIO rejects such certificates.

Signed-off-by: Andreas Auernhammer <github@aead.dev>
2025-02-04 16:29:41 -08:00
Poorna
4df7a3aa8f fix: site replication of bucket deletion sync (#352)
Bucket deletion timestamp was not being passed back
in GetBucketInfo, which is needed to decide on the bucket
creation/deletion
2025-02-04 00:36:03 -08:00
Poorna
64a8f2e554
replication: default tag timestamps in CopyObject call (#20891)
If object is uploaded with tags, the internal tagging-timestamp tracked
for replication will be missing. Default to ModTime in such cases to
allow tags to be synced correctly.

Also fixing a regression in fetching tags and tag comparison
2025-02-04 00:35:55 -08:00
Minio Trusted
f4fd4ea66d Update yaml files to latest version RELEASE.2025-02-03T21-03-04Z 2025-02-04 06:55:11 +00:00
Anis Eleuch
712fe1a8df
fix: proxy requests to honor global transport
* fix: proxy requests to honor global transport 
Load the globalProxyEndpoint properly

also, currently, the proxy requests for batch cancel will fail silently
even if the proxy fails; instead, properly send the corresponding error back
for such proxy failures if opted

* pass the transport to the GetProxyEnpoints function

---------

Co-authored-by: Praveen raj Mani <praveen@minio.io>
2025-02-03 22:03:04 +01:00
Klaus Post
4a319bedc9
Redact sensitive fields from DescribeBatchJob (#20881)
Redacts the following if set:

* replicate/credentials/secretKey
* replicate/credentials/sessionToken
* expire/notify/token
2025-02-03 08:56:26 -08:00
Klaus Post
bdb3db6dad
Add lock overload protection (#20876)
Reject new lock requests immediately when 1000 goroutines are queued 
for the local lock mutex.

We do not reject unlocking, refreshing, or maintenance; they add to the count.

The limit is set to allow for bursty behavior but prevent requests from 
overloading the server completely.
2025-01-31 11:54:34 -08:00
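A minimal sketch of the overload guard, assuming a simple local mutex; the 1000-waiter cutoff is from the commit, everything else is illustrative:
```go
package lockguard

import (
	"errors"
	"sync"
	"sync/atomic"
)

const maxWaiters = 1000 // cutoff described in the commit

var errOverloaded = errors.New("lock queue overloaded, rejecting request")

type guardedMutex struct {
	mu      sync.Mutex
	waiters atomic.Int64 // goroutines currently queued
}

// Lock rejects new lock requests once too many goroutines are queued.
// Unlock/refresh/maintenance paths would bypass the rejection (while
// still counting toward the queue), matching the behaviour above.
func (g *guardedMutex) Lock() error {
	if g.waiters.Load() >= maxWaiters {
		return errOverloaded
	}
	g.waiters.Add(1)
	defer g.waiters.Add(-1)
	g.mu.Lock()
	return nil
}

func (g *guardedMutex) Unlock() { g.mu.Unlock() }
```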
Klaus Post
abb385af41
Check for valid checksum (#20878)
Add a few safety measures for checksums.
2025-01-28 16:59:23 -08:00
Harshavardhana
4ee62606e4 update govulncheck 2025-01-28 11:11:08 -08:00
Anis Eleuch
079d64c801
DeleteObjects: Send delete to all pools (#172) (#20821)
Currently, DeleteObjects() tries to find the object's pool before
sending a delete request. This only works well when an object has
multiple versions in different pools since looking for the pool does
not consider the version-id. When an S3 client wants to
remove a version-id that exists in pool 2, the delete request will be
directed to pool 1, because it has another version of the same object.

This commit will remove looking for pool logic and will send a delete
request to all pools in parallel. This should not cause any performance
regression in most of the cases since the object will unlikely exist
in only one pool, and the performance price will be similar to
getPoolIndex() in that case.
2025-01-28 08:57:18 -08:00
Klaus Post
dcc000ae2c
Allow URLs up to 32KB and improve parsing speed (#20874)
Before/after...
```
Benchmark_hasBadPathComponent/long-32          	   43936	     27232 ns/op	 146.89 MB/s	   32768 B/op	       1 allocs/op
Benchmark_hasBadPathComponent/long-32          	   89956	     13375 ns/op	 299.07 MB/s	       0 B/op	       0 allocs/op
```

* Remove unused.
2025-01-27 08:42:45 -08:00
Harshavardhana
c5d19ecebb
do not expose secret-key to lambda event handler (#20870) 2025-01-24 11:27:43 -08:00
Harshavardhana
ed29a525b3 remove fips builds 2025-01-21 02:10:10 -08:00
Minio Trusted
020c46cd3c Update yaml files to latest version RELEASE.2025-01-20T14-49-07Z 2025-01-21 09:44:32 +00:00
Klaus Post
827004cd6d
Add Full Object Checksums and CRC64-NVME (#20855)
Backport of AIStor PR 247.

Add support for full object checksums as described here:

https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html

New checksum types are fully supported. Mint tests from https://github.com/minio/minio-go/pull/2026 are now passing.

Includes fixes from https://github.com/minio/minio/pull/20743 for mint tests.

Add using checksums as validation for object content. Fixes #20845 #20849

Fixes checksum replication (downstream PR 250)
2025-01-20 06:49:07 -08:00
Harshavardhana
779ec8f0d4
do not list buckets without local quorum (#20852)
ListBuckets() would result in listing buckets
without quorum; this PR fixes the behavior.
2025-01-19 15:13:17 -08:00
Minio Trusted
3d0f513ee2 Update yaml files to latest version RELEASE.2025-01-18T00-31-37Z 2025-01-18 01:38:52 +00:00
Harshavardhana
4b6eadbd80
update deps (#20851) 2025-01-17 16:31:37 -08:00
Shubhendu
6f47414b23
Correct bucket metrics name (#20823)
Earlier, cluster and bucket metrics were named 
`minio_usage_last_activity_nano_seconds`.

The bucket-level metric is now named
`minio_bucket_usage_last_activity_nano_seconds`

Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2025-01-17 13:12:21 -08:00
Harshavardhana
224a27992a update go-deps on debugging tools 2025-01-17 11:41:46 -08:00
Harshavardhana
232544e1d8 update inspect export, which was missed from #20846 2025-01-17 11:38:44 -08:00
Anis Eleuch
dbcb71828d
s3: Provide enough buffer when the object final size is unknown (#20847)
When compression is enabled, the final object size is not calculated; in
that case, we need to make sure that the provided buffer is always
larger than the shard size. Bitrot always calculates
the hash of blocks at shard size, except for the last block.
2025-01-17 11:19:30 -08:00
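The sizing rule itself is small; a hedged sketch (names illustrative):
```go
package erasurebuf

// bufferSize guarantees the staging buffer can hold at least one full
// erasure shard: bitrot hashes fixed shard-size blocks (except the
// last), so anything smaller breaks hashing when the final object size
// is unknown, as with compressed uploads.
func bufferSize(requested, shardSize int64) int64 {
	if requested < shardSize {
		return shardSize
	}
	return requested
}
```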
Klaus Post
b9196757fd
Fix inconsistently written compressed files. (#20846)
Before https://github.com/minio/minio/pull/20575, files could pick up indices 
from unrelated files if no index was added.

This would result in these files not being consistent across a set.

When loading, search for the compression indicators and check if they 
are within the problematic date range, and clean up any parts that have 
an index but shouldn't.

The test validates that the signature matches the one in files stored without an index.

Bumps xlMetaVersion, so this check doesn't have to be made for future versions.
2025-01-17 11:17:18 -08:00
Andreas Auernhammer
b4ac53d157
update github.com/minio/kms-go/kes to v0.3.1 (#20843)
Signed-off-by: Andreas Auernhammer <github@aead.dev>
2025-01-16 01:13:28 -08:00
Poorna
4952bdb770
decom: avoid skipping single delete markers for replication (#20836)
It is possible a delete marker was received on the old pool while a decom
move was in progress; this PR allows decom retry to ensure these
delete markers are moved to the new pool so that decommission can
be completed.

Fixes #20819
2025-01-14 11:53:02 -08:00
Anis Eleuch
00b2ef2932
Bump golang.org/x/net to silence wrong vuln checker (#20814) 2025-01-08 16:39:24 +05:30
Klaus Post
4536ecfaa4
Add cpuio profiling potential crash workaround (#20809)
Add profiling potential crash workaround

Using admin traces could potentially crash the server (or handler more likely) due to upstream divide by 0: https://github.com/felixge/fgprof/pull/34

Ensure the profile always runs 100ms before stopping, so sample count isn't 0 (default sample rate ~10ms/sample, but allow for cpu starvation)
2025-01-06 21:21:54 +05:30
Harshavardhana
43a7402968 update helm release v5.4.0 2025-01-02 21:34:47 -08:00
Allan Roger Reid
330dca9a35
Add resiliency tests (#20786) 2024-12-20 20:24:45 -08:00
Klaus Post
ddd137d317
ListObjectParts should return actual size (#20782)
Fixes #20781:

```
λ aws --endpoint-url http://127.0.0.1:9001 s3api list-parts --bucket testbucket --key test.testcompress --upload-id "ZDM0YzUwM2YtZWM1Zi00NWI2LTgxMzYtZTIwMGE3Yjc0Y2Y1LjYyMzgyMmFhLWU2N2QtNGUyYS04NDE1LWUzZDFlZmJmMWUyZHgxNzM0NjI1MjgyMDkyNzY4MDAw"
{
    "Parts": [
        {
            "PartNumber": 1,
            "LastModified": "2024-12-19T16:47:04.334000+00:00",
            "ETag": "\"7025f242f56479e06c435c0b500cdbb2\"",
            "Size": 2002
        }
    ],
    "ChecksumAlgorithm": "",
    "Initiator": {
        "ID": "02d6176db174dc93cb1b899f7c6078f08654445fe8cf1b6ce98d8855f66bdbf4",
        "DisplayName": "02d6176db174dc93cb1b899f7c6078f08654445fe8cf1b6ce98d8855f66bdbf4"
    },
    "Owner": {
        "DisplayName": "02d6176db174dc93cb1b899f7c6078f08654445fe8cf1b6ce98d8855f66bdbf4",
        "ID": "02d6176db174dc93cb1b899f7c6078f08654445fe8cf1b6ce98d8855f66bdbf4"
    },
    "StorageClass": "STANDARD"
}
```

(for whatever reason the python script generated a 2002 byte file ;)
2024-12-19 23:51:46 +05:30
Minio Trusted
06ddd8770e Update yaml files to latest version RELEASE.2024-12-18T13-15-44Z 2024-12-19 00:24:40 +00:00
Anis Eleuch
16f8cf1c52
heal: Include more use case of not healable but readable objects (#248) (#20776)
If one object has many parts where all parts are readable but some parts
are missing from some drives, this object can sometimes be un-healable,
which is wrong.

This commit will avoid reading from drives that have missing, corrupted or
outdated xl.meta. It will also check if any part is unreadable to avoid
healing in that case.
2024-12-18 05:15:44 -08:00
Mark Theunissen
01e520eb23
s3: Sanitize the source object name in CopyObject handler (#20774) 2024-12-17 07:01:07 -08:00
Harshavardhana
02f770a0c0
update all dependencies and use latest msgp (#20768) 2024-12-16 04:20:12 +05:30
dependabot[bot]
2f4c79bc0f
Bump golang.org/x/crypto from 0.29.0 to 0.31.0 (#20767)
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.29.0 to 0.31.0.
- [Commits](https://github.com/golang/crypto/compare/v0.29.0...v0.31.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-14 23:04:23 -08:00
dependabot[bot]
969ee7dfbe
Bump golang.org/x/crypto from 0.23.0 to 0.31.0 in /docs/debugging/inspect (#20760)
Bump golang.org/x/crypto in /docs/debugging/inspect

Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.23.0 to 0.31.0.
- [Commits](https://github.com/golang/crypto/compare/v0.23.0...v0.31.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-14 22:04:58 -08:00
Minio Trusted
5f0b086b05 Update yaml files to latest version RELEASE.2024-12-13T22-19-12Z 2024-12-15 03:52:57 +00:00
Artur Melanchyk
68b004a48f
fix: specify size in some map allocations (#20764) 2024-12-13 14:19:12 -08:00
Harshavardhana
54ecce66f0 move readLock on uploadId at EOF in PutObjectPart (#258)
we do not need to hold the read locks at the higher
layer before reading the body; instead, hold
the read locks properly at the time of renamePart()
for protection from racy part overwrites competing
with concurrent completeMultipart().
2024-12-13 14:06:18 -08:00
Artur Melanchyk
2b008c598b
fix: replace mutex with atomic (#20762) 2024-12-13 19:32:46 +05:30
dependabot[bot]
86d02b17cf
Bump golang.org/x/crypto from 0.23.0 to 0.31.0 in /docs/debugging/s3-verify (#20757)
Bump golang.org/x/crypto in /docs/debugging/s3-verify

Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.23.0 to 0.31.0.
- [Commits](https://github.com/golang/crypto/compare/v0.23.0...v0.31.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-12 05:00:05 -08:00
Anis Eleuch
c1a95a70ac
heal: Move CheckParts from single handler to streaming RPC (#20755)
CheckParts call can take time to verify 10k parts of a single object in a single drive.
To avoid an internal deadline of one minute in the single handler RPC, this commit will
switch to streaming RPC instead.
2024-12-12 11:20:57 +05:30
Aditya Manthramurthy
f246c9053f
fix: Privilege escalation in IAM import API (#20756)
This API had missing permissions checking, allowing a user to change
their policy mapping by:

1. Craft iam-info.zip file: Update own user permission in
user_mappings.json
2. Upload it via `mc admin cluster iam import nobody iam-info.zip`

Here `nobody` can be a user with pretty much any kind of permission (but
not anonymous) and this ends up working.

Some more detailed steps - start from a fresh setup:

```
./minio server /tmp/d{1...4} &
mc alias set myminio http://localhost:9000 minioadmin minioadmin
mc admin user add myminio nobody nobody123
mc admin policy attach myminio readwrite nobody nobody123
mc alias set nobody http://localhost:9000 nobody nobody123

mc admin cluster iam export myminio
mkdir /tmp/x && mv myminio-iam-info.zip /tmp/x
cd /tmp/x
unzip myminio-iam-info.zip
echo '{"nobody":{"version":1,"policy":"consoleAdmin","updatedAt":"2024-08-13T19:47:10.1Z"}}' > \
      iam-assets/user_mappings.json
zip -r myminio-iam-info-updated.zip iam-assets/

mc admin cluster iam import nobody ./myminio-iam-info-updated.zip
mc admin service restart nobody
```
2024-12-12 07:39:40 +05:30
Cesar N.
9cdd204ae4
Upgrade Console version to v1.7.5 (#20748) 2024-12-11 16:23:53 +05:30
Harshavardhana
7b3eb9f7f8
fix: groups lookup performance issue with users with lots of groups (#20740)
fixes https://github.com/minio/minio/issues/20717
2024-12-11 16:23:28 +05:30
ebozduman
d56ef8dbe1
Adds AIstore documentation link (#20738) 2024-12-11 16:22:17 +05:30
Mark Theunissen
a248ed5ff5
Fixes for POST policy checks and the x-ignore implementation (#20674) 2024-12-11 16:21:34 +05:30
Klaus Post
5bb31e4883
Disable mint full object tests (#20743)
Remove expected failures from https://github.com/minio/minio-go/pull/2026
2024-12-09 18:59:22 -08:00
Taran Pelkey
aff2a76d80
Return error when attempting to create a policy with commas in name (#20724) 2024-12-04 03:51:26 -08:00
Anis Eleuch
eddbe6bca2
heal: Report bucket healing result correctly (#20721) 2024-12-04 04:42:25 +05:30
Anis Eleuch
734d1e320a
heal: Single object heal to look for older versions as well (#203) (#20723)
`mc admin heal ALIAS/bucket/object` does not have any flag to heal an
object's noncurrent versions; this commit makes healing of the object's
noncurrent versions implicitly requested.

This also fixes the 'mc admin heal ALIAS/bucket/object' that does not work 
correctly when the bucket is versioned. This has been broken since Apr 2023.
2024-12-04 04:42:04 +05:30
Anis Eleuch
b8dab7b1a9
Set http server read/write timeout from --idle-timeout (#228) (#20715)
Golang http.Server will call SetReadDeadline overwriting the previous
deadline configuration set after a new connection Accept in the custom
listener code. Therefore, --idle-timeout was not correctly respected.

Make http.Server read/write timeout similar to --idle-timeout.
2024-12-02 18:51:17 +05:30
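A sketch of the resulting server construction; the field values follow the commit, the constructor is illustrative:
```go
package httpserver

import (
	"net/http"
	"time"
)

// newServer applies --idle-timeout to the read/write timeouts as well.
// http.Server calls SetReadDeadline itself after each Accept, which
// overwrote whatever deadline the custom listener had configured, so
// the timeouts must be set on the server to be respected.
func newServer(h http.Handler, idleTimeout time.Duration) *http.Server {
	return &http.Server{
		Handler:      h,
		ReadTimeout:  idleTimeout,
		WriteTimeout: idleTimeout,
		IdleTimeout:  idleTimeout,
	}
}
```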
Klaus Post
abd6bf060d
Add 'X-Forwarded-For' to (s)FTP requests (#20709)
Fixes #20707
2024-11-29 18:25:37 +05:30
Alex
f0d4ef604c
Updated Console to v1.7.4 (#20693)
Signed-off-by: Benjamin Perez <benjamin@bexsoft.net>
2024-11-28 13:14:11 +05:30
Ramon de Klein
2712f75762
prevent IAM cleanup errors (#20691) 2024-11-28 13:13:51 +05:30
Krutika Dhananjay
4c46668da8
Add a test case for fix #20684 (#20688)
The test fails without the change.
Also, removed a duplicate test case involving lifecycle config with no rules.
2024-11-28 13:13:31 +05:30
Anis Eleuch
02e93fd6ba
heal: Better reporting to mc with dangling/timeout errors (#20690)
The code assigns corrupted state to a drive for any unexpected error,
which is confusing for users. This change will make sure to assign
corrupted state only for corrupted parts or xl.meta. Use unknown state
with an explanation for any unexpected error, like canceled, deadline
errors, drive timeout, ...

Also make sure to return the bucket/object name when the object is not
found or marked not found by the heal dangling code.
2024-11-26 10:45:35 -08:00
Krutika Dhananjay
366876e98b
Fix prefix validation in lifecycle rule (#20684) 2024-11-25 12:12:21 -08:00
Mark Theunissen
d202fdd022
Add the policy name to the audit logs tags when doing policy-based API calls. Add retention settings to tags (#20638)
* Add the policy name to the audit log tags when doing policy-based API calls

* Audit log the retention settings requested in the API call

* Audit log of retention on PutObjectRetention API path too
2024-11-25 09:17:12 -08:00
Eng Zer Jun
c07e5b49d4
refactor: replace experimental maps and slices with stdlib (#20679)
The experimental functions are now available in the standard library in
Go 1.23 [1].

[1]: https://go.dev/doc/go1.23#new-unique-package

Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
2024-11-25 09:10:22 -08:00
Aditya Manthramurthy
9a39f8ad4d
fix: Remove User should fail for a service account (#20677)
The RemoveUser API only removes internal users, and it reports success
when it didn't find the internal user account for deletion. When provided
with a service account, it should not report success as that is misleading.
2024-11-21 18:24:04 -08:00
Nao Yonashiro
7e0c1c9413
feat: bump github.com/cosnicolaou/pbzip2 from 1.0.3 to 1.0.5 (#20671)
Update github.com/cosnicolaou/pbzip2 to latest version for
significant performance improvements. This update brings a 45%
reduction in processing time.
2024-11-20 18:53:52 -08:00
John Morales
485d833cd7
fix: API label casing and count value for +Inf bucket v2 metrics (#20656) 2024-11-18 11:17:52 -08:00
Klaus Post
e8a476ef5a
Keep larger merge buffers for RPC (#20654)
Keep larger merge buffers

When sending large messages >1K, the merge buffer would continuously be reallocated.

This could happen on listings, where blocks are typically 4->8K.

Keep merge buffer of up to 256KB.

Benchmark with 4096b messages:

```
benchmark                                          old ns/op     new ns/op     delta
BenchmarkRequests/servers=2/bytes/par=32-32        8271          6360          -23.10%
BenchmarkRequests/servers=2/bytes/par=64-32        7840          4731          -39.66%
BenchmarkRequests/servers=2/bytes/par=128-32       7291          4740          -34.99%
BenchmarkRequests/servers=2/bytes/par=256-32       7095          4580          -35.45%
BenchmarkRequests/servers=2/bytes/par=512-32       6757          4584          -32.16%
BenchmarkRequests/servers=2/bytes/par=1024-32      6429          4453          -30.74%

benchmark                                          old bytes     new bytes     delta
BenchmarkRequests/servers=2/bytes/par=32-32        12090         821           -93.21%
BenchmarkRequests/servers=2/bytes/par=64-32        17423         820           -95.29%
BenchmarkRequests/servers=2/bytes/par=128-32       18493         822           -95.56%
BenchmarkRequests/servers=2/bytes/par=256-32       18892         821           -95.65%
BenchmarkRequests/servers=2/bytes/par=512-32       19064         826           -95.67%
BenchmarkRequests/servers=2/bytes/par=1024-32      19038         842           -95.58%
```
2024-11-16 09:18:48 -08:00
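A hedged sketch of the retention rule; the 256KB cap is from the commit, the pool shape is illustrative:
```go
package mergebuf

const maxKeep = 256 << 10 // 256KB retention cap from the commit

// put returns a merge buffer to the pool for reuse. Buffers above the
// cap are dropped so one oversized message cannot pin memory, while
// typical 4-8KB listing blocks stop triggering reallocation.
func put(pool chan []byte, b []byte) {
	if cap(b) > maxKeep {
		return // let the GC reclaim oversized buffers
	}
	select {
	case pool <- b[:0]:
	default: // pool already full
	}
}
```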
Krutika Dhananjay
267f0ecea2
Fix 0 httpTimeout for logger webhook (#20653)
Previously, not setting http.Config.HTTPTimeout for logger webhook
was resulting in a timeout of 0, and causing "deadline exceeded"
errors in log webhook.

This change introduces a new env variable for configuring log webhook
timeout and more importantly sets it when the config is initialised.
2024-11-16 02:01:36 -08:00
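A sketch of the pattern; the env variable name below is an assumption, and the default stands in for whatever value the config initialisation now applies:
```go
package logwebhook

import (
	"os"
	"time"
)

// timeout returns the configured webhook timeout, falling back to a
// non-zero default so a missing setting can no longer mean a zero
// deadline (the "deadline exceeded" failure described above).
func timeout() time.Duration {
	if v := os.Getenv("MINIO_LOGGER_WEBHOOK_HTTP_TIMEOUT"); v != "" { // assumed name
		if d, err := time.ParseDuration(v); err == nil && d > 0 {
			return d
		}
	}
	return 5 * time.Second // illustrative default
}
```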
Harshavardhana
4ee3434854
updating all dependencies as per regular cadence (#20646) 2024-11-14 12:33:18 -08:00
Anis Eleuch
0e9854372e
heal/batch: Fix missing redirection to the first node (#20642)
Manual heal can return XMinioHealInvalidClientToken if the manual
healing is started on the first node and the next mc call to get the
heal status lands on another node. The reason is that redirection
based on the token ID was not able to redirect requests to the first node
due to a typo.

This also affects the batch cancel command: if the batch is running on
the first node, the user will never be able to cancel it due to the same
bug.
2024-11-13 04:07:28 -08:00
Klaus Post
b5177993b3
Make DeadlineConn http.Listener compatible (#20635)
HTTP likes to slap an infinite read deadline on a connection and 
do a blocking read while the response is being written.

This effectively means that a reading deadline becomes the 
request-response deadline.

Instead of enforcing our timeout, we pass it through and keep the
"infinite deadline" sticky on connections.

However, we still "record" when reads are aborted, so we never overwrite that.

The HTTP server should have `ReadTimeout` and `IdleTimeout` set for the deadline to be effective.

Use --idle-timeout for incoming connections.
2024-11-12 12:41:41 -08:00
Klaus Post
55f5c18fd9
Harden internode DeadlineConn (#20631)
Since DeadlineConn would send deadline updates directly upstream,
it would race with Read/Write operations. The stdlib will perform a read, 
but do an async SetReadDeadLine(unix(1)) to cancel the Read in 
`abortPendingRead`. In this case, the Read may override the 
deadline intended to cancel the read.

Stop updating deadlines if a deadline in the past is seen and when Close is called. 
A mutex now protects all upstream deadline calls to avoid races. 

This should fix the short-term buildup of...

```
365 @ 0x44112e 0x4756b9 0x475699 0x483525 0x732286 0x737407 0x73816b 0x479601
#	0x475698	sync.runtime_notifyListWait+0x138		runtime/sema.go:569
#	0x483524	sync.(*Cond).Wait+0x84				sync/cond.go:70
#	0x732285	net/http.(*connReader).abortPendingRead+0xa5	net/http/server.go:729
#	0x737406	net/http.(*response).finishRequest+0x86		net/http/server.go:1676
#	0x73816a	net/http.(*conn).serve+0x62a			net/http/server.go:2050
```

AFAICT this only affects internode calls that create a connection (non-grid).
2024-11-11 09:15:17 -08:00
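A minimal sketch of the hardening described above, assuming a hypothetical `Conn` wrapper rather than MinIO's actual DeadlineConn: deadline updates are serialized behind a mutex, and once a deadline in the past is seen (or Close is called) further updates are refused so a cancellation deadline is never overwritten:

```
package deadlineconn

import (
	"net"
	"sync"
	"time"
)

type Conn struct {
	net.Conn
	mu     sync.Mutex
	frozen bool // set on Close or after a past deadline is observed
}

func (c *Conn) SetReadDeadline(t time.Time) error {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.frozen {
		return nil // never override a cancellation deadline
	}
	if !t.IsZero() && t.Before(time.Now()) {
		// e.g. the stdlib's SetReadDeadline(unix(1)) abort in
		// abortPendingRead; pass it through, then stop updating.
		c.frozen = true
	}
	return c.Conn.SetReadDeadline(t)
}

func (c *Conn) Close() error {
	c.mu.Lock()
	c.frozen = true
	c.mu.Unlock()
	return c.Conn.Close()
}
```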
Harshavardhana
8ce101c174 fix: LDAP service port number in tests 2024-11-11 07:01:29 -08:00
Klaus Post
4972735507
Fix lint issues from v1.62.0 upgrade (#20633)
* Fix lint issues from v1.62.0 upgrade

* Fix xlMetaV2TrimData version checks.
2024-11-11 06:51:43 -08:00
Minio Trusted
e6ca6de194 Update yaml files to latest version RELEASE.2024-11-07T00-52-20Z 2024-11-07 23:58:42 +00:00
Harshavardhana
cefc43e4da simplify the Get()/GetMultiple() re-use GetRaw() for both (#179)
Remember that GetMultiple() must be used if your target is calling
PutMultiple(); without it, the multiple events will not be
replayed.
2024-11-06 16:52:20 -08:00
Ramon de Klein
25e34fda5f
decompress audit log properly before sending to remote target (#20619) 2024-11-06 13:25:24 -08:00
Erfan
4208d7af5a
docs: remove redundant prometheus metric (#20618) 2024-11-06 07:44:21 -08:00
Klaus Post
8d42f37e4b
Fix msgUnPath crash (#20614)
These checks are needed for the functions to be un-crashable with any input
given to `msgUnPath` (tested with fuzzing).

Both conditions would result in a crash; this change prevents that. Some
additional upstream checks are still needed.

Fixes #20610
2024-11-05 04:37:59 -08:00
dependabot[bot]
7cb4b5c636
Bump github.com/golang-jwt/jwt/v4 from 4.5.0 to 4.5.1 (#20611)
Bumps [github.com/golang-jwt/jwt/v4](https://github.com/golang-jwt/jwt) from 4.5.0 to 4.5.1.
- [Release notes](https://github.com/golang-jwt/jwt/releases)
- [Changelog](https://github.com/golang-jwt/jwt/blob/main/VERSION_HISTORY.md)
- [Commits](https://github.com/golang-jwt/jwt/compare/v4.5.0...v4.5.1)

---
updated-dependencies:
- dependency-name: github.com/golang-jwt/jwt/v4
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-05 02:22:36 -08:00
Harshavardhana
1615920f48 fix typos reported in CI/CD 2024-11-04 11:06:02 -08:00
Cesar N.
7ee42b3ff5
Update console package to v1.7.3 (#20606) 2024-11-04 08:47:08 -08:00
Harshavardhana
a6f1e727fb
add tests for ILM transition and healing (#166) (#20601)
This PR fixes a regression introduced in https://github.com/minio/minio/pull/19797
by restoring the healing ability of transitioned objects

Bonus: support for transitioned objects to carry the original
object name, for future reverse lookups if necessary.

Also fix the parity calculation for tiered objects to n/2 when n/2 == parity
2024-10-31 15:10:24 -07:00
Aditya Manthramurthy
c1fc7779ca
Remove expires field from list objects metadata (#20600)
This field was always 0 regardless of whether the object had an expiry
so we are basically removing dead code.
2024-10-31 12:27:06 -07:00
Allan Roger Reid
b3ab7546ee
Fix typos in README.md (#20599) 2024-10-31 10:38:53 -07:00
Minio Trusted
ad88a81e3d Update yaml files to latest version RELEASE.2024-10-29T16-01-48Z 2024-10-30 21:24:58 +00:00
Aditya Manthramurthy
c4239ced22
run IAM purge routines deterministically every hr (#20587)
Existing implementation runs IAM purge routines for expired LDAP and
OIDC accounts with a probability of 0.25 after every IAM refresh. This
change ensures that they are run once in each hour.
2024-10-29 09:01:48 -07:00
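A sketch of the scheduling change described above, from probabilistic runs to a fixed hourly ticker; `purgeExpiredAccounts` is a hypothetical stand-in for the actual purge routine:

```
package iam

import "time"

// runPurgeLoop runs the purge once per hour on a ticker instead of
// with probability 0.25 after each IAM refresh.
func runPurgeLoop(stop <-chan struct{}, purgeExpiredAccounts func()) {
	t := time.NewTicker(time.Hour)
	defer t.Stop()
	for {
		select {
		case <-t.C:
			purgeExpiredAccounts() // deterministic: every hour
		case <-stop:
			return
		}
	}
}
```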
Anis Eleuch
f85c28e960
heal: large objects fix and avoid .healing.bin corner case premature exit (#20577)
xlStorage.Healing() returns nil if there is an error reading
.healing.bin or if the latter is empty. The healing.bin update()
call returns early if .healing.bin is empty; hence, no further update
of .healing.bin is possible.

A .healing.bin can be empty if os.Open() with O_TRUNC is successful
but the next Write returns an error.

To avoid this weird situation, stop making healingTracker.update()
return early if .healing.bin is empty, so it is written again.

This commit also fixes wrong error log printing when an object is
healed in another drive in the same erasure set but not in the drive
that is actively being healed by the fresh-drive healing code. Previously,
it printed <nil> instead of the factual error.

* heal: Scan .minio.sys metadata only during site-wide heal (#137)

mc admin heal always invokes .minio.sys heal, but sometimes the latter
contains a lot of data (many service accounts, STS accounts, etc.), which
makes the mc admin heal command very slow.

Only invoke .minio.sys healing when no bucket was specified in the
`mc admin heal` command.
2024-10-26 02:58:27 -07:00
Anis Eleuch
f7e176d4ca
heal: Avoid deadline error with very large objects (#140) (#20586)
Healing a large object with a normal scan mode (where no part reads
are involved) could still fail after 30 seconds if the object has
too many parts, mainly when hard disks are being used.
The reason is that there was one general deadline covering all
parts; we now apply a deadline per part.
2024-10-26 02:56:26 -07:00
Aditya Manthramurthy
72a0d14195
fix: avoid useless expires value in listing meta (#20584)
When listing objects with metadata, avoid returning an "expires" time
metadata value when its value is the zero time as this means that no
expires value is set on the object.
2024-10-24 19:13:19 -07:00
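A minimal sketch of the guard described above, using an illustrative metadata map rather than the actual listing structures:

```
package listing

import "time"

// addExpires emits an "expires" entry only when an expiry is set.
// A zero time means no expiry; emitting it would surface a
// meaningless value to listing clients.
func addExpires(meta map[string]string, expires time.Time) {
	if expires.IsZero() {
		return
	}
	meta["expires"] = expires.UTC().Format(time.RFC1123)
}
```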
Klaus Post
6abe4128d7
Fix ILM expire workers exiting (#20578)
Fix expire workers exiting

Under 2 conditions ILM expire workers would exit, eventually causing all workers to terminate.
2024-10-23 08:35:37 -07:00
Klaus Post
ed5ed7e490
Trace ILM errors (#20576)
Some paths would attempt transitions but in case of failures 
no traces would be emitted.

Add traces (with errors) when transition operations fail.
2024-10-22 14:10:34 -07:00
Klaus Post
51410c9023
Clear omitted fields (#20575)
Searched `msg:"[a-zA-Z0-9]*,omitempty` through the codebase.

Uses latest tinylib master.
2024-10-22 08:30:50 -07:00
Shubhendu
96ca402dcd
Correct the date filter check for batch replication (#20569)
The condition was incorrect, as we were comparing the filter
value against the object's modification time.

For example, if the created-after filter date is after the modification
time of the object, the object was created before the filter
time and should be skipped during replication, because per the
filter we only need the objects created after the filter date.

Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2024-10-18 08:32:09 -07:00
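The corrected comparison can be sketched as follows; `skipByCreatedAfter` is an illustrative helper, not the actual batch-replication code:

```
package batchrepl

import "time"

// skipByCreatedAfter reports whether an object should be skipped
// under a "created after" filter: objects whose modification time
// predates the filter time are excluded from replication.
func skipByCreatedAfter(objModTime, createdAfter time.Time) bool {
	return !createdAfter.IsZero() && objModTime.Before(createdAfter)
}
```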
Anis Eleuch
3da7c9cce3 repl: Fix removal of replicator svc when keycloak is configured (#120)
When the Keycloak vendor is set, the code will start to clean up service
accounts whose parents do not exist anymore. However, the code will also
look for the parent user of site-replicator-0, MINIO_ROOT_USER, which
obviously does not exist in Keycloak. Therefore, site-replicator-0
will be removed automatically.

This commit will avoid cleaning up service accounts generated from
the root user.
2024-10-14 09:35:37 -07:00
Harshavardhana
a14e19ec54 remove support for s390x 2024-10-13 07:06:17 -07:00
Minio Trusted
e091dde041 Update yaml files to latest version RELEASE.2024-10-13T13-34-11Z 2024-10-13 14:05:41 +00:00
Harshavardhana
d10bb7e1b6 remove reference to s390x binary, we won't publish them anymore 2024-10-13 06:34:11 -07:00
Anis Eleuch
7ebceacac6 heal: Fix deep scan failing to heal objects (#117)
The verify file handler response format was changed from gob to msgp
two months ago, but we forgot to update the verify handler client.

VerifyFile is only called during a heal deep scan (bitrot check).
HealObject() will fail in that case, mark all disks corrupted, and
return early (the object is treated as unrecoverable, but it will
also not be removed).

It is a bit rare for HealObject to be called with a deep scan flag. It
is called when a HealObject with a normal scan (e.g. new drive healing)
detects a bitrot corruption, therefore healing objects with a detected
bitrot corruption will fail.
2024-10-13 06:07:21 -07:00
Harshavardhana
1593cb615d
avoid unnecessary logging for KMS secret key mismatch (#20549) 2024-10-13 06:06:08 -07:00
Yannis Mazzer
86a41d1631
fix(helm) removing clusterDomain from startup command to avoid local … (#20547) 2024-10-11 05:21:05 -07:00
Taran Pelkey
d4157b819c
Allow LDAP DNs with slashes to be loaded from object store (#20541) 2024-10-10 16:40:37 -07:00
Yannis Mazzer
e0aceca1b7
feat(helm) making securityContext consistent (#20546) 2024-10-10 08:48:31 -07:00
Harrison Brown
87804624fe
Helm: Add extraVolumes and extraVolumeMounts to the customCommandJob section (#19988)
added extraVolumes and extraVolumeMounts to the customCommandJob section
2024-10-10 08:48:16 -07:00
Poorna
e029f8a9d7
set kms keyid in replication opts (#20542) 2024-10-09 23:49:55 -07:00
Poorna
1bc6681176
fix tagging overwrite during resync (#20525) 2024-10-04 22:16:15 -07:00
Poorna
28322124e2
remove replication stats from data usage cache (#20524)
This is no longer needed since historical stats are not maintained anymore.
2024-10-04 15:23:33 -07:00
Harshavardhana
cbfe9de3e7
do not download binary before verifying the version (#20523)
fixes https://github.com/minio/mc/issues/4980
2024-10-04 04:32:32 -07:00
Harshavardhana
dc86b8d9d4
fix: when readQuorum, inconsistent metadata return 404 (#20522)
in cases where we cannot possibly know a way to read and
construct the object, it is impossible to achieve any form of
quorum via xl.meta even while we have sufficient responses from
all the drives, so we should return object not found.
2024-10-04 00:13:14 -07:00
Taran Pelkey
ba70118e2b
Add root user to ListAccessKeysBulk (#20517) 2024-10-03 16:11:02 -07:00
Minio Trusted
cb1d3e50f7 Update yaml files to latest version RELEASE.2024-10-02T17-50-41Z 2024-10-02 21:26:16 +00:00
Harshavardhana
ded0b19d97
avoid audit logs with unexpected errors (#20516)
fixes #20513
2024-10-02 10:50:41 -07:00
Poorna
d0bb3dd136
list all batch job types (#20510)
continues #20480
2024-10-01 23:38:17 -07:00
Harshavardhana
ab7714b01e
upgrade relevant dependencies (#20507) 2024-10-01 23:37:55 -07:00
Ramon de Klein
e5b18df6db
Fix checksum error during startup when minio is loaded via PATH environment variable (#20509) 2024-10-01 15:13:18 -07:00
Anis Eleuch
0abfd1bcb1
heal: Use etag as quorum when none found for modtime (#20500) 2024-10-01 08:19:10 -07:00
Harshavardhana
6186d11761
handle the locks properly for multi-pool callers (#20495)
- PutObjectMetadata()
- PutObjectTags()
- DeleteObjectTags()
- TransitionObject()
- RestoreTransitionObject()

Also improve the behavior of multipart code across
pool locks, hold locks only once per upload ID for

- CompleteMultipartUpload()
- AbortMultipartUpload()
- ListObjectParts() (read-lock)
- GetMultipartInfo() (read-lock)
- PutObjectPart() (read-lock)

This avoids lock attempts across pools for no
reason; such attempts grow O(n) when there are n pools.
2024-09-29 15:40:36 -07:00
Poorna
e8b457e8a6
Change delete marker proxy test to use distributed setup (#20494) 2024-09-27 18:02:26 -07:00
Harshavardhana
afea40cc0f fix: keep locks based on the first pool, first EC set (#93)
multi-object deletion may or may not compete with locks
granted to other callers, allowing concurrent operations
to succeed on top of each other.

A continuation of the PR https://github.com/minio/minio/pull/20356
2024-09-27 03:41:37 -07:00
Aditya Manthramurthy
402b798f1b
fix: allow all console actions with custom authZ (#20489)
When custom authorization via plugin is enabled, the console will now
render the UI as if all actions are allowed. Since server cannot
determine the exact policy allowed for a user via the plugin, this is
acceptable to do. If a particular action is actually not allowed by the
plugin the call will result in an error.

Previously the server was evaluating a policy when custom authZ is
enabled - this is fixed now.
2024-09-26 23:44:44 -07:00
Klaus Post
4759532e90
Fix PPC cgroup memory limit (#20488)
The "unlimited" value on PPC wasn't exactly the same as amd64.

Instead compare against an "unreasonably big value".

Would cause OOM in anything using the concurrent request limit.
2024-09-26 10:07:10 -07:00
Harshavardhana
7f1e1713ab
use absolute path for binary checksum verification (#20487) 2024-09-26 08:03:08 -07:00
Poorna
b2c5819dbc
hold on to batch job stats till cleanup (#20480)
This PR also fixes job stats not being available after restart
2024-09-24 14:50:11 -07:00
Anis Eleuch
2b0156b1fc Add TTFB to all APIs and enable for responses without body (#20479)
Add TTFB for all requests in metrics-v3 in addition to the existing
GetObject. Also, for requests that do not return a body in the
response, calculate TTFB as the moment the HTTP status code and the
headers are sent.
2024-09-24 10:13:00 -07:00
Harshavardhana
f6f0807c86
cleanup existing part.N's before renamePart() (#20466)
this is a safety net to avoid any unexpected parts showing up.
2024-09-24 04:26:41 -07:00
Harshavardhana
0c53d86017
remove the list from 'mc stat' from testing via '--no-list' (#20468)
avoid 'listing' as it may get incorrect results for
these tests; we are only interested in 'mc stat', i.e.
HEAD object, here.
2024-09-24 01:03:58 -07:00
Minio Trusted
6a6ee46d76 Update yaml files to latest version RELEASE.2024-09-22T00-33-43Z 2024-09-23 19:42:46 +00:00
Klaus Post
974cbb3bb7
Limit jstream parse depth (#20474)
Add https://github.com/bcicen/jstream/pull/15 by vendoring the package.

Sets JSON depth limit to 100 entries in S3 Select.
2024-09-23 12:35:41 -07:00
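For illustration only, a nesting-depth limit like this can be enforced with the standard library's streaming tokenizer; the vendored jstream change works inside the parser itself, so this is just the idea:

```
package jsondepth

import (
	"encoding/json"
	"errors"
	"io"
)

const maxDepth = 100 // matches the S3 Select limit described above

// checkDepth scans a JSON stream and fails once nesting exceeds
// maxDepth, bounding allocations from badly formed documents.
func checkDepth(r io.Reader) error {
	dec := json.NewDecoder(r)
	depth := 0
	for {
		tok, err := dec.Token()
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
		if d, ok := tok.(json.Delim); ok {
			switch d {
			case '{', '[':
				depth++
				if depth > maxDepth {
					return errors.New("json: exceeded max nesting depth")
				}
			case '}', ']':
				depth--
			}
		}
	}
}
```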
Harshavardhana
03e996320e
upgrade deps pkg/v3, madmin-go/v3 and lz4/v4 (#20467) 2024-09-21 17:33:43 -07:00
Taran Pelkey
78fcb76294
Add ListAccessKeysBulk API for builtin user access keys (#20381) 2024-09-21 04:35:40 -07:00
Ramon de Klein
3d152015eb
Use MinIO console v1.7.1 (#20465) 2024-09-20 18:18:54 -07:00
jiuker
ade8925155
fix: add default http timeout for audit webhook (#20460) 2024-09-20 09:02:50 -07:00
Klaus Post
05a6c170bf
Fix PutObject Trailing checksum (#20456)
PutObject would verify trailing checksums, but not store them.

Fixes #20455
2024-09-19 05:59:07 -07:00
Ramon de Klein
e1c2344591
Log an error when calculating the binary checksum failed (#20454) 2024-09-18 20:48:32 -07:00
Ramon de Klein
48a591e9b4
Ensure proper stale_uploads_cleanup_interval is used at all times (#20451) 2024-09-18 10:59:26 -07:00
Anis Eleuch
fa5d9c02ef
batch: Set a default retry attempts and a prefix (#20452)
A batch job will fail if the retry attempt is not provided. The reason
is that the code mistakenly gets the retry attempts from the job status
rather than the job yaml file.

This will also set a default empty prefix for batch expiration.

Also this will avoid trimming the prefix since the yaml decoder already
does that if no quotes were provided, and we should not trim if quotes
were provided and the user provided a leading or a trailing space.
2024-09-18 10:59:03 -07:00
Shubhendu
5bd27346ac
Added iam import tests for openid (#20432)
Tests if imported service accounts have 
required access to buckets and objects.

Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>

Co-authored-by: Harshavardhana <harsha@minio.io>
2024-09-17 09:45:46 -07:00
Taran Pelkey
3c82cf9327
Fix behavior of AddServiceAccountLDAP for non-admin users (#20442) 2024-09-16 16:04:51 -07:00
Harshavardhana
70d40083e9
remove windows CI/CD for now (#20441)
Windows has moved to community support
only and remains source-compile friendly.
2024-09-16 13:46:53 -07:00
Klaus Post
8a30967542
Limit S3 Select JSON documents to 10MB (#20439)
Closes #20430

Limit allocations from badly formed documents.
2024-09-16 09:59:03 -07:00
Harshavardhana
229f04ab79 fix: the permissions one more time on /usr/bin
fixes https://github.com/minio/operator/issues/2319
2024-09-15 16:10:23 -07:00
Minio Trusted
1123dc3676 Update yaml files to latest version RELEASE.2024-09-13T20-26-02Z 2024-09-14 13:18:19 +00:00
Harshavardhana
5bf41aff17
hold granular locking for multi-pool PutObject() (#20434)
- PutObject() for multi-pooled was holding large
  region locks, which was not necessary. This affects
  almost all slowpoke clients and lengthy uploads.

- Re-arrange locks for CompleteMultipart, PutObject
  to be close to rename()
2024-09-13 13:26:02 -07:00
Anis Eleuch
e47d787adb
tier: Add force param to force tiering removal (#20355)
Currently, it is not possible to remove a tier if it is not accessible
or contains some data; add a force flag to make the removal successful
in that case.
2024-09-12 13:44:05 -07:00
Anis Eleuch
398ffb1136
Enable compression with encryption in CopyObject API (#20411)
When encryption and compression are both enabled, the
server would avoid compressing the data for no apparent reason.

This commit will enable it and update unit tests.
2024-09-12 13:10:44 -07:00
Shubhendu
5862582cd7
IAM import test with missing entities (#20368)
Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2024-09-12 08:59:00 -07:00
fumoboy007
e36b1146d6
Download static cURL into release Docker image for all supported architectures (#20424)
Download static cURL into release Docker image for all supported architectures.

Currently, the static cURL executable is only downloaded for the `amd64` architecture. However, `arm64` and `ppc64le` variants are also [available](https://github.com/moparisthebest/static-curl/releases/tag/v8.7.1).
2024-09-12 08:45:19 -07:00
Harshavardhana
c28a4beeb7
multipart support etag and pre-read small objects (#20423) 2024-09-12 05:24:04 -07:00
Sveinn
15ab0808b3
making sure we don't panic if globalReplicationStats have not been set (#20427) 2024-09-12 04:39:51 -07:00
Sveinn
3bae73fb42
Add http_timeout to audit webhook configurations (#20421) 2024-09-11 15:20:42 -07:00
Harshavardhana
bc527eceda
handle the actualSize() properly for PostUpload() (#20422)
postUpload() incorrectly saves the actual size as '-1';
we should save the correct size when possible.

Bonus: fix the PutObjectPart() write locker, instead
of holding a lock before we read the client stream.

We should hold it only when we need to commit the parts.
2024-09-11 11:35:37 -07:00
Anis Eleuch
b963f36e1e
fix: Add missing grid handler of clearing upload-id from the cache (#20420) 2024-09-11 09:09:13 -07:00
Poorna
cdd7512a2e
use rename() safety for in-place 'xl.meta' updates (#20414) 2024-09-11 09:08:51 -07:00
Frank Wessels
a6d5287310
Reenable SVE support for Graviton 4 (#20410) 2024-09-10 08:35:12 -07:00
Minio Trusted
22822f4151 Update yaml files to latest version RELEASE.2024-09-09T16-59-28Z 2024-09-10 00:47:02 +00:00
Shubhendu
0b7aa6af87
Skip non existent ldap entities while import (#20352)
Don't hard-error for nonexistent LDAP entities during import;
instead of only logging them, report them via `mc`

Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2024-09-09 09:59:28 -07:00
Harshavardhana
8c9ab85cfa
Add multipart uploads cache for ListMultipartUploads() (#20407)
this cache will be honored only when `prefix=""` while
performing ListMultipartUploads() operation.

This is mainly to satisfy applications like alluxio
for their underfs implementation and tests.

replaces https://github.com/minio/minio/pull/20181
2024-09-09 09:58:30 -07:00
Klaus Post
b1c849bedc
Don't send a canceled context to Unlock (#20409)
AFAICT we send a canceled context to unlock (and thereby releaseAll). This will cause network calls to fail.

Instead use background and add 30s timeout.
2024-09-09 08:49:49 -07:00
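A minimal sketch of the pattern described above, with `unlockAll` standing in for the actual release call:

```
package locks

import (
	"context"
	"time"
)

// safeUnlock detaches from the caller's (possibly canceled) context;
// a canceled context would make the network calls behind unlock fail
// immediately, so use background with a bounded 30s timeout instead.
func safeUnlock(unlockAll func(context.Context) error) error {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	return unlockAll(ctx)
}
```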
Harshavardhana
fb24bcfee0
fix: set audit/logger webhook retry interval to maximum 1m (#20404) 2024-09-09 02:36:47 -07:00
Harshavardhana
8268c12cfb
Add support for audit/logger max retry and retry interval (#20402)
The current implementation retries forever until our
log buffer is full, and we start dropping events.

This PR allows you to set a limit, after which we give
up on existing audit/logger batches and proceed to
process the new ones.

Bonus:
 - do not blow up buffers beyond batchSize value
 - do not leak the ticker if the worker returns
2024-09-08 05:15:09 -07:00
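Roughly, the give-up behavior could look like the sketch below; `send`, `maxRetries`, and `retryInterval` are illustrative names, and the one-minute clamp mirrors the adjacent retry-interval fix:

```
package audit

import "time"

// sendWithRetry attempts a batch send a bounded number of times,
// then gives up so newer batches can proceed instead of retrying
// forever.
func sendWithRetry(send func() error, maxRetries int, retryInterval time.Duration) error {
	if retryInterval > time.Minute {
		retryInterval = time.Minute // clamp, per the follow-up fix
	}
	var err error
	for attempt := 0; ; attempt++ {
		if err = send(); err == nil {
			return nil
		}
		if attempt >= maxRetries {
			return err // drop this batch; let newer ones proceed
		}
		time.Sleep(retryInterval)
	}
}
```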
Sveinn
3f39da48ea
fix: retries and failed message counter (#20401) 2024-09-07 17:13:57 -07:00
Klaus Post
9d5cdaa2e3
Limit Response Recorder memory (#20399)
Disable body recording for...

* admin inspect
* admin metrics
* profiling download

Also, if the recorded body is > 10MB, drop it.
2024-09-07 12:16:04 -07:00
Taran Pelkey
84e122c5c3
Fix duplicate groups in ListGroups API (#20396) 2024-09-06 17:28:47 -07:00
Praveen raj Mani
261111e728
Kafka notify: support batched commits for queue store (#20377)
The items will be saved per target batch and will
be committed to the queue store when the batch is full.

Also, periodically commit the batched items to the queue store
based on the configured commit_timeout; the default is 30s.

Bonus: compress queue store multi writes
2024-09-06 16:06:30 -07:00
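A rough sketch of that batching loop under these assumptions (`commit` stands in for the queue-store write; compression is omitted):

```
package notify

import "time"

// batchLoop buffers incoming items and commits when the batch fills
// or the commit timeout elapses, whichever comes first.
func batchLoop(in <-chan []byte, batchSize int, commitTimeout time.Duration,
	commit func(batch [][]byte)) {
	batch := make([][]byte, 0, batchSize)
	t := time.NewTicker(commitTimeout) // e.g. the 30s default
	defer t.Stop()
	flush := func() {
		if len(batch) > 0 {
			commit(batch)
			batch = make([][]byte, 0, batchSize) // fresh slice; commit may retain the old one
		}
	}
	for {
		select {
		case item, ok := <-in:
			if !ok {
				flush()
				return
			}
			batch = append(batch, item)
			if len(batch) >= batchSize {
				flush()
			}
		case <-t.C:
			flush() // periodic commit of a partially filled batch
		}
	}
}
```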
Harshavardhana
0f1e8db4c5
all 2xx status codes to be success for audit (#20394) 2024-09-06 15:53:34 -07:00
Harshavardhana
64e803b136
fix: avoid waiting on rebalance metadata (#20392)
rebalance metadata is only good to have;
if it cannot be loaded when starting MinIO
for some reason, we can ignore it,
move on, and let the user start rebalance
again if needed.
2024-09-06 06:20:19 -07:00
Krishnan Parthasarathi
a0f9e9f661
readParts: Return error when quorum unavailable (#20389)
readParts requires that both part.N and part.N.meta files be present.
This change addresses an issue with how the error returned to the upper
layers was picked from most drives where an UploadPart operation
had failed.
2024-09-06 03:51:23 -07:00
Harshavardhana
b6b7cddc9c
make sure listParts returns parts that are valid (#20390) 2024-09-06 02:42:21 -07:00
jiuker
241be9709c
fix: jwt error overwritten by nil public key (#20387) 2024-09-05 19:46:36 -07:00
Harshavardhana
85f08d7752
verify part.N exists before reading part.N.meta (#20383)
if part.N doesn't exist, we do not have to complete
the multipart transaction; it simply means that we
have some partial-upload situation at hand.
2024-09-05 13:37:19 -07:00
Klaus Post
6be88a2b99
xl-meta - verify parity data (#20384) 2024-09-05 04:57:44 -07:00
Poorna
060276932d
batch:repl fix copy from source -> remote (#20382)
completes the fix started by #20365
2024-09-05 04:57:23 -07:00
Shubhendu
6224849fd1
Dont start console service if MINIO_BROWSER=off (#20374)
By default, even if MINIO_BROWSER=off is set, the code tries to get a
free port available for the console.

Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2024-09-04 10:02:39 -07:00
Anis Eleuch
9b79eec29e
site-repl: Fix ILM document replication in some cases (#20380)
S3 spec does not accept an ILM XML document containing both <Filter>
and <Prefix> XML tags, even if both are empty. That is why we added
a 'set' field in some lifecycle structures to decide when and when not to
show a tag. However, we forgot to disallow marshaling of Filter when
'set' is set to false.

This will fix ILM document replication in a site replication
configuration in some cases.
2024-09-04 10:01:26 -07:00
Harshavardhana
c2e318dd40
remove mincache EOS related feature from upstream (#20375) 2024-09-03 11:23:41 -07:00
T-TRz879
69258d5945
ignore if-unmodified-since header if if-match is set (#20326) 2024-09-02 23:33:53 -07:00
Mark Theunissen
d7ef6315ae
Store the checksum in PostPolicyHandler so that we can return it on Get/Head (#20364) 2024-09-02 06:13:54 -07:00
Anis Eleuch
aaf4fb1184
batch: repl: A missing prefix in the remote source will fail replication (#20365)
When the prefix field is not provided in the remote source of a yaml
replication job, the code fails to do the listing yet marks replication
as successful. This commit fixes it.
2024-09-02 05:36:43 -07:00
Harshavardhana
f05641c3c6 fix: /usr/bin path permissions for docker 2024-09-01 04:25:07 -07:00
Harshavardhana
7a34c88d73
add consistent nonce to make multipart deterministic per part (#20359)
This change adds a consistent nonce to ensure
that multipart uploads are deterministic on a 
per-part basis.

Thanks to @klauspost for the work here minio/sio@3cd3734
2024-08-31 11:25:48 -07:00
Harshavardhana
6c746843ac
fix: keep locks on same pool for simplicity (#20356)
locks handed out by different pools would not compete for a
multi-object delete request, which is wrong for obvious
reasons.

New locking implementation and revamp will rewrite multi-object
lock anyway, this is a workaround for now.
2024-08-30 19:26:49 -07:00
Harshavardhana
bb07df7e7b
do not list dangling objects with unmatched ECs (#20351)
This mostly applies to new objects: we simply
ignore such objects, so no application has
to deal with getting 503s on them.
2024-08-30 09:02:26 -07:00
Minio Trusted
1cb824039e Update yaml files to latest version RELEASE.2024-08-29T01-40-52Z 2024-08-29 22:33:43 +00:00
Harshavardhana
504e52b45e
protect bpool from buffer pollution by invalid buffers (#20342) 2024-08-28 18:40:52 -07:00
Anis Eleuch
38c0840834
bucket-metadata: Reload events/repl-targets for all buckets (#20334)
Currently, the bucket events and replication targets are only reloaded
for buckets that failed to load during the first cluster startup,
which is wrong, because if a bucket change was made on one node but
that node was not able to notify the other nodes, the other nodes will
reload the bucket metadata config but fail to set the events and bucket
targets in memory.
2024-08-28 08:32:18 -07:00
Harshavardhana
c65e67c357
add more details on the payload sent to webhook audit (#20335) 2024-08-28 08:31:56 -07:00
Harshavardhana
fb2360ff88
when a drive is closed cancel the cleanupTrash goroutine (#20337)
when a hung drive is hot-unplugged, the server might go
into a loop where the previous `format.json` is somehow
still accessible to the process. We try to re-init() the drives,
but that seems to leave a previous goroutine hanging around,
since it is not canceled when the drive is closed.

Bonus: add deadline for immediate purge routine, to unblock
it if the drive is blocking mutations.
2024-08-28 08:31:42 -07:00
jiuker
1a2de1bdde
fix: string format when log IAM refresh take over 5s (#20331) 2024-08-26 23:40:33 -07:00
Harshavardhana
993b97f1db fix: docker buildx warnings 2024-08-26 11:30:15 -07:00
Harshavardhana
1c4d28d7af
update pkg/v3, minio-go/v7 and mc (#20327) 2024-08-26 08:33:07 -07:00
Harshavardhana
af55f37b27
do not fallback on the drives to load groups for LDAP (#20320)
if a user policy is found, avoid reading from the drives
for missing group mappings; group mappings are not mandatory
and are conditional.

This PR restores the older behavior while making sure that
if a direct user policy is not found, we still attempt
to load the group mappings from the drives.
2024-08-25 17:22:45 -07:00
Andreas Auernhammer
2d67c26794
improve multipart decryption (#20324)
This commit simplifies and optimizes the decryption of large (multipart)
objects. This PR does two things:
 
- Re-write the init logic for the decryption reader
- Reduce the number of OEK decryptions

Before, the init logic copied some SSE HTTP request headers to 
parse them later. This is simplified to parsing them right away. This
removes some fields from the decryption reader struct.

Further, the decryption reader decrypted the OEK using the client-provided 
key (SSE-C) or the KMS (SSE-S3 / SSE-KMS) for each part. This is redundant 
since the OEK is the same for all parts. In particular, a KMS call might be a 
network request. Now, the OEK is decrypted once for the entire multipart object.

This should improve latency when reading encrypted multipart objects 
and reduce requests to the KMS.

Signed-off-by: Andreas Auernhammer <github@aead.dev>
2024-08-25 11:07:13 -07:00
Harshavardhana
006cacfefb
to turn-off healing drop legacy ENV (#20315) 2024-08-23 15:43:31 -07:00
bestgopher
c28f09d4a7
refactor: displays the OS-specific doc url (#20313) 2024-08-23 07:11:35 -07:00
Anis Eleuch
73992d2b9f
s3: DeleteBucket to use listing before returning bucket not empty error (#20301)
Use Walk(), which is a recursive listing with versioning, to check if 
the bucket has some objects before being removed. This is beneficial
because the bucket can contain multiple dangling objects in multiple
drives.

Also, this will prevent a bug where a bucket is deleted in a deployment
that has many erasure sets but the bucket contains only one or a few
objects, not spread to enough erasure sets.
2024-08-22 14:57:20 -07:00
Anis Eleuch
a8f143298f
heal: Reset healing params when a retry is decided (#20285)
Currently, a retry of a new drive healing does not reset
HealedBuckets, which means that the next healing retry will skip those
buckets. This commit fixes that behavior.

Also, the skipped-objects counter now includes objects
uploaded after the healing started.
2024-08-22 05:35:43 -07:00
jiuker
2d44c161c7
fix: support export bucket policy with ExportBucketMetadata (#20308) 2024-08-22 03:44:35 -07:00
Mark Theunissen
9511056f44
fix: simplify error logged when logger target is unreachable (#20304) 2024-08-22 02:43:48 -07:00
Mark Theunissen
fb4ad000b6
support parseObjectAttributes to handle multiple header values (#20295) 2024-08-21 14:13:59 -07:00
shandongzhejiang
a8ff12bc72
chore: fix some comments (#20294)
Signed-off-by: shandongzhejiang <shandongzhejiang@icloud.com>
2024-08-21 13:14:24 -07:00
jiuker
1e1bd3afd9
use io.NopCloser replace closeWrapper (#20287) 2024-08-21 05:20:54 -07:00
Anis Eleuch
7b239ae154
sftp: Fix operations with a internal service account (#20293)
sftp sends local requests to the S3 port while passing the session token
header when the account corresponds to a service account. However, this
is not permitted and will throw an error: "The security token included in the
request is invalid"

This commit will avoid passing the session token to the upper layer that
initializes MinIO client to avoid this error.
2024-08-20 13:00:29 -07:00
Aditya Manthramurthy
8a11282522
[fix] S3Select: Add some missing input validation (#20278)
Prevents server panic when some CSV parameters are empty.
2024-08-20 11:31:45 -07:00
Anis Eleuch
85c3db3a93
heal: Add finished flag to .healing.bin to avoid removing it (#20250)
Sometimes, we need historical information in .healing.bin, such as the
number of expired objects that healing avoids healing, which can
create drive-usage disparity in the same erasure set. For that reason,
this commit no longer removes .healing.bin; it adds a new
field called Finished so we know healing is finished on that drive.
2024-08-20 08:42:49 -07:00
Ramon de Klein
37383ecd09
Ensure that sig/sha are in the same layer (#20282) 2024-08-18 00:13:33 -07:00
Mark Theunissen
6378ca10a4
kms.ListKeys returns CreatedBy/CreatedAt when information is available (#20223) 2024-08-17 23:43:03 -07:00
Minio Trusted
9e81ccd2d9 Update yaml files to latest version RELEASE.2024-08-17T01-24-54Z 2024-08-17 12:00:24 +00:00
Harshavardhana
72cff79c8a
add missing STS accounts loading (#20279)
PR #20268 missed loading STS accounts map properly
2024-08-16 18:24:54 -07:00
Harshavardhana
a5702f978e
remove requests deadline, instead just reject the requests (#20272)
Additionally set

 - x-ratelimit-limit
 - x-ratelimit-remaining

To indicate the request rates.
2024-08-16 01:43:49 -07:00
Poorna
4687c4616f
try loading temp account if not in cache (#20266) 2024-08-15 23:12:42 -07:00
Harshavardhana
d8dfb57d5c fix: typos from previous PR #20270 2024-08-15 12:33:45 -07:00
Ramon de Klein
b07c58aa05
Add signature and SHA to the Docker images (#20270)
add signature and SHA to the Docker images
2024-08-15 08:48:04 -07:00
Harshavardhana
cc0c41d216
remove region locks and make them simpler (#20268)
- the single-flight approach is now optional, instead of the default.
- parallelize the loaders up to 32 items per asset (more room for improvement possible)
2024-08-15 08:41:03 -07:00
Klaus Post
f1302c40fe
Fix uninitialized replication stats (#20260)
Services are unfrozen before `initBackgroundReplication` is finished. This means that 
the globalReplicationStats write is racy. Switch to an atomic pointer.

Provide the `ReplicationPool` with the stats, so it doesn't have to be grabbed 
from the atomic pointer on every use.

All other loads and checks are nil, and calls return empty values when stats 
still haven't been initialized.
2024-08-15 05:04:40 -07:00
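The fix pattern, sketched with illustrative names via `sync/atomic`'s typed pointer: early readers see nil and return empty values instead of racing a plain write.

```
package repl

import "sync/atomic"

type ReplicationStats struct {
	Pending, Failed uint64
}

var globalStats atomic.Pointer[ReplicationStats]

// initStats publishes the stats object atomically; it may run after
// services are already unfrozen and serving reads.
func initStats() {
	globalStats.Store(&ReplicationStats{})
}

func pendingCount() uint64 {
	s := globalStats.Load()
	if s == nil {
		return 0 // not initialized yet: report empty, don't race
	}
	return s.Pending
}
```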
Harshavardhana
3b1aa40372
support relative paths for KMS_SECRET_KEY_FILE (#20264)
fixes #20251
2024-08-15 04:46:39 -07:00
Klaus Post
d96798ae7b
Add support profile deadlines and concurrent operations (#20244)
* Allow a maximum of 10 seconds to start profiling operations.
* Download up to 16 profiles concurrently, but only allow 10 seconds for
  each (does not include write time).
* Add cluster info as the first operation.
* Ignore remote download errors.
* Stop remote profiles if the request is terminated.
2024-08-15 03:36:00 -07:00
Anis Eleuch
b508264ac4
sr: Avoid recursion when loading site replicator credentials (#20262)
If site replication is enabled and the code tries to extract jwt
claims while the site replication service account credentials are still
not loaded, the code will enter an infinite loop, causing
high CPU usage.

Another possibility of the infinite loop is having some service accounts
created by an old deployment version where the service account JWT was
signed by the root credentials, but not anymore.

This commit will remove the possibility of the infinite loop in the code
and add root credential fallback to extract claims from old service
accounts.
2024-08-14 18:29:20 -07:00
Harshavardhana
db78431b1d
avoid crash when initializing bucket quota cache (#20258) 2024-08-14 17:34:56 -07:00
Sveinn
743ddb196a
Removing the audit log retry mechanism (#20259) 2024-08-14 15:25:08 -07:00
Klaus Post
3ffeabdfcb
Fix govet+staticcheck issues (#20263)
This is better: https://github.com/golang/go/issues/60529
2024-08-14 10:11:51 -07:00
Anis Eleuch
51b1f41518
heal: Persist MRF queue in the disk during shutdown (#19410) 2024-08-13 15:26:05 -07:00
Harshavardhana
e7a56f35b9
flatten out audit tags, do not send as free-form (#20256)
move away from map[string]interface{} to map[string]string
to simplify the audit, and also provide concise information.

avoids large allocations under load(), and reduces the amount
of audit information generated, as the current implementation
was a bit free-form. Instead, all data structures must be
flattened.
2024-08-13 15:22:04 -07:00
rubyisrust
516af01a12
chore: fix some function names (#20243)
Signed-off-by: rubyisrust <rustrover@icloud.com>
2024-08-13 11:23:33 -07:00
Harshavardhana
acdb355070
update deps and update azure WARM tier implementation (#20247) 2024-08-13 11:21:34 -07:00
Mark Theunissen
37c02a5f7b
Add dummy DeleteBucketCors for safety (#20253) 2024-08-13 08:25:16 -07:00
Krishnan Parthasarathi
04be352ae9
Relax quorum agreement on DataDir values (#20232)
Previously, we checked if we had a quorum on the DataDir value. 
We are removing this check, which allows reading objects with different 
DataDir values in a few drives (due to a rebalance-stop race bug) 
provided their eTags or ModTimes match.
2024-08-12 12:02:21 -07:00
Klaus Post
53eb7656de
Add admin info timeouts (#20249)
Since a lot of operations load from storage and do remote calls, add a 10-second timeout to each operation.

This should make `mc admin info` return values even under extreme conditions.
2024-08-12 10:24:29 -07:00
Klaus Post
d8f0e0ea6e
Simplify error logging on event send (#20246)
Overly verbose, hard to read, and can leak data.

Print the event as JSON and simplify target & error printing.
2024-08-12 08:55:28 -07:00
Harshavardhana
2e0fd2cba9
implement a safer completeMultipart implementation (#20227)
- optimize writing part.N.meta by writing both part.N
  and its meta in sequence without a network component.

- remove part.N.meta and part.N that were only partially
  successful, in quorum-loss situations during renamePart()

- allow for a strict read quorum check arbitrated via ETag
  for the given part number; this makes it doubly safe
  upon final commit.

- return an appropriate error when read quorum is missing,
  instead of returning InvalidPart{}, which is a non-retryable
  error. This kind of situation can happen when many
  nodes are going offline in rotation; an example of such
  restart() behavior is statefulset updates in k8s.

fixes #20091
2024-08-12 01:38:15 -07:00
Harshavardhana
909b169593
avoid source index to be same as destination index (#20238)
during rebalance stop, it can possibly happen that
Put() would race by overwriting the same object again.

If this race "succeeds", it can potentially proceed
to delete the object from the pool, causing data loss.

This PR enhances #20233 to handle more scenarios such
as these.
2024-08-09 19:30:44 -07:00
Krishnan Parthasarathi
4e67a4027e
Prevent overwrites due to rebalance-stop race (#20233)
Rebalance-stop can race with ongoing rebalance operations. This change
prevents these operations from overwriting objects by checking the source
and destination pool indices are different.
2024-08-08 19:05:14 -07:00
Klaus Post
49055658a9
Fix missing hash in GetObjectAttributes (#20231)
SHA256/SHA1 were mixed up.

Simplify code as well.
2024-08-08 13:19:41 -07:00
Harshavardhana
89c58ce87d
enhance getActualSize() to return valid values for most situations (#20228) 2024-08-08 08:29:58 -07:00
Andreas Auernhammer
14876a4df1
ldap: use custom TLS cipher suites (#20221)
This commit replaces the LDAP client TLS config and
adds a custom list of TLS cipher suites which support
RSA key exchange (RSA kex).

Some LDAP server connections experience a significant slowdown
when these cipher suites are not available. The Go TLS stack
disables them by default. (Can be enabled via GODEBUG=tlsrsakex=1).

fixes https://github.com/minio/minio/issues/20214

With a custom list of TLS ciphers, Go can pick the TLS RSA key-exchange
cipher. Ref:
```
	if c.CipherSuites != nil {
		return c.CipherSuites
	}
	if tlsrsakex.Value() == "1" {
		return defaultCipherSuitesWithRSAKex
	}
```
Ref: https://cs.opensource.google/go/go/+/refs/tags/go1.22.5:src/crypto/tls/common.go;l=1017

Signed-off-by: Andreas Auernhammer <github@aead.dev>
2024-08-07 05:59:47 -07:00
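A sketch of the approach: once `Config.CipherSuites` is non-nil, Go uses exactly that list (as the quoted stdlib snippet shows), so RSA key-exchange suites can be included explicitly. The particular suite selection below is illustrative, not MinIO's actual list:

```
package ldaptls

import "crypto/tls"

// clientConfig opts back into RSA-kex suites by setting an explicit
// cipher list; with a nil list Go only enables them via
// GODEBUG=tlsrsakex=1.
func clientConfig() *tls.Config {
	return &tls.Config{
		CipherSuites: []uint16{
			// Modern ECDHE suites first...
			tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
			tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
			// ...plus RSA-kex suites some LDAP servers still need.
			tls.TLS_RSA_WITH_AES_128_GCM_SHA256,
			tls.TLS_RSA_WITH_AES_256_GCM_SHA384,
		},
	}
}
```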
Mark Theunissen
2681219039
Add dummy PutBucketCors for functional test compatibility (#20220) 2024-08-06 08:41:38 -07:00
Leo
5da7f0100a
chore: Adjust setup guide for development. (#20204)
Signed-off-by: ifuryst <ifuryst@gmail.com>
2024-08-05 11:35:53 -07:00
Harshavardhana
dea9abed29
use singleflight when bucket metadata is reloaded() (#20216)
this allows de-duplicating callers when called
concurrently, allowing bucket metadata reads to be a
single call. All concurrent callers will get the same data
as the first one.
2024-08-05 09:50:11 -07:00
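With `golang.org/x/sync/singleflight`, the de-duplication reads roughly like the sketch below; `loadFromDrives` is a hypothetical stand-in for the real loader:

```
package meta

import "golang.org/x/sync/singleflight"

var group singleflight.Group

// reloadBucketMetadata collapses concurrent reloads of the same
// bucket into one load; all waiting callers receive the result of
// the first call.
func reloadBucketMetadata(bucket string, loadFromDrives func(string) (any, error)) (any, error) {
	v, err, _ := group.Do(bucket, func() (any, error) {
		return loadFromDrives(bucket)
	})
	return v, err
}
```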
Harshavardhana
e3eb5c1328
batch-exp: Remove 1000 maximum objects per call (#20212)
It seems ObjectAPI.DeleteObjects() is clogging up when it is removing
10k versions of a single object.

Authored-by: Anis Eleuch <anis@min.io>
2024-08-04 21:55:25 -07:00
Minio Trusted
fb9364f1fb Update yaml files to latest version RELEASE.2024-08-03T04-33-23Z 2024-08-03 08:48:40 +00:00
Cesar N.
6efb56851c
Update console version to 1.7.0 (#20211) 2024-08-02 21:33:23 -07:00
Andrea Longo
db2c7ed1d1
Docs: link to prom collector repo for info on debug metrics (#20209)
link to prom collector repo for info on debug metrics
2024-08-02 15:30:11 -07:00
Poorna
74c047cb03
fix replication last hour metric (#20199)
also adding missing recent_backlog_count metric to v3 metrics
2024-08-01 17:55:27 -07:00
jiuker
50a5ad48fc
feat: support batch replication prefix slice (#20033) 2024-08-01 05:53:30 -07:00
Minio Trusted
292fccff6e Update yaml files to latest version RELEASE.2024-07-31T05-46-26Z 2024-07-31 08:50:26 +00:00
Harshavardhana
a9dc061d84
count metrics properly for any failures during drive heal (#20193)
or via `mc admin heal --set 1 --pool 1`
2024-07-30 22:46:26 -07:00
Krishnan Parthasarathi
01a8c09920
Add fmt-gen subcommand (#20192)
fmt-gen subcommand is only available when built with build tag `fmtgen`.
2024-07-30 15:59:48 -07:00
Aditya Manthramurthy
4c8562bcec
Fix v2 metrics: Send all ttfb api labels (#20191)
Fix a regression in #19733 where TTFB metrics for all APIs except
GetObject were removed in v2 and v3 metrics. This causes breakage for
existing v2 metrics users. Instead we continue to send TTFB for all APIs
in V2 but only send for GetObject in V3.
2024-07-30 15:28:46 -07:00
Harshavardhana
f13c04629b
allow multipart uploads expiration to be dynamic (#20190)
allow multipart uploads expiration to be dynamic

It would seem like the new values will take effect
only after a restart for changes in multipart_expiration.
This PR fixes this by making it dynamic as it should have
been.
2024-07-30 12:01:06 -07:00
Harshavardhana
80ff907d08
add DeleteBulk support, add sufficient deadlines per rename() (#20185)
deadlines per moveToTrash() allows for a more granular timeout
approach for syscalls, instead of an aggregate timeout.

This PR also enhances multipart state cleanup to be optimal by
collapsing hundreds of multipart network rename() calls into a
single network call.
2024-07-29 18:56:40 -07:00
Minio Trusted
673df6d517 Update yaml files to latest version RELEASE.2024-07-29T22-14-52Z 2024-07-30 00:00:47 +00:00
Poorna
2d40433bc1
remove replication throttle deadline for objects > 128MiB (#20184)
context deadline was introduced to avoid a slow transfer from blocking
replication queue(s) shared by other buckets that may not be under throttling.

This PR removes this context deadline for larger objects since they are 
anyway restricted to a limited set of workers. Otherwise, objects would 
get dequeued when the throttle limit is exceeded and cannot proceed 
within the deadline.
2024-07-29 15:14:52 -07:00
Andrea Longo
3bc39db34e
Restructure metrics v3 readme for docs use (#20114) 2024-07-29 11:48:51 -07:00
Harshavardhana
a17f14f73a
separate lock from common grid to avoid epoll contention (#20180)
epoll contention on TCP causes latency build-up when
we have high-volume ingress. This PR is an attempt to
relieve this pressure.

upstream issue https://github.com/golang/go/issues/65064
It seems to be a deeper problem; we haven't yet tried the fix
provided in that issue, but this change helps without
changing the compiler.

Of course, this is a workaround for now, hoping for a
more comprehensive fix from the Go runtime.
2024-07-29 11:10:04 -07:00
Poorna
6651c655cb
fix replication of checksum when encryption is enabled (#20161)
- Adding functional tests
- Return checksum header on GET/HEAD, previously this was returning
  InvalidPartNumber error
2024-07-29 01:02:16 -07:00
Harshavardhana
3ae104edae
change Read* calls over net/http to move to http.MethodGet (#20173)
- ReadVersion
- ReadFile
- ReadXL

Further changes include:

- Compact internode resource RPC paths
- Compact internode query params

This optimizes parsing by gorilla/mux, as the
length of these strings increases latency in
gorilla/mux; reduce them to a meaningful string.
2024-07-29 01:00:12 -07:00
jiuker
c87a489514
fix: support prefix when batchJob replicate enable the snowball (#20178) 2024-07-29 00:59:50 -07:00
Minio Trusted
a60267501d Update yaml files to latest version RELEASE.2024-07-26T20-48-21Z 2024-07-27 10:43:28 +00:00
Poorna
641a56da0d
fix panic in replication queuing (#20169)
Regression from #20077

```
Jul 26 19:08:29 minio-dr-0101a minio[275423]: Error: grid handler (NSScanner) panic: runtime error: index out of range [4] with length 1 (*errors.errorString)
Jul 26 19:08:29 minio-dr-0101a minio[275423]:       33: internal/logger/logger.go:268:logger.LogIf()
Jul 26 19:08:29 minio-dr-0101a minio[275423]:       32: internal/grid/connection.go:50:grid.gridLogIf()
Jul 26 19:08:29 minio-dr-0101a minio[275423]:       31: internal/grid/muxserver.go:234:grid.(*muxServer).handleRequests.func1()
Jul 26 19:08:29 minio-dr-0101a minio[275423]:       30: cmd/bucket-replication.go:2165:cmd.(*ReplicationPool).queueReplicaTask()
Jul 26 19:08:29 minio-dr-0101a minio[275423]:       29: cmd/bucket-replication.go:3440:cmd.queueReplicationHeal()
Jul 26 19:08:29 minio-dr-0101a minio[275423]:       28: cmd/data-scanner.go:1396:cmd.(*scannerItem).healReplication()
Jul 26 19:08:29 minio-dr-0101a minio[275423]:       27: cmd/data-scanner.go:1220:cmd.(*scannerItem).applyActions()
Jul 26 19:08:29 minio-dr-0101a minio[275423]:       26: cmd/xl-storage.go:627:cmd.(*xlStorage).NSScanner.func2()
```
2024-07-26 13:48:21 -07:00
Klaus Post
59788e25c7
Update connection deadlines less frequently (#20166)
Only set write deadline on connections every second. Combine the 2 write locations into 1.
2024-07-26 10:40:11 -07:00
Harshavardhana
a16193bb50
remove fdatasync() discard, we write with O_SYNC (#20168)
fdatasync() discard for page-cached READs is not
needed; it would seem this can cause latencies
in situations when things are being loaded.
2024-07-26 10:27:56 -07:00
jiuker
132e7413ba
fix: check once ready for site-replication (#20149) 2024-07-26 10:27:42 -07:00
Klaus Post
1966668066
Avoid Batch Replication Job log spam (#20158)
Only print once per job and error location.

Set default retry to default 1 second wait, and use as minimum.
2024-07-26 05:55:50 -07:00
Harshavardhana
064f36ca5a
move to GET for internal stream READs instead of POST (#20160)
the main reason is to let Go net/http perform the necessary
bookkeeping properly; in essence, from a consistency
point of view, it's GETs all the way.

Deprecate sendFile() as it's buggy inside the Go runtime.
2024-07-26 05:55:01 -07:00
Klaus Post
15b609ecea
Expose RPC reconnections and ping time (#20157)
- Keeps track of reconnection count.
- Keeps track of connection ping roundtrip times. 
  Sends timestamp in ping message.
- Allow ping without payload.
2024-07-25 14:07:21 -07:00
Krishnan Parthasarathi
4a1edfd9aa
Different read quorum for tiered objects (#20115)
For a non-tiered object, MinIO requires that EcM (# of data blocks) of
xl.meta agree, corresponding to the number of data blocks needed to 
read this object.

OTOH, tiered objects have metadata in the hot tier and data in the 
warm tier. The data and its integrity are offloaded to the warm tier. This
allows us to reduce the read quorum from EcM (typically > N/2, where N -
erasure stripe width) to N/2 + 1. The simple majority of metadata
ensures consensus on what the object is and where it is
located.
2024-07-25 14:02:50 -07:00
Anis Eleuch
b7f319b62a
properly reload a fresh drive when found in a failed state during startup (#20145)
When a drive is in a failed state when a single-node multiple-drives
deployment is started, a replacement fresh disk will not be
properly healed unless the user restarts the node.

Fix this by always adding the new fresh disk to globalLocalDrivesMap. Also
remove globalLocalDrives for simplification; a map to store local node
drives can still be used, since the order of local drives of a node is
not defined.
2024-07-24 16:30:33 -07:00
Anis Eleuch
33c101544d
kms: Expose API when bucket federation is enabled (#20143)
kms: Expose the API when bucket federation is enabled

When the bucket federation feature is enabled, the KMS API will not work,
e.g. `mc admin kms key list`

The commit fixes the issue by disabling bucket forwarding when the
request is a KMS request.
2024-07-24 15:44:29 -07:00
Anis Eleuch
21cf29330e
grafana: Fix the unit in Open FDs panel (#20144) 2024-07-24 07:51:03 -07:00
Harshavardhana
3b21bb5be8
use unixNanoTime instead of time.Time in lockRequestorInfo (#20140)
Bonus: Skip Source, Quorum fields in lockArgs that are never
sent during Unlock() phase.
2024-07-24 03:24:01 -07:00
Harshavardhana
6fe2b3f901
avoid sendFile() for ranges or object lengths < 4MiB (#20141) 2024-07-24 03:22:50 -07:00
Taran Pelkey
b368d4cc13
Fix updateGroupMembershipsForLDAP behavior with unicode (#20137) 2024-07-23 19:10:03 -07:00
Klaus Post
0680af7414
Set O_NONBLOCK for reads and writes on unix (#20133)
Tracing syscalls, opening and reading an `xl.meta` looks like this:

```
openat(AT_FDCWD, "/mnt/drive1/ss8-old/testbucket/ObjSize4MiBThreads72/(554O51H/peTb(0iztdbTKw59.csv/xl.meta", O_RDONLY|O_NOATIME|O_CLOEXEC) = 34 <0.000>
fcntl(34, F_GETFL)                      = 0x48000 (flags O_RDONLY|O_LARGEFILE|O_NOATIME) <0.000>
fcntl(34, F_SETFL, O_RDONLY|O_NONBLOCK|O_LARGEFILE|O_NOATIME) = 0 <0.000>
epoll_ctl(4, EPOLL_CTL_ADD, 34, {events=EPOLLIN|EPOLLOUT|EPOLLRDHUP|EPOLLET, data={u32=3172471557, u64=8145488475984499461}}) = -1 EPERM (Operation not permitted) <0.000>
fcntl(34, F_GETFL)                      = 0x48800 (flags O_RDONLY|O_NONBLOCK|O_LARGEFILE|O_NOATIME) <0.000>
fcntl(34, F_SETFL, O_RDONLY|O_LARGEFILE|O_NOATIME) = 0 <0.000>
fstat(34, {st_mode=S_IFREG|0644, st_size=354, ...}) = 0 <0.000>
read(34, "XL2 \1\0\3\0\306\0\0\1P\2\2\1\304$\225\304\20\0\0\0\0\0\0\0\0\0\0\0"..., 354) = 354 <0.000>
close(34)                               = 0 <0.000>
```

Everything until `fstat` is the `os.Open` call.

Looking at the code: https://github.com/golang/go/blob/master/src/os/file_unix.go#L212-L243

It seems for every file it "tries" to see if it is pollable. This causes `syscall.SetNonblock(fd, true)` to be called. This is the first `F_SETFL`.

It then calls `f.pfd.Init("file", true)`. This will attempt to set it as pollable using `epoll_ctl`. This will always fail for files. It therefore calls `syscall.SetNonblock(fd, false)` resulting in the second `F_SETFL`.

If we set the `O_NONBLOCK` call on the initial open, we should avoid the 4 `fcntl` syscalls per file.

I don't see any way to avoid the `epoll_ctl` call, since kind is either `kindOpenFile` or `kindNonBlock`, so "pollable" will always be true. However avoiding 4 of 6 syscalls still seems worth it.

This should not have any effect, since files will end up with "nonblock" anyway.
2024-07-23 09:36:24 -07:00
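A sketch of the open-flag change for unix builds (O_NOATIME and O_CLOEXEC omitted for brevity; the package and helper names are illustrative, not the actual code):

```
//go:build unix

package xlopen

import (
	"os"
	"syscall"
)

// openMeta requests O_NONBLOCK at open time so the Go runtime's
// pollability probe skips its extra F_SETFL round-trips. Regular
// files never block on read, so behavior is unchanged.
func openMeta(path string) (*os.File, error) {
	return os.OpenFile(path, os.O_RDONLY|syscall.O_NONBLOCK, 0)
}
```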
Harshavardhana
91805bcab6
add optimizations to bring performance on unversioned READS (#20128)
allow non-inlined objects on disk to be inlined via
an unversioned ReadVersion() call; we only
need ReadXL() to resolve objects with multiple
versions.

The choice of this block is dynamic
and chosen by the user via `mc admin config set`

Other bonus things

- Start measuring internode TTFB performance.
- Set TCP_NODELAY, TCP_CORK for low latency
2024-07-23 03:53:03 -07:00
Klaus Post
c0e2886e37
Tweak grid for less writes (#20129)
Use `runtime.Gosched()` if we have less than maxMergeMessages and the 
queue is empty.  Up maxMergeMessages to 50 to merge more messages into 
a single write.

Add length check for an early bailout on readAllInto when we know packet length.
2024-07-23 03:28:14 -07:00
Andreas Auernhammer
4f5dded4d4
fips: enforce FIPS-compliant TLS ciphers in FIPS mode (#20131)
This commit enforces FIPS-compliant TLS ciphers in FIPS mode
by importing the `fipsonly` module.

Otherwise, MinIO still accepts non-FIPS compliant TLS connections.
2024-07-23 03:11:25 -07:00
jiuker
b3a94c4e85
fix: Use xtime duration to parse batch job (#20117) 2024-07-23 00:05:53 -07:00
Harshavardhana
8e618d45fc
remove unnecessary LRU for internode auth token (#20119)
removes contentious usage of mutexes in the LRU, which
was never really reused in any manner; we do not
need it.

To trust hosts, the correct way is TLS certs; this PR completely
removes this dependency, which has never been useful.

```
0  0%  100%  25.83s 26.76%  github.com/hashicorp/golang-lru/v2/expirable.(*LRU[...])
0  0%  100%  28.03s 29.04%  github.com/hashicorp/golang-lru/v2/expirable.(*LRU[...])
```

Bonus: use `x-minio-time` as nanoseconds to avoid the unnecessary
parsing logic of time strings, using a more
straightforward mechanism instead.
2024-07-22 00:04:48 -07:00
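Sketched with illustrative helpers, the nanosecond header is just an integer round-trip:

```
package headers

import (
	"net/http"
	"strconv"
	"time"
)

// setMinioTime carries the time as a nanosecond integer instead of a
// formatted time string.
func setMinioTime(h http.Header, t time.Time) {
	h.Set("x-minio-time", strconv.FormatInt(t.UnixNano(), 10))
}

// getMinioTime parses the nanosecond header back into a time.Time.
func getMinioTime(h http.Header) (time.Time, error) {
	ns, err := strconv.ParseInt(h.Get("x-minio-time"), 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	return time.Unix(0, ns), nil
}
```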
Harshavardhana
3ef59d2821
do not set KMSSecretKey env from KMSSecretKeyFile (#20122)
fixes #20121
2024-07-21 14:39:15 -07:00
Harshavardhana
23db4958f5
fix tuned-adm command typo 2024-07-18 18:15:02 -07:00
Anis Eleuch
d9ee668b6d
s3: Fix wrong continuation token during listing with ILM enabled bucket (#20113) 2024-07-18 13:37:34 -07:00
Anis Eleuch
2e5d792f0c
batch-expiry: Save progress regularly in the drives and at the end (#20098)
- Also, fix failure reporting at the end.
- Also, avoid parsing report objects when listing or resuming jobs; this
does not cause any bugs, since the parsing was only for printing, and
the errors were not useful.
2024-07-17 09:42:32 -07:00
Minio Trusted
b276651eaa Update yaml files to latest version RELEASE.2024-07-16T23-46-41Z 2024-07-17 15:26:12 +00:00
Poorna
3535197f99
replication: proxy only on missing object or read quorum err (#20101) 2024-07-16 16:46:41 -07:00
Frank Wessels
95f076340a
Update reedsolomon dependency with fix for Graviton4 processor (#20102) 2024-07-16 12:27:21 -07:00
Mark Theunissen
698bb93a46
Allow a KMS Action to specify keys in the Resources of a policy (#20079) 2024-07-16 07:03:03 -07:00
Minio Trusted
2584430141 Update yaml files to latest version RELEASE.2024-07-15T19-02-30Z 2024-07-15 22:10:04 +00:00
Klaus Post
ded373e600
Split handleMessages (cosmetic) (#20095)
Split the read and write sides of handleMessages into two separate functions

Cosmetic. The only non-copy-and-paste change is that `cancel(ErrDisconnected)` is moved 
into the defer on `readStream`.
2024-07-15 12:02:30 -07:00
Harshavardhana
e8c54c3d6c
add validation test for v3 metrics for all its endpoints (#20094)
add unit test for v3 metrics for all its exposed endpoints

Bonus:
  - support OpenMetrics encoding
  - adds boot time for prometheus
  - continueOnError is better, to serve as
    many metrics as possible.
2024-07-15 09:28:02 -07:00
Shubhendu
f944a42886
Removed user and group details from logs (#20072)
Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2024-07-14 11:12:07 -07:00
Harshavardhana
eff0ea43aa
fix: typo in BucketUsageMetrics group registration in v3 metrics (#20090)
```
curl http://localhost:9000/minio/metrics/v3/cluster/usage/buckets
```

This did not work as documented, due to a typo
in the bucket usage metrics registration group. This endpoint is
a cluster endpoint and does not require any `buckets` argument.
2024-07-14 11:11:42 -07:00
Minio Trusted
3b602bb532 Update yaml files to latest version RELEASE.2024-07-13T01-46-15Z 2024-07-13 02:08:28 +00:00
Alex
459985f0fa
Update Console to v1.6.3 (#20084)
Signed-off-by: Benjamin Perez <benjamin@bexsoft.net>
2024-07-12 18:46:15 -07:00
Harshavardhana
d0080046c2 allow sysfs tuning from tuned 2024-07-12 16:31:18 -07:00
Harshavardhana
7fcb428622
do not print unexpected logs (#20083) 2024-07-12 13:51:54 -07:00
dependabot[bot]
4ea6f94ed8
Bump github.com/nats-io/nats-streaming-server from 0.24.3 to 0.24.6 (#20082)
Bumps [github.com/nats-io/nats-streaming-server](https://github.com/nats-io/nats-streaming-server) from 0.24.3 to 0.24.6.
- [Release notes](https://github.com/nats-io/nats-streaming-server/releases)
- [Changelog](https://github.com/nats-io/nats-streaming-server/blob/main/.goreleaser.yml)
- [Commits](https://github.com/nats-io/nats-streaming-server/compare/v0.24.3...v0.24.6)

---
updated-dependencies:
- dependency-name: github.com/nats-io/nats-streaming-server
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-12 11:45:09 -07:00
Klaus Post
83adc2eebf
Fix ListObjects aborting after 3 minute on async request (#20074)
When creating the async listing, if the first request does not return within 3 
minutes, it is stopped, since it isn't being kept alive.

Keep updating `lastHandout` while we are waiting for the initial request to be fulfilled.
2024-07-12 09:23:16 -07:00
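A minimal Go sketch of the keep-alive idea above — refresh the listing's liveness timestamp while still blocked on the first page; `waitFirstPage` and the `lastHandout` parameter are illustrative names, not MinIO's actual code:

```go
package sketch

import (
	"sync/atomic"
	"time"
)

// waitFirstPage refreshes a liveness timestamp while blocked on the
// first page of an async listing, so an idle-timeout janitor does not
// reap the listing before its initial request is fulfilled.
func waitFirstPage(results <-chan []string, lastHandout *atomic.Int64) []string {
	tick := time.NewTicker(30 * time.Second)
	defer tick.Stop()
	for {
		select {
		case page := <-results:
			lastHandout.Store(time.Now().UnixNano())
			return page
		case <-tick.C:
			// Still waiting: mark the listing as alive.
			lastHandout.Store(time.Now().UnixNano())
		}
	}
}
```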
Poorna
989c318a28
replication: make large workers configurable (#20077)
This PR also improves throttling by reducing the tokens requested
from the rate limiter based on the tokens currently available, to
avoid exceeding throttle wait deadlines
2024-07-12 07:57:31 -07:00
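A sketch of the token-reduction idea using golang.org/x/time/rate; `waitReduced` is a hypothetical helper, not the actual replication code:

```go
package sketch

import (
	"context"

	"golang.org/x/time/rate"
)

// waitReduced requests at most the tokens currently available (never
// fewer than one), so a large request does not sleep past the
// throttle wait deadline.
func waitReduced(ctx context.Context, lim *rate.Limiter, want int) (granted int, err error) {
	granted = want
	if avail := int(lim.Tokens()); avail > 0 && avail < granted {
		granted = avail // take what is available instead of blocking for all of it
	}
	if granted < 1 {
		granted = 1
	}
	return granted, lim.WaitN(ctx, granted)
}
```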
Frank Wessels
ef802f2b2c
Updated dependencies for ARM SVE support (#20081) 2024-07-12 07:36:29 -07:00
Taran Pelkey
f5d2fbc84c
Add DecodeDN and QuickNormalizeDN functions to LDAP config (#20076) 2024-07-11 18:04:53 -07:00
Allan Roger Reid
e139673969
Audit failure in batch job key rotate (#20073) 2024-07-11 16:13:15 -07:00
Harshavardhana
a8c6465f22
hide some deprecated fields from 'get' output (#20069)
also update wording on `subnet license="" api_key=""`
2024-07-10 13:16:44 -07:00
Minio Trusted
27538e2d22 Update yaml files to latest version RELEASE.2024-07-10T18-41-49Z 2024-07-10 19:27:17 +00:00
Taran Pelkey
6c6f0987dc
Add groups to policy entities (#20052)
* Add groups to policy entities

* update comment

---------

Co-authored-by: Harshavardhana <harsha@minio.io>
2024-07-10 11:41:49 -07:00
Austin Chang
5f64658faa
clarify error message for root user credential (#20043)
Signed-off-by: Austin Chang <austin880625@gmail.com>
2024-07-10 09:57:01 -07:00
Anis Eleuch
ce183cb2b4
heal: List and heal again for any listing error (#19999)
When a fresh drive healing is finished, add more checks for the drive listing
errors. If any, re-list and heal again. Although listPathRaw() returning
nil when minDisks is set to 1 is an infrequent case, we still need to
handle all possible cases to avoid missing the healing of any object.

Also, check the HealObject result to decide whether an object is healed
on the fresh disk, since HealObject returns nil if an object is healed
on any disk, not necessarily on the new fresh drive.
2024-07-10 09:55:36 -07:00
Klaus Post
b3bac73c0f
Clarify post policy error message (#20067)
It is not really clear that the listed keys are missing.

Clarify the error
2024-07-10 07:18:44 -07:00
Anis Eleuch
e726d8ff0f
list: Hide objects/versions with pending/failed replicated deletion (#20047)
In regular listing, this commit avoids showing an object when its
latest version has a pending or failed deletion in a replicated setup.
It also prevents showing older versions in the same case.
2024-07-09 15:26:42 -07:00
Shubhendu
f4230777b3
Log replication errors once (#20063)
Also, sort the error map for multiple sites in ascending order of
deployment IDs, so that the generated error message is always in a
deterministic order and identical across runs.

Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2024-07-09 10:10:31 -07:00
Krishnan Parthasarathi
380233d646
batch: Update job info object on success (#20053) 2024-07-08 18:45:54 -07:00
Allan Roger Reid
d592bc0c1c
Fix documentation for removal of delete markers ILM rule (#20056) 2024-07-08 18:45:38 -07:00
Klaus Post
0d0b0aa599
Abstract grid connections (#20038)
Add `ConnDialer` to abstract connection creation.

- `IncomingConn(ctx context.Context, conn net.Conn)` is provided as an entry point for 
   incoming custom connections.

- `ConnectWS` is provided to create web socket connections.
2024-07-08 14:44:00 -07:00
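An illustrative sketch of such a connection abstraction; only the `IncomingConn` entry point and `ConnectWS` come from the commit message, the type shape below is assumed:

```go
package sketch

import (
	"context"
	"net"
)

// ConnDialer abstracts how a grid connection is created, so plain TCP,
// websockets, or custom incoming connections can be plugged in.
type ConnDialer func(ctx context.Context, address string) (net.Conn, error)

// tcpDialer is the simplest possible ConnDialer.
func tcpDialer(ctx context.Context, address string) (net.Conn, error) {
	var d net.Dialer
	return d.DialContext(ctx, "tcp", address)
}
```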
Anis Eleuch
b433bf14ba
Add typos check to Makefile (#20051) 2024-07-08 14:39:49 -07:00
Minio Trusted
cf371da346 Update yaml files to latest version RELEASE.2024-07-04T14-25-45Z 2024-07-04 14:58:08 +00:00
Klaus Post
107d951893
Log ILM failed object name (#20040)
Log so we know which object we are dealing with.

Log each object once.
2024-07-04 07:25:45 -07:00
Shireesh Anjal
22c53b1c70
Remove license update job (#20037) 2024-07-03 11:49:48 -07:00
Mark Theunissen
88926ad8e9
return appropriate error upon tier update for incorrect credentials (#20034) 2024-07-03 00:17:20 -07:00
Harshavardhana
32d04091a2
resume any batch jobs in a goroutine (#20035)
Bonus: move batch job initialization to the end, after all other
initialization, allowing for faster startup of the other subsystems.
2024-07-03 00:16:05 -07:00
Harshavardhana
b6d4a77b94 update vulncheck 2024-07-02 14:34:59 -07:00
Harshavardhana
be84a4fd68
do not proxy invalid object names (#20031) 2024-07-02 14:28:55 -07:00
Anis Eleuch
2ec1f404ac
info: Always refresh the root disk status (#20023)
Add root drive status in the disk info cache function, so unmounting a
drive without restarting a local node reflects the correct value.
2024-07-02 13:41:29 -07:00
Klaus Post
2040559f71
Fix SkipReader performance with small initial read (#20030)
If `SkipReader` is called with a small initial buffer, it may do a huge number of Reads to skip the requested number of bytes. If a small buffer is provided, grab a 32K buffer and use that.

Fixes slow execution of `testAPIGetObjectWithMPHandler`.

Bonuses:

* Use `-short` with `-race` test.
* Do all suite test types with `-short`.
* Enable compressed+encrypted in `testAPIGetObjectWithMPHandler`.
* Disable big file tests in `testAPIGetObjectWithMPHandler` when using `-short`.
2024-07-02 08:13:05 -07:00
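A sketch of the buffering fix, assuming a hypothetical `skip` helper rather than the real `SkipReader`:

```go
package sketch

import "io"

const skipBufSize = 32 << 10 // 32 KiB scratch buffer

// skip discards n bytes from r. If the caller's buffer is small, a
// 32K scratch buffer is used so the skip does not degrade into
// thousands of tiny Reads.
func skip(r io.Reader, n int64, callerBuf []byte) error {
	buf := callerBuf
	if len(buf) < skipBufSize {
		buf = make([]byte, skipBufSize)
	}
	for n > 0 {
		chunk := int64(len(buf))
		if chunk > n {
			chunk = n
		}
		read, err := io.ReadFull(r, buf[:chunk])
		n -= int64(read)
		if err != nil {
			return err
		}
	}
	return nil
}
```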
Anis Eleuch
ca0ce4c6ef
tests: Fix setting max openfds as memory limit (#20029)
The code was inadvertently passing max openfds to debug.SetMemoryLimit();
fixing this accelerates go test on my machine.

This is only a testing bug, since the server context always has a valid
MaxMem, so the buggy code was never called in user environments.
2024-07-02 08:09:36 -07:00
Anis Eleuch
757cf413cb
Add batch status API (#19679)
Currently the status of a completed or failed batch is held in
memory; a simple restart will lose the status and the user will have
no visibility into a job that was long running.

In addition to the metrics, add a new API that reads the batch status
from the drives. A batch job will be cleaned up three days after
completion.

Also add the batch type to the batch id: the batch job request is
removed immediately when the job is finished, so the type of the batch
job would otherwise be unknown, making it difficult to locate the job
report
2024-07-02 01:17:52 -07:00
Anis Eleuch
b35acb3dbc
heal: Add support of healing particular pool/set (#20024) 2024-07-01 15:02:25 -07:00
Sveinn
e404abf103
Letting password enable auth bypass caPublicKey (only if passauth is … (#20022) 2024-07-01 15:02:01 -07:00
jiuker
f7ff19cb18
fix: warning for decommissioned pool while start (#20019) 2024-07-01 07:38:46 -07:00
Minio Trusted
f736702da8 Update yaml files to latest version RELEASE.2024-06-29T01-20-47Z 2024-06-29 16:45:31 +00:00
Poorna
91faaa1387
fix panic in batch replicate (#20014)
Fixes:

```
panic: send on closed channel
	panic: close of closed channel

goroutine 878 [running]:
github.com/minio/minio/internal/ioutil.SafeClose[...](...)
	/Users/kp/code/src/github.com/minio/minio/internal/ioutil/ioutil.go:407
github.com/minio/minio/cmd.(*erasureServerPools).Walk.func2.2()
	/Users/kp/code/src/github.com/minio/minio/cmd/erasure-server-pool.go:2229 +0xc0
panic({0x108c25e60?, 0x1090b28d0?})
	/usr/local/go/src/runtime/panic.go:770 +0x124
github.com/minio/minio/cmd.(*erasureServerPools).Walk.func2.3({{0x1400e397316, 0x5}, {0x1400d88b8a8, 0x8}, {0x1f99d80, 0xede101c42, 0x0}, 0x3bc, 0x0, 0x0, ...})
	/Users/kp/code/src/github.com/minio/minio/cmd/erasure-server-pool.go:2235 +0xb4
github.com/minio/minio/cmd.(*erasureServerPools).Walk.func2()
	/Users/kp/code/src/github.com/minio/minio/cmd/erasure-server-pool.go:2277 +0xabc
created by github.com/minio/minio/cmd.(*erasureServerPools).Walk in goroutine 575
	/Users/kp/code/src/github.com/minio/minio/cmd/erasure-server-pool.go:2210 +0x33c
```
2024-06-28 18:20:47 -07:00
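A sketch of the Go idiom that avoids both panics in the trace above — exactly one owner closes the channel, and it stops producing when the consumer cancels; this is not MinIO's actual Walk code:

```go
package sketch

import "context"

// walk sends entries to a consumer. Only the producing goroutine
// closes the channel, and it stops producing once the consumer
// cancels the context, avoiding both "send on closed channel" and
// "close of closed channel".
func walk(ctx context.Context, entries []string) <-chan string {
	out := make(chan string)
	go func() {
		defer close(out) // exactly one closer: the producer
		for _, e := range entries {
			select {
			case out <- e:
			case <-ctx.Done():
				return // consumer is gone; just stop
			}
		}
	}()
	return out
}
```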
Poorna
68a9f521d5
fix object lock metadata filter (#20011) 2024-06-28 18:20:27 -07:00
Harshavardhana
f365a98029
fix: hot-reloading STS credential policy documents (#20012)
* fix: hot-reloading STS credential policy documents
* Support Role ARNs hot load policies (#28)

---------

Co-authored-by: Anis Eleuch <vadmeste@users.noreply.github.com>
2024-06-28 16:17:22 -07:00
Minio Trusted
47bbc272df Update yaml files to latest version RELEASE.2024-06-28T09-06-49Z 2024-06-28 14:11:36 +00:00
Anis Eleuch
aebac90013
tests: Fix minor issue in the config yaml file testing (#20005)
Convert x86_64 to amd64 in the test script to correctly download the mc binary.
2024-06-28 02:06:49 -07:00
Taran Pelkey
7ca4ba77c4
Update tests to use AttachPolicy(LDAP) instead of deprecated SetPolicy (#19972) 2024-06-28 02:06:25 -07:00
Poorna
13512170b5
list: Do not decrypt SSE-S3 Etags in a non encrypted format (#20008) 2024-06-27 19:44:56 -07:00
Krishnan Parthasarathi
154fcaeb56
Allow rebalance start when it's stopped/completed (#20009) 2024-06-27 17:22:30 -07:00
Anis Eleuch
722118386d
iam: Hot load of the policy during request authorization (#20007)
Hot load a policy document when during account authorization evaluation
to avoid returning 403 during server startup, when not all policies are
already loaded.

Add this support for group policies as well.
2024-06-27 17:03:07 -07:00
Harshavardhana
709612cb37
fix: rebalance upon pool expansion would crash when in progress (#20004)
You can attempt a rebalance first, i.e., start with 2 pools:

```
mc admin rebalance start alias/
```

and after that you can add a new pool; this would
potentially crash:

```
Jun 27 09:22:19 xxx minio[7828]: panic: runtime error: invalid memory address or nil pointer dereference
Jun 27 09:22:19 xxx minio[7828]: [signal SIGSEGV: segmentation violation code=0x1 addr=0x58 pc=0x22cc225]
Jun 27 09:22:19 xxx minio[7828]: goroutine 1 [running]:
Jun 27 09:22:19 xxx minio[7828]: github.com/minio/minio/cmd.(*erasureServerPools).findIndex(...)
```
2024-06-27 11:35:34 -07:00
Harshavardhana
b35d083872
fix: change retry-after to 60s for 503s and 10s for 429s (#19996) 2024-06-26 01:32:06 -07:00
Harshavardhana
5e7b243bde
extend cluster health to return errors for IAM, and Bucket metadata (#19995)
Bonus: make API freeze to be opt-in instead of default
2024-06-26 00:44:34 -07:00
Minio Trusted
f8f9fc77ac Update yaml files to latest version RELEASE.2024-06-26T01-06-18Z 2024-06-26 02:12:12 +00:00
Harshavardhana
499531f0b5 update minio/console v1.6.1
Signed-off-by: Harshavardhana <harsha@minio.io>
2024-06-25 18:06:18 -07:00
Taran Pelkey
3c2141513f
add ListAccessKeysLDAPBulk API to list accessKeys for multiple/all LDAP users (#19835) 2024-06-25 14:21:28 -07:00
Aditya Manthramurthy
602f6a9ad0
Add IAM (re)load timing logs (#19984)
This is useful to debug large IAM load times - the usual cause is
a large number of temporary accounts.
2024-06-25 10:33:10 -07:00
Harshavardhana
22c5a5b91b
add healing retries when there are failed heal attempts (#19986)
transient errors for long running tasks are normal; allow the
drive to retry up to 3 times before giving up on healing
the drive.
2024-06-25 10:32:56 -07:00
jiuker
41f508765d
fix: format the scanner object error (#19991) 2024-06-25 08:54:24 -07:00
Aditya Manthramurthy
7dccd1f589
fix: bootstrap msgs should only be sent at startup (#19985) 2024-06-24 19:30:28 -07:00
Allan Roger Reid
55ff598b23
Refactor the documentation on minio server config notation (#19987)
Refactor minio server config notation to add bracket notation to the TODO list
2024-06-24 19:30:18 -07:00
Harshavardhana
a22ce4550c protect workers and simplify use of atomics (#19982)
Without an atomic load() it is possible that, for a slow receiver,
we would get into a hot loop when logCh is full and there are many
incoming callers.

To avoid this, as a workaround, set BATCH_SIZE greater than 100 to
ensure that a slow receiver gets data in bulk and is not throttled.

This PR, however, fixes the unprotected access to the current workers
value.
2024-06-24 18:15:27 -07:00
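A sketch of guarding a worker counter with atomics; `targetLogger` and its fields are illustrative names, not the actual logger types:

```go
package sketch

import "sync/atomic"

type targetLogger struct {
	workers    atomic.Int64
	maxWorkers int64
	logCh      chan string
}

// maybeStartWorker reads the current worker count atomically, so a
// full logCh with many incoming callers cannot spin on a stale value.
func (t *targetLogger) maybeStartWorker() {
	for {
		n := t.workers.Load()
		if n >= t.maxWorkers {
			return // at capacity; nothing to do
		}
		if t.workers.CompareAndSwap(n, n+1) {
			go t.worker()
			return
		}
	}
}

func (t *targetLogger) worker() {
	defer t.workers.Add(-1)
	for msg := range t.logCh {
		_ = msg // deliver to the (possibly slow) receiver, elided
	}
}
```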
Taran Pelkey
168ae81b1f
Fix error when validating DN that is not under base DN (#19971) 2024-06-21 23:35:35 -07:00
Minio Trusted
5f6a25cdd0 Update yaml files to latest version RELEASE.2024-06-22T05-26-45Z 2024-06-22 06:20:13 +00:00
Harshavardhana
be97ae4c5d
fix: gcs tier going offline due to custom HTTP client (#19973)
specifying a custom HTTP client makes the GCS SDK
ignore the passed credentials; instead, let the GCS
SDK manage the transport.

this PR fixes #19922, a regression from #19565
2024-06-21 22:26:45 -07:00
Anis Eleuch
4d7d008741
bootstrap: Speed up bucket metadata loading (#19969)
Currently, bucket metadata is loaded serially inside the ListBuckets
object API. Fix that by loading bucket metadata in parallel with a
concurrency of (number of erasure sets * 10), which is a good approximation.
2024-06-21 15:22:24 -07:00
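A sketch of bounded-parallel loading with golang.org/x/sync/errgroup; `loadAllBucketMetadata` and `loadOne` are stand-ins for the real loader:

```go
package sketch

import (
	"context"

	"golang.org/x/sync/errgroup"
)

// loadAllBucketMetadata loads bucket metadata with bounded parallelism
// instead of serially; loadOne stands in for the real metadata loader.
func loadAllBucketMetadata(ctx context.Context, buckets []string, erasureSets int,
	loadOne func(context.Context, string) error,
) error {
	g, gctx := errgroup.WithContext(ctx)
	g.SetLimit(erasureSets * 10) // concurrency ~ erasure sets * 10
	for _, bucket := range buckets {
		bucket := bucket
		g.Go(func() error { return loadOne(gctx, bucket) })
	}
	return g.Wait()
}
```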
Klaus Post
2d7a3d1516
Return error from mergeEntryChannels (#19970)
- Add error from mergeEntryChannels to `results`.
- Make sure we check the context error before we close the channel.
2024-06-21 12:06:51 -07:00
Harshavardhana
dfab400d43
reject bootup, if binaries are different in a cluster (#19968) 2024-06-21 07:49:49 -07:00
Pedro Juarez
70078eab10
Fix browser UI animation (#19966)
The Browse UI is not showing the animation because the default
content-security-policy does not trust the file https://unpkg.com/detect-gpu@5.0.38/dist/benchmarks/d-apple.json,
which the GPU library needs in order to identify whether the web browser can play it.
2024-06-20 17:58:58 -07:00
Klaus Post
3415c4dd1e
Fix reconnected deadlock with full queue (#19964)
When a reconnection happens, `handleMessages` must be able to complete and exit. 

A full queue can prevent this.

Deadlock chain (May 10th release)

```
1 @ 0x44110e 0x453125 0x109f88c 0x109f7d5 0x10a472c 0x10a3f72 0x10a34ed 0x4795e1
#	0x109f88b	github.com/minio/minio/internal/grid.(*Connection).send+0x3eb			github.com/minio/minio/internal/grid/connection.go:548
#	0x109f7d4	github.com/minio/minio/internal/grid.(*Connection).queueMsg+0x334		github.com/minio/minio/internal/grid/connection.go:586
#	0x10a472b	github.com/minio/minio/internal/grid.(*Connection).handleAckMux+0xab		github.com/minio/minio/internal/grid/connection.go:1284
#	0x10a3f71	github.com/minio/minio/internal/grid.(*Connection).handleMsg+0x231		github.com/minio/minio/internal/grid/connection.go:1211
#	0x10a34ec	github.com/minio/minio/internal/grid.(*Connection).handleMessages.func1+0x6cc	github.com/minio/minio/internal/grid/connection.go:1019

---> blocks ---> via (Connection).handleMsgWg

1 @ 0x44110e 0x454165 0x454134 0x475325 0x486b08 0x10a161a 0x10a1465 0x2470e67 0x7395a9 0x20e61af 0x20e5f1f 0x7395a9 0x22f781c 0x7395a9 0x22f89a5 0x7395a9 0x22f6e82 0x7395a9 0x22f49a2 0x7395a9 0x2206e45 0x7395a9 0x22f4d9c 0x7395a9 0x210ba06 0x7395a9 0x23089c2 0x7395a9 0x22f86e9 0x7395a9 0xd42582 0x2106c04
#	0x475324	sync.runtime_Semacquire+0x24								runtime/sema.go:62
#	0x486b07	sync.(*WaitGroup).Wait+0x47								sync/waitgroup.go:116
#	0x10a1619	github.com/minio/minio/internal/grid.(*Connection).reconnected+0xb9			github.com/minio/minio/internal/grid/connection.go:857
#	0x10a1464	github.com/minio/minio/internal/grid.(*Connection).handleIncoming+0x384			github.com/minio/minio/internal/grid/connection.go:825
```

Add a queue cleaner in reconnected that will pop old messages so `handleMessages` can 
send messages without blocking and exit appropriately for the connection to be re-established.

Messages are likely dropped by the remote, but we may have some that can succeed, 
so we only drop when running out of space.
2024-06-20 16:11:40 -07:00
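A sketch of the queue-cleaner idea — drop the oldest queued message when the outbound queue is full, so the writer always makes progress; names are illustrative:

```go
package sketch

// enqueueEvictOldest queues msg on out; if the queue is full it drops
// the oldest message to make room, so the message loop can always
// finish sending and exit during a reconnect.
func enqueueEvictOldest(out chan []byte, msg []byte) {
	for {
		select {
		case out <- msg:
			return // queued
		default:
		}
		select {
		case <-out: // full: drop the oldest queued message
		default:
		}
	}
}
```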
Shireesh Anjal
e200808ab7
fix errors in metrics code on macos (#19965)
- do not load proc fs metrics in case of macos
- null-check TimeStat before accessing
2024-06-20 10:55:03 -07:00
Klaus Post
fae563b85d
Add fixed timed restarts to updates (#19960) 2024-06-20 07:49:22 -07:00
Klaus Post
3e6dc02f8f
Add actual inline data to JSON output in xl-meta (#19958)
Add the inlined data as a base64 encoded field and try to add a string version if feasible.

Example:

```
λ xl-meta -data xl.meta
{
  "8e03504e-1123-4957-b272-7bc53eda0d55": {
    "bitrot_valid": true,
    "bytes": 58,
    "data_base64": "Z29sYW5nLm9yZy94L3N5cyB2MC4xNS4wIC8=",
    "data_string": "golang.org/x/sys v0.15.0 /"
  }
}
```

The string will have quotes and newlines escaped to produce valid JSON.

If content isn't valid utf8 or the encoding otherwise fails, only the base64 data will be added.

`-export` can still be used separately to extract the data as files (including bitrot).
2024-06-20 07:46:44 -07:00
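A runnable sketch of how such fields could be produced — an assumption about the approach, not the actual xl-meta code:

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"unicode/utf8"
)

// inlineDataFields always emits base64, and adds a string form only
// when the bytes are valid UTF-8; json.Marshal escapes quotes and
// newlines so the output stays valid JSON.
func inlineDataFields(data []byte) map[string]string {
	out := map[string]string{
		"data_base64": base64.StdEncoding.EncodeToString(data),
	}
	if utf8.Valid(data) {
		out["data_string"] = string(data)
	}
	return out
}

func main() {
	b, _ := json.MarshalIndent(inlineDataFields([]byte("golang.org/x/sys v0.15.0 /")), "", "  ")
	fmt.Println(string(b))
}
```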
Anis Eleuch
95e4cbbfde
Do not ping event targets during cluster initialization (#19959)
S3 operations are frozen during startup; therefore we should avoid pinging
event targets during initialization, since it can stall.
2024-06-20 07:46:02 -07:00
Harshavardhana
2825294b7b
allow server startup to come online with READ success (#19957) 2024-06-19 22:21:31 -07:00
Sveinn
bce93b5cfa
Removing timeout on shutdown (#19956) 2024-06-19 11:42:47 -07:00
Harshavardhana
7a4b250c8b
avoid waiting for quorum health while debugging (#19955) 2024-06-19 10:12:20 -07:00
Anis Eleuch
e5335450a4
test: Healing test to avoid infinite waiting for servers to be up (#19954)
tests: Healing test to avoid infinite waiting for servers to be up

Quit after 15 minutes and print server logs instead
2024-06-19 09:00:38 -07:00
Klaus Post
a6ffdf1dd4
Do not block on distributed unlocks (#19952)
* Prevents blocking when losing quorum (standard on cluster restarts).
* Time out to prevent endless buildup. Timed-out remote locks will be canceled because they miss the refresh anyway.
* Reduces latency for all calls since the wall time for the roundtrip to remotes no longer adds to the requests.
2024-06-19 07:35:19 -07:00
Harshavardhana
69e41f87ef
compute localIPs only once per server startup() (#19951)
repeatedly calling this function is not necessary;
on systems with lots of interfaces, including virtual
ones, it can be noticeably slow.
2024-06-19 07:34:00 -07:00
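A sketch of the compute-once pattern with sync.Once; `getLocalIPs` is an illustrative stand-in for the real lookup:

```go
package sketch

import (
	"net"
	"sync"
)

var (
	localIPsOnce sync.Once
	localIPs     []net.IP
)

// getLocalIPs walks the interfaces exactly once per process; with many
// (virtual) interfaces, repeating this on every call is noticeably slow.
func getLocalIPs() []net.IP {
	localIPsOnce.Do(func() {
		addrs, err := net.InterfaceAddrs()
		if err != nil {
			return
		}
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok {
				localIPs = append(localIPs, ipnet.IP)
			}
		}
	})
	return localIPs
}
```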
Harshavardhana
ee48f9f206
perform healthchecks before initializing everything fully (#19953)
adds more informative logs that provide details on which
erasure set is losing quorum etc.
2024-06-19 07:33:40 -07:00
Sveinn
9ba39d7fad
Removing a channel that was not being used (#19948) 2024-06-19 01:59:39 -07:00
Harshavardhana
d2fb371f80
do not need response record body (#19949)
since the connection is active, the
response recorder body can grow endlessly,
causing a leak, as this bytes buffer is
never given back to the GC due to a goroutine.
2024-06-19 01:59:21 -07:00
Klaus Post
2f9018f03b
Do regular checks for healing status while scanning (#19946) 2024-06-18 09:11:04 -07:00
Harshavardhana
eb990f64a9 update pkger to use v2.3.1 2024-06-17 22:48:41 -07:00
Harshavardhana
bbb64eaade
skip healing properly in the scanner when a drive is hotplugged (#19939)
skip healing properly in scanner when drive is hotplugged

due to how the state is passed around, SkipHealing
might not always reflect the true state of the system, causing
a situation where the scanner might trigger healing on the
same drive that is already being healed. Due to this, competing
heals get triggered that slow each other down.
2024-06-17 16:39:11 -07:00
Harshavardhana
7bd1d899bc
remove overzealous check during HEAD() (#19940)
due to a historic bug in CopyObject() where
an inlined object loses its metadata, the
check causes an incorrect fallback that
verifies the data-dir.

The CopyObject() bug was fixed in ffa91f97942,
however objects affected by it are historic, so
the aforementioned check is stretching too far.

Bonus: simplify fileInfoRaw() to read xl.json as well,
also recreate buckets properly.
2024-06-17 07:29:18 -07:00
Harshavardhana
c91d1ec2e3
fix: avoid metadata cache without data for all callers (#19935) 2024-06-14 06:28:35 -07:00
Minio Trusted
c50b64027d Update yaml files to latest version RELEASE.2024-06-13T22-53-53Z 2024-06-14 05:40:03 +00:00
Cesar N
20960b6a2d
Update console to v1.6.0 (#19933) 2024-06-13 15:53:53 -07:00
Shubhendu
3bd3470d0b
Corrected names of node replication metrics (#19932)
Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2024-06-13 15:26:54 -07:00
Harshavardhana
ba39ed9af7
loadUser() if not able to load() credential return error (#19931) 2024-06-13 15:26:38 -07:00
jiuker
62e6dc950d
fix: do not update metadata cache upon headObject() (#19929) 2024-06-13 08:42:02 -07:00
Harshavardhana
5a5046ce45
upgrade all deps and credits (#19930)
Signed-off-by: Harshavardhana <harsha@minio.io>
2024-06-13 08:34:20 -07:00
Klaus Post
ad04afe381
Fix SSEC multipart checksum replication (#19915)
* Multipart SSEC checksums were not transferred.
* Remove key mismatch logging. This key is user-controlled with SSEC.
* If the source is SSEC and the destination reports ErrSSEEncryptedObject, 
  assume replication is good.
2024-06-12 23:56:12 -07:00
Harshavardhana
ba9f0f2480
fix: attempt to fix CI/CD upgrade tests with docker-compose (#19926) 2024-06-12 22:08:11 -07:00
Harshavardhana
d06b63d056
load credential for in-flights requests as singleflight (#19920)
avoid concurrent callers of LoadUser() initiating
object read() requests when an on-going operation is already in progress.

this avoids many callers hitting the drives, causing I/O
spikes, and also allows credentials to load faster.
2024-06-12 13:47:56 -07:00
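A sketch of the singleflight pattern described above, using golang.org/x/sync/singleflight; `loadUser` and `loadFromDrives` are illustrative, not MinIO's actual IAM code:

```go
package sketch

import "golang.org/x/sync/singleflight"

type credentials struct{ AccessKey string }

var loadGroup singleflight.Group

// loadUser collapses concurrent loads of the same access key into a
// single drive read; loadFromDrives stands in for the real slow read.
func loadUser(accessKey string, loadFromDrives func(string) (credentials, error)) (credentials, error) {
	v, err, _ := loadGroup.Do(accessKey, func() (interface{}, error) {
		return loadFromDrives(accessKey)
	})
	if err != nil {
		return credentials{}, err
	}
	return v.(credentials), nil
}
```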
Andreas Auernhammer
7ce28c3b1d
kms: use GetClientCertificate callback for KES API keys (#19921)
This commit fixes an issue in the KES client configuration
that can cause the following error when connecting to KES:
```
ERROR Failed to connect to KMS: failed to generate data key with KMS key: tls: client certificate is required
```

The Go TLS stack seems to not send a client certificate if it
thinks the client certificate cannot be validated by the peer.
In case of an API key, we don't care about this since we use
public key pinning and the X.509 certificate is just a transport
encoding.

The `GetClientCertificate` callback seems to always be honored, such
that this error does not occur.

Signed-off-by: Andreas Auernhammer <github@aead.dev>
2024-06-12 07:31:26 -07:00
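A sketch of the fix using crypto/tls's GetClientCertificate callback; `kesClientTLS` is an illustrative name:

```go
package sketch

import "crypto/tls"

// kesClientTLS presents the client certificate via GetClientCertificate,
// which the Go TLS stack honors unconditionally, instead of relying on
// tls.Config.Certificates, which may be skipped if the stack believes
// the peer cannot validate the certificate.
func kesClientTLS(cert tls.Certificate) *tls.Config {
	return &tls.Config{
		GetClientCertificate: func(*tls.CertificateRequestInfo) (*tls.Certificate, error) {
			return &cert, nil // always present the API-key certificate
		},
	}
}
```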
Harshavardhana
e3ac4035b9
decrement requests inqueue correctly after the request is processed (#19918) 2024-06-12 01:13:12 -07:00
Harshavardhana
d21b6daa49
fix: avoid crash when delete() returns an error in batch expiration (#19909) 2024-06-11 06:50:53 -07:00
Minio Trusted
76ebb16688 Update yaml files to latest version RELEASE.2024-06-11T03-13-30Z 2024-06-11 06:11:10 +00:00
Harshavardhana
55aa431578
fix: on windows avoid ':' as part of the object name (#19907)
fixes #18865
avoid-colon
2024-06-10 20:13:30 -07:00
Harshavardhana
614981e566
allow purge expired STS while loading credentials (#19905)
the reason for this is to avoid STS mappings being
purged without a successful load of other policies,
and to ensure that only successfully loaded credentials
are handled.

This also avoids the unnecessary cache store which was
implemented earlier as an optimization.
2024-06-10 11:45:50 -07:00
Harshavardhana
b8b956a05d add changes to Makefile to support dev build 2024-06-10 10:41:02 -07:00
Klaus Post
d2eed44c78
Fix replication checksum transfer (#19906)
Compression will be disabled by default if SSE-C is specified. So we can still honor SSE-C.
2024-06-10 10:40:33 -07:00
Anis Eleuch
789cbc6fb2
heal: Dangling check to evaluate object parts separately (#19797) 2024-06-10 08:51:27 -07:00
jiuker
0662c90b5c
fix: copyObject restore with a specific version, update test cases (#19895) 2024-06-10 08:50:49 -07:00
Klaus Post
a2cab02554
Fix SSE-C checksums (#19896)
Compression will be disabled by default if SSE-C is specified. So we can still honor SSE-C.
2024-06-10 08:31:51 -07:00
Harshavardhana
6c7a21df6b
turn-off unexpected debug logging in List() calls (#19903) 2024-06-09 21:34:26 -07:00
Ali Afsharzadeh
f933b0b708
Upgrade setup-helm action from v3 to v4 (#19897) 2024-06-09 02:13:09 -07:00
Shubhendu
9f305273a7
Added tests for replication of checksum headers (#19879)
Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2024-06-08 09:24:15 -07:00
Cesar N
cbd9efcb43
Update console to v1.5.0 (#19899) 2024-06-08 01:12:00 -07:00
Harshavardhana
29a25a538f
fix: make sure we list freeVersions like DEL marker with --versions (#19878)
freeVersions() entries were being incorrectly skipped; list them
properly as valid objects.

Co-authored-by: Krishnan Parthasarathi <Krishnan Parthasarathi>
2024-06-07 15:18:44 -07:00
Harshavardhana
2dd8faaedc remove unnecessary log in Listing() 2024-06-07 14:52:55 -07:00
Klaus Post
f00187033d
Two way streams for upcoming locking enhancements (#19796) 2024-06-07 08:51:52 -07:00
Aditya Manthramurthy
c5141d65ac
Update docker build script to pull all changes (#19892) 2024-06-07 08:43:38 -07:00
Krishnan Parthasarathi
069c4015cd
Don't tier directory objects (#19891)
Directory objects are used by applications that simulate the folder
structure of an on-disk filesystem. These are zero-byte objects with names
ending with '/'. They are only used to check whether a 'folder' exists in
the namespace.
2024-06-07 08:43:17 -07:00
Shubhendu
2f6e03fb60
Calculate correct object size while replication (#19888)
It was missing in the case of `replicateObject` but was already
present for `replicateAll`

Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2024-06-06 12:31:01 -07:00
Klaus Post
0fbb945e13
Disable caching of encrypted objects (#19890)
Don't write encrypted objects to cache, if configured.
2024-06-06 11:39:18 -07:00
Anis Eleuch
b94dd835c9
decom: Fix CurrentSize output when generating the status (#19883)
StartSize starts with the raw free space of all disks in the given pool;
however, during status reporting, CurrentSize does not show the current
raw free space, as has been expected by `mc admin decom status` since it
was written.
2024-06-06 07:30:43 -07:00
Minio Trusted
44fc707423 Update yaml files to latest version RELEASE.2024-06-06T09-36-42Z 2024-06-06 13:36:05 +00:00
Poorna
5aaef9790f
replication: pass checksum headers to replica (#19834) 2024-06-06 02:36:42 -07:00
Bala FA
7edc352d23
Add ILM metrics in metrics-v3 (#19539)
Signed-off-by: Bala.FA <bala@minio.io>
2024-06-06 02:36:25 -07:00
Poorna
850a84b08a
simplify site replication multipart proxying (#19885) 2024-06-05 18:01:15 -07:00
Taran Pelkey
4148754ce0
Check both given and normalized group DN on LDAP policy detach requests (#19876) 2024-06-05 15:42:40 -07:00
Harshavardhana
2107722829
upgrade go-oidc to fix GO-2024-2631 (#19884) 2024-06-05 15:00:34 -07:00
jiuker
d326ba52e9
feat: support batchJob for windows (#19877) 2024-06-05 08:44:53 -07:00
Sveinn
91e1487de4
Add LDAP public key authentication to SFTP (#19833) 2024-06-05 00:51:13 -07:00
Minio Trusted
5ffb2a9605 Update yaml files to latest version RELEASE.2024-06-04T19-20-08Z 2024-06-04 22:25:53 +00:00
Harshavardhana
17fe91d6d1
chore: update all deps (#19875) 2024-06-04 12:20:08 -07:00
jiuker
90a9f2dd70
fix: log diskerror when detect the disk space failed (#19861) 2024-06-04 09:42:03 -07:00
Harshavardhana
d5e48cfd65
fix: remove DriveOPTimeout for REST callers as they don't work properly (#19873)
Go's net/http makes it notoriously difficult to maintain streaming
deadlines per READ/WRITE on the net.Conn; if we add them, they
interfere with Go's internal requirements for an HTTP
connection.

Remove this support for now

fixes #19853
2024-06-04 08:12:57 -07:00
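For illustration, the style of wrapper being removed — a net.Conn that arms a deadline around every Read and Write (a sketch, not the removed code):

```go
package sketch

import (
	"net"
	"time"
)

// opTimeoutConn arms a fresh deadline around every Read and Write;
// such per-operation deadlines interfere with net/http's own
// connection management, which is why this approach was removed.
type opTimeoutConn struct {
	net.Conn
	timeout time.Duration
}

func (c *opTimeoutConn) Read(p []byte) (int, error) {
	if err := c.Conn.SetReadDeadline(time.Now().Add(c.timeout)); err != nil {
		return 0, err
	}
	return c.Conn.Read(p)
}

func (c *opTimeoutConn) Write(p []byte) (int, error) {
	if err := c.Conn.SetWriteDeadline(time.Now().Add(c.timeout)); err != nil {
		return 0, err
	}
	return c.Conn.Write(p)
}
```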
Anis Eleuch
d274566463
race: Fix rare race detected by testing (#19872)
Below is the race warning:

```
WARNING: DATA RACE
Write at 0x00c02d3d27c0 by goroutine 1210:
  github.com/minio/minio/cmd.(*healingTracker).bucketDone()
      github.com/minio/minio/cmd/background-newdisks-heal-ops.go:273 +0x13a
  github.com/minio/minio/cmd.(*erasureObjects).healErasureSet()
      github.com/minio/minio/cmd/global-heal.go:525 +0x2158
  github.com/minio/minio/cmd.healFreshDisk()
      github.com/minio/minio/cmd/background-newdisks-heal-ops.go:450 +0x107e
  github.com/minio/minio/cmd.monitorLocalDisksAndHeal.func1()
      github.com/minio/minio/cmd/background-newdisks-heal-ops.go:528 +0x150
  github.com/minio/minio/cmd.monitorLocalDisksAndHeal.gowrap2()
      github.com/minio/minio/cmd/background-newdisks-heal-ops.go:538 +0x82

Previous read at 0x00c02d3d27c0 by goroutine 1446:
  github.com/minio/minio/cmd.(*erasureObjects).healErasureSet.func5()
      github.com/minio/minio/cmd/global-heal.go:232 +0xfd
```
2024-06-04 08:12:32 -07:00
Shubhendu
39ac720826
Remove hardcoded override as not needed (#19868)
Fixes: https://github.com/minio/minio/issues/19867

Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2024-06-04 06:24:37 -07:00
Shubhendu
21b6204692
Test proxying of DEL marker for bucket replication (#19870)
Make sure to avoid proxying for DEL markers

Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2024-06-04 04:38:26 -07:00
Taran Pelkey
d98faeb26a
Check if LDAP User has attached policy before creating Service Account (#19843)
Check if ldap user has policy before creating
2024-06-03 12:58:48 -07:00
Klaus Post
0a63dc199c
Add trace sizes to more trace types (#19864)
Add trace sizes to

* ILM traces
* Replication traces
* Healing traces
* Decommission traces
* Rebalance traces
* (s)ftp traces
* http traces.
2024-06-03 08:45:54 -07:00
Anis Eleuch
3ba857dfa1
race: Fix detected test race in the internal audit code (#19865) 2024-06-03 08:44:50 -07:00
Klaus Post
a8554c4022
Update madmin (#19862)
Make it include https://github.com/minio/madmin-go/pull/285
2024-06-03 05:00:14 -07:00
Harshavardhana
ba54b39c02
fix: crash when audit webhook queue_dir is not writable (#19854)
This is a regression introduced in the #19275 refactor
2024-06-01 20:03:39 -07:00
Anis Eleuch
2a75225569
kafka: _MINIO_KAFKA_DEBUG to enable sarama debug messages (#19849) 2024-06-01 08:02:59 -07:00
Klaus Post
e72429c79c
Add sizes to traces (#19851)
added to storage and grid traces. Can provide more context for traces that aren't HTTP. Others may apply.
2024-05-31 22:17:37 -07:00
Klaus Post
c5b3f5553f
Add per connection RPC metrics (#19852)
Provides individual and aggregate stats for each RPC connection.

Example:

```
  "rpc": {
   "collectedAt": "2024-05-31T14:33:29.1373103+02:00",
   "connected": 30,
   "disconnected": 0,
   "outgoingStreams": 69,
   "incomingStreams": 0,
   "outgoingBytes": 174822796,
   "incomingBytes": 175821566,
   "outgoingMessages": 768595,
   "incomingMessages": 768589,
   "outQueue": 0,
   "lastPongTime": "2024-05-31T12:33:28Z",
   "byDestination": {
    "http://127.0.0.1:9001": {
     "collectedAt": "2024-05-31T14:33:29.1373103+02:00",
     "connected": 5,
     "disconnected": 0,
     "outgoingStreams": 2,
     "incomingStreams": 0,
     "outgoingBytes": 38432543,
     "incomingBytes": 66604052,
     "outgoingMessages": 229496,
     "incomingMessages": 229575,
     "outQueue": 0,
     "lastPongTime": "2024-05-31T12:33:27Z"
    },
    "http://127.0.0.1:9002": {
     "collectedAt": "2024-05-31T14:33:29.1373103+02:00",
     "connected": 5,
     "disconnected": 0,
     "outgoingStreams": 6,
     "incomingStreams": 0,
     "outgoingBytes": 38215680,
     "incomingBytes": 66121283,
     "outgoingMessages": 228525,
     "incomingMessages": 228510,
     "outQueue": 0,
     "lastPongTime": "2024-05-31T12:33:27Z"
    },
...
```
2024-05-31 22:16:24 -07:00
Klaus Post
d3ae0aaad3
Add max buffering to SFTP (#19848)
Prevent OOM from adversarial use of SFTP upload by setting a 100MB max upload buffer.
2024-05-31 14:28:07 -07:00
Klaus Post
d67bccf861
Add xl-meta partial shard reconstruction (#19841)
Add partial shard reconstruction

* Add partial shard reconstruction
* Fix padding causing the last shard to be rejected
* Add md5 checks on single parts
* Move md5 verified to `verified/filename.ext`
* Move complete (without md5) to `complete/filename.ext.partno`

It's not pretty, but at least now the md5 gives some confidence it works correctly.
2024-05-31 07:49:23 -07:00
Anis Eleuch
1277ad69a6
heal: Remove .healing.bin when all ES drives are healing (#19846)
In the very rare case when all drives in an erasure set need to be healed,
remove .healing.bin from all drives, otherwise healing will be stuck in a
loop.

Also, fix a unit test that sometimes fails due to a wrong test condition.
2024-05-31 07:48:50 -07:00
Harshavardhana
8f93e81afb
change service account embedded policy size limit (#19840)
Bonus: trim off all unnecessary spaces to allow
a real 2048 characters in policies for STS handlers,
and re-use the code in all STS handlers.
2024-05-30 11:10:41 -07:00
Harshavardhana
4af31e654b
avoid pre-populating buffers for deployments < 32GiB memory (#19839) 2024-05-30 04:58:12 -07:00
Harshavardhana
aad50579ba
fix: wire up ILM sub-system properly for help (#19836) 2024-05-30 01:14:58 -07:00
Harshavardhana
38d059b0ae
fix: single node multi-drive must register local drives properly (#19832)
#19688 introduced a regression in drive lookups for single node
multi-drive setups; drive replacement would not work correctly
without this PR.
2024-05-29 13:12:44 -07:00
Klaus Post
bd4eeb4522
Fix flipped EcM, EcN in metadata header (#19831)
Since this is a tuple encoded field we can just flip the struct members.
2024-05-29 12:14:09 -07:00
jiuker
03e3493288
fix: correct parse the tagging error for PostPolicyBucketHandler (#19825) 2024-05-29 11:50:46 -07:00
Harshavardhana
64baedf5a4
fix: hide prefixes for Hadoop properly (#19821) 2024-05-28 15:53:15 -07:00
Minio Trusted
2f64d5f77e Update yaml files to latest version RELEASE.2024-05-28T17-19-04Z 2024-05-28 19:23:04 +00:00
793 changed files with 41497 additions and 22823 deletions

View File

@ -1,14 +1,19 @@
---
name: Bug report
about: Create a report to help us improve
about: Report a bug in MinIO (community edition is source-only)
title: ''
labels: community, triage
assignees: ''
---
## NOTE
If this case is urgent, please subscribe to [Subnet](https://min.io/pricing) so that our 24/7 support team may help you faster.
## IMPORTANT NOTES
**Community Edition**: MinIO community edition is now source-only. Install via `go install github.com/minio/minio@latest`
**Feature Requests**: We are no longer accepting feature requests for the community edition. For feature requests and enterprise support, please subscribe to [MinIO Enterprise Support](https://min.io/pricing).
**Urgent Issues**: If this case is urgent or affects production, please subscribe to [SUBNET](https://min.io/pricing) for 24/7 enterprise support.
<!--- Provide a general summary of the issue in the Title above -->

View File

@ -2,7 +2,7 @@ blank_issues_enabled: false
contact_links:
- name: MinIO Community Support
url: https://slack.min.io
about: Join here for Community Support
- name: MinIO SUBNET Support
about: Community support via Slack - for questions and discussions
- name: MinIO Enterprise Support (SUBNET)
url: https://min.io/pricing
about: Join here for Enterprise Support
about: Enterprise support with SLA - for production deployments and feature requests

View File

@ -1,20 +0,0 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: community, triage
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.

View File

@ -20,7 +20,7 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
go-version: [1.22.x]
go-version: [1.24.x]
os: [ubuntu-latest]
steps:
- uses: actions/checkout@v4

View File

@ -1,59 +0,0 @@
name: FIPS Build Test
on:
pull_request:
branches:
- master
# This ensures that previous jobs for the PR are canceled when the PR is
# updated.
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref }}
cancel-in-progress: true
permissions:
contents: read
jobs:
build:
name: Go BoringCrypto ${{ matrix.go-version }} on ${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
matrix:
go-version: [1.22.x]
os: [ubuntu-latest]
steps:
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version: ${{ matrix.go-version }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Setup dockerfile for build test
run: |
GO_VERSION=$(go version | cut -d ' ' -f 3 | sed 's/go//')
echo Detected go version $GO_VERSION
cat > Dockerfile.fips.test <<EOF
FROM golang:${GO_VERSION}
COPY . /minio
WORKDIR /minio
ENV GOEXPERIMENT=boringcrypto
RUN make
EOF
- name: Build
uses: docker/build-push-action@v3
with:
context: .
file: Dockerfile.fips.test
push: false
load: true
tags: minio/fips-test:latest
# This should fail if grep returns non-zero exit
- name: Test binary
run: |
docker run --rm minio/fips-test:latest ./minio --version
docker run --rm -i minio/fips-test:latest /bin/bash -c 'go tool nm ./minio | grep FIPS | grep -q FIPS'

View File

@ -20,7 +20,7 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
go-version: [1.22.x]
go-version: [1.24.x]
os: [ubuntu-latest]
steps:
- uses: actions/checkout@v4

View File

@ -20,24 +20,14 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
go-version: [1.22.x]
os: [ubuntu-latest, Windows]
go-version: [1.24.x]
os: [ubuntu-latest]
steps:
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version: ${{ matrix.go-version }}
check-latest: true
- name: Build on ${{ matrix.os }}
if: matrix.os == 'Windows'
env:
CGO_ENABLED: 0
GO111MODULE: on
run: |
Set-MpPreference -DisableRealtimeMonitoring $true
netsh int ipv4 set dynamicport tcp start=60000 num=61000
go build --ldflags="-s -w" -o %GOPATH%\bin\minio.exe
go test -v --timeout 120m ./...
- name: Build on ${{ matrix.os }}
if: matrix.os == 'ubuntu-latest'
env:

39
.github/workflows/go-resiliency.yml vendored Normal file
View File

@ -0,0 +1,39 @@
name: Resiliency Functional Tests
on:
pull_request:
branches:
- master
# This ensures that previous jobs for the PR are canceled when the PR is
# updated.
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref }}
cancel-in-progress: true
permissions:
contents: read
jobs:
build:
name: Go ${{ matrix.go-version }} on ${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
matrix:
go-version: [1.24.x]
os: [ubuntu-latest]
steps:
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version: ${{ matrix.go-version }}
check-latest: true
- name: Build on ${{ matrix.os }}
if: matrix.os == 'ubuntu-latest'
env:
CGO_ENABLED: 0
GO111MODULE: on
run: |
sudo sysctl net.ipv6.conf.all.disable_ipv6=0
sudo sysctl net.ipv6.conf.default.disable_ipv6=0
make test-resiliency

View File

@ -20,7 +20,7 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
go-version: [1.22.x]
go-version: [1.24.x]
os: [ubuntu-latest]
steps:
- uses: actions/checkout@v4
@ -39,3 +39,4 @@ jobs:
sudo sysctl net.ipv6.conf.all.disable_ipv6=0
sudo sysctl net.ipv6.conf.default.disable_ipv6=0
make verify
make test-timeout

View File

@ -22,7 +22,7 @@ jobs:
uses: actions/checkout@v4
- name: Install Helm
uses: azure/setup-helm@v3
uses: azure/setup-helm@v4
- name: Run helm lint
run: |

View File

@ -61,7 +61,7 @@ jobs:
# are turned off - i.e. if ldap="", then ldap server is not enabled for
# the tests.
matrix:
go-version: [1.22.x]
go-version: [1.24.x]
ldap: ["", "localhost:389"]
etcd: ["", "http://localhost:2379"]
openid: ["", "http://127.0.0.1:5556/dex"]
@ -125,3 +125,37 @@ jobs:
if: matrix.openid == 'http://127.0.0.1:5556/dex'
run: |
make test-site-replication-oidc
iam-import-with-missing-entities:
name: Test IAM import in new cluster with missing entities
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version: ${{ matrix.go-version }}
check-latest: true
- name: Checkout minio-iam-testing
uses: actions/checkout@v4
with:
repository: minio/minio-iam-testing
path: minio-iam-testing
- name: Test import of IAM artifacts when in fresh cluster there are missing groups etc
run: |
make test-iam-import-with-missing-entities
iam-import-with-openid:
name: Test IAM import in new cluster with openid configurations
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version: ${{ matrix.go-version }}
check-latest: true
- name: Checkout minio-iam-testing
uses: actions/checkout@v4
with:
repository: minio/minio-iam-testing
path: minio-iam-testing
- name: Test import of IAM artifacts when in fresh cluster with openid configurations
run: |
make test-iam-import-with-openid

View File

@ -29,7 +29,7 @@ jobs:
- name: setup-go-step
uses: actions/setup-go@v5
with:
go-version: 1.22.x
go-version: 1.24.x
- name: github sha short
id: vars
@ -55,9 +55,10 @@ jobs:
run: |
${GITHUB_WORKSPACE}/.github/workflows/run-mint.sh "erasure" "minio" "minio123" "${{ steps.vars.outputs.sha_short }}"
- name: resiliency
run: |
${GITHUB_WORKSPACE}/.github/workflows/run-mint.sh "resiliency" "minio" "minio123" "${{ steps.vars.outputs.sha_short }}"
# FIXME: re-enable this when we have a valid way to add deadlines for PUT()s (internode CreateFile)
# - name: resiliency
# run: |
# ${GITHUB_WORKSPACE}/.github/workflows/run-mint.sh "resiliency" "minio" "minio123" "${{ steps.vars.outputs.sha_short }}"
- name: The job must cleanup
if: ${{ always() }}

View File

@ -21,7 +21,7 @@ jobs:
strategy:
matrix:
go-version: [1.22.x]
go-version: [1.24.x]
steps:
- uses: actions/checkout@v4
@ -40,6 +40,7 @@ jobs:
sudo sysctl net.ipv6.conf.all.disable_ipv6=0
sudo sysctl net.ipv6.conf.default.disable_ipv6=0
make test-ilm
make test-ilm-transition
- name: Test PBAC
run: |
@ -70,3 +71,9 @@ jobs:
sudo sysctl net.ipv6.conf.all.disable_ipv6=0
sudo sysctl net.ipv6.conf.default.disable_ipv6=0
make test-versioning
- name: Test Multipart upload with failures
run: |
sudo sysctl net.ipv6.conf.all.disable_ipv6=0
sudo sysctl net.ipv6.conf.default.disable_ipv6=0
make test-multipart

View File

@ -20,7 +20,7 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
go-version: [1.22.x]
go-version: [1.24.x]
os: [ubuntu-latest]
steps:

View File

@ -15,6 +15,9 @@ docker volume rm $(docker volume ls -f dangling=true) || true
## change working directory
cd .github/workflows/mint
## always pull latest
docker pull docker.io/minio/mint:edge
docker-compose -f minio-${MODE}.yaml up -d
sleep 1m

View File

@ -20,7 +20,7 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
go-version: [1.22.x]
go-version: [1.24.x]
os: [ubuntu-latest]
steps:

View File

@ -21,8 +21,8 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: 1.22.3
check-latest: true
go-version: 1.24.x
cache: false
- name: Get official govulncheck
run: go install golang.org/x/vuln/cmd/govulncheck@latest
shell: bash

View File

@ -1,36 +1,64 @@
linters-settings:
gofumpt:
simplify: true
misspell:
locale: US
staticcheck:
checks: ['all', '-ST1005', '-ST1000', '-SA4000', '-SA9004', '-SA1019', '-SA1008', '-U1000', '-ST1016']
version: "2"
linters:
disable-all: true
default: none
enable:
- durationcheck
- forcetypeassert
- gocritic
- gofumpt
- goimports
- gomodguard
- govet
- ineffassign
- misspell
- revive
- staticcheck
- tenv
- typecheck
- unconvert
- unused
- usetesting
- whitespace
settings:
misspell:
locale: US
staticcheck:
checks:
- all
- -SA1008
- -SA1019
- -SA4000
- -SA9004
- -ST1000
- -ST1005
- -ST1016
- -U1000
exclusions:
generated: lax
rules:
- linters:
- forcetypeassert
path: _test\.go
- path: (.+)\.go$
text: 'empty-block:'
- path: (.+)\.go$
text: 'unused-parameter:'
- path: (.+)\.go$
text: 'dot-imports:'
- path: (.+)\.go$
text: should have a package comment
- path: (.+)\.go$
text: error strings should not be capitalized or end with punctuation or a newline
paths:
- third_party$
- builtin$
- examples$
issues:
exclude-use-default: false
exclude:
- "empty-block:"
- "unused-parameter:"
- "dot-imports:"
- should have a package comment
- error strings should not be capitalized or end with punctuation or a newline
max-issues-per-linter: 100
max-same-issues: 100
formatters:
enable:
- gofumpt
- goimports
exclusions:
generated: lax
paths:
- third_party$
- builtin$
- examples$

View File

@ -1,8 +1,5 @@
[files]
extend-exclude = [
".git/",
"docs/",
]
extend-exclude = [".git/", "docs/", "CREDITS", "go.mod", "go.sum"]
ignore-hidden = false
[default]
@ -16,6 +13,8 @@ extend-ignore-re = [
"MIIDBTCCAe2gAwIBAgIQWHw7h.*",
'http\.Header\{"X-Amz-Server-Side-Encryptio":',
"ZoEoZdLlzVbOlT9rbhD7ZN7TLyiYXSAlB79uGEge",
"ERRO:",
"(?Rm)^.*(#|//)\\s*spellchecker:disable-line$", # ignore line
]
[default.extend-words]
@ -36,3 +35,11 @@ extend-ignore-re = [
"TestGetPartialObjectMisAligned" = "TestGetPartialObjectMisAligned"
"thr" = "thr"
"toi" = "toi"
[type.go]
extend-ignore-identifiers-re = [
# Variants of `typ` used to mean `type` in golang as it is otherwise a
# keyword - some of these (like typ1 -> type1) can be fixed, but probably
# not worth the effort.
"[tT]yp[0-9]*",
]

View File

@ -12,8 +12,9 @@ Fork [MinIO upstream](https://github.com/minio/minio/fork) source repository to
```sh
git clone https://github.com/minio/minio
cd minio
go install -v
ls /go/bin/minio
ls $(go env GOPATH)/bin/minio
```
### Set up git remote as ``upstream``

9702
CREDITS

File diff suppressed because it is too large

View File

@ -1,6 +1,14 @@
FROM minio/minio:latest
COPY ./minio /usr/bin/minio
ARG TARGETARCH
ARG RELEASE
RUN chmod -R 777 /usr/bin
COPY ./minio-${TARGETARCH}.${RELEASE} /usr/bin/minio
COPY ./minio-${TARGETARCH}.${RELEASE}.minisig /usr/bin/minio.minisig
COPY ./minio-${TARGETARCH}.${RELEASE}.sha256sum /usr/bin/minio.sha256sum
COPY dockerscripts/docker-entrypoint.sh /usr/bin/docker-entrypoint.sh
ENTRYPOINT ["/usr/bin/docker-entrypoint.sh"]

View File

@ -1,12 +0,0 @@
FROM minio/minio:latest
ENV PATH=/opt/bin:$PATH
COPY ./minio /opt/bin/minio
COPY dockerscripts/docker-entrypoint.sh /usr/bin/docker-entrypoint.sh
ENTRYPOINT ["/usr/bin/docker-entrypoint.sh"]
VOLUME ["/data"]
CMD ["minio"]

View File

@ -1,24 +1,26 @@
FROM golang:1.21-alpine as build
FROM golang:1.24-alpine as build
ARG TARGETARCH
ARG RELEASE
ENV GOPATH /go
ENV CGO_ENABLED 0
ENV GOPATH=/go
ENV CGO_ENABLED=0
# Install curl and minisign
RUN apk add -U --no-cache ca-certificates && \
apk add -U --no-cache curl && \
go install aead.dev/minisign/cmd/minisign@v0.2.1
# Download minio binary and signature file
# Download minio binary and signature files
RUN curl -s -q https://dl.min.io/server/minio/hotfixes/linux-${TARGETARCH}/archive/minio.${RELEASE} -o /go/bin/minio && \
curl -s -q https://dl.min.io/server/minio/hotfixes/linux-${TARGETARCH}/archive/minio.${RELEASE}.minisig -o /go/bin/minio.minisig && \
curl -s -q https://dl.min.io/server/minio/hotfixes/linux-${TARGETARCH}/archive/minio.${RELEASE}.sha256sum -o /go/bin/minio.sha256sum && \
chmod +x /go/bin/minio
# Download mc binary and signature file
# Download mc binary and signature files
RUN curl -s -q https://dl.min.io/client/mc/release/linux-${TARGETARCH}/mc -o /go/bin/mc && \
curl -s -q https://dl.min.io/client/mc/release/linux-${TARGETARCH}/mc.minisig -o /go/bin/mc.minisig && \
curl -s -q https://dl.min.io/client/mc/release/linux-${TARGETARCH}/mc.sha256sum -o /go/bin/mc.sha256sum && \
chmod +x /go/bin/mc
RUN if [ "$TARGETARCH" = "amd64" ]; then \
@ -51,9 +53,11 @@ ENV MINIO_ACCESS_KEY_FILE=access_key \
MINIO_CONFIG_ENV_FILE=config.env \
MC_CONFIG_DIR=/tmp/.mc
RUN chmod -R 777 /usr/bin
COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=build /go/bin/minio /usr/bin/minio
COPY --from=build /go/bin/mc /usr/bin/mc
COPY --from=build /go/bin/minio* /usr/bin/
COPY --from=build /go/bin/mc* /usr/bin/
COPY --from=build /go/bin/cur* /usr/bin/
COPY CREDITS /licenses/CREDITS

View File

@ -1,35 +1,39 @@
FROM golang:1.21-alpine as build
FROM golang:1.24-alpine AS build
ARG TARGETARCH
ARG RELEASE
ENV GOPATH /go
ENV CGO_ENABLED 0
ENV GOPATH=/go
ENV CGO_ENABLED=0
WORKDIR /build
# Install curl and minisign
RUN apk add -U --no-cache ca-certificates && \
apk add -U --no-cache curl && \
apk add -U --no-cache bash && \
go install aead.dev/minisign/cmd/minisign@v0.2.1
# Download minio binary and signature file
# Download minio binary and signature files
RUN curl -s -q https://dl.min.io/server/minio/release/linux-${TARGETARCH}/archive/minio.${RELEASE} -o /go/bin/minio && \
curl -s -q https://dl.min.io/server/minio/release/linux-${TARGETARCH}/archive/minio.${RELEASE}.minisig -o /go/bin/minio.minisig && \
curl -s -q https://dl.min.io/server/minio/release/linux-${TARGETARCH}/archive/minio.${RELEASE}.sha256sum -o /go/bin/minio.sha256sum && \
chmod +x /go/bin/minio
# Download mc binary and signature file
# Download mc binary and signature files
RUN curl -s -q https://dl.min.io/client/mc/release/linux-${TARGETARCH}/mc -o /go/bin/mc && \
curl -s -q https://dl.min.io/client/mc/release/linux-${TARGETARCH}/mc.minisig -o /go/bin/mc.minisig && \
curl -s -q https://dl.min.io/client/mc/release/linux-${TARGETARCH}/mc.sha256sum -o /go/bin/mc.sha256sum && \
chmod +x /go/bin/mc
RUN if [ "$TARGETARCH" = "amd64" ]; then \
curl -L -s -q https://github.com/moparisthebest/static-curl/releases/latest/download/curl-${TARGETARCH} -o /go/bin/curl; \
chmod +x /go/bin/curl; \
fi
# Verify binary signature using public key "RWTx5Zr1tiHQLwG9keckT0c45M3AGeHD6IvimQHpyRywVWGbP1aVSGavRUN"
RUN minisign -Vqm /go/bin/minio -x /go/bin/minio.minisig -P RWTx5Zr1tiHQLwG9keckT0c45M3AGeHD6IvimQHpyRywVWGbP1aVSGav && \
minisign -Vqm /go/bin/mc -x /go/bin/mc.minisig -P RWTx5Zr1tiHQLwG9keckT0c45M3AGeHD6IvimQHpyRywVWGbP1aVSGav
COPY dockerscripts/download-static-curl.sh /build/download-static-curl
RUN chmod +x /build/download-static-curl && \
/build/download-static-curl
FROM registry.access.redhat.com/ubi9/ubi-micro:latest
ARG RELEASE
@ -51,10 +55,12 @@ ENV MINIO_ACCESS_KEY_FILE=access_key \
MINIO_CONFIG_ENV_FILE=config.env \
MC_CONFIG_DIR=/tmp/.mc
RUN chmod -R 777 /usr/bin
COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=build /go/bin/minio /usr/bin/minio
COPY --from=build /go/bin/mc /usr/bin/mc
COPY --from=build /go/bin/cur* /usr/bin/
COPY --from=build /go/bin/minio* /usr/bin/
COPY --from=build /go/bin/mc* /usr/bin/
COPY --from=build /go/bin/curl* /usr/bin/
COPY CREDITS /licenses/CREDITS
COPY LICENSE /licenses/LICENSE

View File

@ -1,59 +0,0 @@
FROM golang:1.21-alpine as build
ARG TARGETARCH
ARG RELEASE
ENV GOPATH /go
ENV CGO_ENABLED 0
# Install curl and minisign
RUN apk add -U --no-cache ca-certificates && \
apk add -U --no-cache curl && \
go install aead.dev/minisign/cmd/minisign@v0.2.1
# Download minio binary and signature file
RUN curl -s -q https://dl.min.io/server/minio/release/linux-${TARGETARCH}/archive/minio.${RELEASE}.fips -o /go/bin/minio && \
curl -s -q https://dl.min.io/server/minio/release/linux-${TARGETARCH}/archive/minio.${RELEASE}.fips.minisig -o /go/bin/minio.minisig && \
chmod +x /go/bin/minio
RUN if [ "$TARGETARCH" = "amd64" ]; then \
curl -L -s -q https://github.com/moparisthebest/static-curl/releases/latest/download/curl-${TARGETARCH} -o /go/bin/curl; \
chmod +x /go/bin/curl; \
fi
# Verify binary signature using public key "RWTx5Zr1tiHQLwG9keckT0c45M3AGeHD6IvimQHpyRywVWGbP1aVSGavRUN"
RUN minisign -Vqm /go/bin/minio -x /go/bin/minio.minisig -P RWTx5Zr1tiHQLwG9keckT0c45M3AGeHD6IvimQHpyRywVWGbP1aVSGav
FROM registry.access.redhat.com/ubi9/ubi-micro:latest
ARG RELEASE
LABEL name="MinIO" \
vendor="MinIO Inc <dev@min.io>" \
maintainer="MinIO Inc <dev@min.io>" \
version="${RELEASE}" \
release="${RELEASE}" \
summary="MinIO is a High Performance Object Storage, API compatible with Amazon S3 cloud storage service." \
description="MinIO object storage is fundamentally different. Designed for performance and the S3 API, it is 100% open-source. MinIO is ideal for large, private cloud environments with stringent security requirements and delivers mission-critical availability across a diverse range of workloads."
ENV MINIO_ACCESS_KEY_FILE=access_key \
MINIO_SECRET_KEY_FILE=secret_key \
MINIO_ROOT_USER_FILE=access_key \
MINIO_ROOT_PASSWORD_FILE=secret_key \
MINIO_KMS_SECRET_KEY_FILE=kms_master_key \
MINIO_UPDATE_MINISIGN_PUBKEY="RWTx5Zr1tiHQLwG9keckT0c45M3AGeHD6IvimQHpyRywVWGbP1aVSGav" \
MINIO_CONFIG_ENV_FILE=config.env
COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=build /go/bin/minio /usr/bin/minio
COPY --from=build /go/bin/cur* /usr/bin/
COPY CREDITS /licenses/CREDITS
COPY LICENSE /licenses/LICENSE
COPY dockerscripts/docker-entrypoint.sh /usr/bin/docker-entrypoint.sh
EXPOSE 9000
VOLUME ["/data"]
ENTRYPOINT ["/usr/bin/docker-entrypoint.sh"]
CMD ["minio"]

View File

@ -1,24 +1,26 @@
FROM golang:1.21-alpine as build
FROM golang:1.24-alpine AS build
ARG TARGETARCH
ARG RELEASE
ENV GOPATH /go
ENV CGO_ENABLED 0
ENV GOPATH=/go
ENV CGO_ENABLED=0
# Install curl and minisign
RUN apk add -U --no-cache ca-certificates && \
apk add -U --no-cache curl && \
go install aead.dev/minisign/cmd/minisign@v0.2.1
# Download minio binary and signature file
# Download minio binary and signature files
RUN curl -s -q https://dl.min.io/server/minio/release/linux-${TARGETARCH}/archive/minio.${RELEASE} -o /go/bin/minio && \
curl -s -q https://dl.min.io/server/minio/release/linux-${TARGETARCH}/archive/minio.${RELEASE}.minisig -o /go/bin/minio.minisig && \
curl -s -q https://dl.min.io/server/minio/release/linux-${TARGETARCH}/archive/minio.${RELEASE}.sha256sum -o /go/bin/minio.sha256sum && \
chmod +x /go/bin/minio
# Download mc binary and signature file
# Download mc binary and signature files
RUN curl -s -q https://dl.min.io/client/mc/release/linux-${TARGETARCH}/mc -o /go/bin/mc && \
curl -s -q https://dl.min.io/client/mc/release/linux-${TARGETARCH}/mc.minisig -o /go/bin/mc.minisig && \
curl -s -q https://dl.min.io/client/mc/release/linux-${TARGETARCH}/mc.sha256sum -o /go/bin/mc.sha256sum && \
chmod +x /go/bin/mc
RUN if [ "$TARGETARCH" = "amd64" ]; then \
@ -51,9 +53,11 @@ ENV MINIO_ACCESS_KEY_FILE=access_key \
MINIO_CONFIG_ENV_FILE=config.env \
MC_CONFIG_DIR=/tmp/.mc
RUN chmod -R 777 /usr/bin
COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=build /go/bin/minio /usr/bin/minio
COPY --from=build /go/bin/mc /usr/bin/mc
COPY --from=build /go/bin/minio* /usr/bin/
COPY --from=build /go/bin/mc* /usr/bin/
COPY --from=build /go/bin/cur* /usr/bin/
COPY CREDITS /licenses/CREDITS

View File

@ -2,8 +2,8 @@ PWD := $(shell pwd)
GOPATH := $(shell go env GOPATH)
LDFLAGS := $(shell go run buildscripts/gen-ldflags.go)
GOARCH := $(shell go env GOARCH)
GOOS := $(shell go env GOOS)
GOOS ?= $(shell go env GOOS)
GOARCH ?= $(shell go env GOARCH)
VERSION ?= $(shell git describe --tags)
REPO ?= quay.io/minio
@ -24,8 +24,6 @@ help: ## print this help
getdeps: ## fetch necessary dependencies
@mkdir -p ${GOPATH}/bin
@echo "Installing golangci-lint" && curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(GOLANGCI_DIR)
@echo "Installing msgp" && go install -v github.com/tinylib/msgp@v1.1.10-0.20240227114326-6d6f813fff1b
@echo "Installing stringer" && go install -v golang.org/x/tools/cmd/stringer@latest
crosscompile: ## cross compile minio
@(env bash $(PWD)/buildscripts/cross-compile.sh)
@ -34,11 +32,14 @@ verifiers: lint check-gen
check-gen: ## check for updated autogenerated files
@go generate ./... >/dev/null
@go mod tidy -compat=1.21
@(! git diff --name-only | grep '_gen.go$$') || (echo "Non-committed changes in auto-generated code is detected, please commit them to proceed." && false)
@(! git diff --name-only | grep 'go.sum') || (echo "Non-committed changes in auto-generated go.sum is detected, please commit them to proceed." && false)
lint: getdeps ## runs golangci-lint suite of linters
@echo "Running $@ check"
@$(GOLANGCI) run --build-tags kqueue --timeout=10m --config ./.golangci.yml
@command typos && typos ./ || echo "typos binary is not found.. skipping.."
lint-fix: getdeps ## runs golangci-lint suite of linters with automatic fixes
@echo "Running $@ check"
@ -47,7 +48,7 @@ lint-fix: getdeps ## runs golangci-lint suite of linters with automatic fixes
check: test
test: verifiers build ## builds minio, runs linters, tests
@echo "Running unit tests"
@MINIO_API_REQUESTS_MAX=10000 CGO_ENABLED=0 go test -v -tags kqueue ./...
@MINIO_API_REQUESTS_MAX=10000 CGO_ENABLED=0 go test -v -tags kqueue,dev ./...
test-root-disable: install-race
@echo "Running minio root lockdown tests"
@ -57,6 +58,10 @@ test-ilm: install-race
@echo "Running ILM tests"
@env bash $(PWD)/docs/bucket/replication/setup_ilm_expiry_replication.sh
test-ilm-transition: install-race
@echo "Running ILM tiering tests with healing"
@env bash $(PWD)/docs/bucket/lifecycle/setup_ilm_transition.sh
test-pbac: install-race
@echo "Running bucket policies tests"
@env bash $(PWD)/docs/iam/policies/pbac-tests.sh
@ -84,16 +89,24 @@ test-race: verifiers build ## builds minio, runs linters, tests (race)
@echo "Running unit tests under -race"
@(env bash $(PWD)/buildscripts/race.sh)
test-iam: build ## verify IAM (external IDP, etcd backends)
test-iam: install-race ## verify IAM (external IDP, etcd backends)
@echo "Running tests for IAM (external IDP, etcd backends)"
@MINIO_API_REQUESTS_MAX=10000 CGO_ENABLED=0 go test -tags kqueue -v -run TestIAM* ./cmd
@MINIO_API_REQUESTS_MAX=10000 CGO_ENABLED=0 go test -timeout 15m -tags kqueue,dev -v -run TestIAM* ./cmd
@echo "Running tests for IAM (external IDP, etcd backends) with -race"
@MINIO_API_REQUESTS_MAX=10000 GORACE=history_size=7 CGO_ENABLED=1 go test -race -tags kqueue -v -run TestIAM* ./cmd
@MINIO_API_REQUESTS_MAX=10000 GORACE=history_size=7 CGO_ENABLED=1 go test -timeout 15m -race -tags kqueue,dev -v -run TestIAM* ./cmd
test-iam-ldap-upgrade-import: build ## verify IAM (external LDAP IDP)
test-iam-ldap-upgrade-import: install-race ## verify IAM (external LDAP IDP)
@echo "Running upgrade tests for IAM (LDAP backend)"
@env bash $(PWD)/buildscripts/minio-iam-ldap-upgrade-import-test.sh
test-iam-import-with-missing-entities: install-race ## test import of external iam config with missing entities
@echo "Test IAM import configurations with missing entities"
@env bash $(PWD)/docs/distributed/iam-import-with-missing-entities.sh
test-iam-import-with-openid: install-race
@echo "Test IAM import configurations with openid"
@env bash $(PWD)/docs/distributed/iam-import-with-openid.sh
test-sio-error:
@(env bash $(PWD)/docs/bucket/replication/sio-error.sh)
@ -106,7 +119,10 @@ test-replication-3site:
test-delete-replication:
@(env bash $(PWD)/docs/bucket/replication/delete-replication.sh)
test-replication: install-race test-replication-2site test-replication-3site test-delete-replication test-sio-error ## verify multi site replication
test-delete-marker-proxying:
@(env bash $(PWD)/docs/bucket/replication/test_del_marker_proxying.sh)
test-replication: install-race test-replication-2site test-replication-3site test-delete-replication test-sio-error test-delete-marker-proxying ## verify multi site replication
@echo "Running tests for replicating three sites"
test-site-replication-ldap: install-race ## verify automatic site replication
@ -127,6 +143,14 @@ test-site-replication-minio: install-race ## verify automatic site replication
@echo "Running tests for automatic site replication of SSE-C objects with compression enabled for site"
@(env bash $(PWD)/docs/site-replication/run-ssec-object-replication-with-compression.sh)
test-multipart: install-race ## test multipart
@echo "Test multipart behavior when part files are missing"
@(env bash $(PWD)/buildscripts/multipart-quorum-test.sh)
test-timeout: install-race ## test server timeout
@echo "Test server timeout"
@(env bash $(PWD)/buildscripts/test-timeout.sh)
verify: install-race ## verify minio various setups
@echo "Verifying build with race"
@(env bash $(PWD)/buildscripts/verify-build.sh)
@ -154,7 +178,7 @@ build-debugging:
build: checks build-debugging ## builds minio to $(PWD)
@echo "Building minio binary to './minio'"
@CGO_ENABLED=0 go build -tags kqueue -trimpath --ldflags "$(LDFLAGS)" -o $(PWD)/minio 1>/dev/null
@CGO_ENABLED=0 GOOS=$(GOOS) GOARCH=$(GOARCH) go build -tags kqueue -trimpath --ldflags "$(LDFLAGS)" -o $(PWD)/minio 1>/dev/null
hotfix-vars:
$(eval LDFLAGS := $(shell MINIO_RELEASE="RELEASE" MINIO_HOTFIX="hotfix.$(shell git rev-parse --short HEAD)" go run buildscripts/gen-ldflags.go $(shell git describe --tags --abbrev=0 | \
@ -162,9 +186,9 @@ hotfix-vars:
$(eval VERSION := $(shell git describe --tags --abbrev=0).hotfix.$(shell git rev-parse --short HEAD))
hotfix: hotfix-vars clean install ## builds minio binary with hotfix tags
@wget -q -c https://github.com/minio/pkger/releases/download/v2.2.9/pkger_2.2.9_linux_amd64.deb
@wget -q -c https://raw.githubusercontent.com/minio/minio-service/v1.0.1/linux-systemd/distributed/minio.service
@sudo apt install ./pkger_2.2.9_linux_amd64.deb --yes
@wget -q -c https://github.com/minio/pkger/releases/download/v2.3.11/pkger_2.3.11_linux_amd64.deb
@wget -q -c https://raw.githubusercontent.com/minio/minio-service/v1.1.1/linux-systemd/distributed/minio.service
@sudo apt install ./pkger_2.3.11_linux_amd64.deb --yes
@mkdir -p minio-release/$(GOOS)-$(GOARCH)/archive
@cp -af ./minio minio-release/$(GOOS)-$(GOARCH)/minio
@cp -af ./minio minio-release/$(GOOS)-$(GOARCH)/minio.$(VERSION)
@ -174,11 +198,11 @@ hotfix: hotfix-vars clean install ## builds minio binary with hotfix tags
@pkger -r $(VERSION) --ignore
hotfix-push: hotfix
@scp -q -r minio-release/$(GOOS)-$(GOARCH)/* minio@dl-0.minio.io:~/releases/server/minio/hotfixes/linux-amd64/
@scp -q -r minio-release/$(GOOS)-$(GOARCH)/* minio@dl-0.minio.io:~/releases/server/minio/hotfixes/linux-amd64/archive
@scp -q -r minio-release/$(GOOS)-$(GOARCH)/* minio@dl-1.minio.io:~/releases/server/minio/hotfixes/linux-amd64/
@scp -q -r minio-release/$(GOOS)-$(GOARCH)/* minio@dl-1.minio.io:~/releases/server/minio/hotfixes/linux-amd64/archive
@echo "Published new hotfix binaries at https://dl.min.io/server/minio/hotfixes/linux-amd64/archive/minio.$(VERSION)"
@scp -q -r minio-release/$(GOOS)-$(GOARCH)/* minio@dl-0.minio.io:~/releases/server/minio/hotfixes/linux-$(GOOS)/
@scp -q -r minio-release/$(GOOS)-$(GOARCH)/* minio@dl-0.minio.io:~/releases/server/minio/hotfixes/linux-$(GOOS)/archive
@scp -q -r minio-release/$(GOOS)-$(GOARCH)/* minio@dl-1.minio.io:~/releases/server/minio/hotfixes/linux-$(GOOS)/
@scp -q -r minio-release/$(GOOS)-$(GOARCH)/* minio@dl-1.minio.io:~/releases/server/minio/hotfixes/linux-$(GOOS)/archive
@echo "Published new hotfix binaries at https://dl.min.io/server/minio/hotfixes/linux-$(GOOS)/archive/minio.$(VERSION)"
docker-hotfix-push: docker-hotfix
@docker push -q $(TAG) && echo "Published new container $(TAG)"
@ -191,9 +215,13 @@ docker: build ## builds minio docker container
@echo "Building minio docker image '$(TAG)'"
@docker build -q --no-cache -t $(TAG) . -f Dockerfile
test-resiliency: build
@echo "Running resiliency tests"
@(DOCKER_COMPOSE_FILE=$(PWD)/docs/resiliency/docker-compose.yaml env bash $(PWD)/docs/resiliency/resiliency-tests.sh)
install-race: checks build-debugging ## builds minio to $(PWD)
@echo "Building minio binary with -race to './minio'"
@GORACE=history_size=7 CGO_ENABLED=1 go build -tags kqueue -race -trimpath --ldflags "$(LDFLAGS)" -o $(PWD)/minio 1>/dev/null
@GORACE=history_size=7 CGO_ENABLED=1 go build -tags kqueue,dev -race -trimpath --ldflags "$(LDFLAGS)" -o $(PWD)/minio 1>/dev/null
@echo "Installing minio binary with -race to '$(GOPATH)/bin/minio'"
@mkdir -p $(GOPATH)/bin && cp -af $(PWD)/minio $(GOPATH)/bin/minio
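Since `GOOS` and `GOARCH` are now assigned with `?=` and threaded through the `build` target, cross-builds can be driven directly from `make`. A minimal sketch, assuming a working Go toolchain (the target platform is illustrative):
```sh
# Command-line variables override the ?= defaults and flow into
# `go build` as GOOS=$(GOOS) GOARCH=$(GOARCH).
make build GOOS=linux GOARCH=arm64
```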


@ -0,0 +1,93 @@
# MinIO Pull Request Guidelines
These guidelines ensure high-quality commits in MinIO's GitHub repositories, maintaining
a clear, valuable commit history for our open-source projects. They apply to all contributors,
fostering efficient reviews and robust code.
## Why Pull Requests?
Pull Requests (PRs) drive quality in MinIO's codebase by:
- Enabling peer review without pair programming.
- Documenting changes for future reference.
- Ensuring commits tell a clear story of development.
**A poor commit lasts forever, even if code is refactored.**
## Crafting a Quality PR
A strong MinIO PR:
- Delivers a complete, valuable change (feature, bug fix, or improvement).
- Has a concise title (e.g., `[S3] Fix bucket policy parsing #1234`) and a summary with context, referencing issues (e.g., `#1234`).
- Contains well-written, logical commits explaining *why* changes were made (e.g., “Add S3 bucket tagging support so that users can organize resources efficiently”).
- Is small, focused, and easy to review—ideally one commit, unless multiple commits better narrate complex work.
- Adheres to MinIO's coding standards (e.g., Go style, error handling, testing).
PRs must flow smoothly through review to reach production. Large PRs should be split into smaller, manageable ones.
## Submitting PRs
1. **Title and Summary**:
- Use a scannable title: `[Subsystem] Action Description #Issue` (e.g., `[IAM] Add role-based access control #567`).
- Include context in the summary: what changed, why, and any issue references.
- Use `[WIP]` in the title for in-progress PRs to avoid premature merging, or use GitHub draft PRs.
2. **Commits**:
- Write clear messages: what changed and why (e.g., “Refactor S3 API handler to reduce latency so that requests process 20% faster”).
- Rebase to tidy commits before submitting (e.g., `git rebase -i main` to squash typos or reword messages), unless multiple contributors worked on the branch.
- Keep PRs focused—one feature or fix. Split large changes into multiple PRs.
3. **Testing**:
- Include unit tests for new functionality or bug fixes.
- Ensure existing tests pass (`make test`).
- Document testing steps in the PR summary if manual testing was performed.
4. **Before Submitting**:
- Run `make verify` to check formatting, linting, and tests (see the sketch after this list).
- Reference related issues (e.g., “Closes #1234”).
- Notify team members via GitHub `@mentions` if urgent or complex.
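A typical pre-submission flow combining the rebase and verification steps above might look like the following (a minimal sketch; the branch name is illustrative):
```sh
git checkout my-feature
git rebase -i main                              # squash fixups, reword commit messages
make verify                                     # formatting, linting, and tests
git push --force-with-lease origin my-feature   # update the PR branch safely
```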
## Reviewing PRs
Reviewers ensure MinIO's commit history remains a clear, reliable record. Responsibilities include:
1. **Commit Quality**:
- Verify each commit explains *why* the change was made (e.g., “So that…”).
- Request rebasing if commits are unclear, redundant, or lack context (e.g., “Please squash typo fixes into the parent commit”).
2. **Code Quality**:
- Check adherence to MinIO's Go standards (e.g., error handling, documentation).
- Ensure tests cover new code and pass CI.
- Flag bugs or critical issues for immediate fixes; suggest non-blocking improvements as follow-up issues.
3. **Flow**:
- Review promptly to avoid blocking progress.
- Balance quality and speed—minor issues can be addressed later via issues, not PR blocks.
- If unable to complete the review, tag another reviewer (e.g., `@username please take over`).
4. **Shared Responsibility**:
- All MinIO contributors are reviewers. The first commenter on a PR owns the review unless they delegate.
- Multiple reviewers are encouraged for complex PRs.
5. **No Self-Edits**:
- Don't modify the PR directly (e.g., fixing bugs). Request changes from the submitter or create a follow-up PR.
- If you edit, you're a collaborator, not a reviewer, and cannot merge.
6. **Testing**:
- Assume the submitter tested the code. If testing is unclear, ask for details (e.g., “How was this tested?”).
- Reject untested PRs unless testing is infeasible, then assist with test setup.
## Tips for Success
- **Small PRs**: Easier to review, faster to merge. Split large changes logically.
- **Clear Commits**: Use `git rebase -i` to refine history before submitting.
- **Engage Early**: Discuss complex changes in issues or Slack (https://slack.min.io) before coding.
- **Be Responsive**: Address reviewer feedback promptly to keep PRs moving.
- **Learn from Reviews**: Use feedback to improve future contributions.
## Resources
- [MinIO Coding Standards](https://github.com/minio/minio/blob/master/CONTRIBUTING.md)
- [Effective Commit Messages](https://mislav.net/2014/02/hidden-documentation/)
- [GitHub PR Tips](https://github.com/blog/1943-how-to-write-the-perfect-pull-request)
By following these guidelines, we ensure MinIO's codebase remains high-quality, maintainable, and a joy to contribute to. Happy coding!


@ -1,7 +0,0 @@
# MinIO FIPS Builds
MinIO creates FIPS builds using a patched version of the Go compiler (that uses BoringCrypto, from BoringSSL, which is [FIPS 140-2 validated](https://csrc.nist.gov/csrc/media/projects/cryptographic-module-validation-program/documents/security-policies/140sp2964.pdf)) published by the Golang Team [here](https://github.com/golang/go/tree/dev.boringcrypto/misc/boring).
MinIO FIPS executables are available at <http://dl.min.io> - they are only published for `linux-amd64` architecture as binary files with the suffix `.fips`. We also publish corresponding container images to our official image repositories.
We are not making any statements or representations about the suitability of this code or build in relation to the FIPS 140-2 standard. Interested users will have to evaluate for themselves whether this is useful for their own purposes.

README.md

@ -4,254 +4,154 @@
[![MinIO](https://raw.githubusercontent.com/minio/minio/master/.github/logo.svg?sanitize=true)](https://min.io)
MinIO is a High Performance Object Storage released under GNU Affero General Public License v3.0. It is API compatible with Amazon S3 cloud storage service. Use MinIO to build high performance infrastructure for machine learning, analytics and application data workloads.
MinIO is a high-performance, S3-compatible object storage solution released under the GNU AGPL v3.0 license.
Designed for speed and scalability, it powers AI/ML, analytics, and data-intensive workloads with industry-leading performance.
This README provides quickstart instructions on running MinIO on bare metal hardware, including container-based installations. For Kubernetes environments, use the [MinIO Kubernetes Operator](https://github.com/minio/operator/blob/master/README.md).
- S3 API Compatible: Seamless integration with existing S3 tools
- Built for AI & Analytics: Optimized for large-scale data pipelines
- High Performance: Ideal for demanding storage workloads
## Container Installation
This README provides instructions for building MinIO from source and deploying onto bare metal hardware.
Use the [MinIO Documentation](https://github.com/minio/docs) project to build and host a local copy of the documentation.
Use the following commands to run a standalone MinIO server as a container.
## MinIO is Open Source Software
Standalone MinIO servers are best suited for early development and evaluation. Certain features such as versioning, object locking, and bucket replication
require deploying MinIO in distributed mode with Erasure Coding. For extended development and production, deploy MinIO with Erasure Coding enabled - specifically,
with a *minimum* of 4 drives per MinIO server. See [MinIO Erasure Code Overview](https://min.io/docs/minio/linux/operations/concepts/erasure-coding.html)
for more complete documentation.
We designed MinIO as Open Source software for the Open Source software community. We encourage the community to remix, redesign, and reshare MinIO under the terms of the AGPLv3 license.
### Stable
All usage of MinIO in your application stack requires validation against AGPLv3 obligations, which include but are not limited to the release of modified code to the community from which you have benefited. Any commercial/proprietary usage of the AGPLv3 software, including repackaging or reselling services/features, is done at your own risk.
Run the following command to run the latest stable image of MinIO as a container using an ephemeral data volume:
The AGPLv3 provides no obligation by any party to support, maintain, or warranty the original or any modified work.
All support is provided on a best-effort basis through GitHub and our [Slack](https://slack.min.io) channel, and any member of the community is welcome to contribute and assist others in their usage of the software.
```sh
podman run -p 9000:9000 -p 9001:9001 \
quay.io/minio/minio server /data --console-address ":9001"
```
MinIO [AIStor](https://www.min.io/product/aistor) includes enterprise-grade support and licensing for workloads which require commercial or proprietary usage and production-level SLA/SLO-backed support. For more information, [reach out for a quote](https://min.io/pricing).
The MinIO deployment starts using default root credentials `minioadmin:minioadmin`. You can test the deployment using the MinIO Console, an embedded
object browser built into MinIO Server. Point a web browser running on the host machine to <http://127.0.0.1:9000> and log in with the
root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.
## Source-Only Distribution
You can also connect using any S3-compatible tool, such as the MinIO Client `mc` commandline tool. See
[Test using MinIO Client `mc`](#test-using-minio-client-mc) for more information on using the `mc` commandline tool. For application developers,
see <https://min.io/docs/minio/linux/developers/minio-drivers.html> to view MinIO SDKs for supported languages.
**Important:** The MinIO community edition is now distributed as source code only. We will no longer provide pre-compiled binary releases for the community version.
> NOTE: To deploy MinIO with persistent storage, you must map local persistent directories from the host OS to the container using the `podman -v` option. For example, `-v /mnt/data:/data` maps the host OS drive at `/mnt/data` to `/data` on the container.
### Installing Latest MinIO Community Edition
## macOS
To use MinIO community edition, you have two options:
Use the following commands to run a standalone MinIO server on macOS.
1. **Install from source** using `go install github.com/minio/minio@latest` (recommended)
2. **Build a Docker image** from the provided Dockerfile
Standalone MinIO servers are best suited for early development and evaluation. Certain features such as versioning, object locking, and bucket replication require deploying MinIO in distributed mode with Erasure Coding. For extended development and production, deploy MinIO with Erasure Coding enabled - specifically, with a *minimum* of 4 drives per MinIO server. See [MinIO Erasure Code Overview](https://min.io/docs/minio/linux/operations/concepts/erasure-coding.html) for more complete documentation.
See the sections below for detailed instructions on each method.
### Homebrew (recommended)
### Legacy Binary Releases
Run the following command to install the latest stable MinIO package using [Homebrew](https://brew.sh/). Replace ``/data`` with the path to the drive or directory in which you want MinIO to store data.
Historical pre-compiled binary releases remain available for reference but are no longer maintained:
- GitHub Releases: https://github.com/minio/minio/releases
- Direct downloads: https://dl.min.io/server/minio/release/
```sh
brew install minio/stable/minio
minio server /data
```
> NOTE: If you previously installed minio using `brew install minio`, it is recommended that you reinstall it from the official `minio/stable/minio` repo instead.
```sh
brew uninstall minio
brew install minio/stable/minio
```
The MinIO deployment starts using default root credentials `minioadmin:minioadmin`. You can test the deployment using the MinIO Console, an embedded web-based object browser built into MinIO Server. Point a web browser running on the host machine to <http://127.0.0.1:9000> and log in with the root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.
You can also connect using any S3-compatible tool, such as the MinIO Client `mc` commandline tool. See [Test using MinIO Client `mc`](#test-using-minio-client-mc) for more information on using the `mc` commandline tool. For application developers, see <https://min.io/docs/minio/linux/developers/minio-drivers.html> to view MinIO SDKs for supported languages.
### Binary Download
Use the following command to download and run a standalone MinIO server on macOS. Replace ``/data`` with the path to the drive or directory in which you want MinIO to store data.
```sh
wget https://dl.min.io/server/minio/release/darwin-amd64/minio
chmod +x minio
./minio server /data
```
The MinIO deployment starts using default root credentials `minioadmin:minioadmin`. You can test the deployment using the MinIO Console, an embedded web-based object browser built into MinIO Server. Point a web browser running on the host machine to <http://127.0.0.1:9000> and log in with the root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.
You can also connect using any S3-compatible tool, such as the MinIO Client `mc` commandline tool. See [Test using MinIO Client `mc`](#test-using-minio-client-mc) for more information on using the `mc` commandline tool. For application developers, see <https://min.io/docs/minio/linux/developers/minio-drivers.html> to view MinIO SDKs for supported languages.
## GNU/Linux
Use the following command to run a standalone MinIO server on Linux hosts running 64-bit Intel/AMD architectures. Replace ``/data`` with the path to the drive or directory in which you want MinIO to store data.
```sh
wget https://dl.min.io/server/minio/release/linux-amd64/minio
chmod +x minio
./minio server /data
```
The following table lists supported architectures. Replace the `wget` URL with the architecture for your Linux host.
| Architecture | URL |
| -------- | ------ |
| 64-bit Intel/AMD | <https://dl.min.io/server/minio/release/linux-amd64/minio> |
| 64-bit ARM | <https://dl.min.io/server/minio/release/linux-arm64/minio> |
| 64-bit PowerPC LE (ppc64le) | <https://dl.min.io/server/minio/release/linux-ppc64le/minio> |
| IBM Z-Series (S390X) | <https://dl.min.io/server/minio/release/linux-s390x/minio> |
The MinIO deployment starts using default root credentials `minioadmin:minioadmin`. You can test the deployment using the MinIO Console, an embedded web-based object browser built into MinIO Server. Point a web browser running on the host machine to <http://127.0.0.1:9000> and log in with the root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.
You can also connect using any S3-compatible tool, such as the MinIO Client `mc` commandline tool. See [Test using MinIO Client `mc`](#test-using-minio-client-mc) for more information on using the `mc` commandline tool. For application developers, see <https://min.io/docs/minio/linux/developers/minio-drivers.html> to view MinIO SDKs for supported languages.
> NOTE: Standalone MinIO servers are best suited for early development and evaluation. Certain features such as versioning, object locking, and bucket replication require deploying MinIO in distributed mode with Erasure Coding. For extended development and production, deploy MinIO with Erasure Coding enabled - specifically, with a *minimum* of 4 drives per MinIO server. See [MinIO Erasure Code Overview](https://min.io/docs/minio/linux/operations/concepts/erasure-coding.html#) for more complete documentation.
## Microsoft Windows
To run MinIO on 64-bit Windows hosts, download the MinIO executable from the following URL:
```sh
https://dl.min.io/server/minio/release/windows-amd64/minio.exe
```
Use the following command to run a standalone MinIO server on the Windows host. Replace ``D:\`` with the path to the drive or directory in which you want MinIO to store data. You must change the terminal or powershell directory to the location of the ``minio.exe`` executable, *or* add the path to that directory to the system ``$PATH``:
```sh
minio.exe server D:\
```
The MinIO deployment starts using default root credentials `minioadmin:minioadmin`. You can test the deployment using the MinIO Console, an embedded web-based object browser built into MinIO Server. Point a web browser running on the host machine to <http://127.0.0.1:9000> and log in with the root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.
You can also connect using any S3-compatible tool, such as the MinIO Client `mc` commandline tool. See [Test using MinIO Client `mc`](#test-using-minio-client-mc) for more information on using the `mc` commandline tool. For application developers, see <https://min.io/docs/minio/linux/developers/minio-drivers.html> to view MinIO SDKs for supported languages.
> NOTE: Standalone MinIO servers are best suited for early development and evaluation. Certain features such as versioning, object locking, and bucket replication require deploying MinIO in distributed mode with Erasure Coding. For extended development and production, deploy MinIO with Erasure Coding enabled - specifically, with a *minimum* of 4 drives per MinIO server. See [MinIO Erasure Code Overview](https://min.io/docs/minio/linux/operations/concepts/erasure-coding.html#) for more complete documentation.
**These legacy binaries will not receive updates.** We strongly recommend using source builds for access to the latest features, bug fixes, and security updates.
## Install from Source
Use the following commands to compile and run a standalone MinIO server from source. Source installation is only intended for developers and advanced users. If you do not have a working Golang environment, please follow [How to install Golang](https://golang.org/doc/install). Minimum version required is [go1.21](https://golang.org/dl/#stable)
Use the following commands to compile and run a standalone MinIO server from source.
If you do not have a working Golang environment, please follow [How to install Golang](https://golang.org/doc/install). Minimum version required is [go1.24](https://golang.org/dl/#stable)
```sh
go install github.com/minio/minio@latest
```
The MinIO deployment starts using default root credentials `minioadmin:minioadmin`. You can test the deployment using the MinIO Console, an embedded web-based object browser built into MinIO Server. Point a web browser running on the host machine to <http://127.0.0.1:9000> and log in with the root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.
You can alternatively run `go build` and use the `GOOS` and `GOARCH` environment variables to control the OS and architecture target.
For example:
You can also connect using any S3-compatible tool, such as the MinIO Client `mc` commandline tool. See [Test using MinIO Client `mc`](#test-using-minio-client-mc) for more information on using the `mc` commandline tool. For application developers, see <https://min.io/docs/minio/linux/developers/minio-drivers.html> to view MinIO SDKs for supported languages.
> NOTE: Standalone MinIO servers are best suited for early development and evaluation. Certain features such as versioning, object locking, and bucket replication require deploying MinIO in distributed mode with Erasure Coding. For extended development and production, deploy MinIO with Erasure Coding enabled - specifically, with a *minimum* of 4 drives per MinIO server. See [MinIO Erasure Code Overview](https://min.io/docs/minio/linux/operations/concepts/erasure-coding.html) for more complete documentation.
MinIO strongly recommends *against* using compiled-from-source MinIO servers for production environments.
## Deployment Recommendations
### Allow port access for Firewalls
By default MinIO uses port 9000 to listen for incoming connections. If your platform blocks this port by default, you may need to enable access to it.
### ufw
For hosts with ufw enabled (Debian-based distros), you can use the `ufw` command to allow traffic to specific ports. Use the command below to allow access to port 9000:
```sh
ufw allow 9000
```
env GOOS=linux GOARCH=arm64 go build
```
The command below enables all incoming traffic to ports ranging from 9000 to 9010.
Start MinIO by running `minio server PATH` where `PATH` is any empty folder on your local filesystem.
The MinIO deployment starts using default root credentials `minioadmin:minioadmin`.
You can test the deployment using the MinIO Console, an embedded web-based object browser built into MinIO Server.
Point a web browser running on the host machine to <http://127.0.0.1:9000> and log in with the root credentials.
You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.
You can also connect using any S3-compatible tool, such as the MinIO Client `mc` commandline tool:
```sh
ufw allow 9000:9010/tcp
mc alias set local http://localhost:9000 minioadmin minioadmin
mc admin info local
```
### firewall-cmd
See [Test using MinIO Client `mc`](#test-using-minio-client-mc) for more information on using the `mc` commandline tool.
For application developers, see <https://docs.min.io/enterprise/aistor-object-store/developers/sdk/> to view MinIO SDKs for supported languages.
For hosts with firewall-cmd enabled (CentOS), you can use the `firewall-cmd` command to allow traffic to specific ports. Use the commands below to allow access to port 9000:
> [!NOTE]
> Production environments using compiled-from-source MinIO binaries do so at their own risk.
> The AGPLv3 license provides no warranties or liabilities for any such usage.
## Build Docker Image
You can use the `docker build .` command to build a Docker image on your local host machine.
You must first [build MinIO](#install-from-source) and ensure the `minio` binary exists in the project root.
The following command builds the Docker image using the default `Dockerfile` in the root project directory with the repository and image tag `myminio:minio`
```sh
firewall-cmd --get-active-zones
docker build -t myminio:minio .
```
This command gets the active zone(s). Now, apply port rules to the relevant zones returned above. For example, if the zone is `public`, use
Use `docker image ls` to confirm the image exists in your local repository.
You can run the server using standard Docker invocation:
```sh
firewall-cmd --zone=public --add-port=9000/tcp --permanent
docker run -p 9000:9000 -p 9001:9001 myminio:minio server /tmp/minio --console-address :9001
```
Note that `--permanent` ensures the rules persist across firewall start, restart, or reload. Finally, reload the firewall for the changes to take effect.
Complete documentation for building Docker containers, managing custom images, or loading images into orchestration platforms is out of scope for this README.
You can modify the `Dockerfile` and `dockerscripts/docker-entrypoint.sh` as needed to reflect your specific image requirements.
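For persistent data, a hedged variant of the invocation above maps a host directory into the container (the paths are illustrative):
```sh
# Map the host directory /mnt/data to /data inside the container.
docker run -p 9000:9000 -p 9001:9001 \
  -v /mnt/data:/data \
  myminio:minio server /data --console-address ":9001"
```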
```sh
firewall-cmd --reload
```
See the [MinIO Container](https://docs.min.io/community/minio-object-store/operations/deployments/baremetal-deploy-minio-as-a-container.html#deploy-minio-container) documentation for more guidance on running MinIO within a Container image.
### iptables
## Install using Helm Charts
For hosts with iptables enabled (RHEL, CentOS, etc.), you can use the `iptables` command to enable all traffic coming to specific ports. Use the command below to allow
access to port 9000:
There are two paths for installing MinIO onto Kubernetes infrastructure:
```sh
iptables -A INPUT -p tcp --dport 9000 -j ACCEPT
service iptables restart
```
- Use the [MinIO Operator](https://github.com/minio/operator)
- Use the community-maintained [Helm charts](https://github.com/minio/minio/tree/master/helm/minio)
The command below enables all incoming traffic to ports ranging from 9000 to 9010.
```sh
iptables -A INPUT -p tcp --dport 9000:9010 -j ACCEPT
service iptables restart
```
See the [MinIO Documentation](https://docs.min.io/community/minio-object-store/operations/deployments/kubernetes.html) for guidance on deploying using the Operator.
The Community Helm chart has instructions in the folder-level README.
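As a minimal sketch of the community chart path, assuming the chart repository is served from <https://charts.min.io/> (consult the chart README for current values):
```sh
helm repo add minio https://charts.min.io/
helm install my-minio minio/minio \
  --set rootUser=minioadmin,rootPassword=minioadmin123
```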
## Test MinIO Connectivity
### Test using MinIO Console
MinIO Server comes with an embedded web based object browser. Point your web browser to <http://127.0.0.1:9000> to ensure your server has started successfully.
MinIO Server comes with an embedded web based object browser.
Point your web browser to <http://127.0.0.1:9000> to ensure your server has started successfully.
> NOTE: MinIO runs console on random port by default, if you wish to choose a specific port use `--console-address` to pick a specific interface and port.
> [!NOTE]
> MinIO runs the console on a random port by default; if you wish to choose a specific interface and port, use `--console-address`.
### Things to consider
### Test using MinIO Client `mc`
MinIO redirects browser access requests on the configured server port (i.e. `127.0.0.1:9000`) to the configured Console port. MinIO uses the hostname or IP address specified in the request when building the redirect URL. The URL and port *must* be accessible by the client for the redirection to work.
`mc` provides a modern alternative to UNIX commands like ls, cat, cp, mirror, diff etc. It supports filesystems and Amazon S3 compatible cloud storage services.
For deployments behind a load balancer, proxy, or ingress rule where the MinIO host IP address or port is not public, use the `MINIO_BROWSER_REDIRECT_URL` environment variable to specify the external hostname for the redirect. The LB/Proxy must have rules for directing traffic to the Console port specifically.
For example, consider a MinIO deployment behind a proxy `https://minio.example.net`, `https://console.minio.example.net` with rules for forwarding traffic on port :9000 and :9001 to MinIO and the MinIO Console respectively on the internal network. Set `MINIO_BROWSER_REDIRECT_URL` to `https://console.minio.example.net` to ensure the browser receives a valid reachable URL.
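A minimal sketch of that configuration on the MinIO host (the hostnames are from the example above):
```sh
# Tell MinIO where browsers should be redirected to reach the Console.
export MINIO_BROWSER_REDIRECT_URL=https://console.minio.example.net
minio server /data --console-address ":9001"
```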
| Dashboard | Creating a bucket |
| ------------- | ------------- |
| ![Dashboard](https://github.com/minio/minio/blob/master/docs/screenshots/pic1.png?raw=true) | ![Dashboard](https://github.com/minio/minio/blob/master/docs/screenshots/pic2.png?raw=true) |
## Test using MinIO Client `mc`
`mc` provides a modern alternative to UNIX commands like ls, cat, cp, mirror, diff etc. It supports filesystems and Amazon S3 compatible cloud storage services. Follow the MinIO Client [Quickstart Guide](https://min.io/docs/minio/linux/reference/minio-mc.html#quickstart) for further instructions.
## Upgrading MinIO
Upgrades require zero downtime in MinIO: all upgrades are non-disruptive and all transactions on MinIO are atomic, so upgrading all the servers simultaneously is the recommended way to upgrade MinIO.
> NOTE: updating directly from <https://dl.min.io> requires internet access; optionally, you can host a mirror at <https://my-artifactory.example.com/minio/>.
- For deployments that installed the MinIO server binary by hand, use [`mc admin update`](https://min.io/docs/minio/linux/reference/minio-mc-admin/mc-admin-update.html)
The following commands set a local alias, validate the server information, create a bucket, copy data to that bucket, and list the contents of the bucket.
```sh
mc admin update <minio alias, e.g., myminio>
mc alias set local http://localhost:9000 minioadmin minioadmin
mc admin info local
mc mb local/data
mc cp ~/Downloads/mydata local/data/
mc ls local/data/
```
- For deployments without external internet access (e.g. airgapped environments), download the binary from <https://dl.min.io> and replace the existing MinIO binary (for example, `/opt/bin/minio`), apply executable permissions with `chmod +x /opt/bin/minio`, and then run `mc admin service restart alias/`.
- For installations using the Systemd MinIO service, upgrade via RPM/DEB packages **in parallel** on all servers, or replace the binary (for example, `/opt/bin/minio`) on all nodes, apply executable permissions with `chmod +x /opt/bin/minio`, and proceed to run `mc admin service restart alias/`.
### Upgrade Checklist
- Test all upgrades in a lower environment (DEV, QA, UAT) before applying to production. Performing blind upgrades in production environments carries significant risk.
- Read the release notes for MinIO *before* performing any upgrade; there is no forced requirement to upgrade to the latest release with every release. Some releases may not be relevant to your setup; avoid upgrading production environments unnecessarily.
- If you plan to use `mc admin update`, the MinIO process must have write access to the parent directory where the binary is present on the host system.
- `mc admin update` is not supported and should be avoided in Kubernetes/container environments; instead, upgrade containers by upgrading the relevant container images.
- **We do not recommend upgrading one MinIO server at a time; the product is designed to support parallel upgrades. Please follow our recommended guidelines.**
Follow the MinIO Client [Quickstart Guide](https://docs.min.io/community/minio-object-store/reference/minio-mc.html#quickstart) for further instructions.
## Explore Further
- [MinIO Erasure Code Overview](https://min.io/docs/minio/linux/operations/concepts/erasure-coding.html)
- [Use `mc` with MinIO Server](https://min.io/docs/minio/linux/reference/minio-mc.html)
- [Use `minio-go` SDK with MinIO Server](https://min.io/docs/minio/linux/developers/go/minio-go.html)
- [The MinIO documentation website](https://min.io/docs/minio/linux/index.html)
- [The MinIO documentation website](https://docs.min.io/community/minio-object-store/index.html)
- [MinIO Erasure Code Overview](https://docs.min.io/community/minio-object-store/operations/concepts/erasure-coding.html)
- [Use `mc` with MinIO Server](https://docs.min.io/community/minio-object-store/reference/minio-mc.html)
- [Use `minio-go` SDK with MinIO Server](https://docs.min.io/enterprise/aistor-object-store/developers/sdk/go/)
## Contribute to MinIO Project
Please follow MinIO [Contributor's Guide](https://github.com/minio/minio/blob/master/CONTRIBUTING.md)
Please follow MinIO [Contributor's Guide](https://github.com/minio/minio/blob/master/CONTRIBUTING.md) for guidance on making new contributions to the repository.
## License

buildscripts/checkdeps.sh Normal file → Executable file

@ -74,11 +74,11 @@ check_minimum_version() {
assert_is_supported_arch() {
case "${ARCH}" in
x86_64 | amd64 | aarch64 | ppc64le | arm* | s390x | loong64 | loongarch64)
x86_64 | amd64 | aarch64 | ppc64le | arm* | s390x | loong64 | loongarch64 | riscv64)
return
;;
*)
echo "Arch '${ARCH}' is not supported. Supported Arch: [x86_64, amd64, aarch64, ppc64le, arm*, s390x, loong64, loongarch64]"
echo "Arch '${ARCH}' is not supported. Supported Arch: [x86_64, amd64, aarch64, ppc64le, arm*, s390x, loong64, loongarch64, riscv64]"
exit 1
;;
esac


@ -9,7 +9,7 @@ function _init() {
export CGO_ENABLED=0
## List of architectures and OS to test cross compilation.
SUPPORTED_OSARCH="linux/ppc64le linux/mips64 linux/amd64 linux/arm64 linux/s390x darwin/arm64 darwin/amd64 freebsd/amd64 windows/amd64 linux/arm linux/386 netbsd/amd64 linux/mips openbsd/amd64"
SUPPORTED_OSARCH="linux/ppc64le linux/mips64 linux/amd64 linux/arm64 linux/s390x darwin/arm64 darwin/amd64 freebsd/amd64 windows/amd64 linux/arm linux/386 netbsd/amd64 linux/mips openbsd/amd64 linux/riscv64"
}
function _build() {


@ -8,7 +8,7 @@
#
# This script assumes that LDAP server is at:
#
# `localhost:1389`
# `localhost:389`
#
# if this is not the case, set the environment variable
# `_MINIO_LDAP_TEST_SERVER`.
@ -41,7 +41,7 @@ __init__() {
fi
if [ -z "$_MINIO_LDAP_TEST_SERVER" ]; then
export _MINIO_LDAP_TEST_SERVER=localhost:1389
export _MINIO_LDAP_TEST_SERVER=localhost:389
echo "Using default LDAP endpoint: $_MINIO_LDAP_TEST_SERVER"
fi
@ -58,7 +58,7 @@ create_iam_content_in_old_minio() {
mc alias set old-minio http://localhost:9000 minioadmin minioadmin
mc ready old-minio
mc idp ldap add old-minio \
server_addr=localhost:1389 \
server_addr=localhost:389 \
server_insecure=on \
lookup_bind_dn=cn=admin,dc=min,dc=io \
lookup_bind_password=admin \

buildscripts/minio-upgrade.sh Normal file → Executable file

@ -4,10 +4,22 @@ trap 'cleanup $LINENO' ERR
# shellcheck disable=SC2120
cleanup() {
MINIO_VERSION=dev docker-compose \
MINIO_VERSION=dev /tmp/gopath/bin/docker-compose \
-f "buildscripts/upgrade-tests/compose.yml" \
rm -s -f
down || true
MINIO_VERSION=dev /tmp/gopath/bin/docker-compose \
-f "buildscripts/upgrade-tests/compose.yml" \
rm || true
for volume in $(docker volume ls -q | grep upgrade); do
docker volume rm ${volume} || true
done
docker volume prune -f
docker system prune -f || true
docker volume prune -f || true
docker volume rm $(docker volume ls -q -f dangling=true) || true
}
verify_checksum_after_heal() {
@ -57,8 +69,12 @@ __init__() {
## this is needed because github actions don't have
## docker-compose on all runners
go install github.com/docker/compose/v2/cmd@latest
mv -v /tmp/gopath/bin/cmd /tmp/gopath/bin/docker-compose
COMPOSE_VERSION=v2.35.1
mkdir -p /tmp/gopath/bin/
wget -O /tmp/gopath/bin/docker-compose https://github.com/docker/compose/releases/download/${COMPOSE_VERSION}/docker-compose-linux-x86_64
chmod +x /tmp/gopath/bin/docker-compose
cleanup
TAG=minio/minio:dev make docker
@ -77,11 +93,11 @@ __init__() {
curl -s http://127.0.0.1:9000/minio-test/to-read/hosts | sha256sum
MINIO_VERSION=dev docker-compose -f "buildscripts/upgrade-tests/compose.yml" stop
MINIO_VERSION=dev /tmp/gopath/bin/docker-compose -f "buildscripts/upgrade-tests/compose.yml" stop
}
main() {
MINIO_VERSION=dev docker-compose -f "buildscripts/upgrade-tests/compose.yml" up -d --build
MINIO_VERSION=dev /tmp/gopath/bin/docker-compose -f "buildscripts/upgrade-tests/compose.yml" up -d --build
add_alias


@ -0,0 +1,126 @@
#!/bin/bash
if [ -n "$TEST_DEBUG" ]; then
set -x
fi
WORK_DIR="$PWD/.verify-$RANDOM"
MINIO_CONFIG_DIR="$WORK_DIR/.minio"
MINIO=("$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" server)
if [ ! -x "$PWD/minio" ]; then
echo "minio executable binary not found in current directory"
exit 1
fi
if [ ! -x "$PWD/minio" ]; then
echo "minio executable binary not found in current directory"
exit 1
fi
trap 'catch $LINENO' ERR
function purge() {
rm -rf "$1"
}
# shellcheck disable=SC2120
catch() {
if [ $# -ne 0 ]; then
echo "error on line $1"
fi
echo "Cleaning up instances of MinIO"
pkill minio || true
pkill -9 minio || true
purge "$WORK_DIR"
if [ $# -ne 0 ]; then
exit $#
fi
}
catch
function start_minio_10drive() {
start_port=$1
export MINIO_ROOT_USER=minio
export MINIO_ROOT_PASSWORD=minio123
export MC_HOST_minio="http://minio:minio123@127.0.0.1:${start_port}/"
unset MINIO_KMS_AUTO_ENCRYPTION # do not auto-encrypt objects
export MINIO_CI_CD=1
mkdir ${WORK_DIR}
C_PWD=${PWD}
if [ ! -x "$PWD/mc" ]; then
MC_BUILD_DIR="mc-$RANDOM"
if ! git clone --quiet https://github.com/minio/mc "$MC_BUILD_DIR"; then
echo "failed to download https://github.com/minio/mc"
purge "${MC_BUILD_DIR}"
exit 1
fi
(cd "${MC_BUILD_DIR}" && go build -o "$C_PWD/mc")
# remove mc source.
purge "${MC_BUILD_DIR}"
fi
"${MINIO[@]}" --address ":$start_port" "${WORK_DIR}/disk{1...10}" >"${WORK_DIR}/server1.log" 2>&1 &
pid=$!
disown $pid
sleep 5
if ! ps -p ${pid} 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
"${PWD}/mc" mb --with-versioning minio/bucket
export AWS_ACCESS_KEY_ID=minio
export AWS_SECRET_ACCESS_KEY=minio123
aws --endpoint-url http://localhost:"$start_port" s3api create-multipart-upload --bucket bucket --key obj-1 >upload-id.json
uploadId=$(jq -r '.UploadId' upload-id.json)
truncate -s 5MiB file-5mib
for i in {1..2}; do
aws --endpoint-url http://localhost:"$start_port" s3api upload-part \
--upload-id "$uploadId" --bucket bucket --key obj-1 \
--part-number "$i" --body ./file-5mib
done
for i in {1..6}; do
find ${WORK_DIR}/disk${i}/.minio.sys/multipart/ -type f -name "part.1" -delete
done
cat <<EOF >parts.json
{
"Parts": [
{
"PartNumber": 1,
"ETag": "5f363e0e58a95f06cbe9bbc662c5dfb6"
},
{
"PartNumber": 2,
"ETag": "5f363e0e58a95f06cbe9bbc662c5dfb6"
}
]
}
EOF
err=$(aws --endpoint-url http://localhost:"$start_port" s3api complete-multipart-upload --upload-id "$uploadId" --bucket bucket --key obj-1 --multipart-upload file://./parts.json 2>&1)
rv=$?
if [ $rv -eq 0 ]; then
echo "Failed to receive an error"
exit 1
fi
echo "Received an error during complete-multipart as expected: $err"
}
function main() {
start_port=$(shuf -i 10000-65000 -n 1)
start_minio_10drive ${start_port}
}
main "$@"


@ -0,0 +1,137 @@
#!/bin/bash
if [ -n "$TEST_DEBUG" ]; then
set -x
fi
WORK_DIR="$PWD/.verify-$RANDOM"
MINIO_CONFIG_DIR="$WORK_DIR/.minio"
MINIO=("$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" server)
if [ ! -x "$PWD/minio" ]; then
echo "minio executable binary not found in current directory"
exit 1
fi
if [ ! -x "$PWD/minio" ]; then
echo "minio executable binary not found in current directory"
exit 1
fi
trap 'catch $LINENO' ERR
function purge() {
rm -rf "$1"
}
# shellcheck disable=SC2120
catch() {
if [ $# -ne 0 ]; then
echo "error on line $1"
fi
echo "Cleaning up instances of MinIO"
pkill minio || true
pkill -9 minio || true
purge "$WORK_DIR"
if [ $# -ne 0 ]; then
exit $#
fi
}
catch
function gen_put_request() {
hdr_sleep=$1
body_sleep=$2
echo "PUT /testbucket/testobject HTTP/1.1"
sleep $hdr_sleep
echo "Host: foo-header"
echo "User-Agent: curl/8.2.1"
echo "Accept: */*"
echo "Content-Length: 30"
echo ""
sleep $body_sleep
echo "random line 0"
echo "random line 1"
echo ""
echo ""
}
function send_put_object_request() {
hdr_timeout=$1
body_timeout=$2
start=$(date +%s)
timeout 5m bash -c "gen_put_request $hdr_timeout $body_timeout | netcat 127.0.0.1 $start_port | read" || return 1
[ $(($(date +%s) - start)) -gt $((srv_hdr_timeout + srv_idle_timeout + 1)) ] && return 1
return 0
}
function test_minio_with_timeout() {
start_port=$1
export MINIO_ROOT_USER=minio
export MINIO_ROOT_PASSWORD=minio123
export MC_HOST_minio="http://minio:minio123@127.0.0.1:${start_port}/"
export MINIO_CI_CD=1
mkdir ${WORK_DIR}
C_PWD=${PWD}
if [ ! -x "$PWD/mc" ]; then
MC_BUILD_DIR="mc-$RANDOM"
if ! git clone --quiet https://github.com/minio/mc "$MC_BUILD_DIR"; then
echo "failed to download https://github.com/minio/mc"
purge "${MC_BUILD_DIR}"
exit 1
fi
(cd "${MC_BUILD_DIR}" && go build -o "$C_PWD/mc")
# remove mc source.
purge "${MC_BUILD_DIR}"
fi
"${MINIO[@]}" --address ":$start_port" --read-header-timeout ${srv_hdr_timeout}s --idle-timeout ${srv_idle_timeout}s "${WORK_DIR}/disk/" >"${WORK_DIR}/server1.log" 2>&1 &
pid=$!
disown $pid
sleep 1
if ! ps -p ${pid} 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
set -e
"${PWD}/mc" mb minio/testbucket
"${PWD}/mc" anonymous set public minio/testbucket
# slow header writing
send_put_object_request 20 0 && exit 1
"${PWD}/mc" stat minio/testbucket/testobject && exit 1
# quick header write and slow body write
send_put_object_request 0 40 && exit 1
"${PWD}/mc" stat minio/testbucket/testobject && exit 1
# quick header and body write
send_put_object_request 1 1 || exit 1
"${PWD}/mc" stat minio/testbucket/testobject || exit 1
}
function main() {
export start_port=$(shuf -i 10000-65000 -n 1)
export srv_hdr_timeout=5
export srv_idle_timeout=5
export -f gen_put_request
test_minio_with_timeout ${start_port}
}
main "$@"


@ -1,5 +1,3 @@
version: '3.7'
# Settings and configurations that are common for all containers
x-minio-common: &minio-common
image: minio/minio:${MINIO_VERSION}


@ -38,40 +38,47 @@ function start_minio_3_node() {
disown $pid3
export MC_HOST_myminio="http://minio:minio123@127.0.0.1:$((start_port + 1))"
/tmp/mc ready myminio
timeout 15m /tmp/mc ready myminio || fail
# Wait for all drives to be online and formatted
while [ $(/tmp/mc admin info --json myminio | jq '.info.servers[].drives[].state | select(. != "ok")' | wc -l) -gt 0 ]; do sleep 1; done
# Wait for all drives to be healed
while [ $(/tmp/mc admin info --json myminio | jq '.info.servers[].drives[].healing | select(. != null) | select(. == true)' | wc -l) -gt 0 ]; do sleep 1; done
# Wait for Status: in MinIO output
while true; do
rv=$(check_online)
if [ "$rv" != "1" ]; then
# success
break
fi
# Check if we should retry
retry=$((retry + 1))
if [ $retry -le 20 ]; then
sleep 5
continue
fi
# Failure
fail
done
if ! ps -p $pid1 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/dist-minio-server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
echo "minio-server-1 is not running." && fail
fi
if ! ps -p $pid2 1>&2 >/dev/null; then
echo "server2 log:"
cat "${WORK_DIR}/dist-minio-server2.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
echo "minio-server-2 is not running." && fail
fi
if ! ps -p $pid3 1>&2 >/dev/null; then
echo "server3 log:"
cat "${WORK_DIR}/dist-minio-server3.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
echo "minio-server-3 is not running." && fail
fi
if ! pkill minio; then
for i in $(seq 1 3); do
echo "server$i log:"
cat "${WORK_DIR}/dist-minio-server$i.log"
done
echo "FAILED"
purge "$WORK_DIR"
exit 1
fail
fi
sleep 1
@ -83,14 +90,24 @@ function start_minio_3_node() {
fi
}
function fail() {
for i in $(seq 1 3); do
echo "server$i log:"
cat "${WORK_DIR}/dist-minio-server$i.log"
done
echo "FAILED"
purge "$WORK_DIR"
exit 1
}
function check_online() {
if ! grep -q 'Status:' ${WORK_DIR}/dist-minio-*.log; then
if ! grep -q 'API:' ${WORK_DIR}/dist-minio-*.log; then
echo "1"
fi
}
function purge() {
rm -rf "$1"
echo rm -rf "$1"
}
function __init__() {
@ -117,18 +134,6 @@ function perform_test() {
set -x
start_minio_3_node $2
rv=$(check_online)
if [ "$rv" == "1" ]; then
for i in $(seq 1 3); do
echo "server$i log:"
cat "${WORK_DIR}/dist-minio-server$i.log"
done
pkill -9 minio
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
}
function main() {


@ -47,43 +47,25 @@ function start_minio_3_node() {
disown $pid3
export MC_HOST_myminio="http://minio:minio123@127.0.0.1:$((start_port + 1))"
/tmp/mc ready myminio
timeout 15m /tmp/mc ready myminio || fail
[ ${first_time} -eq 0 ] && upload_objects
[ ${first_time} -ne 0 ] && sleep 120
if ! ps -p $pid1 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/dist-minio-server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
echo "minio server 1 is not running" && fail
fi
if ! ps -p $pid2 1>&2 >/dev/null; then
echo "server2 log:"
cat "${WORK_DIR}/dist-minio-server2.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
echo "minio server 2 is not running" && fail
fi
if ! ps -p $pid3 1>&2 >/dev/null; then
echo "server3 log:"
cat "${WORK_DIR}/dist-minio-server3.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
echo "minio server 3 is not running" && fail
fi
if ! pkill minio; then
for i in $(seq 1 3); do
echo "server$i log:"
cat "${WORK_DIR}/dist-minio-server$i.log"
done
echo "FAILED"
purge "$WORK_DIR"
exit 1
fail
fi
sleep 1
@ -96,7 +78,7 @@ function start_minio_3_node() {
}
function check_heal() {
if ! grep -q 'Status:' ${WORK_DIR}/dist-minio-*.log; then
if ! grep -q 'API:' ${WORK_DIR}/dist-minio-*.log; then
return 1
fi
@ -118,6 +100,17 @@ function purge() {
rm -rf "$1"
}
function fail() {
for i in $(seq 1 3); do
echo "server$i log:"
cat "${WORK_DIR}/dist-minio-server$i.log"
done
pkill -9 minio
echo "FAILED"
purge "$WORK_DIR"
exit 1
}
function __init__() {
echo "Initializing environment"
mkdir -p "$WORK_DIR"
@ -155,14 +148,7 @@ function perform_test() {
check_heal ${1}
rv=$?
if [ "$rv" == "1" ]; then
for i in $(seq 1 3); do
echo "server$i log:"
cat "${WORK_DIR}/dist-minio-server$i.log"
done
pkill -9 minio
echo "FAILED"
purge "$WORK_DIR"
exit 1
fail
fi
}


@ -38,6 +38,7 @@ import (
objectlock "github.com/minio/minio/internal/bucket/object/lock"
"github.com/minio/minio/internal/bucket/versioning"
"github.com/minio/minio/internal/event"
xhttp "github.com/minio/minio/internal/http"
"github.com/minio/minio/internal/kms"
"github.com/minio/mux"
"github.com/minio/pkg/v3/policy"
@ -427,6 +428,21 @@ func (a adminAPIHandlers) ExportBucketMetadataHandler(w http.ResponseWriter, r *
cfgPath := pathJoin(bi.Name, cfgFile)
bucket := bi.Name
switch cfgFile {
case bucketPolicyConfig:
config, _, err := globalBucketMetadataSys.GetBucketPolicy(bucket)
if err != nil {
if errors.Is(err, BucketPolicyNotFound{Bucket: bucket}) {
continue
}
writeErrorResponse(ctx, w, exportError(ctx, err, cfgFile, bucket), r.URL)
return
}
configData, err := json.Marshal(config)
if err != nil {
writeErrorResponse(ctx, w, exportError(ctx, err, cfgFile, bucket), r.URL)
return
}
rawDataFn(bytes.NewReader(configData), cfgPath, len(configData))
case bucketNotificationConfig:
config, err := globalBucketMetadataSys.GetNotificationConfig(bucket)
if err != nil {
@ -796,11 +812,12 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
}
bucketMap[bucket].NotificationConfigXML = configData
bucketMap[bucket].NotificationConfigUpdatedAt = updatedAt
rpt.SetStatus(bucket, fileName, nil)
case bucketPolicyConfig:
// Error out if Content-Length is beyond allowed size.
if sz > maxBucketPolicySize {
rpt.SetStatus(bucket, fileName, fmt.Errorf(ErrPolicyTooLarge.String()))
rpt.SetStatus(bucket, fileName, errors.New(ErrPolicyTooLarge.String()))
continue
}
@ -818,7 +835,7 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
// Version in policy must not be empty
if bucketPolicy.Version == "" {
rpt.SetStatus(bucket, fileName, fmt.Errorf(ErrPolicyInvalidVersion.String()))
rpt.SetStatus(bucket, fileName, errors.New(ErrPolicyInvalidVersion.String()))
continue
}
@ -964,7 +981,6 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
rpt.SetStatus(bucket, "", err)
continue
}
}
rptData, err := json.Marshal(rpt.BucketMetaImportErrs)
@ -1023,7 +1039,7 @@ func (a adminAPIHandlers) ReplicationDiffHandler(w http.ResponseWriter, r *http.
}
if len(diffCh) == 0 {
// Flush if nothing is queued
w.(http.Flusher).Flush()
xhttp.Flush(w)
}
case <-keepAliveTicker.C:
if len(diffCh) > 0 {
@ -1032,7 +1048,7 @@ func (a adminAPIHandlers) ReplicationDiffHandler(w http.ResponseWriter, r *http.
if _, err := w.Write([]byte(" ")); err != nil {
return
}
w.(http.Flusher).Flush()
xhttp.Flush(w)
case <-ctx.Done():
return
}
@ -1082,7 +1098,7 @@ func (a adminAPIHandlers) ReplicationMRFHandler(w http.ResponseWriter, r *http.R
}
if len(mrfCh) == 0 {
// Flush if nothing is queued
w.(http.Flusher).Flush()
xhttp.Flush(w)
}
case <-keepAliveTicker.C:
if len(mrfCh) > 0 {
@ -1091,7 +1107,7 @@ func (a adminAPIHandlers) ReplicationMRFHandler(w http.ResponseWriter, r *http.R
if _, err := w.Write([]byte(" ")); err != nil {
return
}
w.(http.Flusher).Flush()
xhttp.Flush(w)
case <-ctx.Done():
return
}


@ -216,6 +216,12 @@ func toAdminAPIErr(ctx context.Context, err error) APIError {
Description: err.Error(),
HTTPStatusCode: http.StatusBadRequest,
}
case errors.Is(err, errTierInvalidConfig):
apiErr = APIError{
Code: "XMinioAdminTierInvalidConfig",
Description: err.Error(),
HTTPStatusCode: http.StatusBadRequest,
}
default:
apiErr = errorCodes.ToAPIErrWithErr(toAdminAPIErrCode(ctx, err), err)
}


@ -193,27 +193,27 @@ func (a adminAPIHandlers) SetConfigKVHandler(w http.ResponseWriter, r *http.Requ
func setConfigKV(ctx context.Context, objectAPI ObjectLayer, kvBytes []byte) (result setConfigResult, err error) {
result.Cfg, err = readServerConfig(ctx, objectAPI, nil)
if err != nil {
return
return result, err
}
result.Dynamic, err = result.Cfg.ReadConfig(bytes.NewReader(kvBytes))
if err != nil {
return
return result, err
}
result.SubSys, _, _, err = config.GetSubSys(string(kvBytes))
if err != nil {
return
return result, err
}
tgts, err := config.ParseConfigTargetID(bytes.NewReader(kvBytes))
if err != nil {
return
return result, err
}
ctx = context.WithValue(ctx, config.ContextKeyForTargetFromConfig, tgts)
if verr := validateConfig(ctx, result.Cfg, result.SubSys); verr != nil {
err = badConfigErr{Err: verr}
return
return result, err
}
// Check if subnet proxy being set and if so set the same value to proxy of subnet
@ -222,12 +222,12 @@ func setConfigKV(ctx context.Context, objectAPI ObjectLayer, kvBytes []byte) (re
// Update the actual server config on disk.
if err = saveServerConfig(ctx, objectAPI, result.Cfg); err != nil {
return
return result, err
}
// Write the config input KV to history.
err = saveServerConfigHistory(ctx, objectAPI, kvBytes)
return
return result, err
}
// GetConfigKVHandler - GET /minio/admin/v3/get-config-kv?key={key}


@ -125,7 +125,6 @@ func addOrUpdateIDPHandler(ctx context.Context, w http.ResponseWriter, r *http.R
}
if err = validateConfig(ctx, cfg, subSys); err != nil {
var validationErr ldap.Validation
if errors.As(err, &validationErr) {
// If we got an LDAP validation error, we need to send appropriate
@ -416,7 +415,6 @@ func (a adminAPIHandlers) DeleteIdentityProviderCfg(w http.ResponseWriter, r *ht
return
}
if err = validateConfig(ctx, cfg, subSys); err != nil {
var validationErr ldap.Validation
if errors.As(err, &validationErr) {
// If we got an LDAP validation error, we need to send appropriate


@ -20,6 +20,7 @@ package cmd
import (
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"strings"
@ -189,7 +190,7 @@ func (a adminAPIHandlers) AttachDetachPolicyLDAP(w http.ResponseWriter, r *http.
//
// PUT /minio/admin/v3/idp/ldap/add-service-account
func (a adminAPIHandlers) AddServiceAccountLDAP(w http.ResponseWriter, r *http.Request) {
ctx, cred, opts, createReq, targetUser, APIError := commonAddServiceAccount(r)
ctx, cred, opts, createReq, targetUser, APIError := commonAddServiceAccount(r, true)
if APIError.Code != "" {
writeErrorResponseJSON(ctx, w, APIError, r.URL)
return
@ -213,10 +214,7 @@ func (a adminAPIHandlers) AddServiceAccountLDAP(w http.ResponseWriter, r *http.R
}
// Check if we are creating svc account for request sender.
isSvcAccForRequestor := false
if targetUser == requestorUser || targetUser == requestorParentUser {
isSvcAccForRequestor = true
}
isSvcAccForRequestor := targetUser == requestorUser || targetUser == requestorParentUser
var (
targetGroups []string
@ -284,6 +282,18 @@ func (a adminAPIHandlers) AddServiceAccountLDAP(w http.ResponseWriter, r *http.R
opts.claims[ldapUser] = targetUser // DN
opts.claims[ldapActualUser] = lookupResult.ActualDN
// Check if this user or their groups have a policy applied.
ldapPolicies, err := globalIAMSys.PolicyDBGet(targetUser, targetGroups...)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
if len(ldapPolicies) == 0 {
err = fmt.Errorf("No policy set for user `%s` or any of their groups: `%s`", opts.claims[ldapActualUser], strings.Join(targetGroups, "`,`"))
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErrWithErr(ErrAdminNoSuchUser, err), r.URL)
return
}
// Add LDAP attributes that were looked up into the claims.
for attribKey, attribValue := range lookupResult.Attributes {
opts.claims[ldapAttribPrefix+attribKey] = attribValue
@ -332,7 +342,7 @@ func (a adminAPIHandlers) AddServiceAccountLDAP(w http.ResponseWriter, r *http.R
Name: newCred.Name,
Description: newCred.Description,
Claims: opts.claims,
SessionPolicy: createReq.Policy,
SessionPolicy: madmin.SRSessionPolicy(createReq.Policy),
Status: auth.AccountOn,
Expiration: createReq.Expiration,
},
@ -435,8 +445,10 @@ func (a adminAPIHandlers) ListAccessKeysLDAP(w http.ResponseWriter, r *http.Requ
for _, svc := range serviceAccounts {
expiryTime := svc.Expiration
serviceAccountList = append(serviceAccountList, madmin.ServiceAccountInfo{
AccessKey: svc.AccessKey,
Expiration: &expiryTime,
AccessKey: svc.AccessKey,
Expiration: &expiryTime,
Name: svc.Name,
Description: svc.Description,
})
}
for _, sts := range stsKeys {
@ -466,3 +478,180 @@ func (a adminAPIHandlers) ListAccessKeysLDAP(w http.ResponseWriter, r *http.Requ
writeSuccessResponseJSON(w, encryptedData)
}
// ListAccessKeysLDAPBulk - GET /minio/admin/v3/idp/ldap/list-access-keys-bulk
func (a adminAPIHandlers) ListAccessKeysLDAPBulk(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
// Get current object layer instance.
objectAPI := newObjectLayerFn()
if objectAPI == nil || globalNotificationSys == nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL)
return
}
cred, owner, s3Err := validateAdminSignature(ctx, r, "")
if s3Err != ErrNone {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(s3Err), r.URL)
return
}
dnList := r.Form["userDNs"]
isAll := r.Form.Get("all") == "true"
selfOnly := !isAll && len(dnList) == 0
if isAll && len(dnList) > 0 {
// This should be checked on the client side, so return a generic error
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrInvalidRequest), r.URL)
return
}
// Empty DN list and not self: list access keys for all users
if isAll {
if !globalIAMSys.IsAllowed(policy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: policy.ListUsersAdminAction,
ConditionValues: getConditionValues(r, "", cred),
IsOwner: owner,
Claims: cred.Claims,
}) {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAccessDenied), r.URL)
return
}
} else if len(dnList) == 1 {
var dn string
foundResult, err := globalIAMSys.LDAPConfig.GetValidatedDNForUsername(dnList[0])
if err == nil {
dn = foundResult.NormDN
}
if dn == cred.ParentUser || dnList[0] == cred.ParentUser {
selfOnly = true
}
}
if !globalIAMSys.IsAllowed(policy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: policy.ListServiceAccountsAdminAction,
ConditionValues: getConditionValues(r, "", cred),
IsOwner: owner,
Claims: cred.Claims,
DenyOnly: selfOnly,
}) {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAccessDenied), r.URL)
return
}
if selfOnly && len(dnList) == 0 {
selfDN := cred.AccessKey
if cred.ParentUser != "" {
selfDN = cred.ParentUser
}
dnList = append(dnList, selfDN)
}
var ldapUserList []string
if isAll {
ldapUsers, err := globalIAMSys.ListLDAPUsers(ctx)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
for user := range ldapUsers {
ldapUserList = append(ldapUserList, user)
}
} else {
for _, userDN := range dnList {
// Validate the userDN
foundResult, err := globalIAMSys.LDAPConfig.GetValidatedDNForUsername(userDN)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
if foundResult == nil {
continue
}
ldapUserList = append(ldapUserList, foundResult.NormDN)
}
}
listType := r.Form.Get("listType")
var listSTSKeys, listServiceAccounts bool
switch listType {
case madmin.AccessKeyListUsersOnly:
listSTSKeys = false
listServiceAccounts = false
case madmin.AccessKeyListSTSOnly:
listSTSKeys = true
listServiceAccounts = false
case madmin.AccessKeyListSvcaccOnly:
listSTSKeys = false
listServiceAccounts = true
case madmin.AccessKeyListAll:
listSTSKeys = true
listServiceAccounts = true
default:
err := errors.New("invalid list type")
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErrWithErr(ErrInvalidRequest, err), r.URL)
return
}
accessKeyMap := make(map[string]madmin.ListAccessKeysLDAPResp)
for _, internalDN := range ldapUserList {
externalDN := globalIAMSys.LDAPConfig.DecodeDN(internalDN)
accessKeys := madmin.ListAccessKeysLDAPResp{}
if listSTSKeys {
stsKeys, err := globalIAMSys.ListSTSAccounts(ctx, internalDN)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
for _, sts := range stsKeys {
accessKeys.STSKeys = append(accessKeys.STSKeys, madmin.ServiceAccountInfo{
AccessKey: sts.AccessKey,
Expiration: &sts.Expiration,
})
}
// If only STS keys are requested, skip users that have none
if !listServiceAccounts && len(stsKeys) == 0 {
continue
}
}
if listServiceAccounts {
serviceAccounts, err := globalIAMSys.ListServiceAccounts(ctx, internalDN)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
for _, svc := range serviceAccounts {
accessKeys.ServiceAccounts = append(accessKeys.ServiceAccounts, madmin.ServiceAccountInfo{
AccessKey: svc.AccessKey,
Expiration: &svc.Expiration,
Name: svc.Name,
Description: svc.Description,
})
}
// If only service accounts are requested, skip users that have none
if !listSTSKeys && len(serviceAccounts) == 0 {
continue
}
}
accessKeyMap[externalDN] = accessKeys
}
data, err := json.Marshal(accessKeyMap)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
encryptedData, err := madmin.EncryptData(cred.SecretKey, data)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
writeSuccessResponseJSON(w, encryptedData)
}
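The listType switch above reappears verbatim in the OpenID and builtin bulk handlers later in this diff; a small helper along these lines (name hypothetical, constants as used in the diff) could factor it out:

// parseAccessKeyListType maps the listType query parameter onto the
// two booleans the bulk-listing handlers branch on.
func parseAccessKeyListType(v string) (listSTSKeys, listServiceAccounts bool, err error) {
	switch v {
	case madmin.AccessKeyListUsersOnly:
		// users only: list neither STS keys nor service accounts
	case madmin.AccessKeyListSTSOnly:
		listSTSKeys = true
	case madmin.AccessKeyListSvcaccOnly:
		listServiceAccounts = true
	case madmin.AccessKeyListAll:
		listSTSKeys, listServiceAccounts = true, true
	default:
		err = errors.New("invalid list type")
	}
	return listSTSKeys, listServiceAccounts, err
}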

View File

@ -0,0 +1,248 @@
// Copyright (c) 2015-2025 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package cmd
import (
"encoding/json"
"errors"
"net/http"
"sort"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio-go/v7/pkg/set"
"github.com/minio/pkg/v3/policy"
)
const dummyRoleARN = "dummy-internal"
// ListAccessKeysOpenIDBulk - GET /minio/admin/v3/idp/openid/list-access-keys-bulk
func (a adminAPIHandlers) ListAccessKeysOpenIDBulk(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
// Get current object layer instance.
objectAPI := newObjectLayerFn()
if objectAPI == nil || globalNotificationSys == nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL)
return
}
cred, owner, s3Err := validateAdminSignature(ctx, r, "")
if s3Err != ErrNone {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(s3Err), r.URL)
return
}
if !globalIAMSys.OpenIDConfig.Enabled {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminOpenIDNotEnabled), r.URL)
return
}
userList := r.Form["users"]
isAll := r.Form.Get("all") == "true"
selfOnly := !isAll && len(userList) == 0
cfgName := r.Form.Get("configName")
allConfigs := r.Form.Get("allConfigs") == "true"
if cfgName == "" && !allConfigs {
cfgName = madmin.Default
}
if isAll && len(userList) > 0 {
// This should be checked on the client side, so return a generic error
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrInvalidRequest), r.URL)
return
}
// Empty user list and not self: list access keys for all users
if isAll {
if !globalIAMSys.IsAllowed(policy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: policy.ListUsersAdminAction,
ConditionValues: getConditionValues(r, "", cred),
IsOwner: owner,
Claims: cred.Claims,
}) {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAccessDenied), r.URL)
return
}
} else if len(userList) == 1 && userList[0] == cred.ParentUser {
selfOnly = true
}
if !globalIAMSys.IsAllowed(policy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: policy.ListServiceAccountsAdminAction,
ConditionValues: getConditionValues(r, "", cred),
IsOwner: owner,
Claims: cred.Claims,
DenyOnly: selfOnly,
}) {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAccessDenied), r.URL)
return
}
if selfOnly && len(userList) == 0 {
selfDN := cred.AccessKey
if cred.ParentUser != "" {
selfDN = cred.ParentUser
}
userList = append(userList, selfDN)
}
listType := r.Form.Get("listType")
var listSTSKeys, listServiceAccounts bool
switch listType {
case madmin.AccessKeyListUsersOnly:
listSTSKeys = false
listServiceAccounts = false
case madmin.AccessKeyListSTSOnly:
listSTSKeys = true
listServiceAccounts = false
case madmin.AccessKeyListSvcaccOnly:
listSTSKeys = false
listServiceAccounts = true
case madmin.AccessKeyListAll:
listSTSKeys = true
listServiceAccounts = true
default:
err := errors.New("invalid list type")
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErrWithErr(ErrInvalidRequest, err), r.URL)
return
}
s := globalServerConfig.Clone()
roleArnMap := make(map[string]string)
// Map of configs to a map of users to their access keys
cfgToUsersMap := make(map[string]map[string]madmin.OpenIDUserAccessKeys)
configs, err := globalIAMSys.OpenIDConfig.GetConfigList(s)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
for _, config := range configs {
if !allConfigs && cfgName != config.Name {
continue
}
arn := dummyRoleARN
if config.RoleARN != "" {
arn = config.RoleARN
}
roleArnMap[arn] = config.Name
newResp := make(map[string]madmin.OpenIDUserAccessKeys)
cfgToUsersMap[config.Name] = newResp
}
if len(roleArnMap) == 0 {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminNoSuchConfigTarget), r.URL)
return
}
userSet := set.CreateStringSet(userList...)
accessKeys, err := globalIAMSys.ListAllAccessKeys(ctx)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
for _, accessKey := range accessKeys {
// Filter out any disqualifying access keys
_, ok := accessKey.Claims[subClaim]
if !ok {
continue // OpenID access keys must have a sub claim
}
if (!listSTSKeys && !accessKey.IsServiceAccount()) || (!listServiceAccounts && accessKey.IsServiceAccount()) {
continue // skip if not the type we want
}
arn, ok := accessKey.Claims[roleArnClaim].(string)
if !ok {
if _, ok := accessKey.Claims[iamPolicyClaimNameOpenID()]; !ok {
continue // skip if no roleArn and no policy claim
}
// claim-based providers are stored in roleArnMap under the dummy ARN
arn = dummyRoleARN
}
matchingCfgName, ok := roleArnMap[arn]
if !ok {
continue // skip if not part of the target config
}
var id string
if idClaim := globalIAMSys.OpenIDConfig.GetUserIDClaim(matchingCfgName); idClaim != "" {
id, _ = accessKey.Claims[idClaim].(string)
}
if !userSet.IsEmpty() && !userSet.Contains(accessKey.ParentUser) && !userSet.Contains(id) {
continue // skip if not in the user list
}
openIDUserAccessKeys, ok := cfgToUsersMap[matchingCfgName][accessKey.ParentUser]
// Add new user to map if not already present
if !ok {
var readableClaim string
if rc := globalIAMSys.OpenIDConfig.GetUserReadableClaim(matchingCfgName); rc != "" {
readableClaim, _ = accessKey.Claims[rc].(string)
}
openIDUserAccessKeys = madmin.OpenIDUserAccessKeys{
MinioAccessKey: accessKey.ParentUser,
ID: id,
ReadableName: readableClaim,
}
}
svcAccInfo := madmin.ServiceAccountInfo{
AccessKey: accessKey.AccessKey,
Expiration: &accessKey.Expiration,
}
if accessKey.IsServiceAccount() {
openIDUserAccessKeys.ServiceAccounts = append(openIDUserAccessKeys.ServiceAccounts, svcAccInfo)
} else {
openIDUserAccessKeys.STSKeys = append(openIDUserAccessKeys.STSKeys, svcAccInfo)
}
cfgToUsersMap[matchingCfgName][accessKey.ParentUser] = openIDUserAccessKeys
}
// Convert map to slice and sort
resp := make([]madmin.ListAccessKeysOpenIDResp, 0, len(cfgToUsersMap))
for cfgName, usersMap := range cfgToUsersMap {
users := make([]madmin.OpenIDUserAccessKeys, 0, len(usersMap))
for _, user := range usersMap {
users = append(users, user)
}
sort.Slice(users, func(i, j int) bool {
return users[i].MinioAccessKey < users[j].MinioAccessKey
})
resp = append(resp, madmin.ListAccessKeysOpenIDResp{
ConfigName: cfgName,
Users: users,
})
}
sort.Slice(resp, func(i, j int) bool {
return resp[i].ConfigName < resp[j].ConfigName
})
data, err := json.Marshal(resp)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
encryptedData, err := madmin.EncryptData(cred.SecretKey, data)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
writeSuccessResponseJSON(w, encryptedData)
}
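The claim filtering above relies on comma-ok type assertions, which is what keeps a missing or non-string claim from panicking. A standalone illustration of that behavior:

package main

import "fmt"

func main() {
	claims := map[string]any{"roleArn": "arn:minio:iam:::role/x"}
	arn, ok := claims["roleArn"].(string) // present and a string
	fmt.Println(arn, ok)                  // arn:minio:iam:::role/x true
	sub, ok := claims["sub"].(string) // absent: zero value, ok is false
	fmt.Println(sub == "", ok)        // true false
}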

View File

@ -61,7 +61,7 @@ func (a adminAPIHandlers) StartDecommission(w http.ResponseWriter, r *http.Reque
return
}
if z.IsRebalanceStarted() {
if z.IsRebalanceStarted(ctx) {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminRebalanceAlreadyStarted), r.URL)
return
}
@ -258,8 +258,8 @@ func (a adminAPIHandlers) RebalanceStart(w http.ResponseWriter, r *http.Request)
// concurrent rebalance-start commands.
if ep := globalEndpoints[0].Endpoints[0]; !ep.IsLocal {
for nodeIdx, proxyEp := range globalProxyEndpoints {
if proxyEp.Endpoint.Host == ep.Host {
if proxyRequestByNodeIndex(ctx, w, r, nodeIdx) {
if proxyEp.Host == ep.Host {
if proxied, success := proxyRequestByNodeIndex(ctx, w, r, nodeIdx, false); proxied && success {
return
}
}
@ -277,7 +277,7 @@ func (a adminAPIHandlers) RebalanceStart(w http.ResponseWriter, r *http.Request)
return
}
if pools.IsRebalanceStarted() {
if pools.IsRebalanceStarted(ctx) {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminRebalanceAlreadyStarted), r.URL)
return
}
@ -329,8 +329,8 @@ func (a adminAPIHandlers) RebalanceStatus(w http.ResponseWriter, r *http.Request
// pools may temporarily have out of date info on the others.
if ep := globalEndpoints[0].Endpoints[0]; !ep.IsLocal {
for nodeIdx, proxyEp := range globalProxyEndpoints {
if proxyEp.Endpoint.Host == ep.Host {
if proxyRequestByNodeIndex(ctx, w, r, nodeIdx) {
if proxyEp.Host == ep.Host {
if proxied, success := proxyRequestByNodeIndex(ctx, w, r, nodeIdx, false); proxied && success {
return
}
}
@ -374,19 +374,20 @@ func (a adminAPIHandlers) RebalanceStop(w http.ResponseWriter, r *http.Request)
globalNotificationSys.StopRebalance(r.Context())
writeSuccessResponseHeadersOnly(w)
adminLogIf(ctx, pools.saveRebalanceStats(GlobalContext, 0, rebalSaveStoppedAt))
globalNotificationSys.LoadRebalanceMeta(ctx, false)
}
func proxyDecommissionRequest(ctx context.Context, defaultEndPoint Endpoint, w http.ResponseWriter, r *http.Request) (proxy bool) {
host := env.Get("_MINIO_DECOM_ENDPOINT_HOST", defaultEndPoint.Host)
if host == "" {
return
return proxy
}
for nodeIdx, proxyEp := range globalProxyEndpoints {
if proxyEp.Endpoint.Host == host && !proxyEp.IsLocal {
if proxyRequestByNodeIndex(ctx, w, r, nodeIdx) {
if proxyEp.Host == host && !proxyEp.IsLocal {
if proxied, success := proxyRequestByNodeIndex(ctx, w, r, nodeIdx, false); proxied && success {
return true
}
}
}
return
return proxy
}
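proxyRequestByNodeIndex now returns two booleans, and judging by these call sites they distinguish "a peer was attempted" from "the peer actually answered": only a successful proxy ends the handler; otherwise the local node serves the request itself. A toy model of that contract (names and semantics inferred from the call sites, not authoritative):

package main

import "fmt"

// tryPeer mimics the (proxied, success) pair: proxied reports that a
// peer was attempted, success that the peer produced a response.
func tryPeer(peerUp bool) (proxied, success bool) {
	return true, peerUp
}

func main() {
	if proxied, success := tryPeer(false); proxied && success {
		fmt.Println("served by peer")
		return
	}
	fmt.Println("falling back to local handling") // printed
}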

View File

@ -70,7 +70,7 @@ func (a adminAPIHandlers) SiteReplicationAdd(w http.ResponseWriter, r *http.Requ
func getSRAddOptions(r *http.Request) (opts madmin.SRAddOptions) {
opts.ReplicateILMExpiry = r.Form.Get("replicateILMExpiry") == "true"
return
return opts
}
// SRPeerJoin - PUT /minio/admin/v3/site-replication/join
@ -304,7 +304,7 @@ func (a adminAPIHandlers) SRPeerGetIDPSettings(w http.ResponseWriter, r *http.Re
}
}
func parseJSONBody(ctx context.Context, body io.Reader, v interface{}, encryptionKey string) error {
func parseJSONBody(ctx context.Context, body io.Reader, v any, encryptionKey string) error {
data, err := io.ReadAll(body)
if err != nil {
return SRError{
@ -422,7 +422,7 @@ func (a adminAPIHandlers) SiteReplicationEdit(w http.ResponseWriter, r *http.Req
func getSREditOptions(r *http.Request) (opts madmin.SREditOptions) {
opts.DisableILMExpiryReplication = r.Form.Get("disableILMExpiryReplication") == "true"
opts.EnableILMExpiryReplication = r.Form.Get("enableILMExpiryReplication") == "true"
return
return opts
}
// SRPeerEdit - PUT /minio/admin/v3/site-replication/peer/edit
@ -484,7 +484,7 @@ func getSRStatusOptions(r *http.Request) (opts madmin.SRStatusOptions) {
opts.EntityValue = q.Get("entityvalue")
opts.ShowDeleted = q.Get("showDeleted") == "true"
opts.Metrics = q.Get("metrics") == "true"
return
return opts
}
// SiteReplicationRemove - PUT /minio/admin/v3/site-replication/remove

View File

@ -89,7 +89,7 @@ func (s *TestSuiteIAM) TestDeleteUserRace(c *check) {
// Create a policy
policy := "mypolicy"
policyBytes := []byte(fmt.Sprintf(`{
policyBytes := fmt.Appendf(nil, `{
"Version": "2012-10-17",
"Statement": [
{
@ -104,7 +104,7 @@ func (s *TestSuiteIAM) TestDeleteUserRace(c *check) {
]
}
]
}`, bucket))
}`, bucket)
err = s.adm.AddCannedPolicy(ctx, policy, policyBytes)
if err != nil {
c.Fatalf("policy add error: %v", err)
@ -113,16 +113,19 @@ func (s *TestSuiteIAM) TestDeleteUserRace(c *check) {
userCount := 50
accessKeys := make([]string, userCount)
secretKeys := make([]string, userCount)
for i := 0; i < userCount; i++ {
for i := range userCount {
accessKey, secretKey := mustGenerateCredentials(c)
err = s.adm.SetUser(ctx, accessKey, secretKey, madmin.AccountEnabled)
if err != nil {
c.Fatalf("Unable to set user: %v", err)
}
err = s.adm.SetPolicy(ctx, policy, accessKey, false)
if err != nil {
c.Fatalf("Unable to set policy: %v", err)
userReq := madmin.PolicyAssociationReq{
Policies: []string{policy},
User: accessKey,
}
if _, err := s.adm.AttachPolicy(ctx, userReq); err != nil {
c.Fatalf("Unable to attach policy: %v", err)
}
accessKeys[i] = accessKey
@ -130,7 +133,7 @@ func (s *TestSuiteIAM) TestDeleteUserRace(c *check) {
}
g := errgroup.Group{}
for i := 0; i < userCount; i++ {
for i := range userCount {
g.Go(func(i int) func() error {
return func() error {
uClient := s.getUserClient(c, accessKeys[i], secretKeys[i], "")
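The loop rewrites here use Go 1.22's integer range, which iterates i = 0..n-1 exactly like the three-clause form it replaces:

package main

import "fmt"

func main() {
	for i := range 3 { // Go 1.22+: same as for i := 0; i < 3; i++
		fmt.Print(i, " ") // 0 1 2
	}
	fmt.Println()
}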

View File

@ -24,18 +24,21 @@ import (
"errors"
"fmt"
"io"
"maps"
"net/http"
"os"
"slices"
"sort"
"strconv"
"strings"
"time"
"unicode/utf8"
"github.com/klauspost/compress/zip"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio/internal/auth"
"github.com/minio/minio/internal/cachevalue"
"github.com/minio/minio/internal/config/dns"
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
xldap "github.com/minio/pkg/v3/ldap"
"github.com/minio/pkg/v3/policy"
@ -64,6 +67,17 @@ func (a adminAPIHandlers) RemoveUser(w http.ResponseWriter, r *http.Request) {
return
}
// This API only supports removal of internal users, not service accounts.
ok, _, err = globalIAMSys.IsServiceAccount(accessKey)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
if ok {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, errIAMActionNotAllowed), r.URL)
return
}
// When the user is the root credential you are not allowed to
// remove it. Also, you cannot delete yourself.
if accessKey == globalActiveCred.AccessKey || accessKey == cred.AccessKey {
@ -144,9 +158,7 @@ func (a adminAPIHandlers) ListUsers(w http.ResponseWriter, r *http.Request) {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
for k, v := range ldapUsers {
allCredentials[k] = v
}
maps.Copy(allCredentials, ldapUsers)
// Marshal the response
data, err := json.Marshal(allCredentials)
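maps.Copy (Go 1.21) replaces the hand-written loop one-for-one; like the loop, it overwrites destination entries whose keys also appear in the source:

package main

import (
	"fmt"
	"maps"
)

func main() {
	dst := map[string]int{"a": 1, "b": 0}
	src := map[string]int{"b": 2}
	maps.Copy(dst, src) // copies src into dst, overwriting "b"
	fmt.Println(dst)    // map[a:1 b:2]
}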
@ -184,12 +196,7 @@ func (a adminAPIHandlers) GetUserInfo(w http.ResponseWriter, r *http.Request) {
return
}
checkDenyOnly := false
if name == cred.AccessKey {
// Check that there is no explicit deny - otherwise it's allowed
// to view one's own info.
checkDenyOnly = true
}
checkDenyOnly := name == cred.AccessKey
if !globalIAMSys.IsAllowed(policy.Args{
AccountName: cred.AccessKey,
@ -480,12 +487,7 @@ func (a adminAPIHandlers) AddUser(w http.ResponseWriter, r *http.Request) {
return
}
checkDenyOnly := false
if accessKey == cred.AccessKey {
// Check that there is no explicit deny - otherwise it's allowed
// to change one's own password.
checkDenyOnly = true
}
checkDenyOnly := accessKey == cred.AccessKey
if !globalIAMSys.IsAllowed(policy.Args{
AccountName: cred.AccessKey,
@ -636,7 +638,7 @@ func (a adminAPIHandlers) TemporaryAccountInfo(w http.ResponseWriter, r *http.Re
// AddServiceAccount - PUT /minio/admin/v3/add-service-account
func (a adminAPIHandlers) AddServiceAccount(w http.ResponseWriter, r *http.Request) {
ctx, cred, opts, createReq, targetUser, APIError := commonAddServiceAccount(r)
ctx, cred, opts, createReq, targetUser, APIError := commonAddServiceAccount(r, false)
if APIError.Code != "" {
writeErrorResponseJSON(ctx, w, APIError, r.URL)
return
@ -676,10 +678,7 @@ func (a adminAPIHandlers) AddServiceAccount(w http.ResponseWriter, r *http.Reque
}
// Check if we are creating svc account for request sender.
isSvcAccForRequestor := false
if targetUser == requestorUser || targetUser == requestorParentUser {
isSvcAccForRequestor = true
}
isSvcAccForRequestor := targetUser == requestorUser || targetUser == requestorParentUser
// If we are creating svc account for request sender, ensure
// that targetUser is a real user (i.e. not derived
@ -770,7 +769,7 @@ func (a adminAPIHandlers) AddServiceAccount(w http.ResponseWriter, r *http.Reque
Name: newCred.Name,
Description: newCred.Description,
Claims: opts.claims,
SessionPolicy: createReq.Policy,
SessionPolicy: madmin.SRSessionPolicy(createReq.Policy),
Status: auth.AccountOn,
Expiration: createReq.Expiration,
},
@ -894,7 +893,7 @@ func (a adminAPIHandlers) UpdateServiceAccount(w http.ResponseWriter, r *http.Re
Status: opts.status,
Name: opts.name,
Description: opts.description,
SessionPolicy: updateReq.NewPolicy,
SessionPolicy: madmin.SRSessionPolicy(updateReq.NewPolicy),
Expiration: updateReq.NewExpiration,
},
},
@ -1166,6 +1165,172 @@ func (a adminAPIHandlers) DeleteServiceAccount(w http.ResponseWriter, r *http.Re
writeSuccessNoContent(w)
}
// ListAccessKeysBulk - GET /minio/admin/v3/list-access-keys-bulk
func (a adminAPIHandlers) ListAccessKeysBulk(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
// Get current object layer instance.
objectAPI := newObjectLayerFn()
if objectAPI == nil || globalNotificationSys == nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL)
return
}
cred, owner, s3Err := validateAdminSignature(ctx, r, "")
if s3Err != ErrNone {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(s3Err), r.URL)
return
}
users := r.Form["users"]
isAll := r.Form.Get("all") == "true"
selfOnly := !isAll && len(users) == 0
if isAll && len(users) > 0 {
// This should be checked on the client side, so return a generic error
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrInvalidRequest), r.URL)
return
}
// Empty user list and not self: list access keys for all users
if isAll {
if !globalIAMSys.IsAllowed(policy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: policy.ListUsersAdminAction,
ConditionValues: getConditionValues(r, "", cred),
IsOwner: owner,
Claims: cred.Claims,
}) {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAccessDenied), r.URL)
return
}
} else if len(users) == 1 {
if users[0] == cred.AccessKey || users[0] == cred.ParentUser {
selfOnly = true
}
}
if !globalIAMSys.IsAllowed(policy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: policy.ListServiceAccountsAdminAction,
ConditionValues: getConditionValues(r, "", cred),
IsOwner: owner,
Claims: cred.Claims,
DenyOnly: selfOnly,
}) {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAccessDenied), r.URL)
return
}
if selfOnly && len(users) == 0 {
selfUser := cred.AccessKey
if cred.ParentUser != "" {
selfUser = cred.ParentUser
}
users = append(users, selfUser)
}
var checkedUserList []string
if isAll {
users, err := globalIAMSys.ListUsers(ctx)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
for user := range users {
checkedUserList = append(checkedUserList, user)
}
checkedUserList = append(checkedUserList, globalActiveCred.AccessKey)
} else {
for _, user := range users {
// Validate the user
_, ok := globalIAMSys.GetUser(ctx, user)
if !ok {
continue
}
checkedUserList = append(checkedUserList, user)
}
}
listType := r.Form.Get("listType")
var listSTSKeys, listServiceAccounts bool
switch listType {
case madmin.AccessKeyListUsersOnly:
listSTSKeys = false
listServiceAccounts = false
case madmin.AccessKeyListSTSOnly:
listSTSKeys = true
listServiceAccounts = false
case madmin.AccessKeyListSvcaccOnly:
listSTSKeys = false
listServiceAccounts = true
case madmin.AccessKeyListAll:
listSTSKeys = true
listServiceAccounts = true
default:
err := errors.New("invalid list type")
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErrWithErr(ErrInvalidRequest, err), r.URL)
return
}
accessKeyMap := make(map[string]madmin.ListAccessKeysResp)
for _, user := range checkedUserList {
accessKeys := madmin.ListAccessKeysResp{}
if listSTSKeys {
stsKeys, err := globalIAMSys.ListSTSAccounts(ctx, user)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
for _, sts := range stsKeys {
accessKeys.STSKeys = append(accessKeys.STSKeys, madmin.ServiceAccountInfo{
AccessKey: sts.AccessKey,
Expiration: &sts.Expiration,
})
}
// If only STS keys are requested, skip users that have none
if !listServiceAccounts && len(stsKeys) == 0 {
continue
}
}
if listServiceAccounts {
serviceAccounts, err := globalIAMSys.ListServiceAccounts(ctx, user)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
for _, svc := range serviceAccounts {
accessKeys.ServiceAccounts = append(accessKeys.ServiceAccounts, madmin.ServiceAccountInfo{
AccessKey: svc.AccessKey,
Expiration: &svc.Expiration,
})
}
// If only service accounts are requested, skip users that have none
if !listSTSKeys && len(serviceAccounts) == 0 {
continue
}
}
accessKeyMap[user] = accessKeys
}
data, err := json.Marshal(accessKeyMap)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
encryptedData, err := madmin.EncryptData(cred.SecretKey, data)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
writeSuccessResponseJSON(w, encryptedData)
}
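All three bulk handlers default the listing target to the caller's own identity, preferring ParentUser because STS and service-account credentials act on behalf of a parent user. A sketch of that shared step (helper name hypothetical):

// selfIdentity returns the identity whose keys a caller may list by
// default: the parent user for derived credentials, else the access key.
func selfIdentity(accessKey, parentUser string) string {
	if parentUser != "" {
		return parentUser
	}
	return accessKey
}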
// AccountInfoHandler returns usage, permissions and other bucket metadata for the incoming user
func (a adminAPIHandlers) AccountInfoHandler(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
@ -1235,18 +1400,6 @@ func (a adminAPIHandlers) AccountInfoHandler(w http.ResponseWriter, r *http.Requ
return rd, wr
}
bucketStorageCache.InitOnce(10*time.Second,
cachevalue.Opts{ReturnLastGood: true},
func(ctx context.Context) (DataUsageInfo, error) {
ctx, done := context.WithTimeout(ctx, 2*time.Second)
defer done()
return loadDataUsageFromBackend(ctx, objectAPI)
},
)
dataUsageInfo, _ := bucketStorageCache.Get()
// If etcd, dns federation configured list buckets from etcd.
var err error
var buckets []BucketInfo
@ -1287,7 +1440,12 @@ func (a adminAPIHandlers) AccountInfoHandler(w http.ResponseWriter, r *http.Requ
var buf []byte
switch {
case accountName == globalActiveCred.AccessKey:
case accountName == globalActiveCred.AccessKey || newGlobalAuthZPluginFn() != nil:
// For owner account and when plugin authZ is configured always set
// effective policy as `consoleAdmin`.
//
// In the latter case, we let the UI render everything, but individual
// actions would fail if not permitted by the external authZ service.
for _, policy := range policy.DefaultPolicies {
if policy.Name == "consoleAdmin" {
effectivePolicy = policy.Definition
@ -1315,8 +1473,8 @@ func (a adminAPIHandlers) AccountInfoHandler(w http.ResponseWriter, r *http.Requ
return
}
effectivePolicy = globalIAMSys.GetCombinedPolicy(policies...)
}
buf, err = json.MarshalIndent(effectivePolicy, "", " ")
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
@ -1333,15 +1491,12 @@ func (a adminAPIHandlers) AccountInfoHandler(w http.ResponseWriter, r *http.Requ
rd, wr := isAllowedAccess(bucket.Name)
if rd || wr {
// Fetch the data usage of the current bucket
var size uint64
var objectsCount uint64
var objectsHist, versionsHist map[string]uint64
if !dataUsageInfo.LastUpdate.IsZero() {
size = dataUsageInfo.BucketsUsage[bucket.Name].Size
objectsCount = dataUsageInfo.BucketsUsage[bucket.Name].ObjectsCount
objectsHist = dataUsageInfo.BucketsUsage[bucket.Name].ObjectSizesHistogram
versionsHist = dataUsageInfo.BucketsUsage[bucket.Name].ObjectVersionsHistogram
}
bui := globalBucketQuotaSys.GetBucketUsageInfo(ctx, bucket.Name)
size := bui.Size
objectsCount := bui.ObjectsCount
objectsHist := bui.ObjectSizesHistogram
versionsHist := bui.ObjectVersionsHistogram
// Fetch the prefix usage of the current bucket
var prefixUsage map[string]uint64
if enablePrefixUsage {
@ -1411,6 +1566,7 @@ func (a adminAPIHandlers) InfoCannedPolicy(w http.ResponseWriter, r *http.Reques
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, errTooManyPolicies), r.URL)
return
}
setReqInfoPolicyName(ctx, name)
policyDoc, err := globalIAMSys.InfoPolicy(name)
if err != nil {
@ -1514,6 +1670,7 @@ func (a adminAPIHandlers) RemoveCannedPolicy(w http.ResponseWriter, r *http.Requ
vars := mux.Vars(r)
policyName := vars["name"]
setReqInfoPolicyName(ctx, policyName)
if err := globalIAMSys.DeletePolicy(ctx, policyName, true); err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
@ -1546,6 +1703,13 @@ func (a adminAPIHandlers) AddCannedPolicy(w http.ResponseWriter, r *http.Request
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminResourceInvalidArgument), r.URL)
return
}
setReqInfoPolicyName(ctx, policyName)
// Reject policy names with commas: policy mappings are stored and
// parsed as comma-separated lists, which a comma in the name would corrupt.
if strings.Contains(policyName, ",") {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrPolicyInvalidName), r.URL)
return
}
// Error out if Content-Length is missing.
if r.ContentLength <= 0 {
@ -1611,6 +1775,7 @@ func (a adminAPIHandlers) SetPolicyForUserOrGroup(w http.ResponseWriter, r *http
policyName := vars["policyName"]
entityName := vars["userOrGroup"]
isGroup := vars["isGroup"] == "true"
setReqInfoPolicyName(ctx, policyName)
if !isGroup {
ok, _, err := globalIAMSys.IsTempUser(entityName)
@ -1661,16 +1826,18 @@ func (a adminAPIHandlers) SetPolicyForUserOrGroup(w http.ResponseWriter, r *http
iamLogIf(ctx, err)
} else if foundGroupDN == nil || !underBaseDN {
err = errNoSuchGroup
} else {
entityName = foundGroupDN.NormDN
}
entityName = foundGroupDN.NormDN
} else {
var foundUserDN *xldap.DNSearchResult
if foundUserDN, err = globalIAMSys.LDAPConfig.GetValidatedDNForUsername(entityName); err != nil {
iamLogIf(ctx, err)
} else if foundUserDN == nil {
err = errNoSuchUser
} else {
entityName = foundUserDN.NormDN
}
entityName = foundUserDN.NormDN
}
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
@ -1696,7 +1863,7 @@ func (a adminAPIHandlers) SetPolicyForUserOrGroup(w http.ResponseWriter, r *http
}))
}
// ListPolicyMappingEntities - GET /minio/admin/v3/idp/builtin/polciy-entities?policy=xxx&user=xxx&group=xxx
// ListPolicyMappingEntities - GET /minio/admin/v3/idp/builtin/policy-entities?policy=xxx&user=xxx&group=xxx
func (a adminAPIHandlers) ListPolicyMappingEntities(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
@ -1798,6 +1965,7 @@ func (a adminAPIHandlers) AttachDetachPolicyBuiltin(w http.ResponseWriter, r *ht
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
setReqInfoPolicyName(ctx, strings.Join(addedOrRemoved, ","))
respBody := madmin.PolicyAssociationResp{
UpdatedAt: updatedAt,
@ -1823,6 +1991,227 @@ func (a adminAPIHandlers) AttachDetachPolicyBuiltin(w http.ResponseWriter, r *ht
writeSuccessResponseJSON(w, encryptedData)
}
// RevokeTokens - POST /minio/admin/v3/revoke-tokens/{userProvider}
func (a adminAPIHandlers) RevokeTokens(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
// Get current object layer instance.
objectAPI := newObjectLayerFn()
if objectAPI == nil || globalNotificationSys == nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL)
return
}
cred, owner, s3Err := validateAdminSignature(ctx, r, "")
if s3Err != ErrNone {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(s3Err), r.URL)
return
}
userProvider := mux.Vars(r)["userProvider"]
user := r.Form.Get("user")
tokenRevokeType := r.Form.Get("tokenRevokeType")
fullRevoke := r.Form.Get("fullRevoke") == "true"
isTokenSelfRevoke := user == ""
if !isTokenSelfRevoke {
var err error
user, err = getUserWithProvider(ctx, userProvider, user, false)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
}
if (user != "" && tokenRevokeType == "" && !fullRevoke) || (tokenRevokeType != "" && fullRevoke) {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrInvalidRequest), r.URL)
return
}
adminPrivilege := globalIAMSys.IsAllowed(policy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: policy.RemoveServiceAccountAdminAction,
ConditionValues: getConditionValues(r, "", cred),
IsOwner: owner,
Claims: cred.Claims,
})
if !adminPrivilege || isTokenSelfRevoke {
parentUser := cred.AccessKey
if cred.ParentUser != "" {
parentUser = cred.ParentUser
}
if !isTokenSelfRevoke && user != parentUser {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAccessDenied), r.URL)
return
}
user = parentUser
}
// Infer the token revoke type from the request if the requestor is an STS account.
if isTokenSelfRevoke && tokenRevokeType == "" && !fullRevoke {
if cred.IsTemp() {
tokenRevokeType, _ = cred.Claims[tokenRevokeTypeClaim].(string)
}
if tokenRevokeType == "" {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrNoTokenRevokeType), r.URL)
return
}
}
err := globalIAMSys.RevokeTokens(ctx, user, tokenRevokeType)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
writeSuccessNoContent(w)
}
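The up-front parameter check admits exactly one selection mode when a user is named: either a specific token revoke type or a full revoke, never both and never neither. A small function mirroring that predicate, for illustration only:

package main

import "fmt"

// invalidRevokeRequest mirrors the handler's check: naming a user
// requires choosing a revoke type or a full revoke, and the two
// options are mutually exclusive.
func invalidRevokeRequest(user, revokeType string, fullRevoke bool) bool {
	return (user != "" && revokeType == "" && !fullRevoke) ||
		(revokeType != "" && fullRevoke)
}

func main() {
	fmt.Println(invalidRevokeRequest("alice", "", false))    // true: nothing selected
	fmt.Println(invalidRevokeRequest("alice", "app", true))  // true: both selected
	fmt.Println(invalidRevokeRequest("alice", "app", false)) // false: valid
}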
// InfoAccessKey - GET /minio/admin/v3/info-access-key?access-key=<access-key>
func (a adminAPIHandlers) InfoAccessKey(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
// Get current object layer instance.
objectAPI := newObjectLayerFn()
if objectAPI == nil || globalNotificationSys == nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL)
return
}
cred, owner, s3Err := validateAdminSignature(ctx, r, "")
if s3Err != ErrNone {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(s3Err), r.URL)
return
}
accessKey := mux.Vars(r)["accessKey"]
if accessKey == "" {
accessKey = cred.AccessKey
}
u, ok := globalIAMSys.GetUser(ctx, accessKey)
targetCred := u.Credentials
if !globalIAMSys.IsAllowed(policy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: policy.ListServiceAccountsAdminAction,
ConditionValues: getConditionValues(r, "", cred),
IsOwner: owner,
Claims: cred.Claims,
}) {
// If the requested user does not exist and the requestor is not allowed to list service accounts, return access denied.
if !ok {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAccessDenied), r.URL)
return
}
requestUser := cred.AccessKey
if cred.ParentUser != "" {
requestUser = cred.ParentUser
}
if requestUser != targetCred.ParentUser {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAccessDenied), r.URL)
return
}
}
if !ok {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminNoSuchAccessKey), r.URL)
return
}
var (
sessionPolicy *policy.Policy
err error
userType string
)
switch {
case targetCred.IsTemp():
userType = "STS"
_, sessionPolicy, err = globalIAMSys.GetTemporaryAccount(ctx, accessKey)
if err == errNoSuchTempAccount {
err = errNoSuchAccessKey
}
case targetCred.IsServiceAccount():
userType = "Service Account"
_, sessionPolicy, err = globalIAMSys.GetServiceAccount(ctx, accessKey)
if err == errNoSuchServiceAccount {
err = errNoSuchAccessKey
}
default:
err = errNoSuchAccessKey
}
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
// If the session policy is nil or empty, the implied (parent user's) policy applies
impliedPolicy := sessionPolicy == nil || (sessionPolicy.Version == "" && len(sessionPolicy.Statements) == 0)
var svcAccountPolicy policy.Policy
if !impliedPolicy {
svcAccountPolicy = *sessionPolicy
} else {
policiesNames, err := globalIAMSys.PolicyDBGet(targetCred.ParentUser, targetCred.Groups...)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
svcAccountPolicy = globalIAMSys.GetCombinedPolicy(policiesNames...)
}
policyJSON, err := json.MarshalIndent(svcAccountPolicy, "", " ")
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
var expiration *time.Time
if !targetCred.Expiration.IsZero() && !targetCred.Expiration.Equal(timeSentinel) {
expiration = &targetCred.Expiration
}
userProvider := guessUserProvider(targetCred)
infoResp := madmin.InfoAccessKeyResp{
AccessKey: accessKey,
InfoServiceAccountResp: madmin.InfoServiceAccountResp{
ParentUser: targetCred.ParentUser,
Name: targetCred.Name,
Description: targetCred.Description,
AccountStatus: targetCred.Status,
ImpliedPolicy: impliedPolicy,
Policy: string(policyJSON),
Expiration: expiration,
},
UserType: userType,
UserProvider: userProvider,
}
populateProviderInfoFromClaims(targetCred.Claims, userProvider, &infoResp)
data, err := json.Marshal(infoResp)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
encryptedData, err := madmin.EncryptData(cred.SecretKey, data)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
writeSuccessResponseJSON(w, encryptedData)
}
const (
allPoliciesFile = "policies.json"
allUsersFile = "users.json"
@ -1852,6 +2241,7 @@ func (a adminAPIHandlers) ExportIAM(w http.ResponseWriter, r *http.Request) {
// Get current object layer instance.
objectAPI, _ := validateAdminReq(ctx, w, r, policy.ExportIAMAction)
if objectAPI == nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL)
return
}
// Initialize a zip writer which will provide a zipped content
@ -1988,7 +2378,7 @@ func (a adminAPIHandlers) ExportIAM(w http.ResponseWriter, r *http.Request) {
SecretKey: acc.Credentials.SecretKey,
Groups: acc.Credentials.Groups,
Claims: claims,
SessionPolicy: json.RawMessage(policyJSON),
SessionPolicy: policyJSON,
Status: acc.Credentials.Status,
Name: sa.Name,
Description: sa.Description,
@ -2062,19 +2452,25 @@ func (a adminAPIHandlers) ExportIAM(w http.ResponseWriter, r *http.Request) {
// ImportIAM - imports all IAM info into MinIO
func (a adminAPIHandlers) ImportIAM(w http.ResponseWriter, r *http.Request) {
a.importIAM(w, r, "")
}
// ImportIAMV2 - imports all IAM info into MinIO
func (a adminAPIHandlers) ImportIAMV2(w http.ResponseWriter, r *http.Request) {
a.importIAM(w, r, "v2")
}
// importIAM - imports all IAM info into MinIO; shared by the v1 and v2 handlers
func (a adminAPIHandlers) importIAM(w http.ResponseWriter, r *http.Request, apiVer string) {
ctx := r.Context()
// Get current object layer instance.
objectAPI := newObjectLayerFn()
// Validate signature, permissions and get current object layer instance.
objectAPI, _ := validateAdminReq(ctx, w, r, policy.ImportIAMAction)
if objectAPI == nil || globalNotificationSys == nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL)
return
}
cred, owner, s3Err := validateAdminSignature(ctx, r, "")
if s3Err != ErrNone {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(s3Err), r.URL)
return
}
data, err := io.ReadAll(r.Body)
if err != nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrInvalidRequest), r.URL)
@ -2086,9 +2482,12 @@ func (a adminAPIHandlers) ImportIAM(w http.ResponseWriter, r *http.Request) {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrInvalidRequest), r.URL)
return
}
var skipped, removed, added madmin.IAMEntities
var failed madmin.IAMErrEntities
// import policies first
{
f, err := zr.Open(pathJoin(iamAssetsDir, allPoliciesFile))
switch {
case errors.Is(err, os.ErrNotExist):
@ -2111,8 +2510,10 @@ func (a adminAPIHandlers) ImportIAM(w http.ResponseWriter, r *http.Request) {
for policyName, policy := range allPolicies {
if policy.IsEmpty() {
err = globalIAMSys.DeletePolicy(ctx, policyName, true)
removed.Policies = append(removed.Policies, policyName)
} else {
_, err = globalIAMSys.SetPolicy(ctx, policyName, policy)
added.Policies = append(added.Policies, policyName)
}
if err != nil {
writeErrorResponseJSON(ctx, w, importError(ctx, err, allPoliciesFile, policyName), r.URL)
@ -2158,43 +2559,17 @@ func (a adminAPIHandlers) ImportIAM(w http.ResponseWriter, r *http.Request) {
return
}
if (cred.IsTemp() || cred.IsServiceAccount()) && cred.ParentUser == accessKey {
// If the incoming access key matches the parent user, reject
// the password change request.
writeErrorResponseJSON(ctx, w, importErrorWithAPIErr(ctx, ErrAddUserInvalidArgument, err, allUsersFile, accessKey), r.URL)
return
}
// Check if accessKey has leading or trailing space characters; this only applies to new users.
if !exists && hasSpaceBE(accessKey) {
writeErrorResponseJSON(ctx, w, importErrorWithAPIErr(ctx, ErrAdminResourceInvalidArgument, err, allUsersFile, accessKey), r.URL)
return
}
checkDenyOnly := false
if accessKey == cred.AccessKey {
// Check that there is no explicit deny - otherwise it's allowed
// to change one's own password.
checkDenyOnly = true
}
if !globalIAMSys.IsAllowed(policy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: policy.CreateUserAdminAction,
ConditionValues: getConditionValues(r, "", cred),
IsOwner: owner,
Claims: cred.Claims,
DenyOnly: checkDenyOnly,
}) {
writeErrorResponseJSON(ctx, w, importErrorWithAPIErr(ctx, ErrAccessDenied, err, allUsersFile, accessKey), r.URL)
return
}
if _, err = globalIAMSys.CreateUser(ctx, accessKey, ureq); err != nil {
writeErrorResponseJSON(ctx, w, importErrorWithAPIErr(ctx, toAdminAPIErrCode(ctx, err), err, allUsersFile, accessKey), r.URL)
return
failed.Users = append(failed.Users, madmin.IAMErrEntity{Name: accessKey, Error: err})
} else {
added.Users = append(added.Users, accessKey)
}
}
}
}
@ -2230,8 +2605,9 @@ func (a adminAPIHandlers) ImportIAM(w http.ResponseWriter, r *http.Request) {
}
}
if _, gerr := globalIAMSys.AddUsersToGroup(ctx, group, grpInfo.Members); gerr != nil {
writeErrorResponseJSON(ctx, w, importError(ctx, gerr, allGroupsFile, group), r.URL)
return
failed.Groups = append(failed.Groups, madmin.IAMErrEntity{Name: group, Error: gerr})
} else {
added.Groups = append(added.Groups, group)
}
}
}
@ -2260,7 +2636,8 @@ func (a adminAPIHandlers) ImportIAM(w http.ResponseWriter, r *http.Request) {
// Validations for LDAP enabled deployments.
if globalIAMSys.LDAPConfig.Enabled() {
err := globalIAMSys.NormalizeLDAPAccessKeypairs(ctx, serviceAcctReqs)
skippedAccessKeys, err := globalIAMSys.NormalizeLDAPAccessKeypairs(ctx, serviceAcctReqs)
skipped.ServiceAccounts = append(skipped.ServiceAccounts, skippedAccessKeys...)
if err != nil {
writeErrorResponseJSON(ctx, w, importError(ctx, err, allSvcAcctsFile, ""), r.URL)
return
@ -2268,6 +2645,9 @@ func (a adminAPIHandlers) ImportIAM(w http.ResponseWriter, r *http.Request) {
}
for user, svcAcctReq := range serviceAcctReqs {
if slices.Contains(skipped.ServiceAccounts, user) {
continue
}
var sp *policy.Policy
var err error
if len(svcAcctReq.SessionPolicy) > 0 {
@ -2283,17 +2663,6 @@ func (a adminAPIHandlers) ImportIAM(w http.ResponseWriter, r *http.Request) {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminResourceInvalidArgument), r.URL)
return
}
if !globalIAMSys.IsAllowed(policy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: policy.CreateServiceAccountAdminAction,
ConditionValues: getConditionValues(r, "", cred),
IsOwner: owner,
Claims: cred.Claims,
}) {
writeErrorResponseJSON(ctx, w, importErrorWithAPIErr(ctx, ErrAccessDenied, err, allSvcAcctsFile, user), r.URL)
return
}
updateReq := true
_, _, err = globalIAMSys.GetServiceAccount(ctx, svcAcctReq.AccessKey)
if err != nil {
@ -2308,7 +2677,7 @@ func (a adminAPIHandlers) ImportIAM(w http.ResponseWriter, r *http.Request) {
// clean import.
err := globalIAMSys.DeleteServiceAccount(ctx, svcAcctReq.AccessKey, true)
if err != nil {
delErr := fmt.Errorf("failed to delete existing service account(%s) before importing it: %w", svcAcctReq.AccessKey, err)
delErr := fmt.Errorf("failed to delete existing service account (%s) before importing it: %w", svcAcctReq.AccessKey, err)
writeErrorResponseJSON(ctx, w, importError(ctx, delErr, allSvcAcctsFile, user), r.URL)
return
}
@ -2325,10 +2694,10 @@ func (a adminAPIHandlers) ImportIAM(w http.ResponseWriter, r *http.Request) {
}
if _, _, err = globalIAMSys.NewServiceAccount(ctx, svcAcctReq.Parent, svcAcctReq.Groups, opts); err != nil {
writeErrorResponseJSON(ctx, w, importError(ctx, err, allSvcAcctsFile, user), r.URL)
return
failed.ServiceAccounts = append(failed.ServiceAccounts, madmin.IAMErrEntity{Name: user, Error: err})
} else {
added.ServiceAccounts = append(added.ServiceAccounts, user)
}
}
}
}
@ -2365,8 +2734,15 @@ func (a adminAPIHandlers) ImportIAM(w http.ResponseWriter, r *http.Request) {
return
}
if _, err := globalIAMSys.PolicyDBSet(ctx, u, pm.Policies, regUser, false); err != nil {
writeErrorResponseJSON(ctx, w, importError(ctx, err, userPolicyMappingsFile, u), r.URL)
return
failed.UserPolicies = append(
failed.UserPolicies,
madmin.IAMErrPolicyEntity{
Name: u,
Policies: strings.Split(pm.Policies, ","),
Error: err,
})
} else {
added.UserPolicies = append(added.UserPolicies, map[string][]string{u: strings.Split(pm.Policies, ",")})
}
}
}
@ -2396,7 +2772,8 @@ func (a adminAPIHandlers) ImportIAM(w http.ResponseWriter, r *http.Request) {
// Validations for LDAP enabled deployments.
if globalIAMSys.LDAPConfig.Enabled() {
isGroup := true
err := globalIAMSys.NormalizeLDAPMappingImport(ctx, isGroup, grpPolicyMap)
skippedDN, err := globalIAMSys.NormalizeLDAPMappingImport(ctx, isGroup, grpPolicyMap)
skipped.Groups = append(skipped.Groups, skippedDN...)
if err != nil {
writeErrorResponseJSON(ctx, w, importError(ctx, err, groupPolicyMappingsFile, ""), r.URL)
return
@ -2404,9 +2781,19 @@ func (a adminAPIHandlers) ImportIAM(w http.ResponseWriter, r *http.Request) {
}
for g, pm := range grpPolicyMap {
if slices.Contains(skipped.Groups, g) {
continue
}
if _, err := globalIAMSys.PolicyDBSet(ctx, g, pm.Policies, unknownIAMUserType, true); err != nil {
writeErrorResponseJSON(ctx, w, importError(ctx, err, groupPolicyMappingsFile, g), r.URL)
return
failed.GroupPolicies = append(
failed.GroupPolicies,
madmin.IAMErrPolicyEntity{
Name: g,
Policies: strings.Split(pm.Policies, ","),
Error: err,
})
} else {
added.GroupPolicies = append(added.GroupPolicies, map[string][]string{g: strings.Split(pm.Policies, ",")})
}
}
}
@ -2436,13 +2823,17 @@ func (a adminAPIHandlers) ImportIAM(w http.ResponseWriter, r *http.Request) {
// Validations for LDAP enabled deployments.
if globalIAMSys.LDAPConfig.Enabled() {
isGroup := true
err := globalIAMSys.NormalizeLDAPMappingImport(ctx, !isGroup, userPolicyMap)
skippedDN, err := globalIAMSys.NormalizeLDAPMappingImport(ctx, !isGroup, userPolicyMap)
skipped.Users = append(skipped.Users, skippedDN...)
if err != nil {
writeErrorResponseJSON(ctx, w, importError(ctx, err, stsUserPolicyMappingsFile, ""), r.URL)
return
}
}
for u, pm := range userPolicyMap {
if slices.Contains(skipped.Users, u) {
continue
}
// disallow setting policy mapping if user is a temporary user
ok, _, err := globalIAMSys.IsTempUser(u)
if err != nil && err != errNoSuchUser {
@ -2455,19 +2846,43 @@ func (a adminAPIHandlers) ImportIAM(w http.ResponseWriter, r *http.Request) {
}
if _, err := globalIAMSys.PolicyDBSet(ctx, u, pm.Policies, stsUser, false); err != nil {
writeErrorResponseJSON(ctx, w, importError(ctx, err, stsUserPolicyMappingsFile, u), r.URL)
return
failed.STSPolicies = append(
failed.STSPolicies,
madmin.IAMErrPolicyEntity{
Name: u,
Policies: strings.Split(pm.Policies, ","),
Error: err,
})
} else {
added.STSPolicies = append(added.STSPolicies, map[string][]string{u: strings.Split(pm.Policies, ",")})
}
}
}
}
if apiVer == "v2" {
iamr := madmin.ImportIAMResult{
Skipped: skipped,
Removed: removed,
Added: added,
Failed: failed,
}
b, err := json.Marshal(iamr)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
writeSuccessResponseJSON(w, b)
}
}
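With apiVer == "v2" the import records per-entity outcomes (added, skipped, removed, failed) and keeps going, where v1 aborted on the first error. A compact model of that accumulate-and-report shape, assuming nothing about the real madmin types:

package main

import (
	"errors"
	"fmt"
)

type importResult struct {
	Added  []string
	Failed map[string]error
}

// importAll applies every item and records failures instead of
// returning on the first one.
func importAll(items map[string]func() error) importResult {
	r := importResult{Failed: map[string]error{}}
	for name, apply := range items {
		if err := apply(); err != nil {
			r.Failed[name] = err
			continue
		}
		r.Added = append(r.Added, name)
	}
	return r
}

func main() {
	r := importAll(map[string]func() error{
		"alice": func() error { return nil },
		"bob":   func() error { return errors.New("bad policy") },
	})
	fmt.Println(len(r.Added), len(r.Failed)) // 1 1
}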
func addExpirationToCondValues(exp *time.Time, condValues map[string][]string) error {
if exp == nil || exp.IsZero() || exp.Equal(timeSentinel) {
return nil
}
dur := exp.Sub(time.Now())
dur := time.Until(*exp)
if dur <= 0 {
return errors.New("unsupported expiration time")
}
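time.Until(t) is shorthand for t.Sub(time.Now()) (it is the rewrite staticcheck's S1024 suggests); since exp is a *time.Time here, it is dereferenced first:

package main

import (
	"fmt"
	"time"
)

func main() {
	exp := time.Now().Add(90 * time.Second)
	d1 := exp.Sub(time.Now()) // older spelling
	d2 := time.Until(exp)     // equivalent
	fmt.Println(d1-d2 < time.Millisecond) // true: same up to clock read skew
}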
@ -2475,7 +2890,7 @@ func addExpirationToCondValues(exp *time.Time, condValues map[string][]string) e
return nil
}
func commonAddServiceAccount(r *http.Request) (context.Context, auth.Credentials, newServiceAccountOpts, madmin.AddServiceAccountReq, string, APIError) {
func commonAddServiceAccount(r *http.Request, ldap bool) (context.Context, auth.Credentials, newServiceAccountOpts, madmin.AddServiceAccountReq, string, APIError) {
ctx := r.Context()
// Get current object layer instance.
@ -2533,7 +2948,7 @@ func commonAddServiceAccount(r *http.Request) (context.Context, auth.Credentials
name: createReq.Name,
description: description,
expiration: createReq.Expiration,
claims: make(map[string]interface{}),
claims: make(map[string]any),
}
condValues := getConditionValues(r, "", cred)
@ -2542,6 +2957,14 @@ func commonAddServiceAccount(r *http.Request) (context.Context, auth.Credentials
return ctx, auth.Credentials{}, newServiceAccountOpts{}, madmin.AddServiceAccountReq{}, "", toAdminAPIErr(ctx, err)
}
denyOnly := (targetUser == cred.AccessKey || targetUser == cred.ParentUser)
if ldap && !denyOnly {
res, _ := globalIAMSys.LDAPConfig.GetValidatedDNForUsername(targetUser)
if res != nil && res.NormDN == cred.ParentUser {
denyOnly = true
}
}
// Check if the action is allowed when creating an access key for another
// user, or explicitly denied when creating one for self.
if !globalIAMSys.IsAllowed(policy.Args{
@ -2551,7 +2974,7 @@ func commonAddServiceAccount(r *http.Request) (context.Context, auth.Credentials
ConditionValues: condValues,
IsOwner: owner,
Claims: cred.Claims,
DenyOnly: (targetUser == cred.AccessKey || targetUser == cred.ParentUser),
DenyOnly: denyOnly,
}) {
return ctx, auth.Credentials{}, newServiceAccountOpts{}, madmin.AddServiceAccountReq{}, "", errorCodes.ToAPIErr(ErrAccessDenied)
}
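DenyOnly flips the policy check for self-service: creating a key for oneself needs no explicit allow, only the absence of an explicit deny, while creating one for another user needs a real allow. The new LDAP branch extends "self" to the validated DN of the caller's parent user. A toy evaluation of the two modes (illustrative, not MinIO's actual evaluator):

package main

import "fmt"

// evaluate sketches the DenyOnly idea: with denyOnly set, access is
// granted unless an explicit deny matches; otherwise an explicit
// allow is required as well.
func evaluate(allowed, denied, denyOnly bool) bool {
	if denyOnly {
		return !denied
	}
	return allowed && !denied
}

func main() {
	fmt.Println(evaluate(false, false, true))  // true: self, no explicit deny
	fmt.Println(evaluate(false, false, false)) // false: needs explicit allow
}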
@ -2568,3 +2991,10 @@ func commonAddServiceAccount(r *http.Request) (context.Context, auth.Credentials
return ctx, cred, opts, createReq, targetUser, APIError{}
}
// setReqInfoPolicyName will set the given policyName as a tag on the context's request info,
// so that it appears in audit logs.
func setReqInfoPolicyName(ctx context.Context, policyName string) {
reqInfo := logger.GetReqInfo(ctx)
reqInfo.SetTags("policyName", policyName)
}

View File

@ -28,6 +28,7 @@ import (
"net/http"
"net/url"
"runtime"
"slices"
"strings"
"testing"
"time"
@ -159,7 +160,7 @@ func (s *TestSuiteIAM) SetUpSuite(c *check) {
}
func (s *TestSuiteIAM) RestartIAMSuite(c *check) {
s.TestSuiteCommon.RestartTestServer(c)
s.RestartTestServer(c)
s.iamSetup(c)
}
@ -207,6 +208,8 @@ func TestIAMInternalIDPServerSuite(t *testing.T) {
suite.TestGroupAddRemove(c)
suite.TestServiceAccountOpsByAdmin(c)
suite.TestServiceAccountPrivilegeEscalationBug(c)
suite.TestServiceAccountPrivilegeEscalationBug2_2025_10_15(c, true)
suite.TestServiceAccountPrivilegeEscalationBug2_2025_10_15(c, false)
suite.TestServiceAccountOpsByUser(c)
suite.TestServiceAccountDurationSecondsCondition(c)
suite.TestAddServiceAccountPerms(c)
@ -239,9 +242,12 @@ func (s *TestSuiteIAM) TestUserCreate(c *check) {
c.Assert(v.Status, madmin.AccountEnabled)
// 3. Associate policy and check that user can access
err = s.adm.SetPolicy(ctx, "readwrite", accessKey, false)
_, err = s.adm.AttachPolicy(ctx, madmin.PolicyAssociationReq{
Policies: []string{"readwrite"},
User: accessKey,
})
if err != nil {
c.Fatalf("unable to set policy: %v", err)
c.Fatalf("unable to attach policy: %v", err)
}
client := s.getUserClient(c, accessKey, secretKey, "")
@ -328,29 +334,40 @@ func (s *TestSuiteIAM) TestUserPolicyEscalationBug(c *check) {
// 2.2 create and associate policy to user
policy := "mypolicy-test-user-update"
policyBytes := []byte(fmt.Sprintf(`{
policyBytes := fmt.Appendf(nil, `{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::%s"
]
},
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::%s/*"
]
}
]
}`, bucket))
}`, bucket, bucket)
err = s.adm.AddCannedPolicy(ctx, policy, policyBytes)
if err != nil {
c.Fatalf("policy add error: %v", err)
}
err = s.adm.SetPolicy(ctx, policy, accessKey, false)
_, err = s.adm.AttachPolicy(ctx, madmin.PolicyAssociationReq{
Policies: []string{policy},
User: accessKey,
})
if err != nil {
c.Fatalf("Unable to set policy: %v", err)
c.Fatalf("unable to attach policy: %v", err)
}
// 2.3 check user has access to bucket
c.mustListObjects(ctx, uClient, bucket)
@ -436,7 +453,7 @@ func (s *TestSuiteIAM) TestAddServiceAccountPerms(c *check) {
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::testbucket/*"
"arn:aws:s3:::testbucket"
]
}
]
@ -470,9 +487,12 @@ func (s *TestSuiteIAM) TestAddServiceAccountPerms(c *check) {
c.mustNotListObjects(ctx, uClient, "testbucket")
// 3.2 associate policy to user
err = s.adm.SetPolicy(ctx, policy1, accessKey, false)
_, err = s.adm.AttachPolicy(ctx, madmin.PolicyAssociationReq{
Policies: []string{policy1},
User: accessKey,
})
if err != nil {
c.Fatalf("Unable to set policy: %v", err)
c.Fatalf("unable to attach policy: %v", err)
}
admClnt := s.getAdminClient(c, accessKey, secretKey, "")
@ -490,10 +510,22 @@ func (s *TestSuiteIAM) TestAddServiceAccountPerms(c *check) {
c.Fatalf("policy was missing!")
}
// 3.2 associate policy to user
err = s.adm.SetPolicy(ctx, policy2, accessKey, false)
// Detach policy1 to set up for policy2
_, err = s.adm.DetachPolicy(ctx, madmin.PolicyAssociationReq{
Policies: []string{policy1},
User: accessKey,
})
if err != nil {
c.Fatalf("Unable to set policy: %v", err)
c.Fatalf("unable to detach policy: %v", err)
}
// 3.2 associate policy to user
_, err = s.adm.AttachPolicy(ctx, madmin.PolicyAssociationReq{
Policies: []string{policy2},
User: accessKey,
})
if err != nil {
c.Fatalf("unable to attach policy: %v", err)
}
// 3.3 check user can create service account implicitly.
@ -532,22 +564,30 @@ func (s *TestSuiteIAM) TestPolicyCreate(c *check) {
// 1. Create a policy
policy := "mypolicy"
policyBytes := []byte(fmt.Sprintf(`{
policyBytes := fmt.Appendf(nil, `{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::%s"
]
},
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::%s/*"
]
}
]
}`, bucket))
}`, bucket, bucket)
err = s.adm.AddCannedPolicy(ctx, policy, policyBytes)
if err != nil {
c.Fatalf("policy add error: %v", err)
@ -571,9 +611,12 @@ func (s *TestSuiteIAM) TestPolicyCreate(c *check) {
c.mustNotListObjects(ctx, uClient, bucket)
// 3.2 associate policy to user
err = s.adm.SetPolicy(ctx, policy, accessKey, false)
_, err = s.adm.AttachPolicy(ctx, madmin.PolicyAssociationReq{
Policies: []string{policy},
User: accessKey,
})
if err != nil {
c.Fatalf("Unable to set policy: %v", err)
c.Fatalf("unable to attach policy: %v", err)
}
// 3.3 check user has access to bucket
c.mustListObjects(ctx, uClient, bucket)
@ -639,22 +682,30 @@ func (s *TestSuiteIAM) TestCannedPolicies(c *check) {
c.Fatalf("bucket creat error: %v", err)
}
policyBytes := []byte(fmt.Sprintf(`{
policyBytes := fmt.Appendf(nil, `{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::%s"
]
},
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::%s/*"
]
}
]
}`, bucket))
}`, bucket, bucket)
// Check that default policies can be overwritten.
err = s.adm.AddCannedPolicy(ctx, "readwrite", policyBytes)
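Alongside the policy JSON gaining a second statement (bucket ARN for s3:ListBucket, object ARN for the object actions), every []byte(fmt.Sprintf(...)) in these tests becomes fmt.Appendf(nil, ...). Appendf, in the standard library since Go 1.19, formats directly into a byte slice and skips the intermediate string. A small equivalence sketch:

package main

import "fmt"

func main() {
	bucket := "testbucket"
	// Old: format to a string, then copy it into a fresh []byte.
	a := []byte(fmt.Sprintf(`arn:aws:s3:::%s/*`, bucket))
	// New: append the formatted bytes directly onto a nil slice.
	b := fmt.Appendf(nil, `arn:aws:s3:::%s/*`, bucket)
	fmt.Println(string(a) == string(b)) // true
}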
@ -667,6 +718,12 @@ func (s *TestSuiteIAM) TestCannedPolicies(c *check) {
c.Fatalf("policy info err: %v", err)
}
// Check that policy with comma is rejected.
err = s.adm.AddCannedPolicy(ctx, "invalid,policy", policyBytes)
if err == nil {
c.Fatalf("invalid policy created successfully")
}
infoStr := string(info)
if !strings.Contains(infoStr, `"s3:PutObject"`) || !strings.Contains(infoStr, ":"+bucket+"/") {
c.Fatalf("policy contains unexpected content!")
@ -684,22 +741,30 @@ func (s *TestSuiteIAM) TestGroupAddRemove(c *check) {
}
policy := "mypolicy"
policyBytes := []byte(fmt.Sprintf(`{
policyBytes := fmt.Appendf(nil, `{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::%s"
]
},
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::%s/*"
]
}
]
}`, bucket))
}`, bucket, bucket)
err = s.adm.AddCannedPolicy(ctx, policy, policyBytes)
if err != nil {
c.Fatalf("policy add error: %v", err)
@ -726,9 +791,12 @@ func (s *TestSuiteIAM) TestGroupAddRemove(c *check) {
c.mustNotListObjects(ctx, uClient, bucket)
// 3. Associate policy to group and check user got access.
err = s.adm.SetPolicy(ctx, policy, group, true)
_, err = s.adm.AttachPolicy(ctx, madmin.PolicyAssociationReq{
Policies: []string{policy},
Group: group,
})
if err != nil {
c.Fatalf("Unable to set policy: %v", err)
c.Fatalf("unable to attach policy: %v", err)
}
// 3.1 check user has access to bucket
c.mustListObjects(ctx, uClient, bucket)
@ -743,8 +811,9 @@ func (s *TestSuiteIAM) TestGroupAddRemove(c *check) {
if err != nil {
c.Fatalf("group list err: %v", err)
}
if !set.CreateStringSet(groups...).Contains(group) {
c.Fatalf("created group not present!")
expected := []string{group}
if !slices.Equal(groups, expected) {
c.Fatalf("expected group listing: %v, got: %v", expected, groups)
}
groupInfo, err := s.adm.GetGroupDescription(ctx, group)
if err != nil {
@ -844,22 +913,30 @@ func (s *TestSuiteIAM) TestServiceAccountOpsByUser(c *check) {
// Create policy, user and associate policy
policy := "mypolicy"
policyBytes := []byte(fmt.Sprintf(`{
policyBytes := fmt.Appendf(nil, `{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::%s"
]
},
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::%s/*"
]
}
]
}`, bucket))
}`, bucket, bucket)
err = s.adm.AddCannedPolicy(ctx, policy, policyBytes)
if err != nil {
c.Fatalf("policy add error: %v", err)
@ -871,9 +948,12 @@ func (s *TestSuiteIAM) TestServiceAccountOpsByUser(c *check) {
c.Fatalf("Unable to set user: %v", err)
}
err = s.adm.SetPolicy(ctx, policy, accessKey, false)
_, err = s.adm.AttachPolicy(ctx, madmin.PolicyAssociationReq{
Policies: []string{policy},
User: accessKey,
})
if err != nil {
c.Fatalf("Unable to set policy: %v", err)
c.Fatalf("unable to attach policy: %v", err)
}
// Create an madmin client with user creds
@ -917,7 +997,7 @@ func (s *TestSuiteIAM) TestServiceAccountDurationSecondsCondition(c *check) {
// Create policy, user and associate policy
policy := "mypolicy"
policyBytes := []byte(fmt.Sprintf(`{
policyBytes := fmt.Appendf(nil, `{
"Version": "2012-10-17",
"Statement": [
{
@ -931,16 +1011,24 @@ func (s *TestSuiteIAM) TestServiceAccountDurationSecondsCondition(c *check) {
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::%s"
]
},
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::%s/*"
]
}
]
}`, bucket))
}`, bucket, bucket)
err = s.adm.AddCannedPolicy(ctx, policy, policyBytes)
if err != nil {
c.Fatalf("policy add error: %v", err)
@ -952,9 +1040,12 @@ func (s *TestSuiteIAM) TestServiceAccountDurationSecondsCondition(c *check) {
c.Fatalf("Unable to set user: %v", err)
}
err = s.adm.SetPolicy(ctx, policy, accessKey, false)
_, err = s.adm.AttachPolicy(ctx, madmin.PolicyAssociationReq{
Policies: []string{policy},
User: accessKey,
})
if err != nil {
c.Fatalf("Unable to set policy: %v", err)
c.Fatalf("unable to attach policy: %v", err)
}
// Create an madmin client with user creds
@ -1004,22 +1095,30 @@ func (s *TestSuiteIAM) TestServiceAccountOpsByAdmin(c *check) {
// Create policy, user and associate policy
policy := "mypolicy"
policyBytes := []byte(fmt.Sprintf(`{
policyBytes := fmt.Appendf(nil, `{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::%s"
]
},
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::%s/*"
]
}
]
}`, bucket))
}`, bucket, bucket)
err = s.adm.AddCannedPolicy(ctx, policy, policyBytes)
if err != nil {
c.Fatalf("policy add error: %v", err)
@ -1031,9 +1130,12 @@ func (s *TestSuiteIAM) TestServiceAccountOpsByAdmin(c *check) {
c.Fatalf("Unable to set user: %v", err)
}
err = s.adm.SetPolicy(ctx, policy, accessKey, false)
_, err = s.adm.AttachPolicy(ctx, madmin.PolicyAssociationReq{
Policies: []string{policy},
User: accessKey,
})
if err != nil {
c.Fatalf("Unable to set policy: %v", err)
c.Fatalf("unable to attach policy: %v", err)
}
// 1. Create a service account for the user
@ -1149,6 +1251,108 @@ func (s *TestSuiteIAM) TestServiceAccountPrivilegeEscalationBug(c *check) {
}
}
func (s *TestSuiteIAM) TestServiceAccountPrivilegeEscalationBug2_2025_10_15(c *check, forRoot bool) {
ctx, cancel := context.WithTimeout(context.Background(), testDefaultTimeout)
defer cancel()
for i := range 3 {
err := s.client.MakeBucket(ctx, fmt.Sprintf("bucket%d", i+1), minio.MakeBucketOptions{})
if err != nil {
c.Fatalf("bucket create error: %v", err)
}
defer func(i int) {
_ = s.client.RemoveBucket(ctx, fmt.Sprintf("bucket%d", i+1))
}(i)
}
allow2BucketsPolicyBytes := []byte(`{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ListBucket1AndBucket2",
"Effect": "Allow",
"Action": ["s3:ListBucket"],
"Resource": ["arn:aws:s3:::bucket1", "arn:aws:s3:::bucket2"]
},
{
"Sid": "ReadWriteBucket1AndBucket2Objects",
"Effect": "Allow",
"Action": [
"s3:DeleteObject",
"s3:DeleteObjectVersion",
"s3:GetObject",
"s3:GetObjectVersion",
"s3:PutObject"
],
"Resource": ["arn:aws:s3:::bucket1/*", "arn:aws:s3:::bucket2/*"]
}
]
}`)
if forRoot {
// Create a service account for the root user.
_, err := s.adm.AddServiceAccount(ctx, madmin.AddServiceAccountReq{
Policy: allow2BucketsPolicyBytes,
AccessKey: "restricted",
SecretKey: "restricted123",
})
if err != nil {
c.Fatalf("could not create service account")
}
defer func() {
_ = s.adm.DeleteServiceAccount(ctx, "restricted")
}()
} else {
// Create a regular user and attach consoleAdmin policy
err := s.adm.AddUser(ctx, "foobar", "foobar123")
if err != nil {
c.Fatalf("could not create user")
}
_, err = s.adm.AttachPolicy(ctx, madmin.PolicyAssociationReq{
Policies: []string{"consoleAdmin"},
User: "foobar",
})
if err != nil {
c.Fatalf("could not attach policy")
}
// Create a service account for the regular user.
_, err = s.adm.AddServiceAccount(ctx, madmin.AddServiceAccountReq{
Policy: allow2BucketsPolicyBytes,
TargetUser: "foobar",
AccessKey: "restricted",
SecretKey: "restricted123",
})
if err != nil {
c.Fatalf("could not create service account: %v", err)
}
defer func() {
_ = s.adm.DeleteServiceAccount(ctx, "restricted")
_ = s.adm.RemoveUser(ctx, "foobar")
}()
}
restrictedClient := s.getUserClient(c, "restricted", "restricted123", "")
buckets, err := restrictedClient.ListBuckets(ctx)
if err != nil {
c.Fatalf("err fetching buckets %s", err)
}
if len(buckets) != 2 || buckets[0].Name != "bucket1" || buckets[1].Name != "bucket2" {
c.Fatalf("restricted service account should only have access to bucket1 and bucket2")
}
// Try to escalate privileges
restrictedAdmClient := s.getAdminClient(c, "restricted", "restricted123", "")
_, err = restrictedAdmClient.AddServiceAccount(ctx, madmin.AddServiceAccountReq{
AccessKey: "newroot",
SecretKey: "newroot123",
})
if err == nil {
c.Fatalf("restricted service account was able to create service account bypassing sub-policy!")
}
}
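This regression test pins the 2025-10-15 fix: credentials carrying a restrictive inline (sub-)policy must have that policy enforced even for own-account admin operations, otherwise the holder can mint fresh credentials that shed the restriction. A standalone sketch of the now-blocked escalation, reusing the test's names and assuming a local server:

package main

import (
	"context"
	"log"

	"github.com/minio/madmin-go/v3"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	// "restricted" is the service account created with the two-bucket
	// inline policy in the test above (placeholder endpoint).
	adm, err := madmin.NewWithOptions("localhost:9000", &madmin.Options{
		Creds: credentials.NewStaticV4("restricted", "restricted123", ""),
	})
	if err != nil {
		log.Fatal(err)
	}
	// Creating a sibling service account would inherit the parent
	// identity's full privileges, silently dropping the inline policy.
	// With the fix, this call must fail with an authorization error.
	_, err = adm.AddServiceAccount(context.Background(), madmin.AddServiceAccountReq{
		AccessKey: "newroot",
		SecretKey: "newroot123",
	})
	if err == nil {
		log.Fatal("escalation succeeded: sub-policy not enforced")
	}
}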
func (s *TestSuiteIAM) SetUpAccMgmtPlugin(c *check) {
ctx, cancel := context.WithTimeout(context.Background(), testDefaultTimeout)
defer cancel()
@ -1267,7 +1471,7 @@ func (s *TestSuiteIAM) TestAccMgmtPlugin(c *check) {
svcAK, svcSK := mustGenerateCredentials(c)
// This policy does not allow listing objects.
policyBytes := []byte(fmt.Sprintf(`{
policyBytes := fmt.Appendf(nil, `{
"Version": "2012-10-17",
"Statement": [
{
@ -1281,7 +1485,7 @@ func (s *TestSuiteIAM) TestAccMgmtPlugin(c *check) {
]
}
]
}`, bucket))
}`, bucket)
cr, err := userAdmClient.AddServiceAccount(ctx, madmin.AddServiceAccountReq{
Policy: policyBytes,
TargetUser: accessKey,
@ -1458,7 +1662,7 @@ func (c *check) mustDownload(ctx context.Context, client *minio.Client, bucket s
func (c *check) mustUploadReturnVersions(ctx context.Context, client *minio.Client, bucket string) []string {
c.Helper()
versions := []string{}
for i := 0; i < 5; i++ {
for range 5 {
ui, err := client.PutObject(ctx, bucket, "some-object", bytes.NewBuffer([]byte("stuff")), 5, minio.PutObjectOptions{})
if err != nil {
c.Fatalf("upload did not succeed got %#v", err)
@ -1527,7 +1731,7 @@ func (c *check) assertSvcAccSessionPolicyUpdate(ctx context.Context, s *TestSuit
svcAK, svcSK := mustGenerateCredentials(c)
// This policy does not allow listing objects.
policyBytes := []byte(fmt.Sprintf(`{
policyBytes := fmt.Appendf(nil, `{
"Version": "2012-10-17",
"Statement": [
{
@ -1541,7 +1745,7 @@ func (c *check) assertSvcAccSessionPolicyUpdate(ctx context.Context, s *TestSuit
]
}
]
}`, bucket))
}`, bucket)
cr, err := madmClient.AddServiceAccount(ctx, madmin.AddServiceAccountReq{
Policy: policyBytes,
TargetUser: accessKey,
@ -1555,7 +1759,7 @@ func (c *check) assertSvcAccSessionPolicyUpdate(ctx context.Context, s *TestSuit
c.mustNotListObjects(ctx, svcClient, bucket)
// This policy allows listing objects.
newPolicyBytes := []byte(fmt.Sprintf(`{
newPolicyBytes := fmt.Appendf(nil, `{
"Version": "2012-10-17",
"Statement": [
{
@ -1564,11 +1768,11 @@ func (c *check) assertSvcAccSessionPolicyUpdate(ctx context.Context, s *TestSuit
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::%s/*"
"arn:aws:s3:::%s"
]
}
]
}`, bucket))
}`, bucket)
err = madmClient.UpdateServiceAccount(ctx, svcAK, madmin.UpdateServiceAccountReq{
NewPolicy: newPolicyBytes,
})

View File

@ -49,6 +49,7 @@ import (
"github.com/klauspost/compress/zip"
"github.com/minio/madmin-go/v3"
"github.com/minio/madmin-go/v3/estream"
"github.com/minio/madmin-go/v3/logger/log"
"github.com/minio/minio-go/v7/pkg/set"
"github.com/minio/minio/internal/auth"
"github.com/minio/minio/internal/dsync"
@ -59,7 +60,6 @@ import (
"github.com/minio/minio/internal/kms"
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
"github.com/minio/pkg/v3/logger/message/log"
xnet "github.com/minio/pkg/v3/net"
"github.com/minio/pkg/v3/policy"
"github.com/secure-io/sio-go"
@ -93,11 +93,18 @@ func (a adminAPIHandlers) ServerUpdateV2Handler(w http.ResponseWriter, r *http.R
return
}
if globalInplaceUpdateDisabled || currentReleaseTime.IsZero() {
if globalInplaceUpdateDisabled {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrMethodNotAllowed), r.URL)
return
}
if currentReleaseTime.IsZero() || currentReleaseTime.Equal(timeSentinel) {
apiErr := errorCodes.ToAPIErr(ErrMethodNotAllowed)
apiErr.Description = fmt.Sprintf("unable to perform in-place update, release time is unrecognized: %s", currentReleaseTime)
writeErrorResponseJSON(ctx, w, apiErr, r.URL)
return
}
vars := mux.Vars(r)
updateURL := vars["updateURL"]
dryRun := r.Form.Get("dry-run") == "true"
@ -110,6 +117,11 @@ func (a adminAPIHandlers) ServerUpdateV2Handler(w http.ResponseWriter, r *http.R
}
}
local := globalLocalNodeName
if local == "" {
local = "127.0.0.1"
}
u, err := url.Parse(updateURL)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
@ -128,6 +140,39 @@ func (a adminAPIHandlers) ServerUpdateV2Handler(w http.ResponseWriter, r *http.R
return
}
updateStatus := madmin.ServerUpdateStatusV2{
DryRun: dryRun,
Results: make([]madmin.ServerPeerUpdateStatus, 0, len(globalNotificationSys.allPeerClients)),
}
peerResults := make(map[string]madmin.ServerPeerUpdateStatus, len(globalNotificationSys.allPeerClients))
failedClients := make(map[int]bool, len(globalNotificationSys.allPeerClients))
if lrTime.Sub(currentReleaseTime) <= 0 {
updateStatus.Results = append(updateStatus.Results, madmin.ServerPeerUpdateStatus{
Host: local,
Err: fmt.Sprintf("server is running the latest version: %s", Version),
CurrentVersion: Version,
})
for _, client := range globalNotificationSys.peerClients {
updateStatus.Results = append(updateStatus.Results, madmin.ServerPeerUpdateStatus{
Host: client.String(),
Err: fmt.Sprintf("server is running the latest version: %s", Version),
CurrentVersion: Version,
})
}
// Marshal API response
jsonBytes, err := json.Marshal(updateStatus)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
writeSuccessResponseJSON(w, jsonBytes)
return
}
u.Path = path.Dir(u.Path) + SlashSeparator + releaseInfo
// Download Binary Once
binC, bin, err := downloadBinary(u, mode)
@ -137,16 +182,6 @@ func (a adminAPIHandlers) ServerUpdateV2Handler(w http.ResponseWriter, r *http.R
return
}
updateStatus := madmin.ServerUpdateStatusV2{DryRun: dryRun}
peerResults := make(map[string]madmin.ServerPeerUpdateStatus)
local := globalLocalNodeName
if local == "" {
local = "127.0.0.1"
}
failedClients := make(map[int]struct{})
if globalIsDistErasure {
// Push binary to other servers
for idx, nerr := range globalNotificationSys.VerifyBinary(ctx, u, sha256Sum, releaseInfo, binC) {
@ -156,7 +191,7 @@ func (a adminAPIHandlers) ServerUpdateV2Handler(w http.ResponseWriter, r *http.R
Err: nerr.Err.Error(),
CurrentVersion: Version,
}
failedClients[idx] = struct{}{}
failedClients[idx] = true
} else {
peerResults[nerr.Host.String()] = madmin.ServerPeerUpdateStatus{
Host: nerr.Host.String(),
@ -167,25 +202,17 @@ func (a adminAPIHandlers) ServerUpdateV2Handler(w http.ResponseWriter, r *http.R
}
}
if lrTime.Sub(currentReleaseTime) > 0 {
if err = verifyBinary(u, sha256Sum, releaseInfo, mode, bytes.NewReader(bin)); err != nil {
peerResults[local] = madmin.ServerPeerUpdateStatus{
Host: local,
Err: err.Error(),
CurrentVersion: Version,
}
} else {
peerResults[local] = madmin.ServerPeerUpdateStatus{
Host: local,
CurrentVersion: Version,
UpdatedVersion: lrTime.Format(MinioReleaseTagTimeLayout),
}
if err = verifyBinary(u, sha256Sum, releaseInfo, mode, bytes.NewReader(bin)); err != nil {
peerResults[local] = madmin.ServerPeerUpdateStatus{
Host: local,
Err: err.Error(),
CurrentVersion: Version,
}
} else {
peerResults[local] = madmin.ServerPeerUpdateStatus{
Host: local,
Err: fmt.Sprintf("server is already running the latest version: %s", Version),
CurrentVersion: Version,
UpdatedVersion: lrTime.Format(MinioReleaseTagTimeLayout),
}
}
@ -193,8 +220,7 @@ func (a adminAPIHandlers) ServerUpdateV2Handler(w http.ResponseWriter, r *http.R
if globalIsDistErasure {
ng := WithNPeers(len(globalNotificationSys.peerClients))
for idx, client := range globalNotificationSys.peerClients {
_, ok := failedClients[idx]
if ok {
if failedClients[idx] {
continue
}
client := client
@ -237,17 +263,18 @@ func (a adminAPIHandlers) ServerUpdateV2Handler(w http.ResponseWriter, r *http.R
if globalIsDistErasure {
// Notify all other MinIO peers signal service.
startTime := time.Now().Add(restartUpdateDelay)
ng := WithNPeers(len(globalNotificationSys.peerClients))
for idx, client := range globalNotificationSys.peerClients {
_, ok := failedClients[idx]
if ok {
if failedClients[idx] {
continue
}
client := client
ng.Go(ctx, func() error {
prs, ok := peerResults[client.String()]
if ok && prs.CurrentVersion != prs.UpdatedVersion && prs.UpdatedVersion != "" {
return client.SignalService(serviceRestart, "", dryRun)
// We restart only on success, not for any failures.
if ok && prs.Err == "" {
return client.SignalService(serviceRestart, "", dryRun, &startTime)
}
return nil
}, idx, *client.host)
@ -283,7 +310,9 @@ func (a adminAPIHandlers) ServerUpdateV2Handler(w http.ResponseWriter, r *http.R
writeSuccessResponseJSON(w, jsonBytes)
if !dryRun {
if lrTime.Sub(currentReleaseTime) > 0 {
prs, ok := peerResults[local]
// We restart only on success, not for any failures.
if ok && prs.Err == "" {
globalServiceSignalCh <- serviceRestart
}
}
@ -542,9 +571,12 @@ func (a adminAPIHandlers) ServiceV2Handler(w http.ResponseWriter, r *http.Reques
}
var objectAPI ObjectLayer
var execAt *time.Time
switch serviceSig {
case serviceRestart:
objectAPI, _ = validateAdminReq(ctx, w, r, policy.ServiceRestartAdminAction)
t := time.Now().Add(restartUpdateDelay)
execAt = &t
case serviceStop:
objectAPI, _ = validateAdminReq(ctx, w, r, policy.ServiceStopAdminAction)
case serviceFreeze, serviceUnFreeze:
@ -571,7 +603,7 @@ func (a adminAPIHandlers) ServiceV2Handler(w http.ResponseWriter, r *http.Reques
}
if globalIsDistErasure {
for _, nerr := range globalNotificationSys.SignalServiceV2(serviceSig, dryRun) {
for _, nerr := range globalNotificationSys.SignalServiceV2(serviceSig, dryRun, execAt) {
if nerr.Err != nil && process {
waitingDrives := map[string]madmin.DiskMetrics{}
jerr := json.Unmarshal([]byte(nerr.Err.Error()), &waitingDrives)
@ -797,7 +829,7 @@ func (a adminAPIHandlers) MetricsHandler(w http.ResponseWriter, r *http.Request)
}
// Flush before waiting for next...
w.(http.Flusher).Flush()
xhttp.Flush(w)
select {
case <-ticker.C:
@ -844,9 +876,10 @@ func (a adminAPIHandlers) DataUsageInfoHandler(w http.ResponseWriter, r *http.Re
}
func lriToLockEntry(l lockRequesterInfo, now time.Time, resource, server string) *madmin.LockEntry {
t := time.Unix(0, l.Timestamp)
entry := &madmin.LockEntry{
Timestamp: l.Timestamp,
Elapsed: now.Sub(l.Timestamp),
Timestamp: t,
Elapsed: now.Sub(t),
Resource: resource,
ServerList: []string{server},
Source: l.Source,
@ -921,7 +954,7 @@ func (a adminAPIHandlers) ForceUnlockHandler(w http.ResponseWriter, r *http.Requ
var args dsync.LockArgs
var lockers []dsync.NetLocker
for _, path := range strings.Split(vars["paths"], ",") {
for path := range strings.SplitSeq(vars["paths"], ",") {
if path == "" {
continue
}
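strings.SplitSeq, new in Go 1.24, returns an iterator instead of materializing the whole []string, which is why the loop can range over it directly. It still yields empty elements, so the path == "" guard stays. A small usage sketch:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// SplitSeq yields each substring lazily; no intermediate slice is allocated.
	for path := range strings.SplitSeq("bucket/a,,bucket/b", ",") {
		if path == "" {
			continue // empty elements are still produced
		}
		fmt.Println(path)
	}
}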
@ -1031,7 +1064,7 @@ func (a adminAPIHandlers) StartProfilingHandler(w http.ResponseWriter, r *http.R
// Start profiling on remote servers.
var hostErrs []NotificationPeerErr
for _, profiler := range profiles {
hostErrs = append(hostErrs, globalNotificationSys.StartProfiling(profiler)...)
hostErrs = append(hostErrs, globalNotificationSys.StartProfiling(ctx, profiler)...)
// Start profiling locally as well.
prof, err := startProfiler(profiler)
@ -1112,7 +1145,11 @@ func (a adminAPIHandlers) ProfileHandler(w http.ResponseWriter, r *http.Request)
// Start profiling on remote servers.
for _, profiler := range profiles {
globalNotificationSys.StartProfiling(profiler)
// Limit start time to max 10s.
ctx, cancel := context.WithTimeout(ctx, 10*time.Second)
globalNotificationSys.StartProfiling(ctx, profiler)
// StartProfiling blocks, so we can cancel now.
cancel()
// Start profiling locally as well.
prof, err := startProfiler(profiler)
@ -1127,6 +1164,10 @@ func (a adminAPIHandlers) ProfileHandler(w http.ResponseWriter, r *http.Request)
for {
select {
case <-ctx.Done():
// Stop remote profiles
go globalNotificationSys.DownloadProfilingData(GlobalContext, io.Discard)
// Stop local
globalProfilerMu.Lock()
defer globalProfilerMu.Unlock()
for k, v := range globalProfiler {
@ -1152,7 +1193,7 @@ type dummyFileInfo struct {
mode os.FileMode
modTime time.Time
isDir bool
sys interface{}
sys any
}
func (f dummyFileInfo) Name() string { return f.name }
@ -1160,7 +1201,7 @@ func (f dummyFileInfo) Size() int64 { return f.size }
func (f dummyFileInfo) Mode() os.FileMode { return f.mode }
func (f dummyFileInfo) ModTime() time.Time { return f.modTime }
func (f dummyFileInfo) IsDir() bool { return f.isDir }
func (f dummyFileInfo) Sys() interface{} { return f.sys }
func (f dummyFileInfo) Sys() any { return f.sys }
// DownloadProfilingHandler - POST /minio/admin/v3/profiling/download
// ----------
@ -1202,17 +1243,17 @@ func extractHealInitParams(vars map[string]string, qParams url.Values, r io.Read
if hip.objPrefix != "" {
// Bucket is required if object-prefix is given
err = ErrHealMissingBucket
return
return hip, err
}
} else if isReservedOrInvalidBucket(hip.bucket, false) {
err = ErrInvalidBucketName
return
return hip, err
}
// empty prefix is valid.
if !IsValidObjectPrefix(hip.objPrefix) {
err = ErrInvalidObjectName
return
return hip, err
}
if len(qParams[mgmtClientToken]) > 0 {
@ -1234,7 +1275,7 @@ func extractHealInitParams(vars map[string]string, qParams url.Values, r io.Read
if (hip.forceStart && hip.forceStop) ||
(hip.clientToken != "" && (hip.forceStart || hip.forceStop)) {
err = ErrInvalidRequest
return
return hip, err
}
// ignore body if clientToken is provided
@ -1243,12 +1284,12 @@ func extractHealInitParams(vars map[string]string, qParams url.Values, r io.Read
if jerr != nil {
adminLogIf(GlobalContext, jerr, logger.ErrorKind)
err = ErrRequestBodyParse
return
return hip, err
}
}
err = ErrNone
return
return hip, err
}
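extractHealInitParams keeps its named results, but every bare return now spells out return hip, err, so each exit names the values it yields. An illustrative contrast with hypothetical names:

package main

import (
	"errors"
	"fmt"
	"strconv"
)

// parse uses named results; writing `return n, err` at every exit keeps
// the escaping values visible, which is all the hunk above changes.
func parse(v string) (n int, err error) {
	if v == "" {
		err = errors.New("empty input")
		return n, err
	}
	n, err = strconv.Atoi(v)
	return n, err
}

func main() {
	fmt.Println(parse("42"))
}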
// HealHandler - POST /minio/admin/v3/heal/
@ -1279,7 +1320,7 @@ func (a adminAPIHandlers) HealHandler(w http.ResponseWriter, r *http.Request) {
}
// Analyze the heal token and route the request accordingly
token, success := proxyRequestByToken(ctx, w, r, hip.clientToken)
token, _, success := proxyRequestByToken(ctx, w, r, hip.clientToken, false)
if success {
return
}
@ -1318,7 +1359,7 @@ func (a adminAPIHandlers) HealHandler(w http.ResponseWriter, r *http.Request) {
if _, err := w.Write([]byte(" ")); err != nil {
return
}
w.(http.Flusher).Flush()
xhttp.Flush(w)
case hr := <-respCh:
switch hr.apiErr {
case noError:
@ -1326,7 +1367,7 @@ func (a adminAPIHandlers) HealHandler(w http.ResponseWriter, r *http.Request) {
if _, err := w.Write(hr.respBytes); err != nil {
return
}
w.(http.Flusher).Flush()
xhttp.Flush(w)
} else {
writeSuccessResponseJSON(w, hr.respBytes)
}
@ -1353,7 +1394,7 @@ func (a adminAPIHandlers) HealHandler(w http.ResponseWriter, r *http.Request) {
if _, err := w.Write(errorRespJSON); err != nil {
return
}
w.(http.Flusher).Flush()
xhttp.Flush(w)
}
break forLoop
}
@ -1366,7 +1407,7 @@ func (a adminAPIHandlers) HealHandler(w http.ResponseWriter, r *http.Request) {
if exists && !nh.hasEnded() && len(nh.currentStatus.Items) > 0 {
clientToken := nh.clientToken
if globalIsDistErasure {
clientToken = fmt.Sprintf("%s:%d", nh.clientToken, GetProxyEndpointLocalIndex(globalProxyEndpoints))
clientToken = fmt.Sprintf("%s%s%d", nh.clientToken, getKeySeparator(), GetProxyEndpointLocalIndex(globalProxyEndpoints))
}
b, err := json.Marshal(madmin.HealStartSuccess{
ClientToken: clientToken,
@ -1570,7 +1611,6 @@ func (a adminAPIHandlers) ClientDevNull(w http.ResponseWriter, r *http.Request)
if err != nil || ctx.Err() != nil || totalRx > 100*humanize.GiByte {
break
}
}
w.WriteHeader(http.StatusOK)
}
@ -1799,7 +1839,7 @@ func (a adminAPIHandlers) ObjectSpeedTestHandler(w http.ResponseWriter, r *http.
return
}
}
w.(http.Flusher).Flush()
xhttp.Flush(w)
case result, ok := <-ch:
if !ok {
return
@ -1808,7 +1848,7 @@ func (a adminAPIHandlers) ObjectSpeedTestHandler(w http.ResponseWriter, r *http.
return
}
prevResult = result
w.(http.Flusher).Flush()
xhttp.Flush(w)
}
}
}
@ -1917,7 +1957,7 @@ func (a adminAPIHandlers) DriveSpeedtestHandler(w http.ResponseWriter, r *http.R
if err := enc.Encode(madmin.DriveSpeedTestResult{}); err != nil {
return
}
w.(http.Flusher).Flush()
xhttp.Flush(w)
case result, ok := <-ch:
if !ok {
return
@ -1925,7 +1965,7 @@ func (a adminAPIHandlers) DriveSpeedtestHandler(w http.ResponseWriter, r *http.R
if err := enc.Encode(result); err != nil {
return
}
w.(http.Flusher).Flush()
xhttp.Flush(w)
}
}
}
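The repeated w.(http.Flusher).Flush() to xhttp.Flush(w) substitution throughout these handlers guards against the bare type assertion panicking when the ResponseWriter has been wrapped by middleware that does not implement http.Flusher. A sketch of what such a helper plausibly does; the internal implementation may differ:

package xhttp

import "net/http"

// Flush flushes w only if it actually implements http.Flusher; a bare
// w.(http.Flusher).Flush() panics on writers that do not.
func Flush(w http.ResponseWriter) {
	if f, ok := w.(http.Flusher); ok {
		f.Flush()
	}
}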
@ -1982,7 +2022,7 @@ func extractTraceOptions(r *http.Request) (opts madmin.ServiceTraceOpts, err err
opts.OS = true
// Older mc - cannot deal with more types...
}
return
return opts, err
}
// TraceHandler - POST /minio/admin/v3/trace
@ -2042,7 +2082,7 @@ func (a adminAPIHandlers) TraceHandler(w http.ResponseWriter, r *http.Request) {
grid.PutByteBuffer(entry)
if len(traceCh) == 0 {
// Flush if nothing is queued
w.(http.Flusher).Flush()
xhttp.Flush(w)
}
case <-keepAliveTicker.C:
if len(traceCh) > 0 {
@ -2051,7 +2091,7 @@ func (a adminAPIHandlers) TraceHandler(w http.ResponseWriter, r *http.Request) {
if _, err := w.Write([]byte(" ")); err != nil {
return
}
w.(http.Flusher).Flush()
xhttp.Flush(w)
case <-ctx.Done():
return
}
@ -2143,7 +2183,7 @@ func (a adminAPIHandlers) ConsoleLogHandler(w http.ResponseWriter, r *http.Reque
grid.PutByteBuffer(log)
if len(logCh) == 0 {
// Flush if nothing is queued
w.(http.Flusher).Flush()
xhttp.Flush(w)
}
case <-keepAliveTicker.C:
if len(logCh) > 0 {
@ -2152,7 +2192,7 @@ func (a adminAPIHandlers) ConsoleLogHandler(w http.ResponseWriter, r *http.Reque
if _, err := w.Write([]byte(" ")); err != nil {
return
}
w.(http.Flusher).Flush()
xhttp.Flush(w)
case <-ctx.Done():
return
}
@ -2182,7 +2222,7 @@ func (a adminAPIHandlers) KMSCreateKeyHandler(w http.ResponseWriter, r *http.Req
writeSuccessResponseHeadersOnly(w)
}
// KMSKeyStatusHandler - GET /minio/admin/v3/kms/status
// KMSStatusHandler - GET /minio/admin/v3/kms/status
func (a adminAPIHandlers) KMSStatusHandler(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
@ -2326,6 +2366,7 @@ func getPoolsInfo(ctx context.Context, allDisks []madmin.Disk) (map[int]map[int]
}
func getServerInfo(ctx context.Context, pools, metrics bool, r *http.Request) madmin.InfoMessage {
const operationTimeout = 10 * time.Second
ldap := madmin.LDAP{}
if globalIAMSys.LDAPConfig.Enabled() {
ldapConn, err := globalIAMSys.LDAPConfig.LDAP.Connect()
@ -2366,7 +2407,9 @@ func getServerInfo(ctx context.Context, pools, metrics bool, r *http.Request) ma
mode = madmin.ItemOnline
// Load data usage
dataUsageInfo, err := loadDataUsageFromBackend(ctx, objectAPI)
ctx2, cancel := context.WithTimeout(ctx, operationTimeout)
dataUsageInfo, err := loadDataUsageFromBackend(ctx2, objectAPI)
cancel()
if err == nil {
buckets = madmin.Buckets{Count: dataUsageInfo.BucketsCount}
objects = madmin.Objects{Count: dataUsageInfo.ObjectsTotalCount}
@ -2400,24 +2443,30 @@ func getServerInfo(ctx context.Context, pools, metrics bool, r *http.Request) ma
}
if pools {
poolsInfo, _ = getPoolsInfo(ctx, allDisks)
ctx2, cancel := context.WithTimeout(ctx, operationTimeout)
poolsInfo, _ = getPoolsInfo(ctx2, allDisks)
cancel()
}
}
domain := globalDomainNames
services := madmin.Services{
KMSStatus: fetchKMSStatus(ctx),
LDAP: ldap,
Logger: log,
Audit: audit,
Notifications: notifyTarget,
}
{
ctx2, cancel := context.WithTimeout(ctx, operationTimeout)
services.KMSStatus = fetchKMSStatus(ctx2)
cancel()
}
return madmin.InfoMessage{
Mode: string(mode),
Domain: domain,
Region: globalSite.Region(),
SQSARN: globalEventNotifier.GetARNList(false),
SQSARN: globalEventNotifier.GetARNList(),
DeploymentID: globalDeploymentID(),
Buckets: buckets,
Objects: objects,
@ -2627,7 +2676,7 @@ func fetchHealthInfo(healthCtx context.Context, objectAPI ObjectLayer, query *ur
// disk metrics are already included under drive info of each server
getRealtimeMetrics := func() *madmin.RealtimeMetrics {
var m madmin.RealtimeMetrics
var types madmin.MetricType = madmin.MetricsAll &^ madmin.MetricsDisk
types := madmin.MetricsAll &^ madmin.MetricsDisk
mLocal := collectLocalMetrics(types, collectMetricsOpts{})
m.Merge(&mLocal)
cctx, cancel := context.WithTimeout(healthCtx, time.Second/2)
@ -2671,7 +2720,7 @@ func fetchHealthInfo(healthCtx context.Context, objectAPI ObjectLayer, query *ur
poolsArgs := re.ReplaceAllString(cmdLine, `$3`)
var anonPools []string
if !(strings.Contains(poolsArgs, "{") && strings.Contains(poolsArgs, "}")) {
if !strings.Contains(poolsArgs, "{") || !strings.Contains(poolsArgs, "}") {
// No ellipses pattern. Anonymize host name from every pool arg
pools := strings.Fields(poolsArgs)
anonPools = make([]string, len(pools))
@ -2913,13 +2962,13 @@ func (a adminAPIHandlers) HealthInfoHandler(w http.ResponseWriter, r *http.Reque
}
if len(healthInfoCh) == 0 {
// Flush if nothing is queued
w.(http.Flusher).Flush()
xhttp.Flush(w)
}
case <-ticker.C:
if _, err := w.Write([]byte(" ")); err != nil {
return
}
w.(http.Flusher).Flush()
xhttp.Flush(w)
case <-healthCtx.Done():
return
}
@ -3045,7 +3094,7 @@ func targetStatus(ctx context.Context, h logger.Target) madmin.Status {
return madmin.Status{Status: string(madmin.ItemOffline)}
}
// fetchLoggerDetails return log info
// fetchLoggerInfo return log info
func fetchLoggerInfo(ctx context.Context) ([]madmin.Logger, []madmin.Audit) {
var loggerInfo []madmin.Logger
var auditloggerInfo []madmin.Audit
@ -3371,7 +3420,7 @@ func (a adminAPIHandlers) InspectDataHandler(w http.ResponseWriter, r *http.Requ
}
// save the format.json as part of inspect by default
if !(volume == minioMetaBucket && file == formatConfigFile) {
if volume != minioMetaBucket || file != formatConfigFile {
err = o.GetRawData(ctx, minioMetaBucket, formatConfigFile, rawDataFn)
}
if !errors.Is(err, errFileNotFound) {

View File

@ -263,7 +263,7 @@ func buildAdminRequest(queryVal url.Values, method, path string,
}
func TestAdminServerInfo(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
ctx, cancel := context.WithCancel(t.Context())
defer cancel()
adminTestBed, err := prepareAdminErasureTestBed(ctx)
@ -402,7 +402,7 @@ func (b byResourceUID) Less(i, j int) bool {
func TestTopLockEntries(t *testing.T) {
locksHeld := make(map[string][]lockRequesterInfo)
var owners []string
for i := 0; i < 4; i++ {
for i := range 4 {
owners = append(owners, fmt.Sprintf("node-%d", i))
}
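The counting loops here use Go 1.22's range-over-int form: for i := range n iterates i = 0 through n-1, and when the index is unused it can be dropped entirely, as in the for range 5 upload loop earlier. A compact sketch:

package main

import "fmt"

func main() {
	for i := range 4 { // i = 0, 1, 2, 3
		fmt.Printf("node-%d\n", i)
	}
	for range 2 { // index not needed
		fmt.Println("tick")
	}
}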
@ -410,7 +410,7 @@ func TestTopLockEntries(t *testing.T) {
// request UID, but 10 different resource names associated with it.
var lris []lockRequesterInfo
uuid := mustGetUUID()
for i := 0; i < 10; i++ {
for i := range 10 {
resource := fmt.Sprintf("bucket/delete-object-%d", i)
lri := lockRequesterInfo{
Name: resource,
@ -425,7 +425,7 @@ func TestTopLockEntries(t *testing.T) {
}
// Add a few concurrent read locks to the mix
for i := 0; i < 50; i++ {
for i := range 50 {
resource := fmt.Sprintf("bucket/get-object-%d", i)
lri := lockRequesterInfo{
Name: resource,
@ -463,6 +463,7 @@ func TestTopLockEntries(t *testing.T) {
Owner: lri.Owner,
ID: lri.UID,
Quorum: lri.Quorum,
Timestamp: time.Unix(0, lri.Timestamp),
})
}
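The added Timestamp field mirrors the lriToLockEntry fix above: the lock's raw nanosecond count is converted once with time.Unix(0, ns), so Timestamp and Elapsed derive from the same time.Time. Conversion sketch:

package main

import (
	"fmt"
	"time"
)

func main() {
	ns := time.Now().UnixNano()
	t := time.Unix(0, ns) // interpret ns as nanoseconds since the Unix epoch
	fmt.Println(time.Since(t))
}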

View File

@ -22,6 +22,7 @@ import (
"encoding/json"
"errors"
"fmt"
"maps"
"net/http"
"sort"
"sync"
@ -260,7 +261,7 @@ func (ahs *allHealState) stopHealSequence(path string) ([]byte, APIError) {
} else {
clientToken := he.clientToken
if globalIsDistErasure {
clientToken = fmt.Sprintf("%s:%d", he.clientToken, GetProxyEndpointLocalIndex(globalProxyEndpoints))
clientToken = fmt.Sprintf("%s%s%d", he.clientToken, getKeySeparator(), GetProxyEndpointLocalIndex(globalProxyEndpoints))
}
hsp = madmin.HealStopSuccess{
@ -331,7 +332,7 @@ func (ahs *allHealState) LaunchNewHealSequence(h *healSequence, objAPI ObjectLay
clientToken := h.clientToken
if globalIsDistErasure {
clientToken = fmt.Sprintf("%s:%d", h.clientToken, GetProxyEndpointLocalIndex(globalProxyEndpoints))
clientToken = fmt.Sprintf("%s%s%d", h.clientToken, getKeySeparator(), GetProxyEndpointLocalIndex(globalProxyEndpoints))
}
if h.clientToken == bgHealingUUID {
@ -520,9 +521,7 @@ func (h *healSequence) getScannedItemsMap() map[madmin.HealItemType]int64 {
// Make a copy before returning the value
retMap := make(map[madmin.HealItemType]int64, len(h.scannedItemsMap))
for k, v := range h.scannedItemsMap {
retMap[k] = v
}
maps.Copy(retMap, h.scannedItemsMap)
return retMap
}
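This and the two copies below replace hand-rolled map-copy loops with maps.Copy from the standard library's maps package (Go 1.21), which inserts every key/value pair of the source into the destination:

package main

import (
	"fmt"
	"maps"
)

func main() {
	src := map[string]int64{"object": 3, "bucket": 1}
	dst := make(map[string]int64, len(src))
	maps.Copy(dst, src) // equivalent to: for k, v := range src { dst[k] = v }
	fmt.Println(dst)
}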
@ -534,9 +533,7 @@ func (h *healSequence) getHealedItemsMap() map[madmin.HealItemType]int64 {
// Make a copy before returning the value
retMap := make(map[madmin.HealItemType]int64, len(h.healedItemsMap))
for k, v := range h.healedItemsMap {
retMap[k] = v
}
maps.Copy(retMap, h.healedItemsMap)
return retMap
}
@ -549,9 +546,7 @@ func (h *healSequence) getHealFailedItemsMap() map[madmin.HealItemType]int64 {
// Make a copy before returning the value
retMap := make(map[madmin.HealItemType]int64, len(h.healFailedItemsMap))
for k, v := range h.healFailedItemsMap {
retMap[k] = v
}
maps.Copy(retMap, h.healFailedItemsMap)
return retMap
}
@ -761,6 +756,15 @@ func (h *healSequence) queueHealTask(source healSource, healType madmin.HealItem
return nil
}
countOKDrives := func(drives []madmin.HealDriveInfo) (count int) {
for _, drive := range drives {
if drive.State == madmin.DriveStateOk {
count++
}
}
return count
}
// task queued, now wait for the response.
select {
case res := <-task.respCh:
@ -781,6 +785,11 @@ func (h *healSequence) queueHealTask(source healSource, healType madmin.HealItem
if res.err != nil {
res.result.Detail = res.err.Error()
}
if res.result.ParityBlocks > 0 && res.result.DataBlocks > 0 && res.result.DataBlocks > res.result.ParityBlocks {
if got := countOKDrives(res.result.After.Drives); got < res.result.ParityBlocks {
res.result.Detail = fmt.Sprintf("quorum loss - expected %d minimum, got drive states in OK %d", res.result.ParityBlocks, got)
}
}
return h.pushHealResultItem(res.result)
case <-h.ctx.Done():
return nil
@ -792,18 +801,20 @@ func (h *healSequence) healDiskMeta(objAPI ObjectLayer) error {
return h.healMinioSysMeta(objAPI, minioConfigPrefix)()
}
func (h *healSequence) healItems(objAPI ObjectLayer, bucketsOnly bool) error {
func (h *healSequence) healItems(objAPI ObjectLayer) error {
if h.clientToken == bgHealingUUID {
// For background heal do nothing.
return nil
}
if err := h.healDiskMeta(objAPI); err != nil {
return err
if h.bucket == "" { // heal internal meta only during a site-wide heal
if err := h.healDiskMeta(objAPI); err != nil {
return err
}
}
// Heal buckets and objects
return h.healBuckets(objAPI, bucketsOnly)
return h.healBuckets(objAPI)
}
// traverseAndHeal - traverses on-disk data and performs healing
@ -814,8 +825,7 @@ func (h *healSequence) healItems(objAPI ObjectLayer, bucketsOnly bool) error {
// has to wait until a safe point is reached, such as between scanning
// two objects.
func (h *healSequence) traverseAndHeal(objAPI ObjectLayer) {
bucketsOnly := false // Heals buckets and objects also.
h.traverseAndHealDoneCh <- h.healItems(objAPI, bucketsOnly)
h.traverseAndHealDoneCh <- h.healItems(objAPI)
xioutil.SafeClose(h.traverseAndHealDoneCh)
}
@ -826,6 +836,7 @@ func (h *healSequence) healMinioSysMeta(objAPI ObjectLayer, metaPrefix string) f
// NOTE: Healing on meta is run regardless
// of any bucket being selected; this is to ensure that
// meta are always up to date and correct.
h.settings.Recursive = true
return objAPI.HealObjects(h.ctx, minioMetaBucket, metaPrefix, h.settings, func(bucket, object, versionID string, scanMode madmin.HealScanMode) error {
if h.isQuitting() {
return errHealStopSignalled
@ -842,14 +853,14 @@ func (h *healSequence) healMinioSysMeta(objAPI ObjectLayer, metaPrefix string) f
}
// healBuckets - heals all buckets, or only the particular bucket if one was specified.
func (h *healSequence) healBuckets(objAPI ObjectLayer, bucketsOnly bool) error {
func (h *healSequence) healBuckets(objAPI ObjectLayer) error {
if h.isQuitting() {
return errHealStopSignalled
}
// 1. If a bucket was specified, heal only the bucket.
if h.bucket != "" {
return h.healBucket(objAPI, h.bucket, bucketsOnly)
return h.healBucket(objAPI, h.bucket, false)
}
buckets, err := objAPI.ListBuckets(h.ctx, BucketOptions{})
@ -863,7 +874,7 @@ func (h *healSequence) healBuckets(objAPI ObjectLayer, bucketsOnly bool) error {
})
for _, bucket := range buckets {
if err = h.healBucket(objAPI, bucket.Name, bucketsOnly); err != nil {
if err = h.healBucket(objAPI, bucket.Name, false); err != nil {
return err
}
}
@ -881,16 +892,6 @@ func (h *healSequence) healBucket(objAPI ObjectLayer, bucket string, bucketsOnly
return nil
}
if !h.settings.Recursive {
if h.object != "" {
if err := h.healObject(bucket, h.object, "", h.settings.ScanMode); err != nil {
return err
}
}
return nil
}
if err := objAPI.HealObjects(h.ctx, bucket, h.object, h.settings, h.healObject); err != nil {
return errFnHealFromAPIErr(h.ctx, err)
}

View File

@ -159,14 +159,14 @@ func registerAdminRouter(router *mux.Router, enableConfigOps bool) {
// Info operations
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/info").HandlerFunc(adminMiddleware(adminAPI.ServerInfoHandler, traceAllFlag, noObjLayerFlag))
adminRouter.Methods(http.MethodGet, http.MethodPost).Path(adminVersion + "/inspect-data").HandlerFunc(adminMiddleware(adminAPI.InspectDataHandler, noGZFlag, traceAllFlag))
adminRouter.Methods(http.MethodGet, http.MethodPost).Path(adminVersion + "/inspect-data").HandlerFunc(adminMiddleware(adminAPI.InspectDataHandler, noGZFlag, traceHdrsS3HFlag))
// StorageInfo operations
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/storageinfo").HandlerFunc(adminMiddleware(adminAPI.StorageInfoHandler, traceAllFlag))
// DataUsageInfo operations
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/datausageinfo").HandlerFunc(adminMiddleware(adminAPI.DataUsageInfoHandler, traceAllFlag))
// Metrics operation
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/metrics").HandlerFunc(adminMiddleware(adminAPI.MetricsHandler, traceAllFlag))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/metrics").HandlerFunc(adminMiddleware(adminAPI.MetricsHandler, traceHdrsS3HFlag))
if globalIsDistErasure || globalIsErasure {
// Heal operations
@ -193,9 +193,9 @@ func registerAdminRouter(router *mux.Router, enableConfigOps bool) {
// Profiling operations - deprecated API
adminRouter.Methods(http.MethodPost).Path(adminVersion+"/profiling/start").HandlerFunc(adminMiddleware(adminAPI.StartProfilingHandler, traceAllFlag, noObjLayerFlag)).
Queries("profilerType", "{profilerType:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/profiling/download").HandlerFunc(adminMiddleware(adminAPI.DownloadProfilingHandler, traceAllFlag, noObjLayerFlag))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/profiling/download").HandlerFunc(adminMiddleware(adminAPI.DownloadProfilingHandler, traceHdrsS3HFlag, noObjLayerFlag))
// Profiling operations
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/profile").HandlerFunc(adminMiddleware(adminAPI.ProfileHandler, traceAllFlag, noObjLayerFlag))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/profile").HandlerFunc(adminMiddleware(adminAPI.ProfileHandler, traceHdrsS3HFlag, noObjLayerFlag))
// Config KV operations.
if enableConfigOps {
@ -244,6 +244,10 @@ func registerAdminRouter(router *mux.Router, enableConfigOps bool) {
// STS accounts ops
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/temporary-account-info").HandlerFunc(adminMiddleware(adminAPI.TemporaryAccountInfo)).Queries("accessKey", "{accessKey:.*}")
// Access key (service account/STS) operations
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/list-access-keys-bulk").HandlerFunc(adminMiddleware(adminAPI.ListAccessKeysBulk)).Queries("listType", "{listType:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/info-access-key").HandlerFunc(adminMiddleware(adminAPI.InfoAccessKey)).Queries("accessKey", "{accessKey:.*}")
// Info policy IAM latest
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/info-canned-policy").HandlerFunc(adminMiddleware(adminAPI.InfoCannedPolicy)).Queries("name", "{name:.*}")
// List policies latest
@ -290,8 +294,9 @@ func registerAdminRouter(router *mux.Router, enableConfigOps bool) {
// Import IAM info
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/import-iam").HandlerFunc(adminMiddleware(adminAPI.ImportIAM, noGZFlag))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/import-iam-v2").HandlerFunc(adminMiddleware(adminAPI.ImportIAMV2, noGZFlag))
// IDentity Provider configuration APIs
// Identity Provider configuration APIs
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/idp-config/{type}/{name}").HandlerFunc(adminMiddleware(adminAPI.AddIdentityProviderCfg))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/idp-config/{type}/{name}").HandlerFunc(adminMiddleware(adminAPI.UpdateIdentityProviderCfg))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/idp-config/{type}").HandlerFunc(adminMiddleware(adminAPI.ListIdentityProviderCfg))
@ -301,12 +306,18 @@ func registerAdminRouter(router *mux.Router, enableConfigOps bool) {
// LDAP specific service accounts ops
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/idp/ldap/add-service-account").HandlerFunc(adminMiddleware(adminAPI.AddServiceAccountLDAP))
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/idp/ldap/list-access-keys").
HandlerFunc(adminMiddleware(adminAPI.ListAccessKeysLDAP)).
Queries("userDN", "{userDN:.*}", "listType", "{listType:.*}")
HandlerFunc(adminMiddleware(adminAPI.ListAccessKeysLDAP)).Queries("userDN", "{userDN:.*}", "listType", "{listType:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/idp/ldap/list-access-keys-bulk").
HandlerFunc(adminMiddleware(adminAPI.ListAccessKeysLDAPBulk)).Queries("listType", "{listType:.*}")
// LDAP IAM operations
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/idp/ldap/policy-entities").HandlerFunc(adminMiddleware(adminAPI.ListLDAPPolicyMappingEntities))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/idp/ldap/policy/{operation}").HandlerFunc(adminMiddleware(adminAPI.AttachDetachPolicyLDAP))
// OpenID specific service accounts ops
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/idp/openid/list-access-keys-bulk").
HandlerFunc(adminMiddleware(adminAPI.ListAccessKeysOpenIDBulk)).Queries("listType", "{listType:.*}")
// -- END IAM APIs --
// GetBucketQuotaConfig
@ -340,6 +351,9 @@ func registerAdminRouter(router *mux.Router, enableConfigOps bool) {
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/list-jobs").HandlerFunc(
adminMiddleware(adminAPI.ListBatchJobs))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/status-job").HandlerFunc(
adminMiddleware(adminAPI.BatchJobStatus))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/describe-job").HandlerFunc(
adminMiddleware(adminAPI.DescribeBatchJob))
adminRouter.Methods(http.MethodDelete).Path(adminVersion + "/cancel-job").HandlerFunc(
@ -416,6 +430,9 @@ func registerAdminRouter(router *mux.Router, enableConfigOps bool) {
// -- Health API --
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/healthinfo").
HandlerFunc(adminMiddleware(adminAPI.HealthInfoHandler))
// STS Revocation
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/revoke-tokens/{userProvider}").HandlerFunc(adminMiddleware(adminAPI.RevokeTokens))
}
// If none of the routes match add default error handler routes

View File

@ -32,6 +32,8 @@ type DeletedObject struct {
DeleteMarkerMTime DeleteMarkerMTime `xml:"-"`
// MinIO extensions to support delete marker replication
ReplicationState ReplicationState `xml:"-"`
found bool // the object was found during deletion
}
// DeleteMarkerMTime is an embedded type containing time.Time for XML marshal
@ -42,10 +44,10 @@ type DeleteMarkerMTime struct {
// MarshalXML encodes expiration date if it is non-zero and encodes
// empty string otherwise
func (t DeleteMarkerMTime) MarshalXML(e *xml.Encoder, startElement xml.StartElement) error {
if t.Time.IsZero() {
if t.IsZero() {
return nil
}
return e.EncodeElement(t.Time.Format(time.RFC3339), startElement)
return e.EncodeElement(t.Format(time.RFC3339), startElement)
}
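Because DeleteMarkerMTime embeds time.Time, the embedded type's methods are promoted: t.IsZero() and t.Format(...) resolve to t.Time.IsZero() and t.Time.Format(...), so the explicit selector the diff removes was redundant. Promotion in miniature:

package main

import (
	"fmt"
	"time"
)

type DeleteMarkerMTime struct {
	time.Time
}

func main() {
	var t DeleteMarkerMTime
	// Promoted method: identical to t.Time.IsZero().
	fmt.Println(t.IsZero()) // true for the zero value
}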
// ObjectV object version key/versionId

View File

@ -28,7 +28,7 @@ import (
"strconv"
"strings"
"github.com/Azure/azure-storage-blob-go/azblob"
"github.com/Azure/azure-sdk-for-go/sdk/azcore"
"github.com/minio/minio/internal/ioutil"
"google.golang.org/api/googleapi"
@ -213,6 +213,10 @@ const (
ErrPolicyAlreadyAttached
ErrPolicyNotAttached
ErrExcessData
ErrPolicyInvalidName
ErrNoTokenRevokeType
ErrAdminOpenIDNotEnabled
ErrAdminNoSuchAccessKey
// Add new error codes here.
// SSE-S3/SSE-KMS related API errors
@ -445,6 +449,8 @@ const (
ErrAdminNoAccessKey
ErrAdminNoSecretKey
ErrIAMNotInitialized
apiErrCodeEnd // This is used only for the testing code
)
@ -559,6 +565,16 @@ var errorCodes = errorCodeMap{
Description: "More data provided than indicated content length",
HTTPStatusCode: http.StatusBadRequest,
},
ErrPolicyInvalidName: {
Code: "PolicyInvalidName",
Description: "Policy name may not contain comma",
HTTPStatusCode: http.StatusBadRequest,
},
ErrAdminOpenIDNotEnabled: {
Code: "OpenIDNotEnabled",
Description: "No enabled OpenID Connect identity providers",
HTTPStatusCode: http.StatusBadRequest,
},
ErrPolicyTooLarge: {
Code: "PolicyTooLarge",
Description: "Policy exceeds the maximum allowed document size.",
@ -621,7 +637,7 @@ var errorCodes = errorCodeMap{
},
ErrMissingContentMD5: {
Code: "MissingContentMD5",
Description: "Missing required header for this request: Content-Md5.",
Description: "Missing or invalid required header for this request: Content-Md5 or Amz-Content-Checksum",
HTTPStatusCode: http.StatusBadRequest,
},
ErrMissingSecurityHeader: {
@ -967,7 +983,7 @@ var errorCodes = errorCodeMap{
ErrReplicationRemoteConnectionError: {
Code: "XMinioAdminReplicationRemoteConnectionError",
Description: "Remote service connection error",
HTTPStatusCode: http.StatusNotFound,
HTTPStatusCode: http.StatusServiceUnavailable,
},
ErrReplicationBandwidthLimitError: {
Code: "XMinioAdminReplicationBandwidthLimitError",
@ -976,7 +992,7 @@ var errorCodes = errorCodeMap{
},
ErrReplicationNoExistingObjects: {
Code: "XMinioReplicationNoExistingObjects",
Description: "No matching ExistingsObjects rule enabled",
Description: "No matching ExistingObjects rule enabled",
HTTPStatusCode: http.StatusBadRequest,
},
ErrRemoteTargetDenyAddError: {
@ -1256,6 +1272,16 @@ var errorCodes = errorCodeMap{
Description: "The security token included in the request is invalid",
HTTPStatusCode: http.StatusForbidden,
},
ErrNoTokenRevokeType: {
Code: "InvalidArgument",
Description: "No token revoke type specified and one could not be inferred from the request",
HTTPStatusCode: http.StatusBadRequest,
},
ErrAdminNoSuchAccessKey: {
Code: "XMinioAdminNoSuchAccessKey",
Description: "The specified access key does not exist.",
HTTPStatusCode: http.StatusNotFound,
},
// S3 extensions.
ErrContentSHA256Mismatch: {
@ -1305,6 +1331,11 @@ var errorCodes = errorCodeMap{
Description: "Server not initialized yet, please try again.",
HTTPStatusCode: http.StatusServiceUnavailable,
},
ErrIAMNotInitialized: {
Code: "XMinioIAMNotInitialized",
Description: "IAM sub-system not initialized yet, please try again.",
HTTPStatusCode: http.StatusServiceUnavailable,
},
ErrBucketMetadataNotInitialized: {
Code: "XMinioBucketMetadataNotInitialized",
Description: "Bucket metadata not initialized yet, please try again.",
@ -1479,7 +1510,7 @@ var errorCodes = errorCodeMap{
},
ErrTooManyRequests: {
Code: "TooManyRequests",
Description: "Deadline exceeded while waiting in incoming queue, please reduce your request rate",
Description: "Please reduce your request rate",
HTTPStatusCode: http.StatusTooManyRequests,
},
ErrUnsupportedMetadata: {
@ -2148,6 +2179,8 @@ func toAPIErrorCode(ctx context.Context, err error) (apiErr APIErrorCode) {
apiErr = ErrAdminNoSuchUserLDAPWarn
case errNoSuchServiceAccount:
apiErr = ErrAdminServiceAccountNotFound
case errNoSuchAccessKey:
apiErr = ErrAdminNoSuchAccessKey
case errNoSuchGroup:
apiErr = ErrAdminNoSuchGroup
case errGroupNotEmpty:
@ -2241,6 +2274,8 @@ func toAPIErrorCode(ctx context.Context, err error) (apiErr APIErrorCode) {
apiErr = ErrServerNotInitialized
case errBucketMetadataNotInitialized:
apiErr = ErrBucketMetadataNotInitialized
case hash.ErrInvalidChecksum:
apiErr = ErrInvalidChecksum
}
// Compression errors
@ -2542,11 +2577,11 @@ func toAPIError(ctx context.Context, err error) APIError {
if len(e.Errors) >= 1 {
apiErr.Code = e.Errors[0].Reason
}
case azblob.StorageError:
case *azcore.ResponseError:
apiErr = APIError{
Code: string(e.ServiceCode()),
Code: e.ErrorCode,
Description: e.Error(),
HTTPStatusCode: e.Response().StatusCode,
HTTPStatusCode: e.StatusCode,
}
// Add more other SDK related errors here if any in future.
default:
@ -2585,7 +2620,7 @@ func getAPIError(code APIErrorCode) APIError {
return errorCodes.ToAPIErr(ErrInternalError)
}
// getErrorResponse gets in standard error and resource value and
// getAPIErrorResponse gets in standard error and resource value and
// provides an encodable populated response value
func getAPIErrorResponse(ctx context.Context, err APIError, resource, requestID, hostID string) APIErrorResponse {
reqInfo := logger.GetReqInfo(ctx)

View File

@ -18,7 +18,6 @@
package cmd
import (
"context"
"errors"
"testing"
@ -64,7 +63,7 @@ var toAPIErrorTests = []struct {
}
func TestAPIErrCode(t *testing.T) {
ctx := context.Background()
ctx := t.Context()
for i, testCase := range toAPIErrorTests {
errCode := toAPIErrorCode(ctx, testCase.err)
if errCode != testCase.errCode {

View File

@ -23,6 +23,7 @@ import (
"encoding/json"
"encoding/xml"
"fmt"
"mime"
"net/http"
"strconv"
"strings"
@ -64,7 +65,7 @@ func setCommonHeaders(w http.ResponseWriter) {
}
// Encodes the response headers into XML format.
func encodeResponse(response interface{}) []byte {
func encodeResponse(response any) []byte {
var buf bytes.Buffer
buf.WriteString(xml.Header)
if err := xml.NewEncoder(&buf).Encode(response); err != nil {
@ -82,7 +83,7 @@ func encodeResponse(response interface{}) []byte {
// Do not use this function for anything other than ListObjects()
// variants, please open a github discussion if you wish to use
// this in other places.
func encodeResponseList(response interface{}) []byte {
func encodeResponseList(response any) []byte {
var buf bytes.Buffer
buf.WriteString(xxml.Header)
if err := xxml.NewEncoder(&buf).Encode(response); err != nil {
@ -93,7 +94,7 @@ func encodeResponseList(response interface{}) []byte {
}
// Encodes the response headers into JSON format.
func encodeResponseJSON(response interface{}) []byte {
func encodeResponseJSON(response any) []byte {
var bytesBuffer bytes.Buffer
e := json.NewEncoder(&bytesBuffer)
e.Encode(response)
@ -168,6 +169,32 @@ func setObjectHeaders(ctx context.Context, w http.ResponseWriter, objInfo Object
if !stringsHasPrefixFold(k, userMetadataPrefix) {
continue
}
// check the doc https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingMetadata.html
// For metadata values like "ö", "ÄMÄZÕÑ S3", and "öha, das sollte eigentlich
// funktionieren", tested against a real AWS S3 bucket, S3 may encode incorrectly. For
// example, "ö" was encoded as =?UTF-8?B?w4PCtg==?=, producing invalid UTF-8 instead
// of =?UTF-8?B?w7Y=?=. This mirrors the mis-encoding seen with other non-ASCII strings.
//
// S3 uses B-encoding (Base64) for non-ASCII-heavy metadata and Q-encoding
// (quoted-printable) for mostly ASCII strings. Long strings are split at word
// boundaries to fit RFC 2047's 75-character limit, ensuring HTTP parser
// compatibility.
//
// However, this splitting increases header size and can introduce errors, unlike Go's
// mime package in MinIO, which correctly encodes strings with fixed B/Q encodings,
// avoiding S3's heuristic-driven issues.
//
// For MinIO developers, decode S3 metadata with mime.WordDecoder, validate outputs,
// report encoding bugs to AWS, and use ASCII-only metadata to ensure reliable S3 API
// compatibility.
if needsMimeEncoding(v) {
// see https://github.com/golang/go/blob/release-branch.go1.24/src/net/mail/message.go#L325
if strings.ContainsAny(v, "\"#$%&'(),.:;<>@[]^`{|}~") {
v = mime.BEncoding.Encode("UTF-8", v)
} else {
v = mime.QEncoding.Encode("UTF-8", v)
}
}
w.Header()[strings.ToLower(k)] = []string{v}
isSet = true
break
@ -229,3 +256,14 @@ func setObjectHeaders(ctx context.Context, w http.ResponseWriter, objInfo Object
return nil
}
// needsEncoding reports whether s contains any bytes that need to be encoded.
// see mime.needsEncoding
func needsMimeEncoding(s string) bool {
for _, b := range s {
if (b < ' ' || b > '~') && b != '\t' {
return true
}
}
return false
}
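A round-trip sketch of the encoding decision setObjectHeaders now makes: Q-encoding for mostly-ASCII values, B-encoding when RFC 2047 "special" characters are present, and mime.WordDecoder on the consuming side (sample value borrowed from the comment above):

package main

import (
	"fmt"
	"mime"
	"strings"
)

func main() {
	v := "öha, das sollte eigentlich funktionieren"
	var enc string
	// Same branch as setObjectHeaders: the comma forces B-encoding here.
	if strings.ContainsAny(v, "\"#$%&'(),.:;<>@[]^`{|}~") {
		enc = mime.BEncoding.Encode("UTF-8", v)
	} else {
		enc = mime.QEncoding.Encode("UTF-8", v)
	}
	fmt.Println(enc)

	dec := new(mime.WordDecoder)
	out, err := dec.DecodeHeader(enc)
	fmt.Println(out, err) // round-trips back to the original value
}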

View File

@ -34,7 +34,8 @@ func TestNewRequestID(t *testing.T) {
e = char
// Ensure that it is alphanumeric, in this case, between 0-9 and A-Z.
if !(('0' <= e && e <= '9') || ('A' <= e && e <= 'Z')) {
isAlnum := ('0' <= e && e <= '9') || ('A' <= e && e <= 'Z')
if !isAlnum {
t.Fail()
}
}

View File

@ -31,7 +31,7 @@ func getListObjectsV1Args(values url.Values) (prefix, marker, delimiter string,
var err error
if maxkeys, err = strconv.Atoi(values.Get("max-keys")); err != nil {
errCode = ErrInvalidMaxKeys
return
return prefix, marker, delimiter, maxkeys, encodingType, errCode
}
} else {
maxkeys = maxObjectList
@ -41,7 +41,7 @@ func getListObjectsV1Args(values url.Values) (prefix, marker, delimiter string,
marker = values.Get("marker")
delimiter = values.Get("delimiter")
encodingType = values.Get("encoding-type")
return
return prefix, marker, delimiter, maxkeys, encodingType, errCode
}
func getListBucketObjectVersionsArgs(values url.Values) (prefix, marker, delimiter string, maxkeys int, encodingType, versionIDMarker string, errCode APIErrorCode) {
@ -51,7 +51,7 @@ func getListBucketObjectVersionsArgs(values url.Values) (prefix, marker, delimit
var err error
if maxkeys, err = strconv.Atoi(values.Get("max-keys")); err != nil {
errCode = ErrInvalidMaxKeys
return
return prefix, marker, delimiter, maxkeys, encodingType, versionIDMarker, errCode
}
} else {
maxkeys = maxObjectList
@ -62,7 +62,7 @@ func getListBucketObjectVersionsArgs(values url.Values) (prefix, marker, delimit
delimiter = values.Get("delimiter")
encodingType = values.Get("encoding-type")
versionIDMarker = values.Get("version-id-marker")
return
return prefix, marker, delimiter, maxkeys, encodingType, versionIDMarker, errCode
}
// Parse bucket url queries for ListObjects V2.
@ -73,7 +73,7 @@ func getListObjectsV2Args(values url.Values) (prefix, token, startAfter, delimit
if val, ok := values["continuation-token"]; ok {
if len(val[0]) == 0 {
errCode = ErrIncorrectContinuationToken
return
return prefix, token, startAfter, delimiter, fetchOwner, maxkeys, encodingType, errCode
}
}
@ -81,7 +81,7 @@ func getListObjectsV2Args(values url.Values) (prefix, token, startAfter, delimit
var err error
if maxkeys, err = strconv.Atoi(values.Get("max-keys")); err != nil {
errCode = ErrInvalidMaxKeys
return
return prefix, token, startAfter, delimiter, fetchOwner, maxkeys, encodingType, errCode
}
} else {
maxkeys = maxObjectList
@ -97,11 +97,11 @@ func getListObjectsV2Args(values url.Values) (prefix, token, startAfter, delimit
decodedToken, err := base64.StdEncoding.DecodeString(token)
if err != nil {
errCode = ErrIncorrectContinuationToken
return
return prefix, token, startAfter, delimiter, fetchOwner, maxkeys, encodingType, errCode
}
token = string(decodedToken)
}
return
return prefix, token, startAfter, delimiter, fetchOwner, maxkeys, encodingType, errCode
}
// Parse bucket url queries for ?uploads
@ -112,7 +112,7 @@ func getBucketMultipartResources(values url.Values) (prefix, keyMarker, uploadID
var err error
if maxUploads, err = strconv.Atoi(values.Get("max-uploads")); err != nil {
errCode = ErrInvalidMaxUploads
return
return prefix, keyMarker, uploadIDMarker, delimiter, maxUploads, encodingType, errCode
}
} else {
maxUploads = maxUploadsList
@ -123,7 +123,7 @@ func getBucketMultipartResources(values url.Values) (prefix, keyMarker, uploadID
uploadIDMarker = values.Get("upload-id-marker")
delimiter = values.Get("delimiter")
encodingType = values.Get("encoding-type")
return
return prefix, keyMarker, uploadIDMarker, delimiter, maxUploads, encodingType, errCode
}
// Parse object url queries
@ -134,7 +134,7 @@ func getObjectResources(values url.Values) (uploadID string, partNumberMarker, m
if values.Get("max-parts") != "" {
if maxParts, err = strconv.Atoi(values.Get("max-parts")); err != nil {
errCode = ErrInvalidMaxParts
return
return uploadID, partNumberMarker, maxParts, encodingType, errCode
}
} else {
maxParts = maxPartsList
@ -143,11 +143,11 @@ func getObjectResources(values url.Values) (uploadID string, partNumberMarker, m
if values.Get("part-number-marker") != "" {
if partNumberMarker, err = strconv.Atoi(values.Get("part-number-marker")); err != nil {
errCode = ErrInvalidPartNumberMarker
return
return uploadID, partNumberMarker, maxParts, encodingType, errCode
}
}
uploadID = values.Get("uploadId")
encodingType = values.Get("encoding-type")
return
return uploadID, partNumberMarker, maxParts, encodingType, errCode
}

View File

@ -166,10 +166,11 @@ type Part struct {
Size int64
// Checksum values
ChecksumCRC32 string `xml:"ChecksumCRC32,omitempty"`
ChecksumCRC32C string `xml:"ChecksumCRC32C,omitempty"`
ChecksumSHA1 string `xml:"ChecksumSHA1,omitempty"`
ChecksumSHA256 string `xml:"ChecksumSHA256,omitempty"`
ChecksumCRC32 string `xml:"ChecksumCRC32,omitempty"`
ChecksumCRC32C string `xml:"ChecksumCRC32C,omitempty"`
ChecksumSHA1 string `xml:"ChecksumSHA1,omitempty"`
ChecksumSHA256 string `xml:"ChecksumSHA256,omitempty"`
ChecksumCRC64NVME string `xml:",omitempty"`
}
// ListPartsResponse - format for list parts response.
@ -192,6 +193,8 @@ type ListPartsResponse struct {
IsTruncated bool
ChecksumAlgorithm string
ChecksumType string
// List of parts.
Parts []Part `xml:"Part"`
}
@ -413,10 +416,11 @@ type CompleteMultipartUploadResponse struct {
Key string
ETag string
ChecksumCRC32 string `xml:"ChecksumCRC32,omitempty"`
ChecksumCRC32C string `xml:"ChecksumCRC32C,omitempty"`
ChecksumSHA1 string `xml:"ChecksumSHA1,omitempty"`
ChecksumSHA256 string `xml:"ChecksumSHA256,omitempty"`
ChecksumCRC32 string `xml:"ChecksumCRC32,omitempty"`
ChecksumCRC32C string `xml:"ChecksumCRC32C,omitempty"`
ChecksumSHA1 string `xml:"ChecksumSHA1,omitempty"`
ChecksumSHA256 string `xml:"ChecksumSHA256,omitempty"`
ChecksumCRC64NVME string `xml:",omitempty"`
}
// DeleteError structure.
@ -516,7 +520,6 @@ func cleanReservedKeys(metadata map[string]string) map[string]string {
}
case crypto.SSEC:
m[xhttp.AmzServerSideEncryptionCustomerAlgorithm] = xhttp.AmzEncryptionAES
}
var toRemove []string
@ -593,8 +596,6 @@ func generateListVersionsResponse(ctx context.Context, bucket, prefix, marker, v
for k, v := range cleanReservedKeys(object.UserDefined) {
content.UserMetadata.Set(k, v)
}
content.UserMetadata.Set("expires", object.Expires.Format(http.TimeFormat))
content.Internal = &ObjectInternalInfo{
K: object.DataBlocks,
M: object.ParityBlocks,
@ -729,7 +730,6 @@ func generateListObjectsV2Response(ctx context.Context, bucket, prefix, token, n
for k, v := range cleanReservedKeys(object.UserDefined) {
content.UserMetadata.Set(k, v)
}
content.UserMetadata.Set("expires", object.Expires.Format(http.TimeFormat))
content.Internal = &ObjectInternalInfo{
K: object.DataBlocks,
M: object.ParityBlocks,
@ -789,18 +789,19 @@ func generateInitiateMultipartUploadResponse(bucket, key, uploadID string) Initi
}
// generates CompleteMultipartUploadResponse for given bucket, key, location and ETag.
func generateCompleteMultpartUploadResponse(bucket, key, location string, oi ObjectInfo) CompleteMultipartUploadResponse {
cs := oi.decryptChecksums(0)
func generateCompleteMultipartUploadResponse(bucket, key, location string, oi ObjectInfo, h http.Header) CompleteMultipartUploadResponse {
cs, _ := oi.decryptChecksums(0, h)
c := CompleteMultipartUploadResponse{
Location: location,
Bucket: bucket,
Key: key,
// AWS S3 quotes the ETag in XML, make sure we are compatible here.
ETag: "\"" + oi.ETag + "\"",
ChecksumSHA1: cs[hash.ChecksumSHA1.String()],
ChecksumSHA256: cs[hash.ChecksumSHA256.String()],
ChecksumCRC32: cs[hash.ChecksumCRC32.String()],
ChecksumCRC32C: cs[hash.ChecksumCRC32C.String()],
ETag: "\"" + oi.ETag + "\"",
ChecksumSHA1: cs[hash.ChecksumSHA1.String()],
ChecksumSHA256: cs[hash.ChecksumSHA256.String()],
ChecksumCRC32: cs[hash.ChecksumCRC32.String()],
ChecksumCRC32C: cs[hash.ChecksumCRC32C.String()],
ChecksumCRC64NVME: cs[hash.ChecksumCRC64NVME.String()],
}
return c
}
@ -828,6 +829,7 @@ func generateListPartsResponse(partsInfo ListPartsInfo, encodingType string) Lis
listPartsResponse.IsTruncated = partsInfo.IsTruncated
listPartsResponse.NextPartNumberMarker = partsInfo.NextPartNumberMarker
listPartsResponse.ChecksumAlgorithm = partsInfo.ChecksumAlgorithm
listPartsResponse.ChecksumType = partsInfo.ChecksumType
listPartsResponse.Parts = make([]Part, len(partsInfo.Parts))
for index, part := range partsInfo.Parts {
@ -840,6 +842,7 @@ func generateListPartsResponse(partsInfo ListPartsInfo, encodingType string) Lis
newPart.ChecksumCRC32C = part.ChecksumCRC32C
newPart.ChecksumSHA1 = part.ChecksumSHA1
newPart.ChecksumSHA256 = part.ChecksumSHA256
newPart.ChecksumCRC64NVME = part.ChecksumCRC64NVME
listPartsResponse.Parts[index] = newPart
}
return listPartsResponse
@ -886,6 +889,12 @@ func generateMultiDeleteResponse(quiet bool, deletedObjects []DeletedObject, err
}
func writeResponse(w http.ResponseWriter, statusCode int, response []byte, mType mimeType) {
// Don't write a response if one has already been written.
// Fixes https://github.com/minio/minio/issues/21633
if headersAlreadyWritten(w) {
return
}
if statusCode == 0 {
statusCode = 200
}
@ -946,10 +955,11 @@ func writeSuccessResponseHeadersOnly(w http.ResponseWriter) {
// writeErrorResponse writes error headers
func writeErrorResponse(ctx context.Context, w http.ResponseWriter, err APIError, reqURL *url.URL) {
if err.HTTPStatusCode == http.StatusServiceUnavailable {
// Set retry-after header to indicate user-agents to retry request after 120secs.
switch err.HTTPStatusCode {
case http.StatusServiceUnavailable, http.StatusTooManyRequests:
// Set the Retry-After header to indicate to user agents that they should retry after 60 seconds.
// https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Retry-After
w.Header().Set(xhttp.RetryAfter, "120")
w.Header().Set(xhttp.RetryAfter, "60")
}
switch err.Code {
@ -1011,3 +1021,45 @@ func writeCustomErrorResponseJSON(ctx context.Context, w http.ResponseWriter, er
encodedErrorResponse := encodeResponseJSON(errorResponse)
writeResponse(w, err.HTTPStatusCode, encodedErrorResponse, mimeJSON)
}
type unwrapper interface {
Unwrap() http.ResponseWriter
}
// headersAlreadyWritten returns true if the headers have already been written
// to this response writer. It will unwrap the ResponseWriter if possible to try
// and find a trackingResponseWriter.
func headersAlreadyWritten(w http.ResponseWriter) bool {
for {
if trw, ok := w.(*trackingResponseWriter); ok {
return trw.headerWritten
} else if uw, ok := w.(unwrapper); ok {
w = uw.Unwrap()
} else {
return false
}
}
}
// trackingResponseWriter wraps a ResponseWriter and notes when WriteHeader has
// been called. This allows high-level request handlers to check whether
// something has already sent the response header.
type trackingResponseWriter struct {
http.ResponseWriter
headerWritten bool
}
func (w *trackingResponseWriter) WriteHeader(statusCode int) {
if !w.headerWritten {
w.headerWritten = true
w.ResponseWriter.WriteHeader(statusCode)
}
}
func (w *trackingResponseWriter) Write(b []byte) (int, error) {
return w.ResponseWriter.Write(b)
}
func (w *trackingResponseWriter) Unwrap() http.ResponseWriter {
return w.ResponseWriter
}
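A self-contained sketch of the pattern above (it redefines a minimal copy of the wrapper rather than importing MinIO's type) showing how the first response wins and a second WriteHeader call is silently dropped:

package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

type trackingResponseWriter struct {
	http.ResponseWriter
	headerWritten bool
}

func (w *trackingResponseWriter) WriteHeader(statusCode int) {
	if !w.headerWritten {
		w.headerWritten = true
		w.ResponseWriter.WriteHeader(statusCode)
	}
}

func main() {
	rec := httptest.NewRecorder()
	w := &trackingResponseWriter{ResponseWriter: rec}
	w.WriteHeader(http.StatusPreconditionFailed) // e.g. a preconditions check responds first
	w.WriteHeader(http.StatusOK)                 // suppressed: header already written
	fmt.Println(rec.Code)                        // 412
}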


@ -18,8 +18,12 @@
package cmd
import (
"io"
"net/http"
"net/http/httptest"
"testing"
"github.com/klauspost/compress/gzhttp"
)
// Tests object location.
@ -100,7 +104,6 @@ func TestObjectLocation(t *testing.T) {
},
}
for _, testCase := range testCases {
testCase := testCase
t.Run("", func(t *testing.T) {
gotLocation := getObjectLocation(testCase.request, testCase.domains, testCase.bucket, testCase.object)
if testCase.expectedLocation != gotLocation {
@ -123,3 +126,89 @@ func TestGetURLScheme(t *testing.T) {
t.Errorf("Expected %s, got %s", httpsScheme, gotScheme)
}
}
func TestTrackingResponseWriter(t *testing.T) {
rw := httptest.NewRecorder()
trw := &trackingResponseWriter{ResponseWriter: rw}
trw.WriteHeader(123)
if !trw.headerWritten {
t.Fatal("headerWritten was not set by WriteHeader call")
}
_, err := trw.Write([]byte("hello"))
if err != nil {
t.Fatalf("Write unexpectedly failed: %v", err)
}
// Check that WriteHeader and Write were called on the underlying response writer
resp := rw.Result()
if resp.StatusCode != 123 {
t.Fatalf("unexpected status: %v", resp.StatusCode)
}
body, err := io.ReadAll(resp.Body)
if err != nil {
t.Fatalf("reading response body failed: %v", err)
}
if string(body) != "hello" {
t.Fatalf("response body incorrect: %v", string(body))
}
// Check that Unwrap works
if trw.Unwrap() != rw {
t.Fatalf("Unwrap returned wrong result: %v", trw.Unwrap())
}
}
func TestHeadersAlreadyWritten(t *testing.T) {
rw := httptest.NewRecorder()
trw := &trackingResponseWriter{ResponseWriter: rw}
if headersAlreadyWritten(trw) {
t.Fatal("headers have not been written yet")
}
trw.WriteHeader(123)
if !headersAlreadyWritten(trw) {
t.Fatal("headers were written")
}
}
func TestHeadersAlreadyWrittenWrapped(t *testing.T) {
rw := httptest.NewRecorder()
trw := &trackingResponseWriter{ResponseWriter: rw}
wrap1 := &gzhttp.NoGzipResponseWriter{ResponseWriter: trw}
wrap2 := &gzhttp.NoGzipResponseWriter{ResponseWriter: wrap1}
if headersAlreadyWritten(wrap2) {
t.Fatal("headers have not been written yet")
}
wrap2.WriteHeader(123)
if !headersAlreadyWritten(wrap2) {
t.Fatal("headers were written")
}
}
func TestWriteResponseHeadersNotWritten(t *testing.T) {
rw := httptest.NewRecorder()
trw := &trackingResponseWriter{ResponseWriter: rw}
writeResponse(trw, 299, []byte("hello"), "application/foo")
resp := rw.Result()
if resp.StatusCode != 299 {
t.Fatal("response wasn't written")
}
}
func TestWriteResponseHeadersWritten(t *testing.T) {
rw := httptest.NewRecorder()
rw.Code = -1
trw := &trackingResponseWriter{ResponseWriter: rw, headerWritten: true}
writeResponse(trw, 200, []byte("hello"), "application/foo")
if rw.Code != -1 {
t.Fatalf("response was written when it shouldn't have been (Code=%v)", rw.Code)
}
}


@ -218,6 +218,8 @@ func s3APIMiddleware(f http.HandlerFunc, flags ...s3HFlag) http.HandlerFunc {
handlerName := getHandlerName(f, "objectAPIHandlers")
var handler http.HandlerFunc = func(w http.ResponseWriter, r *http.Request) {
w = &trackingResponseWriter{ResponseWriter: w}
// Wrap the actual handler with the appropriate tracing middleware.
var tracedHandler http.HandlerFunc
if handlerFlags.has(traceHdrsS3HFlag) {
@ -227,13 +229,13 @@ func s3APIMiddleware(f http.HandlerFunc, flags ...s3HFlag) http.HandlerFunc {
}
// Skip wrapping with the gzip middleware if specified.
var gzippedHandler http.HandlerFunc = tracedHandler
gzippedHandler := tracedHandler
if !handlerFlags.has(noGZS3HFlag) {
gzippedHandler = gzipHandler(gzippedHandler)
}
// Skip wrapping with throttling middleware if specified.
var throttledHandler http.HandlerFunc = gzippedHandler
throttledHandler := gzippedHandler
if !handlerFlags.has(noThrottleS3HFlag) {
throttledHandler = maxClients(throttledHandler)
}
@ -387,6 +389,11 @@ func registerAPIRouter(router *mux.Router) {
HeadersRegexp(xhttp.AmzSnowballExtract, "true").
HandlerFunc(s3APIMiddleware(api.PutObjectExtractHandler, traceHdrsS3HFlag))
// AppendObject to be rejected
router.Methods(http.MethodPut).Path("/{object:.+}").
HeadersRegexp(xhttp.AmzWriteOffsetBytes, "").
HandlerFunc(s3APIMiddleware(errorResponseHandler))
// PutObject
router.Methods(http.MethodPut).Path("/{object:.+}").
HandlerFunc(s3APIMiddleware(api.PutObjectHandler, traceHdrsS3HFlag))
@ -436,7 +443,7 @@ func registerAPIRouter(router *mux.Router) {
Queries("notification", "")
// ListenNotification
router.Methods(http.MethodGet).
HandlerFunc(s3APIMiddleware(api.ListenNotificationHandler, noThrottleS3HFlag)).
HandlerFunc(s3APIMiddleware(api.ListenNotificationHandler, noThrottleS3HFlag, traceHdrsS3HFlag)).
Queries("events", "{events:.*}")
// ResetBucketReplicationStatus - MinIO extension API
router.Methods(http.MethodGet).
@ -456,6 +463,14 @@ func registerAPIRouter(router *mux.Router) {
router.Methods(http.MethodGet).
HandlerFunc(s3APIMiddleware(api.GetBucketCorsHandler)).
Queries("cors", "")
// PutBucketCors - this is a dummy call.
router.Methods(http.MethodPut).
HandlerFunc(s3APIMiddleware(api.PutBucketCorsHandler)).
Queries("cors", "")
// DeleteBucketCors - this is a dummy call.
router.Methods(http.MethodDelete).
HandlerFunc(s3APIMiddleware(api.DeleteBucketCorsHandler)).
Queries("cors", "")
// GetBucketWebsiteHandler - this is a dummy call.
router.Methods(http.MethodGet).
HandlerFunc(s3APIMiddleware(api.GetBucketWebsiteHandler)).
@ -472,6 +487,7 @@ func registerAPIRouter(router *mux.Router) {
router.Methods(http.MethodGet).
HandlerFunc(s3APIMiddleware(api.GetBucketLoggingHandler)).
Queries("logging", "")
// GetBucketTaggingHandler
router.Methods(http.MethodGet).
HandlerFunc(s3APIMiddleware(api.GetBucketTaggingHandler)).
@ -615,7 +631,7 @@ func registerAPIRouter(router *mux.Router) {
// ListenNotification
apiRouter.Methods(http.MethodGet).Path(SlashSeparator).
HandlerFunc(s3APIMiddleware(api.ListenNotificationHandler, noThrottleS3HFlag)).
HandlerFunc(s3APIMiddleware(api.ListenNotificationHandler, noThrottleS3HFlag, traceHdrsS3HFlag)).
Queries("events", "{events:.*}")
// ListBuckets
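The AppendObject rejection route works because routes are evaluated in registration order: the header-guarded route is registered before the generic PutObject route. A minimal sketch assuming the upstream github.com/gorilla/mux API (MinIO vendors a compatible fork); the 501 response is an illustrative stand-in for the real S3 error response:

package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"

	"github.com/gorilla/mux"
)

func main() {
	r := mux.NewRouter()
	// Matches only when X-Amz-Write-Offset-Bytes is present; an empty
	// pattern matches any value of a present header.
	r.Methods(http.MethodPut).Path("/{object:.+}").
		HeadersRegexp("X-Amz-Write-Offset-Bytes", "").
		HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
			http.Error(w, "append not supported", http.StatusNotImplemented)
		})
	// Generic PutObject: matches every other PUT.
	r.Methods(http.MethodPut).Path("/{object:.+}").
		HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
			w.WriteHeader(http.StatusOK)
		})

	srv := httptest.NewServer(r)
	defer srv.Close()
	req, _ := http.NewRequest(http.MethodPut, srv.URL+"/bucket/obj", nil)
	req.Header.Set("X-Amz-Write-Offset-Bytes", "100")
	resp, _ := srv.Client().Do(req)
	fmt.Println(resp.StatusCode) // 501: the header-guarded route matched first
}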


@ -43,7 +43,7 @@ func shouldEscape(c byte) bool {
// - Force encoding of '~'
func s3URLEncode(s string) string {
spaceCount, hexCount := 0, 0
for i := 0; i < len(s); i++ {
for i := range len(s) {
c := s[i]
if shouldEscape(c) {
if c == ' ' {
@ -70,7 +70,7 @@ func s3URLEncode(s string) string {
if hexCount == 0 {
copy(t, s)
for i := 0; i < len(s); i++ {
for i := range len(s) {
if s[i] == ' ' {
t[i] = '+'
}
@ -79,7 +79,7 @@ func s3URLEncode(s string) string {
}
j := 0
for i := 0; i < len(s); i++ {
for i := range len(s) {
switch c := s[i]; {
case c == ' ':
t[j] = '+'
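The loop rewrites in this file use Go 1.22's "range over int", which iterates the same index space as the classic three-clause form. A tiny sketch of the equivalence:

package main

import "fmt"

func main() {
	s := "a b"
	for i := 0; i < len(s); i++ { // classic form
		fmt.Print(i, " ")
	}
	fmt.Println()
	for i := range len(s) { // Go 1.22+ form, identical iteration space
		fmt.Print(i, " ")
	}
	fmt.Println()
}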

File diff suppressed because one or more lines are too long


@ -96,7 +96,7 @@ func isRequestSignStreamingTrailerV4(r *http.Request) bool {
// Verify if the request has AWS Streaming Signature Version '4', with unsigned content and trailer.
func isRequestUnsignedTrailerV4(r *http.Request) bool {
return r.Header.Get(xhttp.AmzContentSha256) == unsignedPayloadTrailer &&
r.Method == http.MethodPut && strings.Contains(r.Header.Get(xhttp.ContentEncoding), streamingContentEncoding)
r.Method == http.MethodPut
}
// Authorization type.
@ -162,7 +162,6 @@ func validateAdminSignature(ctx context.Context, r *http.Request, region string)
s3Err := ErrAccessDenied
if _, ok := r.Header[xhttp.AmzContentSha256]; ok &&
getRequestAuthType(r) == authTypeSigned {
// Get credential information from the request.
cred, owner, s3Err = getReqAccessKeyV4(r, region, serviceS3)
if s3Err != ErrNone {
@ -217,12 +216,12 @@ func getSessionToken(r *http.Request) (token string) {
// Fetch claims in the security token returned by the client, doesn't return
// errors - upon errors the returned claims map will be empty.
func mustGetClaimsFromToken(r *http.Request) map[string]interface{} {
func mustGetClaimsFromToken(r *http.Request) map[string]any {
claims, _ := getClaimsFromToken(getSessionToken(r))
return claims
}
func getClaimsFromTokenWithSecret(token, secret string) (map[string]interface{}, error) {
func getClaimsFromTokenWithSecret(token, secret string) (*xjwt.MapClaims, error) {
// JWT token for x-amz-security-token is signed with admin
// secret key, temporary credentials become invalid if
// server admin credentials change. This is done to ensure
@ -244,7 +243,7 @@ func getClaimsFromTokenWithSecret(token, secret string) (map[string]interface{},
// If AuthZPlugin is set, return without any further checks.
if newGlobalAuthZPluginFn() != nil {
return claims.Map(), nil
return claims, nil
}
// Check if a session policy is set. If so, decode it here.
@ -263,16 +262,20 @@ func getClaimsFromTokenWithSecret(token, secret string) (map[string]interface{},
claims.MapClaims[sessionPolicyNameExtracted] = string(spBytes)
}
return claims.Map(), nil
return claims, nil
}
// Fetch claims in the security token returned by the client.
func getClaimsFromToken(token string) (map[string]interface{}, error) {
return getClaimsFromTokenWithSecret(token, globalActiveCred.SecretKey)
func getClaimsFromToken(token string) (map[string]any, error) {
jwtClaims, err := getClaimsFromTokenWithSecret(token, globalActiveCred.SecretKey)
if err != nil {
return nil, err
}
return jwtClaims.Map(), nil
}
// Fetch claims in the security token returned by the client and validate the token.
func checkClaimsFromToken(r *http.Request, cred auth.Credentials) (map[string]interface{}, APIErrorCode) {
func checkClaimsFromToken(r *http.Request, cred auth.Credentials) (map[string]any, APIErrorCode) {
token := getSessionToken(r)
if token != "" && cred.AccessKey == "" {
// x-amz-security-token is not allowed for anonymous access.
@ -319,7 +322,7 @@ func checkClaimsFromToken(r *http.Request, cred auth.Credentials) (map[string]in
if err != nil {
return nil, toAPIErrorCode(r.Context(), err)
}
return claims, ErrNone
return claims.Map(), ErrNone
}
claims := xjwt.NewMapClaims()
@ -360,7 +363,7 @@ func authenticateRequest(ctx context.Context, r *http.Request, action policy.Act
var cred auth.Credentials
var owner bool
switch getRequestAuthType(r) {
case authTypeUnknown, authTypeStreamingSigned:
case authTypeUnknown, authTypeStreamingSigned, authTypeStreamingSignedTrailer, authTypeStreamingUnsignedTrailer:
return ErrSignatureVersionNotSupported
case authTypePresignedV2, authTypeSignedV2:
if s3Err = isReqAuthenticatedV2(r); s3Err != ErrNone {
@ -671,32 +674,6 @@ func setAuthMiddleware(h http.Handler) http.Handler {
})
}
func validateSignature(atype authType, r *http.Request) (auth.Credentials, bool, APIErrorCode) {
var cred auth.Credentials
var owner bool
var s3Err APIErrorCode
switch atype {
case authTypeUnknown, authTypeStreamingSigned:
return cred, owner, ErrSignatureVersionNotSupported
case authTypeSignedV2, authTypePresignedV2:
if s3Err = isReqAuthenticatedV2(r); s3Err != ErrNone {
return cred, owner, s3Err
}
cred, owner, s3Err = getReqAccessKeyV2(r)
case authTypePresigned, authTypeSigned:
region := globalSite.Region()
if s3Err = isReqAuthenticated(GlobalContext, r, region, serviceS3); s3Err != ErrNone {
return cred, owner, s3Err
}
cred, owner, s3Err = getReqAccessKeyV4(r, region, serviceS3)
}
if s3Err != ErrNone {
return cred, owner, s3Err
}
return cred, owner, ErrNone
}
func isPutRetentionAllowed(bucketName, objectName string, retDays int, retDate time.Time, retMode objectlock.RetMode, byPassSet bool, r *http.Request, cred auth.Credentials, owner bool) (s3Err APIErrorCode) {
var retSet bool
if cred.AccessKey == "" {
@ -751,8 +728,14 @@ func isPutActionAllowed(ctx context.Context, atype authType, bucketName, objectN
return ErrSignatureVersionNotSupported
case authTypeSignedV2, authTypePresignedV2:
cred, owner, s3Err = getReqAccessKeyV2(r)
case authTypeStreamingSigned, authTypePresigned, authTypeSigned, authTypeStreamingSignedTrailer, authTypeStreamingUnsignedTrailer:
case authTypeStreamingSigned, authTypePresigned, authTypeSigned, authTypeStreamingSignedTrailer:
cred, owner, s3Err = getReqAccessKeyV4(r, region, serviceS3)
case authTypeStreamingUnsignedTrailer:
cred, owner, s3Err = getReqAccessKeyV4(r, region, serviceS3)
if s3Err == ErrMissingFields {
// Could be anonymous. cred + owner is zero value.
s3Err = ErrNone
}
}
if s3Err != ErrNone {
return s3Err
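A standalone sketch of the relaxed detection above, using AWS SigV4's documented sentinel value for unsigned streaming payloads with trailing checksums; the hunk drops the Content-Encoding check, so presence of the aws-chunked encoding is no longer required:

package main

import (
	"fmt"
	"net/http"
)

const unsignedPayloadTrailer = "STREAMING-UNSIGNED-PAYLOAD-TRAILER"

// isUnsignedTrailerPut reports whether a PUT carries an unsigned streaming
// payload with trailers, keyed only off the x-amz-content-sha256 sentinel.
func isUnsignedTrailerPut(r *http.Request) bool {
	return r.Method == http.MethodPut &&
		r.Header.Get("X-Amz-Content-Sha256") == unsignedPayloadTrailer
}

func main() {
	r, _ := http.NewRequest(http.MethodPut, "http://localhost/bucket/object", nil)
	r.Header.Set("X-Amz-Content-Sha256", unsignedPayloadTrailer)
	fmt.Println(isUnsignedTrailerPut(r)) // true
}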


@ -413,7 +413,7 @@ func TestIsReqAuthenticated(t *testing.T) {
}
func TestCheckAdminRequestAuthType(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
ctx, cancel := context.WithCancel(t.Context())
defer cancel()
objLayer, fsDir, err := prepareFS(ctx)
@ -450,7 +450,7 @@ func TestCheckAdminRequestAuthType(t *testing.T) {
}
func TestValidateAdminSignature(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
ctx, cancel := context.WithCancel(t.Context())
defer cancel()
objLayer, fsDir, err := prepareFS(ctx)


@ -102,7 +102,7 @@ func waitForLowHTTPReq() {
func initBackgroundHealing(ctx context.Context, objAPI ObjectLayer) {
bgSeq := newBgHealSequence()
// Run the background healer
for i := 0; i < globalBackgroundHealRoutine.workers; i++ {
for range globalBackgroundHealRoutine.workers {
go globalBackgroundHealRoutine.AddWorker(ctx, objAPI, bgSeq)
}


@ -24,6 +24,7 @@ import (
"fmt"
"io"
"os"
"slices"
"sort"
"strings"
"sync"
@ -72,10 +73,12 @@ type healingTracker struct {
// Numbers when current bucket started healing,
// for resuming with correct numbers.
ResumeItemsHealed uint64 `json:"-"`
ResumeItemsFailed uint64 `json:"-"`
ResumeBytesDone uint64 `json:"-"`
ResumeBytesFailed uint64 `json:"-"`
ResumeItemsHealed uint64 `json:"-"`
ResumeItemsFailed uint64 `json:"-"`
ResumeItemsSkipped uint64 `json:"-"`
ResumeBytesDone uint64 `json:"-"`
ResumeBytesFailed uint64 `json:"-"`
ResumeBytesSkipped uint64 `json:"-"`
// Filled on startup/restarts.
QueuedBuckets []string
@ -88,6 +91,11 @@ type healingTracker struct {
ItemsSkipped uint64
BytesSkipped uint64
RetryAttempts uint64
Finished bool // finished healing, whether with errors or not
// Add future tracking capabilities here.
// Be sure that they are included in toHealingDisk.
}
@ -141,14 +149,34 @@ func initHealingTracker(disk StorageAPI, healID string) *healingTracker {
return h
}
func (h healingTracker) getLastUpdate() time.Time {
func (h *healingTracker) resetHealing() {
h.mu.Lock()
defer h.mu.Unlock()
h.ItemsHealed = 0
h.ItemsFailed = 0
h.BytesDone = 0
h.BytesFailed = 0
h.ResumeItemsHealed = 0
h.ResumeItemsFailed = 0
h.ResumeBytesDone = 0
h.ResumeBytesFailed = 0
h.ItemsSkipped = 0
h.BytesSkipped = 0
h.HealedBuckets = nil
h.Object = ""
h.Bucket = ""
}
func (h *healingTracker) getLastUpdate() time.Time {
h.mu.RLock()
defer h.mu.RUnlock()
return h.LastUpdate
}
func (h healingTracker) getBucket() string {
func (h *healingTracker) getBucket() string {
h.mu.RLock()
defer h.mu.RUnlock()
@ -162,7 +190,7 @@ func (h *healingTracker) setBucket(bucket string) {
h.Bucket = bucket
}
func (h healingTracker) getObject() string {
func (h *healingTracker) getObject() string {
h.mu.RLock()
defer h.mu.RUnlock()
@ -196,9 +224,6 @@ func (h *healingTracker) updateProgress(success, skipped bool, bytes uint64) {
// update will update the tracker on the disk.
// If the tracker has been deleted an error is returned.
func (h *healingTracker) update(ctx context.Context) error {
if h.disk.Healing() == nil {
return fmt.Errorf("healingTracker: drive %q is not marked as healing", h.ID)
}
h.mu.Lock()
if h.ID == "" || h.PoolIndex < 0 || h.SetIndex < 0 || h.DiskIndex < 0 {
h.ID, _ = h.disk.GetDiskID()
@ -245,12 +270,7 @@ func (h *healingTracker) delete(ctx context.Context) error {
func (h *healingTracker) isHealed(bucket string) bool {
h.mu.RLock()
defer h.mu.RUnlock()
for _, v := range h.HealedBuckets {
if v == bucket {
return true
}
}
return false
return slices.Contains(h.HealedBuckets, bucket)
}
// resume will reset progress to the numbers at the start of the bucket.
@ -260,8 +280,10 @@ func (h *healingTracker) resume() {
h.ItemsHealed = h.ResumeItemsHealed
h.ItemsFailed = h.ResumeItemsFailed
h.ItemsSkipped = h.ResumeItemsSkipped
h.BytesDone = h.ResumeBytesDone
h.BytesFailed = h.ResumeBytesFailed
h.BytesSkipped = h.ResumeBytesSkipped
}
// bucketDone should be called when a bucket is done healing.
@ -272,8 +294,10 @@ func (h *healingTracker) bucketDone(bucket string) {
h.ResumeItemsHealed = h.ItemsHealed
h.ResumeItemsFailed = h.ItemsFailed
h.ResumeItemsSkipped = h.ItemsSkipped
h.ResumeBytesDone = h.BytesDone
h.ResumeBytesFailed = h.BytesFailed
h.ResumeBytesSkipped = h.BytesSkipped
h.HealedBuckets = append(h.HealedBuckets, bucket)
for i, b := range h.QueuedBuckets {
if b == bucket {
@ -322,6 +346,7 @@ func (h *healingTracker) toHealingDisk() madmin.HealingDisk {
PoolIndex: h.PoolIndex,
SetIndex: h.SetIndex,
DiskIndex: h.DiskIndex,
Finished: h.Finished,
Path: h.Path,
Started: h.Started.UTC(),
LastUpdate: h.LastUpdate.UTC(),
@ -337,6 +362,7 @@ func (h *healingTracker) toHealingDisk() madmin.HealingDisk {
Object: h.Object,
QueuedBuckets: h.QueuedBuckets,
HealedBuckets: h.HealedBuckets,
RetryAttempts: h.RetryAttempts,
ObjectsHealed: h.ItemsHealed, // Deprecated July 2021
ObjectsFailed: h.ItemsFailed, // Deprecated July 2021
@ -351,24 +377,26 @@ func initAutoHeal(ctx context.Context, objAPI ObjectLayer) {
}
initBackgroundHealing(ctx, objAPI) // start quick background healing
if env.Get("_MINIO_AUTO_DRIVE_HEALING", config.EnableOn) == config.EnableOn || env.Get("_MINIO_AUTO_DISK_HEALING", config.EnableOn) == config.EnableOn {
if env.Get("_MINIO_AUTO_DRIVE_HEALING", config.EnableOn) == config.EnableOn {
globalBackgroundHealState.pushHealLocalDisks(getLocalDisksToHeal()...)
go monitorLocalDisksAndHeal(ctx, z)
}
go globalMRFState.startMRFPersistence()
go globalMRFState.healRoutine(z)
}
func getLocalDisksToHeal() (disksToHeal Endpoints) {
globalLocalDrivesMu.RLock()
localDrives := cloneDrives(globalLocalDrives)
localDrives := cloneDrives(globalLocalDrivesMap)
globalLocalDrivesMu.RUnlock()
for _, disk := range localDrives {
_, err := disk.GetDiskID()
_, err := disk.DiskInfo(context.Background(), DiskInfoOptions{})
if errors.Is(err, errUnformattedDisk) {
disksToHeal = append(disksToHeal, disk.Endpoint())
continue
}
if disk.Healing() != nil {
if h := disk.Healing(); h != nil && !h.Finished {
disksToHeal = append(disksToHeal, disk.Endpoint())
}
}
@ -382,6 +410,8 @@ func getLocalDisksToHeal() (disksToHeal Endpoints) {
var newDiskHealingTimeout = newDynamicTimeout(30*time.Second, 10*time.Second)
var errRetryHealing = errors.New("some items failed to heal, we will retry healing this drive again")
func healFreshDisk(ctx context.Context, z *erasureServerPools, endpoint Endpoint) error {
poolIdx, setIdx := endpoint.PoolIdx, endpoint.SetIdx
disk := getStorageViaEndpoint(endpoint)
@ -389,6 +419,17 @@ func healFreshDisk(ctx context.Context, z *erasureServerPools, endpoint Endpoint
return fmt.Errorf("Unexpected error disk must be initialized by now after formatting: %s", endpoint)
}
_, err := disk.DiskInfo(ctx, DiskInfoOptions{})
if err != nil {
if errors.Is(err, errDriveIsRoot) {
// This is a root drive, ignore and move on
return nil
}
if !errors.Is(err, errUnformattedDisk) {
return err
}
}
// Prevent parallel erasure set healing
locker := z.NewNSLock(minioMetaBucket, fmt.Sprintf("new-drive-healing/%d/%d", poolIdx, setIdx))
lkctx, err := locker.GetLock(ctx, newDiskHealingTimeout)
@ -451,12 +492,30 @@ func healFreshDisk(ctx context.Context, z *erasureServerPools, endpoint Endpoint
return err
}
healingLogEvent(ctx, "Healing of drive '%s' is finished (healed: %d, skipped: %d, failed: %d).", disk, tracker.ItemsHealed, tracker.ItemsSkipped, tracker.ItemsFailed)
// if objects have failed healing, we attempt a retry to heal the drive up to 3 times before giving up.
if tracker.ItemsFailed > 0 && tracker.RetryAttempts < 4 {
tracker.RetryAttempts++
if len(tracker.QueuedBuckets) > 0 {
return fmt.Errorf("not all buckets were healed: %v", tracker.QueuedBuckets)
healingLogEvent(ctx, "Healing of drive '%s' is incomplete, retrying %s time (healed: %d, skipped: %d, failed: %d).", disk,
humanize.Ordinal(int(tracker.RetryAttempts)), tracker.ItemsHealed, tracker.ItemsSkipped, tracker.ItemsFailed)
tracker.resetHealing()
bugLogIf(ctx, tracker.update(ctx))
return errRetryHealing
}
if tracker.ItemsFailed > 0 {
healingLogEvent(ctx, "Healing of drive '%s' is incomplete, retried %d times (healed: %d, skipped: %d, failed: %d).", disk,
tracker.RetryAttempts, tracker.ItemsHealed, tracker.ItemsSkipped, tracker.ItemsFailed)
} else {
if tracker.RetryAttempts > 0 {
healingLogEvent(ctx, "Healing of drive '%s' is complete, retried %d times (healed: %d, skipped: %d).", disk,
tracker.RetryAttempts-1, tracker.ItemsHealed, tracker.ItemsSkipped)
} else {
healingLogEvent(ctx, "Healing of drive '%s' is finished (healed: %d, skipped: %d).", disk, tracker.ItemsHealed, tracker.ItemsSkipped)
}
}
if serverDebugLog {
tracker.printTo(os.Stdout)
fmt.Printf("\n")
@ -486,7 +545,8 @@ func healFreshDisk(ctx context.Context, z *erasureServerPools, endpoint Endpoint
continue
}
if t.HealID == tracker.HealID {
t.delete(ctx)
t.Finished = true
t.update(ctx)
}
}
@ -528,7 +588,7 @@ func monitorLocalDisksAndHeal(ctx context.Context, z *erasureServerPools) {
if err := healFreshDisk(ctx, z, disk); err != nil {
globalBackgroundHealState.setDiskHealingStatus(disk, false)
timedout := OperationTimedOut{}
if !errors.Is(err, context.Canceled) && !errors.As(err, &timedout) {
if !errors.Is(err, context.Canceled) && !errors.As(err, &timedout) && !errors.Is(err, errRetryHealing) {
printEndpointError(disk, err, false)
}
return
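A minimal, self-contained sketch of the retry control flow introduced above (heal and reset are hypothetical stand-ins for the tracker machinery): on partial failure the counters are reset and a sentinel error asks the monitor to re-queue the drive instead of logging it as a hard failure.

package main

import (
	"errors"
	"fmt"
)

var errRetryHealing = errors.New("some items failed to heal, we will retry healing this drive again")

func healDrive(attempts *uint64, heal func() (failed int), reset func()) error {
	if failed := heal(); failed > 0 && *attempts < 4 {
		*attempts++
		reset() // start the next pass from clean counters
		return errRetryHealing
	}
	return nil
}

func main() {
	var attempts uint64
	heal := func() int { return 1 } // always one failure, for illustration
	for {
		if err := healDrive(&attempts, heal, func() {}); errors.Is(err, errRetryHealing) {
			continue // the monitor re-queues the drive rather than printing an error
		}
		break
	}
	fmt.Println("gave up after retries:", attempts) // 4
}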


@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import (
"github.com/tinylib/msgp/msgp"
)
@ -132,6 +132,12 @@ func (z *healingTracker) DecodeMsg(dc *msgp.Reader) (err error) {
err = msgp.WrapError(err, "ResumeItemsFailed")
return
}
case "ResumeItemsSkipped":
z.ResumeItemsSkipped, err = dc.ReadUint64()
if err != nil {
err = msgp.WrapError(err, "ResumeItemsSkipped")
return
}
case "ResumeBytesDone":
z.ResumeBytesDone, err = dc.ReadUint64()
if err != nil {
@ -144,6 +150,12 @@ func (z *healingTracker) DecodeMsg(dc *msgp.Reader) (err error) {
err = msgp.WrapError(err, "ResumeBytesFailed")
return
}
case "ResumeBytesSkipped":
z.ResumeBytesSkipped, err = dc.ReadUint64()
if err != nil {
err = msgp.WrapError(err, "ResumeBytesSkipped")
return
}
case "QueuedBuckets":
var zb0002 uint32
zb0002, err = dc.ReadArrayHeader()
@ -200,6 +212,18 @@ func (z *healingTracker) DecodeMsg(dc *msgp.Reader) (err error) {
err = msgp.WrapError(err, "BytesSkipped")
return
}
case "RetryAttempts":
z.RetryAttempts, err = dc.ReadUint64()
if err != nil {
err = msgp.WrapError(err, "RetryAttempts")
return
}
case "Finished":
z.Finished, err = dc.ReadBool()
if err != nil {
err = msgp.WrapError(err, "Finished")
return
}
default:
err = dc.Skip()
if err != nil {
@ -213,9 +237,9 @@ func (z *healingTracker) DecodeMsg(dc *msgp.Reader) (err error) {
// EncodeMsg implements msgp.Encodable
func (z *healingTracker) EncodeMsg(en *msgp.Writer) (err error) {
// map header, size 25
// map header, size 29
// write "ID"
err = en.Append(0xde, 0x0, 0x19, 0xa2, 0x49, 0x44)
err = en.Append(0xde, 0x0, 0x1d, 0xa2, 0x49, 0x44)
if err != nil {
return
}
@ -394,6 +418,16 @@ func (z *healingTracker) EncodeMsg(en *msgp.Writer) (err error) {
err = msgp.WrapError(err, "ResumeItemsFailed")
return
}
// write "ResumeItemsSkipped"
err = en.Append(0xb2, 0x52, 0x65, 0x73, 0x75, 0x6d, 0x65, 0x49, 0x74, 0x65, 0x6d, 0x73, 0x53, 0x6b, 0x69, 0x70, 0x70, 0x65, 0x64)
if err != nil {
return
}
err = en.WriteUint64(z.ResumeItemsSkipped)
if err != nil {
err = msgp.WrapError(err, "ResumeItemsSkipped")
return
}
// write "ResumeBytesDone"
err = en.Append(0xaf, 0x52, 0x65, 0x73, 0x75, 0x6d, 0x65, 0x42, 0x79, 0x74, 0x65, 0x73, 0x44, 0x6f, 0x6e, 0x65)
if err != nil {
@ -414,6 +448,16 @@ func (z *healingTracker) EncodeMsg(en *msgp.Writer) (err error) {
err = msgp.WrapError(err, "ResumeBytesFailed")
return
}
// write "ResumeBytesSkipped"
err = en.Append(0xb2, 0x52, 0x65, 0x73, 0x75, 0x6d, 0x65, 0x42, 0x79, 0x74, 0x65, 0x73, 0x53, 0x6b, 0x69, 0x70, 0x70, 0x65, 0x64)
if err != nil {
return
}
err = en.WriteUint64(z.ResumeBytesSkipped)
if err != nil {
err = msgp.WrapError(err, "ResumeBytesSkipped")
return
}
// write "QueuedBuckets"
err = en.Append(0xad, 0x51, 0x75, 0x65, 0x75, 0x65, 0x64, 0x42, 0x75, 0x63, 0x6b, 0x65, 0x74, 0x73)
if err != nil {
@ -478,15 +522,35 @@ func (z *healingTracker) EncodeMsg(en *msgp.Writer) (err error) {
err = msgp.WrapError(err, "BytesSkipped")
return
}
// write "RetryAttempts"
err = en.Append(0xad, 0x52, 0x65, 0x74, 0x72, 0x79, 0x41, 0x74, 0x74, 0x65, 0x6d, 0x70, 0x74, 0x73)
if err != nil {
return
}
err = en.WriteUint64(z.RetryAttempts)
if err != nil {
err = msgp.WrapError(err, "RetryAttempts")
return
}
// write "Finished"
err = en.Append(0xa8, 0x46, 0x69, 0x6e, 0x69, 0x73, 0x68, 0x65, 0x64)
if err != nil {
return
}
err = en.WriteBool(z.Finished)
if err != nil {
err = msgp.WrapError(err, "Finished")
return
}
return
}
// MarshalMsg implements msgp.Marshaler
func (z *healingTracker) MarshalMsg(b []byte) (o []byte, err error) {
o = msgp.Require(b, z.Msgsize())
// map header, size 25
// map header, size 29
// string "ID"
o = append(o, 0xde, 0x0, 0x19, 0xa2, 0x49, 0x44)
o = append(o, 0xde, 0x0, 0x1d, 0xa2, 0x49, 0x44)
o = msgp.AppendString(o, z.ID)
// string "PoolIndex"
o = append(o, 0xa9, 0x50, 0x6f, 0x6f, 0x6c, 0x49, 0x6e, 0x64, 0x65, 0x78)
@ -539,12 +603,18 @@ func (z *healingTracker) MarshalMsg(b []byte) (o []byte, err error) {
// string "ResumeItemsFailed"
o = append(o, 0xb1, 0x52, 0x65, 0x73, 0x75, 0x6d, 0x65, 0x49, 0x74, 0x65, 0x6d, 0x73, 0x46, 0x61, 0x69, 0x6c, 0x65, 0x64)
o = msgp.AppendUint64(o, z.ResumeItemsFailed)
// string "ResumeItemsSkipped"
o = append(o, 0xb2, 0x52, 0x65, 0x73, 0x75, 0x6d, 0x65, 0x49, 0x74, 0x65, 0x6d, 0x73, 0x53, 0x6b, 0x69, 0x70, 0x70, 0x65, 0x64)
o = msgp.AppendUint64(o, z.ResumeItemsSkipped)
// string "ResumeBytesDone"
o = append(o, 0xaf, 0x52, 0x65, 0x73, 0x75, 0x6d, 0x65, 0x42, 0x79, 0x74, 0x65, 0x73, 0x44, 0x6f, 0x6e, 0x65)
o = msgp.AppendUint64(o, z.ResumeBytesDone)
// string "ResumeBytesFailed"
o = append(o, 0xb1, 0x52, 0x65, 0x73, 0x75, 0x6d, 0x65, 0x42, 0x79, 0x74, 0x65, 0x73, 0x46, 0x61, 0x69, 0x6c, 0x65, 0x64)
o = msgp.AppendUint64(o, z.ResumeBytesFailed)
// string "ResumeBytesSkipped"
o = append(o, 0xb2, 0x52, 0x65, 0x73, 0x75, 0x6d, 0x65, 0x42, 0x79, 0x74, 0x65, 0x73, 0x53, 0x6b, 0x69, 0x70, 0x70, 0x65, 0x64)
o = msgp.AppendUint64(o, z.ResumeBytesSkipped)
// string "QueuedBuckets"
o = append(o, 0xad, 0x51, 0x75, 0x65, 0x75, 0x65, 0x64, 0x42, 0x75, 0x63, 0x6b, 0x65, 0x74, 0x73)
o = msgp.AppendArrayHeader(o, uint32(len(z.QueuedBuckets)))
@ -566,6 +636,12 @@ func (z *healingTracker) MarshalMsg(b []byte) (o []byte, err error) {
// string "BytesSkipped"
o = append(o, 0xac, 0x42, 0x79, 0x74, 0x65, 0x73, 0x53, 0x6b, 0x69, 0x70, 0x70, 0x65, 0x64)
o = msgp.AppendUint64(o, z.BytesSkipped)
// string "RetryAttempts"
o = append(o, 0xad, 0x52, 0x65, 0x74, 0x72, 0x79, 0x41, 0x74, 0x74, 0x65, 0x6d, 0x70, 0x74, 0x73)
o = msgp.AppendUint64(o, z.RetryAttempts)
// string "Finished"
o = append(o, 0xa8, 0x46, 0x69, 0x6e, 0x69, 0x73, 0x68, 0x65, 0x64)
o = msgp.AppendBool(o, z.Finished)
return
}
@ -695,6 +771,12 @@ func (z *healingTracker) UnmarshalMsg(bts []byte) (o []byte, err error) {
err = msgp.WrapError(err, "ResumeItemsFailed")
return
}
case "ResumeItemsSkipped":
z.ResumeItemsSkipped, bts, err = msgp.ReadUint64Bytes(bts)
if err != nil {
err = msgp.WrapError(err, "ResumeItemsSkipped")
return
}
case "ResumeBytesDone":
z.ResumeBytesDone, bts, err = msgp.ReadUint64Bytes(bts)
if err != nil {
@ -707,6 +789,12 @@ func (z *healingTracker) UnmarshalMsg(bts []byte) (o []byte, err error) {
err = msgp.WrapError(err, "ResumeBytesFailed")
return
}
case "ResumeBytesSkipped":
z.ResumeBytesSkipped, bts, err = msgp.ReadUint64Bytes(bts)
if err != nil {
err = msgp.WrapError(err, "ResumeBytesSkipped")
return
}
case "QueuedBuckets":
var zb0002 uint32
zb0002, bts, err = msgp.ReadArrayHeaderBytes(bts)
@ -763,6 +851,18 @@ func (z *healingTracker) UnmarshalMsg(bts []byte) (o []byte, err error) {
err = msgp.WrapError(err, "BytesSkipped")
return
}
case "RetryAttempts":
z.RetryAttempts, bts, err = msgp.ReadUint64Bytes(bts)
if err != nil {
err = msgp.WrapError(err, "RetryAttempts")
return
}
case "Finished":
z.Finished, bts, err = msgp.ReadBoolBytes(bts)
if err != nil {
err = msgp.WrapError(err, "Finished")
return
}
default:
bts, err = msgp.Skip(bts)
if err != nil {
@ -777,7 +877,7 @@ func (z *healingTracker) UnmarshalMsg(bts []byte) (o []byte, err error) {
// Msgsize returns an upper bound estimate of the number of bytes occupied by the serialized message
func (z *healingTracker) Msgsize() (s int) {
s = 3 + 3 + msgp.StringPrefixSize + len(z.ID) + 10 + msgp.IntSize + 9 + msgp.IntSize + 10 + msgp.IntSize + 5 + msgp.StringPrefixSize + len(z.Path) + 9 + msgp.StringPrefixSize + len(z.Endpoint) + 8 + msgp.TimeSize + 11 + msgp.TimeSize + 18 + msgp.Uint64Size + 17 + msgp.Uint64Size + 12 + msgp.Uint64Size + 12 + msgp.Uint64Size + 10 + msgp.Uint64Size + 12 + msgp.Uint64Size + 7 + msgp.StringPrefixSize + len(z.Bucket) + 7 + msgp.StringPrefixSize + len(z.Object) + 18 + msgp.Uint64Size + 18 + msgp.Uint64Size + 16 + msgp.Uint64Size + 18 + msgp.Uint64Size + 14 + msgp.ArrayHeaderSize
s = 3 + 3 + msgp.StringPrefixSize + len(z.ID) + 10 + msgp.IntSize + 9 + msgp.IntSize + 10 + msgp.IntSize + 5 + msgp.StringPrefixSize + len(z.Path) + 9 + msgp.StringPrefixSize + len(z.Endpoint) + 8 + msgp.TimeSize + 11 + msgp.TimeSize + 18 + msgp.Uint64Size + 17 + msgp.Uint64Size + 12 + msgp.Uint64Size + 12 + msgp.Uint64Size + 10 + msgp.Uint64Size + 12 + msgp.Uint64Size + 7 + msgp.StringPrefixSize + len(z.Bucket) + 7 + msgp.StringPrefixSize + len(z.Object) + 18 + msgp.Uint64Size + 18 + msgp.Uint64Size + 19 + msgp.Uint64Size + 16 + msgp.Uint64Size + 18 + msgp.Uint64Size + 19 + msgp.Uint64Size + 14 + msgp.ArrayHeaderSize
for za0001 := range z.QueuedBuckets {
s += msgp.StringPrefixSize + len(z.QueuedBuckets[za0001])
}
@ -785,6 +885,6 @@ func (z *healingTracker) Msgsize() (s int) {
for za0002 := range z.HealedBuckets {
s += msgp.StringPrefixSize + len(z.HealedBuckets[za0002])
}
s += 7 + msgp.StringPrefixSize + len(z.HealID) + 13 + msgp.Uint64Size + 13 + msgp.Uint64Size
s += 7 + msgp.StringPrefixSize + len(z.HealID) + 13 + msgp.Uint64Size + 13 + msgp.Uint64Size + 14 + msgp.Uint64Size + 9 + msgp.BoolSize
return
}
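The regenerated codec bumps the msgpack map header from 25 (0x19) to 29 (0x1d) entries, matching the four added fields (ResumeItemsSkipped, ResumeBytesSkipped, RetryAttempts, Finished). A quick sketch verifying the map16 encoding 0xde 0x00 0x1d with the msgp helpers:

package main

import (
	"fmt"

	"github.com/tinylib/msgp/msgp"
)

func main() {
	// Sizes above 15 use the map16 format: 0xde followed by a big-endian uint16.
	hdr := msgp.AppendMapHeader(nil, 29)
	fmt.Printf("% x\n", hdr) // de 00 1d — the bytes written by EncodeMsg above
}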


@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import (
"bytes"
"testing"


@ -36,6 +36,7 @@ import (
"github.com/minio/pkg/v3/env"
"github.com/minio/pkg/v3/wildcard"
"github.com/minio/pkg/v3/workers"
"github.com/minio/pkg/v3/xtime"
"gopkg.in/yaml.v3"
)
@ -116,7 +117,7 @@ func (p BatchJobExpirePurge) Validate() error {
// BatchJobExpireFilter holds all the filters currently supported for batch replication
type BatchJobExpireFilter struct {
line, col int
OlderThan time.Duration `yaml:"olderThan,omitempty" json:"olderThan"`
OlderThan xtime.Duration `yaml:"olderThan,omitempty" json:"olderThan"`
CreatedBefore *time.Time `yaml:"createdBefore,omitempty" json:"createdBefore"`
Tags []BatchJobKV `yaml:"tags,omitempty" json:"tags"`
Metadata []BatchJobKV `yaml:"metadata,omitempty" json:"metadata"`
@ -162,7 +163,7 @@ func (ef BatchJobExpireFilter) Matches(obj ObjectInfo, now time.Time) bool {
if len(ef.Name) > 0 && !wildcard.Match(ef.Name, obj.Name) {
return false
}
if ef.OlderThan > 0 && now.Sub(obj.ModTime) <= ef.OlderThan {
if ef.OlderThan > 0 && now.Sub(obj.ModTime) <= ef.OlderThan.D() {
return false
}
@ -194,8 +195,8 @@ func (ef BatchJobExpireFilter) Matches(obj ObjectInfo, now time.Time) bool {
return false
}
}
}
if len(ef.Metadata) > 0 && !obj.DeleteMarker {
for _, kv := range ef.Metadata {
// Object (version) must match all x-amz-meta and
@ -280,7 +281,7 @@ type BatchJobExpire struct {
line, col int
APIVersion string `yaml:"apiVersion" json:"apiVersion"`
Bucket string `yaml:"bucket" json:"bucket"`
Prefix string `yaml:"prefix" json:"prefix"`
Prefix BatchJobPrefix `yaml:"prefix" json:"prefix"`
NotificationCfg BatchJobNotification `yaml:"notify" json:"notify"`
Retry BatchJobRetry `yaml:"retry" json:"retry"`
Rules []BatchJobExpireFilter `yaml:"rules" json:"rules"`
@ -288,6 +289,16 @@ type BatchJobExpire struct {
var _ yaml.Unmarshaler = &BatchJobExpire{}
// RedactSensitive will redact any sensitive information in b.
func (r *BatchJobExpire) RedactSensitive() {
if r == nil {
return
}
if r.NotificationCfg.Token != "" {
r.NotificationCfg.Token = redactedText
}
}
// UnmarshalYAML - BatchJobExpire extends default unmarshal to extract line, col information.
func (r *BatchJobExpire) UnmarshalYAML(val *yaml.Node) error {
type expireJob BatchJobExpire
@ -340,8 +351,24 @@ func (r *BatchJobExpire) Expire(ctx context.Context, api ObjectLayer, vc *versio
PrefixEnabledFn: vc.PrefixEnabled,
VersionSuspended: vc.Suspended(),
}
_, errs := api.DeleteObjects(ctx, r.Bucket, objsToDel, opts)
return errs
allErrs := make([]error, 0, len(objsToDel))
for {
count := len(objsToDel)
if count == 0 {
break
}
if count > maxDeleteList {
count = maxDeleteList
}
_, errs := api.DeleteObjects(ctx, r.Bucket, objsToDel[:count], opts)
allErrs = append(allErrs, errs...)
// Next batch of deletion
objsToDel = objsToDel[count:]
}
return allErrs
}
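A generic sketch of the batching pattern above (maxDeleteList's value and the deleteBatch callback are illustrative stand-ins): a large delete list is split into bounded chunks so no single DeleteObjects call exceeds the per-request limit, while per-object errors are accumulated across chunks.

package main

import "fmt"

const maxDeleteList = 1000 // illustrative limit, not necessarily the server's value

func deleteInBatches(objs []string, deleteBatch func([]string) []error) []error {
	allErrs := make([]error, 0, len(objs))
	for len(objs) > 0 {
		count := min(len(objs), maxDeleteList) // min builtin needs Go 1.21+
		allErrs = append(allErrs, deleteBatch(objs[:count])...)
		objs = objs[count:] // next batch
	}
	return allErrs
}

func main() {
	objs := make([]string, 2500)
	var calls int
	deleteInBatches(objs, func(batch []string) []error {
		calls++
		return make([]error, len(batch))
	})
	fmt.Println(calls) // 3 (1000 + 1000 + 500)
}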
const (
@ -371,9 +398,12 @@ func (oiCache objInfoCache) Get(toDel ObjectToDelete) (*ObjectInfo, bool) {
func batchObjsForDelete(ctx context.Context, r *BatchJobExpire, ri *batchJobInfo, job BatchJobRequest, api ObjectLayer, wk *workers.Workers, expireCh <-chan []expireObjInfo) {
vc, _ := globalBucketVersioningSys.Get(r.Bucket)
retryAttempts := r.Retry.Attempts
retryAttempts := job.Expire.Retry.Attempts
if retryAttempts <= 0 {
retryAttempts = batchExpireJobDefaultRetries
}
delay := job.Expire.Retry.Delay
if delay == 0 {
if delay <= 0 {
delay = batchExpireJobDefaultRetryDelay
}
@ -394,12 +424,12 @@ func batchObjsForDelete(ctx context.Context, r *BatchJobExpire, ri *batchJobInfo
go func(toExpire []expireObjInfo) {
defer wk.Give()
toExpireAll := make([]ObjectInfo, 0, len(toExpire))
toExpireAll := make([]expireObjInfo, 0, len(toExpire))
toDel := make([]ObjectToDelete, 0, len(toExpire))
oiCache := newObjInfoCache()
for _, exp := range toExpire {
if exp.ExpireAll {
toExpireAll = append(toExpireAll, exp.ObjectInfo)
toExpireAll = append(toExpireAll, exp)
continue
}
// Cache ObjectInfo value via pointers for
@ -415,14 +445,14 @@ func batchObjsForDelete(ctx context.Context, r *BatchJobExpire, ri *batchJobInfo
oiCache.Add(od, &exp.ObjectInfo)
}
var done bool
// DeleteObject(deletePrefix: true) to expire all versions of an object
for _, exp := range toExpireAll {
var success bool
for attempts := 1; attempts <= retryAttempts; attempts++ {
select {
case <-ctx.Done():
done = true
ri.trackMultipleObjectVersions(exp, success)
return
default:
}
stopFn := globalBatchJobsMetrics.trace(batchJobMetricExpire, ri.JobID, attempts)
@ -432,21 +462,14 @@ func batchObjsForDelete(ctx context.Context, r *BatchJobExpire, ri *batchJobInfo
})
if err != nil {
stopFn(exp, err)
batchLogIf(ctx, fmt.Errorf("Failed to expire %s/%s versionID=%s due to %v (attempts=%d)", toExpire[i].Bucket, toExpire[i].Name, toExpire[i].VersionID, err, attempts))
batchLogIf(ctx, fmt.Errorf("Failed to expire %s/%s due to %v (attempts=%d)", exp.Bucket, exp.Name, err, attempts))
} else {
stopFn(exp, err)
success = true
break
}
}
ri.trackMultipleObjectVersions(r.Bucket, exp, success)
if done {
break
}
}
if done {
return
ri.trackMultipleObjectVersions(exp, success)
}
// DeleteMultiple objects
@ -464,13 +487,13 @@ func batchObjsForDelete(ctx context.Context, r *BatchJobExpire, ri *batchJobInfo
copy(toDelCopy, toDel)
var failed int
errs := r.Expire(ctx, api, vc, toDel)
// reslice toDel in preparation for next retry
// attempt
// reslice toDel in preparation for next retry attempt
toDel = toDel[:0]
for i, err := range errs {
if err != nil {
stopFn(toDelCopy[i], err)
batchLogIf(ctx, fmt.Errorf("Failed to expire %s/%s versionID=%s due to %v (attempts=%d)", ri.Bucket, toDelCopy[i].ObjectName, toDelCopy[i].VersionID, err, attempts))
batchLogIf(ctx, fmt.Errorf("Failed to expire %s/%s versionID=%s due to %v (attempts=%d)", ri.Bucket, toDelCopy[i].ObjectName, toDelCopy[i].VersionID,
err, attempts))
failed++
if oi, ok := oiCache.Get(toDelCopy[i]); ok {
ri.trackCurrentBucketObject(r.Bucket, *oi, false, attempts)
@ -504,7 +527,8 @@ func batchObjsForDelete(ctx context.Context, r *BatchJobExpire, ri *batchJobInfo
type expireObjInfo struct {
ObjectInfo
ExpireAll bool
ExpireAll bool
DeleteMarkerCount int64
}
// Start the batch expiration job, resumes if there was a pending job via "job.ID"
@ -514,7 +538,7 @@ func (r *BatchJobExpire) Start(ctx context.Context, api ObjectLayer, job BatchJo
JobType: string(job.Type()),
StartTime: job.Started,
}
if err := ri.load(ctx, api, job); err != nil {
if err := ri.loadOrInit(ctx, api, job); err != nil {
return err
}
@ -534,40 +558,58 @@ func (r *BatchJobExpire) Start(ctx context.Context, api ObjectLayer, job BatchJo
return err
}
ctx, cancel := context.WithCancel(ctx)
defer cancel()
ctx, cancelCause := context.WithCancelCause(ctx)
defer cancelCause(nil)
results := make(chan itemOrErr[ObjectInfo], workerSize)
if err := api.Walk(ctx, r.Bucket, r.Prefix, results, WalkOptions{
Marker: lastObject,
LatestOnly: false, // we need to visit all versions of the object to implement purge: retainVersions
VersionsSort: WalkVersionsSortDesc,
}); err != nil {
// Do not need to retry if we can't list objects on source.
return err
}
go func() {
prefixes := r.Prefix.F()
if len(prefixes) == 0 {
prefixes = []string{""}
}
for _, prefix := range prefixes {
prefixResultCh := make(chan itemOrErr[ObjectInfo], workerSize)
err := api.Walk(ctx, r.Bucket, prefix, prefixResultCh, WalkOptions{
Marker: lastObject,
LatestOnly: false, // we need to visit all versions of the object to implement purge: retainVersions
VersionsSort: WalkVersionsSortDesc,
})
if err != nil {
cancelCause(err)
xioutil.SafeClose(results)
return
}
for result := range prefixResultCh {
results <- result
}
}
xioutil.SafeClose(results)
}()
// Goroutine to periodically save batch-expire job's in-memory state
saverQuitCh := make(chan struct{})
go func() {
saveTicker := time.NewTicker(10 * time.Second)
defer saveTicker.Stop()
for {
quit := false
after := time.Minute
for !quit {
select {
case <-saveTicker.C:
// persist in-memory state to disk after every 10secs.
batchLogIf(ctx, ri.updateAfter(ctx, api, 10*time.Second, job))
case <-ctx.Done():
// persist in-memory state immediately before exiting due to context cancellation.
batchLogIf(ctx, ri.updateAfter(ctx, api, 0, job))
return
quit = true
case <-saverQuitCh:
// persist in-memory state immediately to disk.
batchLogIf(ctx, ri.updateAfter(ctx, api, 0, job))
return
quit = true
}
if quit {
// save immediately if we are quitting
after = 0
}
ctx, cancel := context.WithTimeout(GlobalContext, 30*time.Second) // independent context
batchLogIf(ctx, ri.updateAfter(ctx, api, after, job))
cancel()
}
}()
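A condensed, standalone sketch of the saver loop above: persist on a ticker, and flush immediately (after = 0) when either the job context or the quit channel fires. The persist func stands in for ri.updateAfter, and the independent 30-second-timeout context is omitted for brevity.

package main

import (
	"context"
	"fmt"
	"time"
)

func runSaver(ctx context.Context, quit <-chan struct{}, persist func(after time.Duration)) {
	t := time.NewTicker(10 * time.Second)
	defer t.Stop()
	for stop := false; !stop; {
		after := time.Minute
		select {
		case <-t.C:
		case <-ctx.Done():
			stop = true
		case <-quit:
			stop = true
		}
		if stop {
			after = 0 // flush immediately when quitting
		}
		persist(after)
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	done := make(chan struct{})
	go func() {
		runSaver(ctx, nil, func(after time.Duration) { fmt.Println("persist, after:", after) })
		close(done)
	}()
	cancel()
	<-done
}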
@ -583,76 +625,115 @@ func (r *BatchJobExpire) Start(ctx context.Context, api ObjectLayer, job BatchJo
matchedFilter BatchJobExpireFilter
versionsCount int
toDel []expireObjInfo
failed bool
done bool
)
failed := true
for result := range results {
if result.Err != nil {
failed = true
batchLogIf(ctx, result.Err)
continue
deleteMarkerCountMap := map[string]int64{}
pushToExpire := func() {
// set the delete-marker count recorded for the previous object
if len(toDel) > 0 {
lastDelIndex := len(toDel) - 1
lastDel := toDel[lastDelIndex]
if lastDel.ExpireAll {
toDel[lastDelIndex].DeleteMarkerCount = deleteMarkerCountMap[lastDel.Name]
// delete the key
delete(deleteMarkerCountMap, lastDel.Name)
}
}
// Apply filter to find the matching rule to apply expiry
// actions accordingly.
// nolint:gocritic
if result.Item.IsLatest {
// send down filtered entries to be deleted using
// DeleteObjects method
if len(toDel) > 10 { // batch up to 10 objects/versions to be expired simultaneously.
xfer := make([]expireObjInfo, len(toDel))
copy(xfer, toDel)
var done bool
select {
case <-ctx.Done():
done = true
case expireCh <- xfer:
toDel = toDel[:0] // resetting toDel
}
if done {
break
}
// send down filtered entries to be deleted using
// DeleteObjects method
if len(toDel) > 10 { // batch up to 10 objects/versions to be expired simultaneously.
xfer := make([]expireObjInfo, len(toDel))
copy(xfer, toDel)
select {
case expireCh <- xfer:
toDel = toDel[:0] // resetting toDel
case <-ctx.Done():
done = true
}
var match BatchJobExpireFilter
var found bool
for _, rule := range r.Rules {
if rule.Matches(result.Item, now) {
match = rule
found = true
break
}
}
if !found {
continue
}
prevObj = result.Item
matchedFilter = match
versionsCount = 1
// Include the latest version
if matchedFilter.Purge.RetainVersions == 0 {
toDel = append(toDel, expireObjInfo{
ObjectInfo: result.Item,
ExpireAll: true,
})
continue
}
} else if prevObj.Name == result.Item.Name {
if matchedFilter.Purge.RetainVersions == 0 {
continue // including latest version in toDel suffices, skipping other versions
}
versionsCount++
} else {
continue
}
if versionsCount <= matchedFilter.Purge.RetainVersions {
continue // retain versions
}
toDel = append(toDel, expireObjInfo{
ObjectInfo: result.Item,
})
}
for {
select {
case result, ok := <-results:
if !ok {
done = true
break
}
if result.Err != nil {
failed = true
batchLogIf(ctx, result.Err)
continue
}
if result.Item.DeleteMarker {
deleteMarkerCountMap[result.Item.Name]++
}
// Apply filter to find the matching rule to apply expiry
// actions accordingly.
// nolint:gocritic
if result.Item.IsLatest {
var match BatchJobExpireFilter
var found bool
for _, rule := range r.Rules {
if rule.Matches(result.Item, now) {
match = rule
found = true
break
}
}
if !found {
continue
}
if prevObj.Name != result.Item.Name {
// switch the object
pushToExpire()
}
prevObj = result.Item
matchedFilter = match
versionsCount = 1
// Include the latest version
if matchedFilter.Purge.RetainVersions == 0 {
toDel = append(toDel, expireObjInfo{
ObjectInfo: result.Item,
ExpireAll: true,
})
continue
}
} else if prevObj.Name == result.Item.Name {
if matchedFilter.Purge.RetainVersions == 0 {
continue // including latest version in toDel suffices, skipping other versions
}
versionsCount++
} else {
// switch the object
pushToExpire()
// the object changed without a latest version being seen first; log it and skip
batchLogIf(ctx, fmt.Errorf("skipping object %s, no latest version found", result.Item.Name))
continue
}
if versionsCount <= matchedFilter.Purge.RetainVersions {
continue // retain versions
}
toDel = append(toDel, expireObjInfo{
ObjectInfo: result.Item,
})
pushToExpire()
case <-ctx.Done():
done = true
}
if done {
break
}
}
if context.Cause(ctx) != nil {
xioutil.SafeClose(expireCh)
return context.Cause(ctx)
}
pushToExpire()
// Send any remaining objects downstream
if len(toDel) > 0 {
select {


@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import (
"time"
@ -39,7 +39,7 @@ func (z *BatchJobExpire) DecodeMsg(dc *msgp.Reader) (err error) {
return
}
case "Prefix":
z.Prefix, err = dc.ReadString()
err = z.Prefix.DecodeMsg(dc)
if err != nil {
err = msgp.WrapError(err, "Prefix")
return
@ -114,7 +114,7 @@ func (z *BatchJobExpire) EncodeMsg(en *msgp.Writer) (err error) {
if err != nil {
return
}
err = en.WriteString(z.Prefix)
err = z.Prefix.EncodeMsg(en)
if err != nil {
err = msgp.WrapError(err, "Prefix")
return
@ -171,7 +171,11 @@ func (z *BatchJobExpire) MarshalMsg(b []byte) (o []byte, err error) {
o = msgp.AppendString(o, z.Bucket)
// string "Prefix"
o = append(o, 0xa6, 0x50, 0x72, 0x65, 0x66, 0x69, 0x78)
o = msgp.AppendString(o, z.Prefix)
o, err = z.Prefix.MarshalMsg(o)
if err != nil {
err = msgp.WrapError(err, "Prefix")
return
}
// string "NotificationCfg"
o = append(o, 0xaf, 0x4e, 0x6f, 0x74, 0x69, 0x66, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x43, 0x66, 0x67)
o, err = z.NotificationCfg.MarshalMsg(o)
@ -230,7 +234,7 @@ func (z *BatchJobExpire) UnmarshalMsg(bts []byte) (o []byte, err error) {
return
}
case "Prefix":
z.Prefix, bts, err = msgp.ReadStringBytes(bts)
bts, err = z.Prefix.UnmarshalMsg(bts)
if err != nil {
err = msgp.WrapError(err, "Prefix")
return
@ -280,7 +284,7 @@ func (z *BatchJobExpire) UnmarshalMsg(bts []byte) (o []byte, err error) {
// Msgsize returns an upper bound estimate of the number of bytes occupied by the serialized message
func (z *BatchJobExpire) Msgsize() (s int) {
s = 1 + 11 + msgp.StringPrefixSize + len(z.APIVersion) + 7 + msgp.StringPrefixSize + len(z.Bucket) + 7 + msgp.StringPrefixSize + len(z.Prefix) + 16 + z.NotificationCfg.Msgsize() + 6 + z.Retry.Msgsize() + 6 + msgp.ArrayHeaderSize
s = 1 + 11 + msgp.StringPrefixSize + len(z.APIVersion) + 7 + msgp.StringPrefixSize + len(z.Bucket) + 7 + z.Prefix.Msgsize() + 16 + z.NotificationCfg.Msgsize() + 6 + z.Retry.Msgsize() + 6 + msgp.ArrayHeaderSize
for za0001 := range z.Rules {
s += z.Rules[za0001].Msgsize()
}
@ -306,7 +310,7 @@ func (z *BatchJobExpireFilter) DecodeMsg(dc *msgp.Reader) (err error) {
}
switch msgp.UnsafeString(field) {
case "OlderThan":
z.OlderThan, err = dc.ReadDuration()
err = z.OlderThan.DecodeMsg(dc)
if err != nil {
err = msgp.WrapError(err, "OlderThan")
return
@ -433,7 +437,7 @@ func (z *BatchJobExpireFilter) EncodeMsg(en *msgp.Writer) (err error) {
if err != nil {
return
}
err = en.WriteDuration(z.OlderThan)
err = z.OlderThan.EncodeMsg(en)
if err != nil {
err = msgp.WrapError(err, "OlderThan")
return
@ -544,7 +548,11 @@ func (z *BatchJobExpireFilter) MarshalMsg(b []byte) (o []byte, err error) {
// map header, size 8
// string "OlderThan"
o = append(o, 0x88, 0xa9, 0x4f, 0x6c, 0x64, 0x65, 0x72, 0x54, 0x68, 0x61, 0x6e)
o = msgp.AppendDuration(o, z.OlderThan)
o, err = z.OlderThan.MarshalMsg(o)
if err != nil {
err = msgp.WrapError(err, "OlderThan")
return
}
// string "CreatedBefore"
o = append(o, 0xad, 0x43, 0x72, 0x65, 0x61, 0x74, 0x65, 0x64, 0x42, 0x65, 0x66, 0x6f, 0x72, 0x65)
if z.CreatedBefore == nil {
@ -613,7 +621,7 @@ func (z *BatchJobExpireFilter) UnmarshalMsg(bts []byte) (o []byte, err error) {
}
switch msgp.UnsafeString(field) {
case "OlderThan":
z.OlderThan, bts, err = msgp.ReadDurationBytes(bts)
bts, err = z.OlderThan.UnmarshalMsg(bts)
if err != nil {
err = msgp.WrapError(err, "OlderThan")
return
@ -734,7 +742,7 @@ func (z *BatchJobExpireFilter) UnmarshalMsg(bts []byte) (o []byte, err error) {
// Msgsize returns an upper bound estimate of the number of bytes occupied by the serialized message
func (z *BatchJobExpireFilter) Msgsize() (s int) {
s = 1 + 10 + msgp.DurationSize + 14
s = 1 + 10 + z.OlderThan.Msgsize() + 14
if z.CreatedBefore == nil {
s += msgp.NilSize
} else {


@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import (
"bytes"
"testing"


@ -18,9 +18,10 @@
package cmd
import (
"slices"
"testing"
"gopkg.in/yaml.v2"
"gopkg.in/yaml.v3"
)
func TestParseBatchJobExpire(t *testing.T) {
@ -32,7 +33,7 @@ expire: # Expire objects that match a condition
rules:
- type: object # regular objects with zero or more older versions
name: NAME # match object names that satisfy the wildcard expression.
olderThan: 70h # match objects older than this value
olderThan: 7d10h # match objects older than this value
createdBefore: "2006-01-02T15:04:05.00Z" # match objects created before "date"
tags:
- key: name
@ -64,8 +65,61 @@ expire: # Expire objects that match a condition
delay: 500ms # least amount of delay between each retry
`
var job BatchJobRequest
err := yaml.UnmarshalStrict([]byte(expireYaml), &job)
err := yaml.Unmarshal([]byte(expireYaml), &job)
if err != nil {
t.Fatal("Failed to parse batch-job-expire yaml", err)
}
if !slices.Equal(job.Expire.Prefix.F(), []string{"myprefix"}) {
t.Fatal("Failed to parse batch-job-expire yaml")
}
multiPrefixExpireYaml := `
expire: # Expire objects that match a condition
apiVersion: v1
bucket: mybucket # Bucket where this batch job will expire matching objects from
prefix: # (Optional) Prefix under which this job will expire objects matching the rules below.
- myprefix
- myprefix1
rules:
- type: object # regular objects with zero or more older versions
name: NAME # match object names that satisfy the wildcard expression.
olderThan: 7d10h # match objects older than this value
createdBefore: "2006-01-02T15:04:05.00Z" # match objects created before "date"
tags:
- key: name
value: pick* # match objects with tag 'name', all values starting with 'pick'
metadata:
- key: content-type
value: image/* # match objects with 'content-type', all values starting with 'image/'
size:
lessThan: "10MiB" # match objects with size less than this value (e.g. 10MiB)
greaterThan: 1MiB # match objects with size greater than this value (e.g. 1MiB)
purge:
# retainVersions: 0 # (default) delete all versions of the object. This option is the fastest.
# retainVersions: 5 # keep the latest 5 versions of the object.
- type: deleted # objects with delete marker as their latest version
name: NAME # match object names that satisfy the wildcard expression.
olderThan: 10h # match objects older than this value (e.g. 7d10h31s)
createdBefore: "2006-01-02T15:04:05.00Z" # match objects created before "date"
purge:
# retainVersions: 0 # (default) delete all versions of the object. This option is the fastest.
# retainVersions: 5 # keep the latest 5 versions of the object including delete markers.
notify:
endpoint: https://notify.endpoint # notification endpoint to receive job completion status
token: Bearer xxxxx # optional authentication token for the notification endpoint
retry:
attempts: 10 # number of retries for the job before giving up
delay: 500ms # least amount of delay between each retry
`
var multiPrefixJob BatchJobRequest
err = yaml.Unmarshal([]byte(multiPrefixExpireYaml), &multiPrefixJob)
if err != nil {
t.Fatal("Failed to parse batch-job-expire yaml", err)
}
if !slices.Equal(multiPrefixJob.Expire.Prefix.F(), []string{"myprefix", "myprefix1"}) {
t.Fatal("Failed to parse batch-job-expire yaml")
}
}
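Both test documents parse because the prefix field accepts either a YAML scalar or a sequence. A minimal sketch of how such a scalar-or-list type can be implemented with gopkg.in/yaml.v3; it mirrors the idea behind BatchJobPrefix, though the real implementation may differ in detail:

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

type Prefixes []string

// UnmarshalYAML accepts `prefix: a` as well as `prefix: [a, b]`.
func (p *Prefixes) UnmarshalYAML(node *yaml.Node) error {
	switch node.Kind {
	case yaml.ScalarNode:
		var s string
		if err := node.Decode(&s); err != nil {
			return err
		}
		*p = Prefixes{s}
		return nil
	case yaml.SequenceNode:
		var ss []string
		if err := node.Decode(&ss); err != nil {
			return err
		}
		*p = Prefixes(ss)
		return nil
	}
	return fmt.Errorf("prefix: unsupported YAML node kind %d", node.Kind)
}

func main() {
	var one, many struct {
		Prefix Prefixes `yaml:"prefix"`
	}
	yaml.Unmarshal([]byte("prefix: myprefix"), &one)
	yaml.Unmarshal([]byte("prefix: [myprefix, myprefix1]"), &many)
	fmt.Println(one.Prefix, many.Prefix) // [myprefix] [myprefix myprefix1]
}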

File diff suppressed because it is too large


@ -1,11 +1,94 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import (
"github.com/tinylib/msgp/msgp"
)
// DecodeMsg implements msgp.Decodable
func (z *BatchJobPrefix) DecodeMsg(dc *msgp.Reader) (err error) {
var zb0002 uint32
zb0002, err = dc.ReadArrayHeader()
if err != nil {
err = msgp.WrapError(err)
return
}
if cap((*z)) >= int(zb0002) {
(*z) = (*z)[:zb0002]
} else {
(*z) = make(BatchJobPrefix, zb0002)
}
for zb0001 := range *z {
(*z)[zb0001], err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, zb0001)
return
}
}
return
}
// EncodeMsg implements msgp.Encodable
func (z BatchJobPrefix) EncodeMsg(en *msgp.Writer) (err error) {
err = en.WriteArrayHeader(uint32(len(z)))
if err != nil {
err = msgp.WrapError(err)
return
}
for zb0003 := range z {
err = en.WriteString(z[zb0003])
if err != nil {
err = msgp.WrapError(err, zb0003)
return
}
}
return
}
// MarshalMsg implements msgp.Marshaler
func (z BatchJobPrefix) MarshalMsg(b []byte) (o []byte, err error) {
o = msgp.Require(b, z.Msgsize())
o = msgp.AppendArrayHeader(o, uint32(len(z)))
for zb0003 := range z {
o = msgp.AppendString(o, z[zb0003])
}
return
}
// UnmarshalMsg implements msgp.Unmarshaler
func (z *BatchJobPrefix) UnmarshalMsg(bts []byte) (o []byte, err error) {
var zb0002 uint32
zb0002, bts, err = msgp.ReadArrayHeaderBytes(bts)
if err != nil {
err = msgp.WrapError(err)
return
}
if cap((*z)) >= int(zb0002) {
(*z) = (*z)[:zb0002]
} else {
(*z) = make(BatchJobPrefix, zb0002)
}
for zb0001 := range *z {
(*z)[zb0001], bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, zb0001)
return
}
}
o = bts
return
}
// Msgsize returns an upper bound estimate of the number of bytes occupied by the serialized message
func (z BatchJobPrefix) Msgsize() (s int) {
s = msgp.ArrayHeaderSize
for zb0003 := range z {
s += msgp.StringPrefixSize + len(z[zb0003])
}
return
}
// DecodeMsg implements msgp.Decodable
func (z *BatchJobRequest) DecodeMsg(dc *msgp.Reader) (err error) {
var field []byte

View File

@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import (
"bytes"
"testing"
@ -9,6 +9,119 @@ import (
"github.com/tinylib/msgp/msgp"
)
func TestMarshalUnmarshalBatchJobPrefix(t *testing.T) {
v := BatchJobPrefix{}
bts, err := v.MarshalMsg(nil)
if err != nil {
t.Fatal(err)
}
left, err := v.UnmarshalMsg(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after UnmarshalMsg(): %q", len(left), left)
}
left, err = msgp.Skip(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after Skip(): %q", len(left), left)
}
}
func BenchmarkMarshalMsgBatchJobPrefix(b *testing.B) {
v := BatchJobPrefix{}
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.MarshalMsg(nil)
}
}
func BenchmarkAppendMsgBatchJobPrefix(b *testing.B) {
v := BatchJobPrefix{}
bts := make([]byte, 0, v.Msgsize())
bts, _ = v.MarshalMsg(bts[0:0])
b.SetBytes(int64(len(bts)))
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
bts, _ = v.MarshalMsg(bts[0:0])
}
}
func BenchmarkUnmarshalBatchJobPrefix(b *testing.B) {
v := BatchJobPrefix{}
bts, _ := v.MarshalMsg(nil)
b.ReportAllocs()
b.SetBytes(int64(len(bts)))
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := v.UnmarshalMsg(bts)
if err != nil {
b.Fatal(err)
}
}
}
func TestEncodeDecodeBatchJobPrefix(t *testing.T) {
v := BatchJobPrefix{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
m := v.Msgsize()
if buf.Len() > m {
t.Log("WARNING: TestEncodeDecodeBatchJobPrefix Msgsize() is inaccurate")
}
vn := BatchJobPrefix{}
err := msgp.Decode(&buf, &vn)
if err != nil {
t.Error(err)
}
buf.Reset()
msgp.Encode(&buf, &v)
err = msgp.NewReader(&buf).Skip()
if err != nil {
t.Error(err)
}
}
func BenchmarkEncodeBatchJobPrefix(b *testing.B) {
v := BatchJobPrefix{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
en := msgp.NewWriter(msgp.Nowhere)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.EncodeMsg(en)
}
en.Flush()
}
func BenchmarkDecodeBatchJobPrefix(b *testing.B) {
v := BatchJobPrefix{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
rd := msgp.NewEndlessReader(buf.Bytes(), b)
dc := msgp.NewReader(rd)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
err := v.DecodeMsg(dc)
if err != nil {
b.Fatal(err)
}
}
}
func TestMarshalUnmarshalBatchJobRequest(t *testing.T) {
v := BatchJobRequest{}
bts, err := v.MarshalMsg(nil)

View File

@ -0,0 +1,75 @@
// Copyright (c) 2015-2024 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package cmd
import (
"slices"
"testing"
"gopkg.in/yaml.v3"
)
func TestBatchJobPrefix_UnmarshalYAML(t *testing.T) {
type args struct {
yamlStr string
}
type PrefixTemp struct {
Prefix BatchJobPrefix `yaml:"prefix"`
}
tests := []struct {
name string
b PrefixTemp
args args
want []string
wantErr bool
}{
{
name: "test1",
b: PrefixTemp{},
args: args{
yamlStr: `
prefix: "foo"
`,
},
want: []string{"foo"},
wantErr: false,
},
{
name: "test2",
b: PrefixTemp{},
args: args{
yamlStr: `
prefix:
- "foo"
- "bar"
`,
},
want: []string{"foo", "bar"},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if err := yaml.Unmarshal([]byte(tt.args.yamlStr), &tt.b); (err != nil) != tt.wantErr {
t.Errorf("UnmarshalYAML() error = %v, wantErr %v", err, tt.wantErr)
}
if !slices.Equal(tt.b.Prefix.F(), tt.want) {
t.Errorf("UnmarshalYAML() = %v, want %v", tt.b.Prefix.F(), tt.want)
}
})
}
}
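
TestBatchJobPrefix_UnmarshalYAML above exercises both the scalar and the sequence form of `prefix`. A minimal sketch of how such a dual-form field can be decoded with yaml.v3's node API (the shape of MinIO's actual implementation may differ):

package main

import (
    "fmt"

    "gopkg.in/yaml.v3"
)

// prefixList accepts `prefix: foo` as well as `prefix: [foo, bar]`.
type prefixList []string

func (p *prefixList) UnmarshalYAML(value *yaml.Node) error {
    switch value.Kind {
    case yaml.ScalarNode: // single string form
        var s string
        if err := value.Decode(&s); err != nil {
            return err
        }
        *p = prefixList{s}
        return nil
    case yaml.SequenceNode: // list form
        var ss []string
        if err := value.Decode(&ss); err != nil {
            return err
        }
        *p = ss
        return nil
    default:
        return fmt.Errorf("prefix: unexpected YAML node kind %v", value.Kind)
    }
}

func main() {
    var v struct {
        Prefix prefixList `yaml:"prefix"`
    }
    if err := yaml.Unmarshal([]byte("prefix:\n  - foo\n  - bar"), &v); err != nil {
        panic(err)
    }
    fmt.Println(v.Prefix) // [foo bar]
}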

View File

@ -275,7 +275,7 @@ func (sf BatchJobSizeFilter) Validate() error {
type BatchJobSize int64
// UnmarshalYAML to parse humanized byte values
func (s *BatchJobSize) UnmarshalYAML(unmarshal func(interface{}) error) error {
func (s *BatchJobSize) UnmarshalYAML(unmarshal func(any) error) error {
var batchExpireSz string
err := unmarshal(&batchExpireSz)
if err != nil {

View File

@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import (
"github.com/tinylib/msgp/msgp"
)

View File

@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import (
"bytes"
"testing"

View File

@ -21,8 +21,8 @@ import (
"time"
miniogo "github.com/minio/minio-go/v7"
"github.com/minio/minio/internal/auth"
"github.com/minio/pkg/v3/xtime"
)
//go:generate msgp -file $GOFILE
@ -65,12 +65,12 @@ import (
// BatchReplicateFilter holds all the filters currently supported for batch replication
type BatchReplicateFilter struct {
NewerThan time.Duration `yaml:"newerThan,omitempty" json:"newerThan"`
OlderThan time.Duration `yaml:"olderThan,omitempty" json:"olderThan"`
CreatedAfter time.Time `yaml:"createdAfter,omitempty" json:"createdAfter"`
CreatedBefore time.Time `yaml:"createdBefore,omitempty" json:"createdBefore"`
Tags []BatchJobKV `yaml:"tags,omitempty" json:"tags"`
Metadata []BatchJobKV `yaml:"metadata,omitempty" json:"metadata"`
NewerThan xtime.Duration `yaml:"newerThan,omitempty" json:"newerThan"`
OlderThan xtime.Duration `yaml:"olderThan,omitempty" json:"olderThan"`
CreatedAfter time.Time `yaml:"createdAfter,omitempty" json:"createdAfter"`
CreatedBefore time.Time `yaml:"createdBefore,omitempty" json:"createdBefore"`
Tags []BatchJobKV `yaml:"tags,omitempty" json:"tags"`
Metadata []BatchJobKV `yaml:"metadata,omitempty" json:"metadata"`
}
// BatchJobReplicateFlags various configurations for replication job definition currently includes
@ -151,7 +151,7 @@ func (t BatchJobReplicateTarget) ValidPath() bool {
type BatchJobReplicateSource struct {
Type BatchJobReplicateResourceType `yaml:"type" json:"type"`
Bucket string `yaml:"bucket" json:"bucket"`
Prefix string `yaml:"prefix" json:"prefix"`
Prefix BatchJobPrefix `yaml:"prefix" json:"prefix"`
Endpoint string `yaml:"endpoint" json:"endpoint"`
Path string `yaml:"path" json:"path"`
Creds BatchJobReplicateCredentials `yaml:"credentials" json:"credentials"`

View File

@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import (
"github.com/tinylib/msgp/msgp"
)
@ -411,7 +411,7 @@ func (z *BatchJobReplicateSource) DecodeMsg(dc *msgp.Reader) (err error) {
return
}
case "Prefix":
z.Prefix, err = dc.ReadString()
err = z.Prefix.DecodeMsg(dc)
if err != nil {
err = msgp.WrapError(err, "Prefix")
return
@ -514,7 +514,7 @@ func (z *BatchJobReplicateSource) EncodeMsg(en *msgp.Writer) (err error) {
if err != nil {
return
}
err = en.WriteString(z.Prefix)
err = z.Prefix.EncodeMsg(en)
if err != nil {
err = msgp.WrapError(err, "Prefix")
return
@ -600,7 +600,11 @@ func (z *BatchJobReplicateSource) MarshalMsg(b []byte) (o []byte, err error) {
o = msgp.AppendString(o, z.Bucket)
// string "Prefix"
o = append(o, 0xa6, 0x50, 0x72, 0x65, 0x66, 0x69, 0x78)
o = msgp.AppendString(o, z.Prefix)
o, err = z.Prefix.MarshalMsg(o)
if err != nil {
err = msgp.WrapError(err, "Prefix")
return
}
// string "Endpoint"
o = append(o, 0xa8, 0x45, 0x6e, 0x64, 0x70, 0x6f, 0x69, 0x6e, 0x74)
o = msgp.AppendString(o, z.Endpoint)
@ -664,7 +668,7 @@ func (z *BatchJobReplicateSource) UnmarshalMsg(bts []byte) (o []byte, err error)
return
}
case "Prefix":
z.Prefix, bts, err = msgp.ReadStringBytes(bts)
bts, err = z.Prefix.UnmarshalMsg(bts)
if err != nil {
err = msgp.WrapError(err, "Prefix")
return
@ -742,7 +746,7 @@ func (z *BatchJobReplicateSource) UnmarshalMsg(bts []byte) (o []byte, err error)
// Msgsize returns an upper bound estimate of the number of bytes occupied by the serialized message
func (z *BatchJobReplicateSource) Msgsize() (s int) {
s = 1 + 5 + msgp.StringPrefixSize + len(string(z.Type)) + 7 + msgp.StringPrefixSize + len(z.Bucket) + 7 + msgp.StringPrefixSize + len(z.Prefix) + 9 + msgp.StringPrefixSize + len(z.Endpoint) + 5 + msgp.StringPrefixSize + len(z.Path) + 6 + 1 + 10 + msgp.StringPrefixSize + len(z.Creds.AccessKey) + 10 + msgp.StringPrefixSize + len(z.Creds.SecretKey) + 13 + msgp.StringPrefixSize + len(z.Creds.SessionToken) + 9 + z.Snowball.Msgsize()
s = 1 + 5 + msgp.StringPrefixSize + len(string(z.Type)) + 7 + msgp.StringPrefixSize + len(z.Bucket) + 7 + z.Prefix.Msgsize() + 9 + msgp.StringPrefixSize + len(z.Endpoint) + 5 + msgp.StringPrefixSize + len(z.Path) + 6 + 1 + 10 + msgp.StringPrefixSize + len(z.Creds.AccessKey) + 10 + msgp.StringPrefixSize + len(z.Creds.SecretKey) + 13 + msgp.StringPrefixSize + len(z.Creds.SessionToken) + 9 + z.Snowball.Msgsize()
return
}
@ -1409,13 +1413,13 @@ func (z *BatchReplicateFilter) DecodeMsg(dc *msgp.Reader) (err error) {
}
switch msgp.UnsafeString(field) {
case "NewerThan":
z.NewerThan, err = dc.ReadDuration()
err = z.NewerThan.DecodeMsg(dc)
if err != nil {
err = msgp.WrapError(err, "NewerThan")
return
}
case "OlderThan":
z.OlderThan, err = dc.ReadDuration()
err = z.OlderThan.DecodeMsg(dc)
if err != nil {
err = msgp.WrapError(err, "OlderThan")
return
@ -1489,7 +1493,7 @@ func (z *BatchReplicateFilter) EncodeMsg(en *msgp.Writer) (err error) {
if err != nil {
return
}
err = en.WriteDuration(z.NewerThan)
err = z.NewerThan.EncodeMsg(en)
if err != nil {
err = msgp.WrapError(err, "NewerThan")
return
@ -1499,7 +1503,7 @@ func (z *BatchReplicateFilter) EncodeMsg(en *msgp.Writer) (err error) {
if err != nil {
return
}
err = en.WriteDuration(z.OlderThan)
err = z.OlderThan.EncodeMsg(en)
if err != nil {
err = msgp.WrapError(err, "OlderThan")
return
@ -1567,10 +1571,18 @@ func (z *BatchReplicateFilter) MarshalMsg(b []byte) (o []byte, err error) {
// map header, size 6
// string "NewerThan"
o = append(o, 0x86, 0xa9, 0x4e, 0x65, 0x77, 0x65, 0x72, 0x54, 0x68, 0x61, 0x6e)
o = msgp.AppendDuration(o, z.NewerThan)
o, err = z.NewerThan.MarshalMsg(o)
if err != nil {
err = msgp.WrapError(err, "NewerThan")
return
}
// string "OlderThan"
o = append(o, 0xa9, 0x4f, 0x6c, 0x64, 0x65, 0x72, 0x54, 0x68, 0x61, 0x6e)
o = msgp.AppendDuration(o, z.OlderThan)
o, err = z.OlderThan.MarshalMsg(o)
if err != nil {
err = msgp.WrapError(err, "OlderThan")
return
}
// string "CreatedAfter"
o = append(o, 0xac, 0x43, 0x72, 0x65, 0x61, 0x74, 0x65, 0x64, 0x41, 0x66, 0x74, 0x65, 0x72)
o = msgp.AppendTime(o, z.CreatedAfter)
@ -1619,13 +1631,13 @@ func (z *BatchReplicateFilter) UnmarshalMsg(bts []byte) (o []byte, err error) {
}
switch msgp.UnsafeString(field) {
case "NewerThan":
z.NewerThan, bts, err = msgp.ReadDurationBytes(bts)
bts, err = z.NewerThan.UnmarshalMsg(bts)
if err != nil {
err = msgp.WrapError(err, "NewerThan")
return
}
case "OlderThan":
z.OlderThan, bts, err = msgp.ReadDurationBytes(bts)
bts, err = z.OlderThan.UnmarshalMsg(bts)
if err != nil {
err = msgp.WrapError(err, "OlderThan")
return
@ -1694,7 +1706,7 @@ func (z *BatchReplicateFilter) UnmarshalMsg(bts []byte) (o []byte, err error) {
// Msgsize returns an upper bound estimate of the number of bytes occupied by the serialized message
func (z *BatchReplicateFilter) Msgsize() (s int) {
s = 1 + 10 + msgp.DurationSize + 10 + msgp.DurationSize + 13 + msgp.TimeSize + 14 + msgp.TimeSize + 5 + msgp.ArrayHeaderSize
s = 1 + 10 + z.NewerThan.Msgsize() + 10 + z.OlderThan.Msgsize() + 13 + msgp.TimeSize + 14 + msgp.TimeSize + 5 + msgp.ArrayHeaderSize
for za0001 := range z.Tags {
s += z.Tags[za0001].Msgsize()
}

View File

@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import (
"bytes"
"testing"

cmd/batch-replicate_test.go (new file, 182 lines)
View File

@ -0,0 +1,182 @@
// Copyright (c) 2015-2024 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package cmd
import (
"slices"
"testing"
"gopkg.in/yaml.v3"
)
func TestParseBatchJobReplicate(t *testing.T) {
replicateYaml := `
replicate:
apiVersion: v1
# source of the objects to be replicated
source:
type: minio # valid values are "s3" or "minio"
bucket: mytest
prefix: object-prefix1 # 'PREFIX' is optional
# If your source is the 'local' alias specified to 'mc batch start', then the 'endpoint' and 'credentials' fields are optional and can be omitted
# Either the 'source' or the 'target' *must* be the "local" deployment
# endpoint: "http://127.0.0.1:9000"
# # path: "on|off|auto" # "on" enables path-style bucket lookup. "off" enables virtual host (DNS)-style bucket lookup. Defaults to "auto"
# credentials:
# accessKey: minioadmin # Required
# secretKey: minioadmin # Required
# # sessionToken: SESSION-TOKEN # Optional only available when rotating credentials are used
snowball: # automatically activated if the source is local
disable: true # optionally turn off snowball archive transfer
# batch: 100 # up to this many objects per archive
# inmemory: true # indicates if the archive must be staged locally or in-memory
# compress: false # S2/Snappy compressed archive
# smallerThan: 5MiB # create archive for all objects smaller than 5MiB
# skipErrs: false # skip any source-side read() errors
# target where the objects must be replicated
target:
type: minio # valid values are "s3" or "minio"
bucket: mytest
prefix: stage # 'PREFIX' is optional
# If your target is the 'local' alias specified to 'mc batch start', then the 'endpoint' and 'credentials' fields are optional and can be omitted
# Either the 'source' or the 'target' *must* be the "local" deployment
endpoint: "http://127.0.0.1:9001"
# path: "on|off|auto" # "on" enables path-style bucket lookup. "off" enables virtual host (DNS)-style bucket lookup. Defaults to "auto"
credentials:
accessKey: minioadmin
secretKey: minioadmin
# sessionToken: SESSION-TOKEN # Optional only available when rotating credentials are used
# NOTE: All flags are optional
# - filtering criteria apply only to source objects that match the criteria
# - configurable notification endpoints
# - configurable retries for the job (each retry skips objects already replicated successfully)
flags:
filter:
newerThan: "7d10h31s" # match objects newer than this value (e.g. 7d10h31s)
olderThan: "7d" # match objects older than this value (e.g. 7d10h31s)
# createdAfter: "date" # match objects created after "date"
# createdBefore: "date" # match objects created before "date"
## NOTE: tags are not supported when "source" is remote.
tags:
- key: "name"
value: "pick*" # match objects with tag 'name', with all values starting with 'pick'
metadata:
- key: "content-type"
value: "image/*" # match objects with 'content-type', with all values starting with 'image/'
# notify:
# endpoint: "https://notify.endpoint" # notification endpoint to receive job status events
# token: "Bearer xxxxx" # optional authentication token for the notification endpoint
#
# retry:
# attempts: 10 # number of retries for the job before giving up
# delay: "500ms" # least amount of delay between each retry
`
var job BatchJobRequest
err := yaml.Unmarshal([]byte(replicateYaml), &job)
if err != nil {
t.Fatal("Failed to parse batch-job-replicate yaml", err)
}
if !slices.Equal(job.Replicate.Source.Prefix.F(), []string{"object-prefix1"}) {
t.Fatal("Failed to parse batch-job-replicate yaml", err)
}
multiPrefixReplicateYaml := `
replicate:
apiVersion: v1
# source of the objects to be replicated
source:
type: minio # valid values are "s3" or "minio"
bucket: mytest
prefix: # 'PREFIX' is optional
- object-prefix1
- object-prefix2
# If your source is the 'local' alias specified to 'mc batch start', then the 'endpoint' and 'credentials' fields are optional and can be omitted
# Either the 'source' or the 'target' *must* be the "local" deployment
# endpoint: "http://127.0.0.1:9000"
# # path: "on|off|auto" # "on" enables path-style bucket lookup. "off" enables virtual host (DNS)-style bucket lookup. Defaults to "auto"
# credentials:
# accessKey: minioadmin # Required
# secretKey: minioadmin # Required
# # sessionToken: SESSION-TOKEN # Optional only available when rotating credentials are used
snowball: # automatically activated if the source is local
disable: true # optionally turn off snowball archive transfer
# batch: 100 # up to this many objects per archive
# inmemory: true # indicates if the archive must be staged locally or in-memory
# compress: false # S2/Snappy compressed archive
# smallerThan: 5MiB # create archive for all objects smaller than 5MiB
# skipErrs: false # skip any source-side read() errors
# target where the objects must be replicated
target:
type: minio # valid values are "s3" or "minio"
bucket: mytest
prefix: stage # 'PREFIX' is optional
# If your target is the 'local' alias specified to 'mc batch start', then the 'endpoint' and 'credentials' fields are optional and can be omitted
# Either the 'source' or the 'target' *must* be the "local" deployment
endpoint: "http://127.0.0.1:9001"
# path: "on|off|auto" # "on" enables path-style bucket lookup. "off" enables virtual host (DNS)-style bucket lookup. Defaults to "auto"
credentials:
accessKey: minioadmin
secretKey: minioadmin
# sessionToken: SESSION-TOKEN # Optional only available when rotating credentials are used
# NOTE: All flags are optional
# - filtering criteria apply only to source objects that match the criteria
# - configurable notification endpoints
# - configurable retries for the job (each retry skips objects already replicated successfully)
flags:
filter:
newerThan: "7d10h31s" # match objects newer than this value (e.g. 7d10h31s)
olderThan: "7d" # match objects older than this value (e.g. 7d10h31s)
# createdAfter: "date" # match objects created after "date"
# createdBefore: "date" # match objects created before "date"
## NOTE: tags are not supported when "source" is remote.
tags:
- key: "name"
value: "pick*" # match objects with tag 'name', with all values starting with 'pick'
metadata:
- key: "content-type"
value: "image/*" # match objects with 'content-type', with all values starting with 'image/'
# notify:
# endpoint: "https://notify.endpoint" # notification endpoint to receive job status events
# token: "Bearer xxxxx" # optional authentication token for the notification endpoint
#
# retry:
# attempts: 10 # number of retries for the job before giving up
# delay: "500ms" # least amount of delay between each retry
`
var multiPrefixJob BatchJobRequest
err = yaml.Unmarshal([]byte(multiPrefixReplicateYaml), &multiPrefixJob)
if err != nil {
t.Fatal("Failed to parse batch-job-replicate yaml", err)
}
if !slices.Equal(multiPrefixJob.Replicate.Source.Prefix.F(), []string{"object-prefix1", "object-prefix2"}) {
t.Fatal("Failed to parse batch-job-replicate yaml")
}
}
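
The type change from time.Duration to xtime.Duration in BatchReplicateFilter (see the batch-replicate diff above) exists because the standard library's parser has no day unit, while the YAML in these tests uses values like "7d10h31s". A hedged sketch of the difference, assuming xtime.Duration unmarshals day suffixes from YAML, which is what these tests exercise:

package main

import (
    "fmt"
    "time"

    "github.com/minio/pkg/v3/xtime"
    "gopkg.in/yaml.v3"
)

func main() {
    // The standard library rejects day suffixes outright.
    _, err := time.ParseDuration("7d10h31s")
    fmt.Println(err) // time: unknown unit "d" in duration "7d10h31s"

    // xtime.Duration accepts them when decoding YAML, which is why the
    // batch replicate filter fields switched types in this diff.
    var f struct {
        OlderThan xtime.Duration `yaml:"olderThan"`
    }
    if err := yaml.Unmarshal([]byte(`olderThan: "7d10h31s"`), &f); err != nil {
        panic(err)
    }
    fmt.Println(time.Duration(f.OlderThan)) // 178h0m31s
}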

View File

@ -21,6 +21,7 @@ import (
"context"
"encoding/base64"
"fmt"
"maps"
"math/rand"
"net/http"
"runtime"
@ -110,9 +111,7 @@ func (e BatchJobKeyRotateEncryption) Validate() error {
}
}
e.kmsContext = kms.Context{}
for k, v := range ctx {
e.kmsContext[k] = v
}
maps.Copy(e.kmsContext, ctx)
ctx["MinIO batch API"] = "batchrotate" // Context for a test key operation
if _, err := GlobalKMS.GenerateKey(GlobalContext, &kms.GenerateKeyRequest{Name: e.Key, AssociatedData: ctx}); err != nil {
return err
@ -225,9 +224,7 @@ func (r *BatchJobKeyRotateV1) KeyRotate(ctx context.Context, api ObjectLayer, ob
// Since we are rotating the keys, make sure to update the metadata.
oi.metadataOnly = true
oi.keyRotation = true
for k, v := range encMetadata {
oi.UserDefined[k] = v
}
maps.Copy(oi.UserDefined, encMetadata)
if _, err := api.CopyObject(ctx, r.Bucket, oi.Name, r.Bucket, oi.Name, oi, ObjectOptions{
VersionID: oi.VersionID,
}, ObjectOptions{
@ -257,7 +254,7 @@ func (r *BatchJobKeyRotateV1) Start(ctx context.Context, api ObjectLayer, job Ba
JobType: string(job.Type()),
StartTime: job.Started,
}
if err := ri.load(ctx, api, job); err != nil {
if err := ri.loadOrInit(ctx, api, job); err != nil {
return err
}
if ri.Complete {
@ -267,8 +264,12 @@ func (r *BatchJobKeyRotateV1) Start(ctx context.Context, api ObjectLayer, job Ba
globalBatchJobsMetrics.save(job.ID, ri)
lastObject := ri.Object
retryAttempts := job.KeyRotate.Flags.Retry.Attempts
if retryAttempts <= 0 {
retryAttempts = batchKeyRotateJobDefaultRetries
}
delay := job.KeyRotate.Flags.Retry.Delay
if delay == 0 {
if delay <= 0 {
delay = batchKeyRotateJobDefaultRetryDelay
}
@ -354,7 +355,6 @@ func (r *BatchJobKeyRotateV1) Start(ctx context.Context, api ObjectLayer, job Ba
return err
}
retryAttempts := ri.RetryAttempts
ctx, cancel := context.WithCancel(ctx)
results := make(chan itemOrErr[ObjectInfo], 100)
@ -389,6 +389,17 @@ func (r *BatchJobKeyRotateV1) Start(ctx context.Context, api ObjectLayer, job Ba
stopFn(result, err)
batchLogIf(ctx, err)
success = false
if attempts >= retryAttempts {
auditOptions := AuditLogOptions{
Event: "KeyRotate",
APIName: "StartBatchJob",
Bucket: result.Bucket,
Object: result.Name,
VersionID: result.VersionID,
Error: err.Error(),
}
auditLogInternal(ctx, auditOptions)
}
} else {
stopFn(result, nil)
}

View File

@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import (
"github.com/tinylib/msgp/msgp"
)

View File

@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import (
"bytes"
"testing"

View File

@ -35,7 +35,7 @@ func runPutObjectBenchmark(b *testing.B, obj ObjectLayer, objSize int) {
// obtains random bucket name.
bucket := getRandomBucketName()
// create bucket.
err = obj.MakeBucket(context.Background(), bucket, MakeBucketOptions{})
err = obj.MakeBucket(b.Context(), bucket, MakeBucketOptions{})
if err != nil {
b.Fatal(err)
}
@ -51,10 +51,10 @@ func runPutObjectBenchmark(b *testing.B, obj ObjectLayer, objSize int) {
// benchmark utility which helps obtain number of allocations and bytes allocated per ops.
b.ReportAllocs()
// the actual benchmark for PutObject starts here. Reset the benchmark timer.
b.ResetTimer()
for i := 0; i < b.N; i++ {
for i := 0; b.Loop(); i++ {
// insert the object.
objInfo, err := obj.PutObject(context.Background(), bucket, "object"+strconv.Itoa(i),
objInfo, err := obj.PutObject(b.Context(), bucket, "object"+strconv.Itoa(i),
mustGetPutObjReader(b, bytes.NewReader(textData), int64(len(textData)), md5hex, sha256hex), ObjectOptions{})
if err != nil {
b.Fatal(err)
@ -76,7 +76,7 @@ func runPutObjectPartBenchmark(b *testing.B, obj ObjectLayer, partSize int) {
object := getRandomObjectName()
// create bucket.
err = obj.MakeBucket(context.Background(), bucket, MakeBucketOptions{})
err = obj.MakeBucket(b.Context(), bucket, MakeBucketOptions{})
if err != nil {
b.Fatal(err)
}
@ -90,7 +90,7 @@ func runPutObjectPartBenchmark(b *testing.B, obj ObjectLayer, partSize int) {
textData := generateBytesData(objSize)
// generate md5sum for the generated data.
// md5sum of the data to written is required as input for NewMultipartUpload.
res, err := obj.NewMultipartUpload(context.Background(), bucket, object, ObjectOptions{})
res, err := obj.NewMultipartUpload(b.Context(), bucket, object, ObjectOptions{})
if err != nil {
b.Fatal(err)
}
@ -101,11 +101,11 @@ func runPutObjectPartBenchmark(b *testing.B, obj ObjectLayer, partSize int) {
// benchmark utility which helps obtain number of allocations and bytes allocated per ops.
b.ReportAllocs()
// the actual benchmark for PutObjectPart starts here. Reset the benchmark timer.
b.ResetTimer()
for i := 0; i < b.N; i++ {
for i := 0; b.Loop(); i++ {
// insert the object.
totalPartsNR := int(math.Ceil(float64(objSize) / float64(partSize)))
for j := 0; j < totalPartsNR; j++ {
for j := range totalPartsNR {
if j < totalPartsNR-1 {
textPartData = textData[j*partSize : (j+1)*partSize-1]
} else {
@ -113,7 +113,7 @@ func runPutObjectPartBenchmark(b *testing.B, obj ObjectLayer, partSize int) {
}
md5hex := getMD5Hash(textPartData)
var partInfo PartInfo
partInfo, err = obj.PutObjectPart(context.Background(), bucket, object, res.UploadID, j,
partInfo, err = obj.PutObjectPart(b.Context(), bucket, object, res.UploadID, j,
mustGetPutObjReader(b, bytes.NewReader(textPartData), int64(len(textPartData)), md5hex, sha256hex), ObjectOptions{})
if err != nil {
b.Fatal(err)
@ -130,7 +130,7 @@ func runPutObjectPartBenchmark(b *testing.B, obj ObjectLayer, partSize int) {
// creates Erasure/FS backend setup, obtains the object layer and calls the runPutObjectPartBenchmark function.
func benchmarkPutObjectPart(b *testing.B, instanceType string, objSize int) {
// create a temp Erasure/FS backend.
ctx, cancel := context.WithCancel(context.Background())
ctx, cancel := context.WithCancel(b.Context())
defer cancel()
objLayer, disks, err := prepareTestBackend(ctx, instanceType)
if err != nil {
@ -146,7 +146,7 @@ func benchmarkPutObjectPart(b *testing.B, instanceType string, objSize int) {
// creates Erasure/FS backend setup, obtains the object layer and calls the runPutObjectBenchmark function.
func benchmarkPutObject(b *testing.B, instanceType string, objSize int) {
// create a temp Erasure/FS backend.
ctx, cancel := context.WithCancel(context.Background())
ctx, cancel := context.WithCancel(b.Context())
defer cancel()
objLayer, disks, err := prepareTestBackend(ctx, instanceType)
if err != nil {
@ -162,7 +162,7 @@ func benchmarkPutObject(b *testing.B, instanceType string, objSize int) {
// creates Erasure/FS backend setup, obtains the object layer and runs parallel benchmark for put object.
func benchmarkPutObjectParallel(b *testing.B, instanceType string, objSize int) {
// create a temp Erasure/FS backend.
ctx, cancel := context.WithCancel(context.Background())
ctx, cancel := context.WithCancel(b.Context())
defer cancel()
objLayer, disks, err := prepareTestBackend(ctx, instanceType)
if err != nil {
@ -196,7 +196,7 @@ func runPutObjectBenchmarkParallel(b *testing.B, obj ObjectLayer, objSize int) {
// obtains random bucket name.
bucket := getRandomBucketName()
// create bucket.
err := obj.MakeBucket(context.Background(), bucket, MakeBucketOptions{})
err := obj.MakeBucket(b.Context(), bucket, MakeBucketOptions{})
if err != nil {
b.Fatal(err)
}
@ -218,7 +218,7 @@ func runPutObjectBenchmarkParallel(b *testing.B, obj ObjectLayer, objSize int) {
i := 0
for pb.Next() {
// insert the object.
objInfo, err := obj.PutObject(context.Background(), bucket, "object"+strconv.Itoa(i),
objInfo, err := obj.PutObject(b.Context(), bucket, "object"+strconv.Itoa(i),
mustGetPutObjReader(b, bytes.NewReader(textData), int64(len(textData)), md5hex, sha256hex), ObjectOptions{})
if err != nil {
b.Fatal(err)

View File

@ -20,6 +20,7 @@ package cmd
import (
"bytes"
"context"
"errors"
"hash"
"io"
"sync"
@ -37,12 +38,22 @@ type streamingBitrotWriter struct {
shardSize int64
canClose *sync.WaitGroup
byteBuf []byte
finished bool
}
func (b *streamingBitrotWriter) Write(p []byte) (int, error) {
if len(p) == 0 {
return 0, nil
}
if b.finished {
// A short (final) shard was already written; the stream is sealed.
return 0, errors.New("bitrot write not allowed")
}
if int64(len(p)) > b.shardSize {
return 0, errors.New("unexpected bitrot buffer size")
}
if int64(len(p)) < b.shardSize {
// Only the final shard may be smaller than shardSize; seal the writer.
b.finished = true
}
b.h.Reset()
b.h.Write(p)
hashBytes := b.h.Sum(nil)
@ -141,13 +152,7 @@ func (b *streamingBitrotReader) Close() error {
}
if closer, ok := b.rc.(io.Closer); ok {
// drain the body for connection reuse at network layer.
xhttp.DrainBody(struct {
io.Reader
io.Closer
}{
Reader: b.rc,
Closer: closeWrapper(func() error { return nil }),
})
xhttp.DrainBody(io.NopCloser(b.rc))
return closer.Close()
}
return nil
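
The Write guard added to streamingBitrotWriter above enforces a shard-size invariant: every write must carry exactly shardSize bytes, except a final short shard, after which the stream is sealed. A standalone sketch of the same pattern (hypothetical shardWriter, not MinIO's type):

package main

import (
    "errors"
    "fmt"
)

// shardWriter mirrors the guard in streamingBitrotWriter: full shards only,
// except a final short shard that seals the writer.
type shardWriter struct {
    shardSize int
    finished  bool
}

func (w *shardWriter) Write(p []byte) (int, error) {
    if w.finished {
        return 0, errors.New("write after final shard not allowed")
    }
    if len(p) > w.shardSize {
        return 0, errors.New("unexpected buffer size")
    }
    if len(p) < w.shardSize {
        w.finished = true // short shard: must be the last one
    }
    return len(p), nil
}

func main() {
    w := &shardWriter{shardSize: 4}
    for _, chunk := range []string{"abcd", "ef", "gh"} {
        _, err := w.Write([]byte(chunk))
        fmt.Printf("write %q: err=%v\n", chunk, err)
    }
    // write "abcd": err=<nil>
    // write "ef":   err=<nil>  (short final shard, seals the writer)
    // write "gh":   err=write after final shard not allowed
}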

View File

@ -99,7 +99,7 @@ func BitrotAlgorithmFromString(s string) (a BitrotAlgorithm) {
return alg
}
}
return
return a
}
func newBitrotWriter(disk StorageAPI, origvolume, volume, filePath string, length int64, algo BitrotAlgorithm, shardSize int64) io.Writer {
@ -128,14 +128,20 @@ func closeBitrotReaders(rs []io.ReaderAt) {
}
// Close all the writers.
func closeBitrotWriters(ws []io.Writer) {
for _, w := range ws {
if w != nil {
if bw, ok := w.(io.Closer); ok {
bw.Close()
}
func closeBitrotWriters(ws []io.Writer) []error {
errs := make([]error, len(ws))
for i, w := range ws {
if w == nil {
errs[i] = errDiskNotFound
continue
}
if bw, ok := w.(io.Closer); ok {
errs[i] = bw.Close()
} else {
errs[i] = nil
}
}
return errs
}
// Returns hash sum for whole-bitrot, nil for streaming-bitrot.
@ -178,7 +184,7 @@ func bitrotVerify(r io.Reader, wantSize, partSize int64, algo BitrotAlgorithm, w
return errFileCorrupt
}
bufp := xioutil.ODirectPoolSmall.Get().(*[]byte)
bufp := xioutil.ODirectPoolSmall.Get()
defer xioutil.ODirectPoolSmall.Put(bufp)
for left > 0 {

View File

@ -18,7 +18,6 @@
package cmd
import (
"context"
"io"
"testing"
)
@ -34,7 +33,7 @@ func testBitrotReaderWriterAlgo(t *testing.T, bitrotAlgo BitrotAlgorithm) {
t.Fatal(err)
}
disk.MakeVol(context.Background(), volume)
disk.MakeVol(t.Context(), volume)
writer := newBitrotWriter(disk, "", volume, filePath, 35, bitrotAlgo, 10)

View File

@ -48,9 +48,7 @@ func (bs *bootstrapTracer) Events() []madmin.TraceInfo {
traceInfo := make([]madmin.TraceInfo, 0, bootstrapTraceLimit)
bs.mu.RLock()
for _, i := range bs.info {
traceInfo = append(traceInfo, i)
}
traceInfo = append(traceInfo, bs.info...)
bs.mu.RUnlock()
return traceInfo

View File

@ -19,9 +19,13 @@ package cmd
import (
"context"
"crypto/md5"
"encoding/hex"
"errors"
"fmt"
"io"
"math/rand"
"os"
"reflect"
"strings"
"sync"
@ -43,10 +47,15 @@ type ServerSystemConfig struct {
NEndpoints int
CmdLines []string
MinioEnv map[string]string
Checksum string
}
// Diff - returns error on first difference found in two configs.
func (s1 *ServerSystemConfig) Diff(s2 *ServerSystemConfig) error {
if s1.Checksum != s2.Checksum {
return fmt.Errorf("Expected MinIO binary checksum: %s, seen: %s", s1.Checksum, s2.Checksum)
}
ns1 := s1.NEndpoints
ns2 := s2.NEndpoints
if ns1 != ns2 {
@ -82,7 +91,7 @@ func (s1 *ServerSystemConfig) Diff(s2 *ServerSystemConfig) error {
extra = append(extra, k)
}
}
msg := "Expected same MINIO_ environment variables and values across all servers: "
msg := "Expected MINIO_* environment name and values across all servers to be same: "
if len(missing) > 0 {
msg += fmt.Sprintf(`Missing environment values: %v. `, missing)
}
@ -97,14 +106,17 @@ func (s1 *ServerSystemConfig) Diff(s2 *ServerSystemConfig) error {
}
// skipEnvs lists environment variables excluded from the cross-server
// comparison (credentials and per-deployment or debug-only settings).
var skipEnvs = map[string]struct{}{
"MINIO_OPTS": {},
"MINIO_CERT_PASSWD": {},
"MINIO_SERVER_DEBUG": {},
"MINIO_DSYNC_TRACE": {},
"MINIO_ROOT_USER": {},
"MINIO_ROOT_PASSWORD": {},
"MINIO_ACCESS_KEY": {},
"MINIO_SECRET_KEY": {},
"MINIO_OPTS": {},
"MINIO_CERT_PASSWD": {},
"MINIO_SERVER_DEBUG": {},
"MINIO_DSYNC_TRACE": {},
"MINIO_ROOT_USER": {},
"MINIO_ROOT_PASSWORD": {},
"MINIO_ACCESS_KEY": {},
"MINIO_SECRET_KEY": {},
"MINIO_OPERATOR_VERSION": {},
"MINIO_VSPHERE_PLUGIN_VERSION": {},
"MINIO_CI_CD": {},
}
func getServerSystemCfg() *ServerSystemConfig {
@ -120,7 +132,7 @@ func getServerSystemCfg() *ServerSystemConfig {
}
envValues[envK] = logger.HashString(env.Get(envK, ""))
}
scfg := &ServerSystemConfig{NEndpoints: globalEndpoints.NEndpoints(), MinioEnv: envValues}
scfg := &ServerSystemConfig{NEndpoints: globalEndpoints.NEndpoints(), MinioEnv: envValues, Checksum: binaryChecksum}
var cmdLines []string
for _, ep := range globalEndpoints {
cmdLines = append(cmdLines, ep.CmdLine)
@ -167,6 +179,26 @@ func (client *bootstrapRESTClient) String() string {
return client.gridConn.String()
}
var binaryChecksum = getBinaryChecksum()
func getBinaryChecksum() string {
mw := md5.New()
binPath, err := os.Executable()
if err != nil {
logger.Error("Calculating checksum failed: %s", err)
return "00000000000000000000000000000000"
}
b, err := os.Open(binPath)
if err != nil {
logger.Error("Calculating checksum failed: %s", err)
return "00000000000000000000000000000000"
}
defer b.Close()
io.Copy(mw, b)
return hex.EncodeToString(mw.Sum(nil))
}
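// Illustrative only (not part of this diff): with the new Checksum field,
// Diff fails fast when two peers run different binaries, before endpoint,
// command-line, or environment comparison. Checksums shortened for display.
//
//	a := &ServerSystemConfig{NEndpoints: 4, Checksum: "9f86d081884c7d65"}
//	b := &ServerSystemConfig{NEndpoints: 4, Checksum: "60303ae22b998861"}
//	err := a.Diff(b)
//	// Expected MinIO binary checksum: 9f86d081884c7d65, seen: 60303ae22b998861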
func verifyServerSystemConfig(ctx context.Context, endpointServerPools EndpointServerPools, gm *grid.Manager) error {
srcCfg := getServerSystemCfg()
clnts := newBootstrapRESTClients(endpointServerPools, gm)
@ -196,7 +228,7 @@ func verifyServerSystemConfig(ctx context.Context, endpointServerPools EndpointS
err := clnt.Verify(ctx, srcCfg)
mu.Lock()
if err != nil {
bootstrapTraceMsg(fmt.Sprintf("clnt.Verify: %v, endpoint: %s", err, clnt))
bootstrapTraceMsg(fmt.Sprintf("bootstrapVerify: %v, endpoint: %s", err, clnt))
if !isNetworkError(err) {
bootLogOnceIf(context.Background(), fmt.Errorf("%s has incorrect configuration: %w", clnt, err), "incorrect_"+clnt.String())
incorrectConfigs = append(incorrectConfigs, fmt.Errorf("%s has incorrect configuration: %w", clnt, err))

View File

@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import (
"github.com/tinylib/msgp/msgp"
)
@ -59,19 +59,17 @@ func (z *ServerSystemConfig) DecodeMsg(dc *msgp.Reader) (err error) {
if z.MinioEnv == nil {
z.MinioEnv = make(map[string]string, zb0003)
} else if len(z.MinioEnv) > 0 {
for key := range z.MinioEnv {
delete(z.MinioEnv, key)
}
clear(z.MinioEnv)
}
for zb0003 > 0 {
zb0003--
var za0002 string
var za0003 string
za0002, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "MinioEnv")
return
}
var za0003 string
za0003, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "MinioEnv", za0002)
@ -79,6 +77,12 @@ func (z *ServerSystemConfig) DecodeMsg(dc *msgp.Reader) (err error) {
}
z.MinioEnv[za0002] = za0003
}
case "Checksum":
z.Checksum, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "Checksum")
return
}
default:
err = dc.Skip()
if err != nil {
@ -92,9 +96,9 @@ func (z *ServerSystemConfig) DecodeMsg(dc *msgp.Reader) (err error) {
// EncodeMsg implements msgp.Encodable
func (z *ServerSystemConfig) EncodeMsg(en *msgp.Writer) (err error) {
// map header, size 3
// map header, size 4
// write "NEndpoints"
err = en.Append(0x83, 0xaa, 0x4e, 0x45, 0x6e, 0x64, 0x70, 0x6f, 0x69, 0x6e, 0x74, 0x73)
err = en.Append(0x84, 0xaa, 0x4e, 0x45, 0x6e, 0x64, 0x70, 0x6f, 0x69, 0x6e, 0x74, 0x73)
if err != nil {
return
}
@ -142,15 +146,25 @@ func (z *ServerSystemConfig) EncodeMsg(en *msgp.Writer) (err error) {
return
}
}
// write "Checksum"
err = en.Append(0xa8, 0x43, 0x68, 0x65, 0x63, 0x6b, 0x73, 0x75, 0x6d)
if err != nil {
return
}
err = en.WriteString(z.Checksum)
if err != nil {
err = msgp.WrapError(err, "Checksum")
return
}
return
}
// MarshalMsg implements msgp.Marshaler
func (z *ServerSystemConfig) MarshalMsg(b []byte) (o []byte, err error) {
o = msgp.Require(b, z.Msgsize())
// map header, size 3
// map header, size 4
// string "NEndpoints"
o = append(o, 0x83, 0xaa, 0x4e, 0x45, 0x6e, 0x64, 0x70, 0x6f, 0x69, 0x6e, 0x74, 0x73)
o = append(o, 0x84, 0xaa, 0x4e, 0x45, 0x6e, 0x64, 0x70, 0x6f, 0x69, 0x6e, 0x74, 0x73)
o = msgp.AppendInt(o, z.NEndpoints)
// string "CmdLines"
o = append(o, 0xa8, 0x43, 0x6d, 0x64, 0x4c, 0x69, 0x6e, 0x65, 0x73)
@ -165,6 +179,9 @@ func (z *ServerSystemConfig) MarshalMsg(b []byte) (o []byte, err error) {
o = msgp.AppendString(o, za0002)
o = msgp.AppendString(o, za0003)
}
// string "Checksum"
o = append(o, 0xa8, 0x43, 0x68, 0x65, 0x63, 0x6b, 0x73, 0x75, 0x6d)
o = msgp.AppendString(o, z.Checksum)
return
}
@ -221,14 +238,12 @@ func (z *ServerSystemConfig) UnmarshalMsg(bts []byte) (o []byte, err error) {
if z.MinioEnv == nil {
z.MinioEnv = make(map[string]string, zb0003)
} else if len(z.MinioEnv) > 0 {
for key := range z.MinioEnv {
delete(z.MinioEnv, key)
}
clear(z.MinioEnv)
}
for zb0003 > 0 {
var za0002 string
var za0003 string
zb0003--
var za0002 string
za0002, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, "MinioEnv")
@ -241,6 +256,12 @@ func (z *ServerSystemConfig) UnmarshalMsg(bts []byte) (o []byte, err error) {
}
z.MinioEnv[za0002] = za0003
}
case "Checksum":
z.Checksum, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, "Checksum")
return
}
default:
bts, err = msgp.Skip(bts)
if err != nil {
@ -266,5 +287,6 @@ func (z *ServerSystemConfig) Msgsize() (s int) {
s += msgp.StringPrefixSize + len(za0002) + msgp.StringPrefixSize + len(za0003)
}
}
s += 9 + msgp.StringPrefixSize + len(z.Checksum)
return
}

View File

@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import (
"bytes"
"testing"

View File

@ -28,6 +28,7 @@ import (
"errors"
"fmt"
"io"
"mime"
"mime/multipart"
"net/http"
"net/textproto"
@ -51,9 +52,9 @@ import (
sse "github.com/minio/minio/internal/bucket/encryption"
objectlock "github.com/minio/minio/internal/bucket/object/lock"
"github.com/minio/minio/internal/bucket/replication"
"github.com/minio/minio/internal/config/cache"
"github.com/minio/minio/internal/config/dns"
"github.com/minio/minio/internal/crypto"
"github.com/minio/minio/internal/etag"
"github.com/minio/minio/internal/event"
"github.com/minio/minio/internal/handlers"
"github.com/minio/minio/internal/hash"
@ -92,7 +93,7 @@ const (
// -- If IP of the entry doesn't match, this means entry is
//
// for another instance. Log an error to console.
func initFederatorBackend(buckets []BucketInfo, objLayer ObjectLayer) {
func initFederatorBackend(buckets []string, objLayer ObjectLayer) {
if len(buckets) == 0 {
return
}
@ -113,10 +114,10 @@ func initFederatorBackend(buckets []BucketInfo, objLayer ObjectLayer) {
domainMissing := err == dns.ErrDomainMissing
if dnsBuckets != nil {
for _, bucket := range buckets {
bucketsSet.Add(bucket.Name)
r, ok := dnsBuckets[bucket.Name]
bucketsSet.Add(bucket)
r, ok := dnsBuckets[bucket]
if !ok {
bucketsToBeUpdated.Add(bucket.Name)
bucketsToBeUpdated.Add(bucket)
continue
}
if !globalDomainIPs.Intersection(set.CreateStringSet(getHostsSlice(r)...)).IsEmpty() {
@ -135,7 +136,7 @@ func initFederatorBackend(buckets []BucketInfo, objLayer ObjectLayer) {
// but if we do see a difference with local domain IPs with
// hostSlice from etcd then we should update with newer
// domainIPs, we proceed to do that here.
bucketsToBeUpdated.Add(bucket.Name)
bucketsToBeUpdated.Add(bucket)
continue
}
@ -144,7 +145,7 @@ func initFederatorBackend(buckets []BucketInfo, objLayer ObjectLayer) {
// bucket names are globally unique in federation at a given
// path prefix, name collision is not allowed. We simply log
// an error and continue.
bucketsInConflict.Add(bucket.Name)
bucketsInConflict.Add(bucket)
}
}
@ -153,7 +154,6 @@ func initFederatorBackend(buckets []BucketInfo, objLayer ObjectLayer) {
g := errgroup.WithNErrs(len(bucketsToBeUpdatedSlice)).WithConcurrency(50)
for index := range bucketsToBeUpdatedSlice {
index := index
g.Go(func() error {
return globalDNSConfig.Put(bucketsToBeUpdatedSlice[index])
}, index)
@ -343,11 +343,9 @@ func (api objectAPIHandlers) ListBucketsHandler(w http.ResponseWriter, r *http.R
Created: dnsRecords[0].CreationDate,
})
}
sort.Slice(bucketsInfo, func(i, j int) bool {
return bucketsInfo[i].Name < bucketsInfo[j].Name
})
} else {
// Invoke the list buckets.
var err error
@ -428,7 +426,7 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
// Content-Md5 is required should be set
// http://docs.aws.amazon.com/AmazonS3/latest/API/multiobjectdeleteapi.html
if _, ok := r.Header[xhttp.ContentMD5]; !ok {
if !validateLengthAndChecksum(r) {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrMissingContentMD5), r.URL)
return
}
@ -560,7 +558,7 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
}, goi, opts, gerr)
if dsc.ReplicateAny() {
if object.VersionID != "" {
object.VersionPurgeStatus = Pending
object.VersionPurgeStatus = replication.VersionPurgePending
object.VersionPurgeStatuses = dsc.PendingStatus()
} else {
object.DeleteMarkerReplicationStatus = dsc.PendingStatus()
@ -594,7 +592,7 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
output[idx] = obj
idx++
}
return
return output
}
// Disable timeouts and cancellation
@ -670,9 +668,7 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
continue
}
defer globalCacheConfig.Delete(bucket, dobj.ObjectName)
if replicateDeletes && (dobj.DeleteMarkerReplicationStatus() == replication.Pending || dobj.VersionPurgeStatus() == Pending) {
if replicateDeletes && (dobj.DeleteMarkerReplicationStatus() == replication.Pending || dobj.VersionPurgeStatus() == replication.VersionPurgePending) {
// copy so we can re-add null ID.
dobj := dobj
if isDirObject(dobj.ObjectName) && dobj.VersionID == "" {
@ -842,7 +838,6 @@ func (api objectAPIHandlers) PutBucketHandler(w http.ResponseWriter, r *http.Req
}
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
apiErr := ErrBucketAlreadyExists
if !globalDomainIPs.Intersection(set.CreateStringSet(getHostsSlice(sr)...)).IsEmpty() {
@ -890,6 +885,30 @@ func (api objectAPIHandlers) PutBucketHandler(w http.ResponseWriter, r *http.Req
})
}
// multipartReader is just like https://pkg.go.dev/net/http#Request.MultipartReader but
// rejects multipart/mixed, as it is not supported by the S3 API.
func multipartReader(r *http.Request) (*multipart.Reader, error) {
v := r.Header.Get("Content-Type")
if v == "" {
return nil, http.ErrNotMultipart
}
if r.Body == nil {
return nil, errors.New("missing form body")
}
d, params, err := mime.ParseMediaType(v)
if err != nil {
return nil, http.ErrNotMultipart
}
if d != "multipart/form-data" {
return nil, http.ErrNotMultipart
}
boundary, ok := params["boundary"]
if !ok {
return nil, http.ErrMissingBoundary
}
return multipart.NewReader(r.Body, boundary), nil
}
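// A hedged test sketch (not part of this diff) of multipartReader's rejection
// paths: a multipart/mixed body fails with http.ErrNotMultipart, and a
// form-data body without a boundary parameter with http.ErrMissingBoundary.
//
//	req := httptest.NewRequest(http.MethodPost, "/bucket", strings.NewReader(""))
//	req.Header.Set("Content-Type", "multipart/mixed; boundary=xxx")
//	_, err := multipartReader(req) // err == http.ErrNotMultipart
//
//	req.Header.Set("Content-Type", "multipart/form-data")
//	_, err = multipartReader(req) // err == http.ErrMissingBoundary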
// PostPolicyBucketHandler - POST policy
// ----------
// This implementation of the POST operation handles object creation with a specified
@ -923,9 +942,14 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
return
}
if r.ContentLength <= 0 {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrEmptyRequestBody), r.URL)
return
}
// Here the parameter is the size of the form data that should
// be loaded in memory, the remainder being written to temporary files.
mp, err := r.MultipartReader()
mp, err := multipartReader(r)
if err != nil {
apiErr := errorCodes.ToAPIErr(ErrMalformedPOSTRequest)
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, err)
@ -937,7 +961,7 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
var (
reader io.Reader
fileSize int64 = -1
actualSize int64 = -1
fileName string
fanOutEntries = make([]minio.PutObjectFanOutEntry, 0, 100)
)
@ -945,6 +969,7 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
maxParts := 1000
// Canonicalize the form values into http.Header.
formValues := make(http.Header)
var headerLen int64
for {
part, err := mp.NextRawPart()
if errors.Is(err, io.EOF) {
@ -986,7 +1011,7 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
return
}
var b bytes.Buffer
headerLen += int64(len(name)) + int64(len(fileName))
if name != "file" {
if http.CanonicalHeaderKey(name) == http.CanonicalHeaderKey("x-minio-fanout-list") {
dec := json.NewDecoder(part)
@ -997,7 +1022,7 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
if err := dec.Decode(&m); err != nil {
part.Close()
apiErr := errorCodes.ToAPIErr(ErrMalformedPOSTRequest)
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, multipart.ErrMessageTooLarge)
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, err)
writeErrorResponse(ctx, w, apiErr, r.URL)
return
}
@ -1007,8 +1032,12 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
continue
}
buf := bytebufferpool.Get()
// value, store as string in memory
n, err := io.CopyN(&b, part, maxMemoryBytes+1)
n, err := io.CopyN(buf, part, maxMemoryBytes+1)
value := buf.String()
buf.Reset()
bytebufferpool.Put(buf)
part.Close()
if err != nil && err != io.EOF {
@ -1030,7 +1059,8 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
writeErrorResponse(ctx, w, apiErr, r.URL)
return
}
formValues[http.CanonicalHeaderKey(name)] = append(formValues[http.CanonicalHeaderKey(name)], b.String())
headerLen += n
formValues[http.CanonicalHeaderKey(name)] = append(formValues[http.CanonicalHeaderKey(name)], value)
continue
}
@ -1039,10 +1069,33 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
// The file or text content must be the last field in the form.
// You cannot upload more than one file at a time.
reader = part
possibleShardSize := (r.ContentLength - headerLen)
if globalStorageClass.ShouldInline(possibleShardSize, false) { // keep versioned false for this check
var b bytes.Buffer
n, err := io.Copy(&b, reader)
if err != nil {
apiErr := errorCodes.ToAPIErr(ErrMalformedPOSTRequest)
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, err)
writeErrorResponse(ctx, w, apiErr, r.URL)
return
}
reader = &b
actualSize = n
}
// we have found the File part of the request; we are done processing the multipart form
break
}
// check if we have a file
if reader == nil {
apiErr := errorCodes.ToAPIErr(ErrMalformedPOSTRequest)
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, errors.New("The file or text content is missing"))
writeErrorResponse(ctx, w, apiErr, r.URL)
return
}
if keyName, ok := formValues["Key"]; !ok {
apiErr := errorCodes.ToAPIErr(ErrMalformedPOSTRequest)
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, errors.New("The name of the uploaded key is missing"))
@ -1140,11 +1193,33 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
return
}
hashReader, err := hash.NewReader(ctx, reader, fileSize, "", "", fileSize)
clientETag, err := etag.FromContentMD5(formValues)
if err != nil {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrInvalidDigest), r.URL)
return
}
var forceMD5 []byte
// Optimization: if the client did not send Content-MD5 and the request uses
// SSE-C or SSE-KMS (or the server runs with `--no-compat`), use a UUID as the ETag.
kind, _ := crypto.IsRequested(formValues)
if !etag.ContentMD5Requested(formValues) && (kind == crypto.SSEC || kind == crypto.S3KMS || !globalServerCtxt.StrictS3Compat) {
forceMD5 = mustGetUUIDBytes()
}
hashReader, err := hash.NewReaderWithOpts(ctx, reader, hash.Options{
Size: actualSize,
MD5Hex: clientETag.String(),
SHA256Hex: "",
ActualSize: actualSize,
DisableMD5: false,
ForceMD5: forceMD5,
})
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
if checksum != nil && checksum.Valid() {
if err = hashReader.AddChecksumNoTrailer(formValues, false); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
@ -1201,9 +1276,9 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
writeErrorResponseHeadersOnly(w, toAPIError(ctx, err))
return
}
opts.WantChecksum = checksum
fanOutOpts := fanOutOptions{Checksum: checksum}
if crypto.Requested(formValues) {
if crypto.SSECopy.IsRequested(r.Header) {
writeErrorResponse(ctx, w, toAPIError(ctx, errInvalidEncryptionParameters), r.URL)
@ -1248,8 +1323,15 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
wantSize := int64(-1)
if actualSize >= 0 {
info := ObjectInfo{Size: actualSize}
wantSize = info.EncryptedSize()
}
// Do not try to verify encrypted content.
hashReader, err = hash.NewReader(ctx, reader, -1, "", "", -1)
hashReader, err = hash.NewReader(ctx, reader, wantSize, "", "", actualSize)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
@ -1260,6 +1342,7 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
return
}
}
opts.EncryptFn = metadataEncrypter(objectEncryptionKey)
pReader, err = pReader.WithEncryption(hashReader, &objectEncryptionKey)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
@ -1303,10 +1386,7 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
// Set the correct hex md5sum for the fan-out stream.
fanOutOpts.MD5Hex = hex.EncodeToString(md5w.Sum(nil))
concurrentSize := 100
if runtime.GOMAXPROCS(0) < concurrentSize {
concurrentSize = runtime.GOMAXPROCS(0)
}
concurrentSize := min(runtime.GOMAXPROCS(0), 100)
fanOutResp := make([]minio.PutObjectFanOutResponse, 0, len(fanOutEntries))
eventArgsList := make([]eventArgs, 0, len(fanOutEntries))
@ -1329,7 +1409,6 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
Key: objInfo.Name,
Error: errs[i].Error(),
})
eventArgsList = append(eventArgsList, eventArgs{
EventName: event.ObjectCreatedPost,
BucketName: objInfo.Bucket,
@ -1342,22 +1421,6 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
continue
}
asize, err := objInfo.GetActualSize()
if err != nil {
asize = objInfo.Size
}
globalCacheConfig.Set(&cache.ObjectInfo{
Key: objInfo.Name,
Bucket: objInfo.Bucket,
ETag: getDecryptedETag(formValues, objInfo, false),
ModTime: objInfo.ModTime,
Expires: objInfo.Expires.UTC().Format(http.TimeFormat),
CacheControl: objInfo.CacheControl,
Metadata: cleanReservedKeys(objInfo.UserDefined),
Size: asize,
})
fanOutResp = append(fanOutResp, minio.PutObjectFanOutResponse{
Key: objInfo.Name,
ETag: getDecryptedETag(formValues, objInfo, false),
@ -1420,7 +1483,7 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
if formValues.Get(postPolicyBucketTagging) != "" {
tags, err := tags.ParseObjectXML(strings.NewReader(formValues.Get(postPolicyBucketTagging)))
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrMalformedPOSTRequest), r.URL)
return
}
tagsStr := tags.String()
@ -1452,22 +1515,6 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
w.Header().Set(xhttp.Location, obj)
}
asize, err := objInfo.GetActualSize()
if err != nil {
asize = objInfo.Size
}
defer globalCacheConfig.Set(&cache.ObjectInfo{
Key: objInfo.Name,
Bucket: objInfo.Bucket,
ETag: etag,
ModTime: objInfo.ModTime,
Expires: objInfo.ExpiresStr(),
CacheControl: objInfo.CacheControl,
Metadata: cleanReservedKeys(objInfo.UserDefined),
Size: asize,
})
// Notify object created event.
defer sendEvent(eventArgs{
EventName: event.ObjectCreatedPost,
@ -1610,9 +1657,11 @@ func (api objectAPIHandlers) HeadBucketHandler(w http.ResponseWriter, r *http.Re
return
}
if s3Error := checkRequestAuthType(ctx, r, policy.ListBucketAction, bucket, ""); s3Error != ErrNone {
writeErrorResponseHeadersOnly(w, errorCodes.ToAPIErr(s3Error))
return
if s3Error := checkRequestAuthType(ctx, r, policy.HeadBucketAction, bucket, ""); s3Error != ErrNone {
if s3Error := checkRequestAuthType(ctx, r, policy.ListBucketAction, bucket, ""); s3Error != ErrNone {
writeErrorResponseHeadersOnly(w, errorCodes.ToAPIErr(s3Error))
return
}
}
getBucketInfo := objectAPI.GetBucketInfo
@ -1715,7 +1764,7 @@ func (api objectAPIHandlers) DeleteBucketHandler(w http.ResponseWriter, r *http.
}
globalNotificationSys.DeleteBucketMetadata(ctx, bucket)
globalReplicationPool.deleteResyncMetadata(ctx, bucket)
globalReplicationPool.Get().deleteResyncMetadata(ctx, bucket)
// Call site replication hook.
replLogIf(ctx, globalSiteReplicationSys.DeleteBucketHook(ctx, bucket, forceDelete))
@ -1764,6 +1813,10 @@ func (api objectAPIHandlers) PutBucketObjectLockConfigHandler(w http.ResponseWri
return
}
// Audit log tags.
reqInfo := logger.GetReqInfo(ctx)
reqInfo.SetTags("retention", config.String())
configData, err := xml.Marshal(config)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)

View File

@ -32,7 +32,7 @@ import (
// Wrapper for calling RemoveBucket HTTP handler tests for both Erasure multiple disks and single node setup.
func TestRemoveBucketHandler(t *testing.T) {
ExecObjectLayerAPITest(t, testRemoveBucketHandler, []string{"RemoveBucket"})
ExecObjectLayerAPITest(ExecObjectLayerAPITestArgs{t: t, objAPITest: testRemoveBucketHandler, endpoints: []string{"RemoveBucket"}})
}
func testRemoveBucketHandler(obj ObjectLayer, instanceType, bucketName string, apiRouter http.Handler,
@ -78,7 +78,7 @@ func testRemoveBucketHandler(obj ObjectLayer, instanceType, bucketName string, a
// Wrapper for calling GetBucketPolicy HTTP handler tests for both Erasure multiple disks and single node setup.
func TestGetBucketLocationHandler(t *testing.T) {
ExecObjectLayerAPITest(t, testGetBucketLocationHandler, []string{"GetBucketLocation"})
ExecObjectLayerAPITest(ExecObjectLayerAPITestArgs{t: t, objAPITest: testGetBucketLocationHandler, endpoints: []string{"GetBucketLocation"}})
}
func testGetBucketLocationHandler(obj ObjectLayer, instanceType, bucketName string, apiRouter http.Handler,
@ -188,7 +188,6 @@ func testGetBucketLocationHandler(obj ObjectLayer, instanceType, bucketName stri
if errorResponse.Code != testCase.errorResponse.Code {
t.Errorf("Test %d: %s: Expected the error code to be `%s`, but instead found `%s`", i+1, instanceType, testCase.errorResponse.Code, errorResponse.Code)
}
}
// Test for Anonymous/unsigned http request.
@ -220,7 +219,7 @@ func testGetBucketLocationHandler(obj ObjectLayer, instanceType, bucketName stri
// Wrapper for calling HeadBucket HTTP handler tests for both Erasure multiple disks and single node setup.
func TestHeadBucketHandler(t *testing.T) {
ExecObjectLayerAPITest(t, testHeadBucketHandler, []string{"HeadBucket"})
ExecObjectLayerAPITest(ExecObjectLayerAPITestArgs{t: t, objAPITest: testHeadBucketHandler, endpoints: []string{"HeadBucket"}})
}
func testHeadBucketHandler(obj ObjectLayer, instanceType, bucketName string, apiRouter http.Handler,
@ -290,7 +289,6 @@ func testHeadBucketHandler(obj ObjectLayer, instanceType, bucketName string, api
if recV2.Code != testCase.expectedRespStatus {
t.Errorf("Test %d: %s: Expected the response status to be `%d`, but instead found `%d`", i+1, instanceType, testCase.expectedRespStatus, recV2.Code)
}
}
// Test for Anonymous/unsigned http request.
@ -322,7 +320,7 @@ func testHeadBucketHandler(obj ObjectLayer, instanceType, bucketName string, api
// Wrapper for calling TestListMultipartUploadsHandler tests for both Erasure multiple disks and single node setup.
func TestListMultipartUploadsHandler(t *testing.T) {
ExecObjectLayerAPITest(t, testListMultipartUploadsHandler, []string{"ListMultipartUploads"})
ExecObjectLayerAPITest(ExecObjectLayerAPITestArgs{t: t, objAPITest: testListMultipartUploadsHandler, endpoints: []string{"ListMultipartUploads"}})
}
// testListMultipartUploadsHandler - Tests validate listing of multipart uploads.
@ -558,7 +556,7 @@ func testListMultipartUploadsHandler(obj ObjectLayer, instanceType, bucketName s
// Wrapper for calling TestListBucketsHandler tests for both Erasure multiple disks and single node setup.
func TestListBucketsHandler(t *testing.T) {
ExecObjectLayerAPITest(t, testListBucketsHandler, []string{"ListBuckets"})
ExecObjectLayerAPITest(ExecObjectLayerAPITestArgs{t: t, objAPITest: testListBucketsHandler, endpoints: []string{"ListBuckets"}})
}
// testListBucketsHandler - Tests validate listing of buckets.
@ -649,7 +647,7 @@ func testListBucketsHandler(obj ObjectLayer, instanceType, bucketName string, ap
// Wrapper for calling DeleteMultipleObjects HTTP handler tests for both Erasure multiple disks and single node setup.
func TestAPIDeleteMultipleObjectsHandler(t *testing.T) {
ExecObjectLayerAPITest(t, testAPIDeleteMultipleObjectsHandler, []string{"DeleteMultipleObjects", "PutBucketPolicy"})
ExecObjectLayerAPITest(ExecObjectLayerAPITestArgs{t: t, objAPITest: testAPIDeleteMultipleObjectsHandler, endpoints: []string{"DeleteMultipleObjects", "PutBucketPolicy"}})
}
func testAPIDeleteMultipleObjectsHandler(obj ObjectLayer, instanceType, bucketName string, apiRouter http.Handler,
@ -659,7 +657,7 @@ func testAPIDeleteMultipleObjectsHandler(obj ObjectLayer, instanceType, bucketNa
sha256sum := ""
var objectNames []string
for i := 0; i < 10; i++ {
for i := range 10 {
contentBytes := []byte("hello")
objectName := "test-object-" + strconv.Itoa(i)
if i == 0 {
@ -689,7 +687,7 @@ func testAPIDeleteMultipleObjectsHandler(obj ObjectLayer, instanceType, bucketNa
// The following block will create a bucket policy with delete object to 'public/*'. This is
// to test a mixed response of a successful & failure while deleting objects in a single request
policyBytes := []byte(fmt.Sprintf(`{"Id": "Policy1637752602639", "Version": "2012-10-17", "Statement": [{"Sid": "Stmt1637752600730", "Action": "s3:DeleteObject", "Effect": "Allow", "Resource": "arn:aws:s3:::%s/public/*", "Principal": "*"}]}`, bucketName))
policyBytes := fmt.Appendf(nil, `{"Id": "Policy1637752602639", "Version": "2012-10-17", "Statement": [{"Sid": "Stmt1637752600730", "Action": "s3:DeleteObject", "Effect": "Allow", "Resource": "arn:aws:s3:::%s/public/*", "Principal": "*"}]}`, bucketName)
rec := httptest.NewRecorder()
req, err := newTestSignedRequestV4(http.MethodPut, getPutPolicyURL("", bucketName), int64(len(policyBytes)), bytes.NewReader(policyBytes),
credentials.AccessKey, credentials.SecretKey, nil)

Some files were not shown because too many files have changed in this diff.