Compare commits

...

137 Commits

Author SHA1 Message Date
dorman
3a0cc6c86e
fix doc 404 (#21670) 2025-10-26 19:47:37 -07:00
yangw
10b0a234d2
fix: update metric descriptions to specify current MinIO server instance (#21638)
Signed-off-by: yangw <wuyangmuc@gmail.com>
2025-10-23 21:06:31 -07:00
Raul-Mircea Crivineanu
18f97e70b1
Updates for conditional put read quorum issue (#21653) 2025-10-23 21:05:31 -07:00
Menno Finlay-Smits
52eee5a2f1
fix(api): Don't send multiple responses for one request (#21651)
fix(api): Don't send responses twice.

In some cases multiple responses are being sent for one request, causing
the API server to incorrectly drop connections.

This change introduces a ResponseWriter that tracks whether a
response has already been sent. It is used to prevent a second response
from being sent when one already has been (e.g. by a precondition check
function).

Fixes #21633.

Co-authored-by: Menno Finlay-Smits <hello@menno.io>
2025-10-23 21:05:19 -07:00
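A minimal Go sketch of the once-only writer idea described above (names are illustrative, not MinIO's actual implementation):

```go
package api

import "net/http"

// onceResponseWriter suppresses any response after the first, so a
// handler that already replied (e.g. a precondition check) cannot
// trigger a second response on the same request.
type onceResponseWriter struct {
	http.ResponseWriter
	wroteHeader bool
}

func (w *onceResponseWriter) WriteHeader(code int) {
	if w.wroteHeader {
		return // drop the duplicate response
	}
	w.wroteHeader = true
	w.ResponseWriter.WriteHeader(code)
}

func (w *onceResponseWriter) Write(b []byte) (int, error) {
	if !w.wroteHeader {
		w.WriteHeader(http.StatusOK) // implicit 200, as net/http does
	}
	return w.ResponseWriter.Write(b)
}

// Written reports whether a response has already been sent, so later
// code (e.g. the precondition check path) can skip composing another.
func (w *onceResponseWriter) Written() bool { return w.wroteHeader }
```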
Rishabh Agrahari
c6d3aac5c4
Fix typo in entrypoint script path in README (#21657) 2025-10-23 08:10:39 -07:00
M Alvee
fa18589d1c
fix: Tagging in PostPolicy upload does not enforce policy tags (#21656) 2025-10-23 08:10:12 -07:00
Harshavardhana
05e569960a update scripts pointing to internal registry for community releases 2025-10-19 01:22:05 -07:00
Harshavardhana
9e49d5e7a6 update README.md and other docs to point to source only releases 2025-10-15 10:29:55 -07:00
Aditya Manthramurthy
c1a49490c7
fix: check sub-policy properly when present (#21642)
This fixes a security issue where sub-policy attached to a service
account or STS account is not properly validated under certain "own"
account operations (like creating new service accounts). This allowed a
service account to create new service accounts for the same user
bypassing the inline policy restriction.
2025-10-15 10:00:45 -07:00
Ravind Kumar
334c313da4
Change documentation link in README (#21636)
Updated documentation link to point to the GitHub project.
2025-10-10 12:00:53 -07:00
cduzer
1b8ac0af9f
fix: allow trailing slash in AWS S3 POST policies (#21612) 2025-10-10 11:57:35 -07:00
Mark Theunissen
ba3c0fd1c7
Bump Go version in toolchain directive to 1.24.8 (#21629) 2025-10-10 11:57:03 -07:00
Ravind Kumar
d51a4a4ff6
Update README with Docker and Helm installation instructions (#21627)
Added instructions for building Docker image and using Helm charts.

This closes the loop on supported methods for deploying MinIO with the latest changes.
2025-10-09 15:10:11 -07:00
Harshavardhana
62383dfbfe
Fix formatting of features in README.md 2025-10-07 09:59:23 -07:00
Ravind Kumar
bde0d5a291
Updating readme for MinIO docs (#21625) 2025-10-06 22:36:26 -07:00
yangw
534f4a9fb1
fix: timeN function return final closure not be called (#21615) 2025-09-30 23:06:01 -07:00
Klaus Post
b8631cf531
Use new gofumpt (#21613)
Update tinylib. Should fix CI.

`gofumpt -w .&&go generate ./...`
2025-09-28 13:59:21 -07:00
jiuker
456d9462e5
fix: after saveRebalanceStats cancel will be empty (#21597) 2025-09-19 21:51:57 -07:00
jiuker
756f3c8142
fix: incorrect poolID when after decommission adding pools (#21590) 2025-09-18 04:47:48 -07:00
mosesdd
7a80ec1cce
fix: LDAP TLS handshake fails with StartTLS and tls_skip_verify=off (#21582)
Fixes #21581
2025-09-17 00:58:27 -07:00
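For reference, the usual shape of a StartTLS-with-verification setup, sketched with github.com/go-ldap/ldap/v3 (illustrative only; the actual fix may differ): with tls_skip_verify=off the handshake succeeds only when the tls.Config carries a ServerName matching the server certificate.

```go
package main

import (
	"crypto/tls"
	"log"

	"github.com/go-ldap/ldap/v3"
)

func main() {
	conn, err := ldap.DialURL("ldap://ldap.example.com:389")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Upgrade the plain connection; verification is on, so ServerName
	// must match the certificate presented by the server.
	if err := conn.StartTLS(&tls.Config{ServerName: "ldap.example.com"}); err != nil {
		log.Fatal(err)
	}
}
```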
M Alvee
ae71d76901
fix: remove unnecessary replication checks (#21569) 2025-09-08 10:43:13 -07:00
M Alvee
07c3a429bf
fix: conditional checks write for multipart (#21567) 2025-09-07 09:13:09 -07:00
Minio Trusted
0cde982902 Update yaml files to latest version RELEASE.2025-09-06T17-38-46Z 2025-09-07 05:14:10 +00:00
Ian Roberts
d0f50cdd9b
fix: use correct dummy ARN for claim-based OIDC provider when listing access keys (#21549)
fix: use correct dummy ARN for claim-based OIDC provider

When listing OIDC access keys, use the correct ARN when looking up the provider configuration for the claim-based provider.  Without this it was impossible to list access keys for a claim-based provider, only for a role-policy-based provider.

Fixes minio/minio#21548
2025-09-06 10:38:46 -07:00
WGH
da532ab93d
Fix support for legacy compression env variables (#21533)
Commit b6eb8dff649b0f46c12d24e89aa11254fb0132fa renamed the compression
setting environment variables to follow a consistent style.

Although it preserved backward compatibility for the most part (i.e. it
handled MINIO_COMPRESS_ALLOW_ENCRYPTION, MINIO_COMPRESS_EXTENSIONS, and
MINIO_COMPRESS_MIME_TYPES), MINIO_COMPRESS_ENABLE was left behind.

Additionally, due to incorrect fallback ordering, and DefaultKVS
containing enable=off allow_encryption=off (so kvs.Get should've been
tried last), that commit broke MINIO_COMPRESS_ALLOW_ENCRYPTION (even
though it appeared to be handled), and even older MINIO_COMPRESS, too.

The legacy MIME types and extensions variables take precedence over both
config and new variables, so they don't need fixing.
2025-09-06 10:37:10 -07:00
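A hedged sketch of the corrected lookup order (helper and key names here are illustrative): the new variable wins, then the legacy one, and the config value only last, so DefaultKVS's enable=off can no longer shadow a legacy environment setting.

```go
package config

import "os"

// lookupCompressEnable resolves the "enable" setting: new env var first,
// then the legacy spelling, and the stored config value only as a last
// resort. Querying the config earlier would be wrong, because DefaultKVS
// always carries enable=off and would mask the legacy variables.
func lookupCompressEnable(kvsGet func(string) string) string {
	if v := os.Getenv("MINIO_COMPRESS_ENABLE"); v != "" {
		return v
	}
	if v := os.Getenv("MINIO_COMPRESS"); v != "" { // legacy variable
		return v
	}
	return kvsGet("enable") // config (or DefaultKVS) last
}
```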
M Alvee
558fc1c09c
fix: return error on conditional write for non existing object (#21550) 2025-09-06 10:34:38 -07:00
Alex
9fdbf6fe83
Updated object-browser to the latest version v2.0.4 (#21564)
Signed-off-by: Benjamin Perez <benjamin@bexsoft.net>
2025-09-06 10:33:19 -07:00
jiuker
5c87d4ae87
fix: when save the rebalanceStats not found the config file (#21547) 2025-09-04 13:47:24 -07:00
Klaus Post
f0b91e5504
Run modernize (#21546)
`go run golang.org/x/tools/gopls/internal/analysis/modernize/cmd/modernize@latest -fix -test ./...` executed.

`go generate ./...` ran afterwards to keep generated code up to date.
2025-08-28 19:39:48 -07:00
Manuel Reis
3b7cb6512c
Revert dns.msgUnPath, fixes #21541 (#21542)
* Add more tests to UnPath function
* Revert implementation on dns.msgUnPath. Fixes: #21541
2025-08-28 10:31:12 -07:00
Mark Theunissen
4ea6f3b06b
fix: invalid checksum on site replication with conforming checksum types (#21535) 2025-08-22 07:15:21 -07:00
jiuker
86d9d9b55e
fix: use amqp.ParseURL to parse amqp url (#21528) 2025-08-20 21:25:07 -07:00
Denis Peshkov
5a35585acd
http/listener: fix bugs and simplify (#21514)
* Store `ctx.Done` channel in a struct instead of a `ctx`. See: https://go.dev/blog/context-and-structs
* Return from `handleListener` on `ctx` cancellation, preventing goroutine leaks
* Simplify `handleListener` by removing the `send` closure. The `handleListener` is inlined by the compiler
* Return the first error from `Close`
* Preallocate slice in `Addrs`
* Reduce duplication in handling `opts.Trace`
* http/listener: revert error propagation from Close()
* http/listener: preserve original listener address in Addr()
* Preserve the original address when calling Addr() with multiple listeners
* Remove unused listeners from the slice
2025-08-12 11:22:12 -07:00
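A rough sketch of the goroutine-leak fix from the list above, assuming a merged accept channel (all names illustrative):

```go
package xhttp

import "net"

type acceptResult struct {
	conn net.Conn
	err  error
}

// handleListener feeds accepted connections into a shared channel and
// returns once done is closed, so no goroutine stays blocked on the
// send after shutdown. The done channel is stored in the owning struct
// instead of a ctx, per the change above.
func handleListener(done <-chan struct{}, l net.Listener, out chan<- acceptResult) {
	for {
		conn, err := l.Accept()
		select {
		case out <- acceptResult{conn: conn, err: err}:
		case <-done:
			if conn != nil {
				conn.Close() // nobody will consume it; close to avoid a leak
			}
			return
		}
		if err != nil {
			return // listener closed or a fatal accept error
		}
	}
}
```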
Daryl White
0848e69602
Update docs links throughout (#21513) 2025-08-12 11:20:36 -07:00
M Alvee
02ba581ecf
custom user-agent transport wrapper (#21483) 2025-08-08 10:51:53 -07:00
Ian Roberts
b44b2a090c
fix: when claim-based OIDC is configured, treat unknown roleArn as claim-based auth (#21512)
RoleARN is a required parameter in AssumeRoleWithWebIdentity, 
according to the standard AWS implementation, and the official 
AWS SDKs and CLI will not allow you to assume a role from a JWT 
without also specifying a RoleARN.  This meant that it was not 
possible to use the official SDKs for claim-based OIDC with Minio 
(minio/minio#21421), since Minio required you to _omit_ the RoleARN in this case.

minio/minio#21468 attempted to fix this by disabling the validation 
of the RoleARN when a claim-based provider was configured, but this had 
the side effect of making it impossible to have a mixture of claim-based 
and role-based OIDC providers configured at the same time - every 
authentication would be treated as claim-based, ignoring the RoleARN entirely.

This is an alternative fix, whereby:

- _if_ the `RoleARN` is one that Minio knows about, then use the associated role policy
- if the `RoleARN` is not recognised, but there is a claim-based provider configured, then ignore the role ARN and attempt authentication with the claim-based provider
- if the `RoleARN` is not recognised, and there is _no_ claim-based provider, then return an error.
2025-08-08 10:51:23 -07:00
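In sketch form, the dispatch reads roughly like this (hypothetical names, not MinIO's actual identifiers):

```go
package iam

import "fmt"

type provider struct{ name string }

// resolveOIDCProvider picks the provider for AssumeRoleWithWebIdentity:
// a recognised RoleARN maps to its role-policy provider; an unknown one
// falls back to the claim-based provider when configured, else errors.
func resolveOIDCProvider(roleArn string, roleProviders map[string]*provider, claimProvider *provider) (*provider, error) {
	if p, ok := roleProviders[roleArn]; ok {
		return p, nil // known RoleARN: use the associated role policy
	}
	if claimProvider != nil {
		return claimProvider, nil // ignore the RoleARN, authenticate via claims
	}
	return nil, fmt.Errorf("unknown RoleARN %q and no claim-based provider configured", roleArn)
}
```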
dorman
c7d6a9722d
Modify permission verification type (#21505) 2025-08-08 02:47:37 -07:00
jiuker
a8abdc797e
fix: add name and description to ldap accesskey list (#21511) 2025-08-07 19:46:04 -07:00
M Alvee
0638ccc5f3
fix: claim based oidc for official aws libraries (#21468) 2025-08-07 19:42:38 -07:00
jiuker
b1a34fd63f
fix: errUploadIDNotFound will be ignored when err is from peer client (#21504) 2025-08-07 19:38:41 -07:00
Klaus Post
ffcfa36b13
Check legalHoldPerm (#21508)
The provided parameter should be checked before accepting legal hold
2025-08-07 19:38:25 -07:00
Aditya Kotra
376fbd11a7
fix(helm): do not suspend versioning by default for buckets, only set versioning if specified(21349) (#21494)
Signed-off-by: Aditya Kotra <kaditya030@gmail.com>
2025-08-07 02:47:02 -07:00
dorman
c76f209ccc
Optimize outdated commands in the log (#21498) 2025-08-06 16:48:58 -07:00
M Alvee
7a6a2256b1
imagePullSecrets consistent types for global , local (#21500) 2025-08-06 16:48:24 -07:00
Johannes Horn
d002beaee3
feat: add variable for datasource in grafana dashboards (#21470) 2025-08-03 18:46:49 -07:00
jiuker
71f293d9ab
fix: record extral skippedEntry for listObject (#21484) 2025-08-01 08:53:35 -07:00
jiuker
e3d183b6a4
bring more idempotent behavior to AbortMultipartUpload() (#21475)
fix #21456
2025-07-30 23:57:23 -07:00
Alex
752abc2e2c
Update console to v2.0.3 (#21474)
Signed-off-by: Benjamin Perez <benjamin@bexsoft.net>
Co-authored-by: Benjamin Perez <benjamin@bexsoft.net>
2025-07-30 10:57:17 -07:00
Minio Trusted
b9f0e8c712 Update yaml files to latest version RELEASE.2025-07-23T15-54-02Z 2025-07-23 18:28:46 +00:00
M Alvee
7ced9663e6
simplify validating policy mapping (#21450) 2025-07-23 08:54:02 -07:00
MagicPig
50fcf9b670
fix boundary value bug when objTime ends in whole seconds (without sub-second) (#21419) 2025-07-23 05:36:06 -07:00
Harshavardhana
64f5c6103f
wait for metadata reads on minDisks+1 for HEAD/GET when data==parity (#21449)
fixes a regression since #19741
2025-07-23 04:21:15 -07:00
Poorna
e909be6380 send replication requests to correct pool (#1162)
Fixes incorrect application of ilm expiry rules on versioned objects
when replication is enabled.

Regression from https://github.com/minio/minio/pull/20441 which sends
DeleteObject calls to all pools. This is a problem for the replication + ilm
scenario, since a replicated version can end up in a pool by itself instead of
the pool where the remaining object versions reside.

For example, if the delete marker is set on pool1 and object versions exist on
pool2, the second rule below will cause the delete marker to be expired by the
ilm policy, since it is the only version present in pool1:
```
{
  "Rules": [
   {
    "ID": "cs6il1ri2hp48g71mdjg",
    "NoncurrentVersionExpiration": {
     "NoncurrentDays": 14
    },
    "Status": "Enabled"
   },
   {
    "Expiration": {
     "ExpiredObjectDeleteMarker": true
    },
    "ID": "cs6inj3i2hp4po19cil0",
    "Status": "Enabled"
   }
  ]
}
```
2025-07-19 13:27:52 -07:00
jiuker
83b2ad418b
fix: restrict SinglePool by the minimum free drive threshold (#21115) 2025-07-18 23:25:44 -07:00
Loganaden Velvindron
7a64bb9766
Add support for X25519MLKEM768 (#21435)
Signed-off-by: Bhuvanesh Fokeer <fokeerbhuvanesh@cyberstorm.mu>
Signed-off-by: Nakul Baboolall <nkb@cyberstorm.mu>
Signed-off-by: Sehun Bissessur <sehun.bissessur@cyberstorm.mu>
2025-07-18 23:23:15 -07:00
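For context, Go's crypto/tls (1.24+) exposes this hybrid post-quantum group directly; a client could pin it like so (an illustrative sketch, not MinIO's code):

```go
package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	cfg := &tls.Config{
		// Prefer the hybrid post-quantum key exchange, fall back to X25519.
		CurvePreferences: []tls.CurveID{tls.X25519MLKEM768, tls.X25519},
	}
	fmt.Println(cfg.CurvePreferences)
}
```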
Minio Trusted
34679befef Update yaml files to latest version RELEASE.2025-07-18T21-56-31Z 2025-07-18 23:28:59 +00:00
Harshavardhana
4021d8c8e2
fix: lambda handler response to match the lambda return status (#21436) 2025-07-18 14:56:31 -07:00
Burkov Egor
de234b888c
fix: admin api - SetPolicyForUserOrGroup avoid nil deref (#21400) 2025-07-01 09:00:17 -07:00
Mark Theunissen
2718d9a430
CopyObject must preserve checksums and encrypt them if required (#21399) 2025-06-25 08:08:54 -07:00
Alex
a65292cab1
Update Console to latest version (#21397)
Signed-off-by: Benjamin Perez <benjamin@bexsoft.net>
2025-06-24 17:33:22 -07:00
Minio Trusted
e0c79be251 Update yaml files to latest version RELEASE.2025-06-13T11-33-47Z 2025-06-23 20:28:38 +00:00
jiuker
a6c538c5a1
fix: honor renamePart's PathNotFound (#21378) 2025-06-13 04:33:47 -07:00
jiuker
e1fcaebc77
fix: when ListMultipartUploads append result from cache should filter with bucket (#21376) 2025-06-12 00:09:12 -07:00
Johannes Horn
21409f112d
add networkpolicy for job and add possibility to define egress ports (#20951) 2025-06-08 09:14:18 -07:00
Sung Jeon
417c8648f0
use provided region in tier configuration for S3 backend (#21365)
fixes #21364
2025-06-08 09:13:30 -07:00
ffgan
e2245a0b12
allow cross-compiling support for RISC-V 64 (#21348)
this is a minor PR that supports building on RISC-V 64;
it is for compilation only. There is no guarantee
that the code is tested or will work in production.
2025-06-08 09:12:05 -07:00
Shubhendu
b4b3d208dd
Add targetArn label for bucket replication metrics (#21354)
Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2025-06-04 13:45:31 -07:00
ILIYA
0a36d41dcd
modernizes for loop in cmd/, internal/ (#21309) 2025-05-27 08:19:03 -07:00
jiuker
ea77bcfc98
fix: panic for TestListObjectsWithILM (#21322) 2025-05-27 08:18:36 -07:00
jiuker
9f24ca5d66
fix: empty fileName cause Reader nil for PostPolicyBucketHandler (#21323) 2025-05-27 08:18:26 -07:00
VARUN SHARMA
816666a4c6
make some targeted updates to README.md (#21125) 2025-05-26 12:34:56 -07:00
Anis Eleuch
2c7fe094d1
s3: Fix early listing stopping when ILM is enabled (#472) (#21246)
S3 listing call is usually sent with a 'max-keys' parameter. This
'max-keys' will also be passed to WalkDir() call. However, when ILM is
enabled in a bucket and some objects are skipped, the listing can
return IsTruncated set to false even if there are more entries in
the drives.

The reason is that the drives stop feeding the listing code once they
reach the max-keys limit, and the listing code concludes that listing is
finished because it is no longer being fed.

Ask the drives not to stop listing, and rely on context cancellation
to stop listing in the drives as fast as possible.
2025-05-26 00:06:43 -07:00
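A minimal sketch of the resulting pattern, with walkAll standing in for the drive-side walker: the producer ignores max-keys entirely, and the consumer cancels the context once it has collected enough entries.

```go
package main

import (
	"context"
	"fmt"
)

// walkAll stands in for the drive-side walker: it streams entries until
// the context is canceled and applies no max-keys cutoff of its own.
func walkAll(ctx context.Context, out chan<- string) {
	defer close(out)
	for i := 0; ; i++ {
		select {
		case out <- fmt.Sprintf("object-%d", i):
		case <-ctx.Done():
			return
		}
	}
}

func main() {
	const maxKeys = 5
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	results := make(chan string, 16)
	go walkAll(ctx, results)

	var keys []string
	for k := range results {
		keys = append(keys, k) // ILM filtering would drop entries here
		if len(keys) == maxKeys {
			cancel() // stop the drives as fast as possible
			break
		}
	}
	fmt.Println(keys)
}
```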
Harshavardhana
9ebe168782 add pull requests etiquette 2025-05-25 09:32:03 -07:00
Minio Trusted
ee2028cde6 Update yaml files to latest version RELEASE.2025-05-24T17-08-30Z 2025-05-24 21:37:47 +00:00
Frank Elsinga
ecde75f911
docs: use github-style-notes in the readme (#21308)
use notes in the readme
2025-05-24 10:08:30 -07:00
jiuker
12a6ea89cc
fix: Use mime encode for Non-US-ASCII metadata (#21282) 2025-05-22 08:42:54 -07:00
Anis Eleuch
63e102c049
heal: Avoid disabling scanner healing in single and dist erasure mode (#21302)
A typo disabled the scanner healing in erasure mode. Fix it.
2025-05-22 08:42:29 -07:00
Alex
160f8a901b
Update Console UI to latest version (#21294) 2025-05-21 08:59:37 -07:00
jiuker
ef9b03fbf5
fix: unable to get net.Interface cause panic (#21277) 2025-05-16 07:28:04 -07:00
Andreas Auernhammer
1d50cae43d
remove support for FIPS 140-2 with boringcrypto (#21292)
This commit removes FIPS 140-2 related code for the following
reasons:
 - FIPS 140-2 is a compliance requirement, not a security requirement. Being
   FIPS 140-2 compliant has no security implication on its own.
   From a technical perspective, a FIPS 140-2 compliant implementation
   is not necessarily secure and a non-FIPS 140-2 compliant implementation
   is not necessarily insecure. It depends on the concrete design and
   crypto primitives/constructions used.
 - The boringcrypto branch used to achieve FIPS 140-2 compliance was never
   officially supported by the Go team and is now in maintenance mode.
   It is replaced by a built-in FIPS 140-3 module. It will be removed
   eventually. Ref: https://github.com/golang/go/issues/69536
 - FIPS 140-2 modules are no longer re-certified after Sep. 2026.
   Ref: https://csrc.nist.gov/projects/cryptographic-module-validation-program

Signed-off-by: Andreas Auernhammer <github@aead.dev>
2025-05-16 07:27:42 -07:00
Klaus Post
c0a33952c6
Allow FTPS to force TLS (#21251)
Fixes #21249

Example params: `-ftp=force-tls=true -ftp="tls-private-key=ftp/private.key" -ftp="tls-public-cert=ftp/public.crt"`

If MinIO is set up for TLS those certs will be used.
2025-05-09 13:10:19 -07:00
Alex
8cad40a483
Update UI console to the latest version (#21278)
Signed-off-by: Benjamin Perez <benjamin@bexsoft.net>
2025-05-09 13:09:54 -07:00
Harshavardhana
6d18dba9a2
return error for AppendObject() API (#21272) 2025-05-07 08:37:12 -07:00
jiuker
9ea14c88d8
cleanup: use NewWithOptions replace the Deprecated one (#21243) 2025-04-29 08:35:51 -07:00
jiuker
30a1261c22
fix: track object and bucket for exipreAll (#21241) 2025-04-27 21:19:38 -07:00
Matt Lloyd
0e017ab071
feat: support nats nkey seed auth (#21231) 2025-04-26 21:30:57 -07:00
Harshavardhana
f14198e3dc update with newer pkger release 2025-04-26 17:44:22 -07:00
Burkov Egor
93c389dbc9
typo: return actual error from RemoveRemoteTargetsForEndpoint (#21238) 2025-04-26 01:43:10 -07:00
jiuker
ddd9a84cd7
allow concurrent aborts on active uploadParts() (#21229)
allow aborting active uploads in progress; however, fail these
uploads subsequently during the commit phase and return appropriate errors
2025-04-24 22:41:04 -07:00
Celis
b7540169a2
Add documentation for replication_max_lrg_workers (#21236) 2025-04-24 16:34:26 -07:00
Klaus Post
f01374950f
Use go mod tool to install tools for go generate (#21232)
Use go tool for generators

* Use go.mod tool section
* Install tools with go generate
* Update dependencies
* Remove madmin fork.
2025-04-24 16:34:11 -07:00
Taran Pelkey
18aceae620
Fix nil dereference in adding service account (#21235)
Fixes #21234
2025-04-24 11:14:00 -07:00
Andreas Auernhammer
427826abc5
update minio/kms-go/kms SDK (#21233)
Signed-off-by: Andreas Auernhammer <github@aead.dev>
2025-04-24 08:33:57 -07:00
Harshavardhana
2780778c10 Revert "Fix: Change TTFB metric type to histogram (#20999)"
This reverts commit 8d223e07fb7f8593ae56dfd2f4a0688fe1ee8a17.
2025-04-23 13:56:18 -07:00
Shubhendu
2d8ba15b9e
Correct spelling (#21225) 2025-04-23 08:13:23 -07:00
Minio Trusted
bd6dd55e7f Update yaml files to latest version RELEASE.2025-04-22T22-12-26Z 2025-04-22 22:34:07 +00:00
Matt Lloyd
0d7408fc99
feat: support nats tls handshake first (#21008) 2025-04-22 15:12:26 -07:00
jiuker
864f80e226
fix: batch expiry job doesn't report delete marker in batch-status (#21183) 2025-04-22 04:16:32 -07:00
Harshavardhana
0379d6a37f fix: permissions for docker-compose 2025-04-21 09:24:31 -07:00
Harshavardhana
43aa8e4259
support autogenerated credentials for KMS_SECRET_KEY properly (#21223)
we had a chicken-and-egg problem with this feature: even
when used with KES, the credentials generation would
not run in the correct sequence, causing setup/deployment
disruptions.

This PR streamlines all of this properly to ensure that
this functionality works as advertised.
2025-04-21 09:23:51 -07:00
Harshavardhana
e2ed696619 fix: docker-compose link since latest release 2025-04-20 10:05:30 -07:00
Klaus Post
fb3f67a597
Fix shared error buffer (#21203)
v.cancelFn(RemoteErr(m.Payload)) would use an already returned buffer.

Simplify code a bit as well by returning on errors.
2025-04-18 02:10:55 -07:00
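A small sketch of the bug class (the remoteErr stand-in retains the slice it is given, as the real type evidently does): copy the payload out before the pooled buffer is returned.

```go
package grid

import "bytes"

// remoteErr is a stand-in that retains the byte slice it is handed,
// which is what made v.cancelFn(RemoteErr(m.Payload)) unsafe once the
// payload's pooled buffer was returned and reused.
type remoteErr struct{ msg []byte }

func (e *remoteErr) Error() string { return string(e.msg) }

func deliverErr(cancelFn func(error), payload []byte) {
	// Wrong: cancelFn(&remoteErr{msg: payload}) keeps referencing the
	// pooled buffer. Right: take a private copy first.
	cancelFn(&remoteErr{msg: bytes.Clone(payload)})
}
```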
dependabot[bot]
7ee75368e0
build(deps): bump github.com/nats-io/nats-server/v2 from 2.9.23 to 2.10.27 (#21191)
build(deps): bump github.com/nats-io/nats-server/v2

Bumps [github.com/nats-io/nats-server/v2](https://github.com/nats-io/nats-server) from 2.9.23 to 2.10.27.
- [Release notes](https://github.com/nats-io/nats-server/releases)
- [Changelog](https://github.com/nats-io/nats-server/blob/main/.goreleaser.yml)
- [Commits](https://github.com/nats-io/nats-server/compare/v2.9.23...v2.10.27)

---
updated-dependencies:
- dependency-name: github.com/nats-io/nats-server/v2
  dependency-version: 2.10.27
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-17 04:45:51 -07:00
dependabot[bot]
1d6478b8ae
build(deps): bump golang.org/x/net from 0.34.0 to 0.38.0 in /docs/debugging/s3-verify (#21199)
build(deps): bump golang.org/x/net in /docs/debugging/s3-verify

Bumps [golang.org/x/net](https://github.com/golang/net) from 0.34.0 to 0.38.0.
- [Commits](https://github.com/golang/net/compare/v0.34.0...v0.38.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-version: 0.38.0
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-17 04:45:33 -07:00
dependabot[bot]
0581001b6f
build(deps): bump golang.org/x/net from 0.37.0 to 0.38.0 (#21200)
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.37.0 to 0.38.0.
- [Commits](https://github.com/golang/net/compare/v0.37.0...v0.38.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-version: 0.38.0
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-17 04:45:15 -07:00
dependabot[bot]
479303e7e9
build(deps): bump golang.org/x/crypto from 0.32.0 to 0.35.0 in /docs/debugging/inspect (#21192) 2025-04-16 14:54:16 -07:00
Burkov Egor
89aec6804b
typo: fix return of checkDiskFatalErrs (#21121) 2025-04-16 08:20:41 -07:00
Taran Pelkey
eb33bc6bf5 Add New Accesskey Info and OpenID Accesskey List API endpoints (#21097) 2025-04-16 00:34:24 -07:00
dependabot[bot]
3310f740f0
build(deps): bump golang.org/x/crypto from 0.32.0 to 0.35.0 in /docs/debugging/s3-verify (#21185)
build(deps): bump golang.org/x/crypto in /docs/debugging/s3-verify

Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.32.0 to 0.35.0.
- [Commits](https://github.com/golang/crypto/compare/v0.32.0...v0.35.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-version: 0.35.0
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-15 07:00:14 -07:00
Burkov Egor
4595293ca0
typo: fix error msg for decoding XL headers (#21120) 2025-04-10 08:55:43 -07:00
Klaus Post
02a67cbd2a
Fix buffered streams missing final entries (#21122)
On buffered streams the final entries could be missing if a lot
are delivered when the stream ends.

Fixes end-of-stream canceling the return of final entries by canceling
with the StreamEOF error.
2025-04-10 08:29:19 -07:00
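A generic sketch of the failure mode: treating stream end as a plain cancellation drops entries still sitting in the buffered channel, while draining first (and reporting end-of-stream with a sentinel, in the spirit of the StreamEOF error above) delivers them.

```go
package stream

import (
	"context"
	"io"
)

// receive drains entries until the channel closes; on cancellation it
// still flushes whatever is already buffered, so the final entries of a
// finished stream are not silently lost.
func receive(ctx context.Context, entries <-chan string, emit func(string)) error {
	for {
		select {
		case e, ok := <-entries:
			if !ok {
				return io.EOF // clean end of stream: everything was delivered
			}
			emit(e)
		case <-ctx.Done():
			for { // flush buffered entries before reporting cancellation
				select {
				case e, ok := <-entries:
					if !ok {
						return io.EOF
					}
					emit(e)
				default:
					return ctx.Err()
				}
			}
		}
	}
}
```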
Harshavardhana
2b34e5b9ae
move to go1.24 (#21114) 2025-04-09 07:28:39 -07:00
Minio Trusted
a6258668a6 Update yaml files to latest version RELEASE.2025-04-08T15-41-24Z 2025-04-08 19:37:51 +00:00
Krishnan Parthasarathi
d0cada583f
ilm: Expect objects with only free versions when scanning (#21112) 2025-04-08 08:41:24 -07:00
Harshavardhana
0bd8f06b62 fix: healing to list, purge dangling objects (#621)
in a specific corner case, when you only have dangling
objects with a single shard left over, we end up in a situation
where healing is unable to list the dangling object to
purge it: the listing logic expected only
`len(disks)/2+1` drives, and with that choice you
end up in a situation where the drive on which the object
is present is not part of your expected disks list, causing
it to never be listed and to be ignored in perpetuity.

change the logic such that HealObjects() is able
to listAndHeal() per set properly on all its drives, since
there is really no other way to do this cleanly. However,
instead of "listing" on all erasure sets simultaneously, we
list on '3' at a time, so in a large enough cluster this is
fairly staggered.
2025-04-04 06:49:12 -07:00
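The "three at a time" staggering is a plain counting-semaphore pattern; a hedged sketch (listAndHeal and the set type here are placeholders):

```go
package heal

import "sync"

type erasureSet struct{ id int }

func listAndHeal(s *erasureSet) { /* walk and heal every drive in the set */ }

// healAllSets runs listAndHeal on every erasure set, but on at most
// three sets concurrently, so a large cluster heals in a staggered way.
func healAllSets(sets []*erasureSet) {
	sem := make(chan struct{}, 3)
	var wg sync.WaitGroup
	for _, set := range sets {
		wg.Add(1)
		sem <- struct{}{} // acquire one of the three slots
		go func(s *erasureSet) {
			defer wg.Done()
			defer func() { <-sem }() // release the slot
			listAndHeal(s)
		}(set)
	}
	wg.Wait()
}
```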
Harshavardhana
6640be3bed fix: listParts crash when partNumberMarker is expected (#620)
fixes https://github.com/minio/minio/issues/21098
2025-04-04 06:44:38 -07:00
Anis Eleuch
eafeb27e90
decom: Ignore orphan delete markers in verification stage (#21106)
To make sure that no objects were skipped for any reason,
decommissioning does a second phase of listing to check if there
are some objects that need to be decommissioned. However, the code
forgot to skip orphan delete markers since the decom code already
skips it.

Make the code ignore delete markers in in the verification phase.

Co-authored-by: Anis Eleuch <anis@min.io>
2025-04-03 15:07:24 -07:00
Minio Trusted
f2c9eb0f79 Update yaml files to latest version RELEASE.2025-04-03T14-56-28Z 2025-04-03 18:57:40 +00:00
爱折腾的小竹同学
f2619d1f62
Fix description error in README (#21099)
There is a prefix in the JSON, but not in the equivalent command line. Although the role of prefix has been explained in the previous example, I think it should be supplemented here.
2025-04-03 07:56:28 -07:00
Harshavardhana
8c70975283
make sure to validate signature unsigned trailer stream (#21103)
This is a security incident fix: since the
implementation of the unsigned payload trailer on PUTs,
we did not validate the signature of the incoming request.

The signature could be invalid yet was ignored entirely.
This in turn allowed any arbitrary secret key to upload objects,
given that the user has "WRITE" permissions on the bucket. Since
an access key is public information in general, this exposed any
user with WRITE on the bucket to abuse: an arbitrary client could
make a fake request to MinIO, and the signature under the
Authorization: header was ignored entirely.

A test has been added to cover this scenario and fail
appropriately.
2025-04-03 07:55:52 -07:00
Krishnan Parthasarathi
01447d2438
Fix evaluation of NewerNoncurrentVersions (#21096)
- Move VersionPurgeStatus into replication package
- ilm: Evaluate policy w/ obj retention/replication
- lifecycle: Use Evaluator to enforce ILM in scanner
- Unit tests covering ILM, replication and retention
- Simplify NewEvaluator constructor
2025-04-02 23:45:06 -07:00
Shubhendu
07f31e574c
Try reconnect IAM systems if failed initially (#20333)
Fixes: https://github.com/minio/minio/issues/20118

Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2025-04-02 10:29:33 -07:00
iamsagar99
8d223e07fb
Fix: Change TTFB metric type to histogram (#20999) 2025-04-01 22:48:58 -07:00
Harshavardhana
4041a8727c start publishing latest-cicd images 2025-04-01 20:53:54 -07:00
Klaus Post
5f243fde9a
Fix anonymous unsigned trailing headers (#21095)
Do not fail on anonymous requests with trailing headers.

Fixes #21005

With modified minio-go (will send PR):

```
<DEBUG> PUT /tbb/mc.exe HTTP/1.1
Host: 127.0.0.1:9001
User-Agent: MinIO (windows; amd64) minio-go/v7.0.90 mc/DEVELOPMENT.GOGET
Content-Length: 44301288
Accept-Encoding: zstd,gzip
Content-Encoding: aws-chunked
Content-Type: application/x-msdownload
X-Amz-Content-Sha256: STREAMING-UNSIGNED-PAYLOAD-TRAILER
X-Amz-Date: 20250401T150402Z
X-Amz-Decoded-Content-Length: 44295168
X-Amz-Trailer: x-amz-checksum-crc32

mc: <DEBUG> HTTP/1.1 200 OK
Content-Length: 0
Accept-Ranges: bytes
Date: Tue, 01 Apr 2025 15:04:02 GMT
Etag: "46273a30f232dc015ead1c0da8925c98"
Server: MinIO
Strict-Transport-Security: max-age=31536000; includeSubDomains
Vary: Origin
Vary: Accept-Encoding
X-Amz-Checksum-Crc32: wElc/A==
X-Amz-Id-2: 7987905dee74cdeb212432486a178e511309594cee7cb75f892cd53e35f09ea4
X-Amz-Request-Id: 18323A0F322B41C8
X-Content-Type-Options: nosniff
X-Ratelimit-Limit: 2478
X-Ratelimit-Remaining: 2478
X-Xss-Protection: 1; mode=block
```

Tested on multipart uploads as well.
2025-04-01 11:23:27 -07:00
Burkov Egor
a0e3f1cc18
internal: add handling of KVS config parse (#21079) 2025-04-01 08:28:26 -07:00
Name
b1bc641105
chore(all): replace map key deletion loop with clear() (#21082) 2025-04-01 08:28:06 -07:00
jiuker
e0c8738230
fix: token is invalid for admin heal when minio is distErasure on windows (#21092) 2025-04-01 08:21:33 -07:00
alingse
9aa24b1920
fix call toAPIErrorCode with a nil value error after check another err (#21083)
The code checks `lerr != nil` but then returns `toAPIErrorCode(nil)`;
it should return `toAPIErrorCode(lerr)`.
2025-03-31 13:31:15 -07:00
Taran Pelkey
53d40e41bc
Add new API endpoint to revoke STS tokens (#21072) 2025-03-31 11:51:24 -07:00
Taran Pelkey
e88d494775
Migrate golanglint-ci config to V2 (#21081) 2025-03-29 17:56:02 -07:00
dependabot[bot]
b67f0cf721
build(deps): bump github.com/golang-jwt/jwt/v4 from 4.5.1 to 4.5.2 (#21056)
Bumps [github.com/golang-jwt/jwt/v4](https://github.com/golang-jwt/jwt) from 4.5.1 to 4.5.2.
- [Release notes](https://github.com/golang-jwt/jwt/releases)
- [Changelog](https://github.com/golang-jwt/jwt/blob/main/VERSION_HISTORY.md)
- [Commits](https://github.com/golang-jwt/jwt/compare/v4.5.1...v4.5.2)

---
updated-dependencies:
- dependency-name: github.com/golang-jwt/jwt/v4
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-23 08:18:21 -07:00
Alexander Kalaj
46922c71b7
Updating Prom queries to include tilde needed to work (#21054) 2025-03-22 08:22:29 -07:00
dependabot[bot]
670edb4fcf
build(deps): bump github.com/golang-jwt/jwt/v5 from 5.2.1 to 5.2.2 (#21055)
Bumps [github.com/golang-jwt/jwt/v5](https://github.com/golang-jwt/jwt) from 5.2.1 to 5.2.2.
- [Release notes](https://github.com/golang-jwt/jwt/releases)
- [Changelog](https://github.com/golang-jwt/jwt/blob/main/VERSION_HISTORY.md)
- [Commits](https://github.com/golang-jwt/jwt/compare/v5.2.1...v5.2.2)

---
updated-dependencies:
- dependency-name: github.com/golang-jwt/jwt/v5
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-22 08:21:04 -07:00
itsJohnySmith
42d4ab2a0a
fix(templates): replace dash with underscore (#19566) 2025-03-14 13:01:11 -07:00
Harshavardhana
5e2eb372bf update dependencies for CVE fix x/net 2025-03-12 22:29:51 -07:00
Minio Trusted
cccb37a5ac Update yaml files to latest version RELEASE.2025-03-12T18-04-18Z 2025-03-12 18:22:31 +00:00
588 changed files with 9207 additions and 6068 deletions

@@ -1,14 +1,19 @@
 ---
 name: Bug report
-about: Create a report to help us improve
+about: Report a bug in MinIO (community edition is source-only)
 title: ''
 labels: community, triage
 assignees: ''
 ---
-## NOTE
-If this case is urgent, please subscribe to [Subnet](https://min.io/pricing) so that our 24/7 support team may help you faster.
+## IMPORTANT NOTES
+**Community Edition**: MinIO community edition is now source-only. Install via `go install github.com/minio/minio@latest`
+**Feature Requests**: We are no longer accepting feature requests for the community edition. For feature requests and enterprise support, please subscribe to [MinIO Enterprise Support](https://min.io/pricing).
+**Urgent Issues**: If this case is urgent or affects production, please subscribe to [SUBNET](https://min.io/pricing) for 24/7 enterprise support.
 <!--- Provide a general summary of the issue in the Title above -->

@@ -2,7 +2,7 @@ blank_issues_enabled: false
 contact_links:
   - name: MinIO Community Support
     url: https://slack.min.io
-    about: Join here for Community Support
-  - name: MinIO SUBNET Support
+    about: Community support via Slack - for questions and discussions
+  - name: MinIO Enterprise Support (SUBNET)
     url: https://min.io/pricing
-    about: Join here for Enterprise Support
+    about: Enterprise support with SLA - for production deployments and feature requests

@@ -1,20 +0,0 @@
----
-name: Feature request
-about: Suggest an idea for this project
-title: ''
-labels: community, triage
-assignees: ''
----
-**Is your feature request related to a problem? Please describe.**
-A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
-**Describe the solution you'd like**
-A clear and concise description of what you want to happen.
-**Describe alternatives you've considered**
-A clear and concise description of any alternative solutions or features you've considered.
-**Additional context**
-Add any other context or screenshots about the feature request here.

@@ -20,7 +20,7 @@ jobs:
     runs-on: ${{ matrix.os }}
     strategy:
       matrix:
-        go-version: [1.23.x]
+        go-version: [1.24.x]
         os: [ubuntu-latest]
     steps:
       - uses: actions/checkout@v4

@@ -1,59 +0,0 @@
-name: FIPS Build Test
-on:
-  pull_request:
-    branches:
-      - master
-# This ensures that previous jobs for the PR are canceled when the PR is
-# updated.
-concurrency:
-  group: ${{ github.workflow }}-${{ github.head_ref }}
-  cancel-in-progress: true
-permissions:
-  contents: read
-jobs:
-  build:
-    name: Go BoringCrypto ${{ matrix.go-version }} on ${{ matrix.os }}
-    runs-on: ${{ matrix.os }}
-    strategy:
-      matrix:
-        go-version: [1.23.x]
-        os: [ubuntu-latest]
-    steps:
-      - uses: actions/checkout@v4
-      - uses: actions/setup-go@v5
-        with:
-          go-version: ${{ matrix.go-version }}
-      - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v2
-      - name: Setup dockerfile for build test
-        run: |
-          GO_VERSION=$(go version | cut -d ' ' -f 3 | sed 's/go//')
-          echo Detected go version $GO_VERSION
-          cat > Dockerfile.fips.test <<EOF
-          FROM golang:${GO_VERSION}
-          COPY . /minio
-          WORKDIR /minio
-          ENV GOEXPERIMENT=boringcrypto
-          RUN make
-          EOF
-      - name: Build
-        uses: docker/build-push-action@v3
-        with:
-          context: .
-          file: Dockerfile.fips.test
-          push: false
-          load: true
-          tags: minio/fips-test:latest
-      # This should fail if grep returns non-zero exit
-      - name: Test binary
-        run: |
-          docker run --rm minio/fips-test:latest ./minio --version
-          docker run --rm -i minio/fips-test:latest /bin/bash -c 'go tool nm ./minio | grep FIPS | grep -q FIPS'

@@ -20,7 +20,7 @@ jobs:
     runs-on: ${{ matrix.os }}
     strategy:
       matrix:
-        go-version: [1.23.x]
+        go-version: [1.24.x]
         os: [ubuntu-latest]
     steps:
       - uses: actions/checkout@v4

@@ -20,7 +20,7 @@ jobs:
     runs-on: ${{ matrix.os }}
     strategy:
       matrix:
-        go-version: [1.23.x]
+        go-version: [1.24.x]
         os: [ubuntu-latest]
     steps:
       - uses: actions/checkout@v4

@@ -20,7 +20,7 @@ jobs:
     runs-on: ${{ matrix.os }}
     strategy:
       matrix:
-        go-version: [1.23.x]
+        go-version: [1.24.x]
         os: [ubuntu-latest]
     steps:
       - uses: actions/checkout@v4

@@ -20,7 +20,7 @@ jobs:
     runs-on: ${{ matrix.os }}
     strategy:
       matrix:
-        go-version: [1.23.x]
+        go-version: [1.24.x]
         os: [ubuntu-latest]
     steps:
       - uses: actions/checkout@v4

@@ -61,7 +61,7 @@
       # are turned off - i.e. if ldap="", then ldap server is not enabled for
       # the tests.
       matrix:
-        go-version: [1.23.x]
+        go-version: [1.24.x]
         ldap: ["", "localhost:389"]
         etcd: ["", "http://localhost:2379"]
         openid: ["", "http://127.0.0.1:5556/dex"]

@@ -29,7 +29,7 @@
       - name: setup-go-step
         uses: actions/setup-go@v5
         with:
-          go-version: 1.23.x
+          go-version: 1.24.x
       - name: github sha short
         id: vars

@@ -21,7 +21,7 @@
     strategy:
       matrix:
-        go-version: [1.23.x]
+        go-version: [1.24.x]
     steps:
       - uses: actions/checkout@v4

@@ -20,7 +20,7 @@
     runs-on: ${{ matrix.os }}
     strategy:
       matrix:
-        go-version: [1.23.x]
+        go-version: [1.24.x]
         os: [ubuntu-latest]
     steps:

@@ -20,7 +20,7 @@
     runs-on: ${{ matrix.os }}
     strategy:
       matrix:
-        go-version: [1.23.x]
+        go-version: [1.24.x]
         os: [ubuntu-latest]
     steps:

@@ -21,7 +21,8 @@
       - name: Set up Go
         uses: actions/setup-go@v5
         with:
-          go-version: 1.23.5
+          go-version: 1.24.x
+          cached: false
       - name: Get official govulncheck
         run: go install golang.org/x/vuln/cmd/govulncheck@latest
         shell: bash

@@ -1,45 +1,64 @@
-linters-settings:
-  gofumpt:
-    simplify: true
-  misspell:
-    locale: US
-  staticcheck:
-    checks: ['all', '-ST1005', '-ST1000', '-SA4000', '-SA9004', '-SA1019', '-SA1008', '-U1000', '-ST1016']
+version: "2"
 linters:
-  disable-all: true
+  default: none
   enable:
     - durationcheck
+    - forcetypeassert
     - gocritic
-    - gofumpt
-    - goimports
     - gomodguard
     - govet
     - ineffassign
     - misspell
     - revive
     - staticcheck
-    - typecheck
     - unconvert
     - unused
     - usetesting
-    - forcetypeassert
     - whitespace
+  settings:
+    misspell:
+      locale: US
+    staticcheck:
+      checks:
+        - all
+        - -SA1008
+        - -SA1019
+        - -SA4000
+        - -SA9004
+        - -ST1000
+        - -ST1005
+        - -ST1016
+        - -U1000
+  exclusions:
+    generated: lax
+    rules:
+      - linters:
+          - forcetypeassert
+        path: _test\.go
+      - path: (.+)\.go$
+        text: 'empty-block:'
+      - path: (.+)\.go$
+        text: 'unused-parameter:'
+      - path: (.+)\.go$
+        text: 'dot-imports:'
+      - path: (.+)\.go$
+        text: should have a package comment
+      - path: (.+)\.go$
+        text: error strings should not be capitalized or end with punctuation or a newline
+    paths:
+      - third_party$
+      - builtin$
+      - examples$
 issues:
-  exclude-use-default: false
   max-issues-per-linter: 100
   max-same-issues: 100
-  exclude:
-    - "empty-block:"
-    - "unused-parameter:"
-    - "dot-imports:"
-    - should have a package comment
-    - error strings should not be capitalized or end with punctuation or a newline
-  exclude-rules:
-    # Exclude some linters from running on tests files.
-    - path: _test\.go
-      linters:
-        - forcetypeassert
+formatters:
+  enable:
+    - gofumpt
+    - goimports
+  exclusions:
+    generated: lax
+    paths:
+      - third_party$
+      - builtin$
+      - examples$

@@ -14,6 +14,7 @@ extend-ignore-re = [
     'http\.Header\{"X-Amz-Server-Side-Encryptio":',
     "ZoEoZdLlzVbOlT9rbhD7ZN7TLyiYXSAlB79uGEge",
     "ERRO:",
+    "(?Rm)^.*(#|//)\\s*spellchecker:disable-line$", # ignore line
 ]

 [default.extend-words]

CREDITS

@@ -4095,214 +4095,6 @@ SOFTWARE.
================================================================
github.com/census-instrumentation/opencensus-proto
https://github.com/census-instrumentation/opencensus-proto
----------------------------------------------------------------
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
================================================================
github.com/cespare/xxhash/v2
https://github.com/cespare/xxhash/v2
----------------------------------------------------------------
@@ -6754,6 +6546,420 @@ https://github.com/envoyproxy/go-control-plane
================================================================
github.com/envoyproxy/go-control-plane/envoy
https://github.com/envoyproxy/go-control-plane/envoy
----------------------------------------------------------------
Apache License, Version 2.0 (full text identical to the license reproduced above)
================================================================
github.com/envoyproxy/go-control-plane/ratelimit
https://github.com/envoyproxy/go-control-plane/ratelimit
----------------------------------------------------------------
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
================================================================
github.com/envoyproxy/protoc-gen-validate
https://github.com/envoyproxy/protoc-gen-validate
----------------------------------------------------------------
@@ -7760,7 +7966,7 @@ https://github.com/go-ldap/ldap/v3
The MIT License (MIT)
Copyright (c) 2011-2015 Michael Mitton (mmitton@gmail.com)
Portions copyright (c) 2015-2016 go-ldap Authors
Portions copyright (c) 2015-2024 go-ldap Authors
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
@@ -12447,6 +12653,39 @@ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
================================================================
github.com/gorilla/mux
https://github.com/gorilla/mux
----------------------------------------------------------------
Copyright (c) 2023 The Gorilla Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
================================================================
github.com/gorilla/websocket
https://github.com/gorilla/websocket
----------------------------------------------------------------
@@ -18356,6 +18595,214 @@ For more information on this, and how to apply and follow the GNU AGPL, see
================================================================
github.com/minio/crc64nvme
https://github.com/minio/crc64nvme
----------------------------------------------------------------
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
================================================================
github.com/minio/dnscache
https://github.com/minio/dnscache
----------------------------------------------------------------
@@ -22346,7 +22793,7 @@ https://github.com/minio/minio-go/v7
github.com/minio/mux
https://github.com/minio/mux
----------------------------------------------------------------
Copyright (c) 2012-2018 The Gorilla Authors. All rights reserved.
Copyright (c) 2023 The Gorilla Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
@@ -24404,33 +24851,6 @@ https://github.com/modern-go/reflect2
================================================================
github.com/montanaflynn/stats
https://github.com/montanaflynn/stats
----------------------------------------------------------------
The MIT License (MIT)
Copyright (c) 2014-2023 Montana Flynn (https://montanaflynn.com)
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
================================================================
github.com/muesli/ansi
https://github.com/muesli/ansi
----------------------------------------------------------------
@@ -28451,7 +28871,7 @@ https://github.com/safchain/ethtool
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Copyright (c) 2015 The Ethtool Authors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.


@@ -1,8 +1,14 @@
FROM minio/minio:latest
ARG TARGETARCH
ARG RELEASE
RUN chmod -R 777 /usr/bin
COPY ./minio /usr/bin/minio
COPY ./minio-${TARGETARCH}.${RELEASE} /usr/bin/minio
COPY ./minio-${TARGETARCH}.${RELEASE}.minisig /usr/bin/minio.minisig
COPY ./minio-${TARGETARCH}.${RELEASE}.sha256sum /usr/bin/minio.sha256sum
COPY dockerscripts/docker-entrypoint.sh /usr/bin/docker-entrypoint.sh
ENTRYPOINT ["/usr/bin/docker-entrypoint.sh"]
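The updated Dockerfile expects architecture- and release-specific artifacts (the binary plus its minisign signature and sha256 checksum) in the build context. A minimal sketch of a matching build invocation; the `TARGETARCH` and `RELEASE` values here are illustrative:

```sh
# Illustrative build; minio-amd64.<RELEASE> and its .minisig/.sha256sum files
# must already exist in the build context for the COPY steps to succeed.
docker build \
  --build-arg TARGETARCH=amd64 \
  --build-arg RELEASE=RELEASE.2025-01-01T00-00-00Z \
  -t myminio:minio .
```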


@@ -1,4 +1,4 @@
FROM golang:1.23-alpine as build
FROM golang:1.24-alpine as build
ARG TARGETARCH
ARG RELEASE


@@ -1,4 +1,4 @@
FROM golang:1.23-alpine AS build
FROM golang:1.24-alpine AS build
ARG TARGETARCH
ARG RELEASE


@@ -1,4 +1,4 @@
FROM golang:1.23-alpine AS build
FROM golang:1.24-alpine AS build
ARG TARGETARCH
ARG RELEASE


@@ -24,8 +24,6 @@ help: ## print this help
getdeps: ## fetch necessary dependencies
@mkdir -p ${GOPATH}/bin
@echo "Installing golangci-lint" && curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(GOLANGCI_DIR)
@echo "Installing msgp" && go install -v github.com/tinylib/msgp@v1.2.5
@echo "Installing stringer" && go install -v golang.org/x/tools/cmd/stringer@latest
crosscompile: ## cross compile minio
@(env bash $(PWD)/buildscripts/cross-compile.sh)
@@ -188,9 +186,9 @@ hotfix-vars:
$(eval VERSION := $(shell git describe --tags --abbrev=0).hotfix.$(shell git rev-parse --short HEAD))
hotfix: hotfix-vars clean install ## builds minio binary with hotfix tags
@wget -q -c https://github.com/minio/pkger/releases/download/v2.3.10/pkger_2.3.10_linux_amd64.deb
@wget -q -c https://raw.githubusercontent.com/minio/minio-service/v1.1.0/linux-systemd/distributed/minio.service
@sudo apt install ./pkger_2.3.10_linux_amd64.deb --yes
@wget -q -c https://github.com/minio/pkger/releases/download/v2.3.11/pkger_2.3.11_linux_amd64.deb
@wget -q -c https://raw.githubusercontent.com/minio/minio-service/v1.1.1/linux-systemd/distributed/minio.service
@sudo apt install ./pkger_2.3.11_linux_amd64.deb --yes
@mkdir -p minio-release/$(GOOS)-$(GOARCH)/archive
@cp -af ./minio minio-release/$(GOOS)-$(GOARCH)/minio
@cp -af ./minio minio-release/$(GOOS)-$(GOARCH)/minio.$(VERSION)


@@ -0,0 +1,93 @@
# MinIO Pull Request Guidelines
These guidelines ensure high-quality commits in MinIO's GitHub repositories, maintaining
a clear, valuable commit history for our open-source projects. They apply to all contributors,
fostering efficient reviews and robust code.
## Why Pull Requests?
Pull Requests (PRs) drive quality in MinIO's codebase by:
- Enabling peer review without pair programming.
- Documenting changes for future reference.
- Ensuring commits tell a clear story of development.
**A poor commit lasts forever, even if code is refactored.**
## Crafting a Quality PR
A strong MinIO PR:
- Delivers a complete, valuable change (feature, bug fix, or improvement).
- Has a concise title (e.g., `[S3] Fix bucket policy parsing #1234`) and a summary with context, referencing issues (e.g., `#1234`).
- Contains well-written, logical commits explaining *why* changes were made (e.g., “Add S3 bucket tagging support so that users can organize resources efficiently”).
- Is small, focused, and easy to review—ideally one commit, unless multiple commits better narrate complex work.
- Adheres to MinIO's coding standards (e.g., Go style, error handling, testing).
PRs must flow smoothly through review to reach production. Large PRs should be split into smaller, manageable ones.
## Submitting PRs
1. **Title and Summary**:
- Use a scannable title: `[Subsystem] Action Description #Issue` (e.g., `[IAM] Add role-based access control #567`).
- Include context in the summary: what changed, why, and any issue references.
- Use `[WIP]` for in-progress PRs to avoid premature merging or choose GitHub draft PRs.
2. **Commits**:
- Write clear messages: what changed and why (e.g., “Refactor S3 API handler to reduce latency so that requests process 20% faster”).
- Rebase to tidy commits before submitting (e.g., `git rebase -i main` to squash typos or reword messages; see the sketch after this list), unless multiple contributors worked on the branch.
- Keep PRs focused—one feature or fix. Split large changes into multiple PRs.
3. **Testing**:
- Include unit tests for new functionality or bug fixes.
- Ensure existing tests pass (`make test`).
- Document testing steps in the PR summary if manual testing was performed.
4. **Before Submitting**:
- Run `make verify` to check formatting, linting, and tests.
- Reference related issues (e.g., “Closes #1234”).
- Notify team members via GitHub `@mentions` if urgent or complex.
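A minimal sketch of the rebase-and-verify flow referenced above; the branch name `my-feature` is hypothetical:

```sh
# Tidy the branch history against main (squash fixups, reword messages).
git checkout my-feature
git rebase -i main
# Check formatting, linting, and tests before pushing.
make verify
# Push the rewritten branch without clobbering others' work.
git push --force-with-lease origin my-feature
```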
## Reviewing PRs
Reviewers ensure MinIO's commit history remains a clear, reliable record. Responsibilities include:
1. **Commit Quality**:
- Verify each commit explains *why* the change was made (e.g., “So that…”).
- Request rebasing if commits are unclear, redundant, or lack context (e.g., “Please squash typo fixes into the parent commit”).
2. **Code Quality**:
- Check adherence to MinIO's Go standards (e.g., error handling, documentation).
- Ensure tests cover new code and pass CI.
- Flag bugs or critical issues for immediate fixes; suggest non-blocking improvements as follow-up issues.
3. **Flow**:
- Review promptly to avoid blocking progress.
- Balance quality and speed—minor issues can be addressed later via issues, not PR blocks.
- If unable to complete the review, tag another reviewer (e.g., `@username please take over`).
4. **Shared Responsibility**:
- All MinIO contributors are reviewers. The first commenter on a PR owns the review unless they delegate.
- Multiple reviewers are encouraged for complex PRs.
5. **No Self-Edits**:
- Don't modify the PR directly (e.g., fixing bugs). Request changes from the submitter or create a follow-up PR.
- If you edit, you're a collaborator, not a reviewer, and cannot merge.
6. **Testing**:
- Assume the submitter tested the code. If testing is unclear, ask for details (e.g., “How was this tested?”).
- Reject untested PRs unless testing is infeasible, then assist with test setup.
## Tips for Success
- **Small PRs**: Easier to review, faster to merge. Split large changes logically.
- **Clear Commits**: Use `git rebase -i` to refine history before submitting.
- **Engage Early**: Discuss complex changes in issues or Slack (https://slack.min.io) before coding.
- **Be Responsive**: Address reviewer feedback promptly to keep PRs moving.
- **Learn from Reviews**: Use feedback to improve future contributions.
## Resources
- [MinIO Coding Standards](https://github.com/minio/minio/blob/master/CONTRIBUTING.md)
- [Effective Commit Messages](https://mislav.net/2014/02/hidden-documentation/)
- [GitHub PR Tips](https://github.com/blog/1943-how-to-write-the-perfect-pull-request)
By following these guidelines, we ensure MinIO's codebase remains high-quality, maintainable, and a joy to contribute to. Happy coding!


@@ -1,7 +0,0 @@
# MinIO FIPS Builds
MinIO creates FIPS builds using a patched version of the Go compiler (that uses BoringCrypto, from BoringSSL, which is [FIPS 140-2 validated](https://csrc.nist.gov/csrc/media/projects/cryptographic-module-validation-program/documents/security-policies/140sp2964.pdf)) published by the Golang Team [here](https://github.com/golang/go/tree/dev.boringcrypto/misc/boring).
MinIO FIPS executables are available at <http://dl.min.io> - they are only published for `linux-amd64` architecture as binary files with the suffix `.fips`. We also publish corresponding container images to our official image repositories.
We are not making any statements or representations about the suitability of this code or build in relation to the FIPS 140-2 standard. Interested users will have to evaluate for themselves whether this is useful for their own purposes.

README.md
@@ -4,253 +4,154 @@
[![MinIO](https://raw.githubusercontent.com/minio/minio/master/.github/logo.svg?sanitize=true)](https://min.io)
MinIO is a High Performance Object Storage released under GNU Affero General Public License v3.0. It is API compatible with Amazon S3 cloud storage service. Use MinIO to build high performance infrastructure for machine learning, analytics and application data workloads. To learn more about what MinIO is doing for AI storage, go to [AI storage documentation](https://min.io/solutions/object-storage-for-ai).
MinIO is a high-performance, S3-compatible object storage solution released under the GNU AGPL v3.0 license.
Designed for speed and scalability, it powers AI/ML, analytics, and data-intensive workloads with industry-leading performance.
This README provides quickstart instructions on running MinIO on bare metal hardware, including container-based installations. For Kubernetes environments, use the [MinIO Kubernetes Operator](https://github.com/minio/operator/blob/master/README.md).
- S3 API Compatible: Seamless integration with existing S3 tools
- Built for AI & Analytics: Optimized for large-scale data pipelines
- High Performance: Ideal for demanding storage workloads
## Container Installation
This README provides instructions for building MinIO from source and deploying onto bare metal hardware.
Use the [MinIO Documentation](https://github.com/minio/docs) project to build and host a local copy of the documentation.
Use the following commands to run a standalone MinIO server as a container.
## MinIO is Open Source Software
Standalone MinIO servers are best suited for early development and evaluation. Certain features such as versioning, object locking, and bucket replication
require deploying MinIO in distributed mode with Erasure Coding. For extended development and production, deploy MinIO with Erasure Coding enabled - specifically,
with a *minimum* of 4 drives per MinIO server. See [MinIO Erasure Code Overview](https://min.io/docs/minio/linux/operations/concepts/erasure-coding.html)
for more complete documentation.
We designed MinIO as Open Source software for the Open Source software community. We encourage the community to remix, redesign, and reshare MinIO under the terms of the AGPLv3 license.
### Stable
All usage of MinIO in your application stack requires validation against AGPLv3 obligations, which include but are not limited to the release of modified code to the community from which you have benefited. Any commercial/proprietary usage of the AGPLv3 software, including repackaging or reselling services/features, is done at your own risk.
Run the following command to run the latest stable image of MinIO as a container using an ephemeral data volume:
The AGPLv3 provides no obligation by any party to support, maintain, or warranty the original or any modified work.
All support is provided on a best-effort basis through GitHub and our [Slack](https://slack.min.io) channel, and any member of the community is welcome to contribute and assist others in their usage of the software.
```sh
podman run -p 9000:9000 -p 9001:9001 \
quay.io/minio/minio server /data --console-address ":9001"
```
MinIO [AIStor](https://www.min.io/product/aistor) includes enterprise-grade support and licensing for workloads which require commercial or proprietary usage and production-level SLA/SLO-backed support. For more information, [reach out for a quote](https://min.io/pricing).
The MinIO deployment starts using default root credentials `minioadmin:minioadmin`. You can test the deployment using the MinIO Console, an embedded
object browser built into MinIO Server. Point a web browser running on the host machine to <http://127.0.0.1:9000> and log in with the
root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.
## Source-Only Distribution
You can also connect using any S3-compatible tool, such as the MinIO Client `mc` commandline tool. See
[Test using MinIO Client `mc`](#test-using-minio-client-mc) for more information on using the `mc` commandline tool. For application developers,
see <https://min.io/docs/minio/linux/developers/minio-drivers.html> to view MinIO SDKs for supported languages.
**Important:** The MinIO community edition is now distributed as source code only. We will no longer provide pre-compiled binary releases for the community version.
> NOTE: To deploy MinIO with persistent storage, you must map local persistent directories from the host OS to the container using the `podman -v` option. For example, `-v /mnt/data:/data` maps the host OS drive at `/mnt/data` to `/data` on the container.
### Installing Latest MinIO Community Edition
## macOS
To use MinIO community edition, you have two options:
Use the following commands to run a standalone MinIO server on macOS.
1. **Install from source** using `go install github.com/minio/minio@latest` (recommended)
2. **Build a Docker image** from the provided Dockerfile
Standalone MinIO servers are best suited for early development and evaluation. Certain features such as versioning, object locking, and bucket replication require deploying MinIO in distributed mode with Erasure Coding. For extended development and production, deploy MinIO with Erasure Coding enabled - specifically, with a *minimum* of 4 drives per MinIO server. See [MinIO Erasure Code Overview](https://min.io/docs/minio/linux/operations/concepts/erasure-coding.html) for more complete documentation.
See the sections below for detailed instructions on each method.
### Homebrew (recommended)
### Legacy Binary Releases
Run the following command to install the latest stable MinIO package using [Homebrew](https://brew.sh/). Replace ``/data`` with the path to the drive or directory in which you want MinIO to store data.
Historical pre-compiled binary releases remain available for reference but are no longer maintained:
- GitHub Releases: https://github.com/minio/minio/releases
- Direct downloads: https://dl.min.io/server/minio/release/
```sh
brew install minio/stable/minio
minio server /data
```
> NOTE: If you previously installed minio using `brew install minio` then it is recommended that you reinstall minio from `minio/stable/minio` official repo instead.
```sh
brew uninstall minio
brew install minio/stable/minio
```
The MinIO deployment starts using default root credentials `minioadmin:minioadmin`. You can test the deployment using the MinIO Console, an embedded web-based object browser built into MinIO Server. Point a web browser running on the host machine to <http://127.0.0.1:9000> and log in with the root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.
You can also connect using any S3-compatible tool, such as the MinIO Client `mc` commandline tool. See [Test using MinIO Client `mc`](#test-using-minio-client-mc) for more information on using the `mc` commandline tool. For application developers, see <https://min.io/docs/minio/linux/developers/minio-drivers.html> to view MinIO SDKs for supported languages.
### Binary Download
Use the following command to download and run a standalone MinIO server on macOS. Replace ``/data`` with the path to the drive or directory in which you want MinIO to store data.
```sh
wget https://dl.min.io/server/minio/release/darwin-amd64/minio
chmod +x minio
./minio server /data
```
The MinIO deployment starts using default root credentials `minioadmin:minioadmin`. You can test the deployment using the MinIO Console, an embedded web-based object browser built into MinIO Server. Point a web browser running on the host machine to <http://127.0.0.1:9000> and log in with the root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.
You can also connect using any S3-compatible tool, such as the MinIO Client `mc` commandline tool. See [Test using MinIO Client `mc`](#test-using-minio-client-mc) for more information on using the `mc` commandline tool. For application developers, see <https://min.io/docs/minio/linux/developers/minio-drivers.html> to view MinIO SDKs for supported languages.
## GNU/Linux
Use the following command to run a standalone MinIO server on Linux hosts running 64-bit Intel/AMD architectures. Replace ``/data`` with the path to the drive or directory in which you want MinIO to store data.
```sh
wget https://dl.min.io/server/minio/release/linux-amd64/minio
chmod +x minio
./minio server /data
```
The following table lists supported architectures. Replace the `wget` URL with the architecture for your Linux host.
| Architecture | URL |
| -------- | ------ |
| 64-bit Intel/AMD | <https://dl.min.io/server/minio/release/linux-amd64/minio> |
| 64-bit ARM | <https://dl.min.io/server/minio/release/linux-arm64/minio> |
| 64-bit PowerPC LE (ppc64le) | <https://dl.min.io/server/minio/release/linux-ppc64le/minio> |
The MinIO deployment starts using default root credentials `minioadmin:minioadmin`. You can test the deployment using the MinIO Console, an embedded web-based object browser built into MinIO Server. Point a web browser running on the host machine to <http://127.0.0.1:9000> and log in with the root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.
You can also connect using any S3-compatible tool, such as the MinIO Client `mc` commandline tool. See [Test using MinIO Client `mc`](#test-using-minio-client-mc) for more information on using the `mc` commandline tool. For application developers, see <https://min.io/docs/minio/linux/developers/minio-drivers.html> to view MinIO SDKs for supported languages.
> NOTE: Standalone MinIO servers are best suited for early development and evaluation. Certain features such as versioning, object locking, and bucket replication require deploying MinIO in distributed mode with Erasure Coding. For extended development and production, deploy MinIO with Erasure Coding enabled - specifically, with a *minimum* of 4 drives per MinIO server. See [MinIO Erasure Code Overview](https://min.io/docs/minio/linux/operations/concepts/erasure-coding.html) for more complete documentation.
## Microsoft Windows
To run MinIO on 64-bit Windows hosts, download the MinIO executable from the following URL:
```sh
https://dl.min.io/server/minio/release/windows-amd64/minio.exe
```
Use the following command to run a standalone MinIO server on the Windows host. Replace ``D:\`` with the path to the drive or directory in which you want MinIO to store data. You must change the terminal or PowerShell directory to the location of the ``minio.exe`` executable, *or* add the path to that directory to the system ``$PATH``:
```sh
minio.exe server D:\
```
The MinIO deployment starts using default root credentials `minioadmin:minioadmin`. You can test the deployment using the MinIO Console, an embedded web-based object browser built into MinIO Server. Point a web browser running on the host machine to <http://127.0.0.1:9000> and log in with the root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.
You can also connect using any S3-compatible tool, such as the MinIO Client `mc` commandline tool. See [Test using MinIO Client `mc`](#test-using-minio-client-mc) for more information on using the `mc` commandline tool. For application developers, see <https://min.io/docs/minio/linux/developers/minio-drivers.html> to view MinIO SDKs for supported languages.
> NOTE: Standalone MinIO servers are best suited for early development and evaluation. Certain features such as versioning, object locking, and bucket replication require deploying MinIO in distributed mode with Erasure Coding. For extended development and production, deploy MinIO with Erasure Coding enabled - specifically, with a *minimum* of 4 drives per MinIO server. See [MinIO Erasure Code Overview](https://min.io/docs/minio/linux/operations/concepts/erasure-coding.html) for more complete documentation.
**These legacy binaries will not receive updates.** We strongly recommend using source builds for access to the latest features, bug fixes, and security updates.
## Install from Source
Use the following commands to compile and run a standalone MinIO server from source. Source installation is only intended for developers and advanced users. If you do not have a working Golang environment, please follow [How to install Golang](https://golang.org/doc/install). Minimum version required is [go1.21](https://golang.org/dl/#stable)
Use the following commands to compile and run a standalone MinIO server from source.
If you do not have a working Golang environment, please follow [How to install Golang](https://golang.org/doc/install). Minimum version required is [go1.24](https://golang.org/dl/#stable)
```sh
go install github.com/minio/minio@latest
```
The MinIO deployment starts using default root credentials `minioadmin:minioadmin`. You can test the deployment using the MinIO Console, an embedded web-based object browser built into MinIO Server. Point a web browser running on the host machine to <http://127.0.0.1:9000> and log in with the root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.
You can alternatively run `go build` and use the `GOOS` and `GOARCH` environment variables to control the OS and architecture target.
For example:
You can also connect using any S3-compatible tool, such as the MinIO Client `mc` commandline tool. See [Test using MinIO Client `mc`](#test-using-minio-client-mc) for more information on using the `mc` commandline tool. For application developers, see <https://min.io/docs/minio/linux/developers/minio-drivers.html> to view MinIO SDKs for supported languages.
> NOTE: Standalone MinIO servers are best suited for early development and evaluation. Certain features such as versioning, object locking, and bucket replication require deploying MinIO in distributed mode with Erasure Coding. For extended development and production, deploy MinIO with Erasure Coding enabled - specifically, with a *minimum* of 4 drives per MinIO server. See [MinIO Erasure Code Overview](https://min.io/docs/minio/linux/operations/concepts/erasure-coding.html) for more complete documentation.
MinIO strongly recommends *against* using compiled-from-source MinIO servers for production environments.
## Deployment Recommendations
### Allow port access for Firewalls
By default MinIO uses the port 9000 to listen for incoming connections. If your platform blocks the port by default, you may need to enable access to the port.
### ufw
For hosts with ufw enabled (Debian-based distros), you can use the `ufw` command to allow traffic to specific ports. Use the command below to allow access to port 9000:
```sh
ufw allow 9000
```
env GOOS=linux GOARCH=arm64 go build
```
The command below enables all incoming traffic to ports ranging from 9000 to 9010.
Start MinIO by running `minio server PATH` where `PATH` is any empty folder on your local filesystem.
The MinIO deployment starts using default root credentials `minioadmin:minioadmin`.
You can test the deployment using the MinIO Console, an embedded web-based object browser built into MinIO Server.
Point a web browser running on the host machine to <http://127.0.0.1:9000> and log in with the root credentials.
You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.
You can also connect using any S3-compatible tool, such as the MinIO Client `mc` commandline tool:
```sh
ufw allow 9000:9010/tcp
mc alias set local http://localhost:9000 minioadmin minioadmin
mc admin info local
```
### firewall-cmd
See [Test using MinIO Client `mc`](#test-using-minio-client-mc) for more information on using the `mc` commandline tool.
For application developers, see <https://docs.min.io/enterprise/aistor-object-store/developers/sdk/> to view MinIO SDKs for supported languages.
For hosts with firewall-cmd enabled (CentOS), you can use the `firewall-cmd` command to allow traffic to specific ports. Use the commands below to allow access to port 9000:
> [!NOTE]
> Production environments using compiled-from-source MinIO binaries do so at their own risk.
> The AGPLv3 license provides no warranties or liabilities for any such usage.
## Build Docker Image
You can use the `docker build .` command to build a Docker image on your local host machine.
You must first [build MinIO](#install-from-source) and ensure the `minio` binary exists in the project root.
The following command builds the Docker image using the default `Dockerfile` in the root project directory with the repository and image tag `myminio:minio`:
```sh
firewall-cmd --get-active-zones
docker build -t myminio:minio .
```
This command gets the active zone(s). Now, apply port rules to the relevant zones returned above. For example, if the zone is `public`, use
Use `docker image ls` to confirm the image exists in your local repository.
You can run the server using standard Docker invocation:
```sh
firewall-cmd --zone=public --add-port=9000/tcp --permanent
docker run -p 9000:9000 -p 9001:9001 myminio:minio server /tmp/minio --console-address :9001
```
Note that `--permanent` makes sure the rules are persistent across firewall start, restart, or reload. Finally, reload the firewall for the changes to take effect.
Complete documentation for building Docker containers, managing custom images, or loading images into orchestration platforms is out of scope for this documentation.
You can modify the `Dockerfile` and `dockerscripts/docker-entrypoint.sh` as-needed to reflect your specific image requirements.
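As with the podman example earlier in this README, persistent storage requires mapping a host directory into the container. A minimal sketch, assuming `/mnt/data` exists on the host:

```sh
# Map the host directory /mnt/data to /data inside the container (illustrative).
docker run -p 9000:9000 -p 9001:9001 \
  -v /mnt/data:/data \
  myminio:minio server /data --console-address ":9001"
```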
```sh
firewall-cmd --reload
```
See the [MinIO Container](https://docs.min.io/community/minio-object-store/operations/deployments/baremetal-deploy-minio-as-a-container.html#deploy-minio-container) documentation for more guidance on running MinIO within a Container image.
### iptables
## Install using Helm Charts
For hosts with iptables enabled (RHEL, CentOS, etc.), you can use the `iptables` command to allow all traffic coming to specific ports. Use the command below to allow access to port 9000:
There are two paths for installing MinIO onto Kubernetes infrastructure:
```sh
iptables -A INPUT -p tcp --dport 9000 -j ACCEPT
service iptables restart
```
- Use the [MinIO Operator](https://github.com/minio/operator)
- Use the community-maintained [Helm charts](https://github.com/minio/minio/tree/master/helm/minio)
The command below enables all incoming traffic to ports ranging from 9000 to 9010.
```sh
iptables -A INPUT -p tcp --dport 9000:9010 -j ACCEPT
service iptables restart
```
See the [MinIO Documentation](https://docs.min.io/community/minio-object-store/operations/deployments/kubernetes.html) for guidance on deploying using the Operator.
The Community Helm chart has instructions in the folder-level README.
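For the community chart, a minimal sketch of an install; the chart repository URL, release name, and value names are assumptions to verify against the folder-level README:

```sh
# Add the community chart repository and install a release (illustrative values).
helm repo add minio https://charts.min.io/
helm install my-minio minio/minio \
  --set rootUser=minioadmin,rootPassword=minioadmin
```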
## Test MinIO Connectivity
### Test using MinIO Console
MinIO Server comes with an embedded web-based object browser. Point your web browser to <http://127.0.0.1:9000> to ensure your server has started successfully.
MinIO Server comes with an embedded web-based object browser.
Point your web browser to <http://127.0.0.1:9000> to ensure your server has started successfully.
> NOTE: MinIO runs the console on a random port by default; if you wish to choose a specific port, use `--console-address` to pick a specific interface and port.
> [!NOTE]
> MinIO runs the console on a random port by default; if you wish to choose a specific port, use `--console-address` to pick a specific interface and port.
### Things to consider
### Test using MinIO Client `mc`
MinIO redirects browser access requests from the configured server port (e.g., `127.0.0.1:9000`) to the configured Console port. MinIO uses the hostname or IP address specified in the request when building the redirect URL. The URL and port *must* be accessible by the client for the redirection to work.
`mc` provides a modern alternative to UNIX commands like ls, cat, cp, mirror, diff, etc. It supports filesystems and Amazon S3-compatible cloud storage services.
For deployments behind a load balancer, proxy, or ingress rule where the MinIO host IP address or port is not public, use the `MINIO_BROWSER_REDIRECT_URL` environment variable to specify the external hostname for the redirect. The LB/Proxy must have rules for directing traffic to the Console port specifically.
For example, consider a MinIO deployment behind a proxy `https://minio.example.net`, `https://console.minio.example.net` with rules for forwarding traffic on port :9000 and :9001 to MinIO and the MinIO Console respectively on the internal network. Set `MINIO_BROWSER_REDIRECT_URL` to `https://console.minio.example.net` to ensure the browser receives a valid reachable URL.
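For example, a minimal sketch for the proxied deployment described above (the hostname is the illustrative one from this section):

```sh
# Tell MinIO to redirect browser requests to the externally reachable Console URL.
export MINIO_BROWSER_REDIRECT_URL="https://console.minio.example.net"
minio server /data --console-address ":9001"
```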
| Dashboard | Creating a bucket |
| ------------- | ------------- |
| ![Dashboard](https://github.com/minio/minio/blob/master/docs/screenshots/pic1.png?raw=true) | ![Dashboard](https://github.com/minio/minio/blob/master/docs/screenshots/pic2.png?raw=true) |
## Test using MinIO Client `mc`
`mc` provides a modern alternative to UNIX commands like ls, cat, cp, mirror, diff, etc. It supports filesystems and Amazon S3-compatible cloud storage services. Follow the MinIO Client [Quickstart Guide](https://min.io/docs/minio/linux/reference/minio-mc.html#quickstart) for further instructions.
The following commands set a local alias, validate the server information, create a bucket, copy data to that bucket, and list the contents of the bucket.

```sh
mc alias set local http://localhost:9000 minioadmin minioadmin
mc admin info local
mc mb local/data
mc cp ~/Downloads/mydata local/data/
mc ls local/data/
```

Follow the MinIO Client [Quickstart Guide](https://docs.min.io/community/minio-object-store/reference/minio-mc.html#quickstart) for further instructions.

## Upgrading MinIO

MinIO upgrades are non-disruptive and require zero downtime: all transactions on MinIO are atomic, so upgrading all servers simultaneously is the recommended way to upgrade.

> [!NOTE]
> Updating directly from <https://dl.min.io> requires internet access; optionally, you can host your own mirror, for example at <https://my-artifactory.example.com/minio/>.

- For deployments that installed the MinIO server binary by hand, use [`mc admin update`](https://min.io/docs/minio/linux/reference/minio-mc-admin/mc-admin-update.html):

```sh
mc admin update <minio alias, e.g., myminio>
```
- For deployments without external internet access (e.g. air-gapped environments), download the binary from <https://dl.min.io> and replace the existing MinIO binary, for example at `/opt/bin/minio`. Apply executable permissions with `chmod +x /opt/bin/minio`, then perform `mc admin service restart alias/`. A sketch of this flow follows the next item.
- For installations using the systemd MinIO service, upgrade via RPM/DEB packages **in parallel** on all servers, or replace the binary (e.g. `/opt/bin/minio`) on all nodes, apply executable permissions with `chmod +x /opt/bin/minio`, then proceed to perform `mc admin service restart alias/`.
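As an illustrative sketch of the manual flow (the binary path and alias name are placeholders; the download path shown is for linux-amd64):

```sh
# On every node, stage the new binary at the path the service runs
curl -o /opt/bin/minio https://dl.min.io/server/minio/release/linux-amd64/minio
chmod +x /opt/bin/minio

# Then restart the whole deployment at once through any one configured alias
mc admin service restart myminio/
```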
### Upgrade Checklist

- Test all upgrades in a lower environment (DEV, QA, UAT) before applying to production. Performing blind upgrades in production environments carries significant risk.
- Read the release notes for MinIO *before* performing any upgrade. There is no forced requirement to upgrade to the latest release upon every release; some releases may not be relevant to your setup, so avoid upgrading production environments unnecessarily.
- If you plan to use `mc admin update`, the MinIO process must have write access to the parent directory where the binary is present on the host system.
- `mc admin update` is not supported and should be avoided in Kubernetes/container environments; please upgrade containers by upgrading the relevant container images.
- **We do not recommend upgrading one MinIO server at a time. The product is designed to support parallel upgrades; please follow our recommended guidelines.**
## Explore Further
- [The MinIO documentation website](https://docs.min.io/community/minio-object-store/index.html)
- [MinIO Erasure Code Overview](https://docs.min.io/community/minio-object-store/operations/concepts/erasure-coding.html)
- [Use `mc` with MinIO Server](https://docs.min.io/community/minio-object-store/reference/minio-mc.html)
- [Use `minio-go` SDK with MinIO Server](https://docs.min.io/enterprise/aistor-object-store/developers/sdk/go/)
## Contribute to MinIO Project
Please follow MinIO [Contributor's Guide](https://github.com/minio/minio/blob/master/CONTRIBUTING.md) for guidance on making new contributions to the repository.
## License

View File

@ -74,11 +74,11 @@ check_minimum_version() {
assert_is_supported_arch() {
case "${ARCH}" in
x86_64 | amd64 | aarch64 | ppc64le | arm* | s390x | loong64 | loongarch64)
x86_64 | amd64 | aarch64 | ppc64le | arm* | s390x | loong64 | loongarch64 | riscv64)
return
;;
*)
echo "Arch '${ARCH}' is not supported. Supported Arch: [x86_64, amd64, aarch64, ppc64le, arm*, s390x, loong64, loongarch64]"
echo "Arch '${ARCH}' is not supported. Supported Arch: [x86_64, amd64, aarch64, ppc64le, arm*, s390x, loong64, loongarch64, riscv64]"
exit 1
;;
esac

View File

@ -9,7 +9,7 @@ function _init() {
export CGO_ENABLED=0
## List of architectures and OS to test cross compilation.
SUPPORTED_OSARCH="linux/ppc64le linux/mips64 linux/amd64 linux/arm64 linux/s390x darwin/arm64 darwin/amd64 freebsd/amd64 windows/amd64 linux/arm linux/386 netbsd/amd64 linux/mips openbsd/amd64"
SUPPORTED_OSARCH="linux/ppc64le linux/mips64 linux/amd64 linux/arm64 linux/s390x darwin/arm64 darwin/amd64 freebsd/amd64 windows/amd64 linux/arm linux/386 netbsd/amd64 linux/mips openbsd/amd64 linux/riscv64"
}
function _build() {

View File

@ -69,8 +69,10 @@ __init__() {
## this is needed because github actions don't have
## docker-compose on all runners
go install github.com/docker/compose/v2/cmd@latest
mv -v /tmp/gopath/bin/cmd /tmp/gopath/bin/docker-compose
COMPOSE_VERSION=v2.35.1
mkdir -p /tmp/gopath/bin/
wget -O /tmp/gopath/bin/docker-compose https://github.com/docker/compose/releases/download/${COMPOSE_VERSION}/docker-compose-linux-x86_64
chmod +x /tmp/gopath/bin/docker-compose
cleanup

View File

@ -193,27 +193,27 @@ func (a adminAPIHandlers) SetConfigKVHandler(w http.ResponseWriter, r *http.Requ
func setConfigKV(ctx context.Context, objectAPI ObjectLayer, kvBytes []byte) (result setConfigResult, err error) {
result.Cfg, err = readServerConfig(ctx, objectAPI, nil)
if err != nil {
return
return result, err
}
result.Dynamic, err = result.Cfg.ReadConfig(bytes.NewReader(kvBytes))
if err != nil {
return
return result, err
}
result.SubSys, _, _, err = config.GetSubSys(string(kvBytes))
if err != nil {
return
return result, err
}
tgts, err := config.ParseConfigTargetID(bytes.NewReader(kvBytes))
if err != nil {
return
return result, err
}
ctx = context.WithValue(ctx, config.ContextKeyForTargetFromConfig, tgts)
if verr := validateConfig(ctx, result.Cfg, result.SubSys); verr != nil {
err = badConfigErr{Err: verr}
return
return result, err
}
// Check if subnet proxy being set and if so set the same value to proxy of subnet
@ -222,12 +222,12 @@ func setConfigKV(ctx context.Context, objectAPI ObjectLayer, kvBytes []byte) (re
// Update the actual server config on disk.
if err = saveServerConfig(ctx, objectAPI, result.Cfg); err != nil {
return
return result, err
}
// Write the config input KV to history.
err = saveServerConfigHistory(ctx, objectAPI, kvBytes)
return
return result, err
}
// GetConfigKVHandler - GET /minio/admin/v3/get-config-kv?key={key}

View File

@ -214,10 +214,7 @@ func (a adminAPIHandlers) AddServiceAccountLDAP(w http.ResponseWriter, r *http.R
}
// Check if we are creating svc account for request sender.
isSvcAccForRequestor := false
if targetUser == requestorUser || targetUser == requestorParentUser {
isSvcAccForRequestor = true
}
isSvcAccForRequestor := targetUser == requestorUser || targetUser == requestorParentUser
var (
targetGroups []string
@ -448,8 +445,10 @@ func (a adminAPIHandlers) ListAccessKeysLDAP(w http.ResponseWriter, r *http.Requ
for _, svc := range serviceAccounts {
expiryTime := svc.Expiration
serviceAccountList = append(serviceAccountList, madmin.ServiceAccountInfo{
AccessKey: svc.AccessKey,
Expiration: &expiryTime,
AccessKey: svc.AccessKey,
Expiration: &expiryTime,
Name: svc.Name,
Description: svc.Description,
})
}
for _, sts := range stsKeys {
@ -628,8 +627,10 @@ func (a adminAPIHandlers) ListAccessKeysLDAPBulk(w http.ResponseWriter, r *http.
}
for _, svc := range serviceAccounts {
accessKeys.ServiceAccounts = append(accessKeys.ServiceAccounts, madmin.ServiceAccountInfo{
AccessKey: svc.AccessKey,
Expiration: &svc.Expiration,
AccessKey: svc.AccessKey,
Expiration: &svc.Expiration,
Name: svc.Name,
Description: svc.Description,
})
}
// if only service accounts, skip if user has no service accounts

View File

@ -0,0 +1,248 @@
// Copyright (c) 2015-2025 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package cmd
import (
"encoding/json"
"errors"
"net/http"
"sort"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio-go/v7/pkg/set"
"github.com/minio/pkg/v3/policy"
)
const dummyRoleARN = "dummy-internal"
// ListAccessKeysOpenIDBulk - GET /minio/admin/v3/idp/openid/list-access-keys-bulk
func (a adminAPIHandlers) ListAccessKeysOpenIDBulk(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
// Get current object layer instance.
objectAPI := newObjectLayerFn()
if objectAPI == nil || globalNotificationSys == nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL)
return
}
cred, owner, s3Err := validateAdminSignature(ctx, r, "")
if s3Err != ErrNone {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(s3Err), r.URL)
return
}
if !globalIAMSys.OpenIDConfig.Enabled {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminOpenIDNotEnabled), r.URL)
return
}
userList := r.Form["users"]
isAll := r.Form.Get("all") == "true"
selfOnly := !isAll && len(userList) == 0
cfgName := r.Form.Get("configName")
allConfigs := r.Form.Get("allConfigs") == "true"
if cfgName == "" && !allConfigs {
cfgName = madmin.Default
}
if isAll && len(userList) > 0 {
// This should be checked on client side, so return generic error
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrInvalidRequest), r.URL)
return
}
// Empty DN list and not self, list access keys for all users
if isAll {
if !globalIAMSys.IsAllowed(policy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: policy.ListUsersAdminAction,
ConditionValues: getConditionValues(r, "", cred),
IsOwner: owner,
Claims: cred.Claims,
}) {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAccessDenied), r.URL)
return
}
} else if len(userList) == 1 && userList[0] == cred.ParentUser {
selfOnly = true
}
if !globalIAMSys.IsAllowed(policy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: policy.ListServiceAccountsAdminAction,
ConditionValues: getConditionValues(r, "", cred),
IsOwner: owner,
Claims: cred.Claims,
DenyOnly: selfOnly,
}) {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAccessDenied), r.URL)
return
}
if selfOnly && len(userList) == 0 {
selfDN := cred.AccessKey
if cred.ParentUser != "" {
selfDN = cred.ParentUser
}
userList = append(userList, selfDN)
}
listType := r.Form.Get("listType")
var listSTSKeys, listServiceAccounts bool
switch listType {
case madmin.AccessKeyListUsersOnly:
listSTSKeys = false
listServiceAccounts = false
case madmin.AccessKeyListSTSOnly:
listSTSKeys = true
listServiceAccounts = false
case madmin.AccessKeyListSvcaccOnly:
listSTSKeys = false
listServiceAccounts = true
case madmin.AccessKeyListAll:
listSTSKeys = true
listServiceAccounts = true
default:
err := errors.New("invalid list type")
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErrWithErr(ErrInvalidRequest, err), r.URL)
return
}
s := globalServerConfig.Clone()
roleArnMap := make(map[string]string)
// Map of configs to a map of users to their access keys
cfgToUsersMap := make(map[string]map[string]madmin.OpenIDUserAccessKeys)
configs, err := globalIAMSys.OpenIDConfig.GetConfigList(s)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
for _, config := range configs {
if !allConfigs && cfgName != config.Name {
continue
}
arn := dummyRoleARN
if config.RoleARN != "" {
arn = config.RoleARN
}
roleArnMap[arn] = config.Name
newResp := make(map[string]madmin.OpenIDUserAccessKeys)
cfgToUsersMap[config.Name] = newResp
}
if len(roleArnMap) == 0 {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminNoSuchConfigTarget), r.URL)
return
}
userSet := set.CreateStringSet(userList...)
accessKeys, err := globalIAMSys.ListAllAccessKeys(ctx)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
for _, accessKey := range accessKeys {
// Filter out any disqualifying access keys
_, ok := accessKey.Claims[subClaim]
if !ok {
continue // OpenID access keys must have a sub claim
}
if (!listSTSKeys && !accessKey.IsServiceAccount()) || (!listServiceAccounts && accessKey.IsServiceAccount()) {
continue // skip if not the type we want
}
arn, ok := accessKey.Claims[roleArnClaim].(string)
if !ok {
if _, ok := accessKey.Claims[iamPolicyClaimNameOpenID()]; !ok {
continue // skip if no roleArn and no policy claim
}
// claim-based provider is in the roleArnMap under dummy ARN
arn = dummyRoleARN
}
matchingCfgName, ok := roleArnMap[arn]
if !ok {
continue // skip if not part of the target config
}
var id string
if idClaim := globalIAMSys.OpenIDConfig.GetUserIDClaim(matchingCfgName); idClaim != "" {
id, _ = accessKey.Claims[idClaim].(string)
}
if !userSet.IsEmpty() && !userSet.Contains(accessKey.ParentUser) && !userSet.Contains(id) {
continue // skip if not in the user list
}
openIDUserAccessKeys, ok := cfgToUsersMap[matchingCfgName][accessKey.ParentUser]
// Add new user to map if not already present
if !ok {
var readableClaim string
if rc := globalIAMSys.OpenIDConfig.GetUserReadableClaim(matchingCfgName); rc != "" {
readableClaim, _ = accessKey.Claims[rc].(string)
}
openIDUserAccessKeys = madmin.OpenIDUserAccessKeys{
MinioAccessKey: accessKey.ParentUser,
ID: id,
ReadableName: readableClaim,
}
}
svcAccInfo := madmin.ServiceAccountInfo{
AccessKey: accessKey.AccessKey,
Expiration: &accessKey.Expiration,
}
if accessKey.IsServiceAccount() {
openIDUserAccessKeys.ServiceAccounts = append(openIDUserAccessKeys.ServiceAccounts, svcAccInfo)
} else {
openIDUserAccessKeys.STSKeys = append(openIDUserAccessKeys.STSKeys, svcAccInfo)
}
cfgToUsersMap[matchingCfgName][accessKey.ParentUser] = openIDUserAccessKeys
}
// Convert map to slice and sort
resp := make([]madmin.ListAccessKeysOpenIDResp, 0, len(cfgToUsersMap))
for cfgName, usersMap := range cfgToUsersMap {
users := make([]madmin.OpenIDUserAccessKeys, 0, len(usersMap))
for _, user := range usersMap {
users = append(users, user)
}
sort.Slice(users, func(i, j int) bool {
return users[i].MinioAccessKey < users[j].MinioAccessKey
})
resp = append(resp, madmin.ListAccessKeysOpenIDResp{
ConfigName: cfgName,
Users: users,
})
}
sort.Slice(resp, func(i, j int) bool {
return resp[i].ConfigName < resp[j].ConfigName
})
data, err := json.Marshal(resp)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
encryptedData, err := madmin.EncryptData(cred.SecretKey, data)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
writeSuccessResponseJSON(w, encryptedData)
}

View File

@ -61,7 +61,7 @@ func (a adminAPIHandlers) StartDecommission(w http.ResponseWriter, r *http.Reque
return
}
if z.IsRebalanceStarted() {
if z.IsRebalanceStarted(ctx) {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminRebalanceAlreadyStarted), r.URL)
return
}
@ -258,7 +258,7 @@ func (a adminAPIHandlers) RebalanceStart(w http.ResponseWriter, r *http.Request)
// concurrent rebalance-start commands.
if ep := globalEndpoints[0].Endpoints[0]; !ep.IsLocal {
for nodeIdx, proxyEp := range globalProxyEndpoints {
if proxyEp.Endpoint.Host == ep.Host {
if proxyEp.Host == ep.Host {
if proxied, success := proxyRequestByNodeIndex(ctx, w, r, nodeIdx, false); proxied && success {
return
}
@ -277,7 +277,7 @@ func (a adminAPIHandlers) RebalanceStart(w http.ResponseWriter, r *http.Request)
return
}
if pools.IsRebalanceStarted() {
if pools.IsRebalanceStarted(ctx) {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminRebalanceAlreadyStarted), r.URL)
return
}
@ -329,7 +329,7 @@ func (a adminAPIHandlers) RebalanceStatus(w http.ResponseWriter, r *http.Request
// pools may temporarily have out of date info on the others.
if ep := globalEndpoints[0].Endpoints[0]; !ep.IsLocal {
for nodeIdx, proxyEp := range globalProxyEndpoints {
if proxyEp.Endpoint.Host == ep.Host {
if proxyEp.Host == ep.Host {
if proxied, success := proxyRequestByNodeIndex(ctx, w, r, nodeIdx, false); proxied && success {
return
}
@ -380,14 +380,14 @@ func (a adminAPIHandlers) RebalanceStop(w http.ResponseWriter, r *http.Request)
func proxyDecommissionRequest(ctx context.Context, defaultEndPoint Endpoint, w http.ResponseWriter, r *http.Request) (proxy bool) {
host := env.Get("_MINIO_DECOM_ENDPOINT_HOST", defaultEndPoint.Host)
if host == "" {
return
return proxy
}
for nodeIdx, proxyEp := range globalProxyEndpoints {
if proxyEp.Endpoint.Host == host && !proxyEp.IsLocal {
if proxyEp.Host == host && !proxyEp.IsLocal {
if proxied, success := proxyRequestByNodeIndex(ctx, w, r, nodeIdx, false); proxied && success {
return true
}
}
}
return
return proxy
}

View File

@ -70,7 +70,7 @@ func (a adminAPIHandlers) SiteReplicationAdd(w http.ResponseWriter, r *http.Requ
func getSRAddOptions(r *http.Request) (opts madmin.SRAddOptions) {
opts.ReplicateILMExpiry = r.Form.Get("replicateILMExpiry") == "true"
return
return opts
}
// SRPeerJoin - PUT /minio/admin/v3/site-replication/join
@ -304,7 +304,7 @@ func (a adminAPIHandlers) SRPeerGetIDPSettings(w http.ResponseWriter, r *http.Re
}
}
func parseJSONBody(ctx context.Context, body io.Reader, v interface{}, encryptionKey string) error {
func parseJSONBody(ctx context.Context, body io.Reader, v any, encryptionKey string) error {
data, err := io.ReadAll(body)
if err != nil {
return SRError{
@ -422,7 +422,7 @@ func (a adminAPIHandlers) SiteReplicationEdit(w http.ResponseWriter, r *http.Req
func getSREditOptions(r *http.Request) (opts madmin.SREditOptions) {
opts.DisableILMExpiryReplication = r.Form.Get("disableILMExpiryReplication") == "true"
opts.EnableILMExpiryReplication = r.Form.Get("enableILMExpiryReplication") == "true"
return
return opts
}
// SRPeerEdit - PUT /minio/admin/v3/site-replication/peer/edit
@ -484,7 +484,7 @@ func getSRStatusOptions(r *http.Request) (opts madmin.SRStatusOptions) {
opts.EntityValue = q.Get("entityvalue")
opts.ShowDeleted = q.Get("showDeleted") == "true"
opts.Metrics = q.Get("metrics") == "true"
return
return opts
}
// SiteReplicationRemove - PUT /minio/admin/v3/site-replication/remove

View File

@ -89,7 +89,7 @@ func (s *TestSuiteIAM) TestDeleteUserRace(c *check) {
// Create a policy
policy := "mypolicy"
policyBytes := []byte(fmt.Sprintf(`{
policyBytes := fmt.Appendf(nil, `{
"Version": "2012-10-17",
"Statement": [
{
@ -104,7 +104,7 @@ func (s *TestSuiteIAM) TestDeleteUserRace(c *check) {
]
}
]
}`, bucket))
}`, bucket)
err = s.adm.AddCannedPolicy(ctx, policy, policyBytes)
if err != nil {
c.Fatalf("policy add error: %v", err)
@ -113,7 +113,7 @@ func (s *TestSuiteIAM) TestDeleteUserRace(c *check) {
userCount := 50
accessKeys := make([]string, userCount)
secretKeys := make([]string, userCount)
for i := 0; i < userCount; i++ {
for i := range userCount {
accessKey, secretKey := mustGenerateCredentials(c)
err = s.adm.SetUser(ctx, accessKey, secretKey, madmin.AccountEnabled)
if err != nil {
@ -133,7 +133,7 @@ func (s *TestSuiteIAM) TestDeleteUserRace(c *check) {
}
g := errgroup.Group{}
for i := 0; i < userCount; i++ {
for i := range userCount {
g.Go(func(i int) func() error {
return func() error {
uClient := s.getUserClient(c, accessKeys[i], secretKeys[i], "")

View File

@ -24,6 +24,7 @@ import (
"errors"
"fmt"
"io"
"maps"
"net/http"
"os"
"slices"
@ -157,9 +158,7 @@ func (a adminAPIHandlers) ListUsers(w http.ResponseWriter, r *http.Request) {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
for k, v := range ldapUsers {
allCredentials[k] = v
}
maps.Copy(allCredentials, ldapUsers)
// Marshal the response
data, err := json.Marshal(allCredentials)
@ -197,12 +196,7 @@ func (a adminAPIHandlers) GetUserInfo(w http.ResponseWriter, r *http.Request) {
return
}
checkDenyOnly := false
if name == cred.AccessKey {
// Check that there is no explicit deny - otherwise it's allowed
// to view one's own info.
checkDenyOnly = true
}
checkDenyOnly := name == cred.AccessKey
if !globalIAMSys.IsAllowed(policy.Args{
AccountName: cred.AccessKey,
@ -493,12 +487,7 @@ func (a adminAPIHandlers) AddUser(w http.ResponseWriter, r *http.Request) {
return
}
checkDenyOnly := false
if accessKey == cred.AccessKey {
// Check that there is no explicit deny - otherwise it's allowed
// to change one's own password.
checkDenyOnly = true
}
checkDenyOnly := accessKey == cred.AccessKey
if !globalIAMSys.IsAllowed(policy.Args{
AccountName: cred.AccessKey,
@ -689,10 +678,7 @@ func (a adminAPIHandlers) AddServiceAccount(w http.ResponseWriter, r *http.Reque
}
// Check if we are creating svc account for request sender.
isSvcAccForRequestor := false
if targetUser == requestorUser || targetUser == requestorParentUser {
isSvcAccForRequestor = true
}
isSvcAccForRequestor := targetUser == requestorUser || targetUser == requestorParentUser
// If we are creating svc account for request sender, ensure
// that targetUser is a real user (i.e. not derived
@ -1840,16 +1826,18 @@ func (a adminAPIHandlers) SetPolicyForUserOrGroup(w http.ResponseWriter, r *http
iamLogIf(ctx, err)
} else if foundGroupDN == nil || !underBaseDN {
err = errNoSuchGroup
} else {
entityName = foundGroupDN.NormDN
}
entityName = foundGroupDN.NormDN
} else {
var foundUserDN *xldap.DNSearchResult
if foundUserDN, err = globalIAMSys.LDAPConfig.GetValidatedDNForUsername(entityName); err != nil {
iamLogIf(ctx, err)
} else if foundUserDN == nil {
err = errNoSuchUser
} else {
entityName = foundUserDN.NormDN
}
entityName = foundUserDN.NormDN
}
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
@ -2003,6 +1991,227 @@ func (a adminAPIHandlers) AttachDetachPolicyBuiltin(w http.ResponseWriter, r *ht
writeSuccessResponseJSON(w, encryptedData)
}
// RevokeTokens - POST /minio/admin/v3/revoke-tokens/{userProvider}
func (a adminAPIHandlers) RevokeTokens(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
// Get current object layer instance.
objectAPI := newObjectLayerFn()
if objectAPI == nil || globalNotificationSys == nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL)
return
}
cred, owner, s3Err := validateAdminSignature(ctx, r, "")
if s3Err != ErrNone {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(s3Err), r.URL)
return
}
userProvider := mux.Vars(r)["userProvider"]
user := r.Form.Get("user")
tokenRevokeType := r.Form.Get("tokenRevokeType")
fullRevoke := r.Form.Get("fullRevoke") == "true"
isTokenSelfRevoke := user == ""
if !isTokenSelfRevoke {
var err error
user, err = getUserWithProvider(ctx, userProvider, user, false)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
}
if (user != "" && tokenRevokeType == "" && !fullRevoke) || (tokenRevokeType != "" && fullRevoke) {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrInvalidRequest), r.URL)
return
}
adminPrivilege := globalIAMSys.IsAllowed(policy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: policy.RemoveServiceAccountAdminAction,
ConditionValues: getConditionValues(r, "", cred),
IsOwner: owner,
Claims: cred.Claims,
})
if !adminPrivilege || isTokenSelfRevoke {
parentUser := cred.AccessKey
if cred.ParentUser != "" {
parentUser = cred.ParentUser
}
if !isTokenSelfRevoke && user != parentUser {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAccessDenied), r.URL)
return
}
user = parentUser
}
// Infer token revoke type from the request if requestor is STS.
if isTokenSelfRevoke && tokenRevokeType == "" && !fullRevoke {
if cred.IsTemp() {
tokenRevokeType, _ = cred.Claims[tokenRevokeTypeClaim].(string)
}
if tokenRevokeType == "" {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrNoTokenRevokeType), r.URL)
return
}
}
err := globalIAMSys.RevokeTokens(ctx, user, tokenRevokeType)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
writeSuccessNoContent(w)
}
// InfoAccessKey - GET /minio/admin/v3/info-access-key?access-key=<access-key>
func (a adminAPIHandlers) InfoAccessKey(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
// Get current object layer instance.
objectAPI := newObjectLayerFn()
if objectAPI == nil || globalNotificationSys == nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL)
return
}
cred, owner, s3Err := validateAdminSignature(ctx, r, "")
if s3Err != ErrNone {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(s3Err), r.URL)
return
}
accessKey := mux.Vars(r)["accessKey"]
if accessKey == "" {
accessKey = cred.AccessKey
}
u, ok := globalIAMSys.GetUser(ctx, accessKey)
targetCred := u.Credentials
if !globalIAMSys.IsAllowed(policy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: policy.ListServiceAccountsAdminAction,
ConditionValues: getConditionValues(r, "", cred),
IsOwner: owner,
Claims: cred.Claims,
}) {
// If requested user does not exist and requestor is not allowed to list service accounts, return access denied.
if !ok {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAccessDenied), r.URL)
return
}
requestUser := cred.AccessKey
if cred.ParentUser != "" {
requestUser = cred.ParentUser
}
if requestUser != targetCred.ParentUser {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAccessDenied), r.URL)
return
}
}
if !ok {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminNoSuchAccessKey), r.URL)
return
}
var (
sessionPolicy *policy.Policy
err error
userType string
)
switch {
case targetCred.IsTemp():
userType = "STS"
_, sessionPolicy, err = globalIAMSys.GetTemporaryAccount(ctx, accessKey)
if err == errNoSuchTempAccount {
err = errNoSuchAccessKey
}
case targetCred.IsServiceAccount():
userType = "Service Account"
_, sessionPolicy, err = globalIAMSys.GetServiceAccount(ctx, accessKey)
if err == errNoSuchServiceAccount {
err = errNoSuchAccessKey
}
default:
err = errNoSuchAccessKey
}
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
// if session policy is nil or empty, then it is implied policy
impliedPolicy := sessionPolicy == nil || (sessionPolicy.Version == "" && len(sessionPolicy.Statements) == 0)
var svcAccountPolicy policy.Policy
if !impliedPolicy {
svcAccountPolicy = *sessionPolicy
} else {
policiesNames, err := globalIAMSys.PolicyDBGet(targetCred.ParentUser, targetCred.Groups...)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
svcAccountPolicy = globalIAMSys.GetCombinedPolicy(policiesNames...)
}
policyJSON, err := json.MarshalIndent(svcAccountPolicy, "", " ")
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
var expiration *time.Time
if !targetCred.Expiration.IsZero() && !targetCred.Expiration.Equal(timeSentinel) {
expiration = &targetCred.Expiration
}
userProvider := guessUserProvider(targetCred)
infoResp := madmin.InfoAccessKeyResp{
AccessKey: accessKey,
InfoServiceAccountResp: madmin.InfoServiceAccountResp{
ParentUser: targetCred.ParentUser,
Name: targetCred.Name,
Description: targetCred.Description,
AccountStatus: targetCred.Status,
ImpliedPolicy: impliedPolicy,
Policy: string(policyJSON),
Expiration: expiration,
},
UserType: userType,
UserProvider: userProvider,
}
populateProviderInfoFromClaims(targetCred.Claims, userProvider, &infoResp)
data, err := json.Marshal(infoResp)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
encryptedData, err := madmin.EncryptData(cred.SecretKey, data)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
writeSuccessResponseJSON(w, encryptedData)
}
const (
allPoliciesFile = "policies.json"
allUsersFile = "users.json"
@ -2673,7 +2882,7 @@ func addExpirationToCondValues(exp *time.Time, condValues map[string][]string) e
if exp == nil || exp.IsZero() || exp.Equal(timeSentinel) {
return nil
}
dur := exp.Sub(time.Now())
dur := time.Until(*exp)
if dur <= 0 {
return errors.New("unsupported expiration time")
}
@ -2739,7 +2948,7 @@ func commonAddServiceAccount(r *http.Request, ldap bool) (context.Context, auth.
name: createReq.Name,
description: description,
expiration: createReq.Expiration,
claims: make(map[string]interface{}),
claims: make(map[string]any),
}
condValues := getConditionValues(r, "", cred)
@ -2751,7 +2960,7 @@ func commonAddServiceAccount(r *http.Request, ldap bool) (context.Context, auth.
denyOnly := (targetUser == cred.AccessKey || targetUser == cred.ParentUser)
if ldap && !denyOnly {
res, _ := globalIAMSys.LDAPConfig.GetValidatedDNForUsername(targetUser)
if res.NormDN == cred.ParentUser {
if res != nil && res.NormDN == cred.ParentUser {
denyOnly = true
}
}

View File

@ -160,7 +160,7 @@ func (s *TestSuiteIAM) SetUpSuite(c *check) {
}
func (s *TestSuiteIAM) RestartIAMSuite(c *check) {
s.TestSuiteCommon.RestartTestServer(c)
s.RestartTestServer(c)
s.iamSetup(c)
}
@ -208,6 +208,8 @@ func TestIAMInternalIDPServerSuite(t *testing.T) {
suite.TestGroupAddRemove(c)
suite.TestServiceAccountOpsByAdmin(c)
suite.TestServiceAccountPrivilegeEscalationBug(c)
suite.TestServiceAccountPrivilegeEscalationBug2_2025_10_15(c, true)
suite.TestServiceAccountPrivilegeEscalationBug2_2025_10_15(c, false)
suite.TestServiceAccountOpsByUser(c)
suite.TestServiceAccountDurationSecondsCondition(c)
suite.TestAddServiceAccountPerms(c)
@ -332,7 +334,7 @@ func (s *TestSuiteIAM) TestUserPolicyEscalationBug(c *check) {
// 2.2 create and associate policy to user
policy := "mypolicy-test-user-update"
policyBytes := []byte(fmt.Sprintf(`{
policyBytes := fmt.Appendf(nil, `{
"Version": "2012-10-17",
"Statement": [
{
@ -355,7 +357,7 @@ func (s *TestSuiteIAM) TestUserPolicyEscalationBug(c *check) {
]
}
]
}`, bucket, bucket))
}`, bucket, bucket)
err = s.adm.AddCannedPolicy(ctx, policy, policyBytes)
if err != nil {
c.Fatalf("policy add error: %v", err)
@ -562,7 +564,7 @@ func (s *TestSuiteIAM) TestPolicyCreate(c *check) {
// 1. Create a policy
policy := "mypolicy"
policyBytes := []byte(fmt.Sprintf(`{
policyBytes := fmt.Appendf(nil, `{
"Version": "2012-10-17",
"Statement": [
{
@ -585,7 +587,7 @@ func (s *TestSuiteIAM) TestPolicyCreate(c *check) {
]
}
]
}`, bucket, bucket))
}`, bucket, bucket)
err = s.adm.AddCannedPolicy(ctx, policy, policyBytes)
if err != nil {
c.Fatalf("policy add error: %v", err)
@ -680,7 +682,7 @@ func (s *TestSuiteIAM) TestCannedPolicies(c *check) {
c.Fatalf("bucket creat error: %v", err)
}
policyBytes := []byte(fmt.Sprintf(`{
policyBytes := fmt.Appendf(nil, `{
"Version": "2012-10-17",
"Statement": [
{
@ -703,7 +705,7 @@ func (s *TestSuiteIAM) TestCannedPolicies(c *check) {
]
}
]
}`, bucket, bucket))
}`, bucket, bucket)
// Check that default policies can be overwritten.
err = s.adm.AddCannedPolicy(ctx, "readwrite", policyBytes)
@ -739,7 +741,7 @@ func (s *TestSuiteIAM) TestGroupAddRemove(c *check) {
}
policy := "mypolicy"
policyBytes := []byte(fmt.Sprintf(`{
policyBytes := fmt.Appendf(nil, `{
"Version": "2012-10-17",
"Statement": [
{
@ -762,7 +764,7 @@ func (s *TestSuiteIAM) TestGroupAddRemove(c *check) {
]
}
]
}`, bucket, bucket))
}`, bucket, bucket)
err = s.adm.AddCannedPolicy(ctx, policy, policyBytes)
if err != nil {
c.Fatalf("policy add error: %v", err)
@ -911,7 +913,7 @@ func (s *TestSuiteIAM) TestServiceAccountOpsByUser(c *check) {
// Create policy, user and associate policy
policy := "mypolicy"
policyBytes := []byte(fmt.Sprintf(`{
policyBytes := fmt.Appendf(nil, `{
"Version": "2012-10-17",
"Statement": [
{
@ -934,7 +936,7 @@ func (s *TestSuiteIAM) TestServiceAccountOpsByUser(c *check) {
]
}
]
}`, bucket, bucket))
}`, bucket, bucket)
err = s.adm.AddCannedPolicy(ctx, policy, policyBytes)
if err != nil {
c.Fatalf("policy add error: %v", err)
@ -995,7 +997,7 @@ func (s *TestSuiteIAM) TestServiceAccountDurationSecondsCondition(c *check) {
// Create policy, user and associate policy
policy := "mypolicy"
policyBytes := []byte(fmt.Sprintf(`{
policyBytes := fmt.Appendf(nil, `{
"Version": "2012-10-17",
"Statement": [
{
@ -1026,7 +1028,7 @@ func (s *TestSuiteIAM) TestServiceAccountDurationSecondsCondition(c *check) {
]
}
]
}`, bucket, bucket))
}`, bucket, bucket)
err = s.adm.AddCannedPolicy(ctx, policy, policyBytes)
if err != nil {
c.Fatalf("policy add error: %v", err)
@ -1093,7 +1095,7 @@ func (s *TestSuiteIAM) TestServiceAccountOpsByAdmin(c *check) {
// Create policy, user and associate policy
policy := "mypolicy"
policyBytes := []byte(fmt.Sprintf(`{
policyBytes := fmt.Appendf(nil, `{
"Version": "2012-10-17",
"Statement": [
{
@ -1116,7 +1118,7 @@ func (s *TestSuiteIAM) TestServiceAccountOpsByAdmin(c *check) {
]
}
]
}`, bucket, bucket))
}`, bucket, bucket)
err = s.adm.AddCannedPolicy(ctx, policy, policyBytes)
if err != nil {
c.Fatalf("policy add error: %v", err)
@ -1249,6 +1251,108 @@ func (s *TestSuiteIAM) TestServiceAccountPrivilegeEscalationBug(c *check) {
}
}
func (s *TestSuiteIAM) TestServiceAccountPrivilegeEscalationBug2_2025_10_15(c *check, forRoot bool) {
ctx, cancel := context.WithTimeout(context.Background(), testDefaultTimeout)
defer cancel()
for i := range 3 {
err := s.client.MakeBucket(ctx, fmt.Sprintf("bucket%d", i+1), minio.MakeBucketOptions{})
if err != nil {
c.Fatalf("bucket create error: %v", err)
}
defer func(i int) {
_ = s.client.RemoveBucket(ctx, fmt.Sprintf("bucket%d", i+1))
}(i)
}
allow2BucketsPolicyBytes := []byte(`{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ListBucket1AndBucket2",
"Effect": "Allow",
"Action": ["s3:ListBucket"],
"Resource": ["arn:aws:s3:::bucket1", "arn:aws:s3:::bucket2"]
},
{
"Sid": "ReadWriteBucket1AndBucket2Objects",
"Effect": "Allow",
"Action": [
"s3:DeleteObject",
"s3:DeleteObjectVersion",
"s3:GetObject",
"s3:GetObjectVersion",
"s3:PutObject"
],
"Resource": ["arn:aws:s3:::bucket1/*", "arn:aws:s3:::bucket2/*"]
}
]
}`)
if forRoot {
// Create a service account for the root user.
_, err := s.adm.AddServiceAccount(ctx, madmin.AddServiceAccountReq{
Policy: allow2BucketsPolicyBytes,
AccessKey: "restricted",
SecretKey: "restricted123",
})
if err != nil {
c.Fatalf("could not create service account")
}
defer func() {
_ = s.adm.DeleteServiceAccount(ctx, "restricted")
}()
} else {
// Create a regular user and attach consoleAdmin policy
err := s.adm.AddUser(ctx, "foobar", "foobar123")
if err != nil {
c.Fatalf("could not create user")
}
_, err = s.adm.AttachPolicy(ctx, madmin.PolicyAssociationReq{
Policies: []string{"consoleAdmin"},
User: "foobar",
})
if err != nil {
c.Fatalf("could not attach policy")
}
// Create a service account for the regular user.
_, err = s.adm.AddServiceAccount(ctx, madmin.AddServiceAccountReq{
Policy: allow2BucketsPolicyBytes,
TargetUser: "foobar",
AccessKey: "restricted",
SecretKey: "restricted123",
})
if err != nil {
c.Fatalf("could not create service account: %v", err)
}
defer func() {
_ = s.adm.DeleteServiceAccount(ctx, "restricted")
_ = s.adm.RemoveUser(ctx, "foobar")
}()
}
restrictedClient := s.getUserClient(c, "restricted", "restricted123", "")
buckets, err := restrictedClient.ListBuckets(ctx)
if err != nil {
c.Fatalf("err fetching buckets %s", err)
}
if len(buckets) != 2 || buckets[0].Name != "bucket1" || buckets[1].Name != "bucket2" {
c.Fatalf("restricted service account should only have access to bucket1 and bucket2")
}
// Try to escalate privileges
restrictedAdmClient := s.getAdminClient(c, "restricted", "restricted123", "")
_, err = restrictedAdmClient.AddServiceAccount(ctx, madmin.AddServiceAccountReq{
AccessKey: "newroot",
SecretKey: "newroot123",
})
if err == nil {
c.Fatalf("restricted service account was able to create service account bypassing sub-policy!")
}
}
func (s *TestSuiteIAM) SetUpAccMgmtPlugin(c *check) {
ctx, cancel := context.WithTimeout(context.Background(), testDefaultTimeout)
defer cancel()
@ -1367,7 +1471,7 @@ func (s *TestSuiteIAM) TestAccMgmtPlugin(c *check) {
svcAK, svcSK := mustGenerateCredentials(c)
// This policy does not allow listing objects.
policyBytes := []byte(fmt.Sprintf(`{
policyBytes := fmt.Appendf(nil, `{
"Version": "2012-10-17",
"Statement": [
{
@ -1381,7 +1485,7 @@ func (s *TestSuiteIAM) TestAccMgmtPlugin(c *check) {
]
}
]
}`, bucket))
}`, bucket)
cr, err := userAdmClient.AddServiceAccount(ctx, madmin.AddServiceAccountReq{
Policy: policyBytes,
TargetUser: accessKey,
@ -1558,7 +1662,7 @@ func (c *check) mustDownload(ctx context.Context, client *minio.Client, bucket s
func (c *check) mustUploadReturnVersions(ctx context.Context, client *minio.Client, bucket string) []string {
c.Helper()
versions := []string{}
for i := 0; i < 5; i++ {
for range 5 {
ui, err := client.PutObject(ctx, bucket, "some-object", bytes.NewBuffer([]byte("stuff")), 5, minio.PutObjectOptions{})
if err != nil {
c.Fatalf("upload did not succeed got %#v", err)
@ -1627,7 +1731,7 @@ func (c *check) assertSvcAccSessionPolicyUpdate(ctx context.Context, s *TestSuit
svcAK, svcSK := mustGenerateCredentials(c)
// This policy does not allow listing objects.
policyBytes := []byte(fmt.Sprintf(`{
policyBytes := fmt.Appendf(nil, `{
"Version": "2012-10-17",
"Statement": [
{
@ -1641,7 +1745,7 @@ func (c *check) assertSvcAccSessionPolicyUpdate(ctx context.Context, s *TestSuit
]
}
]
}`, bucket))
}`, bucket)
cr, err := madmClient.AddServiceAccount(ctx, madmin.AddServiceAccountReq{
Policy: policyBytes,
TargetUser: accessKey,
@ -1655,7 +1759,7 @@ func (c *check) assertSvcAccSessionPolicyUpdate(ctx context.Context, s *TestSuit
c.mustNotListObjects(ctx, svcClient, bucket)
// This policy allows listing objects.
newPolicyBytes := []byte(fmt.Sprintf(`{
newPolicyBytes := fmt.Appendf(nil, `{
"Version": "2012-10-17",
"Statement": [
{
@ -1668,7 +1772,7 @@ func (c *check) assertSvcAccSessionPolicyUpdate(ctx context.Context, s *TestSuit
]
}
]
}`, bucket))
}`, bucket)
err = madmClient.UpdateServiceAccount(ctx, svcAK, madmin.UpdateServiceAccountReq{
NewPolicy: newPolicyBytes,
})

View File

@ -49,6 +49,7 @@ import (
"github.com/klauspost/compress/zip"
"github.com/minio/madmin-go/v3"
"github.com/minio/madmin-go/v3/estream"
"github.com/minio/madmin-go/v3/logger/log"
"github.com/minio/minio-go/v7/pkg/set"
"github.com/minio/minio/internal/auth"
"github.com/minio/minio/internal/dsync"
@ -59,7 +60,6 @@ import (
"github.com/minio/minio/internal/kms"
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
"github.com/minio/pkg/v3/logger/message/log"
xnet "github.com/minio/pkg/v3/net"
"github.com/minio/pkg/v3/policy"
"github.com/secure-io/sio-go"
@ -954,7 +954,7 @@ func (a adminAPIHandlers) ForceUnlockHandler(w http.ResponseWriter, r *http.Requ
var args dsync.LockArgs
var lockers []dsync.NetLocker
for _, path := range strings.Split(vars["paths"], ",") {
for path := range strings.SplitSeq(vars["paths"], ",") {
if path == "" {
continue
}
@ -1193,7 +1193,7 @@ type dummyFileInfo struct {
mode os.FileMode
modTime time.Time
isDir bool
sys interface{}
sys any
}
func (f dummyFileInfo) Name() string { return f.name }
@ -1201,7 +1201,7 @@ func (f dummyFileInfo) Size() int64 { return f.size }
func (f dummyFileInfo) Mode() os.FileMode { return f.mode }
func (f dummyFileInfo) ModTime() time.Time { return f.modTime }
func (f dummyFileInfo) IsDir() bool { return f.isDir }
func (f dummyFileInfo) Sys() interface{} { return f.sys }
func (f dummyFileInfo) Sys() any { return f.sys }
// DownloadProfilingHandler - POST /minio/admin/v3/profiling/download
// ----------
@ -1243,17 +1243,17 @@ func extractHealInitParams(vars map[string]string, qParams url.Values, r io.Read
if hip.objPrefix != "" {
// Bucket is required if object-prefix is given
err = ErrHealMissingBucket
return
return hip, err
}
} else if isReservedOrInvalidBucket(hip.bucket, false) {
err = ErrInvalidBucketName
return
return hip, err
}
// empty prefix is valid.
if !IsValidObjectPrefix(hip.objPrefix) {
err = ErrInvalidObjectName
return
return hip, err
}
if len(qParams[mgmtClientToken]) > 0 {
@ -1275,7 +1275,7 @@ func extractHealInitParams(vars map[string]string, qParams url.Values, r io.Read
if (hip.forceStart && hip.forceStop) ||
(hip.clientToken != "" && (hip.forceStart || hip.forceStop)) {
err = ErrInvalidRequest
return
return hip, err
}
// ignore body if clientToken is provided
@ -1284,12 +1284,12 @@ func extractHealInitParams(vars map[string]string, qParams url.Values, r io.Read
if jerr != nil {
adminLogIf(GlobalContext, jerr, logger.ErrorKind)
err = ErrRequestBodyParse
return
return hip, err
}
}
err = ErrNone
return
return hip, err
}
// HealHandler - POST /minio/admin/v3/heal/
@ -1407,7 +1407,7 @@ func (a adminAPIHandlers) HealHandler(w http.ResponseWriter, r *http.Request) {
if exists && !nh.hasEnded() && len(nh.currentStatus.Items) > 0 {
clientToken := nh.clientToken
if globalIsDistErasure {
clientToken = fmt.Sprintf("%s:%d", nh.clientToken, GetProxyEndpointLocalIndex(globalProxyEndpoints))
clientToken = fmt.Sprintf("%s%s%d", nh.clientToken, getKeySeparator(), GetProxyEndpointLocalIndex(globalProxyEndpoints))
}
b, err := json.Marshal(madmin.HealStartSuccess{
ClientToken: clientToken,
@ -2022,7 +2022,7 @@ func extractTraceOptions(r *http.Request) (opts madmin.ServiceTraceOpts, err err
opts.OS = true
// Older mc - cannot deal with more types...
}
return
return opts, err
}
// TraceHandler - POST /minio/admin/v3/trace
@ -2676,7 +2676,7 @@ func fetchHealthInfo(healthCtx context.Context, objectAPI ObjectLayer, query *ur
// disk metrics are already included under drive info of each server
getRealtimeMetrics := func() *madmin.RealtimeMetrics {
var m madmin.RealtimeMetrics
var types madmin.MetricType = madmin.MetricsAll &^ madmin.MetricsDisk
types := madmin.MetricsAll &^ madmin.MetricsDisk
mLocal := collectLocalMetrics(types, collectMetricsOpts{})
m.Merge(&mLocal)
cctx, cancel := context.WithTimeout(healthCtx, time.Second/2)
@ -2720,7 +2720,7 @@ func fetchHealthInfo(healthCtx context.Context, objectAPI ObjectLayer, query *ur
poolsArgs := re.ReplaceAllString(cmdLine, `$3`)
var anonPools []string
if !(strings.Contains(poolsArgs, "{") && strings.Contains(poolsArgs, "}")) {
if !strings.Contains(poolsArgs, "{") || !strings.Contains(poolsArgs, "}") {
// No ellipses pattern. Anonymize host name from every pool arg
pools := strings.Fields(poolsArgs)
anonPools = make([]string, len(pools))
@ -3420,7 +3420,7 @@ func (a adminAPIHandlers) InspectDataHandler(w http.ResponseWriter, r *http.Requ
}
// save the format.json as part of inspect by default
if !(volume == minioMetaBucket && file == formatConfigFile) {
if volume != minioMetaBucket || file != formatConfigFile {
err = o.GetRawData(ctx, minioMetaBucket, formatConfigFile, rawDataFn)
}
if !errors.Is(err, errFileNotFound) {

View File

@ -263,7 +263,7 @@ func buildAdminRequest(queryVal url.Values, method, path string,
}
func TestAdminServerInfo(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
ctx, cancel := context.WithCancel(t.Context())
defer cancel()
adminTestBed, err := prepareAdminErasureTestBed(ctx)
@ -402,7 +402,7 @@ func (b byResourceUID) Less(i, j int) bool {
func TestTopLockEntries(t *testing.T) {
locksHeld := make(map[string][]lockRequesterInfo)
var owners []string
for i := 0; i < 4; i++ {
for i := range 4 {
owners = append(owners, fmt.Sprintf("node-%d", i))
}
@ -410,7 +410,7 @@ func TestTopLockEntries(t *testing.T) {
// request UID, but 10 different resource names associated with it.
var lris []lockRequesterInfo
uuid := mustGetUUID()
for i := 0; i < 10; i++ {
for i := range 10 {
resource := fmt.Sprintf("bucket/delete-object-%d", i)
lri := lockRequesterInfo{
Name: resource,
@ -425,7 +425,7 @@ func TestTopLockEntries(t *testing.T) {
}
// Add a few concurrent read locks to the mix
for i := 0; i < 50; i++ {
for i := range 50 {
resource := fmt.Sprintf("bucket/get-object-%d", i)
lri := lockRequesterInfo{
Name: resource,

View File

@ -22,6 +22,7 @@ import (
"encoding/json"
"errors"
"fmt"
"maps"
"net/http"
"sort"
"sync"
@ -260,7 +261,7 @@ func (ahs *allHealState) stopHealSequence(path string) ([]byte, APIError) {
} else {
clientToken := he.clientToken
if globalIsDistErasure {
clientToken = fmt.Sprintf("%s:%d", he.clientToken, GetProxyEndpointLocalIndex(globalProxyEndpoints))
clientToken = fmt.Sprintf("%s%s%d", he.clientToken, getKeySeparator(), GetProxyEndpointLocalIndex(globalProxyEndpoints))
}
hsp = madmin.HealStopSuccess{
@ -331,7 +332,7 @@ func (ahs *allHealState) LaunchNewHealSequence(h *healSequence, objAPI ObjectLay
clientToken := h.clientToken
if globalIsDistErasure {
clientToken = fmt.Sprintf("%s:%d", h.clientToken, GetProxyEndpointLocalIndex(globalProxyEndpoints))
clientToken = fmt.Sprintf("%s%s%d", h.clientToken, getKeySeparator(), GetProxyEndpointLocalIndex(globalProxyEndpoints))
}
if h.clientToken == bgHealingUUID {
@ -520,9 +521,7 @@ func (h *healSequence) getScannedItemsMap() map[madmin.HealItemType]int64 {
// Make a copy before returning the value
retMap := make(map[madmin.HealItemType]int64, len(h.scannedItemsMap))
for k, v := range h.scannedItemsMap {
retMap[k] = v
}
maps.Copy(retMap, h.scannedItemsMap)
return retMap
}
@ -534,9 +533,7 @@ func (h *healSequence) getHealedItemsMap() map[madmin.HealItemType]int64 {
// Make a copy before returning the value
retMap := make(map[madmin.HealItemType]int64, len(h.healedItemsMap))
for k, v := range h.healedItemsMap {
retMap[k] = v
}
maps.Copy(retMap, h.healedItemsMap)
return retMap
}
@ -549,9 +546,7 @@ func (h *healSequence) getHealFailedItemsMap() map[madmin.HealItemType]int64 {
// Make a copy before returning the value
retMap := make(map[madmin.HealItemType]int64, len(h.healFailedItemsMap))
for k, v := range h.healFailedItemsMap {
retMap[k] = v
}
maps.Copy(retMap, h.healFailedItemsMap)
return retMap
}

View File

@ -246,6 +246,7 @@ func registerAdminRouter(router *mux.Router, enableConfigOps bool) {
// Access key (service account/STS) operations
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/list-access-keys-bulk").HandlerFunc(adminMiddleware(adminAPI.ListAccessKeysBulk)).Queries("listType", "{listType:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/info-access-key").HandlerFunc(adminMiddleware(adminAPI.InfoAccessKey)).Queries("accessKey", "{accessKey:.*}")
// Info policy IAM latest
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/info-canned-policy").HandlerFunc(adminMiddleware(adminAPI.InfoCannedPolicy)).Queries("name", "{name:.*}")
@ -295,7 +296,7 @@ func registerAdminRouter(router *mux.Router, enableConfigOps bool) {
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/import-iam").HandlerFunc(adminMiddleware(adminAPI.ImportIAM, noGZFlag))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/import-iam-v2").HandlerFunc(adminMiddleware(adminAPI.ImportIAMV2, noGZFlag))
// IDentity Provider configuration APIs
// Identity Provider configuration APIs
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/idp-config/{type}/{name}").HandlerFunc(adminMiddleware(adminAPI.AddIdentityProviderCfg))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/idp-config/{type}/{name}").HandlerFunc(adminMiddleware(adminAPI.UpdateIdentityProviderCfg))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/idp-config/{type}").HandlerFunc(adminMiddleware(adminAPI.ListIdentityProviderCfg))
@ -312,6 +313,11 @@ func registerAdminRouter(router *mux.Router, enableConfigOps bool) {
// LDAP IAM operations
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/idp/ldap/policy-entities").HandlerFunc(adminMiddleware(adminAPI.ListLDAPPolicyMappingEntities))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/idp/ldap/policy/{operation}").HandlerFunc(adminMiddleware(adminAPI.AttachDetachPolicyLDAP))
// OpenID specific service accounts ops
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/idp/openid/list-access-keys-bulk").
HandlerFunc(adminMiddleware(adminAPI.ListAccessKeysOpenIDBulk)).Queries("listType", "{listType:.*}")
// -- END IAM APIs --
// GetBucketQuotaConfig
@ -424,6 +430,9 @@ func registerAdminRouter(router *mux.Router, enableConfigOps bool) {
// -- Health API --
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/healthinfo").
HandlerFunc(adminMiddleware(adminAPI.HealthInfoHandler))
// STS Revocation
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/revoke-tokens/{userProvider}").HandlerFunc(adminMiddleware(adminAPI.RevokeTokens))
}
// If none of the routes match add default error handler routes

View File

@ -44,10 +44,10 @@ type DeleteMarkerMTime struct {
// MarshalXML encodes expiration date if it is non-zero and encodes
// empty string otherwise
func (t DeleteMarkerMTime) MarshalXML(e *xml.Encoder, startElement xml.StartElement) error {
if t.Time.IsZero() {
if t.IsZero() {
return nil
}
return e.EncodeElement(t.Time.Format(time.RFC3339), startElement)
return e.EncodeElement(t.Format(time.RFC3339), startElement)
}
// ObjectV object version key/versionId

View File

@ -214,6 +214,9 @@ const (
ErrPolicyNotAttached
ErrExcessData
ErrPolicyInvalidName
ErrNoTokenRevokeType
ErrAdminOpenIDNotEnabled
ErrAdminNoSuchAccessKey
// Add new error codes here.
// SSE-S3/SSE-KMS related API errors
@ -567,6 +570,11 @@ var errorCodes = errorCodeMap{
Description: "Policy name may not contain comma",
HTTPStatusCode: http.StatusBadRequest,
},
ErrAdminOpenIDNotEnabled: {
Code: "OpenIDNotEnabled",
Description: "No enabled OpenID Connect identity providers",
HTTPStatusCode: http.StatusBadRequest,
},
ErrPolicyTooLarge: {
Code: "PolicyTooLarge",
Description: "Policy exceeds the maximum allowed document size.",
@ -1264,6 +1272,16 @@ var errorCodes = errorCodeMap{
Description: "The security token included in the request is invalid",
HTTPStatusCode: http.StatusForbidden,
},
ErrNoTokenRevokeType: {
Code: "InvalidArgument",
Description: "No token revoke type specified and one could not be inferred from the request",
HTTPStatusCode: http.StatusBadRequest,
},
ErrAdminNoSuchAccessKey: {
Code: "XMinioAdminNoSuchAccessKey",
Description: "The specified access key does not exist.",
HTTPStatusCode: http.StatusNotFound,
},
// S3 extensions.
ErrContentSHA256Mismatch: {
@ -2161,6 +2179,8 @@ func toAPIErrorCode(ctx context.Context, err error) (apiErr APIErrorCode) {
apiErr = ErrAdminNoSuchUserLDAPWarn
case errNoSuchServiceAccount:
apiErr = ErrAdminServiceAccountNotFound
case errNoSuchAccessKey:
apiErr = ErrAdminNoSuchAccessKey
case errNoSuchGroup:
apiErr = ErrAdminNoSuchGroup
case errGroupNotEmpty:

View File

@ -18,7 +18,6 @@
package cmd
import (
"context"
"errors"
"testing"
@ -64,7 +63,7 @@ var toAPIErrorTests = []struct {
}
func TestAPIErrCode(t *testing.T) {
ctx := context.Background()
ctx := t.Context()
for i, testCase := range toAPIErrorTests {
errCode := toAPIErrorCode(ctx, testCase.err)
if errCode != testCase.errCode {

View File

@ -23,6 +23,7 @@ import (
"encoding/json"
"encoding/xml"
"fmt"
"mime"
"net/http"
"strconv"
"strings"
@ -64,7 +65,7 @@ func setCommonHeaders(w http.ResponseWriter) {
}
// Encodes the response headers into XML format.
func encodeResponse(response interface{}) []byte {
func encodeResponse(response any) []byte {
var buf bytes.Buffer
buf.WriteString(xml.Header)
if err := xml.NewEncoder(&buf).Encode(response); err != nil {
@ -82,7 +83,7 @@ func encodeResponse(response interface{}) []byte {
// Do not use this function for anything other than ListObjects()
// variants, please open a github discussion if you wish to use
// this in other places.
func encodeResponseList(response interface{}) []byte {
func encodeResponseList(response any) []byte {
var buf bytes.Buffer
buf.WriteString(xxml.Header)
if err := xxml.NewEncoder(&buf).Encode(response); err != nil {
@ -93,7 +94,7 @@ func encodeResponseList(response interface{}) []byte {
}
// Encodes the response headers into JSON format.
func encodeResponseJSON(response interface{}) []byte {
func encodeResponseJSON(response any) []byte {
var bytesBuffer bytes.Buffer
e := json.NewEncoder(&bytesBuffer)
e.Encode(response)
@ -168,6 +169,32 @@ func setObjectHeaders(ctx context.Context, w http.ResponseWriter, objInfo Object
if !stringsHasPrefixFold(k, userMetadataPrefix) {
continue
}
// check the doc https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingMetadata.html
// For metadata values like "ö", "ÄMÄZÕÑ S3", and "öha, das sollte eigentlich
// funktionieren", tested against a real AWS S3 bucket, S3 may encode incorrectly. For
// example, "ö" was encoded as =?UTF-8?B?w4PCtg==?=, producing invalid UTF-8 instead
// of =?UTF-8?B?w7Y=?=. This mirrors errors like the ä½ in another string.
//
// S3 uses B-encoding (Base64) for non-ASCII-heavy metadata and Q-encoding
// (quoted-printable) for mostly ASCII strings. Long strings are split at word
// boundaries to fit RFC 2047's 75-character limit, ensuring HTTP parser
// compatibility.
//
// However, this splitting increases header size and can introduce errors, unlike Go's
// mime package in MinIO, which correctly encodes strings with fixed B/Q encodings,
// avoiding S3's heuristic-driven issues.
//
// For MinIO developers, decode S3 metadata with mime.WordDecoder, validate outputs,
// report encoding bugs to AWS, and use ASCII-only metadata to ensure reliable S3 API
// compatibility.
if needsMimeEncoding(v) {
// see https://github.com/golang/go/blob/release-branch.go1.24/src/net/mail/message.go#L325
if strings.ContainsAny(v, "\"#$%&'(),.:;<>@[]^`{|}~") {
v = mime.BEncoding.Encode("UTF-8", v)
} else {
v = mime.QEncoding.Encode("UTF-8", v)
}
}
w.Header()[strings.ToLower(k)] = []string{v}
isSet = true
break
@ -229,3 +256,14 @@ func setObjectHeaders(ctx context.Context, w http.ResponseWriter, objInfo Object
return nil
}
// needsMimeEncoding reports whether s contains any bytes that need to be encoded.
// see mime.needsEncoding
func needsMimeEncoding(s string) bool {
for _, b := range s {
if (b < ' ' || b > '~') && b != '\t' {
return true
}
}
return false
}
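As a standalone illustration of the round-trip the comment above recommends (this sketch is not part of the diff and only uses Go's standard `mime` package):

```go
package main

import (
	"fmt"
	"mime"
)

func main() {
	// Encode a non-ASCII metadata value into an RFC 2047 encoded-word,
	// then decode it with mime.WordDecoder as the comment recommends.
	encoded := mime.BEncoding.Encode("UTF-8", "öha, das sollte eigentlich funktionieren")
	decoded, err := new(mime.WordDecoder).DecodeHeader(encoded)
	fmt.Println(encoded)      // encoded-word form, safe for HTTP headers
	fmt.Println(decoded, err) // original string, <nil>
}
```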

View File

@ -34,7 +34,8 @@ func TestNewRequestID(t *testing.T) {
e = char
// Ensure that it is alphanumeric, in this case, between 0-9 and A-Z.
if !(('0' <= e && e <= '9') || ('A' <= e && e <= 'Z')) {
isAlnum := ('0' <= e && e <= '9') || ('A' <= e && e <= 'Z')
if !isAlnum {
t.Fail()
}
}

View File

@ -31,7 +31,7 @@ func getListObjectsV1Args(values url.Values) (prefix, marker, delimiter string,
var err error
if maxkeys, err = strconv.Atoi(values.Get("max-keys")); err != nil {
errCode = ErrInvalidMaxKeys
return
return prefix, marker, delimiter, maxkeys, encodingType, errCode
}
} else {
maxkeys = maxObjectList
@ -41,7 +41,7 @@ func getListObjectsV1Args(values url.Values) (prefix, marker, delimiter string,
marker = values.Get("marker")
delimiter = values.Get("delimiter")
encodingType = values.Get("encoding-type")
return
return prefix, marker, delimiter, maxkeys, encodingType, errCode
}
func getListBucketObjectVersionsArgs(values url.Values) (prefix, marker, delimiter string, maxkeys int, encodingType, versionIDMarker string, errCode APIErrorCode) {
@ -51,7 +51,7 @@ func getListBucketObjectVersionsArgs(values url.Values) (prefix, marker, delimit
var err error
if maxkeys, err = strconv.Atoi(values.Get("max-keys")); err != nil {
errCode = ErrInvalidMaxKeys
return
return prefix, marker, delimiter, maxkeys, encodingType, versionIDMarker, errCode
}
} else {
maxkeys = maxObjectList
@ -62,7 +62,7 @@ func getListBucketObjectVersionsArgs(values url.Values) (prefix, marker, delimit
delimiter = values.Get("delimiter")
encodingType = values.Get("encoding-type")
versionIDMarker = values.Get("version-id-marker")
return
return prefix, marker, delimiter, maxkeys, encodingType, versionIDMarker, errCode
}
// Parse bucket url queries for ListObjects V2.
@ -73,7 +73,7 @@ func getListObjectsV2Args(values url.Values) (prefix, token, startAfter, delimit
if val, ok := values["continuation-token"]; ok {
if len(val[0]) == 0 {
errCode = ErrIncorrectContinuationToken
return
return prefix, token, startAfter, delimiter, fetchOwner, maxkeys, encodingType, errCode
}
}
@ -81,7 +81,7 @@ func getListObjectsV2Args(values url.Values) (prefix, token, startAfter, delimit
var err error
if maxkeys, err = strconv.Atoi(values.Get("max-keys")); err != nil {
errCode = ErrInvalidMaxKeys
return
return prefix, token, startAfter, delimiter, fetchOwner, maxkeys, encodingType, errCode
}
} else {
maxkeys = maxObjectList
@ -97,11 +97,11 @@ func getListObjectsV2Args(values url.Values) (prefix, token, startAfter, delimit
decodedToken, err := base64.StdEncoding.DecodeString(token)
if err != nil {
errCode = ErrIncorrectContinuationToken
return
return prefix, token, startAfter, delimiter, fetchOwner, maxkeys, encodingType, errCode
}
token = string(decodedToken)
}
return
return prefix, token, startAfter, delimiter, fetchOwner, maxkeys, encodingType, errCode
}
// Parse bucket url queries for ?uploads
@ -112,7 +112,7 @@ func getBucketMultipartResources(values url.Values) (prefix, keyMarker, uploadID
var err error
if maxUploads, err = strconv.Atoi(values.Get("max-uploads")); err != nil {
errCode = ErrInvalidMaxUploads
return
return prefix, keyMarker, uploadIDMarker, delimiter, maxUploads, encodingType, errCode
}
} else {
maxUploads = maxUploadsList
@ -123,7 +123,7 @@ func getBucketMultipartResources(values url.Values) (prefix, keyMarker, uploadID
uploadIDMarker = values.Get("upload-id-marker")
delimiter = values.Get("delimiter")
encodingType = values.Get("encoding-type")
return
return prefix, keyMarker, uploadIDMarker, delimiter, maxUploads, encodingType, errCode
}
// Parse object url queries
@ -134,7 +134,7 @@ func getObjectResources(values url.Values) (uploadID string, partNumberMarker, m
if values.Get("max-parts") != "" {
if maxParts, err = strconv.Atoi(values.Get("max-parts")); err != nil {
errCode = ErrInvalidMaxParts
return
return uploadID, partNumberMarker, maxParts, encodingType, errCode
}
} else {
maxParts = maxPartsList
@ -143,11 +143,11 @@ func getObjectResources(values url.Values) (uploadID string, partNumberMarker, m
if values.Get("part-number-marker") != "" {
if partNumberMarker, err = strconv.Atoi(values.Get("part-number-marker")); err != nil {
errCode = ErrInvalidPartNumberMarker
return
return uploadID, partNumberMarker, maxParts, encodingType, errCode
}
}
uploadID = values.Get("uploadId")
encodingType = values.Get("encoding-type")
return
return uploadID, partNumberMarker, maxParts, encodingType, errCode
}
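The changes in this file replace bare return statements with explicit returns of the named result values; behavior is identical, but each early exit now states what it yields. A minimal sketch of the pattern, with hypothetical names; the same rewrite recurs in later hunks (proxy and bucket-metadata helpers).

package main

import (
    "fmt"
    "strconv"
)

// parseMaxKeys mirrors the pattern above: named results plus explicit returns.
func parseMaxKeys(s string) (maxKeys int, errCode string) {
    var err error
    if maxKeys, err = strconv.Atoi(s); err != nil {
        errCode = "ErrInvalidMaxKeys"
        return maxKeys, errCode // equivalent to a bare `return`
    }
    return maxKeys, errCode
}

func main() {
    n, code := parseMaxKeys("100")
    fmt.Println(n, code == "") // 100 true
    n, code = parseMaxKeys("x")
    fmt.Println(n, code) // 0 ErrInvalidMaxKeys
}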

View File

@ -889,6 +889,12 @@ func generateMultiDeleteResponse(quiet bool, deletedObjects []DeletedObject, err
}
func writeResponse(w http.ResponseWriter, statusCode int, response []byte, mType mimeType) {
// Don't write a response if one has already been written.
// Fixes https://github.com/minio/minio/issues/21633
if headersAlreadyWritten(w) {
return
}
if statusCode == 0 {
statusCode = 200
}
@ -1015,3 +1021,45 @@ func writeCustomErrorResponseJSON(ctx context.Context, w http.ResponseWriter, er
encodedErrorResponse := encodeResponseJSON(errorResponse)
writeResponse(w, err.HTTPStatusCode, encodedErrorResponse, mimeJSON)
}
type unwrapper interface {
Unwrap() http.ResponseWriter
}
// headersAlreadyWritten returns true if the headers have already been written
// to this response writer. It unwraps the ResponseWriter where possible to
// find a trackingResponseWriter.
func headersAlreadyWritten(w http.ResponseWriter) bool {
for {
if trw, ok := w.(*trackingResponseWriter); ok {
return trw.headerWritten
} else if uw, ok := w.(unwrapper); ok {
w = uw.Unwrap()
} else {
return false
}
}
}
// trackingResponseWriter wraps a ResponseWriter and notes when WriteHeader has
// been called. This allows high-level request handlers to check if something
// has already sent the response header.
type trackingResponseWriter struct {
http.ResponseWriter
headerWritten bool
}
func (w *trackingResponseWriter) WriteHeader(statusCode int) {
if !w.headerWritten {
w.headerWritten = true
w.ResponseWriter.WriteHeader(statusCode)
}
}
func (w *trackingResponseWriter) Write(b []byte) (int, error) {
return w.ResponseWriter.Write(b)
}
func (w *trackingResponseWriter) Unwrap() http.ResponseWriter {
return w.ResponseWriter
}
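A minimal sketch of how the wrapper behaves in a handler, assuming the trackingResponseWriter and headersAlreadyWritten definitions from the hunk above; the handler itself is hypothetical.

package cmd

import "net/http"

// demoHandler is a hypothetical handler showing the double-WriteHeader guard.
func demoHandler(w http.ResponseWriter, r *http.Request) {
    trw := &trackingResponseWriter{ResponseWriter: w}

    trw.WriteHeader(http.StatusOK)         // first write wins
    trw.WriteHeader(http.StatusBadGateway) // ignored: header already written

    // Downstream code (writeResponse above) can now bail out early
    // instead of sending a second response on the same connection.
    if headersAlreadyWritten(trw) {
        return
    }
}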

View File

@ -18,8 +18,12 @@
package cmd
import (
"io"
"net/http"
"net/http/httptest"
"testing"
"github.com/klauspost/compress/gzhttp"
)
// Tests object location.
@ -100,7 +104,6 @@ func TestObjectLocation(t *testing.T) {
},
}
for _, testCase := range testCases {
testCase := testCase
t.Run("", func(t *testing.T) {
gotLocation := getObjectLocation(testCase.request, testCase.domains, testCase.bucket, testCase.object)
if testCase.expectedLocation != gotLocation {
@ -123,3 +126,89 @@ func TestGetURLScheme(t *testing.T) {
t.Errorf("Expected %s, got %s", httpsScheme, gotScheme)
}
}
func TestTrackingResponseWriter(t *testing.T) {
rw := httptest.NewRecorder()
trw := &trackingResponseWriter{ResponseWriter: rw}
trw.WriteHeader(123)
if !trw.headerWritten {
t.Fatal("headerWritten was not set by WriteHeader call")
}
_, err := trw.Write([]byte("hello"))
if err != nil {
t.Fatalf("Write unexpectedly failed: %v", err)
}
// Check that WriteHeader and Write were called on the underlying response writer
resp := rw.Result()
if resp.StatusCode != 123 {
t.Fatalf("unexpected status: %v", resp.StatusCode)
}
body, err := io.ReadAll(resp.Body)
if err != nil {
t.Fatalf("reading response body failed: %v", err)
}
if string(body) != "hello" {
t.Fatalf("response body incorrect: %v", string(body))
}
// Check that Unwrap works
if trw.Unwrap() != rw {
t.Fatalf("Unwrap returned wrong result: %v", trw.Unwrap())
}
}
func TestHeadersAlreadyWritten(t *testing.T) {
rw := httptest.NewRecorder()
trw := &trackingResponseWriter{ResponseWriter: rw}
if headersAlreadyWritten(trw) {
t.Fatal("headers have not been written yet")
}
trw.WriteHeader(123)
if !headersAlreadyWritten(trw) {
t.Fatal("headers were written")
}
}
func TestHeadersAlreadyWrittenWrapped(t *testing.T) {
rw := httptest.NewRecorder()
trw := &trackingResponseWriter{ResponseWriter: rw}
wrap1 := &gzhttp.NoGzipResponseWriter{ResponseWriter: trw}
wrap2 := &gzhttp.NoGzipResponseWriter{ResponseWriter: wrap1}
if headersAlreadyWritten(wrap2) {
t.Fatal("headers have not been written yet")
}
wrap2.WriteHeader(123)
if !headersAlreadyWritten(wrap2) {
t.Fatal("headers were written")
}
}
func TestWriteResponseHeadersNotWritten(t *testing.T) {
rw := httptest.NewRecorder()
trw := &trackingResponseWriter{ResponseWriter: rw}
writeResponse(trw, 299, []byte("hello"), "application/foo")
resp := rw.Result()
if resp.StatusCode != 299 {
t.Fatal("response wasn't written")
}
}
func TestWriteResponseHeadersWritten(t *testing.T) {
rw := httptest.NewRecorder()
rw.Code = -1
trw := &trackingResponseWriter{ResponseWriter: rw, headerWritten: true}
writeResponse(trw, 200, []byte("hello"), "application/foo")
if rw.Code != -1 {
t.Fatalf("response was written when it shouldn't have been (Code=%v)", rw.Code)
}
}

View File

@ -218,6 +218,8 @@ func s3APIMiddleware(f http.HandlerFunc, flags ...s3HFlag) http.HandlerFunc {
handlerName := getHandlerName(f, "objectAPIHandlers")
var handler http.HandlerFunc = func(w http.ResponseWriter, r *http.Request) {
w = &trackingResponseWriter{ResponseWriter: w}
// Wrap the actual handler with the appropriate tracing middleware.
var tracedHandler http.HandlerFunc
if handlerFlags.has(traceHdrsS3HFlag) {
@ -227,13 +229,13 @@ func s3APIMiddleware(f http.HandlerFunc, flags ...s3HFlag) http.HandlerFunc {
}
// Skip wrapping with the gzip middleware if specified.
var gzippedHandler http.HandlerFunc = tracedHandler
gzippedHandler := tracedHandler
if !handlerFlags.has(noGZS3HFlag) {
gzippedHandler = gzipHandler(gzippedHandler)
}
// Skip wrapping with throttling middleware if specified.
var throttledHandler http.HandlerFunc = gzippedHandler
throttledHandler := gzippedHandler
if !handlerFlags.has(noThrottleS3HFlag) {
throttledHandler = maxClients(throttledHandler)
}
@ -387,6 +389,11 @@ func registerAPIRouter(router *mux.Router) {
HeadersRegexp(xhttp.AmzSnowballExtract, "true").
HandlerFunc(s3APIMiddleware(api.PutObjectExtractHandler, traceHdrsS3HFlag))
// AppendObject is rejected (not supported)
router.Methods(http.MethodPut).Path("/{object:.+}").
HeadersRegexp(xhttp.AmzWriteOffsetBytes, "").
HandlerFunc(s3APIMiddleware(errorResponseHandler))
// PutObject
router.Methods(http.MethodPut).Path("/{object:.+}").
HandlerFunc(s3APIMiddleware(api.PutObjectHandler, traceHdrsS3HFlag))

View File

@ -43,7 +43,7 @@ func shouldEscape(c byte) bool {
// - Force encoding of '~'
func s3URLEncode(s string) string {
spaceCount, hexCount := 0, 0
for i := 0; i < len(s); i++ {
for i := range len(s) {
c := s[i]
if shouldEscape(c) {
if c == ' ' {
@ -70,7 +70,7 @@ func s3URLEncode(s string) string {
if hexCount == 0 {
copy(t, s)
for i := 0; i < len(s); i++ {
for i := range len(s) {
if s[i] == ' ' {
t[i] = '+'
}
@ -79,7 +79,7 @@ func s3URLEncode(s string) string {
}
j := 0
for i := 0; i < len(s); i++ {
for i := range len(s) {
switch c := s[i]; {
case c == ' ':
t[j] = '+'
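The for i := range len(s) form used in these rewrites is the Go 1.22+ range-over-int syntax, equivalent to the classic three-clause loop; a standalone sketch:

package main

import "fmt"

func main() {
    s := "a b"
    // Go 1.22+: ranging over an int n yields 0, 1, ..., n-1.
    for i := range len(s) {
        fmt.Printf("%d:%q ", i, s[i])
    }
    fmt.Println() // 0:'a' 1:' ' 2:'b'
}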

File diff suppressed because one or more lines are too long

View File

@ -96,7 +96,7 @@ func isRequestSignStreamingTrailerV4(r *http.Request) bool {
// Verify if the request has AWS Streaming Signature Version '4', with unsigned content and trailer.
func isRequestUnsignedTrailerV4(r *http.Request) bool {
return r.Header.Get(xhttp.AmzContentSha256) == unsignedPayloadTrailer &&
r.Method == http.MethodPut && strings.Contains(r.Header.Get(xhttp.ContentEncoding), streamingContentEncoding)
r.Method == http.MethodPut
}
// Authorization type.
@ -216,7 +216,7 @@ func getSessionToken(r *http.Request) (token string) {
// Fetch claims in the security token returned by the client, doesn't return
// errors - upon errors the returned claims map will be empty.
func mustGetClaimsFromToken(r *http.Request) map[string]interface{} {
func mustGetClaimsFromToken(r *http.Request) map[string]any {
claims, _ := getClaimsFromToken(getSessionToken(r))
return claims
}
@ -266,7 +266,7 @@ func getClaimsFromTokenWithSecret(token, secret string) (*xjwt.MapClaims, error)
}
// Fetch claims in the security token returned by the client.
func getClaimsFromToken(token string) (map[string]interface{}, error) {
func getClaimsFromToken(token string) (map[string]any, error) {
jwtClaims, err := getClaimsFromTokenWithSecret(token, globalActiveCred.SecretKey)
if err != nil {
return nil, err
@ -275,7 +275,7 @@ func getClaimsFromToken(token string) (map[string]interface{}, error) {
}
// Fetch claims in the security token returned by the client and validate the token.
func checkClaimsFromToken(r *http.Request, cred auth.Credentials) (map[string]interface{}, APIErrorCode) {
func checkClaimsFromToken(r *http.Request, cred auth.Credentials) (map[string]any, APIErrorCode) {
token := getSessionToken(r)
if token != "" && cred.AccessKey == "" {
// x-amz-security-token is not allowed for anonymous access.
@ -363,7 +363,7 @@ func authenticateRequest(ctx context.Context, r *http.Request, action policy.Act
var cred auth.Credentials
var owner bool
switch getRequestAuthType(r) {
case authTypeUnknown, authTypeStreamingSigned:
case authTypeUnknown, authTypeStreamingSigned, authTypeStreamingSignedTrailer, authTypeStreamingUnsignedTrailer:
return ErrSignatureVersionNotSupported
case authTypePresignedV2, authTypeSignedV2:
if s3Err = isReqAuthenticatedV2(r); s3Err != ErrNone {
@ -674,32 +674,6 @@ func setAuthMiddleware(h http.Handler) http.Handler {
})
}
func validateSignature(atype authType, r *http.Request) (auth.Credentials, bool, APIErrorCode) {
var cred auth.Credentials
var owner bool
var s3Err APIErrorCode
switch atype {
case authTypeUnknown, authTypeStreamingSigned:
return cred, owner, ErrSignatureVersionNotSupported
case authTypeSignedV2, authTypePresignedV2:
if s3Err = isReqAuthenticatedV2(r); s3Err != ErrNone {
return cred, owner, s3Err
}
cred, owner, s3Err = getReqAccessKeyV2(r)
case authTypePresigned, authTypeSigned:
region := globalSite.Region()
if s3Err = isReqAuthenticated(GlobalContext, r, region, serviceS3); s3Err != ErrNone {
return cred, owner, s3Err
}
cred, owner, s3Err = getReqAccessKeyV4(r, region, serviceS3)
}
if s3Err != ErrNone {
return cred, owner, s3Err
}
return cred, owner, ErrNone
}
func isPutRetentionAllowed(bucketName, objectName string, retDays int, retDate time.Time, retMode objectlock.RetMode, byPassSet bool, r *http.Request, cred auth.Credentials, owner bool) (s3Err APIErrorCode) {
var retSet bool
if cred.AccessKey == "" {
@ -754,8 +728,14 @@ func isPutActionAllowed(ctx context.Context, atype authType, bucketName, objectN
return ErrSignatureVersionNotSupported
case authTypeSignedV2, authTypePresignedV2:
cred, owner, s3Err = getReqAccessKeyV2(r)
case authTypeStreamingSigned, authTypePresigned, authTypeSigned, authTypeStreamingSignedTrailer, authTypeStreamingUnsignedTrailer:
case authTypeStreamingSigned, authTypePresigned, authTypeSigned, authTypeStreamingSignedTrailer:
cred, owner, s3Err = getReqAccessKeyV4(r, region, serviceS3)
case authTypeStreamingUnsignedTrailer:
cred, owner, s3Err = getReqAccessKeyV4(r, region, serviceS3)
if s3Err == ErrMissingFields {
// Could be anonymous. cred + owner is zero value.
s3Err = ErrNone
}
}
if s3Err != ErrNone {
return s3Err

View File

@ -413,7 +413,7 @@ func TestIsReqAuthenticated(t *testing.T) {
}
func TestCheckAdminRequestAuthType(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
ctx, cancel := context.WithCancel(t.Context())
defer cancel()
objLayer, fsDir, err := prepareFS(ctx)
@ -450,7 +450,7 @@ func TestCheckAdminRequestAuthType(t *testing.T) {
}
func TestValidateAdminSignature(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
ctx, cancel := context.WithCancel(t.Context())
defer cancel()
objLayer, fsDir, err := prepareFS(ctx)

View File

@ -102,7 +102,7 @@ func waitForLowHTTPReq() {
func initBackgroundHealing(ctx context.Context, objAPI ObjectLayer) {
bgSeq := newBgHealSequence()
// Run the background healer
for i := 0; i < globalBackgroundHealRoutine.workers; i++ {
for range globalBackgroundHealRoutine.workers {
go globalBackgroundHealRoutine.AddWorker(ctx, objAPI, bgSeq)
}

View File

@ -24,6 +24,7 @@ import (
"fmt"
"io"
"os"
"slices"
"sort"
"strings"
"sync"
@ -269,12 +270,7 @@ func (h *healingTracker) delete(ctx context.Context) error {
func (h *healingTracker) isHealed(bucket string) bool {
h.mu.RLock()
defer h.mu.RUnlock()
for _, v := range h.HealedBuckets {
if v == bucket {
return true
}
}
return false
return slices.Contains(h.HealedBuckets, bucket)
}
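slices.Contains (Go 1.21+) replaces the hand-rolled membership loop; a minimal illustration with hypothetical bucket names:

package main

import (
    "fmt"
    "slices"
)

func main() {
    healedBuckets := []string{"photos", "logs"}
    fmt.Println(slices.Contains(healedBuckets, "logs"))    // true
    fmt.Println(slices.Contains(healedBuckets, "backups")) // false
}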
// resume will reset progress to the numbers at the start of the bucket.

View File

@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import (
"github.com/tinylib/msgp/msgp"
)

View File

@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import (
"bytes"
"testing"

View File

@ -424,12 +424,12 @@ func batchObjsForDelete(ctx context.Context, r *BatchJobExpire, ri *batchJobInfo
go func(toExpire []expireObjInfo) {
defer wk.Give()
toExpireAll := make([]ObjectInfo, 0, len(toExpire))
toExpireAll := make([]expireObjInfo, 0, len(toExpire))
toDel := make([]ObjectToDelete, 0, len(toExpire))
oiCache := newObjInfoCache()
for _, exp := range toExpire {
if exp.ExpireAll {
toExpireAll = append(toExpireAll, exp.ObjectInfo)
toExpireAll = append(toExpireAll, exp)
continue
}
// Cache ObjectInfo value via pointers for
@ -527,7 +527,8 @@ func batchObjsForDelete(ctx context.Context, r *BatchJobExpire, ri *batchJobInfo
type expireObjInfo struct {
ObjectInfo
ExpireAll bool
ExpireAll bool
DeleteMarkerCount int64
}
// Start the batch expiration job, resumes if there was a pending job via "job.ID"
@ -624,80 +625,115 @@ func (r *BatchJobExpire) Start(ctx context.Context, api ObjectLayer, job BatchJo
matchedFilter BatchJobExpireFilter
versionsCount int
toDel []expireObjInfo
failed bool
done bool
)
failed := false
for result := range results {
if result.Err != nil {
failed = true
batchLogIf(ctx, result.Err)
continue
deleteMarkerCountMap := map[string]int64{}
pushToExpire := func() {
// set the previous object's DeleteMarkerCount before it is flushed
if len(toDel) > 0 {
lastDelIndex := len(toDel) - 1
lastDel := toDel[lastDelIndex]
if lastDel.ExpireAll {
toDel[lastDelIndex].DeleteMarkerCount = deleteMarkerCountMap[lastDel.Name]
// delete the key
delete(deleteMarkerCountMap, lastDel.Name)
}
}
// Apply filter to find the matching rule to apply expiry
// actions accordingly.
// nolint:gocritic
if result.Item.IsLatest {
// send down filtered entries to be deleted using
// DeleteObjects method
if len(toDel) > 10 { // batch up to 10 objects/versions to be expired simultaneously.
xfer := make([]expireObjInfo, len(toDel))
copy(xfer, toDel)
var done bool
select {
case <-ctx.Done():
done = true
case expireCh <- xfer:
toDel = toDel[:0] // resetting toDel
}
if done {
break
}
// send down filtered entries to be deleted using
// DeleteObjects method
if len(toDel) > 10 { // batch up to 10 objects/versions to be expired simultaneously.
xfer := make([]expireObjInfo, len(toDel))
copy(xfer, toDel)
select {
case expireCh <- xfer:
toDel = toDel[:0] // resetting toDel
case <-ctx.Done():
done = true
}
var match BatchJobExpireFilter
var found bool
for _, rule := range r.Rules {
if rule.Matches(result.Item, now) {
match = rule
found = true
break
}
}
if !found {
continue
}
prevObj = result.Item
matchedFilter = match
versionsCount = 1
// Include the latest version
if matchedFilter.Purge.RetainVersions == 0 {
toDel = append(toDel, expireObjInfo{
ObjectInfo: result.Item,
ExpireAll: true,
})
continue
}
} else if prevObj.Name == result.Item.Name {
if matchedFilter.Purge.RetainVersions == 0 {
continue // including latest version in toDel suffices, skipping other versions
}
versionsCount++
} else {
continue
}
if versionsCount <= matchedFilter.Purge.RetainVersions {
continue // retain versions
}
toDel = append(toDel, expireObjInfo{
ObjectInfo: result.Item,
})
}
for {
select {
case result, ok := <-results:
if !ok {
done = true
break
}
if result.Err != nil {
failed = true
batchLogIf(ctx, result.Err)
continue
}
if result.Item.DeleteMarker {
deleteMarkerCountMap[result.Item.Name]++
}
// Apply filter to find the matching rule to apply expiry
// actions accordingly.
// nolint:gocritic
if result.Item.IsLatest {
var match BatchJobExpireFilter
var found bool
for _, rule := range r.Rules {
if rule.Matches(result.Item, now) {
match = rule
found = true
break
}
}
if !found {
continue
}
if prevObj.Name != result.Item.Name {
// the object changed; flush pending expirations for the previous one
pushToExpire()
}
prevObj = result.Item
matchedFilter = match
versionsCount = 1
// Include the latest version
if matchedFilter.Purge.RetainVersions == 0 {
toDel = append(toDel, expireObjInfo{
ObjectInfo: result.Item,
ExpireAll: true,
})
continue
}
} else if prevObj.Name == result.Item.Name {
if matchedFilter.Purge.RetainVersions == 0 {
continue // including latest version in toDel suffices, skipping other versions
}
versionsCount++
} else {
// the object changed; flush pending expirations for the previous one
pushToExpire()
// the object changed with no latest version; log it and skip
batchLogIf(ctx, fmt.Errorf("skipping object %s, no latest version found", result.Item.Name))
continue
}
if versionsCount <= matchedFilter.Purge.RetainVersions {
continue // retain versions
}
toDel = append(toDel, expireObjInfo{
ObjectInfo: result.Item,
})
pushToExpire()
case <-ctx.Done():
done = true
}
if done {
break
}
}
if context.Cause(ctx) != nil {
xioutil.SafeClose(expireCh)
return context.Cause(ctx)
}
pushToExpire()
// Send any remaining objects downstream
if len(toDel) > 0 {
select {

View File

@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import (
"time"

View File

@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import (
"bytes"
"testing"

View File

@ -25,6 +25,7 @@ import (
"errors"
"fmt"
"io"
"maps"
"math/rand"
"net/http"
"net/url"
@ -39,7 +40,6 @@ import (
"github.com/lithammer/shortuuid/v4"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio-go/v7"
miniogo "github.com/minio/minio-go/v7"
"github.com/minio/minio-go/v7/pkg/credentials"
"github.com/minio/minio-go/v7/pkg/encrypt"
"github.com/minio/minio-go/v7/pkg/tags"
@ -47,7 +47,6 @@ import (
"github.com/minio/minio/internal/crypto"
"github.com/minio/minio/internal/hash"
xhttp "github.com/minio/minio/internal/http"
"github.com/minio/minio/internal/ioutil"
xioutil "github.com/minio/minio/internal/ioutil"
"github.com/minio/pkg/v3/console"
"github.com/minio/pkg/v3/env"
@ -142,7 +141,7 @@ func (r BatchJobReplicateV1) Notify(ctx context.Context, ri *batchJobInfo) error
}
// ReplicateFromSource - this is not implemented yet where source is 'remote' and target is local.
func (r *BatchJobReplicateV1) ReplicateFromSource(ctx context.Context, api ObjectLayer, core *miniogo.Core, srcObjInfo ObjectInfo, retry bool) error {
func (r *BatchJobReplicateV1) ReplicateFromSource(ctx context.Context, api ObjectLayer, core *minio.Core, srcObjInfo ObjectInfo, retry bool) error {
srcBucket := r.Source.Bucket
tgtBucket := r.Target.Bucket
srcObject := srcObjInfo.Name
@ -189,7 +188,7 @@ func (r *BatchJobReplicateV1) ReplicateFromSource(ctx context.Context, api Objec
}
return r.copyWithMultipartfromSource(ctx, api, core, srcObjInfo, opts, partsCount)
}
gopts := miniogo.GetObjectOptions{
gopts := minio.GetObjectOptions{
VersionID: srcObjInfo.VersionID,
}
if err := gopts.SetMatchETag(srcObjInfo.ETag); err != nil {
@ -210,7 +209,7 @@ func (r *BatchJobReplicateV1) ReplicateFromSource(ctx context.Context, api Objec
return err
}
func (r *BatchJobReplicateV1) copyWithMultipartfromSource(ctx context.Context, api ObjectLayer, c *miniogo.Core, srcObjInfo ObjectInfo, opts ObjectOptions, partsCount int) (err error) {
func (r *BatchJobReplicateV1) copyWithMultipartfromSource(ctx context.Context, api ObjectLayer, c *minio.Core, srcObjInfo ObjectInfo, opts ObjectOptions, partsCount int) (err error) {
srcBucket := r.Source.Bucket
tgtBucket := r.Target.Bucket
srcObject := srcObjInfo.Name
@ -250,8 +249,8 @@ func (r *BatchJobReplicateV1) copyWithMultipartfromSource(ctx context.Context, a
pInfo PartInfo
)
for i := 0; i < partsCount; i++ {
gopts := miniogo.GetObjectOptions{
for i := range partsCount {
gopts := minio.GetObjectOptions{
VersionID: srcObjInfo.VersionID,
PartNumber: i + 1,
}
@ -382,7 +381,7 @@ func (r *BatchJobReplicateV1) StartFromSource(ctx context.Context, api ObjectLay
cred := r.Source.Creds
c, err := miniogo.New(u.Host, &miniogo.Options{
c, err := minio.New(u.Host, &minio.Options{
Creds: credentials.NewStaticV4(cred.AccessKey, cred.SecretKey, cred.SessionToken),
Secure: u.Scheme == "https",
Transport: getRemoteInstanceTransport(),
@ -393,7 +392,7 @@ func (r *BatchJobReplicateV1) StartFromSource(ctx context.Context, api ObjectLay
}
c.SetAppInfo("minio-"+batchJobPrefix, r.APIVersion+" "+job.ID)
core := &miniogo.Core{Client: c}
core := &minio.Core{Client: c}
workerSize, err := strconv.Atoi(env.Get("_MINIO_BATCH_REPLICATION_WORKERS", strconv.Itoa(runtime.GOMAXPROCS(0)/2)))
if err != nil {
@ -414,14 +413,14 @@ func (r *BatchJobReplicateV1) StartFromSource(ctx context.Context, api ObjectLay
minioSrc := r.Source.Type == BatchJobReplicateResourceMinIO
ctx, cancel := context.WithCancel(ctx)
objInfoCh := make(chan miniogo.ObjectInfo, 1)
objInfoCh := make(chan minio.ObjectInfo, 1)
go func() {
prefixes := r.Source.Prefix.F()
if len(prefixes) == 0 {
prefixes = []string{""}
}
for _, prefix := range prefixes {
prefixObjInfoCh := c.ListObjects(ctx, r.Source.Bucket, miniogo.ListObjectsOptions{
prefixObjInfoCh := c.ListObjects(ctx, r.Source.Bucket, minio.ListObjectsOptions{
Prefix: prefix,
WithVersions: minioSrc,
Recursive: true,
@ -444,7 +443,7 @@ func (r *BatchJobReplicateV1) StartFromSource(ctx context.Context, api ObjectLay
// all user metadata or just storageClass. If it's only storageClass,
// List() already returns the relevant information for the filter to be applied.
if isMetadata && !isStorageClassOnly {
oi2, err := c.StatObject(ctx, r.Source.Bucket, obj.Key, miniogo.StatObjectOptions{})
oi2, err := c.StatObject(ctx, r.Source.Bucket, obj.Key, minio.StatObjectOptions{})
if err == nil {
oi = toObjectInfo(r.Source.Bucket, obj.Key, oi2)
} else {
@ -540,7 +539,7 @@ func (r *BatchJobReplicateV1) StartFromSource(ctx context.Context, api ObjectLay
}
// toObjectInfo converts minio.ObjectInfo to ObjectInfo
func toObjectInfo(bucket, object string, objInfo miniogo.ObjectInfo) ObjectInfo {
func toObjectInfo(bucket, object string, objInfo minio.ObjectInfo) ObjectInfo {
tags, _ := tags.MapToObjectTags(objInfo.UserTags)
oi := ObjectInfo{
Bucket: bucket,
@ -576,9 +575,7 @@ func toObjectInfo(bucket, object string, objInfo miniogo.ObjectInfo) ObjectInfo
oi.UserDefined[xhttp.AmzStorageClass] = objInfo.StorageClass
}
for k, v := range objInfo.UserMetadata {
oi.UserDefined[k] = v
}
maps.Copy(oi.UserDefined, objInfo.UserMetadata)
return oi
}
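maps.Copy (Go 1.21+) replaces the manual copy loop here; note that keys already present in the destination are overwritten. A minimal illustration with hypothetical metadata; later hunks in the key-rotation and lifecycle files apply the same replacement.

package main

import (
    "fmt"
    "maps"
)

func main() {
    userDefined := map[string]string{"x-amz-storage-class": "STANDARD"}
    userMetadata := map[string]string{"x-amz-meta-app": "demo"}

    // Equivalent to: for k, v := range userMetadata { userDefined[k] = v }
    maps.Copy(userDefined, userMetadata)
    fmt.Println(userDefined) // both keys present
}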
@ -643,7 +640,7 @@ func (r BatchJobReplicateV1) writeAsArchive(ctx context.Context, objAPI ObjectLa
}
// ReplicateToTarget read from source and replicate to configured target
func (r *BatchJobReplicateV1) ReplicateToTarget(ctx context.Context, api ObjectLayer, c *miniogo.Core, srcObjInfo ObjectInfo, retry bool) error {
func (r *BatchJobReplicateV1) ReplicateToTarget(ctx context.Context, api ObjectLayer, c *minio.Core, srcObjInfo ObjectInfo, retry bool) error {
srcBucket := r.Source.Bucket
tgtBucket := r.Target.Bucket
tgtPrefix := r.Target.Prefix
@ -652,9 +649,9 @@ func (r *BatchJobReplicateV1) ReplicateToTarget(ctx context.Context, api ObjectL
if srcObjInfo.DeleteMarker || !srcObjInfo.VersionPurgeStatus.Empty() {
if retry && !s3Type {
if _, err := c.StatObject(ctx, tgtBucket, pathJoin(tgtPrefix, srcObject), miniogo.StatObjectOptions{
if _, err := c.StatObject(ctx, tgtBucket, pathJoin(tgtPrefix, srcObject), minio.StatObjectOptions{
VersionID: srcObjInfo.VersionID,
Internal: miniogo.AdvancedGetOptions{
Internal: minio.AdvancedGetOptions{
ReplicationProxyRequest: "false",
},
}); isErrMethodNotAllowed(ErrorRespToObjectError(err, tgtBucket, pathJoin(tgtPrefix, srcObject))) {
@ -671,19 +668,19 @@ func (r *BatchJobReplicateV1) ReplicateToTarget(ctx context.Context, api ObjectL
dmVersionID = ""
versionID = ""
}
return c.RemoveObject(ctx, tgtBucket, pathJoin(tgtPrefix, srcObject), miniogo.RemoveObjectOptions{
return c.RemoveObject(ctx, tgtBucket, pathJoin(tgtPrefix, srcObject), minio.RemoveObjectOptions{
VersionID: versionID,
Internal: miniogo.AdvancedRemoveOptions{
Internal: minio.AdvancedRemoveOptions{
ReplicationDeleteMarker: dmVersionID != "",
ReplicationMTime: srcObjInfo.ModTime,
ReplicationStatus: miniogo.ReplicationStatusReplica,
ReplicationStatus: minio.ReplicationStatusReplica,
ReplicationRequest: true, // always set this to distinguish between `mc mirror` replication and server-side replication
},
})
}
if retry && !s3Type { // when we are retrying avoid copying if necessary.
gopts := miniogo.GetObjectOptions{}
gopts := minio.GetObjectOptions{}
if err := gopts.SetMatchETag(srcObjInfo.ETag); err != nil {
return err
}
@ -717,7 +714,7 @@ func (r *BatchJobReplicateV1) ReplicateToTarget(ctx context.Context, api ObjectL
return err
}
if r.Target.Type == BatchJobReplicateResourceS3 || r.Source.Type == BatchJobReplicateResourceS3 {
putOpts.Internal = miniogo.AdvancedPutOptions{}
putOpts.Internal = minio.AdvancedPutOptions{}
}
if isMP {
if err := replicateObjectWithMultipart(ctx, c, tgtBucket, pathJoin(tgtPrefix, objInfo.Name), rd, objInfo, putOpts); err != nil {
@ -883,21 +880,23 @@ func (ri *batchJobInfo) clone() *batchJobInfo {
defer ri.mu.RUnlock()
return &batchJobInfo{
Version: ri.Version,
JobID: ri.JobID,
JobType: ri.JobType,
RetryAttempts: ri.RetryAttempts,
Complete: ri.Complete,
Failed: ri.Failed,
StartTime: ri.StartTime,
LastUpdate: ri.LastUpdate,
Bucket: ri.Bucket,
Object: ri.Object,
Objects: ri.Objects,
ObjectsFailed: ri.ObjectsFailed,
BytesTransferred: ri.BytesTransferred,
BytesFailed: ri.BytesFailed,
Attempts: ri.Attempts,
Version: ri.Version,
JobID: ri.JobID,
JobType: ri.JobType,
RetryAttempts: ri.RetryAttempts,
Complete: ri.Complete,
Failed: ri.Failed,
StartTime: ri.StartTime,
LastUpdate: ri.LastUpdate,
Bucket: ri.Bucket,
Object: ri.Object,
Objects: ri.Objects,
ObjectsFailed: ri.ObjectsFailed,
DeleteMarkers: ri.DeleteMarkers,
DeleteMarkersFailed: ri.DeleteMarkersFailed,
BytesTransferred: ri.BytesTransferred,
BytesFailed: ri.BytesFailed,
Attempts: ri.Attempts,
}
}
@ -996,11 +995,22 @@ func (ri *batchJobInfo) updateAfter(ctx context.Context, api ObjectLayer, durati
// Note: to be used only with batch jobs that affect multiple versions through
// a single action, e.g. batch-expire has an option to expire all versions of an
// object which matches the given filters.
func (ri *batchJobInfo) trackMultipleObjectVersions(info ObjectInfo, success bool) {
func (ri *batchJobInfo) trackMultipleObjectVersions(info expireObjInfo, success bool) {
if ri == nil {
return
}
ri.mu.Lock()
defer ri.mu.Unlock()
if success {
ri.Objects += int64(info.NumVersions)
ri.Bucket = info.Bucket
ri.Object = info.Name
ri.Objects += int64(info.NumVersions) - info.DeleteMarkerCount
ri.DeleteMarkers += info.DeleteMarkerCount
} else {
ri.ObjectsFailed += int64(info.NumVersions)
ri.ObjectsFailed += int64(info.NumVersions) - info.DeleteMarkerCount
ri.DeleteMarkersFailed += info.DeleteMarkerCount
}
}
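A toy check of the new accounting split, with hypothetical counts: versions that are delete markers are now reported separately instead of being folded into the object total.

package main

import "fmt"

func main() {
    // Hypothetical expired object: 5 versions total, 2 of them delete markers.
    numVersions, deleteMarkerCount := int64(5), int64(2)

    var objects, deleteMarkers int64
    objects += numVersions - deleteMarkerCount
    deleteMarkers += deleteMarkerCount

    fmt.Println(objects, deleteMarkers) // 3 2
}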
@ -1124,7 +1134,8 @@ func (r *BatchJobReplicateV1) Start(ctx context.Context, api ObjectLayer, job Ba
}
// if either the source or the target is non-MinIO, just replicate the topmost version like `mc mirror`
return !((r.Target.Type == BatchJobReplicateResourceS3 || r.Source.Type == BatchJobReplicateResourceS3) && !info.IsLatest)
isSourceOrTargetS3 := r.Target.Type == BatchJobReplicateResourceS3 || r.Source.Type == BatchJobReplicateResourceS3
return !isSourceOrTargetS3 || info.IsLatest
}
u, err := url.Parse(r.Target.Endpoint)
@ -1134,7 +1145,7 @@ func (r *BatchJobReplicateV1) Start(ctx context.Context, api ObjectLayer, job Ba
cred := r.Target.Creds
c, err := miniogo.NewCore(u.Host, &miniogo.Options{
c, err := minio.NewCore(u.Host, &minio.Options{
Creds: credentials.NewStaticV4(cred.AccessKey, cred.SecretKey, cred.SessionToken),
Secure: u.Scheme == "https",
Transport: getRemoteInstanceTransport(),
@ -1157,14 +1168,14 @@ func (r *BatchJobReplicateV1) Start(ctx context.Context, api ObjectLayer, job Ba
if r.Source.Snowball.Disable != nil && !*r.Source.Snowball.Disable && r.Source.Type.isMinio() && r.Target.Type.isMinio() {
go func() {
// Snowball currently needs the high-level minio-go Client, not the Core one
cl, err := miniogo.New(u.Host, &miniogo.Options{
cl, err := minio.New(u.Host, &minio.Options{
Creds: credentials.NewStaticV4(cred.AccessKey, cred.SecretKey, cred.SessionToken),
Secure: u.Scheme == "https",
Transport: getRemoteInstanceTransport(),
BucketLookup: lookupStyle(r.Target.Path),
})
if err != nil {
batchLogOnceIf(ctx, err, job.ID+"miniogo.New")
batchLogOnceIf(ctx, err, job.ID+"minio.New")
return
}
@ -1274,7 +1285,7 @@ func (r *BatchJobReplicateV1) Start(ctx context.Context, api ObjectLayer, job Ba
stopFn := globalBatchJobsMetrics.trace(batchJobMetricReplication, job.ID, attempts)
success := true
if err := r.ReplicateToTarget(ctx, api, c, result, retry); err != nil {
if miniogo.ToErrorResponse(err).Code == "PreconditionFailed" {
if minio.ToErrorResponse(err).Code == "PreconditionFailed" {
// pre-condition failed means we already have the object copied over.
return
}
@ -1457,7 +1468,7 @@ func (r *BatchJobReplicateV1) Validate(ctx context.Context, job BatchJobRequest,
return err
}
c, err := miniogo.NewCore(u.Host, &miniogo.Options{
c, err := minio.NewCore(u.Host, &minio.Options{
Creds: credentials.NewStaticV4(cred.AccessKey, cred.SecretKey, cred.SessionToken),
Secure: u.Scheme == "https",
Transport: getRemoteInstanceTransport(),
@ -1470,7 +1481,7 @@ func (r *BatchJobReplicateV1) Validate(ctx context.Context, job BatchJobRequest,
vcfg, err := c.GetBucketVersioning(ctx, remoteBkt)
if err != nil {
if miniogo.ToErrorResponse(err).Code == "NoSuchBucket" {
if minio.ToErrorResponse(err).Code == "NoSuchBucket" {
return batchReplicationJobError{
Code: "NoSuchTargetBucket",
Description: "The specified target bucket does not exist",
@ -1575,13 +1586,13 @@ func (j *BatchJobRequest) load(ctx context.Context, api ObjectLayer, name string
return err
}
func batchReplicationOpts(ctx context.Context, sc string, objInfo ObjectInfo) (putOpts miniogo.PutObjectOptions, isMP bool, err error) {
func batchReplicationOpts(ctx context.Context, sc string, objInfo ObjectInfo) (putOpts minio.PutObjectOptions, isMP bool, err error) {
// TODO: support custom storage class for remote replication
putOpts, isMP, err = putReplicationOpts(ctx, "", objInfo)
if err != nil {
return putOpts, isMP, err
}
putOpts.Internal = miniogo.AdvancedPutOptions{
putOpts.Internal = minio.AdvancedPutOptions{
SourceVersionID: objInfo.VersionID,
SourceMTime: objInfo.ModTime,
SourceETag: objInfo.ETag,
@ -1740,7 +1751,7 @@ func (a adminAPIHandlers) StartBatchJob(w http.ResponseWriter, r *http.Request)
return
}
buf, err := io.ReadAll(ioutil.HardLimitReader(r.Body, humanize.MiByte*4))
buf, err := io.ReadAll(xioutil.HardLimitReader(r.Body, humanize.MiByte*4))
if err != nil {
writeErrorResponseJSON(ctx, w, toAPIError(ctx, err), r.URL)
return
@ -2135,12 +2146,14 @@ func (ri *batchJobInfo) metric() madmin.JobMetric {
switch ri.JobType {
case string(madmin.BatchJobReplicate):
m.Replicate = &madmin.ReplicateInfo{
Bucket: ri.Bucket,
Object: ri.Object,
Objects: ri.Objects,
ObjectsFailed: ri.ObjectsFailed,
BytesTransferred: ri.BytesTransferred,
BytesFailed: ri.BytesFailed,
Bucket: ri.Bucket,
Object: ri.Object,
Objects: ri.Objects,
DeleteMarkers: ri.DeleteMarkers,
ObjectsFailed: ri.ObjectsFailed,
DeleteMarkersFailed: ri.DeleteMarkersFailed,
BytesTransferred: ri.BytesTransferred,
BytesFailed: ri.BytesFailed,
}
case string(madmin.BatchJobKeyRotate):
m.KeyRotate = &madmin.KeyRotationInfo{
@ -2151,10 +2164,12 @@ func (ri *batchJobInfo) metric() madmin.JobMetric {
}
case string(madmin.BatchJobExpire):
m.Expired = &madmin.ExpirationInfo{
Bucket: ri.Bucket,
Object: ri.Object,
Objects: ri.Objects,
ObjectsFailed: ri.ObjectsFailed,
Bucket: ri.Bucket,
Object: ri.Object,
Objects: ri.Objects,
DeleteMarkers: ri.DeleteMarkers,
ObjectsFailed: ri.ObjectsFailed,
DeleteMarkersFailed: ri.DeleteMarkersFailed,
}
}
@ -2300,15 +2315,15 @@ func (m *batchJobMetrics) trace(d batchJobMetric, job string, attempts int) func
}
}
func lookupStyle(s string) miniogo.BucketLookupType {
var lookup miniogo.BucketLookupType
func lookupStyle(s string) minio.BucketLookupType {
var lookup minio.BucketLookupType
switch s {
case "on":
lookup = miniogo.BucketLookupPath
lookup = minio.BucketLookupPath
case "off":
lookup = miniogo.BucketLookupDNS
lookup = minio.BucketLookupDNS
default:
lookup = miniogo.BucketLookupAuto
lookup = minio.BucketLookupAuto
}
return lookup
}

View File

@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import (
"github.com/tinylib/msgp/msgp"
)

View File

@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import (
"bytes"
"testing"

View File

@ -275,7 +275,7 @@ func (sf BatchJobSizeFilter) Validate() error {
type BatchJobSize int64
// UnmarshalYAML to parse humanized byte values
func (s *BatchJobSize) UnmarshalYAML(unmarshal func(interface{}) error) error {
func (s *BatchJobSize) UnmarshalYAML(unmarshal func(any) error) error {
var batchExpireSz string
err := unmarshal(&batchExpireSz)
if err != nil {

View File

@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import (
"github.com/tinylib/msgp/msgp"
)

View File

@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import (
"bytes"
"testing"

View File

@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import (
"github.com/tinylib/msgp/msgp"
)

View File

@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import (
"bytes"
"testing"

View File

@ -21,6 +21,7 @@ import (
"context"
"encoding/base64"
"fmt"
"maps"
"math/rand"
"net/http"
"runtime"
@ -110,9 +111,7 @@ func (e BatchJobKeyRotateEncryption) Validate() error {
}
}
e.kmsContext = kms.Context{}
for k, v := range ctx {
e.kmsContext[k] = v
}
maps.Copy(e.kmsContext, ctx)
ctx["MinIO batch API"] = "batchrotate" // Context for a test key operation
if _, err := GlobalKMS.GenerateKey(GlobalContext, &kms.GenerateKeyRequest{Name: e.Key, AssociatedData: ctx}); err != nil {
return err
@ -225,9 +224,7 @@ func (r *BatchJobKeyRotateV1) KeyRotate(ctx context.Context, api ObjectLayer, ob
// Since we are rotating the keys, make sure to update the metadata.
oi.metadataOnly = true
oi.keyRotation = true
for k, v := range encMetadata {
oi.UserDefined[k] = v
}
maps.Copy(oi.UserDefined, encMetadata)
if _, err := api.CopyObject(ctx, r.Bucket, oi.Name, r.Bucket, oi.Name, oi, ObjectOptions{
VersionID: oi.VersionID,
}, ObjectOptions{

View File

@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import (
"github.com/tinylib/msgp/msgp"
)

View File

@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import (
"bytes"
"testing"

View File

@ -35,7 +35,7 @@ func runPutObjectBenchmark(b *testing.B, obj ObjectLayer, objSize int) {
// obtains random bucket name.
bucket := getRandomBucketName()
// create bucket.
err = obj.MakeBucket(context.Background(), bucket, MakeBucketOptions{})
err = obj.MakeBucket(b.Context(), bucket, MakeBucketOptions{})
if err != nil {
b.Fatal(err)
}
@ -51,10 +51,10 @@ func runPutObjectBenchmark(b *testing.B, obj ObjectLayer, objSize int) {
// benchmark utility which helps obtain the number of allocations and bytes allocated per op.
b.ReportAllocs()
// the actual benchmark for PutObject starts here. Reset the benchmark timer.
b.ResetTimer()
for i := 0; i < b.N; i++ {
for i := 0; b.Loop(); i++ {
// insert the object.
objInfo, err := obj.PutObject(context.Background(), bucket, "object"+strconv.Itoa(i),
objInfo, err := obj.PutObject(b.Context(), bucket, "object"+strconv.Itoa(i),
mustGetPutObjReader(b, bytes.NewReader(textData), int64(len(textData)), md5hex, sha256hex), ObjectOptions{})
if err != nil {
b.Fatal(err)
@ -76,7 +76,7 @@ func runPutObjectPartBenchmark(b *testing.B, obj ObjectLayer, partSize int) {
object := getRandomObjectName()
// create bucket.
err = obj.MakeBucket(context.Background(), bucket, MakeBucketOptions{})
err = obj.MakeBucket(b.Context(), bucket, MakeBucketOptions{})
if err != nil {
b.Fatal(err)
}
@ -90,7 +90,7 @@ func runPutObjectPartBenchmark(b *testing.B, obj ObjectLayer, partSize int) {
textData := generateBytesData(objSize)
// generate md5sum for the generated data.
// md5sum of the data to be written is required as input for NewMultipartUpload.
res, err := obj.NewMultipartUpload(context.Background(), bucket, object, ObjectOptions{})
res, err := obj.NewMultipartUpload(b.Context(), bucket, object, ObjectOptions{})
if err != nil {
b.Fatal(err)
}
@ -101,11 +101,11 @@ func runPutObjectPartBenchmark(b *testing.B, obj ObjectLayer, partSize int) {
// benchmark utility which helps obtain the number of allocations and bytes allocated per op.
b.ReportAllocs()
// the actual benchmark for PutObjectPart starts here. Reset the benchmark timer.
b.ResetTimer()
for i := 0; i < b.N; i++ {
for i := 0; b.Loop(); i++ {
// insert the object.
totalPartsNR := int(math.Ceil(float64(objSize) / float64(partSize)))
for j := 0; j < totalPartsNR; j++ {
for j := range totalPartsNR {
if j < totalPartsNR-1 {
textPartData = textData[j*partSize : (j+1)*partSize-1]
} else {
@ -113,7 +113,7 @@ func runPutObjectPartBenchmark(b *testing.B, obj ObjectLayer, partSize int) {
}
md5hex := getMD5Hash(textPartData)
var partInfo PartInfo
partInfo, err = obj.PutObjectPart(context.Background(), bucket, object, res.UploadID, j,
partInfo, err = obj.PutObjectPart(b.Context(), bucket, object, res.UploadID, j,
mustGetPutObjReader(b, bytes.NewReader(textPartData), int64(len(textPartData)), md5hex, sha256hex), ObjectOptions{})
if err != nil {
b.Fatal(err)
@ -130,7 +130,7 @@ func runPutObjectPartBenchmark(b *testing.B, obj ObjectLayer, partSize int) {
// creates Erasure/FS backend setup, obtains the object layer and calls the runPutObjectPartBenchmark function.
func benchmarkPutObjectPart(b *testing.B, instanceType string, objSize int) {
// create a temp Erasure/FS backend.
ctx, cancel := context.WithCancel(context.Background())
ctx, cancel := context.WithCancel(b.Context())
defer cancel()
objLayer, disks, err := prepareTestBackend(ctx, instanceType)
if err != nil {
@ -146,7 +146,7 @@ func benchmarkPutObjectPart(b *testing.B, instanceType string, objSize int) {
// creates Erasure/FS backend setup, obtains the object layer and calls the runPutObjectBenchmark function.
func benchmarkPutObject(b *testing.B, instanceType string, objSize int) {
// create a temp Erasure/FS backend.
ctx, cancel := context.WithCancel(context.Background())
ctx, cancel := context.WithCancel(b.Context())
defer cancel()
objLayer, disks, err := prepareTestBackend(ctx, instanceType)
if err != nil {
@ -162,7 +162,7 @@ func benchmarkPutObject(b *testing.B, instanceType string, objSize int) {
// creates Erasure/FS backend setup, obtains the object layer and runs parallel benchmark for put object.
func benchmarkPutObjectParallel(b *testing.B, instanceType string, objSize int) {
// create a temp Erasure/FS backend.
ctx, cancel := context.WithCancel(context.Background())
ctx, cancel := context.WithCancel(b.Context())
defer cancel()
objLayer, disks, err := prepareTestBackend(ctx, instanceType)
if err != nil {
@ -196,7 +196,7 @@ func runPutObjectBenchmarkParallel(b *testing.B, obj ObjectLayer, objSize int) {
// obtains random bucket name.
bucket := getRandomBucketName()
// create bucket.
err := obj.MakeBucket(context.Background(), bucket, MakeBucketOptions{})
err := obj.MakeBucket(b.Context(), bucket, MakeBucketOptions{})
if err != nil {
b.Fatal(err)
}
@ -218,7 +218,7 @@ func runPutObjectBenchmarkParallel(b *testing.B, obj ObjectLayer, objSize int) {
i := 0
for pb.Next() {
// insert the object.
objInfo, err := obj.PutObject(context.Background(), bucket, "object"+strconv.Itoa(i),
objInfo, err := obj.PutObject(b.Context(), bucket, "object"+strconv.Itoa(i),
mustGetPutObjReader(b, bytes.NewReader(textData), int64(len(textData)), md5hex, sha256hex), ObjectOptions{})
if err != nil {
b.Fatal(err)
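These benchmark rewrites lean on two Go 1.24 additions: b.Context()/t.Context(), which return a context canceled when the test ends, and b.Loop(), which replaces the b.N loop and makes the explicit b.ResetTimer() call unnecessary. A minimal, hypothetical sketch:

package cmd

import (
    "strconv"
    "testing"
)

// BenchmarkDemo is a hypothetical benchmark in the Go 1.24 style; setup
// before the loop is automatically excluded from timing.
func BenchmarkDemo(b *testing.B) {
    for i := 0; b.Loop(); i++ {
        _ = "object" + strconv.Itoa(i)
    }
}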

View File

@ -99,7 +99,7 @@ func BitrotAlgorithmFromString(s string) (a BitrotAlgorithm) {
return alg
}
}
return
return a
}
func newBitrotWriter(disk StorageAPI, origvolume, volume, filePath string, length int64, algo BitrotAlgorithm, shardSize int64) io.Writer {

View File

@ -18,7 +18,6 @@
package cmd
import (
"context"
"io"
"testing"
)
@ -34,7 +33,7 @@ func testBitrotReaderWriterAlgo(t *testing.T, bitrotAlgo BitrotAlgorithm) {
t.Fatal(err)
}
disk.MakeVol(context.Background(), volume)
disk.MakeVol(t.Context(), volume)
writer := newBitrotWriter(disk, "", volume, filePath, 35, bitrotAlgo, 10)

View File

@ -48,9 +48,7 @@ func (bs *bootstrapTracer) Events() []madmin.TraceInfo {
traceInfo := make([]madmin.TraceInfo, 0, bootstrapTraceLimit)
bs.mu.RLock()
for _, i := range bs.info {
traceInfo = append(traceInfo, i)
}
traceInfo = append(traceInfo, bs.info...)
bs.mu.RUnlock()
return traceInfo

View File

@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import (
"github.com/tinylib/msgp/msgp"
)
@ -59,19 +59,17 @@ func (z *ServerSystemConfig) DecodeMsg(dc *msgp.Reader) (err error) {
if z.MinioEnv == nil {
z.MinioEnv = make(map[string]string, zb0003)
} else if len(z.MinioEnv) > 0 {
for key := range z.MinioEnv {
delete(z.MinioEnv, key)
}
clear(z.MinioEnv)
}
for zb0003 > 0 {
zb0003--
var za0002 string
var za0003 string
za0002, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "MinioEnv")
return
}
var za0003 string
za0003, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "MinioEnv", za0002)
@ -240,14 +238,12 @@ func (z *ServerSystemConfig) UnmarshalMsg(bts []byte) (o []byte, err error) {
if z.MinioEnv == nil {
z.MinioEnv = make(map[string]string, zb0003)
} else if len(z.MinioEnv) > 0 {
for key := range z.MinioEnv {
delete(z.MinioEnv, key)
}
clear(z.MinioEnv)
}
for zb0003 > 0 {
var za0002 string
var za0003 string
zb0003--
var za0002 string
za0002, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, "MinioEnv")
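clear (a Go 1.21 builtin) deletes every entry from a map in place, replacing the delete-in-range loop while keeping the allocated map for reuse; a standalone sketch with hypothetical entries:

package main

import "fmt"

func main() {
    minioEnv := map[string]string{"MINIO_REGION": "us-east-1", "MINIO_BROWSER": "on"}
    clear(minioEnv) // removes all entries; the map itself stays non-nil
    fmt.Println(len(minioEnv), minioEnv == nil) // 0 false
}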

View File

@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import (
"bytes"
"testing"

View File

@ -154,7 +154,6 @@ func initFederatorBackend(buckets []string, objLayer ObjectLayer) {
g := errgroup.WithNErrs(len(bucketsToBeUpdatedSlice)).WithConcurrency(50)
for index := range bucketsToBeUpdatedSlice {
index := index
g.Go(func() error {
return globalDNSConfig.Put(bucketsToBeUpdatedSlice[index])
}, index)
@ -559,7 +558,7 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
}, goi, opts, gerr)
if dsc.ReplicateAny() {
if object.VersionID != "" {
object.VersionPurgeStatus = Pending
object.VersionPurgeStatus = replication.VersionPurgePending
object.VersionPurgeStatuses = dsc.PendingStatus()
} else {
object.DeleteMarkerReplicationStatus = dsc.PendingStatus()
@ -593,7 +592,7 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
output[idx] = obj
idx++
}
return
return output
}
// Disable timeouts and cancellation
@ -669,7 +668,7 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
continue
}
if replicateDeletes && (dobj.DeleteMarkerReplicationStatus() == replication.Pending || dobj.VersionPurgeStatus() == Pending) {
if replicateDeletes && (dobj.DeleteMarkerReplicationStatus() == replication.Pending || dobj.VersionPurgeStatus() == replication.VersionPurgePending) {
// copy so we can re-add null ID.
dobj := dobj
if isDirObject(dobj.ObjectName) && dobj.VersionID == "" {
@ -1089,6 +1088,14 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
break
}
// check if we have a file
if reader == nil {
apiErr := errorCodes.ToAPIErr(ErrMalformedPOSTRequest)
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, errors.New("The file or text content is missing"))
writeErrorResponse(ctx, w, apiErr, r.URL)
return
}
if keyName, ok := formValues["Key"]; !ok {
apiErr := errorCodes.ToAPIErr(ErrMalformedPOSTRequest)
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, errors.New("The name of the uploaded key is missing"))
@ -1379,10 +1386,7 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
// Set the correct hex md5sum for the fan-out stream.
fanOutOpts.MD5Hex = hex.EncodeToString(md5w.Sum(nil))
concurrentSize := 100
if runtime.GOMAXPROCS(0) < concurrentSize {
concurrentSize = runtime.GOMAXPROCS(0)
}
concurrentSize := min(runtime.GOMAXPROCS(0), 100)
fanOutResp := make([]minio.PutObjectFanOutResponse, 0, len(fanOutEntries))
eventArgsList := make([]eventArgs, 0, len(fanOutEntries))
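The single-line cap above uses the min builtin (Go 1.21+), which replaces the old compare-and-assign block; a standalone sketch:

package main

import (
    "fmt"
    "runtime"
)

func main() {
    // Cap fan-out concurrency at 100 without an if statement.
    concurrentSize := min(runtime.GOMAXPROCS(0), 100)
    fmt.Println(concurrentSize)
}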
@ -1653,9 +1657,11 @@ func (api objectAPIHandlers) HeadBucketHandler(w http.ResponseWriter, r *http.Re
return
}
if s3Error := checkRequestAuthType(ctx, r, policy.ListBucketAction, bucket, ""); s3Error != ErrNone {
writeErrorResponseHeadersOnly(w, errorCodes.ToAPIErr(s3Error))
return
if s3Error := checkRequestAuthType(ctx, r, policy.HeadBucketAction, bucket, ""); s3Error != ErrNone {
if s3Error := checkRequestAuthType(ctx, r, policy.ListBucketAction, bucket, ""); s3Error != ErrNone {
writeErrorResponseHeadersOnly(w, errorCodes.ToAPIErr(s3Error))
return
}
}
getBucketInfo := objectAPI.GetBucketInfo

View File

@ -657,7 +657,7 @@ func testAPIDeleteMultipleObjectsHandler(obj ObjectLayer, instanceType, bucketNa
sha256sum := ""
var objectNames []string
for i := 0; i < 10; i++ {
for i := range 10 {
contentBytes := []byte("hello")
objectName := "test-object-" + strconv.Itoa(i)
if i == 0 {
@ -687,7 +687,7 @@ func testAPIDeleteMultipleObjectsHandler(obj ObjectLayer, instanceType, bucketNa
// The following block will create a bucket policy with delete object to 'public/*'. This is
// to test a mixed response of success and failure while deleting objects in a single request
policyBytes := []byte(fmt.Sprintf(`{"Id": "Policy1637752602639", "Version": "2012-10-17", "Statement": [{"Sid": "Stmt1637752600730", "Action": "s3:DeleteObject", "Effect": "Allow", "Resource": "arn:aws:s3:::%s/public/*", "Principal": "*"}]}`, bucketName))
policyBytes := fmt.Appendf(nil, `{"Id": "Policy1637752602639", "Version": "2012-10-17", "Statement": [{"Sid": "Stmt1637752600730", "Action": "s3:DeleteObject", "Effect": "Allow", "Resource": "arn:aws:s3:::%s/public/*", "Principal": "*"}]}`, bucketName)
rec := httptest.NewRecorder()
req, err := newTestSignedRequestV4(http.MethodPut, getPutPolicyURL("", bucketName), int64(len(policyBytes)), bytes.NewReader(policyBytes),
credentials.AccessKey, credentials.SecretKey, nil)
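fmt.Appendf (Go 1.19+) formats straight into a byte slice, avoiding the []byte(fmt.Sprintf(...)) conversion the old test code used; a minimal illustration with a hypothetical policy snippet:

package main

import "fmt"

func main() {
    bucket := "mybucket"
    policy := fmt.Appendf(nil, `{"Resource": "arn:aws:s3:::%s/public/*"}`, bucket)
    fmt.Println(string(policy))
}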

View File

@ -26,7 +26,7 @@ import (
//go:generate stringer -type lcEventSrc -trimprefix lcEventSrc_ $GOFILE
type lcEventSrc uint8
//revive:disable:var-naming Underscores is used here to indicate where common prefix ends and the enumeration name begins
//nolint:staticcheck,revive // Underscores are used here to indicate where common prefix ends and the enumeration name begins
const (
lcEventSrc_None lcEventSrc = iota
lcEventSrc_Heal

View File

@ -23,6 +23,7 @@ import (
"errors"
"fmt"
"io"
"maps"
"net/http"
"strconv"
"strings"
@ -73,6 +74,10 @@ func NewLifecycleSys() *LifecycleSys {
func ilmTrace(startTime time.Time, duration time.Duration, oi ObjectInfo, event string, metadata map[string]string, err string) madmin.TraceInfo {
sz, _ := oi.GetActualSize()
if metadata == nil {
metadata = make(map[string]string)
}
metadata["version-id"] = oi.VersionID
return madmin.TraceInfo{
TraceType: madmin.TraceILM,
Time: startTime,
@ -151,8 +156,8 @@ func (f freeVersionTask) OpHash() uint64 {
return xxh3.HashString(f.TransitionedObject.Tier + f.TransitionedObject.Name)
}
func (n newerNoncurrentTask) OpHash() uint64 {
return xxh3.HashString(n.bucket + n.versions[0].ObjectV.ObjectName)
func (n noncurrentVersionsTask) OpHash() uint64 {
return xxh3.HashString(n.bucket + n.versions[0].ObjectName)
}
func (j jentry) OpHash() uint64 {
@ -236,14 +241,16 @@ func (es *expiryState) enqueueByDays(oi ObjectInfo, event lifecycle.Event, src l
}
}
// enqueueByNewerNoncurrent enqueues object versions expired by
// NewerNoncurrentVersions limit for expiry.
func (es *expiryState) enqueueByNewerNoncurrent(bucket string, versions []ObjectToDelete, lcEvent lifecycle.Event) {
func (es *expiryState) enqueueNoncurrentVersions(bucket string, versions []ObjectToDelete, events []lifecycle.Event) {
if len(versions) == 0 {
return
}
task := newerNoncurrentTask{bucket: bucket, versions: versions, event: lcEvent}
task := noncurrentVersionsTask{
bucket: bucket,
versions: versions,
events: events,
}
wrkr := es.getWorkerCh(task.OpHash())
if wrkr == nil {
es.stats.missedExpiryTasks.Add(1)
@ -343,8 +350,8 @@ func (es *expiryState) Worker(input <-chan expiryOp) {
} else {
applyExpiryOnNonTransitionedObjects(es.ctx, es.objAPI, v.objInfo, v.event, v.src)
}
case newerNoncurrentTask:
deleteObjectVersions(es.ctx, es.objAPI, v.bucket, v.versions, v.event)
case noncurrentVersionsTask:
deleteObjectVersions(es.ctx, es.objAPI, v.bucket, v.versions, v.events)
case jentry:
transitionLogIf(es.ctx, deleteObjectFromRemoteTier(es.ctx, v.ObjName, v.VersionID, v.TierName))
case freeVersionTask:
@ -392,12 +399,10 @@ func initBackgroundExpiry(ctx context.Context, objectAPI ObjectLayer) {
globalExpiryState = newExpiryState(ctx, objectAPI, globalILMConfig.getExpirationWorkers())
}
// newerNoncurrentTask encapsulates arguments required by worker to expire objects
// by NewerNoncurrentVersions
type newerNoncurrentTask struct {
type noncurrentVersionsTask struct {
bucket string
versions []ObjectToDelete
event lifecycle.Event
events []lifecycle.Event
}
type transitionTask struct {
@ -955,9 +960,7 @@ func putRestoreOpts(bucket, object string, rreq *RestoreObjectRequest, objInfo O
UserDefined: meta,
}
}
for k, v := range objInfo.UserDefined {
meta[k] = v
}
maps.Copy(meta, objInfo.UserDefined)
if len(objInfo.UserTags) != 0 {
meta[xhttp.AmzObjectTagging] = objInfo.UserTags
}
@ -1104,17 +1107,20 @@ func isRestoredObjectOnDisk(meta map[string]string) (onDisk bool) {
// ToLifecycleOpts returns lifecycle.ObjectOpts value for oi.
func (oi ObjectInfo) ToLifecycleOpts() lifecycle.ObjectOpts {
return lifecycle.ObjectOpts{
Name: oi.Name,
UserTags: oi.UserTags,
VersionID: oi.VersionID,
ModTime: oi.ModTime,
Size: oi.Size,
IsLatest: oi.IsLatest,
NumVersions: oi.NumVersions,
DeleteMarker: oi.DeleteMarker,
SuccessorModTime: oi.SuccessorModTime,
RestoreOngoing: oi.RestoreOngoing,
RestoreExpires: oi.RestoreExpires,
TransitionStatus: oi.TransitionedObject.Status,
Name: oi.Name,
UserTags: oi.UserTags,
VersionID: oi.VersionID,
ModTime: oi.ModTime,
Size: oi.Size,
IsLatest: oi.IsLatest,
NumVersions: oi.NumVersions,
DeleteMarker: oi.DeleteMarker,
SuccessorModTime: oi.SuccessorModTime,
RestoreOngoing: oi.RestoreOngoing,
RestoreExpires: oi.RestoreExpires,
TransitionStatus: oi.TransitionedObject.Status,
UserDefined: oi.UserDefined,
VersionPurgeStatus: oi.VersionPurgeStatus,
ReplicationStatus: oi.ReplicationStatus,
}
}

View File

@ -248,19 +248,19 @@ func proxyRequestByToken(ctx context.Context, w http.ResponseWriter, r *http.Req
if subToken, nodeIndex = parseRequestToken(token); nodeIndex >= 0 {
proxied, success = proxyRequestByNodeIndex(ctx, w, r, nodeIndex, returnErr)
}
return
return subToken, proxied, success
}
func proxyRequestByNodeIndex(ctx context.Context, w http.ResponseWriter, r *http.Request, index int, returnErr bool) (proxied, success bool) {
if len(globalProxyEndpoints) == 0 {
return
return proxied, success
}
if index < 0 || index >= len(globalProxyEndpoints) {
return
return proxied, success
}
ep := globalProxyEndpoints[index]
if ep.IsLocal {
return
return proxied, success
}
return true, proxyRequest(ctx, w, r, ep, returnErr)
}

View File

@ -472,7 +472,7 @@ func (sys *BucketMetadataSys) GetConfig(ctx context.Context, bucket string) (met
return meta, reloaded, nil
}
val, err, _ := sys.group.Do(bucket, func() (val interface{}, err error) {
val, err, _ := sys.group.Do(bucket, func() (val any, err error) {
meta, err = loadBucketMetadata(ctx, objAPI, bucket)
if err != nil {
if !sys.Initialized() {
@ -511,7 +511,6 @@ func (sys *BucketMetadataSys) concurrentLoad(ctx context.Context, buckets []stri
g := errgroup.WithNErrs(len(buckets))
bucketMetas := make([]BucketMetadata, len(buckets))
for index := range buckets {
index := index
g.Go(func() error {
// Sleep and stagger to avoid blocked CPU and thundering
// herd upon start up sequence.
@ -647,9 +646,7 @@ func (sys *BucketMetadataSys) init(ctx context.Context, buckets []string) {
// Reset the state of the BucketMetadataSys.
func (sys *BucketMetadataSys) Reset() {
sys.Lock()
for k := range sys.metadataMap {
delete(sys.metadataMap, k)
}
clear(sys.metadataMap)
sys.Unlock()
}

View File

@ -38,7 +38,6 @@ import (
"github.com/minio/minio/internal/bucket/versioning"
"github.com/minio/minio/internal/crypto"
"github.com/minio/minio/internal/event"
"github.com/minio/minio/internal/fips"
"github.com/minio/minio/internal/kms"
"github.com/minio/minio/internal/logger"
"github.com/minio/pkg/v3/policy"
@ -162,7 +161,7 @@ func (b BucketMetadata) lastUpdate() (t time.Time) {
t = b.BucketTargetsConfigMetaUpdatedAt
}
return
return t
}
// Versioning returns true if versioning is enabled
@ -543,26 +542,26 @@ func (b *BucketMetadata) migrateTargetConfig(ctx context.Context, objectAPI Obje
func encryptBucketMetadata(ctx context.Context, bucket string, input []byte, kmsContext kms.Context) (output, metabytes []byte, err error) {
if GlobalKMS == nil {
output = input
return
return output, metabytes, err
}
metadata := make(map[string]string)
key, err := GlobalKMS.GenerateKey(ctx, &kms.GenerateKeyRequest{AssociatedData: kmsContext})
if err != nil {
return
return output, metabytes, err
}
outbuf := bytes.NewBuffer(nil)
objectKey := crypto.GenerateKey(key.Plaintext, rand.Reader)
sealedKey := objectKey.Seal(key.Plaintext, crypto.GenerateIV(rand.Reader), crypto.S3.String(), bucket, "")
crypto.S3.CreateMetadata(metadata, key.KeyID, key.Ciphertext, sealedKey)
_, err = sio.Encrypt(outbuf, bytes.NewBuffer(input), sio.Config{Key: objectKey[:], MinVersion: sio.Version20, CipherSuites: fips.DARECiphers()})
_, err = sio.Encrypt(outbuf, bytes.NewBuffer(input), sio.Config{Key: objectKey[:], MinVersion: sio.Version20})
if err != nil {
return output, metabytes, err
}
metabytes, err = json.Marshal(metadata)
if err != nil {
return
return output, metabytes, err
}
return outbuf.Bytes(), metabytes, nil
}
@ -590,6 +589,6 @@ func decryptBucketMetadata(input []byte, bucket string, meta map[string]string,
}
outbuf := bytes.NewBuffer(nil)
_, err = sio.Decrypt(outbuf, bytes.NewBuffer(input), sio.Config{Key: objectKey[:], MinVersion: sio.Version20, CipherSuites: fips.DARECiphers()})
_, err = sio.Decrypt(outbuf, bytes.NewBuffer(input), sio.Config{Key: objectKey[:], MinVersion: sio.Version20})
return outbuf.Bytes(), err
}

View File

@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import (
"github.com/tinylib/msgp/msgp"
)

View File

@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import (
"bytes"
"testing"

View File

@ -295,7 +295,10 @@ func checkPutObjectLockAllowed(ctx context.Context, rq *http.Request, bucket, ob
if legalHoldRequested {
var lerr error
if legalHold, lerr = objectlock.ParseObjectLockLegalHoldHeaders(rq.Header); lerr != nil {
return mode, retainDate, legalHold, toAPIErrorCode(ctx, err)
return mode, retainDate, legalHold, toAPIErrorCode(ctx, lerr)
}
if legalHoldPermErr != ErrNone {
return mode, retainDate, legalHold, legalHoldPermErr
}
}
@ -305,7 +308,7 @@ func checkPutObjectLockAllowed(ctx context.Context, rq *http.Request, bucket, ob
return mode, retainDate, legalHold, toAPIErrorCode(ctx, err)
}
rMode, rDate, err := objectlock.ParseObjectLockRetentionHeaders(rq.Header)
if err != nil && !(replica && rMode == "" && rDate.IsZero()) {
if err != nil && (!replica || rMode != "" || !rDate.IsZero()) {
return mode, retainDate, legalHold, toAPIErrorCode(ctx, err)
}
if retentionPermErr != ErrNone {
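Besides fixing the shadowed error variable (`err` → `lerr`), this hunk rewrites the retention-header condition via De Morgan's law: `!(a && b && c)` becomes `!a || !b || !c`, so the two forms are logically identical and only the negated conjunction is avoided. A quick exhaustive check:

package main

import "fmt"

func main() {
	for _, replica := range []bool{false, true} {
		for _, emptyMode := range []bool{false, true} {
			for _, zeroDate := range []bool{false, true} {
				old := !(replica && emptyMode && zeroDate)
				neu := !replica || !emptyMode || !zeroDate
				fmt.Println(old == neu) // always true
			}
		}
	}
}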

View File

@ -122,7 +122,7 @@ func testCreateBucket(obj ObjectLayer, instanceType, bucketName string, apiRoute
var wg sync.WaitGroup
var mu sync.Mutex
wg.Add(n)
for i := 0; i < n; i++ {
for range n {
go func() {
defer wg.Done()
// Sync start.
@ -187,7 +187,7 @@ func testPutBucketPolicyHandler(obj ObjectLayer, instanceType, bucketName string
// Test case - 1.
{
bucketName: bucketName,
bucketPolicyReader: bytes.NewReader([]byte(fmt.Sprintf(bucketPolicyTemplate, bucketName, bucketName))),
bucketPolicyReader: bytes.NewReader(fmt.Appendf(nil, bucketPolicyTemplate, bucketName, bucketName)),
policyLen: len(fmt.Sprintf(bucketPolicyTemplate, bucketName, bucketName)),
accessKey: credentials.AccessKey,
@ -199,7 +199,7 @@ func testPutBucketPolicyHandler(obj ObjectLayer, instanceType, bucketName string
// Expecting StatusBadRequest (400).
{
bucketName: bucketName,
bucketPolicyReader: bytes.NewReader([]byte(fmt.Sprintf(bucketPolicyTemplate, bucketName, bucketName))),
bucketPolicyReader: bytes.NewReader(fmt.Appendf(nil, bucketPolicyTemplate, bucketName, bucketName)),
policyLen: maxBucketPolicySize + 1,
accessKey: credentials.AccessKey,
@ -211,7 +211,7 @@ func testPutBucketPolicyHandler(obj ObjectLayer, instanceType, bucketName string
// Expecting the HTTP response status to be StatusLengthRequired (411).
{
bucketName: bucketName,
bucketPolicyReader: bytes.NewReader([]byte(fmt.Sprintf(bucketPolicyTemplate, bucketName, bucketName))),
bucketPolicyReader: bytes.NewReader(fmt.Appendf(nil, bucketPolicyTemplate, bucketName, bucketName)),
policyLen: 0,
accessKey: credentials.AccessKey,
@ -258,7 +258,7 @@ func testPutBucketPolicyHandler(obj ObjectLayer, instanceType, bucketName string
// checkBucketPolicyResources should fail.
{
bucketName: bucketName1,
bucketPolicyReader: bytes.NewReader([]byte(fmt.Sprintf(bucketPolicyTemplate, bucketName, bucketName))),
bucketPolicyReader: bytes.NewReader(fmt.Appendf(nil, bucketPolicyTemplate, bucketName, bucketName)),
policyLen: len(fmt.Sprintf(bucketPolicyTemplate, bucketName, bucketName)),
accessKey: credentials.AccessKey,
@ -271,7 +271,7 @@ func testPutBucketPolicyHandler(obj ObjectLayer, instanceType, bucketName string
// should result in 404 StatusNotFound
{
bucketName: "non-existent-bucket",
bucketPolicyReader: bytes.NewReader([]byte(fmt.Sprintf(bucketPolicyTemplate, "non-existent-bucket", "non-existent-bucket"))),
bucketPolicyReader: bytes.NewReader(fmt.Appendf(nil, bucketPolicyTemplate, "non-existent-bucket", "non-existent-bucket")),
policyLen: len(fmt.Sprintf(bucketPolicyTemplate, bucketName, bucketName)),
accessKey: credentials.AccessKey,
@ -284,7 +284,7 @@ func testPutBucketPolicyHandler(obj ObjectLayer, instanceType, bucketName string
// should result in 404 StatusNotFound
{
bucketName: ".invalid-bucket",
bucketPolicyReader: bytes.NewReader([]byte(fmt.Sprintf(bucketPolicyTemplate, ".invalid-bucket", ".invalid-bucket"))),
bucketPolicyReader: bytes.NewReader(fmt.Appendf(nil, bucketPolicyTemplate, ".invalid-bucket", ".invalid-bucket")),
policyLen: len(fmt.Sprintf(bucketPolicyTemplate, bucketName, bucketName)),
accessKey: credentials.AccessKey,
@ -297,7 +297,7 @@ func testPutBucketPolicyHandler(obj ObjectLayer, instanceType, bucketName string
// should result in 400 StatusBadRequest.
{
bucketName: bucketName,
bucketPolicyReader: bytes.NewReader([]byte(fmt.Sprintf(bucketPolicyTemplateWithoutVersion, bucketName, bucketName))),
bucketPolicyReader: bytes.NewReader(fmt.Appendf(nil, bucketPolicyTemplateWithoutVersion, bucketName, bucketName)),
policyLen: len(fmt.Sprintf(bucketPolicyTemplateWithoutVersion, bucketName, bucketName)),
accessKey: credentials.AccessKey,
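Two more idiom updates in this test file: `for range n` (Go 1.22 range-over-int) replaces the counted loop, and `fmt.Appendf(nil, ...)` formats directly into a byte slice instead of allocating an intermediate string via `[]byte(fmt.Sprintf(...))`. A minimal sketch:

package main

import "fmt"

func main() {
	sum := 0
	for range 5 { // no index variable needed
		sum++
	}
	buf := fmt.Appendf(nil, "ran %d times", sum)
	fmt.Println(string(buf)) // prints "ran 5 times"
}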

View File

@ -19,6 +19,7 @@ package cmd
import (
"encoding/json"
"maps"
"net/http"
"net/url"
"strconv"
@ -187,9 +188,7 @@ func getConditionValues(r *http.Request, lc string, cred auth.Credentials) map[s
}
cloneURLValues := make(url.Values, len(r.Form))
for k, v := range r.Form {
cloneURLValues[k] = v
}
maps.Copy(cloneURLValues, r.Form)
for _, objLock := range []string{
xhttp.AmzObjectLockMode,
@ -224,7 +223,7 @@ func getConditionValues(r *http.Request, lc string, cred auth.Credentials) map[s
// Add groups claim which could be a list. This will ensure that the claim
// `jwt:groups` works.
if grpsVal, ok := claims["groups"]; ok {
if grpsIs, ok := grpsVal.([]interface{}); ok {
if grpsIs, ok := grpsVal.([]any); ok {
grps := []string{}
for _, gI := range grpsIs {
if g, ok := gI.(string); ok {
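`maps.Copy` (Go 1.21) replaces the manual copy loop over `r.Form`; like that loop, it copies keys and values shallowly, so the `url.Values` slices are shared rather than cloned. A minimal sketch:

package main

import (
	"fmt"
	"maps"
	"net/url"
)

func main() {
	src := url.Values{"key": {"v1", "v2"}}
	dst := make(url.Values, len(src))
	maps.Copy(dst, src) // same behavior as: for k, v := range src { dst[k] = v }
	fmt.Println(dst.Get("key")) // prints "v1"
}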

View File

@ -92,12 +92,12 @@ func parseBucketQuota(bucket string, data []byte) (quotaCfg *madmin.BucketQuota,
}
if !quotaCfg.IsValid() {
if quotaCfg.Type == "fifo" {
internalLogIf(GlobalContext, errors.New("Detected older 'fifo' quota config, 'fifo' feature is removed and not supported anymore. Please clear your quota configs using 'mc admin bucket quota alias/bucket --clear' and use 'mc ilm add' for expiration of objects"), logger.WarningKind)
internalLogIf(GlobalContext, errors.New("Detected older 'fifo' quota config, 'fifo' feature is removed and not supported anymore. Please clear your quota configs using 'mc quota clear alias/bucket' and use 'mc ilm add' for expiration of objects"), logger.WarningKind)
return quotaCfg, fmt.Errorf("invalid quota type 'fifo'")
}
return quotaCfg, fmt.Errorf("Invalid quota config %#v", quotaCfg)
}
return
return quotaCfg, err
}
func (sys *BucketQuotaSys) enforceQuotaHard(ctx context.Context, bucket string, size int64) error {

View File

@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import (
"github.com/tinylib/msgp/msgp"
)

View File

@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import (
"bytes"
"testing"

View File

@ -21,6 +21,7 @@ import (
"bytes"
"context"
"fmt"
"maps"
"net/http"
"net/url"
"regexp"
@ -125,16 +126,16 @@ func (ri replicatedInfos) VersionPurgeStatus() VersionPurgeStatusType {
completed := 0
for _, v := range ri.Targets {
switch v.VersionPurgeStatus {
case Failed:
return Failed
case Complete:
case replication.VersionPurgeFailed:
return replication.VersionPurgeFailed
case replication.VersionPurgeComplete:
completed++
}
}
if completed == len(ri.Targets) {
return Complete
return replication.VersionPurgeComplete
}
return Pending
return replication.VersionPurgePending
}
func (ri replicatedInfos) VersionPurgeStatusInternal() string {
@ -171,13 +172,13 @@ func (ri ReplicateObjectInfo) TargetReplicationStatus(arn string) (status replic
repStatMatches := replStatusRegex.FindAllStringSubmatch(ri.ReplicationStatusInternal, -1)
for _, repStatMatch := range repStatMatches {
if len(repStatMatch) != 3 {
return
return status
}
if repStatMatch[1] == arn {
return replication.StatusType(repStatMatch[2])
}
}
return
return status
}
// TargetReplicationStatus - returns replication status of a target
@ -185,13 +186,13 @@ func (o ObjectInfo) TargetReplicationStatus(arn string) (status replication.Stat
repStatMatches := replStatusRegex.FindAllStringSubmatch(o.ReplicationStatusInternal, -1)
for _, repStatMatch := range repStatMatches {
if len(repStatMatch) != 3 {
return
return status
}
if repStatMatch[1] == arn {
return replication.StatusType(repStatMatch[2])
}
}
return
return status
}
type replicateTargetDecision struct {
@ -309,9 +310,9 @@ func parseReplicateDecision(ctx context.Context, bucket, s string) (r ReplicateD
targetsMap: make(map[string]replicateTargetDecision),
}
if len(s) == 0 {
return
return r, err
}
for _, p := range strings.Split(s, ",") {
for p := range strings.SplitSeq(s, ",") {
if p == "" {
continue
}
@ -326,7 +327,7 @@ func parseReplicateDecision(ctx context.Context, bucket, s string) (r ReplicateD
}
r.targetsMap[slc[0]] = replicateTargetDecision{Replicate: tgt[0] == "true", Synchronous: tgt[1] == "true", Arn: tgt[2], ID: tgt[3]}
}
return
return r, err
}
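`strings.SplitSeq` (Go 1.24) yields the split fields lazily as an iterator instead of allocating the whole `[]string` up front, which suits loops like the one above that skip empty fields. A minimal sketch:

package main

import (
	"fmt"
	"strings"
)

func main() {
	for p := range strings.SplitSeq("a,,b,c", ",") {
		if p == "" {
			continue // same skip as in parseReplicateDecision
		}
		fmt.Println(p)
	}
}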
// ReplicationState represents internal replication state
@ -373,14 +374,14 @@ func (rs *ReplicationState) CompositeReplicationStatus() (st replication.StatusT
case !rs.ReplicaStatus.Empty():
return rs.ReplicaStatus
default:
return
return st
}
}
// CompositeVersionPurgeStatus returns overall replication purge status for the permanent delete being replicated.
func (rs *ReplicationState) CompositeVersionPurgeStatus() VersionPurgeStatusType {
switch VersionPurgeStatusType(rs.VersionPurgeStatusInternal) {
case Pending, Complete, Failed: // for backward compatibility
case replication.VersionPurgePending, replication.VersionPurgeComplete, replication.VersionPurgeFailed: // for backward compatibility
return VersionPurgeStatusType(rs.VersionPurgeStatusInternal)
default:
return getCompositeVersionPurgeStatus(rs.PurgeTargets)
@ -478,16 +479,16 @@ func getCompositeVersionPurgeStatus(m map[string]VersionPurgeStatusType) Version
completed := 0
for _, v := range m {
switch v {
case Failed:
return Failed
case Complete:
case replication.VersionPurgeFailed:
return replication.VersionPurgeFailed
case replication.VersionPurgeComplete:
completed++
}
}
if completed == len(m) {
return Complete
return replication.VersionPurgeComplete
}
return Pending
return replication.VersionPurgePending
}
// getHealReplicateObjectInfo returns info needed by heal replication in ReplicateObjectInfo
@ -635,28 +636,7 @@ type ResyncTarget struct {
}
// VersionPurgeStatusType represents status of a versioned delete or permanent delete w.r.t bucket replication
type VersionPurgeStatusType string
const (
// Pending - versioned delete replication is pending.
Pending VersionPurgeStatusType = "PENDING"
// Complete - versioned delete replication is now complete, erase version on disk.
Complete VersionPurgeStatusType = "COMPLETE"
// Failed - versioned delete replication failed.
Failed VersionPurgeStatusType = "FAILED"
)
// Empty returns true if purge status was not set.
func (v VersionPurgeStatusType) Empty() bool {
return string(v) == ""
}
// Pending returns true if the version is pending purge.
func (v VersionPurgeStatusType) Pending() bool {
return v == Pending || v == Failed
}
type VersionPurgeStatusType = replication.VersionPurgeStatusType
type replicationResyncer struct {
// map of bucket to their resync status
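The entire local `VersionPurgeStatusType` definition, its constants, and its helper methods are replaced by a type alias into the `replication` package. Because an alias (declared with `=`) is the same type rather than a new defined type, callers keep compiling unchanged against the relocated constants. A hedged sketch of the mechanism with illustrative names:

package main

import "fmt"

type statusType string // stands in for replication.VersionPurgeStatusType

const versionPurgeComplete statusType = "COMPLETE"

type aliasType = statusType // alias: identical type, no conversion needed

func main() {
	var v aliasType = versionPurgeComplete // assignable directly
	fmt.Println(v == versionPurgeComplete) // prints true
}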
@ -756,10 +736,8 @@ type BucketReplicationResyncStatus struct {
func (rs *BucketReplicationResyncStatus) cloneTgtStats() (m map[string]TargetReplicationResyncStatus) {
m = make(map[string]TargetReplicationResyncStatus)
for arn, st := range rs.TargetsMap {
m[arn] = st
}
return
maps.Copy(m, rs.TargetsMap)
return m
}
func newBucketResyncStatus(bucket string) BucketReplicationResyncStatus {
@ -796,7 +774,7 @@ func extractReplicateDiffOpts(q url.Values) (opts madmin.ReplDiffOpts) {
opts.Verbose = q.Get("verbose") == "true"
opts.ARN = q.Get("arn")
opts.Prefix = q.Get("prefix")
return
return opts
}
const (

View File

@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import (
"github.com/minio/minio/internal/bucket/replication"
"github.com/tinylib/msgp/msgp"
@ -41,19 +41,17 @@ func (z *BucketReplicationResyncStatus) DecodeMsg(dc *msgp.Reader) (err error) {
if z.TargetsMap == nil {
z.TargetsMap = make(map[string]TargetReplicationResyncStatus, zb0002)
} else if len(z.TargetsMap) > 0 {
for key := range z.TargetsMap {
delete(z.TargetsMap, key)
}
clear(z.TargetsMap)
}
for zb0002 > 0 {
zb0002--
var za0001 string
var za0002 TargetReplicationResyncStatus
za0001, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "TargetsMap")
return
}
var za0002 TargetReplicationResyncStatus
err = za0002.DecodeMsg(dc)
if err != nil {
err = msgp.WrapError(err, "TargetsMap", za0001)
@ -203,14 +201,12 @@ func (z *BucketReplicationResyncStatus) UnmarshalMsg(bts []byte) (o []byte, err
if z.TargetsMap == nil {
z.TargetsMap = make(map[string]TargetReplicationResyncStatus, zb0002)
} else if len(z.TargetsMap) > 0 {
for key := range z.TargetsMap {
delete(z.TargetsMap, key)
}
clear(z.TargetsMap)
}
for zb0002 > 0 {
var za0001 string
var za0002 TargetReplicationResyncStatus
zb0002--
var za0001 string
za0001, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, "TargetsMap")
@ -288,19 +284,17 @@ func (z *MRFReplicateEntries) DecodeMsg(dc *msgp.Reader) (err error) {
if z.Entries == nil {
z.Entries = make(map[string]MRFReplicateEntry, zb0002)
} else if len(z.Entries) > 0 {
for key := range z.Entries {
delete(z.Entries, key)
}
clear(z.Entries)
}
for zb0002 > 0 {
zb0002--
var za0001 string
var za0002 MRFReplicateEntry
za0001, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "Entries")
return
}
var za0002 MRFReplicateEntry
var zb0003 uint32
zb0003, err = dc.ReadMapHeader()
if err != nil {
@ -478,14 +472,12 @@ func (z *MRFReplicateEntries) UnmarshalMsg(bts []byte) (o []byte, err error) {
if z.Entries == nil {
z.Entries = make(map[string]MRFReplicateEntry, zb0002)
} else if len(z.Entries) > 0 {
for key := range z.Entries {
delete(z.Entries, key)
}
clear(z.Entries)
}
for zb0002 > 0 {
var za0001 string
var za0002 MRFReplicateEntry
zb0002--
var za0001 string
za0001, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, "Entries")
@ -872,19 +864,17 @@ func (z *ReplicationState) DecodeMsg(dc *msgp.Reader) (err error) {
if z.Targets == nil {
z.Targets = make(map[string]replication.StatusType, zb0002)
} else if len(z.Targets) > 0 {
for key := range z.Targets {
delete(z.Targets, key)
}
clear(z.Targets)
}
for zb0002 > 0 {
zb0002--
var za0001 string
var za0002 replication.StatusType
za0001, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "Targets")
return
}
var za0002 replication.StatusType
err = za0002.DecodeMsg(dc)
if err != nil {
err = msgp.WrapError(err, "Targets", za0001)
@ -902,53 +892,45 @@ func (z *ReplicationState) DecodeMsg(dc *msgp.Reader) (err error) {
if z.PurgeTargets == nil {
z.PurgeTargets = make(map[string]VersionPurgeStatusType, zb0003)
} else if len(z.PurgeTargets) > 0 {
for key := range z.PurgeTargets {
delete(z.PurgeTargets, key)
}
clear(z.PurgeTargets)
}
for zb0003 > 0 {
zb0003--
var za0003 string
var za0004 VersionPurgeStatusType
za0003, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "PurgeTargets")
return
}
{
var zb0004 string
zb0004, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "PurgeTargets", za0003)
return
}
za0004 = VersionPurgeStatusType(zb0004)
var za0004 VersionPurgeStatusType
err = za0004.DecodeMsg(dc)
if err != nil {
err = msgp.WrapError(err, "PurgeTargets", za0003)
return
}
z.PurgeTargets[za0003] = za0004
}
case "ResetStatusesMap":
var zb0005 uint32
zb0005, err = dc.ReadMapHeader()
var zb0004 uint32
zb0004, err = dc.ReadMapHeader()
if err != nil {
err = msgp.WrapError(err, "ResetStatusesMap")
return
}
if z.ResetStatusesMap == nil {
z.ResetStatusesMap = make(map[string]string, zb0005)
z.ResetStatusesMap = make(map[string]string, zb0004)
} else if len(z.ResetStatusesMap) > 0 {
for key := range z.ResetStatusesMap {
delete(z.ResetStatusesMap, key)
}
clear(z.ResetStatusesMap)
}
for zb0005 > 0 {
zb0005--
for zb0004 > 0 {
zb0004--
var za0005 string
var za0006 string
za0005, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "ResetStatusesMap")
return
}
var za0006 string
za0006, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "ResetStatusesMap", za0005)
@ -1078,7 +1060,7 @@ func (z *ReplicationState) EncodeMsg(en *msgp.Writer) (err error) {
err = msgp.WrapError(err, "PurgeTargets")
return
}
err = en.WriteString(string(za0004))
err = za0004.EncodeMsg(en)
if err != nil {
err = msgp.WrapError(err, "PurgeTargets", za0003)
return
@ -1154,7 +1136,11 @@ func (z *ReplicationState) MarshalMsg(b []byte) (o []byte, err error) {
o = msgp.AppendMapHeader(o, uint32(len(z.PurgeTargets)))
for za0003, za0004 := range z.PurgeTargets {
o = msgp.AppendString(o, za0003)
o = msgp.AppendString(o, string(za0004))
o, err = za0004.MarshalMsg(o)
if err != nil {
err = msgp.WrapError(err, "PurgeTargets", za0003)
return
}
}
// string "ResetStatusesMap"
o = append(o, 0xb0, 0x52, 0x65, 0x73, 0x65, 0x74, 0x53, 0x74, 0x61, 0x74, 0x75, 0x73, 0x65, 0x73, 0x4d, 0x61, 0x70)
@ -1236,14 +1222,12 @@ func (z *ReplicationState) UnmarshalMsg(bts []byte) (o []byte, err error) {
if z.Targets == nil {
z.Targets = make(map[string]replication.StatusType, zb0002)
} else if len(z.Targets) > 0 {
for key := range z.Targets {
delete(z.Targets, key)
}
clear(z.Targets)
}
for zb0002 > 0 {
var za0001 string
var za0002 replication.StatusType
zb0002--
var za0001 string
za0001, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, "Targets")
@ -1266,48 +1250,40 @@ func (z *ReplicationState) UnmarshalMsg(bts []byte) (o []byte, err error) {
if z.PurgeTargets == nil {
z.PurgeTargets = make(map[string]VersionPurgeStatusType, zb0003)
} else if len(z.PurgeTargets) > 0 {
for key := range z.PurgeTargets {
delete(z.PurgeTargets, key)
}
clear(z.PurgeTargets)
}
for zb0003 > 0 {
var za0003 string
var za0004 VersionPurgeStatusType
zb0003--
var za0003 string
za0003, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, "PurgeTargets")
return
}
{
var zb0004 string
zb0004, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, "PurgeTargets", za0003)
return
}
za0004 = VersionPurgeStatusType(zb0004)
bts, err = za0004.UnmarshalMsg(bts)
if err != nil {
err = msgp.WrapError(err, "PurgeTargets", za0003)
return
}
z.PurgeTargets[za0003] = za0004
}
case "ResetStatusesMap":
var zb0005 uint32
zb0005, bts, err = msgp.ReadMapHeaderBytes(bts)
var zb0004 uint32
zb0004, bts, err = msgp.ReadMapHeaderBytes(bts)
if err != nil {
err = msgp.WrapError(err, "ResetStatusesMap")
return
}
if z.ResetStatusesMap == nil {
z.ResetStatusesMap = make(map[string]string, zb0005)
z.ResetStatusesMap = make(map[string]string, zb0004)
} else if len(z.ResetStatusesMap) > 0 {
for key := range z.ResetStatusesMap {
delete(z.ResetStatusesMap, key)
}
clear(z.ResetStatusesMap)
}
for zb0005 > 0 {
var za0005 string
for zb0004 > 0 {
var za0006 string
zb0005--
zb0004--
var za0005 string
za0005, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, "ResetStatusesMap")
@ -1345,7 +1321,7 @@ func (z *ReplicationState) Msgsize() (s int) {
if z.PurgeTargets != nil {
for za0003, za0004 := range z.PurgeTargets {
_ = za0004
s += msgp.StringPrefixSize + len(za0003) + msgp.StringPrefixSize + len(string(za0004))
s += msgp.StringPrefixSize + len(za0003) + za0004.Msgsize()
}
}
s += 17 + msgp.MapHeaderSize
@ -2507,55 +2483,3 @@ func (z *TargetReplicationResyncStatus) Msgsize() (s int) {
s = 1 + 3 + msgp.TimeSize + 4 + msgp.TimeSize + 3 + msgp.StringPrefixSize + len(z.ResyncID) + 4 + msgp.TimeSize + 4 + msgp.IntSize + 3 + msgp.Int64Size + 4 + msgp.Int64Size + 3 + msgp.Int64Size + 4 + msgp.Int64Size + 4 + msgp.StringPrefixSize + len(z.Bucket) + 4 + msgp.StringPrefixSize + len(z.Object)
return
}
// DecodeMsg implements msgp.Decodable
func (z *VersionPurgeStatusType) DecodeMsg(dc *msgp.Reader) (err error) {
{
var zb0001 string
zb0001, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err)
return
}
(*z) = VersionPurgeStatusType(zb0001)
}
return
}
// EncodeMsg implements msgp.Encodable
func (z VersionPurgeStatusType) EncodeMsg(en *msgp.Writer) (err error) {
err = en.WriteString(string(z))
if err != nil {
err = msgp.WrapError(err)
return
}
return
}
// MarshalMsg implements msgp.Marshaler
func (z VersionPurgeStatusType) MarshalMsg(b []byte) (o []byte, err error) {
o = msgp.Require(b, z.Msgsize())
o = msgp.AppendString(o, string(z))
return
}
// UnmarshalMsg implements msgp.Unmarshaler
func (z *VersionPurgeStatusType) UnmarshalMsg(bts []byte) (o []byte, err error) {
{
var zb0001 string
zb0001, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err)
return
}
(*z) = VersionPurgeStatusType(zb0001)
}
o = bts
return
}
// Msgsize returns an upper bound estimate of the number of bytes occupied by the serialized message
func (z VersionPurgeStatusType) Msgsize() (s int) {
s = msgp.StringPrefixSize + len(string(z))
return
}

View File

@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import (
"bytes"
"testing"

View File

@ -18,7 +18,6 @@
package cmd
import (
"context"
"testing"
"github.com/minio/minio/internal/bucket/replication"
@ -184,7 +183,7 @@ var parseReplicationDecisionTest = []struct {
func TestParseReplicateDecision(t *testing.T) {
for i, test := range parseReplicationDecisionTest {
dsc, err := parseReplicateDecision(context.Background(), "bucket", test.expDsc.String())
dsc, err := parseReplicateDecision(t.Context(), "bucket", test.expDsc.String())
if err != nil {
if test.expErr != err {
t.Errorf("Test%d (%s): Expected parse error got %t , want %t", i+1, test.name, err, test.expErr)
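`t.Context()` (Go 1.24) replaces `context.Background()` in tests; the returned context is canceled shortly before the test finishes, so work keyed on it cannot outlive the test. A minimal sketch:

package example_test

import "testing"

func TestUsesContext(t *testing.T) {
	ctx := t.Context() // canceled automatically as the test ends
	select {
	case <-ctx.Done():
		t.Fatal("context must be live while the test runs")
	default:
	}
}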

View File

@ -24,6 +24,7 @@ import (
"errors"
"fmt"
"io"
"maps"
"math/rand"
"net/http"
"net/url"
@ -252,31 +253,31 @@ func getMustReplicateOptions(userDefined map[string]string, userTags string, sta
func mustReplicate(ctx context.Context, bucket, object string, mopts mustReplicateOptions) (dsc ReplicateDecision) {
// object layer not initialized we return with no decision.
if newObjectLayerFn() == nil {
return
return dsc
}
// Disable server-side replication on object prefixes which are excluded
// from versioning via the MinIO bucket versioning extension.
if !globalBucketVersioningSys.PrefixEnabled(bucket, object) {
return
return dsc
}
replStatus := mopts.ReplicationStatus()
if replStatus == replication.Replica && !mopts.isMetadataReplication() {
return
return dsc
}
if mopts.replicationRequest { // incoming replication request on target cluster
return
return dsc
}
cfg, err := getReplicationConfig(ctx, bucket)
if err != nil {
replLogOnceIf(ctx, err, bucket)
return
return dsc
}
if cfg == nil {
return
return dsc
}
opts := replication.ObjectOpts{
@ -347,16 +348,16 @@ func checkReplicateDelete(ctx context.Context, bucket string, dobj ObjectToDelet
rcfg, err := getReplicationConfig(ctx, bucket)
if err != nil || rcfg == nil {
replLogOnceIf(ctx, err, bucket)
return
return dsc
}
// If incoming request is a replication request, it does not need to be re-replicated.
if delOpts.ReplicationRequest {
return
return dsc
}
// Skip replication if this object's prefix is excluded from being
// versioned.
if !delOpts.Versioned {
return
return dsc
}
opts := replication.ObjectOpts{
Name: dobj.ObjectName,
@ -390,7 +391,7 @@ func checkReplicateDelete(ctx context.Context, bucket string, dobj ObjectToDelet
// can be the case that other cluster is down and duplicate `mc rm --vid`
// is issued - this still needs to be replicated back to the other target
if !oi.VersionPurgeStatus.Empty() {
replicate = oi.VersionPurgeStatus == Pending || oi.VersionPurgeStatus == Failed
replicate = oi.VersionPurgeStatus == replication.VersionPurgePending || oi.VersionPurgeStatus == replication.VersionPurgeFailed
dsc.Set(newReplicateTargetDecision(tgtArn, replicate, sync))
}
continue
@ -616,10 +617,10 @@ func replicateDeleteToTarget(ctx context.Context, dobj DeletedObjectReplicationI
if dobj.VersionID == "" && rinfo.PrevReplicationStatus == replication.Completed && dobj.OpType != replication.ExistingObjectReplicationType {
rinfo.ReplicationStatus = rinfo.PrevReplicationStatus
return
return rinfo
}
if dobj.VersionID != "" && rinfo.VersionPurgeStatus == Complete {
return
if dobj.VersionID != "" && rinfo.VersionPurgeStatus == replication.VersionPurgeComplete {
return rinfo
}
if globalBucketTargetSys.isOffline(tgt.EndpointURL()) {
replLogOnceIf(ctx, fmt.Errorf("remote target is offline for bucket:%s arn:%s", dobj.Bucket, tgt.ARN), "replication-target-offline-delete-"+tgt.ARN)
@ -638,9 +639,9 @@ func replicateDeleteToTarget(ctx context.Context, dobj DeletedObjectReplicationI
if dobj.VersionID == "" {
rinfo.ReplicationStatus = replication.Failed
} else {
rinfo.VersionPurgeStatus = Failed
rinfo.VersionPurgeStatus = replication.VersionPurgeFailed
}
return
return rinfo
}
// early return if already replicated delete marker for existing object replication/ healing delete markers
if dobj.DeleteMarkerVersionID != "" {
@ -657,13 +658,13 @@ func replicateDeleteToTarget(ctx context.Context, dobj DeletedObjectReplicationI
// delete marker already replicated
if dobj.VersionID == "" && rinfo.VersionPurgeStatus.Empty() {
rinfo.ReplicationStatus = replication.Completed
return
return rinfo
}
case isErrObjectNotFound(serr), isErrVersionNotFound(serr):
// version being purged is already not found on target.
if !rinfo.VersionPurgeStatus.Empty() {
rinfo.VersionPurgeStatus = Complete
return
rinfo.VersionPurgeStatus = replication.VersionPurgeComplete
return rinfo
}
case isErrReadQuorum(serr), isErrWriteQuorum(serr):
// destination has some quorum issues, perform removeObject() anyways
@ -677,7 +678,7 @@ func replicateDeleteToTarget(ctx context.Context, dobj DeletedObjectReplicationI
if err != nil && !toi.ReplicationReady {
rinfo.ReplicationStatus = replication.Failed
rinfo.Err = err
return
return rinfo
}
}
}
@ -695,7 +696,7 @@ func replicateDeleteToTarget(ctx context.Context, dobj DeletedObjectReplicationI
if dobj.VersionID == "" {
rinfo.ReplicationStatus = replication.Failed
} else {
rinfo.VersionPurgeStatus = Failed
rinfo.VersionPurgeStatus = replication.VersionPurgeFailed
}
replLogIf(ctx, fmt.Errorf("unable to replicate delete marker to %s: %s/%s(%s): %w", tgt.EndpointURL(), tgt.Bucket, dobj.ObjectName, versionID, rmErr))
if rmErr != nil && minio.IsNetworkOrHostDown(rmErr, true) && !globalBucketTargetSys.isOffline(tgt.EndpointURL()) {
@ -705,10 +706,10 @@ func replicateDeleteToTarget(ctx context.Context, dobj DeletedObjectReplicationI
if dobj.VersionID == "" {
rinfo.ReplicationStatus = replication.Completed
} else {
rinfo.VersionPurgeStatus = Complete
rinfo.VersionPurgeStatus = replication.VersionPurgeComplete
}
}
return
return rinfo
}
func getCopyObjMetadata(oi ObjectInfo, sc string) map[string]string {
@ -803,9 +804,7 @@ func putReplicationOpts(ctx context.Context, sc string, objInfo ObjectInfo) (put
} else {
cs, mp := getCRCMeta(objInfo, 0, nil)
// Set object checksum.
for k, v := range cs {
meta[k] = v
}
maps.Copy(meta, cs)
isMP = mp
if !objInfo.isMultipart() && cs[xhttp.AmzChecksumType] == xhttp.AmzChecksumTypeFullObject {
// For objects where checksum is full object, it will be the same.
@ -911,7 +910,7 @@ func putReplicationOpts(ctx context.Context, sc string, objInfo ObjectInfo) (put
}
putOpts.ServerSideEncryption = sseEnc
}
return
return putOpts, isMP, err
}
type replicationAction string
@ -969,9 +968,7 @@ func getReplicationAction(oi1 ObjectInfo, oi2 minio.ObjectInfo, opType replicati
t, _ := tags.ParseObjectTags(oi1.UserTags)
oi2Map := make(map[string]string)
for k, v := range oi2.UserTags {
oi2Map[k] = v
}
maps.Copy(oi2Map, oi2.UserTags)
if (oi2.UserTagCount > 0 && !reflect.DeepEqual(oi2Map, t.ToMap())) || (oi2.UserTagCount != len(t.ToMap())) {
return replicateMetadata
}
@ -1211,7 +1208,7 @@ func (ri ReplicateObjectInfo) replicateObject(ctx context.Context, objectAPI Obj
if ri.TargetReplicationStatus(tgt.ARN) == replication.Completed && !ri.ExistingObjResync.Empty() && !ri.ExistingObjResync.mustResyncTarget(tgt.ARN) {
rinfo.ReplicationStatus = replication.Completed
rinfo.ReplicationResynced = true
return
return rinfo
}
if globalBucketTargetSys.isOffline(tgt.EndpointURL()) {
@ -1223,7 +1220,7 @@ func (ri ReplicateObjectInfo) replicateObject(ctx context.Context, objectAPI Obj
UserAgent: "Internal: [Replication]",
Host: globalLocalNodeName,
})
return
return rinfo
}
versioned := globalBucketVersioningSys.PrefixEnabled(bucket, object)
@ -1247,7 +1244,7 @@ func (ri ReplicateObjectInfo) replicateObject(ctx context.Context, objectAPI Obj
})
replLogOnceIf(ctx, fmt.Errorf("unable to read source object %s/%s(%s): %w", bucket, object, objInfo.VersionID, err), object+":"+objInfo.VersionID)
}
return
return rinfo
}
defer gr.Close()
@ -1271,7 +1268,7 @@ func (ri ReplicateObjectInfo) replicateObject(ctx context.Context, objectAPI Obj
UserAgent: "Internal: [Replication]",
Host: globalLocalNodeName,
})
return
return rinfo
}
}
@ -1310,7 +1307,7 @@ func (ri ReplicateObjectInfo) replicateObject(ctx context.Context, objectAPI Obj
UserAgent: "Internal: [Replication]",
Host: globalLocalNodeName,
})
return
return rinfo
}
var headerSize int
@ -1347,7 +1344,7 @@ func (ri ReplicateObjectInfo) replicateObject(ctx context.Context, objectAPI Obj
globalBucketTargetSys.markOffline(tgt.EndpointURL())
}
}
return
return rinfo
}
// replicateAll replicates metadata for specified version of the object to destination bucket
@ -1383,7 +1380,7 @@ func (ri ReplicateObjectInfo) replicateAll(ctx context.Context, objectAPI Object
UserAgent: "Internal: [Replication]",
Host: globalLocalNodeName,
})
return
return rinfo
}
versioned := globalBucketVersioningSys.PrefixEnabled(bucket, object)
@ -1408,7 +1405,7 @@ func (ri ReplicateObjectInfo) replicateAll(ctx context.Context, objectAPI Object
})
replLogIf(ctx, fmt.Errorf("unable to replicate to target %s for %s/%s(%s): %w", tgt.EndpointURL(), bucket, object, objInfo.VersionID, err))
}
return
return rinfo
}
defer gr.Close()
@ -1421,7 +1418,7 @@ func (ri ReplicateObjectInfo) replicateAll(ctx context.Context, objectAPI Object
if objInfo.TargetReplicationStatus(tgt.ARN) == replication.Completed && !ri.ExistingObjResync.Empty() && !ri.ExistingObjResync.mustResyncTarget(tgt.ARN) {
rinfo.ReplicationStatus = replication.Completed
rinfo.ReplicationResynced = true
return
return rinfo
}
size, err := objInfo.GetActualSize()
@ -1434,7 +1431,7 @@ func (ri ReplicateObjectInfo) replicateAll(ctx context.Context, objectAPI Object
UserAgent: "Internal: [Replication]",
Host: globalLocalNodeName,
})
return
return rinfo
}
// Set the encrypted size for SSE-C objects
@ -1497,7 +1494,7 @@ func (ri ReplicateObjectInfo) replicateAll(ctx context.Context, objectAPI Object
rinfo.ReplicationAction = rAction
rinfo.ReplicationStatus = replication.Completed
}
return
return rinfo
}
} else {
// SSEC objects will refuse HeadObject without the decryption key.
@ -1531,7 +1528,7 @@ func (ri ReplicateObjectInfo) replicateAll(ctx context.Context, objectAPI Object
UserAgent: "Internal: [Replication]",
Host: globalLocalNodeName,
})
return
return rinfo
}
}
applyAction:
@ -1597,7 +1594,7 @@ applyAction:
UserAgent: "Internal: [Replication]",
Host: globalLocalNodeName,
})
return
return rinfo
}
var headerSize int
for k, v := range putOpts.Header() {
@ -1634,7 +1631,7 @@ applyAction:
}
}
}
return
return rinfo
}
func replicateObjectWithMultipart(ctx context.Context, c *minio.Core, bucket, object string, r io.Reader, objInfo ObjectInfo, opts minio.PutObjectOptions) (err error) {
@ -1770,9 +1767,7 @@ func filterReplicationStatusMetadata(metadata map[string]string) map[string]stri
}
if !copied {
dst = make(map[string]string, len(metadata))
for k, v := range metadata {
dst[k] = v
}
maps.Copy(dst, metadata)
copied = true
}
delete(dst, key)
@ -2682,7 +2677,7 @@ func (c replicationConfig) Replicate(opts replication.ObjectOpts) bool {
// Resync returns true if replication reset is requested
func (c replicationConfig) Resync(ctx context.Context, oi ObjectInfo, dsc ReplicateDecision, tgtStatuses map[string]replication.StatusType) (r ResyncDecision) {
if c.Empty() {
return
return r
}
// Now overlay existing object replication choices for target
@ -2698,7 +2693,7 @@ func (c replicationConfig) Resync(ctx context.Context, oi ObjectInfo, dsc Replic
tgtArns := c.Config.FilterTargetArns(opts)
// indicates no matching target with Existing object replication enabled.
if len(tgtArns) == 0 {
return
return r
}
for _, t := range tgtArns {
opts.TargetArn = t
@ -2724,7 +2719,7 @@ func (c replicationConfig) resync(oi ObjectInfo, dsc ReplicateDecision, tgtStatu
targets: make(map[string]ResyncTargetDecision, len(dsc.targetsMap)),
}
if c.remotes == nil {
return
return r
}
for _, tgt := range c.remotes.Targets {
d, ok := dsc.targetsMap[tgt.Arn]
@ -2736,7 +2731,7 @@ func (c replicationConfig) resync(oi ObjectInfo, dsc ReplicateDecision, tgtStatu
}
r.targets[d.Arn] = resyncTarget(oi, tgt.Arn, tgt.ResetID, tgt.ResetBeforeDate, tgtStatuses[tgt.Arn])
}
return
return r
}
func targetResetHeader(arn string) string {
@ -2755,28 +2750,28 @@ func resyncTarget(oi ObjectInfo, arn string, resetID string, resetBeforeDate tim
if !ok { // existing object replication is enabled and object version is unreplicated so far.
if resetID != "" && oi.ModTime.Before(resetBeforeDate) { // trigger replication if `mc replicate reset` requested
rd.Replicate = true
return
return rd
}
// For existing object reset - this condition is needed
rd.Replicate = tgtStatus == ""
return
return rd
}
if resetID == "" || resetBeforeDate.Equal(timeSentinel) { // no reset in progress
return
return rd
}
// if already replicated, return true if a new reset was requested.
splits := strings.SplitN(rs, ";", 2)
if len(splits) != 2 {
return
return rd
}
newReset := splits[1] != resetID
if !newReset && tgtStatus == replication.Completed {
// already replicated and no reset requested
return
return rd
}
rd.Replicate = newReset && oi.ModTime.Before(resetBeforeDate)
return
return rd
}
const resyncTimeInterval = time.Minute * 1
@ -2954,7 +2949,7 @@ func (s *replicationResyncer) resyncBucket(ctx context.Context, objectAPI Object
}()
var wg sync.WaitGroup
for i := 0; i < resyncParallelRoutines; i++ {
for i := range resyncParallelRoutines {
wg.Add(1)
workers[i] = make(chan ReplicateObjectInfo, 100)
i := i
@ -3063,7 +3058,7 @@ func (s *replicationResyncer) resyncBucket(ctx context.Context, objectAPI Object
workers[h%uint64(resyncParallelRoutines)] <- roi
}
}
for i := 0; i < resyncParallelRoutines; i++ {
for i := range resyncParallelRoutines {
xioutil.SafeClose(workers[i])
}
wg.Wait()
@ -3193,11 +3188,9 @@ func (p *ReplicationPool) startResyncRoutine(ctx context.Context, buckets []stri
<-ctx.Done()
return
}
duration := time.Duration(r.Float64() * float64(time.Minute))
if duration < time.Second {
duration := max(time.Duration(r.Float64()*float64(time.Minute)),
// Make sure to sleep at least a second to avoid high CPU ticks.
duration = time.Second
}
time.Second)
time.Sleep(duration)
}
}
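The built-in `max` (Go 1.21) folds the one-second floor into the assignment, replacing the if-clamp on the random resync sleep. A minimal sketch:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

func main() {
	r := rand.New(rand.NewSource(42))
	// Random sleep up to a minute, but never below one second.
	d := max(time.Duration(r.Float64()*float64(time.Minute)), time.Second)
	fmt.Println(d >= time.Second) // prints true
}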
@ -3363,7 +3356,7 @@ func getReplicationDiff(ctx context.Context, objAPI ObjectLayer, bucket string,
}
for arn, st := range roi.TargetPurgeStatuses {
if opts.ARN == "" || opts.ARN == arn {
if !opts.Verbose && st == Complete {
if !opts.Verbose && st == replication.VersionPurgeComplete {
continue
}
t, ok := tgtsMap[arn]
@ -3429,12 +3422,12 @@ func queueReplicationHeal(ctx context.Context, bucket string, oi ObjectInfo, rcf
roi = getHealReplicateObjectInfo(oi, rcfg)
roi.RetryCount = uint32(retryCount)
if !roi.Dsc.ReplicateAny() {
return
return roi
}
// early return if replication already done, otherwise we need to determine if this
// version is an existing object that needs healing.
if oi.ReplicationStatus == replication.Completed && oi.VersionPurgeStatus.Empty() && !roi.ExistingObjResync.mustResync() {
return
return roi
}
if roi.DeleteMarker || !roi.VersionPurgeStatus.Empty() {
@ -3462,16 +3455,16 @@ func queueReplicationHeal(ctx context.Context, bucket string, oi ObjectInfo, rcf
// heal delete marker replication failure or versioned delete replication failure
if roi.ReplicationStatus == replication.Pending ||
roi.ReplicationStatus == replication.Failed ||
roi.VersionPurgeStatus == Failed || roi.VersionPurgeStatus == Pending {
roi.VersionPurgeStatus == replication.VersionPurgeFailed || roi.VersionPurgeStatus == replication.VersionPurgePending {
globalReplicationPool.Get().queueReplicaDeleteTask(dv)
return
return roi
}
// if replication status is Complete on DeleteMarker and existing object resync required
if roi.ExistingObjResync.mustResync() && (roi.ReplicationStatus == replication.Completed || roi.ReplicationStatus.Empty()) {
queueReplicateDeletesWrapper(dv, roi.ExistingObjResync)
return
return roi
}
return
return roi
}
if roi.ExistingObjResync.mustResync() {
roi.OpType = replication.ExistingObjectReplicationType
@ -3480,13 +3473,13 @@ func queueReplicationHeal(ctx context.Context, bucket string, oi ObjectInfo, rcf
case replication.Pending, replication.Failed:
roi.EventType = ReplicateHeal
globalReplicationPool.Get().queueReplicaTask(roi)
return
return roi
}
if roi.ExistingObjResync.mustResync() {
roi.EventType = ReplicateExisting
globalReplicationPool.Get().queueReplicaTask(roi)
}
return
return roi
}
const (
@ -3750,7 +3743,7 @@ func (p *ReplicationPool) queueMRFHeal() error {
}
func (p *ReplicationPool) initialized() bool {
return !(p == nil || p.objLayer == nil)
return p != nil && p.objLayer != nil
}
// getMRF returns MRF entries for this node.
@ -3797,14 +3790,13 @@ func getCRCMeta(oi ObjectInfo, partNum int, h http.Header) (cs map[string]string
meta := make(map[string]string)
cs, isMP = oi.decryptChecksums(partNum, h)
for k, v := range cs {
cksum := hash.NewChecksumString(k, v)
if cksum == nil {
if k == xhttp.AmzChecksumType {
continue
}
if cksum.Valid() {
meta[cksum.Type.Key()] = v
meta[xhttp.AmzChecksumType] = cs[xhttp.AmzChecksumType]
meta[xhttp.AmzChecksumAlgo] = cksum.Type.String()
cktype := hash.ChecksumStringToType(k)
if cktype.IsSet() {
meta[cktype.Key()] = v
meta[xhttp.AmzChecksumAlgo] = cktype.String()
}
}
return meta, isMP

View File

@ -18,7 +18,6 @@
package cmd
import (
"context"
"fmt"
"net/http"
"testing"
@ -86,7 +85,7 @@ var replicationConfigTests = []struct {
}
func TestReplicationResync(t *testing.T) {
ctx := context.Background()
ctx := t.Context()
for i, test := range replicationConfigTests {
if sync := test.rcfg.Resync(ctx, test.info, test.dsc, test.tgtStatuses); sync.mustResync() != test.expectedSync {
t.Errorf("Test%d (%s): Resync got %t , want %t", i+1, test.name, sync.mustResync(), test.expectedSync)

View File

@ -19,6 +19,7 @@ package cmd
import (
"fmt"
"maps"
"math"
"sync/atomic"
"time"
@ -37,7 +38,7 @@ type ReplicationLatency struct {
// Merge two replication latency into a new one
func (rl ReplicationLatency) merge(other ReplicationLatency) (newReplLatency ReplicationLatency) {
newReplLatency.UploadHistogram = rl.UploadHistogram.Merge(other.UploadHistogram)
return
return newReplLatency
}
// Get upload latency of each object size range
@ -48,7 +49,7 @@ func (rl ReplicationLatency) getUploadLatency() (ret map[string]uint64) {
// Convert nanoseconds to milliseconds
ret[sizeTagToString(k)] = uint64(v.avg() / time.Millisecond)
}
return
return ret
}
// Update replication upload latency with a new value
@ -63,7 +64,7 @@ type ReplicationLastMinute struct {
func (rl ReplicationLastMinute) merge(other ReplicationLastMinute) (nl ReplicationLastMinute) {
nl = ReplicationLastMinute{rl.LastMinute.merge(other.LastMinute)}
return
return nl
}
func (rl *ReplicationLastMinute) addsize(n int64) {
@ -221,9 +222,7 @@ func (brs BucketReplicationStats) Clone() (c BucketReplicationStats) {
}
if s.Failed.ErrCounts == nil {
s.Failed.ErrCounts = make(map[string]int)
for k, v := range st.Failed.ErrCounts {
s.Failed.ErrCounts[k] = v
}
maps.Copy(s.Failed.ErrCounts, st.Failed.ErrCounts)
}
c.Stats[arn] = &s
}

Some files were not shown because too many files have changed in this diff Show More