`go run golang.org/x/tools/gopls/internal/analysis/modernize/cmd/modernize@latest -fix -test ./...` was executed.
`go generate ./...` was run afterwards to keep generated code up to date.
RoleARN is a required parameter in AssumeRoleWithWebIdentity,
according to the standard AWS implementation, and the official
AWS SDKs and CLI will not allow you to assume a role from a JWT
without also specifying a RoleARN. This meant that it was not
possible to use the official SDKs for claim-based OIDC with MinIO
(minio/minio#21421), since MinIO required you to _omit_ the RoleARN in this case.
minio/minio#21468 attempted to fix this by disabling the validation
of the RoleARN when a claim-based provider was configured, but this had
the side effect of making it impossible to have a mixture of claim-based
and role-based OIDC providers configured at the same time: every
authentication would be treated as claim-based, ignoring the RoleARN entirely.
This is an alternative fix (sketched after the list below), whereby:
- _if_ the `RoleARN` is one that MinIO knows about, then use the associated role policy
- if the `RoleARN` is not recognised, but there is a claim-based provider configured, then ignore the role ARN and attempt authentication with the claim-based provider
- if the `RoleARN` is not recognised, and there is _no_ claim-based provider, then return an error.
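A minimal Go sketch of that decision order, using hypothetical names (`resolveOIDCAuth`, `rolePolicies`, `claimProvider`) rather than MinIO's actual internal API:
```
package sts

import "errors"

// Policy and ClaimProvider are placeholders for this sketch.
type Policy struct{ Name string }
type ClaimProvider struct{}

var errUnknownRoleARN = errors.New("RoleARN not recognised and no claim-based provider configured")

// resolveOIDCAuth picks how to authenticate an AssumeRoleWithWebIdentity request.
func resolveOIDCAuth(roleARN string, rolePolicies map[string]Policy, claimProvider *ClaimProvider) (Policy, *ClaimProvider, error) {
	// 1. A RoleARN that MinIO knows about wins: use the associated role policy.
	if p, ok := rolePolicies[roleARN]; ok {
		return p, nil, nil
	}
	// 2. Unknown RoleARN but a claim-based provider is configured: ignore the
	//    ARN and attempt claim-based authentication.
	if claimProvider != nil {
		return Policy{}, claimProvider, nil
	}
	// 3. Unknown RoleARN and no claim-based provider: return an error.
	return Policy{}, nil, errUnknownRoleARN
}
```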
Fixes incorrect application of ILM expiry rules on versioned objects
when replication is enabled.
Regression from https://github.com/minio/minio/pull/20441, which sends
DeleteObject calls to all pools. This is a problem for the replication + ILM
scenario, since a replicated version can end up in a pool by itself instead of
the pool where the remaining object versions reside.
For example, if the delete marker is set on pool1 and object versions exist on
pool2, the second rule below will cause the delete marker to be expired by the
ILM policy, since it is the only version present in pool1:
```
{
  "Rules": [
    {
      "ID": "cs6il1ri2hp48g71mdjg",
      "NoncurrentVersionExpiration": {
        "NoncurrentDays": 14
      },
      "Status": "Enabled"
    },
    {
      "Expiration": {
        "ExpiredObjectDeleteMarker": true
      },
      "ID": "cs6inj3i2hp4po19cil0",
      "Status": "Enabled"
    }
  ]
}
```
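A minimal Go sketch of the check that should hold, with hypothetical types and helpers rather than MinIO's actual lifecycle code; the point is that the delete-marker decision has to be made against the object's versions merged from all pools, not against a single pool's view:
```
package lifecycle

// version stands in for a single object version's metadata.
type version struct {
	DeleteMarker bool
}

// shouldExpireDeleteMarker reports whether ExpiredObjectDeleteMarker may
// apply. The regression came from deciding this per pool: a replicated
// delete marker sitting alone in one pool looked like the only version.
// The decision has to be made against the versions merged from all pools.
func shouldExpireDeleteMarker(allPoolVersions []version) bool {
	if len(allPoolVersions) != 1 {
		return false // other versions still exist in some pool
	}
	return allPoolVersions[0].DeleteMarker
}
```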
An S3 listing call is usually sent with a 'max-keys' parameter, and this
'max-keys' is also passed to the WalkDir() call. However, when ILM is
enabled on a bucket and some objects are skipped, the listing can
return IsTruncated set to false even if there are more entries on
the drives.
The reason is that the drives stop feeding the listing code because they honor
the max-keys parameter, while the listing code thinks the listing is finished
because it is not being fed anymore.
Ask the drives not to stop listing, and rely on context cancellation to stop
the listing on the drives as fast as possible.
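A Go sketch of that approach, with hypothetical types and a hypothetical `collectListing` helper: the per-drive walks get no max-keys limit and stop only via context cancellation, while the merge layer enforces max-keys and reports IsTruncated itself:
```
package listing

import "context"

type ObjectInfo struct{ Name string }

// collectListing drains merged per-drive results and enforces max-keys
// itself. The per-drive walks no longer receive a max-keys limit; they keep
// feeding entries until stopWalks cancels their context, which is the only
// way they are asked to stop.
func collectListing(merged <-chan ObjectInfo, maxKeys int, keep func(ObjectInfo) bool, stopWalks context.CancelFunc) (results []ObjectInfo, truncated bool) {
	defer stopWalks()
	for entry := range merged {
		if !keep(entry) { // e.g. entry skipped because ILM already expired it
			continue
		}
		if len(results) == maxKeys {
			// More surviving entries remain on the drives: report
			// IsTruncated=true and cancel the walks right away.
			return results, true
		}
		results = append(results, entry)
	}
	return results, false
}
```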
This commit removes FIPS 140-2 related code for the following
reasons:
- FIPS 140-2 is a compliance requirement, not a security requirement. Being
FIPS 140-2 compliant has no security implication on its own.
From a technical perspective, a FIPS 140-2 compliant implementation
is not necessarily secure and a non-FIPS 140-2 compliant implementation
is not necessarily insecure. It depends on the concrete design and
crypto primitives/constructions used.
- The boringcrypto branch used to achieve FIPS 140-2 compliance was never
officially supported by the Go team and is now in maintenance mode.
It is being replaced by a built-in FIPS 140-3 module and will be removed
eventually. Ref: https://github.com/golang/go/issues/69536
- FIPS 140-2 modules will no longer be re-certified after September 2026.
Ref: https://csrc.nist.gov/projects/cryptographic-module-validation-program
Signed-off-by: Andreas Auernhammer <github@aead.dev>
Fixes #21249
Example params: `-ftp=force-tls=true -ftp="tls-private-key=ftp/private.key" -ftp="tls-public-cert=ftp/public.crt"`
If MinIO is already set up for TLS, those certs will be used.
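A minimal Go sketch of the assumed certificate selection, with a hypothetical `ftpCertificate` helper: explicitly supplied key/cert params win, otherwise the certificate MinIO already serves TLS with is reused for FTPS:
```
package ftp

import "crypto/tls"

// ftpCertificate prefers an explicitly supplied key pair (the
// tls-private-key / tls-public-cert params) and otherwise falls back to the
// certificate MinIO is already serving TLS with, if any.
func ftpCertificate(privateKey, publicCert string, serverCert *tls.Certificate) (*tls.Certificate, error) {
	if privateKey != "" && publicCert != "" {
		cert, err := tls.LoadX509KeyPair(publicCert, privateKey)
		if err != nil {
			return nil, err
		}
		return &cert, nil
	}
	// No explicit FTP certs: reuse the server's TLS certs (may be nil if
	// MinIO is not set up for TLS at all).
	return serverCert, nil
}
```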
We had a chicken-and-egg problem with this feature: even
when used with KES, the credentials generation would
not happen in the correct sequence, causing setup/deployment
disruptions.
This PR streamlines all of this properly to ensure that
this functionality works as advertised.
In a specific corner case, when you only have dangling
objects with a single shard left over, we end up in a situation
where healing is unable to list this dangling object to
purge it, because the listing logic expects only
`len(disks)/2+1` drives; with that choice, the drive where this object
is present may not be part of the expected disk list, causing
the object to never be listed and to be ignored in perpetuity.
Change the logic such that HealObjects() is able
to listAndHeal() per set properly on all of its drives, since
there is really no other way to do this cleanly. However,
instead of listing on all erasure sets simultaneously, we
list on 3 at a time, so in a large enough cluster this is
fairly staggered.
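A Go sketch of that staggered scheme, with hypothetical `erasureSet` and `listAndHeal` placeholders; a small semaphore caps concurrent per-set listing at three sets:
```
package heal

import (
	"context"
	"sync"
)

type erasureSet struct{ id int }

// listAndHeal walks every drive in the set, so even a dangling object left
// with a single shard is listed and can be purged.
func listAndHeal(ctx context.Context, set erasureSet) error {
	// ... list all drives in the set and heal what is found ...
	return nil
}

// healObjects runs listAndHeal on every erasure set, but on at most three
// sets at a time, so a large cluster is healed in a staggered fashion.
func healObjects(ctx context.Context, sets []erasureSet) {
	const maxConcurrentSets = 3
	sem := make(chan struct{}, maxConcurrentSets)
	var wg sync.WaitGroup
	for _, set := range sets {
		wg.Add(1)
		sem <- struct{}{} // blocks while three sets are already listing
		go func(set erasureSet) {
			defer wg.Done()
			defer func() { <-sem }()
			_ = listAndHeal(ctx, set)
		}(set)
	}
	wg.Wait()
}
```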