Commit Graph

22 Commits

Author SHA1 Message Date
Poorna
837a2a3d4b
sr: use service account cred for claims check (#19209)
PR #19111 overlaid the service account secret with the site replicator secret
during the token claims check.

Fixes: #19206
2024-03-06 16:19:24 -08:00
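A minimal sketch of the intent behind this fix; the credential type, lookup map, and secretForClaims helper below are illustrative assumptions, not MinIO's actual internals: when a token's access key belongs to a service account, its claims should be validated with that account's own secret rather than the site replicator's secret.

```go
package main

import "fmt"

type credential struct {
	AccessKey string
	SecretKey string
	IsSvcAcct bool
}

// creds is a stand-in for the server's credential store.
var creds = map[string]credential{
	"svc-user-1": {AccessKey: "svc-user-1", SecretKey: "svc-secret", IsSvcAcct: true},
}

var siteReplicatorSecret = "sr-secret"

// secretForClaims picks the secret used to verify the token's claims.
func secretForClaims(accessKey string) string {
	if c, ok := creds[accessKey]; ok && c.IsSvcAcct {
		// Service account tokens are signed with the account's own secret,
		// so claims must be checked against it, not the replicator secret.
		return c.SecretKey
	}
	return siteReplicatorSecret
}

func main() {
	fmt.Println(secretForClaims("svc-user-1")) // svc-secret, not sr-secret
}
```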
Poorna
912a0031b7
fix sr tests to capture all server logs (#19051) 2024-02-13 20:51:23 -08:00
Harshavardhana
0c068b15c7
add missing handler for reloading site replication config on peers (#19042) 2024-02-13 06:55:54 -08:00
Poorna
3781a0f9ad
replication: Pass metadata timestamps in CopyObject call (#18647)
Regression from #18285. CopyObject options were inheriting the source MTime
for metadata timestamps when unspecified; removing this prevented metadata
updates from being applied on the target.
2023-12-13 15:28:55 -08:00
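A minimal sketch of what this implies, under assumed field names (ObjectOptions, TaggingTimestamp, and copyOptsFromSource are illustrative, not the real internal API): replication passes explicit metadata timestamps in the CopyObject options so the target can judge freshness, instead of relying on a fallback to the source MTime.

```go
package main

import (
	"fmt"
	"time"
)

// ObjectOptions is a simplified stand-in for the internal copy options.
type ObjectOptions struct {
	MTime            time.Time
	TaggingTimestamp time.Time // when tags were last modified
}

// copyOptsFromSource carries the source's tagging timestamp explicitly; if it
// were left zero and MTime used as a fallback, later metadata-only updates
// would be hidden from the target.
func copyOptsFromSource(srcMTime, tagTS time.Time) ObjectOptions {
	opts := ObjectOptions{MTime: srcMTime}
	if !tagTS.IsZero() {
		opts.TaggingTimestamp = tagTS
	}
	return opts
}

func main() {
	mtime := time.Now().Add(-time.Hour)
	opts := copyOptsFromSource(mtime, time.Now())
	fmt.Println(opts.TaggingTimestamp.After(opts.MTime)) // true
}
```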
Poorna
e79b289325
fix datadir missing check on HeadObject (#18646)
Versions pending purge in replication were seeing an errFileCorrupt error
that prevented permanent deletion after replication.

Regression from PR #18477
2023-12-13 14:54:01 -08:00
Poorna
78f1f69d57
fix site replication resync status (#18245)
Persists status changes to disk upon completion.

Adds new tests to cover this functionality.
2023-10-13 22:17:22 -07:00
Harshavardhana
4a425cbac1
cleanup scripts and apply shfmt (#17284) 2023-05-25 22:07:25 -07:00
Poorna
e07c2ab868
Use hash.NewLimitReader for internal multipart calls (#17191) 2023-05-12 11:19:08 -07:00
Harshavardhana
fb1492f531
check for quorum errors for DeleteBucket() (#16859) 2023-03-20 23:38:06 -07:00
Harshavardhana
c97f50e274
replication syncs happen in 60sec intervals (#16571) 2023-02-08 08:23:26 -08:00
Harshavardhana
a9f5b58a01
fix: update the JSON keys for latest 'mc' release (#16171) 2022-12-05 10:28:22 -08:00
Anis Elleuch
b737c83a66
Ensure that only one node performs site replication healing (#15584)
When a node finds a change in the other replication cluster and applies it
to itself, it already notifies the other peers. There is no need for all
nodes in a given cluster to perform site replication healing; a single node
is sufficient.
2022-08-24 13:46:09 -07:00
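A minimal sketch of how a single healer can be chosen; the deterministic lowest-endpoint rule below is an assumption used for illustration, not necessarily the selection MinIO performs: every node can detect divergence, but only one node applies the healing and then notifies its peers.

```go
package main

import (
	"fmt"
	"sort"
)

// shouldHeal reports whether this node is the designated healer: the node
// whose endpoint sorts first among all cluster members performs the healing.
func shouldHeal(self string, peers []string) bool {
	all := append([]string{self}, peers...)
	sort.Strings(all)
	return all[0] == self
}

func main() {
	fmt.Println(shouldHeal("node-1:9000", []string{"node-2:9000", "node-3:9000"})) // true
	fmt.Println(shouldHeal("node-3:9000", []string{"node-1:9000", "node-2:9000"})) // false
}
```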
Harshavardhana
5e763b71dc
use logger.LogOnce to reduce printing disconnection logs (#15408)
fixes #15334

- re-use net/url parsed value for http.Request{}
- remove gosimple, structcheck and unused due to https://github.com/golangci/golangci-lint/issues/2649
- unwrapErrs up to leafErr to ensure that we store exactly the correct errors
2022-07-27 09:44:59 -07:00
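A minimal sketch of the log-deduplication idea behind logger.LogOnce; the onceLogger type below is illustrative, not MinIO's logger package: a repeated disconnection error for the same key is printed only once until the message changes.

```go
package main

import (
	"log"
	"sync"
)

type onceLogger struct {
	mu   sync.Mutex
	seen map[string]string // key -> last message already logged
}

// LogOnceIf prints a message only if it differs from the last one seen for key.
func (l *onceLogger) LogOnceIf(key, msg string) {
	l.mu.Lock()
	defer l.mu.Unlock()
	if l.seen[key] == msg {
		return // identical message: suppress the repeat
	}
	l.seen[key] = msg
	log.Printf("%s: %s", key, msg)
}

func main() {
	l := &onceLogger{seen: make(map[string]string)}
	for i := 0; i < 3; i++ {
		l.LogOnceIf("peer-1", "connection refused") // printed once
	}
	l.LogOnceIf("peer-1", "connection reset by peer") // new message, printed again
}
```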
Poorna
426c902b87
site replication: fix healing of bucket deletes. (#15377)
This PR changes the handling of bucket deletes for site-replicated setups
to hold on to the deleted bucket state until the delete syncs to all the
clusters participating in site replication.
2022-07-25 17:51:32 -07:00
Poorna
7e32a17742
fix: site replication healing of missing buckets (#15298)
fixes a regression from #15186

- Adding tests to cover healing of buckets.
- Also dereference quota in SiteReplicationStatus only when non-nil
2022-07-14 14:27:47 -07:00
Poorna
0ea5c9d8e8
site healing: Skip stale iam asset updates from peer. (#15203)
Allow healing to apply an IAM change only when the peer
provided the most recent update.
2022-07-01 13:19:13 -07:00
Poorna
7cc9286e0f
site healing: Skip stale bucket metadata updates from peer (#15186)
Allow healing to apply a bucket metadata change only when the peer
provided the most recent update.
2022-06-28 18:09:20 -07:00
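This commit and the entry above it apply the same rule to bucket metadata and IAM assets respectively. A minimal sketch of that rule follows; the metaUpdate type and applyIfNewer helper are assumptions for illustration: during healing, a peer's change is applied only when its timestamp is newer than the locally stored one.

```go
package main

import (
	"fmt"
	"time"
)

type metaUpdate struct {
	UpdatedAt time.Time
	Payload   string
}

// applyIfNewer replaces the local copy only when the peer's update is
// strictly newer; stale updates from a lagging peer are skipped.
func applyIfNewer(local *metaUpdate, peer metaUpdate) bool {
	if !peer.UpdatedAt.After(local.UpdatedAt) {
		return false
	}
	*local = peer
	return true
}

func main() {
	local := &metaUpdate{UpdatedAt: time.Now(), Payload: "current policy"}
	stale := metaUpdate{UpdatedAt: local.UpdatedAt.Add(-time.Hour), Payload: "old policy"}
	fmt.Println(applyIfNewer(local, stale)) // false: the stale change is ignored
}
```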
Poorna
f8d6eaaa96
fix: regression from range GET proxy on replicated buckets #14345 (#14532)
Fixes: #14531
2022-03-11 15:56:49 -08:00
Anis Elleuch
1f92fc3fc0
Always check for root disks unless MINIO_CI_CD is set (#14232)
The current code considers a pool with all root disks to be part of a
testing environment even if there are other pools with mounted disks. This
results in illegitimate writes to root disks.

Fix this by simplifying the logic: require MINIO_CI_CD to be set in order
to skip the root disk check.
2022-02-13 15:42:07 -08:00
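A minimal sketch of the simplified rule; isRootDisk below is a placeholder for the real device comparison, not MinIO's actual check: a drive on the root disk is rejected unless the MINIO_CI_CD environment variable is set.

```go
package main

import (
	"fmt"
	"os"
)

// isRootDisk stands in for the real check that compares the drive's device
// with the device backing "/"; the string comparison is only for illustration.
func isRootDisk(path string) bool {
	return path == "/"
}

// validateDrive refuses root disks unless MINIO_CI_CD is explicitly set.
func validateDrive(path string) error {
	if os.Getenv("MINIO_CI_CD") != "" {
		return nil // CI/CD and test environments may use root disks
	}
	if isRootDisk(path) {
		return fmt.Errorf("drive %s is on the root disk; refusing to use it", path)
	}
	return nil
}

func main() {
	fmt.Println(validateDrive("/"))       // error: root disk rejected
	fmt.Println(validateDrive("/mnt/d1")) // <nil>
}
```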
Harshavardhana
9d588319dd
support site replication to replicate IAM users,groups (#14128)
- Site replication was not replicating users and
  groups when an empty site was added.

- Add site replication for groups and users when they
  are disabled and enabled.

- Add support for replicating bucket quota config.
2022-01-19 20:02:24 -08:00
Poorna
54a98773f8
fix: replication of tag removal (#14056)
Currently, tag removal leaves the replication state as `PENDING`
because the `HEAD` API returns only a tag count, not the
actual tags, and so the removal is treated as a no-op.
2022-01-10 19:06:10 -08:00
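A minimal sketch of why the removal was missed; the helper below is illustrative and not the actual fix: since HEAD exposes only a tag count, a change in that count, including a drop to zero, still has to be treated as metadata that needs replicating rather than a no-op.

```go
package main

import "fmt"

// tagsNeedReplication reports whether tag metadata diverges between source
// and target, using only the counts a HEAD response exposes. A source count
// of zero against a non-zero target count means tags were removed and the
// removal must still be replicated.
func tagsNeedReplication(sourceTagCount, targetTagCount int) bool {
	return sourceTagCount != targetTagCount
}

func main() {
	fmt.Println(tagsNeedReplication(0, 2)) // true: tag removal must replicate
	fmt.Println(tagsNeedReplication(2, 2)) // false: counts match
}
```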
Harshavardhana
b7c5e45fff
heal: isObjectDangling should return false when it cannot decide (#14053)
In a multi-pool setup when disks are coming up, or in a single-pool
setup with, say, hundreds of erasure sets on a slow network, it's
possible that healing attempted on the `.minio.sys/config`
folder unexpectedly deletes some policy files as dangling, due to a
mistake in determining when `isObjectDangling`
is considered to be 'true'.

This issue was introduced in commit 30135eed86,
where we assumed that validMeta with empty ErasureInfo is considered
to be fully dangling. This implementation issue gets exposed when
the server is starting up.

This is most easily seen with multiple-pool setups because of the
disconnected fashion in which pools come up. The decision to purge the
object as dangling is taken incorrectly before the correct state is
reached on each pool: when the corresponding drive returns, say,
'errDiskNotFound', a 'delete' is triggered; only afterwards does the
'drive' come online, because drives can come online lazily as part of
the startup sequence.

This kind of situation exists because we allow the server to start
with only (totalDisks/2) drives online when it is being restarted.

The implementation made an incorrect assumption here, leading to
policies getting deleted.

Added tests to capture the implementation requirements.
2022-01-07 19:11:54 -08:00
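A minimal sketch of the conservative rule described above; the driveState type and quorum arithmetic are simplified assumptions, not the actual xl-storage code: when too many drives have not yet responded, isObjectDangling should return false so no delete is triggered.

```go
package main

import "fmt"

type driveState int

const (
	driveOK      driveState = iota // drive returned valid metadata
	driveMissing                   // drive responded: object not found
	driveOffline                   // drive did not respond (e.g. errDiskNotFound)
)

// isObjectDangling reports an object as dangling only when enough drives have
// actually answered to make the call; with too many drives still offline
// (for example during startup), it returns false and nothing is deleted.
func isObjectDangling(states []driveState, readQuorum int) bool {
	var found, missing, offline int
	for _, s := range states {
		switch s {
		case driveOK:
			found++
		case driveMissing:
			missing++
		case driveOffline:
			offline++
		}
	}
	if offline > len(states)-readQuorum {
		return false // cannot decide yet: too many drives are unreachable
	}
	return found < readQuorum && missing >= readQuorum
}

func main() {
	// Three of four drives are still offline during startup: undecidable.
	startup := []driveState{driveOK, driveOffline, driveOffline, driveOffline}
	fmt.Println(isObjectDangling(startup, 2)) // false

	// All drives answered and none has the object's data: genuinely dangling.
	settled := []driveState{driveMissing, driveMissing, driveMissing, driveMissing}
	fmt.Println(isObjectDangling(settled, 2)) // true
}
```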