Mirror of https://github.com/minio/minio.git (synced 2025-10-29 07:45:02 -04:00)

Compare commits: RELEASE.20... to master (21 commits)

| Author | SHA1 | Date |
|---|---|---|
|  | 3a0cc6c86e |  |
|  | 10b0a234d2 |  |
|  | 18f97e70b1 |  |
|  | 52eee5a2f1 |  |
|  | c6d3aac5c4 |  |
|  | fa18589d1c |  |
|  | 05e569960a |  |
|  | 9e49d5e7a6 |  |
|  | c1a49490c7 |  |
|  | 334c313da4 |  |
|  | 1b8ac0af9f |  |
|  | ba3c0fd1c7 |  |
|  | d51a4a4ff6 |  |
|  | 62383dfbfe |  |
|  | bde0d5a291 |  |
|  | 534f4a9fb1 |  |
|  | b8631cf531 |  |
|  | 456d9462e5 |  |
|  | 756f3c8142 |  |
|  | 7a80ec1cce |  |
|  | ae71d76901 |  |

.github/ISSUE_TEMPLATE/bug_report.md (vendored): 11 changes

@@ -1,14 +1,19 @@
---
name: Bug report
about: Create a report to help us improve
about: Report a bug in MinIO (community edition is source-only)
title: ''
labels: community, triage
assignees: ''

---

## NOTE
If this case is urgent, please subscribe to [Subnet](https://min.io/pricing) so that our 24/7 support team may help you faster.
## IMPORTANT NOTES

**Community Edition**: MinIO community edition is now source-only. Install via `go install github.com/minio/minio@latest`

**Feature Requests**: We are no longer accepting feature requests for the community edition. For feature requests and enterprise support, please subscribe to [MinIO Enterprise Support](https://min.io/pricing).

**Urgent Issues**: If this case is urgent or affects production, please subscribe to [SUBNET](https://min.io/pricing) for 24/7 enterprise support.

<!--- Provide a general summary of the issue in the Title above -->

.github/ISSUE_TEMPLATE/config.yml (vendored): 6 changes

@@ -2,7 +2,7 @@ blank_issues_enabled: false
contact_links:
  - name: MinIO Community Support
    url: https://slack.min.io
    about: Join here for Community Support
  - name: MinIO SUBNET Support
    about: Community support via Slack - for questions and discussions
  - name: MinIO Enterprise Support (SUBNET)
    url: https://min.io/pricing
    about: Join here for Enterprise Support
    about: Enterprise support with SLA - for production deployments and feature requests

.github/ISSUE_TEMPLATE/feature_request.md (vendored): 20 changes (file removed)

@@ -1,20 +0,0 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: community, triage
assignees: ''

---

**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

**Describe the solution you'd like**
A clear and concise description of what you want to happen.

**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.

**Additional context**
Add any other context or screenshots about the feature request here.

Dockerfile

@@ -1,8 +1,14 @@
FROM minio/minio:latest

ARG TARGETARCH
ARG RELEASE

RUN chmod -R 777 /usr/bin

COPY ./minio /usr/bin/minio
COPY ./minio-${TARGETARCH}.${RELEASE} /usr/bin/minio
COPY ./minio-${TARGETARCH}.${RELEASE}.minisig /usr/bin/minio.minisig
COPY ./minio-${TARGETARCH}.${RELEASE}.sha256sum /usr/bin/minio.sha256sum

COPY dockerscripts/docker-entrypoint.sh /usr/bin/docker-entrypoint.sh

ENTRYPOINT ["/usr/bin/docker-entrypoint.sh"]
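
The revised Dockerfile copies per-architecture release binaries plus their minisign and sha256 files from the build context. A minimal build sketch, assuming files named `minio-<arch>.<release>` sit next to the Dockerfile; the image tag and RELEASE value below are illustrative:

```sh
# Illustrative values only; TARGETARCH is set automatically by buildx per platform.
RELEASE=RELEASE.2025-10-15T00-00-00Z
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --build-arg RELEASE="${RELEASE}" \
  -t myminio:"${RELEASE}" .
```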

README.md: 278 changes

@@ -4,268 +4,154 @@
[](https://min.io)

MinIO is a high-performance, S3-compatible object storage solution released under the GNU AGPL v3.0 license. Designed for speed and scalability, it powers AI/ML, analytics, and data-intensive workloads with industry-leading performance.
MinIO is a high-performance, S3-compatible object storage solution released under the GNU AGPL v3.0 license.
Designed for speed and scalability, it powers AI/ML, analytics, and data-intensive workloads with industry-leading performance.

🔹 S3 API Compatible – Seamless integration with existing S3 tools
🔹 Built for AI & Analytics – Optimized for large-scale data pipelines
🔹 High Performance – Ideal for demanding storage workloads.
- S3 API Compatible – Seamless integration with existing S3 tools
- Built for AI & Analytics – Optimized for large-scale data pipelines
- High Performance – Ideal for demanding storage workloads.

AI storage documentation (https://min.io/solutions/object-storage-for-ai).
This README provides instructions for building MinIO from source and deploying onto bare metal hardware.
Use the [MinIO Documentation](https://github.com/minio/docs) project to build and host a local copy of the documentation.

This README provides quickstart instructions on running MinIO on bare metal hardware, including container-based installations. For Kubernetes environments, use the [MinIO Kubernetes Operator](https://github.com/minio/operator/blob/master/README.md).
## MinIO is Open Source Software

## Container Installation
We designed MinIO as Open Source software for the Open Source software community. We encourage the community to remix, redesign, and reshare MinIO under the terms of the AGPLv3 license.

Use the following commands to run a standalone MinIO server as a container.
All usage of MinIO in your application stack requires validation against AGPLv3 obligations, which include but are not limited to the release of modified code to the community from which you have benefited. Any commercial/proprietary usage of the AGPLv3 software, including repackaging or reselling services/features, is done at your own risk.

Standalone MinIO servers are best suited for early development and evaluation. Certain features such as versioning, object locking, and bucket replication require deploying MinIO in distributed mode with Erasure Coding. For extended development and production, deploy MinIO with Erasure Coding enabled - specifically, with a *minimum* of 4 drives per MinIO server. See [MinIO Erasure Code Overview](https://docs.min.io/community/minio-object-store/operations/concepts/erasure-coding.html) for more complete documentation.
The AGPLv3 provides no obligation by any party to support, maintain, or warranty the original or any modified work.
All support is provided on a best-effort basis through GitHub and our [Slack](https://slack.min.io) channel, and any member of the community is welcome to contribute and assist others in their usage of the software.

### Stable
MinIO [AIStor](https://www.min.io/product/aistor) includes enterprise-grade support and licensing for workloads which require commercial or proprietary usage and production-level SLA/SLO-backed support. For more information, [reach out for a quote](https://min.io/pricing).

Run the following command to run the latest stable image of MinIO as a container using an ephemeral data volume:
## Source-Only Distribution

```sh
podman run -p 9000:9000 -p 9001:9001 \
  quay.io/minio/minio server /data --console-address ":9001"
```
**Important:** The MinIO community edition is now distributed as source code only. We will no longer provide pre-compiled binary releases for the community version.

The MinIO deployment starts using default root credentials `minioadmin:minioadmin`. You can test the deployment using the MinIO Console, an embedded object browser built into MinIO Server. Point a web browser running on the host machine to <http://127.0.0.1:9000> and log in with the root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.
### Installing Latest MinIO Community Edition

You can also connect using any S3-compatible tool, such as the MinIO Client `mc` commandline tool. See [Test using MinIO Client `mc`](#test-using-minio-client-mc) for more information on using the `mc` commandline tool. For application developers, see <https://docs.min.io/community/minio-object-store/developers/minio-drivers.html> to view MinIO SDKs for supported languages.
To use MinIO community edition, you have two options:

> [!NOTE]
> To deploy MinIO with persistent storage, you must map local persistent directories from the host OS to the container using the `podman -v` option.
> For example, `-v /mnt/data:/data` maps the host OS drive at `/mnt/data` to `/data` on the container.
1. **Install from source** using `go install github.com/minio/minio@latest` (recommended)
2. **Build a Docker image** from the provided Dockerfile

## macOS
See the sections below for detailed instructions on each method.

Use the following commands to run a standalone MinIO server on macOS.
### Legacy Binary Releases

Standalone MinIO servers are best suited for early development and evaluation. Certain features such as versioning, object locking, and bucket replication require deploying MinIO in distributed mode with Erasure Coding. For extended development and production, deploy MinIO with Erasure Coding enabled - specifically, with a *minimum* of 4 drives per MinIO server. See [MinIO Erasure Code Overview](https://docs.min.io/community/minio-object-store/operations/concepts/erasure-coding.html) for more complete documentation.
Historical pre-compiled binary releases remain available for reference but are no longer maintained:
- GitHub Releases: https://github.com/minio/minio/releases
- Direct downloads: https://dl.min.io/server/minio/release/

### Homebrew (recommended)

Run the following command to install the latest stable MinIO package using [Homebrew](https://brew.sh/). Replace ``/data`` with the path to the drive or directory in which you want MinIO to store data.

```sh
brew install minio/stable/minio
minio server /data
```

> [!NOTE]
> If you previously installed minio using `brew install minio` then it is recommended that you reinstall minio from the `minio/stable/minio` official repo instead.

```sh
brew uninstall minio
brew install minio/stable/minio
```

The MinIO deployment starts using default root credentials `minioadmin:minioadmin`. You can test the deployment using the MinIO Console, an embedded web-based object browser built into MinIO Server. Point a web browser running on the host machine to <http://127.0.0.1:9000> and log in with the root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.

You can also connect using any S3-compatible tool, such as the MinIO Client `mc` commandline tool. See [Test using MinIO Client `mc`](#test-using-minio-client-mc) for more information on using the `mc` commandline tool. For application developers, see <https://docs.min.io/community/minio-object-store/developers/minio-drivers.html> to view MinIO SDKs for supported languages.

### Binary Download

Use the following command to download and run a standalone MinIO server on macOS. Replace ``/data`` with the path to the drive or directory in which you want MinIO to store data.

```sh
wget https://dl.min.io/server/minio/release/darwin-amd64/minio
chmod +x minio
./minio server /data
```

The MinIO deployment starts using default root credentials `minioadmin:minioadmin`. You can test the deployment using the MinIO Console, an embedded web-based object browser built into MinIO Server. Point a web browser running on the host machine to <http://127.0.0.1:9000> and log in with the root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.

You can also connect using any S3-compatible tool, such as the MinIO Client `mc` commandline tool. See [Test using MinIO Client `mc`](#test-using-minio-client-mc) for more information on using the `mc` commandline tool. For application developers, see <https://docs.min.io/community/minio-object-store/developers/minio-drivers.html> to view MinIO SDKs for supported languages.

## GNU/Linux

Use the following command to run a standalone MinIO server on Linux hosts running 64-bit Intel/AMD architectures. Replace ``/data`` with the path to the drive or directory in which you want MinIO to store data.

```sh
wget https://dl.min.io/server/minio/release/linux-amd64/minio
chmod +x minio
./minio server /data
```

The following table lists supported architectures. Replace the `wget` URL with the architecture for your Linux host.

| Architecture | URL |
| -------- | ------ |
| 64-bit Intel/AMD | <https://dl.min.io/server/minio/release/linux-amd64/minio> |
| 64-bit ARM | <https://dl.min.io/server/minio/release/linux-arm64/minio> |
| 64-bit PowerPC LE (ppc64le) | <https://dl.min.io/server/minio/release/linux-ppc64le/minio> |

The MinIO deployment starts using default root credentials `minioadmin:minioadmin`. You can test the deployment using the MinIO Console, an embedded web-based object browser built into MinIO Server. Point a web browser running on the host machine to <http://127.0.0.1:9000> and log in with the root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.

You can also connect using any S3-compatible tool, such as the MinIO Client `mc` commandline tool. See [Test using MinIO Client `mc`](#test-using-minio-client-mc) for more information on using the `mc` commandline tool. For application developers, see <https://docs.min.io/community/minio-object-store/developers/minio-drivers.html> to view MinIO SDKs for supported languages.

> [!NOTE]
> Standalone MinIO servers are best suited for early development and evaluation. Certain features such as versioning, object locking, and bucket replication require deploying MinIO in distributed mode with Erasure Coding. For extended development and production, deploy MinIO with Erasure Coding enabled - specifically, with a *minimum* of 4 drives per MinIO server. See [MinIO Erasure Code Overview](https://docs.min.io/community/minio-object-store/operations/concepts/erasure-coding.html) for more complete documentation.

## Microsoft Windows

To run MinIO on 64-bit Windows hosts, download the MinIO executable from the following URL:

```sh
https://dl.min.io/server/minio/release/windows-amd64/minio.exe
```

Use the following command to run a standalone MinIO server on the Windows host. Replace ``D:\`` with the path to the drive or directory in which you want MinIO to store data. You must change the terminal or powershell directory to the location of the ``minio.exe`` executable, *or* add the path to that directory to the system ``$PATH``:

```sh
minio.exe server D:\
```

The MinIO deployment starts using default root credentials `minioadmin:minioadmin`. You can test the deployment using the MinIO Console, an embedded web-based object browser built into MinIO Server. Point a web browser running on the host machine to <http://127.0.0.1:9000> and log in with the root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.

You can also connect using any S3-compatible tool, such as the MinIO Client `mc` commandline tool. See [Test using MinIO Client `mc`](#test-using-minio-client-mc) for more information on using the `mc` commandline tool. For application developers, see <https://docs.min.io/community/minio-object-store/developers/minio-drivers.html> to view MinIO SDKs for supported languages.

> [!NOTE]
> Standalone MinIO servers are best suited for early development and evaluation. Certain features such as versioning, object locking, and bucket replication require deploying MinIO in distributed mode with Erasure Coding. For extended development and production, deploy MinIO with Erasure Coding enabled - specifically, with a *minimum* of 4 drives per MinIO server. See [MinIO Erasure Code Overview](https://docs.min.io/community/minio-object-store/operations/concepts/erasure-coding.html) for more complete documentation.
**These legacy binaries will not receive updates.** We strongly recommend using source builds for access to the latest features, bug fixes, and security updates.

## Install from Source

Use the following commands to compile and run a standalone MinIO server from source. Source installation is only intended for developers and advanced users. If you do not have a working Golang environment, please follow [How to install Golang](https://golang.org/doc/install). The minimum version required is [go1.24](https://golang.org/dl/#stable).
Use the following commands to compile and run a standalone MinIO server from source.
If you do not have a working Golang environment, please follow [How to install Golang](https://golang.org/doc/install). The minimum version required is [go1.24](https://golang.org/dl/#stable).

```sh
go install github.com/minio/minio@latest
```

The MinIO deployment starts using default root credentials `minioadmin:minioadmin`. You can test the deployment using the MinIO Console, an embedded web-based object browser built into MinIO Server. Point a web browser running on the host machine to <http://127.0.0.1:9000> and log in with the root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.
You can alternatively run `go build` and use the `GOOS` and `GOARCH` environment variables to control the OS and architecture target.
For example:

You can also connect using any S3-compatible tool, such as the MinIO Client `mc` commandline tool. See [Test using MinIO Client `mc`](#test-using-minio-client-mc) for more information on using the `mc` commandline tool. For application developers, see <https://docs.min.io/community/minio-object-store/developers/minio-drivers.html> to view MinIO SDKs for supported languages.
```sh
env GOOS=linux GOARCH=arm64 go build
```

Start MinIO by running `minio server PATH` where `PATH` is any empty folder on your local filesystem.

The MinIO deployment starts using default root credentials `minioadmin:minioadmin`.
You can test the deployment using the MinIO Console, an embedded web-based object browser built into MinIO Server.
Point a web browser running on the host machine to <http://127.0.0.1:9000> and log in with the root credentials.
You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.

You can also connect using any S3-compatible tool, such as the MinIO Client `mc` commandline tool:

```sh
mc alias set local http://localhost:9000 minioadmin minioadmin
mc admin info local
```

See [Test using MinIO Client `mc`](#test-using-minio-client-mc) for more information on using the `mc` commandline tool.
For application developers, see <https://docs.min.io/enterprise/aistor-object-store/developers/sdk/> to view MinIO SDKs for supported languages.

> [!NOTE]
> Standalone MinIO servers are best suited for early development and evaluation. Certain features such as versioning, object locking, and bucket replication require deploying MinIO in distributed mode with Erasure Coding. For extended development and production, deploy MinIO with Erasure Coding enabled - specifically, with a *minimum* of 4 drives per MinIO server. See [MinIO Erasure Code Overview](https://docs.min.io/community/minio-object-store/operations/concepts/erasure-coding.html) for more complete documentation.
> Production environments using compiled-from-source MinIO binaries do so at their own risk.
> The AGPLv3 license provides no warranties nor liabilities for any such usage.

MinIO strongly recommends *against* using compiled-from-source MinIO servers for production environments.
## Build Docker Image

## Deployment Recommendations
You can use the `docker build .` command to build a Docker image on your local host machine.
You must first [build MinIO](#install-from-source) and ensure the `minio` binary exists in the project root.

### Allow port access for Firewalls

By default MinIO uses port 9000 to listen for incoming connections. If your platform blocks the port by default, you may need to enable access to the port.

### ufw

For hosts with ufw enabled (Debian-based distros), you can use the `ufw` command to allow traffic to specific ports. Use the below command to allow access to port 9000:
The following command builds the Docker image using the default `Dockerfile` in the root project directory with the repository and image tag `myminio:minio`:

```sh
ufw allow 9000
docker build -t myminio:minio .
```

The below command enables all incoming traffic to ports ranging from 9000 to 9010:
Use `docker image ls` to confirm the image exists in your local repository.
You can run the server using standard Docker invocation:

```sh
ufw allow 9000:9010/tcp
docker run -p 9000:9000 -p 9001:9001 myminio:minio server /tmp/minio --console-address :9001
```

### firewall-cmd
Complete documentation for building Docker containers, managing custom images, or loading images into orchestration platforms is out of scope for this README.
You can modify the `Dockerfile` and `dockerscripts/docker-entrypoint.sh` as needed to reflect your specific image requirements.

For hosts with firewall-cmd enabled (CentOS), you can use the `firewall-cmd` command to allow traffic to specific ports. Use the below commands to allow access to port 9000:
See the [MinIO Container](https://docs.min.io/community/minio-object-store/operations/deployments/baremetal-deploy-minio-as-a-container.html#deploy-minio-container) documentation for more guidance on running MinIO within a Container image.

```sh
firewall-cmd --get-active-zones
```
## Install using Helm Charts

This command gets the active zone(s). Now, apply port rules to the relevant zones returned above. For example, if the zone is `public`, use:
There are two paths for installing MinIO onto Kubernetes infrastructure:

```sh
firewall-cmd --zone=public --add-port=9000/tcp --permanent
```
- Use the [MinIO Operator](https://github.com/minio/operator)
- Use the community-maintained [Helm charts](https://github.com/minio/minio/tree/master/helm/minio)

> [!NOTE]
> `permanent` makes sure the rules are persistent across firewall start, restart, or reload. Finally, reload the firewall for changes to take effect.

```sh
firewall-cmd --reload
```

### iptables

For hosts with iptables enabled (RHEL, CentOS, etc.), you can use the `iptables` command to enable all traffic coming to specific ports. Use the below command to allow access to port 9000:

```sh
iptables -A INPUT -p tcp --dport 9000 -j ACCEPT
service iptables restart
```

The below command enables all incoming traffic to ports ranging from 9000 to 9010:

```sh
iptables -A INPUT -p tcp --dport 9000:9010 -j ACCEPT
service iptables restart
```
See the [MinIO Documentation](https://docs.min.io/community/minio-object-store/operations/deployments/kubernetes.html) for guidance on deploying using the Operator.
The Community Helm chart has instructions in the folder-level README.
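
The Helm path can be as short as adding the chart repository and installing the chart. A minimal sketch, assuming the repository URL given in the community chart's README; the release name `my-minio` is illustrative:

```sh
helm repo add minio https://charts.min.io/
helm install my-minio minio/minio
```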

## Test MinIO Connectivity

### Test using MinIO Console

MinIO Server comes with an embedded web based object browser. Point your web browser to <http://127.0.0.1:9000> to ensure your server has started successfully.
MinIO Server comes with an embedded web based object browser.
Point your web browser to <http://127.0.0.1:9000> to ensure your server has started successfully.

> [!NOTE]
> MinIO runs the console on a random port by default; if you wish to choose a specific port, use `--console-address` to pick a specific interface and port.

### Things to consider
### Test using MinIO Client `mc`

MinIO redirects browser access requests to the configured server port (i.e. `127.0.0.1:9000`) to the configured Console port. MinIO uses the hostname or IP address specified in the request when building the redirect URL. The URL and port *must* be accessible by the client for the redirection to work.
`mc` provides a modern alternative to UNIX commands like ls, cat, cp, mirror, diff etc. It supports filesystems and Amazon S3 compatible cloud storage services.

For deployments behind a load balancer, proxy, or ingress rule where the MinIO host IP address or port is not public, use the `MINIO_BROWSER_REDIRECT_URL` environment variable to specify the external hostname for the redirect. The LB/Proxy must have rules for directing traffic to the Console port specifically.

For example, consider a MinIO deployment behind a proxy `https://minio.example.net`, `https://console.minio.example.net` with rules for forwarding traffic on port :9000 and :9001 to MinIO and the MinIO Console respectively on the internal network. Set `MINIO_BROWSER_REDIRECT_URL` to `https://console.minio.example.net` to ensure the browser receives a valid reachable URL.
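
A minimal sketch of that configuration, reusing the example hostnames above; the data path and console port are illustrative:

```sh
export MINIO_BROWSER_REDIRECT_URL="https://console.minio.example.net"
minio server /data --console-address ":9001"
```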

| Dashboard | Creating a bucket |
| ------------- | ------------- |

## Test using MinIO Client `mc`

`mc` provides a modern alternative to UNIX commands like ls, cat, cp, mirror, diff etc. It supports filesystems and Amazon S3 compatible cloud storage services. Follow the MinIO Client [Quickstart Guide](https://docs.min.io/community/minio-object-store/reference/minio-mc.html#quickstart) for further instructions.

## Upgrading MinIO

Upgrades require zero downtime in MinIO; all upgrades are non-disruptive, and all transactions on MinIO are atomic. Upgrading all the servers simultaneously is therefore the recommended way to upgrade MinIO.

> [!NOTE]
> Upgrading requires internet access to update directly from <https://dl.min.io>; optionally, you can host any mirrors at <https://my-artifactory.example.com/minio/>

- For deployments that installed the MinIO server binary by hand, use [`mc admin update`](https://docs.min.io/community/minio-object-store/reference/minio-mc-admin/mc-admin-update.html)
The following commands set a local alias, validate the server information, create a bucket, copy data to that bucket, and list the contents of the bucket.

```sh
mc admin update <minio alias, e.g., myminio>
mc alias set local http://localhost:9000 minioadmin minioadmin
mc admin info local
mc mb local/data
mc cp ~/Downloads/mydata local/data/
mc ls local/data/
```

- For deployments without external internet access (e.g. airgapped environments), download the binary from <https://dl.min.io>, replace the existing MinIO binary (for example at `/opt/bin/minio`), apply executable permissions with `chmod +x /opt/bin/minio`, and proceed to perform `mc admin service restart alias/`.

- For installations using the Systemd MinIO service, upgrade via RPM/DEB packages **in parallel** on all servers, or replace the binary (for example `/opt/bin/minio`) on all nodes, apply executable permissions with `chmod +x /opt/bin/minio`, and proceed to perform `mc admin service restart alias/`.

### Upgrade Checklist

- Test all upgrades in a lower environment (DEV, QA, UAT) before applying to production. Performing blind upgrades in production environments carries significant risk.
- Read the release notes for MinIO *before* performing any upgrade; there is no forced requirement to upgrade to the latest release upon every release. Some releases may not be relevant to your setup; avoid upgrading production environments unnecessarily.
- If you plan to use `mc admin update`, the MinIO process must have write access to the parent directory where the binary is present on the host system.
- `mc admin update` is not supported and should be avoided in Kubernetes/container environments; please upgrade containers by upgrading the relevant container images.
- **We do not recommend upgrading one MinIO server at a time; the product is designed to support parallel upgrades. Please follow our recommended guidelines.**
Follow the MinIO Client [Quickstart Guide](https://docs.min.io/community/minio-object-store/reference/minio-mc.html#quickstart) for further instructions.

## Explore Further

- [The MinIO documentation website](https://docs.min.io/community/minio-object-store/index.html)
- [MinIO Erasure Code Overview](https://docs.min.io/community/minio-object-store/operations/concepts/erasure-coding.html)
- [Use `mc` with MinIO Server](https://docs.min.io/community/minio-object-store/reference/minio-mc.html)
- [Use `minio-go` SDK with MinIO Server](https://docs.min.io/community/minio-object-store/developers/go/minio-go.html)
- [The MinIO documentation website](https://docs.min.io/community/minio-object-store/index.html)
- [Use `minio-go` SDK with MinIO Server](https://docs.min.io/enterprise/aistor-object-store/developers/sdk/go/)

## Contribute to MinIO Project

Please follow MinIO [Contributor's Guide](https://github.com/minio/minio/blob/master/CONTRIBUTING.md)
Please follow MinIO [Contributor's Guide](https://github.com/minio/minio/blob/master/CONTRIBUTING.md) for guidance on making new contributions to the repository.

## License

@@ -193,27 +193,27 @@ func (a adminAPIHandlers) SetConfigKVHandler(w http.ResponseWriter, r *http.Requ
func setConfigKV(ctx context.Context, objectAPI ObjectLayer, kvBytes []byte) (result setConfigResult, err error) {
    result.Cfg, err = readServerConfig(ctx, objectAPI, nil)
    if err != nil {
        return
        return result, err
    }

    result.Dynamic, err = result.Cfg.ReadConfig(bytes.NewReader(kvBytes))
    if err != nil {
        return
        return result, err
    }

    result.SubSys, _, _, err = config.GetSubSys(string(kvBytes))
    if err != nil {
        return
        return result, err
    }

    tgts, err := config.ParseConfigTargetID(bytes.NewReader(kvBytes))
    if err != nil {
        return
        return result, err
    }
    ctx = context.WithValue(ctx, config.ContextKeyForTargetFromConfig, tgts)
    if verr := validateConfig(ctx, result.Cfg, result.SubSys); verr != nil {
        err = badConfigErr{Err: verr}
        return
        return result, err
    }

    // Check if subnet proxy being set and if so set the same value to proxy of subnet

@@ -222,12 +222,12 @@ func setConfigKV(ctx context.Context, objectAPI ObjectLayer, kvBytes []byte) (re
    // Update the actual server config on disk.
    if err = saveServerConfig(ctx, objectAPI, result.Cfg); err != nil {
        return
        return result, err
    }

    // Write the config input KV to history.
    err = saveServerConfigHistory(ctx, objectAPI, kvBytes)
    return
    return result, err
}

// GetConfigKVHandler - GET /minio/admin/v3/get-config-kv?key={key}
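
Hunks like the ones above and below all follow a single pattern: functions with named result parameters now return their values explicitly instead of relying on Go's naked `return`. A minimal standalone sketch of the difference; nothing here is MinIO-specific:

```go
package main

import (
	"errors"
	"fmt"
)

// parseNaked uses naked returns: the reader must track which named
// results were assigned before each return.
func parseNaked(s string) (n int, err error) {
	if s == "" {
		err = errors.New("empty input")
		return // implicitly returns (0, err)
	}
	n = len(s)
	return // implicitly returns (n, nil)
}

// parseExplicit states at every exit exactly what it yields.
func parseExplicit(s string) (n int, err error) {
	if s == "" {
		return 0, errors.New("empty input")
	}
	return len(s), nil
}

func main() {
	fmt.Println(parseNaked("abc")) // 3 <nil>
	fmt.Println(parseExplicit("")) // 0 empty input
}
```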

@@ -380,7 +380,7 @@ func (a adminAPIHandlers) RebalanceStop(w http.ResponseWriter, r *http.Request)
func proxyDecommissionRequest(ctx context.Context, defaultEndPoint Endpoint, w http.ResponseWriter, r *http.Request) (proxy bool) {
    host := env.Get("_MINIO_DECOM_ENDPOINT_HOST", defaultEndPoint.Host)
    if host == "" {
        return
        return proxy
    }
    for nodeIdx, proxyEp := range globalProxyEndpoints {
        if proxyEp.Host == host && !proxyEp.IsLocal {

@@ -389,5 +389,5 @@ func proxyDecommissionRequest(ctx context.Context, defaultEndPoint Endpoint, w h
            }
        }
    }
    return
    return proxy
}

@@ -70,7 +70,7 @@ func (a adminAPIHandlers) SiteReplicationAdd(w http.ResponseWriter, r *http.Requ
func getSRAddOptions(r *http.Request) (opts madmin.SRAddOptions) {
    opts.ReplicateILMExpiry = r.Form.Get("replicateILMExpiry") == "true"
    return
    return opts
}

// SRPeerJoin - PUT /minio/admin/v3/site-replication/join

@@ -422,7 +422,7 @@ func (a adminAPIHandlers) SiteReplicationEdit(w http.ResponseWriter, r *http.Req
func getSREditOptions(r *http.Request) (opts madmin.SREditOptions) {
    opts.DisableILMExpiryReplication = r.Form.Get("disableILMExpiryReplication") == "true"
    opts.EnableILMExpiryReplication = r.Form.Get("enableILMExpiryReplication") == "true"
    return
    return opts
}

// SRPeerEdit - PUT /minio/admin/v3/site-replication/peer/edit

@@ -484,7 +484,7 @@ func getSRStatusOptions(r *http.Request) (opts madmin.SRStatusOptions) {
    opts.EntityValue = q.Get("entityvalue")
    opts.ShowDeleted = q.Get("showDeleted") == "true"
    opts.Metrics = q.Get("metrics") == "true"
    return
    return opts
}

// SiteReplicationRemove - PUT /minio/admin/v3/site-replication/remove

@@ -208,6 +208,8 @@ func TestIAMInternalIDPServerSuite(t *testing.T) {
    suite.TestGroupAddRemove(c)
    suite.TestServiceAccountOpsByAdmin(c)
    suite.TestServiceAccountPrivilegeEscalationBug(c)
    suite.TestServiceAccountPrivilegeEscalationBug2_2025_10_15(c, true)
    suite.TestServiceAccountPrivilegeEscalationBug2_2025_10_15(c, false)
    suite.TestServiceAccountOpsByUser(c)
    suite.TestServiceAccountDurationSecondsCondition(c)
    suite.TestAddServiceAccountPerms(c)

@@ -1249,6 +1251,108 @@ func (s *TestSuiteIAM) TestServiceAccountPrivilegeEscalationBug(c *check) {
    }
}

func (s *TestSuiteIAM) TestServiceAccountPrivilegeEscalationBug2_2025_10_15(c *check, forRoot bool) {
    ctx, cancel := context.WithTimeout(context.Background(), testDefaultTimeout)
    defer cancel()

    for i := range 3 {
        err := s.client.MakeBucket(ctx, fmt.Sprintf("bucket%d", i+1), minio.MakeBucketOptions{})
        if err != nil {
            c.Fatalf("bucket create error: %v", err)
        }
        defer func(i int) {
            _ = s.client.RemoveBucket(ctx, fmt.Sprintf("bucket%d", i+1))
        }(i)
    }

    allow2BucketsPolicyBytes := []byte(`{
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "ListBucket1AndBucket2",
          "Effect": "Allow",
          "Action": ["s3:ListBucket"],
          "Resource": ["arn:aws:s3:::bucket1", "arn:aws:s3:::bucket2"]
        },
        {
          "Sid": "ReadWriteBucket1AndBucket2Objects",
          "Effect": "Allow",
          "Action": [
            "s3:DeleteObject",
            "s3:DeleteObjectVersion",
            "s3:GetObject",
            "s3:GetObjectVersion",
            "s3:PutObject"
          ],
          "Resource": ["arn:aws:s3:::bucket1/*", "arn:aws:s3:::bucket2/*"]
        }
      ]
    }`)

    if forRoot {
        // Create a service account for the root user.
        _, err := s.adm.AddServiceAccount(ctx, madmin.AddServiceAccountReq{
            Policy:    allow2BucketsPolicyBytes,
            AccessKey: "restricted",
            SecretKey: "restricted123",
        })
        if err != nil {
            c.Fatalf("could not create service account")
        }
        defer func() {
            _ = s.adm.DeleteServiceAccount(ctx, "restricted")
        }()
    } else {
        // Create a regular user and attach consoleAdmin policy
        err := s.adm.AddUser(ctx, "foobar", "foobar123")
        if err != nil {
            c.Fatalf("could not create user")
        }

        _, err = s.adm.AttachPolicy(ctx, madmin.PolicyAssociationReq{
            Policies: []string{"consoleAdmin"},
            User:     "foobar",
        })
        if err != nil {
            c.Fatalf("could not attach policy")
        }

        // Create a service account for the regular user.
        _, err = s.adm.AddServiceAccount(ctx, madmin.AddServiceAccountReq{
            Policy:     allow2BucketsPolicyBytes,
            TargetUser: "foobar",
            AccessKey:  "restricted",
            SecretKey:  "restricted123",
        })
        if err != nil {
            c.Fatalf("could not create service account: %v", err)
        }
        defer func() {
            _ = s.adm.DeleteServiceAccount(ctx, "restricted")
            _ = s.adm.RemoveUser(ctx, "foobar")
        }()
    }
    restrictedClient := s.getUserClient(c, "restricted", "restricted123", "")

    buckets, err := restrictedClient.ListBuckets(ctx)
    if err != nil {
        c.Fatalf("err fetching buckets %s", err)
    }
    if len(buckets) != 2 || buckets[0].Name != "bucket1" || buckets[1].Name != "bucket2" {
        c.Fatalf("restricted service account should only have access to bucket1 and bucket2")
    }

    // Try to escalate privileges
    restrictedAdmClient := s.getAdminClient(c, "restricted", "restricted123", "")
    _, err = restrictedAdmClient.AddServiceAccount(ctx, madmin.AddServiceAccountReq{
        AccessKey: "newroot",
        SecretKey: "newroot123",
    })
    if err == nil {
        c.Fatalf("restricted service account was able to create service account bypassing sub-policy!")
    }
}

func (s *TestSuiteIAM) SetUpAccMgmtPlugin(c *check) {
    ctx, cancel := context.WithTimeout(context.Background(), testDefaultTimeout)
    defer cancel()

@@ -1243,17 +1243,17 @@ func extractHealInitParams(vars map[string]string, qParams url.Values, r io.Read
        if hip.objPrefix != "" {
            // Bucket is required if object-prefix is given
            err = ErrHealMissingBucket
            return
            return hip, err
        }
    } else if isReservedOrInvalidBucket(hip.bucket, false) {
        err = ErrInvalidBucketName
        return
        return hip, err
    }

    // empty prefix is valid.
    if !IsValidObjectPrefix(hip.objPrefix) {
        err = ErrInvalidObjectName
        return
        return hip, err
    }

    if len(qParams[mgmtClientToken]) > 0 {

@@ -1275,7 +1275,7 @@ func extractHealInitParams(vars map[string]string, qParams url.Values, r io.Read
    if (hip.forceStart && hip.forceStop) ||
        (hip.clientToken != "" && (hip.forceStart || hip.forceStop)) {
        err = ErrInvalidRequest
        return
        return hip, err
    }

    // ignore body if clientToken is provided

@@ -1284,12 +1284,12 @@ func extractHealInitParams(vars map[string]string, qParams url.Values, r io.Read
        if jerr != nil {
            adminLogIf(GlobalContext, jerr, logger.ErrorKind)
            err = ErrRequestBodyParse
            return
            return hip, err
        }
    }

    err = ErrNone
    return
    return hip, err
}

// HealHandler - POST /minio/admin/v3/heal/

@@ -2022,7 +2022,7 @@ func extractTraceOptions(r *http.Request) (opts madmin.ServiceTraceOpts, err err
        opts.OS = true
        // Older mc - cannot deal with more types...
    }
    return
    return opts, err
}

// TraceHandler - POST /minio/admin/v3/trace

@@ -296,7 +296,7 @@ func registerAdminRouter(router *mux.Router, enableConfigOps bool) {
    adminRouter.Methods(http.MethodPut).Path(adminVersion + "/import-iam").HandlerFunc(adminMiddleware(adminAPI.ImportIAM, noGZFlag))
    adminRouter.Methods(http.MethodPut).Path(adminVersion + "/import-iam-v2").HandlerFunc(adminMiddleware(adminAPI.ImportIAMV2, noGZFlag))

    // IDentity Provider configuration APIs
    // Identity Provider configuration APIs
    adminRouter.Methods(http.MethodPut).Path(adminVersion + "/idp-config/{type}/{name}").HandlerFunc(adminMiddleware(adminAPI.AddIdentityProviderCfg))
    adminRouter.Methods(http.MethodPost).Path(adminVersion + "/idp-config/{type}/{name}").HandlerFunc(adminMiddleware(adminAPI.UpdateIdentityProviderCfg))
    adminRouter.Methods(http.MethodGet).Path(adminVersion + "/idp-config/{type}").HandlerFunc(adminMiddleware(adminAPI.ListIdentityProviderCfg))

@@ -31,7 +31,7 @@ func getListObjectsV1Args(values url.Values) (prefix, marker, delimiter string,
        var err error
        if maxkeys, err = strconv.Atoi(values.Get("max-keys")); err != nil {
            errCode = ErrInvalidMaxKeys
            return
            return prefix, marker, delimiter, maxkeys, encodingType, errCode
        }
    } else {
        maxkeys = maxObjectList

@@ -41,7 +41,7 @@ func getListObjectsV1Args(values url.Values) (prefix, marker, delimiter string,
    marker = values.Get("marker")
    delimiter = values.Get("delimiter")
    encodingType = values.Get("encoding-type")
    return
    return prefix, marker, delimiter, maxkeys, encodingType, errCode
}

func getListBucketObjectVersionsArgs(values url.Values) (prefix, marker, delimiter string, maxkeys int, encodingType, versionIDMarker string, errCode APIErrorCode) {

@@ -51,7 +51,7 @@ func getListBucketObjectVersionsArgs(values url.Values) (prefix, marker, delimit
        var err error
        if maxkeys, err = strconv.Atoi(values.Get("max-keys")); err != nil {
            errCode = ErrInvalidMaxKeys
            return
            return prefix, marker, delimiter, maxkeys, encodingType, versionIDMarker, errCode
        }
    } else {
        maxkeys = maxObjectList

@@ -62,7 +62,7 @@ func getListBucketObjectVersionsArgs(values url.Values) (prefix, marker, delimit
    delimiter = values.Get("delimiter")
    encodingType = values.Get("encoding-type")
    versionIDMarker = values.Get("version-id-marker")
    return
    return prefix, marker, delimiter, maxkeys, encodingType, versionIDMarker, errCode
}

// Parse bucket url queries for ListObjects V2.

@@ -73,7 +73,7 @@ func getListObjectsV2Args(values url.Values) (prefix, token, startAfter, delimit
    if val, ok := values["continuation-token"]; ok {
        if len(val[0]) == 0 {
            errCode = ErrIncorrectContinuationToken
            return
            return prefix, token, startAfter, delimiter, fetchOwner, maxkeys, encodingType, errCode
        }
    }

@@ -81,7 +81,7 @@ func getListObjectsV2Args(values url.Values) (prefix, token, startAfter, delimit
        var err error
        if maxkeys, err = strconv.Atoi(values.Get("max-keys")); err != nil {
            errCode = ErrInvalidMaxKeys
            return
            return prefix, token, startAfter, delimiter, fetchOwner, maxkeys, encodingType, errCode
        }
    } else {
        maxkeys = maxObjectList

@@ -97,11 +97,11 @@ func getListObjectsV2Args(values url.Values) (prefix, token, startAfter, delimit
        decodedToken, err := base64.StdEncoding.DecodeString(token)
        if err != nil {
            errCode = ErrIncorrectContinuationToken
            return
            return prefix, token, startAfter, delimiter, fetchOwner, maxkeys, encodingType, errCode
        }
        token = string(decodedToken)
    }
    return
    return prefix, token, startAfter, delimiter, fetchOwner, maxkeys, encodingType, errCode
}

// Parse bucket url queries for ?uploads

@@ -112,7 +112,7 @@ func getBucketMultipartResources(values url.Values) (prefix, keyMarker, uploadID
        var err error
        if maxUploads, err = strconv.Atoi(values.Get("max-uploads")); err != nil {
            errCode = ErrInvalidMaxUploads
            return
            return prefix, keyMarker, uploadIDMarker, delimiter, maxUploads, encodingType, errCode
        }
    } else {
        maxUploads = maxUploadsList

@@ -123,7 +123,7 @@ func getBucketMultipartResources(values url.Values) (prefix, keyMarker, uploadID
    uploadIDMarker = values.Get("upload-id-marker")
    delimiter = values.Get("delimiter")
    encodingType = values.Get("encoding-type")
    return
    return prefix, keyMarker, uploadIDMarker, delimiter, maxUploads, encodingType, errCode
}

// Parse object url queries

@@ -134,7 +134,7 @@ func getObjectResources(values url.Values) (uploadID string, partNumberMarker, m
    if values.Get("max-parts") != "" {
        if maxParts, err = strconv.Atoi(values.Get("max-parts")); err != nil {
            errCode = ErrInvalidMaxParts
            return
            return uploadID, partNumberMarker, maxParts, encodingType, errCode
        }
    } else {
        maxParts = maxPartsList

@@ -143,11 +143,11 @@ func getObjectResources(values url.Values) (uploadID string, partNumberMarker, m
    if values.Get("part-number-marker") != "" {
        if partNumberMarker, err = strconv.Atoi(values.Get("part-number-marker")); err != nil {
            errCode = ErrInvalidPartNumberMarker
            return
            return uploadID, partNumberMarker, maxParts, encodingType, errCode
        }
    }

    uploadID = values.Get("uploadId")
    encodingType = values.Get("encoding-type")
    return
    return uploadID, partNumberMarker, maxParts, encodingType, errCode
}
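
The continuation token handled above travels base64-encoded between client and server. A standalone sketch of that round trip; the bucket prefix is illustrative:

```go
package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// The server hands the client an opaque, base64-encoded continuation token.
	token := base64.StdEncoding.EncodeToString([]byte("photos/2025/"))

	// On the next listing request the server decodes it back.
	decoded, err := base64.StdEncoding.DecodeString(token)
	if err != nil {
		// A malformed token maps to ErrIncorrectContinuationToken above.
		fmt.Println("invalid continuation token")
		return
	}
	fmt.Println(string(decoded)) // photos/2025/
}
```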

@@ -889,6 +889,12 @@ func generateMultiDeleteResponse(quiet bool, deletedObjects []DeletedObject, err
}

func writeResponse(w http.ResponseWriter, statusCode int, response []byte, mType mimeType) {
    // Don't write a response if one has already been written.
    // Fixes https://github.com/minio/minio/issues/21633
    if headersAlreadyWritten(w) {
        return
    }

    if statusCode == 0 {
        statusCode = 200
    }

@@ -1015,3 +1021,45 @@ func writeCustomErrorResponseJSON(ctx context.Context, w http.ResponseWriter, er
    encodedErrorResponse := encodeResponseJSON(errorResponse)
    writeResponse(w, err.HTTPStatusCode, encodedErrorResponse, mimeJSON)
}

type unwrapper interface {
    Unwrap() http.ResponseWriter
}

// headersAlreadyWritten returns true if the headers have already been written
// to this response writer. It will unwrap the ResponseWriter if possible to try
// and find a trackingResponseWriter.
func headersAlreadyWritten(w http.ResponseWriter) bool {
    for {
        if trw, ok := w.(*trackingResponseWriter); ok {
            return trw.headerWritten
        } else if uw, ok := w.(unwrapper); ok {
            w = uw.Unwrap()
        } else {
            return false
        }
    }
}

// trackingResponseWriter wraps a ResponseWriter and notes when WriteHeader has
// been called. This allows high level request handlers to check if something
// has already sent the header.
type trackingResponseWriter struct {
    http.ResponseWriter
    headerWritten bool
}

func (w *trackingResponseWriter) WriteHeader(statusCode int) {
    if !w.headerWritten {
        w.headerWritten = true
        w.ResponseWriter.WriteHeader(statusCode)
    }
}

func (w *trackingResponseWriter) Write(b []byte) (int, error) {
    return w.ResponseWriter.Write(b)
}

func (w *trackingResponseWriter) Unwrap() http.ResponseWriter {
    return w.ResponseWriter
}
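
Note that `headersAlreadyWritten` can only see through wrappers that expose `Unwrap`. A minimal sketch of a compatible wrapper; `auditWriter` is hypothetical and not part of MinIO:

```go
package cmd

import "net/http"

// auditWriter is a hypothetical middleware wrapper. Because it implements
// Unwrap, headersAlreadyWritten can walk through it to reach the inner
// trackingResponseWriter; without Unwrap the check would simply return false.
type auditWriter struct {
	http.ResponseWriter
}

func (w *auditWriter) Unwrap() http.ResponseWriter {
	return w.ResponseWriter
}
```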

@@ -18,8 +18,12 @@
package cmd

import (
    "io"
    "net/http"
    "net/http/httptest"
    "testing"

    "github.com/klauspost/compress/gzhttp"
)

// Tests object location.

@@ -122,3 +126,89 @@ func TestGetURLScheme(t *testing.T) {
        t.Errorf("Expected %s, got %s", httpsScheme, gotScheme)
    }
}

func TestTrackingResponseWriter(t *testing.T) {
    rw := httptest.NewRecorder()
    trw := &trackingResponseWriter{ResponseWriter: rw}
    trw.WriteHeader(123)
    if !trw.headerWritten {
        t.Fatal("headerWritten was not set by WriteHeader call")
    }

    _, err := trw.Write([]byte("hello"))
    if err != nil {
        t.Fatalf("Write unexpectedly failed: %v", err)
    }

    // Check that WriteHeader and Write were called on the underlying response writer
    resp := rw.Result()
    if resp.StatusCode != 123 {
        t.Fatalf("unexpected status: %v", resp.StatusCode)
    }
    body, err := io.ReadAll(resp.Body)
    if err != nil {
        t.Fatalf("reading response body failed: %v", err)
    }
    if string(body) != "hello" {
        t.Fatalf("response body incorrect: %v", string(body))
    }

    // Check that Unwrap works
    if trw.Unwrap() != rw {
        t.Fatalf("Unwrap returned wrong result: %v", trw.Unwrap())
    }
}

func TestHeadersAlreadyWritten(t *testing.T) {
    rw := httptest.NewRecorder()
    trw := &trackingResponseWriter{ResponseWriter: rw}

    if headersAlreadyWritten(trw) {
        t.Fatal("headers have not been written yet")
    }

    trw.WriteHeader(123)
    if !headersAlreadyWritten(trw) {
        t.Fatal("headers were written")
    }
}

func TestHeadersAlreadyWrittenWrapped(t *testing.T) {
    rw := httptest.NewRecorder()
    trw := &trackingResponseWriter{ResponseWriter: rw}
    wrap1 := &gzhttp.NoGzipResponseWriter{ResponseWriter: trw}
    wrap2 := &gzhttp.NoGzipResponseWriter{ResponseWriter: wrap1}

    if headersAlreadyWritten(wrap2) {
        t.Fatal("headers have not been written yet")
    }

    wrap2.WriteHeader(123)
    if !headersAlreadyWritten(wrap2) {
        t.Fatal("headers were written")
    }
}

func TestWriteResponseHeadersNotWritten(t *testing.T) {
    rw := httptest.NewRecorder()
    trw := &trackingResponseWriter{ResponseWriter: rw}

    writeResponse(trw, 299, []byte("hello"), "application/foo")

    resp := rw.Result()
    if resp.StatusCode != 299 {
        t.Fatal("response wasn't written")
    }
}

func TestWriteResponseHeadersWritten(t *testing.T) {
    rw := httptest.NewRecorder()
    rw.Code = -1
    trw := &trackingResponseWriter{ResponseWriter: rw, headerWritten: true}

    writeResponse(trw, 200, []byte("hello"), "application/foo")

    if rw.Code != -1 {
        t.Fatalf("response was written when it shouldn't have been (Code=%v)", rw.Code)
    }
}

@@ -218,6 +218,8 @@ func s3APIMiddleware(f http.HandlerFunc, flags ...s3HFlag) http.HandlerFunc {
    handlerName := getHandlerName(f, "objectAPIHandlers")

    var handler http.HandlerFunc = func(w http.ResponseWriter, r *http.Request) {
        w = &trackingResponseWriter{ResponseWriter: w}

        // Wrap the actual handler with the appropriate tracing middleware.
        var tracedHandler http.HandlerFunc
        if handlerFlags.has(traceHdrsS3HFlag) {

@@ -1,7 +1,7 @@
package cmd

// Code generated by github.com/tinylib/msgp DO NOT EDIT.

package cmd

import (
    "github.com/tinylib/msgp/msgp"
)

@@ -1,7 +1,7 @@
package cmd

// Code generated by github.com/tinylib/msgp DO NOT EDIT.

package cmd

import (
    "bytes"
    "testing"

@@ -1,7 +1,7 @@
package cmd

// Code generated by github.com/tinylib/msgp DO NOT EDIT.

package cmd

import (
    "time"

@@ -1,7 +1,7 @@
package cmd

// Code generated by github.com/tinylib/msgp DO NOT EDIT.

package cmd

import (
    "bytes"
    "testing"

@@ -1,7 +1,7 @@
package cmd

// Code generated by github.com/tinylib/msgp DO NOT EDIT.

package cmd

import (
    "github.com/tinylib/msgp/msgp"
)

@@ -1,7 +1,7 @@
package cmd

// Code generated by github.com/tinylib/msgp DO NOT EDIT.

package cmd

import (
    "bytes"
    "testing"

@@ -1,7 +1,7 @@
package cmd

// Code generated by github.com/tinylib/msgp DO NOT EDIT.

package cmd

import (
    "github.com/tinylib/msgp/msgp"
)

@@ -1,7 +1,7 @@
package cmd

// Code generated by github.com/tinylib/msgp DO NOT EDIT.

package cmd

import (
    "bytes"
    "testing"

@@ -1,7 +1,7 @@
package cmd

// Code generated by github.com/tinylib/msgp DO NOT EDIT.

package cmd

import (
    "github.com/tinylib/msgp/msgp"
)

@@ -1,7 +1,7 @@
package cmd

// Code generated by github.com/tinylib/msgp DO NOT EDIT.

package cmd

import (
    "bytes"
    "testing"

@@ -1,7 +1,7 @@
package cmd

// Code generated by github.com/tinylib/msgp DO NOT EDIT.

package cmd

import (
    "github.com/tinylib/msgp/msgp"
)

@@ -1,7 +1,7 @@
package cmd

// Code generated by github.com/tinylib/msgp DO NOT EDIT.

package cmd

import (
    "bytes"
    "testing"

@@ -99,7 +99,7 @@ func BitrotAlgorithmFromString(s string) (a BitrotAlgorithm) {
            return alg
        }
    }
    return
    return a
}

func newBitrotWriter(disk StorageAPI, origvolume, volume, filePath string, length int64, algo BitrotAlgorithm, shardSize int64) io.Writer {

@@ -1,7 +1,7 @@
package cmd

// Code generated by github.com/tinylib/msgp DO NOT EDIT.

package cmd

import (
    "github.com/tinylib/msgp/msgp"
)

@@ -59,19 +59,17 @@ func (z *ServerSystemConfig) DecodeMsg(dc *msgp.Reader) (err error) {
        if z.MinioEnv == nil {
            z.MinioEnv = make(map[string]string, zb0003)
        } else if len(z.MinioEnv) > 0 {
            for key := range z.MinioEnv {
                delete(z.MinioEnv, key)
            }
            clear(z.MinioEnv)
        }
        for zb0003 > 0 {
            zb0003--
            var za0002 string
            var za0003 string
            za0002, err = dc.ReadString()
            if err != nil {
                err = msgp.WrapError(err, "MinioEnv")
                return
            }
            var za0003 string
            za0003, err = dc.ReadString()
            if err != nil {
                err = msgp.WrapError(err, "MinioEnv", za0002)

@@ -240,14 +238,12 @@ func (z *ServerSystemConfig) UnmarshalMsg(bts []byte) (o []byte, err error) {
        if z.MinioEnv == nil {
            z.MinioEnv = make(map[string]string, zb0003)
        } else if len(z.MinioEnv) > 0 {
            for key := range z.MinioEnv {
                delete(z.MinioEnv, key)
            }
            clear(z.MinioEnv)
        }
        for zb0003 > 0 {
            var za0002 string
            var za0003 string
            zb0003--
            var za0002 string
            za0002, bts, err = msgp.ReadStringBytes(bts)
            if err != nil {
                err = msgp.WrapError(err, "MinioEnv")
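
The regenerated decoders swap the key-by-key delete loop for Go 1.21's `clear` builtin, which empties a map in place while keeping its allocated capacity. A standalone sketch:

```go
package main

import "fmt"

func main() {
	m := map[string]string{"A": "1", "B": "2"}

	// Pre-Go 1.21 idiom: delete every key individually.
	for k := range m {
		delete(m, k)
	}

	// Go 1.21+: clear does the same in one call.
	m["C"] = "3"
	clear(m)
	fmt.Println(len(m)) // 0
}
```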

@@ -1,7 +1,7 @@
package cmd

// Code generated by github.com/tinylib/msgp DO NOT EDIT.

package cmd

import (
    "bytes"
    "testing"

@@ -592,7 +592,7 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
            output[idx] = obj
            idx++
        }
        return
        return output
    }

    // Disable timeouts and cancellation
@@ -248,19 +248,19 @@ func proxyRequestByToken(ctx context.Context, w http.ResponseWriter, r *http.Req
 	if subToken, nodeIndex = parseRequestToken(token); nodeIndex >= 0 {
 		proxied, success = proxyRequestByNodeIndex(ctx, w, r, nodeIndex, returnErr)
 	}
-	return
+	return subToken, proxied, success
 }
 
 func proxyRequestByNodeIndex(ctx context.Context, w http.ResponseWriter, r *http.Request, index int, returnErr bool) (proxied, success bool) {
 	if len(globalProxyEndpoints) == 0 {
-		return
+		return proxied, success
 	}
 	if index < 0 || index >= len(globalProxyEndpoints) {
-		return
+		return proxied, success
 	}
 	ep := globalProxyEndpoints[index]
 	if ep.IsLocal {
-		return
+		return proxied, success
 	}
 	return true, proxyRequest(ctx, w, r, ep, returnErr)
 }
@@ -161,7 +161,7 @@ func (b BucketMetadata) lastUpdate() (t time.Time) {
 		t = b.BucketTargetsConfigMetaUpdatedAt
 	}
 
-	return
+	return t
 }
 
 // Versioning returns true if versioning is enabled
@@ -542,13 +542,13 @@ func (b *BucketMetadata) migrateTargetConfig(ctx context.Context, objectAPI Obje
 func encryptBucketMetadata(ctx context.Context, bucket string, input []byte, kmsContext kms.Context) (output, metabytes []byte, err error) {
 	if GlobalKMS == nil {
 		output = input
-		return
+		return output, metabytes, err
 	}
 
 	metadata := make(map[string]string)
 	key, err := GlobalKMS.GenerateKey(ctx, &kms.GenerateKeyRequest{AssociatedData: kmsContext})
 	if err != nil {
-		return
+		return output, metabytes, err
 	}
 
 	outbuf := bytes.NewBuffer(nil)

@@ -561,7 +561,7 @@ func encryptBucketMetadata(ctx context.Context, bucket string, input []byte, kms
 	}
 	metabytes, err = json.Marshal(metadata)
 	if err != nil {
-		return
+		return output, metabytes, err
 	}
 	return outbuf.Bytes(), metabytes, nil
 }
@@ -1,7 +1,7 @@
-package cmd
-
 // Code generated by github.com/tinylib/msgp DO NOT EDIT.
 
+package cmd
+
 import (
 	"github.com/tinylib/msgp/msgp"
 )

@@ -1,7 +1,7 @@
-package cmd
-
 // Code generated by github.com/tinylib/msgp DO NOT EDIT.
 
+package cmd
+
 import (
 	"bytes"
 	"testing"
@@ -97,7 +97,7 @@ func parseBucketQuota(bucket string, data []byte) (quotaCfg *madmin.BucketQuota,
 		}
 		return quotaCfg, fmt.Errorf("Invalid quota config %#v", quotaCfg)
 	}
-	return
+	return quotaCfg, err
 }
 
 func (sys *BucketQuotaSys) enforceQuotaHard(ctx context.Context, bucket string, size int64) error {
@@ -1,7 +1,7 @@
-package cmd
-
 // Code generated by github.com/tinylib/msgp DO NOT EDIT.
 
+package cmd
+
 import (
 	"github.com/tinylib/msgp/msgp"
 )

@@ -1,7 +1,7 @@
-package cmd
-
 // Code generated by github.com/tinylib/msgp DO NOT EDIT.
 
+package cmd
+
 import (
 	"bytes"
 	"testing"
@@ -172,13 +172,13 @@ func (ri ReplicateObjectInfo) TargetReplicationStatus(arn string) (status replic
 	repStatMatches := replStatusRegex.FindAllStringSubmatch(ri.ReplicationStatusInternal, -1)
 	for _, repStatMatch := range repStatMatches {
 		if len(repStatMatch) != 3 {
-			return
+			return status
 		}
 		if repStatMatch[1] == arn {
 			return replication.StatusType(repStatMatch[2])
 		}
 	}
-	return
+	return status
 }
 
 // TargetReplicationStatus - returns replication status of a target

@@ -186,13 +186,13 @@ func (o ObjectInfo) TargetReplicationStatus(arn string) (status replication.Stat
 	repStatMatches := replStatusRegex.FindAllStringSubmatch(o.ReplicationStatusInternal, -1)
 	for _, repStatMatch := range repStatMatches {
 		if len(repStatMatch) != 3 {
-			return
+			return status
 		}
 		if repStatMatch[1] == arn {
 			return replication.StatusType(repStatMatch[2])
 		}
 	}
-	return
+	return status
 }
 
 type replicateTargetDecision struct {
@@ -310,7 +310,7 @@ func parseReplicateDecision(ctx context.Context, bucket, s string) (r ReplicateD
 		targetsMap: make(map[string]replicateTargetDecision),
 	}
 	if len(s) == 0 {
-		return
+		return r, err
 	}
 	for p := range strings.SplitSeq(s, ",") {
 		if p == "" {

@@ -327,7 +327,7 @@ func parseReplicateDecision(ctx context.Context, bucket, s string) (r ReplicateD
 		}
 		r.targetsMap[slc[0]] = replicateTargetDecision{Replicate: tgt[0] == "true", Synchronous: tgt[1] == "true", Arn: tgt[2], ID: tgt[3]}
 	}
-	return
+	return r, err
 }
 
 // ReplicationState represents internal replication state
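The context lines in `parseReplicateDecision` show `strings.SplitSeq`, the iterator-returning variant of `strings.Split` added in Go 1.24, which avoids allocating an intermediate slice. A short sketch (the input string is made up for illustration):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// strings.SplitSeq yields each substring lazily, so the loop below
	// never materializes a []string, unlike strings.Split.
	for p := range strings.SplitSeq("arn1=true;false,arn2=false;true", ",") {
		if p == "" {
			continue
		}
		fmt.Println(p)
	}
}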
@@ -374,7 +374,7 @@ func (rs *ReplicationState) CompositeReplicationStatus() (st replication.StatusT
 	case !rs.ReplicaStatus.Empty():
 		return rs.ReplicaStatus
 	default:
-		return
+		return st
 	}
 }
@@ -737,7 +737,7 @@ type BucketReplicationResyncStatus struct {
 func (rs *BucketReplicationResyncStatus) cloneTgtStats() (m map[string]TargetReplicationResyncStatus) {
 	m = make(map[string]TargetReplicationResyncStatus)
 	maps.Copy(m, rs.TargetsMap)
-	return
+	return m
 }
 
 func newBucketResyncStatus(bucket string) BucketReplicationResyncStatus {
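`cloneTgtStats` above uses `maps.Copy` from the standard `maps` package (Go 1.21) to build an independent copy of its map. A minimal sketch of that clone idiom:

package main

import (
	"fmt"
	"maps"
)

func main() {
	src := map[string]int{"x": 1, "y": 2}

	// Allocate a fresh map, then maps.Copy transfers every key/value
	// pair from src into it.
	dst := make(map[string]int, len(src))
	maps.Copy(dst, src)

	dst["x"] = 42
	fmt.Println(src["x"], dst["x"]) // 1 42 — the clone is independent
}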
@@ -774,7 +774,7 @@ func extractReplicateDiffOpts(q url.Values) (opts madmin.ReplDiffOpts) {
 	opts.Verbose = q.Get("verbose") == "true"
 	opts.ARN = q.Get("arn")
 	opts.Prefix = q.Get("prefix")
-	return
+	return opts
 }
 
 const (
@@ -1,7 +1,7 @@
-package cmd
-
 // Code generated by github.com/tinylib/msgp DO NOT EDIT.
 
+package cmd
+
 import (
 	"github.com/minio/minio/internal/bucket/replication"
 	"github.com/tinylib/msgp/msgp"
@@ -41,19 +41,17 @@ func (z *BucketReplicationResyncStatus) DecodeMsg(dc *msgp.Reader) (err error) {
 			if z.TargetsMap == nil {
 				z.TargetsMap = make(map[string]TargetReplicationResyncStatus, zb0002)
 			} else if len(z.TargetsMap) > 0 {
-				for key := range z.TargetsMap {
-					delete(z.TargetsMap, key)
-				}
+				clear(z.TargetsMap)
 			}
 			for zb0002 > 0 {
 				zb0002--
 				var za0001 string
+				var za0002 TargetReplicationResyncStatus
 				za0001, err = dc.ReadString()
 				if err != nil {
 					err = msgp.WrapError(err, "TargetsMap")
 					return
 				}
-				var za0002 TargetReplicationResyncStatus
 				err = za0002.DecodeMsg(dc)
 				if err != nil {
 					err = msgp.WrapError(err, "TargetsMap", za0001)

@@ -203,14 +201,12 @@ func (z *BucketReplicationResyncStatus) UnmarshalMsg(bts []byte) (o []byte, err
 			if z.TargetsMap == nil {
 				z.TargetsMap = make(map[string]TargetReplicationResyncStatus, zb0002)
 			} else if len(z.TargetsMap) > 0 {
-				for key := range z.TargetsMap {
-					delete(z.TargetsMap, key)
-				}
+				clear(z.TargetsMap)
 			}
 			for zb0002 > 0 {
+				var za0001 string
+				var za0002 TargetReplicationResyncStatus
 				zb0002--
-				var za0001 string
 				za0001, bts, err = msgp.ReadStringBytes(bts)
 				if err != nil {
 					err = msgp.WrapError(err, "TargetsMap")
@@ -288,19 +284,17 @@ func (z *MRFReplicateEntries) DecodeMsg(dc *msgp.Reader) (err error) {
 			if z.Entries == nil {
 				z.Entries = make(map[string]MRFReplicateEntry, zb0002)
 			} else if len(z.Entries) > 0 {
-				for key := range z.Entries {
-					delete(z.Entries, key)
-				}
+				clear(z.Entries)
 			}
 			for zb0002 > 0 {
 				zb0002--
 				var za0001 string
+				var za0002 MRFReplicateEntry
 				za0001, err = dc.ReadString()
 				if err != nil {
 					err = msgp.WrapError(err, "Entries")
 					return
 				}
-				var za0002 MRFReplicateEntry
 				var zb0003 uint32
 				zb0003, err = dc.ReadMapHeader()
 				if err != nil {

@@ -478,14 +472,12 @@ func (z *MRFReplicateEntries) UnmarshalMsg(bts []byte) (o []byte, err error) {
 			if z.Entries == nil {
 				z.Entries = make(map[string]MRFReplicateEntry, zb0002)
 			} else if len(z.Entries) > 0 {
-				for key := range z.Entries {
-					delete(z.Entries, key)
-				}
+				clear(z.Entries)
 			}
 			for zb0002 > 0 {
+				var za0001 string
+				var za0002 MRFReplicateEntry
 				zb0002--
-				var za0001 string
 				za0001, bts, err = msgp.ReadStringBytes(bts)
 				if err != nil {
 					err = msgp.WrapError(err, "Entries")
@@ -872,19 +864,17 @@ func (z *ReplicationState) DecodeMsg(dc *msgp.Reader) (err error) {
 			if z.Targets == nil {
 				z.Targets = make(map[string]replication.StatusType, zb0002)
 			} else if len(z.Targets) > 0 {
-				for key := range z.Targets {
-					delete(z.Targets, key)
-				}
+				clear(z.Targets)
 			}
 			for zb0002 > 0 {
 				zb0002--
 				var za0001 string
+				var za0002 replication.StatusType
 				za0001, err = dc.ReadString()
 				if err != nil {
 					err = msgp.WrapError(err, "Targets")
 					return
 				}
-				var za0002 replication.StatusType
 				err = za0002.DecodeMsg(dc)
 				if err != nil {
 					err = msgp.WrapError(err, "Targets", za0001)

@@ -902,19 +892,17 @@ func (z *ReplicationState) DecodeMsg(dc *msgp.Reader) (err error) {
 			if z.PurgeTargets == nil {
 				z.PurgeTargets = make(map[string]VersionPurgeStatusType, zb0003)
 			} else if len(z.PurgeTargets) > 0 {
-				for key := range z.PurgeTargets {
-					delete(z.PurgeTargets, key)
-				}
+				clear(z.PurgeTargets)
 			}
 			for zb0003 > 0 {
 				zb0003--
 				var za0003 string
+				var za0004 VersionPurgeStatusType
 				za0003, err = dc.ReadString()
 				if err != nil {
 					err = msgp.WrapError(err, "PurgeTargets")
 					return
 				}
-				var za0004 VersionPurgeStatusType
 				err = za0004.DecodeMsg(dc)
 				if err != nil {
 					err = msgp.WrapError(err, "PurgeTargets", za0003)

@@ -932,19 +920,17 @@ func (z *ReplicationState) DecodeMsg(dc *msgp.Reader) (err error) {
 			if z.ResetStatusesMap == nil {
 				z.ResetStatusesMap = make(map[string]string, zb0004)
 			} else if len(z.ResetStatusesMap) > 0 {
-				for key := range z.ResetStatusesMap {
-					delete(z.ResetStatusesMap, key)
-				}
+				clear(z.ResetStatusesMap)
 			}
 			for zb0004 > 0 {
 				zb0004--
 				var za0005 string
+				var za0006 string
 				za0005, err = dc.ReadString()
 				if err != nil {
 					err = msgp.WrapError(err, "ResetStatusesMap")
 					return
 				}
-				var za0006 string
 				za0006, err = dc.ReadString()
 				if err != nil {
 					err = msgp.WrapError(err, "ResetStatusesMap", za0005)
@@ -1236,14 +1222,12 @@ func (z *ReplicationState) UnmarshalMsg(bts []byte) (o []byte, err error) {
 			if z.Targets == nil {
 				z.Targets = make(map[string]replication.StatusType, zb0002)
 			} else if len(z.Targets) > 0 {
-				for key := range z.Targets {
-					delete(z.Targets, key)
-				}
+				clear(z.Targets)
 			}
 			for zb0002 > 0 {
+				var za0001 string
+				var za0002 replication.StatusType
 				zb0002--
-				var za0001 string
 				za0001, bts, err = msgp.ReadStringBytes(bts)
 				if err != nil {
 					err = msgp.WrapError(err, "Targets")

@@ -1266,14 +1250,12 @@ func (z *ReplicationState) UnmarshalMsg(bts []byte) (o []byte, err error) {
 			if z.PurgeTargets == nil {
 				z.PurgeTargets = make(map[string]VersionPurgeStatusType, zb0003)
 			} else if len(z.PurgeTargets) > 0 {
-				for key := range z.PurgeTargets {
-					delete(z.PurgeTargets, key)
-				}
+				clear(z.PurgeTargets)
 			}
 			for zb0003 > 0 {
+				var za0003 string
+				var za0004 VersionPurgeStatusType
 				zb0003--
-				var za0003 string
 				za0003, bts, err = msgp.ReadStringBytes(bts)
 				if err != nil {
 					err = msgp.WrapError(err, "PurgeTargets")

@@ -1296,14 +1278,12 @@ func (z *ReplicationState) UnmarshalMsg(bts []byte) (o []byte, err error) {
 			if z.ResetStatusesMap == nil {
 				z.ResetStatusesMap = make(map[string]string, zb0004)
 			} else if len(z.ResetStatusesMap) > 0 {
-				for key := range z.ResetStatusesMap {
-					delete(z.ResetStatusesMap, key)
-				}
+				clear(z.ResetStatusesMap)
 			}
 			for zb0004 > 0 {
+				var za0005 string
+				var za0006 string
 				zb0004--
-				var za0005 string
 				za0005, bts, err = msgp.ReadStringBytes(bts)
 				if err != nil {
 					err = msgp.WrapError(err, "ResetStatusesMap")
@@ -1,7 +1,7 @@
-package cmd
-
 // Code generated by github.com/tinylib/msgp DO NOT EDIT.
 
+package cmd
+
 import (
 	"bytes"
 	"testing"
@@ -253,31 +253,31 @@ func getMustReplicateOptions(userDefined map[string]string, userTags string, sta
 func mustReplicate(ctx context.Context, bucket, object string, mopts mustReplicateOptions) (dsc ReplicateDecision) {
 	// object layer not initialized we return with no decision.
 	if newObjectLayerFn() == nil {
-		return
+		return dsc
 	}
 
 	// Disable server-side replication on object prefixes which are excluded
 	// from versioning via the MinIO bucket versioning extension.
 	if !globalBucketVersioningSys.PrefixEnabled(bucket, object) {
-		return
+		return dsc
 	}
 
 	replStatus := mopts.ReplicationStatus()
 	if replStatus == replication.Replica && !mopts.isMetadataReplication() {
-		return
+		return dsc
 	}
 
 	if mopts.replicationRequest { // incoming replication request on target cluster
-		return
+		return dsc
 	}
 
 	cfg, err := getReplicationConfig(ctx, bucket)
 	if err != nil {
 		replLogOnceIf(ctx, err, bucket)
-		return
+		return dsc
 	}
 	if cfg == nil {
-		return
+		return dsc
 	}
 
 	opts := replication.ObjectOpts{
@@ -348,16 +348,16 @@ func checkReplicateDelete(ctx context.Context, bucket string, dobj ObjectToDelet
 	rcfg, err := getReplicationConfig(ctx, bucket)
 	if err != nil || rcfg == nil {
 		replLogOnceIf(ctx, err, bucket)
-		return
+		return dsc
 	}
 	// If incoming request is a replication request, it does not need to be re-replicated.
 	if delOpts.ReplicationRequest {
-		return
+		return dsc
 	}
 	// Skip replication if this object's prefix is excluded from being
 	// versioned.
 	if !delOpts.Versioned {
-		return
+		return dsc
 	}
 	opts := replication.ObjectOpts{
 		Name: dobj.ObjectName,
@@ -617,10 +617,10 @@ func replicateDeleteToTarget(ctx context.Context, dobj DeletedObjectReplicationI
 
 	if dobj.VersionID == "" && rinfo.PrevReplicationStatus == replication.Completed && dobj.OpType != replication.ExistingObjectReplicationType {
 		rinfo.ReplicationStatus = rinfo.PrevReplicationStatus
-		return
+		return rinfo
 	}
 	if dobj.VersionID != "" && rinfo.VersionPurgeStatus == replication.VersionPurgeComplete {
-		return
+		return rinfo
 	}
 	if globalBucketTargetSys.isOffline(tgt.EndpointURL()) {
 		replLogOnceIf(ctx, fmt.Errorf("remote target is offline for bucket:%s arn:%s", dobj.Bucket, tgt.ARN), "replication-target-offline-delete-"+tgt.ARN)

@@ -641,7 +641,7 @@ func replicateDeleteToTarget(ctx context.Context, dobj DeletedObjectReplicationI
 		} else {
 			rinfo.VersionPurgeStatus = replication.VersionPurgeFailed
 		}
-		return
+		return rinfo
 	}
 	// early return if already replicated delete marker for existing object replication/ healing delete markers
 	if dobj.DeleteMarkerVersionID != "" {

@@ -658,13 +658,13 @@ func replicateDeleteToTarget(ctx context.Context, dobj DeletedObjectReplicationI
 			// delete marker already replicated
 			if dobj.VersionID == "" && rinfo.VersionPurgeStatus.Empty() {
 				rinfo.ReplicationStatus = replication.Completed
-				return
+				return rinfo
 			}
 		case isErrObjectNotFound(serr), isErrVersionNotFound(serr):
 			// version being purged is already not found on target.
 			if !rinfo.VersionPurgeStatus.Empty() {
 				rinfo.VersionPurgeStatus = replication.VersionPurgeComplete
-				return
+				return rinfo
 			}
 		case isErrReadQuorum(serr), isErrWriteQuorum(serr):
 			// destination has some quorum issues, perform removeObject() anyways

@@ -678,7 +678,7 @@ func replicateDeleteToTarget(ctx context.Context, dobj DeletedObjectReplicationI
 			if err != nil && !toi.ReplicationReady {
 				rinfo.ReplicationStatus = replication.Failed
 				rinfo.Err = err
-				return
+				return rinfo
 			}
 		}
 	}

@@ -709,7 +709,7 @@ func replicateDeleteToTarget(ctx context.Context, dobj DeletedObjectReplicationI
 			rinfo.VersionPurgeStatus = replication.VersionPurgeComplete
 		}
 	}
-	return
+	return rinfo
 }
 
 func getCopyObjMetadata(oi ObjectInfo, sc string) map[string]string {
@@ -910,7 +910,7 @@ func putReplicationOpts(ctx context.Context, sc string, objInfo ObjectInfo) (put
 		}
 		putOpts.ServerSideEncryption = sseEnc
 	}
-	return
+	return putOpts, isMP, err
 }
 
 type replicationAction string
@@ -1208,7 +1208,7 @@ func (ri ReplicateObjectInfo) replicateObject(ctx context.Context, objectAPI Obj
 	if ri.TargetReplicationStatus(tgt.ARN) == replication.Completed && !ri.ExistingObjResync.Empty() && !ri.ExistingObjResync.mustResyncTarget(tgt.ARN) {
 		rinfo.ReplicationStatus = replication.Completed
 		rinfo.ReplicationResynced = true
-		return
+		return rinfo
 	}
 
 	if globalBucketTargetSys.isOffline(tgt.EndpointURL()) {

@@ -1220,7 +1220,7 @@ func (ri ReplicateObjectInfo) replicateObject(ctx context.Context, objectAPI Obj
 			UserAgent: "Internal: [Replication]",
 			Host:      globalLocalNodeName,
 		})
-		return
+		return rinfo
 	}
 
 	versioned := globalBucketVersioningSys.PrefixEnabled(bucket, object)

@@ -1244,7 +1244,7 @@ func (ri ReplicateObjectInfo) replicateObject(ctx context.Context, objectAPI Obj
 			})
 			replLogOnceIf(ctx, fmt.Errorf("unable to read source object %s/%s(%s): %w", bucket, object, objInfo.VersionID, err), object+":"+objInfo.VersionID)
 		}
-		return
+		return rinfo
 	}
 	defer gr.Close()

@@ -1268,7 +1268,7 @@ func (ri ReplicateObjectInfo) replicateObject(ctx context.Context, objectAPI Obj
 			UserAgent: "Internal: [Replication]",
 			Host:      globalLocalNodeName,
 		})
-		return
+		return rinfo
 	}
 }

@@ -1307,7 +1307,7 @@ func (ri ReplicateObjectInfo) replicateObject(ctx context.Context, objectAPI Obj
 			UserAgent: "Internal: [Replication]",
 			Host:      globalLocalNodeName,
 		})
-		return
+		return rinfo
 	}
 
 	var headerSize int

@@ -1344,7 +1344,7 @@ func (ri ReplicateObjectInfo) replicateObject(ctx context.Context, objectAPI Obj
 			globalBucketTargetSys.markOffline(tgt.EndpointURL())
 		}
 	}
-	return
+	return rinfo
 }
 
 // replicateAll replicates metadata for specified version of the object to destination bucket
@@ -1380,7 +1380,7 @@ func (ri ReplicateObjectInfo) replicateAll(ctx context.Context, objectAPI Object
 			UserAgent: "Internal: [Replication]",
 			Host:      globalLocalNodeName,
 		})
-		return
+		return rinfo
 	}
 
 	versioned := globalBucketVersioningSys.PrefixEnabled(bucket, object)

@@ -1405,7 +1405,7 @@ func (ri ReplicateObjectInfo) replicateAll(ctx context.Context, objectAPI Object
 			})
 			replLogIf(ctx, fmt.Errorf("unable to replicate to target %s for %s/%s(%s): %w", tgt.EndpointURL(), bucket, object, objInfo.VersionID, err))
 		}
-		return
+		return rinfo
 	}
 	defer gr.Close()

@@ -1418,7 +1418,7 @@ func (ri ReplicateObjectInfo) replicateAll(ctx context.Context, objectAPI Object
 	if objInfo.TargetReplicationStatus(tgt.ARN) == replication.Completed && !ri.ExistingObjResync.Empty() && !ri.ExistingObjResync.mustResyncTarget(tgt.ARN) {
 		rinfo.ReplicationStatus = replication.Completed
 		rinfo.ReplicationResynced = true
-		return
+		return rinfo
 	}
 
 	size, err := objInfo.GetActualSize()

@@ -1431,7 +1431,7 @@ func (ri ReplicateObjectInfo) replicateAll(ctx context.Context, objectAPI Object
 			UserAgent: "Internal: [Replication]",
 			Host:      globalLocalNodeName,
 		})
-		return
+		return rinfo
 	}
 
 	// Set the encrypted size for SSE-C objects

@@ -1494,7 +1494,7 @@ func (ri ReplicateObjectInfo) replicateAll(ctx context.Context, objectAPI Object
 				rinfo.ReplicationAction = rAction
 				rinfo.ReplicationStatus = replication.Completed
 			}
-			return
+			return rinfo
 		}
 	} else {
 		// SSEC objects will refuse HeadObject without the decryption key.

@@ -1528,7 +1528,7 @@ func (ri ReplicateObjectInfo) replicateAll(ctx context.Context, objectAPI Object
 				UserAgent: "Internal: [Replication]",
 				Host:      globalLocalNodeName,
 			})
-			return
+			return rinfo
 		}
 	}
 applyAction:

@@ -1594,7 +1594,7 @@ applyAction:
 			UserAgent: "Internal: [Replication]",
 			Host:      globalLocalNodeName,
 		})
-		return
+		return rinfo
 	}
 	var headerSize int
 	for k, v := range putOpts.Header() {

@@ -1631,7 +1631,7 @@ applyAction:
 			}
 		}
 	}
-	return
+	return rinfo
 }
 
 func replicateObjectWithMultipart(ctx context.Context, c *minio.Core, bucket, object string, r io.Reader, objInfo ObjectInfo, opts minio.PutObjectOptions) (err error) {
@@ -2677,7 +2677,7 @@ func (c replicationConfig) Replicate(opts replication.ObjectOpts) bool {
 // Resync returns true if replication reset is requested
 func (c replicationConfig) Resync(ctx context.Context, oi ObjectInfo, dsc ReplicateDecision, tgtStatuses map[string]replication.StatusType) (r ResyncDecision) {
 	if c.Empty() {
-		return
+		return r
 	}
 
 	// Now overlay existing object replication choices for target

@@ -2693,7 +2693,7 @@ func (c replicationConfig) Resync(ctx context.Context, oi ObjectInfo, dsc Replic
 	tgtArns := c.Config.FilterTargetArns(opts)
 	// indicates no matching target with Existing object replication enabled.
 	if len(tgtArns) == 0 {
-		return
+		return r
 	}
 	for _, t := range tgtArns {
 		opts.TargetArn = t

@@ -2719,7 +2719,7 @@ func (c replicationConfig) resync(oi ObjectInfo, dsc ReplicateDecision, tgtStatu
 		targets: make(map[string]ResyncTargetDecision, len(dsc.targetsMap)),
 	}
 	if c.remotes == nil {
-		return
+		return r
 	}
 	for _, tgt := range c.remotes.Targets {
 		d, ok := dsc.targetsMap[tgt.Arn]

@@ -2731,7 +2731,7 @@ func (c replicationConfig) resync(oi ObjectInfo, dsc ReplicateDecision, tgtStatu
 		}
 		r.targets[d.Arn] = resyncTarget(oi, tgt.Arn, tgt.ResetID, tgt.ResetBeforeDate, tgtStatuses[tgt.Arn])
 	}
-	return
+	return r
 }
 
 func targetResetHeader(arn string) string {
@@ -2750,28 +2750,28 @@ func resyncTarget(oi ObjectInfo, arn string, resetID string, resetBeforeDate tim
 	if !ok { // existing object replication is enabled and object version is unreplicated so far.
 		if resetID != "" && oi.ModTime.Before(resetBeforeDate) { // trigger replication if `mc replicate reset` requested
 			rd.Replicate = true
-			return
+			return rd
 		}
 		// For existing object reset - this condition is needed
 		rd.Replicate = tgtStatus == ""
-		return
+		return rd
 	}
 	if resetID == "" || resetBeforeDate.Equal(timeSentinel) { // no reset in progress
-		return
+		return rd
 	}
 
 	// if already replicated, return true if a new reset was requested.
 	splits := strings.SplitN(rs, ";", 2)
 	if len(splits) != 2 {
-		return
+		return rd
 	}
 	newReset := splits[1] != resetID
 	if !newReset && tgtStatus == replication.Completed {
 		// already replicated and no reset requested
-		return
+		return rd
 	}
 	rd.Replicate = newReset && oi.ModTime.Before(resetBeforeDate)
-	return
+	return rd
 }
 
 const resyncTimeInterval = time.Minute * 1
@@ -3422,12 +3422,12 @@ func queueReplicationHeal(ctx context.Context, bucket string, oi ObjectInfo, rcf
 	roi = getHealReplicateObjectInfo(oi, rcfg)
 	roi.RetryCount = uint32(retryCount)
 	if !roi.Dsc.ReplicateAny() {
-		return
+		return roi
 	}
 	// early return if replication already done, otherwise we need to determine if this
 	// version is an existing object that needs healing.
 	if oi.ReplicationStatus == replication.Completed && oi.VersionPurgeStatus.Empty() && !roi.ExistingObjResync.mustResync() {
-		return
+		return roi
 	}
 
 	if roi.DeleteMarker || !roi.VersionPurgeStatus.Empty() {

@@ -3457,14 +3457,14 @@ func queueReplicationHeal(ctx context.Context, bucket string, oi ObjectInfo, rcf
 			roi.ReplicationStatus == replication.Failed ||
 			roi.VersionPurgeStatus == replication.VersionPurgeFailed || roi.VersionPurgeStatus == replication.VersionPurgePending {
 			globalReplicationPool.Get().queueReplicaDeleteTask(dv)
-			return
+			return roi
 		}
 		// if replication status is Complete on DeleteMarker and existing object resync required
 		if roi.ExistingObjResync.mustResync() && (roi.ReplicationStatus == replication.Completed || roi.ReplicationStatus.Empty()) {
 			queueReplicateDeletesWrapper(dv, roi.ExistingObjResync)
-			return
+			return roi
 		}
-		return
+		return roi
 	}
 	if roi.ExistingObjResync.mustResync() {
 		roi.OpType = replication.ExistingObjectReplicationType

@@ -3473,13 +3473,13 @@ func queueReplicationHeal(ctx context.Context, bucket string, oi ObjectInfo, rcf
 	case replication.Pending, replication.Failed:
 		roi.EventType = ReplicateHeal
 		globalReplicationPool.Get().queueReplicaTask(roi)
-		return
+		return roi
 	}
 	if roi.ExistingObjResync.mustResync() {
 		roi.EventType = ReplicateExisting
 		globalReplicationPool.Get().queueReplicaTask(roi)
 	}
-	return
+	return roi
 }
 
 const (
@@ -38,7 +38,7 @@ type ReplicationLatency struct {
 // Merge two replication latency into a new one
 func (rl ReplicationLatency) merge(other ReplicationLatency) (newReplLatency ReplicationLatency) {
 	newReplLatency.UploadHistogram = rl.UploadHistogram.Merge(other.UploadHistogram)
-	return
+	return newReplLatency
 }
 
 // Get upload latency of each object size range

@@ -49,7 +49,7 @@ func (rl ReplicationLatency) getUploadLatency() (ret map[string]uint64) {
 		// Convert nanoseconds to milliseconds
 		ret[sizeTagToString(k)] = uint64(v.avg() / time.Millisecond)
 	}
-	return
+	return ret
 }
 
 // Update replication upload latency with a new value

@@ -64,7 +64,7 @@ type ReplicationLastMinute struct {
 
 func (rl ReplicationLastMinute) merge(other ReplicationLastMinute) (nl ReplicationLastMinute) {
 	nl = ReplicationLastMinute{rl.LastMinute.merge(other.LastMinute)}
-	return
+	return nl
 }
 
 func (rl *ReplicationLastMinute) addsize(n int64) {
@@ -1,7 +1,7 @@
-package cmd
-
 // Code generated by github.com/tinylib/msgp DO NOT EDIT.
 
+package cmd
+
 import (
 	"github.com/tinylib/msgp/msgp"
 )
@@ -617,19 +617,17 @@ func (z *BucketReplicationStats) DecodeMsg(dc *msgp.Reader) (err error) {
 			if z.Stats == nil {
 				z.Stats = make(map[string]*BucketReplicationStat, zb0002)
 			} else if len(z.Stats) > 0 {
-				for key := range z.Stats {
-					delete(z.Stats, key)
-				}
+				clear(z.Stats)
 			}
 			for zb0002 > 0 {
 				zb0002--
 				var za0001 string
+				var za0002 *BucketReplicationStat
 				za0001, err = dc.ReadString()
 				if err != nil {
 					err = msgp.WrapError(err, "Stats")
 					return
 				}
-				var za0002 *BucketReplicationStat
 				if dc.IsNil() {
 					err = dc.ReadNil()
 					if err != nil {

@@ -943,14 +941,12 @@ func (z *BucketReplicationStats) UnmarshalMsg(bts []byte) (o []byte, err error)
 			if z.Stats == nil {
 				z.Stats = make(map[string]*BucketReplicationStat, zb0002)
 			} else if len(z.Stats) > 0 {
-				for key := range z.Stats {
-					delete(z.Stats, key)
-				}
+				clear(z.Stats)
 			}
 			for zb0002 > 0 {
+				var za0001 string
+				var za0002 *BucketReplicationStat
 				zb0002--
-				var za0001 string
 				za0001, bts, err = msgp.ReadStringBytes(bts)
 				if err != nil {
 					err = msgp.WrapError(err, "Stats")

@@ -1402,19 +1398,17 @@ func (z *BucketStatsMap) DecodeMsg(dc *msgp.Reader) (err error) {
 			if z.Stats == nil {
 				z.Stats = make(map[string]BucketStats, zb0002)
 			} else if len(z.Stats) > 0 {
-				for key := range z.Stats {
-					delete(z.Stats, key)
-				}
+				clear(z.Stats)
 			}
 			for zb0002 > 0 {
 				zb0002--
 				var za0001 string
+				var za0002 BucketStats
 				za0001, err = dc.ReadString()
 				if err != nil {
 					err = msgp.WrapError(err, "Stats")
 					return
 				}
-				var za0002 BucketStats
 				err = za0002.DecodeMsg(dc)
 				if err != nil {
 					err = msgp.WrapError(err, "Stats", za0001)

@@ -1526,14 +1520,12 @@ func (z *BucketStatsMap) UnmarshalMsg(bts []byte) (o []byte, err error) {
 			if z.Stats == nil {
 				z.Stats = make(map[string]BucketStats, zb0002)
 			} else if len(z.Stats) > 0 {
-				for key := range z.Stats {
-					delete(z.Stats, key)
-				}
+				clear(z.Stats)
 			}
 			for zb0002 > 0 {
+				var za0001 string
+				var za0002 BucketStats
 				zb0002--
-				var za0001 string
 				za0001, bts, err = msgp.ReadStringBytes(bts)
 				if err != nil {
 					err = msgp.WrapError(err, "Stats")
@@ -1,7 +1,7 @@
-package cmd
-
 // Code generated by github.com/tinylib/msgp DO NOT EDIT.
 
+package cmd
+
 import (
 	"bytes"
 	"testing"
@@ -285,7 +285,7 @@ func (sys *BucketTargetSys) ListTargets(ctx context.Context, bucket, arnType str
 			}
 		}
 	}
-	return
+	return targets
 }
 
 // ListBucketTargets - gets list of bucket targets for this bucket.

@@ -668,7 +668,7 @@ func (sys *BucketTargetSys) getRemoteTargetClient(tcfg *madmin.BucketTarget) (*T
 // getRemoteARN gets existing ARN for an endpoint or generates a new one.
 func (sys *BucketTargetSys) getRemoteARN(bucket string, target *madmin.BucketTarget, deplID string) (arn string, exists bool) {
 	if target == nil {
-		return
+		return arn, exists
 	}
 	sys.RLock()
 	defer sys.RUnlock()

@@ -682,7 +682,7 @@ func (sys *BucketTargetSys) getRemoteARN(bucket string, target *madmin.BucketTar
 		}
 	}
 	if !target.Type.IsValid() {
-		return
+		return arn, exists
 	}
 	return generateARN(target, deplID), false
 }
@@ -167,7 +167,7 @@ func (sys *HTTPConsoleLoggerSys) Content() (logs []log.Entry) {
 	})
 	sys.RUnlock()
 
-	return
+	return logs
 }
 
 // Cancel - cancels the target
@@ -106,16 +106,14 @@ func (p *scannerMetrics) log(s scannerMetric, paths ...string) func(custom map[s
 
 // time n scanner actions.
 // Use for s < scannerMetricLastRealtime
-func (p *scannerMetrics) timeN(s scannerMetric) func(n int) func() {
+func (p *scannerMetrics) timeN(s scannerMetric) func(n int) {
 	startTime := time.Now()
-	return func(n int) func() {
-		return func() {
-			duration := time.Since(startTime)
+	return func(n int) {
+		duration := time.Since(startTime)
 
-			atomic.AddUint64(&p.operations[s], uint64(n))
-			if s < scannerMetricLastRealtime {
-				p.latency[s].add(duration)
-			}
+		atomic.AddUint64(&p.operations[s], uint64(n))
+		if s < scannerMetricLastRealtime {
+			p.latency[s].add(duration)
 		}
 	}
 }
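The `timeN` change above collapses a closure that returned another closure into a single `func(n int)` that stamps the duration when called. A runnable sketch of the new shape (simplified; the metrics bookkeeping is replaced by a print):

package main

import (
	"fmt"
	"time"
)

// Old shape: timeN returned func(n int) func(), so callers had to invoke
// the result twice. New shape, as in the hunk above: the returned
// func(n int) records elapsed time and count in one call.
func timeN() func(n int) {
	startTime := time.Now()
	return func(n int) {
		duration := time.Since(startTime)
		fmt.Printf("n=%d took %v\n", n, duration)
	}
}

func main() {
	done := timeN()
	time.Sleep(10 * time.Millisecond)
	done(3) // record 3 operations against the elapsed time
}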
@@ -1221,11 +1221,11 @@ func (z *dataUsageHashMap) DecodeMsg(dc *msgp.Reader) (err error) {
 	zb0002, err = dc.ReadArrayHeader()
 	if err != nil {
 		err = msgp.WrapError(err)
-		return
+		return err
 	}
 	if zb0002 == 0 {
 		*z = nil
-		return
+		return err
 	}
 	*z = make(dataUsageHashMap, zb0002)
 	for i := uint32(0); i < zb0002; i++ {

@@ -1234,12 +1234,12 @@ func (z *dataUsageHashMap) DecodeMsg(dc *msgp.Reader) (err error) {
 		zb0003, err = dc.ReadString()
 		if err != nil {
 			err = msgp.WrapError(err)
-			return
+			return err
 		}
 		(*z)[zb0003] = struct{}{}
 	}
 	}
-	return
+	return err
 }
 
 // EncodeMsg implements msgp.Encodable

@@ -1247,16 +1247,16 @@ func (z dataUsageHashMap) EncodeMsg(en *msgp.Writer) (err error) {
 	err = en.WriteArrayHeader(uint32(len(z)))
 	if err != nil {
 		err = msgp.WrapError(err)
-		return
+		return err
 	}
 	for zb0004 := range z {
 		err = en.WriteString(zb0004)
 		if err != nil {
 			err = msgp.WrapError(err, zb0004)
-			return
+			return err
 		}
 	}
-	return
+	return err
 }
 
 // MarshalMsg implements msgp.Marshaler

@@ -1266,7 +1266,7 @@ func (z dataUsageHashMap) MarshalMsg(b []byte) (o []byte, err error) {
 	for zb0004 := range z {
 		o = msgp.AppendString(o, zb0004)
 	}
-	return
+	return o, err
 }
 
 // UnmarshalMsg implements msgp.Unmarshaler

@@ -1275,7 +1275,7 @@ func (z *dataUsageHashMap) UnmarshalMsg(bts []byte) (o []byte, err error) {
 	zb0002, bts, err = msgp.ReadArrayHeaderBytes(bts)
 	if err != nil {
 		err = msgp.WrapError(err)
-		return
+		return o, err
 	}
 	if zb0002 == 0 {
 		*z = nil

@@ -1288,13 +1288,13 @@ func (z *dataUsageHashMap) UnmarshalMsg(bts []byte) (o []byte, err error) {
 		zb0003, bts, err = msgp.ReadStringBytes(bts)
 		if err != nil {
 			err = msgp.WrapError(err)
-			return
+			return o, err
 		}
 		(*z)[zb0003] = struct{}{}
 	}
 	}
 	o = bts
-	return
+	return o, err
 }
 
 // Msgsize returns an upper bound estimate of the number of bytes occupied by the serialized message

@@ -1303,7 +1303,7 @@ func (z dataUsageHashMap) Msgsize() (s int) {
 	for zb0004 := range z {
 		s += msgp.StringPrefixSize + len(zb0004)
 	}
-	return
+	return s
 }
 
 //msgp:encode ignore currentScannerCycle
@@ -1,7 +1,7 @@
-package cmd
-
 // Code generated by github.com/tinylib/msgp DO NOT EDIT.
 
+package cmd
+
 import (
 	"time"
@@ -36,19 +36,17 @@ func (z *allTierStats) DecodeMsg(dc *msgp.Reader) (err error) {
 			if z.Tiers == nil {
 				z.Tiers = make(map[string]tierStats, zb0002)
 			} else if len(z.Tiers) > 0 {
-				for key := range z.Tiers {
-					delete(z.Tiers, key)
-				}
+				clear(z.Tiers)
 			}
 			for zb0002 > 0 {
 				zb0002--
 				var za0001 string
+				var za0002 tierStats
 				za0001, err = dc.ReadString()
 				if err != nil {
 					err = msgp.WrapError(err, "Tiers")
 					return
 				}
-				var za0002 tierStats
 				var zb0003 uint32
 				zb0003, err = dc.ReadMapHeader()
 				if err != nil {

@@ -207,14 +205,12 @@ func (z *allTierStats) UnmarshalMsg(bts []byte) (o []byte, err error) {
 			if z.Tiers == nil {
 				z.Tiers = make(map[string]tierStats, zb0002)
 			} else if len(z.Tiers) > 0 {
-				for key := range z.Tiers {
-					delete(z.Tiers, key)
-				}
+				clear(z.Tiers)
 			}
 			for zb0002 > 0 {
+				var za0001 string
+				var za0002 tierStats
 				zb0002--
-				var za0001 string
 				za0001, bts, err = msgp.ReadStringBytes(bts)
 				if err != nil {
 					err = msgp.WrapError(err, "Tiers")
@@ -415,19 +411,17 @@ func (z *dataUsageCache) DecodeMsg(dc *msgp.Reader) (err error) {
 			if z.Cache == nil {
 				z.Cache = make(map[string]dataUsageEntry, zb0002)
 			} else if len(z.Cache) > 0 {
-				for key := range z.Cache {
-					delete(z.Cache, key)
-				}
+				clear(z.Cache)
 			}
 			for zb0002 > 0 {
 				zb0002--
 				var za0001 string
+				var za0002 dataUsageEntry
 				za0001, err = dc.ReadString()
 				if err != nil {
 					err = msgp.WrapError(err, "Cache")
 					return
 				}
-				var za0002 dataUsageEntry
 				err = za0002.DecodeMsg(dc)
 				if err != nil {
 					err = msgp.WrapError(err, "Cache", za0001)

@@ -543,14 +537,12 @@ func (z *dataUsageCache) UnmarshalMsg(bts []byte) (o []byte, err error) {
 			if z.Cache == nil {
 				z.Cache = make(map[string]dataUsageEntry, zb0002)
 			} else if len(z.Cache) > 0 {
-				for key := range z.Cache {
-					delete(z.Cache, key)
-				}
+				clear(z.Cache)
 			}
 			for zb0002 > 0 {
+				var za0001 string
+				var za0002 dataUsageEntry
 				zb0002--
-				var za0001 string
 				za0001, bts, err = msgp.ReadStringBytes(bts)
 				if err != nil {
 					err = msgp.WrapError(err, "Cache")

@@ -799,19 +791,17 @@ func (z *dataUsageCacheV2) DecodeMsg(dc *msgp.Reader) (err error) {
 			if z.Cache == nil {
 				z.Cache = make(map[string]dataUsageEntryV2, zb0002)
 			} else if len(z.Cache) > 0 {
-				for key := range z.Cache {
-					delete(z.Cache, key)
-				}
+				clear(z.Cache)
 			}
 			for zb0002 > 0 {
 				zb0002--
 				var za0001 string
+				var za0002 dataUsageEntryV2
 				za0001, err = dc.ReadString()
 				if err != nil {
 					err = msgp.WrapError(err, "Cache")
 					return
 				}
-				var za0002 dataUsageEntryV2
 				err = za0002.DecodeMsg(dc)
 				if err != nil {
 					err = msgp.WrapError(err, "Cache", za0001)

@@ -864,14 +854,12 @@ func (z *dataUsageCacheV2) UnmarshalMsg(bts []byte) (o []byte, err error) {
 			if z.Cache == nil {
 				z.Cache = make(map[string]dataUsageEntryV2, zb0002)
 			} else if len(z.Cache) > 0 {
-				for key := range z.Cache {
-					delete(z.Cache, key)
-				}
+				clear(z.Cache)
 			}
 			for zb0002 > 0 {
+				var za0001 string
+				var za0002 dataUsageEntryV2
 				zb0002--
-				var za0001 string
 				za0001, bts, err = msgp.ReadStringBytes(bts)
 				if err != nil {
 					err = msgp.WrapError(err, "Cache")

@@ -942,19 +930,17 @@ func (z *dataUsageCacheV3) DecodeMsg(dc *msgp.Reader) (err error) {
 			if z.Cache == nil {
 				z.Cache = make(map[string]dataUsageEntryV3, zb0002)
 			} else if len(z.Cache) > 0 {
-				for key := range z.Cache {
-					delete(z.Cache, key)
-				}
+				clear(z.Cache)
 			}
 			for zb0002 > 0 {
 				zb0002--
 				var za0001 string
+				var za0002 dataUsageEntryV3
 				za0001, err = dc.ReadString()
 				if err != nil {
 					err = msgp.WrapError(err, "Cache")
 					return
 				}
-				var za0002 dataUsageEntryV3
 				err = za0002.DecodeMsg(dc)
 				if err != nil {
 					err = msgp.WrapError(err, "Cache", za0001)

@@ -1007,14 +993,12 @@ func (z *dataUsageCacheV3) UnmarshalMsg(bts []byte) (o []byte, err error) {
 			if z.Cache == nil {
 				z.Cache = make(map[string]dataUsageEntryV3, zb0002)
 			} else if len(z.Cache) > 0 {
-				for key := range z.Cache {
-					delete(z.Cache, key)
-				}
+				clear(z.Cache)
 			}
 			for zb0002 > 0 {
+				var za0001 string
+				var za0002 dataUsageEntryV3
 				zb0002--
-				var za0001 string
 				za0001, bts, err = msgp.ReadStringBytes(bts)
 				if err != nil {
 					err = msgp.WrapError(err, "Cache")

@@ -1085,19 +1069,17 @@ func (z *dataUsageCacheV4) DecodeMsg(dc *msgp.Reader) (err error) {
 			if z.Cache == nil {
 				z.Cache = make(map[string]dataUsageEntryV4, zb0002)
 			} else if len(z.Cache) > 0 {
-				for key := range z.Cache {
-					delete(z.Cache, key)
-				}
+				clear(z.Cache)
 			}
 			for zb0002 > 0 {
 				zb0002--
 				var za0001 string
+				var za0002 dataUsageEntryV4
 				za0001, err = dc.ReadString()
 				if err != nil {
 					err = msgp.WrapError(err, "Cache")
 					return
 				}
-				var za0002 dataUsageEntryV4
 				err = za0002.DecodeMsg(dc)
 				if err != nil {
 					err = msgp.WrapError(err, "Cache", za0001)

@@ -1150,14 +1132,12 @@ func (z *dataUsageCacheV4) UnmarshalMsg(bts []byte) (o []byte, err error) {
 			if z.Cache == nil {
 				z.Cache = make(map[string]dataUsageEntryV4, zb0002)
 			} else if len(z.Cache) > 0 {
-				for key := range z.Cache {
-					delete(z.Cache, key)
-				}
+				clear(z.Cache)
 			}
 			for zb0002 > 0 {
+				var za0001 string
+				var za0002 dataUsageEntryV4
 				zb0002--
-				var za0001 string
 				za0001, bts, err = msgp.ReadStringBytes(bts)
 				if err != nil {
 					err = msgp.WrapError(err, "Cache")

@@ -1228,19 +1208,17 @@ func (z *dataUsageCacheV5) DecodeMsg(dc *msgp.Reader) (err error) {
 			if z.Cache == nil {
 				z.Cache = make(map[string]dataUsageEntryV5, zb0002)
 			} else if len(z.Cache) > 0 {
-				for key := range z.Cache {
-					delete(z.Cache, key)
-				}
+				clear(z.Cache)
 			}
 			for zb0002 > 0 {
 				zb0002--
 				var za0001 string
+				var za0002 dataUsageEntryV5
 				za0001, err = dc.ReadString()
 				if err != nil {
 					err = msgp.WrapError(err, "Cache")
 					return
 				}
-				var za0002 dataUsageEntryV5
 				err = za0002.DecodeMsg(dc)
 				if err != nil {
 					err = msgp.WrapError(err, "Cache", za0001)

@@ -1293,14 +1271,12 @@ func (z *dataUsageCacheV5) UnmarshalMsg(bts []byte) (o []byte, err error) {
 			if z.Cache == nil {
 				z.Cache = make(map[string]dataUsageEntryV5, zb0002)
 			} else if len(z.Cache) > 0 {
-				for key := range z.Cache {
-					delete(z.Cache, key)
-				}
+				clear(z.Cache)
 			}
 			for zb0002 > 0 {
+				var za0001 string
+				var za0002 dataUsageEntryV5
 				zb0002--
-				var za0001 string
 				za0001, bts, err = msgp.ReadStringBytes(bts)
 				if err != nil {
 					err = msgp.WrapError(err, "Cache")

@@ -1371,19 +1347,17 @@ func (z *dataUsageCacheV6) DecodeMsg(dc *msgp.Reader) (err error) {
 			if z.Cache == nil {
 				z.Cache = make(map[string]dataUsageEntryV6, zb0002)
 			} else if len(z.Cache) > 0 {
-				for key := range z.Cache {
-					delete(z.Cache, key)
-				}
+				clear(z.Cache)
 			}
 			for zb0002 > 0 {
 				zb0002--
 				var za0001 string
+				var za0002 dataUsageEntryV6
 				za0001, err = dc.ReadString()
 				if err != nil {
 					err = msgp.WrapError(err, "Cache")
 					return
 				}
-				var za0002 dataUsageEntryV6
 				err = za0002.DecodeMsg(dc)
 				if err != nil {
 					err = msgp.WrapError(err, "Cache", za0001)

@@ -1436,14 +1410,12 @@ func (z *dataUsageCacheV6) UnmarshalMsg(bts []byte) (o []byte, err error) {
 			if z.Cache == nil {
 				z.Cache = make(map[string]dataUsageEntryV6, zb0002)
 			} else if len(z.Cache) > 0 {
-				for key := range z.Cache {
-					delete(z.Cache, key)
-				}
+				clear(z.Cache)
 			}
 			for zb0002 > 0 {
+				var za0001 string
+				var za0002 dataUsageEntryV6
 				zb0002--
-				var za0001 string
 				za0001, bts, err = msgp.ReadStringBytes(bts)
 				if err != nil {
 					err = msgp.WrapError(err, "Cache")

@@ -1514,19 +1486,17 @@ func (z *dataUsageCacheV7) DecodeMsg(dc *msgp.Reader) (err error) {
 			if z.Cache == nil {
 				z.Cache = make(map[string]dataUsageEntryV7, zb0002)
 			} else if len(z.Cache) > 0 {
-				for key := range z.Cache {
-					delete(z.Cache, key)
-				}
+				clear(z.Cache)
 			}
 			for zb0002 > 0 {
 				zb0002--
 				var za0001 string
+				var za0002 dataUsageEntryV7
 				za0001, err = dc.ReadString()
 				if err != nil {
 					err = msgp.WrapError(err, "Cache")
 					return
 				}
-				var za0002 dataUsageEntryV7
 				err = za0002.DecodeMsg(dc)
 				if err != nil {
 					err = msgp.WrapError(err, "Cache", za0001)

@@ -1579,14 +1549,12 @@ func (z *dataUsageCacheV7) UnmarshalMsg(bts []byte) (o []byte, err error) {
 			if z.Cache == nil {
 				z.Cache = make(map[string]dataUsageEntryV7, zb0002)
 			} else if len(z.Cache) > 0 {
-				for key := range z.Cache {
-					delete(z.Cache, key)
-				}
+				clear(z.Cache)
 			}
 			for zb0002 > 0 {
+				var za0001 string
+				var za0002 dataUsageEntryV7
 				zb0002--
-				var za0001 string
 				za0001, bts, err = msgp.ReadStringBytes(bts)
 				if err != nil {
 					err = msgp.WrapError(err, "Cache")
@@ -1745,19 +1713,17 @@ func (z *dataUsageEntry) DecodeMsg(dc *msgp.Reader) (err error) {
 			if z.AllTierStats.Tiers == nil {
 				z.AllTierStats.Tiers = make(map[string]tierStats, zb0005)
 			} else if len(z.AllTierStats.Tiers) > 0 {
-				for key := range z.AllTierStats.Tiers {
-					delete(z.AllTierStats.Tiers, key)
-				}
+				clear(z.AllTierStats.Tiers)
 			}
 			for zb0005 > 0 {
 				zb0005--
 				var za0003 string
+				var za0004 tierStats
 				za0003, err = dc.ReadString()
 				if err != nil {
 					err = msgp.WrapError(err, "AllTierStats", "Tiers")
 					return
 				}
-				var za0004 tierStats
 				var zb0006 uint32
 				zb0006, err = dc.ReadMapHeader()
 				if err != nil {

@@ -2211,14 +2177,12 @@ func (z *dataUsageEntry) UnmarshalMsg(bts []byte) (o []byte, err error) {
 			if z.AllTierStats.Tiers == nil {
 				z.AllTierStats.Tiers = make(map[string]tierStats, zb0005)
 			} else if len(z.AllTierStats.Tiers) > 0 {
-				for key := range z.AllTierStats.Tiers {
-					delete(z.AllTierStats.Tiers, key)
-				}
+				clear(z.AllTierStats.Tiers)
 			}
 			for zb0005 > 0 {
+				var za0003 string
+				var za0004 tierStats
 				zb0005--
-				var za0003 string
 				za0003, bts, err = msgp.ReadStringBytes(bts)
 				if err != nil {
 					err = msgp.WrapError(err, "AllTierStats", "Tiers")

@@ -2984,19 +2948,17 @@ func (z *dataUsageEntryV7) DecodeMsg(dc *msgp.Reader) (err error) {
 			if z.AllTierStats.Tiers == nil {
 				z.AllTierStats.Tiers = make(map[string]tierStats, zb0005)
 			} else if len(z.AllTierStats.Tiers) > 0 {
-				for key := range z.AllTierStats.Tiers {
-					delete(z.AllTierStats.Tiers, key)
-				}
+				clear(z.AllTierStats.Tiers)
 			}
 			for zb0005 > 0 {
 				zb0005--
 				var za0003 string
+				var za0004 tierStats
 				za0003, err = dc.ReadString()
 				if err != nil {
 					err = msgp.WrapError(err, "AllTierStats", "Tiers")
 					return
 				}
-				var za0004 tierStats
 				var zb0006 uint32
 				zb0006, err = dc.ReadMapHeader()
 				if err != nil {

@@ -3192,14 +3154,12 @@ func (z *dataUsageEntryV7) UnmarshalMsg(bts []byte) (o []byte, err error) {
 			if z.AllTierStats.Tiers == nil {
 				z.AllTierStats.Tiers = make(map[string]tierStats, zb0005)
 			} else if len(z.AllTierStats.Tiers) > 0 {
-				for key := range z.AllTierStats.Tiers {
-					delete(z.AllTierStats.Tiers, key)
-				}
+				clear(z.AllTierStats.Tiers)
 			}
 			for zb0005 > 0 {
+				var za0003 string
+				var za0004 tierStats
 				zb0005--
-				var za0003 string
 				za0003, bts, err = msgp.ReadStringBytes(bts)
 				if err != nil {
 					err = msgp.WrapError(err, "AllTierStats", "Tiers")
@@ -1,7 +1,7 @@
-package cmd
-
 // Code generated by github.com/tinylib/msgp DO NOT EDIT.
 
+package cmd
+
 import (
 	"bytes"
 	"testing"
@@ -56,13 +56,13 @@ func TestDataUsageUpdate(t *testing.T) {
 			var s os.FileInfo
 			s, err = os.Stat(item.Path)
 			if err != nil {
-				return
+				return sizeS, err
 			}
 			sizeS.totalSize = s.Size()
 			sizeS.versions++
 			return sizeS, nil
 		}
-		return
+		return sizeS, err
 	}
 	xls := xlStorage{drivePath: base, diskInfoCache: cachevalue.New[DiskInfo]()}
 	xls.diskInfoCache.InitOnce(time.Second, cachevalue.Opts{}, func(ctx context.Context) (DiskInfo, error) {

@@ -279,13 +279,13 @@ func TestDataUsageUpdatePrefix(t *testing.T) {
 			var s os.FileInfo
 			s, err = os.Stat(item.Path)
 			if err != nil {
-				return
+				return sizeS, err
 			}
 			sizeS.totalSize = s.Size()
 			sizeS.versions++
-			return
+			return sizeS, err
 		}
-		return
+		return sizeS, err
 	}
 
 	weSleep := func() bool { return false }

@@ -569,13 +569,13 @@ func TestDataUsageCacheSerialize(t *testing.T) {
 			var s os.FileInfo
 			s, err = os.Stat(item.Path)
 			if err != nil {
-				return
+				return sizeS, err
 			}
 			sizeS.versions++
 			sizeS.totalSize = s.Size()
-			return
+			return sizeS, err
 		}
-		return
+		return sizeS, err
 	}
 	xls := xlStorage{drivePath: base, diskInfoCache: cachevalue.New[DiskInfo]()}
 	xls.diskInfoCache.InitOnce(time.Second, cachevalue.Opts{}, func(ctx context.Context) (DiskInfo, error) {
@@ -87,7 +87,7 @@ func (d *DummyDataGen) Read(b []byte) (n int, err error) {
 		}
 		err = io.EOF
 	}
-	return
+	return n, err
 }
 
 func (d *DummyDataGen) Seek(offset int64, whence int) (int64, error) {
@@ -450,7 +450,7 @@ func setEncryptionMetadata(r *http.Request, bucket, object string, metadata map[
 		}
 	}
 	_, err = newEncryptMetadata(r.Context(), kind, keyID, key, bucket, object, metadata, kmsCtx)
-	return
+	return err
 }
 
 // EncryptRequest takes the client provided content and encrypts the data
@@ -855,7 +855,7 @@ func tryDecryptETag(key []byte, encryptedETag string, sses3 bool) string {
 func (o *ObjectInfo) GetDecryptedRange(rs *HTTPRangeSpec) (encOff, encLength, skipLen int64, seqNumber uint32, partStart int, err error) {
 	if _, ok := crypto.IsEncrypted(o.UserDefined); !ok {
 		err = errors.New("Object is not encrypted")
-		return
+		return encOff, encLength, skipLen, seqNumber, partStart, err
 	}
 
 	if rs == nil {

@@ -873,7 +873,7 @@ func (o *ObjectInfo) GetDecryptedRange(rs *HTTPRangeSpec) (encOff, encLength, sk
 			partSize, err = sio.DecryptedSize(uint64(part.Size))
 			if err != nil {
 				err = errObjectTampered
-				return
+				return encOff, encLength, skipLen, seqNumber, partStart, err
 			}
 			sizes[i] = int64(partSize)
 			decObjSize += int64(partSize)

@@ -883,7 +883,7 @@ func (o *ObjectInfo) GetDecryptedRange(rs *HTTPRangeSpec) (encOff, encLength, sk
 		partSize, err = sio.DecryptedSize(uint64(o.Size))
 		if err != nil {
 			err = errObjectTampered
-			return
+			return encOff, encLength, skipLen, seqNumber, partStart, err
 		}
 		sizes = []int64{int64(partSize)}
 		decObjSize = sizes[0]

@@ -892,7 +892,7 @@ func (o *ObjectInfo) GetDecryptedRange(rs *HTTPRangeSpec) (encOff, encLength, sk
 	var off, length int64
 	off, length, err = rs.GetOffsetLength(decObjSize)
 	if err != nil {
-		return
+		return encOff, encLength, skipLen, seqNumber, partStart, err
 	}
 
 	// At this point, we have:
@@ -483,7 +483,7 @@ func TestGetDecryptedRange(t *testing.T) {
 			cumulativeSum += v
 			cumulativeEncSum += getEncSize(v)
 		}
-		return
+		return o, l, skip, sn, ps
 	}
 
 	for i, test := range testMPs {
@@ -443,7 +443,7 @@ func buildDisksLayoutFromConfFile(pools []poolArgs) (layout disksLayout, err err
 			layout: setArgs,
 		})
 	}
-	return
+	return layout, err
 }
 
 // mergeDisksLayoutFromArgs supports with and without ellipses transparently.

@@ -475,7 +475,7 @@ func mergeDisksLayoutFromArgs(args []string, ctxt *serverCtxt) (err error) {
 			legacy: true,
 			pools:  []poolDisksLayout{{layout: setArgs, cmdline: strings.Join(args, " ")}},
 		}
-		return
+		return err
 	}
 
 	for _, arg := range args {

@@ -489,7 +489,7 @@ func mergeDisksLayoutFromArgs(args []string, ctxt *serverCtxt) (err error) {
 		}
 		ctxt.Layout.pools = append(ctxt.Layout.pools, poolDisksLayout{cmdline: arg, layout: setArgs})
 	}
-	return
+	return err
 }
 
 // CreateServerEndpoints - validates and creates new endpoints from input args, supports
@@ -267,7 +267,7 @@ func (l EndpointServerPools) ESCount() (count int) {
 	for _, p := range l {
 		count += p.SetCount
 	}
-	return
+	return count
 }
 
 // GetNodes returns a sorted list of nodes in this cluster

@@ -297,7 +297,7 @@ func (l EndpointServerPools) GetNodes() (nodes []Node) {
 	sort.Slice(nodes, func(i, j int) bool {
 		return nodes[i].Host < nodes[j].Host
 	})
-	return
+	return nodes
 }
 
 // GetPoolIdx return pool index

@@ -588,7 +588,7 @@ func (endpoints Endpoints) GetAllStrings() (all []string) {
 	for _, e := range endpoints {
 		all = append(all, e.String())
 	}
-	return
+	return all
 }
 
 func hostResolveToLocalhost(endpoint Endpoint) bool {
@@ -69,7 +69,7 @@ func NewErasure(ctx context.Context, dataBlocks, parityBlocks int, blockSize int
 		})
 		return enc
 	}
-	return
+	return e, err
 }
 
 // EncodeData encodes the given data and returns the erasure-coded data.
@@ -283,7 +283,7 @@ func countPartNotSuccess(partErrs []int) (c int) {
 			c++
 		}
 	}
-	return
+	return c
 }
 
 // checkObjectWithAllParts sets partsMetadata and onlineDisks when xl.meta is inexistant/corrupted or outdated

@@ -436,5 +436,5 @@ func checkObjectWithAllParts(ctx context.Context, onlineDisks []StorageAPI, part
 			dataErrsByDisk[disk][part] = dataErrsByPart[part][disk]
 		}
 	}
-	return
+	return dataErrsByDisk, dataErrsByPart
 }
@@ -965,7 +965,7 @@ func danglingMetaErrsCount(cerrs []error) (notFoundCount int, nonActionableCount
 			nonActionableCount++
 		}
 	}
-	return
+	return notFoundCount, nonActionableCount
 }
 
 func danglingPartErrsCount(results []int) (notFoundCount int, nonActionableCount int) {

@@ -980,7 +980,7 @@ func danglingPartErrsCount(results []int) (notFoundCount int, nonActionableCount
 			nonActionableCount++
 		}
 	}
-	return
+	return notFoundCount, nonActionableCount
 }
 
 // Object is considered dangling/corrupted if and only
@@ -521,7 +521,7 @@ func listObjectParities(partsMetadata []FileInfo, errs []error) (parities []int)
 			parities[index] = metadata.Erasure.ParityBlocks
 		}
 	}
-	return
+	return parities
 }
 
 // Returns per object readQuorum and writeQuorum
225
cmd/erasure-multipart-conditional_test.go
Normal file
225
cmd/erasure-multipart-conditional_test.go
Normal file
@ -0,0 +1,225 @@
|
||||
// Copyright (c) 2015-2025 MinIO, Inc.
|
||||
//
|
||||
// This file is part of MinIO Object Storage stack
|
||||
//
|
||||
// This program is free software: you can redistribute it and/or modify
|
||||
// it under the terms of the GNU Affero General Public License as published by
|
||||
// the Free Software Foundation, either version 3 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// This program is distributed in the hope that it will be useful
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU Affero General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU Affero General Public License
|
||||
// along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
package cmd
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"testing"
|
||||
|
||||
"github.com/dustin/go-humanize"
|
||||
xhttp "github.com/minio/minio/internal/http"
|
||||
)
|
||||
|
||||
// TestNewMultipartUploadConditionalWithReadQuorumFailure tests that conditional
|
||||
// multipart uploads (with if-match/if-none-match) behave correctly when read quorum
|
||||
// cannot be reached.
|
||||
//
|
||||
// Related to: https://github.com/minio/minio/issues/21603
|
||||
//
|
||||
// Should return an error when read quorum cannot
|
||||
// be reached, as we cannot reliably determine if the precondition is met.
|
||||
func TestNewMultipartUploadConditionalWithReadQuorumFailure(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
|
||||
obj, fsDirs, err := prepareErasure16(ctx)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
defer obj.Shutdown(context.Background())
|
||||
defer removeRoots(fsDirs)
|
||||
|
||||
z := obj.(*erasureServerPools)
|
||||
xl := z.serverPools[0].sets[0]
|
||||
|
||||
bucket := "test-bucket"
|
||||
object := "test-object"
|
||||
|
||||
err = obj.MakeBucket(ctx, bucket, MakeBucketOptions{})
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
// Put an initial object so it exists
|
||||
_, err = obj.PutObject(ctx, bucket, object,
|
||||
mustGetPutObjReader(t, bytes.NewReader([]byte("initial-value")),
|
||||
int64(len("initial-value")), "", ""), ObjectOptions{})
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
// Get object info to capture the ETag
|
||||
objInfo, err := obj.GetObjectInfo(ctx, bucket, object, ObjectOptions{})
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
existingETag := objInfo.ETag
|
||||
|
||||
// Simulate read quorum failure by taking enough disks offline
|
||||
// With 16 disks (EC 8+8), read quorum is 9. Taking 8 disks offline leaves only 8,
|
||||
// which is below read quorum.
|
||||
erasureDisks := xl.getDisks()
|
||||
z.serverPools[0].erasureDisksMu.Lock()
|
||||
xl.getDisks = func() []StorageAPI {
|
||||
for i := range erasureDisks[:8] {
|
||||
erasureDisks[i] = nil
|
||||
}
|
||||
return erasureDisks
|
||||
}
|
||||
z.serverPools[0].erasureDisksMu.Unlock()
|
||||
|
||||
t.Run("if-none-match with read quorum failure", func(t *testing.T) {
|
||||
// Test Case 1: if-none-match (create only if doesn't exist)
|
||||
// With if-none-match: *, this should only succeed if object doesn't exist.
|
||||
// Since read quorum fails, we can't determine if object exists.
|
||||
opts := ObjectOptions{
|
||||
UserDefined: map[string]string{
|
||||
xhttp.IfNoneMatch: "*",
|
||||
},
|
||||
CheckPrecondFn: func(oi ObjectInfo) bool {
|
||||
// Precondition fails if object exists (ETag is not empty)
|
||||
return oi.ETag != ""
|
||||
},
|
||||
}
|
||||
|
||||
_, err := obj.NewMultipartUpload(ctx, bucket, object, opts)
|
||||
if !isErrReadQuorum(err) {
|
||||
t.Errorf("Expected read quorum error when if-none-match is used with quorum failure, got: %v", err)
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("if-match with wrong ETag and read quorum failure", func(t *testing.T) {
|
||||
// Test Case 2: if-match with WRONG ETag
|
||||
// This should fail even without quorum issues, but with quorum failure
|
||||
// we can't verify the ETag at all.
|
||||
opts := ObjectOptions{
|
||||
UserDefined: map[string]string{
|
||||
xhttp.IfMatch: "wrong-etag-12345",
|
||||
},
|
||||
HasIfMatch: true,
|
||||
CheckPrecondFn: func(oi ObjectInfo) bool {
|
||||
// Precondition fails if ETags don't match
|
||||
return oi.ETag != "wrong-etag-12345"
|
||||
},
|
||||
}
|
||||
|
||||
_, err := obj.NewMultipartUpload(ctx, bucket, object, opts)
|
||||
if !isErrReadQuorum(err) {
|
||||
t.Logf("Got error (as expected): %v", err)
|
||||
t.Logf("But expected read quorum error, not object-not-found error")
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("if-match with correct ETag and read quorum failure", func(t *testing.T) {
|
||||
// Test Case 3: if-match with CORRECT ETag but read quorum failure
|
||||
// Even with the correct ETag, we shouldn't proceed if we can't verify it.
|
||||
opts := ObjectOptions{
|
||||
UserDefined: map[string]string{
|
||||
xhttp.IfMatch: existingETag,
|
||||
},
|
||||
HasIfMatch: true,
|
||||
CheckPrecondFn: func(oi ObjectInfo) bool {
|
||||
// Precondition fails if ETags don't match
|
||||
return oi.ETag != existingETag
|
||||
},
|
||||
}
|
||||
|
||||
_, err := obj.NewMultipartUpload(ctx, bucket, object, opts)
|
||||
if !isErrReadQuorum(err) {
|
||||
t.Errorf("Expected read quorum error when if-match is used with quorum failure, got: %v", err)
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
// TestCompleteMultipartUploadConditionalWithReadQuorumFailure tests that conditional
|
||||
// complete multipart upload operations behave correctly when read quorum cannot be reached.
|
||||
func TestCompleteMultipartUploadConditionalWithReadQuorumFailure(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
|
||||
obj, fsDirs, err := prepareErasure16(ctx)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
defer obj.Shutdown(context.Background())
|
||||
defer removeRoots(fsDirs)
|
||||
|
||||
z := obj.(*erasureServerPools)
|
||||
xl := z.serverPools[0].sets[0]
|
||||
|
||||
bucket := "test-bucket"
|
||||
object := "test-object"
|
||||
|
||||
err = obj.MakeBucket(ctx, bucket, MakeBucketOptions{})
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
// Put an initial object
|
||||
_, err = obj.PutObject(ctx, bucket, object,
|
||||
mustGetPutObjReader(t, bytes.NewReader([]byte("initial-value")),
|
||||
int64(len("initial-value")), "", ""), ObjectOptions{})
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
// Start a multipart upload WITHOUT conditional checks (this should work)
|
||||
res, err := obj.NewMultipartUpload(ctx, bucket, object, ObjectOptions{})
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
// Upload a part
|
||||
partData := bytes.Repeat([]byte("a"), 5*humanize.MiByte)
|
||||
md5Hex := getMD5Hash(partData)
|
||||
_, err = obj.PutObjectPart(ctx, bucket, object, res.UploadID, 1,
|
||||
mustGetPutObjReader(t, bytes.NewReader(partData), int64(len(partData)), md5Hex, ""),
|
||||
ObjectOptions{})
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
// Now simulate read quorum failure
|
||||
erasureDisks := xl.getDisks()
|
||||
z.serverPools[0].erasureDisksMu.Lock()
|
||||
xl.getDisks = func() []StorageAPI {
|
||||
for i := range erasureDisks[:8] {
|
||||
erasureDisks[i] = nil
|
||||
}
|
||||
return erasureDisks
|
||||
}
|
||||
z.serverPools[0].erasureDisksMu.Unlock()
|
||||
|
||||
t.Run("complete multipart with if-none-match and read quorum failure", func(t *testing.T) {
|
||||
// Try to complete the multipart upload with if-none-match
|
||||
// This should fail because we can't verify the condition due to read quorum failure
|
||||
opts := ObjectOptions{
|
||||
UserDefined: map[string]string{
|
||||
xhttp.IfNoneMatch: "*",
|
||||
},
|
||||
CheckPrecondFn: func(oi ObjectInfo) bool {
|
||||
return oi.ETag != ""
|
||||
},
|
||||
}
|
||||
|
||||
parts := []CompletePart{{PartNumber: 1, ETag: md5Hex}}
|
||||
_, err := obj.CompleteMultipartUpload(ctx, bucket, object, res.UploadID, parts, opts)
|
||||
if !isErrReadQuorum(err) {
|
||||
t.Errorf("Expected read quorum error, got: %v", err)
|
||||
}
|
||||
})
|
||||
}
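The disk-failure injection above works because the erasure set fetches its drives through a swappable closure. A stripped-down sketch of the same test seam, with hypothetical names:

package main

import "fmt"

// store fetches its drives through a closure, as erasureObjects does
// via getDisks, so tests can substitute a failing view of the set.
type store struct {
	getDisks func() []string
}

func (s *store) online() int {
	n := 0
	for _, d := range s.getDisks() {
		if d != "" {
			n++
		}
	}
	return n
}

func main() {
	disks := []string{"d1", "d2", "d3", "d4"}
	s := &store{getDisks: func() []string { return disks }}
	fmt.Println("online:", s.online()) // 4

	// Tests swap the closure to simulate offline drives.
	s.getDisks = func() []string {
		out := append([]string(nil), disks...)
		out[0], out[1] = "", "" // take two drives "offline"
		return out
	}
	fmt.Println("online after failure injection:", s.online()) // 2
}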

@@ -390,13 +390,13 @@ func (er erasureObjects) newMultipartUpload(ctx context.Context, bucket string,
 	if err == nil && opts.CheckPrecondFn(obj) {
 		return nil, PreConditionFailed{}
 	}
-	if err != nil && !isErrVersionNotFound(err) && !isErrObjectNotFound(err) && !isErrReadQuorum(err) {
+	if err != nil && !isErrVersionNotFound(err) && !isErrObjectNotFound(err) {
 		return nil, err
 	}
 
-	// if object doesn't exist and not a replication request return error for If-Match conditional requests
+	// if object doesn't exist return error for If-Match conditional requests
 	// If-None-Match should be allowed to proceed for non-existent objects
-	if err != nil && !opts.ReplicationRequest && opts.HasIfMatch && (isErrObjectNotFound(err) || isErrVersionNotFound(err)) {
+	if err != nil && opts.HasIfMatch && (isErrObjectNotFound(err) || isErrVersionNotFound(err)) {
 		return nil, err
 	}
 }
@@ -1114,13 +1114,13 @@ func (er erasureObjects) CompleteMultipartUpload(ctx context.Context, bucket str
 	if err == nil && opts.CheckPrecondFn(obj) {
 		return ObjectInfo{}, PreConditionFailed{}
 	}
-	if err != nil && !isErrVersionNotFound(err) && !isErrObjectNotFound(err) && !isErrReadQuorum(err) {
+	if err != nil && !isErrVersionNotFound(err) && !isErrObjectNotFound(err) {
 		return ObjectInfo{}, err
 	}
 
-	// if object doesn't exist and not a replication request return error for If-Match conditional requests
+	// if object doesn't exist return error for If-Match conditional requests
 	// If-None-Match should be allowed to proceed for non-existent objects
-	if err != nil && !opts.ReplicationRequest && opts.HasIfMatch && (isErrObjectNotFound(err) || isErrVersionNotFound(err)) {
+	if err != nil && opts.HasIfMatch && (isErrObjectNotFound(err) || isErrVersionNotFound(err)) {
 		return ObjectInfo{}, err
 	}
 }
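Both hunks tighten the same guard: previously a read-quorum error fell through the first `if` (it is neither "object not found" nor "version not found", but was explicitly excluded), so the conditional write proceeded without ever verifying the precondition. A reduced model of the corrected flow, with the error predicates stubbed out for illustration:

package main

import (
	"errors"
	"fmt"
)

var (
	errObjectNotFound = errors.New("object not found")
	errReadQuorum     = errors.New("read quorum not reached")
)

// checkConditional models the patched guard: any stat error that is not
// "object missing" - including read-quorum loss - now aborts the write.
func checkConditional(statErr error, hasIfMatch bool) error {
	if statErr != nil && !errors.Is(statErr, errObjectNotFound) {
		return statErr // read-quorum errors stop here after the fix
	}
	// If-Match against a missing object is always a failed precondition.
	if statErr != nil && hasIfMatch {
		return statErr
	}
	return nil // precondition verifiable; proceed with the write
}

func main() {
	fmt.Println(checkConditional(errReadQuorum, false))    // read quorum not reached
	fmt.Println(checkConditional(errObjectNotFound, true)) // object not found
	fmt.Println(checkConditional(nil, true))               // <nil> - proceed
}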

150	cmd/erasure-object-conditional_test.go	Normal file

@@ -0,0 +1,150 @@
+// Copyright (c) 2015-2025 MinIO, Inc.
+//
+// This file is part of MinIO Object Storage stack
+//
+// This program is free software: you can redistribute it and/or modify
+// it under the terms of the GNU Affero General Public License as published by
+// the Free Software Foundation, either version 3 of the License, or
+// (at your option) any later version.
+//
+// This program is distributed in the hope that it will be useful
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+// GNU Affero General Public License for more details.
+//
+// You should have received a copy of the GNU Affero General Public License
+// along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+package cmd
+
+import (
+	"bytes"
+	"context"
+	"testing"
+
+	xhttp "github.com/minio/minio/internal/http"
+)
+
+// TestPutObjectConditionalWithReadQuorumFailure tests that conditional
+// PutObject operations (with if-match/if-none-match) behave correctly when read quorum
+// cannot be reached.
+//
+// Related to: https://github.com/minio/minio/issues/21603
+//
+// Should return an error when read quorum cannot
+// be reached, as we cannot reliably determine if the precondition is met.
+func TestPutObjectConditionalWithReadQuorumFailure(t *testing.T) {
+	ctx := context.Background()
+
+	obj, fsDirs, err := prepareErasure16(ctx)
+	if err != nil {
+		t.Fatal(err)
+	}
+	defer obj.Shutdown(context.Background())
+	defer removeRoots(fsDirs)
+
+	z := obj.(*erasureServerPools)
+	xl := z.serverPools[0].sets[0]
+
+	bucket := "test-bucket"
+	object := "test-object"
+
+	err = obj.MakeBucket(ctx, bucket, MakeBucketOptions{})
+	if err != nil {
+		t.Fatal(err)
+	}
+
+	// Put an initial object so it exists
+	_, err = obj.PutObject(ctx, bucket, object,
+		mustGetPutObjReader(t, bytes.NewReader([]byte("initial-value")),
+			int64(len("initial-value")), "", ""), ObjectOptions{})
+	if err != nil {
+		t.Fatal(err)
+	}
+
+	// Get object info to capture the ETag
+	objInfo, err := obj.GetObjectInfo(ctx, bucket, object, ObjectOptions{})
+	if err != nil {
+		t.Fatal(err)
+	}
+	existingETag := objInfo.ETag
+
+	// Simulate read quorum failure by taking enough disks offline
+	// With 16 disks (EC 8+8), read quorum is 9. Taking 8 disks offline leaves only 8,
+	// which is below read quorum.
+	erasureDisks := xl.getDisks()
+	z.serverPools[0].erasureDisksMu.Lock()
+	xl.getDisks = func() []StorageAPI {
+		for i := range erasureDisks[:8] {
+			erasureDisks[i] = nil
+		}
+		return erasureDisks
+	}
+	z.serverPools[0].erasureDisksMu.Unlock()
+
+	t.Run("if-none-match with read quorum failure", func(t *testing.T) {
+		// Test Case 1: if-none-match (create only if doesn't exist)
+		// With if-none-match: *, this should only succeed if object doesn't exist.
+		// Since read quorum fails, we can't determine if object exists.
+		opts := ObjectOptions{
+			UserDefined: map[string]string{
+				xhttp.IfNoneMatch: "*",
+			},
+			CheckPrecondFn: func(oi ObjectInfo) bool {
+				// Precondition fails if object exists (ETag is not empty)
+				return oi.ETag != ""
+			},
+		}
+
+		_, err := obj.PutObject(ctx, bucket, object,
+			mustGetPutObjReader(t, bytes.NewReader([]byte("new-value")),
+				int64(len("new-value")), "", ""), opts)
+		if !isErrReadQuorum(err) {
+			t.Errorf("Expected read quorum error when if-none-match is used with quorum failure, got: %v", err)
+		}
+	})
+
+	t.Run("if-match with read quorum failure", func(t *testing.T) {
+		// Test Case 2: if-match (update only if ETag matches)
+		// With if-match: <etag>, this should only succeed if object exists with matching ETag.
+		// Since read quorum fails, we can't determine if object exists or ETag matches.
+		opts := ObjectOptions{
+			UserDefined: map[string]string{
+				xhttp.IfMatch: existingETag,
+			},
+			CheckPrecondFn: func(oi ObjectInfo) bool {
+				// Precondition fails if ETag doesn't match
+				return oi.ETag != existingETag
+			},
+		}
+
+		_, err := obj.PutObject(ctx, bucket, object,
+			mustGetPutObjReader(t, bytes.NewReader([]byte("updated-value")),
+				int64(len("updated-value")), "", ""), opts)
+		if !isErrReadQuorum(err) {
+			t.Errorf("Expected read quorum error when if-match is used with quorum failure, got: %v", err)
+		}
+	})
+
+	t.Run("if-match wrong etag with read quorum failure", func(t *testing.T) {
+		// Test Case 3: if-match with wrong ETag
+		// Even if the ETag doesn't match, we should still get read quorum error
+		// because we can't read the object to check the condition.
+		opts := ObjectOptions{
+			UserDefined: map[string]string{
+				xhttp.IfMatch: "wrong-etag",
+			},
+			CheckPrecondFn: func(oi ObjectInfo) bool {
+				// Precondition fails if ETag doesn't match
+				return oi.ETag != "wrong-etag"
+			},
+		}
+
+		_, err := obj.PutObject(ctx, bucket, object,
+			mustGetPutObjReader(t, bytes.NewReader([]byte("should-fail")),
+				int64(len("should-fail")), "", ""), opts)
+		if !isErrReadQuorum(err) {
+			t.Errorf("Expected read quorum error when if-match is used with quorum failure (even with wrong ETag), got: %v", err)
		}
+	})
+}

@@ -1274,13 +1274,13 @@ func (er erasureObjects) putObject(ctx context.Context, bucket string, object st
 	if err == nil && opts.CheckPrecondFn(obj) {
 		return objInfo, PreConditionFailed{}
 	}
-	if err != nil && !isErrVersionNotFound(err) && !isErrObjectNotFound(err) && !isErrReadQuorum(err) {
+	if err != nil && !isErrVersionNotFound(err) && !isErrObjectNotFound(err) {
 		return objInfo, err
 	}
 
-	// if object doesn't exist and not a replication request return error for If-Match conditional requests
+	// if object doesn't exist return error for If-Match conditional requests
 	// If-None-Match should be allowed to proceed for non-existent objects
-	if err != nil && !opts.ReplicationRequest && opts.HasIfMatch && (isErrObjectNotFound(err) || isErrVersionNotFound(err)) {
+	if err != nil && opts.HasIfMatch && (isErrObjectNotFound(err) || isErrVersionNotFound(err)) {
 		return objInfo, err
 	}
 }
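At the S3 API level these preconditions arrive as standard HTTP headers. A hedged sketch of the client side (the presigned URL is a placeholder; a real request needs SigV4 signing or a URL presigned for your deployment):

package main

import (
	"bytes"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Placeholder presigned URL - substitute one generated for your server.
	url := "https://minio.example.com/test-bucket/test-object?X-Amz-Signature=..."

	req, err := http.NewRequest(http.MethodPut, url, bytes.NewReader([]byte("new-value")))
	if err != nil {
		log.Fatal(err)
	}
	// Create-only semantics: succeed only if the object does not exist yet.
	req.Header.Set("If-None-Match", "*")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// 200 = created; 412 Precondition Failed = object already exists.
	// With the fix above, a cluster below read quorum returns a 5xx error
	// instead of silently treating the object as absent.
	fmt.Println(resp.Status)
}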

@@ -233,7 +233,7 @@ func (p poolMeta) ResumeBucketObject(idx int) (bucket, object string) {
 		bucket = p.Pools[idx].Decommission.Bucket
 		object = p.Pools[idx].Decommission.Object
 	}
-	return
+	return bucket, object
 }
 
 func (p *poolMeta) TrackCurrentBucketObject(idx int, bucket string, object string) {
@@ -568,6 +568,7 @@ func newPoolMeta(z *erasureServerPools, prevMeta poolMeta) poolMeta {
 		for _, currentPool := range prevMeta.Pools {
 			// Preserve any current pool status.
 			if currentPool.CmdLine == pool.endpoints.CmdLine {
+				currentPool.ID = idx
 				newMeta.Pools = append(newMeta.Pools, currentPool)
 				skip = true
 				break

@@ -1,7 +1,7 @@
-package cmd
-
 // Code generated by github.com/tinylib/msgp DO NOT EDIT.
 
+package cmd
+
 import (
 	"github.com/tinylib/msgp/msgp"
 )

@@ -1,7 +1,7 @@
-package cmd
-
 // Code generated by github.com/tinylib/msgp DO NOT EDIT.
 
+package cmd
+
 import (
 	"bytes"
 	"testing"

@@ -98,12 +98,10 @@ type rebalanceInfo struct {
 
 // rebalanceMeta contains information pertaining to an ongoing rebalance operation.
 type rebalanceMeta struct {
-	cancel          context.CancelFunc `msg:"-"` // to be invoked on rebalance-stop
-	lastRefreshedAt time.Time          `msg:"-"`
-	StoppedAt       time.Time          `msg:"stopTs"` // Time when rebalance-stop was issued.
-	ID              string             `msg:"id"`     // ID of the ongoing rebalance operation
-	PercentFreeGoal float64            `msg:"pf"`     // Computed from total free space and capacity at the start of rebalance
-	PoolStats       []*rebalanceStats  `msg:"rss"`    // Per-pool rebalance stats keyed by pool index
+	StoppedAt       time.Time         `msg:"stopTs"` // Time when rebalance-stop was issued.
+	ID              string            `msg:"id"`     // ID of the ongoing rebalance operation
+	PercentFreeGoal float64           `msg:"pf"`     // Computed from total free space and capacity at the start of rebalance
+	PoolStats       []*rebalanceStats `msg:"rss"`    // Per-pool rebalance stats keyed by pool index
 }
 
 var errRebalanceNotStarted = errors.New("rebalance not started")
@@ -313,8 +311,6 @@ func (r *rebalanceMeta) loadWithOpts(ctx context.Context, store objectIO, opts O
 		return err
 	}
 
-	r.lastRefreshedAt = time.Now()
-
 	return nil
 }
@@ -450,7 +446,7 @@ func (z *erasureServerPools) rebalanceBuckets(ctx context.Context, poolIdx int)
 		select {
 		case <-ctx.Done():
 			doneCh <- ctx.Err()
-			return
+			return err
 		default:
 		}
 
@@ -468,7 +464,7 @@ func (z *erasureServerPools) rebalanceBuckets(ctx context.Context, poolIdx int)
 			}
 			rebalanceLogIf(GlobalContext, err)
 			doneCh <- err
-			return
+			return err
 		}
 		stopFn(0, nil)
 		z.bucketRebalanceDone(bucket, poolIdx)
@@ -944,7 +940,7 @@ func (z *erasureServerPools) StartRebalance() {
 		return
 	}
 	ctx, cancel := context.WithCancel(GlobalContext)
-	z.rebalMeta.cancel = cancel // to be used when rebalance-stop is called
+	z.rebalCancel = cancel // to be used when rebalance-stop is called
 	z.rebalMu.Unlock()
 
 	z.rebalMu.RLock()
@@ -987,10 +983,9 @@ func (z *erasureServerPools) StopRebalance() error {
 		return nil
 	}
 
-	if cancel := r.cancel; cancel != nil {
-		// cancel != nil only on pool leaders
-		r.cancel = nil
+	if cancel := z.rebalCancel; cancel != nil {
 		cancel()
+		z.rebalCancel = nil
 	}
 	return nil
 }
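These rebalance hunks move the `context.CancelFunc` (and the last-refresh timestamp) off `rebalanceMeta` — which is msgp-serialized and periodically reloaded from the backend, so in-memory-only fields kept getting clobbered — onto the long-lived `erasureServerPools` value. A minimal sketch of the pattern, simplified under that reading of the change:

package main

import (
	"context"
	"fmt"
)

// meta is persisted and reloaded; it should hold only serializable state.
type meta struct{ ID string }

// pools is the long-lived process object; runtime-only handles live here.
type pools struct {
	meta   meta
	cancel context.CancelFunc // survives meta reloads
}

func (p *pools) start() context.Context {
	ctx, cancel := context.WithCancel(context.Background())
	p.cancel = cancel
	return ctx
}

func (p *pools) stop() {
	if c := p.cancel; c != nil {
		c()
		p.cancel = nil
	}
}

func main() {
	p := &pools{}
	ctx := p.start()
	p.meta = meta{ID: "rebalance-1"} // a meta reload no longer loses the handle
	p.stop()
	fmt.Println(ctx.Err()) // context canceled
}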

@@ -1,7 +1,7 @@
-package cmd
-
 // Code generated by github.com/tinylib/msgp DO NOT EDIT.
 
+package cmd
+
 import (
 	"github.com/tinylib/msgp/msgp"
 )

@@ -1,7 +1,7 @@
-package cmd
-
 // Code generated by github.com/tinylib/msgp DO NOT EDIT.
 
+package cmd
+
 import (
 	"bytes"
 	"testing"

@@ -53,8 +53,9 @@ type erasureServerPools struct {
 	poolMetaMutex sync.RWMutex
 	poolMeta      poolMeta
 
-	rebalMu   sync.RWMutex
-	rebalMeta *rebalanceMeta
+	rebalMu     sync.RWMutex
+	rebalMeta   *rebalanceMeta
+	rebalCancel context.CancelFunc
 
 	deploymentID     [16]byte
 	distributionAlgo string
@@ -701,7 +702,7 @@ func (z *erasureServerPools) BackendInfo() (b madmin.BackendInfo) {
 
 	b.StandardSCParity = scParity
 	b.RRSCParity = rrSCParity
-	return
+	return b
 }
 
 func (z *erasureServerPools) LocalStorageInfo(ctx context.Context, metrics bool) StorageInfo {

@@ -116,7 +116,7 @@ func diskErrToDriveState(err error) (state string) {
 		state = fmt.Sprintf("%s (cause: %s)", madmin.DriveStateUnknown, err)
 	}
 
-	return
+	return state
 }
 
 func getOnlineOfflineDisksStats(disksInfo []madmin.Disk) (onlineDisks, offlineDisks madmin.BackendDisks) {

@@ -94,18 +94,18 @@ func availableMemory() (available uint64) {
 		if limit > 0 {
 			// A valid value is found, return its 90%
 			available = (limit * 9) / 10
-			return
+			return available
 		}
 	} // for all other platforms limits are based on virtual memory.
 
 	memStats, err := mem.VirtualMemory()
 	if err != nil {
-		return
+		return available
 	}
 
 	// A valid value is available return its 90%
 	available = (memStats.Available * 9) / 10
-	return
+	return available
 }
 
 func (t *apiConfig) init(cfg api.Config, setDriveCounts []int, legacy bool) {

@@ -466,7 +466,7 @@ func getHostName(r *http.Request) (hostName string) {
 	} else {
 		hostName = r.Host
 	}
-	return
+	return hostName
 }
 
 // Proxy any request to an endpoint.
@@ -500,5 +500,5 @@ func proxyRequest(ctx context.Context, w http.ResponseWriter, r *http.Request, e
 
 	r.URL.Host = ep.Host
 	f.ServeHTTP(w, r)
-	return
+	return success
 }

@@ -18,7 +18,11 @@
 package cmd
 
 import (
+	"sync"
 	"testing"
+	"time"
 
 	xhttp "github.com/minio/minio/internal/http"
 )
 
 // Test redactLDAPPwd()
@@ -52,3 +56,129 @@ func TestRedactLDAPPwd(t *testing.T) {
 		}
 	}
 }
+
+// TestRaulStatsRaceCondition tests the race condition fix for HTTPStats.
+// This test specifically addresses the race between:
+//   - Write operations via updateStats.
+//   - Read operations via toServerHTTPStats(false).
+func TestRaulStatsRaceCondition(t *testing.T) {
+	httpStats := newHTTPStats()
+	// Simulate the concurrent scenario from the original race condition:
+	// Multiple HTTP request handlers updating stats concurrently,
+	// while background processes are reading the stats for persistence.
+	const numWriters = 100 // Simulate many HTTP request handlers.
+	const numReaders = 50  // Simulate background stats readers.
+	const opsPerGoroutine = 100
+
+	var wg sync.WaitGroup
+	for i := range numWriters {
+		wg.Add(1)
+		go func(writerID int) {
+			defer wg.Done()
+			for j := 0; j < opsPerGoroutine; j++ {
+				switch j % 4 {
+				case 0:
+					httpStats.updateStats("GetObject", &xhttp.ResponseRecorder{})
+				case 1:
+					httpStats.totalS3Requests.Inc("PutObject")
+				case 2:
+					httpStats.totalS3Errors.Inc("DeleteObject")
+				case 3:
+					httpStats.currentS3Requests.Inc("ListObjects")
+				}
+			}
+		}(i)
+	}
+
+	for i := range numReaders {
+		wg.Add(1)
+		go func(readerID int) {
+			defer wg.Done()
+			for range opsPerGoroutine {
+				_ = httpStats.toServerHTTPStats(false)
+				_ = httpStats.totalS3Requests.Load(false)
+				_ = httpStats.currentS3Requests.Load(false)
+				time.Sleep(1 * time.Microsecond)
+			}
+		}(i)
+	}
+	wg.Wait()
+
+	finalStats := httpStats.toServerHTTPStats(false)
+	totalRequests := 0
+	for _, v := range finalStats.TotalS3Requests.APIStats {
+		totalRequests += v
+	}
+	if totalRequests == 0 {
+		t.Error("Expected some total requests to be recorded, but got zero")
+	}
+	t.Logf("Total requests recorded: %d", totalRequests)
+	t.Logf("Race condition test passed - no races detected")
+}
+
+// TestRaulHTTPAPIStatsRaceCondition tests concurrent access to HTTPAPIStats specifically.
+func TestRaulHTTPAPIStatsRaceCondition(t *testing.T) {
+	stats := &HTTPAPIStats{}
+	const numGoroutines = 50
+	const opsPerGoroutine = 1000
+
+	var wg sync.WaitGroup
+	for i := range numGoroutines {
+		wg.Add(1)
+		go func(id int) {
+			defer wg.Done()
+			for j := 0; j < opsPerGoroutine; j++ {
+				stats.Inc("TestAPI")
+			}
+		}(i)
+	}
+
+	for i := range numGoroutines / 2 {
+		wg.Add(1)
+		go func(id int) {
+			defer wg.Done()
+			for range opsPerGoroutine / 2 {
+				_ = stats.Load(false)
+			}
+		}(i)
+	}
+	wg.Wait()
+
+	finalStats := stats.Load(false)
+	expected := numGoroutines * opsPerGoroutine
+	actual := finalStats["TestAPI"]
+	if actual != expected {
+		t.Errorf("Race condition detected: expected %d, got %d (lost %d increments)",
+			expected, actual, expected-actual)
+	}
+}
+
+// TestRaulBucketHTTPStatsRaceCondition tests concurrent access to bucket-level HTTP stats.
+func TestRaulBucketHTTPStatsRaceCondition(t *testing.T) {
+	bucketStats := newBucketHTTPStats()
+	const numGoroutines = 50
+	const opsPerGoroutine = 100
+
+	var wg sync.WaitGroup
+	for i := range numGoroutines {
+		wg.Add(1)
+		go func(id int) {
+			defer wg.Done()
+			bucketName := "test-bucket"
+
+			for range opsPerGoroutine {
+				bucketStats.updateHTTPStats(bucketName, "GetObject", nil)
+				recorder := &xhttp.ResponseRecorder{}
+				bucketStats.updateHTTPStats(bucketName, "GetObject", recorder)
+				_ = bucketStats.load(bucketName)
+			}
+		}(i)
+	}
+	wg.Wait()
+
+	stats := bucketStats.load("test-bucket")
+	if stats.totalS3Requests == nil {
+		t.Error("Expected bucket stats to be initialized")
+	}
+	t.Logf("Bucket HTTP stats race test passed")
+}
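The three tests above hammer the stats maps from many goroutines so that `go test -race` can flag unsynchronized access. The fix they exercise is the classic mutex-guarded counter map; a distilled, self-contained version:

package main

import (
	"fmt"
	"sync"
)

// apiStats is a race-safe counter map in the spirit of HTTPAPIStats.
type apiStats struct {
	mu    sync.RWMutex
	calls map[string]int
}

func (s *apiStats) Inc(api string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.calls == nil {
		s.calls = make(map[string]int)
	}
	s.calls[api]++
}

// Load returns a copy so readers never alias the live map.
func (s *apiStats) Load() map[string]int {
	s.mu.RLock()
	defer s.mu.RUnlock()
	out := make(map[string]int, len(s.calls))
	for k, v := range s.calls {
		out[k] = v
	}
	return out
}

func main() {
	var s apiStats
	var wg sync.WaitGroup
	for i := 0; i < 50; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				s.Inc("TestAPI")
			}
		}()
	}
	wg.Wait()
	fmt.Println(s.Load()["TestAPI"]) // 50000 - no lost increments
}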

@@ -1128,7 +1128,7 @@ func (store *IAMStoreSys) listGroups(ctx context.Context) (res []string, err err
 			return true
 		})
 	}
-	return
+	return res, err
 }
 
 // PolicyDBUpdate - adds or removes given policies to/from the user or group's
@@ -1139,7 +1139,7 @@ func (store *IAMStoreSys) PolicyDBUpdate(ctx context.Context, name string, isGro
 ) {
 	if name == "" {
 		err = errInvalidArgument
-		return
+		return updatedAt, addedOrRemoved, effectivePolicies, err
 	}
 
 	cache := store.lock()
@@ -1163,12 +1163,12 @@ func (store *IAMStoreSys) PolicyDBUpdate(ctx context.Context, name string, isGro
 		g, ok := cache.iamGroupsMap[name]
 		if !ok {
 			err = errNoSuchGroup
-			return
+			return updatedAt, addedOrRemoved, effectivePolicies, err
 		}
 
 		if g.Status == statusDisabled {
 			err = errGroupDisabled
-			return
+			return updatedAt, addedOrRemoved, effectivePolicies, err
 		}
 	}
 	mp, _ = cache.iamGroupPolicyMap.Load(name)
@@ -1186,7 +1186,7 @@ func (store *IAMStoreSys) PolicyDBUpdate(ctx context.Context, name string, isGro
 	for _, p := range policiesToUpdate.ToSlice() {
 		if _, found := cache.iamPolicyDocsMap[p]; !found {
 			err = errNoSuchPolicy
-			return
+			return updatedAt, addedOrRemoved, effectivePolicies, err
 		}
 	}
 	newPolicySet = existingPolicySet.Union(policiesToUpdate)
@@ -1198,7 +1198,7 @@ func (store *IAMStoreSys) PolicyDBUpdate(ctx context.Context, name string, isGro
 	// We return an error if the requested policy update will have no effect.
 	if policiesToUpdate.IsEmpty() {
 		err = errNoPolicyToAttachOrDetach
-		return
+		return updatedAt, addedOrRemoved, effectivePolicies, err
 	}
 
 	newPolicies := newPolicySet.ToSlice()
@@ -1210,7 +1210,7 @@ func (store *IAMStoreSys) PolicyDBUpdate(ctx context.Context, name string, isGro
 	// in this case, we delete the mapping from the store.
 	if len(newPolicies) == 0 {
 		if err = store.deleteMappedPolicy(ctx, name, userType, isGroup); err != nil && !errors.Is(err, errNoSuchPolicy) {
-			return
+			return updatedAt, addedOrRemoved, effectivePolicies, err
 		}
 		if !isGroup {
 			if userType == stsUser {
@@ -1223,7 +1223,7 @@ func (store *IAMStoreSys) PolicyDBUpdate(ctx context.Context, name string, isGro
 		}
 	} else {
 		if err = store.saveMappedPolicy(ctx, name, userType, isGroup, newPolicyMapping); err != nil {
-			return
+			return updatedAt, addedOrRemoved, effectivePolicies, err
 		}
 		if !isGroup {
 			if userType == stsUser {
@@ -3052,7 +3052,7 @@ func extractJWTClaims(u UserIdentity) (jwtClaims *jwt.MapClaims, err error) {
 			break
 		}
 	}
-	return
+	return jwtClaims, err
 }
 
 func validateSvcExpirationInUTC(expirationInUTC time.Time) error {

83	cmd/iam.go
@@ -1029,7 +1029,7 @@ func (sys *IAMSys) SetUserStatus(ctx context.Context, accessKey string, status m
 
 	updatedAt, err = sys.store.SetUserStatus(ctx, accessKey, status)
 	if err != nil {
-		return
+		return updatedAt, err
 	}
 
 	sys.notifyForUser(ctx, accessKey, false)
@@ -1985,7 +1985,7 @@ func (sys *IAMSys) PolicyDBSet(ctx context.Context, name, policy string, userTyp
 
 	updatedAt, err = sys.store.PolicyDBSet(ctx, name, policy, userType, isGroup)
 	if err != nil {
-		return
+		return updatedAt, err
 	}
 
 	// Notify all other MinIO peers to reload policy
@@ -2008,7 +2008,7 @@ func (sys *IAMSys) PolicyDBUpdateBuiltin(ctx context.Context, isAttach bool,
 ) (updatedAt time.Time, addedOrRemoved, effectivePolicies []string, err error) {
 	if !sys.Initialized() {
 		err = errServerNotInitialized
-		return
+		return updatedAt, addedOrRemoved, effectivePolicies, err
 	}
 
 	userOrGroup := r.User
@@ -2021,24 +2021,24 @@ func (sys *IAMSys) PolicyDBUpdateBuiltin(ctx context.Context, isAttach bool,
 	if isGroup {
 		_, err = sys.GetGroupDescription(userOrGroup)
 		if err != nil {
-			return
+			return updatedAt, addedOrRemoved, effectivePolicies, err
 		}
 	} else {
 		var isTemp bool
 		isTemp, _, err = sys.IsTempUser(userOrGroup)
 		if err != nil && err != errNoSuchUser {
-			return
+			return updatedAt, addedOrRemoved, effectivePolicies, err
 		}
 		if isTemp {
 			err = errIAMActionNotAllowed
-			return
+			return updatedAt, addedOrRemoved, effectivePolicies, err
 		}
 
 		// When the user is root credential you are not allowed to
 		// add policies for root user.
 		if userOrGroup == globalActiveCred.AccessKey {
 			err = errIAMActionNotAllowed
-			return
+			return updatedAt, addedOrRemoved, effectivePolicies, err
 		}
 
 		// Validate that user exists.
@@ -2046,14 +2046,14 @@ func (sys *IAMSys) PolicyDBUpdateBuiltin(ctx context.Context, isAttach bool,
 		_, userExists = sys.GetUser(ctx, userOrGroup)
 		if !userExists {
 			err = errNoSuchUser
-			return
+			return updatedAt, addedOrRemoved, effectivePolicies, err
 		}
 	}
 
 	updatedAt, addedOrRemoved, effectivePolicies, err = sys.store.PolicyDBUpdate(ctx, userOrGroup, isGroup,
 		regUser, r.Policies, isAttach)
 	if err != nil {
-		return
+		return updatedAt, addedOrRemoved, effectivePolicies, err
 	}
 
 	// Notify all other MinIO peers to reload policy
@@ -2077,7 +2077,7 @@ func (sys *IAMSys) PolicyDBUpdateBuiltin(ctx context.Context, isAttach bool,
 		UpdatedAt: updatedAt,
 	}))
 
-	return
+	return updatedAt, addedOrRemoved, effectivePolicies, err
 }
 
 // PolicyDBUpdateLDAP - adds or removes policies from a user or a group verified
@@ -2087,7 +2087,7 @@ func (sys *IAMSys) PolicyDBUpdateLDAP(ctx context.Context, isAttach bool,
 ) (updatedAt time.Time, addedOrRemoved, effectivePolicies []string, err error) {
 	if !sys.Initialized() {
 		err = errServerNotInitialized
-		return
+		return updatedAt, addedOrRemoved, effectivePolicies, err
 	}
 
 	var dn string
@@ -2097,7 +2097,7 @@ func (sys *IAMSys) PolicyDBUpdateLDAP(ctx context.Context, isAttach bool,
 		dnResult, err = sys.LDAPConfig.GetValidatedDNForUsername(r.User)
 		if err != nil {
 			iamLogIf(ctx, err)
-			return
+			return updatedAt, addedOrRemoved, effectivePolicies, err
 		}
 		if dnResult == nil {
 			// dn not found - still attempt to detach if provided user is a DN.
@@ -2105,7 +2105,7 @@ func (sys *IAMSys) PolicyDBUpdateLDAP(ctx context.Context, isAttach bool,
 				dn = sys.LDAPConfig.QuickNormalizeDN(r.User)
 			} else {
 				err = errNoSuchUser
-				return
+				return updatedAt, addedOrRemoved, effectivePolicies, err
 			}
 		} else {
 			dn = dnResult.NormDN
@@ -2115,14 +2115,14 @@ func (sys *IAMSys) PolicyDBUpdateLDAP(ctx context.Context, isAttach bool,
 		var underBaseDN bool
 		if dnResult, underBaseDN, err = sys.LDAPConfig.GetValidatedGroupDN(nil, r.Group); err != nil {
 			iamLogIf(ctx, err)
-			return
+			return updatedAt, addedOrRemoved, effectivePolicies, err
 		}
 		if dnResult == nil || !underBaseDN {
 			if !isAttach {
 				dn = sys.LDAPConfig.QuickNormalizeDN(r.Group)
 			} else {
 				err = errNoSuchGroup
-				return
+				return updatedAt, addedOrRemoved, effectivePolicies, err
 			}
 		} else {
 			// We use the group DN returned by the LDAP server (this may not
@@ -2149,7 +2149,7 @@ func (sys *IAMSys) PolicyDBUpdateLDAP(ctx context.Context, isAttach bool,
 	updatedAt, addedOrRemoved, effectivePolicies, err = sys.store.PolicyDBUpdate(
 		ctx, dn, isGroup, userType, r.Policies, isAttach)
 	if err != nil {
-		return
+		return updatedAt, addedOrRemoved, effectivePolicies, err
 	}
 
 	// Notify all other MinIO peers to reload policy
@@ -2173,7 +2173,7 @@ func (sys *IAMSys) PolicyDBUpdateLDAP(ctx context.Context, isAttach bool,
 		UpdatedAt: updatedAt,
 	}))
 
-	return
+	return updatedAt, addedOrRemoved, effectivePolicies, err
 }
 
 // PolicyDBGet - gets policy set on a user or group. If a list of groups is
@@ -2376,7 +2376,7 @@ func isAllowedBySessionPolicyForServiceAccount(args policy.Args) (hasSessionPoli
 	// Now check if we have a sessionPolicy.
 	spolicy, ok := args.Claims[sessionPolicyNameExtracted]
 	if !ok {
-		return
+		return hasSessionPolicy, isAllowed
 	}
 
 	hasSessionPolicy = true
@@ -2385,7 +2385,7 @@ func isAllowedBySessionPolicyForServiceAccount(args policy.Args) (hasSessionPoli
 	if !ok {
 		// Sub policy if set, should be a string reject
 		// malformed/malicious requests.
-		return
+		return hasSessionPolicy, isAllowed
 	}
 
 	// Check if policy is parseable.
@@ -2393,38 +2393,33 @@ func isAllowedBySessionPolicyForServiceAccount(args policy.Args) (hasSessionPoli
 	if err != nil {
 		// Log any error in input session policy config.
 		iamLogIf(GlobalContext, err)
-		return
+		return hasSessionPolicy, isAllowed
 	}
 
 	// SPECIAL CASE: For service accounts, any valid JSON is allowed as a
 	// policy, regardless of whether the number of statements is 0, this
 	// includes `null`, `{}` and `{"Statement": null}`. In fact, MinIO Console
 	// sends `null` when no policy is set and the intended behavior is that the
-	// service account should inherit parent policy.
-	//
-	// However, for a policy like `{"Statement":[]}`, the intention is to not
-	// provide any permissions via the session policy - i.e. the service account
-	// can do nothing (such a JSON could be generated by an external application
-	// as the policy for the service account). Inheriting the parent policy in
-	// such a case, is a security issue. Ideally, we should not allow such
-	// behavior, but for compatibility with the Console, we currently allow it.
-	//
-	// TODO:
-	//
-	// 1. fix console behavior and allow this inheritance for service accounts
-	// created before a certain (TBD) future date.
-	//
-	// 2. do not allow empty statement policies for service accounts.
+	// service account should inherit parent policy. So when policy is empty in
+	// all fields we return hasSessionPolicy=false.
 	if subPolicy.Version == "" && subPolicy.Statements == nil && subPolicy.ID == "" {
 		hasSessionPolicy = false
-		return
+		return hasSessionPolicy, isAllowed
 	}
 
 	// As the session policy exists, even if the parent is the root account, it
 	// must be restricted by it. So, we set `.IsOwner` to false here
 	// unconditionally.
+	//
+	// We also set `DenyOnly` arg to false here - this is an IMPORTANT corner
+	// case: DenyOnly is used only for allowing an account to do actions related
+	// to its own account (like create service accounts for itself, among
+	// others). However when a session policy is present, we need to validate
+	// that the action is actually allowed, rather than checking if the action
+	// is only disallowed.
 	sessionPolicyArgs := args
 	sessionPolicyArgs.IsOwner = false
+	sessionPolicyArgs.DenyOnly = false
 
 	// Sub policy is set and valid.
 	return hasSessionPolicy, subPolicy.IsAllowed(sessionPolicyArgs)
@@ -2437,7 +2432,7 @@ func isAllowedBySessionPolicy(args policy.Args) (hasSessionPolicy bool, isAllowe
 	// Now check if we have a sessionPolicy.
 	spolicy, ok := args.Claims[sessionPolicyNameExtracted]
 	if !ok {
-		return
+		return hasSessionPolicy, isAllowed
 	}
 
 	hasSessionPolicy = true
@@ -2446,7 +2441,7 @@ func isAllowedBySessionPolicy(args policy.Args) (hasSessionPolicy bool, isAllowe
 	if !ok {
 		// Sub policy if set, should be a string reject
 		// malformed/malicious requests.
-		return
+		return hasSessionPolicy, isAllowed
 	}
 
 	// Check if policy is parseable.
@@ -2454,19 +2449,27 @@ func isAllowedBySessionPolicy(args policy.Args) (hasSessionPolicy bool, isAllowe
 	if err != nil {
 		// Log any error in input session policy config.
 		iamLogIf(GlobalContext, err)
-		return
+		return hasSessionPolicy, isAllowed
 	}
 
 	// Policy without Version string value reject it.
 	if subPolicy.Version == "" {
-		return
+		return hasSessionPolicy, isAllowed
 	}
 
 	// As the session policy exists, even if the parent is the root account, it
 	// must be restricted by it. So, we set `.IsOwner` to false here
 	// unconditionally.
+	//
+	// We also set `DenyOnly` arg to false here - this is an IMPORTANT corner
+	// case: DenyOnly is used only for allowing an account to do actions related
+	// to its own account (like create service accounts for itself, among
+	// others). However when a session policy is present, we need to validate
+	// that the action is actually allowed, rather than checking if the action
+	// is only disallowed.
 	sessionPolicyArgs := args
 	sessionPolicyArgs.IsOwner = false
+	sessionPolicyArgs.DenyOnly = false
 
 	// Sub policy is set and valid.
 	return hasSessionPolicy, subPolicy.IsAllowed(sessionPolicyArgs)
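Both session-policy paths now end the same way: the embedded policy is evaluated with the ownership and deny-only short-circuits switched off, so a session policy can only narrow what the parent credential allows. Schematically, with the types reduced to the two flags that matter:

package main

import "fmt"

type args struct {
	Action   string
	IsOwner  bool
	DenyOnly bool
}

// evalSessionPolicy mirrors the tail of isAllowedBySessionPolicy: the
// session policy is checked with IsOwner/DenyOnly forced off, so it can
// restrict but never expand the parent's permissions.
func evalSessionPolicy(a args, policyAllows func(args) bool) bool {
	a.IsOwner = false  // root/owner status does not bypass a session policy
	a.DenyOnly = false // require an explicit allow, not merely "not denied"
	return policyAllows(a)
}

func main() {
	allowGetOnly := func(a args) bool { return a.Action == "s3:GetObject" }
	fmt.Println(evalSessionPolicy(args{Action: "s3:GetObject", IsOwner: true}, allowGetOnly)) // true
	fmt.Println(evalSessionPolicy(args{Action: "s3:PutObject", IsOwner: true}, allowGetOnly)) // false
}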

@@ -1,7 +1,7 @@
-package cmd
-
 // Code generated by github.com/tinylib/msgp DO NOT EDIT.
 
+package cmd
+
 import (
 	"github.com/tinylib/msgp/msgp"
 )

@@ -1,7 +1,7 @@
-package cmd
-
 // Code generated by github.com/tinylib/msgp DO NOT EDIT.
 
+package cmd
+
 import (
 	"bytes"
 	"testing"

@@ -159,5 +159,5 @@ func pickRelevantGoroutines() (gs []string) {
 		gs = append(gs, g)
 	}
 	sort.Strings(gs)
-	return
+	return gs
 }

@@ -162,7 +162,7 @@ func (l *localLocker) Unlock(_ context.Context, args dsync.LockArgs) (reply bool
 			reply = l.removeEntry(resource, args, &lri) || reply
 		}
 	}
-	return
+	return reply, err
 }
 
 // removeEntry based on the uid of the lock message, removes a single entry from the

@@ -1,7 +1,7 @@
-package cmd
-
 // Code generated by github.com/tinylib/msgp DO NOT EDIT.
 
+package cmd
+
 import (
 	"time"

@@ -19,21 +19,19 @@ func (z *localLockMap) DecodeMsg(dc *msgp.Reader) (err error) {
 	if (*z) == nil {
 		(*z) = make(localLockMap, zb0004)
 	} else if len((*z)) > 0 {
-		for key := range *z {
-			delete((*z), key)
-		}
+		clear((*z))
 	}
 	var field []byte
 	_ = field
 	for zb0004 > 0 {
 		zb0004--
 		var zb0001 string
+		var zb0002 []lockRequesterInfo
 		zb0001, err = dc.ReadString()
 		if err != nil {
 			err = msgp.WrapError(err)
 			return
 		}
-		var zb0002 []lockRequesterInfo
 		var zb0005 uint32
 		zb0005, err = dc.ReadArrayHeader()
 		if err != nil {
@@ -115,16 +113,14 @@ func (z *localLockMap) UnmarshalMsg(bts []byte) (o []byte, err error) {
 	if (*z) == nil {
 		(*z) = make(localLockMap, zb0004)
 	} else if len((*z)) > 0 {
-		for key := range *z {
-			delete((*z), key)
-		}
+		clear((*z))
 	}
 	var field []byte
 	_ = field
 	for zb0004 > 0 {
+		var zb0001 string
+		var zb0002 []lockRequesterInfo
 		zb0004--
-		var zb0001 string
 		zb0001, bts, err = msgp.ReadStringBytes(bts)
 		if err != nil {
 			err = msgp.WrapError(err)
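The regenerated msgp decoders replace the key-by-key delete loop with Go 1.21's `clear` builtin, which empties a map in one call. The two forms are behaviorally equivalent:

package main

import "fmt"

func main() {
	m := map[string]int{"a": 1, "b": 2}

	// Old generated code: delete every key individually.
	for k := range m {
		delete(m, k)
	}

	m["a"], m["b"] = 1, 2

	// New generated code (Go 1.21+): one builtin call.
	clear(m)

	fmt.Println(len(m)) // 0
}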

@@ -1,7 +1,7 @@
-package cmd
-
 // Code generated by github.com/tinylib/msgp DO NOT EDIT.
 
+package cmd
+
 import (
 	"bytes"
 	"testing"

@@ -339,7 +339,7 @@ func triggerExpiryAndRepl(ctx context.Context, o listPathOptions, obj metaCacheE
 	if !o.Versioned && !o.V1 {
 		fi, err := obj.fileInfo(o.Bucket)
 		if err != nil {
-			return
+			return skip
 		}
 		objInfo := fi.ToObjectInfo(o.Bucket, obj.name, versioned)
 		if o.Lifecycle != nil {
@@ -350,7 +350,7 @@ func triggerExpiryAndRepl(ctx context.Context, o listPathOptions, obj metaCacheE
 
 	fiv, err := obj.fileInfoVersions(o.Bucket)
 	if err != nil {
-		return
+		return skip
 	}
 
 	// Expire all versions if needed, if not attempt to queue for replication.
@@ -369,7 +369,7 @@ func triggerExpiryAndRepl(ctx context.Context, o listPathOptions, obj metaCacheE
 
 		queueReplicationHeal(ctx, o.Bucket, objInfo, o.Replication, 0)
 	}
-	return
+	return skip
 }
 
 func (z *erasureServerPools) listAndSave(ctx context.Context, o *listPathOptions) (entries metaCacheEntriesSorted, err error) {

@@ -653,7 +653,7 @@ func calcCommonWritesDeletes(infos []DiskInfo, readQuorum int) (commonWrite, com
 
 	commonWrite = filter(writes)
 	commonDelete = filter(deletes)
-	return
+	return commonWrite, commonDelete
 }
 
 func calcCommonCounter(infos []DiskInfo, readQuorum int) (commonCount uint64) {
|
||||
|
||||
@ -1,7 +1,7 @@
|
||||
package cmd
|
||||
|
||||
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
|
||||
|
||||
package cmd
|
||||
|
||||
import (
|
||||
"github.com/tinylib/msgp/msgp"
|
||||
)
|
||||
|
||||
@ -1,7 +1,7 @@
|
||||
package cmd
|
||||
|
||||
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
|
||||
|
||||
package cmd
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"testing"
|
||||
|
||||
@ -1,7 +1,7 @@
|
||||
package cmd
|
||||
|
||||
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
|
||||
|
||||
package cmd
|
||||
|
||||
import (
|
||||
"github.com/tinylib/msgp/msgp"
|
||||
)
|
||||
|
||||
@ -1,7 +1,7 @@
|
||||
package cmd
|
||||
|
||||
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
|
||||
|
||||
package cmd
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"testing"
|
||||
|
||||
@ -1,7 +1,7 @@
|
||||
package cmd
|
||||
|
||||
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
|
||||
|
||||
package cmd
|
||||
|
||||
import (
|
||||
"github.com/tinylib/msgp/msgp"
|
||||
)
|
||||
|
||||
@ -1,7 +1,7 @@
|
||||
package cmd
|
||||
|
||||
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
|
||||
|
||||
package cmd
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"testing"
|
||||
|
||||
@ -40,7 +40,7 @@ type collectMetricsOpts struct {
|
||||
|
||||
func collectLocalMetrics(types madmin.MetricType, opts collectMetricsOpts) (m madmin.RealtimeMetrics) {
|
||||
if types == madmin.MetricsNone {
|
||||
return
|
||||
return m
|
||||
}
|
||||
|
||||
byHostName := globalMinioAddr
|
||||
@ -51,7 +51,7 @@ func collectLocalMetrics(types madmin.MetricType, opts collectMetricsOpts) (m ma
|
||||
if _, ok := opts.hosts[server.Endpoint]; ok {
|
||||
byHostName = server.Endpoint
|
||||
} else {
|
||||
return
|
||||
return m
|
||||
}
|
||||
}
|
||||
|
||||
@ -221,7 +221,7 @@ func collectLocalDisksMetrics(disks map[string]struct{}) map[string]madmin.DiskM
|
||||
|
||||
func collectRemoteMetrics(ctx context.Context, types madmin.MetricType, opts collectMetricsOpts) (m madmin.RealtimeMetrics) {
|
||||
if !globalIsDistErasure {
|
||||
return
|
||||
return m
|
||||
}
|
||||
all := globalNotificationSys.GetMetrics(ctx, types, opts)
|
||||
for _, remote := range all {
|
||||
|
||||
@ -151,7 +151,7 @@ func init() {
|
||||
cpuLoad1: "CPU load average 1min",
|
||||
cpuLoad5: "CPU load average 5min",
|
||||
cpuLoad15: "CPU load average 15min",
|
||||
cpuLoad1Perc: "CPU load average 1min (perentage)",
|
||||
cpuLoad1Perc: "CPU load average 1min (percentage)",
|
||||
cpuLoad5Perc: "CPU load average 5min (percentage)",
|
||||
cpuLoad15Perc: "CPU load average 15min (percentage)",
|
||||
}
|
||||
|
||||
@ -1704,7 +1704,7 @@ func getMinioProcMetrics() *MetricsGroupV2 {
|
||||
p, err := procfs.Self()
|
||||
if err != nil {
|
||||
internalLogOnceIf(ctx, err, string(nodeMetricNamespace))
|
||||
return
|
||||
return metrics
|
||||
}
|
||||
|
||||
openFDs, _ := p.FileDescriptorsLen()
|
||||
@ -1819,7 +1819,7 @@ func getMinioProcMetrics() *MetricsGroupV2 {
|
||||
Value: stat.CPUTime(),
|
||||
})
|
||||
}
|
||||
return
|
||||
return metrics
|
||||
})
|
||||
return mg
|
||||
}
|
||||
@ -1833,7 +1833,7 @@ func getGoMetrics() *MetricsGroupV2 {
|
||||
Description: getMinIOGORoutineCountMD(),
|
||||
Value: float64(runtime.NumGoroutine()),
|
||||
})
|
||||
return
|
||||
return metrics
|
||||
})
|
||||
return mg
|
||||
}
|
||||
@ -2632,7 +2632,7 @@ func getMinioVersionMetrics() *MetricsGroupV2 {
|
||||
Description: getMinIOVersionMD(),
|
||||
VariableLabels: map[string]string{"version": Version},
|
||||
})
|
||||
return
|
||||
return metrics
|
||||
})
|
||||
return mg
|
||||
}
|
||||
@ -2653,7 +2653,7 @@ func getNodeHealthMetrics(opts MetricsGroupOpts) *MetricsGroupV2 {
|
||||
Description: getNodeOfflineTotalMD(),
|
||||
Value: float64(nodesDown),
|
||||
})
|
||||
return
|
||||
return metrics
|
||||
})
|
||||
return mg
|
||||
}
|
||||
@ -2666,11 +2666,11 @@ func getMinioHealingMetrics(opts MetricsGroupOpts) *MetricsGroupV2 {
|
||||
mg.RegisterRead(func(_ context.Context) (metrics []MetricV2) {
|
||||
bgSeq, exists := globalBackgroundHealState.getHealSequenceByToken(bgHealingUUID)
|
||||
if !exists {
|
||||
return
|
||||
return metrics
|
||||
}
|
||||
|
||||
if bgSeq.lastHealActivity.IsZero() {
|
||||
return
|
||||
return metrics
|
||||
}
|
||||
|
||||
metrics = make([]MetricV2, 0, 5)
|
||||
@ -2681,7 +2681,7 @@ func getMinioHealingMetrics(opts MetricsGroupOpts) *MetricsGroupV2 {
|
||||
metrics = append(metrics, getObjectsScanned(bgSeq)...)
|
||||
metrics = append(metrics, getHealedItems(bgSeq)...)
|
||||
metrics = append(metrics, getFailedItems(bgSeq)...)
|
||||
return
|
||||
return metrics
|
||||
})
|
||||
return mg
|
||||
}
|
||||
@ -2696,7 +2696,7 @@ func getFailedItems(seq *healSequence) (m []MetricV2) {
|
||||
Value: float64(v),
|
||||
})
|
||||
}
|
||||
return
|
||||
return m
|
||||
}
|
||||
|
||||
func getHealedItems(seq *healSequence) (m []MetricV2) {
|
||||
@ -2709,7 +2709,7 @@ func getHealedItems(seq *healSequence) (m []MetricV2) {
|
||||
Value: float64(v),
|
||||
})
|
||||
}
|
||||
return
|
||||
return m
|
||||
}
|
||||
|
||||
func getObjectsScanned(seq *healSequence) (m []MetricV2) {
|
||||
@ -2722,7 +2722,7 @@ func getObjectsScanned(seq *healSequence) (m []MetricV2) {
|
||||
Value: float64(v),
|
||||
})
|
||||
}
|
||||
return
|
||||
return m
|
||||
}
|
||||
|
||||
func getDistLockMetrics(opts MetricsGroupOpts) *MetricsGroupV2 {
|
||||
@ -3030,7 +3030,7 @@ func getHTTPMetrics(opts MetricsGroupOpts) *MetricsGroupV2 {
|
||||
VariableLabels: map[string]string{"api": api},
|
||||
})
|
||||
}
|
||||
return
|
||||
return metrics
|
||||
}
|
||||
|
||||
// If we have too many, limit them
|
||||
@ -3099,7 +3099,7 @@ func getHTTPMetrics(opts MetricsGroupOpts) *MetricsGroupV2 {
|
||||
}
|
||||
}
|
||||
|
||||
return
|
||||
return metrics
|
||||
})
|
||||
return mg
|
||||
}
|
||||
@ -3142,7 +3142,7 @@ func getNetworkMetrics() *MetricsGroupV2 {
|
||||
Description: getS3ReceivedBytesMD(),
|
||||
Value: float64(connStats.s3InputBytes),
|
||||
})
|
||||
return
|
||||
return metrics
|
||||
})
|
||||
return mg
|
||||
}
|
||||
@ -3155,19 +3155,19 @@ func getClusterUsageMetrics(opts MetricsGroupOpts) *MetricsGroupV2 {
|
||||
mg.RegisterRead(func(ctx context.Context) (metrics []MetricV2) {
|
||||
objLayer := newObjectLayerFn()
|
||||
if objLayer == nil {
|
||||
return
|
||||
return metrics
|
||||
}
|
||||
|
||||
metrics = make([]MetricV2, 0, 50)
|
||||
dataUsageInfo, err := loadDataUsageFromBackend(ctx, objLayer)
|
||||
if err != nil {
|
||||
metricsLogIf(ctx, err)
|
||||
return
|
||||
return metrics
|
||||
}
|
||||
|
||||
// data usage has not captured any data yet.
|
||||
if dataUsageInfo.LastUpdate.IsZero() {
|
||||
return
|
||||
return metrics
|
||||
}
|
||||
|
||||
metrics = append(metrics, MetricV2{
|
||||
@ -3248,7 +3248,7 @@ func getClusterUsageMetrics(opts MetricsGroupOpts) *MetricsGroupV2 {
|
||||
Value: float64(clusterBuckets),
|
||||
})
|
||||
|
||||
return
|
||||
return metrics
|
||||
})
|
||||
return mg
|
||||
}
|
||||
@ -3265,12 +3265,12 @@ func getBucketUsageMetrics(opts MetricsGroupOpts) *MetricsGroupV2 {
|
||||
dataUsageInfo, err := loadDataUsageFromBackend(ctx, objLayer)
|
||||
if err != nil {
|
||||
metricsLogIf(ctx, err)
|
||||
return
|
||||
return metrics
|
||||
}
|
||||
|
||||
// data usage has not captured any data yet.
|
||||
if dataUsageInfo.LastUpdate.IsZero() {
|
||||
return
|
||||
return metrics
|
||||
}
|
||||
|
||||
metrics = append(metrics, MetricV2{
|
||||
@ -3454,7 +3454,7 @@ func getBucketUsageMetrics(opts MetricsGroupOpts) *MetricsGroupV2 {
|
||||
VariableLabels: map[string]string{"bucket": bucket},
|
||||
})
|
||||
}
|
||||
return
|
||||
return metrics
|
||||
})
|
||||
return mg
|
||||
}
|
||||
@ -3498,17 +3498,17 @@ func getClusterTierMetrics(opts MetricsGroupOpts) *MetricsGroupV2 {
|
||||
objLayer := newObjectLayerFn()
|
||||
|
||||
if globalTierConfigMgr.Empty() {
|
||||
return
|
||||
return metrics
|
||||
}
|
||||
|
||||
dui, err := loadDataUsageFromBackend(ctx, objLayer)
|
||||
if err != nil {
|
||||
metricsLogIf(ctx, err)
|
||||
return
|
||||
return metrics
|
||||
}
|
||||
// data usage has not captured any tier stats yet.
|
||||
if dui.TierStats == nil {
|
||||
return
|
||||
return metrics
|
||||
}
|
||||
|
||||
return dui.tierMetrics()
|
||||
@ -3614,7 +3614,7 @@ func getLocalStorageMetrics(opts MetricsGroupOpts) *MetricsGroupV2 {
|
||||
Value: float64(storageInfo.Backend.RRSCParity),
|
||||
})
|
||||
|
||||
return
|
||||
return metrics
|
||||
})
|
||||
return mg
|
||||
}
|
||||
@ -3755,7 +3755,7 @@ func getClusterHealthMetrics(opts MetricsGroupOpts) *MetricsGroupV2 {
|
||||
})
|
||||
}
|
||||
|
||||
return
|
||||
return metrics
|
||||
})
|
||||
|
||||
return mg
|
||||
@ -3776,7 +3776,7 @@ func getBatchJobsMetrics(opts MetricsGroupOpts) *MetricsGroupV2 {
|
||||
m.Merge(&mRemote)
|
||||
|
||||
if m.Aggregated.BatchJobs == nil {
|
||||
return
|
||||
return metrics
|
||||
}
|
||||
|
||||
for _, mj := range m.Aggregated.BatchJobs.Jobs {
|
||||
@ -3822,7 +3822,7 @@ func getBatchJobsMetrics(opts MetricsGroupOpts) *MetricsGroupV2 {
|
||||
},
|
||||
)
|
||||
}
|
||||
return
|
||||
return metrics
|
||||
})
|
||||
return mg
|
||||
}
|
||||
@ -3875,7 +3875,7 @@ func getClusterStorageMetrics(opts MetricsGroupOpts) *MetricsGroupV2 {
|
||||
Description: getClusterDrivesTotalMD(),
|
||||
Value: float64(totalDrives.Sum()),
|
||||
})
|
||||
return
|
||||
return metrics
|
||||
})
|
||||
return mg
|
||||
}
|
||||
@ -4264,7 +4264,7 @@ func getOrderedLabelValueArrays(labelsWithValue map[string]string) (labels, valu
|
||||
labels = append(labels, l)
|
||||
values = append(values, v)
|
||||
}
|
||||
return
|
||||
return labels, values
|
||||
}
|
||||
|
||||
// newMinioCollectorNode describes the collector
|
||||
|
||||
@ -1,7 +1,7 @@
|
||||
package cmd
|
||||
|
||||
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
|
||||
|
||||
package cmd
|
||||
|
||||
import (
|
||||
"github.com/tinylib/msgp/msgp"
|
||||
)
|
||||
@ -297,14 +297,12 @@ func (z *MetricV2) UnmarshalMsg(bts []byte) (o []byte, err error) {
|
||||
if z.StaticLabels == nil {
|
||||
z.StaticLabels = make(map[string]string, zb0002)
|
||||
} else if len(z.StaticLabels) > 0 {
|
||||
for key := range z.StaticLabels {
|
||||
delete(z.StaticLabels, key)
|
||||
}
|
||||
clear(z.StaticLabels)
|
||||
}
|
||||
for zb0002 > 0 {
|
||||
var za0001 string
|
||||
var za0002 string
|
||||
zb0002--
|
||||
var za0001 string
|
||||
za0001, bts, err = msgp.ReadStringBytes(bts)
|
||||
if err != nil {
|
||||
err = msgp.WrapError(err, "StaticLabels")
|
||||
@@ -333,14 +331,12 @@ func (z *MetricV2) UnmarshalMsg(bts []byte) (o []byte, err error) {
 			if z.VariableLabels == nil {
 				z.VariableLabels = make(map[string]string, zb0003)
 			} else if len(z.VariableLabels) > 0 {
-				for key := range z.VariableLabels {
-					delete(z.VariableLabels, key)
-				}
+				clear(z.VariableLabels)
 			}
 			for zb0003 > 0 {
-				var za0003 string
 				var za0004 string
 				zb0003--
+				var za0003 string
 				za0003, bts, err = msgp.ReadStringBytes(bts)
 				if err != nil {
 					err = msgp.WrapError(err, "VariableLabels")
@@ -369,14 +365,12 @@ func (z *MetricV2) UnmarshalMsg(bts []byte) (o []byte, err error) {
 			if z.Histogram == nil {
 				z.Histogram = make(map[string]uint64, zb0004)
 			} else if len(z.Histogram) > 0 {
-				for key := range z.Histogram {
-					delete(z.Histogram, key)
-				}
+				clear(z.Histogram)
 			}
 			for zb0004 > 0 {
-				var za0005 string
 				var za0006 uint64
 				zb0004--
+				var za0005 string
 				za0005, bts, err = msgp.ReadStringBytes(bts)
 				if err != nil {
 					err = msgp.WrapError(err, "Histogram")
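The generated msgp code now uses the clear() builtin (added in Go 1.21) instead of a delete-in-range loop; both empty the map in place and keep its allocated storage for reuse. A standalone illustration:

package main

import "fmt"

func main() {
	m := map[string]string{"bucket": "test", "server": "node1"}

	// Pre-Go 1.21 idiom, as removed above:
	// for key := range m {
	// 	delete(m, key)
	// }

	clear(m) // Go 1.21+ builtin: empties the map in place

	fmt.Println(len(m)) // 0; the map itself stays allocated for reuse
}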
@@ -1,7 +1,7 @@
-package cmd
-
 // Code generated by github.com/tinylib/msgp DO NOT EDIT.
+
+package cmd

 import (
 	"testing"

@@ -60,7 +60,7 @@ type nodesOnline struct {
 func newNodesUpDownCache() *cachevalue.Cache[nodesOnline] {
 	loadNodesUpDown := func(ctx context.Context) (v nodesOnline, err error) {
 		v.Online, v.Offline = globalNotificationSys.GetPeerOnlineCount()
-		return
+		return v, err
 	}
 	return cachevalue.NewFromFunc(1*time.Minute,
 		cachevalue.Opts{ReturnLastGood: true},
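These constructors all follow the same pattern: a load closure wrapped in a one-minute TTL cache with ReturnLastGood, so a failed refresh serves the previous value instead of an error. The sketch below is a hypothetical, simplified stand-in for MinIO's internal cachevalue package (not its real API), showing the behaviour these hunks rely on:

package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

// Cache refreshes a value of type T at most once per TTL.
type Cache[T any] struct {
	mu      sync.Mutex
	ttl     time.Duration
	load    func(context.Context) (T, error)
	last    T
	updated time.Time
}

func NewFromFunc[T any](ttl time.Duration, load func(context.Context) (T, error)) *Cache[T] {
	return &Cache[T]{ttl: ttl, load: load}
}

// Get reloads only when the TTL has expired; on a failed refresh it keeps
// returning the last good value (the ReturnLastGood behaviour).
func (c *Cache[T]) Get(ctx context.Context) T {
	c.mu.Lock()
	defer c.mu.Unlock()
	if time.Since(c.updated) < c.ttl {
		return c.last
	}
	if v, err := c.load(ctx); err == nil {
		c.last, c.updated = v, time.Now()
	}
	return c.last
}

func main() {
	calls := 0
	c := NewFromFunc(time.Minute, func(context.Context) (int, error) {
		calls++
		return calls, nil
	})
	fmt.Println(c.Get(context.Background()), c.Get(context.Background())) // 1 1
}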
@@ -88,12 +88,12 @@ func newDataUsageInfoCache() *cachevalue.Cache[DataUsageInfo] {
 	loadDataUsage := func(ctx context.Context) (u DataUsageInfo, err error) {
 		objLayer := newObjectLayerFn()
 		if objLayer == nil {
-			return
+			return u, err
 		}

 		// Collect cluster level object metrics.
 		u, err = loadDataUsageFromBackend(GlobalContext, objLayer)
-		return
+		return u, err
 	}
 	return cachevalue.NewFromFunc(1*time.Minute,
 		cachevalue.Opts{ReturnLastGood: true},
@@ -104,11 +104,11 @@ func newESetHealthResultCache() *cachevalue.Cache[HealthResult] {
 	loadHealth := func(ctx context.Context) (r HealthResult, err error) {
 		objLayer := newObjectLayerFn()
 		if objLayer == nil {
-			return
+			return r, err
 		}

 		r = objLayer.Health(GlobalContext, HealthOptions{})
-		return
+		return r, err
 	}
 	return cachevalue.NewFromFunc(1*time.Minute,
 		cachevalue.Opts{ReturnLastGood: true},
@@ -146,7 +146,7 @@ func getDriveIOStatMetrics(ioStats madmin.DiskIOStats, duration time.Duration) (
 	// TotalTicks is in milliseconds
 	m.percUtil = float64(ioStats.TotalTicks) * 100 / (durationSecs * 1000)

-	return
+	return m
 }

 func newDriveMetricsCache() *cachevalue.Cache[storageMetrics] {
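The percUtil computation above converts a busy-time counter into a utilisation percentage: TotalTicks accumulates milliseconds the drive spent doing I/O, so dividing its growth by the sampling window (also in milliseconds) gives the busy fraction. A worked example of the same arithmetic:

package main

import "fmt"

func main() {
	totalTicks := 30000.0 // ms of busy time accumulated during the window
	durationSecs := 60.0  // sampling window in seconds
	percUtil := totalTicks * 100 / (durationSecs * 1000)
	fmt.Printf("%.1f%%\n", percUtil) // 50.0% - busy half the time
}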
@@ -161,7 +161,7 @@ func newDriveMetricsCache() *cachevalue.Cache[storageMetrics] {
 	loadDriveMetrics := func(ctx context.Context) (v storageMetrics, err error) {
 		objLayer := newObjectLayerFn()
 		if objLayer == nil {
-			return
+			return v, err
 		}

 		storageInfo := objLayer.LocalStorageInfo(GlobalContext, true)
@@ -195,7 +195,7 @@ func newDriveMetricsCache() *cachevalue.Cache[storageMetrics] {
 		prevDriveIOStatsRefreshedAt = now
 		prevDriveIOStatsMu.Unlock()

-		return
+		return v, err
 	}

 	return cachevalue.NewFromFunc(1*time.Minute,
@@ -220,7 +220,7 @@ func newCPUMetricsCache() *cachevalue.Cache[madmin.CPUMetrics] {
 			}
 		}

-		return
+		return v, err
 	}

 	return cachevalue.NewFromFunc(1*time.Minute,
@@ -245,7 +245,7 @@ func newMemoryMetricsCache() *cachevalue.Cache[madmin.MemInfo] {
 			}
 		}

-		return
+		return v, err
 	}

 	return cachevalue.NewFromFunc(1*time.Minute,
@@ -268,7 +268,7 @@ func newClusterStorageInfoCache() *cachevalue.Cache[storageMetrics] {
 			offlineDrives: offlineDrives.Sum(),
 			totalDrives:   totalDrives.Sum(),
 		}
-		return
+		return v, err
 	}
 	return cachevalue.NewFromFunc(1*time.Minute,
 		cachevalue.Opts{ReturnLastGood: true},
@@ -228,7 +228,7 @@ func (h *metricsV3Server) ServeHTTP(w http.ResponseWriter, r *http.Request) {
 		// it's the last part of the path. e.g. /bucket/api/<bucket-name>
 		bucketIdx := strings.LastIndex(pathComponents, "/")
 		buckets = append(buckets, pathComponents[bucketIdx+1:])
-		// remove bucket from pathComponents as it is dyanamic and
+		// remove bucket from pathComponents as it is dynamic and
 		// hence not included in the collector path.
 		pathComponents = pathComponents[:bucketIdx]
 	}
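Aside from the comment typo fix, this hunk's logic is worth a standalone sketch: the bucket name is the final path segment, so strings.LastIndex finds the last "/" and the prefix before it identifies the collector. The function and variable names below are illustrative, not the server's actual identifiers:

package main

import (
	"fmt"
	"strings"
)

func splitBucketPath(pathComponents string) (collector, bucket string) {
	bucketIdx := strings.LastIndex(pathComponents, "/")
	bucket = pathComponents[bucketIdx+1:]
	// Drop the bucket: it is dynamic, and only the static prefix is
	// registered as a collector path.
	collector = pathComponents[:bucketIdx]
	return collector, bucket
}

func main() {
	c, b := splitBucketPath("/bucket/api/mybucket")
	fmt.Println(c, b) // /bucket/api mybucket
}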
@@ -386,7 +386,7 @@ func storageMetricsPrometheus(ch chan<- prometheus.Metric) {
 	ch <- prometheus.MustNewConstMetric(
 		prometheus.NewDesc(
 			prometheus.BuildFQName(minioNamespace, "capacity_raw", "total"),
-			"Total capacity online in the cluster",
+			"Total capacity online in current MinIO server instance",
 			nil, nil),
 		prometheus.GaugeValue,
 		float64(GetTotalCapacity(server.Disks)),
@@ -396,7 +396,7 @@ func storageMetricsPrometheus(ch chan<- prometheus.Metric) {
 	ch <- prometheus.MustNewConstMetric(
 		prometheus.NewDesc(
 			prometheus.BuildFQName(minioNamespace, "capacity_raw_free", "total"),
-			"Total free capacity online in the cluster",
+			"Total free capacity online in current MinIO server instance",
 			nil, nil),
 		prometheus.GaugeValue,
 		float64(GetTotalCapacityFree(server.Disks)),
@@ -408,7 +408,7 @@ func storageMetricsPrometheus(ch chan<- prometheus.Metric) {
 	ch <- prometheus.MustNewConstMetric(
 		prometheus.NewDesc(
 			prometheus.BuildFQName(minioNamespace, "capacity_usable", "total"),
-			"Total usable capacity online in the cluster",
+			"Total usable capacity online in current MinIO server instance",
 			nil, nil),
 		prometheus.GaugeValue,
 		float64(GetTotalUsableCapacity(server.Disks, sinfo)),
@@ -418,7 +418,7 @@ func storageMetricsPrometheus(ch chan<- prometheus.Metric) {
 	ch <- prometheus.MustNewConstMetric(
 		prometheus.NewDesc(
 			prometheus.BuildFQName(minioNamespace, "capacity_usable_free", "total"),
-			"Total free usable capacity online in the cluster",
+			"Total free usable capacity online in current MinIO server instance",
 			nil, nil),
 		prometheus.GaugeValue,
 		float64(GetTotalUsableCapacityFree(server.Disks, sinfo)),
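These four hunks only reword help strings to say the gauges are per-server, not cluster-wide; the surrounding client_golang pattern is unchanged. A minimal runnable sketch of that pattern (illustrative names and value, not MinIO's actual collector): a custom Collector emits point-in-time gauges via MustNewConstMetric, with the help text passed to NewDesc describing exactly what is measured.

package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

type capacityCollector struct{ desc *prometheus.Desc }

func newCapacityCollector() *capacityCollector {
	return &capacityCollector{
		desc: prometheus.NewDesc(
			prometheus.BuildFQName("minio", "capacity_raw", "total"),
			"Total capacity online in current MinIO server instance",
			nil, nil),
	}
}

func (c *capacityCollector) Describe(ch chan<- *prometheus.Desc) { ch <- c.desc }

func (c *capacityCollector) Collect(ch chan<- prometheus.Metric) {
	// Hypothetical value; the real collector sums per-disk capacity.
	ch <- prometheus.MustNewConstMetric(c.desc, prometheus.GaugeValue, 42e9)
}

func main() {
	reg := prometheus.NewRegistry()
	reg.MustRegister(newCapacityCollector())
	mfs, _ := reg.Gather()
	for _, mf := range mfs {
		fmt.Println(mf.GetName()) // minio_capacity_raw_total
	}
}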