mirror of https://github.com/minio/minio.git

commit 260970776b (parent 632252ff1d)

    remove mint from MinIO repo, move it to github.com/minio/mint
```diff
@@ -6,10 +6,10 @@ ENV GOROOT /usr/local/go
 ENV GOPATH /usr/local/gopath
 ENV PATH $GOPATH/bin:$GOROOT/bin:$PATH
 ENV MINT_ROOT_DIR /mint
-COPY mint /mint
 
 RUN apt-get --yes update && apt-get --yes upgrade && \
     apt-get --yes --quiet install wget jq curl git dnsmasq && \
+    git clone https://github.com/minio/mint && \
     cd /mint && /mint/release.sh
 
 WORKDIR /mint
```
@@ -1,17 +0,0 @@
```
*.test
*.jar
src/*
temp
__pycache__/
log/*
minio.test
bin/*
node_modules
# exception to the rule
!log/.gitkeep
!bin/.gitkeep
*.class
*~
run/core/minio-dotnet/bin/*
run/core/minio-dotnet/obj/*
run/core/minio-dotnet/out/*
```
mint/README.md (120 deletions)

@@ -1,120 +0,0 @@
# Mint [![Slack](https://slack.minio.io/slack?type=svg)](https://slack.minio.io) [![Docker Pulls](https://img.shields.io/docker/pulls/minio/mint.svg?maxAge=604800)](https://hub.docker.com/r/minio/mint/)

Mint is a testing framework for the Minio object server, available as a Docker image. It runs correctness, benchmarking and stress tests. The following SDKs/tools are used in the correctness tests:

- awscli
- aws-sdk-go
- aws-sdk-php
- aws-sdk-ruby
- aws-sdk-java
- mc
- minio-go
- minio-java
- minio-js
- minio-py
- minio-dotnet
- s3cmd

## Running Mint

Mint is run via the `docker run` command, which requires Docker to be installed. For Docker installation, follow the steps [here](https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/).

To run Mint with the Minio Play server as the test target:

```sh
$ docker run -e SERVER_ENDPOINT=play.minio.io:9000 -e ACCESS_KEY=Q3AM3UQ867SPQQA43P2F \
             -e SECRET_KEY=zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG -e ENABLE_HTTPS=1 minio/mint
```

After the tests are run, output is stored in the `/mint/log` directory inside the container. To retrieve these logs, use the `docker cp` command. For example:

```sh
docker cp <container-id>:/mint/log /tmp/logs
```

### Mint environment variables

The environment variables below are passed to the Docker container. Supported environment variables:

| Environment variable | Description | Example |
|:--- |:--- |:--- |
| `SERVER_ENDPOINT` | Endpoint of the Minio server in the format `HOST:PORT`; for virtual style, `IP:PORT` | `play.minio.io:9000` |
| `ACCESS_KEY` | Access key for `SERVER_ENDPOINT` | `Q3AM3UQ867SPQQA43P2F` |
| `SECRET_KEY` | Secret key for `SERVER_ENDPOINT` | `zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG` |
| `ENABLE_HTTPS` | (Optional) Set `1` to use HTTPS to access `SERVER_ENDPOINT`. Defaults to `0` (HTTP) | `1` |
| `MINT_MODE` | (Optional) Category of tests to run: `core` or `full`. Defaults to `core` | `full` |
| `DOMAIN` | (Optional) Value of the `MINIO_DOMAIN` environment variable used in the Minio server | `myminio.com` |
| `ENABLE_VIRTUAL_STYLE` | (Optional) Set `1` to use virtual-style access. Defaults to `0` (path style) | `1` |
| `RUN_ON_FAIL` | (Optional) Set `1` to execute all tests independent of failures (currently implemented for minio-go and minio-java). Defaults to `0` (stop on first failure) | `1` |

### Test virtual style access against Minio server

To test Minio server virtual-style access with Mint, follow these steps:

- Set a domain in your Minio server using the environment variable `MINIO_DOMAIN`, for example `export MINIO_DOMAIN=myminio.com`.
- Start the Minio server.
- Execute Mint against the Minio server (with `MINIO_DOMAIN` set to `myminio.com`) using this command:

```sh
$ docker run -e "SERVER_ENDPOINT=192.168.86.133:9000" -e "DOMAIN=myminio.com" \
             -e "ACCESS_KEY=minio" -e "SECRET_KEY=minio123" -e "ENABLE_HTTPS=0" \
             -e "ENABLE_VIRTUAL_STYLE=1" minio/mint
```

### Mint log format

All test logs are stored in `/mint/log/log.json` as multiple JSON documents. Below is the JSON format for every entry in the log file.

| JSON field | Type | Description | Example |
|:--- |:--- |:--- |:--- |
| `name` | _string_ | Testing tool/SDK name | `"aws-sdk-php"` |
| `function` | _string_ | Test function name | `"getBucketLocation ( array $params = [] )"` |
| `args` | _object_ | (Optional) Key/value map of arguments passed to the test function | `{"Bucket":"aws-sdk-php-bucket-20341"}` |
| `duration` | _int_ | Time taken in milliseconds to run the test | `384` |
| `status` | _string_ | One of `PASS`, `FAIL` or `NA` | `"PASS"` |
| `alert` | _string_ | (Optional) Alert message indicating test failure | `"I/O error on create file"` |
| `message` | _string_ | (Optional) Any log message | `"validating checksum of downloaded object"` |
| `error` | _string_ | Detailed error message, including stack trace, when status is `FAIL` | `"Error executing \"CompleteMultipartUpload\" on ...` |
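Putting these fields together, a single entry in `log.json` might look like the following (the values are illustrative, not taken from a real run):

```json
{
  "name": "aws-sdk-java",
  "function": "downloadObject(String bucketName, String objectName, SSECustomerKey sseKey)",
  "args": {"bucketName": "aws-java-sdk-test-hypothetical"},
  "duration": 384,
  "status": "PASS"
}
```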
## For Developers

### Running Mint development code

After making changes to the Mint source code, a local Docker image can be built and run with:

```sh
$ docker build -t minio/mint . -f Dockerfile.mint
$ docker run -e SERVER_ENDPOINT=play.minio.io:9000 -e ACCESS_KEY=Q3AM3UQ867SPQQA43P2F \
             -e SECRET_KEY=zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG \
             -e ENABLE_HTTPS=1 -e MINT_MODE=full minio/mint:latest
```

### Adding tests with new tool/SDK

Below are the steps that need to be followed:

- Create a new app directory under the [build](https://github.com/minio/mint/tree/master/build) and [run/core](https://github.com/minio/mint/tree/master/run/core) directories.
- Create an `install.sh` under the app directory, which installs the required tool/SDK.
- Add any build- and install-time dependencies to [install-packages.list](https://github.com/minio/mint/blob/master/install-packages.list).
- Add build-time-only dependencies to [remove-packages.list](https://github.com/minio/mint/blob/master/remove-packages.list) so they are removed, keeping the Mint Docker image clean.
- Add a `run.sh` in the app directory under `run/core`, which executes the actual tests.
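As a sketch of the last step: a `run.sh` conventionally receives the output-log and error-log paths as its first two arguments and appends the tool's JSON output to them. The `my-sdk` directory name and the echoed entry below are hypothetical placeholders for a real test binary:

```shell
#!/bin/bash -e
#
# Hypothetical run/core/my-sdk/run.sh skeleton; Mint passes the output log
# and error log paths as the first two arguments.
output_log_file="${1:-/tmp/my-sdk-output.log}"
error_log_file="${2:-/tmp/my-sdk-error.log}"

# Placeholder for the real test binary: emit one Mint-style JSON log entry
# to the output log, sending any diagnostics to the error log.
echo '{"name":"my-sdk","function":"selftest","status":"PASS","duration":1}' \
    1>>"$output_log_file" 2>>"$error_log_file"
```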
#### Test data

Tests may use pre-created data sets to perform various object operations on the Minio server. The data files below are available under the `/mint/data` directory.

| File name | Size |
|:--- |:--- |
| datafile-0-b | 0B |
| datafile-1-b | 1B |
| datafile-1-kB | 1KiB |
| datafile-10-kB | 10KiB |
| datafile-33-kB | 33KiB |
| datafile-100-kB | 100KiB |
| datafile-1-MB | 1MiB |
| datafile-1.03-MB | 1.03MiB |
| datafile-5-MB | 5MiB |
| datafile-6-MB | 6MiB |
| datafile-10-MB | 10MiB |
| datafile-11-MB | 11MiB |
| datafile-65-MB | 65MiB |
| datafile-129-MB | 129MiB |
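For experiments outside the Docker image, files of the same sizes can be generated with standard tools. A minimal sketch (the `/tmp/mint-data` path is arbitrary; sizes follow the binary units in the table above):

```shell
# Generate a few Mint-style data files locally (path is arbitrary).
mkdir -p /tmp/mint-data
truncate -s 0 /tmp/mint-data/datafile-0-b             # empty file
head -c 1 /dev/urandom > /tmp/mint-data/datafile-1-b  # single random byte
head -c 1024 /dev/urandom > /tmp/mint-data/datafile-1-kB              # 1KiB
head -c $((1024 * 1024)) /dev/urandom > /tmp/mint-data/datafile-1-MB  # 1MiB
```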
@@ -1,6 +0,0 @@
```sh
#!/bin/bash -e
#
#

test_run_dir="$MINT_RUN_CORE_DIR/aws-sdk-go"
(cd "$test_run_dir" && GO111MODULE=on CGO_ENABLED=0 go build)
```
@@ -1,58 +0,0 @@
```xml
<project xmlns:ivy="antlib:org.apache.ivy.ant" name="aws-sdk-java-tests" default="run">
    <property name="ivy.install.version" value="2.5.0" />
    <condition property="ivy.home" value="${env.IVY_HOME}">
        <isset property="env.IVY_HOME" />
    </condition>
    <property name="ivy.home" value="${user.home}/.ant" />
    <property name="ivy.jar.dir" value="${ivy.home}/lib" />
    <property name="ivy.jar.file" value="${ivy.jar.dir}/ivy.jar" />

    <target name="download-ivy" unless="offline">
        <mkdir dir="${ivy.jar.dir}"/>
        <get src="https://repo1.maven.org/maven2/org/apache/ivy/ivy/${ivy.install.version}/ivy-${ivy.install.version}.jar"
             dest="${ivy.jar.file}" usetimestamp="true"/>
    </target>

    <target name="init-ivy" depends="download-ivy">
        <path id="ivy.lib.path">
            <fileset dir="${ivy.jar.dir}" includes="*.jar"/>
        </path>
        <taskdef resource="org/apache/ivy/ant/antlib.xml"
                 uri="antlib:org.apache.ivy.ant" classpathref="ivy.lib.path"/>
    </target>

    <target name="resolve" description="--> retrieve dependencies with ivy">
        <ivy:retrieve />
    </target>

    <target name="clean">
        <delete dir="build"/>
    </target>

    <path id="aws-s3-sdk-deps">
        <fileset dir="lib">
            <include name="*.jar"/>
        </fileset>
    </path>

    <target name="compile">
        <mkdir dir="build/classes"/>
        <javac srcdir="src" destdir="build/classes">
            <classpath refid="aws-s3-sdk-deps" />
        </javac>
    </target>

    <target name="jar">
        <mkdir dir="build/jar"/>
        <jar destfile="build/jar/FunctionalTests.jar" basedir="build/classes">
            <archives>
                <zips>
                    <fileset dir="lib/" includes="*.jar"/>
                </zips>
            </archives>
            <manifest>
                <attribute name="Main-Class" value="io.minio.awssdk.tests.FunctionalTests"/>
            </manifest>
        </jar>
    </target>
</project>
```
@@ -1,17 +0,0 @@
```sh
#!/bin/bash -e
#
#

test_run_dir="$MINT_RUN_CORE_DIR/aws-sdk-java"

cd "$(dirname "$(realpath "$0")")"

ant init-ivy && \
    ant resolve && \
    ant compile && \
    ant jar

cp build/jar/FunctionalTests.jar "$test_run_dir/"

rm -rf lib/ build/
```
@@ -1,6 +0,0 @@
```xml
<ivy-module version="2.0">
    <info organisation="org.apache" module="aws-sdk-java-tests"/>
    <dependencies>
        <dependency org="com.amazonaws" name="aws-java-sdk-s3" rev="1.11.706"/>
    </dependencies>
</ivy-module>
```
@@ -1,634 +0,0 @@
```java
/*
 * Copyright (c) 2015-2021 MinIO, Inc.
 *
 * This file is part of MinIO Object Storage stack
 *
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU Affero General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU Affero General Public License for more details.
 *
 * You should have received a copy of the GNU Affero General Public License
 * along with this program. If not, see <http://www.gnu.org/licenses/>.
 */

package io.minio.awssdk.tests;

import java.io.*;
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;

import java.security.*;
import java.util.*;

import java.nio.file.*;
import java.math.BigInteger;

import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;

import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.CreateBucketRequest;
import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.s3.model.S3ObjectSummary;
import com.amazonaws.services.s3.model.SSECustomerKey;

// Main Testing class
public class FunctionalTests {

    private static final String PASS = "PASS";
    private static final String FAILED = "FAIL";
    private static final String IGNORED = "NA";

    private static String accessKey;
    private static String secretKey;
    private static String region;
    private static String endpoint;
    private static boolean enableHTTPS;

    private static final Random random = new Random(new SecureRandom().nextLong());
    private static String bucketName = getRandomName();
    private static boolean mintEnv = false;

    private static String file1Kb;
    private static String file1Mb;
    private static String file6Mb;

    private static SSECustomerKey sseKey1;
    private static SSECustomerKey sseKey2;
    private static SSECustomerKey sseKey3;

    private static AmazonS3 s3Client;
    private static S3TestUtils s3TestUtils;

    public static String getRandomName() {
        return "aws-java-sdk-test-" + new BigInteger(32, random).toString(32);
    }

    /**
     * Prints a success log entry in JSON format.
     */
    public static void mintSuccessLog(String function, String args, long startTime) {
        if (mintEnv) {
            System.out.println(
                    new MintLogger(function, args, System.currentTimeMillis() - startTime, PASS, null, null, null));
        }
    }

    /**
     * Prints a failure log entry in JSON format.
     */
    public static void mintFailedLog(String function, String args, long startTime, String message, String error) {
        if (mintEnv) {
            System.out.println(new MintLogger(function, args, System.currentTimeMillis() - startTime, FAILED, null,
                    message, error));
        }
    }

    /**
     * Prints an ignored log entry in JSON format.
     */
    public static void mintIgnoredLog(String function, String args, long startTime) {
        if (mintEnv) {
            System.out.println(
                    new MintLogger(function, args, System.currentTimeMillis() - startTime, IGNORED, null, null, null));
        }
    }

    public static void initTests() throws IOException {
        // Create encryption key.
        byte[] rawKey1 = "32byteslongsecretkeymustgenerate".getBytes();
        SecretKey secretKey1 = new SecretKeySpec(rawKey1, 0, rawKey1.length, "AES");
        sseKey1 = new SSECustomerKey(secretKey1);

        // Create new encryption key for target so it is saved using sse-c
        byte[] rawKey2 = "xxbytescopysecretkeymustprovided".getBytes();
        SecretKey secretKey2 = new SecretKeySpec(rawKey2, 0, rawKey2.length, "AES");
        sseKey2 = new SSECustomerKey(secretKey2);

        // Create new encryption key for target so it is saved using sse-c
        byte[] rawKey3 = "32byteslongsecretkeymustgenerat1".getBytes();
        SecretKey secretKey3 = new SecretKeySpec(rawKey3, 0, rawKey3.length, "AES");
        sseKey3 = new SSECustomerKey(secretKey3);

        // Create bucket
        s3Client.createBucket(new CreateBucketRequest(bucketName));
    }

    public static void teardown() throws IOException {

        // Remove all objects under the test bucket & the bucket itself
        // TODO: use multi delete API instead
        ObjectListing objectListing = s3Client.listObjects(bucketName);
        while (true) {
            for (Iterator<?> iterator = objectListing.getObjectSummaries().iterator(); iterator.hasNext();) {
                S3ObjectSummary summary = (S3ObjectSummary) iterator.next();
                s3Client.deleteObject(bucketName, summary.getKey());
            }
            // more objectListing to retrieve?
            if (objectListing.isTruncated()) {
                objectListing = s3Client.listNextBatchOfObjects(objectListing);
            } else {
                break;
            }
        }
        s3Client.deleteBucket(bucketName);
    }

    // Test regular object upload using encryption
    public static void uploadObjectEncryption_test1() throws Exception {
        if (!mintEnv) {
            System.out.println(
                    "Test: uploadObject(String bucketName, String objectName, String f, SSECustomerKey sseKey)");
        }

        if (!enableHTTPS) {
            return;
        }

        long startTime = System.currentTimeMillis();
        String file1KbMD5 = Utils.getFileMD5(file1Kb);
        String objectName = "testobject";
        try {
            s3TestUtils.uploadObject(bucketName, objectName, file1Kb, sseKey1);
            s3TestUtils.downloadObject(bucketName, objectName, sseKey1, file1KbMD5);
            mintSuccessLog("uploadObject(String bucketName, String objectName, String f, SSECustomerKey sseKey)",
                    "bucketName: " + bucketName + ", objectName: " + objectName + ", String: " + file1Kb
                            + ", SSECustomerKey: " + sseKey1,
                    startTime);
        } catch (Exception e) {
            mintFailedLog("uploadObject(String bucketName, String objectName, String f, SSECustomerKey sseKey)",
                    "bucketName: " + bucketName + ", objectName: " + objectName + ", String: " + file1Kb
                            + ", SSECustomerKey: " + sseKey1,
                    startTime, null, e.toString() + " >>> " + Arrays.toString(e.getStackTrace()));
            throw e;
        }
    }

    // Test downloading an object with a wrong encryption key
    public static void downloadObjectEncryption_test1() throws Exception {
        if (!mintEnv) {
            System.out.println("Test: downloadObject(String bucketName, String objectName, SSECustomerKey sseKey)");
        }

        if (!enableHTTPS) {
            return;
        }

        long startTime = System.currentTimeMillis();

        String file1KbMD5 = Utils.getFileMD5(file1Kb);
        String objectName = "testobject";

        try {
            s3TestUtils.uploadObject(bucketName, "testobject", file1Kb, sseKey1);
            s3TestUtils.downloadObject(bucketName, objectName, sseKey2);
            Exception ex = new Exception("downloadObject did not throw an S3 Access denied exception");
            mintFailedLog("downloadObject(String bucketName, String objectName, SSECustomerKey sseKey)",
                    "bucketName: " + bucketName + ", objectName: " + objectName + ", SSECustomerKey: " + sseKey2,
                    startTime, null, ex.toString() + " >>> " + Arrays.toString(ex.getStackTrace()));
            throw ex;
        } catch (Exception e) {
            if (!e.getMessage().contains("Access Denied")) {
                Exception ex = new Exception(
                        "downloadObject did not throw S3 Access denied Exception but it did throw: " + e.getMessage());
                mintFailedLog("downloadObject(String bucketName, String objectName, SSECustomerKey sseKey)",
                        "bucketName: " + bucketName + ", objectName: " + objectName + ", SSECustomerKey: " + sseKey2,
                        startTime, null, ex.toString() + " >>> " + Arrays.toString(ex.getStackTrace()));
                throw ex;
            }
            mintSuccessLog("downloadObject(String bucketName, String objectName, SSECustomerKey sseKey)",
                    "bucketName: " + bucketName + ", objectName: " + objectName + ", SSECustomerKey: " + sseKey2,
                    startTime);
        }
    }

    // Test copying object with a new different encryption key
    public static void copyObjectEncryption_test1() throws Exception {
        if (!mintEnv) {
            System.out.println("Test: copyObject(String bucketName, String objectName, SSECustomerKey sseKey, "
                    + "String destBucketName, String dstObjectName, SSECustomerKey sseKey2, boolean replaceDirective)");
        }

        if (!enableHTTPS) {
            return;
        }

        long startTime = System.currentTimeMillis();
        String file1KbMD5 = Utils.getFileMD5(file1Kb);
        String objectName = "testobject";
        String dstObjectName = "dir/newobject";

        try {
            s3TestUtils.uploadObject(bucketName, objectName, file1Kb, sseKey1);
            s3TestUtils.copyObject(bucketName, objectName, sseKey1, bucketName, dstObjectName, sseKey2, false);
            s3TestUtils.downloadObject(bucketName, dstObjectName, sseKey2, file1KbMD5);
        } catch (Exception e) {
            mintFailedLog("copyObject(String bucketName, String objectName, SSECustomerKey sseKey, "
                    + "String destBucketName, String dstObjectName, SSECustomerKey sseKey2, boolean replaceDirective)",
                    "bucketName: " + bucketName + ", objectName: " + objectName + ", SSECustomerKey: " + sseKey1
                            + "DstbucketName: " + bucketName + ", DstObjectName: " + dstObjectName
                            + ", SSECustomerKey: " + sseKey2 + ", replaceDirective: " + false,
                    startTime, null, e.toString() + " >>> " + Arrays.toString(e.getStackTrace()));
            throw e;
        }
        mintSuccessLog("copyObject(String bucketName, String objectName, SSECustomerKey sseKey, "
                + "String destBucketName, String dstObjectName, SSECustomerKey sseKey2, boolean replaceDirective)",
                "bucketName: " + bucketName + ", objectName: " + objectName + ", SSECustomerKey: " + sseKey1
                        + "DstbucketName: " + bucketName + ", DstObjectName: " + dstObjectName + ", SSECustomerKey: "
                        + sseKey2 + ", replaceDirective: " + false,
                startTime);
    }

    // Test copying object with wrong source encryption key
    public static void copyObjectEncryption_test2() throws Exception {
        if (!mintEnv) {
            System.out.println("Test: copyObject(String bucketName, String objectName, SSECustomerKey sseKey, "
                    + "String destBucketName, String dstObjectName, SSECustomerKey sseKey2, boolean replaceDirective)");
        }

        if (!enableHTTPS) {
            return;
        }

        String objectName = "testobject";
        String dstObjectName = "dir/newobject";

        long startTime = System.currentTimeMillis();

        try {
            s3TestUtils.copyObject(bucketName, objectName, sseKey3, bucketName, dstObjectName, sseKey2, false);
            Exception ex = new Exception("copyObject did not throw an S3 Access denied exception");
            mintFailedLog("copyObject(String bucketName, String objectName, SSECustomerKey sseKey, "
                    + "String destBucketName, String dstObjectName, SSECustomerKey sseKey2, boolean replaceDirective)",
                    "bucketName: " + bucketName + ", objectName: " + objectName + ", SSECustomerKey: " + sseKey3
                            + "DstbucketName: " + bucketName + ", DstObjectName: " + dstObjectName
                            + ", SSECustomerKey: " + sseKey2 + ", replaceDirective: " + false,
                    startTime, null, ex.toString() + " >>> " + Arrays.toString(ex.getStackTrace()));
            throw ex;
        } catch (Exception e) {
            if (!e.getMessage().contains("Access Denied")) {
                Exception ex = new Exception(
                        "copyObject did not throw S3 Access denied Exception but it did throw: " + e.getMessage());
                mintFailedLog("copyObject(String bucketName, String objectName, SSECustomerKey sseKey, "
                        + "String destBucketName, String dstObjectName, SSECustomerKey sseKey2, boolean replaceDirective)",
                        "bucketName: " + bucketName + ", objectName: " + objectName + ", SSECustomerKey: " + sseKey3
                                + "DstbucketName: " + bucketName + ", DstObjectName: " + dstObjectName
                                + ", SSECustomerKey: " + sseKey2 + ", replaceDirective: " + false,
                        startTime, null, ex.toString() + " >>> " + Arrays.toString(ex.getStackTrace()));
                throw ex;
            }
            mintSuccessLog("copyObject(String bucketName, String objectName, SSECustomerKey sseKey, "
                    + "String destBucketName, String dstObjectName, SSECustomerKey sseKey2, boolean replaceDirective)",
                    "bucketName: " + bucketName + ", objectName: " + objectName + ", SSECustomerKey: " + sseKey3
                            + "DstbucketName: " + bucketName + ", DstObjectName: " + dstObjectName
                            + ", SSECustomerKey: " + sseKey2 + ", replaceDirective: " + false,
                    startTime);
        }
    }

    // Test copying multipart object
    public static void copyObjectEncryption_test3() throws Exception {
        if (!mintEnv) {
            System.out.println("Test: copyObject(String bucketName, String objectName, SSECustomerKey sseKey, "
                    + "String destBucketName, String dstObjectName, SSECustomerKey sseKey2, boolean replaceDirective)");
        }

        if (!enableHTTPS) {
            return;
        }

        long startTime = System.currentTimeMillis();
        String file6MbMD5 = Utils.getFileMD5(file6Mb);
        String objectName = "testobject";
        String dstObjectName = "dir/newobject";

        try {
            s3TestUtils.uploadMultipartObject(bucketName, objectName, file6Mb, sseKey1);
            s3TestUtils.copyObject(bucketName, objectName, sseKey1, bucketName, dstObjectName, sseKey2, false);
            s3TestUtils.downloadObject(bucketName, dstObjectName, sseKey2, file6MbMD5);
        } catch (Exception e) {
            mintFailedLog("copyObject(String bucketName, String objectName, SSECustomerKey sseKey, "
                    + "String destBucketName, String dstObjectName, SSECustomerKey sseKey2, boolean replaceDirective)",
                    "bucketName: " + bucketName + ", objectName: " + objectName + ", SSECustomerKey: " + sseKey1
                            + "DstbucketName: " + bucketName + ", DstObjectName: " + dstObjectName
                            + ", SSECustomerKey: " + sseKey2 + ", replaceDirective: " + false,
                    startTime, null, e.toString() + " >>> " + Arrays.toString(e.getStackTrace()));
            throw e;
        }
        mintSuccessLog("copyObject(String bucketName, String objectName, SSECustomerKey sseKey, "
                + "String destBucketName, String dstObjectName, SSECustomerKey sseKey2, boolean replaceDirective)",
                "bucketName: " + bucketName + ", objectName: " + objectName + ", SSECustomerKey: " + sseKey1
                        + "DstbucketName: " + bucketName + ", DstObjectName: " + dstObjectName + ", SSECustomerKey: "
                        + sseKey2 + ", replaceDirective: " + false,
                startTime);
    }

    // Test downloading encrypted object with Get Range, 0 -> 1024
    public static void downloadGetRangeEncryption_test1() throws Exception {
        if (!mintEnv) {
            System.out.println("Test: downloadObjectGetRange(String bucketName, String objectName, "
                    + "SSECustomerKey sseKey, String expectedMD5, int start, int length)");
        }

        if (!enableHTTPS) {
            return;
        }

        long startTime = System.currentTimeMillis();

        String objectName = "testobject";
        String range1MD5 = Utils.getFileMD5(file1Kb);
        int start = 0;
        int length = 1024;
        try {
            s3TestUtils.uploadObject(bucketName, objectName, file1Kb, sseKey1);
            s3TestUtils.downloadObject(bucketName, objectName, sseKey1, range1MD5, start, length);
        } catch (Exception e) {
            mintFailedLog(
                    "downloadObjectGetRange(String bucketName, String objectName, "
                            + "SSECustomerKey sseKey, String expectedMD5, int start, int length)",
                    "bucketName: " + bucketName + ", objectName: " + objectName + ", SSECustomerKey: " + sseKey1
                            + ", expectedMD5: " + range1MD5 + ", start: " + start + ", length: " + length,
                    startTime, null, e.toString() + " >>> " + Arrays.toString(e.getStackTrace()));
            throw e;
        }
        mintSuccessLog(
                "downloadObjectGetRange(String bucketName, String objectName, "
                        + "SSECustomerKey sseKey, String expectedMD5, int start, int length)",
                "bucketName: " + bucketName + ", objectName: " + objectName + ", SSECustomerKey: " + sseKey1
                        + ", expectedMD5: " + range1MD5 + ", start: " + start + ", length: " + length,
                startTime);
    }

    // Test downloading encrypted object with Get Range, 0 -> 1
    public static void downloadGetRangeEncryption_test2() throws Exception {
        if (!mintEnv) {
            System.out.println("Test: downloadObjectGetRange(String bucketName, String objectName, "
                    + "SSECustomerKey sseKey, String expectedMD5, int start, int length)");
        }

        if (!enableHTTPS) {
            return;
        }

        long startTime = System.currentTimeMillis();

        String objectName = "testobject";
        int start = 0;
        int length = 1;
        String range1MD5 = Utils.getFileMD5(file1Kb, start, length);
        try {
            s3TestUtils.uploadObject(bucketName, objectName, file1Kb, sseKey1);
            s3TestUtils.downloadObject(bucketName, objectName, sseKey1, range1MD5, start, length);
        } catch (Exception e) {
            mintFailedLog(
                    "downloadObjectGetRange(String bucketName, String objectName, "
                            + "SSECustomerKey sseKey, String expectedMD5, int start, int length)",
                    "bucketName: " + bucketName + ", objectName: " + objectName + ", SSECustomerKey: " + sseKey1
                            + ", expectedMD5: " + range1MD5 + ", start: " + start + ", length: " + length,
                    startTime, null, e.toString() + " >>> " + Arrays.toString(e.getStackTrace()));
            throw e;
        }
        mintSuccessLog(
                "downloadObjectGetRange(String bucketName, String objectName, "
                        + "SSECustomerKey sseKey, String expectedMD5, int start, int length)",
                "bucketName: " + bucketName + ", objectName: " + objectName + ", SSECustomerKey: " + sseKey1
                        + ", expectedMD5: " + range1MD5 + ", start: " + start + ", length: " + length,
                startTime);
    }

    // Test downloading encrypted object with Get Range, 0 -> 1024-1
    public static void downloadGetRangeEncryption_test3() throws Exception {
        if (!mintEnv) {
            System.out.println("Test: downloadObjectGetRange(String bucketName, String objectName, "
                    + "SSECustomerKey sseKey, String expectedMD5, int start, int length)");
        }

        if (!enableHTTPS) {
            return;
        }

        long startTime = System.currentTimeMillis();

        String objectName = "testobject";
        int start = 0;
        int length = 1023;
        String range1MD5 = Utils.getFileMD5(file1Kb, start, length);
        try {
            s3TestUtils.uploadObject(bucketName, objectName, file1Kb, sseKey1);
            s3TestUtils.downloadObject(bucketName, objectName, sseKey1, range1MD5, start, length);
        } catch (Exception e) {
            mintFailedLog(
                    "downloadObjectGetRange(String bucketName, String objectName, "
                            + "SSECustomerKey sseKey, String expectedMD5, int start, int length)",
                    "bucketName: " + bucketName + ", objectName: " + objectName + ", SSECustomerKey: " + sseKey1
                            + ", expectedMD5: " + range1MD5 + ", start: " + start + ", length: " + length,
                    startTime, null, e.toString() + " >>> " + Arrays.toString(e.getStackTrace()));
            throw e;
        }
        mintSuccessLog(
```
|
|
||||||
"downloadObjectGetRange(String bucketName, String objectName, "
|
|
||||||
+ "SSECustomerKey sseKey, String expectedMD5, int start, int length)",
|
|
||||||
"bucketName: " + bucketName + ", objectName: " + objectName + ", SSECustomerKey: " + sseKey1
|
|
||||||
+ ", expectedMD5: " + range1MD5 + ", start: " + start + ", length: " + length,
|
|
||||||
startTime);
|
|
||||||
}
|
|
||||||
|
|
||||||
// Test downloading encrypted object with Get Range, 1 -> 1024-1
|
|
||||||
public static void downloadGetRangeEncryption_test4() throws Exception {
|
|
||||||
if (!mintEnv) {
|
|
||||||
System.out.println("Test: downloadObjectGetRange(String bucketName, String objectName, "
|
|
||||||
+ "SSECustomerKey sseKey, String expectedMD5, int start, int length)");
|
|
||||||
}
|
|
||||||
|
|
||||||
if (!enableHTTPS) {
|
|
||||||
return;
|
|
||||||
}
|
|
||||||
|
|
||||||
long startTime = System.currentTimeMillis();
|
|
||||||
|
|
||||||
String objectName = "testobject";
|
|
||||||
int start = 1;
|
|
||||||
int length = 1023;
|
|
||||||
String range1MD5 = Utils.getFileMD5(file1Kb, start, length);
|
|
||||||
try {
|
|
||||||
s3TestUtils.uploadObject(bucketName, objectName, file1Kb, sseKey1);
|
|
||||||
s3TestUtils.downloadObject(bucketName, objectName, sseKey1, range1MD5, start, length);
|
|
||||||
} catch (Exception e) {
|
|
||||||
mintFailedLog(
|
|
||||||
"downloadObjectGetRange(String bucketName, String objectName, "
|
|
||||||
+ "SSECustomerKey sseKey, String expectedMD5, int start, int length)",
|
|
||||||
"bucketName: " + bucketName + ", objectName: " + objectName + ", SSECustomerKey: " + sseKey1
|
|
||||||
+ ", expectedMD5: " + range1MD5 + ", start: " + start + ", length: " + length,
|
|
||||||
startTime, null, e.toString() + " >>> " + Arrays.toString(e.getStackTrace()));
|
|
||||||
throw e;
|
|
||||||
}
|
|
||||||
mintSuccessLog(
|
|
||||||
"downloadObjectGetRange(String bucketName, String objectName, "
|
|
||||||
+ "SSECustomerKey sseKey, String expectedMD5, int start, int length)",
|
|
||||||
"bucketName: " + bucketName + ", objectName: " + objectName + ", SSECustomerKey: " + sseKey1
|
|
||||||
+ ", expectedMD5: " + range1MD5 + ", start: " + start + ", length: " + length,
|
|
||||||
startTime);
|
|
||||||
}
|
|
||||||
|
|
||||||
// Test downloading encrypted object with Get Range, 64*1024 -> 64*1024
|
|
||||||
public static void downloadGetRangeEncryption_test5() throws Exception {
|
|
||||||
if (!mintEnv) {
|
|
||||||
System.out.println("Test: downloadObjectGetRange(String bucketName, String objectName, "
|
|
||||||
+ "SSECustomerKey sseKey, String expectedMD5, int start, int length)");
|
|
||||||
}
|
|
||||||
|
|
||||||
if (!enableHTTPS) {
|
|
||||||
return;
|
|
||||||
}
|
|
||||||
|
|
||||||
long startTime = System.currentTimeMillis();
|
|
||||||
|
|
||||||
String objectName = "testobject";
|
|
||||||
int start = 64 * 1024;
|
|
||||||
int length = 64 * 1024;
|
|
||||||
String range1MD5 = Utils.getFileMD5(file1Mb, start, length);
|
|
||||||
try {
|
|
||||||
s3TestUtils.uploadObject(bucketName, objectName, file1Mb, sseKey1);
|
|
||||||
s3TestUtils.downloadObject(bucketName, objectName, sseKey1, range1MD5, start, length);
|
|
||||||
} catch (Exception e) {
|
|
||||||
mintFailedLog(
|
|
||||||
"downloadObjectGetRange(String bucketName, String objectName, "
|
|
||||||
+ "SSECustomerKey sseKey, String expectedMD5, int start, int length)",
|
|
||||||
"bucketName: " + bucketName + ", objectName: " + objectName + ", SSECustomerKey: " + sseKey1
|
|
||||||
+ ", expectedMD5: " + range1MD5 + ", start: " + start + ", length: " + length,
|
|
||||||
startTime, null, e.toString() + " >>> " + Arrays.toString(e.getStackTrace()));
|
|
||||||
throw e;
|
|
||||||
}
|
|
||||||
mintSuccessLog(
|
|
||||||
"downloadObjectGetRange(String bucketName, String objectName, "
|
|
||||||
+ "SSECustomerKey sseKey, String expectedMD5, int start, int length)",
|
|
||||||
"bucketName: " + bucketName + ", objectName: " + objectName + ", SSECustomerKey: " + sseKey1
|
|
||||||
+ ", expectedMD5: " + range1MD5 + ", start: " + start + ", length: " + length,
|
|
||||||
startTime);
|
|
||||||
}
|
|
||||||
|
|
||||||
// Test downloading encrypted object with Get Range, 64*1024 ->
|
|
||||||
// 1024*1024-64*1024
|
|
||||||
public static void downloadGetRangeEncryption_test6() throws Exception {
|
|
||||||
if (!mintEnv) {
|
|
||||||
System.out.println("Test: downloadObjectGetRange(String bucketName, String objectName, "
|
|
||||||
+ "SSECustomerKey sseKey, String expectedMD5, int start, int length)");
|
|
||||||
}
|
|
||||||
|
|
||||||
if (!enableHTTPS) {
|
|
||||||
return;
|
|
||||||
}
|
|
||||||
|
|
||||||
long startTime = System.currentTimeMillis();
|
|
||||||
|
|
||||||
String objectName = "testobject";
|
|
||||||
int start = 64 * 1024;
|
|
||||||
int length = 1024 * 1024 - 64 * 1024;
|
|
||||||
String range1MD5 = Utils.getFileMD5(file1Mb, start, length);
|
|
||||||
try {
|
|
||||||
s3TestUtils.uploadObject(bucketName, objectName, file1Mb, sseKey1);
|
|
||||||
s3TestUtils.downloadObject(bucketName, objectName, sseKey1, range1MD5, start, length);
|
|
||||||
} catch (Exception e) {
|
|
||||||
mintFailedLog(
|
|
||||||
"downloadObjectGetRange(String bucketName, String objectName, "
|
|
||||||
+ "SSECustomerKey sseKey, String expectedMD5, int start, int length)",
|
|
||||||
"bucketName: " + bucketName + ", objectName: " + objectName + ", SSECustomerKey: " + sseKey1
|
|
||||||
+ ", expectedMD5: " + range1MD5 + ", start: " + start + ", length: " + length,
|
|
||||||
startTime, null, e.toString() + " >>> " + Arrays.toString(e.getStackTrace()));
|
|
||||||
throw e;
|
|
||||||
}
|
|
||||||
mintSuccessLog(
|
|
||||||
"downloadObjectGetRange(String bucketName, String objectName, "
|
|
||||||
+ "SSECustomerKey sseKey, String expectedMD5, int start, int length)",
|
|
||||||
"bucketName: " + bucketName + ", objectName: " + objectName + ", SSECustomerKey: " + sseKey1
|
|
||||||
+ ", expectedMD5: " + range1MD5 + ", start: " + start + ", length: " + length,
|
|
||||||
startTime);
|
|
||||||
}
|
|
||||||
|
|
||||||
// Run tests
|
|
||||||
public static void runTests() throws Exception {
|
|
||||||
|
|
||||||
uploadObjectEncryption_test1();
|
|
||||||
|
|
||||||
downloadObjectEncryption_test1();
|
|
||||||
|
|
||||||
copyObjectEncryption_test1();
|
|
||||||
copyObjectEncryption_test2();
|
|
||||||
copyObjectEncryption_test3();
|
|
||||||
|
|
||||||
downloadGetRangeEncryption_test1();
|
|
||||||
downloadGetRangeEncryption_test2();
|
|
||||||
downloadGetRangeEncryption_test3();
|
|
||||||
downloadGetRangeEncryption_test4();
|
|
||||||
downloadGetRangeEncryption_test5();
|
|
||||||
downloadGetRangeEncryption_test6();
|
|
||||||
}
|
|
||||||
|
|
||||||
public static void main(String[] args) throws Exception, IOException, NoSuchAlgorithmException {
|
|
||||||
|
|
||||||
endpoint = System.getenv("SERVER_ENDPOINT");
|
|
||||||
accessKey = System.getenv("ACCESS_KEY");
|
|
||||||
secretKey = System.getenv("SECRET_KEY");
|
|
||||||
enableHTTPS = System.getenv("ENABLE_HTTPS").equals("1");
|
|
||||||
|
|
||||||
region = "us-east-1";
|
|
||||||
|
|
||||||
if (enableHTTPS) {
|
|
||||||
endpoint = "https://" + endpoint;
|
|
||||||
} else {
|
|
||||||
endpoint = "http://" + endpoint;
|
|
||||||
}
|
|
||||||
|
|
||||||
String dataDir = System.getenv("MINT_DATA_DIR");
|
|
||||||
if (dataDir != null && !dataDir.equals("")) {
|
|
||||||
mintEnv = true;
|
|
||||||
file1Kb = Paths.get(dataDir, "datafile-1-kB").toString();
|
|
||||||
file1Mb = Paths.get(dataDir, "datafile-1-MB").toString();
|
|
||||||
file6Mb = Paths.get(dataDir, "datafile-6-MB").toString();
|
|
||||||
}
|
|
||||||
|
|
||||||
String mintMode = null;
|
|
||||||
if (mintEnv) {
|
|
||||||
mintMode = System.getenv("MINT_MODE");
|
|
||||||
}
|
|
||||||
|
|
||||||
AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
|
|
||||||
AmazonS3ClientBuilder.EndpointConfiguration endpointConfiguration = new AmazonS3ClientBuilder.EndpointConfiguration(
|
|
||||||
endpoint, region);
|
|
||||||
|
|
||||||
AmazonS3ClientBuilder clientBuilder = AmazonS3ClientBuilder.standard();
|
|
||||||
clientBuilder.setCredentials(new AWSStaticCredentialsProvider(credentials));
|
|
||||||
clientBuilder.setEndpointConfiguration(endpointConfiguration);
|
|
||||||
clientBuilder.setPathStyleAccessEnabled(true);
|
|
||||||
|
|
||||||
s3Client = clientBuilder.build();
|
|
||||||
s3TestUtils = new S3TestUtils(s3Client);
|
|
||||||
|
|
||||||
try {
|
|
||||||
initTests();
|
|
||||||
FunctionalTests.runTests();
|
|
||||||
} catch (Exception e) {
|
|
||||||
e.printStackTrace();
|
|
||||||
System.exit(-1);
|
|
||||||
} finally {
|
|
||||||
teardown();
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
|
@ -1,62 +0,0 @@
/*
 * Copyright (c) 2015-2021 MinIO, Inc.
 *
 * This file is part of MinIO Object Storage stack
 *
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU Affero General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU Affero General Public License for more details.
 *
 * You should have received a copy of the GNU Affero General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */

package io.minio.awssdk.tests;

import java.io.*;

// LimitedInputStream wraps a regular InputStream: the first read()
// skips the configured number of bytes, and subsequent reads return
// at most the configured length before signalling end-of-stream.
class LimitedInputStream extends InputStream {

    private int skip;
    private int length;
    private InputStream is;

    LimitedInputStream(InputStream is, int skip, int length) {
        this.is = is;
        this.skip = skip;
        this.length = length;
    }

    @Override
    public int read() throws IOException {
        int r;
        while (skip > 0) {
            r = is.read();
            if (r < 0) {
                throw new IOException("stream ended before being able to skip all bytes");
            }
            skip--;
        }
        if (length == 0) {
            return -1;
        }
        r = is.read();
        if (r < 0) {
            throw new IOException("stream ended before being able to read all bytes");
        }
        length--;
        return r;
    }
}
@ -1,153 +0,0 @@
/*
 * Copyright (c) 2015-2021 MinIO, Inc.
 *
 * This file is part of MinIO Object Storage stack
 *
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU Affero General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU Affero General Public License for more details.
 *
 * You should have received a copy of the GNU Affero General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */

package io.minio.awssdk.tests;

import com.fasterxml.jackson.annotation.JsonAutoDetect;
import com.fasterxml.jackson.annotation.JsonIgnore;
import com.fasterxml.jackson.annotation.JsonInclude.Include;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;

@JsonAutoDetect(fieldVisibility = JsonAutoDetect.Visibility.ANY)
public class MintLogger {

    @JsonProperty("name")
    private String name;

    @JsonProperty("function")
    private String function;

    @JsonProperty("args")
    private String args;

    @JsonProperty("duration")
    private long duration;

    @JsonProperty("status")
    private String status;

    @JsonProperty("alert")
    private String alert;

    @JsonProperty("message")
    private String message;

    @JsonProperty("error")
    private String error;

    /**
     * Constructor.
     **/
    public MintLogger(String function,
                      String args,
                      long duration,
                      String status,
                      String alert,
                      String message,
                      String error) {
        this.name = "aws-sdk-java";
        this.function = function;
        this.duration = duration;
        this.args = args;
        this.status = status;
        this.alert = alert;
        this.message = message;
        this.error = error;
    }

    /**
     * Return JSON Log Entry.
     **/
    @JsonIgnore
    public String toString() {
        try {
            return new ObjectMapper().setSerializationInclusion(Include.NON_NULL).writeValueAsString(this);
        } catch (JsonProcessingException e) {
            e.printStackTrace();
        }
        return "";
    }

    /**
     * Return Alert.
     **/
    @JsonIgnore
    public String alert() {
        return alert;
    }

    /**
     * Return Error.
     **/
    @JsonIgnore
    public String error() {
        return error;
    }

    /**
     * Return Message.
     **/
    @JsonIgnore
    public String message() {
        return message;
    }

    /**
     * Return args.
     **/
    @JsonIgnore
    public String args() {
        return args;
    }

    /**
     * Return status.
     **/
    @JsonIgnore
    public String status() {
        return status;
    }

    /**
     * Return name.
     **/
    @JsonIgnore
    public String name() {
        return name;
    }

    /**
     * Return function.
     **/
    @JsonIgnore
    public String function() {
        return function;
    }

    /**
     * Return duration.
     **/
    @JsonIgnore
    public long duration() {
        return duration;
    }
}
@ -1,189 +0,0 @@
/*
 * Copyright (c) 2015-2021 MinIO, Inc.
 *
 * This file is part of MinIO Object Storage stack
 *
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU Affero General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU Affero General Public License for more details.
 *
 * You should have received a copy of the GNU Affero General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */

package io.minio.awssdk.tests;

import java.io.*;
import java.util.*;

import com.amazonaws.services.s3.model.GetObjectMetadataRequest;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.CopyObjectRequest;
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.services.s3.model.S3ObjectInputStream;
import com.amazonaws.services.s3.model.SSECustomerKey;

import com.amazonaws.services.s3.model.CompleteMultipartUploadRequest;
import com.amazonaws.services.s3.model.InitiateMultipartUploadRequest;
import com.amazonaws.services.s3.model.InitiateMultipartUploadResult;
import com.amazonaws.services.s3.model.PartETag;
import com.amazonaws.services.s3.model.UploadPartRequest;

import com.amazonaws.services.s3.model.MetadataDirective;

import com.amazonaws.services.s3.AmazonS3;

class S3TestUtils {

    private AmazonS3 s3Client;

    S3TestUtils(AmazonS3 s3Client) {
        this.s3Client = s3Client;
    }

    void uploadMultipartObject(String bucketName, String keyName,
            String filePath, SSECustomerKey sseKey) throws IOException {

        File file = new File(filePath);

        List<PartETag> partETags = new ArrayList<PartETag>();

        // Step 1: Initialize.
        InitiateMultipartUploadRequest initRequest =
                new InitiateMultipartUploadRequest(bucketName, keyName);

        if (sseKey != null) {
            initRequest.setSSECustomerKey(sseKey);
        }

        InitiateMultipartUploadResult initResponse =
                s3Client.initiateMultipartUpload(initRequest);

        long contentLength = file.length();
        long partSize = 5242880; // Set part size to 5 MB.

        // Step 2: Upload parts.
        long filePosition = 0;
        for (int i = 1; filePosition < contentLength; i++) {
            // Last part can be less than 5 MB. Adjust part size.
            partSize = Math.min(partSize, (contentLength - filePosition));

            // Create request to upload a part.
            UploadPartRequest uploadRequest = new UploadPartRequest()
                    .withBucketName(bucketName).withKey(keyName)
                    .withUploadId(initResponse.getUploadId()).withPartNumber(i)
                    .withFileOffset(filePosition)
                    .withFile(file)
                    .withPartSize(partSize);

            if (sseKey != null) {
                uploadRequest.withSSECustomerKey(sseKey);
            }

            // Upload part and add response to our list.
            partETags.add(s3Client.uploadPart(uploadRequest).getPartETag());

            filePosition += partSize;
        }

        // Step 3: Complete.
        CompleteMultipartUploadRequest compRequest = new CompleteMultipartUploadRequest(
                bucketName,
                keyName,
                initResponse.getUploadId(),
                partETags);

        s3Client.completeMultipartUpload(compRequest);
    }

    void uploadObject(String bucketName, String keyName,
            String filePath, SSECustomerKey sseKey) throws IOException {

        File f = new File(filePath);
        PutObjectRequest putObjectRequest = new PutObjectRequest(bucketName, keyName, f);
        if (sseKey != null) {
            putObjectRequest.withSSECustomerKey(sseKey);
        }
        s3Client.putObject(putObjectRequest);
    }

    void downloadObject(String bucketName, String keyName, SSECustomerKey sseKey)
            throws Exception {
        downloadObject(bucketName, keyName, sseKey, "", -1, -1);
    }

    void downloadObject(String bucketName, String keyName, SSECustomerKey sseKey,
            String expectedMD5)
            throws Exception {
        downloadObject(bucketName, keyName, sseKey, expectedMD5, -1, -1);
    }

    void downloadObject(String bucketName, String keyName, SSECustomerKey sseKey,
            String expectedMD5, int start, int length) throws Exception {
        GetObjectRequest getObjectRequest = new GetObjectRequest(bucketName, keyName)
                .withSSECustomerKey(sseKey);

        if (start >= 0 && length >= 0) {
            getObjectRequest.setRange(start, start + length - 1);
        }

        S3Object s3Object = s3Client.getObject(getObjectRequest);

        int size = 0;
        int c;

        S3ObjectInputStream input = s3Object.getObjectContent();

        ByteArrayOutputStream output = new ByteArrayOutputStream();
        while ((c = input.read()) != -1) {
            output.write((byte) c);
            size++;
        }
        input.close();

        if (length >= 0 && size != length) {
            throw new Exception("downloaded object has unexpected size, expected: " + length + ", received: " + size);
        }

        String calculatedMD5 = Utils.getBufferMD5(output.toByteArray());

        if (!expectedMD5.equals("") && !calculatedMD5.equals(expectedMD5)) {
            throw new Exception("downloaded object has unexpected md5sum, expected: " + expectedMD5 + ", found: " + calculatedMD5);
        }
    }

    void copyObject(String bucketName, String keyName, SSECustomerKey sseKey,
            String targetBucketName, String targetKeyName, SSECustomerKey newSseKey,
            boolean replace) {
        CopyObjectRequest copyRequest = new CopyObjectRequest(bucketName, keyName, targetBucketName, targetKeyName);
        if (sseKey != null) {
            copyRequest.withSourceSSECustomerKey(sseKey);
        }
        if (newSseKey != null) {
            copyRequest.withDestinationSSECustomerKey(newSseKey);
        }
        if (replace) {
            copyRequest.withMetadataDirective(MetadataDirective.COPY);
        }
        s3Client.copyObject(copyRequest);
    }

    long retrieveObjectMetadata(String bucketName, String keyName, SSECustomerKey sseKey) {
        GetObjectMetadataRequest getMetadataRequest = new GetObjectMetadataRequest(bucketName, keyName)
                .withSSECustomerKey(sseKey);
        ObjectMetadata objectMetadata = s3Client.getObjectMetadata(getMetadataRequest);
        return objectMetadata.getContentLength();
    }

}
@ -1,77 +0,0 @@
/*
 * Copyright (c) 2015-2021 MinIO, Inc.
 *
 * This file is part of MinIO Object Storage stack
 *
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU Affero General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU Affero General Public License for more details.
 *
 * You should have received a copy of the GNU Affero General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */

package io.minio.awssdk.tests;

import java.io.*;
import java.security.*;

class Utils {

    public static byte[] createChecksum(InputStream is, int skip, int length) throws Exception {
        int numRead;
        byte[] buffer = new byte[1024];

        MessageDigest complete = MessageDigest.getInstance("MD5");

        if (skip > -1 && length > -1) {
            is = new LimitedInputStream(is, skip, length);
        }

        do {
            numRead = is.read(buffer);
            if (numRead > 0) {
                complete.update(buffer, 0, numRead);
            }
        } while (numRead != -1);

        return complete.digest();
    }

    public static String getInputStreamMD5(InputStream is) throws Exception {
        return getInputStreamMD5(is, -1, -1);
    }

    public static String getInputStreamMD5(InputStream is, int start, int length) throws Exception {
        byte[] b = createChecksum(is, start, length);
        StringBuilder result = new StringBuilder();

        for (int i = 0; i < b.length; i++) {
            result.append(Integer.toString((b[i] & 0xff) + 0x100, 16).substring(1));
        }
        return result.toString();
    }

    public static String getFileMD5(String filePath) throws Exception {
        return getFileMD5(filePath, -1, -1);
    }

    public static String getFileMD5(String filePath, int start, int length) throws Exception {
        File f = new File(filePath);
        // Close the stream once the digest has been computed.
        try (InputStream is = new FileInputStream(f)) {
            return getInputStreamMD5(is, start, length);
        }
    }

    public static String getBufferMD5(byte[] data) throws Exception {
        ByteArrayInputStream bis = new ByteArrayInputStream(data);
        return getInputStreamMD5(bis);
    }
}
@ -1,7 +0,0 @@
#!/bin/bash -e
#
#

test_run_dir="$MINT_RUN_CORE_DIR/aws-sdk-php"
$WGET --output-document=- https://getcomposer.org/installer | php -- --install-dir="$test_run_dir"
php "$test_run_dir/composer.phar" --working-dir="$test_run_dir" install
@ -1,5 +0,0 @@
#!/bin/bash -e
#
#

gem install --no-rdoc --no-ri aws-sdk-resources aws-sdk multipart_body
@ -1,5 +0,0 @@
#!/bin/bash -e
#
#

pip3 install awscli --upgrade
@ -1,6 +0,0 @@
#!/bin/bash -e
#
#

test_run_dir="$MINT_RUN_CORE_DIR/healthcheck"
(cd "$test_run_dir" && GO111MODULE=on CGO_ENABLED=0 go build)
@ -1,18 +0,0 @@
#!/bin/bash -e
#
#

MC_VERSION=$(curl --retry 10 -Ls -o /dev/null -w "%{url_effective}" https://github.com/minio/mc/releases/latest | sed "s/https:\/\/github.com\/minio\/mc\/releases\/tag\///")
if [ -z "$MC_VERSION" ]; then
    echo "unable to get mc version from github"
    exit 1
fi

test_run_dir="$MINT_RUN_CORE_DIR/mc"
$WGET --output-document="${test_run_dir}/mc" "https://dl.minio.io/client/mc/release/linux-amd64/mc.${MC_VERSION}"
chmod a+x "${test_run_dir}/mc"

git clone --quiet https://github.com/minio/mc.git "$test_run_dir/mc.git"
(cd "$test_run_dir/mc.git"; git checkout --quiet "tags/${MC_VERSION}")
cp -a "${test_run_dir}/mc.git/functional-tests.sh" "$test_run_dir/"
rm -fr "$test_run_dir/mc.git"
@ -1,29 +0,0 @@
#!/bin/bash
#
#

set -e

MINIO_DOTNET_SDK_PATH="$MINT_RUN_CORE_DIR/minio-dotnet"

MINIO_DOTNET_SDK_VERSION=$(curl --retry 10 -Ls -o /dev/null -w "%{url_effective}" https://github.com/minio/minio-dotnet/releases/latest | sed "s/https:\/\/github.com\/minio\/minio-dotnet\/releases\/tag\///")
if [ -z "$MINIO_DOTNET_SDK_VERSION" ]; then
    echo "unable to get minio-dotnet version from github"
    exit 1
fi

out_dir="$MINIO_DOTNET_SDK_PATH/out"
if [ ! -d "$out_dir" ]; then
    mkdir "$out_dir"
fi

temp_dir="$MINIO_DOTNET_SDK_PATH/temp"
git clone --quiet https://github.com/minio/minio-dotnet.git "${temp_dir}/minio-dotnet.git/"
(cd "${temp_dir}/minio-dotnet.git"; git checkout --quiet "tags/${MINIO_DOTNET_SDK_VERSION}")

cp -a "${temp_dir}/minio-dotnet.git/Minio.Functional.Tests/"* "${MINIO_DOTNET_SDK_PATH}/"
rm -fr "${temp_dir}"

cd "$MINIO_DOTNET_SDK_PATH"
dotnet restore /p:Configuration=Mint
dotnet publish --runtime ubuntu.18.04-x64 --output out /p:Configuration=Mint
@ -1,13 +0,0 @@
#!/bin/bash -e
#
#

MINIO_GO_VERSION=$(curl --retry 10 -Ls -o /dev/null -w "%{url_effective}" https://github.com/minio/minio-go/releases/latest | sed "s/https:\/\/github.com\/minio\/minio-go\/releases\/tag\///")
if [ -z "$MINIO_GO_VERSION" ]; then
    echo "unable to get minio-go version from github"
    exit 1
fi

test_run_dir="$MINT_RUN_CORE_DIR/minio-go"
curl -sL -o "${test_run_dir}/main.go" "https://raw.githubusercontent.com/minio/minio-go/${MINIO_GO_VERSION}/functional_tests.go"
(cd "$test_run_dir" && GO111MODULE=on CGO_ENABLED=0 go build -o minio-go main.go)
@@ -1,21 +0,0 @@
#!/bin/bash -e
#
#

SPOTBUGS_VERSION="4.2.2" ## needed since 8.0.2 release
JUNIT_VERSION="4.12" ## JUnit version
MINIO_JAVA_VERSION=$(curl --retry 10 -s "https://repo1.maven.org/maven2/io/minio/minio/maven-metadata.xml" | sed -n "/<latest>/{s/<.[^>]*>//g;p;q}" | sed "s/ *//g")
if [ -z "$MINIO_JAVA_VERSION" ]; then
    echo "unable to get latest minio-java version from maven"
    exit 1
fi

test_run_dir="$MINT_RUN_CORE_DIR/minio-java"
git clone --quiet https://github.com/minio/minio-java.git "$test_run_dir/minio-java.git"
(cd "$test_run_dir/minio-java.git"; git checkout --quiet "tags/${MINIO_JAVA_VERSION}")
$WGET --output-document="$test_run_dir/minio-${MINIO_JAVA_VERSION}-all.jar" "https://repo1.maven.org/maven2/io/minio/minio/${MINIO_JAVA_VERSION}/minio-${MINIO_JAVA_VERSION}-all.jar"
$WGET --output-document="$test_run_dir/spotbugs-annotations-${SPOTBUGS_VERSION}.jar" "https://repo1.maven.org/maven2/com/github/spotbugs/spotbugs-annotations/${SPOTBUGS_VERSION}/spotbugs-annotations-${SPOTBUGS_VERSION}.jar"
$WGET --output-document="$test_run_dir/junit-${JUNIT_VERSION}.jar" "https://repo1.maven.org/maven2/junit/junit/${JUNIT_VERSION}/junit-${JUNIT_VERSION}.jar"
javac -cp "$test_run_dir/minio-${MINIO_JAVA_VERSION}-all.jar:$test_run_dir/spotbugs-annotations-${SPOTBUGS_VERSION}.jar:$test_run_dir/junit-${JUNIT_VERSION}.jar" "${test_run_dir}/minio-java.git/functional"/*.java
cp -a "${test_run_dir}/minio-java.git/functional"/*.class "$test_run_dir/"
rm -fr "$test_run_dir/minio-java.git"
@@ -1,15 +0,0 @@
#!/bin/bash -e
#
#

MINIO_JS_VERSION=$(curl --retry 10 -Ls -o /dev/null -w "%{url_effective}" https://github.com/minio/minio-js/releases/latest | sed "s/https:\/\/github.com\/minio\/minio-js\/releases\/tag\///")
if [ -z "$MINIO_JS_VERSION" ]; then
    echo "unable to get minio-js version from github"
    exit 1
fi

test_run_dir="$MINT_RUN_CORE_DIR/minio-js"
mkdir "${test_run_dir}/test"
$WGET --output-document="${test_run_dir}/test/functional-tests.js" "https://raw.githubusercontent.com/minio/minio-js/${MINIO_JS_VERSION}/src/test/functional/functional-tests.js"
npm --prefix "$test_run_dir" install --save "minio@$MINIO_JS_VERSION"
npm --prefix "$test_run_dir" install
@@ -1,14 +0,0 @@
#!/bin/bash -e
#
#

MINIO_PY_VERSION=$(curl --retry 10 -Ls -o /dev/null -w "%{url_effective}" https://github.com/minio/minio-py/releases/latest | sed "s/https:\/\/github.com\/minio\/minio-py\/releases\/tag\///")
if [ -z "$MINIO_PY_VERSION" ]; then
    echo "unable to get minio-py version from github"
    exit 1
fi

test_run_dir="$MINT_RUN_CORE_DIR/minio-py"
pip3 install --user faker
pip3 install minio=="${MINIO_PY_VERSION}"
$WGET --output-document="$test_run_dir/tests.py" "https://raw.githubusercontent.com/minio/minio-py/${MINIO_PY_VERSION}/tests/functional/tests.py"
@@ -1,6 +0,0 @@
#!/bin/bash -e
#
#

# Always install the latest.
python -m pip install s3cmd
@@ -1,5 +0,0 @@
#!/bin/bash -e
#
#

python -m pip install minio
@@ -1,6 +0,0 @@
#!/bin/bash -e
#
#

test_run_dir="$MINT_RUN_CORE_DIR/security"
(cd "$test_run_dir" && GO111MODULE=on CGO_ENABLED=0 go build -o tls-tests)
@@ -1,83 +0,0 @@
// Copyright (c) 2015-2021 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.

package main

import (
	"errors"
	"math/rand"
	"strings"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
)

// Creates a versioned bucket, then gets its versioning configuration to check it is enabled
func testMakeBucket() {
	s3Client.Config.Region = aws.String("us-east-1")

	// initialize logging params
	startTime := time.Now()
	function := "testCreateVersioningBucket"
	bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "versioning-test-")
	args := map[string]interface{}{
		"bucketName": bucketName,
	}
	_, err := s3Client.CreateBucket(&s3.CreateBucketInput{
		Bucket: aws.String(bucketName),
	})
	if err != nil {
		failureLog(function, args, startTime, "", "Versioning CreateBucket Failed", err).Fatal()
		return
	}
	defer cleanupBucket(bucketName, function, args, startTime)

	putVersioningInput := &s3.PutBucketVersioningInput{
		Bucket: aws.String(bucketName),
		VersioningConfiguration: &s3.VersioningConfiguration{
			MFADelete: aws.String("Disabled"),
			Status:    aws.String("Enabled"),
		},
	}

	_, err = s3Client.PutBucketVersioning(putVersioningInput)
	if err != nil {
		if strings.Contains(err.Error(), "NotImplemented: A header you provided implies functionality that is not implemented") {
			ignoreLog(function, args, startTime, "Versioning is not implemented").Info()
			return
		}
		failureLog(function, args, startTime, "", "Put versioning failed", err).Fatal()
		return
	}

	getVersioningInput := &s3.GetBucketVersioningInput{
		Bucket: aws.String(bucketName),
	}

	result, err := s3Client.GetBucketVersioning(getVersioningInput)
	if err != nil {
		failureLog(function, args, startTime, "", "Get Versioning failed", err).Fatal()
		return
	}

	if *result.Status != "Enabled" {
		failureLog(function, args, startTime, "", "Get Versioning status failed", errors.New("unexpected versioning status")).Fatal()
	}

	successLogger(function, args, startTime).Info()
}
@@ -1,172 +0,0 @@
// Copyright (c) 2015-2021 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.

package main

import (
	"fmt"
	"io/ioutil"
	"math/rand"
	"strings"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/service/s3"
)

func testDeleteObject() {
	startTime := time.Now()
	function := "testDeleteObject"
	bucket := randString(60, rand.NewSource(time.Now().UnixNano()), "versioning-test-")
	object := "testObject"
	objectContent := "my object content"
	expiry := 1 * time.Minute
	args := map[string]interface{}{
		"bucketName": bucket,
		"objectName": object,
		"expiry":     expiry,
	}

	_, err := s3Client.CreateBucket(&s3.CreateBucketInput{
		Bucket: aws.String(bucket),
	})
	if err != nil {
		failureLog(function, args, startTime, "", "CreateBucket failed", err).Fatal()
		return
	}
	defer cleanupBucket(bucket, function, args, startTime)

	putVersioningInput := &s3.PutBucketVersioningInput{
		Bucket: aws.String(bucket),
		VersioningConfiguration: &s3.VersioningConfiguration{
			Status: aws.String("Enabled"),
		},
	}

	_, err = s3Client.PutBucketVersioning(putVersioningInput)
	if err != nil {
		if strings.Contains(err.Error(), "NotImplemented: A header you provided implies functionality that is not implemented") {
			ignoreLog(function, args, startTime, "Versioning is not implemented").Info()
			return
		}
		failureLog(function, args, startTime, "", "Put versioning failed", err).Fatal()
		return
	}

	putInput := &s3.PutObjectInput{
		Body:   aws.ReadSeekCloser(strings.NewReader(objectContent)),
		Bucket: aws.String(bucket),
		Key:    aws.String(object),
	}

	putOutput, err := s3Client.PutObject(putInput)
	if err != nil {
		failureLog(function, args, startTime, "", fmt.Sprintf("PUT expected to succeed but got %v", err), err).Fatal()
		return
	}

	// First delete without version ID
	deleteInput := &s3.DeleteObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(object),
	}
	delOutput, err := s3Client.DeleteObject(deleteInput)
	if err != nil {
		failureLog(function, args, startTime, "", fmt.Sprintf("Delete expected to succeed but got %v", err), err).Fatal()
		return
	}

	// Get the delete marker version, should lead to an error
	getInput := &s3.GetObjectInput{
		Bucket:    aws.String(bucket),
		Key:       aws.String(object),
		VersionId: aws.String(*delOutput.VersionId),
	}

	result, err := s3Client.GetObject(getInput)
	if err == nil {
		failureLog(function, args, startTime, "", "GetObject expected to fail but succeeded", nil).Fatal()
		return
	}
	if err != nil {
		aerr, ok := err.(awserr.Error)
		if !ok {
			failureLog(function, args, startTime, "", "GetObject unexpected error with delete marker", err).Fatal()
			return
		}
		if aerr.Code() != "MethodNotAllowed" {
			failureLog(function, args, startTime, "", "GetObject unexpected error with delete marker", err).Fatal()
			return
		}
	}

	// Get the older version, make sure it is preserved
	getInput = &s3.GetObjectInput{
		Bucket:    aws.String(bucket),
		Key:       aws.String(object),
		VersionId: aws.String(*putOutput.VersionId),
	}

	result, err = s3Client.GetObject(getInput)
	if err != nil {
		failureLog(function, args, startTime, "", fmt.Sprintf("GetObject expected to succeed but failed with %v", err), err).Fatal()
		return
	}

	body, err := ioutil.ReadAll(result.Body)
	if err != nil {
		failureLog(function, args, startTime, "", fmt.Sprintf("GetObject expected to return data but failed with %v", err), err).Fatal()
		return
	}
	result.Body.Close()

	if string(body) != objectContent {
		failureLog(function, args, startTime, "", "GetObject unexpected body content", nil).Fatal()
		return
	}

	for i, versionID := range []string{*delOutput.VersionId, *putOutput.VersionId} {
		delInput := &s3.DeleteObjectInput{
			Bucket:    aws.String(bucket),
			Key:       aws.String(object),
			VersionId: aws.String(versionID),
		}
		_, err := s3Client.DeleteObject(delInput)
		if err != nil {
			failureLog(function, args, startTime, "", fmt.Sprintf("DeleteObject (%d) expected to succeed but failed", i+1), err).Fatal()
			return
		}
	}

	listInput := &s3.ListObjectVersionsInput{
		Bucket: aws.String(bucket),
	}

	listOutput, err := s3Client.ListObjectVersions(listInput)
	if err != nil {
		failureLog(function, args, startTime, "", fmt.Sprintf("ListObjectVersions expected to succeed but got %v", err), err).Fatal()
		return
	}

	if len(listOutput.DeleteMarkers) != 0 || len(listOutput.CommonPrefixes) != 0 || len(listOutput.Versions) != 0 {
		failureLog(function, args, startTime, "", "ListObjectVersions returned some entries but expected to return nothing", nil).Fatal()
		return
	}

	successLogger(function, args, startTime).Info()
}
@@ -1,169 +0,0 @@
// Copyright (c) 2015-2021 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.

package main

import (
	"fmt"
	"io/ioutil"
	"math/rand"
	"strings"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/service/s3"
)

// testGetObject tests all get object features - picking a particular
// version id, checking content and its metadata
func testGetObject() {
	startTime := time.Now()
	function := "testGetObject"
	bucket := randString(60, rand.NewSource(time.Now().UnixNano()), "versioning-test-")
	object := "testObject"
	expiry := 1 * time.Minute
	args := map[string]interface{}{
		"bucketName": bucket,
		"objectName": object,
		"expiry":     expiry,
	}

	_, err := s3Client.CreateBucket(&s3.CreateBucketInput{
		Bucket: aws.String(bucket),
	})
	if err != nil {
		failureLog(function, args, startTime, "", "CreateBucket failed", err).Fatal()
		return
	}
	defer cleanupBucket(bucket, function, args, startTime)

	putVersioningInput := &s3.PutBucketVersioningInput{
		Bucket: aws.String(bucket),
		VersioningConfiguration: &s3.VersioningConfiguration{
			Status: aws.String("Enabled"),
		},
	}

	_, err = s3Client.PutBucketVersioning(putVersioningInput)
	if err != nil {
		if strings.Contains(err.Error(), "NotImplemented: A header you provided implies functionality that is not implemented") {
			ignoreLog(function, args, startTime, "Versioning is not implemented").Info()
			return
		}
		failureLog(function, args, startTime, "", "Put versioning failed", err).Fatal()
		return
	}

	putInput1 := &s3.PutObjectInput{
		Body:   aws.ReadSeekCloser(strings.NewReader("my content 1")),
		Bucket: aws.String(bucket),
		Key:    aws.String(object),
	}
	_, err = s3Client.PutObject(putInput1)
	if err != nil {
		failureLog(function, args, startTime, "", fmt.Sprintf("PUT expected to succeed but got %v", err), err).Fatal()
		return
	}
	putInput2 := &s3.PutObjectInput{
		Body:   aws.ReadSeekCloser(strings.NewReader("content file 2")),
		Bucket: aws.String(bucket),
		Key:    aws.String(object),
	}
	_, err = s3Client.PutObject(putInput2)
	if err != nil {
		failureLog(function, args, startTime, "", fmt.Sprintf("PUT expected to succeed but got %v", err), err).Fatal()
		return
	}

	deleteInput := &s3.DeleteObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(object),
	}

	_, err = s3Client.DeleteObject(deleteInput)
	if err != nil {
		failureLog(function, args, startTime, "", fmt.Sprintf("Delete expected to succeed but got %v", err), err).Fatal()
		return
	}

	input := &s3.ListObjectVersionsInput{
		Bucket: aws.String(bucket),
	}

	result, err := s3Client.ListObjectVersions(input)
	if err != nil {
		failureLog(function, args, startTime, "", fmt.Sprintf("ListObjectVersions expected to succeed but got %v", err), err).Fatal()
		return
	}

	testCases := []struct {
		content      string
		versionId    string
		deleteMarker bool
	}{
		{"", *(*result.DeleteMarkers[0]).VersionId, true},
		{"content file 2", *(*result.Versions[0]).VersionId, false},
		{"my content 1", *(*result.Versions[1]).VersionId, false},
	}

	for i, testCase := range testCases {
		getInput := &s3.GetObjectInput{
			Bucket:    aws.String(bucket),
			Key:       aws.String(object),
			VersionId: aws.String(testCase.versionId),
		}

		result, err := s3Client.GetObject(getInput)
		if testCase.deleteMarker && err == nil {
			failureLog(function, args, startTime, "", fmt.Sprintf("GetObject(%d) expected to fail but succeeded", i+1), nil).Fatal()
			return
		}

		if !testCase.deleteMarker && err != nil {
			failureLog(function, args, startTime, "", fmt.Sprintf("GetObject(%d) expected to succeed but failed", i+1), err).Fatal()
			return
		}

		if testCase.deleteMarker {
			aerr, ok := err.(awserr.Error)
			if !ok {
				failureLog(function, args, startTime, "", fmt.Sprintf("GetObject(%d) unexpected error with delete marker", i+1), err).Fatal()
				return
			}
			if aerr.Code() != "MethodNotAllowed" {
				failureLog(function, args, startTime, "", fmt.Sprintf("GetObject(%d) unexpected error with delete marker", i+1), err).Fatal()
				return
			}
			continue
		}

		body, err := ioutil.ReadAll(result.Body)
		if err != nil {
			failureLog(function, args, startTime, "", fmt.Sprintf("GetObject(%d) expected to return data but failed", i+1), err).Fatal()
			return
		}
		result.Body.Close()

		if string(body) != testCase.content {
			failureLog(function, args, startTime, "", fmt.Sprintf("GetObject(%d) unexpected body content", i+1), err).Fatal()
			return
		}
	}

	successLogger(function, args, startTime).Info()
}
@@ -1,8 +0,0 @@
module mint.minio.io/versioning/tests

go 1.16

require (
	github.com/aws/aws-sdk-go v1.37.9
	github.com/sirupsen/logrus v1.7.0
)
@@ -1,36 +0,0 @@
github.com/aws/aws-sdk-go v1.37.9 h1:sgRbr+heubkgSwkn9fQMF80l9xjXkmhzk9DLdsaYh+c=
github.com/aws/aws-sdk-go v1.37.9/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/jmespath/go-jmespath v0.4.0 h1:BEgLn5cpjn8UN1mAw4NjwDrS35OdebyEtFe+9YPoQUg=
github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo=
github.com/jmespath/go-jmespath/internal/testify v1.5.1 h1:shLQSRRSCCPj3f2gpwzGwWFoC7ycTf1rcQZHOlsJ6N8=
github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfCI6z80xFu9LTZmf1ZRjMHUOPmWr69U=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/sirupsen/logrus v1.7.0 h1:ShrD1U9pZB12TX0cVy0DtePoCH97K8EtX+mg7ZARUtM=
github.com/sirupsen/logrus v1.7.0/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.2.2 h1:bSDNvY7ZPG5RlJ8otE/7V6gMiyenm9RtJ7IUVIAoJ1w=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20201110031124-69a78807bb2b h1:uwuIcX0g4Yl1NC5XAz37xsr2lTtcqevgzYNVt49waME=
golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f h1:+Nyd8tzPX9R7BWHguqsrbFdRx3WQ/1ib8I44HXV5yTA=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.3.0 h1:g61tztE5qeGQ89tm6NTjjM9VPIm088od1l6aSorWRWg=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3 h1:cokOdA+Jmi5PJGXLlLllQSgYigAEfHXJAERHVMaCc2k=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v2 v2.2.8 h1:obN1ZagJSUGI0Ek/LBmuj4SNLPfIny3KsKFopxRdj10=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
@@ -1,8 +0,0 @@
#!/bin/bash -e
#
#

test_run_dir="$MINT_RUN_CORE_DIR/versioning"
test_build_dir="$MINT_RUN_BUILD_DIR/versioning"

(cd "$test_build_dir" && GO111MODULE=on CGO_ENABLED=0 go build --ldflags "-s -w" -o "$test_run_dir/tests")
@@ -1,282 +0,0 @@
// Copyright (c) 2015-2021 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.

package main

import (
	"fmt"
	"math/rand"
	"strings"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

// Test locking for different versions
func testLockingLegalhold() {
	startTime := time.Now()
	function := "testLockingLegalhold"
	bucket := randString(60, rand.NewSource(time.Now().UnixNano()), "versioning-test-")
	object := "testObject"
	expiry := 1 * time.Minute
	args := map[string]interface{}{
		"bucketName": bucket,
		"objectName": object,
		"expiry":     expiry,
	}

	_, err := s3Client.CreateBucket(&s3.CreateBucketInput{
		Bucket:                     aws.String(bucket),
		ObjectLockEnabledForBucket: aws.Bool(true),
	})
	if err != nil {
		if strings.Contains(err.Error(), "NotImplemented: A header you provided implies functionality that is not implemented") {
			ignoreLog(function, args, startTime, "Versioning is not implemented").Info()
			return
		}
		failureLog(function, args, startTime, "", "CreateBucket failed", err).Fatal()
		return
	}
	defer cleanupBucket(bucket, function, args, startTime)

	type uploadedObject struct {
		legalhold        string
		successfulRemove bool
		versionId        string
		deleteMarker     bool
	}

	uploads := []uploadedObject{
		{legalhold: "ON"},
		{legalhold: "OFF"},
	}

	// Upload versions and save their version IDs
	for i := range uploads {
		putInput := &s3.PutObjectInput{
			Body:                      aws.ReadSeekCloser(strings.NewReader("content")),
			Bucket:                    aws.String(bucket),
			Key:                       aws.String(object),
			ObjectLockLegalHoldStatus: aws.String(uploads[i].legalhold),
		}
		output, err := s3Client.PutObject(putInput)
		if err != nil {
			failureLog(function, args, startTime, "", fmt.Sprintf("PUT expected to succeed but got %v", err), err).Fatal()
			return
		}
		uploads[i].versionId = *output.VersionId
	}

	// In all cases, we can remove an object by creating a delete marker
	// First delete without version ID
	deleteInput := &s3.DeleteObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(object),
	}
	deleteOutput, err := s3Client.DeleteObject(deleteInput)
	if err != nil {
		failureLog(function, args, startTime, "", fmt.Sprintf("DELETE expected to succeed but got %v", err), err).Fatal()
		return
	}

	uploads = append(uploads, uploadedObject{versionId: *deleteOutput.VersionId, deleteMarker: true})

	// Try to delete each version; versions under legal hold must not be removable
	for i := range uploads {
		if uploads[i].deleteMarker {
			continue
		}
		deleteInput := &s3.DeleteObjectInput{
			Bucket:    aws.String(bucket),
			Key:       aws.String(object),
			VersionId: aws.String(uploads[i].versionId),
		}
		_, err = s3Client.DeleteObject(deleteInput)
		if err == nil && uploads[i].legalhold == "ON" {
			failureLog(function, args, startTime, "", "DELETE expected to fail but succeeded instead", nil).Fatal()
			return
		}
		if err != nil && uploads[i].legalhold == "OFF" {
			failureLog(function, args, startTime, "", fmt.Sprintf("DELETE expected to succeed but got %v", err), err).Fatal()
			return
		}
	}

	for i := range uploads {
		if uploads[i].deleteMarker || uploads[i].legalhold == "OFF" {
			continue
		}
		input := &s3.GetObjectLegalHoldInput{
			Bucket:    aws.String(bucket),
			Key:       aws.String(object),
			VersionId: aws.String(uploads[i].versionId),
		}
		_, err := s3Client.GetObjectLegalHold(input)
		if err != nil {
			failureLog(function, args, startTime, "", fmt.Sprintf("GetObjectLegalHold expected to succeed but got %v", err), err).Fatal()
			return
		}
	}

	for i := range uploads {
		if uploads[i].deleteMarker || uploads[i].legalhold == "OFF" {
			continue
		}
		input := &s3.PutObjectLegalHoldInput{
			Bucket:    aws.String(bucket),
			Key:       aws.String(object),
			LegalHold: &s3.ObjectLockLegalHold{Status: aws.String("OFF")},
			VersionId: aws.String(uploads[i].versionId),
		}
		_, err := s3Client.PutObjectLegalHold(input)
		if err != nil {
			failureLog(function, args, startTime, "", fmt.Sprintf("Turning off legalhold failed with %v", err), err).Fatal()
			return
		}
	}

	// Error cases

	// object-handlers.go > GetObjectLegalHoldHandler > getObjectInfo
	for i := range uploads {
		if uploads[i].legalhold == "" || uploads[i].legalhold == "OFF" {
			input := &s3.GetObjectLegalHoldInput{
				Bucket:    aws.String(bucket),
				Key:       aws.String(object),
				VersionId: aws.String(uploads[i].versionId),
			}
			// legalhold = "off" => The specified version does not exist.
			// legalhold = "" => The specified method is not allowed against this resource.
			_, err := s3Client.GetObjectLegalHold(input)
			if err == nil {
				failureLog(function, args, startTime, "", fmt.Sprintf("GetObjectLegalHold expected to fail but got %v", err), err).Fatal()
				return
			}
		}
	}

	// Second client
	creds := credentials.NewStaticCredentials("test", "test", "")
	newSession, err := session.NewSession()
	if err != nil {
		failureLog(function, args, startTime, "", fmt.Sprintf("NewSession expected to succeed but got %v", err), err).Fatal()
		return
	}
	s3Config := s3Client.Config
	s3Config.Credentials = creds
	s3ClientTest := s3.New(newSession, &s3Config)

	// Check with a second client: object-handlers.go > GetObjectLegalHoldHandler > checkRequestAuthType
	input := &s3.GetObjectLegalHoldInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(object),
	}
	// The Access Key Id you provided does not exist in our records.
	_, err = s3ClientTest.GetObjectLegalHold(input)
	if err == nil {
		failureLog(function, args, startTime, "", fmt.Sprintf("GetObjectLegalHold expected to fail but got %v", err), err).Fatal()
		return
	}
||||||
// object-handlers.go > GetObjectLegalHoldHandler > globalBucketObjectLockSys.Get(bucket); !rcfg.LockEnabled
|
|
||||||
bucketWithoutLock := bucket + "-without-lock"
|
|
||||||
_, err = s3Client.CreateBucket(&s3.CreateBucketInput{
|
|
||||||
Bucket: aws.String(bucketWithoutLock),
|
|
||||||
ObjectLockEnabledForBucket: aws.Bool(false),
|
|
||||||
})
|
|
||||||
if err != nil {
|
|
||||||
failureLog(function, args, startTime, "", "CreateBucket failed", err).Fatal()
|
|
||||||
return
|
|
||||||
}
|
|
||||||
defer cleanupBucket(bucketWithoutLock, function, args, startTime)
|
|
||||||
|
|
||||||
input = &s3.GetObjectLegalHoldInput{
|
|
||||||
Bucket: aws.String(bucketWithoutLock),
|
|
||||||
Key: aws.String(object),
|
|
||||||
}
|
|
||||||
// Bucket is missing ObjectLockConfiguration
|
|
||||||
_, err = s3Client.GetObjectLegalHold(input)
|
|
||||||
if err == nil {
|
|
||||||
failureLog(function, args, startTime, "", fmt.Sprintf("GetObjectLegalHold expected to fail but got %v", err), err).Fatal()
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
// Check with a second client: object-handlers.go > PutObjectLegalHoldHandler > checkRequestAuthType
|
|
||||||
for i := range uploads {
|
|
||||||
if uploads[i].deleteMarker || uploads[i].legalhold == "OFF" {
|
|
||||||
continue
|
|
||||||
}
|
|
||||||
input := &s3.PutObjectLegalHoldInput{
|
|
||||||
Bucket: aws.String(bucket),
|
|
||||||
Key: aws.String(object),
|
|
||||||
}
|
|
||||||
// The Access Key Id you provided does not exist in our records.
|
|
||||||
_, err := s3ClientTest.PutObjectLegalHold(input)
|
|
||||||
if err == nil {
|
|
||||||
failureLog(function, args, startTime, "", fmt.Sprintf("Turning off legalhold expected to fail but got %v", err), err).Fatal()
|
|
||||||
return
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// object-handlers.go > PutObjectLegalHoldHandler > globalBucketObjectLockSys.Get(bucket); !rcfg.LockEnabled
|
|
||||||
for i := range uploads {
|
|
||||||
if uploads[i].deleteMarker || uploads[i].legalhold == "OFF" {
|
|
||||||
continue
|
|
||||||
}
|
|
||||||
input := &s3.PutObjectLegalHoldInput{
|
|
||||||
Bucket: aws.String(bucketWithoutLock),
|
|
||||||
Key: aws.String(object),
|
|
||||||
}
|
|
||||||
// Bucket is missing ObjectLockConfiguration
|
|
||||||
_, err := s3Client.PutObjectLegalHold(input)
|
|
||||||
if err == nil {
|
|
||||||
failureLog(function, args, startTime, "", fmt.Sprintf("Turning off legalhold expected to fail but got %v", err), err).Fatal()
|
|
||||||
return
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// object-handlers.go > PutObjectLegalHoldHandler > objectlock.ParseObjectLegalHold
|
|
||||||
putInput := &s3.PutObjectInput{
|
|
||||||
Body: aws.ReadSeekCloser(strings.NewReader("content")),
|
|
||||||
Bucket: aws.String(bucket),
|
|
||||||
Key: aws.String(object),
|
|
||||||
ObjectLockLegalHoldStatus: aws.String("test"),
|
|
||||||
}
|
|
||||||
output, err := s3Client.PutObject(putInput)
|
|
||||||
if err != nil {
|
|
||||||
failureLog(function, args, startTime, "", fmt.Sprintf("PUT expected to succeed but got %v", err), err).Fatal()
|
|
||||||
return
|
|
||||||
}
|
|
||||||
uploads[0].versionId = *output.VersionId
|
|
||||||
|
|
||||||
polhInput := &s3.PutObjectLegalHoldInput{
|
|
||||||
Bucket: aws.String(bucket),
|
|
||||||
Key: aws.String(object),
|
|
||||||
VersionId: aws.String(uploads[0].versionId),
|
|
||||||
}
|
|
||||||
// We encountered an internal error, please try again.: cause(EOF)
|
|
||||||
_, err = s3Client.PutObjectLegalHold(polhInput)
|
|
||||||
if err == nil {
|
|
||||||
failureLog(function, args, startTime, "", fmt.Sprintf("PutObjectLegalHold expected to fail but got %v", err), err).Fatal()
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
successLogger(function, args, startTime).Info()
|
|
||||||
}
@@ -1,709 +0,0 @@
// Copyright (c) 2015-2021 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.

package main

import (
	"fmt"
	"math/rand"
	"reflect"
	"strings"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)
// Test regular listing result with simple use cases:
// Upload an object ten times, delete it once (delete marker)
// and check listing result
func testListObjectVersionsSimple() {
	startTime := time.Now()
	function := "testListObjectVersionsSimple"
	bucket := randString(60, rand.NewSource(time.Now().UnixNano()), "versioning-test-")
	object := "testObject"
	expiry := 1 * time.Minute
	args := map[string]interface{}{
		"bucketName": bucket,
		"objectName": object,
		"expiry":     expiry,
	}

	_, err := s3Client.CreateBucket(&s3.CreateBucketInput{
		Bucket: aws.String(bucket),
	})
	if err != nil {
		failureLog(function, args, startTime, "", "CreateBucket failed", err).Fatal()
		return
	}
	defer cleanupBucket(bucket, function, args, startTime)

	putVersioningInput := &s3.PutBucketVersioningInput{
		Bucket: aws.String(bucket),
		VersioningConfiguration: &s3.VersioningConfiguration{
			Status: aws.String("Enabled"),
		},
	}

	_, err = s3Client.PutBucketVersioning(putVersioningInput)
	if err != nil {
		if strings.Contains(err.Error(), "NotImplemented: A header you provided implies functionality that is not implemented") {
			ignoreLog(function, args, startTime, "Versioning is not implemented").Info()
			return
		}
		failureLog(function, args, startTime, "", "Put versioning failed", err).Fatal()
		return
	}

	for i := 0; i < 10; i++ {
		putInput1 := &s3.PutObjectInput{
			Body:   aws.ReadSeekCloser(strings.NewReader("my content 1")),
			Bucket: aws.String(bucket),
			Key:    aws.String(object),
		}
		_, err = s3Client.PutObject(putInput1)
		if err != nil {
			failureLog(function, args, startTime, "", fmt.Sprintf("PUT expected to succeed but got %v", err), err).Fatal()
			return
		}
	}

	input := &s3.ListObjectVersionsInput{
		Bucket: aws.String(bucket),
	}

	deleteInput := &s3.DeleteObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(object),
	}
	_, err = s3Client.DeleteObject(deleteInput)
	if err != nil {
		failureLog(function, args, startTime, "", fmt.Sprintf("Delete expected to succeed but got %v", err), err).Fatal()
		return
	}

	// Accumulate all versions IDs
	var versionIDs = make(map[string]struct{})

	result, err := s3Client.ListObjectVersions(input)
	if err != nil {
		failureLog(function, args, startTime, "", fmt.Sprintf("ListObjectVersions expected to succeed but got %v", err), err).Fatal()
		return
	}

	// Check the delete marker entries
	if len(result.DeleteMarkers) != 1 {
		failureLog(function, args, startTime, "", "ListObjectVersions returned unexpected result", nil).Fatal()
		return
	}
	dm := *result.DeleteMarkers[0]
	if !*dm.IsLatest {
		failureLog(function, args, startTime, "", "ListObjectVersions returned unexpected result", nil).Fatal()
		return
	}
	if *dm.Key != object {
		failureLog(function, args, startTime, "", "ListObjectVersions returned unexpected result", nil).Fatal()
		return
	}
	if time.Since(*dm.LastModified) > time.Hour {
		failureLog(function, args, startTime, "", "ListObjectVersions returned unexpected result", nil).Fatal()
		return
	}
	if *dm.VersionId == "" {
		failureLog(function, args, startTime, "", "ListObjectVersions returned unexpected result", nil).Fatal()
		return
	}
	versionIDs[*dm.VersionId] = struct{}{}

	// Check versions entries
	if len(result.Versions) != 10 {
		failureLog(function, args, startTime, "", "ListObjectVersions returned unexpected result", nil).Fatal()
		return
	}

	for _, version := range result.Versions {
		v := *version
		if *v.IsLatest {
			failureLog(function, args, startTime, "", "ListObjectVersions returned unexpected IsLatest field", nil).Fatal()
			return
		}
		if *v.Key != object {
			failureLog(function, args, startTime, "", "ListObjectVersions returned unexpected Key field", nil).Fatal()
			return
		}
		if time.Since(*v.LastModified) > time.Hour {
			failureLog(function, args, startTime, "", "ListObjectVersions returned unexpected LastModified field", nil).Fatal()
			return
		}
		if *v.VersionId == "" {
			failureLog(function, args, startTime, "", "ListObjectVersions returned unexpected VersionId field", nil).Fatal()
			return
		}
		if *v.ETag != "\"094459df8fcebffc70d9aa08d75f9944\"" {
			failureLog(function, args, startTime, "", "ListObjectVersions returned unexpected ETag field", nil).Fatal()
			return
		}
		if *v.Size != 12 {
			failureLog(function, args, startTime, "", "ListObjectVersions returned unexpected Size field", nil).Fatal()
			return
		}
		if *v.StorageClass != "STANDARD" {
			failureLog(function, args, startTime, "", "ListObjectVersions returned unexpected StorageClass field", nil).Fatal()
			return
		}

		versionIDs[*v.VersionId] = struct{}{}
	}

	// Ensure that we have 11 distinct versions IDs
	if len(versionIDs) != 11 {
		failureLog(function, args, startTime, "", "ListObjectVersions didn't return 11 different version IDs", nil).Fatal()
		return
	}

	// Error cases

	// bucket-listobjects-handlers.go > ListObjectVersionsHandler > listObjectVersions
	lovInput := &s3.ListObjectVersionsInput{
		Bucket:          aws.String(bucket),
		VersionIdMarker: aws.String("test"),
	}
	result, err = s3Client.ListObjectVersions(lovInput)
	if err == nil {
		failureLog(function, args, startTime, "", fmt.Sprintf("ListObjectVersions expected to fail but got %v", err), err).Fatal()
		return
	}

	// bucket-listobjects-handlers.go > ListObjectVersionsHandler > validateListObjectsArgs
	lovInput.EncodingType = aws.String("test")
	result, err = s3Client.ListObjectVersions(lovInput)
	if err == nil {
		failureLog(function, args, startTime, "", fmt.Sprintf("ListObjectVersions expected to fail but got %v", err), err).Fatal()
		return
	}

	// Second client
	creds := credentials.NewStaticCredentials("test", "test", "")
	newSession, err := session.NewSession()
	if err != nil {
		failureLog(function, args, startTime, "", fmt.Sprintf("NewSession expected to succeed but got %v", err), err).Fatal()
		return
	}
	s3Config := s3Client.Config
	s3Config.Credentials = creds
	s3ClientTest := s3.New(newSession, &s3Config)

	// Check with a second client: bucket-listobjects-handlers.go > ListObjectVersionsHandler > checkRequestAuthType
	result, err = s3ClientTest.ListObjectVersions(lovInput)
	if err == nil {
		failureLog(function, args, startTime, "", fmt.Sprintf("ListObjectVersions expected to fail but got %v", err), err).Fatal()
		return
	}

	successLogger(function, args, startTime).Info()
}
func testListObjectVersionsWithPrefixAndDelimiter() {
	startTime := time.Now()
	function := "testListObjectVersionsWithPrefixAndDelimiter"
	bucket := randString(60, rand.NewSource(time.Now().UnixNano()), "versioning-test-")
	object := "testObject"
	expiry := 1 * time.Minute
	args := map[string]interface{}{
		"bucketName": bucket,
		"objectName": object,
		"expiry":     expiry,
	}

	_, err := s3Client.CreateBucket(&s3.CreateBucketInput{
		Bucket: aws.String(bucket),
	})
	if err != nil {
		failureLog(function, args, startTime, "", "CreateBucket failed", err).Fatal()
		return
	}
	defer cleanupBucket(bucket, function, args, startTime)

	putVersioningInput := &s3.PutBucketVersioningInput{
		Bucket: aws.String(bucket),
		VersioningConfiguration: &s3.VersioningConfiguration{
			Status: aws.String("Enabled"),
		},
	}

	_, err = s3Client.PutBucketVersioning(putVersioningInput)
	if err != nil {
		if strings.Contains(err.Error(), "NotImplemented: A header you provided implies functionality that is not implemented") {
			ignoreLog(function, args, startTime, "Versioning is not implemented").Info()
			return
		}
		failureLog(function, args, startTime, "", "Put versioning failed", err).Fatal()
		return
	}

	for _, objectName := range []string{"dir/object", "dir/dir/object", "object"} {
		putInput := &s3.PutObjectInput{
			Body:   aws.ReadSeekCloser(strings.NewReader("my content 1")),
			Bucket: aws.String(bucket),
			Key:    aws.String(objectName),
		}
		_, err = s3Client.PutObject(putInput)
		if err != nil {
			failureLog(function, args, startTime, "", fmt.Sprintf("PUT expected to succeed but got %v", err), err).Fatal()
			return
		}
	}

	type objectResult struct {
		name     string
		isLatest bool
	}
	type listResult struct {
		versions       []objectResult
		commonPrefixes []string
	}

	simplifyListingResult := func(out *s3.ListObjectVersionsOutput) (result listResult) {
		for _, commonPrefix := range out.CommonPrefixes {
			result.commonPrefixes = append(result.commonPrefixes, *commonPrefix.Prefix)
		}
		for _, version := range out.Versions {
			result.versions = append(result.versions, objectResult{name: *version.Key, isLatest: *version.IsLatest})
		}
		return
	}

	// Recursive listing
	input := &s3.ListObjectVersionsInput{
		Bucket: aws.String(bucket),
	}
	result, err := s3Client.ListObjectVersions(input)
	if err != nil {
		failureLog(function, args, startTime, "", fmt.Sprintf("ListObjectVersions expected to succeed but got %v", err), err).Fatal()
		return
	}
	gotResult := simplifyListingResult(result)
	expectedResult := listResult{
		versions: []objectResult{
			objectResult{name: "dir/dir/object", isLatest: true},
			objectResult{name: "dir/object", isLatest: true},
			objectResult{name: "object", isLatest: true},
		}}
	if !reflect.DeepEqual(gotResult, expectedResult) {
		failureLog(function, args, startTime, "", "ListObjectVersions returned unexpected listing result", nil).Fatal()
		return
	}

	// Listing with delimiter
	input = &s3.ListObjectVersionsInput{
		Bucket:    aws.String(bucket),
		Delimiter: aws.String("/"),
	}
	result, err = s3Client.ListObjectVersions(input)
	if err != nil {
		failureLog(function, args, startTime, "", fmt.Sprintf("ListObjectVersions expected to succeed but got %v", err), err).Fatal()
		return
	}
	gotResult = simplifyListingResult(result)
	expectedResult = listResult{
		versions: []objectResult{
			objectResult{name: "object", isLatest: true},
		},
		commonPrefixes: []string{"dir/"}}
	if !reflect.DeepEqual(gotResult, expectedResult) {
		failureLog(function, args, startTime, "", "ListObjectVersions returned unexpected listing result", nil).Fatal()
		return
	}

	// Listing with prefix and delimiter
	input = &s3.ListObjectVersionsInput{
		Bucket:    aws.String(bucket),
		Delimiter: aws.String("/"),
		Prefix:    aws.String("dir/"),
	}
	result, err = s3Client.ListObjectVersions(input)
	if err != nil {
		failureLog(function, args, startTime, "", fmt.Sprintf("ListObjectVersions expected to succeed but got %v", err), err).Fatal()
		return
	}
	gotResult = simplifyListingResult(result)
	expectedResult = listResult{
		versions: []objectResult{
			objectResult{name: "dir/object", isLatest: true},
		},
		commonPrefixes: []string{"dir/dir/"}}
	if !reflect.DeepEqual(gotResult, expectedResult) {
		failureLog(function, args, startTime, "", "ListObjectVersions returned unexpected listing result", nil).Fatal()
		return
	}

	successLogger(function, args, startTime).Info()
}
// Test if key marker continuation in listing works well
func testListObjectVersionsKeysContinuation() {
	startTime := time.Now()
	function := "testListObjectKeysContinuation"
	bucket := randString(60, rand.NewSource(time.Now().UnixNano()), "versioning-test-")
	object := "testObject"
	expiry := 1 * time.Minute
	args := map[string]interface{}{
		"bucketName": bucket,
		"objectName": object,
		"expiry":     expiry,
	}

	_, err := s3Client.CreateBucket(&s3.CreateBucketInput{
		Bucket: aws.String(bucket),
	})
	if err != nil {
		failureLog(function, args, startTime, "", "CreateBucket failed", err).Fatal()
		return
	}
	defer cleanupBucket(bucket, function, args, startTime)

	putVersioningInput := &s3.PutBucketVersioningInput{
		Bucket: aws.String(bucket),
		VersioningConfiguration: &s3.VersioningConfiguration{
			Status: aws.String("Enabled"),
		},
	}

	_, err = s3Client.PutBucketVersioning(putVersioningInput)
	if err != nil {
		if strings.Contains(err.Error(), "NotImplemented: A header you provided implies functionality that is not implemented") {
			ignoreLog(function, args, startTime, "Versioning is not implemented").Info()
			return
		}
		failureLog(function, args, startTime, "", "Put versioning failed", err).Fatal()
		return
	}

	for i := 0; i < 10; i++ {
		putInput1 := &s3.PutObjectInput{
			Body:   aws.ReadSeekCloser(strings.NewReader("my content 1")),
			Bucket: aws.String(bucket),
			Key:    aws.String(fmt.Sprintf("testobject-%d", i)),
		}
		_, err = s3Client.PutObject(putInput1)
		if err != nil {
			failureLog(function, args, startTime, "", fmt.Sprintf("PUT expected to succeed but got %v", err), err).Fatal()
			return
		}
	}

	input := &s3.ListObjectVersionsInput{
		Bucket:  aws.String(bucket),
		MaxKeys: aws.Int64(5),
	}

	type resultPage struct {
		versions      []string
		nextKeyMarker string
		lastPage      bool
	}

	var gotResult []resultPage
	var numPages int

	err = s3Client.ListObjectVersionsPages(input,
		func(page *s3.ListObjectVersionsOutput, lastPage bool) bool {
			numPages++
			resultPage := resultPage{lastPage: lastPage}
			if page.NextKeyMarker != nil {
				resultPage.nextKeyMarker = *page.NextKeyMarker
			}
			for _, v := range page.Versions {
				resultPage.versions = append(resultPage.versions, *v.Key)
			}
			gotResult = append(gotResult, resultPage)
			return true
		})

	if err != nil {
		failureLog(function, args, startTime, "", fmt.Sprintf("ListObjectVersions expected to succeed but got %v", err), err).Fatal()
		return
	}

	if numPages != 2 {
		failureLog(function, args, startTime, "", "ListObjectVersions returned unexpected number of pages", nil).Fatal()
		return
	}

	expectedResult := []resultPage{
		resultPage{versions: []string{"testobject-0", "testobject-1", "testobject-2", "testobject-3", "testobject-4"}, nextKeyMarker: "testobject-4", lastPage: false},
		resultPage{versions: []string{"testobject-5", "testobject-6", "testobject-7", "testobject-8", "testobject-9"}, nextKeyMarker: "", lastPage: true},
	}

	if !reflect.DeepEqual(expectedResult, gotResult) {
		failureLog(function, args, startTime, "", "ListObjectVersions returned unexpected listing result", nil).Fatal()
		return
	}

	successLogger(function, args, startTime).Info()
}
// Test if version id marker continuation in listing works well
func testListObjectVersionsVersionIDContinuation() {
	startTime := time.Now()
	function := "testListObjectVersionIDContinuation"
	bucket := randString(60, rand.NewSource(time.Now().UnixNano()), "versioning-test-")
	object := "testObject"
	expiry := 1 * time.Minute
	args := map[string]interface{}{
		"bucketName": bucket,
		"objectName": object,
		"expiry":     expiry,
	}

	_, err := s3Client.CreateBucket(&s3.CreateBucketInput{
		Bucket: aws.String(bucket),
	})
	if err != nil {
		failureLog(function, args, startTime, "", "CreateBucket failed", err).Fatal()
		return
	}
	defer cleanupBucket(bucket, function, args, startTime)

	putVersioningInput := &s3.PutBucketVersioningInput{
		Bucket: aws.String(bucket),
		VersioningConfiguration: &s3.VersioningConfiguration{
			Status: aws.String("Enabled"),
		},
	}

	_, err = s3Client.PutBucketVersioning(putVersioningInput)
	if err != nil {
		if strings.Contains(err.Error(), "NotImplemented: A header you provided implies functionality that is not implemented") {
			ignoreLog(function, args, startTime, "Versioning is not implemented").Info()
			return
		}
		failureLog(function, args, startTime, "", "Put versioning failed", err).Fatal()
		return
	}

	for i := 0; i < 10; i++ {
		putInput1 := &s3.PutObjectInput{
			Body:   aws.ReadSeekCloser(strings.NewReader("my content 1")),
			Bucket: aws.String(bucket),
			Key:    aws.String("testobject"),
		}
		_, err = s3Client.PutObject(putInput1)
		if err != nil {
			failureLog(function, args, startTime, "", fmt.Sprintf("PUT expected to succeed but got %v", err), err).Fatal()
			return
		}
	}

	input := &s3.ListObjectVersionsInput{
		Bucket:  aws.String(bucket),
		MaxKeys: aws.Int64(5),
	}

	type resultPage struct {
		versions            []string
		nextVersionIDMarker string
		lastPage            bool
	}

	var gotResult []resultPage
	var gotNextVersionIDMarker string
	var numPages int

	err = s3Client.ListObjectVersionsPages(input,
		func(page *s3.ListObjectVersionsOutput, lastPage bool) bool {
			numPages++
			resultPage := resultPage{lastPage: lastPage}
			if page.NextVersionIdMarker != nil {
				resultPage.nextVersionIDMarker = *page.NextVersionIdMarker
			}
			for _, v := range page.Versions {
				resultPage.versions = append(resultPage.versions, *v.Key)
			}
			if !lastPage {
				// There are only two pages, so here we are saving the version id
				// of the last element in the first page of listing
				gotNextVersionIDMarker = *(*page.Versions[len(page.Versions)-1]).VersionId
			}
			gotResult = append(gotResult, resultPage)
			return true
		})

	if err != nil {
		failureLog(function, args, startTime, "", fmt.Sprintf("ListObjectVersions expected to succeed but got %v", err), err).Fatal()
		return
	}

	if numPages != 2 {
		failureLog(function, args, startTime, "", "ListObjectVersions returned unexpected number of pages", nil).Fatal()
		return
	}

	expectedResult := []resultPage{
		resultPage{versions: []string{"testobject", "testobject", "testobject", "testobject", "testobject"}, nextVersionIDMarker: gotNextVersionIDMarker, lastPage: false},
		resultPage{versions: []string{"testobject", "testobject", "testobject", "testobject", "testobject"}, lastPage: true},
	}

	if !reflect.DeepEqual(expectedResult, gotResult) {
		failureLog(function, args, startTime, "", "ListObjectVersions returned unexpected listing result", nil).Fatal()
		return
	}

	successLogger(function, args, startTime).Info()
}
// Test listing objects when there is an empty directory object
func testListObjectsVersionsWithEmptyDirObject() {
	startTime := time.Now()
	function := "testListObjectsVersionsWithEmptyDirObject"
	bucket := randString(60, rand.NewSource(time.Now().UnixNano()), "versioning-test-")
	object := "testObject"
	expiry := 1 * time.Minute
	args := map[string]interface{}{
		"bucketName": bucket,
		"objectName": object,
		"expiry":     expiry,
	}

	_, err := s3Client.CreateBucket(&s3.CreateBucketInput{
		Bucket: aws.String(bucket),
	})
	if err != nil {
		failureLog(function, args, startTime, "", "CreateBucket failed", err).Fatal()
		return
	}
	defer cleanupBucket(bucket, function, args, startTime)

	putVersioningInput := &s3.PutBucketVersioningInput{
		Bucket: aws.String(bucket),
		VersioningConfiguration: &s3.VersioningConfiguration{
			Status: aws.String("Enabled"),
		},
	}

	_, err = s3Client.PutBucketVersioning(putVersioningInput)
	if err != nil {
		if strings.Contains(err.Error(), "NotImplemented: A header you provided implies functionality that is not implemented") {
			ignoreLog(function, args, startTime, "Versioning is not implemented").Info()
			return
		}
		failureLog(function, args, startTime, "", "Put versioning failed", err).Fatal()
		return
	}

	for _, objectName := range []string{"dir/object", "dir/"} {
		putInput := &s3.PutObjectInput{
			Body:   aws.ReadSeekCloser(strings.NewReader("")),
			Bucket: aws.String(bucket),
			Key:    aws.String(objectName),
		}
		_, err = s3Client.PutObject(putInput)
		if err != nil {
			failureLog(function, args, startTime, "", fmt.Sprintf("PUT expected to succeed but got %v", err), err).Fatal()
			return
		}
	}

	type objectResult struct {
		name     string
		etag     string
		isLatest bool
	}
	type listResult struct {
		versions       []objectResult
		commonPrefixes []string
	}

	simplifyListingResult := func(out *s3.ListObjectVersionsOutput) (result listResult) {
		for _, commonPrefix := range out.CommonPrefixes {
			result.commonPrefixes = append(result.commonPrefixes, *commonPrefix.Prefix)
		}
		for _, version := range out.Versions {
			result.versions = append(result.versions, objectResult{
				name:     *version.Key,
				etag:     *version.ETag,
				isLatest: *version.IsLatest,
			})
		}
		return
	}

	// Recursive listing
	input := &s3.ListObjectVersionsInput{
		Bucket: aws.String(bucket),
	}
	result, err := s3Client.ListObjectVersions(input)
	if err != nil {
		failureLog(function, args, startTime, "", fmt.Sprintf("ListObjectVersions expected to succeed but got %v", err), err).Fatal()
		return
	}
	gotResult := simplifyListingResult(result)
	expectedResult := listResult{
		versions: []objectResult{
			objectResult{name: "dir/", etag: "\"d41d8cd98f00b204e9800998ecf8427e\"", isLatest: true},
			objectResult{name: "dir/object", etag: "\"d41d8cd98f00b204e9800998ecf8427e\"", isLatest: true},
		}}
	if !reflect.DeepEqual(gotResult, expectedResult) {
		failureLog(function, args, startTime, "", "ListObjectVersions returned unexpected listing result", nil).Fatal()
		return
	}

	// Listing with delimiter
	input = &s3.ListObjectVersionsInput{
		Bucket:    aws.String(bucket),
		Delimiter: aws.String("/"),
	}
	result, err = s3Client.ListObjectVersions(input)
	if err != nil {
		failureLog(function, args, startTime, "", fmt.Sprintf("ListObjectVersions expected to succeed but got %v", err), err).Fatal()
		return
}
|
|
||||||
gotResult = simplifyListingResult(result)
|
|
||||||
expectedResult = listResult{
|
|
||||||
commonPrefixes: []string{"dir/"}}
|
|
||||||
if !reflect.DeepEqual(gotResult, expectedResult) {
|
|
||||||
failureLog(function, args, startTime, "", "ListObjectVersions returned unexpected listing result", nil).Fatal()
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
// Listing with prefix and delimiter
|
|
||||||
input = &s3.ListObjectVersionsInput{
|
|
||||||
Bucket: aws.String(bucket),
|
|
||||||
Delimiter: aws.String("/"),
|
|
||||||
Prefix: aws.String("dir/"),
|
|
||||||
}
|
|
||||||
result, err = s3Client.ListObjectVersions(input)
|
|
||||||
if err != nil {
|
|
||||||
failureLog(function, args, startTime, "", fmt.Sprintf("ListObjectVersions expected to succeed but got %v", err), err).Fatal()
|
|
||||||
return
|
|
||||||
}
|
|
||||||
gotResult = simplifyListingResult(result)
|
|
||||||
expectedResult = listResult{
|
|
||||||
versions: []objectResult{
|
|
||||||
{name: "dir/", etag: "\"d41d8cd98f00b204e9800998ecf8427e\"", isLatest: true},
|
|
||||||
{name: "dir/object", etag: "\"d41d8cd98f00b204e9800998ecf8427e\"", isLatest: true},
|
|
||||||
},
|
|
||||||
}
|
|
||||||
if !reflect.DeepEqual(gotResult, expectedResult) {
|
|
||||||
failureLog(function, args, startTime, "", "ListObjectVersions returned unexpected listing result", nil).Fatal()
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
successLogger(function, args, startTime).Info()
|
|
||||||
}
|
|
|
@ -1,133 +0,0 @@
// Copyright (c) 2015-2021 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program.  If not, see <http://www.gnu.org/licenses/>.

package main

import (
	"os"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
	log "github.com/sirupsen/logrus"
)

// S3 client for testing
var s3Client *s3.S3

func cleanupBucket(bucket string, function string, args map[string]interface{}, startTime time.Time) {
	start := time.Now()

	input := &s3.ListObjectVersionsInput{
		Bucket: aws.String(bucket),
	}

	for time.Since(start) < 30*time.Minute {
		err := s3Client.ListObjectVersionsPages(input,
			func(page *s3.ListObjectVersionsOutput, lastPage bool) bool {
				for _, v := range page.Versions {
					input := &s3.DeleteObjectInput{
						Bucket:                    &bucket,
						Key:                       v.Key,
						VersionId:                 v.VersionId,
						BypassGovernanceRetention: aws.Bool(true),
					}
					_, err := s3Client.DeleteObject(input)
					if err != nil {
						return true
					}
				}
				for _, v := range page.DeleteMarkers {
					input := &s3.DeleteObjectInput{
						Bucket:                    &bucket,
						Key:                       v.Key,
						VersionId:                 v.VersionId,
						BypassGovernanceRetention: aws.Bool(true),
					}
					_, err := s3Client.DeleteObject(input)
					if err != nil {
						return true
					}
				}
				return true
			})

		_, err = s3Client.DeleteBucket(&s3.DeleteBucketInput{
			Bucket: aws.String(bucket),
		})
		if err != nil {
			time.Sleep(30 * time.Second)
			continue
		}
		return
	}

	failureLog(function, args, startTime, "", "Unable to cleanup bucket after compliance tests", nil).Fatal()
	return
}

func main() {
	endpoint := os.Getenv("SERVER_ENDPOINT")
	accessKey := os.Getenv("ACCESS_KEY")
	secretKey := os.Getenv("SECRET_KEY")
	secure := os.Getenv("ENABLE_HTTPS")
	sdkEndpoint := "http://" + endpoint
	if secure == "1" {
		sdkEndpoint = "https://" + endpoint
	}

	creds := credentials.NewStaticCredentials(accessKey, secretKey, "")
	newSession := session.New()
	s3Config := &aws.Config{
		Credentials:      creds,
		Endpoint:         aws.String(sdkEndpoint),
		Region:           aws.String("us-east-1"),
		S3ForcePathStyle: aws.Bool(true),
	}

	// Create an S3 service object in the default region.
	s3Client = s3.New(newSession, s3Config)

	// Output to stdout instead of the default stderr
	log.SetOutput(os.Stdout)
	// create custom formatter
	mintFormatter := mintJSONFormatter{}
	// set custom formatter
	log.SetFormatter(&mintFormatter)
	// log Info or above -- success cases are Info level, failures are Fatal level
	log.SetLevel(log.InfoLevel)

	testMakeBucket()
	testPutObject()
	testPutObjectWithTaggingAndMetadata()
	testGetObject()
	testStatObject()
	testDeleteObject()
	testListObjectVersionsSimple()
	testListObjectVersionsWithPrefixAndDelimiter()
	testListObjectVersionsKeysContinuation()
	testListObjectVersionsVersionIDContinuation()
	testListObjectsVersionsWithEmptyDirObject()
	testTagging()
	testLockingLegalhold()
	testPutGetRetentionCompliance()
	testPutGetDeleteRetentionGovernance()
	testLockingRetentionGovernance()
	testLockingRetentionCompliance()
}
@ -1,292 +0,0 @@
// Copyright (c) 2015-2021 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program.  If not, see <http://www.gnu.org/licenses/>.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"net/url"
	"strings"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
)

// Put two objects with the same name but with different content
func testPutObject() {
	startTime := time.Now()
	function := "testPutObject"
	bucket := randString(60, rand.NewSource(time.Now().UnixNano()), "versioning-test-")
	object := "testObject"
	expiry := 1 * time.Minute
	args := map[string]interface{}{
		"bucketName": bucket,
		"objectName": object,
		"expiry":     expiry,
	}

	_, err := s3Client.CreateBucket(&s3.CreateBucketInput{
		Bucket: aws.String(bucket),
	})
	if err != nil {
		failureLog(function, args, startTime, "", "CreateBucket failed", err).Fatal()
		return
	}
	defer cleanupBucket(bucket, function, args, startTime)

	putVersioningInput := &s3.PutBucketVersioningInput{
		Bucket: aws.String(bucket),
		VersioningConfiguration: &s3.VersioningConfiguration{
			Status: aws.String("Enabled"),
		},
	}

	_, err = s3Client.PutBucketVersioning(putVersioningInput)
	if err != nil {
		if strings.Contains(err.Error(), "NotImplemented: A header you provided implies functionality that is not implemented") {
			ignoreLog(function, args, startTime, "Versioning is not implemented").Info()
			return
		}
		failureLog(function, args, startTime, "", "Put versioning failed", err).Fatal()
		return
	}

	putInput1 := &s3.PutObjectInput{
		Body:   aws.ReadSeekCloser(strings.NewReader("my content 1")),
		Bucket: aws.String(bucket),
		Key:    aws.String(object),
	}
	_, err = s3Client.PutObject(putInput1)
	if err != nil {
		failureLog(function, args, startTime, "", fmt.Sprintf("PUT expected to succeed but got %v", err), err).Fatal()
		return
	}
	putInput2 := &s3.PutObjectInput{
		Body:   aws.ReadSeekCloser(strings.NewReader("content file 2")),
		Bucket: aws.String(bucket),
		Key:    aws.String(object),
	}
	_, err = s3Client.PutObject(putInput2)
	if err != nil {
		failureLog(function, args, startTime, "", fmt.Sprintf("PUT expected to succeed but got %v", err), err).Fatal()
		return
	}

	input := &s3.ListObjectVersionsInput{
		Bucket: aws.String(bucket),
	}

	result, err := s3Client.ListObjectVersions(input)
	if err != nil {
		failureLog(function, args, startTime, "", fmt.Sprintf("ListObjectVersions expected to succeed but got %v", err), err).Fatal()
		return
	}

	if len(result.Versions) != 2 {
		failureLog(function, args, startTime, "", "Unexpected list content", errors.New("unexpected number of versions")).Fatal()
		return
	}

	vid1 := *result.Versions[0]
	vid2 := *result.Versions[1]

	if *vid1.VersionId == "" || *vid2.VersionId == "" || *vid1.VersionId == *vid2.VersionId {
		failureLog(function, args, startTime, "", "Unexpected list content", errors.New("unexpected VersionId field")).Fatal()
		return
	}

	if !*vid1.IsLatest || *vid2.IsLatest {
		failureLog(function, args, startTime, "", "Unexpected list content", errors.New("unexpected IsLatest field")).Fatal()
		return
	}

	if *vid1.Size != 14 || *vid2.Size != 12 {
		failureLog(function, args, startTime, "", "Unexpected list content", errors.New("unexpected Size field")).Fatal()
		return
	}

	if *vid1.ETag != "\"e847032b45d3d76230058a80d8ca909b\"" || *vid2.ETag != "\"094459df8fcebffc70d9aa08d75f9944\"" {
		failureLog(function, args, startTime, "", "Unexpected list content", errors.New("unexpected ETag field")).Fatal()
		return
	}

	if *vid1.Key != "testObject" || *vid2.Key != "testObject" {
		failureLog(function, args, startTime, "", "Unexpected list content", errors.New("unexpected Key field")).Fatal()
		return
	}

	if (*vid1.LastModified).Before(*vid2.LastModified) {
		failureLog(function, args, startTime, "", "Unexpected list content", errors.New("unexpected LastModified field")).Fatal()
		return
	}

	successLogger(function, args, startTime).Info()
}

// Upload object versions with tagging and metadata and check them
func testPutObjectWithTaggingAndMetadata() {
	startTime := time.Now()
	function := "testPutObjectWithTaggingAndMetadata"
	bucket := randString(60, rand.NewSource(time.Now().UnixNano()), "versioning-test-")
	object := "testObject"
	expiry := 1 * time.Minute
	args := map[string]interface{}{
		"bucketName": bucket,
		"objectName": object,
		"expiry":     expiry,
	}

	_, err := s3Client.CreateBucket(&s3.CreateBucketInput{
		Bucket: aws.String(bucket),
	})
	if err != nil {
		failureLog(function, args, startTime, "", "CreateBucket failed", err).Fatal()
		return
	}
	defer cleanupBucket(bucket, function, args, startTime)

	putVersioningInput := &s3.PutBucketVersioningInput{
		Bucket: aws.String(bucket),
		VersioningConfiguration: &s3.VersioningConfiguration{
			Status: aws.String("Enabled"),
		},
	}

	_, err = s3Client.PutBucketVersioning(putVersioningInput)
	if err != nil {
		if strings.Contains(err.Error(), "NotImplemented: A header you provided implies functionality that is not implemented") {
			ignoreLog(function, args, startTime, "Versioning is not implemented").Info()
			return
		}
		failureLog(function, args, startTime, "", "Put versioning failed", err).Fatal()
		return
	}

	type objectUpload struct {
		tags      string
		metadata  map[string]string
		versionId string
	}

	uploads := []objectUpload{
		{tags: "key=value"},
		{},
		{metadata: map[string]string{"My-Metadata-Key": "my-metadata-val"}},
		{tags: "key1=value1&key2=value2", metadata: map[string]string{"Foo-Key": "foo-val"}},
	}

	// Upload each version with its tagging and metadata
	for i := range uploads {
		putInput := &s3.PutObjectInput{
			Body:   aws.ReadSeekCloser(strings.NewReader("foocontent")),
			Bucket: aws.String(bucket),
			Key:    aws.String(object),
		}
		if uploads[i].tags != "" {
			putInput.Tagging = aws.String(uploads[i].tags)
		}
		if uploads[i].metadata != nil {
			putInput.Metadata = make(map[string]*string)
			for k, v := range uploads[i].metadata {
				putInput.Metadata[k] = aws.String(v)
			}
		}
		result, err := s3Client.PutObject(putInput)
		if err != nil {
			failureLog(function, args, startTime, "", fmt.Sprintf("PUT object expected to succeed but got %v", err), err).Fatal()
			return
		}
		uploads[i].versionId = *result.VersionId
	}

	// Upload the same set of versions a second time, keeping the latest version IDs
	for i := range uploads {
		putInput := &s3.PutObjectInput{
			Body:   aws.ReadSeekCloser(strings.NewReader("foocontent")),
			Bucket: aws.String(bucket),
			Key:    aws.String(object),
		}
		if uploads[i].tags != "" {
			putInput.Tagging = aws.String(uploads[i].tags)
		}
		if uploads[i].metadata != nil {
			putInput.Metadata = make(map[string]*string)
			for k, v := range uploads[i].metadata {
				putInput.Metadata[k] = aws.String(v)
			}
		}
		result, err := s3Client.PutObject(putInput)
		if err != nil {
			failureLog(function, args, startTime, "", fmt.Sprintf("PUT object expected to succeed but got %v", err), err).Fatal()
			return
		}
		uploads[i].versionId = *result.VersionId
	}

	// Check tagging and metadata of the uploaded versions
	for i := range uploads {
		if uploads[i].tags != "" {
			input := &s3.GetObjectTaggingInput{
				Bucket:    aws.String(bucket),
				Key:       aws.String(object),
				VersionId: aws.String(uploads[i].versionId),
			}
			tagResult, err := s3Client.GetObjectTagging(input)
			if err != nil {
				failureLog(function, args, startTime, "", fmt.Sprintf("GET Object tagging expected to succeed but got %v", err), err).Fatal()
				return
			}
			var vals = make(url.Values)
			for _, tag := range tagResult.TagSet {
				vals.Add(*tag.Key, *tag.Value)
			}
			if uploads[i].tags != vals.Encode() {
				failureLog(function, args, startTime, "", "PUT Object with tagging header returned unexpected result", nil).Fatal()
				return
			}
		}

		if uploads[i].metadata != nil {
			input := &s3.HeadObjectInput{
				Bucket:    aws.String(bucket),
				Key:       aws.String(object),
				VersionId: aws.String(uploads[i].versionId),
			}
			result, err := s3Client.HeadObject(input)
			if err != nil {
				failureLog(function, args, startTime, "", fmt.Sprintf("HEAD Object expected to succeed but got %v", err), err).Fatal()
				return
			}

			for expectedKey, expectedVal := range uploads[i].metadata {
				gotValue, ok := result.Metadata[expectedKey]
				if !ok {
					failureLog(function, args, startTime, "", "HEAD Object returned unexpected metadata key result", nil).Fatal()
					return
				}
				if expectedVal != *gotValue {
					failureLog(function, args, startTime, "", "HEAD Object returned unexpected metadata value result", nil).Fatal()
					return
				}
			}
		}
	}

	successLogger(function, args, startTime).Info()
}
@ -1,370 +0,0 @@
// Copyright (c) 2015-2021 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program.  If not, see <http://www.gnu.org/licenses/>.

package main

import (
	"fmt"
	"math/rand"
	"strings"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
)

// Test locking retention governance
func testLockingRetentionGovernance() {
	startTime := time.Now()
	function := "testLockingRetentionGovernance"
	bucket := randString(60, rand.NewSource(time.Now().UnixNano()), "versioning-test-")
	object := "testObject"
	expiry := 1 * time.Minute
	args := map[string]interface{}{
		"bucketName": bucket,
		"objectName": object,
		"expiry":     expiry,
	}

	_, err := s3Client.CreateBucket(&s3.CreateBucketInput{
		Bucket:                     aws.String(bucket),
		ObjectLockEnabledForBucket: aws.Bool(true),
	})
	if err != nil {
		if strings.Contains(err.Error(), "NotImplemented: A header you provided implies functionality that is not implemented") {
			ignoreLog(function, args, startTime, "Versioning is not implemented").Info()
			return
		}
		failureLog(function, args, startTime, "", "CreateBucket failed", err).Fatal()
		return
	}
	defer cleanupBucket(bucket, function, args, startTime)

	type uploadedObject struct {
		retention        string
		retentionUntil   time.Time
		successfulRemove bool
		versionId        string
		deleteMarker     bool
	}

	uploads := []uploadedObject{
		{},
		{retention: "GOVERNANCE", retentionUntil: time.Now().UTC().Add(time.Hour)},
		{},
	}

	// Upload versions and save their version IDs
	for i := range uploads {
		putInput := &s3.PutObjectInput{
			Body:   aws.ReadSeekCloser(strings.NewReader("content")),
			Bucket: aws.String(bucket),
			Key:    aws.String(object),
		}
		if uploads[i].retention != "" {
			putInput.ObjectLockMode = aws.String(uploads[i].retention)
			putInput.ObjectLockRetainUntilDate = aws.Time(uploads[i].retentionUntil)
		}
		output, err := s3Client.PutObject(putInput)
		if err != nil {
			failureLog(function, args, startTime, "", fmt.Sprintf("PUT expected to succeed but got %v", err), err).Fatal()
			return
		}
		uploads[i].versionId = *output.VersionId
	}

	// In all cases, we can remove an object by creating a delete marker
	// First delete without version ID
	deleteInput := &s3.DeleteObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(object),
	}
	deleteOutput, err := s3Client.DeleteObject(deleteInput)
	if err != nil {
		failureLog(function, args, startTime, "", fmt.Sprintf("DELETE expected to succeed but got %v", err), err).Fatal()
		return
	}

	uploads = append(uploads, uploadedObject{versionId: *deleteOutput.VersionId, deleteMarker: true})

	// Delete each version by version ID; only unretained versions should be removable
	for i := range uploads {
		if uploads[i].deleteMarker {
			continue
		}
		deleteInput := &s3.DeleteObjectInput{
			Bucket:    aws.String(bucket),
			Key:       aws.String(object),
			VersionId: aws.String(uploads[i].versionId),
		}
		_, err = s3Client.DeleteObject(deleteInput)
		if err == nil && uploads[i].retention != "" {
			failureLog(function, args, startTime, "", "DELETE expected to fail but succeeded instead", nil).Fatal()
			return
		}
		if err != nil && uploads[i].retention == "" {
			failureLog(function, args, startTime, "", fmt.Sprintf("DELETE expected to succeed but got %v", err), err).Fatal()
			return
		}
	}

	successLogger(function, args, startTime).Info()
}

// Test locking retention compliance
func testLockingRetentionCompliance() {
	startTime := time.Now()
	function := "testLockingRetentionCompliance"
	bucket := randString(60, rand.NewSource(time.Now().UnixNano()), "versioning-test-")
	object := "testObject"
	expiry := 1 * time.Minute
	args := map[string]interface{}{
		"bucketName": bucket,
		"objectName": object,
		"expiry":     expiry,
	}

	_, err := s3Client.CreateBucket(&s3.CreateBucketInput{
		Bucket:                     aws.String(bucket),
		ObjectLockEnabledForBucket: aws.Bool(true),
	})
	if err != nil {
		if strings.Contains(err.Error(), "NotImplemented: A header you provided implies functionality that is not implemented") {
			ignoreLog(function, args, startTime, "Versioning is not implemented").Info()
			return
		}
		failureLog(function, args, startTime, "", "CreateBucket failed", err).Fatal()
		return
	}

	defer cleanupBucket(bucket, function, args, startTime)

	type uploadedObject struct {
		retention        string
		retentionUntil   time.Time
		successfulRemove bool
		versionId        string
		deleteMarker     bool
	}

	uploads := []uploadedObject{
		{},
		{retention: "COMPLIANCE", retentionUntil: time.Now().UTC().Add(time.Minute)},
		{},
	}

	// Upload versions and save their version IDs
	for i := range uploads {
		putInput := &s3.PutObjectInput{
			Body:   aws.ReadSeekCloser(strings.NewReader("content")),
			Bucket: aws.String(bucket),
			Key:    aws.String(object),
		}
		if uploads[i].retention != "" {
			putInput.ObjectLockMode = aws.String(uploads[i].retention)
			putInput.ObjectLockRetainUntilDate = aws.Time(uploads[i].retentionUntil)
		}
		output, err := s3Client.PutObject(putInput)
		if err != nil {
			failureLog(function, args, startTime, "", fmt.Sprintf("PUT expected to succeed but got %v", err), err).Fatal()
			return
		}
		uploads[i].versionId = *output.VersionId
	}

	// In all cases, we can remove an object by creating a delete marker
	// First delete without version ID
	deleteInput := &s3.DeleteObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(object),
	}
	deleteOutput, err := s3Client.DeleteObject(deleteInput)
	if err != nil {
		failureLog(function, args, startTime, "", fmt.Sprintf("DELETE expected to succeed but got %v", err), err).Fatal()
		return
	}

	uploads = append(uploads, uploadedObject{versionId: *deleteOutput.VersionId, deleteMarker: true})

	// Delete each version by version ID; only unretained versions should be removable
	for i := range uploads {
		if uploads[i].deleteMarker {
			continue
		}
		deleteInput := &s3.DeleteObjectInput{
			Bucket:    aws.String(bucket),
			Key:       aws.String(object),
			VersionId: aws.String(uploads[i].versionId),
		}
		_, err = s3Client.DeleteObject(deleteInput)
		if err == nil && uploads[i].retention != "" {
			failureLog(function, args, startTime, "", "DELETE expected to fail but succeeded instead", nil).Fatal()
			return
		}
		if err != nil && uploads[i].retention == "" {
			failureLog(function, args, startTime, "", fmt.Sprintf("DELETE expected to succeed but got %v", err), err).Fatal()
			return
		}
	}

	successLogger(function, args, startTime).Info()
}

func testPutGetDeleteRetentionGovernance() {
	functionName := "testPutGetDeleteRetentionGovernance"
	testPutGetDeleteLockingRetention(functionName, "GOVERNANCE")
}

func testPutGetRetentionCompliance() {
	functionName := "testPutGetRetentionCompliance"
	testPutGetDeleteLockingRetention(functionName, "COMPLIANCE")
}

// Test put, get and update of locking retention in the given retention mode
func testPutGetDeleteLockingRetention(function, retentionMode string) {
	startTime := time.Now()
	bucket := randString(60, rand.NewSource(time.Now().UnixNano()), "versioning-test-")
	object := "testObject"
	args := map[string]interface{}{
		"bucketName":    bucket,
		"objectName":    object,
		"retentionMode": retentionMode,
	}

	_, err := s3Client.CreateBucket(&s3.CreateBucketInput{
		Bucket:                     aws.String(bucket),
		ObjectLockEnabledForBucket: aws.Bool(true),
	})
	if err != nil {
		if strings.Contains(err.Error(), "NotImplemented: A header you provided implies functionality that is not implemented") {
			ignoreLog(function, args, startTime, "Versioning is not implemented").Info()
			return
		}
		failureLog(function, args, startTime, "", "CreateBucket failed", err).Fatal()
		return
	}

	defer cleanupBucket(bucket, function, args, startTime)

	oneMinuteRetention := time.Now().UTC().Add(time.Minute)
	twoMinutesRetention := oneMinuteRetention.Add(time.Minute)

	// Upload a version and save its version ID
	putInput := &s3.PutObjectInput{
		Body:                      aws.ReadSeekCloser(strings.NewReader("content")),
		Bucket:                    aws.String(bucket),
		Key:                       aws.String(object),
		ObjectLockMode:            aws.String(retentionMode),
		ObjectLockRetainUntilDate: aws.Time(oneMinuteRetention),
	}

	output, err := s3Client.PutObject(putInput)
	if err != nil {
		failureLog(function, args, startTime, "", fmt.Sprintf("PUT expected to succeed but got %v", err), err).Fatal()
		return
	}
	versionId := *output.VersionId

	// Increase retention until date
	putRetentionInput := &s3.PutObjectRetentionInput{
		Bucket:    aws.String(bucket),
		Key:       aws.String(object),
		VersionId: aws.String(versionId),
		Retention: &s3.ObjectLockRetention{
			Mode:            aws.String(retentionMode),
			RetainUntilDate: aws.Time(twoMinutesRetention),
		},
	}
	_, err = s3Client.PutObjectRetention(putRetentionInput)
	if err != nil {
		failureLog(function, args, startTime, "", fmt.Sprintf("PutObjectRetention expected to succeed but got %v", err), err).Fatal()
		return
	}

	getRetentionInput := &s3.GetObjectRetentionInput{
		Bucket:    aws.String(bucket),
		Key:       aws.String(object),
		VersionId: aws.String(versionId),
	}

	retentionOutput, err := s3Client.GetObjectRetention(getRetentionInput)
	if err != nil {
		failureLog(function, args, startTime, "", fmt.Sprintf("GetObjectRetention expected to succeed but got %v", err), err).Fatal()
		return
	}

	// Compare retain-until dates truncated to one-second precision
	if retentionOutput.Retention.RetainUntilDate.Truncate(time.Second).String() != twoMinutesRetention.Truncate(time.Second).String() {
		failureLog(function, args, startTime, "", "Unexpected retain until date", nil).Fatal()
		return
	}

	// Lowering the retention until date should fail
	putRetentionInput = &s3.PutObjectRetentionInput{
		Bucket:    aws.String(bucket),
		Key:       aws.String(object),
		VersionId: aws.String(versionId),
		Retention: &s3.ObjectLockRetention{
			Mode:            aws.String(retentionMode),
			RetainUntilDate: aws.Time(oneMinuteRetention),
		},
	}
	_, err = s3Client.PutObjectRetention(putRetentionInput)
	if err == nil {
		failureLog(function, args, startTime, "", "PutObjectRetention expected to fail but succeeded", nil).Fatal()
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
// Remove retention without governance bypass
|
|
||||||
putRetentionInput = &s3.PutObjectRetentionInput{
|
|
||||||
Bucket: aws.String(bucket),
|
|
||||||
Key: aws.String(object),
|
|
||||||
VersionId: aws.String(versionId),
|
|
||||||
Retention: &s3.ObjectLockRetention{
|
|
||||||
Mode: aws.String(""),
|
|
||||||
},
|
|
||||||
}
|
|
||||||
|
|
||||||
_, err = s3Client.PutObjectRetention(putRetentionInput)
|
|
||||||
if err == nil {
|
|
||||||
failureLog(function, args, startTime, "", "Operation expected to fail but succeeded", nil).Fatal()
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
if retentionMode == "GOVERNANCE" {
|
|
||||||
// Remove governance retention without govenance bypass
|
|
||||||
putRetentionInput = &s3.PutObjectRetentionInput{
|
|
||||||
Bucket: aws.String(bucket),
|
|
||||||
Key: aws.String(object),
|
|
||||||
VersionId: aws.String(versionId),
|
|
||||||
BypassGovernanceRetention: aws.Bool(true),
|
|
||||||
Retention: &s3.ObjectLockRetention{
|
|
||||||
Mode: aws.String(""),
|
|
||||||
},
|
|
||||||
}
|
|
||||||
|
|
||||||
_, err = s3Client.PutObjectRetention(putRetentionInput)
|
|
||||||
if err != nil {
|
|
||||||
failureLog(function, args, startTime, "", fmt.Sprintf("Expected to succeed but failed with %v", err), err).Fatal()
|
|
||||||
return
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
successLogger(function, args, startTime).Info()
|
|
||||||
}
|
|
|
@@ -1,181 +0,0 @@
// Copyright (c) 2015-2021 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.

package main

import (
	"fmt"
	"math/rand"
	"strings"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/service/s3"
)

func testStatObject() {
	startTime := time.Now()
	function := "testStatObject"
	bucket := randString(60, rand.NewSource(time.Now().UnixNano()), "versioning-test-")
	object := "testObject"
	expiry := 1 * time.Minute
	args := map[string]interface{}{
		"bucketName": bucket,
		"objectName": object,
		"expiry":     expiry,
	}

	_, err := s3Client.CreateBucket(&s3.CreateBucketInput{
		Bucket: aws.String(bucket),
	})
	if err != nil {
		failureLog(function, args, startTime, "", "CreateBucket failed", err).Fatal()
		return
	}
	defer cleanupBucket(bucket, function, args, startTime)

	putVersioningInput := &s3.PutBucketVersioningInput{
		Bucket: aws.String(bucket),
		VersioningConfiguration: &s3.VersioningConfiguration{
			Status: aws.String("Enabled"),
		},
	}

	_, err = s3Client.PutBucketVersioning(putVersioningInput)
	if err != nil {
		if strings.Contains(err.Error(), "NotImplemented: A header you provided implies functionality that is not implemented") {
			ignoreLog(function, args, startTime, "Versioning is not implemented").Info()
			return
		}
		failureLog(function, args, startTime, "", "Put versioning failed", err).Fatal()
		return
	}

	putInput1 := &s3.PutObjectInput{
		Body:   aws.ReadSeekCloser(strings.NewReader("my content 1")),
		Bucket: aws.String(bucket),
		Key:    aws.String(object),
	}
	_, err = s3Client.PutObject(putInput1)
	if err != nil {
		failureLog(function, args, startTime, "", fmt.Sprintf("PUT expected to succeed but got %v", err), err).Fatal()
		return
	}
	putInput2 := &s3.PutObjectInput{
		Body:   aws.ReadSeekCloser(strings.NewReader("content file 2")),
		Bucket: aws.String(bucket),
		Key:    aws.String(object),
	}
	_, err = s3Client.PutObject(putInput2)
	if err != nil {
		failureLog(function, args, startTime, "", fmt.Sprintf("PUT expected to succeed but got %v", err), err).Fatal()
		return
	}

	deleteInput := &s3.DeleteObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(object),
	}

	_, err = s3Client.DeleteObject(deleteInput)
	if err != nil {
		failureLog(function, args, startTime, "", fmt.Sprintf("Delete expected to succeed but got %v", err), err).Fatal()
		return
	}

	input := &s3.ListObjectVersionsInput{
		Bucket: aws.String(bucket),
	}

	result, err := s3Client.ListObjectVersions(input)
	if err != nil {
		failureLog(function, args, startTime, "", fmt.Sprintf("ListObjectVersions expected to succeed but got %v", err), err).Fatal()
		return
	}

	testCases := []struct {
		size         int64
		versionId    string
		etag         string
		contentType  string
		deleteMarker bool
	}{
		{0, *(*result.DeleteMarkers[0]).VersionId, "", "", true},
		{14, *(*result.Versions[0]).VersionId, "\"e847032b45d3d76230058a80d8ca909b\"", "binary/octet-stream", false},
		{12, *(*result.Versions[1]).VersionId, "\"094459df8fcebffc70d9aa08d75f9944\"", "binary/octet-stream", false},
	}

	for i, testCase := range testCases {
		headInput := &s3.HeadObjectInput{
			Bucket:    aws.String(bucket),
			Key:       aws.String(object),
			VersionId: aws.String(testCase.versionId),
		}

		result, err := s3Client.HeadObject(headInput)
		if testCase.deleteMarker && err == nil {
			failureLog(function, args, startTime, "", fmt.Sprintf("StatObject (%d) expected to fail but succeeded", i+1), nil).Fatal()
			return
		}

		if !testCase.deleteMarker && err != nil {
			failureLog(function, args, startTime, "", fmt.Sprintf("StatObject (%d) expected to succeed but failed", i+1), err).Fatal()
			return
		}

		if testCase.deleteMarker {
			aerr, ok := err.(awserr.Error)
			if !ok {
				failureLog(function, args, startTime, "", fmt.Sprintf("StatObject (%d) unexpected error with delete marker", i+1), err).Fatal()
				return
			}
			if aerr.Code() != "MethodNotAllowed" {
				failureLog(function, args, startTime, "", fmt.Sprintf("StatObject (%d) unexpected error code with delete marker", i+1), err).Fatal()
				return
			}
			continue
		}

		if *result.ContentLength != testCase.size {
			failureLog(function, args, startTime, "", fmt.Sprintf("StatObject (%d) unexpected Content-Length", i+1), err).Fatal()
			return
		}

		if *result.ETag != testCase.etag {
			failureLog(function, args, startTime, "", fmt.Sprintf("StatObject (%d) unexpected ETag", i+1), err).Fatal()
			return
		}

		if *result.ContentType != testCase.contentType {
			failureLog(function, args, startTime, "", fmt.Sprintf("StatObject (%d) unexpected Content-Type", i+1), err).Fatal()
			return
		}

		if result.DeleteMarker != nil && *result.DeleteMarker {
			failureLog(function, args, startTime, "", fmt.Sprintf("StatObject (%d) unexpected DeleteMarker", i+1), err).Fatal()
			return
		}

		if time.Since(*result.LastModified) > time.Hour {
			failureLog(function, args, startTime, "", fmt.Sprintf("StatObject (%d) unexpected LastModified", i+1), err).Fatal()
			return
		}
	}

	successLogger(function, args, startTime).Info()
}
@@ -1,201 +0,0 @@
// Copyright (c) 2015-2021 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.

package main

import (
	"fmt"
	"math/rand"
	"reflect"
	"strings"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
)

// Test PUT/GET/DELETE tagging for separate versions
func testTagging() {
	startTime := time.Now()
	function := "testTagging"
	bucket := randString(60, rand.NewSource(time.Now().UnixNano()), "versioning-test-")
	object := "testObject"
	expiry := 1 * time.Minute
	args := map[string]interface{}{
		"bucketName": bucket,
		"objectName": object,
		"expiry":     expiry,
	}

	_, err := s3Client.CreateBucket(&s3.CreateBucketInput{
		Bucket: aws.String(bucket),
	})
	if err != nil {
		failureLog(function, args, startTime, "", "CreateBucket failed", err).Fatal()
		return
	}
	defer cleanupBucket(bucket, function, args, startTime)

	putVersioningInput := &s3.PutBucketVersioningInput{
		Bucket: aws.String(bucket),
		VersioningConfiguration: &s3.VersioningConfiguration{
			Status: aws.String("Enabled"),
		},
	}

	_, err = s3Client.PutBucketVersioning(putVersioningInput)
	if err != nil {
		if strings.Contains(err.Error(), "NotImplemented: A header you provided implies functionality that is not implemented") {
			ignoreLog(function, args, startTime, "Versioning is not implemented").Info()
			return
		}
		failureLog(function, args, startTime, "", "Put versioning failed", err).Fatal()
		return
	}

	type uploadedObject struct {
		content      string
		tagging      []*s3.Tag
		versionId    string
		deleteMarker bool
	}

	uploads := []uploadedObject{
		{content: "my content 1", tagging: []*s3.Tag{{Key: aws.String("type"), Value: aws.String("text")}}},
		{content: "content file 2"},
		{content: "\"%32&é", tagging: []*s3.Tag{{Key: aws.String("type"), Value: aws.String("garbage")}}},
		{deleteMarker: true},
	}

	// Upload versions and save their version IDs
	for i := range uploads {
		if uploads[i].deleteMarker {
			// Delete the current object to create a delete marker
			deleteInput := &s3.DeleteObjectInput{
				Bucket: aws.String(bucket),
				Key:    aws.String(object),
			}
			deleteOutput, err := s3Client.DeleteObject(deleteInput)
			if err != nil {
				failureLog(function, args, startTime, "", fmt.Sprintf("DELETE object expected to succeed but got %v", err), err).Fatal()
				return
			}
			uploads[i].versionId = *deleteOutput.VersionId
			continue
		}

		putInput := &s3.PutObjectInput{
			Body:   aws.ReadSeekCloser(strings.NewReader(uploads[i].content)),
			Bucket: aws.String(bucket),
			Key:    aws.String(object),
		}
		output, err := s3Client.PutObject(putInput)
		if err != nil {
			failureLog(function, args, startTime, "", fmt.Sprintf("PUT expected to succeed but got %v", err), err).Fatal()
			return
		}
		uploads[i].versionId = *output.VersionId
	}

	// Put tagging on each version
	for i := range uploads {
		if uploads[i].tagging == nil {
			continue
		}
		putTaggingInput := &s3.PutObjectTaggingInput{
			Bucket:    aws.String(bucket),
			Key:       aws.String(object),
			Tagging:   &s3.Tagging{TagSet: uploads[i].tagging},
			VersionId: aws.String(uploads[i].versionId),
		}
		_, err = s3Client.PutObjectTagging(putTaggingInput)
		if err != nil {
			failureLog(function, args, startTime, "", fmt.Sprintf("PUT Object tagging expected to succeed but got %v", err), err).Fatal()
			return
		}
	}

	// Check each version's tagging
	for i := range uploads {
		input := &s3.GetObjectTaggingInput{
			Bucket:    aws.String(bucket),
			Key:       aws.String(object),
			VersionId: aws.String(uploads[i].versionId),
		}
		result, err := s3Client.GetObjectTagging(input)
		if err == nil && uploads[i].deleteMarker {
			failureLog(function, args, startTime, "", "GET Object tagging expected to fail with delete marker but succeeded", err).Fatal()
			return
		}
		if err != nil && !uploads[i].deleteMarker {
			failureLog(function, args, startTime, "", fmt.Sprintf("GET Object tagging expected to succeed but got %v", err), err).Fatal()
			return
		}

		if uploads[i].deleteMarker {
			continue
		}

		if !reflect.DeepEqual(result.TagSet, uploads[i].tagging) {
			failureLog(function, args, startTime, "", "GET Object tagging returned unexpected result", nil).Fatal()
			return
		}
	}

	// Remove tagging from all versions
	for i := range uploads {
		input := &s3.DeleteObjectTaggingInput{
			Bucket:    aws.String(bucket),
			Key:       aws.String(object),
			VersionId: aws.String(uploads[i].versionId),
		}
		_, err := s3Client.DeleteObjectTagging(input)
		if err == nil && uploads[i].deleteMarker {
			failureLog(function, args, startTime, "", "DELETE Object tagging expected to fail with delete marker but succeeded", err).Fatal()
			return
		}
		if err != nil && !uploads[i].deleteMarker {
			failureLog(function, args, startTime, "", fmt.Sprintf("DELETE Object tagging expected to succeed but got %v", err), err).Fatal()
			return
		}
	}

	// Check for tagging after removal
	for i := range uploads {
		if uploads[i].deleteMarker {
			// The delete-marker case is already covered above
			continue
		}
		input := &s3.GetObjectTaggingInput{
			Bucket:    aws.String(bucket),
			Key:       aws.String(object),
			VersionId: aws.String(uploads[i].versionId),
		}
		result, err := s3Client.GetObjectTagging(input)
		if err != nil {
			failureLog(function, args, startTime, "", fmt.Sprintf("GET Object tagging expected to succeed but got %v", err), err).Fatal()
			return
		}
		var nilTagSet []*s3.Tag
		if !reflect.DeepEqual(result.TagSet, nilTagSet) {
			failureLog(function, args, startTime, "", "GET Object tagging after DELETE returned unexpected result", nil).Fatal()
			return
		}
	}

	successLogger(function, args, startTime).Info()
}
@@ -1,135 +0,0 @@
// Copyright (c) 2015-2021 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.

package main

import (
	"encoding/json"
	"encoding/xml"
	"fmt"
	"math/rand"
	"net/http"
	"strings"
	"time"

	log "github.com/sirupsen/logrus"
)

const letterBytes = "abcdefghijklmnopqrstuvwxyz01234569"
const (
	letterIdxBits = 6                    // 6 bits to represent a letter index
	letterIdxMask = 1<<letterIdxBits - 1 // All 1-bits, as many as letterIdxBits
	letterIdxMax  = 63 / letterIdxBits   // # of letter indices fitting in 63 bits
)

// different kinds of test statuses
const (
	PASS = "PASS" // Indicates that a test passed
	FAIL = "FAIL" // Indicates that a test failed
)

type errorResponse struct {
	XMLName    xml.Name `xml:"Error" json:"-"`
	Code       string
	Message    string
	BucketName string
	Key        string
	RequestID  string `xml:"RequestId"`
	HostID     string `xml:"HostId"`

	// Region where the bucket is located. This header is returned
	// only in HEAD bucket and ListObjects responses.
	Region string

	// Headers of the returned S3 XML error
	Headers http.Header `xml:"-" json:"-"`
}

type mintJSONFormatter struct{}

func (f *mintJSONFormatter) Format(entry *log.Entry) ([]byte, error) {
	data := make(log.Fields, len(entry.Data))
	for k, v := range entry.Data {
		switch v := v.(type) {
		case error:
			// Otherwise errors are ignored by `encoding/json`
			// https://github.com/sirupsen/logrus/issues/137
			data[k] = v.Error()
		default:
			data[k] = v
		}
	}

	serialized, err := json.Marshal(data)
	if err != nil {
		return nil, fmt.Errorf("Failed to marshal fields to JSON, %w", err)
	}
	return append(serialized, '\n'), nil
}

// successLogger logs successful test runs
func successLogger(function string, args map[string]interface{}, startTime time.Time) *log.Entry {
	// calculate the test case duration
	duration := time.Since(startTime)
	// log with the fields as per mint
	fields := log.Fields{"name": "versioning", "function": function, "args": args, "duration": duration.Nanoseconds() / 1000000, "status": PASS}
	return log.WithFields(fields)
}

// ignoreLog logs not applicable test runs
func ignoreLog(function string, args map[string]interface{}, startTime time.Time, alert string) *log.Entry {
	// calculate the test case duration
	duration := time.Since(startTime)
	// log with the fields as per mint
	fields := log.Fields{"name": "versioning", "function": function, "args": args,
		"duration": duration.Nanoseconds() / 1000000, "status": "NA", "alert": strings.Split(alert, " ")[0] + " is NotImplemented"}
	return log.WithFields(fields)
}

// failureLog logs failed test runs
func failureLog(function string, args map[string]interface{}, startTime time.Time, alert string, message string, err error) *log.Entry {
	// calculate the test case duration
	duration := time.Since(startTime)
	var fields log.Fields
	// log with the fields as per mint
	if err != nil {
		fields = log.Fields{"name": "versioning", "function": function, "args": args,
			"duration": duration.Nanoseconds() / 1000000, "status": FAIL, "alert": alert, "message": message, "error": err}
	} else {
		fields = log.Fields{"name": "versioning", "function": function, "args": args,
			"duration": duration.Nanoseconds() / 1000000, "status": FAIL, "alert": alert, "message": message}
	}
	return log.WithFields(fields)
}

func randString(n int, src rand.Source, prefix string) string {
	b := make([]byte, n)
	// A src.Int63() call generates 63 random bits, enough for letterIdxMax letters!
	for i, cache, remain := n-1, src.Int63(), letterIdxMax; i >= 0; {
		if remain == 0 {
			cache, remain = src.Int63(), letterIdxMax
		}
		if idx := int(cache & letterIdxMask); idx < len(letterBytes) {
			b[i] = letterBytes[idx]
			i--
		}
		cache >>= letterIdxBits
		remain--
	}
	return prefix + string(b[0:30-len(prefix)])
}
@@ -1,31 +0,0 @@
#!/bin/bash -e
#
#

MINT_DATA_DIR="$MINT_ROOT_DIR/data"

declare -A data_file_map
data_file_map["datafile-0-b"]="0"
data_file_map["datafile-1-b"]="1"
data_file_map["datafile-1-kB"]="1K"
data_file_map["datafile-10-kB"]="10K"
data_file_map["datafile-33-kB"]="33K"
data_file_map["datafile-100-kB"]="100K"
data_file_map["datafile-1.03-MB"]="1056K"
data_file_map["datafile-1-MB"]="1M"
data_file_map["datafile-5-MB"]="5M"
data_file_map["datafile-5243880-b"]="5243880"
data_file_map["datafile-6-MB"]="6M"
data_file_map["datafile-10-MB"]="10M"
data_file_map["datafile-11-MB"]="11M"
data_file_map["datafile-65-MB"]="65M"
data_file_map["datafile-129-MB"]="129M"

mkdir -p "$MINT_DATA_DIR"
for filename in "${!data_file_map[@]}"; do
    echo "creating $MINT_DATA_DIR/$filename"
    if ! shred -n 1 -s "${data_file_map[$filename]}" - 1>"$MINT_DATA_DIR/$filename" 2>/dev/null; then
        echo "unable to create data file $MINT_DATA_DIR/$filename"
        exit 1
    fi
done
@@ -1,11 +0,0 @@
#!/bin/bash
#
#

./mint.sh "$@" &

# Save the pid to be used for the kill command if required
main_pid="$!"
trap 'echo -e "\nAborting Mint..."; kill $main_pid' SIGINT SIGTERM
# use wait -n here to catch the mint.sh exit code and report it to CI
wait -n
@@ -1,17 +0,0 @@
git
python3-pip
nodejs
openjdk-8-jdk
openjdk-8-jdk-headless
dirmngr
apt-transport-https
dotnet-sdk-2.1
ca-certificates-mono
libunwind8
ruby
ruby-dev
ruby-bundler
php
php-curl
php-xml
ant
183
mint/mint.sh
183
mint/mint.sh
|
@ -1,183 +0,0 @@
|
||||||
#!/bin/bash
|
|
||||||
#
|
|
||||||
#
|
|
||||||
|
|
||||||
CONTAINER_ID=$(grep -o -e '[0-f]\{12,\}' /proc/1/cpuset | awk '{print substr($1, 1, 12)}')
|
|
||||||
MINT_DATA_DIR=${MINT_DATA_DIR:-/mint/data}
|
|
||||||
MINT_MODE=${MINT_MODE:-core}
|
|
||||||
SERVER_REGION=${SERVER_REGION:-us-east-1}
|
|
||||||
ENABLE_HTTPS=${ENABLE_HTTPS:-0}
|
|
||||||
ENABLE_VIRTUAL_STYLE=${ENABLE_VIRTUAL_STYLE:-0}
|
|
||||||
RUN_ON_FAIL=${RUN_ON_FAIL:-0}
|
|
||||||
GO111MODULE=on
|
|
||||||
|
|
||||||
if [ -z "$SERVER_ENDPOINT" ]; then
|
|
||||||
SERVER_ENDPOINT="play.minio.io:9000"
|
|
||||||
ACCESS_KEY="Q3AM3UQ867SPQQA43P2F"
|
|
||||||
SECRET_KEY="zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG"
|
|
||||||
ENABLE_HTTPS=1
|
|
||||||
fi
|
|
||||||
|
|
||||||
if [ "$ENABLE_VIRTUAL_STYLE" -eq 1 ]; then
|
|
||||||
SERVER_IP="${SERVER_ENDPOINT%%:*}"
|
|
||||||
SERVER_PORT="${SERVER_ENDPOINT/*:/}"
|
|
||||||
# Check if SERVER_IP is actually IPv4 address
|
|
||||||
octets=("${SERVER_IP//./ }")
|
|
||||||
if [ "${#octets[@]}" -ne 4 ]; then
|
|
||||||
echo "$SERVER_IP must be an IP address"
|
|
||||||
exit 1
|
|
||||||
fi
|
|
||||||
for octet in "${octets[@]}"; do
|
|
||||||
if [ "$octet" -lt 0 ] 2>/dev/null || [ "$octet" -gt 255 ] 2>/dev/null; then
|
|
||||||
echo "$SERVER_IP must be an IP address"
|
|
||||||
exit 1
|
|
||||||
fi
|
|
||||||
done
|
|
||||||
fi
|
|
||||||
|
|
||||||
ROOT_DIR="$PWD"
|
|
||||||
TESTS_DIR="$ROOT_DIR/run/core"
|
|
||||||
|
|
||||||
BASE_LOG_DIR="$ROOT_DIR/log"
|
|
||||||
LOG_FILE="log.json"
|
|
||||||
ERROR_FILE="error.log"
|
|
||||||
mkdir -p "$BASE_LOG_DIR"
|
|
||||||
|
|
||||||
function humanize_time()
|
|
||||||
{
|
|
||||||
time="$1"
|
|
||||||
days=$(( time / 60 / 60 / 24 ))
|
|
||||||
hours=$(( time / 60 / 60 % 24 ))
|
|
||||||
minutes=$(( time / 60 % 60 ))
|
|
||||||
seconds=$(( time % 60 ))
|
|
||||||
|
|
||||||
(( days > 0 )) && echo -n "$days days "
|
|
||||||
(( hours > 0 )) && echo -n "$hours hours "
|
|
||||||
(( minutes > 0 )) && echo -n "$minutes minutes "
|
|
||||||
(( days > 0 || hours > 0 || minutes > 0 )) && echo -n "and "
|
|
||||||
echo "$seconds seconds"
|
|
||||||
}
|
|
||||||
|
|
||||||
function run_test()
|
|
||||||
{
|
|
||||||
if [ ! -d "$1" ]; then
|
|
||||||
return 1
|
|
||||||
fi
|
|
||||||
|
|
||||||
start=$(date +%s)
|
|
||||||
|
|
||||||
mkdir -p "$BASE_LOG_DIR/$sdk_name"
|
|
||||||
|
|
||||||
(cd "$sdk_dir" && ./run.sh "$BASE_LOG_DIR/$LOG_FILE" "$BASE_LOG_DIR/$sdk_name/$ERROR_FILE")
|
|
||||||
rv=$?
|
|
||||||
end=$(date +%s)
|
|
||||||
duration=$(humanize_time $(( end - start )))
|
|
||||||
|
|
||||||
if [ "$rv" -eq 0 ]; then
|
|
||||||
echo "done in $duration"
|
|
||||||
else
|
|
||||||
echo "FAILED in $duration"
|
|
||||||
entry=$(tail -n 1 "$BASE_LOG_DIR/$LOG_FILE")
|
|
||||||
status=$(jq -e -r .status <<<"$entry")
|
|
||||||
jq_rv=$?
|
|
||||||
if [ "$jq_rv" -ne 0 ]; then
|
|
||||||
echo "$entry"
|
|
||||||
fi
|
|
||||||
## Show error.log when status is empty or not "FAIL".
|
|
||||||
## This may happen when test run failed without providing logs.
|
|
||||||
if [ "$jq_rv" -ne 0 ] || [ -z "$status" ] || { [ "$status" != "FAIL" ] && [ "$status" != "fail" ]; }; then
|
|
||||||
cat "$BASE_LOG_DIR/$sdk_name/$ERROR_FILE"
|
|
||||||
else
|
|
||||||
jq . <<<"$entry"
|
|
||||||
fi
|
|
||||||
fi
|
|
||||||
return $rv
|
|
||||||
}
|
|
||||||
|
|
||||||
function trust_s3_endpoint_tls_cert()
|
|
||||||
{
|
|
||||||
# Download the public certificate from the server
|
|
||||||
openssl s_client -showcerts -connect "$SERVER_ENDPOINT" </dev/null 2>/dev/null | \
|
|
||||||
openssl x509 -outform PEM -out /usr/local/share/ca-certificates/s3_server_cert.crt || \
|
|
||||||
exit 1
|
|
||||||
|
|
||||||
# Load the certificate in the system
|
|
||||||
update-ca-certificates --fresh >/dev/null
|
|
||||||
|
|
||||||
# Ask different SDKs/tools to load system certificates
|
|
||||||
export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt
|
|
||||||
export NODE_EXTRA_CA_CERTS=/etc/ssl/certs/ca-certificates.crt
|
|
||||||
export SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt
|
|
||||||
}
|
|
||||||
|
|
||||||
|
|
||||||
function main()
{
    export MINT_DATA_DIR
    export MINT_MODE
    export SERVER_ENDPOINT
    export SERVER_IP
    export SERVER_PORT

    export ACCESS_KEY
    export SECRET_KEY
    export ENABLE_HTTPS
    export SERVER_REGION
    export ENABLE_VIRTUAL_STYLE
    export RUN_ON_FAIL
    export GO111MODULE

    echo "Running with"
    echo "SERVER_ENDPOINT: $SERVER_ENDPOINT"
    echo "ACCESS_KEY: $ACCESS_KEY"
    echo "SECRET_KEY: ***REDACTED***"
    echo "ENABLE_HTTPS: $ENABLE_HTTPS"
    echo "SERVER_REGION: $SERVER_REGION"
    echo "MINT_DATA_DIR: $MINT_DATA_DIR"
    echo "MINT_MODE: $MINT_MODE"
    echo "ENABLE_VIRTUAL_STYLE: $ENABLE_VIRTUAL_STYLE"
    echo "RUN_ON_FAIL: $RUN_ON_FAIL"
    echo
    echo "To get logs, run 'docker cp ${CONTAINER_ID}:/mint/log /tmp/mint-logs'"
    echo

    [ "$ENABLE_HTTPS" == "1" ] && trust_s3_endpoint_tls_cert

    declare -a run_list
    sdks=( "$@" )

    if [ "$#" -eq 0 ]; then
        cd "$TESTS_DIR" || exit
        sdks=(*)
        cd .. || exit
    fi

    for sdk in "${sdks[@]}"; do
        sdk=$(basename "$sdk")
        run_list=( "${run_list[@]}" "$TESTS_DIR/$sdk" )
    done

    count="${#run_list[@]}"
    i=0
    for sdk_dir in "${run_list[@]}"; do
        sdk_name=$(basename "$sdk_dir")
        (( i++ ))
        if [ ! -d "$sdk_dir" ]; then
            echo "Test $sdk_name not found. Exiting Mint."
            exit 1
        fi
        echo -n "($i/$count) Running $sdk_name tests ... "
        if ! run_test "$sdk_dir"; then
            (( i-- ))
        fi
    done

    ## Report when all tests in run_list are run
    if [ "$i" -eq "$count" ]; then
        echo -e "\nAll tests ran successfully"
    else
        echo -e "\nExecuted $i out of $count tests successfully."
        exit 1
    fi
}

main "$@"
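The success accounting in main() can be sketched in isolation: the counter is incremented before each run and decremented on failure, so at the end it equals the total only when every test passed. A minimal, hypothetical stand-in (the string "bad" simulates a failing run_test):

```shell
# Hypothetical sketch of main()'s pass/fail counting pattern.
run_all() {
    count=$#
    i=0
    for sdk in "$@"; do
        i=$((i + 1))              # count the attempt, like (( i++ )) above
        if [ "$sdk" = "bad" ]; then
            i=$((i - 1))          # roll back on failure, like (( i-- )) above
        fi
    done
    echo "$i/$count"
}

run_all good good good   # 3/3 -> "All tests ran successfully" branch
run_all good bad good    # 2/3 -> failure branch, exit 1
```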
@@ -1,15 +0,0 @@
#!/bin/bash -e
#
#

export APT="apt --quiet --yes"

# remove all packages listed in remove-packages.list
xargs --arg-file="${MINT_ROOT_DIR}/remove-packages.list" apt --quiet --yes purge
${APT} autoremove

# remove unwanted files
rm -fr "$GOROOT" "$GOPATH/src" /var/lib/apt/lists/*

# flush to disk
sync
@@ -1,41 +0,0 @@
#!/bin/bash -e
#
#

export APT="apt --quiet --yes"
export WGET="wget --quiet --no-check-certificate"

# install nodejs source list
if ! $WGET --output-document=- https://deb.nodesource.com/setup_14.x | bash -; then
    echo "unable to set nodejs repository"
    exit 1
fi

$APT install apt-transport-https

if ! $WGET --output-document=packages-microsoft-prod.deb https://packages.microsoft.com/config/ubuntu/18.04/packages-microsoft-prod.deb; then
    echo "unable to download dotnet packages"
    exit 1
fi

dpkg -i packages-microsoft-prod.deb
rm -f packages-microsoft-prod.deb

$APT update
$APT install gnupg ca-certificates

# download and install golang
GO_VERSION="1.16"
GO_INSTALL_PATH="/usr/local"
download_url="https://storage.googleapis.com/golang/go${GO_VERSION}.linux-amd64.tar.gz"
if ! $WGET --output-document=- "$download_url" | tar -C "${GO_INSTALL_PATH}" -zxf -; then
    echo "unable to install go$GO_VERSION"
    exit 1
fi

xargs --arg-file="${MINT_ROOT_DIR}/install-packages.list" apt --quiet --yes install

# set python 3.6 as default
update-alternatives --install /usr/bin/python python /usr/bin/python3.6 1

sync
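The `xargs --arg-file` pattern used for both install-packages.list and remove-packages.list can be tried safely with `echo` standing in for apt (the temp file below is a hypothetical stand-in for the list file): every whitespace-separated token in the file becomes an argument to the command.

```shell
# Hypothetical demo of the xargs --arg-file pattern, with echo in place of apt.
list_file=$(mktemp)
printf 'wget\ngit\npython3-pip\n' > "$list_file"
xargs --arg-file="$list_file" echo install   # prints: install wget git python3-pip
rm -f "$list_file"
```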
@@ -1,20 +0,0 @@
#!/bin/bash -e
#
#

export MINT_ROOT_DIR=${MINT_ROOT_DIR:-/mint}
export MINT_RUN_CORE_DIR="$MINT_ROOT_DIR/run/core"
export MINT_RUN_BUILD_DIR="$MINT_ROOT_DIR/build"
export MINT_RUN_SECURITY_DIR="$MINT_ROOT_DIR/run/security"
export WGET="wget --quiet --no-check-certificate"

"${MINT_ROOT_DIR}"/create-data-files.sh
"${MINT_ROOT_DIR}"/preinstall.sh

# install mint app packages
for pkg in "$MINT_ROOT_DIR/build"/*/install.sh; do
    echo "Running $pkg"
    $pkg
done

"${MINT_ROOT_DIR}"/postinstall.sh
@@ -1,8 +0,0 @@
wget
git
python3-pip
ruby-dev
ruby-bundler
openjdk-8-jdk
ant
dotnet
@@ -1,8 +0,0 @@
module mint.minio.io/aws-sdk-go

go 1.14

require (
	github.com/aws/aws-sdk-go v1.34.10
	github.com/sirupsen/logrus v1.6.0
)
@@ -1,31 +0,0 @@
github.com/aws/aws-sdk-go v1.34.10 h1:VU78gcf/3wA4HNEDCHidK738l7K0Bals4SJnfnvXOtY=
github.com/aws/aws-sdk-go v1.34.10/go.mod h1:5zCpMtNQVjRREroY7sYe8lOMRSxkhG6MZveU8YkpAk0=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/go-sql-driver/mysql v1.5.0/go.mod h1:DCzpHaOWr8IXmIStZouvnhqoel9Qv2LBy8hT2VhHyBg=
github.com/jmespath/go-jmespath v0.3.0 h1:OS12ieG61fsCg5+qLJ+SsW9NicxNkg3b25OyT2yCeUc=
github.com/jmespath/go-jmespath v0.3.0/go.mod h1:9QtRXoHjLGCJ5IBSaohpXITPlowMeeYCZ7fLUTSywik=
github.com/konsorten/go-windows-terminal-sequences v1.0.3 h1:CE8S1cTafDpPvMhIxNJKvHsGVBgn1xWYf1NbHQhywc8=
github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/sirupsen/logrus v1.6.0 h1:UBcNElsrwanuuMsnGSlYmtmgbb23qDR5dG+6X6Oo89I=
github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrfsX/uA88=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.5.1 h1:nOGnQDM7FYENwehXlg/kFVnos3rEvtKTjRvOWSzb6H4=
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/net v0.0.0-20200202094626-16171245cfb2 h1:CCH4IOTTfewWjGOlSp+zGcjutRKlBEZQ6wTn8ozI/nI=
golang.org/x/net v0.0.0-20200202094626-16171245cfb2/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190422165155-953cdadca894 h1:Cz4ceDQGXuKRnVBDTS23GTn/pU5OE2C0WrNTOYK1Uuc=
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.3.0 h1:g61tztE5qeGQ89tm6NTjjM9VPIm088od1l6aSorWRWg=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
(File diff suppressed because it is too large)

@@ -1,15 +0,0 @@
#!/bin/bash
#
#

# handle command line arguments
if [ $# -ne 2 ]; then
    echo "usage: run.sh <OUTPUT-LOG-FILE> <ERROR-LOG-FILE>"
    exit 1
fi

output_log_file="$1"
error_log_file="$2"

# run tests
/mint/run/core/aws-sdk-go/aws-sdk-go 1>>"$output_log_file" 2>"$error_log_file"
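Every run.sh here shares the same redirection convention: stdout is appended to the output log (`1>>`), while stderr truncates and overwrites the error log (`2>`), so the error log only ever holds the most recent failure. A standalone sketch with hypothetical temp files:

```shell
# Hypothetical demo of the run.sh log redirection convention.
out_log=$(mktemp)
err_log=$(mktemp)
{ echo "test result"; echo "diagnostic" >&2; } 1>>"$out_log" 2>"$err_log"
cat "$out_log"   # test result
cat "$err_log"   # diagnostic
rm -f "$out_log" "$err_log"
```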
@@ -1,17 +0,0 @@
#!/bin/bash
#
#

# handle command line arguments
if [ $# -ne 2 ]; then
    echo "usage: run.sh <OUTPUT-LOG-FILE> <ERROR-LOG-FILE>"
    exit 1
fi

output_log_file="$1"
error_log_file="$2"

# run tests
cd /mint/run/core/aws-sdk-java/ || exit 1

java -jar FunctionalTests.jar 1>>"$output_log_file" 2>"$error_log_file"
@@ -1,19 +0,0 @@
## `aws-sdk-php` tests
This directory serves as the location for Mint tests using `aws-sdk-php`. The top-level `mint.sh` calls `run.sh` to execute the tests.

## Adding new tests
New tests are added to `quick-tests.php` as new functions.

## Running tests manually
- Set the environment variables `MINT_DATA_DIR`, `MINT_MODE`, `SERVER_ENDPOINT`, `ACCESS_KEY`, `SECRET_KEY`, `SERVER_REGION` and `ENABLE_HTTPS`
- Call `run.sh` with an output log file and an error log file, for example:
```bash
export MINT_DATA_DIR=~/my-mint-dir
export MINT_MODE=core
export SERVER_ENDPOINT="play.minio.io:9000"
export ACCESS_KEY="Q3AM3UQ867SPQQA43P2F"
export SECRET_KEY="zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG"
export ENABLE_HTTPS=1
export SERVER_REGION=us-east-1
./run.sh /tmp/output.log /tmp/error.log
```
@@ -1,6 +0,0 @@
{
    "require": {
        "aws/aws-sdk-php": "^3.30",
        "guzzlehttp/psr7": "^1.4"
    }
}
(File diff suppressed because it is too large)

@@ -1,15 +0,0 @@
#!/bin/bash
#
#

# handle command line arguments
if [ $# -ne 2 ]; then
    echo "usage: run.sh <OUTPUT-LOG-FILE> <ERROR-LOG-FILE>"
    exit 1
fi

output_log_file="$1"
error_log_file="$2"

# run tests
php ./quick-tests.php 1>>"$output_log_file" 2>"$error_log_file"
@@ -1,19 +0,0 @@
## `aws-sdk-ruby` tests
This directory serves as the location for Mint tests using `aws-sdk-ruby`. The top-level `mint.sh` calls `run.sh` to execute the tests.

## Adding new tests
New tests are added to `aws-stub-test.rb` as new functions.

## Running tests manually
- Set the environment variables `MINT_DATA_DIR`, `MINT_MODE`, `SERVER_ENDPOINT`, `ACCESS_KEY`, `SECRET_KEY`, `SERVER_REGION` and `ENABLE_HTTPS`
- Call `run.sh` with an output log file and an error log file, for example:
```bash
export MINT_DATA_DIR=~/my-mint-dir
export MINT_MODE=core
export SERVER_ENDPOINT="play.minio.io:9000"
export ACCESS_KEY="Q3AM3UQ867SPQQA43P2F"
export SECRET_KEY="zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG"
export ENABLE_HTTPS=1
export SERVER_REGION=us-east-1
./run.sh /tmp/output.log /tmp/error.log
```
@@ -1,851 +0,0 @@
#!/usr/bin/env ruby
#
#

require 'aws-sdk'
require 'securerandom'
require 'net/http'
require 'multipart_body'

# For aws-sdk ruby tests to run, setting the following
# environment variables is mandatory.
# SERVER_ENDPOINT: <ip:port> address of the minio server tests will run against
# ACCESS_KEY: access key for the minio server
# SECRET_KEY: secret key for the minio server
# SERVER_REGION: region the minio server is set up to run in
# ENABLE_HTTPS: (1|0) turn on/off to specify https or
#               http services minio server is running on
# MINT_DATA_DIR: data directory where test data files are stored

class AwsSdkRubyTest
  # Set variables necessary to create an s3 client instance.
  # Get them from the environment variables

  # Region information, eg. "us-east-1"
  region = ENV['SERVER_REGION'] ||= 'SERVER_REGION is not set'
  access_key_id = ENV['ACCESS_KEY'] ||= 'ACCESS_KEY is not set'
  secret_access_key = ENV['SECRET_KEY'] ||= 'SECRET_KEY is not set'
  enable_https = ENV['ENABLE_HTTPS']
  # Minio server, eg. "play.minio.io:9000"
  end_point = ENV['SERVER_ENDPOINT'] ||= 'SERVER_ENDPOINT is not set'
  endpoint = enable_https == '1' ? 'https://' + end_point : 'http://' + end_point

  # Create s3 resource instance, "s3"
  @@s3 = Aws::S3::Resource.new(
    region: region,
    endpoint: endpoint,
    access_key_id: access_key_id,
    secret_access_key: secret_access_key,
    force_path_style: true)

  def initialize_log_output(meth, alert = nil)
    # Initialize and return log content in log_output hash table

    # Collect args in args_arr
    args_arr = method(meth).parameters.flatten.map(&:to_s)
                           .reject { |x| x == 'req' || x == 'opt' }
    # Create and return log output content
    { name: 'aws-sdk-ruby',
      function: "#{meth}(#{args_arr.join(',')})", # method name and arguments
      args: args_arr, # array of arg names; replaced with arg/value pairs inside the caller method
      duration: 0, # test runtime duration in milliseconds
      alert: alert,
      message: nil,
      error: nil }
  end

  def random_bucket_name
    'aws-sdk-ruby-bucket-' + SecureRandom.hex(6)
  end

  def calculate_duration(t2, t1)
    # Durations are in milliseconds, with a precision of 2 decimal places
    ((t2 - t1) * 1000).round(2)
  end

  def print_log(log_output, start_time)
    # Calculate duration in milliseconds
    log_output[:duration] = calculate_duration(Time.now, start_time)
    # Get rid of the log_output fields if nil
    puts log_output.delete_if { |_k, value| value.nil? }.to_json
    # Exit at the first failure
    exit 1 if log_output[:status] == 'FAIL'
  end

  def cleanUp(buckets, log_output)
    # Removes objects and bucket if bucket exists
    bucket_name = ''
    buckets.each do |b|
      bucket_name = b
      if bucketExistsWrapper(b, log_output)
        removeObjectsWrapper(b, log_output)
        removeBucketWrapper(b, log_output)
      end
    end
  rescue => e
    raise "Failed to clean-up bucket '#{bucket_name}', #{e}"
  end

  #
  # API commands/methods
  #
  def makeBucket(bucket_name)
    # Creates a bucket, "bucket_name",
    # on S3 client, "s3".
    # Returns the bucket if it already exists
    @@s3.bucket(bucket_name).exists? ? @@s3.bucket(bucket_name) : @@s3.create_bucket(bucket: bucket_name)
  rescue => e
    raise e
  end

  def makeBucketWrapper(bucket_name, log_output)
    makeBucket(bucket_name)
  rescue => e
    log_output[:function] = "makeBucket(bucket_name)"
    log_output[:args] = {'bucket_name': bucket_name}
    raise e
  end

  def removeBucket(bucket_name)
    # Deletes/removes bucket, "bucket_name", on S3 client, "s3"
    @@s3.bucket(bucket_name).delete
  rescue => e
    raise e
  end

  def removeBucketWrapper(bucket_name, log_output)
    removeBucket(bucket_name)
  rescue => e
    log_output[:function] = "removeBucket(bucket_name)"
    log_output[:args] = {'bucket_name': bucket_name}
    raise e
  end

  def putObject(bucket_name, file)
    # Uploads "file" (full path) to bucket, "bucket_name",
    # on S3 client, "s3"
    file_name = File.basename(file)
    @@s3.bucket(bucket_name).object(file_name).upload_file(file)
  rescue => e
    raise e
  end

  def putObjectWrapper(bucket_name, file, log_output)
    putObject(bucket_name, file)
  rescue => e
    log_output[:function] = "putObject(bucket_name, file)"
    log_output[:args] = {'bucket_name': bucket_name,
                         'file': file}
    raise e
  end

  def getObject(bucket_name, file, destination)
    # Gets/downloads file, "file",
    # from bucket, "bucket_name", of S3 client, "s3"
    file_name = File.basename(file)
    dest = File.join(destination, file_name)
    @@s3.bucket(bucket_name).object(file_name).get(response_target: dest)
  rescue => e
    raise e
  end

  def getObjectWrapper(bucket_name, file, destination, log_output)
    getObject(bucket_name, file, destination)
  rescue => e
    log_output[:function] = "getObject(bucket_name, file)"
    log_output[:args] = {'bucket_name': bucket_name,
                         'file': file,
                         'destination': destination}
    raise e
  end

  def copyObject(source_bucket_name, target_bucket_name, source_file_name, target_file_name = '')
    # Copies file, "source_file_name", from source bucket,
    # "source_bucket_name", to target bucket,
    # "target_bucket_name", on S3 client, "s3"
    target_file_name = source_file_name if target_file_name.empty?
    source = @@s3.bucket(source_bucket_name)
    target = @@s3.bucket(target_bucket_name)
    source_obj = source.object(source_file_name)
    target_obj = target.object(target_file_name)
    source_obj.copy_to(target_obj)
  rescue => e
    raise e
  end

  def copyObjectWrapper(source_bucket_name, target_bucket_name, source_file_name, target_file_name = '', log_output)
    copyObject(source_bucket_name, target_bucket_name, source_file_name, target_file_name)
  rescue => e
    log_output[:function] = "copyObject(source_bucket_name, target_bucket_name, source_file_name, target_file_name = '')"
    log_output[:args] = {'source_bucket_name': source_bucket_name,
                         'target_bucket_name': target_bucket_name,
                         'source_file_name': source_file_name,
                         'target_file_name': target_file_name}
    raise e
  end

  def removeObject(bucket_name, file)
    # Deletes file in bucket,
    # "bucket_name", on S3 client, "s3".
    # If the file does not exist,
    # it quietly returns without any error message
    @@s3.bucket(bucket_name).object(file).delete
  rescue => e
    raise e
  end

  def removeObjectWrapper(bucket_name, file_name, log_output)
    removeObject(bucket_name, file_name)
  rescue => e
    log_output[:function] = "removeObject(bucket_name, file_name)"
    log_output[:args] = {'bucket_name': bucket_name,
                         'file_name': file_name}
    raise e
  end

  def removeObjects(bucket_name)
    # Deletes all files in bucket, "bucket_name",
    # on S3 client, "s3"
    file_name = ''
    @@s3.bucket(bucket_name).objects.each do |obj|
      file_name = obj.key
      obj.delete
    end
  rescue => e
    raise "File name: '#{file_name}', #{e}"
  end

  def removeObjectsWrapper(bucket_name, log_output)
    removeObjects(bucket_name)
  rescue => e
    log_output[:function] = 'removeObjects(bucket_name)'
    log_output[:args] = {'bucket_name': bucket_name}
    raise e
  end

  def listBuckets
    # Returns an array of bucket names on S3 client, "s3"
    bucket_name_list = []
    @@s3.buckets.each do |b|
      bucket_name_list.push(b.name)
    end
    return bucket_name_list
  rescue => e
    raise e
  end

  def listBucketsWrapper(log_output)
    listBuckets
  rescue => e
    log_output[:function] = 'listBuckets'
    log_output[:args] = {}
    raise e
  end

  def listObjects(bucket_name)
    # Returns an array of object/file names
    # in bucket, "bucket_name", on S3 client, "s3"
    object_list = []
    @@s3.bucket(bucket_name).objects.each do |obj|
      object_list.push(obj.key)
    end
    return object_list
  rescue => e
    raise e
  end

  def listObjectsWrapper(bucket_name, log_output)
    listObjects(bucket_name)
  rescue => e
    log_output[:function] = 'listObjects(bucket_name)'
    log_output[:args] = {'bucket_name': bucket_name}
    raise e
  end

  def statObject(bucket_name, file_name)
    return @@s3.bucket(bucket_name).object(file_name).exists?
  rescue => e
    raise e
  end

  def statObjectWrapper(bucket_name, file_name, log_output)
    statObject(bucket_name, file_name)
  rescue => e
    log_output[:function] = 'statObject(bucket_name, file_name)'
    log_output[:args] = {'bucket_name': bucket_name,
                         'file_name': file_name}
    raise e
  end

  def bucketExists?(bucket_name)
    # Returns true if bucket, "bucket_name", exists,
    # false otherwise
    return @@s3.bucket(bucket_name).exists?
  rescue => e
    raise e
  end

  def bucketExistsWrapper(bucket_name, log_output)
    bucketExists?(bucket_name)
  rescue => e
    log_output[:function] = 'bucketExists?(bucket_name)'
    log_output[:args] = {'bucket_name': bucket_name}
    raise e
  end

  def presignedGet(bucket_name, file_name)
    # Returns download/get url
    obj = @@s3.bucket(bucket_name).object(file_name)
    return obj.presigned_url(:get, expires_in: 600)
  rescue => e
    raise e
  end

  def presignedGetWrapper(bucket_name, file_name, log_output)
    presignedGet(bucket_name, file_name)
  rescue => e
    log_output[:function] = 'presignedGet(bucket_name, file_name)'
    log_output[:args] = {'bucket_name': bucket_name,
                         'file_name': file_name}
    raise e
  end

  def presignedPut(bucket_name, file_name)
    # Returns put url
    obj = @@s3.bucket(bucket_name).object(file_name)
    return obj.presigned_url(:put, expires_in: 600)
  rescue => e
    raise e
  end

  def presignedPutWrapper(bucket_name, file_name, log_output)
    presignedPut(bucket_name, file_name)
  rescue => e
    log_output[:function] = 'presignedPut(bucket_name, file_name)'
    log_output[:args] = {'bucket_name': bucket_name,
                         'file_name': file_name}
    raise e
  end

  def presignedPost(bucket_name, file_name, expires_in_sec, max_byte_size)
    # Returns upload/post url
    obj = @@s3.bucket(bucket_name).object(file_name)
    return obj.presigned_post(expires: Time.now + expires_in_sec,
                              content_length_range: 1..max_byte_size)
  rescue => e
    raise e
  end

  def presignedPostWrapper(bucket_name, file_name, expires_in_sec, max_byte_size, log_output)
    presignedPost(bucket_name, file_name, expires_in_sec, max_byte_size)
  rescue => e
    log_output[:function] = 'presignedPost(bucket_name, file_name, expires_in_sec, max_byte_size)'
    log_output[:args] = {'bucket_name': bucket_name,
                         'file_name': file_name,
                         'expires_in_sec': expires_in_sec,
                         'max_byte_size': max_byte_size}
    raise e
  end

  # To be addressed. S3 API 'get_bucket_policy' does not work!
  # def getBucketPolicy(bucket_name)
  #   # Returns bucket policy
  #   return @@s3.bucket(bucket_name).get_bucket_policy
  # rescue => e
  #   raise e
  # end

  #
  # Test case methods
  #
  def listBucketsTest
    # Tests listBuckets api command by creating
    # new buckets from bucket_name_list

    # get 2 different random bucket names and create a list
    bucket_name_list = [random_bucket_name, random_bucket_name]
    # Initialize hash table, 'log_output'
    log_output = initialize_log_output('listBuckets')
    # Prepare arg/value hash table and set it in log_output
    arg_value_hash = {}
    log_output[:args].each { |x| arg_value_hash[:"#{x}"] = eval x.to_s }
    log_output[:args] = arg_value_hash

    begin
      start_time = Time.now
      prev_total_buckets = listBucketsWrapper(log_output).length
      new_buckets = bucket_name_list.length
      bucket_name_list.each do |b|
        makeBucketWrapper(b, log_output)
      end
      # Re-list after creation; the previous code compared a precomputed
      # sum against itself, which could never fail
      new_total_buckets = listBucketsWrapper(log_output).length
      if new_total_buckets >= prev_total_buckets + new_buckets
        log_output[:status] = 'PASS'
      else
        log_output[:error] = 'Could not find expected number of buckets'
        log_output[:status] = 'FAIL'
      end
      cleanUp(bucket_name_list, log_output)
    rescue => log_output[:error]
      log_output[:status] = 'FAIL'
    end

    print_log(log_output, start_time)
  end

  def makeBucketTest
    # Tests makeBucket api command.

    # get random bucket name
    bucket_name = random_bucket_name
    # Initialize hash table, 'log_output'
    log_output = initialize_log_output('makeBucket')
    # Prepare arg/value hash table and set it in log_output
    arg_value_hash = {}
    log_output[:args].each { |x| arg_value_hash[:"#{x}"] = eval x.to_s }
    log_output[:args] = arg_value_hash

    begin
      start_time = Time.now
      makeBucketWrapper(bucket_name, log_output)

      if bucketExistsWrapper(bucket_name, log_output)
        log_output[:status] = 'PASS'
      else
        log_output[:error] = 'Bucket expected to be created does not exist'
        log_output[:status] = 'FAIL'
      end
      cleanUp([bucket_name], log_output)
    rescue => log_output[:error]
      log_output[:status] = 'FAIL'
    end

    print_log(log_output, start_time)
  end

  def bucketExistsNegativeTest
    # Tests bucketExists api command.

    # get random bucket name
    bucket_name = random_bucket_name
    # Initialize hash table, 'log_output'
    log_output = initialize_log_output('bucketExists?')
    # Prepare arg/value hash table and set it in log_output
    arg_value_hash = {}
    log_output[:args].each { |x| arg_value_hash[:"#{x}"] = eval x.to_s }
    log_output[:args] = arg_value_hash

    begin
      start_time = Time.now
      if !bucketExistsWrapper(bucket_name, log_output)
        log_output[:status] = 'PASS'
      else
        log_output[:error] = "Failed to return 'false' for a non-existing bucket"
        log_output[:status] = 'FAIL'
      end
      cleanUp([bucket_name], log_output)
    rescue => log_output[:error]
      log_output[:status] = 'FAIL'
    end

    print_log(log_output, start_time)
  end

  def removeBucketTest
    # Tests removeBucket api command.

    # get a random bucket name
    bucket_name = random_bucket_name
    # Initialize hash table, 'log_output'
    log_output = initialize_log_output('removeBucket')
    # Prepare arg/value hash table and set it in log_output
    arg_value_hash = {}
    log_output[:args].each { |x| arg_value_hash[:"#{x}"] = eval x.to_s }
    log_output[:args] = arg_value_hash

    begin
      start_time = Time.now
      makeBucketWrapper(bucket_name, log_output)
      removeBucketWrapper(bucket_name, log_output)
      if !bucketExistsWrapper(bucket_name, log_output)
        log_output[:status] = 'PASS'
      else
        log_output[:error] = 'Bucket expected to be removed still exists'
        log_output[:status] = 'FAIL'
      end
      cleanUp([bucket_name], log_output)
    rescue => log_output[:error]
      log_output[:status] = 'FAIL'
    end

    print_log(log_output, start_time)
  end

  def putObjectTest(file)
    # Tests putObject api command by uploading a file

    # get random bucket name
    bucket_name = random_bucket_name
    # Initialize hash table, 'log_output'
    log_output = initialize_log_output('putObject')
    # Prepare arg/value hash table and set it in log_output
    arg_value_hash = {}
    log_output[:args].each { |x| arg_value_hash[:"#{x}"] = eval x.to_s }
    log_output[:args] = arg_value_hash

    begin
      start_time = Time.now
      makeBucketWrapper(bucket_name, log_output)
      putObjectWrapper(bucket_name, file, log_output)
      if statObjectWrapper(bucket_name, File.basename(file), log_output)
        log_output[:status] = 'PASS'
      else
        log_output[:error] = "Status for the created object returned 'false'"
        log_output[:status] = 'FAIL'
      end
      cleanUp([bucket_name], log_output)
    rescue => log_output[:error]
      log_output[:status] = 'FAIL'
    end

    print_log(log_output, start_time)
  end

  def removeObjectTest(file)
    # Tests removeObject api command by uploading and removing a file

    # get random bucket name
    bucket_name = random_bucket_name
    # Initialize hash table, 'log_output'
    log_output = initialize_log_output('removeObject')
    # Prepare arg/value hash table and set it in log_output
    arg_value_hash = {}
    log_output[:args].each { |x| arg_value_hash[:"#{x}"] = eval x.to_s }
    log_output[:args] = arg_value_hash

    begin
      start_time = Time.now
      makeBucketWrapper(bucket_name, log_output)
      putObjectWrapper(bucket_name, file, log_output)
      removeObjectWrapper(bucket_name, File.basename(file), log_output)
      if !statObjectWrapper(bucket_name, File.basename(file), log_output)
        log_output[:status] = 'PASS'
      else
        log_output[:error] = "Status for the removed object returned 'true'"
        log_output[:status] = 'FAIL'
      end
      cleanUp([bucket_name], log_output)
    rescue => log_output[:error]
      log_output[:status] = 'FAIL'
    end

    print_log(log_output, start_time)
  end

  def getObjectTest(file, destination)
    # Tests getObject api command

    # get random bucket name
    bucket_name = random_bucket_name
    # Initialize hash table, 'log_output'
    log_output = initialize_log_output('getObject')
    # Prepare arg/value hash table and set it in log_output
    arg_value_hash = {}
    log_output[:args].each { |x| arg_value_hash[:"#{x}"] = eval x.to_s }
    log_output[:args] = arg_value_hash

    begin
      start_time = Time.now
      makeBucketWrapper(bucket_name, log_output)
      putObjectWrapper(bucket_name, file, log_output)
      getObjectWrapper(bucket_name, file, destination, log_output)
      if system("ls -l #{destination} > /dev/null")
        log_output[:status] = 'PASS'
      else
        log_output[:error] = "Downloaded object does not exist at #{destination}"
        log_output[:status] = 'FAIL'
      end
      cleanUp([bucket_name], log_output)
    rescue => log_output[:error]
      log_output[:status] = 'FAIL'
    end

    print_log(log_output, start_time)
  end

  def listObjectsTest(file_list)
    # Tests listObjects api command

    # get random bucket name
    bucket_name = random_bucket_name
    # Initialize hash table, 'log_output'
|
|
||||||
log_output = initialize_log_output('listObjects')
|
|
||||||
# Prepare arg/value hash table and set it in log_output
|
|
||||||
arg_value_hash = {}
|
|
||||||
log_output[:args].each { |x| arg_value_hash[:"#{x}"] = eval x.to_s }
|
|
||||||
log_output[:args] = arg_value_hash
|
|
||||||
|
|
||||||
begin
|
|
||||||
start_time = Time.now
|
|
||||||
makeBucketWrapper(bucket_name, log_output)
|
|
||||||
# Put all objects into the bucket
|
|
||||||
file_list.each do |f|
|
|
||||||
putObjectWrapper(bucket_name, f, log_output)
|
|
||||||
end
|
|
||||||
# Total number of files uploaded
|
|
||||||
expected_no = file_list.length
|
|
||||||
# Actual number is what api returns
|
|
||||||
actual_no = listObjectsWrapper(bucket_name, log_output).length
|
|
||||||
# Compare expected and actual values
|
|
||||||
if expected_no == actual_no
|
|
||||||
log_output[:status] = 'PASS'
|
|
||||||
else
|
|
||||||
log_output[:error] = 'Expected and actual number of listed files/objects do not match!'
|
|
||||||
log_output[:status] = 'FAIL'
|
|
||||||
end
|
|
||||||
cleanUp([bucket_name], log_output)
|
|
||||||
rescue => log_output[:error]
|
|
||||||
log_output[:status] = 'FAIL'
|
|
||||||
end
|
|
||||||
|
|
||||||
print_log(log_output, start_time)
|
|
||||||
end
|
|
||||||
|
|
||||||
def copyObjectTest(data_dir, source_file_name, target_file_name = '')
|
|
||||||
# Tests copyObject api command
|
|
||||||
|
|
||||||
# get random bucket names
|
|
||||||
source_bucket_name = random_bucket_name
|
|
||||||
target_bucket_name = random_bucket_name
|
|
||||||
# Initialize hash table, 'log_output'
|
|
||||||
log_output = initialize_log_output('copyObject')
|
|
||||||
# Prepare arg/value hash table and set it in log_output
|
|
||||||
arg_value_hash = {}
|
|
||||||
log_output[:args].each { |x| arg_value_hash[:"#{x}"] = eval x.to_s }
|
|
||||||
log_output[:args] = arg_value_hash
|
|
||||||
|
|
||||||
begin
|
|
||||||
start_time = Time.now
|
|
||||||
target_file_name = source_file_name if target_file_name.empty?
|
|
||||||
makeBucketWrapper(source_bucket_name, log_output)
|
|
||||||
makeBucketWrapper(target_bucket_name, log_output)
|
|
||||||
putObjectWrapper(source_bucket_name,
|
|
||||||
File.join(data_dir, source_file_name), log_output)
|
|
||||||
copyObjectWrapper(source_bucket_name, target_bucket_name,
|
|
||||||
source_file_name, target_file_name, log_output)
|
|
||||||
# Check if copy worked fine
|
|
||||||
if statObjectWrapper(target_bucket_name, target_file_name, log_output)
|
|
||||||
log_output[:status] = 'PASS'
|
|
||||||
else
|
|
||||||
log_output[:error] = 'Copied file could not be found in the expected location'
|
|
||||||
log_output[:status] = 'FAIL'
|
|
||||||
end
|
|
||||||
cleanUp([source_bucket_name, target_bucket_name], log_output)
|
|
||||||
rescue => log_output[:error]
|
|
||||||
log_output[:status] = 'FAIL'
|
|
||||||
end
|
|
||||||
|
|
||||||
print_log(log_output, start_time)
|
|
||||||
end
|
|
||||||
|
|
||||||
def presignedGetObjectTest(data_dir, file_name)
|
|
||||||
# Tests presignedGetObject api command
|
|
||||||
|
|
||||||
# get random bucket name
|
|
||||||
bucket_name = random_bucket_name
|
|
||||||
# Initialize hash table, 'log_output'
|
|
||||||
log_output = initialize_log_output('presignedGet')
|
|
||||||
# Prepare arg/value hash table and set it in log_output
|
|
||||||
arg_value_hash = {}
|
|
||||||
log_output[:args].each { |x| arg_value_hash[:"#{x}"] = eval x.to_s }
|
|
||||||
log_output[:args] = arg_value_hash
|
|
||||||
|
|
||||||
begin
|
|
||||||
start_time = Time.now
|
|
||||||
makeBucketWrapper(bucket_name, log_output)
|
|
||||||
file = File.join(data_dir, file_name)
|
|
||||||
# Get check sum value without the file name
|
|
||||||
cksum_orig = `cksum #{file}`.split[0..1]
|
|
||||||
putObjectWrapper(bucket_name, file, log_output)
|
|
||||||
get_url = presignedGetWrapper(bucket_name, file_name, log_output)
|
|
||||||
# Download the file using the URL
|
|
||||||
# generated by presignedGet api command
|
|
||||||
`wget -O /tmp/#{file_name} '#{get_url}' > /dev/null 2>&1`
|
|
||||||
# Get check sum value for the downloaded file
|
|
||||||
# Split to get rid of the file name
|
|
||||||
cksum_new = `cksum /tmp/#{file_name}`.split[0..1]
|
|
||||||
|
|
||||||
# Check if check sum values for the orig file
|
|
||||||
# and the downloaded file match
|
|
||||||
if cksum_orig == cksum_new
|
|
||||||
log_output[:status] = 'PASS'
|
|
||||||
else
|
|
||||||
log_output[:error] = 'Check sum values do NOT match'
|
|
||||||
log_output[:status] = 'FAIL'
|
|
||||||
end
|
|
||||||
cleanUp([bucket_name], log_output)
|
|
||||||
rescue => log_output[:error]
|
|
||||||
log_output[:status] = 'FAIL'
|
|
||||||
end
|
|
||||||
|
|
||||||
print_log(log_output, start_time)
|
|
||||||
end
|
|
||||||
|
|
||||||
def presignedPutObjectTest(data_dir, file_name)
|
|
||||||
# Tests presignedPutObject api command
|
|
||||||
|
|
||||||
# get random bucket name
|
|
||||||
bucket_name = random_bucket_name
|
|
||||||
# Initialize hash table, 'log_output'
|
|
||||||
log_output = initialize_log_output('presignedPut')
|
|
||||||
# Prepare arg/value hash table and set it in log_output
|
|
||||||
arg_value_hash = {}
|
|
||||||
log_output[:args].each { |x| arg_value_hash[:"#{x}"] = eval x.to_s }
|
|
||||||
log_output[:args] = arg_value_hash
|
|
||||||
|
|
||||||
begin
|
|
||||||
start_time = Time.now
|
|
||||||
makeBucketWrapper(bucket_name, log_output)
|
|
||||||
file = File.join(data_dir, file_name)
|
|
||||||
|
|
||||||
# Get check sum value and
|
|
||||||
# split to get rid of the file name
|
|
||||||
cksum_orig = `cksum #{file}`.split[0..1]
|
|
||||||
|
|
||||||
# Generate presigned Put URL and parse it
|
|
||||||
uri = URI.parse(presignedPutWrapper(bucket_name, file_name, log_output))
|
|
||||||
request = Net::HTTP::Put.new(uri.request_uri, 'x-amz-acl' => 'public-read')
|
|
||||||
request.body = IO.read(File.join(data_dir, file_name))
|
|
||||||
|
|
||||||
http = Net::HTTP.new(uri.host, uri.port)
|
|
||||||
http.use_ssl = true if ENV['ENABLE_HTTPS'] == '1'
|
|
||||||
|
|
||||||
http.request(request)
|
|
||||||
|
|
||||||
if statObjectWrapper(bucket_name, file_name, log_output)
|
|
||||||
getObjectWrapper(bucket_name, file_name, '/tmp', log_output)
|
|
||||||
cksum_new = `cksum /tmp/#{file_name}`.split[0..1]
|
|
||||||
# Check if check sum values of the orig file
|
|
||||||
# and the downloaded file match
|
|
||||||
if cksum_orig == cksum_new
|
|
||||||
log_output[:status] = 'PASS'
|
|
||||||
else
|
|
||||||
log_output[:error] = 'Check sum values do NOT match'
|
|
||||||
log_output[:status] = 'FAIL'
|
|
||||||
end
|
|
||||||
else
|
|
||||||
log_output[:error] = 'Expected to be created object does NOT exist'
|
|
||||||
log_output[:status] = 'FAIL'
|
|
||||||
end
|
|
||||||
cleanUp([bucket_name], log_output)
|
|
||||||
rescue => log_output[:error]
|
|
||||||
log_output[:status] = 'FAIL'
|
|
||||||
end
|
|
||||||
|
|
||||||
print_log(log_output, start_time)
|
|
||||||
end
|
|
||||||
|
|
||||||
def presignedPostObjectTest(data_dir, file_name,
|
|
||||||
expires_in_sec, max_byte_size)
|
|
||||||
# Tests presignedPostObject api command
|
|
||||||
|
|
||||||
# get random bucket name
|
|
||||||
bucket_name = random_bucket_name
|
|
||||||
# Initialize hash table, 'log_output'
|
|
||||||
log_output = initialize_log_output('presignedPost')
|
|
||||||
# Prepare arg/value hash table and set it in log_output
|
|
||||||
arg_value_hash = {}
|
|
||||||
log_output[:args].each { |x| arg_value_hash[:"#{x}"] = eval x.to_s }
|
|
||||||
log_output[:args] = arg_value_hash
|
|
||||||
|
|
||||||
begin
|
|
||||||
start_time = Time.now
|
|
||||||
makeBucketWrapper(bucket_name, log_output)
|
|
||||||
file = File.join(data_dir, file_name)
|
|
||||||
|
|
||||||
# Get check sum value and split it
|
|
||||||
# into parts to get rid of the file name
|
|
||||||
cksum_orig = `cksum #{file}`.split[0..1]
|
|
||||||
# Create the presigned POST url
|
|
||||||
post = presignedPostWrapper(bucket_name, file_name,
|
|
||||||
expires_in_sec, max_byte_size, log_output)
|
|
||||||
|
|
||||||
# Prepare multi parts array for POST command request
|
|
||||||
file_part = Part.new name: 'file',
|
|
||||||
body: IO.read(File.join(data_dir, file_name)),
|
|
||||||
filename: file_name,
|
|
||||||
content_type: 'application/octet-stream'
|
|
||||||
parts = [file_part]
|
|
||||||
# Add POST fields into parts array
|
|
||||||
post.fields.each do |field, value|
|
|
||||||
parts.push(Part.new(field, value))
|
|
||||||
end
|
|
||||||
boundary = "---------------------------#{rand(10_000_000_000_000_000)}"
|
|
||||||
body_parts = MultipartBody.new parts, boundary
|
|
||||||
|
|
||||||
# Parse presigned Post URL
|
|
||||||
uri = URI.parse(post.url)
|
|
||||||
|
|
||||||
# Create the HTTP objects
|
|
||||||
http = Net::HTTP.new(uri.host, uri.port)
|
|
||||||
http.use_ssl = true if ENV['ENABLE_HTTPS'] == '1'
|
|
||||||
request = Net::HTTP::Post.new(uri.request_uri)
|
|
||||||
request.body = body_parts.to_s
|
|
||||||
request.content_type = "multipart/form-data; boundary=#{boundary}"
|
|
||||||
# Send the request
|
|
||||||
log_output[:error] = http.request(request)
|
|
||||||
|
|
||||||
if statObjectWrapper(bucket_name, file_name, log_output)
|
|
||||||
getObjectWrapper(bucket_name, file_name, '/tmp', log_output)
|
|
||||||
cksum_new = `cksum /tmp/#{file_name}`.split[0..1]
|
|
||||||
# Check if check sum values of the orig file
|
|
||||||
# and the downloaded file match
|
|
||||||
if cksum_orig == cksum_new
|
|
||||||
log_output[:status] = 'PASS'
|
|
||||||
# FIXME: HTTP No Content error, status code=204 is returned as error
|
|
||||||
log_output[:error] = nil
|
|
||||||
else
|
|
||||||
log_output[:error] = 'Check sum values do NOT match'
|
|
||||||
log_output[:status] = 'FAIL'
|
|
||||||
end
|
|
||||||
else
|
|
||||||
log_output[:error] = 'Expected to be created object does NOT exist'
|
|
||||||
log_output[:status] = 'FAIL'
|
|
||||||
end
|
|
||||||
cleanUp([bucket_name], log_output)
|
|
||||||
rescue => log_output[:error]
|
|
||||||
log_output[:status] = 'FAIL'
|
|
||||||
end
|
|
||||||
|
|
||||||
print_log(log_output, start_time)
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
# MAIN CODE
|
|
||||||
|
|
||||||
# Create test Class instance and call the tests
|
|
||||||
aws = AwsSdkRubyTest.new
|
|
||||||
file_name1 = 'datafile-1-kB'
|
|
||||||
file_new_name = 'datafile-1-kB-copy'
|
|
||||||
file_name_list = ['datafile-1-kB', 'datafile-1-b', 'datafile-6-MB']
|
|
||||||
# Add data_dir in front of each file name in file_name_list
|
|
||||||
# The location where the bucket and file
|
|
||||||
# objects are going to be created.
|
|
||||||
data_dir = ENV['MINT_DATA_DIR'] ||= 'MINT_DATA_DIR is not set'
|
|
||||||
file_list = file_name_list.map { |f| File.join(data_dir, f) }
|
|
||||||
destination = '/tmp'
|
|
||||||
|
|
||||||
aws.listBucketsTest
|
|
||||||
aws.listObjectsTest(file_list)
|
|
||||||
aws.makeBucketTest
|
|
||||||
aws.bucketExistsNegativeTest
|
|
||||||
aws.removeBucketTest
|
|
||||||
aws.putObjectTest(File.join(data_dir, file_name1))
|
|
||||||
aws.removeObjectTest(File.join(data_dir, file_name1))
|
|
||||||
aws.getObjectTest(File.join(data_dir, file_name1), destination)
|
|
||||||
aws.copyObjectTest(data_dir, file_name1)
|
|
||||||
aws.copyObjectTest(data_dir, file_name1, file_new_name)
|
|
||||||
aws.presignedGetObjectTest(data_dir, file_name1)
|
|
||||||
aws.presignedPutObjectTest(data_dir, file_name1)
|
|
||||||
aws.presignedPostObjectTest(data_dir, file_name1, 60, 3*1024*1024)
|
|
|
@@ -1,16 +0,0 @@
#!/bin/bash
#
#

# handle command line arguments
if [ $# -ne 2 ]; then
    echo "usage: run.sh <OUTPUT-LOG-FILE> <ERROR-LOG-FILE>"
    exit 1
fi

output_log_file="$1"
error_log_file="$2"

# run tests
chmod a+x aws-stub-tests.rb
ruby aws-stub-tests.rb 1>>"$output_log_file" 2>"$error_log_file"
@@ -1,19 +0,0 @@
## `awscli` tests

This directory serves as the location for Mint tests using `awscli`. The top-level `mint.sh` calls `run.sh` to execute the tests.

## Adding new tests

New tests are added to `test.sh` as new functions.

## Running tests manually

- Set the environment variables `MINT_DATA_DIR`, `MINT_MODE`, `SERVER_ENDPOINT`, `ACCESS_KEY`, `SECRET_KEY`, `SERVER_REGION` and `ENABLE_HTTPS`
- Call `run.sh` with an output log file and an error log file, for example:

```bash
export MINT_DATA_DIR=~/my-mint-dir
export MINT_MODE=core
export SERVER_ENDPOINT="play.minio.io:9000"
export ACCESS_KEY="Q3AM3UQ867SPQQA43P2F"
export SECRET_KEY="zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG"
export ENABLE_HTTPS=1
export SERVER_REGION=us-east-1
./run.sh /tmp/output.log /tmp/error.log
```
@@ -1,38 +0,0 @@
#!/bin/bash
#
#

# handle command line arguments
if [ $# -ne 2 ]; then
    echo "usage: run.sh <OUTPUT-LOG-FILE> <ERROR-LOG-FILE>"
    exit 1
fi

output_log_file="$1"
error_log_file="$2"

# configure awscli
aws configure set aws_access_key_id "$ACCESS_KEY"
aws configure set aws_secret_access_key "$SECRET_KEY"
aws configure set default.region "$SERVER_REGION"

# run tests for virtual style if provided
if [ "$ENABLE_VIRTUAL_STYLE" -eq 1 ]; then
    # Setup endpoint scheme
    endpoint="http://$DOMAIN:$SERVER_PORT"
    if [ "$ENABLE_HTTPS" -eq 1 ]; then
        endpoint="https://$DOMAIN:$SERVER_PORT"
    fi
    dnsmasq --address="/$DOMAIN/$SERVER_IP" --user=root
    echo -e "nameserver 127.0.0.1\n$(cat /etc/resolv.conf)" > /etc/resolv.conf
    aws configure set default.s3.addressing_style virtual
    ./test.sh "$endpoint" 1>>"$output_log_file" 2>"$error_log_file"
    aws configure set default.s3.addressing_style path
fi

endpoint="http://$SERVER_ENDPOINT"
if [ "$ENABLE_HTTPS" -eq 1 ]; then
    endpoint="https://$SERVER_ENDPOINT"
fi
# run path style tests
./test.sh "$endpoint" 1>>"$output_log_file" 2>"$error_log_file"
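The dnsmasq step in the awscli run script exists because virtual-style requests carry the bucket name in the hostname, so every bucket subdomain must resolve to the server. A small illustrative sketch of the two addressing styles (bucket, object and domain names below are made up for illustration):

```shell
# Illustrative only: how the same object is addressed in each style.
bucket="mybucket"; object="myobject"; domain="minio.example.net:9000"
echo "path-style:    http://$domain/$bucket/$object"
echo "virtual-style: http://$bucket.$domain/$object"
# dnsmasq --address="/$DOMAIN/$SERVER_IP" wildcards every subdomain of
# $DOMAIN (i.e. every bucket hostname) to the MinIO server's IP.
```

This is why the script flips `default.s3.addressing_style` to `virtual` before the first test run and back to `path` afterwards.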
File diff suppressed because it is too large
@@ -1,8 +0,0 @@
module mint.minio.io/healthcheck

go 1.14

require (
    github.com/dgrijalva/jwt-go v3.2.0+incompatible
    github.com/sirupsen/logrus v1.6.0
)

@@ -1,14 +0,0 @@
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dgrijalva/jwt-go v3.2.0+incompatible h1:7qlOGliEKZXTDg6OTjfoBKDXWrumCAMpl/TFQ4/5kLM=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.3 h1:CE8S1cTafDpPvMhIxNJKvHsGVBgn1xWYf1NbHQhywc8=
github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/sirupsen/logrus v1.6.0 h1:UBcNElsrwanuuMsnGSlYmtmgbb23qDR5dG+6X6Oo89I=
github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrfsX/uA88=
github.com/stretchr/testify v1.2.2 h1:bSDNvY7ZPG5RlJ8otE/7V6gMiyenm9RtJ7IUVIAoJ1w=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
golang.org/x/sys v0.0.0-20190422165155-953cdadca894 h1:Cz4ceDQGXuKRnVBDTS23GTn/pU5OE2C0WrNTOYK1Uuc=
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -1,267 +0,0 @@
// Copyright (c) 2015-2021 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.

package main

import (
    "crypto/tls"
    "encoding/json"
    "fmt"
    "net/http"
    "net/url"
    "os"
    "time"

    jwtgo "github.com/dgrijalva/jwt-go"
    log "github.com/sirupsen/logrus"
)

const (
    pass             = "PASS" // Indicate that a test passed
    fail             = "FAIL" // Indicate that a test failed
    livenessPath     = "/minio/health/live"
    readinessPath    = "/minio/health/ready"
    prometheusPath   = "/minio/prometheus/metrics"
    prometheusPathV2 = "/minio/v2/metrics/cluster"
    timeout          = time.Duration(30 * time.Second)
)

type mintJSONFormatter struct {
}

func (f *mintJSONFormatter) Format(entry *log.Entry) ([]byte, error) {
    data := make(log.Fields, len(entry.Data))
    for k, v := range entry.Data {
        switch v := v.(type) {
        case error:
            // Otherwise errors are ignored by `encoding/json`
            // https://github.com/sirupsen/logrus/issues/137
            data[k] = v.Error()
        default:
            data[k] = v
        }
    }

    serialized, err := json.Marshal(data)
    if err != nil {
        return nil, fmt.Errorf("Failed to marshal fields to JSON, %w", err)
    }
    return append(serialized, '\n'), nil
}

// log successful test runs
func successLogger(function string, args map[string]interface{}, startTime time.Time) *log.Entry {
    // calculate the test case duration
    duration := time.Since(startTime)
    // log with the fields as per mint
    fields := log.Fields{"name": "healthcheck", "function": function, "args": args, "duration": duration.Nanoseconds() / 1000000, "status": pass}
    return log.WithFields(fields)
}

// log failed test runs
func failureLog(function string, args map[string]interface{}, startTime time.Time, alert string, message string, err error) *log.Entry {
    // calculate the test case duration
    duration := time.Since(startTime)
    var fields log.Fields
    // log with the fields as per mint
    if err != nil {
        fields = log.Fields{"name": "healthcheck", "function": function, "args": args,
            "duration": duration.Nanoseconds() / 1000000, "status": fail, "alert": alert, "message": message, "error": err}
    } else {
        fields = log.Fields{"name": "healthcheck", "function": function, "args": args,
            "duration": duration.Nanoseconds() / 1000000, "status": fail, "alert": alert, "message": message}
    }
    return log.WithFields(fields)
}

func testLivenessEndpoint(endpoint string) {
    startTime := time.Now()
    function := "testLivenessEndpoint"

    u, err := url.Parse(fmt.Sprintf("%s%s", endpoint, livenessPath))
    if err != nil {
        // Could not parse URL successfully
        failureLog(function, nil, startTime, "", "URL Parsing for Healthcheck Liveness handler failed", err).Fatal()
    }

    tr := &http.Transport{
        TLSClientConfig: &tls.Config{InsecureSkipVerify: u.Scheme == "https"},
    }
    client := &http.Client{Transport: tr, Timeout: timeout}
    resp, err := client.Get(u.String())
    if err != nil {
        // GET request errored
        failureLog(function, nil, startTime, "", "GET request failed", err).Fatal()
    }
    if resp.StatusCode != http.StatusOK {
        // Status not 200 OK
        failureLog(function, nil, startTime, "", fmt.Sprintf("GET /minio/health/live returned %s", resp.Status), err).Fatal()
    }

    defer resp.Body.Close()
    defer successLogger(function, nil, startTime).Info()
}

func testReadinessEndpoint(endpoint string) {
    startTime := time.Now()
    function := "testReadinessEndpoint"

    u, err := url.Parse(fmt.Sprintf("%s%s", endpoint, readinessPath))
    if err != nil {
        // Could not parse URL successfully
        failureLog(function, nil, startTime, "", "URL Parsing for Healthcheck Readiness handler failed", err).Fatal()
    }

    tr := &http.Transport{
        TLSClientConfig: &tls.Config{InsecureSkipVerify: u.Scheme == "https"},
    }
    client := &http.Client{Transport: tr, Timeout: timeout}
    resp, err := client.Get(u.String())
    if err != nil {
        // GET request errored
        failureLog(function, nil, startTime, "", "GET request to Readiness endpoint failed", err).Fatal()
    }
    if resp.StatusCode != http.StatusOK {
        // Status not 200 OK
        failureLog(function, nil, startTime, "", "GET /minio/health/ready returned non OK status", err).Fatal()
    }

    defer resp.Body.Close()
    defer successLogger(function, nil, startTime).Info()
}

const (
    defaultPrometheusJWTExpiry = 100 * 365 * 24 * time.Hour
)

func testPrometheusEndpoint(endpoint string) {
    startTime := time.Now()
    function := "testPrometheusEndpoint"

    u, err := url.Parse(fmt.Sprintf("%s%s", endpoint, prometheusPath))
    if err != nil {
        // Could not parse URL successfully
        failureLog(function, nil, startTime, "", "URL Parsing for Healthcheck Prometheus handler failed", err).Fatal()
    }

    jwt := jwtgo.NewWithClaims(jwtgo.SigningMethodHS512, jwtgo.StandardClaims{
        ExpiresAt: time.Now().UTC().Add(defaultPrometheusJWTExpiry).Unix(),
        Subject:   os.Getenv("ACCESS_KEY"),
        Issuer:    "prometheus",
    })

    token, err := jwt.SignedString([]byte(os.Getenv("SECRET_KEY")))
    if err != nil {
        failureLog(function, nil, startTime, "", "jwt generation failed", err).Fatal()
    }

    tr := &http.Transport{
        TLSClientConfig: &tls.Config{InsecureSkipVerify: u.Scheme == "https"},
    }
    client := &http.Client{Transport: tr, Timeout: timeout}

    req, err := http.NewRequest(http.MethodGet, u.String(), nil)
    if err != nil {
        failureLog(function, nil, startTime, "", "Initializing GET request to Prometheus endpoint failed", err).Fatal()
    }
    req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", token))

    resp, err := client.Do(req)
    if err != nil {
        // GET request errored
        failureLog(function, nil, startTime, "", "GET request to Prometheus endpoint failed", err).Fatal()
    }

    if resp.StatusCode != http.StatusOK {
        // Status not 200 OK
        failureLog(function, nil, startTime, "", "GET /minio/prometheus/metrics returned non OK status", err).Fatal()
    }

    defer resp.Body.Close()
    defer successLogger(function, nil, startTime).Info()
}

func testPrometheusEndpointV2(endpoint string) {
    startTime := time.Now()
    function := "testPrometheusEndpointV2"

    u, err := url.Parse(fmt.Sprintf("%s%s", endpoint, prometheusPathV2))
    if err != nil {
        // Could not parse URL successfully
        failureLog(function, nil, startTime, "", "URL Parsing for Healthcheck Prometheus handler failed", err).Fatal()
    }

    jwt := jwtgo.NewWithClaims(jwtgo.SigningMethodHS512, jwtgo.StandardClaims{
        ExpiresAt: time.Now().UTC().Add(defaultPrometheusJWTExpiry).Unix(),
        Subject:   os.Getenv("ACCESS_KEY"),
        Issuer:    "prometheus",
    })

    token, err := jwt.SignedString([]byte(os.Getenv("SECRET_KEY")))
    if err != nil {
        failureLog(function, nil, startTime, "", "jwt generation failed", err).Fatal()
    }

    tr := &http.Transport{
        TLSClientConfig: &tls.Config{InsecureSkipVerify: u.Scheme == "https"},
    }
    client := &http.Client{Transport: tr, Timeout: timeout}

    req, err := http.NewRequest(http.MethodGet, u.String(), nil)
    if err != nil {
        failureLog(function, nil, startTime, "", "Initializing GET request to Prometheus endpoint failed", err).Fatal()
    }
    req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", token))

    resp, err := client.Do(req)
    if err != nil {
        // GET request errored
        failureLog(function, nil, startTime, "", "GET request to Prometheus endpoint failed", err).Fatal()
    }

    if resp.StatusCode != http.StatusOK {
        // Status not 200 OK
        failureLog(function, nil, startTime, "", "GET /minio/v2/metrics/cluster returned non OK status", err).Fatal()
    }

    defer resp.Body.Close()
    defer successLogger(function, nil, startTime).Info()
}

func main() {
    endpoint := os.Getenv("SERVER_ENDPOINT")
    secure := os.Getenv("ENABLE_HTTPS")
    if secure == "1" {
        endpoint = "https://" + endpoint
    } else {
        endpoint = "http://" + endpoint
    }

    // Output to stdout instead of the default stderr
    log.SetOutput(os.Stdout)
    // create custom formatter
    mintFormatter := mintJSONFormatter{}
    // set custom formatter
    log.SetFormatter(&mintFormatter)
    // log Info or above -- success cases are Info level, failures are Fatal level
    log.SetLevel(log.InfoLevel)
    // execute tests
    testLivenessEndpoint(endpoint)
    testReadinessEndpoint(endpoint)
    testPrometheusEndpoint(endpoint)
    testPrometheusEndpointV2(endpoint)
}
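The liveness and readiness checks in the healthcheck tool are plain unauthenticated GETs, so they can also be probed by hand. A minimal sketch, assuming a locally reachable server (the `localhost:9000` default is an assumption; override with `SERVER_ENDPOINT` and switch to `https://` when `ENABLE_HTTPS=1`):

```shell
#!/bin/bash
# Manual probe of the same health endpoints the Go tests exercise.
# The endpoint default below is an illustrative assumption.
endpoint="http://${SERVER_ENDPOINT:-localhost:9000}"

for path in /minio/health/live /minio/health/ready; do
    if curl -sf -o /dev/null --max-time 5 "$endpoint$path"; then
        echo "$path: PASS"
    else
        echo "$path: FAIL"
    fi
done
```

The Prometheus endpoints additionally require the JWT bearer token that the Go code constructs from `ACCESS_KEY`/`SECRET_KEY`, so they are not probeable with a bare GET.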
@@ -1,15 +0,0 @@
#!/bin/bash
#
#

# handle command line arguments
if [ $# -ne 2 ]; then
    echo "usage: run.sh <OUTPUT-LOG-FILE> <ERROR-LOG-FILE>"
    exit 1
fi

output_log_file="$1"
error_log_file="$2"

# run tests
/mint/run/core/healthcheck/healthcheck 1>>"$output_log_file" 2>"$error_log_file"
@@ -1,19 +0,0 @@
## `mc` tests

This directory serves as the location for Mint tests using `mc`. The top-level `mint.sh` calls `run.sh` to execute the tests.

## Adding new tests

New tests are added to `test.sh` as new functions.

## Running tests manually

- Set the environment variables `MINT_DATA_DIR`, `MINT_MODE`, `SERVER_ENDPOINT`, `ACCESS_KEY`, `SECRET_KEY`, `SERVER_REGION` and `ENABLE_HTTPS`
- Call `run.sh` with an output log file and an error log file, for example:

```bash
export MINT_DATA_DIR=~/my-mint-dir
export MINT_MODE=core
export SERVER_ENDPOINT="play.minio.io:9000"
export ACCESS_KEY="Q3AM3UQ867SPQQA43P2F"
export SECRET_KEY="zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG"
export ENABLE_HTTPS=1
export SERVER_REGION=us-east-1
./run.sh /tmp/output.log /tmp/error.log
```
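A new `test.sh` function typically times an `mc` invocation and emits one JSON log line in mint's result format. A hypothetical skeleton, assuming nothing about the real `test.sh` helpers (the function name, the commented-out `mc` call, and the exact JSON field set are all illustrative):

```shell
#!/bin/bash
# Hypothetical skeleton of a test.sh-style test function; names are illustrative.
test_example() {
    local function="test_example" status="PASS" error=""
    local start_ns duration
    start_ns=$(date +%s%N)

    # Run the actual mc command(s) here, e.g.:
    #   mc mb "target/bucket-$RANDOM" || { status="FAIL"; error="mb failed"; }
    true || { status="FAIL"; error="command failed"; }

    duration=$(( ($(date +%s%N) - start_ns) / 1000000 ))  # milliseconds
    printf '{"name":"mc","function":"%s","duration":%d,"status":"%s","error":"%s"}\n' \
        "$function" "$duration" "$status" "$error"
}

test_example
```

Keeping one JSON object per line on stdout is what lets the top-level `mint.sh` aggregate pass/fail results across SDK runs.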
@@ -1,14 +0,0 @@
#!/bin/bash
#
#

# handle command line arguments
if [ $# -ne 2 ]; then
    echo "usage: run.sh <OUTPUT-LOG-FILE> <ERROR-LOG-FILE>"
    exit 1
fi

output_log_file="$1"
error_log_file="$2"

./functional-tests.sh 1>>"$output_log_file" 2>"$error_log_file"
@@ -1,13 +0,0 @@
#!/bin/bash
#
#

# handle command line arguments
if [ $# -ne 2 ]; then
    echo "usage: run.sh <OUTPUT-LOG-FILE> <ERROR-LOG-FILE>"
    exit 1
fi

output_log_file="$1"
error_log_file="$2"
/mint/run/core/minio-dotnet/out/Minio.Functional.Tests 1>>"$output_log_file" 2>"$error_log_file"
@ -1,20 +0,0 @@
|
||||||
## `minio-go` tests
|
|
||||||
This directory serves as the location for Mint tests using `minio-go`. Top level `mint.sh` calls `run.sh` to execute tests.
|
|
||||||
|
|
||||||
## Adding new tests
|
|
||||||
New tests are added to the functional tests of minio-go. Please check https://github.com/minio/minio-go
|
|
||||||
|
|
||||||
## Running tests manually
|
|
||||||
- Set environment variables `MINT_DATA_DIR`, `MINT_MODE`, `SERVER_ENDPOINT`, `ACCESS_KEY`, `SECRET_KEY`, `SERVER_REGION`, `ENABLE_HTTPS` and `RUN_ON_FAIL`
|
|
||||||
- Call `run.sh` with an output log file and an error log file. For example:
|
|
||||||
```bash
|
|
||||||
export MINT_DATA_DIR=~/my-mint-dir
|
|
||||||
export MINT_MODE=core
|
|
||||||
export SERVER_ENDPOINT="play.minio.io:9000"
|
|
||||||
export ACCESS_KEY="Q3AM3UQ867SPQQA43P2F"
|
|
||||||
export SECRET_KEY="zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG"
|
|
||||||
export ENABLE_HTTPS=1
|
|
||||||
export SERVER_REGION=us-east-1
|
|
||||||
export RUN_ON_FAIL=1
|
|
||||||
./run.sh /tmp/output.log /tmp/error.log
|
|
||||||
```
|
|
|
@ -1,5 +0,0 @@
|
||||||
module mint.minio.io/minio-go
|
|
||||||
|
|
||||||
go 1.14
|
|
||||||
|
|
||||||
require github.com/minio/minio-go/v7 v7.0.7 // indirect
|
|
|
@ -1,80 +0,0 @@
|
||||||
github.com/cheggaaa/pb v1.0.29/go.mod h1:W40334L7FMC5JKWldsTWbdGjLo0RxUKK73K+TuPxX30=
|
|
||||||
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
|
||||||
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
|
||||||
github.com/dustin/go-humanize v1.0.0 h1:VSnTsYCnlFHaM2/igO1h6X3HA71jcobQuxemgkq4zYo=
|
|
||||||
github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
|
|
||||||
github.com/fatih/color v1.9.0/go.mod h1:eQcE1qtQxscV5RaZvpXrrb8Drkc3/DdQ+uUYCNjL+zU=
|
|
||||||
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
|
|
||||||
github.com/google/uuid v1.1.1 h1:Gkbcsh/GbpXz7lPftLA3P6TYMwjCLYm83jiFQZF/3gY=
|
|
||||||
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
|
|
||||||
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
|
|
||||||
github.com/json-iterator/go v1.1.10 h1:Kz6Cvnvv2wGdaG/V8yMvfkmNiXq9Ya2KUv4rouJJr68=
|
|
||||||
github.com/json-iterator/go v1.1.10/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
|
|
||||||
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
|
|
||||||
github.com/klauspost/cpuid v1.2.3/go.mod h1:Pj4uuM528wm8OyEC2QMXAi2YiTZ96dNQPGgoMS4s3ek=
|
|
||||||
github.com/klauspost/cpuid v1.3.1 h1:5JNjFYYQrZeKRJ0734q51WCEEn2huer72Dc7K+R/b6s=
|
|
||||||
github.com/klauspost/cpuid v1.3.1/go.mod h1:bYW4mA6ZgKPob1/Dlai2LviZJO7KGI3uoWLd42rAQw4=
|
|
||||||
github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
|
|
||||||
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
|
|
||||||
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
|
|
||||||
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
|
|
||||||
github.com/mattn/go-colorable v0.1.4/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE=
|
|
||||||
github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
|
|
||||||
github.com/mattn/go-isatty v0.0.11/go.mod h1:PhnuNfih5lzO57/f3n+odYbM4JtupLOxQOAqxQCu2WE=
|
|
||||||
github.com/mattn/go-runewidth v0.0.4/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU=
|
|
||||||
github.com/minio/md5-simd v1.1.0 h1:QPfiOqlZH+Cj9teu0t9b1nTBfPbyTl16Of5MeuShdK4=
|
|
||||||
github.com/minio/md5-simd v1.1.0/go.mod h1:XpBqgZULrMYD3R+M28PcmP0CkI7PEMzB3U77ZrKZ0Gw=
|
|
||||||
github.com/minio/minio-go/v7 v7.0.7 h1:Qld/xb8C1Pwbu0jU46xAceyn9xXKCMW+3XfNbpmTB70=
|
|
||||||
github.com/minio/minio-go/v7 v7.0.7/go.mod h1:pEZBUa+L2m9oECoIA6IcSK8bv/qggtQVLovjeKK5jYc=
|
|
||||||
github.com/minio/sha256-simd v0.1.1 h1:5QHSlgo3nt5yKOJrC7W8w7X+NFl8cMPZm96iu8kKUJU=
|
|
||||||
github.com/minio/sha256-simd v0.1.1/go.mod h1:B5e1o+1/KgNmWrSQK08Y6Z1Vb5pwIktudl0J58iy0KM=
|
|
||||||
github.com/minio/sio v0.2.1 h1:NjzKiIMSMcHediVQR0AFVx2tp7Wxh9tKPfDI3kH7aHQ=
|
|
||||||
github.com/minio/sio v0.2.1/go.mod h1:8b0yPp2avGThviy/+OCJBI6OMpvxoUuiLvE6F1lebhw=
|
|
||||||
github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y=
|
|
||||||
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
|
|
||||||
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
|
|
||||||
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
|
|
||||||
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
|
|
||||||
github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
|
|
||||||
github.com/modern-go/reflect2 v1.0.1 h1:9f412s+6RmYXLWZSEzVVgPGK7C2PphHj5RJrvfx9AWI=
|
|
||||||
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
|
|
||||||
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
|
|
||||||
github.com/rs/xid v1.2.1 h1:mhH9Nq+C1fY2l1XIpgxIiUOfNpRBYH1kKcr+qfKgjRc=
|
|
||||||
github.com/rs/xid v1.2.1/go.mod h1:+uKXf+4Djp6Md1KODXJxgGQPKngRmWyn10oCKFzNHOQ=
|
|
||||||
github.com/sirupsen/logrus v1.7.0 h1:ShrD1U9pZB12TX0cVy0DtePoCH97K8EtX+mg7ZARUtM=
|
|
||||||
github.com/sirupsen/logrus v1.7.0/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
|
|
||||||
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
|
|
||||||
github.com/smartystreets/goconvey v0.0.0-20190330032615-68dc04aab96a h1:pa8hGb/2YqsZKovtsgrwcDH1RZhVbTKCjLp47XpqCDs=
|
|
||||||
github.com/smartystreets/goconvey v0.0.0-20190330032615-68dc04aab96a/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
|
|
||||||
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
|
|
||||||
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
|
|
||||||
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
|
|
||||||
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
|
|
||||||
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
|
|
||||||
golang.org/x/crypto v0.0.0-20190513172903-22d7a77e9e5f/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
|
|
||||||
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
|
|
||||||
golang.org/x/crypto v0.0.0-20200709230013-948cd5f35899 h1:DZhuSZLsGlFL4CmhA8BcRA0mnthyA/nZ00AqCUo7vHg=
|
|
||||||
golang.org/x/crypto v0.0.0-20200709230013-948cd5f35899/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
|
|
||||||
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
|
|
||||||
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
|
|
||||||
golang.org/x/net v0.0.0-20200707034311-ab3426394381 h1:VXak5I6aEWmAXeQjA+QSZzlgNrpq9mjcfDemuexIKsU=
|
|
||||||
golang.org/x/net v0.0.0-20200707034311-ab3426394381/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
|
|
||||||
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
|
||||||
golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
|
||||||
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
|
||||||
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
|
||||||
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
|
||||||
golang.org/x/sys v0.0.0-20200625212154-ddb9806d33ae h1:Ih9Yo4hSPImZOpfGuA4bR/ORKTAbhZo2AbWNRCnevdo=
|
|
||||||
golang.org/x/sys v0.0.0-20200625212154-ddb9806d33ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
|
||||||
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
|
|
||||||
golang.org/x/text v0.3.3 h1:cokOdA+Jmi5PJGXLlLllQSgYigAEfHXJAERHVMaCc2k=
|
|
||||||
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
|
|
||||||
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
|
||||||
golang.org/x/tools v0.0.0-20190328211700-ab21143f2384/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
|
|
||||||
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
|
||||||
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
|
||||||
gopkg.in/ini.v1 v1.57.0 h1:9unxIsFcTt4I55uWluz+UmL95q4kdJ0buvQ1ZIqVQww=
|
|
||||||
gopkg.in/ini.v1 v1.57.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
|
|
||||||
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
|
||||||
gopkg.in/yaml.v2 v2.2.8 h1:obN1ZagJSUGI0Ek/LBmuj4SNLPfIny3KsKFopxRdj10=
|
|
||||||
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
|
|
@ -1,15 +0,0 @@
|
||||||
#!/bin/bash
|
|
||||||
#
|
|
||||||
#
|
|
||||||
|
|
||||||
# handle command line arguments
|
|
||||||
if [ $# -ne 2 ]; then
|
|
||||||
echo "usage: run.sh <OUTPUT-LOG-FILE> <ERROR-LOG-FILE>"
|
|
||||||
exit 1
|
|
||||||
fi
|
|
||||||
|
|
||||||
output_log_file="$1"
|
|
||||||
error_log_file="$2"
|
|
||||||
|
|
||||||
# run tests
|
|
||||||
/mint/run/core/minio-go/minio-go 1>>"$output_log_file" 2>"$error_log_file"
|
|
|
@ -1,20 +0,0 @@
|
||||||
## `minio-java` tests
|
|
||||||
This directory serves as the location for Mint tests using `minio-java`. Top level `mint.sh` calls `run.sh` to execute tests.
|
|
||||||
|
|
||||||
## Adding new tests
|
|
||||||
New tests are added to the functional tests of minio-java. Please check https://github.com/minio/minio-java
|
|
||||||
|
|
||||||
## Running tests manually
|
|
||||||
- Set environment variables `MINT_DATA_DIR`, `MINT_MODE`, `SERVER_ENDPOINT`, `ACCESS_KEY`, `SECRET_KEY`, `SERVER_REGION`, `ENABLE_HTTPS` and `RUN_ON_FAIL`
|
|
||||||
- Call `run.sh` with an output log file and an error log file. For example:
|
|
||||||
```bash
|
|
||||||
export MINT_DATA_DIR=~/my-mint-dir
|
|
||||||
export MINT_MODE=core
|
|
||||||
export SERVER_ENDPOINT="play.minio.io:9000"
|
|
||||||
export ACCESS_KEY="Q3AM3UQ867SPQQA43P2F"
|
|
||||||
export SECRET_KEY="zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG"
|
|
||||||
export ENABLE_HTTPS=1
|
|
||||||
export SERVER_REGION=us-east-1
|
|
||||||
export RUN_ON_FAIL=1
|
|
||||||
./run.sh /tmp/output.log /tmp/error.log
|
|
||||||
```
|
|
|
@ -1,21 +0,0 @@
|
||||||
#!/bin/bash
|
|
||||||
#
|
|
||||||
#
|
|
||||||
|
|
||||||
# handle command line arguments
|
|
||||||
if [ $# -ne 2 ]; then
|
|
||||||
echo "usage: run.sh <OUTPUT-LOG-FILE> <ERROR-LOG-FILE>"
|
|
||||||
exit 1
|
|
||||||
fi
|
|
||||||
|
|
||||||
output_log_file="$1"
|
|
||||||
error_log_file="$2"
|
|
||||||
|
|
||||||
# run tests
|
|
||||||
endpoint="http://$SERVER_ENDPOINT"
|
|
||||||
if [ "$ENABLE_HTTPS" -eq 1 ]; then
|
|
||||||
endpoint="https://$SERVER_ENDPOINT"
|
|
||||||
fi
|
|
||||||
|
|
||||||
java -Xmx4096m -Xms256m -cp "/mint/run/core/minio-java/*:." FunctionalTest \
|
|
||||||
"$endpoint" "$ACCESS_KEY" "$SECRET_KEY" "$SERVER_REGION" "$RUN_ON_FAIL" 1>>"$output_log_file" 2>"$error_log_file"
|
|
|
@ -1,19 +0,0 @@
|
||||||
## `minio-js` tests
|
|
||||||
This directory serves as the location for Mint tests using `minio-js`. Top level `mint.sh` calls `run.sh` to execute tests.
|
|
||||||
|
|
||||||
## Adding new tests
|
|
||||||
New tests are added to the functional tests of minio-js. Please check https://github.com/minio/minio-js
|
|
||||||
|
|
||||||
## Running tests manually
|
|
||||||
- Set environment variables `MINT_DATA_DIR`, `MINT_MODE`, `SERVER_ENDPOINT`, `ACCESS_KEY`, `SECRET_KEY`, `SERVER_REGION` and `ENABLE_HTTPS`
|
|
||||||
- Call `run.sh` with an output log file and an error log file. For example:
|
|
||||||
```bash
|
|
||||||
export MINT_DATA_DIR=~/my-mint-dir
|
|
||||||
export MINT_MODE=core
|
|
||||||
export SERVER_ENDPOINT="play.minio.io:9000"
|
|
||||||
export ACCESS_KEY="Q3AM3UQ867SPQQA43P2F"
|
|
||||||
export SECRET_KEY="zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG"
|
|
||||||
export ENABLE_HTTPS=1
|
|
||||||
export SERVER_REGION=us-east-1
|
|
||||||
./run.sh /tmp/output.log /tmp/error.log
|
|
||||||
```
|
|
|
@ -1,53 +0,0 @@
|
||||||
var mocha = require('mocha');
|
|
||||||
module.exports = minioreporter;
|
|
||||||
|
|
||||||
function minioreporter(runner) {
|
|
||||||
mocha.reporters.Base.call(this, runner);
|
|
||||||
var self = this;
|
|
||||||
|
|
||||||
runner.on('pass', function (test) {
|
|
||||||
GenerateJsonEntry(test)
|
|
||||||
});
|
|
||||||
|
|
||||||
runner.on('fail', function (test, err) {
|
|
||||||
GenerateJsonEntry(test, err)
|
|
||||||
});
|
|
||||||
|
|
||||||
}
|
|
||||||
|
|
||||||
/**
|
|
||||||
 * Converts a test result into a JSON object and prints it on the console.
|
|
||||||
*
|
|
||||||
* @api private
|
|
||||||
 * @param test
 * @param err
|
|
||||||
*/
|
|
||||||
|
|
||||||
function GenerateJsonEntry (test, err) {
|
|
||||||
var res = test.title.split("_")
|
|
||||||
var jsonEntry = {};
|
|
||||||
|
|
||||||
jsonEntry.name = "minio-js"
|
|
||||||
|
|
||||||
if (res.length > 0 && res[0].length) {
|
|
||||||
jsonEntry.function = res[0]
|
|
||||||
}
|
|
||||||
|
|
||||||
if (res.length > 1 && res[1].length) {
|
|
||||||
jsonEntry.args = res[1]
|
|
||||||
}
|
|
||||||
|
|
||||||
jsonEntry.duration = test.duration
|
|
||||||
|
|
||||||
if (res.length > 2 && res[2].length) {
|
|
||||||
jsonEntry.alert = res[2]
|
|
||||||
}
|
|
||||||
|
|
||||||
if (err != null ) {
|
|
||||||
jsonEntry.status = "FAIL"
|
|
||||||
jsonEntry.error = err.stack.replace(/\n/g, " ").replace(/ +(?= )/g,'')
|
|
||||||
} else {
|
|
||||||
jsonEntry.status = "PASS"
|
|
||||||
}
|
|
||||||
|
|
||||||
process.stdout.write(JSON.stringify(jsonEntry) + "\n")
|
|
||||||
}
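The underscore convention that `GenerateJsonEntry` parses (mocha test titles of the form `function_args_alert`) can be mirrored outside JavaScript. This Python sketch is illustrative only and reproduces the same field mapping:

```python
def parse_test_title(title, err=None):
    """Mirror of minioreporter's GenerateJsonEntry title convention (sketch):
    the mocha test title is split on '_' into function, args, and alert."""
    parts = title.split("_")
    entry = {"name": "minio-js"}
    if len(parts) > 0 and parts[0]:
        entry["function"] = parts[0]
    if len(parts) > 1 and parts[1]:
        entry["args"] = parts[1]
    if len(parts) > 2 and parts[2]:
        entry["alert"] = parts[2]
    entry["status"] = "FAIL" if err is not None else "PASS"
    return entry

print(parse_test_title("makeBucket(bucketName)_bucketName:mybucket_"))
```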
|
|
|
@ -1,49 +0,0 @@
|
||||||
{
|
|
||||||
"name": "bin",
|
|
||||||
"version": "1.0.0",
|
|
||||||
"main": "functional_test.js",
|
|
||||||
|
|
||||||
"keywords": [],
|
|
||||||
"author": "",
|
|
||||||
"license": "ISC",
|
|
||||||
"description": "",
|
|
||||||
"dependencies": {
|
|
||||||
"app-module-path": "*",
|
|
||||||
"async": "*",
|
|
||||||
"block-stream2": "*",
|
|
||||||
"concat-stream": "*",
|
|
||||||
"es6-error": "*",
|
|
||||||
"json-stream": "*",
|
|
||||||
"lodash": "*",
|
|
||||||
"mime-types": "*",
|
|
||||||
"mkdirp": "*",
|
|
||||||
"moment": "*",
|
|
||||||
"source-map-support": "*",
|
|
||||||
"through2": "*",
|
|
||||||
"xml": "*",
|
|
||||||
"xml2js": "*"
|
|
||||||
},
|
|
||||||
"devDependencies": {
|
|
||||||
"browserify": "*",
|
|
||||||
"chai": "*",
|
|
||||||
"gulp": "*",
|
|
||||||
"gulp-babel": "*",
|
|
||||||
"gulp-jscs": "*",
|
|
||||||
"jshint":"2.*",
|
|
||||||
"gulp-jshint": "*",
|
|
||||||
"gulp-mocha": "*",
|
|
||||||
"gulp-notify": "*",
|
|
||||||
"gulp-sourcemaps": "*",
|
|
||||||
"jshint-stylish": "*",
|
|
||||||
"mocha": "*",
|
|
||||||
"mocha-steps": "*",
|
|
||||||
"nock": "*",
|
|
||||||
"rewire": "*",
|
|
||||||
"superagent": "*"
|
|
||||||
},
|
|
||||||
"scripts": {
|
|
||||||
"test": "mocha"
|
|
||||||
}
|
|
||||||
}
|
|
|
@ -1,15 +0,0 @@
|
||||||
#!/bin/bash
|
|
||||||
#
|
|
||||||
#
|
|
||||||
|
|
||||||
# handle command line arguments
|
|
||||||
if [ $# -ne 2 ]; then
|
|
||||||
echo "usage: run.sh <OUTPUT-LOG-FILE> <ERROR-LOG-FILE>"
|
|
||||||
exit 1
|
|
||||||
fi
|
|
||||||
|
|
||||||
output_log_file="$1"
|
|
||||||
error_log_file="$2"
|
|
||||||
|
|
||||||
# run tests
|
|
||||||
./node_modules/mocha/bin/mocha -R minioreporter -b --exit 1>>"$output_log_file" 2>"$error_log_file"
|
|
|
@ -1,19 +0,0 @@
|
||||||
## `minio-py` tests
|
|
||||||
This directory serves as the location for Mint tests using `minio-py`. Top level `mint.sh` calls `run.sh` to execute tests.
|
|
||||||
|
|
||||||
## Adding new tests
|
|
||||||
New tests are added to the functional tests of minio-py. Please check https://github.com/minio/minio-py
|
|
||||||
|
|
||||||
## Running tests manually
|
|
||||||
- Set environment variables `MINT_DATA_DIR`, `MINT_MODE`, `SERVER_ENDPOINT`, `ACCESS_KEY`, `SECRET_KEY`, `SERVER_REGION` and `ENABLE_HTTPS`
|
|
||||||
- Call `run.sh` with an output log file and an error log file. For example:
|
|
||||||
```bash
|
|
||||||
export MINT_DATA_DIR=~/my-mint-dir
|
|
||||||
export MINT_MODE=core
|
|
||||||
export SERVER_ENDPOINT="play.minio.io:9000"
|
|
||||||
export ACCESS_KEY="Q3AM3UQ867SPQQA43P2F"
|
|
||||||
export SECRET_KEY="zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG"
|
|
||||||
export ENABLE_HTTPS=1
|
|
||||||
export SERVER_REGION=us-east-1
|
|
||||||
./run.sh /tmp/output.log /tmp/error.log
|
|
||||||
```
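Every `run.sh` wrapper in this repository has the same shape: append stdout to the output log and write stderr to the error log. As a non-Mint illustration, the shell redirection `cmd 1>>"$output_log_file" 2>"$error_log_file"` corresponds to this Python sketch:

```python
import os
import subprocess
import sys
import tempfile

# Illustration only: Python equivalent of the run.sh redirection
# `cmd 1>>"$output_log_file" 2>"$error_log_file"`.
log_dir = tempfile.mkdtemp()
out_path = os.path.join(log_dir, "output.log")
err_path = os.path.join(log_dir, "error.log")

# mode 'a' mirrors '>>' (append); mode 'w' mirrors '>' (truncate)
with open(out_path, "a") as out, open(err_path, "w") as err:
    subprocess.run([sys.executable, "-c", "print('ran')"],
                   stdout=out, stderr=err, check=True)

with open(out_path) as f:
    print(f.read().strip())  # → ran
```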
|
|
|
@ -1,15 +0,0 @@
|
||||||
#!/bin/bash
|
|
||||||
#
|
|
||||||
#
|
|
||||||
|
|
||||||
# handle command line arguments
|
|
||||||
if [ $# -ne 2 ]; then
|
|
||||||
echo "usage: run.sh <OUTPUT-LOG-FILE> <ERROR-LOG-FILE>"
|
|
||||||
exit 1
|
|
||||||
fi
|
|
||||||
|
|
||||||
output_log_file="$1"
|
|
||||||
error_log_file="$2"
|
|
||||||
|
|
||||||
# run tests
|
|
||||||
python "/mint/run/core/minio-py/tests.py" 1>>"$output_log_file" 2>"$error_log_file"
|
|
|
@ -1,2 +0,0 @@
|
||||||
*~
|
|
||||||
*.log
|
|
|
@ -1,19 +0,0 @@
|
||||||
## `s3cmd` tests
|
|
||||||
This directory serves as the location for Mint tests using `s3cmd`. Top level `mint.sh` calls `run.sh` to execute tests.
|
|
||||||
|
|
||||||
## Adding new tests
|
|
||||||
New tests are added to `test.sh` as new functions.
|
|
||||||
|
|
||||||
## Running tests manually
|
|
||||||
- Set environment variables `MINT_DATA_DIR`, `MINT_MODE`, `SERVER_ENDPOINT`, `ACCESS_KEY`, `SECRET_KEY`, `SERVER_REGION` and `ENABLE_HTTPS`
|
|
||||||
- Call `run.sh` with an output log file and an error log file. For example:
|
|
||||||
```bash
|
|
||||||
export MINT_DATA_DIR=~/my-mint-dir
|
|
||||||
export MINT_MODE=core
|
|
||||||
export SERVER_ENDPOINT="play.minio.io:9000"
|
|
||||||
export ACCESS_KEY="Q3AM3UQ867SPQQA43P2F"
|
|
||||||
export SECRET_KEY="zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG"
|
|
||||||
export ENABLE_HTTPS=1
|
|
||||||
export SERVER_REGION=us-east-1
|
|
||||||
./run.sh /tmp/output.log /tmp/error.log
|
|
||||||
```
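On failure, the `assert()` helper in `test.sh` JSON-encodes the captured error text before embedding it in the one-line result record, so quotes and newlines cannot break the log format. A sketch of that record (pure illustration; `fail_entry` is not part of Mint):

```python
import json

def fail_entry(function, duration_ms, raw_error):
    # json.dumps() plays the role of the inline `python -c` call in test.sh:
    # it escapes quotes and newlines so the record stays one valid JSON line.
    return ('{"name": "s3cmd", "duration": "%d", "function": "%s", '
            '"status": "FAIL", "error": %s}'
            % (duration_ms, function, json.dumps(raw_error)))

line = fail_entry("test_make_bucket", 42, 'ERROR: bucket "x" is invalid\n')
print(json.loads(line)["status"])  # → FAIL
```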
|
|
|
@ -1,15 +0,0 @@
|
||||||
#!/bin/bash
|
|
||||||
#
|
|
||||||
#
|
|
||||||
|
|
||||||
# handle command line arguments
|
|
||||||
if [ $# -ne 2 ]; then
|
|
||||||
echo "usage: run.sh <OUTPUT-LOG-FILE> <ERROR-LOG-FILE>"
|
|
||||||
exit 1
|
|
||||||
fi
|
|
||||||
|
|
||||||
output_log_file="$1"
|
|
||||||
error_log_file="$2"
|
|
||||||
|
|
||||||
# run tests
|
|
||||||
./test.sh 1>>"$output_log_file" 2>"$error_log_file"
|
|
|
@ -1,393 +0,0 @@
|
||||||
#!/bin/bash
|
|
||||||
#
|
|
||||||
#
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
if [ -n "$MINT_MODE" ]; then
|
|
||||||
if [ -z "${MINT_DATA_DIR+x}" ]; then
|
|
||||||
echo "MINT_DATA_DIR not defined"
|
|
||||||
exit 1
|
|
||||||
fi
|
|
||||||
if [ -z "${SERVER_ENDPOINT+x}" ]; then
|
|
||||||
echo "SERVER_ENDPOINT not defined"
|
|
||||||
exit 1
|
|
||||||
fi
|
|
||||||
if [ -z "${ACCESS_KEY+x}" ]; then
|
|
||||||
echo "ACCESS_KEY not defined"
|
|
||||||
exit 1
|
|
||||||
fi
|
|
||||||
if [ -z "${SECRET_KEY+x}" ]; then
|
|
||||||
echo "SECRET_KEY not defined"
|
|
||||||
exit 1
|
|
||||||
fi
|
|
||||||
fi
|
|
||||||
|
|
||||||
if [ -z "${SERVER_ENDPOINT+x}" ]; then
|
|
||||||
SERVER_ENDPOINT="play.minio.io:9000"
|
|
||||||
ACCESS_KEY="Q3AM3UQ867SPQQA43P2F"
|
|
||||||
SECRET_KEY="zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG"
|
|
||||||
ENABLE_HTTPS=1
|
|
||||||
SERVER_REGION="us-east-1"
|
|
||||||
fi
|
|
||||||
|
|
||||||
WORK_DIR="$PWD"
|
|
||||||
DATA_DIR="$MINT_DATA_DIR"
|
|
||||||
if [ -z "$MINT_MODE" ]; then
|
|
||||||
WORK_DIR="$PWD/.run-$RANDOM"
|
|
||||||
DATA_DIR="$WORK_DIR/data"
|
|
||||||
fi
|
|
||||||
|
|
||||||
FILE_1_MB="$DATA_DIR/datafile-1-MB"
|
|
||||||
FILE_65_MB="$DATA_DIR/datafile-65-MB"
|
|
||||||
declare FILE_1_MB_MD5SUM
|
|
||||||
declare FILE_65_MB_MD5SUM
|
|
||||||
|
|
||||||
BUCKET_NAME="s3cmd-test-bucket-$RANDOM"
|
|
||||||
S3CMD=$(command -v s3cmd)
|
|
||||||
declare -a S3CMD_CMD
|
|
||||||
|
|
||||||
function get_md5sum()
|
|
||||||
{
|
|
||||||
filename="$FILE_1_MB"
|
|
||||||
out=$(md5sum "$filename" 2>/dev/null)
|
|
||||||
rv=$?
|
|
||||||
if [ "$rv" -eq 0 ]; then
|
|
||||||
awk '{ print $1 }' <<< "$out"
|
|
||||||
fi
|
|
||||||
|
|
||||||
return "$rv"
|
|
||||||
}
|
|
||||||
|
|
||||||
function get_time()
|
|
||||||
{
|
|
||||||
date +%s%N
|
|
||||||
}
|
|
||||||
|
|
||||||
function get_duration()
|
|
||||||
{
|
|
||||||
start_time=$1
|
|
||||||
end_time=$(get_time)
|
|
||||||
|
|
||||||
echo $(( (end_time - start_time) / 1000000 ))
|
|
||||||
}
|
|
||||||
|
|
||||||
function log_success()
|
|
||||||
{
|
|
||||||
if [ -n "$MINT_MODE" ]; then
|
|
||||||
printf '{"name": "s3cmd", "duration": "%d", "function": "%s", "status": "PASS"}\n' "$(get_duration "$1")" "$2"
|
|
||||||
fi
|
|
||||||
}
|
|
||||||
|
|
||||||
function show()
|
|
||||||
{
|
|
||||||
if [ -z "$MINT_MODE" ]; then
|
|
||||||
func_name="$1"
|
|
||||||
echo "Running $func_name()"
|
|
||||||
fi
|
|
||||||
}
|
|
||||||
|
|
||||||
function fail()
|
|
||||||
{
|
|
||||||
rv="$1"
|
|
||||||
shift
|
|
||||||
|
|
||||||
if [ "$rv" -ne 0 ]; then
|
|
||||||
echo "$@"
|
|
||||||
fi
|
|
||||||
|
|
||||||
return "$rv"
|
|
||||||
}
|
|
||||||
|
|
||||||
function assert()
|
|
||||||
{
|
|
||||||
expected_rv="$1"
|
|
||||||
shift
|
|
||||||
start_time="$1"
|
|
||||||
shift
|
|
||||||
func_name="$1"
|
|
||||||
shift
|
|
||||||
|
|
||||||
err=$("$@" 2>&1)
|
|
||||||
rv=$?
|
|
||||||
if [ "$rv" -ne 0 ] && [ "$expected_rv" -eq 0 ]; then
|
|
||||||
if [ -n "$MINT_MODE" ]; then
|
|
||||||
err=$(printf '%s' "$err" | python -c 'import sys,json; print(json.dumps(sys.stdin.read()))')
|
|
||||||
## err is already JSON string, no need to double quote
|
|
||||||
printf '{"name": "s3cmd", "duration": "%d", "function": "%s", "status": "FAIL", "error": %s}\n' "$(get_duration "$start_time")" "$func_name" "$err"
|
|
||||||
else
|
|
||||||
echo "s3cmd: $func_name: $err"
|
|
||||||
fi
|
|
||||||
|
|
||||||
exit "$rv"
|
|
||||||
fi
|
|
||||||
|
|
||||||
return 0
|
|
||||||
}
|
|
||||||
|
|
||||||
function assert_success() {
|
|
||||||
assert 0 "$@"
|
|
||||||
}
|
|
||||||
|
|
||||||
function assert_failure() {
|
|
||||||
assert 1 "$@"
|
|
||||||
}
|
|
||||||
|
|
||||||
function s3cmd_cmd()
|
|
||||||
{
|
|
||||||
cmd=( "${S3CMD_CMD[@]}" "$@" )
|
|
||||||
"${cmd[@]}"
|
|
||||||
rv=$?
|
|
||||||
return "$rv"
|
|
||||||
}
|
|
||||||
|
|
||||||
function check_md5sum()
|
|
||||||
{
|
|
||||||
expected_checksum="$1"
|
|
||||||
shift
|
|
||||||
filename="$*"
|
|
||||||
|
|
||||||
checksum="$(get_md5sum "$filename")"
|
|
||||||
rv=$?
|
|
||||||
if [ "$rv" -ne 0 ]; then
|
|
||||||
echo "unable to get md5sum for $filename"
|
|
||||||
return "$rv"
|
|
||||||
fi
|
|
||||||
|
|
||||||
if [ "$checksum" != "$expected_checksum" ]; then
|
|
||||||
echo "$filename: md5sum mismatch"
|
|
||||||
return 1
|
|
||||||
fi
|
|
||||||
|
|
||||||
return 0
|
|
||||||
}
|
|
||||||
|
|
||||||
function test_make_bucket()
|
|
||||||
{
|
|
||||||
show "${FUNCNAME[0]}"
|
|
||||||
|
|
||||||
start_time=$(get_time)
|
|
||||||
bucket_name="s3cmd-test-bucket-$RANDOM"
|
|
||||||
assert_success "$start_time" "${FUNCNAME[0]}" s3cmd_cmd mb "s3://${bucket_name}"
|
|
||||||
assert_success "$start_time" "${FUNCNAME[0]}" s3cmd_cmd rb "s3://${bucket_name}"
|
|
||||||
|
|
||||||
log_success "$start_time" "${FUNCNAME[0]}"
|
|
||||||
}
|
|
||||||
|
|
||||||
function test_make_bucket_error() {
|
|
||||||
show "${FUNCNAME[0]}"
|
|
||||||
|
|
||||||
start_time=$(get_time)
|
|
||||||
bucket_name="S3CMD-test%bucket%$RANDOM"
|
|
||||||
assert_failure "$start_time" "${FUNCNAME[0]}" s3cmd_cmd mb "s3://${bucket_name}"
|
|
||||||
|
|
||||||
log_success "$start_time" "${FUNCNAME[0]}"
|
|
||||||
}
|
|
||||||
|
|
||||||
function setup()
|
|
||||||
{
|
|
||||||
start_time=$(get_time)
|
|
||||||
assert_success "$start_time" "${FUNCNAME[0]}" s3cmd_cmd mb "s3://${BUCKET_NAME}"
|
|
||||||
}
|
|
||||||
|
|
||||||
function teardown()
|
|
||||||
{
|
|
||||||
start_time=$(get_time)
|
|
||||||
assert_success "$start_time" "${FUNCNAME[0]}" s3cmd_cmd rm --force --recursive "s3://${BUCKET_NAME}"
|
|
||||||
assert_success "$start_time" "${FUNCNAME[0]}" s3cmd_cmd rb --force "s3://${BUCKET_NAME}"
|
|
||||||
}
|
|
||||||
|
|
||||||
function test_put_object()
|
|
||||||
{
|
|
||||||
show "${FUNCNAME[0]}"
|
|
||||||
|
|
||||||
start_time=$(get_time)
|
|
||||||
object_name="s3cmd-test-object-$RANDOM"
|
|
||||||
assert_success "$start_time" "${FUNCNAME[0]}" s3cmd_cmd put "${FILE_1_MB}" "s3://${BUCKET_NAME}/${object_name}"
|
|
||||||
assert_success "$start_time" "${FUNCNAME[0]}" s3cmd_cmd rm "s3://${BUCKET_NAME}/${object_name}"
|
|
||||||
|
|
||||||
log_success "$start_time" "${FUNCNAME[0]}"
|
|
||||||
}
|
|
||||||
|
|
||||||
function test_put_object_error()
|
|
||||||
{
|
|
||||||
show "${FUNCNAME[0]}"
|
|
||||||
start_time=$(get_time)
|
|
||||||
|
|
||||||
object_long_name=$(printf "s3cmd-test-object-%01100d" 1)
|
|
||||||
assert_failure "$start_time" "${FUNCNAME[0]}" s3cmd_cmd put "${FILE_1_MB}" "s3://${BUCKET_NAME}/${object_long_name}"
|
|
||||||
|
|
||||||
log_success "$start_time" "${FUNCNAME[0]}"
|
|
||||||
}
|
|
||||||
|
|
||||||
function test_put_object_multipart()
|
|
||||||
{
|
|
||||||
show "${FUNCNAME[0]}"
|
|
||||||
|
|
||||||
start_time=$(get_time)
|
|
||||||
object_name="s3cmd-test-object-$RANDOM"
|
|
||||||
assert_success "$start_time" "${FUNCNAME[0]}" s3cmd_cmd put "${FILE_65_MB}" "s3://${BUCKET_NAME}/${object_name}"
|
|
||||||
assert_success "$start_time" "${FUNCNAME[0]}" s3cmd_cmd rm "s3://${BUCKET_NAME}/${object_name}"
|
|
||||||
|
|
||||||
log_success "$start_time" "${FUNCNAME[0]}"
|
|
||||||
}
|
|
||||||
|
|
||||||
function test_get_object()
|
|
||||||
{
|
|
||||||
show "${FUNCNAME[0]}"
|
|
||||||
|
|
||||||
start_time=$(get_time)
|
|
||||||
object_name="s3cmd-test-object-$RANDOM"
|
|
||||||
assert_success "$start_time" "${FUNCNAME[0]}" s3cmd_cmd put "${FILE_1_MB}" "s3://${BUCKET_NAME}/${object_name}"
|
|
||||||
assert_success "$start_time" "${FUNCNAME[0]}" s3cmd_cmd get "s3://${BUCKET_NAME}/${object_name}" "${object_name}.downloaded"
|
|
||||||
assert_success "$start_time" "${FUNCNAME[0]}" check_md5sum "$FILE_1_MB_MD5SUM" "${object_name}.downloaded"
|
|
||||||
assert_success "$start_time" "${FUNCNAME[0]}" s3cmd_cmd rm "s3://${BUCKET_NAME}/${object_name}"
|
|
||||||
assert_success "$start_time" "${FUNCNAME[0]}" rm -f "${object_name}.downloaded"
|
|
||||||
|
|
||||||
log_success "$start_time" "${FUNCNAME[0]}"
|
|
||||||
}
|
|
||||||
|
|
||||||
function test_get_object_error()
|
|
||||||
{
|
|
||||||
show "${FUNCNAME[0]}"
|
|
||||||
|
|
||||||
start_time=$(get_time)
|
|
||||||
object_name="s3cmd-test-object-$RANDOM"
|
|
||||||
assert_success "$start_time" "${FUNCNAME[0]}" s3cmd_cmd put "${FILE_1_MB}" "s3://${BUCKET_NAME}/${object_name}"
|
|
||||||
assert_success "$start_time" "${FUNCNAME[0]}" s3cmd_cmd rm "s3://${BUCKET_NAME}/${object_name}"
|
|
||||||
assert_failure "$start_time" "${FUNCNAME[0]}" s3cmd_cmd get "s3://${BUCKET_NAME}/${object_name}" "${object_name}.downloaded"
|
|
||||||
assert_success "$start_time" "${FUNCNAME[0]}" rm -f "${object_name}.downloaded"
|
|
||||||
|
|
||||||
log_success "$start_time" "${FUNCNAME[0]}"
|
|
||||||
}
|
|
||||||
|
|
||||||
function test_get_object_multipart()
|
|
||||||
{
|
|
||||||
show "${FUNCNAME[0]}"
|
|
||||||
|
|
||||||
start_time=$(get_time)
|
|
||||||
object_name="s3cmd-test-object-$RANDOM"
|
|
||||||
assert_success "$start_time" "${FUNCNAME[0]}" s3cmd_cmd put "${FILE_65_MB}" "s3://${BUCKET_NAME}/${object_name}"
|
|
||||||
assert_success "$start_time" "${FUNCNAME[0]}" s3cmd_cmd get "s3://${BUCKET_NAME}/${object_name}" "${object_name}.downloaded"
|
|
||||||
assert_success "$start_time" "${FUNCNAME[0]}" check_md5sum "$FILE_65_MB_MD5SUM" "${object_name}.downloaded"
|
|
||||||
assert_success "$start_time" "${FUNCNAME[0]}" s3cmd_cmd rm "s3://${BUCKET_NAME}/${object_name}"
|
|
||||||
assert_success "$start_time" "${FUNCNAME[0]}" rm -f "${object_name}.downloaded"
|
|
||||||
|
|
||||||
log_success "$start_time" "${FUNCNAME[0]}"
|
|
||||||
}
|
|
||||||
|
|
||||||
function test_sync_list_objects()
|
|
||||||
{
|
|
||||||
show "${FUNCNAME[0]}"
|
|
||||||
|
|
||||||
start_time=$(get_time)
|
|
||||||
bucket_name="s3cmd-test-bucket-$RANDOM"
|
|
||||||
object_name="s3cmd-test-object-$RANDOM"
|
|
||||||
assert_success "$start_time" "${FUNCNAME[0]}" s3cmd_cmd mb "s3://${bucket_name}"
|
|
||||||
assert_success "$start_time" "${FUNCNAME[0]}" s3cmd_cmd sync "$DATA_DIR/" "s3://${bucket_name}"
|
|
||||||
|
|
||||||
diff -bB <(ls "$DATA_DIR") <("${S3CMD_CMD[@]}" ls "s3://${bucket_name}" | awk '{print $4}' | sed "s/s3:*..${bucket_name}.//g") >/dev/null 2>&1
|
|
||||||
assert_success "$start_time" "${FUNCNAME[0]}" fail $? "sync and list differs"
|
|
||||||
assert_success "$start_time" "${FUNCNAME[0]}" s3cmd_cmd rb --force --recursive "s3://${bucket_name}"
|
|
||||||
|
|
||||||
log_success "$start_time" "${FUNCNAME[0]}"
|
|
||||||
}
|
|
||||||
|
|
||||||
function run_test()
|
|
||||||
{
|
|
||||||
test_make_bucket
|
|
||||||
test_make_bucket_error
|
|
||||||
|
|
||||||
setup
|
|
||||||
|
|
||||||
test_put_object
|
|
||||||
test_put_object_error
|
|
||||||
test_put_object_multipart
|
|
||||||
test_get_object
|
|
||||||
test_get_object_multipart
|
|
||||||
test_sync_list_objects
|
|
||||||
|
|
||||||
teardown
|
|
||||||
}
|
|
||||||
|
|
||||||
function __init__()
|
|
||||||
{
|
|
||||||
set -e
|
|
||||||
|
|
||||||
S3CMD_CONFIG_DIR="/tmp/.s3cmd-$RANDOM"
|
|
||||||
mkdir -p "$S3CMD_CONFIG_DIR"
|
|
||||||
S3CMD_CONFIG_FILE="$S3CMD_CONFIG_DIR/s3cfg"
|
|
||||||
|
|
||||||
# configure s3cmd
|
|
||||||
cat > "$S3CMD_CONFIG_FILE" <<EOF
|
|
||||||
signature_v2 = False
|
|
||||||
host_base = $SERVER_ENDPOINT
|
|
||||||
host_bucket = $SERVER_ENDPOINT
|
|
||||||
bucket_location = $SERVER_REGION
|
|
||||||
use_https = $ENABLE_HTTPS
|
|
||||||
access_key = $ACCESS_KEY
|
|
||||||
secret_key = $SECRET_KEY
|
|
||||||
EOF
|
|
||||||
|
|
||||||
# For Mint, setup is already done. For others, setup the environment
|
|
||||||
if [ -z "$MINT_MODE" ]; then
|
|
||||||
mkdir -p "$WORK_DIR"
|
|
||||||
mkdir -p "$DATA_DIR"
|
|
||||||
|
|
||||||
# If the s3cmd executable is not available in the current directory, use the one found in PATH.
|
|
||||||
if [ ! -x "$S3CMD" ]; then
|
|
||||||
echo "'s3cmd' executable not found in the current directory or in PATH"
|
|
||||||
exit 1
|
|
||||||
fi
|
|
||||||
fi
|
|
||||||
|
|
||||||
if [ ! -x "$S3CMD" ]; then
|
|
||||||
echo "$S3CMD executable binary not found"
|
|
||||||
exit 1
|
|
||||||
fi
|
|
||||||
|
|
||||||
S3CMD_CMD=( "${S3CMD}" --config "$S3CMD_CONFIG_FILE" )
|
|
||||||
|
|
||||||
if [ ! -e "$FILE_1_MB" ]; then
|
|
||||||
shred -n 1 -s 1MB - >"$FILE_1_MB"
|
|
||||||
fi
|
|
||||||
|
|
||||||
if [ ! -e "$FILE_65_MB" ]; then
|
|
||||||
shred -n 1 -s 65MB - >"$FILE_65_MB"
|
|
||||||
fi
|
|
||||||
|
|
||||||
set -E
|
|
||||||
set -o pipefail
|
|
||||||
|
|
||||||
FILE_1_MB_MD5SUM="$(get_md5sum "$FILE_1_MB")"
|
|
||||||
rv=$?
|
|
||||||
if [ $rv -ne 0 ]; then
|
|
||||||
echo "unable to get md5sum of $FILE_1_MB"
|
|
||||||
exit 1
|
|
||||||
fi
|
|
||||||
|
|
||||||
FILE_65_MB_MD5SUM="$(get_md5sum "$FILE_65_MB")"
|
|
||||||
rv=$?
|
|
||||||
if [ $rv -ne 0 ]; then
|
|
||||||
echo "unable to get md5sum of $FILE_65_MB"
|
|
||||||
exit 1
|
|
||||||
fi
|
|
||||||
|
|
||||||
set +e
|
|
||||||
}
|
|
||||||
|
|
||||||
function main()
|
|
||||||
{
|
|
||||||
( run_test )
|
|
||||||
rv=$?
|
|
||||||
|
|
||||||
rm -fr "$S3CMD_CONFIG_FILE"
|
|
||||||
if [ -z "$MINT_MODE" ]; then
|
|
||||||
rm -fr "$WORK_DIR" "$DATA_DIR"
|
|
||||||
fi
|
|
||||||
|
|
||||||
exit "$rv"
|
|
||||||
}
|
|
||||||
|
|
||||||
__init__ "$@"
|
|
||||||
main "$@"
|
|
|
@ -1,21 +0,0 @@

## `s3select` tests

This directory holds the Mint tests for `s3select` features. The top-level `mint.sh` calls `run.sh` to execute the tests.

## Adding new tests

New tests are added to `s3select/tests.py` as new functions.

## Running tests manually

- Set the environment variables `MINT_DATA_DIR`, `MINT_MODE`, `SERVER_ENDPOINT`, `ACCESS_KEY`, `SECRET_KEY`, `SERVER_REGION` and `ENABLE_HTTPS`.
- Call `run.sh` with an output log file and an error log file, for example:

```bash
export MINT_DATA_DIR=~/my-mint-dir
export MINT_MODE=core
export SERVER_ENDPOINT="play.min.io"
export ACCESS_KEY="Q3AM3UQ867SPQQA43P2F"
export SECRET_KEY="zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG"
export ENABLE_HTTPS=1
export SERVER_REGION=us-east-1
./run.sh /tmp/output.log /tmp/error.log
```
@ -1,166 +0,0 @@

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import io
import os

from minio import Minio
from minio.select import (COMPRESSION_TYPE_NONE, FILE_HEADER_INFO_NONE,
                          JSON_TYPE_DOCUMENT, QUOTE_FIELDS_ALWAYS,
                          QUOTE_FIELDS_ASNEEDED, CSVInputSerialization,
                          CSVOutputSerialization, JSONInputSerialization,
                          JSONOutputSerialization, SelectRequest)

from utils import *


def test_sql_api(test_name, client, bucket_name, input_data, sql_opts, expected_output):
    """Test that the passed SQL request produces the expected output."""
    object_name = generate_object_name()
    got_output = b''
    try:
        bytes_content = io.BytesIO(input_data)
        client.put_object(bucket_name, object_name,
                          bytes_content, len(input_data))
        data = client.select_object_content(bucket_name, object_name, sql_opts)
        # Collect the returned records
        records = io.BytesIO()
        for d in data.stream(10*1024):
            records.write(d)
        got_output = records.getvalue()
    except Exception as select_err:
        if not isinstance(expected_output, Exception):
            raise ValueError(
                'Test {} unexpectedly failed with: {}'.format(test_name, select_err))
    else:
        if isinstance(expected_output, Exception):
            raise ValueError(
                'Test {}: expected an exception, got {}'.format(test_name, got_output))
        if got_output != expected_output:
            raise ValueError('Test {}: data mismatch. Expected: {}, Received: {}'.format(
                test_name, expected_output, got_output))
    finally:
        client.remove_object(bucket_name, object_name)


def test_csv_input_custom_quote_char(client, log_output):
    # Get a unique bucket_name and object_name
    log_output.args['bucket_name'] = bucket_name = generate_bucket_name()

    tests = [
        # Invalid quote character, should fail
        ('""', '"', b'col1,col2,col3\n', Exception()),
        # UTF-8 quote character
        ('ع', '"', 'عcol1ع,عcol2ع,عcol3ع\n'.encode(),
         b'{"_1":"col1","_2":"col2","_3":"col3"}\n'),
        # Only one field is quoted
        ('"', '"', b'"col1",col2,col3\n',
         b'{"_1":"col1","_2":"col2","_3":"col3"}\n'),
        ('"', '"', b'"col1,col2,col3"\n', b'{"_1":"col1,col2,col3"}\n'),
        ('\'', '"', b'"col1",col2,col3\n',
         b'{"_1":"\\"col1\\"","_2":"col2","_3":"col3"}\n'),
        ('', '"', b'"col1",col2,col3\n',
         b'{"_1":"\\"col1\\"","_2":"col2","_3":"col3"}\n'),
        ('', '"', b'"col1",col2,col3\n',
         b'{"_1":"\\"col1\\"","_2":"col2","_3":"col3"}\n'),
        ('', '"', b'"col1","col2","col3"\n',
         b'{"_1":"\\"col1\\"","_2":"\\"col2\\"","_3":"\\"col3\\""}\n'),
        ('"', '"', b'""""""\n', b'{"_1":"\\"\\""}\n'),
        ('"', '"', b'A",B\n', b'{"_1":"A\\"","_2":"B"}\n'),
        ('"', '"', b'A"",B\n', b'{"_1":"A\\"\\"","_2":"B"}\n'),
        ('"', '\\', b'A\\B,C\n', b'{"_1":"A\\\\B","_2":"C"}\n'),
        ('"', '"', b'"A""B","CD"\n', b'{"_1":"A\\"B","_2":"CD"}\n'),
        ('"', '\\', b'"A\\B","CD"\n', b'{"_1":"AB","_2":"CD"}\n'),
        ('"', '\\', b'"A\\,","CD"\n', b'{"_1":"A,","_2":"CD"}\n'),
        ('"', '\\', b'"A\\"B","CD"\n', b'{"_1":"A\\"B","_2":"CD"}\n'),
        ('"', '\\', b'"A\\""\n', b'{"_1":"A\\""}\n'),
        ('"', '\\', b'"A\\"\\"B"\n', b'{"_1":"A\\"\\"B"}\n'),
        ('"', '\\', b'"A\\"","\\"B"\n', b'{"_1":"A\\"","_2":"\\"B"}\n'),
    ]

    client.make_bucket(bucket_name)

    try:
        for idx, (quote_char, escape_char, data, expected_output) in enumerate(tests):
            sql_opts = SelectRequest(
                "select * from s3object",
                CSVInputSerialization(
                    compression_type=COMPRESSION_TYPE_NONE,
                    file_header_info=FILE_HEADER_INFO_NONE,
                    record_delimiter="\n",
                    field_delimiter=",",
                    quote_character=quote_char,
                    quote_escape_character=escape_char,
                    comments="#",
                    allow_quoted_record_delimiter="FALSE",
                ),
                JSONOutputSerialization(
                    record_delimiter="\n",
                ),
                request_progress=False,
            )

            test_sql_api(f'test_{idx}', client, bucket_name,
                         data, sql_opts, expected_output)
    finally:
        client.remove_bucket(bucket_name)

    # Test passes
    print(log_output.json_report())


def test_csv_output_custom_quote_char(client, log_output):
    # Get a unique bucket_name and object_name
    log_output.args['bucket_name'] = bucket_name = generate_bucket_name()

    tests = [
        # Invalid quote character, should fail
        ("''", "''", b'col1,col2,col3\n', Exception()),
        ("'", "'", b'col1,col2,col3\n', b"'col1','col2','col3'\n"),
        ("", '"', b'col1,col2,col3\n', b'\x00col1\x00,\x00col2\x00,\x00col3\x00\n'),
        ('"', '"', b'col1,col2,col3\n', b'"col1","col2","col3"\n'),
        ('"', '"', b'col"1,col2,col3\n', b'"col""1","col2","col3"\n'),
        ('"', '"', b'""""\n', b'""""\n'),
        ('"', '"', b'\n', b''),
        ("'", "\\", b'col1,col2,col3\n', b"'col1','col2','col3'\n"),
        ("'", "\\", b'col""1,col2,col3\n', b"'col\"\"1','col2','col3'\n"),
        ("'", "\\", b'col\'1,col2,col3\n', b"'col\\'1','col2','col3'\n"),
        ("'", "\\", b'"col\'1","col2","col3"\n', b"'col\\'1','col2','col3'\n"),
        ("'", "\\", b'col\'\n', b"'col\\''\n"),
        # Two consecutive escaped quotes
        ("'", "\\", b'"a"""""\n', b"'a\"\"'\n"),
    ]

    client.make_bucket(bucket_name)

    try:
        for idx, (quote_char, escape_char, input_data, expected_output) in enumerate(tests):
            sql_opts = SelectRequest(
                "select * from s3object",
                CSVInputSerialization(
                    compression_type=COMPRESSION_TYPE_NONE,
                    file_header_info=FILE_HEADER_INFO_NONE,
                    record_delimiter="\n",
                    field_delimiter=",",
                    quote_character='"',
                    quote_escape_character='"',
                    comments="#",
                    allow_quoted_record_delimiter="FALSE",
                ),
                CSVOutputSerialization(
                    quote_fields=QUOTE_FIELDS_ALWAYS,
                    record_delimiter="\n",
                    field_delimiter=",",
                    quote_character=quote_char,
                    quote_escape_character=escape_char,
                ),
                request_progress=False,
            )

            test_sql_api(f'test_{idx}', client, bucket_name,
                         input_data, sql_opts, expected_output)
    finally:
        client.remove_bucket(bucket_name)

    # Test passes
    print(log_output.json_report())
@ -1,15 +0,0 @@

#!/bin/bash
#
#

# handle command line arguments
if [ $# -ne 2 ]; then
    echo "usage: run.sh <OUTPUT-LOG-FILE> <ERROR-LOG-FILE>"
    exit 1
fi

output_log_file="$1"
error_log_file="$2"

# run path style tests
python "./tests.py" 1>>"$output_log_file" 2>"$error_log_file"
@ -1,416 +0,0 @@

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import io
from datetime import datetime

from minio.select import (FILE_HEADER_INFO_NONE, JSON_TYPE_DOCUMENT,
                          QUOTE_FIELDS_ASNEEDED, CSVInputSerialization,
                          CSVOutputSerialization, JSONInputSerialization,
                          JSONOutputSerialization, SelectRequest)

from utils import generate_bucket_name, generate_object_name


def test_sql_expressions_custom_input_output(client, input_bytes, sql_input,
                                             sql_output, tests, log_output):
    bucket_name = generate_bucket_name()
    object_name = generate_object_name()

    log_output.args['total_tests'] = 0
    log_output.args['total_success'] = 0

    client.make_bucket(bucket_name)
    try:
        content = io.BytesIO(bytes(input_bytes, 'utf-8'))
        client.put_object(bucket_name, object_name, content, len(input_bytes))

        for idx, (test_name, select_expression, expected_output) in enumerate(tests):
            if select_expression == '':
                continue
            try:
                log_output.args['total_tests'] += 1
                sreq = SelectRequest(
                    select_expression,
                    sql_input,
                    sql_output,
                    request_progress=False
                )

                data = client.select_object_content(
                    bucket_name, object_name, sreq)

                # Collect the returned records
                records = io.BytesIO()
                for d in data.stream(10*1024):
                    records.write(d)
                got_output = records.getvalue()

                if got_output != expected_output:
                    if type(expected_output) == datetime:
                        # Attempt to parse the date; raises an exception on any issue
                        datetime.strptime(got_output.decode(
                            "utf-8").strip(), '%Y-%m-%dT%H:%M:%S.%f%z')
                    else:
                        raise ValueError('Test {}: data mismatch. Expected: {}. Received: {}.'.format(
                            idx+1, expected_output, got_output))

                log_output.args['total_success'] += 1
            except Exception as err:
                continue  # TODO: raise instead of swallowing test failures
    finally:
        client.remove_object(bucket_name, object_name)
        client.remove_bucket(bucket_name)


def test_sql_expressions(client, input_json_bytes, tests, log_output):
    input_serialization = JSONInputSerialization(
        compression_type="NONE",
        json_type=JSON_TYPE_DOCUMENT,
    )

    output_serialization = CSVOutputSerialization(
        quote_fields=QUOTE_FIELDS_ASNEEDED)

    test_sql_expressions_custom_input_output(client, input_json_bytes,
                                             input_serialization, output_serialization, tests, log_output)


def test_sql_operators(client, log_output):

    json_testfile = """{"id": 1, "name": "John", "age": 3}
{"id": 2, "name": "Elliot", "age": 4}
{"id": 3, "name": "Yves", "age": 5}
{"id": 4, "name": null, "age": 0}
"""

    tests = [
        # Logical operators
        ("AND", "select * from S3Object s where s.id = 1 AND s.name = 'John'", b'1,John,3\n'),
        ("NOT", "select * from S3Object s where NOT s.id = 1",
         b'2,Elliot,4\n3,Yves,5\n4,,0\n'),
        ("OR", "select * from S3Object s where s.id = 1 OR s.id = 3",
         b'1,John,3\n3,Yves,5\n'),
        # Comparison operators
        ("<", "select * from S3Object s where s.age < 4", b'1,John,3\n4,,0\n'),
        (">", "select * from S3Object s where s.age > 4", b'3,Yves,5\n'),
        ("<=", "select * from S3Object s where s.age <= 4",
         b'1,John,3\n2,Elliot,4\n4,,0\n'),
        (">=", "select * from S3Object s where s.age >= 4", b'2,Elliot,4\n3,Yves,5\n'),
        ("=", "select * from S3Object s where s.age = 4", b'2,Elliot,4\n'),
        ("<>", "select * from S3Object s where s.age <> 4",
         b'1,John,3\n3,Yves,5\n4,,0\n'),
        ("!=", "select * from S3Object s where s.age != 4",
         b'1,John,3\n3,Yves,5\n4,,0\n'),
        ("BETWEEN", "select * from S3Object s where s.age BETWEEN 4 AND 5",
         b'2,Elliot,4\n3,Yves,5\n'),
        ("IN", "select * from S3Object s where s.age IN (3,5)", b'1,John,3\n3,Yves,5\n'),
        # Pattern matching operators
        ("LIKE_", "select * from S3Object s where s.name LIKE '_ves'", b'3,Yves,5\n'),
        ("LIKE%", "select * from S3Object s where s.name LIKE 'Ell%t'", b'2,Elliot,4\n'),
        # Unary operators
        ("NULL", "select * from S3Object s where s.name IS NULL", b'4,,0\n'),
        ("NOT_NULL", "select * from S3Object s where s.age IS NOT NULL",
         b'1,John,3\n2,Elliot,4\n3,Yves,5\n4,,0\n'),
        # Math operators
        ("+", "select * from S3Object s where s.age = 1+3 ", b'2,Elliot,4\n'),
        ("-", "select * from S3Object s where s.age = 5-1 ", b'2,Elliot,4\n'),
        ("*", "select * from S3Object s where s.age = 2*2 ", b'2,Elliot,4\n'),
        ("%", "select * from S3Object s where s.age = 10%6 ", b'2,Elliot,4\n'),
    ]

    try:
        test_sql_expressions(client, json_testfile, tests, log_output)
    except Exception as select_err:
        raise select_err

    # Test passes
    print(log_output.json_report())


def test_sql_operators_precedence(client, log_output):

    json_testfile = """{"id": 1, "name": "Eric"}"""

    tests = [
        ("-_1", "select -3*3 from S3Object", b'-9\n'),
        ("*", "select 10-3*2 from S3Object", b'4\n'),
        ("/", "select 13-10/5 from S3Object", b'11\n'),
        ("%", "select 13-10%5 from S3Object", b'13\n'),
        ("+", "select 1+1*3 from S3Object", b'4\n'),
        ("-_2", "select 1-1*3 from S3Object", b'-2\n'),
        ("=", "select * from S3Object as s where s.id = 13-12", b'1,Eric\n'),
        ("<>", "select * from S3Object as s where s.id <> 1-1", b'1,Eric\n'),
        ("NOT", "select * from S3Object where false OR NOT false", b'1,Eric\n'),
        ("AND", "select * from S3Object where true AND true OR false ", b'1,Eric\n'),
        ("OR", "select * from S3Object where false OR NOT false", b'1,Eric\n'),
        ("IN", "select * from S3Object as s where s.id <> -1 AND s.id IN (1,2,3)", b'1,Eric\n'),
        ("BETWEEN", "select * from S3Object as s where s.id <> -1 AND s.id BETWEEN -1 AND 3", b'1,Eric\n'),
        ("LIKE", "select * from S3Object as s where s.id <> -1 AND s.name LIKE 'E%'", b'1,Eric\n'),
    ]

    try:
        test_sql_expressions(client, json_testfile, tests, log_output)
    except Exception as select_err:
        raise select_err

    # Test passes
    print(log_output.json_report())


def test_sql_functions_agg_cond_conv(client, log_output):

    json_testfile = """{"id": 1, "name": "John", "age": 3}
{"id": 2, "name": "Elliot", "age": 4}
{"id": 3, "name": "Yves", "age": 5}
{"id": 4, "name": "Christine", "age": null}
{"id": 5, "name": "Eric", "age": 0}
"""
    tests = [
        # Aggregate functions
        ("COUNT", "select count(*) from S3Object s", b'5\n'),
        ("AVG", "select avg(s.age) from S3Object s", b'3\n'),
        ("MAX", "select max(s.age) from S3Object s", b'5\n'),
        ("MIN", "select min(s.age) from S3Object s", b'0\n'),
        ("SUM", "select sum(s.age) from S3Object s", b'12\n'),
        # Conditional functions
        ("COALESCE", "SELECT COALESCE(s.age, 99) FROM S3Object s", b'3\n4\n5\n99\n0\n'),
        ("NULLIF", "SELECT NULLIF(s.age, 0) FROM S3Object s", b'3\n4\n5\n\n\n'),
        # Conversion functions
        ("CAST", "SELECT CAST(s.age AS FLOAT) FROM S3Object s",
         b'3.0\n4.0\n5.0\n\n0.0\n'),
    ]

    try:
        test_sql_expressions(client, json_testfile, tests, log_output)
    except Exception as select_err:
        raise select_err

    # Test passes
    print(log_output.json_report())


def test_sql_functions_date(client, log_output):

    json_testfile = """
{"id": 1, "name": "John", "datez": "2017-01-02T03:04:05.006+07:30"}
"""

    tests = [
        # DATE_ADD
        ("DATE_ADD_1", "select DATE_ADD(year, 5, TO_TIMESTAMP(s.datez)) from S3Object as s",
         b'2022-01-02T03:04:05.006+07:30\n'),
        ("DATE_ADD_2", "select DATE_ADD(month, 1, TO_TIMESTAMP(s.datez)) from S3Object as s",
         b'2017-02-02T03:04:05.006+07:30\n'),
        ("DATE_ADD_3", "select DATE_ADD(day, -1, TO_TIMESTAMP(s.datez)) from S3Object as s",
         b'2017-01-01T03:04:05.006+07:30\n'),
        ("DATE_ADD_4", "select DATE_ADD(hour, 1, TO_TIMESTAMP(s.datez)) from S3Object as s",
         b'2017-01-02T04:04:05.006+07:30\n'),
        ("DATE_ADD_5", "select DATE_ADD(minute, 5, TO_TIMESTAMP(s.datez)) from S3Object as s",
         b'2017-01-02T03:09:05.006+07:30\n'),
        ("DATE_ADD_6", "select DATE_ADD(second, 5, TO_TIMESTAMP(s.datez)) from S3Object as s",
         b'2017-01-02T03:04:10.006+07:30\n'),
        # DATE_DIFF
        ("DATE_DIFF_1", "select DATE_DIFF(year, TO_TIMESTAMP(s.datez), TO_TIMESTAMP('2011-01-01T')) from S3Object as s", b'-6\n'),
        ("DATE_DIFF_2", "select DATE_DIFF(month, TO_TIMESTAMP(s.datez), TO_TIMESTAMP('2011T')) from S3Object as s", b'-72\n'),
        ("DATE_DIFF_3", "select DATE_DIFF(day, TO_TIMESTAMP(s.datez), TO_TIMESTAMP('2010-01-02T')) from S3Object as s", b'-2556\n'),
        # EXTRACT
        ("EXTRACT_1", "select EXTRACT(year FROM TO_TIMESTAMP(s.datez)) from S3Object as s", b'2017\n'),
        ("EXTRACT_2", "select EXTRACT(month FROM TO_TIMESTAMP(s.datez)) from S3Object as s", b'1\n'),
        ("EXTRACT_3", "select EXTRACT(hour FROM TO_TIMESTAMP(s.datez)) from S3Object as s", b'3\n'),
        ("EXTRACT_4", "select EXTRACT(minute FROM TO_TIMESTAMP(s.datez)) from S3Object as s", b'4\n'),
        ("EXTRACT_5", "select EXTRACT(timezone_hour FROM TO_TIMESTAMP(s.datez)) from S3Object as s", b'7\n'),
        ("EXTRACT_6", "select EXTRACT(timezone_minute FROM TO_TIMESTAMP(s.datez)) from S3Object as s", b'30\n'),
        # TO_STRING
        ("TO_STRING_1", "select TO_STRING(TO_TIMESTAMP(s.datez), 'MMMM d, y') from S3Object as s",
         b'"January 2, 2017"\n'),
        ("TO_STRING_2", "select TO_STRING(TO_TIMESTAMP(s.datez), 'MMM d, yyyy') from S3Object as s", b'"Jan 2, 2017"\n'),
        ("TO_STRING_3", "select TO_STRING(TO_TIMESTAMP(s.datez), 'M-d-yy') from S3Object as s", b'1-2-17\n'),
        ("TO_STRING_4", "select TO_STRING(TO_TIMESTAMP(s.datez), 'MM-d-y') from S3Object as s", b'01-2-2017\n'),
        ("TO_STRING_5", "select TO_STRING(TO_TIMESTAMP(s.datez), 'MMMM d, y h:m a') from S3Object as s",
         b'"January 2, 2017 3:4 AM"\n'),
        ("TO_STRING_6", "select TO_STRING(TO_TIMESTAMP(s.datez), 'y-MM-dd''T''H:m:ssX') from S3Object as s",
         b'2017-01-02T3:4:05+0730\n'),
        ("TO_STRING_7", "select TO_STRING(TO_TIMESTAMP(s.datez), 'y-MM-dd''T''H:m:ssX') from S3Object as s",
         b'2017-01-02T3:4:05+0730\n'),
        ("TO_STRING_8", "select TO_STRING(TO_TIMESTAMP(s.datez), 'y-MM-dd''T''H:m:ssXXXX') from S3Object as s",
         b'2017-01-02T3:4:05+0730\n'),
        ("TO_STRING_9", "select TO_STRING(TO_TIMESTAMP(s.datez), 'y-MM-dd''T''H:m:ssXXXXX') from S3Object as s",
         b'2017-01-02T3:4:05+07:30\n'),
        ("TO_TIMESTAMP", "select TO_TIMESTAMP(s.datez) from S3Object as s",
         b'2017-01-02T03:04:05.006+07:30\n'),
        ("UTCNOW", "select UTCNOW() from S3Object", datetime(1, 1, 1)),
    ]

    try:
        test_sql_expressions(client, json_testfile, tests, log_output)
    except Exception as select_err:
        raise select_err

    # Test passes
    print(log_output.json_report())


def test_sql_functions_string(client, log_output):

    json_testfile = """
{"id": 1, "name": "John"}
{"id": 2, "name": " \tfoobar\t "}
{"id": 3, "name": "1112211foobar22211122"}
"""

    tests = [
        # CHAR_LENGTH
        ("CHAR_LENGTH", "select CHAR_LENGTH(s.name) from S3Object as s", b'4\n24\n21\n'),
        ("CHARACTER_LENGTH",
         "select CHARACTER_LENGTH(s.name) from S3Object as s", b'4\n24\n21\n'),
        # LOWER
        ("LOWER", "select LOWER(s.name) from S3Object as s where s.id= 1", b'john\n'),
        # SUBSTRING
        ("SUBSTRING_1", "select SUBSTRING(s.name FROM 2) from S3Object as s where s.id = 1", b'ohn\n'),
        ("SUBSTRING_2", "select SUBSTRING(s.name FROM 2 FOR 2) from S3Object as s where s.id = 1", b'oh\n'),
        ("SUBSTRING_3", "select SUBSTRING(s.name FROM -1 FOR 2) from S3Object as s where s.id = 1", b'\n'),
        # TRIM
        ("TRIM_1", "select TRIM(s.name) from S3Object as s where s.id = 2", b'\tfoobar\t\n'),
        ("TRIM_2", "select TRIM(LEADING FROM s.name) from S3Object as s where s.id = 2",
         b'\tfoobar\t \n'),
        ("TRIM_3", "select TRIM(TRAILING FROM s.name) from S3Object as s where s.id = 2",
         b' \tfoobar\t\n'),
        ("TRIM_4", "select TRIM(BOTH FROM s.name) from S3Object as s where s.id = 2", b'\tfoobar\t\n'),
        ("TRIM_5", "select TRIM(BOTH '12' FROM s.name) from S3Object as s where s.id = 3", b'foobar\n'),
        # UPPER
        ("UPPER", "select UPPER(s.name) from S3Object as s where s.id= 1", b'JOHN\n'),
    ]

    try:
        test_sql_expressions(client, json_testfile, tests, log_output)
    except Exception as select_err:
        raise select_err

    # Test passes
    print(log_output.json_report())


def test_sql_datatypes(client, log_output):
    json_testfile = """
{"name": "John"}
"""
    tests = [
        ("bool", "select CAST('true' AS BOOL) from S3Object", b'true\n'),
        ("int", "select CAST('13' AS INT) from S3Object", b'13\n'),
        ("integer", "select CAST('13' AS INTEGER) from S3Object", b'13\n'),
        ("string", "select CAST(true AS STRING) from S3Object", b'true\n'),
        ("float", "select CAST('13.3' AS FLOAT) from S3Object", b'13.3\n'),
        ("decimal", "select CAST('14.3' AS FLOAT) from S3Object", b'14.3\n'),
        ("numeric", "select CAST('14.3' AS FLOAT) from S3Object", b'14.3\n'),
        ("timestamp", "select CAST('2007-04-05T14:30Z' AS TIMESTAMP) from S3Object",
         b'2007-04-05T14:30Z\n'),
    ]

    try:
        test_sql_expressions(client, json_testfile, tests, log_output)
    except Exception as select_err:
        raise select_err

    # Test passes
    print(log_output.json_report())


def test_sql_select(client, log_output):

    json_testfile = """{"id": 1, "created": "June 27", "modified": "July 6" }
{"id": 2, "Created": "June 28", "Modified": "July 7", "Cast": "Random Date" }"""
    tests = [
        ("select_1", "select * from S3Object",
         b'1,June 27,July 6\n2,June 28,July 7,Random Date\n'),
        ("select_2", "select * from S3Object s",
         b'1,June 27,July 6\n2,June 28,July 7,Random Date\n'),
        ("select_3", "select * from S3Object as s",
         b'1,June 27,July 6\n2,June 28,July 7,Random Date\n'),
        ("select_4", "select s.line from S3Object as s", b'\n\n'),
        ("select_5", 'select s."Created" from S3Object as s', b'\nJune 28\n'),
        ("select_6", 'select s."Cast" from S3Object as s', b'\nRandom Date\n'),
        ("where", 'select s.created from S3Object as s', b'June 27\nJune 28\n'),
        ("limit", 'select * from S3Object as s LIMIT 1', b'1,June 27,July 6\n'),
    ]

    try:
        test_sql_expressions(client, json_testfile, tests, log_output)
    except Exception as select_err:
        raise select_err

    # Test passes
    print(log_output.json_report())


def test_sql_select_json(client, log_output):
    json_testcontent = """{ "Rules": [ {"id": "1"}, {"expr": "y > x"}, {"id": "2", "expr": "z = DEBUG"} ]}
{ "created": "June 27", "modified": "July 6" }
"""
    tests = [
        ("select_1", "SELECT id FROM S3Object[*].Rules[*].id",
         b'{"id":"1"}\n{}\n{"id":"2"}\n{}\n'),
        ("select_2",
         "SELECT id FROM S3Object[*].Rules[*].id WHERE id IS NOT MISSING", b'{"id":"1"}\n{"id":"2"}\n'),
        ("select_3", "SELECT d.created, d.modified FROM S3Object[*] d",
         b'{}\n{"created":"June 27","modified":"July 6"}\n'),
        ("select_4", "SELECT _1.created, _1.modified FROM S3Object[*]",
         b'{}\n{"created":"June 27","modified":"July 6"}\n'),
        ("select_5",
         "Select s.rules[1].expr from S3Object s", b'{"expr":"y > x"}\n{}\n'),
    ]

    input_serialization = JSONInputSerialization(json_type=JSON_TYPE_DOCUMENT)
    output_serialization = JSONOutputSerialization()
    try:
        test_sql_expressions_custom_input_output(client, json_testcontent,
                                                 input_serialization, output_serialization, tests, log_output)
    except Exception as select_err:
        raise select_err

    # Test passes
    print(log_output.json_report())


def test_sql_select_csv_no_header(client, log_output):
    csv_testcontent = """val1,val2,val3
val4,val5,val6
"""
    tests = [
        ("select_1", "SELECT s._2 FROM S3Object as s", b'val2\nval5\n'),
    ]

    input_serialization = CSVInputSerialization(
        file_header_info=FILE_HEADER_INFO_NONE,
        allow_quoted_record_delimiter="FALSE",
    )
    output_serialization = CSVOutputSerialization()
    try:
        test_sql_expressions_custom_input_output(client, csv_testcontent,
                                                 input_serialization, output_serialization, tests, log_output)
    except Exception as select_err:
        raise select_err

    # Test passes
    print(log_output.json_report())
@ -1,87 +0,0 @@

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import os
import sys

from csv import (test_csv_input_custom_quote_char,
                 test_csv_output_custom_quote_char)

from minio import Minio

from sql_ops import (test_sql_datatypes, test_sql_functions_agg_cond_conv,
                     test_sql_functions_date, test_sql_functions_string,
                     test_sql_operators, test_sql_operators_precedence,
                     test_sql_select, test_sql_select_csv_no_header,
                     test_sql_select_json)
from utils import LogOutput


def main():
    """
    Functional testing for S3 select.
    """

    try:
        access_key = os.getenv('ACCESS_KEY', 'Q3AM3UQ867SPQQA43P2F')
        secret_key = os.getenv('SECRET_KEY',
                               'zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG')
        server_endpoint = os.getenv('SERVER_ENDPOINT', 'play.min.io')
        secure = os.getenv('ENABLE_HTTPS', '1') == '1'
        if server_endpoint == 'play.min.io':
            access_key = 'Q3AM3UQ867SPQQA43P2F'
            secret_key = 'zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG'
            secure = True

        client = Minio(server_endpoint, access_key, secret_key, secure=secure)

        log_output = LogOutput(client.select_object_content,
                               'test_csv_input_quote_char')
        test_csv_input_custom_quote_char(client, log_output)

        log_output = LogOutput(client.select_object_content,
                               'test_csv_output_quote_char')
        test_csv_output_custom_quote_char(client, log_output)

        log_output = LogOutput(
            client.select_object_content, 'test_sql_operators')
        test_sql_operators(client, log_output)

        log_output = LogOutput(client.select_object_content,
                               'test_sql_operators_precedence')
        test_sql_operators_precedence(client, log_output)

        log_output = LogOutput(client.select_object_content,
                               'test_sql_functions_agg_cond_conv')
        test_sql_functions_agg_cond_conv(client, log_output)

        log_output = LogOutput(
            client.select_object_content, 'test_sql_functions_date')
        test_sql_functions_date(client, log_output)

        log_output = LogOutput(client.select_object_content,
                               'test_sql_functions_string')
        test_sql_functions_string(client, log_output)

        log_output = LogOutput(
            client.select_object_content, 'test_sql_datatypes')
        test_sql_datatypes(client, log_output)

        log_output = LogOutput(client.select_object_content, 'test_sql_select')
        test_sql_select(client, log_output)

        log_output = LogOutput(
            client.select_object_content, 'test_sql_select_json')
        test_sql_select_json(client, log_output)

        log_output = LogOutput(
            client.select_object_content, 'test_sql_select_csv')
        test_sql_select_csv_no_header(client, log_output)

    except Exception as err:
        print(log_output.json_report(err))
        sys.exit(1)


if __name__ == "__main__":
    # Execute only if run as a script
    main()
@@ -1,92 +0,0 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-

import inspect
import json
import time
import traceback
import uuid


class LogOutput(object):
    """
    LogOutput is the class for log output. It is the required standard for
    all SDK tests controlled by mint.
    Here are its attributes:
            'name': name of the SDK under test, e.g. 's3select'
            'function': name of the method/API under test with its signature.
                        The following Python code can be used to
                        pull args information of a <method> and to
                        put it together with the method name:
                        <method>.__name__+'('+', '.join(args_list)+')'
                        e.g. 'remove_object(bucket_name, object_name)'
            'args': method/API arguments with their values, in
                    dictionary form: {'arg1': val1, 'arg2': val2, ...}
            'duration': duration of the whole test in milliseconds,
                        defaults to 0
            'alert': any extra information the user needs to be alerted about,
                     like whether this is a Blocker/Gateway/Server-related
                     issue, etc.; defaults to None
            'message': descriptive error message, defaults to None
            'error': stack trace/exception message (only in case of failure),
                     the actual low-level exception/error thrown by the
                     program; defaults to None
            'status': exit status, possible values are 'PASS', 'FAIL', 'NA';
                      defaults to 'PASS'
    """

    PASS = 'PASS'
    FAIL = 'FAIL'
    NA = 'NA'

    def __init__(self, meth, test_name):
        # getfullargspec() replaces inspect.getargspec(), which was
        # deprecated and removed in Python 3.11.
        self.__args_list = inspect.getfullargspec(meth).args[1:]
        self.__name = 's3select:' + test_name
        self.__function = meth.__name__ + '(' + ', '.join(self.__args_list) + ')'
        self.__args = {}
        self.__duration = 0
        self.__alert = ''
        self.__message = None
        self.__error = None
        self.__status = self.PASS
        self.__start_time = time.time()

    @property
    def name(self): return self.__name

    @property
    def function(self): return self.__function

    @property
    def args(self): return self.__args

    @name.setter
    def name(self, val): self.__name = val

    @function.setter
    def function(self, val): self.__function = val

    @args.setter
    def args(self, val): self.__args = val

    def json_report(self, err_msg='', alert='', status=''):
        self.__args = {k: v for k, v in self.__args.items() if v and v != ''}
        entry = {'name': self.__name,
                 'function': self.__function,
                 'args': self.__args,
                 'duration': int(round((time.time() - self.__start_time)*1000)),
                 'alert': str(alert),
                 'message': str(err_msg),
                 'error': traceback.format_exc() if err_msg and err_msg != '' else '',
                 'status': status if status and status != '' else
                 self.FAIL if err_msg and err_msg != '' else self.PASS
                 }
        return json.dumps({k: v for k, v in entry.items() if v and v != ''})


def generate_bucket_name():
    return "s3select-test-" + str(uuid.uuid4())


def generate_object_name():
    return str(uuid.uuid4())
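The json_report method above reduces to a small transformation: derive the exit status from the error message and drop empty fields before serializing one JSON object per line. A self-contained sketch of just that logic (the fields follow the mint log standard from the docstring; this standalone `json_report` function is an illustration, not the class method itself):

```python
import json

PASS, FAIL = 'PASS', 'FAIL'


def json_report(entry, err_msg=''):
    """Serialize a mint-style log entry: set status, drop empty fields."""
    entry = dict(entry)
    entry['message'] = str(err_msg)
    # Any error message means the test failed.
    entry['status'] = FAIL if err_msg else PASS
    # mint consumers expect one JSON object per line, empty fields dropped.
    return json.dumps({k: v for k, v in entry.items() if v})


line = json_report({'name': 's3select:test_sql_select',
                    'function': 'select_object_content(bucket_name, object_name, opts)',
                    'args': {}, 'duration': 12, 'alert': ''})
print(line)  # empty 'args' and 'alert' are filtered out
```

The same drop-empty-fields filter is what keeps mint's per-test log lines compact enough to grep through after a full run.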
@@ -1,5 +0,0 @@
module mint.minio.io/security

go 1.14

require github.com/sirupsen/logrus v1.6.0
@@ -1,12 +0,0 @@
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/konsorten/go-windows-terminal-sequences v1.0.3 h1:CE8S1cTafDpPvMhIxNJKvHsGVBgn1xWYf1NbHQhywc8=
github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/sirupsen/logrus v1.6.0 h1:UBcNElsrwanuuMsnGSlYmtmgbb23qDR5dG+6X6Oo89I=
github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrfsX/uA88=
github.com/stretchr/testify v1.2.2 h1:bSDNvY7ZPG5RlJ8otE/7V6gMiyenm9RtJ7IUVIAoJ1w=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
golang.org/x/sys v0.0.0-20190422165155-953cdadca894 h1:Cz4ceDQGXuKRnVBDTS23GTn/pU5OE2C0WrNTOYK1Uuc=
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -1,274 +0,0 @@
// Copyright (c) 2015-2021 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"os"
	"time"

	log "github.com/sirupsen/logrus"
)

const testName = "TLS-tests"

const (
	// PASS indicates that a test passed
	PASS = "PASS"
	// FAIL indicates that a test failed
	FAIL = "FAIL"
	// NA indicates that a test is not applicable
	NA = "NA"
)
func main() {
	log.SetOutput(os.Stdout)
	log.SetFormatter(&mintJSONFormatter{})
	log.SetLevel(log.InfoLevel)

	endpoint := os.Getenv("SERVER_ENDPOINT")
	secure := os.Getenv("ENABLE_HTTPS")
	if secure != "1" {
		log.WithFields(log.Fields{"name": testName, "status": NA, "message": "TLS is not enabled"}).Info()
		return
	}

	testTLSVersions(endpoint)
	testTLSCiphers(endpoint)
	testTLSEllipticCurves(endpoint)
}
// Tests whether the endpoint accepts TLS1.0 or TLS1.1 connections - fail if so.
// Tests whether the endpoint accepts TLS1.2 connections - fail if not.
func testTLSVersions(endpoint string) {
	const function = "TLSVersions"
	startTime := time.Now()

	// Tests whether the endpoint accepts TLS1.0 or TLS1.1 connections
	args := map[string]interface{}{
		"MinVersion": "tls.VersionTLS10",
		"MaxVersion": "tls.VersionTLS11",
	}
	_, err := tls.Dial("tcp", endpoint, &tls.Config{
		MinVersion: tls.VersionTLS10,
		MaxVersion: tls.VersionTLS11,
	})
	if err == nil {
		failureLog(function, args, startTime, "", "Endpoint accepts insecure connection", err).Error()
		return
	}

	// Tests whether the endpoint accepts TLS1.2 connections
	args = map[string]interface{}{
		"MinVersion": "tls.VersionTLS12",
	}
	_, err = tls.Dial("tcp", endpoint, &tls.Config{
		MinVersion: tls.VersionTLS12,
	})
	if err != nil {
		failureLog(function, args, startTime, "", "Endpoint rejects secure connection", err).Error()
		return
	}
	successLog(function, args, startTime)
}
// Tests whether the endpoint accepts insecure or broken cipher suites - fail if so.
// Tests whether the endpoint accepts at least one secure and one default cipher
// suite - fail if not.
func testTLSCiphers(endpoint string) {
	const function = "TLSCiphers"
	startTime := time.Now()

	// Tests whether the endpoint accepts insecure ciphers
	args := map[string]interface{}{
		"MinVersion":   "tls.VersionTLS12",
		"CipherSuites": unsupportedCipherSuites,
	}
	_, err := tls.Dial("tcp", endpoint, &tls.Config{
		MinVersion:   tls.VersionTLS12,
		CipherSuites: unsupportedCipherSuites,
	})
	if err == nil {
		failureLog(function, args, startTime, "", "Endpoint accepts insecure cipher suites", err).Error()
		return
	}

	// Tests whether the endpoint accepts at least one secure cipher
	args = map[string]interface{}{
		"MinVersion":   "tls.VersionTLS12",
		"CipherSuites": supportedCipherSuites,
	}
	_, err = tls.Dial("tcp", endpoint, &tls.Config{
		MinVersion:   tls.VersionTLS12,
		CipherSuites: supportedCipherSuites,
	})
	if err != nil {
		failureLog(function, args, startTime, "", "Endpoint rejects all secure cipher suites", err).Error()
		return
	}

	// Tests whether the endpoint accepts at least one default cipher
	args = map[string]interface{}{
		"MinVersion":   "tls.VersionTLS12",
		"CipherSuites": nil,
	}
	_, err = tls.Dial("tcp", endpoint, &tls.Config{
		MinVersion:   tls.VersionTLS12,
		CipherSuites: nil, // default value
	})
	if err != nil {
		failureLog(function, args, startTime, "", "Endpoint rejects default cipher suites", err).Error()
		return
	}
	successLog(function, args, startTime)
}
// Tests whether the endpoint accepts the P-384 or P-521 elliptic curve - fail if so.
// Tests whether the endpoint accepts Curve25519 or P-256 - fail if not.
func testTLSEllipticCurves(endpoint string) {
	const function = "TLSEllipticCurves"
	startTime := time.Now()

	// Tests whether the endpoint accepts curves using non-constant time implementations.
	args := map[string]interface{}{
		"CurvePreferences": unsupportedCurves,
	}
	_, err := tls.Dial("tcp", endpoint, &tls.Config{
		MinVersion:       tls.VersionTLS12,
		CurvePreferences: unsupportedCurves,
		CipherSuites:     supportedCipherSuites,
	})
	if err == nil {
		failureLog(function, args, startTime, "", "Endpoint accepts insecure elliptic curves", err).Error()
		return
	}

	// Tests whether the endpoint accepts curves using constant time implementations.
	args = map[string]interface{}{
		"CurvePreferences": supportedCurves,
	}
	_, err = tls.Dial("tcp", endpoint, &tls.Config{
		MinVersion:       tls.VersionTLS12,
		CurvePreferences: supportedCurves,
		CipherSuites:     supportedCipherSuites,
	})
	if err != nil {
		failureLog(function, args, startTime, "", "Endpoint does not accept secure elliptic curves", err).Error()
		return
	}
	successLog(function, args, startTime)
}
func successLog(function string, args map[string]interface{}, startTime time.Time) *log.Entry {
	duration := time.Since(startTime).Nanoseconds() / 1000000
	return log.WithFields(log.Fields{
		"name":     testName,
		"function": function,
		"args":     args,
		"duration": duration,
		"status":   PASS,
	})
}

func failureLog(function string, args map[string]interface{}, startTime time.Time, alert string, message string, err error) *log.Entry {
	duration := time.Since(startTime).Nanoseconds() / 1000000
	fields := log.Fields{
		"name":     testName,
		"function": function,
		"args":     args,
		"duration": duration,
		"status":   FAIL,
		"alert":    alert,
		"message":  message,
	}
	if err != nil {
		fields["error"] = err
	}
	return log.WithFields(fields)
}

type mintJSONFormatter struct {
}

func (f *mintJSONFormatter) Format(entry *log.Entry) ([]byte, error) {
	data := make(log.Fields, len(entry.Data))
	for k, v := range entry.Data {
		switch v := v.(type) {
		case error:
			// Otherwise errors are ignored by `encoding/json`
			// https://github.com/sirupsen/logrus/issues/137
			data[k] = v.Error()
		default:
			data[k] = v
		}
	}

	serialized, err := json.Marshal(data)
	if err != nil {
		return nil, fmt.Errorf("Failed to marshal fields to JSON, %w", err)
	}
	return append(serialized, '\n'), nil
}
// Secure Go implementations of modern TLS ciphers.
// The following ciphers are excluded because:
//   - RC4 ciphers:        RC4 is broken
//   - 3DES ciphers:       Because of the 64-bit block size of DES (Sweet32)
//   - CBC-SHA256 ciphers: No countermeasures against the Lucky13 timing attack
//   - CBC-SHA ciphers:    Legacy ciphers (SHA-1) and non-constant-time
//                         implementation of CBC.
//                         (CBC-SHA ciphers can be enabled again if required)
//   - RSA key exchange ciphers: Disabled because of the dangerous PKCS1-v1.5
//                         RSA padding scheme. See Bleichenbacher attacks.
var supportedCipherSuites = []uint16{
	tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
	tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,
	tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
	tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
	tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
	tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
}

// Supported elliptic curves: Implementations are constant-time.
var supportedCurves = []tls.CurveID{tls.X25519, tls.CurveP256}

var unsupportedCipherSuites = []uint16{
	tls.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,    // Go stack contains (some) countermeasures against timing attacks (Lucky13)
	tls.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, // No countermeasures against timing attacks
	tls.TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,    // Go stack contains (some) countermeasures against timing attacks (Lucky13)
	tls.TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,        // Broken cipher
	tls.TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,     // Sweet32
	tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,      // Go stack contains (some) countermeasures against timing attacks (Lucky13)
	tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,   // No countermeasures against timing attacks
	tls.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,      // Go stack contains (some) countermeasures against timing attacks (Lucky13)
	tls.TLS_ECDHE_RSA_WITH_RC4_128_SHA,          // Broken cipher

	// all RSA-PKCS1-v1.5 ciphers are disabled - danger of Bleichenbacher attack variants
	tls.TLS_RSA_WITH_3DES_EDE_CBC_SHA,   // Sweet32
	tls.TLS_RSA_WITH_AES_128_CBC_SHA,    // Go stack contains (some) countermeasures against timing attacks (Lucky13)
	tls.TLS_RSA_WITH_AES_128_CBC_SHA256, // No countermeasures against timing attacks
	tls.TLS_RSA_WITH_AES_256_CBC_SHA,    // Go stack contains (some) countermeasures against timing attacks (Lucky13)
	tls.TLS_RSA_WITH_RC4_128_SHA,        // Broken cipher

	tls.TLS_RSA_WITH_AES_128_GCM_SHA256, // Disabled because of RSA-PKCS1-v1.5 - AES-GCM is considered secure.
	tls.TLS_RSA_WITH_AES_256_GCM_SHA384, // Disabled because of RSA-PKCS1-v1.5 - AES-GCM is considered secure.
}

// Unsupported elliptic curves: Implementations are not constant-time.
var unsupportedCurves = []tls.CurveID{tls.CurveP384, tls.CurveP521}
@@ -1,15 +0,0 @@
#!/bin/bash
#
#

# handle command line arguments
if [ $# -ne 2 ]; then
    echo "usage: run.sh <OUTPUT-LOG-FILE> <ERROR-LOG-FILE>"
    exit 1
fi

output_log_file="$1"
error_log_file="$2"

# run tests
/mint/run/core/security/tls-tests 1>>"$output_log_file" 2>"$error_log_file"
@@ -1,15 +0,0 @@
#!/bin/bash
#
#

# handle command line arguments
if [ $# -ne 2 ]; then
    echo "usage: run.sh <OUTPUT-LOG-FILE> <ERROR-LOG-FILE>"
    exit 1
fi

output_log_file="$1"
error_log_file="$2"

# run tests
/mint/run/core/versioning/tests 1>>"$output_log_file" 2>"$error_log_file"
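Both wrappers above redirect the test binaries' JSON-line output into a log file, one JSON object per test result. As an illustration of consuming such a file, here is a small sketch (the `summarize` helper is hypothetical, not part of mint) that tallies results by the `status` field emitted by the log formatters above:

```python
import json
from collections import Counter


def summarize(lines):
    """Count mint-style JSON log lines by their 'status' field."""
    counts = Counter()
    for line in lines:
        line = line.strip()
        if not line:
            continue
        try:
            counts[json.loads(line).get('status', 'UNKNOWN')] += 1
        except json.JSONDecodeError:
            counts['UNPARSED'] += 1  # e.g. stray non-JSON output in the log
    return counts


log = [
    '{"name": "TLS-tests", "function": "TLSVersions", "status": "PASS"}',
    '{"name": "TLS-tests", "function": "TLSCiphers", "status": "FAIL"}',
    '{"name": "TLS-tests", "function": "TLSEllipticCurves", "status": "PASS"}',
]
print(dict(summarize(log)))  # {'PASS': 2, 'FAIL': 1}
```

In practice the same loop would read the `<OUTPUT-LOG-FILE>` written by run.sh rather than an in-memory list.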