mirror of
https://github.com/minio/minio.git
synced 2024-12-24 06:05:55 -05:00
Remove white spaces (#3922)
This commit is contained in:
parent
e55421ebdd
commit
d4eea224d4
@ -9,57 +9,43 @@ This example assumes that you have a FreeBSD 10.x running
As root on FreeBSD, edit `/etc/rc.conf`:
```sh
zfs_enable="YES"
```
Start the ZFS service:
```sh
service zfs start
```
Create a ~4 GB file to back the pool:

```sh
dd if=/dev/zero of=/zfs bs=1M count=4000
```
Configure a loopback device on the `/zfs` file.
```sh
mdconfig -a -t vnode -f /zfs
```
Create the ZFS pool:
```sh
zpool create minio-example /dev/md0
```
```sh
df /minio-example
Filesystem    512-blocks Used   Avail Capacity  Mounted on
minio-example    7872440   38 7872402     0%    /minio-example
```
Verify that it is writable:
```sh
touch /minio-example/testfile
ls -l /minio-example/testfile
-rw-r--r--  1 root  wheel  0 Apr 26 00:51 /minio-example/testfile
```
Now you have successfully created a ZFS pool. For further reading, please refer to the [ZFS Quickstart Guide](https://www.freebsd.org/doc/handbook/zfs-quickstart.html).
@ -68,20 +54,17 @@ However, this pool is not taking advantage of any ZFS features, so let's create
```sh
zfs create minio-example/compressed-objects
zfs set compression=lz4 minio-example/compressed-objects
```
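To confirm the property was applied, you can query it back with `zfs get` (a quick sanity check; assumes the pool and dataset created above):

```sh
zfs get compression minio-example/compressed-objects
```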
To keep monitoring your pool, use:
```sh
zpool status
  pool: minio-example
 state: ONLINE
  scan: none requested
config:

	NAME            STATE     READ WRITE CKSUM
@ -89,7 +72,6 @@ config:
	  md0           ONLINE       0     0     0

errors: No known data errors
```
#### Step 2.
@ -97,26 +79,22 @@ errors: No known data errors
Now start the Minio server on ``/minio-example/compressed-objects``. Change the permissions so that this directory is accessible by a normal user:
```sh
chown -R minio-user:minio-user /minio-example/compressed-objects
```
Now log in as ``minio-user`` and start the Minio server:
```sh
curl https://dl.minio.io/server/minio/release/freebsd-amd64/minio > minio
chmod 755 minio
./minio server /minio-example/compressed-objects
```
Point your browser to http://localhost:9000 and log in with the credentials displayed on the command line.
Now you have an S3-compatible server running on top of your ZFS backend, which transparently provides disk-level compression for your uploaded objects.
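To see how much space the transparent compression is actually saving, you can inspect the dataset's compression ratio after uploading some objects (assumes the dataset created above; the reported ratio depends on your data):

```sh
zfs get compressratio minio-example/compressed-objects
```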
Thanks for using Minio, awaiting feedback :-)
#### Building Minio Server From Source
@ -21,5 +21,4 @@ type fsMetaV1 struct {
	Meta  map[string]string `json:"meta,omitempty"`
	Parts []objectPartInfo  `json:"parts,omitempty"`
}
```
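A minimal sketch of how a struct with this shape serializes to JSON; the `objectPartInfo` fields below are assumptions for illustration only, not the actual Minio definition:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical part entry; the real objectPartInfo has more fields.
type objectPartInfo struct {
	Number int   `json:"number"`
	Size   int64 `json:"size"`
}

type fsMetaV1 struct {
	Meta  map[string]string `json:"meta,omitempty"`
	Parts []objectPartInfo  `json:"parts,omitempty"`
}

// encodeMeta marshals a sample record; omitempty drops empty fields.
func encodeMeta() string {
	m := fsMetaV1{
		Meta:  map[string]string{"content-type": "text/plain"},
		Parts: []objectPartInfo{{Number: 1, Size: 1024}},
	}
	b, err := json.Marshal(m)
	if err != nil {
		panic(err)
	}
	return string(b)
}

func main() {
	fmt.Println(encodeMeta())
}
```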
@ -48,9 +48,7 @@ Minio erasure code backend is limited by design to a minimum of 4 drives and a m
Example: Start Minio server in a 12-drive setup.
```sh
minio server /mnt/export1/backend /mnt/export2/backend /mnt/export3/backend /mnt/export4/backend /mnt/export5/backend /mnt/export6/backend /mnt/export7/backend /mnt/export8/backend /mnt/export9/backend /mnt/export10/backend /mnt/export11/backend /mnt/export12/backend
```
## 3. Test your setup
@ -79,8 +79,8 @@ This doesn't apply for the writes because there is always one writer and many re
The example here shows how the contention is handled with GetObject().
GetObject() holds a read lock on `fs.json`.

```go
fsMetaPath := pathJoin(fs.fsPath, minioMetaBucket, bucketMetaPrefix, bucket, object, fsMetaJSONFile)
rlk, err := fs.rwPool.Open(fsMetaPath)
if err != nil {
@ -93,7 +93,6 @@ GetObject() holds a read lock on `fs.json`.
_, err = io.CopyBuffer(writer, reader, buf)

... after successful copy operation unlocks the read lock ...
```
A concurrent PutObject() requested on the same object attempts a write lock on `fs.json`.
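This reader/writer interplay can be sketched with Go's `sync.RWMutex` (an illustrative model only, not Minio's actual `rwPool` implementation): the writer blocks until the reader releases its lock.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// lockDemo models a GetObject read lock held while a concurrent
// PutObject waits for the write lock. It returns the event order.
func lockDemo() []string {
	var mu sync.RWMutex // stands in for the lock on fs.json
	var evMu sync.Mutex
	var events []string
	record := func(s string) {
		evMu.Lock()
		events = append(events, s)
		evMu.Unlock()
	}

	mu.RLock() // GetObject acquires the read lock
	record("GetObject: read lock held")

	done := make(chan struct{})
	go func() {
		mu.Lock() // PutObject blocks here until the reader is done
		record("PutObject: write lock acquired")
		mu.Unlock()
		close(done)
	}()

	time.Sleep(50 * time.Millisecond) // give the writer time to block
	record("GetObject: copy finished, releasing read lock")
	mu.RUnlock()
	<-done
	return events
}

func main() {
	for _, e := range lockDemo() {
		fmt.Println(e)
	}
}
```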
@ -134,4 +133,4 @@ On minio3
Once the lock is acquired, minio2 validates that the file really exists, to avoid holding a lock on an fd that has already been deleted. But this opens a race with a third server that is also attempting to write the same file before minio2 can validate its existence. It is possible that `fs.json` is created again, so the lock acquired by minio2 may be invalid and can lead to inconsistency.
This is a known problem and cannot be solved by POSIX fcntl locks; it is considered a limitation of shared filesystems.