Remove white spaces (#3922)

This commit is contained in:
Nitish Tiwari 2017-03-17 21:53:22 +05:30 committed by Harshavardhana
parent e55421ebdd
commit d4eea224d4
4 changed files with 18 additions and 44 deletions


@@ -9,57 +9,43 @@ This example assumes that you have a FreeBSD 10.x running
As root on the FreeBSD machine, edit `/etc/rc.conf`
```sh
zfs_enable="YES"
```
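If you prefer not to edit the file by hand, the same setting can be applied with `sysrc(8)` (a small sketch; assumes the stock FreeBSD base system):
```sh
# Set zfs_enable in /etc/rc.conf without opening an editor
sysrc zfs_enable="YES"
```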
Start ZFS service
```sh
service zfs start
```
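To confirm that ZFS actually came up, you can check that the kernel module is loaded (an optional sanity check):
```sh
# zfs.ko should appear in the module list once the service has started
kldstat | grep zfs
```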
Create a file to serve as backing storage for the pool
```sh
dd if=/dev/zero of=/zfs bs=1M count=4000
```
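You can quickly confirm the backing file was created at the expected size:
```sh
# Expect a file of roughly 4 GB at /zfs
ls -lh /zfs
```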
Configure a loopback device on the `/zfs` file.
```sh
mdconfig -a -t vnode -f /zfs
```
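`mdconfig -l` lists the memory disks currently configured, so you can confirm which unit was attached (the steps below assume it came up as `md0`):
```sh
# Should print the unit created above, e.g. md0, with its backing file
mdconfig -l -v
```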
Create a ZFS pool
```sh
zpool create minio-example /dev/md0
```
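You can sanity-check the new pool before looking at the mount (sizes will vary with your backing file):
```sh
# The new pool should appear with its size and ONLINE health
zpool list minio-example
```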
Verify that the pool is mounted
```sh
df /minio-example
Filesystem 512-blocks Used Avail Capacity Mounted on
minio-example 7872440 38 7872402 0% /minio-example
```
Verify that it is writable
```sh
touch /minio-example/testfile
ls -l /minio-example/testfile
-rw-r--r-- 1 root wheel 0 Apr 26 00:51 /minio-example/testfile
```
Now you have successfully created a ZFS pool. For further reading, please refer to the [ZFS Quickstart Guide](https://www.freebsd.org/doc/handbook/zfs-quickstart.html)
@@ -68,20 +54,17 @@ However, this pool is not taking advantage of any ZFS features, so let's create
```sh
zfs create minio-example/compressed-objects
zfs set compression=lz4 minio-example/compressed-objects
```
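To verify that the compression property took effect on the new dataset:
```sh
# Should report compression = lz4 with source "local"
zfs get compression minio-example/compressed-objects
```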
To keep monitoring your pool, use
```sh
zpool status
pool: minio-example
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
@@ -89,7 +72,6 @@ config:
md0 ONLINE 0 0 0
errors: No known data errors
```
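For ongoing monitoring rather than a one-shot status, `zpool iostat` with an interval is one option (the 5-second interval here is an arbitrary choice):
```sh
# Print I/O statistics for the pool every 5 seconds until interrupted
zpool iostat minio-example 5
```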
#### Step 2.
@@ -97,26 +79,22 @@ errors: No known data errors
Now start the minio server on ``/minio-example/compressed-objects``. First, change the permissions so that this directory is accessible by a normal user
```sh
chown -R minio-user:minio-user /minio-example/compressed-objects
```
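The ``chown`` above assumes a ``minio-user`` account already exists. If it does not, a minimal sketch using FreeBSD's `pw(8)` might look like this (the home directory and shell are illustrative choices, not requirements):
```sh
# Hypothetical account setup: create minio-user with a home directory
# and a basic shell so it can start the server
pw useradd minio-user -m -s /bin/sh
```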
Now log in as ``minio-user`` and start the minio server.
```sh
curl https://dl.minio.io/server/minio/release/freebsd-amd64/minio > minio
chmod 755 minio
./minio server /minio-example/compressed-objects
```
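Alternatively, if you stay logged in as root, you can launch the server under the ``minio-user`` account with `su(1)` (a sketch; assumes ``minio-user`` has a working shell):
```sh
# -m keeps the current environment; the quoted command is passed to
# minio-user's shell
su -m minio-user -c './minio server /minio-example/compressed-objects'
```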
Point your browser to http://localhost:9000 and log in with the credentials displayed on the command line.
Now you have an S3-compatible server running on top of your ZFS backend, which transparently provides disk-level compression for your uploaded objects.
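Once you have uploaded a few objects, you can see how much lz4 is actually saving (`compressratio` is a read-only ZFS property):
```sh
# A ratio above 1.00x means the dataset is compressing your objects
zfs get compressratio minio-example/compressed-objects
```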
Thanks for using Minio, awaiting feedback :-)
#### Building Minio Server From Source


@@ -21,5 +21,4 @@ type fsMetaV1 struct {
	Meta  map[string]string `json:"meta,omitempty"`
	Parts []objectPartInfo  `json:"parts,omitempty"`
}
```


@@ -48,9 +48,7 @@ Minio erasure code backend is limited by design to a minimum of 4 drives and a m
Example: Start Minio server in a 12-drive setup.
```sh
minio server /mnt/export1/backend /mnt/export2/backend /mnt/export3/backend /mnt/export4/backend /mnt/export5/backend /mnt/export6/backend /mnt/export7/backend /mnt/export8/backend /mnt/export9/backend /mnt/export10/backend /mnt/export11/backend /mnt/export12/backend
```
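Since the erasure code backend needs at least 4 drives, the smallest valid setup would look like this (the paths are illustrative):
```sh
# Minimal erasure-coded deployment: 4 drives
minio server /mnt/export1/backend /mnt/export2/backend /mnt/export3/backend /mnt/export4/backend
```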
## 3. Test your setup


@@ -79,8 +79,8 @@ This doesn't apply for the writes because there is always one writer and many re
An example here shows how the contention is handled with GetObject().
GetObject() holds a read lock on `fs.json`.
```go
fsMetaPath := pathJoin(fs.fsPath, minioMetaBucket, bucketMetaPrefix, bucket, object, fsMetaJSONFile)
rlk, err := fs.rwPool.Open(fsMetaPath)
if err != nil {
@@ -93,7 +93,6 @@ GetObject() holds a read lock on `fs.json`.
_, err = io.CopyBuffer(writer, reader, buf)
... after successful copy operation unlocks the read lock ...
```
When a concurrent PutObject is requested on the same object, PutObject() attempts a write lock on `fs.json`.
@@ -134,4 +133,4 @@ On minio3
Once the lock is acquired, minio2 validates whether the file really exists, to avoid holding a lock on an fd that has already been deleted. But this opens a race with a third server that attempts to write the same file before minio2 can validate its existence: `fs.json` may be recreated in the meantime, so the lock acquired by minio2 might be invalid and can lead to inconsistency.
This is a known problem and cannot be solved by POSIX fcntl locks; these are considered to be the limits of a shared filesystem.