Commit Graph

18 Commits

Author SHA1 Message Date
Praveen raj Mani ce9d36d954 Add object compression support (#6292)
Add support for streaming (golang/LZ77/snappy) compression.
2018-09-28 09:06:17 +05:30
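
As a rough illustration of what streaming compression of an object body can look like in Go, here is a minimal sketch using the github.com/golang/snappy framed writer; the helper name and the choice of that particular package are assumptions, not necessarily what this commit ships.

```
package main

import (
	"io"
	"os"

	"github.com/golang/snappy"
)

// compressStream copies src into dst as a framed snappy stream.
// Illustrative sketch only, not MinIO's actual compression path.
func compressStream(dst io.Writer, src io.Reader) error {
	sw := snappy.NewBufferedWriter(dst) // streaming, framed snappy writer
	if _, err := io.Copy(sw, src); err != nil {
		sw.Close()
		return err
	}
	return sw.Close() // flushes the final frame
}

func main() {
	// Example: compress stdin to stdout; the stream can be undone
	// with snappy.NewReader on the other side.
	if err := compressStream(os.Stdout, os.Stdin); err != nil {
		panic(err)
	}
}
```
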
Anis Elleuch aa4e2b1542 Use GetObjectNInfo in CopyObject and CopyObjectPart (#6489) 2018-09-25 12:39:46 -07:00
Krishna Srinivas ce02ab613d Simplify erasure code by separating bitrot from erasure code (#5959) 2018-08-06 15:14:08 -07:00
kannappanr f8a3fd0c2a Create logger package and rename errorIf to LogIf (#5678)
- Removing message from error logging
- Replace errors.Trace with LogIf
2018-04-05 15:04:40 -07:00
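
For illustration, a helper of this shape is roughly what an errorIf-to-LogIf rename implies; the context parameter and caller reporting below are assumptions about the logger package, not its exact API.

```
// Package logger: an illustrative sketch of a LogIf-style helper.
package logger

import (
	"context"
	"log"
	"runtime"
)

// LogIf logs err with caller information and is a no-op when err is nil,
// so call sites stay a single unconditional line.
func LogIf(ctx context.Context, err error) {
	if err == nil {
		return
	}
	_, file, line, _ := runtime.Caller(1) // report the caller, not this helper
	log.Printf("%s:%d: %v", file, line, err)
}
```
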
Aditya Manthramurthy ea8973b7d7 Return bit-rot verified data instead of re-reading from disk (#5568)
- Data was being re-read from disk after bitrot verification in order to
  return it for GetObject. Strictly speaking this does not guarantee bitrot
  protection, since a disk may return bad data on that second read, even if
  only transiently.

- This fix reads the data from disk, verifies it for bitrot, and then
  returns that same verified data to the client directly (sketched below).
2018-03-04 14:16:45 -08:00
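
A minimal sketch of the verify-then-return idea under stated assumptions: sha256 stands in for whichever bitrot algorithm is configured, and the function name is hypothetical.

```
package bitrot

import (
	"bytes"
	"crypto/sha256"
	"errors"
	"io"
)

// readAndVerify reads the stream once, hashes exactly the bytes it read,
// and only hands them to w after the bitrot check passes; a second read
// from disk is never trusted to return the same (good) data.
func readAndVerify(w io.Writer, r io.Reader, expectedSum []byte) error {
	h := sha256.New()
	buf, err := io.ReadAll(io.TeeReader(r, h)) // hash what we actually read
	if err != nil {
		return err
	}
	if !bytes.Equal(h.Sum(nil), expectedSum) {
		return errors.New("bitrot: hash mismatch")
	}
	_, err = w.Write(buf) // return the verified bytes, not a re-read
	return err
}
```
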
Harshavardhana 8efa82126b Convert errors tracer into a separate package (#5221) 2017-11-25 11:58:29 -08:00
Andreas Auernhammer 85fcee1919 erasure: simplify XL backend operations (#4649) (#4758)
This change provides new implementations of the XL backend operations:
 - create file
 - read file
 - heal file
Further, this change adds table-based tests for all three operations.

This also affects the bitrot algorithm integration. Algorithms are now
integrated in an idiomatic way (like crypto.Hash); a sketch follows this entry.
Fixes #4696
Fixes #4649
Fixes #4359
2017-08-14 18:08:42 -07:00
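
To make the crypto.Hash comparison concrete, here is a sketch of an algorithm enum whose New() hands back a hash.Hash; the type and constant names are illustrative assumptions, not the commit's exact identifiers.

```
package bitrot

import (
	"crypto/sha256"
	"hash"

	"golang.org/x/crypto/blake2b"
)

// Algorithm identifies a bitrot hashing algorithm, mirroring how
// crypto.Hash works in the standard library.
type Algorithm uint

const (
	SHA256 Algorithm = iota + 1
	BLAKE2b512
)

// New returns a fresh hash.Hash for the algorithm.
func (a Algorithm) New() hash.Hash {
	switch a {
	case SHA256:
		return sha256.New()
	case BLAKE2b512:
		h, _ := blake2b.New512(nil) // error is only possible with a bad key
		return h
	default:
		panic("bitrot: unsupported algorithm")
	}
}
```
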
Aditya Manthramurthy 8975da4e84 Add new ReadFileWithVerify storage-layer API (#4349)
This is an enhancement to the XL/distributed-XL mode. FS mode is
unaffected.

The ReadFileWithVerify storage-layer call is similar to ReadFile with
the additional functionality of performing bit-rot checking. It
accepts additional parameters for a hashing algorithm to use and the
expected hex-encoded hash string.

This patch provides a significant performance improvement because it:

1. combines the step of reading the file (during
erasure-decoding/reconstruction) with bit-rot verification;

2. limits the number of file reads; and

3. avoids transferring the file over the network for bit-rot
verification.

The ReadFile API is now implemented as ReadFileWithVerify with empty
hashing arguments.

Credits to AB and Harsha for the algorithmic improvement.

Fixes #4236.
2017-05-16 14:21:52 -07:00
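
The shape this describes might look roughly like the interface below; the parameter list and the small verification helper are assumptions for illustration, not the actual StorageAPI definition.

```
package storage

import (
	"bytes"
	"encoding/hex"
	"errors"
	"hash"
)

// StorageAPI sketch: ReadFile behaves like ReadFileWithVerify called with
// empty hashing arguments, as described above. Names are illustrative.
type StorageAPI interface {
	ReadFile(volume, path string, offset int64, buf []byte) (int64, error)
	ReadFileWithVerify(volume, path string, offset int64, buf []byte,
		algo string, expectedHex string) (int64, error)
}

// checkSum compares the hash accumulated while reading against the expected
// hex-encoded value, so verification needs no second pass over the file.
func checkSum(h hash.Hash, expectedHex string) error {
	want, err := hex.DecodeString(expectedHex)
	if err != nil {
		return err
	}
	if !bytes.Equal(h.Sum(nil), want) {
		return errors.New("bitrot: checksum mismatch")
	}
	return nil
}
```
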
Harshavardhana bcc5b6e1ef xl: Rename getOrderedDisks as shuffleDisks appropriately. (#3796)
This PR is a readability cleanup:

- rename getOrderedDisks to shuffleDisks
- rename getOrderedPartsMetadata to shufflePartsMetadata

The distribution is now the second argument instead of the
primary input argument, for brevity (see the sketch after this entry).

Also replaces the type-cast int64(0) pattern with direct declarations
such as `var variable int64` everywhere.
2017-02-24 09:20:40 -08:00
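
Conceptually, shuffleDisks just reorders the disk slice by the erasure distribution; a minimal sketch follows, with StorageAPI abstracted away and a 1-based distribution assumed rather than taken from the actual code.

```
package xl

// StorageAPI stands in for the per-disk interface; kept abstract here.
type StorageAPI interface{}

// shuffleDisks reorders disks according to a 1-based erasure distribution,
// taking disks first and distribution second, as the rename describes.
func shuffleDisks(disks []StorageAPI, distribution []int) []StorageAPI {
	if len(distribution) == 0 {
		return disks
	}
	shuffled := make([]StorageAPI, len(disks))
	for index, blockIndex := range distribution {
		shuffled[blockIndex-1] = disks[index]
	}
	return shuffled
}
```
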
Andreas Auernhammer f38222c0cc update the blake2b implementation (#3724)
Fixes a performance bug caused by SSE-AVX register savings on amd64.
2017-02-09 15:01:00 -08:00
Harshavardhana d41dcb784b Move to blake2b-simd due to perf problems in golang.org/x/crypto
Ref https://github.com/golang/go/issues/18563
2017-01-24 18:07:34 -08:00
Harshavardhana 62f8343879 Add constants for commonly used values. (#3588)
This is a consolidation effort to avoid naked strings
in the codebase. Whenever possible, use constants that
can be reused elsewhere.

This also fixes `goconst ./...` reported issues.
2017-01-18 12:24:34 -08:00
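
For illustration, this is the kind of change `goconst` nudges toward: repeated string literals hoisted into shared constants. The names and values below are hypothetical examples, not the commit's actual list.

```
package cmd

// Shared constants instead of repeating naked string literals at call sites.
const (
	globalMinioDefaultRegion = "us-east-1"      // hypothetical example value
	globalMinioModeFS        = "mode-server-fs" // hypothetical example value
	globalMinioModeXL        = "mode-server-xl" // hypothetical example value
)
```
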
Harshavardhana 855ef4f1aa Fix typo in erasure-utils.go 2016-12-22 10:47:10 -08:00
Harshavardhana 5878fcc086 bit-rot: Default to sha256 on ARM64. (#3488)
This is to utilize an optimized version of the
sha256 checksum which @fwessels implemented.

blake2b lacks such optimizations on the ARM platform,
so this change gives us a significant boost in performance.

blake2b on ARM64 is, as expected, slower:
```
BenchmarkSize1K-4           	   30000	     44015 ns/op	  23.26 MB/s
BenchmarkSize8K-4           	    5000	    335448 ns/op	  24.42 MB/s
BenchmarkSize32K-4          	    1000	   1333960 ns/op	  24.56 MB/s
BenchmarkSize128K-4         	     300	   5328286 ns/op	  24.60 MB/s
```

sha256 on ARM64 is faster by orders of magnitude, coming close to
the AVX performance of blake2b:
```
BenchmarkHash8Bytes-4	 1000000	      1446 ns/op	   5.53 MB/s
BenchmarkHash1K-4    	  500000	      3229 ns/op	 317.12 MB/s
BenchmarkHash8K-4    	  100000	     14430 ns/op	 567.69 MB/s
BenchmarkHash1M-4    	    1000	   1640126 ns/op	 639.33 MB/s
```
2016-12-22 08:25:03 -08:00
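
A platform-dependent default like this is typically picked off runtime.GOARCH; a minimal sketch follows, with the variable name and the exact mapping being assumptions.

```
package bitrot

import "runtime"

// defaultBitrotAlgo selects the default bitrot checksum per platform:
// the optimized sha256 on arm64, blake2b elsewhere (illustrative mapping).
var defaultBitrotAlgo = func() string {
	if runtime.GOARCH == "arm64" {
		return "sha256"
	}
	return "blake2b"
}()
```
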
Andreas Auernhammer 1ac36a95aa replace blake2b implementation (#3481)
Replaces the blake2b-simd implementation with the golang.org/x/crypto
implementation:

```
name        old time/op    new time/op     delta
Size64-8       715ns ±13%      614ns ± 3%    ~     (p=0.084 n=6+6)
Size128-8      612ns ± 5%      634ns ± 8%    ~     (p=0.084 n=6+6)
Size1K-8      2.18µs ± 5%     2.09µs ± 7%    ~     (p=0.084 n=6+6)
Size8K-8      13.1µs ± 2%     13.4µs ± 3%    ~     (p=0.084 n=6+6)
Size32K-8     48.5µs ± 1%     49.5µs ± 3%    ~     (p=0.775 n=6+6)
Size128K-8     199µs ± 0%      198µs ± 3%    ~     (p=0.468 n=6+6)

name        old speed      new speed       delta
Size64-8    92.6MB/s ±11%  104.2MB/s ± 3%    ~     (p=0.139 n=6+6)
Size128-8    208MB/s ± 6%    202MB/s ± 8%    ~     (p=0.102 n=6+6)
Size1K-8     466MB/s ± 7%    492MB/s ± 7%    ~     (p=0.139 n=6+6)
Size8K-8     621MB/s ± 2%    610MB/s ± 3%    ~     (p=0.102 n=6+6)
Size32K-8    672MB/s ± 2%    669MB/s ± 1%    ~     (p=0.818 n=6+6)
Size128K-8   657MB/s ± 1%    672MB/s ± 0%  +2.28%  (p=0.002 n=6+6)

name        old time/op   new time/op   delta
Size64-4      334ns ± 1%    243ns ± 0%  -27.14%  (p=0.029 n=4+4)
Size128-4     296ns ± 1%    242ns ± 0%  -18.21%  (p=0.029 n=4+4)
Size1K-4     1.44µs ± 0%   1.28µs ± 0%  -10.83%  (p=0.029 n=4+4)
Size8K-4     10.0µs ± 0%    9.4µs ± 0%   -6.23%  (p=0.029 n=4+4)
Size32K-4    39.8µs ± 1%   37.3µs ± 0%   -6.31%  (p=0.029 n=4+4)
Size128K-4    162µs ± 3%    149µs ± 0%   -7.72%  (p=0.029 n=4+4)

name        old speed     new speed     delta
Size64-4    192MB/s ± 1%  263MB/s ± 0%  +37.24%  (p=0.029 n=4+4)
Size128-4   431MB/s ± 0%  526MB/s ± 0%  +22.04%  (p=0.029 n=4+4)
Size1K-4    713MB/s ± 0%  800MB/s ± 0%  +12.17%  (p=0.029 n=4+4)
Size8K-4    815MB/s ± 0%  869MB/s ± 0%   +6.64%  (p=0.029 n=4+4)
Size32K-4   823MB/s ± 1%  878MB/s ± 0%   +6.72%  (p=0.029 n=4+4)
Size128K-4  810MB/s ± 3%  877MB/s ± 0%   +8.23%  (p=0.029 n=4+4)
```
See: https://go-review.googlesource.com/#/c/34319/
2016-12-21 14:20:01 -08:00
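
For reference, hashing a stream with the golang.org/x/crypto/blake2b package that this commit switches to looks roughly like this (the surrounding program is just an illustrative harness):

```
package main

import (
	"encoding/hex"
	"fmt"
	"io"
	"os"

	"golang.org/x/crypto/blake2b"
)

func main() {
	// Unkeyed BLAKE2b-512 over stdin, streamed via io.Copy.
	h, err := blake2b.New512(nil) // nil key => plain hash
	if err != nil {
		panic(err)
	}
	if _, err := io.Copy(h, os.Stdin); err != nil {
		panic(err)
	}
	fmt.Println(hex.EncodeToString(h.Sum(nil)))
}
```
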
Harshavardhana b363709c11 caching: Optimize memory allocations. (#3405)
This change brings in changes at multiple places:

 - Reuse buffers at almost all locations, ranging
   from rpc, fs, xl, checksum etc.
 - Change caching behavior to disable itself
   under low-memory conditions, i.e. < 8GB of RAM.
 - Only objects up to 1/10th the size of the cache
   are cached; for example, with a 4GB cache the
   maximum object size that will be cached is 400MB.
   This is an optimization to cache more objects
   rather than a few larger ones.
 - If the object cache is enabled, the default GC
   percent is reduced to 20% in light of newly
   observed GC behavior. If cache utilization reaches
   75% of the maximum value, the GC percent is reduced
   to 10% to make GC more aggressive.
 - Do not use *bytes.Buffer* due to its growth
   requirements: for every allocation *bytes.Buffer*
   allocates an additional buffer for its internal
   purposes. This is undesirable here, so a new
   cappedWriter was implemented which is capped to a
   desired size; beyond this all writes are rejected
   (a sketch follows this entry).

Possible fix for #3403.
2016-12-08 20:35:07 -08:00
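
A sketch of the cappedWriter idea under stated assumptions: a preallocated, fixed-capacity writer that rejects writes past its cap instead of growing the way bytes.Buffer does. Field, method, and error names are hypothetical.

```
package cache

import "errors"

// errWriteCapExceeded is returned once a write would grow past the cap.
var errWriteCapExceeded = errors.New("write exceeds cap")

// cappedWriter collects bytes into a preallocated slice and never grows,
// unlike bytes.Buffer, which over-allocates for its internal bookkeeping.
type cappedWriter struct {
	buf []byte // preallocated up front to the full cap
	n   int    // bytes written so far
}

func newCappedWriter(capacity int64) *cappedWriter {
	return &cappedWriter{buf: make([]byte, capacity)}
}

// Write accepts p only if it fits entirely within the remaining capacity.
func (w *cappedWriter) Write(p []byte) (int, error) {
	if w.n+len(p) > len(w.buf) {
		return 0, errWriteCapExceeded // reject instead of growing
	}
	n := copy(w.buf[w.n:], p)
	w.n += n
	return n, nil
}

// Bytes returns the written prefix of the underlying buffer.
func (w *cappedWriter) Bytes() []byte { return w.buf[:w.n] }
```
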
Krishna Srinivas 9358ee011b logging: Print stack trace in case of errors.
fixes #1827
2016-09-13 21:18:30 -07:00
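
Printing a stack trace next to an error can be done with the standard library's runtime/debug package; a minimal sketch follows, with the helper name being an assumption in line with the errorIf naming of that era.

```
package logger

import (
	"log"
	"runtime/debug"
)

// errorIf logs the error together with the current goroutine's stack trace;
// it is a no-op when err is nil.
func errorIf(err error, msg string) {
	if err == nil {
		return
	}
	log.Printf("%s: %v\n%s", msg, err, debug.Stack())
}
```
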
Harshavardhana bccf549463 server: Move all the top level files into cmd folder. (#2490)
This brings over a change that was already made for the 'mc'
package, keeping the repository root clean and providing a
cleaner GitHub drop-in experience.
2016-08-18 16:23:42 -07:00