minio/cmd/xl-storage-disk-id-check.go

// Copyright (c) 2015-2024 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package cmd
import (
"context"
"errors"
"fmt"
"io"
"math/rand"
"runtime"
"strconv"
"strings"
"sync"
"sync/atomic"
"time"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio/internal/cachevalue"
xioutil "github.com/minio/minio/internal/ioutil"
"github.com/minio/minio/internal/logger"
)
//go:generate stringer -type=storageMetric -trimprefix=storageMetric $GOFILE
type storageMetric uint8
const (
storageMetricMakeVolBulk storageMetric = iota
storageMetricMakeVol
storageMetricListVols
storageMetricStatVol
storageMetricDeleteVol
storageMetricWalkDir
storageMetricListDir
storageMetricReadFile
storageMetricAppendFile
storageMetricCreateFile
storageMetricReadFileStream
storageMetricRenameFile
storageMetricRenameData
storageMetricCheckParts
storageMetricDelete
storageMetricDeleteVersions
storageMetricVerifyFile
storageMetricWriteAll
storageMetricDeleteVersion
storageMetricWriteMetadata
storageMetricUpdateMetadata
storageMetricReadVersion
storageMetricReadXL
storageMetricReadAll
storageMetricStatInfoFile
storageMetricReadMultiple
storageMetricDeleteAbandonedParts
storageMetricDiskInfo
// .... add more
storageMetricLast
)
// xlStorageDiskIDCheck detects changes in the underlying disk and tracks per-drive health, API call counts and latencies.
type xlStorageDiskIDCheck struct {
totalWrites atomic.Uint64
totalDeletes atomic.Uint64
totalErrsAvailability atomic.Uint64 // Captures all data availability errors such as faulty disk, timeout errors.
totalErrsTimeout atomic.Uint64 // Captures all timeout only errors
// apiCalls should be placed first so alignment is guaranteed for atomic operations.
apiCalls [storageMetricLast]uint64
apiLatencies [storageMetricLast]*lockedLastMinuteLatency
diskID string
storage *xlStorage
health *diskHealthTracker
healthCheck bool
metricsCache *cachevalue.Cache[DiskMetrics]
diskCtx context.Context
diskCancel context.CancelFunc
}
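// getMetrics returns per-API call counts and last-minute latency totals for this drive, refreshed through a short-lived cache; the timeout and availability error counters are always read fresh.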
func (p *xlStorageDiskIDCheck) getMetrics() DiskMetrics {
p.metricsCache.InitOnce(5*time.Second,
cachevalue.Opts{},
func() (DiskMetrics, error) {
diskMetric := DiskMetrics{
LastMinute: make(map[string]AccElem, len(p.apiLatencies)),
APICalls: make(map[string]uint64, len(p.apiCalls)),
}
for i, v := range p.apiLatencies {
diskMetric.LastMinute[storageMetric(i).String()] = v.total()
}
for i := range p.apiCalls {
diskMetric.APICalls[storageMetric(i).String()] = atomic.LoadUint64(&p.apiCalls[i])
}
return diskMetric, nil
},
)
diskMetric, _ := p.metricsCache.Get()
// These values do not need to be cached.
diskMetric.TotalErrorsTimeout = p.totalErrsTimeout.Load()
diskMetric.TotalErrorsAvailability = p.totalErrsAvailability.Load()
return diskMetric
}
// lockedLastMinuteLatency accumulates per-second totals lock-free; rolling them into the last-minute window takes the mutex.
type lockedLastMinuteLatency struct {
cachedSec int64
cached atomic.Pointer[AccElem]
mu sync.Mutex
init sync.Once
lastMinuteLatency
}
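// add records a single call duration with no payload size.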
func (e *lockedLastMinuteLatency) add(value time.Duration) {
e.addSize(value, 0)
}
// addSize will add a duration and size.
func (e *lockedLastMinuteLatency) addSize(value time.Duration, sz int64) {
// alloc on every call, so we have a clean entry to swap in.
t := time.Now().Unix()
e.init.Do(func() {
e.cached.Store(&AccElem{})
atomic.StoreInt64(&e.cachedSec, t)
})
acc := e.cached.Load()
if lastT := atomic.LoadInt64(&e.cachedSec); lastT != t {
// Check if lastT was changed by someone else.
if atomic.CompareAndSwapInt64(&e.cachedSec, lastT, t) {
// Now we swap in a new accumulator.
newAcc := &AccElem{}
old := e.cached.Swap(newAcc)
var a AccElem
a.Size = atomic.LoadInt64(&old.Size)
a.Total = atomic.LoadInt64(&old.Total)
a.N = atomic.LoadInt64(&old.N)
e.mu.Lock()
e.lastMinuteLatency.addAll(t-1, a)
e.mu.Unlock()
acc = newAcc
} else {
// We may be able to grab the new accumulator by yielding.
runtime.Gosched()
acc = e.cached.Load()
}
}
atomic.AddInt64(&acc.N, 1)
atomic.AddInt64(&acc.Total, int64(value))
atomic.AddInt64(&acc.Size, sz)
}
// total returns the total call count and latency for the last minute.
func (e *lockedLastMinuteLatency) total() AccElem {
e.mu.Lock()
defer e.mu.Unlock()
return e.lastMinuteLatency.getTotal()
}
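// newXLStorageDiskIDCheck wraps the given xlStorage with disk-ID validation, per-API metrics and, when drive monitoring is enabled, a background drive writability check.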
func newXLStorageDiskIDCheck(storage *xlStorage, healthCheck bool) *xlStorageDiskIDCheck {
xl := xlStorageDiskIDCheck{
storage: storage,
health: newDiskHealthTracker(),
healthCheck: healthCheck && globalDriveMonitoring,
metricsCache: cachevalue.New[DiskMetrics](),
}
xl.totalWrites.Store(xl.storage.getWriteAttribute())
xl.totalDeletes.Store(xl.storage.getDeleteAttribute())
xl.diskCtx, xl.diskCancel = context.WithCancel(context.TODO())
for i := range xl.apiLatencies[:] {
xl.apiLatencies[i] = &lockedLastMinuteLatency{}
}
if xl.healthCheck {
go xl.monitorDiskWritable(xl.diskCtx)
}
return &xl
}
func (p *xlStorageDiskIDCheck) String() string {
return p.storage.String()
}
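// IsOnline reports whether the backend drive is reachable and still carries the disk ID this wrapper expects.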
func (p *xlStorageDiskIDCheck) IsOnline() bool {
storedDiskID, err := p.storage.GetDiskID()
if err != nil {
return false
}
return storedDiskID == p.diskID
}
func (p *xlStorageDiskIDCheck) LastConn() time.Time {
return p.storage.LastConn()
}
func (p *xlStorageDiskIDCheck) IsLocal() bool {
return p.storage.IsLocal()
}
func (p *xlStorageDiskIDCheck) Endpoint() Endpoint {
return p.storage.Endpoint()
}
func (p *xlStorageDiskIDCheck) Hostname() string {
return p.storage.Hostname()
}
func (p *xlStorageDiskIDCheck) Healing() *healingTracker {
return p.storage.Healing()
}
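// NSScanner runs the data usage scanner on the backend drive after verifying the drive is not stale.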
func (p *xlStorageDiskIDCheck) NSScanner(ctx context.Context, cache dataUsageCache, updates chan<- dataUsageEntry, scanMode madmin.HealScanMode, _ func() bool) (dataUsageCache, error) {
if contextCanceled(ctx) {
xioutil.SafeClose(updates)
return dataUsageCache{}, ctx.Err()
}
if err := p.checkDiskStale(); err != nil {
xioutil.SafeClose(updates)
return dataUsageCache{}, err
}
weSleep := func() bool {
return scannerIdleMode.Load() == 0
}
return p.storage.NSScanner(ctx, cache, updates, scanMode, weSleep)
}
func (p *xlStorageDiskIDCheck) SetFormatData(b []byte) {
p.storage.SetFormatData(b)
}
func (p *xlStorageDiskIDCheck) GetDiskLoc() (poolIdx, setIdx, diskIdx int) {
return p.storage.GetDiskLoc()
}
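// Close stops the background disk monitoring and closes the underlying storage.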
func (p *xlStorageDiskIDCheck) Close() error {
p.diskCancel()
return p.storage.Close()
}
func (p *xlStorageDiskIDCheck) GetDiskID() (string, error) {
return p.storage.GetDiskID()
}
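// SetDiskID sets the disk ID that subsequent calls are validated against.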
func (p *xlStorageDiskIDCheck) SetDiskID(id string) {
p.diskID = id
}
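// checkDiskStale returns errDiskNotFound when the backend no longer reports the disk ID this wrapper was bound to.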
func (p *xlStorageDiskIDCheck) checkDiskStale() error {
if p.diskID == "" {
// For empty disk-id we allow the call as the server might be
// coming up and trying to read format.json or create format.json
return nil
}
storedDiskID, err := p.storage.GetDiskID()
if err != nil {
// return any error generated while reading `format.json`
return err
}
if err == nil && p.diskID == storedDiskID {
return nil
}
// not the same disk we remember, take it offline.
return errDiskNotFound
}
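// DiskInfo returns drive information along with health and error counters; it returns errFaultyDisk without querying the drive when the health tracker has marked it faulty.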
func (p *xlStorageDiskIDCheck) DiskInfo(ctx context.Context, opts DiskInfoOptions) (info DiskInfo, err error) {
if contextCanceled(ctx) {
return DiskInfo{}, ctx.Err()
}
si := p.updateStorageMetrics(storageMetricDiskInfo)
defer si(&err)
if opts.NoOp {
if opts.Metrics {
info.Metrics = p.getMetrics()
}
info.Metrics.TotalWrites = p.totalWrites.Load()
info.Metrics.TotalDeletes = p.totalDeletes.Load()
info.Metrics.TotalWaiting = uint32(p.health.waiting.Load())
info.Metrics.TotalErrorsTimeout = p.totalErrsTimeout.Load()
info.Metrics.TotalErrorsAvailability = p.totalErrsAvailability.Load()
if p.health.isFaulty() {
// if disk is already faulty return faulty for 'mc admin info' output and prometheus alerts.
return info, errFaultyDisk
}
return info, nil
}
defer func() {
if opts.Metrics {
info.Metrics = p.getMetrics()
}
info.Metrics.TotalWrites = p.totalWrites.Load()
info.Metrics.TotalDeletes = p.totalDeletes.Load()
info.Metrics.TotalWaiting = uint32(p.health.waiting.Load())
info.Metrics.TotalErrorsTimeout = p.totalErrsTimeout.Load()
info.Metrics.TotalErrorsAvailability = p.totalErrsAvailability.Load()
}()
if p.health.isFaulty() {
// if disk is already faulty return faulty for 'mc admin info' output and prometheus alerts.
return info, errFaultyDisk
}
info, err = p.storage.DiskInfo(ctx, opts)
if err != nil {
return info, err
}
// check cached diskID against backend
// only if it's non-empty.
if p.diskID != "" && p.diskID != info.ID {
return info, errDiskNotFound
}
return info, nil
}
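// MakeVolBulk creates the given volumes on the backend drive, bounded by the configured drive timeout.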
func (p *xlStorageDiskIDCheck) MakeVolBulk(ctx context.Context, volumes ...string) (err error) {
ctx, done, err := p.TrackDiskHealth(ctx, storageMetricMakeVolBulk, volumes...)
if err != nil {
return err
}
defer done(&err)
w := xioutil.NewDeadlineWorker(globalDriveConfig.GetMaxTimeout())
return w.Run(func() error { return p.storage.MakeVolBulk(ctx, volumes...) })
}
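// MakeVol creates a single volume on the backend drive, bounded by the configured drive timeout.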
func (p *xlStorageDiskIDCheck) MakeVol(ctx context.Context, volume string) (err error) {
ctx, done, err := p.TrackDiskHealth(ctx, storageMetricMakeVol, volume)
if err != nil {
return err
}
defer done(&err)
w := xioutil.NewDeadlineWorker(globalDriveConfig.GetMaxTimeout())
return w.Run(func() error { return p.storage.MakeVol(ctx, volume) })
}
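// ListVols lists all volumes present on the backend drive.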
func (p *xlStorageDiskIDCheck) ListVols(ctx context.Context) (vi []VolInfo, err error) {
ctx, done, err := p.TrackDiskHealth(ctx, storageMetricListVols, "/")
if err != nil {
return nil, err
}
defer done(&err)
return p.storage.ListVols(ctx)
}
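// StatVol returns information about the given volume, bounded by the configured drive timeout.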
func (p *xlStorageDiskIDCheck) StatVol(ctx context.Context, volume string) (vol VolInfo, err error) {
ctx, done, err := p.TrackDiskHealth(ctx, storageMetricStatVol, volume)
if err != nil {
return vol, err
}
defer done(&err)
return xioutil.WithDeadline[VolInfo](ctx, globalDriveConfig.GetMaxTimeout(), func(ctx context.Context) (result VolInfo, err error) {
return p.storage.StatVol(ctx, volume)
})
}
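// DeleteVol deletes the given volume on the backend drive, bounded by the configured drive timeout.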
func (p *xlStorageDiskIDCheck) DeleteVol(ctx context.Context, volume string, forceDelete bool) (err error) {
ctx, done, err := p.TrackDiskHealth(ctx, storageMetricDeleteVol, volume)
if err != nil {
return err
}
defer done(&err)
w := xioutil.NewDeadlineWorker(globalDriveConfig.GetMaxTimeout())
return w.Run(func() error { return p.storage.DeleteVol(ctx, volume, forceDelete) })
}
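// ListDir lists entries of the directory dirPath inside the given volume, limited by count.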
func (p *xlStorageDiskIDCheck) ListDir(ctx context.Context, origvolume, volume, dirPath string, count int) (s []string, err error) {
ctx, done, err := p.TrackDiskHealth(ctx, storageMetricListDir, volume, dirPath)
if err != nil {
return nil, err
}
defer done(&err)
return p.storage.ListDir(ctx, origvolume, volume, dirPath, count)
}
// ReadFile is a legacy API that does not enforce per-call deadlines.
func (p *xlStorageDiskIDCheck) ReadFile(ctx context.Context, volume string, path string, offset int64, buf []byte, verifier *BitrotVerifier) (n int64, err error) {
ctx, done, err := p.TrackDiskHealth(ctx, storageMetricReadFile, volume, path)
if err != nil {
return 0, err
}
defer done(&err)
return xioutil.WithDeadline[int64](ctx, globalDriveConfig.GetMaxTimeout(), func(ctx context.Context) (result int64, err error) {
return p.storage.ReadFile(ctx, volume, path, offset, buf, verifier)
})
}
// Legacy API - does not have any deadlines
func (p *xlStorageDiskIDCheck) AppendFile(ctx context.Context, volume string, path string, buf []byte) (err error) {
ctx, done, err := p.TrackDiskHealth(ctx, storageMetricAppendFile, volume, path)
if err != nil {
return err
}
defer done(&err)
w := xioutil.NewDeadlineWorker(globalDriveConfig.GetMaxTimeout())
return w.Run(func() error {
return p.storage.AppendFile(ctx, volume, path, buf)
})
}
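// CreateFile wraps p.storage.CreateFile with disk health tracking for the write.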
func (p *xlStorageDiskIDCheck) CreateFile(ctx context.Context, origvolume, volume, path string, size int64, reader io.Reader) (err error) {
ctx, done, err := p.TrackDiskHealth(ctx, storageMetricCreateFile, volume, path)
if err != nil {
return err
}
defer done(&err)
return p.storage.CreateFile(ctx, origvolume, volume, path, size, io.NopCloser(reader))
}
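// ReadFileStream wraps p.storage.ReadFileStream with disk health tracking and
// the configured drive max timeout, returning a reader for the requested range.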
func (p *xlStorageDiskIDCheck) ReadFileStream(ctx context.Context, volume, path string, offset, length int64) (io.ReadCloser, error) {
ctx, done, err := p.TrackDiskHealth(ctx, storageMetricReadFileStream, volume, path)
if err != nil {
return nil, err
}
defer done(&err)
return xioutil.WithDeadline[io.ReadCloser](ctx, globalDriveConfig.GetMaxTimeout(), func(ctx context.Context) (result io.ReadCloser, err error) {
return p.storage.ReadFileStream(ctx, volume, path, offset, length)
})
}
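// RenameFile wraps p.storage.RenameFile with disk health tracking and
// the configured drive max timeout.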
func (p *xlStorageDiskIDCheck) RenameFile(ctx context.Context, srcVolume, srcPath, dstVolume, dstPath string) (err error) {
ctx, done, err := p.TrackDiskHealth(ctx, storageMetricRenameFile, srcVolume, srcPath, dstVolume, dstPath)
if err != nil {
return err
}
defer done(&err)
w := xioutil.NewDeadlineWorker(globalDriveConfig.GetMaxTimeout())
return w.Run(func() error { return p.storage.RenameFile(ctx, srcVolume, srcPath, dstVolume, dstPath) })
}
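// RenameData wraps p.storage.RenameData with disk health tracking and
// the configured drive max timeout; successful renames are counted as drive writes.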
func (p *xlStorageDiskIDCheck) RenameData(ctx context.Context, srcVolume, srcPath string, fi FileInfo, dstVolume, dstPath string, opts RenameOptions) (sign uint64, err error) {
ctx, done, err := p.TrackDiskHealth(ctx, storageMetricRenameData, srcPath, fi.DataDir, dstVolume, dstPath)
if err != nil {
return 0, err
}
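	// Count a successful rename as a drive write for monitoring, unless
	// access checks are skipped for the destination volume.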
defer func() {
if err == nil && !skipAccessChecks(dstVolume) {
p.storage.setWriteAttribute(p.totalWrites.Add(1))
}
done(&err)
}()
return xioutil.WithDeadline[uint64](ctx, globalDriveConfig.GetMaxTimeout(), func(ctx context.Context) (result uint64, err error) {
return p.storage.RenameData(ctx, srcVolume, srcPath, fi, dstVolume, dstPath, opts)
})
}
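// CheckParts wraps p.storage.CheckParts with disk health tracking and
// the configured drive max timeout.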
func (p *xlStorageDiskIDCheck) CheckParts(ctx context.Context, volume string, path string, fi FileInfo) (err error) {
ctx, done, err := p.TrackDiskHealth(ctx, storageMetricCheckParts, volume, path)
if err != nil {
return err
}
defer done(&err)
w := xioutil.NewDeadlineWorker(globalDriveConfig.GetMaxTimeout())
return w.Run(func() error { return p.storage.CheckParts(ctx, volume, path, fi) })
}
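// Delete wraps p.storage.Delete with disk health tracking and
// the configured drive max timeout.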
func (p *xlStorageDiskIDCheck) Delete(ctx context.Context, volume string, path string, deleteOpts DeleteOptions) (err error) {
ctx, done, err := p.TrackDiskHealth(ctx, storageMetricDelete, volume, path)
if err != nil {
return err
}
defer done(&err)
w := xioutil.NewDeadlineWorker(globalDriveConfig.GetMaxTimeout())
return w.Run(func() error { return p.storage.Delete(ctx, volume, path, deleteOpts) })
}
// DeleteVersions deletes a slice of versions; they can belong to the same
// object or to multiple objects.
func (p *xlStorageDiskIDCheck) DeleteVersions(ctx context.Context, volume string, versions []FileInfoVersions, opts DeleteOptions) (errs []error) {
// Merely for tracing storage
path := ""
if len(versions) > 0 {
path = versions[0].Name
}
errs = make([]error, len(versions))
ctx, done, err := p.TrackDiskHealth(ctx, storageMetricDeleteVersions, volume, path)
if err != nil {
for i := range errs {
errs[i] = ctx.Err()
}
return errs
}
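	// Once the deletes complete, account delete markers as drive writes and
	// permanent deletes as drive deletes for monitoring, unless access checks
	// are skipped for the volume.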
defer func() {
if !skipAccessChecks(volume) {
var permanentDeletes uint64
var deleteMarkers uint64
for i, nerr := range errs {
if nerr != nil {
continue
}
for _, fi := range versions[i].Versions {
if fi.Deleted {
// Delete markers are a write operation not a permanent delete.
deleteMarkers++
continue
}
permanentDeletes++
}
}
if deleteMarkers > 0 {
p.storage.setWriteAttribute(p.totalWrites.Add(deleteMarkers))
}
if permanentDeletes > 0 {
p.storage.setDeleteAttribute(p.totalDeletes.Add(permanentDeletes))
}
}
done(&err)
}()
errs = p.storage.DeleteVersions(ctx, volume, versions, opts)
for i := range errs {
if errs[i] != nil {
err = errs[i]
break
}
}
return errs
}
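// VerifyFile wraps p.storage.VerifyFile with disk health tracking for the call.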
func (p *xlStorageDiskIDCheck) VerifyFile(ctx context.Context, volume, path string, fi FileInfo) (err error) {
ctx, done, err := p.TrackDiskHealth(ctx, storageMetricVerifyFile, volume, path)
if err != nil {
return err
}
defer done(&err)
return p.storage.VerifyFile(ctx, volume, path, fi)
}
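// WriteAll writes the entire buffer b to volume/path. The call is tracked for disk
// health and bounded by the configured drive max timeout via a deadline worker.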
func (p *xlStorageDiskIDCheck) WriteAll(ctx context.Context, volume string, path string, b []byte) (err error) {
ctx, done, err := p.TrackDiskHealth(ctx, storageMetricWriteAll, volume, path)
if err != nil {
return err
}
defer done(&err)
w := xioutil.NewDeadlineWorker(globalDriveConfig.GetMaxTimeout())
return w.Run(func() error { return p.storage.WriteAll(ctx, volume, path, b) })
}
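// DeleteVersion deletes the given version of an object. On success, the deferred
// block below updates the per-drive write/delete accounting.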
func (p *xlStorageDiskIDCheck) DeleteVersion(ctx context.Context, volume, path string, fi FileInfo, forceDelMarker bool, opts DeleteOptions) (err error) {
ctx, done, err := p.TrackDiskHealth(ctx, storageMetricDeleteVersion, volume, path)
if err != nil {
return err
}
defer func() {
defer done(&err)
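		// Only account the operation when it succeeded and the volume is not
		// exempted by skipAccessChecks (internal buckets are skipped).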
if err == nil && !skipAccessChecks(volume) {
if opts.UndoWrite {
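			// ^uint64(0) is all ones; adding it wraps the unsigned counter around,
			// effectively decrementing totalWrites to undo the earlier write accounting.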
p.storage.setWriteAttribute(p.totalWrites.Add(^uint64(0)))
return
}
if fi.Deleted {
// Delete markers are a write operation not a permanent delete.
p.storage.setWriteAttribute(p.totalWrites.Add(1))
return
}
p.storage.setDeleteAttribute(p.totalDeletes.Add(1))
}
}()
w := xioutil.NewDeadlineWorker(globalDriveConfig.GetMaxTimeout())
return w.Run(func() error { return p.storage.DeleteVersion(ctx, volume, path, fi, forceDelMarker, opts) })
}
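// UpdateMetadata updates metadata for the version described by fi at volume/path,
// with health tracking and the drive max timeout deadline applied.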
func (p *xlStorageDiskIDCheck) UpdateMetadata(ctx context.Context, volume, path string, fi FileInfo, opts UpdateMetadataOpts) (err error) {
ctx, done, err := p.TrackDiskHealth(ctx, storageMetricUpdateMetadata, volume, path)
if err != nil {
return err
}
defer done(&err)
w := xioutil.NewDeadlineWorker(globalDriveConfig.GetMaxTimeout())
return w.Run(func() error { return p.storage.UpdateMetadata(ctx, volume, path, fi, opts) })
}
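// WriteMetadata persists the metadata in fi for volume/path, following the same
// health-tracking and deadline pattern as the other write calls.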
func (p *xlStorageDiskIDCheck) WriteMetadata(ctx context.Context, origvolume, volume, path string, fi FileInfo) (err error) {
ctx, done, err := p.TrackDiskHealth(ctx, storageMetricWriteMetadata, volume, path)
if err != nil {
return err
}
defer done(&err)
w := xioutil.NewDeadlineWorker(globalDriveConfig.GetMaxTimeout())
return w.Run(func() error { return p.storage.WriteMetadata(ctx, origvolume, volume, path, fi) })
}
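// ReadVersion returns the FileInfo for the requested versionID. The generic
// xioutil.WithDeadline wrapper bounds the call with the drive max timeout and
// returns the value and error together, avoiding racy secondary result variables.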
func (p *xlStorageDiskIDCheck) ReadVersion(ctx context.Context, origvolume, volume, path, versionID string, opts ReadOptions) (fi FileInfo, err error) {
ctx, done, err := p.TrackDiskHealth(ctx, storageMetricReadVersion, volume, path)
if err != nil {
return fi, err
}
defer done(&err)
return xioutil.WithDeadline[FileInfo](ctx, globalDriveConfig.GetMaxTimeout(), func(ctx context.Context) (result FileInfo, err error) {
return p.storage.ReadVersion(ctx, origvolume, volume, path, versionID, opts)
})
}
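// ReadAll reads the full contents of volume/path into memory, bounded by the
// configured drive max timeout.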
func (p *xlStorageDiskIDCheck) ReadAll(ctx context.Context, volume string, path string) (buf []byte, err error) {
ctx, done, err := p.TrackDiskHealth(ctx, storageMetricReadAll, volume, path)
if err != nil {
return nil, err
}
defer done(&err)
return xioutil.WithDeadline[[]byte](ctx, globalDriveConfig.GetMaxTimeout(), func(ctx context.Context) (result []byte, err error) {
return p.storage.ReadAll(ctx, volume, path)
})
}
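// ReadXL reads the raw xl.meta for volume/path, optionally including its data when
// readData is set. The call is health-tracked and bounded by the drive max timeout.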
func (p *xlStorageDiskIDCheck) ReadXL(ctx context.Context, volume string, path string, readData bool) (rf RawFileInfo, err error) {
ctx, done, err := p.TrackDiskHealth(ctx, storageMetricReadXL, volume, path)
if err != nil {
return RawFileInfo{}, err
}
defer done(&err)
return xioutil.WithDeadline[RawFileInfo](ctx, globalDriveConfig.GetMaxTimeout(), func(ctx context.Context) (result RawFileInfo, err error) {
return p.storage.ReadXL(ctx, volume, path, readData)
})
}
2021-10-01 14:50:00 -04:00
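// StatInfoFile registers this call with the disk-health tracker under the
// storageMetricStatInfoFile metric before performing the underlying stat;
// the deferred done(&err) records whether the operation succeeded.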
func (p *xlStorageDiskIDCheck) StatInfoFile(ctx context.Context, volume, path string, glob bool) (stat []StatInfo, err error) {
	ctx, done, err := p.TrackDiskHealth(ctx, storageMetricStatInfoFile, volume, path)
	if err != nil {
		return nil, err
	}
	defer done(&err)
2021-10-01 14:50:00 -04:00
return p.storage.StatInfoFile(ctx, volume, path, glob)
}
// ReadMultiple will read multiple files and send each file as a response.
// Files are read and returned in the given order.
// The resp channel is closed before the call returns.
// Only a canceled context will return an error.
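// A minimal caller sketch (illustrative; 'disk' is a placeholder and only the
// Bucket and Prefix fields referenced below are shown):
//
//	resp := make(chan ReadMultipleResp, 10)
//	go func() {
//		for r := range resp { // resp is closed by ReadMultiple
//			_ = r // handle each file response
//		}
//	}()
//	err := disk.ReadMultiple(ctx, ReadMultipleReq{Bucket: bucket, Prefix: prefix}, resp)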
func (p *xlStorageDiskIDCheck) ReadMultiple(ctx context.Context, req ReadMultipleReq, resp chan<- ReadMultipleResp) (err error) {
ctx, done, err := p.TrackDiskHealth(ctx, storageMetricReadMultiple, req.Bucket, req.Prefix)
if err != nil {
xioutil.SafeClose(resp)
return err
}
defer done(&err)
return p.storage.ReadMultiple(ctx, req, resp)
}
// CleanAbandonedData will read the metadata of the object on disk
// and delete any data directories and inline data that are not referenced by the metadata.
func (p *xlStorageDiskIDCheck) CleanAbandonedData(ctx context.Context, volume string, path string) (err error) {
ctx, done, err := p.TrackDiskHealth(ctx, storageMetricDeleteAbandonedParts, volume, path)
if err != nil {
return err
}
defer done(&err)
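// Bound the underlying disk call with the configured drive timeout so a hung
// drive cannot block this request indefinitely.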
w := xioutil.NewDeadlineWorker(globalDriveConfig.GetMaxTimeout())
return w.Run(func() error { return p.storage.CleanAbandonedData(ctx, volume, path) })
}
func storageTrace(s storageMetric, startTime time.Time, duration time.Duration, path string, err string, custom map[string]string) madmin.TraceInfo {
return madmin.TraceInfo{
TraceType: madmin.TraceStorage,
Time: startTime,
NodeName: globalLocalNodeName,
FuncName: "storage." + s.String(),
Duration: duration,
Path: path,
Error: err,
Custom: custom,
}
}
func scannerTrace(s scannerMetric, startTime time.Time, duration time.Duration, path string, custom map[string]string) madmin.TraceInfo {
return madmin.TraceInfo{
TraceType: madmin.TraceScanner,
Time: startTime,
NodeName: globalLocalNodeName,
FuncName: "scanner." + s.String(),
Duration: duration,
Path: path,
Custom: custom,
}
}
// updateStorageMetrics returns a function that, when called with a pointer to
// the operation's final error, records API call counts, error counters,
// latency, and an optional trace entry for the given storage metric.
func (p *xlStorageDiskIDCheck) updateStorageMetrics(s storageMetric, paths ...string) func(err *error) {
startTime := time.Now()
trace := globalTrace.NumSubscribers(madmin.TraceStorage) > 0
return func(errp *error) {
duration := time.Since(startTime)
var err error
if errp != nil && *errp != nil {
err = *errp
}
atomic.AddUint64(&p.apiCalls[s], 1)
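// Only faulty-disk and deadline-exceeded errors count against drive
// availability; ordinary API errors are not treated as drive health signals.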
if IsErr(err, []error{
errFaultyDisk,
errFaultyRemoteDisk,
context.DeadlineExceeded,
}...) {
p.totalErrsAvailability.Add(1)
if errors.Is(err, context.DeadlineExceeded) {
p.totalErrsTimeout.Add(1)
}
}
p.apiLatencies[s].add(duration)
if trace {
custom := make(map[string]string, 2)
paths = append([]string{p.String()}, paths...)
var errStr string
if err != nil {
errStr = err.Error()
}
custom["total-errs-timeout"] = strconv.FormatUint(p.totalErrsTimeout.Load(), 10)
custom["total-errs-availability"] = strconv.FormatUint(p.totalErrsAvailability.Load(), 10)
globalTrace.Publish(storageTrace(s, startTime, duration, strings.Join(paths, " "), errStr, custom))
}
}
}
const (
diskHealthOK int32 = iota
diskHealthFaulty
)
type diskHealthTracker struct {
// Atomic time (UnixNano) of the last successful operation.
lastSuccess int64
// Atomic time (UnixNano) of the last time an operation was started.
lastStarted int64
// Atomic status of disk.
status atomic.Int32
// Atomic count of tracked in-flight operations, used to detect a hung disk.
waiting atomic.Int32
}
// newDiskHealthTracker creates a new disk health tracker.
func newDiskHealthTracker() *diskHealthTracker {
d := diskHealthTracker{
lastSuccess: time.Now().UnixNano(),
lastStarted: time.Now().UnixNano(),
}
d.status.Store(diskHealthOK)
return &d
}
// logSuccess will update the last successful operation time.
func (d *diskHealthTracker) logSuccess() {
atomic.StoreInt64(&d.lastSuccess, time.Now().UnixNano())
}
func (d *diskHealthTracker) isFaulty() bool {
return d.status.Load() == diskHealthFaulty
}
type (
healthDiskCtxKey struct{}
healthDiskCtxValue struct {
lastSuccess *int64
}
)
// logSuccess will update the last successful operation time.
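// Callers that receive a context carrying healthDiskCtxValue (such as the
// health-tracking read/write wrappers) can call logSuccess as individual
// operations complete, so long-running requests keep reporting progress.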
func (h *healthDiskCtxValue) logSuccess() {
atomic.StoreInt64(h.lastSuccess, time.Now().UnixNano())
}
// noopDoneFunc is a no-op done func.
// Can be reused.
var noopDoneFunc = func(_ *error) {}
// TrackDiskHealth for this request.
// When a nil error is returned, 'done' MUST be called with a pointer to the
// resulting error of the operation, if that error reflects disk health.
// If the pointer passed to done is non-nil AND the error is either nil or
// io.EOF, the disk is considered good.
// If unsure whether the disk status is ok, pass nil to done.
// Shadowing will work as long as the return error is named: https://go.dev/play/p/sauq86SsTN2
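// A typical caller pattern, mirroring the wrappers in this file (sketch):
//
//	ctx, done, err := p.TrackDiskHealth(ctx, storageMetricReadMultiple, volume, path)
//	if err != nil {
//		return err
//	}
//	defer done(&err) // 'err' must be the named return value for shadowing to work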
func (p *xlStorageDiskIDCheck) TrackDiskHealth(ctx context.Context, s storageMetric, paths ...string) (c context.Context, done func(*error), err error) {
done = noopDoneFunc
if contextCanceled(ctx) {
return ctx, done, ctx.Err()
}
if p.health.status.Load() != diskHealthOK {
return ctx, done, errFaultyDisk
}
// Verify that the disk is not stale, i.e. reject the request if either:
// - format.json is missing (unformatted drive)
// - format.json is valid but carries an unexpected 'uuid'
if err = p.checkDiskStale(); err != nil {
return ctx, done, err
}
// Disallow recursive tracking to avoid deadlocks.
if ctx.Value(healthDiskCtxKey{}) != nil {
done = p.updateStorageMetrics(s, paths...)
return ctx, done, nil
}
if contextCanceled(ctx) {
return ctx, done, ctx.Err()
}
atomic.StoreInt64(&p.health.lastStarted, time.Now().UnixNano())
p.health.waiting.Add(1)
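// Expose a pointer to lastSuccess through the context so nested calls and the
// health-tracking I/O wrappers can record progress for this drive.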
ctx = context.WithValue(ctx, healthDiskCtxKey{}, &healthDiskCtxValue{lastSuccess: &p.health.lastSuccess})
si := p.updateStorageMetrics(s, paths...)
var once sync.Once
return ctx, func(errp *error) {
p.health.waiting.Add(-1)
once.Do(func() {
if errp != nil {
err := *errp
if err == nil || errors.Is(err, io.EOF) {
p.health.logSuccess()
}
}
si(errp)
})
}, nil
}
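// toWrite is the health-check probe payload: a 2049-byte slice whose final byte
// (index 2048) is 42, written and read back by monitorDiskStatus.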
var toWrite = []byte{2048: 42}
// monitorDiskStatus should be called once when a drive has been marked offline.
// Once the disk has been deemed ok, it will return to online status.
func (p *xlStorageDiskIDCheck) monitorDiskStatus(spent time.Duration, fn string) {
t := time.NewTicker(5 * time.Second)
defer t.Stop()
for range t.C {
if contextCanceled(p.diskCtx) {
return
}
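// Attempt a small probe write; if the drive still cannot complete it, stay
// offline and retry on the next tick.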
err := p.storage.WriteAll(context.Background(), minioMetaTmpBucket, fn, toWrite)
if err != nil {
continue
}
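// Verify that the payload can be read back in full.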
b, err := p.storage.ReadAll(context.Background(), minioMetaTmpBucket, fn)
if err != nil || len(b) != len(toWrite) {
continue
}
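// Clean up the probe object; only a fully successful write+read+delete
// cycle brings the drive back online.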
err = p.storage.Delete(context.Background(), minioMetaTmpBucket, fn, DeleteOptions{
Recursive: false,
Immediate: false,
})
if err == nil {
logger.Event(context.Background(), "node(%s): Read/Write/Delete successful, bringing drive %s online", globalLocalNodeName, p.storage.String())
p.health.status.Store(diskHealthOK)
p.health.waiting.Add(-1)
return
}
}
}
// monitorDiskWritable periodically verifies that the drive can complete a small
// write+read probe within the configured drive timeout. If a probe times out or
// fails with a faulty-disk error, the drive is taken offline and monitorDiskStatus
// is started to bring it back online once it recovers.
func (p *xlStorageDiskIDCheck) monitorDiskWritable(ctx context.Context) {
var (
// We check every 15 seconds if the disk is writable and we can read back.
checkEvery = 15 * time.Second
// If the disk has completed an operation successfully within last 5 seconds, don't check it.
skipIfSuccessBefore = 5 * time.Second
)
// If the configured drive max timeout is at or below the default checkEvery
// window, shrink the check interval to one second less than the timeout
// (falling back to the timeout itself if that would be non-positive).
if globalDriveConfig.GetMaxTimeout() <= checkEvery {
checkEvery = globalDriveConfig.GetMaxTimeout() - time.Second
if checkEvery <= 0 {
checkEvery = globalDriveConfig.GetMaxTimeout()
}
}
// Likewise, if the drive max timeout is at or below the skipIfSuccessBefore
// window, shrink it to one second less than the timeout
// (falling back to the timeout itself if that would be non-positive).
if globalDriveConfig.GetMaxTimeout() <= skipIfSuccessBefore {
skipIfSuccessBefore = globalDriveConfig.GetMaxTimeout() - time.Second
if skipIfSuccessBefore <= 0 {
skipIfSuccessBefore = globalDriveConfig.GetMaxTimeout()
}
}
t := time.NewTicker(checkEvery)
defer t.Stop()
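// fn is the name of the temporary probe object written to minioMetaTmpBucket on each check.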
fn := mustGetUUID()
rng := rand.New(rand.NewSource(time.Now().UnixNano()))
monitor := func() bool {
if contextCanceled(ctx) {
return false
}
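// Skip probing while the drive is already marked faulty; keep the monitoring loop alive.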
if p.health.status.Load() != diskHealthOK {
return true
}
if time.Since(time.Unix(0, atomic.LoadInt64(&p.health.lastSuccess))) < skipIfSuccessBefore {
// We recently saw a success - no need to check.
return true
}
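// goOffline flips the drive from healthy to faulty exactly once (via CompareAndSwap),
// increments the waiting counter and spawns monitorDiskStatus to bring the drive
// back online once probes succeed again.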
goOffline := func(err error, spent time.Duration) {
if p.health.status.CompareAndSwap(diskHealthOK, diskHealthFaulty) {
storageLogAlwaysIf(ctx, fmt.Errorf("node(%s): taking drive %s offline: %v", globalLocalNodeName, p.storage.String(), err))
p.health.waiting.Add(1)
go p.monitorDiskStatus(spent, fn)
}
}
// Offset checks a bit.
time.Sleep(time.Duration(rng.Int63n(int64(1 * time.Second))))
dctx, dcancel := context.WithCancel(ctx)
started := time.Now()
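// Watchdog: if the probe below does not complete before the configured drive
// timeout, take the drive offline; cancelling dctx stops the watchdog early.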
go func() {
timeout := time.NewTimer(globalDriveConfig.GetMaxTimeout())
select {
case <-dctx.Done():
if !timeout.Stop() {
<-timeout.C
}
case <-timeout.C:
spent := time.Since(started)
goOffline(fmt.Errorf("unable to write+read for %v", spent.Round(time.Millisecond)), spent)
}
}()
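// Run the probe itself: write the payload and read it back, cancelling the
// watchdog via dcancel when done. Only an explicit faulty-disk error takes the
// drive offline here; other failures simply wait for the next tick.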
func() {
defer dcancel()
err := p.storage.WriteAll(ctx, minioMetaTmpBucket, fn, toWrite)
if err != nil {
if osErrToFileErr(err) == errFaultyDisk {
goOffline(fmt.Errorf("unable to write: %w", err), 0)
}
return
}
b, err := p.storage.ReadAll(context.Background(), minioMetaTmpBucket, fn)
if err != nil || len(b) != len(toWrite) {
if osErrToFileErr(err) == errFaultyDisk {
goOffline(fmt.Errorf("unable to read: %w", err), 0)
}
return
}
}()
// Continue to monitor
return true
}
for {
select {
case <-ctx.Done():
return
case <-t.C:
if !monitor() {
return
}
}
}
}
// checkID will check if the disk ID matches the provided ID.
func (p *xlStorageDiskIDCheck) checkID(wantID string) (err error) {
if wantID == "" {
return nil
}
id, err := p.storage.GetDiskID()
if err != nil {
return err
}
if id != wantID {
return fmt.Errorf("disk ID %s does not match. disk reports %s", wantID, id)
}
return nil
}
// diskHealthCheckOK will check if the provided error is nil
// and update the disk status if good.
// For convenience a bool is returned: true when the error is nil or io.EOF,
// false for any other error state.
func diskHealthCheckOK(ctx context.Context, err error) bool {
// Check if context has a disk health check.
tracker, ok := ctx.Value(healthDiskCtxKey{}).(*healthDiskCtxValue)
if !ok {
// No tracker, return
return err == nil || errors.Is(err, io.EOF)
}
if err == nil || errors.Is(err, io.EOF) {
tracker.logSuccess()
return true
}
return false
}
// diskHealthWrapper provides either an io.Reader or an io.Writer
// that updates the status of the provided tracker.
// Use through diskHealthReader or diskHealthWriter.
type diskHealthWrapper struct {
tracker *healthDiskCtxValue
r io.Reader
w io.Writer
}
func (d *diskHealthWrapper) Read(p []byte) (int, error) {
if d.r == nil {
return 0, fmt.Errorf("diskHealthWrapper: Read with no reader")
}
n, err := d.r.Read(p)
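// A clean read, or io.EOF with at least one byte read, counts as disk progress.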
if err == nil || err == io.EOF && n > 0 {
d.tracker.logSuccess()
}
return n, err
}
func (d *diskHealthWrapper) Write(p []byte) (int, error) {
if d.w == nil {
return 0, fmt.Errorf("diskHealthWrapper: Write with no writer")
}
n, err := d.w.Write(p)
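// Only a complete, error-free write counts as disk progress.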
if err == nil && n == len(p) {
d.tracker.logSuccess()
}
return n, err
}
// diskHealthReader provides a wrapper that will update disk health on
// ctx, on every successful read.
// This should only be used directly at the os/syscall level,
// otherwise buffered operations may register health successes without touching the disk.
func diskHealthReader(ctx context.Context, r io.Reader) io.Reader {
// Check if context has a disk health check.
tracker, ok := ctx.Value(healthDiskCtxKey{}).(*healthDiskCtxValue)
if !ok {
// No need to wrap
return r
}
return &diskHealthWrapper{r: r, tracker: tracker}
}
// diskHealthWriter provides a wrapper that will update disk health on
// ctx, on every successful write.
// This should only be used directly at the os/syscall level,
// otherwise buffered operations may register health successes without touching the disk.
func diskHealthWriter(ctx context.Context, w io.Writer) io.Writer {
// Check if context has a disk health check.
tracker, ok := ctx.Value(healthDiskCtxKey{}).(*healthDiskCtxValue)
if !ok {
// No need to wrap
return w
}
return &diskHealthWrapper{w: w, tracker: tracker}
}