Release v0.1.0

Manu Herrera
2019-10-01 12:22:30 -03:00
parent 41e6aad190
commit d301c63596
915 changed files with 378049 additions and 11 deletions

vendor/github.com/btcsuite/btcd/blockchain/README.md generated vendored Normal file

@@ -0,0 +1,103 @@
blockchain
==========

[![Build Status](http://img.shields.io/travis/btcsuite/btcd.svg)](https://travis-ci.org/btcsuite/btcd)
[![ISC License](http://img.shields.io/badge/license-ISC-blue.svg)](http://copyfree.org)
[![GoDoc](https://img.shields.io/badge/godoc-reference-blue.svg)](http://godoc.org/github.com/btcsuite/btcd/blockchain)

Package blockchain implements bitcoin block handling and chain selection rules.

The test coverage is currently only around 60%, but will be increasing over
time. See `test_coverage.txt` for the gocov coverage report. Alternatively, if
you are running a POSIX OS, you can run the `cov_report.sh` script for a
real-time report. Package blockchain is licensed under the liberal ISC license.

There is an associated blog post about the release of this package
[here](https://blog.conformal.com/btcchain-the-bitcoin-chain-package-from-bctd/).

This package has intentionally been designed so it can be used as a standalone
package for any projects needing to handle processing of blocks into the bitcoin
block chain.

## Installation and Updating
```bash
$ go get -u github.com/btcsuite/btcd/blockchain
```
## Bitcoin Chain Processing Overview
Before a block is allowed into the block chain, it must go through an intensive
series of validation rules. The following list serves as a general outline of
those rules to provide some intuition into what is going on under the hood, but
is by no means exhaustive (a condensed code sketch follows the list):
- Reject duplicate blocks
- Perform a series of sanity checks on the block and its transactions such as
verifying proof of work, timestamps, number and character of transactions,
transaction amounts, script complexity, and merkle root calculations
- Compare the block against predetermined checkpoints for expected timestamps
and difficulty based on elapsed time since the checkpoint
- Save the most recent orphan blocks for a limited time in case their parent
blocks become available
- Stop processing if the block is an orphan as the rest of the processing
depends on the block's position within the block chain
- Perform a series of more thorough checks that depend on the block's position
within the block chain such as verifying block difficulties adhere to
difficulty retarget rules, timestamps are after the median of the last
several blocks, all transactions are finalized, checkpoint blocks match, and
block versions are in line with the previous blocks
- Determine how the block fits into the chain and perform different actions
accordingly in order to ensure any side chains which have higher difficulty
than the main chain become the new main chain
- When a block is being connected to the main chain (either through
reorganization of a side chain to the main chain or just extending the
main chain), perform further checks on the block's transactions such as
verifying transaction duplicates, script complexity for the combination of
connected scripts, coinbase maturity, double spends, and connected
transaction values
- Run the transaction scripts to verify the spender is allowed to spend the
coins
- Insert the block into the block database
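
For a concrete starting point, the following condensed sketch (adapted from
the linked ProcessBlock GoDoc example; the database path and cleanup are
illustrative) creates a chain instance backed by ffldb and attempts to process
the main network genesis block, which is rejected as a duplicate:

```go
package main

import (
	"fmt"
	"os"

	"github.com/btcsuite/btcd/blockchain"
	"github.com/btcsuite/btcd/chaincfg"
	"github.com/btcsuite/btcd/database"
	_ "github.com/btcsuite/btcd/database/ffldb"
	"github.com/btcsuite/btcutil"
)

func main() {
	// Create the database that backs the chain state (path is illustrative).
	db, err := database.Create("ffldb", "exampledb", chaincfg.MainNetParams.Net)
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	defer os.RemoveAll("exampledb")
	defer db.Close()

	// Create a new chain instance using the main network parameters.
	chain, err := blockchain.New(&blockchain.Config{
		DB:          db,
		ChainParams: &chaincfg.MainNetParams,
		TimeSource:  blockchain.NewMedianTime(),
	})
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}

	// The genesis block is already known, so processing it again fails
	// the duplicate check, illustrating how an invalid block is handled.
	genesis := btcutil.NewBlock(chaincfg.MainNetParams.GenesisBlock)
	isMainChain, isOrphan, err := chain.ProcessBlock(genesis, blockchain.BFNone)
	if err != nil {
		fmt.Printf("failed to process block: %v\n", err)
		return
	}
	fmt.Println(isMainChain, isOrphan)
}
```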
## Examples
* [ProcessBlock Example](http://godoc.org/github.com/btcsuite/btcd/blockchain#example-BlockChain-ProcessBlock)
Demonstrates how to create a new chain instance and use ProcessBlock to
attempt to add a block to the chain. This example intentionally
attempts to insert a duplicate genesis block to illustrate how an invalid
block is handled.
* [CompactToBig Example](http://godoc.org/github.com/btcsuite/btcd/blockchain#example-CompactToBig)
Demonstrates how to convert the compact "bits" in a block header which
represent the target difficulty to a big integer and display it using the
typical hex notation.
* [BigToCompact Example](http://godoc.org/github.com/btcsuite/btcd/blockchain#example-BigToCompact)
Demonstrates how to convert a target difficulty into the
compact "bits" in a block header which represent that target difficulty.
## GPG Verification Key
All official release tags are signed by Conformal so users can ensure the code
has not been tampered with and is coming from the btcsuite developers. To
verify the signature perform the following:
- Download the public key from the Conformal website at
https://opensource.conformal.com/GIT-GPG-KEY-conformal.txt
- Import the public key into your GPG keyring:
```bash
gpg --import GIT-GPG-KEY-conformal.txt
```
- Verify the release tag with the following command where `TAG_NAME` is a
placeholder for the specific tag:
```bash
git tag -v TAG_NAME
```
## License
Package blockchain is licensed under the [copyfree](http://copyfree.org) ISC
License.

vendor/github.com/btcsuite/btcd/blockchain/accept.go generated vendored Normal file

@@ -0,0 +1,92 @@
// Copyright (c) 2013-2017 The btcsuite developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package blockchain
import (
"fmt"
"github.com/btcsuite/btcd/database"
"github.com/btcsuite/btcutil"
)
// maybeAcceptBlock potentially accepts a block into the block chain and, if
// accepted, returns whether or not it is on the main chain. It performs
// several validation checks which depend on its position within the block chain
// before adding it. The block is expected to have already gone through
// ProcessBlock before calling this function with it.
//
// The flags are also passed to checkBlockContext and connectBestChain. See
// their documentation for how the flags modify their behavior.
//
// This function MUST be called with the chain state lock held (for writes).
func (b *BlockChain) maybeAcceptBlock(block *btcutil.Block, flags BehaviorFlags) (bool, error) {
// The height of this block is one more than the referenced previous
// block.
prevHash := &block.MsgBlock().Header.PrevBlock
prevNode := b.index.LookupNode(prevHash)
if prevNode == nil {
str := fmt.Sprintf("previous block %s is unknown", prevHash)
return false, ruleError(ErrPreviousBlockUnknown, str)
} else if b.index.NodeStatus(prevNode).KnownInvalid() {
str := fmt.Sprintf("previous block %s is known to be invalid", prevHash)
return false, ruleError(ErrInvalidAncestorBlock, str)
}
blockHeight := prevNode.height + 1
block.SetHeight(blockHeight)
// The block must pass all of the validation rules which depend on the
// position of the block within the block chain.
err := b.checkBlockContext(block, prevNode, flags)
if err != nil {
return false, err
}
// Insert the block into the database if it's not already there. Even
// though it is possible the block will ultimately fail to connect, it
// has already passed all proof-of-work and validity tests which means
// it would be prohibitively expensive for an attacker to fill up the
// disk with a bunch of blocks that fail to connect. This is necessary
// since it allows block download to be decoupled from the much more
// expensive connection logic. It also has some other nice properties
// such as making blocks that never become part of the main chain or
// blocks that fail to connect available for further analysis.
err = b.db.Update(func(dbTx database.Tx) error {
return dbStoreBlock(dbTx, block)
})
if err != nil {
return false, err
}
// Create a new block node for the block and add it to the node index. Even
// if the block ultimately gets connected to the main chain, it starts out
// on a side chain.
blockHeader := &block.MsgBlock().Header
newNode := newBlockNode(blockHeader, prevNode)
newNode.status = statusDataStored
b.index.AddNode(newNode)
err = b.index.flushToDB()
if err != nil {
return false, err
}
// Connect the passed block to the chain while respecting proper chain
// selection according to the chain with the most proof of work. This
// also handles validation of the transaction scripts.
isMainChain, err := b.connectBestChain(newNode, block, flags)
if err != nil {
return false, err
}
// Notify the caller that the new block was accepted into the block
// chain. The caller would typically want to react by relaying the
// inventory to other peers.
b.chainLock.Unlock()
b.sendNotification(NTBlockAccepted, block)
b.chainLock.Lock()
return isMainChain, nil
}


@@ -0,0 +1,348 @@
// Copyright (c) 2015-2017 The btcsuite developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package blockchain
import (
"math/big"
"sort"
"sync"
"time"
"github.com/btcsuite/btcd/chaincfg"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/btcsuite/btcd/database"
"github.com/btcsuite/btcd/wire"
)
// blockStatus is a bit field representing the validation state of the block.
type blockStatus byte
const (
// statusDataStored indicates that the block's payload is stored on disk.
statusDataStored blockStatus = 1 << iota
// statusValid indicates that the block has been fully validated.
statusValid
// statusValidateFailed indicates that the block has failed validation.
statusValidateFailed
// statusInvalidAncestor indicates that one of the block's ancestors has
// failed validation, thus the block is also invalid.
statusInvalidAncestor
// statusNone indicates that the block has no validation state flags set.
//
// NOTE: This must be defined last in order to avoid influencing iota.
statusNone blockStatus = 0
)
// HaveData returns whether the full block data is stored in the database. This
// will return false for a block node where only the header is downloaded or
// kept.
func (status blockStatus) HaveData() bool {
return status&statusDataStored != 0
}
// KnownValid returns whether the block is known to be valid. This will return
// false for a valid block that has not been fully validated yet.
func (status blockStatus) KnownValid() bool {
return status&statusValid != 0
}
// KnownInvalid returns whether the block is known to be invalid. This may be
// because the block itself failed validation or any of its ancestors is
// invalid. This will return false for invalid blocks that have not been proven
// invalid yet.
func (status blockStatus) KnownInvalid() bool {
return status&(statusValidateFailed|statusInvalidAncestor) != 0
}
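// For illustration, the helpers above compose as follows for a node whose
// block data is stored on disk but which failed validation:
//
//	status := statusDataStored | statusValidateFailed
//	status.HaveData()     // true
//	status.KnownValid()   // false
//	status.KnownInvalid() // true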
// blockNode represents a block within the block chain and is primarily used to
// aid in selecting the best chain to be the main chain. The main chain is
// stored into the block database.
type blockNode struct {
// NOTE: Additions, deletions, or modifications to the order of the
// definitions in this struct should not be made without considering
// how it affects alignment on 64-bit platforms. The current order is
// specifically crafted to result in minimal padding. There will be
// hundreds of thousands of these in memory, so a few extra bytes of
// padding adds up.
// parent is the parent block for this node.
parent *blockNode
// hash is the double sha 256 of the block.
hash chainhash.Hash
// workSum is the total amount of work in the chain up to and including
// this node.
workSum *big.Int
// height is the position in the block chain.
height int32
// Some fields from block headers to aid in best chain selection and
// reconstructing headers from memory. These must be treated as
// immutable and are intentionally ordered to avoid padding on 64-bit
// platforms.
version int32
bits uint32
nonce uint32
timestamp int64
merkleRoot chainhash.Hash
// status is a bitfield representing the validation state of the block. The
// status field, unlike the other fields, may be written to and so should
// only be accessed using the concurrent-safe NodeStatus method on
// blockIndex once the node has been added to the global index.
status blockStatus
}
// initBlockNode initializes a block node from the given header and parent node,
// calculating the height and workSum from the respective fields on the parent.
// This function is NOT safe for concurrent access. It must only be called when
// initially creating a node.
func initBlockNode(node *blockNode, blockHeader *wire.BlockHeader, parent *blockNode) {
*node = blockNode{
hash: blockHeader.BlockHash(),
workSum: CalcWork(blockHeader.Bits),
version: blockHeader.Version,
bits: blockHeader.Bits,
nonce: blockHeader.Nonce,
timestamp: blockHeader.Timestamp.Unix(),
merkleRoot: blockHeader.MerkleRoot,
}
if parent != nil {
node.parent = parent
node.height = parent.height + 1
node.workSum = node.workSum.Add(parent.workSum, node.workSum)
}
}
// newBlockNode returns a new block node for the given block header and parent
// node, calculating the height and workSum from the respective fields on the
// parent. This function is NOT safe for concurrent access.
func newBlockNode(blockHeader *wire.BlockHeader, parent *blockNode) *blockNode {
var node blockNode
initBlockNode(&node, blockHeader, parent)
return &node
}
// Header constructs a block header from the node and returns it.
//
// This function is safe for concurrent access.
func (node *blockNode) Header() wire.BlockHeader {
// No lock is needed because all accessed fields are immutable.
prevHash := &zeroHash
if node.parent != nil {
prevHash = &node.parent.hash
}
return wire.BlockHeader{
Version: node.version,
PrevBlock: *prevHash,
MerkleRoot: node.merkleRoot,
Timestamp: time.Unix(node.timestamp, 0),
Bits: node.bits,
Nonce: node.nonce,
}
}
// Ancestor returns the ancestor block node at the provided height by following
// the chain backwards from this node. The returned block will be nil when a
// height is requested that is after the height of the passed node or is less
// than zero.
//
// This function is safe for concurrent access.
func (node *blockNode) Ancestor(height int32) *blockNode {
if height < 0 || height > node.height {
return nil
}
n := node
for ; n != nil && n.height != height; n = n.parent {
// Intentionally left blank
}
return n
}
// RelativeAncestor returns the ancestor block node a relative 'distance' blocks
// before this node. This is equivalent to calling Ancestor with the node's
// height minus provided distance.
//
// This function is safe for concurrent access.
func (node *blockNode) RelativeAncestor(distance int32) *blockNode {
return node.Ancestor(node.height - distance)
}
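// For example, for a node tip at height 100, tip.Ancestor(97) walks back
// three parents, and tip.RelativeAncestor(3) returns the same node since it
// is defined as Ancestor(node.height - distance):
//
//	a := tip.Ancestor(97)        // three hops back from height 100
//	b := tip.RelativeAncestor(3) // identical node: a == b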
// CalcPastMedianTime calculates the median time of the previous few blocks
// prior to, and including, the block node.
//
// This function is safe for concurrent access.
func (node *blockNode) CalcPastMedianTime() time.Time {
// Create a slice of the previous few block timestamps used to calculate
// the median per the number defined by the constant medianTimeBlocks.
timestamps := make([]int64, medianTimeBlocks)
numNodes := 0
iterNode := node
for i := 0; i < medianTimeBlocks && iterNode != nil; i++ {
timestamps[i] = iterNode.timestamp
numNodes++
iterNode = iterNode.parent
}
// Prune the slice to the actual number of available timestamps which
// will be fewer than desired near the beginning of the block chain
// and sort them.
timestamps = timestamps[:numNodes]
sort.Sort(timeSorter(timestamps))
// NOTE: The consensus rules incorrectly calculate the median for even
// numbers of blocks. A true median averages the middle two elements
// for a set with an even number of elements in it. Since the constant
// for the previous number of blocks to be used is odd, this is only an
// issue for a few blocks near the beginning of the chain. I suspect
// this is an optimization even though the result is slightly wrong for
// a few of the first blocks since after the first few blocks, there
// will always be an odd number of blocks in the set per the constant.
//
// This code follows suit to ensure the same rules are used, however, be
// aware that should the medianTimeBlocks constant ever be changed to an
// even number, this code will be wrong.
medianTimestamp := timestamps[numNodes/2]
return time.Unix(medianTimestamp, 0)
}
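// To make the quirk described above concrete: near the beginning of the chain
// with only four timestamps available, the code selects the upper of the two
// middle elements instead of averaging them:
//
//	timestamps := []int64{40, 10, 30, 20} // only 4 blocks available
//	// sorted:      [10, 20, 30, 40]
//	// median used: timestamps[4/2] == 30 (a true median would be 25)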
// blockIndex provides facilities for keeping track of an in-memory index of the
// block chain. Although the name block chain suggests a single chain of
// blocks, it is actually a tree-shaped structure where any node can have
// multiple children. However, there can only be one active branch which does
// indeed form a chain from the tip all the way back to the genesis block.
type blockIndex struct {
// The following fields are set when the instance is created and can't
// be changed afterwards, so there is no need to protect them with a
// separate mutex.
db database.DB
chainParams *chaincfg.Params
sync.RWMutex
index map[chainhash.Hash]*blockNode
dirty map[*blockNode]struct{}
}
// newBlockIndex returns a new empty instance of a block index. The index will
// be dynamically populated as block nodes are loaded from the database and
// manually added.
func newBlockIndex(db database.DB, chainParams *chaincfg.Params) *blockIndex {
return &blockIndex{
db: db,
chainParams: chainParams,
index: make(map[chainhash.Hash]*blockNode),
dirty: make(map[*blockNode]struct{}),
}
}
// HaveBlock returns whether or not the block index contains the provided hash.
//
// This function is safe for concurrent access.
func (bi *blockIndex) HaveBlock(hash *chainhash.Hash) bool {
bi.RLock()
_, hasBlock := bi.index[*hash]
bi.RUnlock()
return hasBlock
}
// LookupNode returns the block node identified by the provided hash. It will
// return nil if there is no entry for the hash.
//
// This function is safe for concurrent access.
func (bi *blockIndex) LookupNode(hash *chainhash.Hash) *blockNode {
bi.RLock()
node := bi.index[*hash]
bi.RUnlock()
return node
}
// AddNode adds the provided node to the block index and marks it as dirty.
// Duplicate entries are not checked so it is up to the caller to avoid adding them.
//
// This function is safe for concurrent access.
func (bi *blockIndex) AddNode(node *blockNode) {
bi.Lock()
bi.addNode(node)
bi.dirty[node] = struct{}{}
bi.Unlock()
}
// addNode adds the provided node to the block index, but does not mark it as
// dirty. This can be used while initializing the block index.
//
// This function is NOT safe for concurrent access.
func (bi *blockIndex) addNode(node *blockNode) {
bi.index[node.hash] = node
}
// NodeStatus provides concurrent-safe access to the status field of a node.
//
// This function is safe for concurrent access.
func (bi *blockIndex) NodeStatus(node *blockNode) blockStatus {
bi.RLock()
status := node.status
bi.RUnlock()
return status
}
// SetStatusFlags flips the provided status flags on the block node to on,
// regardless of whether they were on or off previously. This does not unset any
// flags currently on.
//
// This function is safe for concurrent access.
func (bi *blockIndex) SetStatusFlags(node *blockNode, flags blockStatus) {
bi.Lock()
node.status |= flags
bi.dirty[node] = struct{}{}
bi.Unlock()
}
// UnsetStatusFlags flips the provided status flags on the block node to off,
// regardless of whether they were on or off previously.
//
// This function is safe for concurrent access.
func (bi *blockIndex) UnsetStatusFlags(node *blockNode, flags blockStatus) {
bi.Lock()
node.status &^= flags
bi.dirty[node] = struct{}{}
bi.Unlock()
}
// flushToDB writes all dirty block nodes to the database. If all writes
// succeed, this clears the dirty set.
func (bi *blockIndex) flushToDB() error {
bi.Lock()
if len(bi.dirty) == 0 {
bi.Unlock()
return nil
}
err := bi.db.Update(func(dbTx database.Tx) error {
for node := range bi.dirty {
err := dbStoreBlockNode(dbTx, node)
if err != nil {
return err
}
}
return nil
})
// If write was successful, clear the dirty set.
if err == nil {
bi.dirty = make(map[*blockNode]struct{})
}
bi.Unlock()
return err
}

vendor/github.com/btcsuite/btcd/blockchain/chain.go generated vendored Normal file

File diff suppressed because it is too large (1803 lines)

vendor/github.com/btcsuite/btcd/blockchain/chainio.go generated vendored Normal file

File diff suppressed because it is too large (1416 lines)

vendor/github.com/btcsuite/btcd/blockchain/chainview.go generated vendored Normal file

@@ -0,0 +1,423 @@
// Copyright (c) 2017 The btcsuite developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package blockchain
import (
"sync"
)
// approxNodesPerWeek is an approximation of the average number of new blocks
// in a week.
const approxNodesPerWeek = 6 * 24 * 7
// log2FloorMasks defines the masks to use when quickly calculating
// floor(log2(x)) in a constant log2(32) = 5 steps, where x is a uint32, using
// shifts. They are derived from (2^(2^x) - 1) * (2^(2^x)), for x in 4..0.
var log2FloorMasks = []uint32{0xffff0000, 0xff00, 0xf0, 0xc, 0x2}
// fastLog2Floor calculates and returns floor(log2(x)) in a constant 5 steps.
func fastLog2Floor(n uint32) uint8 {
rv := uint8(0)
exponent := uint8(16)
for i := 0; i < 5; i++ {
if n&log2FloorMasks[i] != 0 {
rv += exponent
n >>= exponent
}
exponent >>= 1
}
return rv
}
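// A few sample evaluations of fastLog2Floor, each of which can be checked
// against floor(log2(x)) computed directly:
//
//	fastLog2Floor(1)          == 0
//	fastLog2Floor(2)          == 1
//	fastLog2Floor(0x00ffffff) == 23 // 2^23 <= 0xffffff < 2^24
//	fastLog2Floor(0x80000000) == 31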
// chainView provides a flat view of a specific branch of the block chain from
// its tip back to the genesis block and provides various convenience functions
// for comparing chains.
//
// For example, assume a block chain with a side chain as depicted below:
// genesis -> 1 -> 2 -> 3 -> 4 -> 5 -> 6 -> 7 -> 8
// \-> 4a -> 5a -> 6a
//
// The chain view for the branch ending in 6a consists of:
// genesis -> 1 -> 2 -> 3 -> 4a -> 5a -> 6a
type chainView struct {
mtx sync.Mutex
nodes []*blockNode
}
// newChainView returns a new chain view for the given tip block node. Passing
// nil as the tip will result in a chain view that is not initialized. The tip
// can be updated at any time via the setTip function.
func newChainView(tip *blockNode) *chainView {
// The mutex is intentionally not held since this is a constructor.
var c chainView
c.setTip(tip)
return &c
}
// genesis returns the genesis block for the chain view. This only differs from
// the exported version in that it is up to the caller to ensure the lock is
// held.
//
// This function MUST be called with the view mutex locked (for reads).
func (c *chainView) genesis() *blockNode {
if len(c.nodes) == 0 {
return nil
}
return c.nodes[0]
}
// Genesis returns the genesis block for the chain view.
//
// This function is safe for concurrent access.
func (c *chainView) Genesis() *blockNode {
c.mtx.Lock()
genesis := c.genesis()
c.mtx.Unlock()
return genesis
}
// tip returns the current tip block node for the chain view. It will return
// nil if there is no tip. This only differs from the exported version in that
// it is up to the caller to ensure the lock is held.
//
// This function MUST be called with the view mutex locked (for reads).
func (c *chainView) tip() *blockNode {
if len(c.nodes) == 0 {
return nil
}
return c.nodes[len(c.nodes)-1]
}
// Tip returns the current tip block node for the chain view. It will return
// nil if there is no tip.
//
// This function is safe for concurrent access.
func (c *chainView) Tip() *blockNode {
c.mtx.Lock()
tip := c.tip()
c.mtx.Unlock()
return tip
}
// setTip sets the chain view to use the provided block node as the current tip
// and ensures the view is consistent by populating it with the nodes obtained
// by walking backwards all the way to the genesis block as necessary. Further
// calls will only perform the minimum work needed, so switching between chain
// tips is efficient. This only differs from the exported version in that it is
// up to the caller to ensure the lock is held.
//
// This function MUST be called with the view mutex locked (for writes).
func (c *chainView) setTip(node *blockNode) {
if node == nil {
// Keep the backing array around for potential future use.
c.nodes = c.nodes[:0]
return
}
// Create or resize the slice that will hold the block nodes to the
// provided tip height. When creating the slice, it is created with
// some additional capacity for the underlying array as append would do
// in order to reduce overhead when extending the chain later. As long
// as the underlying array already has enough capacity, simply expand or
// contract the slice accordingly. The additional capacity is chosen
// such that the array should only have to be extended about once a
// week.
needed := node.height + 1
if int32(cap(c.nodes)) < needed {
nodes := make([]*blockNode, needed, needed+approxNodesPerWeek)
copy(nodes, c.nodes)
c.nodes = nodes
} else {
prevLen := int32(len(c.nodes))
c.nodes = c.nodes[0:needed]
for i := prevLen; i < needed; i++ {
c.nodes[i] = nil
}
}
for node != nil && c.nodes[node.height] != node {
c.nodes[node.height] = node
node = node.parent
}
}
// SetTip sets the chain view to use the provided block node as the current tip
// and ensures the view is consistent by populating it with the nodes obtained
// by walking backwards all the way to the genesis block as necessary. Further
// calls will only perform the minimum work needed, so switching between chain
// tips is efficient.
//
// This function is safe for concurrent access.
func (c *chainView) SetTip(node *blockNode) {
c.mtx.Lock()
c.setTip(node)
c.mtx.Unlock()
}
// height returns the height of the tip of the chain view. It will return -1 if
// there is no tip (which only happens if the chain view has not been
// initialized). This only differs from the exported version in that it is up
// to the caller to ensure the lock is held.
//
// This function MUST be called with the view mutex locked (for reads).
func (c *chainView) height() int32 {
return int32(len(c.nodes) - 1)
}
// Height returns the height of the tip of the chain view. It will return -1 if
// there is no tip (which only happens if the chain view has not been
// initialized).
//
// This function is safe for concurrent access.
func (c *chainView) Height() int32 {
c.mtx.Lock()
height := c.height()
c.mtx.Unlock()
return height
}
// nodeByHeight returns the block node at the specified height. Nil will be
// returned if the height does not exist. This only differs from the exported
// version in that it is up to the caller to ensure the lock is held.
//
// This function MUST be called with the view mutex locked (for reads).
func (c *chainView) nodeByHeight(height int32) *blockNode {
if height < 0 || height >= int32(len(c.nodes)) {
return nil
}
return c.nodes[height]
}
// NodeByHeight returns the block node at the specified height. Nil will be
// returned if the height does not exist.
//
// This function is safe for concurrent access.
func (c *chainView) NodeByHeight(height int32) *blockNode {
c.mtx.Lock()
node := c.nodeByHeight(height)
c.mtx.Unlock()
return node
}
// Equals returns whether or not two chain views are the same. Uninitialized
// views (tip set to nil) are considered equal.
//
// This function is safe for concurrent access.
func (c *chainView) Equals(other *chainView) bool {
c.mtx.Lock()
other.mtx.Lock()
equals := len(c.nodes) == len(other.nodes) && c.tip() == other.tip()
other.mtx.Unlock()
c.mtx.Unlock()
return equals
}
// contains returns whether or not the chain view contains the passed block
// node. This only differs from the exported version in that it is up to the
// caller to ensure the lock is held.
//
// This function MUST be called with the view mutex locked (for reads).
func (c *chainView) contains(node *blockNode) bool {
return c.nodeByHeight(node.height) == node
}
// Contains returns whether or not the chain view contains the passed block
// node.
//
// This function is safe for concurrent access.
func (c *chainView) Contains(node *blockNode) bool {
c.mtx.Lock()
contains := c.contains(node)
c.mtx.Unlock()
return contains
}
// next returns the successor to the provided node for the chain view. It will
// return nil if there is no successor or the provided node is not part of the
// view. This only differs from the exported version in that it is up to the
// caller to ensure the lock is held.
//
// See the comment on the exported function for more details.
//
// This function MUST be called with the view mutex locked (for reads).
func (c *chainView) next(node *blockNode) *blockNode {
if node == nil || !c.contains(node) {
return nil
}
return c.nodeByHeight(node.height + 1)
}
// Next returns the successor to the provided node for the chain view. It will
// return nil if there is no successor or the provided node is not part of the
// view.
//
// For example, assume a block chain with a side chain as depicted below:
// genesis -> 1 -> 2 -> 3 -> 4 -> 5 -> 6 -> 7 -> 8
// \-> 4a -> 5a -> 6a
//
// Further, assume the view is for the longer chain depicted above. That is to
// say it consists of:
// genesis -> 1 -> 2 -> 3 -> 4 -> 5 -> 6 -> 7 -> 8
//
// Invoking this function with block node 5 would return block node 6 while
// invoking it with block node 5a would return nil since that node is not part
// of the view.
//
// This function is safe for concurrent access.
func (c *chainView) Next(node *blockNode) *blockNode {
c.mtx.Lock()
next := c.next(node)
c.mtx.Unlock()
return next
}
// findFork returns the final common block between the provided node and the
// chain view. It will return nil if there is no common block. This only
// differs from the exported version in that it is up to the caller to ensure
// the lock is held.
//
// See the exported FindFork comments for more details.
//
// This function MUST be called with the view mutex locked (for reads).
func (c *chainView) findFork(node *blockNode) *blockNode {
// No fork point for node that doesn't exist.
if node == nil {
return nil
}
// When the height of the passed node is higher than the height of the
// tip of the current chain view, walk backwards through the nodes of
// the other chain until the heights match (or there are no more nodes in
// which case there is no common node between the two).
//
// NOTE: This isn't strictly necessary as the following section will
// find the node as well, however, it is more efficient to avoid the
// contains check since it is already known that the common node can't
// possibly be past the end of the current chain view. It also allows
// this code to take advantage of any potential future optimizations to
// the Ancestor function such as using an O(log n) skip list.
chainHeight := c.height()
if node.height > chainHeight {
node = node.Ancestor(chainHeight)
}
// Walk the other chain backwards as long as the current one does not
// contain the node or there are no more nodes in which case there is no
// common node between the two.
for node != nil && !c.contains(node) {
node = node.parent
}
return node
}
// FindFork returns the final common block between the provided node and the
// chain view. It will return nil if there is no common block.
//
// For example, assume a block chain with a side chain as depicted below:
// genesis -> 1 -> 2 -> ... -> 5 -> 6 -> 7 -> 8
// \-> 6a -> 7a
//
// Further, assume the view is for the longer chain depicted above. That is to
// say it consists of:
// genesis -> 1 -> 2 -> ... -> 5 -> 6 -> 7 -> 8.
//
// Invoking this function with block node 7a would return block node 5 while
// invoking it with block node 7 would return itself since it is already part of
// the branch formed by the view.
//
// This function is safe for concurrent access.
func (c *chainView) FindFork(node *blockNode) *blockNode {
c.mtx.Lock()
fork := c.findFork(node)
c.mtx.Unlock()
return fork
}
// blockLocator returns a block locator for the passed block node. The passed
// node can be nil in which case the block locator for the current tip
// associated with the view will be returned. This only differs from the
// exported version in that it is up to the caller to ensure the lock is held.
//
// See the exported BlockLocator function comments for more details.
//
// This function MUST be called with the view mutex locked (for reads).
func (c *chainView) blockLocator(node *blockNode) BlockLocator {
// Use the current tip if requested.
if node == nil {
node = c.tip()
}
if node == nil {
return nil
}
// Calculate the max number of entries that will ultimately be in the
// block locator. See the description of the algorithm for how these
// numbers are derived.
var maxEntries uint8
if node.height <= 12 {
maxEntries = uint8(node.height) + 1
} else {
// Requested hash itself + previous 10 entries + genesis block.
// Then floor(log2(height-10)) entries for the skip portion.
adjustedHeight := uint32(node.height) - 10
maxEntries = 12 + fastLog2Floor(adjustedHeight)
}
locator := make(BlockLocator, 0, maxEntries)
step := int32(1)
for node != nil {
locator = append(locator, &node.hash)
// Nothing more to add once the genesis block has been added.
if node.height == 0 {
break
}
// Calculate height of previous node to include ensuring the
// final node is the genesis block.
height := node.height - step
if height < 0 {
height = 0
}
// When the node is in the current chain view, all of its
// ancestors must be too, so use a much faster O(1) lookup in
// that case. Otherwise, fall back to walking backwards through
// the nodes of the other chain to the correct ancestor.
if c.contains(node) {
node = c.nodes[height]
} else {
node = node.Ancestor(height)
}
// Once 11 entries have been included, start doubling the
// distance between included hashes.
if len(locator) > 10 {
step *= 2
}
}
return locator
}
// BlockLocator returns a block locator for the passed block node. The passed
// node can be nil in which case the block locator for the current tip
// associated with the view will be returned.
//
// See the BlockLocator type for details on the algorithm used to create a block
// locator.
//
// This function is safe for concurrent access.
func (c *chainView) BlockLocator(node *blockNode) BlockLocator {
c.mtx.Lock()
locator := c.blockLocator(node)
c.mtx.Unlock()
return locator
}
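
The height schedule produced by blockLocator above can be reproduced with a
short standalone sketch (the tip height of 200000 is an arbitrary assumption):
the first 11 entries step back one block at a time, after which the step
doubles until the genesis block is reached.

```go
package main

import "fmt"

func main() {
	height := int32(200000) // assumed tip height
	step := int32(1)
	var heights []int32
	for {
		heights = append(heights, height)
		// Nothing more to add once the genesis block has been reached.
		if height == 0 {
			break
		}
		// Calculate the height of the previous entry, clamping at the
		// genesis block.
		height -= step
		if height < 0 {
			height = 0
		}
		// Once 11 entries have been included, start doubling the
		// distance between included heights.
		if len(heights) > 10 {
			step *= 2
		}
	}
	fmt.Println(heights) // [200000 199999 ... 199990 199989 199987 199983 ...]
}
```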


@@ -0,0 +1,261 @@
// Copyright (c) 2013-2016 The btcsuite developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package blockchain
import (
"fmt"
"time"
"github.com/btcsuite/btcd/chaincfg"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/btcsuite/btcd/txscript"
"github.com/btcsuite/btcutil"
)
// CheckpointConfirmations is the number of blocks before the end of the current
// best block chain that a good checkpoint candidate must be.
const CheckpointConfirmations = 2016
// newHashFromStr converts the passed big-endian hex string into a
// chainhash.Hash. It only differs from the one available in chainhash in that
// it ignores the error since it will only (and must only) be called with
// hard-coded, and therefore known good, hashes.
func newHashFromStr(hexStr string) *chainhash.Hash {
hash, _ := chainhash.NewHashFromStr(hexStr)
return hash
}
// Checkpoints returns a slice of checkpoints (regardless of whether they are
// already known). When there are no checkpoints for the chain, it will return
// nil.
//
// This function is safe for concurrent access.
func (b *BlockChain) Checkpoints() []chaincfg.Checkpoint {
return b.checkpoints
}
// HasCheckpoints returns whether this BlockChain has checkpoints defined.
//
// This function is safe for concurrent access.
func (b *BlockChain) HasCheckpoints() bool {
return len(b.checkpoints) > 0
}
// LatestCheckpoint returns the most recent checkpoint (regardless of whether it
// is already known). When there are no defined checkpoints for the active chain
// instance, it will return nil.
//
// This function is safe for concurrent access.
func (b *BlockChain) LatestCheckpoint() *chaincfg.Checkpoint {
if !b.HasCheckpoints() {
return nil
}
return &b.checkpoints[len(b.checkpoints)-1]
}
// verifyCheckpoint returns whether the passed block height and hash combination
// match the checkpoint data. It also returns true if there is no checkpoint
// data for the passed block height.
func (b *BlockChain) verifyCheckpoint(height int32, hash *chainhash.Hash) bool {
if !b.HasCheckpoints() {
return true
}
// Nothing to check if there is no checkpoint data for the block height.
checkpoint, exists := b.checkpointsByHeight[height]
if !exists {
return true
}
if !checkpoint.Hash.IsEqual(hash) {
return false
}
log.Infof("Verified checkpoint at height %d/block %s", checkpoint.Height,
checkpoint.Hash)
return true
}
// findPreviousCheckpoint finds the most recent checkpoint that is already
// available in the downloaded portion of the block chain and returns the
// associated block node. It returns nil if a checkpoint can't be found (this
// should really only happen for blocks before the first checkpoint).
//
// This function MUST be called with the chain lock held (for reads).
func (b *BlockChain) findPreviousCheckpoint() (*blockNode, error) {
if !b.HasCheckpoints() {
return nil, nil
}
// Perform the initial search to find and cache the latest known
// checkpoint if the best chain is not known yet or we haven't previously
// searched.
checkpoints := b.checkpoints
numCheckpoints := len(checkpoints)
if b.checkpointNode == nil && b.nextCheckpoint == nil {
// Loop backwards through the available checkpoints to find one
// that is already available.
for i := numCheckpoints - 1; i >= 0; i-- {
node := b.index.LookupNode(checkpoints[i].Hash)
if node == nil || !b.bestChain.Contains(node) {
continue
}
// Checkpoint found. Cache it for future lookups and
// set the next expected checkpoint accordingly.
b.checkpointNode = node
if i < numCheckpoints-1 {
b.nextCheckpoint = &checkpoints[i+1]
}
return b.checkpointNode, nil
}
// No known latest checkpoint. This will only happen on blocks
// before the first known checkpoint. So, set the next expected
// checkpoint to the first checkpoint and return the fact there
// is no latest known checkpoint block.
b.nextCheckpoint = &checkpoints[0]
return nil, nil
}
// At this point we've already searched for the latest known checkpoint,
// so when there is no next checkpoint, the current checkpoint lockin
// will always be the latest known checkpoint.
if b.nextCheckpoint == nil {
return b.checkpointNode, nil
}
// When there is a next checkpoint and the height of the current best
// chain does not exceed it, the current checkpoint lockin is still
// the latest known checkpoint.
if b.bestChain.Tip().height < b.nextCheckpoint.Height {
return b.checkpointNode, nil
}
// We've reached or exceeded the next checkpoint height. Note that
// once a checkpoint lockin has been reached, forks are prevented from
// any blocks before the checkpoint, so we don't have to worry about the
// checkpoint going away out from under us due to a chain reorganize.
// Cache the latest known checkpoint for future lookups. Note that if
// this lookup fails something is very wrong since the chain has already
// passed the checkpoint which was verified as accurate before inserting
// it.
checkpointNode := b.index.LookupNode(b.nextCheckpoint.Hash)
if checkpointNode == nil {
return nil, AssertError(fmt.Sprintf("findPreviousCheckpoint "+
"failed lookup of known good block node %s",
b.nextCheckpoint.Hash))
}
b.checkpointNode = checkpointNode
// Set the next expected checkpoint.
checkpointIndex := -1
for i := numCheckpoints - 1; i >= 0; i-- {
if checkpoints[i].Hash.IsEqual(b.nextCheckpoint.Hash) {
checkpointIndex = i
break
}
}
b.nextCheckpoint = nil
if checkpointIndex != -1 && checkpointIndex < numCheckpoints-1 {
b.nextCheckpoint = &checkpoints[checkpointIndex+1]
}
return b.checkpointNode, nil
}
// isNonstandardTransaction determines whether a transaction contains any
// scripts which are not one of the standard types.
func isNonstandardTransaction(tx *btcutil.Tx) bool {
// Check all of the output public key scripts for non-standard scripts.
for _, txOut := range tx.MsgTx().TxOut {
scriptClass := txscript.GetScriptClass(txOut.PkScript)
if scriptClass == txscript.NonStandardTy {
return true
}
}
return false
}
// IsCheckpointCandidate returns whether or not the passed block is a good
// checkpoint candidate.
//
// The factors used to determine a good checkpoint are:
// - The block must be in the main chain
// - The block must be at least 'CheckpointConfirmations' blocks prior to the
// current end of the main chain
// - The blocks before and after the checkpoint must have timestamps which
// are also before and after the checkpoint, respectively
// (due to the median time allowance this is not always the case)
// - The block must not contain any strange transactions such as those with
// nonstandard scripts
//
// The intent is that candidates are reviewed by a developer to make the final
// decision and then manually added to the list of checkpoints for a network.
//
// This function is safe for concurrent access.
func (b *BlockChain) IsCheckpointCandidate(block *btcutil.Block) (bool, error) {
b.chainLock.RLock()
defer b.chainLock.RUnlock()
// A checkpoint must be in the main chain.
node := b.index.LookupNode(block.Hash())
if node == nil || !b.bestChain.Contains(node) {
return false, nil
}
// Ensure the height of the passed block and the entry for the block in
// the main chain match. This should always be the case unless the
// caller provided an invalid block.
if node.height != block.Height() {
return false, fmt.Errorf("passed block height of %d does not "+
"match the main chain height of %d", block.Height(),
node.height)
}
// A checkpoint must be at least CheckpointConfirmations blocks
// before the end of the main chain.
mainChainHeight := b.bestChain.Tip().height
if node.height > (mainChainHeight - CheckpointConfirmations) {
return false, nil
}
// A checkpoint must have at least one block after it.
//
// This should always succeed since the check above already made sure it
// is CheckpointConfirmations back, but be safe in case the constant
// changes.
nextNode := b.bestChain.Next(node)
if nextNode == nil {
return false, nil
}
// A checkpoint must have at least one block before it.
if node.parent == nil {
return false, nil
}
// A checkpoint must have timestamps for the block and the blocks on
// either side of it in order (due to the median time allowance this is
// not always the case).
prevTime := time.Unix(node.parent.timestamp, 0)
curTime := block.MsgBlock().Header.Timestamp
nextTime := time.Unix(nextNode.timestamp, 0)
if prevTime.After(curTime) || nextTime.Before(curTime) {
return false, nil
}
// A checkpoint must have transactions that only contain standard
// scripts.
for _, tx := range block.Transactions() {
if isNonstandardTransaction(tx) {
return false, nil
}
}
// All of the checks passed, so the block is a candidate.
return true, nil
}

vendor/github.com/btcsuite/btcd/blockchain/compress.go generated vendored Normal file

@@ -0,0 +1,586 @@
// Copyright (c) 2015-2016 The btcsuite developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package blockchain
import (
"github.com/btcsuite/btcd/btcec"
"github.com/btcsuite/btcd/txscript"
)
// -----------------------------------------------------------------------------
// A variable length quantity (VLQ) is an encoding that uses an arbitrary number
// of binary octets to represent an arbitrarily large integer. The scheme
// employs a most significant byte (MSB) base-128 encoding where the high bit in
// each byte indicates whether or not the byte is the final one. In addition,
// to ensure there are no redundant encodings, an offset is subtracted every
// time a group of 7 bits is shifted out. Therefore each integer can be
// represented in exactly one way, and each representation stands for exactly
// one integer.
//
// Another nice property of this encoding is that it provides a compact
// representation of values that are typically used to indicate sizes. For
// example, the values 0 - 127 are represented with a single byte, 128 - 16511
// with two bytes, and 16512 - 2113663 with three bytes.
//
// While the encoding allows arbitrarily large integers, it is artificially
// limited in this code to an unsigned 64-bit integer for efficiency purposes.
//
// Example encodings:
// 0 -> [0x00]
// 127 -> [0x7f] * Max 1-byte value
// 128 -> [0x80 0x00]
// 129 -> [0x80 0x01]
// 255 -> [0x80 0x7f]
// 256 -> [0x81 0x00]
// 16511 -> [0xff 0x7f] * Max 2-byte value
// 16512 -> [0x80 0x80 0x00]
// 32895 -> [0x80 0xff 0x7f]
// 2113663 -> [0xff 0xff 0x7f] * Max 3-byte value
// 270549119 -> [0xff 0xff 0xff 0x7f] * Max 4-byte value
// 2^64-1 -> [0x80 0xfe 0xfe 0xfe 0xfe 0xfe 0xfe 0xfe 0xfe 0x7f]
//
// References:
// https://en.wikipedia.org/wiki/Variable-length_quantity
// http://www.codecodex.com/wiki/Variable-Length_Integers
// -----------------------------------------------------------------------------
// serializeSizeVLQ returns the number of bytes it would take to serialize the
// passed number as a variable-length quantity according to the format described
// above.
func serializeSizeVLQ(n uint64) int {
size := 1
for ; n > 0x7f; n = (n >> 7) - 1 {
size++
}
return size
}
// putVLQ serializes the provided number to a variable-length quantity according
// to the format described above and returns the number of bytes of the encoded
// value. The result is placed directly into the passed byte slice which must
// be at least large enough to handle the number of bytes returned by the
// serializeSizeVLQ function or it will panic.
func putVLQ(target []byte, n uint64) int {
offset := 0
for ; ; offset++ {
// The high bit is set when another byte follows.
highBitMask := byte(0x80)
if offset == 0 {
highBitMask = 0x00
}
target[offset] = byte(n&0x7f) | highBitMask
if n <= 0x7f {
break
}
n = (n >> 7) - 1
}
// Reverse the bytes so it is MSB-encoded.
for i, j := 0, offset; i < j; i, j = i+1, j-1 {
target[i], target[j] = target[j], target[i]
}
return offset + 1
}
// deserializeVLQ deserializes the provided variable-length quantity according
// to the format described above. It also returns the number of bytes
// deserialized.
func deserializeVLQ(serialized []byte) (uint64, int) {
var n uint64
var size int
for _, val := range serialized {
size++
n = (n << 7) | uint64(val&0x7f)
if val&0x80 != 0x80 {
break
}
n++
}
return n, size
}
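// A concrete round trip matching the example encodings above:
//
//	var buf [2]byte
//	_ = serializeSizeVLQ(16511)       // 2, the space putVLQ requires
//	n := putVLQ(buf[:], 16511)        // buf == [0xff 0x7f], n == 2
//	v, size := deserializeVLQ(buf[:]) // v == 16511, size == 2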
// -----------------------------------------------------------------------------
// In order to reduce the size of stored scripts, a domain specific compression
// algorithm is used which recognizes standard scripts and stores them using
// less bytes than the original script. The compression algorithm used here was
// obtained from Bitcoin Core, so all credits for the algorithm go to it.
//
// The general serialized format is:
//
// <script size or type><script data>
//
// Field Type Size
// script size or type VLQ variable
// script data []byte variable
//
// The specific serialized format for each recognized standard script is:
//
// - Pay-to-pubkey-hash: (21 bytes) - <0><20-byte pubkey hash>
// - Pay-to-script-hash: (21 bytes) - <1><20-byte script hash>
// - Pay-to-pubkey**: (33 bytes) - <2, 3, 4, or 5><32-byte pubkey X value>
// 2, 3 = compressed pubkey with bit 0 specifying the y coordinate to use
// 4, 5 = uncompressed pubkey with bit 0 specifying the y coordinate to use
// ** Only valid public keys starting with 0x02, 0x03, and 0x04 are supported.
//
// Any scripts which are not recognized as one of the aforementioned standard
// scripts are encoded using the general serialized format and encode the script
// size as the sum of the actual size of the script and the number of special
// cases.
// -----------------------------------------------------------------------------
// The following constants specify the special constants used to identify a
// special script type in the domain-specific compressed script encoding.
//
// NOTE: This section specifically does not use iota since these values are
// serialized and must be stable for long-term storage.
const (
// cstPayToPubKeyHash identifies a compressed pay-to-pubkey-hash script.
cstPayToPubKeyHash = 0
// cstPayToScriptHash identifies a compressed pay-to-script-hash script.
cstPayToScriptHash = 1
// cstPayToPubKeyComp2 identifies a compressed pay-to-pubkey script to
// a compressed pubkey. Bit 0 specifies which y-coordinate to use
// to reconstruct the full uncompressed pubkey.
cstPayToPubKeyComp2 = 2
// cstPayToPubKeyComp3 identifies a compressed pay-to-pubkey script to
// a compressed pubkey. Bit 0 specifies which y-coordinate to use
// to reconstruct the full uncompressed pubkey.
cstPayToPubKeyComp3 = 3
// cstPayToPubKeyUncomp4 identifies a compressed pay-to-pubkey script to
// an uncompressed pubkey. Bit 0 specifies which y-coordinate to use
// to reconstruct the full uncompressed pubkey.
cstPayToPubKeyUncomp4 = 4
// cstPayToPubKeyUncomp5 identifies a compressed pay-to-pubkey script to
// an uncompressed pubkey. Bit 0 specifies which y-coordinate to use
// to reconstruct the full uncompressed pubkey.
cstPayToPubKeyUncomp5 = 5
// numSpecialScripts is the number of special scripts recognized by the
// domain-specific script compression algorithm.
numSpecialScripts = 6
)
// isPubKeyHash returns whether or not the passed public key script is a
// standard pay-to-pubkey-hash script along with the pubkey hash it is paying to
// if it is.
func isPubKeyHash(script []byte) (bool, []byte) {
if len(script) == 25 && script[0] == txscript.OP_DUP &&
script[1] == txscript.OP_HASH160 &&
script[2] == txscript.OP_DATA_20 &&
script[23] == txscript.OP_EQUALVERIFY &&
script[24] == txscript.OP_CHECKSIG {
return true, script[3:23]
}
return false, nil
}
// isScriptHash returns whether or not the passed public key script is a
// standard pay-to-script-hash script along with the script hash it is paying to
// if it is.
func isScriptHash(script []byte) (bool, []byte) {
if len(script) == 23 && script[0] == txscript.OP_HASH160 &&
script[1] == txscript.OP_DATA_20 &&
script[22] == txscript.OP_EQUAL {
return true, script[2:22]
}
return false, nil
}
// isPubKey returns whether or not the passed public key script is a standard
// pay-to-pubkey script that pays to a valid compressed or uncompressed public
// key along with the serialized pubkey it is paying to if it is.
//
// NOTE: This function ensures the public key is actually valid since the
// compression algorithm requires valid pubkeys. It does not support hybrid
// pubkeys. This means that even if the script has the correct form for a
// pay-to-pubkey script, this function will only return true when it is paying
// to a valid compressed or uncompressed pubkey.
func isPubKey(script []byte) (bool, []byte) {
// Pay-to-compressed-pubkey script.
if len(script) == 35 && script[0] == txscript.OP_DATA_33 &&
script[34] == txscript.OP_CHECKSIG && (script[1] == 0x02 ||
script[1] == 0x03) {
// Ensure the public key is valid.
serializedPubKey := script[1:34]
_, err := btcec.ParsePubKey(serializedPubKey, btcec.S256())
if err == nil {
return true, serializedPubKey
}
}
// Pay-to-uncompressed-pubkey script.
if len(script) == 67 && script[0] == txscript.OP_DATA_65 &&
script[66] == txscript.OP_CHECKSIG && script[1] == 0x04 {
// Ensure the public key is valid.
serializedPubKey := script[1:66]
_, err := btcec.ParsePubKey(serializedPubKey, btcec.S256())
if err == nil {
return true, serializedPubKey
}
}
return false, nil
}
// compressedScriptSize returns the number of bytes the passed script would take
// when encoded with the domain specific compression algorithm described above.
func compressedScriptSize(pkScript []byte) int {
// Pay-to-pubkey-hash script.
if valid, _ := isPubKeyHash(pkScript); valid {
return 21
}
// Pay-to-script-hash script.
if valid, _ := isScriptHash(pkScript); valid {
return 21
}
// Pay-to-pubkey (compressed or uncompressed) script.
if valid, _ := isPubKey(pkScript); valid {
return 33
}
// When none of the above special cases apply, encode the script as is
// preceded by the sum of its size and the number of special cases
// encoded as a variable length quantity.
return serializeSizeVLQ(uint64(len(pkScript)+numSpecialScripts)) +
len(pkScript)
}
// decodeCompressedScriptSize treats the passed serialized bytes as a compressed
// script, possibly followed by other data, and returns the number of bytes it
// occupies taking into account the special encoding of the script size by the
// domain specific compression algorithm described above.
func decodeCompressedScriptSize(serialized []byte) int {
scriptSize, bytesRead := deserializeVLQ(serialized)
if bytesRead == 0 {
return 0
}
switch scriptSize {
case cstPayToPubKeyHash:
return 21
case cstPayToScriptHash:
return 21
case cstPayToPubKeyComp2, cstPayToPubKeyComp3, cstPayToPubKeyUncomp4,
cstPayToPubKeyUncomp5:
return 33
}
scriptSize -= numSpecialScripts
scriptSize += uint64(bytesRead)
return int(scriptSize)
}
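// For example, a serialized entry whose first byte is 0x00
// (cstPayToPubKeyHash) always occupies 21 bytes, while a non-special 10-byte
// script is prefixed with 10+numSpecialScripts == 16 and occupies 11 bytes:
//
//	decodeCompressedScriptSize([]byte{0x00}) // 21
//	decodeCompressedScriptSize([]byte{0x10}) // 11 (1 size byte + 10 script bytes)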
// putCompressedScript compresses the passed script according to the domain
// specific compression algorithm described above directly into the passed
// target byte slice. The target byte slice must be at least large enough to
// handle the number of bytes returned by the compressedScriptSize function or
// it will panic.
func putCompressedScript(target, pkScript []byte) int {
// Pay-to-pubkey-hash script.
if valid, hash := isPubKeyHash(pkScript); valid {
target[0] = cstPayToPubKeyHash
copy(target[1:21], hash)
return 21
}
// Pay-to-script-hash script.
if valid, hash := isScriptHash(pkScript); valid {
target[0] = cstPayToScriptHash
copy(target[1:21], hash)
return 21
}
// Pay-to-pubkey (compressed or uncompressed) script.
if valid, serializedPubKey := isPubKey(pkScript); valid {
pubKeyFormat := serializedPubKey[0]
switch pubKeyFormat {
case 0x02, 0x03:
target[0] = pubKeyFormat
copy(target[1:33], serializedPubKey[1:33])
return 33
case 0x04:
// Encode the oddness of the serialized pubkey into the
// compressed script type.
target[0] = pubKeyFormat | (serializedPubKey[64] & 0x01)
copy(target[1:33], serializedPubKey[1:33])
return 33
}
}
// When none of the above special cases apply, encode the unmodified
// script preceded by the sum of its size and the number of special
// cases encoded as a variable length quantity.
encodedSize := uint64(len(pkScript) + numSpecialScripts)
vlqSizeLen := putVLQ(target, encodedSize)
copy(target[vlqSizeLen:], pkScript)
return vlqSizeLen + len(pkScript)
}
// decompressScript returns the original script obtained by decompressing the
// passed compressed script according to the domain specific compression
// algorithm described above.
//
// NOTE: The script parameter must already have been proven to be long enough
// to contain the number of bytes returned by decodeCompressedScriptSize or it
// will panic. This is acceptable since it is only an internal function.
func decompressScript(compressedPkScript []byte) []byte {
// In practice this function will not be called with a zero-length or
// nil script since the nil script encoding includes the length, however
// the code below assumes the length exists, so just return nil now if
// the function ever ends up being called with a nil script in the
// future.
if len(compressedPkScript) == 0 {
return nil
}
// Decode the script size and examine it for the special cases.
encodedScriptSize, bytesRead := deserializeVLQ(compressedPkScript)
switch encodedScriptSize {
// Pay-to-pubkey-hash script. The resulting script is:
// <OP_DUP><OP_HASH160><20 byte hash><OP_EQUALVERIFY><OP_CHECKSIG>
case cstPayToPubKeyHash:
pkScript := make([]byte, 25)
pkScript[0] = txscript.OP_DUP
pkScript[1] = txscript.OP_HASH160
pkScript[2] = txscript.OP_DATA_20
copy(pkScript[3:], compressedPkScript[bytesRead:bytesRead+20])
pkScript[23] = txscript.OP_EQUALVERIFY
pkScript[24] = txscript.OP_CHECKSIG
return pkScript
// Pay-to-script-hash script. The resulting script is:
// <OP_HASH160><20 byte script hash><OP_EQUAL>
case cstPayToScriptHash:
pkScript := make([]byte, 23)
pkScript[0] = txscript.OP_HASH160
pkScript[1] = txscript.OP_DATA_20
copy(pkScript[2:], compressedPkScript[bytesRead:bytesRead+20])
pkScript[22] = txscript.OP_EQUAL
return pkScript
// Pay-to-compressed-pubkey script. The resulting script is:
// <OP_DATA_33><33 byte compressed pubkey><OP_CHECKSIG>
case cstPayToPubKeyComp2, cstPayToPubKeyComp3:
pkScript := make([]byte, 35)
pkScript[0] = txscript.OP_DATA_33
pkScript[1] = byte(encodedScriptSize)
copy(pkScript[2:], compressedPkScript[bytesRead:bytesRead+32])
pkScript[34] = txscript.OP_CHECKSIG
return pkScript
// Pay-to-uncompressed-pubkey script. The resulting script is:
// <OP_DATA_65><65 byte uncompressed pubkey><OP_CHECKSIG>
case cstPayToPubKeyUncomp4, cstPayToPubKeyUncomp5:
// Change the leading byte to the appropriate compressed pubkey
// identifier (0x02 or 0x03) so it can be decoded as a
// compressed pubkey. This really should never fail since the
// encoding ensures it is valid before compressing to this type.
compressedKey := make([]byte, 33)
compressedKey[0] = byte(encodedScriptSize - 2)
copy(compressedKey[1:], compressedPkScript[1:])
key, err := btcec.ParsePubKey(compressedKey, btcec.S256())
if err != nil {
return nil
}
pkScript := make([]byte, 67)
pkScript[0] = txscript.OP_DATA_65
copy(pkScript[1:], key.SerializeUncompressed())
pkScript[66] = txscript.OP_CHECKSIG
return pkScript
}
// When none of the special cases apply, the script was encoded using
// the general format, so reduce the script size by the number of
// special cases and return the unmodified script.
scriptSize := int(encodedScriptSize - numSpecialScripts)
pkScript := make([]byte, scriptSize)
copy(pkScript, compressedPkScript[bytesRead:bytesRead+scriptSize])
return pkScript
}
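Since all of these helpers are unexported, the following round-trip sketch only works from inside package blockchain; the function name and the sample hash bytes are hypothetical. It shows a 25-byte pay-to-pubkey-hash script compressing to 21 bytes (the one-byte special-case marker plus the 20-byte hash) and decompressing back unchanged.

```go
package blockchain

import (
	"bytes"
	"encoding/hex"
	"fmt"
)

// exampleScriptRoundTrip is a hypothetical in-package sketch that round-trips
// a standard pay-to-pubkey-hash script through the compressor above.
func exampleScriptRoundTrip() {
	// OP_DUP OP_HASH160 <20-byte hash> OP_EQUALVERIFY OP_CHECKSIG
	pkScript, _ := hex.DecodeString(
		"76a9140102030405060708090a0b0c0d0e0f101112131488ac")

	compressed := make([]byte, compressedScriptSize(pkScript))
	putCompressedScript(compressed, pkScript)
	fmt.Printf("%d bytes compressed to %d\n", len(pkScript), len(compressed))

	decompressed := decompressScript(compressed)
	fmt.Println("round trip ok:", bytes.Equal(pkScript, decompressed))
}
```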
// -----------------------------------------------------------------------------
// In order to reduce the size of stored amounts, a domain specific compression
// algorithm is used which relies on there typically being a lot of zeroes at
// the end of the amounts. The compression algorithm used here was obtained
// from Bitcoin Core, so all credit for the algorithm goes to its developers.
//
// While this is simply exchanging one uint64 for another, the resulting value
// for typical amounts has a much smaller magnitude which results in fewer bytes
// when encoded as variable length quantity. For example, consider the amount
// of 0.1 BTC which is 10000000 satoshi. Encoding 10000000 as a VLQ would take
// 4 bytes while encoding the compressed value of 8 as a VLQ only takes 1 byte.
//
// Essentially the compression is achieved by splitting the value into an
// exponent in the range [0-9] and a digit in the range [1-9], when possible,
// and encoding them in a way that can be decoded. More specifically, the
// encoding is as follows:
// - 0 is 0
// - Find the exponent, e, as the largest power of 10 that evenly divides the
// value up to a maximum of 9
// - When e < 9, the final digit can't be 0 so store it as d and remove it by
// dividing the value by 10 (call the result n). The encoded value is thus:
// 1 + 10*(9*n + d-1) + e
// - When e==9, the only thing known is the amount is not 0. The encoded value
// is thus:
// 1 + 10*(n-1) + e == 10 + 10*(n-1)
//
// Example encodings:
// (The numbers in parentheses are the number of bytes when serialized as a VLQ)
// 0 (1) -> 0 (1) * 0.00000000 BTC
// 1000 (2) -> 4 (1) * 0.00001000 BTC
// 10000 (2) -> 5 (1) * 0.00010000 BTC
// 12345678 (4) -> 111111101 (4) * 0.12345678 BTC
// 50000000 (4) -> 48 (1) * 0.50000000 BTC
// 100000000 (4) -> 9 (1) * 1.00000000 BTC
// 500000000 (5) -> 49 (1) * 5.00000000 BTC
// 1000000000 (5) -> 10 (1) * 10.00000000 BTC
// -----------------------------------------------------------------------------
// compressTxOutAmount compresses the passed amount according to the domain
// specific compression algorithm described above.
func compressTxOutAmount(amount uint64) uint64 {
// No need to do any work if it's zero.
if amount == 0 {
return 0
}
// Find the largest power of 10 (max of 9) that evenly divides the
// value.
exponent := uint64(0)
for amount%10 == 0 && exponent < 9 {
amount /= 10
exponent++
}
// The compressed result for exponents less than 9 is:
// 1 + 10*(9*n + d-1) + e
if exponent < 9 {
lastDigit := amount % 10
amount /= 10
return 1 + 10*(9*amount+lastDigit-1) + exponent
}
// The compressed result for an exponent of 9 is:
// 1 + 10*(n-1) + e == 10 + 10*(n-1)
return 10 + 10*(amount-1)
}
// decompressTxOutAmount returns the original amount the passed compressed
// amount represents according to the domain specific compression algorithm
// described above.
func decompressTxOutAmount(amount uint64) uint64 {
// No need to do any work if it's zero.
if amount == 0 {
return 0
}
// The decompressed amount is either of the following two equations:
// x = 1 + 10*(9*n + d - 1) + e
// x = 1 + 10*(n - 1) + 9
amount--
// The decompressed amount is now one of the following two equations:
// x = 10*(9*n + d - 1) + e
// x = 10*(n - 1) + 9
exponent := amount % 10
amount /= 10
// The decompressed amount is now one of the following two equations:
// x = 9*n + d - 1 | where e < 9
// x = n - 1 | where e = 9
n := uint64(0)
if exponent < 9 {
lastDigit := amount%9 + 1
amount /= 9
n = amount*10 + lastDigit
} else {
n = amount + 1
}
// Apply the exponent.
for ; exponent > 0; exponent-- {
n *= 10
}
return n
}
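A quick in-package sketch (the function name is hypothetical, since both helpers are unexported) reproduces a few of the example encodings from the table above and confirms the round trip:

```go
package blockchain

import "fmt"

// exampleAmountRoundTrip compresses a few satoshi amounts from the table
// above and verifies that decompression restores the originals.
func exampleAmountRoundTrip() {
	for _, amt := range []uint64{0, 1000, 10000000, 12345678, 100000000} {
		c := compressTxOutAmount(amt)
		d := decompressTxOutAmount(c)
		fmt.Printf("%d -> %d -> %d (ok=%v)\n", amt, c, d, amt == d)
	}
}
```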
// -----------------------------------------------------------------------------
// Compressed transaction outputs consist of an amount and a public key script
// both compressed using the domain specific compression algorithms previously
// described.
//
// The serialized format is:
//
// <compressed amount><compressed script>
//
// Field Type Size
// compressed amount VLQ variable
// compressed script []byte variable
// -----------------------------------------------------------------------------
// compressedTxOutSize returns the number of bytes the passed transaction output
// fields would take when encoded with the format described above.
func compressedTxOutSize(amount uint64, pkScript []byte) int {
return serializeSizeVLQ(compressTxOutAmount(amount)) +
compressedScriptSize(pkScript)
}
// putCompressedTxOut compresses the passed amount and script according to their
// domain specific compression algorithms and encodes them directly into the
// passed target byte slice with the format described above. The target byte
// slice must be at least large enough to handle the number of bytes returned by
// the compressedTxOutSize function or it will panic.
func putCompressedTxOut(target []byte, amount uint64, pkScript []byte) int {
offset := putVLQ(target, compressTxOutAmount(amount))
offset += putCompressedScript(target[offset:], pkScript)
return offset
}
// decodeCompressedTxOut decodes the passed compressed txout, possibly followed
// by other data, into its uncompressed amount and script and returns them along
// with the number of bytes they occupied prior to decompression.
func decodeCompressedTxOut(serialized []byte) (uint64, []byte, int, error) {
// Deserialize the compressed amount and ensure there are bytes
// remaining for the compressed script.
compressedAmount, bytesRead := deserializeVLQ(serialized)
if bytesRead >= len(serialized) {
return 0, nil, bytesRead, errDeserialize("unexpected end of " +
"data after compressed amount")
}
// Decode the compressed script size and ensure there are enough bytes
// left in the slice for it.
scriptSize := decodeCompressedScriptSize(serialized[bytesRead:])
if len(serialized[bytesRead:]) < scriptSize {
return 0, nil, bytesRead, errDeserialize("unexpected end of " +
"data after script size")
}
// Decompress and return the amount and script.
amount := decompressTxOutAmount(compressedAmount)
script := decompressScript(serialized[bytesRead : bytesRead+scriptSize])
return amount, script, bytesRead + scriptSize, nil
}
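Putting the two compressors together, a hypothetical in-package sketch of a full txout round trip: a 23-byte pay-to-script-hash output carrying 0.5 BTC serializes to 22 bytes (a 1-byte compressed amount plus a 21-byte compressed script). The script hash bytes are made up.

```go
package blockchain

import (
	"bytes"
	"encoding/hex"
	"fmt"
)

// exampleTxOutRoundTrip encodes and decodes a compressed transaction output.
func exampleTxOutRoundTrip() {
	amount := uint64(50000000) // 0.5 BTC in satoshi
	// OP_HASH160 <20-byte script hash> OP_EQUAL
	pkScript, _ := hex.DecodeString(
		"a9140102030405060708090a0b0c0d0e0f101112131487")

	buf := make([]byte, compressedTxOutSize(amount, pkScript))
	putCompressedTxOut(buf, amount, pkScript)

	gotAmount, gotScript, bytesRead, err := decodeCompressedTxOut(buf)
	fmt.Println(gotAmount == amount, bytes.Equal(gotScript, pkScript),
		bytesRead == len(buf), err)
}
```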

312
vendor/github.com/btcsuite/btcd/blockchain/difficulty.go generated vendored Normal file
View File

@@ -0,0 +1,312 @@
// Copyright (c) 2013-2017 The btcsuite developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package blockchain
import (
"math/big"
"time"
"github.com/btcsuite/btcd/chaincfg/chainhash"
)
var (
// bigOne is 1 represented as a big.Int. It is defined here to avoid
// the overhead of creating it multiple times.
bigOne = big.NewInt(1)
// oneLsh256 is 1 shifted left 256 bits. It is defined here to avoid
// the overhead of creating it multiple times.
oneLsh256 = new(big.Int).Lsh(bigOne, 256)
)
// HashToBig converts a chainhash.Hash into a big.Int that can be used to
// perform math comparisons.
func HashToBig(hash *chainhash.Hash) *big.Int {
// A Hash is in little-endian, but the big package wants the bytes in
// big-endian, so reverse them.
buf := *hash
blen := len(buf)
for i := 0; i < blen/2; i++ {
buf[i], buf[blen-1-i] = buf[blen-1-i], buf[i]
}
return new(big.Int).SetBytes(buf[:])
}
// CompactToBig converts a whole number N from its compact representation (an
// unsigned 32-bit number) to a big.Int. The representation is similar to
// IEEE754 floating point numbers.
//
// Like IEEE754 floating point, there are three basic components: the sign,
// the exponent, and the mantissa. They are broken out as follows:
//
// * the most significant 8 bits represent the unsigned base 256 exponent
// * bit 23 (the 24th bit) represents the sign bit
// * the least significant 23 bits represent the mantissa
//
// -------------------------------------------------
// | Exponent | Sign | Mantissa |
// -------------------------------------------------
// | 8 bits [31-24] | 1 bit [23] | 23 bits [22-00] |
// -------------------------------------------------
//
// The formula to calculate N is:
// N = (-1)^sign * mantissa * 256^(exponent-3)
//
// This compact form is only used in bitcoin to encode unsigned 256-bit numbers
// which represent difficulty targets, thus there really is not a need for a
// sign bit, but it is implemented here to stay consistent with bitcoind.
func CompactToBig(compact uint32) *big.Int {
// Extract the mantissa, sign bit, and exponent.
mantissa := compact & 0x007fffff
isNegative := compact&0x00800000 != 0
exponent := uint(compact >> 24)
// Since the base for the exponent is 256, the exponent can be treated
// as the number of bytes to represent the full 256-bit number. So,
// treat the exponent as the number of bytes and shift the mantissa
// right or left accordingly. This is equivalent to:
// N = mantissa * 256^(exponent-3)
var bn *big.Int
if exponent <= 3 {
mantissa >>= 8 * (3 - exponent)
bn = big.NewInt(int64(mantissa))
} else {
bn = big.NewInt(int64(mantissa))
bn.Lsh(bn, 8*(exponent-3))
}
// Make it negative if the sign bit is set.
if isNegative {
bn = bn.Neg(bn)
}
return bn
}
// BigToCompact converts a whole number N to a compact representation using
// an unsigned 32-bit number. The compact representation only provides 23 bits
// of precision, so values larger than (2^23 - 1) only encode the most
// significant digits of the number. See CompactToBig for details.
func BigToCompact(n *big.Int) uint32 {
// No need to do any work if it's zero.
if n.Sign() == 0 {
return 0
}
// Since the base for the exponent is 256, the exponent can be treated
// as the number of bytes. So, shift the number right or left
// accordingly. This is equivalent to:
// mantissa = mantissa / 256^(exponent-3)
var mantissa uint32
exponent := uint(len(n.Bytes()))
if exponent <= 3 {
mantissa = uint32(n.Bits()[0])
mantissa <<= 8 * (3 - exponent)
} else {
// Use a copy to avoid modifying the caller's original number.
tn := new(big.Int).Set(n)
mantissa = uint32(tn.Rsh(tn, 8*(exponent-3)).Bits()[0])
}
// When the mantissa already has the sign bit set, the number is too
// large to fit into the available 23-bits, so divide the number by 256
// and increment the exponent accordingly.
if mantissa&0x00800000 != 0 {
mantissa >>= 8
exponent++
}
// Pack the exponent, sign bit, and mantissa into an unsigned 32-bit
// int and return it.
compact := uint32(exponent<<24) | mantissa
if n.Sign() < 0 {
compact |= 0x00800000
}
return compact
}
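Both conversion functions are exported, so a standalone program can demonstrate them. The sketch below uses 0x1d00ffff, the well-known compact difficulty of the Bitcoin genesis block; since its mantissa fits in 23 bits, the round trip is lossless.

```go
package main

import (
	"fmt"

	"github.com/btcsuite/btcd/blockchain"
)

func main() {
	// Exponent 0x1d (29 bytes), mantissa 0x00ffff, so the target is
	// 0xffff * 256^(29-3).
	target := blockchain.CompactToBig(0x1d00ffff)
	fmt.Printf("target:  %064x\n", target)

	// Convert back to the compact form.
	fmt.Printf("compact: %08x\n", blockchain.BigToCompact(target))
}
```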
// CalcWork calculates a work value from difficulty bits. Bitcoin increases
// the difficulty for generating a block by decreasing the value which the
// generated hash must be less than. This difficulty target is stored in each
// block header using a compact representation as described in the documentation
// for CompactToBig. The main chain is selected by choosing the chain that has
// the most proof of work (highest difficulty). Since a lower target difficulty
// value equates to higher actual difficulty, the work value which will be
// accumulated must be the inverse of the difficulty. Also, in order to avoid
// potential division by zero and really small floating point numbers, the
// result adds 1 to the denominator and multiplies the numerator by 2^256.
func CalcWork(bits uint32) *big.Int {
// Return a work value of zero if the passed difficulty bits represent
// a negative number. Note this should not happen in practice with valid
// blocks, but an invalid block could trigger it.
difficultyNum := CompactToBig(bits)
if difficultyNum.Sign() <= 0 {
return big.NewInt(0)
}
// (1 << 256) / (difficultyNum + 1)
denominator := new(big.Int).Add(difficultyNum, bigOne)
return new(big.Int).Div(oneLsh256, denominator)
}
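For the genesis difficulty this works out to roughly 2^32, matching the intuition that about 4.3 billion hashes were expected per block at difficulty 1. A minimal sketch:

```go
package main

import (
	"fmt"

	"github.com/btcsuite/btcd/blockchain"
)

func main() {
	// work = 2^256 / (target + 1) for compact bits 0x1d00ffff.
	fmt.Println(blockchain.CalcWork(0x1d00ffff))
}
```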
// calcEasiestDifficulty calculates the easiest possible difficulty that a block
// can have given starting difficulty bits and a duration. It is mainly used
// to verify that the proof of work claimed by a block is sane when compared
// to a known good checkpoint.
func (b *BlockChain) calcEasiestDifficulty(bits uint32, duration time.Duration) uint32 {
// Convert types used in the calculations below.
durationVal := int64(duration / time.Second)
adjustmentFactor := big.NewInt(b.chainParams.RetargetAdjustmentFactor)
// The test network rules allow minimum difficulty blocks after more
// than twice the desired amount of time needed to generate a block has
// elapsed.
if b.chainParams.ReduceMinDifficulty {
reductionTime := int64(b.chainParams.MinDiffReductionTime /
time.Second)
if durationVal > reductionTime {
return b.chainParams.PowLimitBits
}
}
// Since easier difficulty equates to higher numbers, the easiest
// difficulty for a given duration is the largest value possible given
// the number of retargets for the duration and starting difficulty
// multiplied by the max adjustment factor.
newTarget := CompactToBig(bits)
for durationVal > 0 && newTarget.Cmp(b.chainParams.PowLimit) < 0 {
newTarget.Mul(newTarget, adjustmentFactor)
durationVal -= b.maxRetargetTimespan
}
// Limit new value to the proof of work limit.
if newTarget.Cmp(b.chainParams.PowLimit) > 0 {
newTarget.Set(b.chainParams.PowLimit)
}
return BigToCompact(newTarget)
}
// findPrevTestNetDifficulty returns the difficulty of the previous block which
// did not have the special testnet minimum difficulty rule applied.
//
// This function MUST be called with the chain state lock held (for writes).
func (b *BlockChain) findPrevTestNetDifficulty(startNode *blockNode) uint32 {
// Search backwards through the chain for the last block without
// the special rule applied.
iterNode := startNode
for iterNode != nil && iterNode.height%b.blocksPerRetarget != 0 &&
iterNode.bits == b.chainParams.PowLimitBits {
iterNode = iterNode.parent
}
// Return the found difficulty or the minimum difficulty if no
// appropriate block was found.
lastBits := b.chainParams.PowLimitBits
if iterNode != nil {
lastBits = iterNode.bits
}
return lastBits
}
// calcNextRequiredDifficulty calculates the required difficulty for the block
// after the passed previous block node based on the difficulty retarget rules.
// This function differs from the exported CalcNextRequiredDifficulty in that
// the exported version uses the current best chain as the previous block node
// while this function accepts any block node.
func (b *BlockChain) calcNextRequiredDifficulty(lastNode *blockNode, newBlockTime time.Time) (uint32, error) {
// Genesis block.
if lastNode == nil {
return b.chainParams.PowLimitBits, nil
}
// Return the previous block's difficulty requirements if this block
// is not at a difficulty retarget interval.
if (lastNode.height+1)%b.blocksPerRetarget != 0 {
// For networks that support it, allow special reduction of the
// required difficulty once too much time has elapsed without
// mining a block.
if b.chainParams.ReduceMinDifficulty {
// Return minimum difficulty when more than the desired
// amount of time has elapsed without mining a block.
reductionTime := int64(b.chainParams.MinDiffReductionTime /
time.Second)
allowMinTime := lastNode.timestamp + reductionTime
if newBlockTime.Unix() > allowMinTime {
return b.chainParams.PowLimitBits, nil
}
// The block was mined within the desired timeframe, so
// return the difficulty for the last block which did
// not have the special minimum difficulty rule applied.
return b.findPrevTestNetDifficulty(lastNode), nil
}
// For the main network (or any unrecognized networks), simply
// return the previous block's difficulty requirements.
return lastNode.bits, nil
}
// Get the block node at the previous retarget (targetTimespan days
// worth of blocks).
firstNode := lastNode.RelativeAncestor(b.blocksPerRetarget - 1)
if firstNode == nil {
return 0, AssertError("unable to obtain previous retarget block")
}
// Limit the amount of adjustment that can occur to the previous
// difficulty.
actualTimespan := lastNode.timestamp - firstNode.timestamp
adjustedTimespan := actualTimespan
if actualTimespan < b.minRetargetTimespan {
adjustedTimespan = b.minRetargetTimespan
} else if actualTimespan > b.maxRetargetTimespan {
adjustedTimespan = b.maxRetargetTimespan
}
// Calculate new target difficulty as:
// currentDifficulty * (adjustedTimespan / targetTimespan)
// The result uses integer division which means it will be slightly
// rounded down. Bitcoind also uses integer division to calculate this
// result.
oldTarget := CompactToBig(lastNode.bits)
newTarget := new(big.Int).Mul(oldTarget, big.NewInt(adjustedTimespan))
targetTimeSpan := int64(b.chainParams.TargetTimespan / time.Second)
newTarget.Div(newTarget, big.NewInt(targetTimeSpan))
// Limit new value to the proof of work limit.
if newTarget.Cmp(b.chainParams.PowLimit) > 0 {
newTarget.Set(b.chainParams.PowLimit)
}
// Log new target difficulty and return it. The new target logging is
// intentionally converting the bits back to a number instead of using
// newTarget since conversion to the compact representation loses
// precision.
newTargetBits := BigToCompact(newTarget)
log.Debugf("Difficulty retarget at block height %d", lastNode.height+1)
log.Debugf("Old target %08x (%064x)", lastNode.bits, oldTarget)
log.Debugf("New target %08x (%064x)", newTargetBits, CompactToBig(newTargetBits))
log.Debugf("Actual timespan %v, adjusted timespan %v, target timespan %v",
time.Duration(actualTimespan)*time.Second,
time.Duration(adjustedTimespan)*time.Second,
b.chainParams.TargetTimespan)
return newTargetBits, nil
}
// CalcNextRequiredDifficulty calculates the required difficulty for the block
// after the end of the current best chain based on the difficulty retarget
// rules.
//
// This function is safe for concurrent access.
func (b *BlockChain) CalcNextRequiredDifficulty(timestamp time.Time) (uint32, error) {
b.chainLock.Lock()
difficulty, err := b.calcNextRequiredDifficulty(b.bestChain.Tip(), timestamp)
b.chainLock.Unlock()
return difficulty, err
}
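The heart of the retarget is the big-integer proportion newTarget = oldTarget * adjustedTimespan / targetTimespan. A standalone sketch with illustrative numbers (the compact target and timespans here are made up for demonstration; mainnet clamps the actual timespan to within a factor of four of the two-week target):

```go
package main

import (
	"fmt"
	"math/big"

	"github.com/btcsuite/btcd/blockchain"
)

func main() {
	const (
		targetTimespan = 14 * 24 * 60 * 60 // two weeks, in seconds
		actualTimespan = 10 * 24 * 60 * 60 // blocks came too fast
	)

	// Shrinking the target by the ratio 10/14 raises the difficulty.
	oldTarget := blockchain.CompactToBig(0x1b0404cb)
	newTarget := new(big.Int).Mul(oldTarget, big.NewInt(actualTimespan))
	newTarget.Div(newTarget, big.NewInt(targetTimespan))

	fmt.Printf("old %08x -> new %08x\n", uint32(0x1b0404cb),
		blockchain.BigToCompact(newTarget))
}
```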

81
vendor/github.com/btcsuite/btcd/blockchain/doc.go generated vendored Normal file
View File

@@ -0,0 +1,81 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
/*
Package blockchain implements bitcoin block handling and chain selection rules.
The bitcoin block handling and chain selection rules are an integral, and quite
likely the most important, part of bitcoin. Unfortunately, at the time of
this writing, these rules are also largely undocumented and had to be
ascertained from the bitcoind source code. At its core, bitcoin is a
distributed consensus of which blocks are valid and which ones will comprise the
main block chain (public ledger) that ultimately determines accepted
transactions, so it is extremely important that fully validating nodes agree on
all rules.
At a high level, this package provides support for inserting new blocks into
the block chain according to the aforementioned rules. It includes
functionality such as rejecting duplicate blocks, ensuring blocks and
transactions follow all rules, orphan handling, and best chain selection along
with reorganization.
Since this package does not deal with other bitcoin specifics such as network
communication or wallets, it provides a notification system which gives the
caller a high level of flexibility in how they want to react to certain events
such as orphan blocks which need their parents requested and newly connected
main chain blocks which might result in wallet updates.
Bitcoin Chain Processing Overview
Before a block is allowed into the block chain, it must go through an intensive
series of validation rules. The following list serves as a general outline of
those rules to provide some intuition into what is going on under the hood, but
is by no means exhaustive:
- Reject duplicate blocks
- Perform a series of sanity checks on the block and its transactions such as
verifying proof of work, timestamps, number and character of transactions,
transaction amounts, script complexity, and merkle root calculations
- Compare the block against predetermined checkpoints for expected timestamps
and difficulty based on elapsed time since the checkpoint
- Save the most recent orphan blocks for a limited time in case their parent
blocks become available
- Stop processing if the block is an orphan as the rest of the processing
depends on the block's position within the block chain
- Perform a series of more thorough checks that depend on the block's position
within the block chain such as verifying block difficulties adhere to
difficulty retarget rules, timestamps are after the median of the last
several blocks, all transactions are finalized, checkpoint blocks match, and
block versions are in line with the previous blocks
- Determine how the block fits into the chain and perform different actions
accordingly in order to ensure any side chains which have higher difficulty
than the main chain become the new main chain
- When a block is being connected to the main chain (either through
reorganization of a side chain to the main chain or just extending the
main chain), perform further checks on the block's transactions such as
verifying transaction duplicates, script complexity for the combination of
connected scripts, coinbase maturity, double spends, and connected
transaction values
- Run the transaction scripts to verify the spender is allowed to spend the
coins
- Insert the block into the block database
Errors
Errors returned by this package are either the raw errors provided by underlying
calls or of type blockchain.RuleError. This allows the caller to differentiate
between unexpected errors, such as database errors, versus errors due to rule
violations through type assertions. In addition, callers can programmatically
determine the specific rule violation by examining the ErrorCode field of the
type asserted blockchain.RuleError.
Bitcoin Improvement Proposals
This package includes spec changes outlined by the following BIPs:
BIP0016 (https://en.bitcoin.it/wiki/BIP_0016)
BIP0030 (https://en.bitcoin.it/wiki/BIP_0030)
BIP0034 (https://en.bitcoin.it/wiki/BIP_0034)
*/
package blockchain

298
vendor/github.com/btcsuite/btcd/blockchain/error.go generated vendored Normal file
View File

@@ -0,0 +1,298 @@
// Copyright (c) 2014-2016 The btcsuite developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package blockchain
import (
"fmt"
)
// DeploymentError identifies an error that indicates a deployment ID was
// specified that does not exist.
type DeploymentError uint32
// Error returns the deployment error as a human-readable string and satisfies
// the error interface.
func (e DeploymentError) Error() string {
return fmt.Sprintf("deployment ID %d does not exist", uint32(e))
}
// AssertError identifies an error that indicates an internal code consistency
// issue and should be treated as a critical and unrecoverable error.
type AssertError string
// Error returns the assertion error as a human-readable string and satisfies
// the error interface.
func (e AssertError) Error() string {
return "assertion failed: " + string(e)
}
// ErrorCode identifies a kind of error.
type ErrorCode int
// These constants are used to identify a specific RuleError.
const (
// ErrDuplicateBlock indicates a block with the same hash already
// exists.
ErrDuplicateBlock ErrorCode = iota
// ErrBlockTooBig indicates the serialized block size exceeds the
// maximum allowed size.
ErrBlockTooBig
// ErrBlockWeightTooHigh indicates that the block's computed weight
// metric exceeds the maximum allowed value.
ErrBlockWeightTooHigh
// ErrBlockVersionTooOld indicates the block version is too old and is
// no longer accepted since the majority of the network has upgraded
// to a newer version.
ErrBlockVersionTooOld
// ErrInvalidTime indicates the time in the passed block has a precision
// that is more than one second. The chain consensus rules require
// timestamps to have a maximum precision of one second.
ErrInvalidTime
// ErrTimeTooOld indicates the time is either before the median time of
// the last several blocks per the chain consensus rules or prior to the
// most recent checkpoint.
ErrTimeTooOld
// ErrTimeTooNew indicates the time is too far in the future as compared
// to the current time.
ErrTimeTooNew
// ErrDifficultyTooLow indicates the difficulty for the block is lower
// than the difficulty required by the most recent checkpoint.
ErrDifficultyTooLow
// ErrUnexpectedDifficulty indicates the specified bits do not align with
// the expected value either because they don't match the value calculated
// from the difficulty retarget rules or they are out of the valid
// range.
ErrUnexpectedDifficulty
// ErrHighHash indicates the block does not hash to a value which is
// lower than the required target difficulty.
ErrHighHash
// ErrBadMerkleRoot indicates the calculated merkle root does not match
// the expected value.
ErrBadMerkleRoot
// ErrBadCheckpoint indicates a block that is expected to be at a
// checkpoint height does not match the expected one.
ErrBadCheckpoint
// ErrForkTooOld indicates a block is attempting to fork the block chain
// before the most recent checkpoint.
ErrForkTooOld
// ErrCheckpointTimeTooOld indicates a block has a timestamp before the
// most recent checkpoint.
ErrCheckpointTimeTooOld
// ErrNoTransactions indicates the block does not have at least one
// transaction. A valid block must have at least the coinbase
// transaction.
ErrNoTransactions
// ErrNoTxInputs indicates a transaction does not have any inputs. A
// valid transaction must have at least one input.
ErrNoTxInputs
// ErrNoTxOutputs indicates a transaction does not have any outputs. A
// valid transaction must have at least one output.
ErrNoTxOutputs
// ErrTxTooBig indicates a transaction exceeds the maximum allowed size
// when serialized.
ErrTxTooBig
// ErrBadTxOutValue indicates an output value for a transaction is
// invalid in some way such as being out of range.
ErrBadTxOutValue
// ErrDuplicateTxInputs indicates a transaction references the same
// input more than once.
ErrDuplicateTxInputs
// ErrBadTxInput indicates a transaction input is invalid in some way
// such as referencing a previous transaction outpoint which is out of
// range or not referencing one at all.
ErrBadTxInput
// ErrMissingTxOut indicates a transaction output referenced by an input
// either does not exist or has already been spent.
ErrMissingTxOut
// ErrUnfinalizedTx indicates a transaction has not been finalized.
// A valid block may only contain finalized transactions.
ErrUnfinalizedTx
// ErrDuplicateTx indicates a block contains an identical transaction
// (or at least two transactions which hash to the same value). A
// valid block may only contain unique transactions.
ErrDuplicateTx
// ErrOverwriteTx indicates a block contains a transaction that has
// the same hash as a previous transaction which has not been fully
// spent.
ErrOverwriteTx
// ErrImmatureSpend indicates a transaction is attempting to spend a
// coinbase that has not yet reached the required maturity.
ErrImmatureSpend
// ErrSpendTooHigh indicates a transaction is attempting to spend more
// value than the sum of all of its inputs.
ErrSpendTooHigh
// ErrBadFees indicates the total fees for a block are invalid due to
// exceeding the maximum possible value.
ErrBadFees
// ErrTooManySigOps indicates the total number of signature operations
// for a transaction or block exceed the maximum allowed limits.
ErrTooManySigOps
// ErrFirstTxNotCoinbase indicates the first transaction in a block
// is not a coinbase transaction.
ErrFirstTxNotCoinbase
// ErrMultipleCoinbases indicates a block contains more than one
// coinbase transaction.
ErrMultipleCoinbases
// ErrBadCoinbaseScriptLen indicates the length of the signature script
// for a coinbase transaction is not within the valid range.
ErrBadCoinbaseScriptLen
// ErrBadCoinbaseValue indicates the amount of a coinbase value does
// not match the expected value of the subsidy plus the sum of all fees.
ErrBadCoinbaseValue
// ErrMissingCoinbaseHeight indicates the coinbase transaction for a
// block does not start with the serialized block height as
// required for version 2 and higher blocks.
ErrMissingCoinbaseHeight
// ErrBadCoinbaseHeight indicates the serialized block height in the
// coinbase transaction for version 2 and higher blocks does not match
// the expected value.
ErrBadCoinbaseHeight
// ErrScriptMalformed indicates a transaction script is malformed in
// some way. For example, it might be longer than the maximum allowed
// length or fail to parse.
ErrScriptMalformed
// ErrScriptValidation indicates that executing a transaction script
// failed. The error covers any failure when executing scripts such as
// signature verification failures and execution past the end of the
// stack.
ErrScriptValidation
// ErrUnexpectedWitness indicates that a block includes transactions
// with witness data, but doesn't also have a witness commitment within
// the coinbase transaction.
ErrUnexpectedWitness
// ErrInvalidWitnessCommitment indicates that a block's witness
// commitment is not well formed.
ErrInvalidWitnessCommitment
// ErrWitnessCommitmentMismatch indicates that the witness commitment
// included in the block's coinbase transaction doesn't match the
// manually computed witness commitment.
ErrWitnessCommitmentMismatch
// ErrPreviousBlockUnknown indicates that the previous block is not known.
ErrPreviousBlockUnknown
// ErrInvalidAncestorBlock indicates that an ancestor of this block has
// already failed validation.
ErrInvalidAncestorBlock
// ErrPrevBlockNotBest indicates that the block's previous block is not the
// current chain tip. This is not a block validation rule, but is required
// for block proposals submitted via getblocktemplate RPC.
ErrPrevBlockNotBest
)
// Map of ErrorCode values back to their constant names for pretty printing.
var errorCodeStrings = map[ErrorCode]string{
ErrDuplicateBlock: "ErrDuplicateBlock",
ErrBlockTooBig: "ErrBlockTooBig",
ErrBlockVersionTooOld: "ErrBlockVersionTooOld",
ErrBlockWeightTooHigh: "ErrBlockWeightTooHigh",
ErrInvalidTime: "ErrInvalidTime",
ErrTimeTooOld: "ErrTimeTooOld",
ErrTimeTooNew: "ErrTimeTooNew",
ErrDifficultyTooLow: "ErrDifficultyTooLow",
ErrUnexpectedDifficulty: "ErrUnexpectedDifficulty",
ErrHighHash: "ErrHighHash",
ErrBadMerkleRoot: "ErrBadMerkleRoot",
ErrBadCheckpoint: "ErrBadCheckpoint",
ErrForkTooOld: "ErrForkTooOld",
ErrCheckpointTimeTooOld: "ErrCheckpointTimeTooOld",
ErrNoTransactions: "ErrNoTransactions",
ErrNoTxInputs: "ErrNoTxInputs",
ErrNoTxOutputs: "ErrNoTxOutputs",
ErrTxTooBig: "ErrTxTooBig",
ErrBadTxOutValue: "ErrBadTxOutValue",
ErrDuplicateTxInputs: "ErrDuplicateTxInputs",
ErrBadTxInput: "ErrBadTxInput",
ErrMissingTxOut: "ErrMissingTxOut",
ErrUnfinalizedTx: "ErrUnfinalizedTx",
ErrDuplicateTx: "ErrDuplicateTx",
ErrOverwriteTx: "ErrOverwriteTx",
ErrImmatureSpend: "ErrImmatureSpend",
ErrSpendTooHigh: "ErrSpendTooHigh",
ErrBadFees: "ErrBadFees",
ErrTooManySigOps: "ErrTooManySigOps",
ErrFirstTxNotCoinbase: "ErrFirstTxNotCoinbase",
ErrMultipleCoinbases: "ErrMultipleCoinbases",
ErrBadCoinbaseScriptLen: "ErrBadCoinbaseScriptLen",
ErrBadCoinbaseValue: "ErrBadCoinbaseValue",
ErrMissingCoinbaseHeight: "ErrMissingCoinbaseHeight",
ErrBadCoinbaseHeight: "ErrBadCoinbaseHeight",
ErrScriptMalformed: "ErrScriptMalformed",
ErrScriptValidation: "ErrScriptValidation",
ErrUnexpectedWitness: "ErrUnexpectedWitness",
ErrInvalidWitnessCommitment: "ErrInvalidWitnessCommitment",
ErrWitnessCommitmentMismatch: "ErrWitnessCommitmentMismatch",
ErrPreviousBlockUnknown: "ErrPreviousBlockUnknown",
ErrInvalidAncestorBlock: "ErrInvalidAncestorBlock",
ErrPrevBlockNotBest: "ErrPrevBlockNotBest",
}
// String returns the ErrorCode as a human-readable name.
func (e ErrorCode) String() string {
if s := errorCodeStrings[e]; s != "" {
return s
}
return fmt.Sprintf("Unknown ErrorCode (%d)", int(e))
}
// RuleError identifies a rule violation. It is used to indicate that
// processing of a block or transaction failed due to one of the many validation
// rules. The caller can use type assertions to determine if a failure was
// specifically due to a rule violation and access the ErrorCode field to
// ascertain the specific reason for the rule violation.
type RuleError struct {
ErrorCode ErrorCode // Describes the kind of error
Description string // Human readable description of the issue
}
// Error satisfies the error interface and prints human-readable errors.
func (e RuleError) Error() string {
return e.Description
}
// ruleError creates a RuleError given a set of arguments.
func ruleError(c ErrorCode, desc string) RuleError {
return RuleError{ErrorCode: c, Description: desc}
}
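The intended caller-side pattern, per the package documentation above, is a type assertion that separates rule violations from unexpected failures; the handler function below is hypothetical.

```go
package main

import (
	"fmt"

	"github.com/btcsuite/btcd/blockchain"
)

// handleBlockError distinguishes consensus rule violations from unexpected
// errors such as database failures, using the ErrorCode to identify the rule.
func handleBlockError(err error) {
	if rErr, ok := err.(blockchain.RuleError); ok {
		switch rErr.ErrorCode {
		case blockchain.ErrDuplicateBlock:
			fmt.Println("harmless: block already known")
		case blockchain.ErrHighHash:
			fmt.Println("rejected: insufficient proof of work")
		default:
			fmt.Printf("rule violation %v: %v\n", rErr.ErrorCode, rErr)
		}
		return
	}
	fmt.Println("unexpected (non-rule) error:", err)
}

func main() {
	handleBlockError(blockchain.RuleError{
		ErrorCode:   blockchain.ErrDuplicateBlock,
		Description: "already have block",
	})
}
```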

30
vendor/github.com/btcsuite/btcd/blockchain/log.go generated vendored Normal file
View File

@@ -0,0 +1,30 @@
// Copyright (c) 2013-2016 The btcsuite developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package blockchain
import (
"github.com/btcsuite/btclog"
)
// log is a logger that is initialized with no output filters. This
// means the package will not perform any logging by default until the caller
// requests it.
var log btclog.Logger
// The default amount of logging is none.
func init() {
DisableLog()
}
// DisableLog disables all library log output. Logging output is disabled
// by default until UseLogger is called.
func DisableLog() {
log = btclog.Disabled
}
// UseLogger uses a specified Logger to output package logging info.
func UseLogger(logger btclog.Logger) {
log = logger
}
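Wiring the package logger up might look like the sketch below; it assumes the btclog backend API (NewBackend, Logger, SetLevel) that btcd itself uses, and the subsystem tag is arbitrary.

```go
package main

import (
	"os"

	"github.com/btcsuite/btcd/blockchain"
	"github.com/btcsuite/btclog"
)

func main() {
	// A backend writes formatted log output; it hands out per-subsystem
	// loggers that can be passed to UseLogger.
	backend := btclog.NewBackend(os.Stdout)
	chainLog := backend.Logger("CHAN") // arbitrary subsystem tag
	chainLog.SetLevel(btclog.LevelDebug)
	blockchain.UseLogger(chainLog)
}
```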

218
vendor/github.com/btcsuite/btcd/blockchain/mediantime.go generated vendored Normal file
View File

@@ -0,0 +1,218 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package blockchain
import (
"math"
"sort"
"sync"
"time"
)
const (
// maxAllowedOffsetSecs is the maximum number of seconds in either
// direction that the local clock will be adjusted. When the median time
// of the network is outside of this range, no offset will be applied.
maxAllowedOffsetSecs = 70 * 60 // 1 hour 10 minutes
// similarTimeSecs is the number of seconds in either direction from the
// local clock that is used to determine that it is likely wrong and
// hence to show a warning.
similarTimeSecs = 5 * 60 // 5 minutes
)
var (
// maxMedianTimeEntries is the maximum number of entries allowed in the
// median time data. This is a variable as opposed to a constant so the
// test code can modify it.
maxMedianTimeEntries = 200
)
// MedianTimeSource provides a mechanism to add several time samples which are
// used to determine a median time which is then used as an offset to the local
// clock.
type MedianTimeSource interface {
// AdjustedTime returns the current time adjusted by the median time
// offset as calculated from the time samples added by AddTimeSample.
AdjustedTime() time.Time
// AddTimeSample adds a time sample that is used when determining the
// median time of the added samples.
AddTimeSample(id string, timeVal time.Time)
// Offset returns the number of seconds to adjust the local clock based
// upon the median of the time samples added by AddTimeData.
Offset() time.Duration
}
// int64Sorter implements sort.Interface to allow a slice of 64-bit integers to
// be sorted.
type int64Sorter []int64
// Len returns the number of 64-bit integers in the slice. It is part of the
// sort.Interface implementation.
func (s int64Sorter) Len() int {
return len(s)
}
// Swap swaps the 64-bit integers at the passed indices. It is part of the
// sort.Interface implementation.
func (s int64Sorter) Swap(i, j int) {
s[i], s[j] = s[j], s[i]
}
// Less returns whether the 64-bit integer with index i should sort before the
// 64-bit integer with index j. It is part of the sort.Interface
// implementation.
func (s int64Sorter) Less(i, j int) bool {
return s[i] < s[j]
}
// medianTime provides an implementation of the MedianTimeSource interface.
// It is limited to maxMedianTimeEntries and includes the same buggy behavior as
// the time offset mechanism in Bitcoin Core. This is necessary because it is
// used in the consensus code.
type medianTime struct {
mtx sync.Mutex
knownIDs map[string]struct{}
offsets []int64
offsetSecs int64
invalidTimeChecked bool
}
// Ensure the medianTime type implements the MedianTimeSource interface.
var _ MedianTimeSource = (*medianTime)(nil)
// AdjustedTime returns the current time adjusted by the median time offset as
// calculated from the time samples added by AddTimeSample.
//
// This function is safe for concurrent access and is part of the
// MedianTimeSource interface implementation.
func (m *medianTime) AdjustedTime() time.Time {
m.mtx.Lock()
defer m.mtx.Unlock()
// Limit the adjusted time to 1 second precision.
now := time.Unix(time.Now().Unix(), 0)
return now.Add(time.Duration(m.offsetSecs) * time.Second)
}
// AddTimeSample adds a time sample that is used when determining the median
// time of the added samples.
//
// This function is safe for concurrent access and is part of the
// MedianTimeSource interface implementation.
func (m *medianTime) AddTimeSample(sourceID string, timeVal time.Time) {
m.mtx.Lock()
defer m.mtx.Unlock()
// Don't add time data from the same source.
if _, exists := m.knownIDs[sourceID]; exists {
return
}
m.knownIDs[sourceID] = struct{}{}
// Truncate the provided offset to seconds and append it to the slice
// of offsets while respecting the maximum number of allowed entries by
// replacing the oldest entry with the new entry once the maximum number
// of entries is reached.
now := time.Unix(time.Now().Unix(), 0)
offsetSecs := int64(timeVal.Sub(now).Seconds())
numOffsets := len(m.offsets)
if numOffsets == maxMedianTimeEntries && maxMedianTimeEntries > 0 {
m.offsets = m.offsets[1:]
numOffsets--
}
m.offsets = append(m.offsets, offsetSecs)
numOffsets++
// Sort the offsets so the median can be obtained as needed later.
sortedOffsets := make([]int64, numOffsets)
copy(sortedOffsets, m.offsets)
sort.Sort(int64Sorter(sortedOffsets))
offsetDuration := time.Duration(offsetSecs) * time.Second
log.Debugf("Added time sample of %v (total: %v)", offsetDuration,
numOffsets)
// NOTE: The following code intentionally has a bug to mirror the
// buggy behavior in Bitcoin Core since the median time is used in the
// consensus rules.
//
// In particular, the offset is only updated when the number of entries
// is odd, but the max number of entries is 200, an even number. Thus,
// the offset will never be updated again once the max number of entries
// is reached.
// The median offset is only updated when there are enough offsets and
// the number of offsets is odd so the middle value is the true median.
// Thus, there is nothing to do when those conditions are not met.
if numOffsets < 5 || numOffsets&0x01 != 1 {
return
}
// At this point the number of offsets in the list is odd, so the
// middle value of the sorted offsets is the median.
median := sortedOffsets[numOffsets/2]
// Set the new offset when the median offset is within the allowed
// offset range.
if math.Abs(float64(median)) < maxAllowedOffsetSecs {
m.offsetSecs = median
} else {
// The median offset of all added time data is larger than the
// maximum allowed offset, so don't use an offset. This
// effectively limits how far the local clock can be skewed.
m.offsetSecs = 0
if !m.invalidTimeChecked {
m.invalidTimeChecked = true
// Find if any time samples have a time that is close
// to the local time.
var remoteHasCloseTime bool
for _, offset := range sortedOffsets {
if math.Abs(float64(offset)) < similarTimeSecs {
remoteHasCloseTime = true
break
}
}
// Warn if none of the time samples are close.
if !remoteHasCloseTime {
log.Warnf("Please check your date and time " +
"are correct! btcd will not work " +
"properly with an invalid time")
}
}
}
medianDuration := time.Duration(m.offsetSecs) * time.Second
log.Debugf("New time offset: %v", medianDuration)
}
// Offset returns the number of seconds to adjust the local clock based upon the
// median of the time samples added by AddTimeData.
//
// This function is safe for concurrent access and is part of the
// MedianTimeSource interface implementation.
func (m *medianTime) Offset() time.Duration {
m.mtx.Lock()
defer m.mtx.Unlock()
return time.Duration(m.offsetSecs) * time.Second
}
// NewMedianTime returns a new instance of a concurrency-safe implementation of
// the MedianTimeSource interface. The returned implementation contains the
// rules necessary for proper time handling in the chain consensus rules and
// expects the time samples to be added from the timestamp field of the version
// message received from remote peers that successfully connect and negotiate.
func NewMedianTime() MedianTimeSource {
return &medianTime{
knownIDs: make(map[string]struct{}),
offsets: make([]int64, 0, maxMedianTimeEntries),
}
}
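A small usage sketch: with five samples (an odd count of at least five) the median offset takes effect, so five peers reporting clocks two minutes fast yield an offset of about two minutes. The peer IDs are fabricated.

```go
package main

import (
	"fmt"
	"time"

	"github.com/btcsuite/btcd/blockchain"
)

func main() {
	ts := blockchain.NewMedianTime()

	// In btcd these samples come from the version messages of remote
	// peers; here five fake peers all report clocks two minutes fast.
	for i := 0; i < 5; i++ {
		id := fmt.Sprintf("peer%d", i)
		ts.AddTimeSample(id, time.Now().Add(2*time.Minute))
	}

	fmt.Println("offset:", ts.Offset())
	fmt.Println("adjusted:", ts.AdjustedTime())
}
```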

265
vendor/github.com/btcsuite/btcd/blockchain/merkle.go generated vendored Normal file
View File

@@ -0,0 +1,265 @@
// Copyright (c) 2013-2016 The btcsuite developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package blockchain
import (
"bytes"
"fmt"
"math"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/btcsuite/btcd/txscript"
"github.com/btcsuite/btcutil"
)
const (
// CoinbaseWitnessDataLen is the required length of the only element within
// the coinbase's witness data if the coinbase transaction contains a
// witness commitment.
CoinbaseWitnessDataLen = 32
// CoinbaseWitnessPkScriptLength is the length of the public key script
// containing an OP_RETURN, the WitnessMagicBytes, and the witness
// commitment itself. An output must be at least this long in order to be a
// valid candidate for the output containing the witness commitment.
CoinbaseWitnessPkScriptLength = 38
)
var (
// WitnessMagicBytes is the prefix marker within the public key script
// of a coinbase output to indicate that this output holds the witness
// commitment for a block.
WitnessMagicBytes = []byte{
txscript.OP_RETURN,
txscript.OP_DATA_36,
0xaa,
0x21,
0xa9,
0xed,
}
)
// nextPowerOfTwo returns the next highest power of two from a given number if
// it is not already a power of two. This is a helper function used during the
// calculation of a merkle tree.
func nextPowerOfTwo(n int) int {
// Return the number if it's already a power of 2.
if n&(n-1) == 0 {
return n
}
// Figure out and return the next power of two.
exponent := uint(math.Log2(float64(n))) + 1
return 1 << exponent // 2^exponent
}
// HashMerkleBranches takes two hashes, treated as the left and right tree
// nodes, and returns the hash of their concatenation. This is a helper
// function used to aid in the generation of a merkle tree.
func HashMerkleBranches(left *chainhash.Hash, right *chainhash.Hash) *chainhash.Hash {
// Concatenate the left and right nodes.
var hash [chainhash.HashSize * 2]byte
copy(hash[:chainhash.HashSize], left[:])
copy(hash[chainhash.HashSize:], right[:])
newHash := chainhash.DoubleHashH(hash[:])
return &newHash
}
// BuildMerkleTreeStore creates a merkle tree from a slice of transactions,
// stores it using a linear array, and returns a slice of the backing array. A
// linear array was chosen as opposed to an actual tree structure since it uses
// about half as much memory. The following describes a merkle tree and how it
// is stored in a linear array.
//
// A merkle tree is a tree in which every non-leaf node is the hash of its
// children nodes. A diagram depicting how this works for bitcoin transactions
// where h(x) is a double sha256 follows:
//
// root = h1234 = h(h12 + h34)
// / \
// h12 = h(h1 + h2) h34 = h(h3 + h4)
// / \ / \
// h1 = h(tx1) h2 = h(tx2) h3 = h(tx3) h4 = h(tx4)
//
// The above stored as a linear array is as follows:
//
// [h1 h2 h3 h4 h12 h34 root]
//
// As the above shows, the merkle root is always the last element in the array.
//
// The number of inputs is not always a power of two, which would result in a
// perfectly balanced tree structure as above. In that case, parent nodes
// with no children are also empty and parent nodes with only a single left
// node are calculated by concatenating the left node with itself before
// hashing.
// Since this function uses nodes that are pointers to the hashes, empty nodes
// will be nil.
//
// The additional bool parameter indicates if we are generating the merkle tree
// using witness transaction ids rather than regular transaction ids. This
// also presents an additional case wherein the wtxid of the coinbase
// transaction is the zero hash.
func BuildMerkleTreeStore(transactions []*btcutil.Tx, witness bool) []*chainhash.Hash {
// Calculate how many entries are required to hold the binary merkle
// tree as a linear array and create an array of that size.
nextPoT := nextPowerOfTwo(len(transactions))
arraySize := nextPoT*2 - 1
merkles := make([]*chainhash.Hash, arraySize)
// Create the base transaction hashes and populate the array with them.
for i, tx := range transactions {
// If we're computing a witness merkle root, instead of the
// regular txid, we use the modified wtxid which includes a
// transaction's witness data within the digest. Additionally,
// the coinbase's wtxid is all zeroes.
switch {
case witness && i == 0:
var zeroHash chainhash.Hash
merkles[i] = &zeroHash
case witness:
wSha := tx.MsgTx().WitnessHash()
merkles[i] = &wSha
default:
merkles[i] = tx.Hash()
}
}
// Start the array offset after the last transaction and adjusted to the
// next power of two.
offset := nextPoT
for i := 0; i < arraySize-1; i += 2 {
switch {
// When there is no left child node, the parent is nil too.
case merkles[i] == nil:
merkles[offset] = nil
// When there is no right child, the parent is generated by
// hashing the concatenation of the left child with itself.
case merkles[i+1] == nil:
newHash := HashMerkleBranches(merkles[i], merkles[i])
merkles[offset] = newHash
// The normal case sets the parent node to the double sha256
// of the concatenation of the left and right children.
default:
newHash := HashMerkleBranches(merkles[i], merkles[i+1])
merkles[offset] = newHash
}
offset++
}
return merkles
}
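A standalone sketch of the store layout, using the wire and btcutil APIs as vendored here (wire.NewMsgTx taking a version); the three empty transactions are placeholders, so the resulting hashes are not meaningful, only the shape is.

```go
package main

import (
	"fmt"

	"github.com/btcsuite/btcd/blockchain"
	"github.com/btcsuite/btcd/wire"
	"github.com/btcsuite/btcutil"
)

func main() {
	txns := []*btcutil.Tx{
		btcutil.NewTx(wire.NewMsgTx(wire.TxVersion)),
		btcutil.NewTx(wire.NewMsgTx(wire.TxVersion)),
		btcutil.NewTx(wire.NewMsgTx(wire.TxVersion)),
	}

	// For 3 transactions the next power of two is 4, so the backing
	// array holds 4*2-1 = 7 entries and the root is the last element.
	merkles := blockchain.BuildMerkleTreeStore(txns, false)
	fmt.Println("entries:", len(merkles))
	fmt.Println("root:", merkles[len(merkles)-1])
}
```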
// ExtractWitnessCommitment attempts to locate and return the witness
// commitment for a block. The witness commitment is of the form:
// SHA256(witness root || witness nonce). The function additionally returns a
// boolean indicating if the witness root was located within any of the txouts
// in the passed transaction. The witness commitment is stored as the data
// push for an OP_RETURN with special magic bytes to aid in locating it.
func ExtractWitnessCommitment(tx *btcutil.Tx) ([]byte, bool) {
// The witness commitment *must* be located within one of the coinbase
// transaction's outputs.
if !IsCoinBase(tx) {
return nil, false
}
msgTx := tx.MsgTx()
for i := len(msgTx.TxOut) - 1; i >= 0; i-- {
// The public key script that contains the witness commitment
// must share a prefix with the WitnessMagicBytes and be at
// least 38 bytes long.
pkScript := msgTx.TxOut[i].PkScript
if len(pkScript) >= CoinbaseWitnessPkScriptLength &&
bytes.HasPrefix(pkScript, WitnessMagicBytes) {
// The witness commitment itself is a 32-byte hash
// directly after the WitnessMagicBytes. The remaining
// bytes beyond the 38th byte currently have no consensus
// meaning.
start := len(WitnessMagicBytes)
end := CoinbaseWitnessPkScriptLength
return msgTx.TxOut[i].PkScript[start:end], true
}
}
return nil, false
}
// ValidateWitnessCommitment validates the witness commitment (if any) found
// within the coinbase transaction of the passed block.
func ValidateWitnessCommitment(blk *btcutil.Block) error {
// If the block doesn't have any transactions at all, then we won't be
// able to extract a commitment from the non-existent coinbase
// transaction. So we exit early here.
if len(blk.Transactions()) == 0 {
str := "cannot validate witness commitment of block without " +
"transactions"
return ruleError(ErrNoTransactions, str)
}
coinbaseTx := blk.Transactions()[0]
if len(coinbaseTx.MsgTx().TxIn) == 0 {
return ruleError(ErrNoTxInputs, "transaction has no inputs")
}
witnessCommitment, witnessFound := ExtractWitnessCommitment(coinbaseTx)
// If we can't find a witness commitment in any of the coinbase's
// outputs, then the block MUST NOT contain any transactions with
// witness data.
if !witnessFound {
for _, tx := range blk.Transactions() {
msgTx := tx.MsgTx()
if msgTx.HasWitness() {
str := fmt.Sprintf("block contains transaction with witness" +
" data, yet no witness commitment present")
return ruleError(ErrUnexpectedWitness, str)
}
}
return nil
}
// At this point the block contains a witness commitment, so the
// coinbase transaction MUST have exactly one witness element within
// its witness data and that element must be exactly
// CoinbaseWitnessDataLen bytes.
coinbaseWitness := coinbaseTx.MsgTx().TxIn[0].Witness
if len(coinbaseWitness) != 1 {
str := fmt.Sprintf("the coinbase transaction has %d items in "+
"its witness stack when only one is allowed",
len(coinbaseWitness))
return ruleError(ErrInvalidWitnessCommitment, str)
}
witnessNonce := coinbaseWitness[0]
if len(witnessNonce) != CoinbaseWitnessDataLen {
str := fmt.Sprintf("the coinbase transaction witness nonce "+
"has %d bytes when it must be %d bytes",
len(witnessNonce), CoinbaseWitnessDataLen)
return ruleError(ErrInvalidWitnessCommitment, str)
}
// Finally, with the preliminary checks out of the way, we can check if
// the extracted witnessCommitment is equal to:
// SHA256(witnessMerkleRoot || witnessNonce). Where witnessNonce is the
// coinbase transaction's only witness item.
witnessMerkleTree := BuildMerkleTreeStore(blk.Transactions(), true)
witnessMerkleRoot := witnessMerkleTree[len(witnessMerkleTree)-1]
var witnessPreimage [chainhash.HashSize * 2]byte
copy(witnessPreimage[:], witnessMerkleRoot[:])
copy(witnessPreimage[chainhash.HashSize:], witnessNonce)
computedCommitment := chainhash.DoubleHashB(witnessPreimage[:])
if !bytes.Equal(computedCommitment, witnessCommitment) {
str := fmt.Sprintf("witness commitment does not match: "+
"computed %v, coinbase includes %v", computedCommitment,
witnessCommitment)
return ruleError(ErrWitnessCommitmentMismatch, str)
}
return nil
}

81
vendor/github.com/btcsuite/btcd/blockchain/notifications.go generated vendored Normal file
View File

@@ -0,0 +1,81 @@
// Copyright (c) 2013-2016 The btcsuite developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package blockchain
import (
"fmt"
)
// NotificationType represents the type of a notification message.
type NotificationType int
// NotificationCallback is used for a caller to provide a callback for
// notifications about various chain events.
type NotificationCallback func(*Notification)
// Constants for the type of a notification message.
const (
// NTBlockAccepted indicates the associated block was accepted into
// the block chain. Note that this does not necessarily mean it was
// added to the main chain. For that, use NTBlockConnected.
NTBlockAccepted NotificationType = iota
// NTBlockConnected indicates the associated block was connected to the
// main chain.
NTBlockConnected
// NTBlockDisconnected indicates the associated block was disconnected
// from the main chain.
NTBlockDisconnected
)
// notificationTypeStrings is a map of notification types back to their constant
// names for pretty printing.
var notificationTypeStrings = map[NotificationType]string{
NTBlockAccepted: "NTBlockAccepted",
NTBlockConnected: "NTBlockConnected",
NTBlockDisconnected: "NTBlockDisconnected",
}
// String returns the NotificationType in human-readable form.
func (n NotificationType) String() string {
if s, ok := notificationTypeStrings[n]; ok {
return s
}
return fmt.Sprintf("Unknown Notification Type (%d)", int(n))
}
// Notification defines a notification that is sent to the caller via the callback
// function provided during the call to New and consists of a notification type
// as well as associated data that depends on the type as follows:
// - NTBlockAccepted: *btcutil.Block
// - NTBlockConnected: *btcutil.Block
// - NTBlockDisconnected: *btcutil.Block
type Notification struct {
Type NotificationType
Data interface{}
}
// Subscribe to block chain notifications. Registers a callback to be executed
// when various events take place. See the documentation on Notification and
// NotificationType for details on the types and contents of notifications.
func (b *BlockChain) Subscribe(callback NotificationCallback) {
b.notificationsLock.Lock()
b.notifications = append(b.notifications, callback)
b.notificationsLock.Unlock()
}
// sendNotification sends a notification with the passed type and data if the
// caller requested notifications by providing a callback function in the call
// to New.
func (b *BlockChain) sendNotification(typ NotificationType, data interface{}) {
// Generate and send the notification.
n := Notification{Type: typ, Data: data}
b.notificationsLock.RLock()
for _, callback := range b.notifications {
callback(&n)
}
b.notificationsLock.RUnlock()
}
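A sketch of a subscriber: the callback is what would be passed to (*BlockChain).Subscribe on an initialized chain instance, and here it is simply invoked directly with a fabricated notification so the snippet runs on its own.

```go
package main

import (
	"fmt"

	"github.com/btcsuite/btcd/blockchain"
	"github.com/btcsuite/btcutil"
)

// onChainNotification reacts to chain events per the Data contract above:
// all three notification types carry a *btcutil.Block.
func onChainNotification(n *blockchain.Notification) {
	switch n.Type {
	case blockchain.NTBlockConnected:
		if block, ok := n.Data.(*btcutil.Block); ok {
			fmt.Println("connected block:", block.Hash())
		}
	case blockchain.NTBlockDisconnected:
		fmt.Println("block disconnected from the main chain")
	case blockchain.NTBlockAccepted:
		fmt.Println("block accepted (not necessarily main chain)")
	}
}

func main() {
	// Normally: chain.Subscribe(onChainNotification) after blockchain.New.
	onChainNotification(&blockchain.Notification{Type: blockchain.NTBlockAccepted})
}
```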

244
vendor/github.com/btcsuite/btcd/blockchain/process.go generated vendored Normal file
View File

@@ -0,0 +1,244 @@
// Copyright (c) 2013-2017 The btcsuite developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package blockchain
import (
"fmt"
"time"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/btcsuite/btcd/database"
"github.com/btcsuite/btcutil"
)
// BehaviorFlags is a bitmask defining tweaks to the normal behavior when
// performing chain processing and consensus rules checks.
type BehaviorFlags uint32
const (
// BFFastAdd may be set to indicate that several checks can be avoided
// for the block since it is already known to fit into the chain due to
// already having proven it correctly links into the chain up to a known
// checkpoint. This is primarily used for headers-first mode.
BFFastAdd BehaviorFlags = 1 << iota
// BFNoPoWCheck may be set to indicate the proof of work check which
// ensures a block hashes to a value less than the required target will
// not be performed.
BFNoPoWCheck
// BFNone is a convenience value to specifically indicate no flags.
BFNone BehaviorFlags = 0
)
// blockExists determines whether a block with the given hash exists either in
// the main chain or any side chains.
//
// This function is safe for concurrent access.
func (b *BlockChain) blockExists(hash *chainhash.Hash) (bool, error) {
// Check block index first (could be main chain or side chain blocks).
if b.index.HaveBlock(hash) {
return true, nil
}
// Check in the database.
var exists bool
err := b.db.View(func(dbTx database.Tx) error {
var err error
exists, err = dbTx.HasBlock(hash)
if err != nil || !exists {
return err
}
// Ignore side chain blocks in the database. This is necessary
// because there is not currently any record of the associated
// block index data such as its block height, so it's not yet
// possible to efficiently load the block and do anything useful
// with it.
//
// Ultimately the entire block index should be serialized
// instead of only the current main chain so it can be consulted
// directly.
_, err = dbFetchHeightByHash(dbTx, hash)
if isNotInMainChainErr(err) {
exists = false
return nil
}
return err
})
return exists, err
}
// processOrphans determines if there are any orphans which depend on the passed
// block hash (they are no longer orphans if true) and potentially accepts them.
// It repeats the process for the newly accepted blocks (to detect further
// orphans which may no longer be orphans) until there are no more.
//
// The flags do not modify the behavior of this function directly, however they
// are needed to pass along to maybeAcceptBlock.
//
// This function MUST be called with the chain state lock held (for writes).
func (b *BlockChain) processOrphans(hash *chainhash.Hash, flags BehaviorFlags) error {
// Start with processing at least the passed hash. Leave a little room
// for additional orphan blocks that need to be processed without
// needing to grow the array in the common case.
processHashes := make([]*chainhash.Hash, 0, 10)
processHashes = append(processHashes, hash)
for len(processHashes) > 0 {
// Pop the first hash to process from the slice.
processHash := processHashes[0]
processHashes[0] = nil // Prevent GC leak.
processHashes = processHashes[1:]
// Look up all orphans that are parented by the block we just
// accepted. This will typically only be one, but it could
// be multiple if multiple blocks are mined and broadcast
// around the same time. The one with the most proof of work
// will eventually win out. An indexing for loop is
// intentionally used over a range here as range does not
// reevaluate the slice on each iteration nor does it adjust the
// index for the modified slice.
for i := 0; i < len(b.prevOrphans[*processHash]); i++ {
orphan := b.prevOrphans[*processHash][i]
if orphan == nil {
log.Warnf("Found a nil entry at index %d in the "+
"orphan dependency list for block %v", i,
processHash)
continue
}
// Remove the orphan from the orphan pool.
orphanHash := orphan.block.Hash()
b.removeOrphanBlock(orphan)
i--
// Potentially accept the block into the block chain.
_, err := b.maybeAcceptBlock(orphan.block, flags)
if err != nil {
return err
}
// Add this block to the list of blocks to process so
// any orphan blocks that depend on this block are
// handled too.
processHashes = append(processHashes, orphanHash)
}
}
return nil
}
// ProcessBlock is the main workhorse for handling insertion of new blocks into
// the block chain. It includes functionality such as rejecting duplicate
// blocks, ensuring blocks follow all rules, orphan handling, and insertion into
// the block chain along with best chain selection and reorganization.
//
// When no errors occur during processing, the first return value indicates
// whether or not the block is on the main chain and the second indicates
// whether or not the block is an orphan.
//
// This function is safe for concurrent access.
func (b *BlockChain) ProcessBlock(block *btcutil.Block, flags BehaviorFlags) (bool, bool, error) {
b.chainLock.Lock()
defer b.chainLock.Unlock()
fastAdd := flags&BFFastAdd == BFFastAdd
blockHash := block.Hash()
log.Tracef("Processing block %v", blockHash)
// The block must not already exist in the main chain or side chains.
exists, err := b.blockExists(blockHash)
if err != nil {
return false, false, err
}
if exists {
str := fmt.Sprintf("already have block %v", blockHash)
return false, false, ruleError(ErrDuplicateBlock, str)
}
// The block must not already exist as an orphan.
if _, exists := b.orphans[*blockHash]; exists {
str := fmt.Sprintf("already have block (orphan) %v", blockHash)
return false, false, ruleError(ErrDuplicateBlock, str)
}
// Perform preliminary sanity checks on the block and its transactions.
err = checkBlockSanity(block, b.chainParams.PowLimit, b.timeSource, flags)
if err != nil {
return false, false, err
}
// Find the previous checkpoint and perform some additional checks based
// on the checkpoint. This provides a few nice properties such as
// preventing old side chain blocks before the last checkpoint,
// rejecting easy to mine, but otherwise bogus, blocks that could be
// used to eat memory, and ensuring expected (versus claimed) proof of
// work requirements since the previous checkpoint are met.
blockHeader := &block.MsgBlock().Header
checkpointNode, err := b.findPreviousCheckpoint()
if err != nil {
return false, false, err
}
if checkpointNode != nil {
// Ensure the block timestamp is after the checkpoint timestamp.
checkpointTime := time.Unix(checkpointNode.timestamp, 0)
if blockHeader.Timestamp.Before(checkpointTime) {
str := fmt.Sprintf("block %v has timestamp %v before "+
"last checkpoint timestamp %v", blockHash,
blockHeader.Timestamp, checkpointTime)
return false, false, ruleError(ErrCheckpointTimeTooOld, str)
}
if !fastAdd {
// Even though the checks prior to now have already ensured the
// proof of work exceeds the claimed amount, the claimed amount
// is a field in the block header which could be forged. This
// check ensures the proof of work is at least the minimum
// expected based on elapsed time since the last checkpoint and
// maximum adjustment allowed by the retarget rules.
duration := blockHeader.Timestamp.Sub(checkpointTime)
requiredTarget := CompactToBig(b.calcEasiestDifficulty(
checkpointNode.bits, duration))
currentTarget := CompactToBig(blockHeader.Bits)
if currentTarget.Cmp(requiredTarget) > 0 {
str := fmt.Sprintf("block target difficulty of %064x "+
"is too low when compared to the previous "+
"checkpoint", currentTarget)
return false, false, ruleError(ErrDifficultyTooLow, str)
}
}
}
// Handle orphan blocks.
prevHash := &blockHeader.PrevBlock
prevHashExists, err := b.blockExists(prevHash)
if err != nil {
return false, false, err
}
if !prevHashExists {
log.Infof("Adding orphan block %v with parent %v", blockHash, prevHash)
b.addOrphanBlock(block)
return false, true, nil
}
// The block has passed all context independent checks and appears sane
// enough to potentially accept it into the block chain.
isMainChain, err := b.maybeAcceptBlock(block, flags)
if err != nil {
return false, false, err
}
// Accept any orphan blocks that depend on this block (they are
// no longer orphans) and repeat for those accepted blocks until
// there are no more.
err = b.processOrphans(blockHash, flags)
if err != nil {
return false, false, err
}
log.Debugf("Accepted block %v", blockHash)
return isMainChain, false, nil
}
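// exampleProcessBlockUse is a hypothetical sketch (not part of the upstream
// source) showing how a caller typically interprets ProcessBlock's results;
// it assumes an initialized *BlockChain and a decoded *btcutil.Block.
func exampleProcessBlockUse(chain *BlockChain, block *btcutil.Block) error {
	isMainChain, isOrphan, err := chain.ProcessBlock(block, BFNone)
	if err != nil {
		// The block was rejected outright (duplicate, rule violation, ...).
		return err
	}
	switch {
	case isOrphan:
		// The parent is unknown, so the block waits in the orphan pool
		// until its parent arrives and processOrphans links it in.
	case isMainChain:
		// The block extended (or reorganized) the main chain.
	default:
		// The block was accepted onto a side chain.
	}
	return nil
}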

319
vendor/github.com/btcsuite/btcd/blockchain/scriptval.go generated vendored Normal file

@@ -0,0 +1,319 @@
// Copyright (c) 2013-2016 The btcsuite developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package blockchain
import (
"fmt"
"math"
"runtime"
"time"
"github.com/btcsuite/btcd/txscript"
"github.com/btcsuite/btcd/wire"
"github.com/btcsuite/btcutil"
)
// txValidateItem holds a transaction along with which input to validate.
type txValidateItem struct {
txInIndex int
txIn *wire.TxIn
tx *btcutil.Tx
sigHashes *txscript.TxSigHashes
}
// txValidator provides a type which asynchronously validates transaction
// inputs. It provides several channels for communication and a processing
// function that is intended to be run in multiple goroutines.
type txValidator struct {
validateChan chan *txValidateItem
quitChan chan struct{}
resultChan chan error
utxoView *UtxoViewpoint
flags txscript.ScriptFlags
sigCache *txscript.SigCache
hashCache *txscript.HashCache
}
// sendResult sends the result of a script pair validation on the internal
// result channel while respecting the quit channel. This allows orderly
// shutdown when the validation process is aborted early due to a validation
// error in one of the other goroutines.
func (v *txValidator) sendResult(result error) {
select {
case v.resultChan <- result:
case <-v.quitChan:
}
}
// validateHandler consumes items to validate from the internal validate channel
// and returns the result of the validation on the internal result channel. It
// must be run as a goroutine.
func (v *txValidator) validateHandler() {
out:
for {
select {
case txVI := <-v.validateChan:
// Ensure the referenced input utxo is available.
txIn := txVI.txIn
utxo := v.utxoView.LookupEntry(txIn.PreviousOutPoint)
if utxo == nil {
str := fmt.Sprintf("unable to find unspent "+
"output %v referenced from "+
"transaction %s:%d",
txIn.PreviousOutPoint, txVI.tx.Hash(),
txVI.txInIndex)
err := ruleError(ErrMissingTxOut, str)
v.sendResult(err)
break out
}
// Create a new script engine for the script pair.
sigScript := txIn.SignatureScript
witness := txIn.Witness
pkScript := utxo.PkScript()
inputAmount := utxo.Amount()
vm, err := txscript.NewEngine(pkScript, txVI.tx.MsgTx(),
txVI.txInIndex, v.flags, v.sigCache, txVI.sigHashes,
inputAmount)
if err != nil {
str := fmt.Sprintf("failed to parse input "+
"%s:%d which references output %v - "+
"%v (input witness %x, input script "+
"bytes %x, prev output script bytes %x)",
txVI.tx.Hash(), txVI.txInIndex,
txIn.PreviousOutPoint, err, witness,
sigScript, pkScript)
err := ruleError(ErrScriptMalformed, str)
v.sendResult(err)
break out
}
// Execute the script pair.
if err := vm.Execute(); err != nil {
str := fmt.Sprintf("failed to validate input "+
"%s:%d which references output %v - "+
"%v (input witness %x, input script "+
"bytes %x, prev output script bytes %x)",
txVI.tx.Hash(), txVI.txInIndex,
txIn.PreviousOutPoint, err, witness,
sigScript, pkScript)
err := ruleError(ErrScriptValidation, str)
v.sendResult(err)
break out
}
// Validation succeeded.
v.sendResult(nil)
case <-v.quitChan:
break out
}
}
}
// Validate validates the scripts for all of the passed transaction inputs using
// multiple goroutines.
func (v *txValidator) Validate(items []*txValidateItem) error {
if len(items) == 0 {
return nil
}
// Limit the number of goroutines to do script validation based on the
// number of processor cores. This helps ensure the system stays
// reasonably responsive under heavy load.
maxGoRoutines := runtime.NumCPU() * 3
if maxGoRoutines <= 0 {
maxGoRoutines = 1
}
if maxGoRoutines > len(items) {
maxGoRoutines = len(items)
}
// Start up validation handlers that are used to asynchronously
// validate each transaction input.
for i := 0; i < maxGoRoutines; i++ {
go v.validateHandler()
}
// Validate each of the inputs. The quit channel is closed when any
// errors occur so all processing goroutines exit regardless of which
// input had the validation error.
numInputs := len(items)
currentItem := 0
processedItems := 0
for processedItems < numInputs {
// Only send items while there are still items that need to
// be processed. The select statement will never select a nil
// channel.
var validateChan chan *txValidateItem
var item *txValidateItem
if currentItem < numInputs {
validateChan = v.validateChan
item = items[currentItem]
}
select {
case validateChan <- item:
currentItem++
case err := <-v.resultChan:
processedItems++
if err != nil {
close(v.quitChan)
return err
}
}
}
close(v.quitChan)
return nil
}
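// exampleNilChannelGate is a hypothetical sketch (not part of the upstream
// source) isolating the idiom Validate relies on above: a send on a nil
// channel blocks forever, so leaving the send channel nil once every item has
// been queued disables that select case and lets the loop drain results only.
// Error handling is simplified compared to Validate, which also closes a quit
// channel to stop the workers.
func exampleNilChannelGate(items []int, work chan<- int, results <-chan error) error {
	currentItem, processedItems := 0, 0
	for processedItems < len(items) {
		var sendChan chan<- int
		var item int
		if currentItem < len(items) {
			sendChan = work // Non-nil, so the send case is eligible.
			item = items[currentItem]
		}
		select {
		case sendChan <- item: // Never selected while sendChan is nil.
			currentItem++
		case err := <-results:
			processedItems++
			if err != nil {
				return err
			}
		}
	}
	return nil
}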
// newTxValidator returns a new instance of txValidator to be used for
// validating transaction scripts asynchronously.
func newTxValidator(utxoView *UtxoViewpoint, flags txscript.ScriptFlags,
sigCache *txscript.SigCache, hashCache *txscript.HashCache) *txValidator {
return &txValidator{
validateChan: make(chan *txValidateItem),
quitChan: make(chan struct{}),
resultChan: make(chan error),
utxoView: utxoView,
sigCache: sigCache,
hashCache: hashCache,
flags: flags,
}
}
// ValidateTransactionScripts validates the scripts for the passed transaction
// using multiple goroutines.
func ValidateTransactionScripts(tx *btcutil.Tx, utxoView *UtxoViewpoint,
flags txscript.ScriptFlags, sigCache *txscript.SigCache,
hashCache *txscript.HashCache) error {
// First determine if segwit is active according to the scriptFlags. If
// it isn't then we don't need to interact with the HashCache.
segwitActive := flags&txscript.ScriptVerifyWitness == txscript.ScriptVerifyWitness
// If the hashcache doesn't yet have the sighash midstates for this
// transaction, then we'll compute them now so they can be re-used
// amongst all worker validation goroutines.
if segwitActive && tx.MsgTx().HasWitness() &&
!hashCache.ContainsHashes(tx.Hash()) {
hashCache.AddSigHashes(tx.MsgTx())
}
var cachedHashes *txscript.TxSigHashes
if segwitActive && tx.MsgTx().HasWitness() {
// The same pointer to the transaction's sighash midstate will
// be re-used amongst all validation goroutines. By
// pre-computing the sighash here instead of during validation,
// we ensure the sighashes are only computed once.
cachedHashes, _ = hashCache.GetSigHashes(tx.Hash())
}
// Collect all of the transaction inputs and required information for
// validation.
txIns := tx.MsgTx().TxIn
txValItems := make([]*txValidateItem, 0, len(txIns))
for txInIdx, txIn := range txIns {
// Skip coinbases.
if txIn.PreviousOutPoint.Index == math.MaxUint32 {
continue
}
txVI := &txValidateItem{
txInIndex: txInIdx,
txIn: txIn,
tx: tx,
sigHashes: cachedHashes,
}
txValItems = append(txValItems, txVI)
}
// Validate all of the inputs.
validator := newTxValidator(utxoView, flags, sigCache, hashCache)
return validator.Validate(txValItems)
}
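// exampleValidateTx is a hypothetical sketch (not part of the upstream
// source) of a typical call site. The flag combination and cache sizes are
// illustrative assumptions; real callers derive the script flags from the
// current chain state.
func exampleValidateTx(tx *btcutil.Tx, view *UtxoViewpoint) error {
	flags := txscript.ScriptBip16 | txscript.ScriptVerifyWitness
	sigCache := txscript.NewSigCache(1000)   // Caches signature verifications.
	hashCache := txscript.NewHashCache(1000) // Caches BIP0143 sighash midstates.
	return ValidateTransactionScripts(tx, view, flags, sigCache, hashCache)
}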
// checkBlockScripts executes and validates the scripts for all transactions in
// the passed block using multiple goroutines.
func checkBlockScripts(block *btcutil.Block, utxoView *UtxoViewpoint,
scriptFlags txscript.ScriptFlags, sigCache *txscript.SigCache,
hashCache *txscript.HashCache) error {
// First determine if segwit is active according to the scriptFlags. If
// it isn't then we don't need to interact with the HashCache.
segwitActive := scriptFlags&txscript.ScriptVerifyWitness == txscript.ScriptVerifyWitness
// Collect all of the transaction inputs and required information for
// validation for all transactions in the block into a single slice.
numInputs := 0
for _, tx := range block.Transactions() {
numInputs += len(tx.MsgTx().TxIn)
}
txValItems := make([]*txValidateItem, 0, numInputs)
for _, tx := range block.Transactions() {
hash := tx.Hash()
// If the HashCache is present, and it doesn't yet contain the
// partial sighashes for this transaction, then we add the
// sighashes for the transaction. This allows us to take
// advantage of the potential speed savings due to the new
// digest algorithm (BIP0143).
if segwitActive && tx.HasWitness() && hashCache != nil &&
!hashCache.ContainsHashes(hash) {
hashCache.AddSigHashes(tx.MsgTx())
}
var cachedHashes *txscript.TxSigHashes
if segwitActive && tx.HasWitness() {
if hashCache != nil {
cachedHashes, _ = hashCache.GetSigHashes(hash)
} else {
cachedHashes = txscript.NewTxSigHashes(tx.MsgTx())
}
}
for txInIdx, txIn := range tx.MsgTx().TxIn {
// Skip coinbases.
if txIn.PreviousOutPoint.Index == math.MaxUint32 {
continue
}
txVI := &txValidateItem{
txInIndex: txInIdx,
txIn: txIn,
tx: tx,
sigHashes: cachedHashes,
}
txValItems = append(txValItems, txVI)
}
}
// Validate all of the inputs.
validator := newTxValidator(utxoView, scriptFlags, sigCache, hashCache)
start := time.Now()
if err := validator.Validate(txValItems); err != nil {
return err
}
elapsed := time.Since(start)
log.Tracef("block %v took %v to verify", block.Hash(), elapsed)
// If the HashCache is present, once we have validated the block, we no
// longer need the cached hashes for these transactions, so we purge
// them from the cache.
if segwitActive && hashCache != nil {
for _, tx := range block.Transactions() {
if tx.MsgTx().HasWitness() {
hashCache.PurgeSigHashes(tx.Hash())
}
}
}
return nil
}

356
vendor/github.com/btcsuite/btcd/blockchain/thresholdstate.go generated vendored Normal file

@@ -0,0 +1,356 @@
// Copyright (c) 2016-2017 The btcsuite developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package blockchain
import (
"fmt"
"github.com/btcsuite/btcd/chaincfg/chainhash"
)
// ThresholdState defines the various threshold states used when voting on
// consensus changes.
type ThresholdState byte
// These constants are used to identify specific threshold states.
const (
// ThresholdDefined is the first state for each deployment and is the
// state the genesis block has by definition for all deployments.
ThresholdDefined ThresholdState = iota
// ThresholdStarted is the state for a deployment once its start time
// has been reached.
ThresholdStarted
// ThresholdLockedIn is the state for a deployment during the retarget
// period which is after the ThresholdStarted state period and the
// number of blocks that have voted for the deployment equal or exceed
// the required number of votes for the deployment.
ThresholdLockedIn
// ThresholdActive is the state for a deployment for all blocks after a
// retarget period in which the deployment was in the ThresholdLockedIn
// state.
ThresholdActive
// ThresholdFailed is the state for a deployment once its expiration
// time has been reached and it did not reach the ThresholdLockedIn
// state.
ThresholdFailed
// numThresholdsStates is the maximum number of threshold states used in
// tests.
numThresholdsStates
)
// thresholdStateStrings is a map of ThresholdState values back to their
// constant names for pretty printing.
var thresholdStateStrings = map[ThresholdState]string{
ThresholdDefined: "ThresholdDefined",
ThresholdStarted: "ThresholdStarted",
ThresholdLockedIn: "ThresholdLockedIn",
ThresholdActive: "ThresholdActive",
ThresholdFailed: "ThresholdFailed",
}
// String returns the ThresholdState as a human-readable name.
func (t ThresholdState) String() string {
if s := thresholdStateStrings[t]; s != "" {
return s
}
return fmt.Sprintf("Unknown ThresholdState (%d)", int(t))
}
// thresholdConditionChecker provides a generic interface that is invoked to
// determine when a consensus rule change threshold should be changed.
type thresholdConditionChecker interface {
// BeginTime returns the unix timestamp for the median block time after
// which voting on a rule change starts (at the next window).
BeginTime() uint64
// EndTime returns the unix timestamp for the median block time after
// which an attempted rule change fails if it has not already been
// locked in or activated.
EndTime() uint64
// RuleChangeActivationThreshold is the number of blocks for which the
// condition must be true in order to lock in a rule change.
RuleChangeActivationThreshold() uint32
// MinerConfirmationWindow is the number of blocks in each threshold
// state retarget window.
MinerConfirmationWindow() uint32
// Condition returns whether or not the rule change activation condition
// has been met. This typically involves checking whether or not the
// bit associated with the condition is set, but can be more complex as
// needed.
Condition(*blockNode) (bool, error)
}
// thresholdStateCache provides a type to cache the threshold states of each
// threshold window for a set of IDs.
type thresholdStateCache struct {
entries map[chainhash.Hash]ThresholdState
}
// Lookup returns the threshold state associated with the given hash along with
// a boolean that indicates whether or not it is valid.
func (c *thresholdStateCache) Lookup(hash *chainhash.Hash) (ThresholdState, bool) {
state, ok := c.entries[*hash]
return state, ok
}
// Update updates the cache to contain the provided hash to threshold state
// mapping.
func (c *thresholdStateCache) Update(hash *chainhash.Hash, state ThresholdState) {
c.entries[*hash] = state
}
// newThresholdCaches returns a new array of caches to be used when calculating
// threshold states.
func newThresholdCaches(numCaches uint32) []thresholdStateCache {
caches := make([]thresholdStateCache, numCaches)
for i := 0; i < len(caches); i++ {
caches[i] = thresholdStateCache{
entries: make(map[chainhash.Hash]ThresholdState),
}
}
return caches
}
// thresholdState returns the current rule change threshold state for the block
// AFTER the given node and deployment ID. The cache is used to ensure the
// threshold states for previous windows are only calculated once.
//
// This function MUST be called with the chain state lock held (for writes).
func (b *BlockChain) thresholdState(prevNode *blockNode, checker thresholdConditionChecker, cache *thresholdStateCache) (ThresholdState, error) {
// The threshold state for the window that contains the genesis block
// is ThresholdDefined by definition.
confirmationWindow := int32(checker.MinerConfirmationWindow())
if prevNode == nil || (prevNode.height+1) < confirmationWindow {
return ThresholdDefined, nil
}
// Get the ancestor that is the last block of the previous confirmation
// window in order to get its threshold state. This can be done because
// the state is the same for all blocks within a given window.
prevNode = prevNode.Ancestor(prevNode.height -
(prevNode.height+1)%confirmationWindow)
// Iterate backwards through each of the previous confirmation windows
// to find the most recently cached threshold state.
var neededStates []*blockNode
for prevNode != nil {
// Nothing more to do if the state of the block is already
// cached.
if _, ok := cache.Lookup(&prevNode.hash); ok {
break
}
// The start and expiration times are based on the median block
// time, so calculate it now.
medianTime := prevNode.CalcPastMedianTime()
// The state is simply defined if the start time hasn't
// been reached yet.
if uint64(medianTime.Unix()) < checker.BeginTime() {
cache.Update(&prevNode.hash, ThresholdDefined)
break
}
// Add this node to the list of nodes that need the state
// calculated and cached.
neededStates = append(neededStates, prevNode)
// Get the ancestor that is the last block of the previous
// confirmation window.
prevNode = prevNode.RelativeAncestor(confirmationWindow)
}
// Start with the threshold state for the most recent confirmation
// window that has a cached state.
state := ThresholdDefined
if prevNode != nil {
var ok bool
state, ok = cache.Lookup(&prevNode.hash)
if !ok {
return ThresholdFailed, AssertError(fmt.Sprintf(
"thresholdState: cache lookup failed for %v",
prevNode.hash))
}
}
// Since each threshold state depends on the state of the previous
// window, iterate starting from the oldest unknown window.
for neededNum := len(neededStates) - 1; neededNum >= 0; neededNum-- {
prevNode := neededStates[neededNum]
switch state {
case ThresholdDefined:
// The deployment of the rule change fails if it expires
// before it is accepted and locked in.
medianTime := prevNode.CalcPastMedianTime()
medianTimeUnix := uint64(medianTime.Unix())
if medianTimeUnix >= checker.EndTime() {
state = ThresholdFailed
break
}
// The state for the rule moves to the started state
// once its start time has been reached (and it hasn't
// already expired per the above).
if medianTimeUnix >= checker.BeginTime() {
state = ThresholdStarted
}
case ThresholdStarted:
// The deployment of the rule change fails if it expires
// before it is accepted and locked in.
medianTime := prevNode.CalcPastMedianTime()
if uint64(medianTime.Unix()) >= checker.EndTime() {
state = ThresholdFailed
break
}
// At this point, the rule change is still being voted
// on by the miners, so iterate backwards through the
// confirmation window to count all of the votes in it.
var count uint32
countNode := prevNode
for i := int32(0); i < confirmationWindow; i++ {
condition, err := checker.Condition(countNode)
if err != nil {
return ThresholdFailed, err
}
if condition {
count++
}
// Get the previous block node.
countNode = countNode.parent
}
// The state is locked in if the number of blocks in the
// period that voted for the rule change meets the
// activation threshold.
if count >= checker.RuleChangeActivationThreshold() {
state = ThresholdLockedIn
}
case ThresholdLockedIn:
// The new rule becomes active when its previous state
// was locked in.
state = ThresholdActive
// Nothing to do if the previous state is active or failed since
// they are both terminal states.
case ThresholdActive:
case ThresholdFailed:
}
// Update the cache to avoid recalculating the state in the
// future.
cache.Update(&prevNode.hash, state)
}
return state, nil
}
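// exampleNextWindowState is a hypothetical sketch (not part of the upstream
// source) condensing the per-window transitions computed above: Defined moves
// to Started (or Failed on expiry), Started moves to LockedIn once the vote
// threshold is met (or Failed on expiry), LockedIn always moves to Active,
// and Active and Failed are terminal.
func exampleNextWindowState(prev ThresholdState, expired, started, thresholdMet bool) ThresholdState {
	switch prev {
	case ThresholdDefined:
		if expired {
			return ThresholdFailed
		}
		if started {
			return ThresholdStarted
		}
	case ThresholdStarted:
		if expired {
			return ThresholdFailed
		}
		if thresholdMet {
			return ThresholdLockedIn
		}
	case ThresholdLockedIn:
		return ThresholdActive
	}
	return prev // ThresholdActive and ThresholdFailed are terminal states.
}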
// ThresholdState returns the current rule change threshold state of the given
// deployment ID for the block AFTER the end of the current best chain.
//
// This function is safe for concurrent access.
func (b *BlockChain) ThresholdState(deploymentID uint32) (ThresholdState, error) {
b.chainLock.Lock()
state, err := b.deploymentState(b.bestChain.Tip(), deploymentID)
b.chainLock.Unlock()
return state, err
}
// IsDeploymentActive returns true if the target deploymentID is active, and
// false otherwise.
//
// This function is safe for concurrent access.
func (b *BlockChain) IsDeploymentActive(deploymentID uint32) (bool, error) {
b.chainLock.Lock()
state, err := b.deploymentState(b.bestChain.Tip(), deploymentID)
b.chainLock.Unlock()
if err != nil {
return false, err
}
return state == ThresholdActive, nil
}
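// exampleDeploymentCheck is a hypothetical sketch (not part of the upstream
// source). The deployment ID is assumed to come from the chaincfg package
// (for example chaincfg.DeploymentSegwit) and indexes into
// chainParams.Deployments.
func exampleDeploymentCheck(chain *BlockChain, deploymentID uint32) {
	active, err := chain.IsDeploymentActive(deploymentID)
	if err != nil {
		log.Errorf("deployment %d: %v", deploymentID, err)
		return
	}
	log.Infof("deployment %d active: %v", deploymentID, active)
}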
// deploymentState returns the current rule change threshold state for a given
// deploymentID. The threshold is evaluated from the point of view of the block
// node passed in as the first argument to this method.
//
// It is important to note that, as the variable name indicates, this function
// expects the block node prior to the block for which the deployment state is
// desired. In other words, the returned deployment state is for the block
// AFTER the passed node.
//
// This function MUST be called with the chain state lock held (for writes).
func (b *BlockChain) deploymentState(prevNode *blockNode, deploymentID uint32) (ThresholdState, error) {
if deploymentID >= uint32(len(b.chainParams.Deployments)) {
return ThresholdFailed, DeploymentError(deploymentID)
}
deployment := &b.chainParams.Deployments[deploymentID]
checker := deploymentChecker{deployment: deployment, chain: b}
cache := &b.deploymentCaches[deploymentID]
return b.thresholdState(prevNode, checker, cache)
}
// initThresholdCaches initializes the threshold state caches for each warning
// bit and defined deployment and provides warnings if the chain is current per
// the warnUnknownVersions and warnUnknownRuleActivations functions.
func (b *BlockChain) initThresholdCaches() error {
// Initialize the warning and deployment caches by calculating the
// threshold state for each of them. This will ensure the caches are
// populated and that any states which needed to be recalculated due to
// definition changes are recalculated now.
prevNode := b.bestChain.Tip().parent
for bit := uint32(0); bit < vbNumBits; bit++ {
checker := bitConditionChecker{bit: bit, chain: b}
cache := &b.warningCaches[bit]
_, err := b.thresholdState(prevNode, checker, cache)
if err != nil {
return err
}
}
for id := 0; id < len(b.chainParams.Deployments); id++ {
deployment := &b.chainParams.Deployments[id]
cache := &b.deploymentCaches[id]
checker := deploymentChecker{deployment: deployment, chain: b}
_, err := b.thresholdState(prevNode, checker, cache)
if err != nil {
return err
}
}
// No warnings about unknown rules or versions until the chain is
// current.
if b.isCurrent() {
// Warn if a high enough percentage of the last blocks have
// unexpected versions.
bestNode := b.bestChain.Tip()
if err := b.warnUnknownVersions(bestNode); err != nil {
return err
}
// Warn if any unknown new rules are either about to activate or
// have already been activated.
if err := b.warnUnknownRuleActivations(bestNode); err != nil {
return err
}
}
return nil
}

27
vendor/github.com/btcsuite/btcd/blockchain/timesorter.go generated vendored Normal file

@@ -0,0 +1,27 @@
// Copyright (c) 2013-2017 The btcsuite developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package blockchain
// timeSorter implements sort.Interface to allow a slice of timestamps to
// be sorted.
type timeSorter []int64
// Len returns the number of timestamps in the slice. It is part of the
// sort.Interface implementation.
func (s timeSorter) Len() int {
return len(s)
}
// Swap swaps the timestamps at the passed indices. It is part of the
// sort.Interface implementation.
func (s timeSorter) Swap(i, j int) {
s[i], s[j] = s[j], s[i]
}
// Less returns whether the timestamp with index i should sort before the
// timestamp with index j. It is part of the sort.Interface implementation.
func (s timeSorter) Less(i, j int) bool {
return s[i] < s[j]
}
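// exampleMedianTimestamp is a hypothetical sketch (not part of the upstream
// source; import "sort" assumed) showing the intended use of timeSorter:
// sort raw unix timestamps in place and pick the middle element, as done when
// computing the median time of recent blocks.
func exampleMedianTimestamp(timestamps []int64) int64 {
	if len(timestamps) == 0 {
		return 0
	}
	sort.Sort(timeSorter(timestamps))
	return timestamps[len(timestamps)/2]
}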

604
vendor/github.com/btcsuite/btcd/blockchain/upgrade.go generated vendored Normal file

@@ -0,0 +1,604 @@
// Copyright (c) 2017 The btcsuite developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package blockchain
import (
"bytes"
"container/list"
"errors"
"fmt"
"time"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/btcsuite/btcd/database"
"github.com/btcsuite/btcd/wire"
)
const (
// blockHdrOffset defines the offsets into a v1 block index row for the
// block header.
//
// The serialized block index row format is:
// <blocklocation><blockheader>
blockHdrOffset = 12
)
// errInterruptRequested indicates that an operation was cancelled due
// to a user-requested interrupt.
var errInterruptRequested = errors.New("interrupt requested")
// interruptRequested returns true when the provided channel has been closed.
// This simplifies early shutdown slightly since the caller can just use an if
// statement instead of a select.
func interruptRequested(interrupted <-chan struct{}) bool {
select {
case <-interrupted:
return true
default:
}
return false
}
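// exampleInterruptWiring is a hypothetical sketch (not part of the upstream
// source; imports "os", "os/signal", and "syscall" assumed) showing how a
// caller might produce the channel consumed by interruptRequested: close it
// exactly once when the first signal arrives so every pending check observes
// the shutdown.
func exampleInterruptWiring() <-chan struct{} {
	interrupt := make(chan struct{})
	sigChan := make(chan os.Signal, 1)
	signal.Notify(sigChan, os.Interrupt, syscall.SIGTERM)
	go func() {
		<-sigChan
		close(interrupt) // interruptRequested now returns true everywhere.
	}()
	return interrupt
}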
// blockChainContext represents a particular block's placement in the block
// chain. This is used by the block index migration to track block metadata that
// will be written to disk.
type blockChainContext struct {
parent *chainhash.Hash
children []*chainhash.Hash
height int32
mainChain bool
}
// migrateBlockIndex migrates all block entries from the v1 block index bucket
// to the v2 bucket. The v1 bucket stores all block entries keyed by block hash,
// whereas the v2 bucket stores the exact same values, but keyed instead by
// block height + hash.
func migrateBlockIndex(db database.DB) error {
// Hardcoded bucket names so updates to the global values do not affect
// old upgrades.
v1BucketName := []byte("ffldb-blockidx")
v2BucketName := []byte("blockheaderidx")
err := db.Update(func(dbTx database.Tx) error {
v1BlockIdxBucket := dbTx.Metadata().Bucket(v1BucketName)
if v1BlockIdxBucket == nil {
return fmt.Errorf("Bucket %s does not exist", v1BucketName)
}
log.Info("Re-indexing block information in the database. This might take a while...")
v2BlockIdxBucket, err :=
dbTx.Metadata().CreateBucketIfNotExists(v2BucketName)
if err != nil {
return err
}
// Get tip of the main chain.
serializedData := dbTx.Metadata().Get(chainStateKeyName)
state, err := deserializeBestChainState(serializedData)
if err != nil {
return err
}
tip := &state.hash
// Scan the old block index bucket and construct a mapping of each block
// to parent block and all child blocks.
blocksMap, err := readBlockTree(v1BlockIdxBucket)
if err != nil {
return err
}
// Use the block graph to calculate the height of each block.
err = determineBlockHeights(blocksMap)
if err != nil {
return err
}
// Find blocks on the main chain with the block graph and current tip.
determineMainChainBlocks(blocksMap, tip)
// Now that we have heights for all blocks, scan the old block index
// bucket and insert all rows into the new one.
return v1BlockIdxBucket.ForEach(func(hashBytes, blockRow []byte) error {
endOffset := blockHdrOffset + blockHdrSize
headerBytes := blockRow[blockHdrOffset:endOffset:endOffset]
var hash chainhash.Hash
copy(hash[:], hashBytes[0:chainhash.HashSize])
chainContext := blocksMap[hash]
if chainContext.height == -1 {
return fmt.Errorf("Unable to calculate chain height for "+
"stored block %s", hash)
}
// Mark blocks as valid if they are part of the main chain.
status := statusDataStored
if chainContext.mainChain {
status |= statusValid
}
// Write header to v2 bucket
value := make([]byte, blockHdrSize+1)
copy(value[0:blockHdrSize], headerBytes)
value[blockHdrSize] = byte(status)
key := blockIndexKey(&hash, uint32(chainContext.height))
err := v2BlockIdxBucket.Put(key, value)
if err != nil {
return err
}
// Delete header from v1 bucket
truncatedRow := blockRow[0:blockHdrOffset:blockHdrOffset]
return v1BlockIdxBucket.Put(hashBytes, truncatedRow)
})
})
if err != nil {
return err
}
log.Infof("Block database migration complete")
return nil
}
// readBlockTree reads the old block index bucket and constructs a mapping of
// each block to its parent block and all child blocks. This mapping represents
// the full tree of blocks. This function does not populate the height or
// mainChain fields of the returned blockChainContext values.
func readBlockTree(v1BlockIdxBucket database.Bucket) (map[chainhash.Hash]*blockChainContext, error) {
blocksMap := make(map[chainhash.Hash]*blockChainContext)
err := v1BlockIdxBucket.ForEach(func(_, blockRow []byte) error {
var header wire.BlockHeader
endOffset := blockHdrOffset + blockHdrSize
headerBytes := blockRow[blockHdrOffset:endOffset:endOffset]
err := header.Deserialize(bytes.NewReader(headerBytes))
if err != nil {
return err
}
blockHash := header.BlockHash()
prevHash := header.PrevBlock
if blocksMap[blockHash] == nil {
blocksMap[blockHash] = &blockChainContext{height: -1}
}
if blocksMap[prevHash] == nil {
blocksMap[prevHash] = &blockChainContext{height: -1}
}
blocksMap[blockHash].parent = &prevHash
blocksMap[prevHash].children =
append(blocksMap[prevHash].children, &blockHash)
return nil
})
return blocksMap, err
}
// determineBlockHeights takes the map of block hashes to their chain
// context and uses the parent/child links to compute the height for each
// block. The function assigns a
// height of 0 to the genesis hash and explores the tree of blocks
// breadth-first, assigning a height to every block with a path back to the
// genesis block. This function modifies the height field on the blocksMap
// entries.
func determineBlockHeights(blocksMap map[chainhash.Hash]*blockChainContext) error {
queue := list.New()
// The genesis block is included in blocksMap as a child of the zero hash
// because that is the value of the PrevBlock field in the genesis header.
preGenesisContext, exists := blocksMap[zeroHash]
if !exists || len(preGenesisContext.children) == 0 {
return fmt.Errorf("Unable to find genesis block")
}
for _, genesisHash := range preGenesisContext.children {
blocksMap[*genesisHash].height = 0
queue.PushBack(genesisHash)
}
for e := queue.Front(); e != nil; e = queue.Front() {
queue.Remove(e)
hash := e.Value.(*chainhash.Hash)
height := blocksMap[*hash].height
// For each block with this one as a parent, assign it a height and
// push to queue for future processing.
for _, childHash := range blocksMap[*hash].children {
blocksMap[*childHash].height = height + 1
queue.PushBack(childHash)
}
}
return nil
}
// determineMainChainBlocks traverses the block graph down from the tip to
// determine which block hashes are part of the main chain. This function
// modifies the mainChain field on the blocksMap entries.
func determineMainChainBlocks(blocksMap map[chainhash.Hash]*blockChainContext, tip *chainhash.Hash) {
for nextHash := tip; *nextHash != zeroHash; nextHash = blocksMap[*nextHash].parent {
blocksMap[*nextHash].mainChain = true
}
}
// deserializeUtxoEntryV0 decodes a utxo entry from the passed serialized byte
// slice according to the legacy version 0 format into a map of utxos keyed by
// the output index within the transaction. The map is necessary because the
// previous format encoded all unspent outputs for a transaction using a single
// entry, whereas the new format encodes each unspent output individually.
//
// The legacy format is as follows:
//
// <version><height><header code><unspentness bitmap>[<compressed txouts>,...]
//
// Field Type Size
// version VLQ variable
// block height VLQ variable
// header code VLQ variable
// unspentness bitmap []byte variable
// compressed txouts
// compressed amount VLQ variable
// compressed script []byte variable
//
// The serialized header code format is:
// bit 0 - containing transaction is a coinbase
// bit 1 - output zero is unspent
// bit 2 - output one is unspent
// bits 3-x - number of bytes in unspentness bitmap. When both bits 1 and 2
// are unset, it encodes N-1 since there must be at least one unspent
// output.
//
// The rationale for the header code scheme is as follows:
// - Transactions which only pay to a single output and a change output are
// extremely common, thus an extra byte for the unspentness bitmap can be
// avoided for them by encoding those two outputs in the low order bits.
// - Given it is encoded as a VLQ which can encode values up to 127 with a
// single byte, that leaves 4 bits to represent the number of bytes in the
// unspentness bitmap while still only consuming a single byte for the
// header code. In other words, an unspentness bitmap with up to 120
// transaction outputs can be encoded with a single-byte header code.
// This covers the vast majority of transactions.
// - Encoding N-1 bytes when both bits 1 and 2 are unset allows an additional
// 8 outpoints to be encoded before causing the header code to require an
// additional byte.
//
// Example 1:
// From tx in main blockchain:
// Blk 1, 0e3e2357e806b6cdb1f70b54c3a3a17b6714ee1f0e68bebb44a74b1efd512098
//
// 010103320496b538e853519c726a2c91e61ec11600ae1390813a627c66fb8be7947be63c52
// <><><><------------------------------------------------------------------>
// | | \--------\ |
// | height | compressed txout 0
// version header code
//
// - version: 1
// - height: 1
// - header code: 0x03 (coinbase, output zero unspent, 0 bytes of unspentness)
// - unspentness: Nothing since it is zero bytes
// - compressed txout 0:
// - 0x32: VLQ-encoded compressed amount for 5000000000 (50 BTC)
// - 0x04: special script type pay-to-pubkey
// - 0x96...52: x-coordinate of the pubkey
//
// Example 2:
// From tx in main blockchain:
// Blk 113931, 4a16969aa4764dd7507fc1de7f0baa4850a246de90c45e59a3207f9a26b5036f
//
// 0185f90b0a011200e2ccd6ec7c6e2e581349c77e067385fa8236bf8a800900b8025be1b3efc63b0ad48e7f9f10e87544528d58
// <><----><><><------------------------------------------><-------------------------------------------->
// | | | \-------------------\ | |
// version | \--------\ unspentness | compressed txout 2
// height header code compressed txout 0
//
// - version: 1
// - height: 113931
// - header code: 0x0a (output zero unspent, 1 byte in unspentness bitmap)
// - unspentness: [0x01] (bit 0 is set, so output 0+2 = 2 is unspent)
// NOTE: It's +2 since the first two outputs are encoded in the header code
// - compressed txout 0:
// - 0x12: VLQ-encoded compressed amount for 20000000 (0.2 BTC)
// - 0x00: special script type pay-to-pubkey-hash
// - 0xe2...8a: pubkey hash
// - compressed txout 2:
// - 0x8009: VLQ-encoded compressed amount for 15000000 (0.15 BTC)
// - 0x00: special script type pay-to-pubkey-hash
// - 0xb8...58: pubkey hash
//
// Example 3:
// From tx in main blockchain:
// Blk 338156, 1b02d1c8cfef60a189017b9a420c682cf4a0028175f2f563209e4ff61c8c3620
//
// 0193d06c100000108ba5b9e763011dd46a006572d820e448e12d2bbb38640bc718e6
// <><----><><----><-------------------------------------------------->
// | | | \-----------------\ |
// version | \--------\ unspentness |
// height header code compressed txout 22
//
// - version: 1
// - height: 338156
// - header code: 0x10 (2+1 = 3 bytes in unspentness bitmap)
// NOTE: It's +1 since neither bit 1 nor 2 are set, so N-1 is encoded.
// - unspentness: [0x00 0x00 0x10] (bit 20 is set, so output 20+2 = 22 is unspent)
// NOTE: It's +2 since the first two outputs are encoded in the header code
// - compressed txout 22:
// - 0x8ba5b9e763: VLQ-encoded compressed amount for 366875659 (3.66875659 BTC)
// - 0x01: special script type pay-to-script-hash
// - 0x1d...e6: script hash
func deserializeUtxoEntryV0(serialized []byte) (map[uint32]*UtxoEntry, error) {
// Deserialize the version.
//
// NOTE: Ignore version since it is no longer used in the new format.
_, bytesRead := deserializeVLQ(serialized)
offset := bytesRead
if offset >= len(serialized) {
return nil, errDeserialize("unexpected end of data after version")
}
// Deserialize the block height.
blockHeight, bytesRead := deserializeVLQ(serialized[offset:])
offset += bytesRead
if offset >= len(serialized) {
return nil, errDeserialize("unexpected end of data after height")
}
// Deserialize the header code.
code, bytesRead := deserializeVLQ(serialized[offset:])
offset += bytesRead
if offset >= len(serialized) {
return nil, errDeserialize("unexpected end of data after header")
}
// Decode the header code.
//
// Bit 0 indicates whether the containing transaction is a coinbase.
// Bit 1 indicates output 0 is unspent.
// Bit 2 indicates output 1 is unspent.
// Bits 3-x encodes the number of non-zero unspentness bitmap bytes that
// follow. When both output 0 and 1 are spent, it encodes N-1.
isCoinBase := code&0x01 != 0
output0Unspent := code&0x02 != 0
output1Unspent := code&0x04 != 0
numBitmapBytes := code >> 3
if !output0Unspent && !output1Unspent {
numBitmapBytes++
}
// Ensure there are enough bytes left to deserialize the unspentness
// bitmap.
if uint64(len(serialized[offset:])) < numBitmapBytes {
return nil, errDeserialize("unexpected end of data for " +
"unspentness bitmap")
}
// Add sparse output for unspent outputs 0 and 1 as needed based on the
// details provided by the header code.
var outputIndexes []uint32
if output0Unspent {
outputIndexes = append(outputIndexes, 0)
}
if output1Unspent {
outputIndexes = append(outputIndexes, 1)
}
// Decode the unspentness bitmap adding a sparse output for each unspent
// output.
for i := uint32(0); i < uint32(numBitmapBytes); i++ {
unspentBits := serialized[offset]
for j := uint32(0); j < 8; j++ {
if unspentBits&0x01 != 0 {
// The first 2 outputs are encoded via the
// header code, so adjust the output number
// accordingly.
outputNum := 2 + i*8 + j
outputIndexes = append(outputIndexes, outputNum)
}
unspentBits >>= 1
}
offset++
}
// Map to hold all of the converted outputs.
entries := make(map[uint32]*UtxoEntry)
// All entries will need to potentially be marked as a coinbase.
var packedFlags txoFlags
if isCoinBase {
packedFlags |= tfCoinBase
}
// Decode and add all of the utxos.
for i, outputIndex := range outputIndexes {
// Decode the next utxo.
amount, pkScript, bytesRead, err := decodeCompressedTxOut(
serialized[offset:])
if err != nil {
return nil, errDeserialize(fmt.Sprintf("unable to "+
"decode utxo at index %d: %v", i, err))
}
offset += bytesRead
// Create a new utxo entry with the details deserialized above.
entries[outputIndex] = &UtxoEntry{
amount: int64(amount),
pkScript: pkScript,
blockHeight: int32(blockHeight),
packedFlags: packedFlags,
}
}
return entries, nil
}
// upgradeUtxoSetToV2 migrates the utxo set entries from version 1 to 2 in
// batches. The utxo set is guaranteed to be fully updated if this returns
// without failure.
func upgradeUtxoSetToV2(db database.DB, interrupt <-chan struct{}) error {
// Hardcoded bucket names so updates to the global values do not affect
// old upgrades.
var (
v1BucketName = []byte("utxoset")
v2BucketName = []byte("utxosetv2")
)
log.Infof("Upgrading utxo set to v2. This will take a while...")
start := time.Now()
// Create the new utxo set bucket as needed.
err := db.Update(func(dbTx database.Tx) error {
_, err := dbTx.Metadata().CreateBucketIfNotExists(v2BucketName)
return err
})
if err != nil {
return err
}
// doBatch contains the primary logic for upgrading the utxo set from
// version 1 to 2 in batches. This is done because the utxo set can be
// huge and thus attempting to migrate in a single database transaction
// would result in massive memory usage and could potentially crash on
// many systems due to ulimits.
//
// It returns the number of utxos processed.
const maxUtxos = 200000
doBatch := func(dbTx database.Tx) (uint32, error) {
v1Bucket := dbTx.Metadata().Bucket(v1BucketName)
v2Bucket := dbTx.Metadata().Bucket(v2BucketName)
v1Cursor := v1Bucket.Cursor()
// Migrate utxos so long as the max number of utxos for this
// batch has not been exceeded.
var numUtxos uint32
for ok := v1Cursor.First(); ok && numUtxos < maxUtxos; ok =
v1Cursor.Next() {
// Old key was the transaction hash.
oldKey := v1Cursor.Key()
var txHash chainhash.Hash
copy(txHash[:], oldKey)
// Deserialize the old entry which included all utxos
// for the given transaction.
utxos, err := deserializeUtxoEntryV0(v1Cursor.Value())
if err != nil {
return 0, err
}
// Add an entry for each utxo into the new bucket using
// the new format.
for txOutIdx, utxo := range utxos {
reserialized, err := serializeUtxoEntry(utxo)
if err != nil {
return 0, err
}
key := outpointKey(wire.OutPoint{
Hash: txHash,
Index: txOutIdx,
})
err = v2Bucket.Put(*key, reserialized)
// NOTE: The key is intentionally not recycled
// here since the database interface contract
// prohibits modifications. It will be garbage
// collected normally when the database is done
// with it.
if err != nil {
return 0, err
}
}
// Remove old entry.
err = v1Bucket.Delete(oldKey)
if err != nil {
return 0, err
}
numUtxos += uint32(len(utxos))
if interruptRequested(interrupt) {
// No error here so the database transaction
// is not cancelled and therefore outstanding
// work is written to disk.
break
}
}
return numUtxos, nil
}
// Migrate all entries in batches for the reasons mentioned above.
var totalUtxos uint64
for {
var numUtxos uint32
err := db.Update(func(dbTx database.Tx) error {
var err error
numUtxos, err = doBatch(dbTx)
return err
})
if err != nil {
return err
}
if interruptRequested(interrupt) {
return errInterruptRequested
}
if numUtxos == 0 {
break
}
totalUtxos += uint64(numUtxos)
log.Infof("Migrated %d utxos (%d total)", numUtxos, totalUtxos)
}
// Remove the old bucket and update the utxo set version once it has
// been fully migrated.
err = db.Update(func(dbTx database.Tx) error {
err := dbTx.Metadata().DeleteBucket(v1BucketName)
if err != nil {
return err
}
return dbPutVersion(dbTx, utxoSetVersionKeyName, 2)
})
if err != nil {
return err
}
seconds := int64(time.Since(start) / time.Second)
log.Infof("Done upgrading utxo set. Total utxos: %d in %d seconds",
totalUtxos, seconds)
return nil
}
// maybeUpgradeDbBuckets checks the database version of the buckets used by this
// package and performs any needed upgrades to bring them to the latest version.
//
// All buckets used by this package are guaranteed to be the latest version if
// this function returns without error.
func (b *BlockChain) maybeUpgradeDbBuckets(interrupt <-chan struct{}) error {
// Load or create bucket versions as needed.
var utxoSetVersion uint32
err := b.db.Update(func(dbTx database.Tx) error {
// Load the utxo set version from the database or create it and
// initialize it to version 1 if it doesn't exist.
var err error
utxoSetVersion, err = dbFetchOrCreateVersion(dbTx,
utxoSetVersionKeyName, 1)
return err
})
if err != nil {
return err
}
// Update the utxo set to v2 if needed.
if utxoSetVersion < 2 {
if err := upgradeUtxoSetToV2(b.db, interrupt); err != nil {
return err
}
}
return nil
}

642
vendor/github.com/btcsuite/btcd/blockchain/utxoviewpoint.go generated vendored Normal file

@@ -0,0 +1,642 @@
// Copyright (c) 2015-2016 The btcsuite developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package blockchain
import (
"fmt"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/btcsuite/btcd/database"
"github.com/btcsuite/btcd/txscript"
"github.com/btcsuite/btcd/wire"
"github.com/btcsuite/btcutil"
)
// txoFlags is a bitmask defining additional information and state for a
// transaction output in a utxo view.
type txoFlags uint8
const (
// tfCoinBase indicates that a txout was contained in a coinbase tx.
tfCoinBase txoFlags = 1 << iota
// tfSpent indicates that a txout is spent.
tfSpent
// tfModified indicates that a txout has been modified since it was
// loaded.
tfModified
)
// UtxoEntry houses details about an individual transaction output in a utxo
// view such as whether or not it was contained in a coinbase tx, the height of
// the block that contains the tx, whether or not it is spent, its public key
// script, and how much it pays.
type UtxoEntry struct {
// NOTE: Additions, deletions, or modifications to the order of the
// definitions in this struct should not be made without considering
// how it affects alignment on 64-bit platforms. The current order is
// specifically crafted to result in minimal padding. There will be a
// lot of these in memory, so a few extra bytes of padding adds up.
amount int64
pkScript []byte // The public key script for the output.
blockHeight int32 // Height of block containing tx.
// packedFlags contains additional info about output such as whether it
// is a coinbase, whether it is spent, and whether it has been modified
// since it was loaded. This approach is used in order to reduce memory
// usage since there will be a lot of these in memory.
packedFlags txoFlags
}
// isModified returns whether or not the output has been modified since it was
// loaded.
func (entry *UtxoEntry) isModified() bool {
return entry.packedFlags&tfModified == tfModified
}
// IsCoinBase returns whether or not the output was contained in a coinbase
// transaction.
func (entry *UtxoEntry) IsCoinBase() bool {
return entry.packedFlags&tfCoinBase == tfCoinBase
}
// BlockHeight returns the height of the block containing the output.
func (entry *UtxoEntry) BlockHeight() int32 {
return entry.blockHeight
}
// IsSpent returns whether or not the output has been spent based upon the
// current state of the unspent transaction output view it was obtained from.
func (entry *UtxoEntry) IsSpent() bool {
return entry.packedFlags&tfSpent == tfSpent
}
// Spend marks the output as spent. Spending an output that is already spent
// has no effect.
func (entry *UtxoEntry) Spend() {
// Nothing to do if the output is already spent.
if entry.IsSpent() {
return
}
// Mark the output as spent and modified.
entry.packedFlags |= tfSpent | tfModified
}
// Amount returns the amount of the output.
func (entry *UtxoEntry) Amount() int64 {
return entry.amount
}
// PkScript returns the public key script for the output.
func (entry *UtxoEntry) PkScript() []byte {
return entry.pkScript
}
// Clone returns a shallow copy of the utxo entry.
func (entry *UtxoEntry) Clone() *UtxoEntry {
if entry == nil {
return nil
}
return &UtxoEntry{
amount: entry.amount,
pkScript: entry.pkScript,
blockHeight: entry.blockHeight,
packedFlags: entry.packedFlags,
}
}
// UtxoViewpoint represents a view into the set of unspent transaction outputs
// from a specific point of view in the chain. For example, it could be for
// the end of the main chain, some point in the history of the main chain, or
// down a side chain.
//
// The unspent outputs are needed by other transactions for things such as
// script validation and double spend prevention.
type UtxoViewpoint struct {
entries map[wire.OutPoint]*UtxoEntry
bestHash chainhash.Hash
}
// BestHash returns the hash of the best block in the chain the view currently
// represents.
func (view *UtxoViewpoint) BestHash() *chainhash.Hash {
return &view.bestHash
}
// SetBestHash sets the hash of the best block in the chain the view currently
// represents.
func (view *UtxoViewpoint) SetBestHash(hash *chainhash.Hash) {
view.bestHash = *hash
}
// LookupEntry returns information about a given transaction output according to
// the current state of the view. It will return nil if the passed output does
// not exist in the view or is otherwise not available such as when it has been
// disconnected during a reorg.
func (view *UtxoViewpoint) LookupEntry(outpoint wire.OutPoint) *UtxoEntry {
return view.entries[outpoint]
}
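// exampleSpendable is a hypothetical sketch (not part of the upstream source)
// of the pattern callers such as the script validator use: a nil entry and a
// spent entry are both treated as an unavailable output.
func exampleSpendable(view *UtxoViewpoint, outpoint wire.OutPoint) bool {
	entry := view.LookupEntry(outpoint)
	return entry != nil && !entry.IsSpent()
}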
// addTxOut adds the specified output to the view if it is not provably
// unspendable. When the view already has an entry for the output, it will be
// marked unspent. All fields will be updated for existing entries since it's
// possible it has changed during a reorg.
func (view *UtxoViewpoint) addTxOut(outpoint wire.OutPoint, txOut *wire.TxOut, isCoinBase bool, blockHeight int32) {
// Don't add provably unspendable outputs.
if txscript.IsUnspendable(txOut.PkScript) {
return
}
// Update existing entries. All fields are updated because it's
// possible (although extremely unlikely) that the existing entry is
// being replaced by a different transaction with the same hash. This
// is allowed so long as the previous transaction is fully spent.
entry := view.LookupEntry(outpoint)
if entry == nil {
entry = new(UtxoEntry)
view.entries[outpoint] = entry
}
entry.amount = txOut.Value
entry.pkScript = txOut.PkScript
entry.blockHeight = blockHeight
entry.packedFlags = tfModified
if isCoinBase {
entry.packedFlags |= tfCoinBase
}
}
// AddTxOut adds the specified output of the passed transaction to the view if
// it exists and is not provably unspendable. When the view already has an
// entry for the output, it will be marked unspent. All fields will be updated
// for existing entries since it's possible it has changed during a reorg.
func (view *UtxoViewpoint) AddTxOut(tx *btcutil.Tx, txOutIdx uint32, blockHeight int32) {
// Can't add an output for an out of bounds index.
if txOutIdx >= uint32(len(tx.MsgTx().TxOut)) {
return
}
// Update existing entries. All fields are updated because it's
// possible (although extremely unlikely) that the existing entry is
// being replaced by a different transaction with the same hash. This
// is allowed so long as the previous transaction is fully spent.
prevOut := wire.OutPoint{Hash: *tx.Hash(), Index: txOutIdx}
txOut := tx.MsgTx().TxOut[txOutIdx]
view.addTxOut(prevOut, txOut, IsCoinBase(tx), blockHeight)
}
// AddTxOuts adds all outputs in the passed transaction which are not provably
// unspendable to the view. When the view already has entries for any of the
// outputs, they are simply marked unspent. All fields will be updated for
// existing entries since it's possible it has changed during a reorg.
func (view *UtxoViewpoint) AddTxOuts(tx *btcutil.Tx, blockHeight int32) {
// Loop all of the transaction outputs and add those which are not
// provably unspendable.
isCoinBase := IsCoinBase(tx)
prevOut := wire.OutPoint{Hash: *tx.Hash()}
for txOutIdx, txOut := range tx.MsgTx().TxOut {
// Update existing entries. All fields are updated because it's
// possible (although extremely unlikely) that the existing
// entry is being replaced by a different transaction with the
// same hash. This is allowed so long as the previous
// transaction is fully spent.
prevOut.Index = uint32(txOutIdx)
view.addTxOut(prevOut, txOut, isCoinBase, blockHeight)
}
}
// connectTransaction updates the view by adding all new utxos created by the
// passed transaction and marking all utxos that the transaction spends as
// spent. In addition, when the 'stxos' argument is not nil, it will be updated
// to append an entry for each spent txout. An error will be returned if the
// view does not contain the required utxos.
func (view *UtxoViewpoint) connectTransaction(tx *btcutil.Tx, blockHeight int32, stxos *[]SpentTxOut) error {
// Coinbase transactions don't have any inputs to spend.
if IsCoinBase(tx) {
// Add the transaction's outputs as available utxos.
view.AddTxOuts(tx, blockHeight)
return nil
}
// Spend the referenced utxos by marking them spent in the view and,
// if a slice was provided for the spent txout details, append an entry
// to it.
for _, txIn := range tx.MsgTx().TxIn {
// Ensure the referenced utxo exists in the view. This should
// never happen unless a bug is introduced in the code.
entry := view.entries[txIn.PreviousOutPoint]
if entry == nil {
return AssertError(fmt.Sprintf("view missing input %v",
txIn.PreviousOutPoint))
}
// Only create the stxo details if requested.
if stxos != nil {
// Populate the stxo details using the utxo entry.
var stxo = SpentTxOut{
Amount: entry.Amount(),
PkScript: entry.PkScript(),
Height: entry.BlockHeight(),
IsCoinBase: entry.IsCoinBase(),
}
*stxos = append(*stxos, stxo)
}
// Mark the entry as spent. This is not done until after the
// relevant details have been accessed since spending it might
// clear the fields from memory in the future.
entry.Spend()
}
// Add the transaction's outputs as available utxos.
view.AddTxOuts(tx, blockHeight)
return nil
}
// connectTransactions updates the view by adding all new utxos created by all
// of the transactions in the passed block, marking all utxos the transactions
// spend as spent, and setting the best hash for the view to the passed block.
// In addition, when the 'stxos' argument is not nil, it will be updated to
// append an entry for each spent txout.
func (view *UtxoViewpoint) connectTransactions(block *btcutil.Block, stxos *[]SpentTxOut) error {
for _, tx := range block.Transactions() {
err := view.connectTransaction(tx, block.Height(), stxos)
if err != nil {
return err
}
}
// Update the best hash for view to include this block since all of its
// transactions have been connected.
view.SetBestHash(block.Hash())
return nil
}
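// exampleConnectBlock is a hypothetical sketch (not part of the upstream
// source). It assumes the view was already populated with the utxos the block
// spends (upstream does this with a fetch step before connecting) and records
// the spent outputs so they can later be written to the spend journal.
func exampleConnectBlock(view *UtxoViewpoint, block *btcutil.Block) ([]SpentTxOut, error) {
	stxos := make([]SpentTxOut, 0, countSpentOutputs(block))
	if err := view.connectTransactions(block, &stxos); err != nil {
		return nil, err
	}
	return stxos, nil
}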
// fetchEntryByHash attempts to find any available utxo for the given hash by
// searching the entire set of possible outputs for the given hash. It checks
// the view first and then falls back to the database if needed.
func (view *UtxoViewpoint) fetchEntryByHash(db database.DB, hash *chainhash.Hash) (*UtxoEntry, error) {
// First attempt to find a utxo with the provided hash in the view.
prevOut := wire.OutPoint{Hash: *hash}
for idx := uint32(0); idx < MaxOutputsPerBlock; idx++ {
prevOut.Index = idx
entry := view.LookupEntry(prevOut)
if entry != nil {
return entry, nil
}
}
// Check the database since it doesn't exist in the view. This will
// often be the case since only specifically referenced utxos are loaded
// into the view.
var entry *UtxoEntry
err := db.View(func(dbTx database.Tx) error {
var err error
entry, err = dbFetchUtxoEntryByHash(dbTx, hash)
return err
})
return entry, err
}
// disconnectTransactions updates the view by removing all of the transactions
// created by the passed block, restoring all utxos the transactions spent by
// using the provided spent txo information, and setting the best hash for the
// view to the block before the passed block.
func (view *UtxoViewpoint) disconnectTransactions(db database.DB, block *btcutil.Block, stxos []SpentTxOut) error {
// Sanity check the correct number of stxos are provided.
if len(stxos) != countSpentOutputs(block) {
return AssertError("disconnectTransactions called with bad " +
"spent transaction out information")
}
// Loop backwards through all transactions so everything is unspent in
// reverse order. This is necessary since transactions later in a block
// can spend from previous ones.
stxoIdx := len(stxos) - 1
transactions := block.Transactions()
for txIdx := len(transactions) - 1; txIdx > -1; txIdx-- {
tx := transactions[txIdx]
// All of the outputs created by this transaction may need to be
// marked as coinbase outputs.
var packedFlags txoFlags
isCoinBase := txIdx == 0
if isCoinBase {
packedFlags |= tfCoinBase
}
// Mark all of the spendable outputs originally created by the
// transaction as spent. Strictly speaking, the outputs aren't
// being spent here; rather, they cease to exist. However, since a
// pruned utxo set is used, there is no practical difference between
// a utxo that does not exist and one that has been spent.
//
// When the utxo does not already exist in the view, add an
// entry for it and then mark it spent. This is done because
// the code relies on its existence in the view in order to
// signal modifications have happened.
txHash := tx.Hash()
prevOut := wire.OutPoint{Hash: *txHash}
for txOutIdx, txOut := range tx.MsgTx().TxOut {
if txscript.IsUnspendable(txOut.PkScript) {
continue
}
prevOut.Index = uint32(txOutIdx)
entry := view.entries[prevOut]
if entry == nil {
entry = &UtxoEntry{
amount: txOut.Value,
pkScript: txOut.PkScript,
blockHeight: block.Height(),
packedFlags: packedFlags,
}
view.entries[prevOut] = entry
}
entry.Spend()
}
// Loop backwards through all of the transaction inputs (except
// for the coinbase which has no inputs) and unspend the
// referenced txos. This is necessary to match the order of the
// spent txout entries.
if isCoinBase {
continue
}
for txInIdx := len(tx.MsgTx().TxIn) - 1; txInIdx > -1; txInIdx-- {
// Ensure the spent txout index is decremented to stay
// in sync with the transaction input.
stxo := &stxos[stxoIdx]
stxoIdx--
// When there is not already an entry for the referenced
// output in the view, it means it was previously spent,
// so create a new utxo entry in order to resurrect it.
originOut := &tx.MsgTx().TxIn[txInIdx].PreviousOutPoint
entry := view.entries[*originOut]
if entry == nil {
entry = new(UtxoEntry)
view.entries[*originOut] = entry
}
// The legacy v1 spend journal format only stored the
// coinbase flag and height when the output was the last
// unspent output of the transaction. As a result, when
// the information is missing, search for it by scanning
// all possible outputs of the transaction since it must
// be in one of them.
//
// It should be noted that this is quite inefficient,
// but it realistically will almost never run since all
// new entries include the information for all outputs
// and thus the only way this will be hit is if a long
// enough reorg happens such that a block with the old
// spend data is being disconnected. The probability of
// that in practice is extremely low to begin with and
// becomes vanishingly small the more new blocks are
// connected. In the case of a fresh database that has
// only ever run with the new v2 format, this code path
// will never run.
if stxo.Height == 0 {
utxo, err := view.fetchEntryByHash(db, txHash)
if err != nil {
return err
}
if utxo == nil {
return AssertError(fmt.Sprintf("unable "+
"to resurrect legacy stxo %v",
*originOut))
}
stxo.Height = utxo.BlockHeight()
stxo.IsCoinBase = utxo.IsCoinBase()
}
// Restore the utxo using the stxo data from the spend
// journal and mark it as modified.
entry.amount = stxo.Amount
entry.pkScript = stxo.PkScript
entry.blockHeight = stxo.Height
entry.packedFlags = tfModified
if stxo.IsCoinBase {
entry.packedFlags |= tfCoinBase
}
}
}
// Update the best hash for the view to the previous block since all of
// transactions for the current block have been disconnected.
view.SetBestHash(&block.MsgBlock().Header.PrevBlock)
return nil
}
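// countSpentOutputs is defined elsewhere in this package. As a note added by
// the editor (not in the original file), the sanity check above relies on the
// invariant that every input of every non-coinbase transaction contributes
// exactly one spend journal entry, i.e. the expected count is:
//
//	numSpent := 0
//	for _, tx := range block.Transactions()[1:] {
//		numSpent += len(tx.MsgTx().TxIn)
//	}
//
// which must equal len(stxos) for the journal to be replayed in reverse.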
// RemoveEntry removes the given transaction output from the current state of
// the view. It will have no effect if the passed output does not exist in the
// view.
func (view *UtxoViewpoint) RemoveEntry(outpoint wire.OutPoint) {
delete(view.entries, outpoint)
}
// Entries returns the underlying map that stores all of the utxo entries.
func (view *UtxoViewpoint) Entries() map[wire.OutPoint]*UtxoEntry {
return view.entries
}
// commit prunes all entries marked modified that are now fully spent and marks
// all entries as unmodified.
func (view *UtxoViewpoint) commit() {
for outpoint, entry := range view.entries {
if entry == nil || (entry.isModified() && entry.IsSpent()) {
delete(view.entries, outpoint)
continue
}
// Clear the modified flag (rather than toggling it) so entries
// that were already unmodified stay unmodified.
entry.packedFlags &^= tfModified
}
}
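// Illustrative sketch (added by the editor, not part of the original file):
// after commit, any entry that was spent while modified is pruned from the
// map entirely, so a later flush can skip it. The helper name is
// hypothetical.
func exampleCommitPrunes(view *UtxoViewpoint, outpoint wire.OutPoint) bool {
	if entry := view.LookupEntry(outpoint); entry != nil && !entry.IsSpent() {
		// Spending marks the entry both spent and modified.
		entry.Spend()
	}
	view.commit()
	// A modified, fully spent entry is removed from the map by commit.
	return view.LookupEntry(outpoint) == nil
}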
// fetchUtxosMain fetches unspent transaction output data about the provided
// set of outpoints from the point of view of the end of the main chain at the
// time of the call.
//
// Upon completion of this function, the view will contain an entry for each
// requested outpoint. Spent outputs, or those which otherwise don't exist,
// will result in a nil entry in the view.
func (view *UtxoViewpoint) fetchUtxosMain(db database.DB, outpoints map[wire.OutPoint]struct{}) error {
// Nothing to do if there are no requested outputs.
if len(outpoints) == 0 {
return nil
}
// Load the requested set of unspent transaction outputs from the point
// of view of the end of the main chain.
//
// NOTE: Missing entries are not considered an error here and instead
// will result in nil entries in the view. This is intentionally done
// so other code can use the presence of an entry in the store as a way
// to avoid unnecessarily attempting to reload it from the database.
return db.View(func(dbTx database.Tx) error {
for outpoint := range outpoints {
entry, err := dbFetchUtxoEntry(dbTx, outpoint)
if err != nil {
return err
}
view.entries[outpoint] = entry
}
return nil
})
}
// fetchUtxos loads the unspent transaction outputs for the provided set of
// outputs into the view from the database as needed unless they already exist
// in the view in which case they are ignored.
func (view *UtxoViewpoint) fetchUtxos(db database.DB, outpoints map[wire.OutPoint]struct{}) error {
// Nothing to do if there are no requested outputs.
if len(outpoints) == 0 {
return nil
}
// Filter entries that are already in the view.
neededSet := make(map[wire.OutPoint]struct{})
for outpoint := range outpoints {
// Already loaded into the current view.
if _, ok := view.entries[outpoint]; ok {
continue
}
neededSet[outpoint] = struct{}{}
}
// Request the input utxos from the database.
return view.fetchUtxosMain(db, neededSet)
}
// fetchInputUtxos loads the unspent transaction outputs for the inputs
// referenced by the transactions in the given block into the view from the
// database as needed. In particular, referenced entries that are earlier in
// the block are added to the view and entries that are already in the view are
// not modified.
func (view *UtxoViewpoint) fetchInputUtxos(db database.DB, block *btcutil.Block) error {
// Build a map of in-flight transactions because some of the inputs in
// this block could be referencing other transactions earlier in this
// block which are not yet in the chain.
txInFlight := map[chainhash.Hash]int{}
transactions := block.Transactions()
for i, tx := range transactions {
txInFlight[*tx.Hash()] = i
}
// Loop through all of the transaction inputs (except for the coinbase
// which has no inputs) collecting them into sets of what is needed and
// what is already known (in-flight).
neededSet := make(map[wire.OutPoint]struct{})
for i, tx := range transactions[1:] {
for _, txIn := range tx.MsgTx().TxIn {
// It is acceptable for a transaction input to reference
// the output of another transaction in this block only
// if the referenced transaction comes before the
// current one in this block. Add the outputs of the
// referenced transaction as available utxos when this
// is the case. Otherwise, the utxo details are still
// needed.
//
// NOTE: The >= is correct here because i is one less
// than the actual position of the transaction within
// the block due to skipping the coinbase.
originHash := &txIn.PreviousOutPoint.Hash
if inFlightIndex, ok := txInFlight[*originHash]; ok &&
i >= inFlightIndex {
originTx := transactions[inFlightIndex]
view.AddTxOuts(originTx, block.Height())
continue
}
// Don't request entries that are already in the view
// from the database.
if _, ok := view.entries[txIn.PreviousOutPoint]; ok {
continue
}
neededSet[txIn.PreviousOutPoint] = struct{}{}
}
}
// Request the input utxos from the database.
return view.fetchUtxosMain(db, neededSet)
}
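// Worked example for the in-flight rule above (added by the editor, not in
// the original file): with transactions [coinbase, A, B], the loop over
// transactions[1:] visits A at i=0 and B at i=1, while txInFlight records A
// at block position 1 and B at position 2. If B spends an output of A, the
// lookup yields inFlightIndex=1 and i=1, so i >= inFlightIndex holds and A's
// outputs are added to the view instead of being requested from the database.
// If A referenced an output of B, i=0 >= 2 would fail and the outpoint would
// (correctly) fall through to the needed set.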
// NewUtxoViewpoint returns a new empty unspent transaction output view.
func NewUtxoViewpoint() *UtxoViewpoint {
return &UtxoViewpoint{
entries: make(map[wire.OutPoint]*UtxoEntry),
}
}
// FetchUtxoView loads unspent transaction outputs for the inputs referenced by
// the passed transaction from the point of view of the end of the main chain.
// It also attempts to fetch the utxos for the outputs of the transaction itself
// so the returned view can be examined for duplicate transactions.
//
// This function is safe for concurrent access; however, the returned view is
// NOT.
func (b *BlockChain) FetchUtxoView(tx *btcutil.Tx) (*UtxoViewpoint, error) {
// Create a set of needed outputs based on those referenced by the
// inputs of the passed transaction and the outputs of the transaction
// itself.
neededSet := make(map[wire.OutPoint]struct{})
prevOut := wire.OutPoint{Hash: *tx.Hash()}
for txOutIdx := range tx.MsgTx().TxOut {
prevOut.Index = uint32(txOutIdx)
neededSet[prevOut] = struct{}{}
}
if !IsCoinBase(tx) {
for _, txIn := range tx.MsgTx().TxIn {
neededSet[txIn.PreviousOutPoint] = struct{}{}
}
}
// Request the utxos from the point of view of the end of the main
// chain.
view := NewUtxoViewpoint()
b.chainLock.RLock()
err := view.fetchUtxosMain(b.db, neededSet)
b.chainLock.RUnlock()
return view, err
}
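// Illustrative usage sketch (added by the editor, not part of the original
// file): checking whether any output of a transaction already exists unspent
// in the view, which is how a caller can detect a duplicate of an older
// transaction. The helper name is hypothetical.
func exampleHasUnspentDuplicate(b *BlockChain, tx *btcutil.Tx) (bool, error) {
	view, err := b.FetchUtxoView(tx)
	if err != nil {
		return false, err
	}
	prevOut := wire.OutPoint{Hash: *tx.Hash()}
	for txOutIdx := range tx.MsgTx().TxOut {
		prevOut.Index = uint32(txOutIdx)
		if entry := view.LookupEntry(prevOut); entry != nil && !entry.IsSpent() {
			return true, nil
		}
	}
	return false, nil
}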
// FetchUtxoEntry loads and returns the requested unspent transaction output
// from the point of view of the end of the main chain.
//
// NOTE: Requesting an output for which there is no data will NOT return an
// error. Instead both the entry and the error will be nil. This is done to
// allow pruning of spent transaction outputs. In practice this means the
// caller must check if the returned entry is nil before invoking methods on it.
//
// This function is safe for concurrent access; however, the returned entry
// (if any) is NOT.
func (b *BlockChain) FetchUtxoEntry(outpoint wire.OutPoint) (*UtxoEntry, error) {
b.chainLock.RLock()
defer b.chainLock.RUnlock()
var entry *UtxoEntry
err := b.db.View(func(dbTx database.Tx) error {
var err error
entry, err = dbFetchUtxoEntry(dbTx, outpoint)
return err
})
if err != nil {
return nil, err
}
return entry, nil
}
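// Illustrative usage sketch (added by the editor, not part of the original
// file): per the NOTE above, the nil/nil return must be handled before any
// methods are invoked on the entry. The helper name is hypothetical.
func exampleUtxoAmount(b *BlockChain, outpoint wire.OutPoint) (int64, error) {
	entry, err := b.FetchUtxoEntry(outpoint)
	if err != nil {
		return 0, err
	}
	if entry == nil || entry.IsSpent() {
		// No unspent output exists for the outpoint; not an error.
		return 0, nil
	}
	return entry.Amount(), nil
}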

1279
vendor/github.com/btcsuite/btcd/blockchain/validate.go generated vendored Normal file

File diff suppressed because it is too large

301
vendor/github.com/btcsuite/btcd/blockchain/versionbits.go generated vendored Normal file

View File

@@ -0,0 +1,301 @@
// Copyright (c) 2016-2017 The btcsuite developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package blockchain
import (
"math"
"github.com/btcsuite/btcd/chaincfg"
)
const (
// vbLegacyBlockVersion is the highest legacy block version before the
// version bits scheme became active.
vbLegacyBlockVersion = 4
// vbTopBits defines the bits to set in the version to signal that the
// version bits scheme is being used.
vbTopBits = 0x20000000
// vbTopMask is the bitmask to use to determine whether or not the
// version bits scheme is in use.
vbTopMask = 0xe0000000
// vbNumBits is the total number of bits available for use with the
// version bits scheme.
vbNumBits = 29
// unknownVerNumToCheck is the number of previous blocks to consider
// when checking for a threshold of unknown block versions for the
// purposes of warning the user.
unknownVerNumToCheck = 100
// unknownVerWarnNum is the threshold of previous blocks that have an
// unknown version to use for the purposes of warning the user.
unknownVerWarnNum = unknownVerNumToCheck / 2
)
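// Illustrative sketch (added by the editor, not part of the original file):
// how the constants above combine when interpreting a block version. A
// version participates in the version bits scheme only when its top three
// bits match vbTopBits under vbTopMask, leaving vbNumBits lower bits, one per
// potential deployment. The helper name is hypothetical.
func exampleVersionBitSet(version int32, bit uint32) bool {
	v := uint32(version)
	if v&vbTopMask != vbTopBits {
		// Legacy version numbering; no deployment bits are signalled.
		return false
	}
	return v&(uint32(1)<<bit) != 0
}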
// bitConditionChecker provides a thresholdConditionChecker which can be used to
// test whether or not a specific bit is set when it's not supposed to be
// according to the expected version based on the known deployments and the
// current state of the chain. This is useful for detecting and warning about
// unknown rule activations.
type bitConditionChecker struct {
bit uint32
chain *BlockChain
}
// Ensure the bitConditionChecker type implements the thresholdConditionChecker
// interface.
var _ thresholdConditionChecker = bitConditionChecker{}
// BeginTime returns the unix timestamp for the median block time after which
// voting on a rule change starts (at the next window).
//
// Since this implementation checks for unknown rules, it returns 0 so the rule
// is always treated as active.
//
// This is part of the thresholdConditionChecker interface implementation.
func (c bitConditionChecker) BeginTime() uint64 {
return 0
}
// EndTime returns the unix timestamp for the median block time after which an
// attempted rule change fails if it has not already been locked in or
// activated.
//
// Since this implementation checks for unknown rules, it returns the maximum
// possible timestamp so the rule is always treated as active.
//
// This is part of the thresholdConditionChecker interface implementation.
func (c bitConditionChecker) EndTime() uint64 {
return math.MaxUint64
}
// RuleChangeActivationThreshold is the number of blocks for which the condition
// must be true in order to lock in a rule change.
//
// This implementation returns the value defined by the chain params the checker
// is associated with.
//
// This is part of the thresholdConditionChecker interface implementation.
func (c bitConditionChecker) RuleChangeActivationThreshold() uint32 {
return c.chain.chainParams.RuleChangeActivationThreshold
}
// MinerConfirmationWindow is the number of blocks in each threshold state
// retarget window.
//
// This implementation returns the value defined by the chain params the checker
// is associated with.
//
// This is part of the thresholdConditionChecker interface implementation.
func (c bitConditionChecker) MinerConfirmationWindow() uint32 {
return c.chain.chainParams.MinerConfirmationWindow
}
// Condition returns true when the specific bit associated with the checker is
// set and it's not supposed to be according to the expected version based on
// the known deployments and the current state of the chain.
//
// This function MUST be called with the chain state lock held (for writes).
//
// This is part of the thresholdConditionChecker interface implementation.
func (c bitConditionChecker) Condition(node *blockNode) (bool, error) {
conditionMask := uint32(1) << c.bit
version := uint32(node.version)
if version&vbTopMask != vbTopBits {
return false, nil
}
if version&conditionMask == 0 {
return false, nil
}
expectedVersion, err := c.chain.calcNextBlockVersion(node.parent)
if err != nil {
return false, err
}
return uint32(expectedVersion)&conditionMask == 0, nil
}
// deploymentChecker provides a thresholdConditionChecker which can be used to
// test a specific deployment rule. This is required for properly detecting
// and activating consensus rule changes.
type deploymentChecker struct {
deployment *chaincfg.ConsensusDeployment
chain *BlockChain
}
// Ensure the deploymentChecker type implements the thresholdConditionChecker
// interface.
var _ thresholdConditionChecker = deploymentChecker{}
// BeginTime returns the unix timestamp for the median block time after which
// voting on a rule change starts (at the next window).
//
// This implementation returns the value defined by the specific deployment the
// checker is associated with.
//
// This is part of the thresholdConditionChecker interface implementation.
func (c deploymentChecker) BeginTime() uint64 {
return c.deployment.StartTime
}
// EndTime returns the unix timestamp for the median block time after which an
// attempted rule change fails if it has not already been locked in or
// activated.
//
// This implementation returns the value defined by the specific deployment the
// checker is associated with.
//
// This is part of the thresholdConditionChecker interface implementation.
func (c deploymentChecker) EndTime() uint64 {
return c.deployment.ExpireTime
}
// RuleChangeActivationThreshold is the number of blocks for which the condition
// must be true in order to lock in a rule change.
//
// This implementation returns the value defined by the chain params the checker
// is associated with.
//
// This is part of the thresholdConditionChecker interface implementation.
func (c deploymentChecker) RuleChangeActivationThreshold() uint32 {
return c.chain.chainParams.RuleChangeActivationThreshold
}
// MinerConfirmationWindow is the number of blocks in each threshold state
// retarget window.
//
// This implementation returns the value defined by the chain params the checker
// is associated with.
//
// This is part of the thresholdConditionChecker interface implementation.
func (c deploymentChecker) MinerConfirmationWindow() uint32 {
return c.chain.chainParams.MinerConfirmationWindow
}
// Condition returns true when the specific bit defined by the deployment
// associated with the checker is set.
//
// This is part of the thresholdConditionChecker interface implementation.
func (c deploymentChecker) Condition(node *blockNode) (bool, error) {
conditionMask := uint32(1) << c.deployment.BitNumber
version := uint32(node.version)
return (version&vbTopMask == vbTopBits) && (version&conditionMask != 0),
nil
}
// calcNextBlockVersion calculates the expected version of the block after the
// passed previous block node based on the state of started and locked in
// rule change deployments.
//
// This function differs from the exported CalcNextBlockVersion in that the
// exported version uses the current best chain as the previous block node
// while this function accepts any block node.
//
// This function MUST be called with the chain state lock held (for writes).
func (b *BlockChain) calcNextBlockVersion(prevNode *blockNode) (int32, error) {
// Set the appropriate bits for each actively defined rule deployment
// that is either in the process of being voted on, or locked in for the
// activation at the next threshold window change.
expectedVersion := uint32(vbTopBits)
for id := 0; id < len(b.chainParams.Deployments); id++ {
deployment := &b.chainParams.Deployments[id]
cache := &b.deploymentCaches[id]
checker := deploymentChecker{deployment: deployment, chain: b}
state, err := b.thresholdState(prevNode, checker, cache)
if err != nil {
return 0, err
}
if state == ThresholdStarted || state == ThresholdLockedIn {
expectedVersion |= uint32(1) << deployment.BitNumber
}
}
return int32(expectedVersion), nil
}
// CalcNextBlockVersion calculates the expected version of the block after the
// end of the current best chain based on the state of started and locked in
// rule change deployments.
//
// This function is safe for concurrent access.
func (b *BlockChain) CalcNextBlockVersion() (int32, error) {
b.chainLock.Lock()
version, err := b.calcNextBlockVersion(b.bestChain.Tip())
b.chainLock.Unlock()
return version, err
}
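// Illustrative usage sketch (added by the editor, not part of the original
// file): a mining template builder would stamp candidate headers with this
// value so in-progress deployments continue to be signalled. The helper name
// is hypothetical.
func exampleTemplateVersion(b *BlockChain) int32 {
	version, err := b.CalcNextBlockVersion()
	if err != nil {
		// Fall back to plain version bits signalling with no
		// deployment bits set.
		return int32(vbTopBits)
	}
	return version
}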
// warnUnknownRuleActivations displays a warning when any unknown new rules are
// either about to activate or have been activated. The warning for activated
// rules is only logged once, while the warning for rules that are about to
// activate is logged on every block.
//
// This function MUST be called with the chain state lock held (for writes).
func (b *BlockChain) warnUnknownRuleActivations(node *blockNode) error {
// Warn if any unknown new rules are either about to activate or have
// already been activated.
for bit := uint32(0); bit < vbNumBits; bit++ {
checker := bitConditionChecker{bit: bit, chain: b}
cache := &b.warningCaches[bit]
state, err := b.thresholdState(node.parent, checker, cache)
if err != nil {
return err
}
switch state {
case ThresholdActive:
if !b.unknownRulesWarned {
log.Warnf("Unknown new rules activated (bit %d)",
bit)
b.unknownRulesWarned = true
}
case ThresholdLockedIn:
window := int32(checker.MinerConfirmationWindow())
activationHeight := window - (node.height % window)
log.Warnf("Unknown new rules are about to activate in "+
"%d blocks (bit %d)", activationHeight, bit)
}
}
return nil
}
// warnUnknownVersions logs a warning if a high enough percentage of the last
// blocks have unexpected versions.
//
// This function MUST be called with the chain state lock held (for writes).
func (b *BlockChain) warnUnknownVersions(node *blockNode) error {
// Nothing to do if already warned.
if b.unknownVersionsWarned {
return nil
}
// Warn if enough previous blocks have unexpected versions.
numUpgraded := uint32(0)
for i := uint32(0); i < unknownVerNumToCheck && node != nil; i++ {
expectedVersion, err := b.calcNextBlockVersion(node.parent)
if err != nil {
return err
}
if expectedVersion > vbLegacyBlockVersion &&
(node.version & ^expectedVersion) != 0 {
numUpgraded++
}
node = node.parent
}
if numUpgraded > unknownVerWarnNum {
log.Warn("Unknown block versions are being mined, so new " +
"rules might be in effect. Are you running the " +
"latest version of the software?")
b.unknownVersionsWarned = true
}
return nil
}

117
vendor/github.com/btcsuite/btcd/blockchain/weight.go generated vendored Normal file
View File

@@ -0,0 +1,117 @@
// Copyright (c) 2013-2017 The btcsuite developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package blockchain
import (
"fmt"
"github.com/btcsuite/btcd/txscript"
"github.com/btcsuite/btcd/wire"
"github.com/btcsuite/btcutil"
)
const (
// MaxBlockWeight defines the maximum block weight, where "block
// weight" is interpreted as defined in BIP0141. A block's weight is
// calculated as the sum of the of bytes in the existing transactions
// and header, plus the weight of each byte within a transaction. The
// weight of a "base" byte is 4, while the weight of a witness byte is
// 1. As a result, for a block to be valid, the BlockWeight MUST be
// less than, or equal to MaxBlockWeight.
MaxBlockWeight = 4000000
// MaxBlockBaseSize is the maximum number of bytes within a block
// which can be allocated to non-witness data.
MaxBlockBaseSize = 1000000
// MaxBlockSigOpsCost is the maximum number of signature operations
// allowed for a block. It is calculated via a weighted algorithm which
// weights segregated witness sig ops lower than regular sig ops.
MaxBlockSigOpsCost = 80000
// WitnessScaleFactor determines the level of "discount" witness data
// receives compared to "base" data. A scale factor of 4 denotes that
// witness data costs 1/4 as much as regular non-witness data.
WitnessScaleFactor = 4
// MinTxOutputWeight is the minimum possible weight for a transaction
// output.
MinTxOutputWeight = WitnessScaleFactor * wire.MinTxOutPayload
// MaxOutputsPerBlock is the maximum number of transaction outputs there
// can be in a block of max weight size.
MaxOutputsPerBlock = MaxBlockWeight / MinTxOutputWeight
)
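// Worked arithmetic for the constants above (added by the editor, not in the
// original file): wire.MinTxOutPayload is 9 bytes (an 8-byte value plus a
// 1-byte pkscript length), so MinTxOutputWeight = 4*9 = 36 and
// MaxOutputsPerBlock = 4,000,000/36 = 111,111 using integer division.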
// GetBlockWeight computes the value of the weight metric for a given block.
// Currently the weight metric is the block's serialized size without any
// witness data scaled proportionally by the WitnessScaleFactor, plus the
// size of the witness data itself.
func GetBlockWeight(blk *btcutil.Block) int64 {
msgBlock := blk.MsgBlock()
baseSize := msgBlock.SerializeSizeStripped()
totalSize := msgBlock.SerializeSize()
// (baseSize * 3) + totalSize
return int64((baseSize * (WitnessScaleFactor - 1)) + totalSize)
}
// GetTransactionWeight computes the value of the weight metric for a given
// transaction. Currently the weight metric is the transaction's serialized
// size without any witness data scaled proportionally by the
// WitnessScaleFactor, plus the size of the witness data itself.
func GetTransactionWeight(tx *btcutil.Tx) int64 {
msgTx := tx.MsgTx()
baseSize := msgTx.SerializeSizeStripped()
totalSize := msgTx.SerializeSize()
// (baseSize * 3) + totalSize
return int64((baseSize * (WitnessScaleFactor - 1)) + totalSize)
}
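// Illustrative sketch (added by the editor, not part of the original file):
// the formula above is equivalent to weighting every non-witness byte at
// WitnessScaleFactor and every witness byte at 1. The helper name is
// hypothetical; GetTransactionWeight(tx) equals the sum of the two values
// returned here.
func exampleWeightParts(tx *btcutil.Tx) (base, witness int64) {
	msgTx := tx.MsgTx()
	baseSize := int64(msgTx.SerializeSizeStripped())
	witnessSize := int64(msgTx.SerializeSize()) - baseSize
	return baseSize * WitnessScaleFactor, witnessSize
}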
// GetSigOpCost returns the unified sig op cost for the passed transaction
// respecting current active soft-forks which modified sig op cost counting.
// The unified sig op cost for a transaction is computed as the sum of: the
// legacy sig op count scaled according to the WitnessScaleFactor, the sig op
// count for all p2sh inputs scaled by the WitnessScaleFactor, and finally the
// unscaled sig op count for any inputs spending witness programs.
func GetSigOpCost(tx *btcutil.Tx, isCoinBaseTx bool, utxoView *UtxoViewpoint, bip16, segWit bool) (int, error) {
numSigOps := CountSigOps(tx) * WitnessScaleFactor
if bip16 {
numP2SHSigOps, err := CountP2SHSigOps(tx, isCoinBaseTx, utxoView)
if err != nil {
return 0, err
}
numSigOps += (numP2SHSigOps * WitnessScaleFactor)
}
if segWit && !isCoinBaseTx {
msgTx := tx.MsgTx()
for txInIndex, txIn := range msgTx.TxIn {
// Ensure the referenced output is available and hasn't
// already been spent.
utxo := utxoView.LookupEntry(txIn.PreviousOutPoint)
if utxo == nil || utxo.IsSpent() {
str := fmt.Sprintf("output %v referenced from "+
"transaction %s:%d either does not "+
"exist or has already been spent",
txIn.PreviousOutPoint, tx.Hash(),
txInIndex)
return 0, ruleError(ErrMissingTxOut, str)
}
witness := txIn.Witness
sigScript := txIn.SignatureScript
pkScript := utxo.PkScript()
numSigOps += txscript.GetWitnessSigOpCount(sigScript, pkScript, witness)
}
}
return numSigOps, nil
}
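// Illustrative usage sketch (added by the editor, not part of the original
// file): block validation code is expected to accumulate the unified cost
// across a block's transactions and reject the block once the budget is
// exceeded. The helper name is hypothetical.
func exampleBlockSigOpCost(block *btcutil.Block, view *UtxoViewpoint, bip16, segWit bool) (int, error) {
	totalCost := 0
	for i, tx := range block.Transactions() {
		cost, err := GetSigOpCost(tx, i == 0, view, bip16, segWit)
		if err != nil {
			return 0, err
		}
		totalCost += cost
		if totalCost > MaxBlockSigOpsCost {
			str := fmt.Sprintf("block has too many sig ops (cost %d > max %d)",
				totalCost, MaxBlockSigOpsCost)
			return 0, ruleError(ErrTooManySigOps, str)
		}
	}
	return totalCost, nil
}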