
Releases: decred/dcrdata

v0.7.1 - Important bug fix release

28 Aug 17:49

There was a bug in block count retrieval from SQLite that caused a fatal error (crash) of dcrdata. It is a small fix, but an important one.

v0.7.0 - Growing Up

27 Aug 07:17
v0.7.0

NOTE: There are important steps to follow when upgrading. See Upgrading below for details.
SUPER IMPORTANT NOTE: An important bug fix is in v0.7.1 -- please use that instead.

Highlights

Since the previous tagged release, there have been extensive changes. The highlights are:

  • WebSocket server with support for live updates.
  • New API endpoints: address, transactions, block by hash, block size and transaction info, custom step in block range request, and more.
  • Improved reorganization handling by dcrsqlite and blockdata, on top of previous work on stakedb handling.
  • Improved synchronization of collection routines.
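The synchronization improvement boils down to running collection and storage routines in order, one at a time. A minimal sketch of the idea (not dcrdata's actual implementation): a single worker goroutine draining a channel gives strictly ordered, synchronous execution.

```go
package main

import "fmt"

// collectionQueue is an illustrative sketch of running collection and
// storage functions in order and synchronously. The name mirrors the
// collectionQueue mentioned in the Changes list; the implementation
// here is a simplification.
type collectionQueue struct {
	tasks chan func()
}

func newCollectionQueue() *collectionQueue {
	q := &collectionQueue{tasks: make(chan func(), 16)}
	go func() {
		// A single goroutine drains the channel, so tasks run
		// strictly in submission order, one at a time.
		for task := range q.tasks {
			task()
		}
	}()
	return q
}

// Submit enqueues a task for sequential execution.
func (q *collectionQueue) Submit(task func()) { q.tasks <- task }

func main() {
	q := newCollectionQueue()
	done := make(chan struct{})
	var order []int
	for i := 1; i <= 3; i++ {
		i := i
		q.Submit(func() { order = append(order, i) })
	}
	q.Submit(func() { close(done) })
	<-done
	fmt.Println(order)
}
```

Because only one goroutine executes tasks, no collection routine can observe a database that a later routine has already advanced.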

There are tons of other improvements and fixes. A more detailed list is in Changes below, but see the commit messages and code diff (v0.3.2...v0.7.0) for the gory details.
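To give a sense of the new endpoint shapes, here is a hedged sketch of building request paths for some of the endpoints described in Changes below. The route forms are inferred from these notes and may differ in detail; consult the API routers for the authoritative paths.

```go
package main

import "fmt"

// blockByHashPath addresses a block by its hash
// (see the /api/block/hash/{blockhash}/... endpoints below).
func blockByHashPath(hash string) string {
	return "/api/block/hash/" + hash
}

// txInOutPath addresses a specific input or output of a transaction;
// dir is "in" or "out" (see the /tx/{txid}/[in|out]/{txinoutindex}
// endpoints below).
func txInOutPath(txid, dir string, index int) string {
	return fmt.Sprintf("/api/tx/%s/%s/%d", txid, dir, index)
}

// blockRangeStepPath requests a block range with a custom step
// (illustrating the new step parameter in range requests).
func blockRangeStepPath(idx0, idx, step int) string {
	return fmt.Sprintf("/api/block/range/%d/%d/%d", idx0, idx, step)
}

func main() {
	fmt.Println(blockByHashPath("00000000000000123abc"))
	fmt.Println(txInOutPath("f0e1d2c3b4a5", "out", 0))
	// Request every 10th block from 1000 to 2000.
	fmt.Println(blockRangeStepPath(1000, 2000, 10))
}
```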

Upgrading

Delete the databases to force a resync: rm -r ffldb_stake dcrdata.sqlt.db, then start dcrdata.
If you are upgrading from source, you must refresh the vendor folder after pulling changes from git:

# first check out and pull the latest master
git checkout master
git pull
# refresh vendor:
rm -rf vendor/
glide install
# proceed to build/install as usual

Acknowledgements

Thanks to the following people for their contributions: @gozart1, @RogueElement, @raedah.

Changes

  • Use RealIP middleware to log actual client IP when behind reverse proxy.
  • Switch to new logger (btclog+jrick's logrotate).
  • Add CollectBlockInfo to get most data by block hash.
  • Remove poolinfo config option.
  • Give stakedb.PoolInfo() a second output, the height of the stake node.
  • Switch chi import path, update to chi 3.0.
  • Enable CORS for /api
  • Update all dependencies with glide.
  • Reorg handling for dcrsqlite.
  • Create (*Collector).CollectAPITypes to get the apitypes.BlockDataBasic and apitypes.StakeInfoExtended needed to update blocks in sqlite during reorg.
  • Remove problematic mempool trigger, and make limits more sensitive. Previously, a collection was triggered whenever the number of tickets in mempool was still less than the maximum fresh stake; this condition was unnecessary and has been removed.
  • Set the mempool new ticket count trigger default to 1, and the min interval to 2 sec.
  • Fixed nodaemontls value (pull request #64 from RogueElement/notls)
  • API - Stream and compress block range response
  • WebSocket support. This adds WebSocket connection handling to the web interface, on /ws. There are three events:
    1. Block data. This is more than just data from the block itself; it also includes information gathered following the latest block.
    2. Mempool updates. Potentially sent much more frequently than block updates.
    3. Ping.
  • API - Block verbose and block-by-hash
  • Add getBlockHashSQL, getBlockHeightSQL queries to sqlite.go
  • Add RetrieveBestBlockHash and RetrieveBestBlockHeight to Sqlite
  • API - Add endpoints: /api/block/hash/{blockhash}/...
  • API - Add .../hash to the routers on: /api/block/best and /api/block/{idx}.
  • Add GetBlockVerboseByHash to rpcutils package.
  • API - Add transactions-for-block via .../tx endpoints, and apitypes.BlockTransactions.
  • API - Add block size endpoints, like /block/.../size. This includes range, where an array of integers is returned.
  • A new SQLite query is added to make block size lookup fast. Only the size column is returned.
  • API - Transaction Endpoints:
    1. /tx/{txid}. Returns apitypes.Tx. Uses one getrawtransaction call to node.
    2. Add /tx/{txid}/[in|out], new types.
    3. Add /tx/{txid}/[in|out]/{txinoutindex} to get a specific input or output. This is outpoint -> output lookup, and similarly for input.
  • Fix crash on missing config file (merge PR 76 from RogueElement)
  • API - Added transaction count for block: /api/block/[{id}|{hash}]/tx/count
  • Mempool TxHandler efficiency. Include the Tx acquire time in the mempool event handler. Ensure the time since acquire is less than the time since the last collection; otherwise the transaction was already collected via a previous event. Log an already-collected tx only if it would otherwise have been collected.
  • Tweak web ui.
  • Add sample nginx.conf and rate_limiting.html
  • API - Step param in block range requests
  • Change default APIListen to 127.0.0.1:7777
  • Do not continue startup if stake DB is already opened (unavailable).
  • API - added Address endpoint
  • Make a collectionQueue to run certain collection/storage functions in order and synchronously
  • Create blockdata.(*Collector).CollectHash(*chainhash.Hash) for use when the chain node's best block is higher than the block requested for data collection. The pool value may be wrong if the stake DB is not at the same height, but the new synchronous execution changes are intended to prevent the stake DB from getting ahead. Update blockdata.(*chainMonitor).BlockConnectedHandler so that it calls Collect() only if the chain height equals the height of the requested block data, and calls CollectHash(hash) if collection is behind. A caveat with CollectHash, in addition to the pool value issue described above, is that it does not get the next block window estimates; however, these are not needed if a higher block is waiting to be processed subsequently.
  • Create PoolInfoCache, and collect pool info for each block that is connected in stakedb.
  • Rename daemonLog -> blockdataLog and DCRD -> BLKD.
  • Enable blockdata's reorg handler to collect data and update the web UI only on the last block of the new main chain. Add a special BlockDataSaver slice just for use during a reorg, so the sqlite saver is not run twice; dcrsqlite handles the reorg itself. Turn trace logging back to info in blockdata.
  • Signal mempool collector that a new block was mined via a nil Hash, sent from (*collectionQueue).ProcessBlocks(). Update (*mempoolMonitor).TxHandler to recognize the signal and skip transaction checking and other irrelevant logic. Refactor collection and storage code into (*mempoolMonitor) CollectAndStore() for reuse.
  • API - remove /directory (still have /list)

v0.3.2...v0.7.0

Stake Database, Fast Pool Info, Reorg Handling

10 Jun 18:44
v0.3.2

IMPORTANT: When upgrading to v0.3.2 it is necessary to delete your current dcrdata.sqlt.db. It shouldn't be necessary to delete the ffldb_stake folder, but deleting it as well is recommended so that it is rebuilt fresh.

The main visible difference is faster startup once the databases are synchronized, because (1) the stake database is kept up-to-date while running, and (2) no rewind by TicketMaturityHeight blocks is needed anymore. The only delay at startup is the pre-population of the live ticket cache, which takes 30-60 seconds.

Several calculations are fixed, reorganization is handled by the stake database now, and blockdata.Collect() is much faster as the ticketpoolinfo RPC is no longer used (in favor of the stake database and a live ticket cache).

Notable changes:

  • The stakedb package for maintaining the stake database while running.
  • Use StakeDatabase.PoolInfo() in sync instead of slow RPC. Also don't use BlockHeader.PoolSize as it is not the same as what stake/Node.LiveTickets() indicates.
  • Reorganization handling framework and a complete handler in the stakedb package that will disconnect orphaned blocks and switch to the new sidechain.
  • Updated dependencies (JSONRPCClientPort, testnet2, fixed OnNewTickets, wtxmgr->udb, required RPC API version, etc).
  • Mempool section of web UI is updated immediately rather than after the first regular collection.
  • Use the new GetBlockHeaderVerbose RPC.
  • Fix some data races.
  • Fix min/max fee computation in FeeInfoBlock. Add tests for FeeInfoBlock and FeeRateInfoBlock.
  • Docs and copyrights.
  • A few new functions for future development.
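The min/max fee fix concerns summary fee statistics computed per block. As a standalone sketch of the kind of computation FeeInfoBlock performs (this is not the actual txhelpers code, just the aggregation idea):

```go
package main

import "fmt"

// feeInfo computes min, max, and mean over a block's ticket fees.
// A common bug in such code is seeding min/max with zero instead of
// the first element, which this version avoids.
func feeInfo(fees []float64) (min, max, mean float64) {
	if len(fees) == 0 {
		return 0, 0, 0
	}
	min, max = fees[0], fees[0]
	var sum float64
	for _, f := range fees {
		if f < min {
			min = f
		}
		if f > max {
			max = f
		}
		sum += f
	}
	return min, max, sum / float64(len(fees))
}

func main() {
	min, max, mean := feeInfo([]float64{0.01, 0.05, 0.03})
	fmt.Println(min, max, mean)
}
```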

This is a step toward collecting block data without using RPCs that operate implicitly on the current best block, however BlockDataSaver.Store() needs refactoring.

stakedb has a PoolInfo() method for getting the ticket pool value quickly using its live ticket cache, and it does so at the height of the database rather than using the node RPC to get the ticket pool value at the current best height. Also, FeeInfoBlock() in txhelpers computes ticket fee info for an input block, rather than relying on the node RPC.
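The live ticket cache idea can be sketched as a hash-keyed map guarded by a mutex, from which pool size and value are answered without any RPC. This is illustrative only; stakedb's actual structures and types differ.

```go
package main

import (
	"fmt"
	"sync"
)

// liveTicketCache is a toy stand-in for the live ticket cache that
// lets a PoolInfo-style method answer at the database height without
// node RPC calls. Names and fields are illustrative.
type liveTicketCache struct {
	mtx     sync.RWMutex
	tickets map[string]float64 // ticket hash -> ticket value
}

func newLiveTicketCache() *liveTicketCache {
	return &liveTicketCache{tickets: make(map[string]float64)}
}

// Add records a newly live ticket (e.g. on block connect).
func (c *liveTicketCache) Add(hash string, value float64) {
	c.mtx.Lock()
	defer c.mtx.Unlock()
	c.tickets[hash] = value
}

// Remove drops a voted, revoked, or expired ticket.
func (c *liveTicketCache) Remove(hash string) {
	c.mtx.Lock()
	defer c.mtx.Unlock()
	delete(c.tickets, hash)
}

// PoolInfo returns pool size and total value from the cache alone.
func (c *liveTicketCache) PoolInfo() (size int, value float64) {
	c.mtx.RLock()
	defer c.mtx.RUnlock()
	for _, v := range c.tickets {
		value += v
	}
	return len(c.tickets), value
}

func main() {
	c := newLiveTicketCache()
	c.Add("ticketA", 100.5)
	c.Add("ticketB", 99.5)
	size, value := c.PoolInfo()
	fmt.Println(size, value)
}
```

Kept in step with block connects and disconnects, such a cache only pays the full-population cost once, at startup, matching the 30-60 second pre-population described above.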

5cecd17...master