Releases: ArweaveTeam/arweave
Release 2.8.2
Fixes issue with peer history validation upon re-joining the network.
Full Changelog: N.2.8.1...N.2.8.2
Release 2.8.1
This is a minor update. This software was prepared by the Digital History Association, in cooperation with the wider Arweave ecosystem.
Bug Fix: OOM when setting mining_server_chunk_cache_size_limit
2.8.1 deprecates the `mining_server_chunk_cache_size_limit` flag and replaces it with the `mining_cache_size_mb` flag. Miners who wish to increase or decrease the amount of memory allocated to the mining cache can specify the target cache size (in MiB) using the `mining_cache_size_mb NUM` flag.
Feature: verify mode
The new release includes a new `verify` mode. When set, the node will run a series of checks on all listed `storage_modules`. If the node discovers any inconsistencies (e.g. missing proofs, inconsistent indices) it will flag the chunks so that they can be resynced and repacked later. Once the verification completes, you can restart the node in normal mode and it will re-sync and re-pack any flagged chunks.
Note: When running in `verify` mode several flags will be forced on and several flags are disallowed. See the node output for details.
An example launch command:
`./bin/start verify data_dir /opt/data storage_module 10,unpacked storage_module 20,En2eqsVJARnTVOSh723PBXAKGmKgrGSjQ2YIGwE_ZRI.1`
Community involvement
A huge thank you to all the Mining community members who contributed to this release by identifying and investigating bugs, sharing debug logs and node metrics, and providing guidance on performance tuning!
Discord users (alphabetical order):
- BloodHunter
- Butcher_
- JF
- MCB
- Mastermind
- Qwinn
- Thaseus
- Vidiot
- a8_ar
- jimmyjoe7768
- lawso2517
- smash
- thekitty
What's Changed
- Implement configurable requesting of packed chunks from peers by @shizzard in #633
- Log every case of mining solution failure consistently by @ldmberman in #632
- Repack in place complete console message now includes storage module name by @vird in #635
- fix: fix mining chunk cache by @JamesPiechota in #636
- add verify mode by @JamesPiechota in #638
Full Changelog: N.2.8.0...N.2.8.1
Release 2.8.0
This Arweave node implementation proposes a hard fork that activates at height 1547120, approximately 2024-11-13 14:00 UTC. This software was prepared by the Digital History Association, in cooperation with the wider Arweave ecosystem.
Note: with 2.8.0, when enabling the `randomx_large_pages` option you will need to configure 3,500 HugePages rather than the 1,000 required for earlier releases. More information below.
Composite Packing
The biggest change in 2.8.0 is the introduction of a new packing format referred to as "composite". Composite packing allows miners in the Arweave network to have slower access to the dataset over time (and thus, mine on larger hard drives at the same bandwidth). The packing format used from version 2.6.0 through 2.7.4 will be referred to as `spora_2_6` going forward. `spora_2_6` will continue to be supported by the software without change for roughly 4 years.
The composite packing format allows node operators to provide a difficulty setting varying from 1 to 32. Higher difficulties take longer to pack data, but have proportionately lower read requirements while mining. For example, the read speeds for a variety of difficulties are as follows:
| Packing Format | Example storage_module configuration | Example storage_modules directory name | Time to pack (benchmarked against spora_2_6) | Disk read rate per partition when mining against a full replica |
|---|---|---|---|---|
| spora_2_6 | 12,addr | storage_module_12_addr | 1x | 200 MiB/s |
| composite.1 | 12,addr.1 | storage_module_12_addr.1 | 1x | 50 MiB/s |
| composite.2 | 12,addr.2 | storage_module_12_addr.2 | 2x | 25 MiB/s |
| composite.3 | 12,addr.3 | storage_module_12_addr.3 | 3x | 16.6667 MiB/s |
| composite.4 | 12,addr.4 | storage_module_12_addr.4 | 4x | 12.5 MiB/s |
| ... | ... | ... | ... | ... |
| composite.32 | 12,addr.32 | storage_module_12_addr.32 | 32x | 1.5625 MiB/s |
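The numbers in the table follow a simple pattern: composite difficulty d takes roughly d times as long to pack as difficulty 1, and reads 50/d MiB/s per partition, with `spora_2_6` at the 200 MiB/s baseline. A minimal sketch of that relationship (illustrative only, not part of the node software):

```python
# Sketch of the scaling pattern implied by the table above.
# Composite difficulty d packs ~d times slower than difficulty 1,
# and reads 50/d MiB/s per partition when mining a full replica
# (spora_2_6 reads 200 MiB/s; all formats yield the same hashrate).

def pack_time_multiplier(difficulty: int) -> int:
    """Relative time to pack, benchmarked against spora_2_6 (= 1x)."""
    return difficulty

def read_rate_mib_s(difficulty: int) -> float:
    """Disk read rate per partition (MiB/s) for composite.<difficulty>."""
    return 50.0 / difficulty

for d in (1, 2, 3, 4, 32):
    print(f"composite.{d}: {pack_time_multiplier(d)}x pack, "
          f"{read_rate_mib_s(d):.4f} MiB/s read")
```

Note the inverse relationship: every doubling of packing difficulty halves the required disk bandwidth while leaving the expected hashrate unchanged.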
The effective hashrate for a full replica packed to any of the supported packing formats is the same. A miner who has packed a full replica to `spora_2_6`, `composite.1`, or `composite.32` can expect to find the same number of blocks on average, but the higher-difficulty miner reads fewer chunks from their storage per second. This allows the miner to use larger hard drives in their setup, without increasing the necessary bandwidth between disk and CPU.
Each composite-packed chunk is divided into 32 sub-chunks and then packed with increasing rounds of the RandomX packing function. Each sub-chunk at difficulty 1 is packed with 10 RandomX rounds. This value was selected to roughly match the time it takes to pack a chunk using `spora_2_6`. At difficulty 2 each sub-chunk is packed with 20 RandomX rounds - this will take roughly twice as long to pack a chunk as it does with difficulty 1 or `spora_2_6`. At difficulty 3, 30 rounds, and so on.
Composite packing also uses a slightly different version of the RandomX packing function with further improvements to ASIC resistance properties. As a result, when running Arweave 2.8 with the `randomx_large_pages` option you will need to allocate 3,500 HugePages rather than the 1,000 needed for earlier node implementations. If you're unable to immediately increase your HugePages value we recommend restarting your server and trying again. If your node has been running for a while the memory space may simply be too fragmented to allocate the needed HugePages. A reboot should alleviate this issue.
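For a rough sense of the memory involved: assuming the default 2 MiB HugePage size on x86_64 Linux (our assumption - check `Hugepagesize` in `/proc/meminfo` on your host), 3,500 HugePages reserve about 6.8 GiB of RAM:

```python
# Back-of-envelope HugePages math. HUGEPAGE_MIB assumes the common
# 2 MiB default on x86_64 Linux; verify with /proc/meminfo on your host.
HUGEPAGE_MIB = 2

def hugepages_gib(pages: int) -> float:
    """RAM reserved by `pages` HugePages of HUGEPAGE_MIB each, in GiB."""
    return pages * HUGEPAGE_MIB / 1024

print(f"Arweave 2.8.0+: {hugepages_gib(3500):.2f} GiB")   # 3,500 pages
print(f"Earlier releases: {hugepages_gib(1000):.2f} GiB") # 1,000 pages
```

HugePages are typically reserved via `sysctl -w vm.nr_hugepages=3500` or the equivalent boot parameter; reserving them early after boot avoids the fragmentation issue described above.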
When mining, all storage modules within the same replica must be packed to the same packing format and difficulty level. For example, a single miner will not be able to build a solution involving chunks from `storage_module_1_addr.1` and `storage_module_2_addr.2` even if the packing address is the same.
To use `composite` packing, miners can modify their `storage_module` configuration. E.g. if previously you used `storage_module 12,addr` and had a storage module directory named `storage_module_12_addr`, now you use `storage_module 12,addr.1` and create a directory named `storage_module_12_addr.1`. Syncing, packing, repacking, and repacking in place are handled the same as before, just with the addition of the new packing formats.
While you can begin packing data to the composite format immediately, you will not be able to mine the data until the 2.8 hard fork activates at block height 1547120.
Implications of Composite Packing
By enabling lower read rates the new packing format provides greater flexibility when selecting hard drives. For example, it is now possible to mine 4 partitions off a single 16TB hard drive. Whether you need to pack to composite difficulty 1 or 2 in order to optimally mine 4 partitions on a 16TB drive will depend on the specific performance characteristics of your setup.
CPU and RAM requirements while mining will be lower for `composite` packing versus `spora_2_6`, and will continue to drop as the packing difficulty increases. Extensive benchmarking has yet to confirm the degree of these efficiency gains, but with the lower read rate comes a lower volume of data that needs to be hashed (CPU) and a lower volume of data that needs to be held in memory (RAM).
Block Header Format
The following block header fields have been added or changed:
- `packing_difficulty`: the packing difficulty of the chunks used in the block solution. Both `reward_address` and `packing_difficulty` together are needed to unpack and validate the solution chunk. `packing_difficulty` is 0 for `spora_2_6` chunks.
- `poa1->chunk` and `poa2->chunk`: under `spora_2_6` the full packed chunk is provided. Under composite only a packed sub-chunk is included. A sub-chunk is 1/32 of a packed chunk.
- `poa1->unpacked_chunk` and `poa2->unpacked_chunk`: this field is omitted for `spora_2_6`, and includes the complete unpacked chunk for all composite blocks.
- `unpacked_chunk_hash` and `unpacked_chunk_hash2`: these fields are omitted under `spora_2_6` and contain the hash of the full unpacked chunk for composite blocks.
Other Fixes and Improvements
- Protocol change: The current protocol (implemented prior to the 2.8 Hard Fork) will begin transitioning the upload pricing to a trustless oracle at block height 1551470. 2.8 introduces a slight change: 3 months of blockchain history rather than 1 month will be used to calculate the upload price.
- Bug fix: several updates to the RocksDB handling have been made which should reduce the frequency of RocksDB corruption - particularly corruption that may have previously occurred during a hard node shutdown.
  - Note: with these changes the `repair_rocksdb` option has been removed.
- Optimization: Blockchain syncing (e.g. block and transaction headers) has been optimized to reduce the time it takes to sync the full blockchain.
- Bug fix: `GET /data_sync_record` no longer reports chunks that have been purged from the disk pool.
Community involvement
A huge thank you to all the Mining community members who contributed to this release by identifying and investigating bugs and providing guidance on performance tuning!
Discord users (alphabetical order):
- BloodHunter
- Butcher_
- dzeto
- edzo
- heavyarms1912
- lawso2517
- ldp
- MaSTeRMinD
- MCB
- Methistos
- qq87237850
- Qwinn
- sk
- smash
- sumimi
- tashilo
- Thaseus
- thekitty
- Vidiot
- Wednesday
Code Changes
- pool reduced poll time by @vird in #612
- Feature/packing difficulty by @ldmberman in #590
- Don't block on ar_sync_record:add when repacking by @JamesPiechota in #619
- Create an Interface to remote_console release command by @humaite in #620
- Introduce P3 and RocksDB fixes by @shizzard in #624
- Improvement/longer pricing window by @ldmberman in #616
- Fix p3 missing pattern by @humaite in #627
- Header Synchronization Improvement by @humaite in #625
- Switch to arweave.xyz by @humaite in #629
- Improve the speed of doctor-dump and allow omitting txs by @JamesPiechota in #630
- Ensure ar_global_sync_record is notified when a chunk is removed from the disk pool by @JamesPiechota in #626
- Arweave Should Not Build Before Being Stopped by @humaite in #621
- Ensure the unpacked chunk is passed from the H2 CM peer to the H1 CM peer by @JamesPiechota in #628
Full Changelog: N.2.7.4...N.2.8.0
Release 2.7.4
Arweave 2.7.4 Release Notes
If you were previously running the 2.7.4 pre-release we recommend you update to this release. This release includes all changes from the pre-release, plus some additional fixes and features.
Mining Performance Improvements
This release includes a number of mining performance improvements, and is the first release for which we've seen a single-node miner successfully mine a full replica at almost the full expected hashrate (56 partitions mined at 95% efficiency at the time of the test). If your miner previously saw a loss of hashrate at higher partition counts despite low CPU utilization, it might be worth retesting.
Erlang VM arguments
Adjusting the arguments provided to the Erlang VM can sometimes improve mining hashrate. In particular we found that on some high-core count CPUs, restricting the number of threads available to Erlang actually improved performance. You'll want to test these options for yourself as behavior varies dramatically from system to system.
This release introduces a new command-line separator: `--`. All arguments before the `--` separator are passed to the Erlang VM; all arguments after it are passed to Arweave. If the `--` is omitted, all arguments are passed to Arweave.
For example, to restrict the number of threads available to Arweave to 24, you would build a command like:
`./bin/start +S 24:24 -- <regular arweave command line flags>`
Faster Node Shutdown
Unrelated to the above changes, this release includes a couple of fixes that should reduce the time it takes for a node to shut down following the `./bin/stop` command.
Solution recovery
This release includes several features and bug fixes intended to increase the chance that a valid solution results in a confirmed block.
Rebasing
When two or more miners post blocks at the same height, the block that is adopted by a majority of the network first will be added to the blockchain and the other blocks will be orphaned. Miners of orphaned blocks do not receive block rewards for those blocks.
This release introduces the ability for orphaned blocks to be rebased. If a miner detects that their block has been orphaned, but the block solution is still valid, the miner will take that solution and build a new block with it. When a block is rebased a `rebasing_block` message will be printed to the logs.
Last minute proof fetching
After finding a valid solution a miner goes through several steps as they build a block. One of those steps involves loading the selected chunk proofs from disk. Occasionally those proofs might be missing or corrupt. Prior to this release when that happened, the solution would be rejected and the miner would return to hashing. With this release the miner will reach out to several peers and request the missing proofs - if successful the miner can continue building and publishing the block.
`last_step_checkpoints` recovery
This release provides more robust logic for generating the `last_step_checkpoints` field in mined blocks. Prior to this release there were some scenarios where a miner would unnecessarily reject a solution due to missing `last_step_checkpoints`.
VDF Server Improvements
In addition to a number of VDF server/client bug fixes and performance improvements, this release includes two new VDF server configurations.
VDF Forwarding
You can now set up a node as a VDF forwarder. If a node specifies both the `vdf_server_trusted_peer` and `vdf_client_peer` flags it will receive its VDF from the specified VDF servers and provide its VDF to the specified VDF clients. The push/pull behavior remains unchanged - any of the server/client relationships can be configured to push VDF updates or pull them.
Public VDF
If a VDF server enables the `public_vdf_server` flag it will provide VDF to any peer that requests it, without needing to first whitelist that peer via the `vdf_client_peer` flag.
/recent endpoint
This release adds a new `/recent` endpoint which returns a list of recent forks the node has detected, as well as the last 18 blocks it has received and the timestamps at which it received them.
Webhooks
This release adds additional webhook support. When webhooks are configured a node will POST data to a provided URL (aka webhook) when certain events are triggered.
Node webhooks can only be configured via a JSON `config_file`. For example:
{
  "webhooks": [
    {
      "events": ["transaction", "block"],
      "url": "https://example.com/block_or_tx",
      "headers": {
        "Authorization": "Bearer 123"
      }
    },
    {
      "events": ["transaction_data"],
      "url": "http://127.0.0.1:1985/tx_data"
    }
  ]
}
The supported events are:
- `transaction`: POSTs the transaction header whenever this node accepts and validates a new transaction.
- `transaction_data`: POSTs
  - `{ "event": "transaction_data_synced", "txid": <TXID> }` once this node has received all the chunks belonging to the transaction TXID
  - `{ "event": "transaction_orphaned", "txid": <TXID> }` when this node detects that TXID has been orphaned
  - `{ "event": "transaction_data_removed", "txid": <TXID> }` when this node detects that at least one chunk has been removed from a previously synced transaction
- `block`: POSTs the block header whenever this node accepts and validates a new block.
In all cases the POST payload is JSON-encoded.
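As a sketch of what a receiving service might do with the `transaction_data` payloads described above (the handler and return messages here are illustrative, not part of Arweave):

```python
import json

# Sketch of a receiver for the transaction_data webhook payloads.
# The handler name and return strings are our own illustration;
# only the payload shapes come from the release notes.

def handle_transaction_data(body: str) -> str:
    payload = json.loads(body)
    event, txid = payload["event"], payload["txid"]
    if event == "transaction_data_synced":
        return f"all chunks of {txid} received"
    if event == "transaction_orphaned":
        return f"{txid} was orphaned"
    if event == "transaction_data_removed":
        return f"a chunk of {txid} was removed"
    raise ValueError(f"unknown event: {event}")

print(handle_transaction_data(
    '{"event": "transaction_data_synced", "txid": "abc123"}'))
```

In a real deployment this function would sit behind the HTTP endpoint named in the `url` field of the webhook configuration.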
Benchmarking and data utilities
- `./bin/benchmark-hash` prints benchmark data on H0 and H1/H2 hashing performance
- Fix for `./bin/data-doctor bench` - it should now be able to correctly report storage module read performance
- `data-doctor dump` dumps all block headers and transactions
Miscellaneous Bug Fixes and Additions
- Several coordinated mining and mining pool bug fixes
- `/metrics` was incorrect if the mining address included a `_`
- Fix bug in `start_from_block` and `start_from_latest_state`
- Add CORS header to `/metrics` so it can be queried from an in-browser app
- Blacklist handling optimizations
Pre-Release 2.7.4
This is a pre-release and has not gone through full release validation; please install with that in mind.
Note: In order to test the VDF client/server fixes please make sure to set your VDF server to `vdf-server-4.arweave.xyz`. We will keep `vdf-server-3.arweave.xyz` running an older version of the software (without the fixes) in case there are issues with this release.
Summary of changes in this release:
- Fixes for several VDF client/server communication issues.
- Fixes to some pool mining bugs
- Solution rebasing to lower orphan rate
- Last-minute proof fetching when proofs can't be found locally
- More support for webhooks
- Performance improvements for syncing and blacklist processing
Release 2.7.3
Arweave 2.7.3 Release Notes
2.7.3 is a minor release containing:
Re-packing in place
You can now repack a storage module from one packing address to another without needing any extra storage space. The repacking happens "in-place" replacing the original data with the repacked data.
See the `storage_module` section in the `arweave` help (`./bin/start help`) for more information.
Packing bug fixes and performance improvements
This release contains several packing performance improvements and bug fixes.
Coordinated Mining performance improvement
This release implements an improvement in how nodes process H1 batches that they receive from their Coordinated Mining peers. As a result the `cm_in_batch_timeout` flag is no longer needed and has been deprecated.
Release 2.7.2
This release introduces a hard fork that activates at height 1391330, approximately 2024-03-26 14:00 UTC.
Coordinated Mining
When coordinated mining is configured, multiple nodes can cooperate to find mining solutions for the same mining address without the risk of losing reserved rewards or having the mining address blacklisted. Without coordinated mining, if two nodes publish blocks at the same height and with the same mining address, they may lose their reserved rewards and have their mining address blacklisted (see the Mining Guide for more information). Coordinated mining allows multiple nodes which each store a disjoint subset of the weave to reap the hashrate benefits of more two-chunk solutions.
Basic System
In a coordinated mining cluster there are 2 roles:
- Exit Node
- Miners
All nodes in the cluster share the same mining address. Each Miner generates H1 hashes for the partitions they store. Occasionally they will need an H2 for a packed partition they don't store. In this case, they can find another Miner in the coordinated mining cluster who does store the required partition packed with the required address, send them the H1 and ask them to calculate the H2. When a valid solution is found (either one- or two-chunk) the solution is sent to the Exit Node. Since the Exit Node is the only node in the coordinated mining cluster which publishes blocks, there's no risk of slashing. This point can be further enforced by ensuring only the Exit Node stores the mining address private key (and therefore only the Exit Node can sign blocks for that mining address).
Every node in the coordinated mining cluster is free to peer with any other nodes on the network as normal.
Single-Miner One Chunk Flow
Note: The single-miner two-chunk flow (where Miner1 stores both the H1 and H2 partitions) is very similar.
Coordinated Two Chunk Flow
Configuration
- All nodes in the Coordinated Mining cluster must specify the `coordinated_mining` parameter.
- All nodes in the Coordinated Mining cluster must specify the same secret via the `cm_api_secret` parameter. A secret can be a string of any length.
- All miners in the Coordinated Mining cluster should identify all other miners in the cluster using the `cm_peer` multi-use parameter.
  - Note: an exit node can also optionally mine, in which case it is also considered a miner and should be identified by the `cm_peer` parameter.
- All miners (excluding the exit node) should identify the exit node via the `cm_exit_peer` parameter.
  - Note: the exit node should not include the `cm_exit_peer` parameter.
- All miners in the Coordinated Mining cluster can be configured as normal, but they should all specify the same `mining_addr`.
There is one additional parameter which can be used to tune performance:
- `cm_out_batch_timeout`: The frequency in milliseconds of sending other nodes in the coordinated mining setup a batch of H1 values to hash. A higher value reduces network traffic; a lower value reduces hashing latency. Default is 20.
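To illustrate the tradeoff this timeout controls, here is a generic batching sketch (our own illustration, not the node's actual implementation): H1 values are buffered and flushed once the timeout elapses, so a larger value produces fewer, larger network messages at the cost of delaying H2 hashing on the peer.

```python
import time

# Generic illustration of the batching tradeoff behind cm_out_batch_timeout.
# This is a sketch, not Arweave's implementation: buffered H1 values are
# flushed as one "network message" every timeout_ms milliseconds.

class H1Batcher:
    def __init__(self, timeout_ms: int = 20, now=time.monotonic):
        self.timeout_s = timeout_ms / 1000.0
        self.now = now            # injectable clock, eases testing
        self.buffer = []
        self.last_flush = now()
        self.sent_batches = []    # stands in for messages sent to a cm_peer

    def add(self, h1_value):
        self.buffer.append(h1_value)
        if self.now() - self.last_flush >= self.timeout_s:
            self.flush()

    def flush(self):
        if self.buffer:
            self.sent_batches.append(self.buffer)  # one network message
            self.buffer = []
        self.last_flush = self.now()
```

With a fake clock you can observe that doubling `timeout_ms` roughly halves the message count for the same stream of H1 values, at the cost of each value waiting longer before a peer can hash it.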
Native Support for Pooled Mining
The Arweave node now has built-in support for pooled mining.
New configuration parameters (see the arweave node help for descriptions):
- `is_pool_server`
- `is_pool_client`
- `pool_api_key`
- `pool_server_address`
Mining Performance Improvements
Implemented several optimizations and bug fixes to enable more miners to achieve their maximal hashrate - particularly at higher partition counts.
A summary of changes:
- Increase the degree of horizontal distribution used by the mining processes to remove performance bottlenecks at higher partition counts
- Optimize the erlang VM memory allocation, management, and garbage collection
- Fix several out of memory errors that could occur at higher partition counts
- Fix a bug which could cause valid chunks to be discarded before being hashed
Updated Mining Performance Report:
=========================================== Mining Performance Report ============================================
VDF Speed: 3.00 s
H1 Solutions: 0
H2 Solutions: 3
Confirmed Blocks: 0
Local mining stats:
+-----------+-----------+----------+-------------+-------------+---------------+------------+------------+--------------+
| Partition | Data Size | % of Max | Read (Cur) | Read (Avg) | Read (Ideal) | Hash (Cur) | Hash (Avg) | Hash (Ideal) |
+-----------+-----------+----------+-------------+-------------+---------------+------------+------------+--------------+
| Total | 2.0 TiB | 5 % | 1.3 MiB/s | 1.3 MiB/s | 21.2 MiB/s | 5 h/s | 5 h/s | 84 h/s |
| 1 | 1.2 TiB | 34 % | 0.8 MiB/s | 0.8 MiB/s | 12.4 MiB/s | 3 h/s | 3 h/s | 49 h/s |
| 2 | 0.8 TiB | 25 % | 0.5 MiB/s | 0.5 MiB/s | 8.8 MiB/s | 2 h/s | 2 h/s | 35 h/s |
| 3 | 0.0 TiB | 0 % | 0.0 MiB/s | 0.0 MiB/s | 0.0 MiB/s | 0 h/s | 0 h/s | 0 h/s |
+-----------+-----------+----------+-------------+-------------+---------------+------------+------------+--------------+
(All values are reset when a node launches)
- H1 Solutions / H2 Solutions display the number of each solution type discovered
- Confirmed Blocks displays the number of blocks that were mined by this node and accepted by the network
- Cur values refer to the most recent value (e.g. the average over the last ~10 seconds)
- Avg values refer to the all-time running average
- Ideal refers to the optimal rate given the VDF speed and amount of data currently packed
- % of Max refers to how much of the given partition - or whole weave - is packed
Protocol Changes
The 2.7.2 Hard Fork is scheduled for block 1391330 (or roughly 2024-03-26 14:00 UTC), at which time the following protocol changes will activate:
- The difficulty of a 1-chunk solution increases by 100x to better incentivize full-weave replicas
- An additional pricing transition phase is scheduled to start in November 2024
- A pricing cap of 340 Winston per GiB/minute is implemented until the November pricing transition
- The checkpoint depth is reduced from 50 blocks to 18
- Unnecessary poa2 chunks are rejected early to prevent a low impact spam attack. Even in the worst case this attack would add minimal bloat to the blockchain and thus wasn't a practical exploit. Closing the vector as a matter of good hygiene.
Additional Bug Fixes and Improvements
- Enable RandomX support for OSX and arm/aarch64
- Simplified TLS protocol support
  - See the new configuration parameters `tls_cert_file` and `tls_key_file` to configure TLS
- Add several more prometheus metrics:
- debug-only metrics to track memory performance and processor utilization
- mining performance metrics
- coordinated mining metrics
- metrics to track network characteristics (e.g. partitions covered in blocks, current/scheduled price, chunks per block)
- Introduce a `bin/data-doctor` utility
  - `data-doctor merge` can merge multiple storage modules into 1
  - `data-doctor bench` runs a series of read rate benchmarks
- Introduce a new `bin/benchmark-packing` utility to benchmark a node's packing performance
  - The utility will generate input files if necessary and will process as close to 1 GiB of data as possible while still allowing each core to process the same number of whole chunks.
  - Results are written to a CSV and printed to console
Release 2.7.1
This release introduces a hard fork that activates at height 1316410, approximately 2023-12-05 14:00 UTC.
Note if you are running your own VDF Servers, update the server nodes first, then the client nodes.
Bug fixes
Address Occasional Block Validation Failures on VDF Clients
This release fixes an error that would occasionally cause VDF Clients to fail to validate valid blocks. This could occur following a VDF Difficulty Retarget if the VDF client had cached a stale VDF session with steps computed at the prior difficulty. With this change VDF sessions are refreshed whenever the difficulty retargets.
Stabilize VDF Difficulty Oscillation
This release fixes an error that caused unnecessary oscillation when retargeting VDF difficulty. With this patch the VDF difficulty will adjust smoothly towards a difficulty that will yield a network average VDF speed of 1 second.
Ensure VDF Clients Process Updates from All Configured VDF Servers
This release makes an update to the VDF Client code so that it processes all updates from all configured VDF Servers. Prior to this change a VDF Client would only switch VDF Servers when the active server became non-responsive - this could cause a VDF Client to get "stuck" on one VDF Server even if an alternate server provided better data.
Delay the pricing transition
This release introduces a patch that extends the transition period before the activation of Arweave 2.6's trustless price oracle, in order to give miners additional time to on-board packed data to the network. The release delays the onset of the transition window to roughly February 20, 2024.
The release comes with the prebuilt binaries for the Linux x86_64 platforms.
If you want to run the miner from the existing Git folder, execute the following command:
`git fetch --all --tags && git checkout -f N.2.7.1`
See the Mining Guide for further instructions.
If you have any issues upgrading or would like to know more about the release, feel free to reach out to us in the Arweave Miners Discord (https://discord.gg/GHB4fxVv8B) or email us at [email protected].
Release 2.7.0
This release introduces a hard fork that activates at height 1275480, approximately 2023-10-05 07:00 UTC.
New features
Flexible Merkle Tree Combinations
When combining different data transactions, the merkle trees for each data root can be added to the larger merkle tree without being rebuilt or modified. This makes it easier, quicker, and less CPU-intensive to combine together multiple data transactions.
Documentation on Merkle Tree Rebasing: https://github.com/ArweaveTeam/examples/blob/main/rebased_merkle_tree/README.md
Example Code: https://github.com/ArweaveTeam/examples/blob/main/rebased_merkle_tree/rebased_merkle_tree.js
VDF Retargeting
The average VDF speed across the network is now tracked and used to increase or decrease the VDF difficulty so as to maintain a roughly 1-second VDF time across the network.
Bug fixes and other updates
Delay the pricing transition
This release introduces a patch that extends the transition period before the activation of Arweave 2.6's trustless price oracle, in order to give miners additional time to on-board packed data to the network. The release delays the onset of the transition window to roughly Dec. 14, 2023.
Memory optimization when mining
This change allows the mining server to periodically reclaim memory. Previously, when a miner was configured with a suitably high `mining_server_chunk_cache_size_limit` (e.g. 5,000-7,000 per installed GB of RAM), memory usage would creep up, sometimes causing an out-of-memory error. With this change that memory can be periodically reclaimed, delaying or eliminating the OOM error. Further performance and memory improvements are planned for the next release.
Start from local state
Introduce the `start_from_latest_state` and `start_from_block` configuration options, allowing a miner to be launched from their local state rather than downloading the initialization data from peers. Most useful when bootstrapping a testnet.
Ensure genesis transaction data is served via the /tx endpoint
Fix for issue #455
The release comes with the prebuilt binaries for the Linux x86_64 platforms.
If you want to run the miner from the existing Git folder, execute the following command:
`git fetch --all --tags && git checkout -f N.2.7.0`
See the Mining Guide for further instructions.
If you have any issues upgrading or would like to know more about the release, feel free to reach out to us in the Arweave Miners Discord (https://discord.gg/GHB4fxVv8B) or email us at [email protected].
Release 2.6.10
The release introduces a few improvements, bug fixes, and one new endpoint.
- Fix two memory issues that occasionally cause out-of-memory exceptions:
- When running a VDF server with a slow VDF client, the memory footprint of the VDF server would gradually increase until all memory was consumed;
- When syncing weave data the memory use of a node would spike when copying data locally between neighboring partitions, occasionally triggering an out-of-memory exception
- Implement the `GET /total_supply` endpoint to return the sum of all the existing accounts in the latest state, in Winston;
- Several performance improvements to the weave sync process;
- Remove the following metrics from the `/metrics` endpoint (together accounting for several thousand individual metrics): `erlang_vm_msacc_XXX`, `erlang_vm_allocators`, `erlang_vm_dist_XXX`
The release comes with the prebuilt binaries for the Linux x86_64 platforms.
If you want to run the miner from the existing Git folder, execute the following command:
`git fetch --all --tags && git checkout -f N.2.6.10`
See the Mining Guide for further instructions.
If you have any issues upgrading or would like to know more about the release, feel free to reach out to us in the Arweave Miners Discord (https://discord.gg/GHB4fxVv8B) or email us at [email protected].