Fewer dockers #112

Merged: 24 commits, Apr 2, 2024
bb647d1
Modify the message when the selection of transactions is interrupted …
ahamlat Mar 26, 2024
56e1844
Block on skipped matrix (#6818)
jflo Mar 26, 2024
e954537
build - Refactor Besu custom error prone dependency (#6692)
usmansaleem Mar 26, 2024
7df1732
Expose `v` field in JSON-RPC in some transaction types (#6819)
shemnon Mar 26, 2024
63a53aa
Add holesky DNS server (#6824)
gfukushima Mar 27, 2024
7e46889
Support running acceptance tests on Windows (#6820)
fab-10 Mar 27, 2024
f2c2512
storage format refactor for preparing verkle trie integration (#6721)
matkt Mar 27, 2024
5bc81ae
Ensure empty withdrawal lists are set in BFT blocks when the protocol…
matthew1001 Mar 27, 2024
ceafa2a
Fix two flacky acceptance tests (#6837)
fab-10 Mar 28, 2024
3a2eb4e
Fix to avoid broadcasting full blob txs (#6835)
fab-10 Mar 28, 2024
464cd26
logging fix for historical queries (#6830)
non-fungible-nelson Mar 28, 2024
a2ef6c4
refactor to check for null peer (#6841)
macfarla Mar 29, 2024
6c1991a
update broken link, issue template (#6829)
non-fungible-nelson Mar 29, 2024
ad49e21
Prevent startup with privacy and bonsai enabled (#6809)
macfarla Mar 29, 2024
1679525
Dedicated log marker for invalid txs removed from the txpool (#6826)
fab-10 Mar 29, 2024
d8e1e17
Remove deprecated Forest pruning (#6810)
fab-10 Mar 29, 2024
eab55f7
fix account copy issue (#6845)
matkt Mar 29, 2024
9b3a219
increase timeout (#6852)
macfarla Mar 29, 2024
d926052
disable flaky test - LegacyFeeMarketBlockTransactionSelectorTest (#6851)
macfarla Mar 29, 2024
deaea9b
Snap client fixes (#6847)
garyschulte Mar 30, 2024
34fc5ee
Snap server rebase (#6640)
garyschulte Mar 30, 2024
5ea9cd6
workaround for broken publishing of buildinfo (#6856)
jflo Apr 1, 2024
2658059
reduce number of jvms provided
jflo Apr 1, 2024
fa875b7
reduce number of jvms provided
jflo Apr 2, 2024
2 changes: 1 addition & 1 deletion .github/issue_template.md
@@ -3,7 +3,7 @@
<!-- comply with it, including treating everyone with respect: -->
<!-- https://github.com/hyperledger/besu/blob/main/CODE_OF_CONDUCT.md -->
<!-- * Reproduced the issue in the latest version of the software -->
<!-- * Read the debugging docs: https://besu.hyperledger.org/en/stable/HowTo/Monitor/Logging/ -->
<!-- * Read the debugging docs: https://besu.hyperledger.org/private-networks/how-to -->
<!-- * Duplicate Issue check: https://github.com/search?q=+is%3Aissue+repo%3Ahyperledger/Besu -->
<!-- Note: Not all sections will apply to all issue types. -->

13 changes: 11 additions & 2 deletions .github/workflows/acceptance-tests.yml
@@ -77,6 +77,15 @@ jobs:
permissions:
checks: write
statuses: write
if: always()
steps:
- name: consolidation
run: echo "consolidating statuses"
# Fail if any `needs` job was not a success.
# Along with `if: always()`, this allows this job to act as a single required status check for the entire workflow.
- name: Fail on workflow error
run: exit 1
if: >-
${{
contains(needs.*.result, 'failure')
|| contains(needs.*.result, 'cancelled')
|| contains(needs.*.result, 'skipped')
}}
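The consolidation job added above runs with `if: always()` so it executes even when upstream jobs fail, then uses the `contains(needs.*.result, …)` expression to decide whether to exit non-zero, letting one job serve as the single required status check for the whole workflow. The decision rule can be sketched as a small standalone Java helper (the `StatusConsolidator` class and `shouldFail` name are illustrative, not part of the PR):

```java
import java.util.List;

public class StatusConsolidator {
    // Mirrors the workflow expression:
    // contains(needs.*.result, 'failure')
    //   || contains(needs.*.result, 'cancelled')
    //   || contains(needs.*.result, 'skipped')
    static boolean shouldFail(List<String> needsResults) {
        return needsResults.stream()
                .anyMatch(r -> r.equals("failure")
                        || r.equals("cancelled")
                        || r.equals("skipped"));
    }

    public static void main(String[] args) {
        System.out.println(shouldFail(List.of("success", "success"))); // false
        System.out.println(shouldFail(List.of("success", "skipped"))); // true
    }
}
```

Note that treating `skipped` as a failure is deliberate here: without it, skipping a matrix job would let the required check pass vacuously.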
17 changes: 11 additions & 6 deletions .github/workflows/artifacts.yml
@@ -46,11 +46,7 @@ jobs:
path: 'build/distributions/besu*.zip'
name: besu-${{ github.ref_name }}.zip
compression-level: 0
- name: Artifactory Publish
env:
ARTIFACTORY_USER: ${{ secrets.BESU_ARTIFACTORY_USER }}
ARTIFACTORY_KEY: ${{ secrets.BESU_ARTIFACTORY_TOKEN }}
run: ./gradlew -Prelease.releaseVersion=${{ github.ref_name }} -Pversion=${{github.ref_name}} artifactoryPublish

testWindows:
runs-on: windows-2022
needs: artifacts
@@ -94,4 +90,13 @@ jobs:
build/distributions/besu*.zip
body: |
${{steps.hashes.outputs.tarSha}}
${{steps.hashes.outputs.zipSha}}
${{steps.hashes.outputs.zipSha}}
arifactoryPublish:
runs-on: ubuntu-22.04
needs: artifacts
steps:
- name: Artifactory Publish
env:
ARTIFACTORY_USER: ${{ secrets.BESU_ARTIFACTORY_USER }}
ARTIFACTORY_KEY: ${{ secrets.BESU_ARTIFACTORY_TOKEN }}
run: ./gradlew -Prelease.releaseVersion=${{ github.ref_name }} -Pversion=${{github.ref_name}} artifactoryPublish
8 changes: 0 additions & 8 deletions .github/workflows/docker.yml
@@ -21,16 +21,8 @@ jobs:
uses: gradle/actions/setup-gradle@9e899d11ad247ec76be7a60bc1cf9d3abbb9e7f1
with:
cache-disabled: true
- name: hadoLint_openj9-jdk_17
run: docker run --rm -i hadolint/hadolint < docker/openj9-jdk-17/Dockerfile
- name: hadoLint_openjdk_17
run: docker run --rm -i hadolint/hadolint < docker/openjdk-17/Dockerfile
- name: hadoLint_openjdk_17_debug
run: docker run --rm -i hadolint/hadolint < docker/openjdk-17-debug/Dockerfile
- name: hadoLint_openjdk_latest
run: docker run --rm -i hadolint/hadolint < docker/openjdk-latest/Dockerfile
- name: hadoLint_graalvm
run: docker run --rm -i hadolint/hadolint < docker/graalvm/Dockerfile
buildDocker:
needs: hadolint
permissions:
13 changes: 11 additions & 2 deletions .github/workflows/pre-review.yml
@@ -135,6 +135,15 @@ jobs:
permissions:
checks: write
statuses: write
if: always()
steps:
- name: consolidation
run: echo "consolidating statuses"
# Fail if any `needs` job was not a success.
# Along with `if: always()`, this allows this job to act as a single required status check for the entire workflow.
- name: Fail on workflow error
run: exit 1
if: >-
${{
contains(needs.*.result, 'failure')
|| contains(needs.*.result, 'cancelled')
|| contains(needs.*.result, 'skipped')
}}
14 changes: 11 additions & 3 deletions .github/workflows/reference-tests.yml
@@ -69,7 +69,15 @@ jobs:
permissions:
checks: write
statuses: write
if: always()
steps:
- name: consolidation
run: echo "consolidating statuses"

# Fail if any `needs` job was not a success.
# Along with `if: always()`, this allows this job to act as a single required status check for the entire workflow.
- name: Fail on workflow error
run: exit 1
if: >-
${{
contains(needs.*.result, 'failure')
|| contains(needs.*.result, 'cancelled')
|| contains(needs.*.result, 'skipped')
}}
9 changes: 9 additions & 0 deletions CHANGELOG.md
@@ -5,6 +5,7 @@
### Breaking Changes
- RocksDB database metadata format has changed to be more expressive; migration of an existing metadata file to the new format is automatic at startup. Before downgrading to a previous version it is mandatory to revert to the original format using the subcommand `besu --data-path=/path/to/besu/datadir storage revert-metadata v2-to-v1`.
- BFT networks won't start with SNAP or CHECKPOINT sync (previously Besu would start with this config but quietly fail to sync, so it's now more obvious that it won't work) [#6625](https://github.com/hyperledger/besu/pull/6625), [#6667](https://github.com/hyperledger/besu/pull/6667)
- Forest pruning has been removed; it had been deprecated since 24.1.0. If you are still using it, remove the options `pruning-enabled`, `pruning-blocks-retained` and `pruning-block-confirmations` from your configuration, and consider switching to Bonsai.

### Upcoming Breaking Changes
- Receipt compaction will be enabled by default in a future version of Besu. After this change it will not be possible to downgrade to the previous Besu version.
@@ -27,11 +28,19 @@
- Extend error handling of plugin RPC methods [#6759](https://github.com/hyperledger/besu/pull/6759)
- Added engine_newPayloadV4 and engine_getPayloadV4 methods [#6783](https://github.com/hyperledger/besu/pull/6783)
- Reduce storage size of receipts [#6602](https://github.com/hyperledger/besu/pull/6602)
- Dedicated log marker for invalid txs removed from the txpool [#6826](https://github.com/hyperledger/besu/pull/6826)
- Prevent startup with BONSAI and privacy enabled [#6809](https://github.com/hyperledger/besu/pull/6809)
- Remove deprecated Forest pruning [#6810](https://github.com/hyperledger/besu/pull/6810)
- Experimental Snap Sync Server [#6640](https://github.com/hyperledger/besu/pull/6640)

### Bug fixes
- Fix txpool dump/restore race condition [#6665](https://github.com/hyperledger/besu/pull/6665)
- Make block transaction selection max time aware of PoA transitions [#6676](https://github.com/hyperledger/besu/pull/6676)
- Don't enable the BFT mining coordinator when running sub commands such as `blocks export` [#6675](https://github.com/hyperledger/besu/pull/6675)
- In JSON-RPC return optional `v` fields for type 1 and type 2 transactions [#6762](https://github.com/hyperledger/besu/pull/6762)
- Fix Shanghai/QBFT block import bug when syncing new nodes [#6765](https://github.com/hyperledger/besu/pull/6765)
- Fix to avoid broadcasting full blob txs, instead of only the tx announcement, to a subset of nodes [#6835](https://github.com/hyperledger/besu/pull/6835)
- Snap client fixes discovered during snap server testing [#6847](https://github.com/hyperledger/besu/pull/6847)

### Download Links

@@ -56,15 +56,12 @@
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.junit.After;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.extension.ExtendWith;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
* Superclass for acceptance tests. For now (transition to junit5 is ongoing) this class supports
* junit4 format.
*/
/** Superclass for acceptance tests. */
@ExtendWith(AcceptanceTestBaseTestWatcher.class)
public class AcceptanceTestBase {

@@ -131,7 +128,7 @@ protected AcceptanceTestBase() {
exitedSuccessfully = new ExitedWithCode(0);
}

@After
@AfterEach
public void tearDownAcceptanceTestBase() {
reportMemory();
cluster.close();
@@ -62,6 +62,7 @@
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Locale;
import java.util.Map;
import java.util.Optional;
import java.util.Properties;
@@ -431,7 +432,9 @@ public NodeRequests nodeRequests() {
getGenesisConfig()
.map(
gc ->
gc.toLowerCase().contains("ibft") ? ConsensusType.IBFT2 : ConsensusType.QBFT)
gc.toLowerCase(Locale.ROOT).contains("ibft")
? ConsensusType.IBFT2
: ConsensusType.QBFT)
.orElse(ConsensusType.IBFT2);

nodeRequests =
@@ -786,6 +789,21 @@ public void stop() {
nodeRequests.shutdown();
nodeRequests = null;
}

deleteRuntimeFiles();
}

private void deleteRuntimeFiles() {
try {
Files.deleteIfExists(homeDirectory.resolve("besu.networks"));
} catch (IOException e) {
LOG.error("Failed to clean up besu.networks file in {}", homeDirectory, e);
}
try {
Files.deleteIfExists(homeDirectory.resolve("besu.ports"));
} catch (IOException e) {
LOG.error("Failed to clean up besu.ports file in {}", homeDirectory, e);
}
}

@Override
@@ -52,6 +52,7 @@
import java.util.concurrent.TimeUnit;
import java.util.stream.Collectors;

import org.apache.commons.lang3.SystemUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;
@@ -77,8 +78,15 @@ public void startNode(final BesuNode node) {

final Path dataDir = node.homeDirectory();

final var workingDir =
new File(System.getProperty("user.dir")).getParentFile().getParentFile().toPath();

final List<String> params = new ArrayList<>();
params.add("build/install/besu/bin/besu");
if (SystemUtils.IS_OS_WINDOWS) {
params.add(workingDir.resolve("build\\install\\besu\\bin\\besu.bat").toString());
} else {
params.add("build/install/besu/bin/besu");
}

params.add("--data-path");
params.add(dataDir.toAbsolutePath().toString());
@@ -422,15 +430,13 @@ public void startNode(final BesuNode node) {
LOG.info("Creating besu process with params {}", params);
final ProcessBuilder processBuilder =
new ProcessBuilder(params)
.directory(new File(System.getProperty("user.dir")).getParentFile().getParentFile())
.directory(workingDir.toFile())
.redirectErrorStream(true)
.redirectInput(Redirect.INHERIT);
if (!node.getPlugins().isEmpty()) {
processBuilder
.environment()
.put(
"BESU_OPTS",
"-Dbesu.plugins.dir=" + dataDir.resolve("plugins").toAbsolutePath().toString());
.put("BESU_OPTS", "-Dbesu.plugins.dir=" + dataDir.resolve("plugins").toAbsolutePath());
}
// Use non-blocking randomness for acceptance tests
processBuilder
Expand Down Expand Up @@ -572,7 +578,7 @@ private void killBesuProcess(final String name) {

LOG.info("Killing {} process, pid {}", name, process.pid());

process.destroy();
process.descendants().forEach(ProcessHandle::destroy);
try {
process.waitFor(30, TimeUnit.SECONDS);
} catch (final InterruptedException e) {
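The one-line change above, `process.descendants().forEach(ProcessHandle::destroy)`, tears down the whole process tree rather than only the launcher, which matters now that Windows runs start `besu.bat`, a script that spawns the JVM as a child process. A minimal standalone sketch of the same idea (the `sh -c "sleep 60"` child and the 10-second timeout are illustrative assumptions, not Besu code):

```java
import java.util.concurrent.TimeUnit;

public class KillProcessTree {
    // Destroy a process and all of its descendants, then wait for the root to exit.
    // Returns true if the root process exited within the timeout.
    static boolean killTree(Process process) throws InterruptedException {
        process.descendants().forEach(ProcessHandle::destroy); // terminate children first
        process.destroy();                                     // then the launcher itself
        return process.waitFor(10, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws Exception {
        // A shell acting as a launcher for a long-running child, mimicking besu.bat -> JVM.
        Process p = new ProcessBuilder("sh", "-c", "sleep 60").start();
        System.out.println(killTree(p) && !p.isAlive());
    }
}
```

Without the `descendants()` sweep, destroying only the launcher can orphan the real worker process, which then keeps ports and data directories locked between tests.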
@@ -20,6 +20,7 @@
import java.math.BigInteger;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

import org.web3j.crypto.Credentials;
@@ -83,7 +84,7 @@ && parameterTypesAreEqual(i.getParameterTypes(), parameterObjects))

@SuppressWarnings("rawtypes")
private boolean parameterTypesAreEqual(
final Class<?>[] expectedTypes, final ArrayList<Object> actualObjects) {
final Class<?>[] expectedTypes, final List<Object> actualObjects) {
if (expectedTypes.length != actualObjects.size()) {
return false;
}
@@ -25,7 +25,6 @@
import java.util.function.UnaryOperator;

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Disabled;
import org.junit.jupiter.api.Test;

public class EthSendRawTransactionAcceptanceTest extends AcceptanceTestBase {
@@ -45,22 +44,27 @@ public void setUp() throws Exception {
strictNode = besu.createArchiveNode("strictNode", configureNode((true)));
miningNode = besu.createMinerNode("strictMiningNode", configureNode((true)));
cluster.start(lenientNode, strictNode, miningNode);
// verify all nodes are done syncing so the tx pool will be enabled
lenientNode.verify(eth.syncingStatus(false));
strictNode.verify(eth.syncingStatus(false));
miningNode.verify(eth.syncingStatus(false));

// verify nodes are fully connected otherwise tx could not be propagated
lenientNode.verify(net.awaitPeerCount(2));
strictNode.verify(net.awaitPeerCount(2));
miningNode.verify(net.awaitPeerCount(2));

// verify that the miner started producing blocks and all other nodes are syncing from it
waitForBlockHeight(miningNode, 1);
final var minerChainHead = miningNode.execute(ethTransactions.block());
lenientNode.verify(blockchain.minimumHeight(minerChainHead.getNumber().longValue()));
strictNode.verify(blockchain.minimumHeight(minerChainHead.getNumber().longValue()));
}

@Test
@Disabled("flaky with timeout")
public void shouldSendSuccessfullyToLenientNodeWithoutChainId() {
final TransferTransaction tx = createTransactionWithoutChainId();
final String rawTx = tx.signedTransactionData();
final String txHash = tx.transactionHash();

lenientNode.verify(eth.expectSuccessfulEthRawTransaction(rawTx));

// this line is where the test is flaky
// Tx should be included on-chain
miningNode.verify(eth.expectSuccessfulTransactionReceipt(txHash));
}
@@ -41,12 +41,19 @@ public void setUp() throws Exception {
minerNode = besu.createMinerNode("miner-node1");
archiveNode = besu.createArchiveNode("full-node1");
cluster.start(minerNode, archiveNode);

// verify nodes are fully connected otherwise tx could not be propagated
minerNode.verify(net.awaitPeerCount(1));
archiveNode.verify(net.awaitPeerCount(1));

accountOne = accounts.createAccount("account-one");
minerWebSocket = new WebSocket(vertx, minerNode.getConfiguration());
archiveWebSocket = new WebSocket(vertx, archiveNode.getConfiguration());
// verify all nodes are done syncing so the tx pool will be enabled
archiveNode.verify(eth.syncingStatus(false));
minerNode.verify(eth.syncingStatus(false));

// verify that the miner started producing blocks and all other nodes are syncing from it
waitForBlockHeight(minerNode, 1);
final var minerChainHead = minerNode.execute(ethTransactions.block());
archiveNode.verify(blockchain.minimumHeight(minerChainHead.getNumber().longValue()));
}

@AfterEach
7 changes: 4 additions & 3 deletions besu/src/main/java/org/hyperledger/besu/RunnerBuilder.java
@@ -133,6 +133,7 @@
import java.util.Collection;
import java.util.Collections;
import java.util.List;
import java.util.Locale;
import java.util.Map;
import java.util.Optional;
import java.util.Set;
@@ -800,7 +801,7 @@ public Runner build() {
metricsSystem,
supportedCapabilities,
jsonRpcConfiguration.getRpcApis().stream()
.filter(apiGroup -> !apiGroup.toLowerCase().startsWith("engine"))
.filter(apiGroup -> !apiGroup.toLowerCase(Locale.ROOT).startsWith("engine"))
.collect(Collectors.toList()),
filterManager,
accountLocalConfigPermissioningController,
@@ -938,7 +939,7 @@ public Runner build() {
metricsSystem,
supportedCapabilities,
webSocketConfiguration.getRpcApis().stream()
.filter(apiGroup -> !apiGroup.toLowerCase().startsWith("engine"))
.filter(apiGroup -> !apiGroup.toLowerCase(Locale.ROOT).startsWith("engine"))
.collect(Collectors.toList()),
filterManager,
accountLocalConfigPermissioningController,
@@ -1021,7 +1022,7 @@ public Runner build() {
metricsSystem,
supportedCapabilities,
jsonRpcIpcConfiguration.getEnabledApis().stream()
.filter(apiGroup -> !apiGroup.toLowerCase().startsWith("engine"))
.filter(apiGroup -> !apiGroup.toLowerCase(Locale.ROOT).startsWith("engine"))
.collect(Collectors.toList()),
filterManager,
accountLocalConfigPermissioningController,
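The repeated `toLowerCase()` to `toLowerCase(Locale.ROOT)` changes in RunnerBuilder guard against locale-sensitive case mapping: with the no-argument overload the result depends on the JVM's default locale, and the classic failure is Turkish, where uppercase `I` lowercases to dotless `ı`, so a prefix check like `startsWith("engine")` silently fails on a tr-TR machine. A small standalone demonstration (not Besu code; the class name is illustrative):

```java
import java.util.Locale;

public class LocaleLowercase {
    public static void main(String[] args) {
        String apiGroup = "ENGINE_newPayload";

        // Turkish case mapping: 'I' -> 'ı' (dotless i), so the prefix check fails.
        String turkish = apiGroup.toLowerCase(Locale.forLanguageTag("tr-TR"));
        System.out.println(turkish.startsWith("engine")); // false

        // Locale.ROOT gives locale-independent results on every machine.
        String root = apiGroup.toLowerCase(Locale.ROOT);
        System.out.println(root.startsWith("engine")); // true
    }
}
```

The same reasoning applies to the `gc.toLowerCase(Locale.ROOT).contains("ibft")` change in the acceptance-test node code: any case conversion used for protocol or API-name matching, rather than for display, should pin the locale.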