ADD: (Logs) Remove suppression in docker #101

Open

wants to merge 54 commits into base: master

Conversation

TechDufus

This PR allows the logs from qortal.jar to reach stdout when the node is run inside a Docker container.

Simply passing the string `docker` as the first argument to the `start.sh` script is enough to do this, and the Dockerfile is configured to pass it by default.

TechDufus and others added 30 commits October 12, 2022 20:25
Adding docker arg to unsuppress logs.

Fixing entry point.

fixing paths

fixing paths
…} and /crosschain/htlc/refundAll

This could probably be refactored into multiple classes to make the code cleaner, but it is functional for now.
This will ease the transition to a Q-Chat protocol, where chat messages will no longer be regular transactions.
…ounts.

This allows some time for initial online account lists to be retrieved, and reduces the chances of the same nonce being computed twice.
…eue, but don't allow two entries that match exactly.
… duplicates before unnecessarily validating them again.

This closes a gap where accounts would be moved from onlineAccountsImportQueue to onlineAccountsToAdd, but not yet imported. During this time, there was nothing to stop them from being added to the import queue again, causing duplicate validations.
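
As a loose illustration of that de-duplication check: only the onlineAccountsImportQueue and onlineAccountsToAdd names come from the commits above; the value class, fields and collection types below are assumptions, not the actual Qortal code.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Objects;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.CopyOnWriteArrayList;

class OnlineAccountsDedupSketch {

    static final class OnlineAccountData {
        final long timestamp;
        final byte[] publicKey;
        final byte[] nonce;

        OnlineAccountData(long timestamp, byte[] publicKey, byte[] nonce) {
            this.timestamp = timestamp;
            this.publicKey = publicKey;
            this.nonce = nonce;
        }

        // "Match exactly" means all fields are equal, so several entries for the same
        // public key (e.g. different timestamps) are still allowed in the queue.
        @Override
        public boolean equals(Object other) {
            if (!(other instanceof OnlineAccountData))
                return false;
            OnlineAccountData o = (OnlineAccountData) other;
            return timestamp == o.timestamp
                    && Arrays.equals(publicKey, o.publicKey)
                    && Arrays.equals(nonce, o.nonce);
        }

        @Override
        public int hashCode() {
            return Objects.hash(timestamp, Arrays.hashCode(publicKey), Arrays.hashCode(nonce));
        }
    }

    private final ConcurrentLinkedQueue<OnlineAccountData> onlineAccountsImportQueue = new ConcurrentLinkedQueue<>();
    private final List<OnlineAccountData> onlineAccountsToAdd = new CopyOnWriteArrayList<>();

    // Skip entries already waiting in the import queue, and also entries already validated
    // and waiting in onlineAccountsToAdd, so nothing is validated twice.
    void enqueueIfNew(OnlineAccountData incoming) {
        if (onlineAccountsImportQueue.contains(incoming) || onlineAccountsToAdd.contains(incoming))
            return;

        onlineAccountsImportQueue.add(incoming);
    }
}
```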
It will now request online accounts every 1 minute instead of every 5 seconds, except for the first 5 minutes following a new online accounts timestamp, during which it will request every 5 seconds (referred to as the "burst" interval). It will also use the burst interval for the first 5 minutes after the node starts.

This is based on the idea that most online accounts arrive soon after a new timestamp begins, and so there is no need to request accounts so frequently after that. This should reduce data usage by a significant amount.

Once mempow is fully rolled out, the "burst" feature can be reduced or removed, since online accounts will be sent ahead of time, generally 15-30 mins prior to the new online accounts timestamp becoming active.
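
For illustration, the interval selection described above might look roughly like this; the constant and method names are assumptions, not the actual implementation.

```java
class OnlineAccountsRequestIntervalSketch {
    private static final long BURST_INTERVAL_MS = 5 * 1000L;      // every 5 seconds...
    private static final long BURST_PERIOD_MS = 5 * 60 * 1000L;   // ...for the first 5 minutes
    private static final long NORMAL_INTERVAL_MS = 60 * 1000L;    // otherwise every 1 minute

    private final long nodeStartTime = System.currentTimeMillis();

    long nextRequestInterval(long now, long currentOnlineAccountsTimestamp) {
        // Burst while a new online accounts timestamp is still fresh, since most accounts
        // arrive shortly after it begins, and also just after node startup.
        boolean recentTimestamp = now - currentOnlineAccountsTimestamp < BURST_PERIOD_MS;
        boolean recentStartup = now - nodeStartTime < BURST_PERIOD_MS;

        return (recentTimestamp || recentStartup) ? BURST_INTERVAL_MS : NORMAL_INTERVAL_MS;
    }
}
```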
…to monitor the queue processing when in DEBUG.
…ynchronization.

Touches quite a few files because:

* Deprecate HEIGHT_V2 because it doesn't contain enough info to be fully useful during sync.
Newer peers will re-use BLOCK_SUMMARIES_V2.

* For newer peers, instead of sending / broadcasting HEIGHT_V2, send the top N block summaries, to avoid requests for minor reorgs.

* When responding to GET_BLOCK, if we don't actually have the requested block, we currently send an empty BLOCK_SUMMARIES message instead of not responding at all, since no response would cause a slow timeout in Synchronizer.

This pattern has spread to other network message response code,
so now we introduce a generic 'unknown' message type for all these cases.

* Remove PeerChainTipData class entirely and re-use BlockSummaryData instead.

* Each Peer instance used to hold PeerChainTipData (essentially a single latest block summary) but now holds a List of latest block summaries.

* PeerChainTipData getter/setter methods modified for compatibility at this point in time.

* Repository methods that return BlockSummaryData (or lists of them) now try to fully populate them, including the newly added block reference field.

* Re-worked Peer.canUseCommonBlockData() to be more readable.

* Cherry-picked a patch to Message.fromByteBuffer() to pass an empty, read-only ByteBuffer to subclass fromByteBuffer() methods, instead of null (see the sketch below).
This allows natural use of BufferUnderflowException if a subclass tries to use read(), hasRemaining(), etc. on an empty data-payload message.
Previously this could have caused an NPE.
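
A minimal sketch of that empty, read-only ByteBuffer approach, assuming a simplified parsing helper rather than the actual Qortal Message signatures:

```java
import java.nio.BufferUnderflowException;
import java.nio.ByteBuffer;

class EmptyPayloadSketch {

    // Instead of handing subclass parsers a null payload (risking an NPE), hand them an
    // empty, read-only buffer so any attempted read naturally throws BufferUnderflowException.
    static ByteBuffer payloadOrEmpty(byte[] payload) {
        if (payload == null || payload.length == 0)
            return ByteBuffer.allocate(0).asReadOnlyBuffer();

        return ByteBuffer.wrap(payload).asReadOnlyBuffer();
    }

    public static void main(String[] args) {
        ByteBuffer buffer = payloadOrEmpty(null);
        try {
            buffer.getInt(); // a subclass-style read from a data-less message
        } catch (BufferUnderflowException e) {
            System.out.println("Empty payload signalled via BufferUnderflowException, not an NPE");
        }
    }
}
```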
…ARIES, but updated peers send BLOCK_SUMMARIES_V2
…up, due to commit 99858f3.

This also shows that commit 99858f3 now prevents a block candidate with a very small number of online accounts from being built immediately after startup.
…it would request any.

This caused large gaps with no presence data. Entries are removed when they expire, causing the local count to drop to zero, and the node would only start requesting them again once a peer had proactively pushed one or more entries.
archived-2 and others added 24 commits October 12, 2022 20:25
…mport queue, but don't allow two entries that match exactly."

This reverts commit 6d9e6e8.
…essing.

The main difference here is that we now remove items from the onlineAccountsImportQueue in a batch, _after_ they have been imported. This prevents duplicates from being added to the queue during the gap that previously existed between removal and import.
We need to track items to remove separately from items to add, otherwise invalid accounts would remain in the queue.
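
A rough sketch of this batch-style processing; apart from the onlineAccountsImportQueue name, the element type and helper methods are placeholders, not the real processing loop.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;

class ImportQueueProcessingSketch {
    // String stands in here for the real online account data type.
    private final ConcurrentLinkedQueue<String> onlineAccountsImportQueue = new ConcurrentLinkedQueue<>();

    void processQueue() {
        List<String> onlineAccountsToAdd = new ArrayList<>();
        List<String> onlineAccountsToRemove = new ArrayList<>();

        // Validate without removing, so a duplicate arriving mid-validation is still
        // caught by the "already queued" check on enqueue.
        for (String account : onlineAccountsImportQueue) {
            if (isValid(account))
                onlineAccountsToAdd.add(account);

            // Track removals separately from additions, otherwise invalid accounts
            // would stay in the queue forever.
            onlineAccountsToRemove.add(account);
        }

        importAccounts(onlineAccountsToAdd);

        // Only now, after the import, remove the processed batch from the queue.
        onlineAccountsImportQueue.removeAll(onlineAccountsToRemove);
    }

    private boolean isValid(String account) { return !account.isEmpty(); } // placeholder validation
    private void importAccounts(List<String> accounts) { /* placeholder import */ }
}
```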
This could prevent additional wallets from being initialized if the connection was lost while syncing an existing one.
Should fix OutOfMemoryException often seen when syncing from 1000+ blocks behind the chain tip.
…rely from the peer's chain tip block summaries cache.

Loading from the cache should speed up sync decisions, particularly when choosing which peer to sync from. The greater the number of connected peers, the more significant this optimization will be. It should also reduce wasted network requests and data usage.

Adding this check prior to making a network request is a simple way to introduce the new cached summaries from BLOCK_SUMMARIES_V2 without having to rewrite a lot of the complex sync / peer comparison logic. Longer term we may want to rewrite that logic to read from the cache directly, but it doesn't make sense to introduce that level of risk at this point in time, especially as the Synchronizer may be rewritten soon to prefer longer chains.

Even so, this is still quite a high-risk commit, so lots of testing will be needed.
This allows the first pass to always be served from the peer's cache of 8 summaries. A maximum of 7 can be returned, because the 8th spot is needed for the parent block's signature.
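
Purely to illustrate the arithmetic and the cache lookup described above (a peer caching its latest 8 summaries can answer a first-pass request for at most 7, since the request is anchored on a parent block's signature), here is a hypothetical sketch; none of these names are the real Qortal symbols.

```java
import java.util.Arrays;
import java.util.List;

class ChainTipCacheSketch {
    static final int CHAIN_TIP_CACHE_SIZE = 8;                    // summaries each peer broadcasts
    static final int FIRST_PASS_COUNT = CHAIN_TIP_CACHE_SIZE - 1; // 7: the 8th slot is the parent block

    interface BlockSummary {
        byte[] getParentSignature();
    }

    // Return the cached summaries that follow parentSignature (oldest first), capped at the
    // first-pass count, or null if the cache can't answer and a network request is needed.
    static List<BlockSummary> fromCache(List<BlockSummary> cachedSummaries, byte[] parentSignature) {
        for (int i = 0; i < cachedSummaries.size(); i++) {
            if (Arrays.equals(cachedSummaries.get(i).getParentSignature(), parentSignature)) {
                int to = Math.min(i + FIRST_PASS_COUNT, cachedSummaries.size());
                return cachedSummaries.subList(i, to);
            }
        }
        return null; // fall back to a GetBlockSummaries-style request
    }
}
```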
The `start.sh` & `stop.sh` scripts have already been marked as executable in the source folder, but since we have only piped their contents, we need to set the correct file permissions again.
…led entirely from the peer's chain tip block summaries cache."

This reverts commit 8cedf61.
…o GetBlockSummaries requests.

This should hopefully fix a potential issue where peer's chain tip data becomes contaminated with other summary data, causing incorrect sync decisions.
This should ideally have been set in the 3.6.1 release, but not setting it is unlikely to have caused any problems.
…r the code)

This is a better fix for the "contaminated chain tip summaries" issue. Need to reduce the logging level to debug before release.
…onding to GetBlockSummaries requests."

This reverts commit 2d58118.
…mmaries.

The BLOCK_SUMMARIES message type would differentiate between an empty response and a missing/invalid response. However, in V2, a response with empty summaries would throw a BufferUnderflowException and be treated by the caller as a null message.

This caused problems when trying to find a common block with peers that have diverged by more than 8 blocks. With V1 the caller would know to search back further (e.g. 16 blocks) but in V2 it was treated as "no response" and so the caller would give up instead of increasing the look-back threshold.

This fix will identify BLOCK_SUMMARIES_V2 messages with no content, and return an empty array of block summaries instead of a null message.

Should be enough to recover any stuck nodes, as long as they haven't diverged more than 240 blocks from the main chain.
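
A hedged sketch of that fix: a BLOCK_SUMMARIES_V2 payload with no bytes remaining parses to zero summaries rather than surfacing as a null message. The fixed per-summary length and method shape below are assumptions for illustration only.

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class BlockSummariesV2ParsingSketch {
    static final int SUMMARY_LENGTH = 128; // hypothetical serialized size of one summary

    static List<byte[]> fromByteBuffer(ByteBuffer bytes) {
        // An empty response is valid: it means "no summaries follow that parent", so the
        // caller should widen its common-block search instead of treating the peer as silent.
        if (!bytes.hasRemaining())
            return Collections.emptyList();

        List<byte[]> summaries = new ArrayList<>();
        while (bytes.remaining() >= SUMMARY_LENGTH) {
            byte[] summary = new byte[SUMMARY_LENGTH];
            bytes.get(summary);
            summaries.add(summary);
        }
        return summaries;
    }
}
```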
finishing log suppression for docker instances.
@TechDufus
Author

TechDufus commented Oct 13, 2022

Sorry for the git history on this one... I tried squashing my mess, but it seems that only made a bigger mess.

@QuickMythril changed the title from "Adding log suppression only for non-docker processes" to "ADD: (Logs) Remove suppression in docker" on Sep 8, 2023
Labels: None yet
Project status: TESTING - finished
4 participants