Avoid overhead for synthesized nodes lookup (backport #13424) #13425
Summary
After #12550, a hash implementation was added to DAGOpNode so that identical instances of DAG nodes remain usable as keys in a set or dict. This was needed because #12550 changed DAGCircuit so that DAGOpNode instances are just a Python view of the data contained in the nodes of a DAG, whereas prior to #12550 the actual DAGOpNode objects were returned by reference from DAG methods. However, this hash implementation has additional overhead compared to the object-identity-based hashing used before, which caused a performance regression in some cases for high-level synthesis when it checks for nodes it has already synthesized. This commit addresses the regression by changing the dict key from the node object to the node id, since hashing an integer is significantly faster than hashing the node object.
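
A minimal sketch of the keying change described above. The `_node_id` attribute and the cache shape are illustrative assumptions for demonstration, not the exact code from #13424:

```python
# Sketch: key a "already synthesized" cache by node id instead of the node
# object. The `_node_id` attribute is assumed here for illustration.
from qiskit import QuantumCircuit
from qiskit.converters import circuit_to_dag

circuit = QuantumCircuit(2)
circuit.h(0)
circuit.cx(0, 1)
dag = circuit_to_dag(circuit)

# Before: keyed by the DAGOpNode itself, which invokes the content-based
# __hash__ added after #12550 and carries extra overhead per lookup.
synthesized_by_node = {node: None for node in dag.op_nodes()}

# After: keyed by the node's integer id, which hashes much faster.
synthesized_by_id = {node._node_id: None for node in dag.op_nodes()}
```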
Details and comments
This is an automatic backport of pull request #13424 done by Mergify.