Remove heap-type encoding capability from user facing ABIEncoder #1127
segfault-magnet started this conversation in Ideas
Replies: 1 comment 2 replies
I'm leaning more towards 2. If we can't promise the correct behaviour, just return an error in the heap type case.
@iqdecay suggested on this PR that we should document how we acquire the address given to `.resolve` of `UnresolvedBytes`. Contemplating this, I came to the conclusion that documenting it would prove unattractively complicated for users. We would need to explain how the node spins up a VM, what is pushed onto the stack before a call execution, where those data structures can be found, which fields need to be skipped, how encoded data sits before the heap type, how heap types in predicates shift everything, etc.
Rather, until we actually get encoding support, I suggest we stop providing `UnresolvedBytes` and instead provide an ABIEncoder that either:
1. calls `.resolve(0)` on the `UnresolvedBytes` before giving the result back to the user, or
2. returns an error if the type being encoded contains heap types.

Chances are that currently nobody ever put anything besides `0` for the `resolve` arg. Even if we explained the whole process, would the explanation be good enough that most users manage to produce the correct offset? And if we want to support them, would we be able to provide intuitive helpers for calculating the offset?

Approach 1:
Pros:
more flexible -- perhaps the user wants to encode a type that has heap types in it, but the user doesn't care much about them, rather it is the rest of the type the user is interested in.
Cons:
A certain amount of surprise: if we don't complain about the heap types, the user might expect them to be encoded properly, so that when given as call data it will just work on the VM.
Approach 2:
Pros:
No surprise: have heap types? Can't encode, sorry.
Cons:
Less flexible
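To make the trade-off concrete, here is a minimal sketch of the two behaviours. Everything here is hypothetical illustration, not the actual fuels-rs API: `Token`, `UnresolvedBytes`, the placeholder byte layouts, and the `encode_approach_*` helpers are all stand-in names.

```rust
/// Hypothetical stand-in for an ABI token; `Vector` represents a heap type.
#[derive(Debug, PartialEq)]
enum Token {
    U64(u64),
    Vector(Vec<Token>),
}

/// Stand-in for the encoder output that still needs an offset to resolve.
struct UnresolvedBytes {
    data: Vec<u8>,
}

impl UnresolvedBytes {
    /// Resolve at a base offset; in practice callers almost always pass 0.
    fn resolve(self, _offset: u64) -> Vec<u8> {
        self.data
    }
}

fn contains_heap_type(token: &Token) -> bool {
    matches!(token, Token::Vector(_))
}

/// Stand-in for the real encoding logic.
fn encode_raw(token: &Token) -> UnresolvedBytes {
    let data = match token {
        Token::U64(v) => v.to_be_bytes().to_vec(),
        // Placeholder for ptr/cap/len words whose pointers we cannot
        // actually compute without knowing the final memory layout.
        Token::Vector(_) => vec![0; 24],
    };
    UnresolvedBytes { data }
}

/// Approach 1: always resolve at offset 0, even for heap types.
/// Flexible, but the heap pointers in the output are silently wrong.
fn encode_approach_1(token: &Token) -> Vec<u8> {
    encode_raw(token).resolve(0)
}

/// Approach 2: refuse to encode anything containing a heap type.
fn encode_approach_2(token: &Token) -> Result<Vec<u8>, String> {
    if contains_heap_type(token) {
        return Err("cannot encode heap types without offset support".into());
    }
    Ok(encode_raw(token).resolve(0))
}

fn main() {
    let plain = Token::U64(42);
    let heapy = Token::Vector(vec![Token::U64(1)]);

    // Approach 1 happily returns bytes for the heap type.
    assert_eq!(encode_approach_1(&heapy).len(), 24);

    // Approach 2 only succeeds for heap-free types.
    assert!(encode_approach_2(&plain).is_ok());
    assert!(encode_approach_2(&heapy).is_err());
}
```

The sketch shows why approach 2 is the safer default: the only thing approach 1 buys us is bytes we cannot promise are correct.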
Wdyt? @FuelLabs/sdk-rust