Misc questions about use #24

Open
rvagg opened this issue Aug 19, 2024 · 1 comment
rvagg commented Aug 19, 2024

Nice work so far. In trying to get my head around what this is aiming to do and having a play with it, I have some questions:

  1. I think the tool's openrpc generate command is used to generate what's in rust/bindings/src/v0.rs, presumably from a current Forest OpenRPC spec?
    a. I couldn't get it to work with the Lotus one you bundled; is that because it doesn't cleanly conform to the spec? Is there an easy way to see those problems so we can address them individually? The failure and backtrace don't really help much in understanding that.
    b. The output for Forest v0 doesn't seem to have newlines at all; is this meant to be auto-formatted later on? What's the workflow here to generate these?
    c. How can we use this generate process to check the OpenRPC spec of the node being pointed to? The ideal seems to be that we converge on a single subset API, and we should be able to just poke the node and see that it exposes that subset and that its methods are properly defined.
    d. Is it intended that you'd manually verify the bindings once generated, since they seem to be a critical piece here?
  2. Some docs on how to run it would be good; I eventually figured out that you need a config, and that this minimal version would do: {"v0": {"url": "http://localhost:1234/rpc/v0"}}. Spitting out json-schema is a bit gnarly; maybe just an example in markdown would be good.
  3. Can you give some thoughts on what an expanded test suite would look like in here? How far do you imagine going in exercising and validating the API calls' responses? Will it be enough for the purpose of this tool to just ensure that the call came back, and have the binding do the work of validating that the response was well-formed?

aatifsyed commented Aug 20, 2024

Thanks for the detailed feedback :)

1a. I couldn't get it to work with the Lotus one you bundled; is that because it doesn't cleanly conform to the spec? Is there an easy way to see those problems so we can address them individually?

  • Yep, this is bad UX. It looks like the library we depend on has internal panics (which is fair enough, it's at 0.1.0).
    In this case it's because some of the Lotus (sub)schemas don't have titles.
    I've opened oxidecomputer/typify#661 (Hit panic in add_type).
    Possible approaches to addressing this:
    • Override type names where they're absent so that the Lotus schema generates (see the sketch after this list).
    • Devote some time to typify to make their methods fallible instead of panic-happy.
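
As a very rough sketch of the first option (the keys walked here are assumptions about how the Lotus document nests its schemas, not a tested fix), the spec could be pre-processed with serde_json to synthesize titles before typify ever sees it:

```rust
use serde_json::Value;

/// Recursively add a synthetic `title` to any (sub)schema that lacks one,
/// using the nearest definition/property name as the naming hint, so that
/// typify can name the generated Rust type instead of panicking.
fn add_missing_titles(schema: &mut Value, hint: &str) {
    if let Value::Object(obj) = schema {
        // Only annotate things that look like schemas, not keyword containers.
        let looks_like_schema = obj.contains_key("type") || obj.contains_key("properties");
        if looks_like_schema && !obj.contains_key("title") {
            obj.insert("title".to_owned(), Value::String(hint.to_owned()));
        }
        // Recurse into nested schemas, refining the naming hint as we go.
        for key in ["definitions", "properties"] {
            if let Some(Value::Object(children)) = obj.get_mut(key) {
                for (name, child) in children.iter_mut() {
                    add_missing_titles(child, name);
                }
            }
        }
        if let Some(items) = obj.get_mut("items") {
            add_missing_titles(items, &format!("{hint}Item"));
        }
    }
}
```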

1b. The output for Forest v0 doesn't seem to have newlines at all; is this meant to be auto-formatted later on? What's the workflow here to generate these?
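
For what it's worth, a common pattern with this kind of codegen (assuming syn and prettyplease here, which may not be what this project actually uses) is to pretty-print the generated file as a post-processing step rather than emit formatted code directly:

```rust
use std::fs;

/// Hypothetical post-processing step: parse the single-line generated
/// source and write a conventionally formatted version back out.
/// prettyplease gives deterministic output without a rustfmt install.
fn format_generated(path: &str) -> anyhow::Result<()> {
    let code = fs::read_to_string(path)?;
    let ast = syn::parse_file(&code)?;
    fs::write(path, prettyplease::unparse(&ast))?;
    Ok(())
}
```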

1c. How can we use this generate process to check the OpenRPC spec of the node being pointed to? The ideal seems to be that we converge on a single subset API, and we should be able to just poke the node and see that it exposes that subset and that its methods are properly defined.

  • This isn't possible with the current approach, because there's a compile step between the codegen and the tests.
    That is, if the node's OpenRPC spec were wrong, none of the tests would compile, and you'd have to fix or remove them before running against an arbitrary node.
    The intent is for the tests here to be the source of truth.
    That said, there is #23 (Validate method calls in-flight), which could dynamically check that the calls match any, possibly dynamic, spec (sketched below).
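
A sketch of what that dynamic check could look like, assuming the jsonschema crate and that the node's own OpenRPC document has been fetched via rpc.discover (#23 may well take a different shape):

```rust
use jsonschema::JSONSchema;
use serde_json::Value;

/// Check one observed RPC result against the schema the node itself
/// advertises for that method. `spec` is the node's OpenRPC document,
/// e.g. the result of calling `rpc.discover` on it.
fn check_against_live_spec(spec: &Value, method: &str, observed: &Value) -> anyhow::Result<()> {
    let method_spec = spec["methods"]
        .as_array()
        .and_then(|methods| methods.iter().find(|m| m["name"] == method))
        .ok_or_else(|| anyhow::anyhow!("node does not expose {method}"))?;
    let compiled = JSONSchema::compile(&method_spec["result"]["schema"])
        .map_err(|e| anyhow::anyhow!("bad schema for {method}: {e}"))?;
    anyhow::ensure!(
        compiled.is_valid(observed),
        "{method} result does not match the node's own spec"
    );
    Ok(())
}
```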

1d. Is it intended that you'd manually verify the bindings once generated, since they seem to be a critical piece here?

2. Some docs on how to run it would be good; I eventually figured out that you need a config, and that this minimal version would do: {"v0": {"url": "http://localhost:1234/rpc/v0"}}. Spitting out json-schema is a bit gnarly; maybe just an example in markdown would be good.
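
For reference, that minimal config pretty-printed (the URL is just wherever your local node serves the v0 API):

```json
{
  "v0": {
    "url": "http://localhost:1234/rpc/v0"
  }
}
```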

3. Can you give some thoughts on what an expanded test suite would look like in here? How far do you imagine going in exercising and validating the API calls' responses? Will it be enough for the purpose of this tool to just ensure that the call came back, and have the binding do the work of validating that the response was well-formed?

  • I think I want the test suite to:
    • Exercise every schema, i.e. check that it conforms. It is NOT sufficient that the bindings match.
    • Exercise logic, i.e. do a GetHead, SetHead, GetHead sequence and validate appropriately (a rough sketch follows).
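
A rough sketch of that second kind of test, with a hypothetical client standing in for whatever the generated bindings expose (the method names mirror the Filecoin ChainHead/ChainSetHead RPCs, but this API is invented for illustration):

```rust
/// Hypothetical logic test: read the head, rewind it one tipset, and
/// confirm the node reports the tipset we set. `Client`, `TipSet`, and
/// the method names are placeholders, not the real generated bindings.
async fn get_set_get(client: &Client) -> anyhow::Result<()> {
    let original = client.chain_head().await?;
    let parent = client.chain_get_tipset(original.parents()).await?;
    client.chain_set_head(&parent).await?;
    let new_head = client.chain_head().await?;
    anyhow::ensure!(
        new_head.key() == parent.key(),
        "node did not adopt the tipset we set"
    );
    // Restore the original head so later tests see an undisturbed node.
    client.chain_set_head(&original).await?;
    Ok(())
}
```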
