Take over benchmarks #2149

Closed · 13 of 18 tasks
bdemann opened this issue Sep 30, 2024 · 4 comments · Fixed by #2138

bdemann commented Sep 30, 2024

  • Multi-canister support
  • Convert instructions to cycles and to dollars, and add a column for each
  • Clean up the code
    • Make npm test record benchmarks by default (i.e., behave as if AZLE_RECORD_BENCHMARKS=true)
    • TODOs in src/build/rust/canister/src/lib.rs
    • See if we can get the actual names instead of the hashes for candid records (see the Benchmark type in test/index.ts, and the sketch after this list)
      • It might be as simple as telling the agent about the candid info
    • Double-check the large additions to test/index.ts
    • TODOs in src/lib/stable/canister_methods/query.ts and update.ts
    • TODOs in src/build/rust/canister/src/execute_method_js.rs
    • Make sure tests are passing
    • Final pass at code cleanup
      • Make sure that places using jsonStringify aren't importing from experimental just for that (especially the property tests)
  • During review
    • Discuss the name: RECORD_BENCHMARKS vs RUN_BENCHMARKS
    • Should jsonStringify and other things still be exported from experimental?
      • For that matter, is it okay for it to be in stable? I think it can move to stable fairly easily, and there are stable situations where it would be helpful.
    • What do we do about the function names that are harder or impossible to get? Are numbers okay, at least for now?
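Not part of the original checklist, but a minimal sketch of the candid-record-names item above, assuming the hashes we see are the 32-bit field-name hashes from the Candid spec and that we already know the candidate names from our own code. The helper names here are hypothetical, not Azle APIs:

// Sketch only: candidFieldHash implements the field-id hash from the
// Candid spec; unhashKeys and knownNames are hypothetical helpers.
function candidFieldHash(name: string): number {
    // Fold over the UTF-8 bytes: h = (h * 223 + byte) mod 2^32
    let hash = 0;
    for (const byte of new TextEncoder().encode(name)) {
        hash = (hash * 223 + byte) >>> 0;
    }
    return hash;
}

// Map hashed record keys back to the names we know from our own candid,
// leaving any hash we can't match as-is.
function unhashKeys(
    record: Record<string, unknown>,
    knownNames: string[]
): Record<string, unknown> {
    const byHash = new Map(
        knownNames.map((name) => [candidFieldHash(name).toString(), name])
    );
    return Object.fromEntries(
        Object.entries(record).map(([key, value]) => [
            byHash.get(key) ?? key,
            value
        ])
    );
}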
bdemann added this to the 1.0 milestone Sep 30, 2024
bdemann self-assigned this Sep 30, 2024

bdemann commented Oct 9, 2024

According to this, the cycles cost is computed as follows:

The per-instruction fee is 0.4 cycles per instruction (i.e., 4 cycles per 10 instructions).
The total cycles fee is base-fee + per-instruction-fee * number-of-instructions.
The current fee values are base-fee = 590K cycles (about $0.0000007885999 USD) and per-instruction-fee = 0.4 cycles (about $0.00053 USD per 1B instructions at the same rate).
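
For reference, a minimal TypeScript sketch of that formula; the USD-per-cycle rate is derived from the quoted base-fee figure rather than an official source, and all names are illustrative:

// Fees quoted above: base fee 590K cycles, 0.4 cycles per instruction
const BASE_FEE_CYCLES = 590_000;
const PER_INSTRUCTION_FEE_CYCLES = 0.4;
// Derived from the quoted base fee (590K cycles ≈ $0.0000007885999),
// i.e. roughly $1.3366 per trillion cycles -- an assumption, not a quote
const USD_PER_CYCLE = 0.0000007885999 / 590_000;

function instructionsToCycles(instructions: number): number {
    return BASE_FEE_CYCLES + PER_INSTRUCTION_FEE_CYCLES * instructions;
}

function cyclesToUsd(cycles: number): number {
    return cycles * USD_PER_CYCLE;
}

// Example: a method that executed 1B instructions
// instructionsToCycles(1_000_000_000) === 400_590_000 cycles
// cyclesToUsd(400_590_000)            ≈ $0.000535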

bdemann commented Oct 10, 2024

In the release process we run the benchmarks, commit each on its own commit, combine those commits into one, and then have that single commit reviewed and merged.

#2173

bdemann commented Oct 10, 2024

Get rid of the publish script? Use the same approach for publishing?

Yes: #2173

bdemann commented Oct 10, 2024

Options for recording benchmarks by default when running tests:

  1. Take advantage of npm_lifecycle_event
const wasmData: WasmData = {
    envVars,
    recordBenchmarks:
        // npm sets npm_lifecycle_event to the name of the currently running
        // script, so this branch is taken during the pretest hook of npm test
        process.env.npm_lifecycle_event === 'pretest'
            ? // on by default during tests; opt out with AZLE_RECORD_BENCHMARKS=false
              process.env.AZLE_RECORD_BENCHMARKS !== 'false'
            : // off by default elsewhere; opt in with AZLE_RECORD_BENCHMARKS=true
              process.env.AZLE_RECORD_BENCHMARKS === 'true'
};
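
For clarity, a quick truth table of how option 1 behaves (assuming npm's documented behavior of setting npm_lifecycle_event to the name of the currently running script):

npm_lifecycle_event    AZLE_RECORD_BENCHMARKS    recordBenchmarks
'pretest' (npm test)   unset                     true
'pretest' (npm test)   'false'                   false
anything else          unset                     false
anything else          'true'                    true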
  2. Make our own env variable and set it in the pretest script
    • Alternatively we could hijack NODE_ENV here as well
{
    "pretest": "AZLE_ENV=test tsx test/pretest.ts"
}
  3. Do something like this, relying on the shell's ${VAR:-default} expansion
{
    "test": "AZLE_RECORD_BENCHMARKS=${AZLE_RECORD_BENCHMARKS:-true} jest"
}

I'm going with option 1.
