
Feature/dev harrel impl #50

Merged

Conversation


@harrel56 harrel56 commented Nov 12, 2023

Reviewer checklist

  • Read the contributing guide
  • PR should be motivated, i.e. what does it fix, why, and if relevant, how
  • Ensure relevant issues are linked (description should include text like "Fixes #")
  • Ensure any appropriate documentation has been added or amended

@harrel56 harrel56 requested a review from a team as a code owner November 12, 2023 21:01
@harrel56 (Contributor, Author)

Hi, let me know if there is something missing. I do have one suggestion though: could you somehow separate spec compliance into required & optional? Right now it feels a little unfair, as I have the highest score for required and the lowest score overall. I just don't support format assertions (yet?), which is what most of the optional tests are checking.

Also, I think the serde tests might be a bit unfair? E.g. "Medeia" doesn't support the newest drafts, so it is run with draft7 and cannot really be compared to implementations run with draft2020-12. Should this also be grouped by spec?

@harrel56 (Contributor, Author)

And just out of curiosity: how long does it take you to run all benchmarks? :P It looks like it might take a whole night or so

@big-andy-coates big-andy-coates added the enhancement New feature or request label Nov 13, 2023
@big-andy-coates (Member) commented Nov 13, 2023

Thanks @harrel56, I'll take a look at this.

could you somehow separate spec compliance to required & optional?

Results are split between required and optional. Though currently the graph is not.

The table summarising the functional results breaks them down into pass/fail counts on optional and required features.

The overall score blends both required and optional, with required test cases making up 75% of the score.
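To make the blend concrete, here is a minimal sketch of the weighted score described above. Only the 75/25 split comes from this conversation; the class, method, and parameter names are hypothetical and are not taken from the benchmark code.

```java
// Hypothetical sketch of the weighted overall compliance score:
// required results carry 75% of the weight, optional results 25%.
public final class ComplianceScore {

    private static final double REQUIRED_WEIGHT = 0.75;

    /** Both pass rates are fractions in the range [0, 1]. */
    public static double overall(double requiredPassRate, double optionalPassRate) {
        return REQUIRED_WEIGHT * requiredPassRate
                + (1 - REQUIRED_WEIGHT) * optionalPassRate;
    }

    public static void main(String[] args) {
        // An implementation with perfect required coverage but no
        // optional (e.g. format-assertion) support still scores 0.75.
        System.out.println(overall(1.0, 0.0));
    }
}
```

This illustrates the situation described in the comment above: an implementation can top the required ranking yet sit near the bottom overall if it skips the optional tests.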

It would be possible to add a graph comparing only required functional coverage. Feel free to raise an issue and, if you think it's important enough, ideally work on it :).

Also I think the serde tests might be a bit unfair? E.g. "Medeia" doesn't support newest drafts so it is run with draft7 and it cannot really be compared to implementations which are run with draft2020-12.

Yes, I'm aware of this. Though it would be great if you could raise this as an issue.

I'm currently working on making the charts and tables of results be driven from the latest test runs, so that results update as new versions of validator libraries come out. As part of this work, I have a todo to call out which draft version the serde benchmark is using, what this means, and to potentially break it down by spec.
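Breaking the serde results down by spec could be as simple as bucketing each result by the draft it ran against, so a draft7-only implementation such as Medeia is only compared within its own group. The record and field names below are illustrative, not from the benchmark code; the throughput numbers are made up.

```java
// Hypothetical sketch: group serde benchmark results by the JSON Schema
// draft each implementation was run against, so comparisons stay fair.
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public final class GroupBySpec {

    record Result(String impl, String draft, double throughput) {}

    static Map<String, List<Result>> byDraft(List<Result> results) {
        return results.stream().collect(Collectors.groupingBy(Result::draft));
    }

    public static void main(String[] args) {
        List<Result> results = List.of(
                new Result("Medeia", "draft7", 120.0),      // draft7-only library
                new Result("dev.harrel", "draft2020-12", 110.0),
                new Result("OtherImpl", "draft2020-12", 95.0));

        // Each draft gets its own comparison table/chart.
        byDraft(results).forEach((draft, group) ->
                System.out.println(draft + " -> " + group.size() + " result(s)"));
    }
}
```

Rendering one table or chart per draft group would address the fairness concern without dropping any implementation from the report.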

Time is always a limiting factor though ;)

@harrel56 (Contributor, Author)

When the benchmarks and graphs are automatically generated, I'll be happy to help out with splitting this graph - I just don't want to do it manually. I'll report this as a separate issue then.

@big-andy-coates (Member) left a comment


Hey @harrel56,

First off, a big apology for committing a refactor while you were in the middle of your work. My bad. I'm just trying to finish off a few things to make results more data driven.

This looks great!

I've made a few comments below. It mainly looks like issues caused by you refactoring the code after I changed things. A bit of clean-up and it should be good to go.

@big-andy-coates (Member)

And just out of curiosity: how long does it take you to run all benchmarks? :P It looks like it might take a whole night or so

Yeah, the benchmarks take a while to run! They run in a GitHub workflow.

There is a smoke test for the functional tests that runs as part of the build though, to catch any issues.

@big-andy-coates (Member) commented Nov 13, 2023

### Conflicting files

  • src/main/java/org/creekservice/kafka/test/perf/JsonSerdeBenchmark.java
  • src/main/java/org/creekservice/kafka/test/perf/JsonValidateBenchmark.java

Sorry - forgot I moved those files! They are now under the performance package.

@harrel56 (Contributor, Author)

First off, a big apology for committing a refactor while you were in the middle of your work. My bad. I'm just trying to finish off a few things to make results more data driven.

Yeah, that really wasn't a problem so no worries :)

I think I addressed all your comments - let me know if there's anything left

@big-andy-coates (Member) left a comment


LGTM @harrel56

A few minor comments above, plus the following remaining from the previous reviews:

@harrel56 (Contributor, Author)

Ok, completed another iteration - it's getting a little messy; hopefully I didn't omit anything.

@big-andy-coates (Member) left a comment


LGTM!

Thanks for your contribution.

In case you missed them, there are two follow on comments you may be interested in:

Functional results are up: https://www.creekservice.org/json-schema-validation-comparison/functional

Performance results are working locally - pushing soon.

@big-andy-coates big-andy-coates merged commit 1612852 into creek-service:main Nov 14, 2023
7 checks passed
@harrel56 (Contributor, Author)

Nice, thanks!

Functional results are up: https://www.creekservice.org/json-schema-validation-comparison/functional

Hate to be dead last here - looking forward to the required & optional separation! BTW, I messaged you on Slack regarding this.

@big-andy-coates (Member) commented Nov 15, 2023

@harrel56 harrel56 deleted the feature/dev-harrel-impl-2 branch November 15, 2023 22:04