Feature/dev harrel impl #50
Conversation
Hi, let me know if there is something missing. I have one suggestion though: could you somehow separate spec compliance into required & optional? Because right now it feels a little unfair, as I have the highest score for required but the lowest score overall. I just don't support format assertions (yet?), which is what most of the optional tests are checking. Also, I think the serde tests might be a bit unfair? E.g. "Medeia" doesn't support the newest drafts, so it is run with draft7 and can't really be compared to implementations run with draft2020-12. Probably this should also be grouped by spec?
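For illustration, a minimal sketch of the "group by spec" idea, assuming a hypothetical result type (not the project's actual data model): serde results bucketed by the draft each implementation actually ran with, so draft7 runs are only compared against other draft7 runs.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public final class GroupByDraftExample {

    // Hypothetical result record; the real project's data model may differ.
    record SerdeResult(String implementation, String draft, double score) {}

    public static void main(String[] args) {
        final List<SerdeResult> results = List.of(
                new SerdeResult("Medeia", "draft7", 1.2),
                new SerdeResult("dev.harrel/json-schema", "draft2020-12", 1.5));

        // Bucket results by the draft each implementation was run against,
        // so draft7-only libraries are compared only with other draft7 runs.
        final Map<String, List<SerdeResult>> byDraft = results.stream()
                .collect(Collectors.groupingBy(SerdeResult::draft));

        byDraft.forEach((draft, rs) -> System.out.println(draft + ": " + rs));
    }
}
```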
And just out of curiosity: how long does it take you to run all the benchmarks? :P It looks like it might take a whole night or so.
Thanks @harrel56, I'll take a look at this.
Results are split between required and optional, though currently the graph is not. The table summarising the functional results breaks them down into pass/fail counts for required and optional features. The overall score blends both required and optional, with required test cases making up 75% of the score. It would be possible to add a graph comparing only required functional coverage. Feel free to raise an issue, and if you think it's important enough, ideally work on it :).
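To make the weighting concrete, here is a minimal sketch of the blended score described above (hypothetical class and method names, not the repository's actual scoring code): required results carry 75% of the weight and optional the remaining 25%, so perfect required coverage with no optional (format) support still caps the overall score at 0.75.

```java
public final class BlendedScoreExample {

    // Blend required and optional pass rates, weighting required at 75%.
    static double blendedScore(int requiredPassed, int requiredTotal,
                               int optionalPassed, int optionalTotal) {
        final double requiredRate = (double) requiredPassed / requiredTotal;
        final double optionalRate = (double) optionalPassed / optionalTotal;
        return 0.75 * requiredRate + 0.25 * optionalRate;
    }

    public static void main(String[] args) {
        // Top score on required, zero optional (format) coverage:
        System.out.println(blendedScore(1000, 1000, 0, 400)); // prints 0.75
    }
}
```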
Yes, I'm aware of this, though it would be great if you could raise it as an issue. I'm currently working on making the charts and tables of results be driven from the latest test runs, so that results update as new versions of validator libraries come out. Time is always a limiting factor though ;)
Once the benchmarks and graphs are automatically generated, I'll be happy to help out with splitting this graph - I just don't wanna do it manually. Gonna report this as a separate issue then.
Hey @harrel56,
First off, a big apology for committing a refactor while you were in the middle of your work. My bad. I'm just trying to finish off a few things to make the results more data-driven.
This looks great!
I've made a few comments below. Mostly these look like issues from you refactoring the code after I changed things. A bit of clean-up and this should be good to go.
[Resolved review threads on src/main/java/org/creekservice/kafka/test/perf/implementations/DevHarrelImplementation.java, src/main/java/org/creekservice/kafka/test/perf/testsuite/TestSuiteLoader.java and src/test/java/org/creekservice/kafka/test/perf/implementations/DevHarrelImplementationTest.java]
Yeah, the benchmarks take a while to run! They run in a GitHub workflow. There is a smoke test for the functional tests that runs as part of the build though, to catch any issues.
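For illustration, a build-time smoke test along these lines could be as small as the following sketch. It assumes the dev.harrel json-schema API as documented in that library's README (a `ValidatorFactory.validate` call returning a `Validator.Result`); it is not the repository's actual test code.

```java
import dev.harrel.jsonschema.Validator;
import dev.harrel.jsonschema.ValidatorFactory;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertTrue;

class DevHarrelSmokeTest {

    // Validate one trivial schema/instance pair so wiring problems are
    // caught at build time without running the full (slow) benchmark suite.
    @Test
    void shouldValidateTrivialInstance() {
        final Validator.Result result =
                new ValidatorFactory().validate("{\"type\": \"integer\"}", "7");
        assertTrue(result.isValid());
    }
}
```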
[Resolved review thread on DevHarrelImplementation.java]
Sorry - forgot I moved those files! They are now under the performance package.
Yeah, that really wasn't a problem, so no worries :) I think I addressed all your comments - let me know if there's anything left.
[Resolved review threads on DevHarrelImplementation.java and DevHarrelImplementationTest.java]
LGTM @harrel56
A few minor comments above, plus the following remaining from the previous reviews:
Can be merged once #50 is merged.
Ok, completed another iteration - it's getting a little messy, hopefully didn't omit anything |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
LGTM!
Thanks for your contribution.
In case you missed them, there are two follow-on comments you may be interested in:
Functional results are up: https://www.creekservice.org/json-schema-validation-comparison/functional
Performance results are working locally - pushing soon.
Nice, thanks!
Hate to be dead last here - looking forward to the required & optional separation! BTW I messaged you on Slack regarding this.
Performance results are up: https://www.creekservice.org/json-schema-validation-comparison/performance