Handle grouped test cases + auto-retry if out of tokens #49

Open · RussellDash332 wants to merge 2 commits into master

Conversation
What's the problem?

When you're working on a question with partial scoring, the test cases are usually split into groups. With the current implementation, it's hard to tell whether we are failing a test case group or not.

Here's an example I ran into while working on two such questions. Assume that ktcli <problem_id> is my own command alias that runs the submission script with <problem_id>.py attached.

Normally, if you fail a test case, judging stops, as shown below.

[screenshot: judging stops after the first failed test case]

For problems with test case groups, failing a test case does not necessarily end the entire judgement; several behaviours are possible. Usually the judge simply moves on to the next test case group until none are left. On other occasions, it keeps judging the remaining test cases anyway.

Therefore, we can't tell whether a particular test case failure matters until judging stops. In fact, we can still get a partial score even with these failures, as shown below (I did not get 100 points for this).

[screenshot: partial score awarded despite failed test cases]
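To make the partial-scoring behaviour concrete, here is a minimal illustrative sketch (not the PR's actual code), assuming the common scheme where a group only earns its points if every test case in it is accepted:

```python
# Illustrative only: assumes each group earns its points only when
# every test case in that group is accepted.
def group_score(results, group_points):
    """results: one list of booleans per group (True = accepted);
    group_points: points awarded per group if all of its cases pass."""
    return sum(
        points if all(cases) else 0
        for cases, points in zip(results, group_points)
    )

# Two groups worth 40 and 60 points; the second group has a failure,
# so only the first group's points are awarded.
print(group_score([[True, True], [True, False, True]], [40, 60]))  # 40
```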

So what does this PR do?

Introduce colours (and scores too! #44)

With colours, we can finally tell these test cases apart. Here are two scenarios: while judging is in progress, and when it's done.
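As a rough idea of how the coloured markers could be produced (the exact symbols and escape codes in the PR may differ), here is a small sketch using plain ANSI colour codes:

```python
# Sketch of coloured per-test-case markers using ANSI escape codes.
# The symbols and colours here are illustrative, not the PR's exact output.
GREEN, RED, YELLOW, RESET = "\033[92m", "\033[91m", "\033[93m", "\033[0m"

MARKERS = {
    "accepted": f"{GREEN}.{RESET}",   # passed test case
    "failed":   f"{RED}x{RESET}",     # failed test case
    "pending":  f"{YELLOW}?{RESET}",  # still being judged
}

def render_status(statuses):
    """Render one coloured marker per test case."""
    return "".join(MARKERS[s] for s in statuses)

# First five cases accepted, one failure, three still pending.
print(render_status(["accepted"] * 5 + ["failed"] + ["pending"] * 3))
```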

Judging in progress

Yellow question mark for a nice touch 😄

Single group (default)
[screenshot]

Grouped testcases
[screenshot]

AC verdict

Single group (default)
[screenshot]

Grouped testcases
[screenshot]

Non-AC verdict

[screenshot]

Small addition: auto-retry for out-of-tokens scenario

I find myself too lazy to keep rerunning the script whenever I run out of tokens, so I made the script auto-retry until it has a token again.

  • Pro: less manual work involved
  • Con: more requests are sent while attempting to submit

Here's a reference to what originally happens when you're out of tokens.

[screenshot: out-of-tokens message from the judge]

With this PR, when the same thing happens, the script simply waits until a token is available again and then proceeds as usual.
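Here is a minimal sketch of the retry idea, assuming a hypothetical submit() helper and a hypothetical OutOfTokensError; the real script's way of detecting the out-of-tokens response may differ:

```python
import time

class OutOfTokensError(Exception):
    """Hypothetical error raised when the judge reports no submission tokens left."""

RETRY_INTERVAL_SECONDS = 30  # illustrative polling interval

def submit_with_retry(submit, *args, **kwargs):
    """Keep retrying the submission until a token is available again."""
    while True:
        try:
            return submit(*args, **kwargs)
        except OutOfTokensError:
            print(f"Out of tokens, retrying in {RETRY_INTERVAL_SECONDS} seconds...")
            time.sleep(RETRY_INTERVAL_SECONDS)
```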
