implement testing #3
The goal of this ticket is to implement an annotation scheme for each entry that, in the end, makes sure the examples we want to test actually work.
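One possible shape for such an annotation scheme, as a minimal sketch. The field names (`example`, `setup`, `skip_test`) are assumptions for illustration, not the project's actual schema:

```python
# Hypothetical sketch of per-entry test annotations; the field names
# "example", "setup", and "skip_test" are assumptions, not the real schema.
from dataclasses import dataclass

@dataclass
class Entry:
    name: str
    example: str          # code snippet shown to the user
    setup: str = ""       # "setup: ..." code to run before the example
    skip_test: bool = False  # marker: example is expected not to work

entries = [
    Entry("hello", example="print('hi')"),
    Entry("needs-x", example="print(x)", setup="x = 1"),
    Entry("broken", example="1/0", skip_test=True),
]

# The test runner would only consider entries without the skip marker.
testable = [e.name for e in entries if not e.skip_test]
print(testable)  # → ['hello', 'needs-x']
```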
So far, there is a primitive `text_examples` function in the main file and an associated attribute for testing. This won't work out, because running all examples takes far too much time. It would be better to wrap this into a persistent session and reset the environment before each example. Maybe it can be done with the usual doctest methods, or we may have to use pexpect or Jupyter kernels. My idea is to start a "main session" and, for each test, fork off from that process and drive it with pexpect; I don't know whether doctest already does this. Besides that, the runner also needs to take the "setup: ..." code part into account.
Finally, this is only an "opt-in" mechanism. Maybe we want to switch it to "opt-out", so that all examples are run and checked for errors -- unless an entry carries a marker saying the example is expected not to work.
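The opt-out variant could look like this sketch, where every example runs unless it carries an "expected to fail" marker (the `expect_fail` key is a hypothetical name):

```python
# Opt-out sketch: run every example and collect errors, skipping only
# entries with an "expect_fail" marker. The key name is hypothetical.
entries = [
    {"name": "ok", "code": "1 + 1"},
    {"name": "bad", "code": "1/0"},
    {"name": "known-bad", "code": "1/0", "expect_fail": True},
]

failures = []
for e in entries:
    if e.get("expect_fail"):
        continue  # opt-out: this example is expected not to work
    try:
        exec(e["code"], {})  # fresh namespace per example
    except Exception:
        failures.append(e["name"])

print(failures)  # → ['bad']
```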