Change graphql library to use Strawberry and fix memory leak with large numbers subscription #44
Conversation
Remove all references to device configs, remove the resolver layer, and rename the self argument to clarify its meaning.
Update tests accordingly.
Repeated test code moved to conftest fixtures. Run a single test IOC per test module instead of per function. Added an event loop fixture to accommodate this.
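The event loop fixture is along these lines (a sketch assuming pytest-asyncio's overridable event_loop fixture; the real conftest.py may differ):

```python
import asyncio

import pytest


# Override pytest-asyncio's default function-scoped `event_loop` fixture so
# that one loop (and hence one test IOC) can live for the whole module.
@pytest.fixture(scope="module")
def event_loop():
    loop = asyncio.new_event_loop()
    yield loop
    loop.close()
```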
Change implementation to signal when the deque should be processed.
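The signalling idea is roughly the following (a minimal sketch assuming asyncio; the names are illustrative, not the actual coniql classes):

```python
import asyncio
from collections import deque


class SignalledQueue:
    """Buffer updates in a deque and wake the consumer only when signalled."""

    def __init__(self) -> None:
        self._deque: deque = deque()
        self._ready = asyncio.Event()

    def push(self, item) -> None:
        self._deque.append(item)
        self._ready.set()  # signal that the deque should be processed

    async def drain(self):
        await self._ready.wait()
        self._ready.clear()
        while self._deque:
            yield self._deque.popleft()
```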
I like this overall. I definitely like the code schema rather than the separate .gql file that tartiflette used.
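For comparison, a code-first Strawberry schema looks roughly like this (a toy example rather than coniql's actual types):

```python
import strawberry


@strawberry.type
class Channel:
    id: str
    value: float


@strawberry.type
class Query:
    @strawberry.field
    def channel(self, id: str) -> Channel:
        # Toy resolver; coniql's real resolvers talk to EPICS channels.
        return Channel(id=id, value=0.0)


schema = strawberry.Schema(query=Query)
```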
I note that test_pvaplugin.py seems to have not been updated - what is the intention with that file? pvaplugin.py was updated, so I expected the tests to be updated too.
A number of my comments are duplicates of each other, where I've noted the same issue appearing in multiple places. There are also a number here that are by no means musts - many are optional or simply opinions.
The problem I ran into here was that I needed to parametrize the fixtures for the two versions (schema and aiohttp), as one takes the schema directly and the other creates an aiohttp_client. The latter has to be done in a fixture, as it is a feature of pytest-aiohttp. The problem is that the fixture to create the aiohttp_client has to be …
Remove the attempt to create parametrized tests that run both schema and client instances with the same test, as this impacted the readability of the tests.
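For context, the abandoned pattern was along these lines (hypothetical names throughout; make_schema and make_app are assumed helpers; the sticking point is that aiohttp_client must be awaited inside an async fixture while the schema variant needs no client at all):

```python
import pytest


# Hypothetical sketch of the abandoned approach: one fixture parametrized
# over two "transports" so the same test body exercises both paths.
@pytest.fixture(params=["schema", "aiohttp"])
async def transport(request, aiohttp_client):
    if request.param == "schema":
        return make_schema()          # assumed helper returning the schema
    app = make_app()                  # assumed helper returning the aiohttp app
    return await aiohttp_client(app)  # pytest-aiohttp client factory
```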
This will suppress the stacktrace on error if the debug argument is false.
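In other words, something along these lines (a generic sketch, not the exact coniql code):

```python
import traceback


def log_error(exc: Exception, debug: bool) -> None:
    if debug:
        # Full stacktrace only when debugging is enabled.
        traceback.print_exception(type(exc), exc, exc.__traceback__)
    else:
        print(f"Error: {exc}")  # one-line summary, no stacktrace
```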
Fixes the "Task was destroyed but it is pending" warnings at the end of the tests.
This occasional failure was coming from the subscription 'ticking' test. I have updated it to be more reliable, including a wait to ensure we are connected to the websocket before trying to subscribe.
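The handshake wait amounts to something like this on the client side (the message types come from the graphql-transport-ws protocol; the rest of the sketch is illustrative):

```python
import aiohttp


async def subscribe(url: str, query: str):
    async with aiohttp.ClientSession() as session:
        async with session.ws_connect(
            url, protocols=("graphql-transport-ws",)
        ) as ws:
            await ws.send_json({"type": "connection_init"})
            ack = await ws.receive_json()
            assert ack["type"] == "connection_ack"  # now safe to subscribe
            await ws.send_json(
                {"id": "1", "type": "subscribe", "payload": {"query": query}}
            )
            return await ws.receive_json()  # first "next" message
```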
This was an interesting one. In some cases this was caused by the pytest event loop closing too early, which was fixed by using a new event loop for each 'module' (i.e. each set of tests).
@aawdls @AlexanderWells-diamond I think I have now finished addressing the second round of review comments. Please feel free to review again or provide feedback on my recent changes. As a side note, I know we are moving away from the …
Re-running the tests, I'm unfortunately still seeing warnings despite the tests passing:
======================================================================================= 25 passed in 21.13s =======================================================================================
Task was destroyed but it is pending!
task: <Task pending name='Task-169' coro=<BaseGraphQLTransportWSHandler.handle_connection_init_timeout() running at /home/eyh46967/dev/coniql/venv/lib64/python3.8/site-packages/strawberry/subscriptions/protocols/graphql_transport_ws/handlers.py:82> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x7f3c0f9452b0>()]>>
Task was destroyed but it is pending!
task: <Task pending name='Task-198' coro=<BaseGraphQLTransportWSHandler.handle_connection_init_timeout() running at /home/eyh46967/dev/coniql/venv/lib64/python3.8/site-packages/strawberry/subscriptions/protocols/graphql_transport_ws/handlers.py:82> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x7f3c0f7ece50>()]>>
This is almost certainly a timing issue, where running the test on different hardware produces the warnings. This issue will likely be exacerbated by GitHub Actions, as some of their runners (especially MacOS) are very slow.
I don't know how much longer it is worth spending on this though, as the issue most likely comes from how Pytest is handling itself, which is rather unlike any actual client out there.
I am unable to install this internally using the default pipenv command, as several of the modules are not available on the internal PyPI. I haven't spent the time installing the missing ones as we're going to be scrapping pipenv anyway.
Other than the test warnings, this all looks good to me.
I've finally pinpointed where these warnings are coming from and it looks like it's an issue in Strawberry. Essentially they create a task that does an … This 60 secs is strange to me. In testing I found that this …
The second seems cleaner to me, but I will have to approach the Strawberry collaboration to get this fixed, so I might just ask for their opinion.
Scrap those fixes. Turns out it's much simpler - we can configure the timeout ourselves, so we could override the 60 seconds. I'll look into that.
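If I'm reading Strawberry correctly, the override would look something like this (whether connection_init_wait_timeout is accepted depends on the Strawberry version, and the schema import path is a placeholder):

```python
from datetime import timedelta

from strawberry.aiohttp.views import GraphQLView

from coniql.app import schema  # placeholder import; wherever the schema lives

view = GraphQLView(
    schema=schema,
    # Override Strawberry's default 60 s connection_init timeout so the
    # pending timeout task cannot outlive short-running tests.
    connection_init_wait_timeout=timedelta(seconds=5),
)
```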
@AlexanderWells-diamond if you have a moment, would you mind re-running the tests on your system with my latest changes to see if you still get any warnings? Thanks!
I've run the tests several times and see no warnings printed. I'm always a little suspicious about hardcoding any form of timeout into tests. The machines that run the CI have proven to be very slow at times (especially the MacOS ones), which, if I understand the changes, means that any machine that takes longer than 2 seconds to connect will cause a test failure. I suppose we'll just have to keep an eye on the CI and adjust this timeout accordingly.
We send the …
Yes, it sounds like there may be something that could be done in Strawberry, so it should be raised with them. Otherwise this all looks good.
This is now ready to be merged and will be deployed in pre-prod for further evaluation at scale.
This PR includes the following major changes:
Other changes after a review of PR #39:
Changes to tests: