parallelism of pytest tests #1702
Travis engines have 2 cores: https://docs.travis-ci.com/user/reference/overview/#virtualization-environments
FTR xdist internally identifies Travis and knows that it has two cores.
Depends on: #1573
Added #1573 (comment). Any further ideas on this? I'm wondering if we can use State to resolve this naming issue internally (some k:v where instance maps to the instance+uuid naming or so) and keep the end-user facing instance names.
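The state mapping described above could be sketched like this (a minimal, hypothetical illustration; the class and method names are assumptions, not molecule APIs): user-facing names stay stable, while the state layer resolves them to unique internal names.

```python
import uuid

# Hypothetical sketch: keep user-facing instance names stable while a
# state layer maps each one to a unique internal name (name + short UUID).
class InstanceNameState:
    def __init__(self):
        self._mapping = {}

    def internal_name(self, instance_name):
        # Generate the unique name once and reuse it on later lookups,
        # so repeated sequence steps resolve to the same instance.
        if instance_name not in self._mapping:
            suffix = uuid.uuid4().hex[:8]
            self._mapping[instance_name] = f"{instance_name}-{suffix}"
        return self._mapping[instance_name]


state = InstanceNameState()
name = state.internal_name("instance")
assert state.internal_name("instance") == name  # stable across calls
```

The point of the indirection is that commands like `molecule login --host instance` keep working with the name the user wrote, while concurrent runs never collide internally.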
Not sure, it'd also be weird: when users will run
@lwm after rethinking what you wrote, I think you're actually on the right track and using state would probably be beneficial.
Further data points: #1715 (comment).
I doubt we are facing a pytest-xdist issue here; it is mostly the lack of concurrency support in molecule which prevents us from running tests in parallel. I attempted to run the functional ones and it mainly fell apart. Still, this does not mean that we should not sort this out. It just means that it will take a lot of time to make the changes required to allow molecule to run multiple instances in parallel, under the same user on a single machine. I do want to be able to run molecule in parallel on multiple roles and multiple scenarios, and I already have repositories where this is desirable. At the moment we run sequentially, but this increases the runtime considerably as the number of scenarios and roles grows.
I created
We need to solve two things here:
|
I am a bit worried about unique instance names because it means no caching would be available. Also remember that users may run different steps at any time and in any order (test, verify, converge, destroy, ...). This makes me think that at most we could make a temp folder unique per scenario path (checksum of the full path?). Another concern is leftovers. Who is going to ensure that we do not end up with huge leftovers in molecule temp folders (from partial executions)? BTW, I am considering having a look at how pre-commit does the caching of repos; maybe I can learn something out of it. But as my time is limited I would appreciate it if someone else can look into that.
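The "temp folder unique per scenario path" idea could look roughly like this (a sketch only; the function name and directory layout are assumptions, not molecule's actual implementation):

```python
import hashlib
import tempfile
from pathlib import Path

# Hypothetical sketch: derive a stable per-scenario temp directory from a
# checksum of the scenario's absolute path, so repeated runs of the same
# scenario reuse one cache directory instead of leaking new ones.
def scenario_tmp_dir(scenario_path):
    digest = hashlib.sha256(
        str(Path(scenario_path).resolve()).encode()
    ).hexdigest()[:12]
    return Path(tempfile.gettempdir()) / "molecule" / digest


d1 = scenario_tmp_dir("/roles/myrole/molecule/default")
d2 = scenario_tmp_dir("/roles/myrole/molecule/default")
assert d1 == d2  # same scenario path always maps to the same directory
```

Because the directory name is deterministic, caching survives across runs, and leftover cleanup reduces to pruning `$TMPDIR/molecule/*` entries whose source paths no longer exist.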
I've been trying to make progress based on #2108 (review). Molecule partial sequence runs ( So, I've been thinking about this proposal:
We have the following sequences now: We'd have to adapt our functional tests to only test sequences that we can run concurrently and rely on the fact that the create/converge moving parts are tested in the context of another functional test. I think that is fine in practice. Thoughts?
OK, after a quick spike, the above plan is not workable because something must be stored to disk on the sequence run (like the downloaded role files). So, I'm going to try and adjust the internal APIs to be parallelizable (use random UUIDs for paths and instances) instead of trying to memcache all the internal state. This seems doable and might actually be easier to manage.
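The UUID approach could be sketched as follows (hypothetical names throughout; `run_id`, `unique_instance_name`, and `ephemeral_directory` are illustrations, not molecule internals): each run generates a fresh identifier, and both the instance names and the on-disk state directory derive from it, so concurrent runs of the same scenario cannot collide.

```python
import uuid
from pathlib import Path

# One fresh identifier per run; everything unique hangs off it.
run_id = uuid.uuid4().hex


def unique_instance_name(base_name):
    # Hypothetical helper: user-visible base name plus the run's UUID.
    return f"{base_name}-{run_id}"


def ephemeral_directory(base="/tmp/molecule"):
    # Hypothetical helper: per-run state directory where downloaded
    # roles and other sequence artifacts would land.
    return Path(base) / run_id


assert unique_instance_name("instance-1").endswith(run_id)
```

The trade-off against the checksum scheme above is the one raised earlier in the thread: unique-per-run names defeat caching, so leftovers from partial executions need explicit cleanup.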
I've got a proof of concept over at #2135. |
Working towards closing ansible#1702.
OK, #2137 is out; now working on this. After that is merged, I think we can start to address this issue. We'll need to figure out how to allow pytest to select those tests that can be run in parallel - anything that does Looks like this is where we need to look ... pytest-dev/pytest-xdist#18 ...
We could perhaps arrange it another way: we mark the tests that we know cannot be run in parallel and then arrange An example of what we need to avoid: https://travis-ci.com/ansible/molecule/jobs/174056358#L1822.
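The marking idea could be done with a custom pytest marker (a sketch; the marker name `serial` is an assumption, not an existing molecule convention):

```python
# conftest.py sketch: register a "serial" marker for tests that must
# not run concurrently with anything else.
import pytest


def pytest_configure(config):
    # addinivalue_line registers the marker so pytest does not warn
    # about an unknown mark.
    config.addinivalue_line(
        "markers", "serial: test must run alone, not under xdist"
    )
```

The suite would then run in two phases: `pytest -m "not serial" -n auto` for the parallel-safe tests under xdist, followed by a plain `pytest -m serial` pass for the rest.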
PR to reduce workload over at #2146. Also, I've experimented with pytest markers to run some functional tests in parallel and some not. I can only get ~25 to run in parallel and the rest (~200+) are still in serial mode. Maybe we can't remove sharding ... I might just go with the markers instead.
See #2147 as an example of what we can do to parallelize the functional tests!
OK, #2155 is as far as we can take this for now.
Issue Type
To execute the tests, pytest-xdist is installed as a requirement, but pytest is invoked without the -n option, which would permit some parallelism during tests.
See: https://pypi.org/project/pytest-xdist/
In pytest.ini, we could complete the addopts line with an option relevant to the Travis runner (to be determined).
This option has to be tested: it might positively influence the duration of Travis builds.
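A possible pytest.ini change, to be benchmarked (the value is an assumption; `-n auto` lets xdist detect the CPU count, or `-n 2` could pin it to Travis's two cores):

```ini
[pytest]
addopts = -n auto
```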