Speed up tests on Travis CI (when using DRMAA) #238
We have reverted any changes to the test settings; this brings the runtime down by a few minutes, depending on the build. A sizable amount of time is spent in the install step (~4-5 min, based on a couple of cases), which we can't do much about.

Comparing the tests to runs on my computer, Travis is a bit slower in most cases. Below is a graph of the ratio of Travis time to my Late 2013 MacBook Pro. Compared to Travis CI, our MacBook Pro has more memory (16GB vs 3GB) and more cores (8 vs 2). Both use solid-state drives, though Travis CI may have some overhead from the VM-provided drive, and the disk space on Travis is shared with any other jobs running on the same node. From rudimentary profiling, we know our test suite uses no more than, and generally much less than, 1GB of RAM.

That being said, here are the stats. Listed in order are the tests that are 5 times slower or worse; asterisks are placed in front of those that are 10 times slower or worse. Note that these aren't particularly long tests, so optimizing them may not be necessary. Also note that the bold ones use IO; these tend to be the longer-running tests as well.

tests.test_nanshe.test_io.test_hdf5.test_search.TestHDF5Searchers.test_get_matching_paths: 0.3991s / 0.0678s = 5.88643067847

TL;DR: In general, Travis can be taken to be about 2x slower than our MacBook. The causes of this are not entirely clear, but likely boil down to worse specs, VM overhead, and sharing resources with other jobs. The best way to improve this is to return to the container environment by solving this issue ( https://github.com/jakirkham/nanshe/issues/230 ), but that is somewhat challenging. A quick improvement would be to cut down on unnecessary setup costs in testing. Following this would be optimizing the tests themselves or the code. However, a fair bit of time has already been spent on test and code optimization, so this would take some time as well.
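The slowdown ratio above can be reproduced with a short script. This is just a sketch for flagging slow tests from paired timings; the dictionary holds the one (Travis, local) pair reported above, and the 5x/10x thresholds mirror the asterisk convention used in the stats.

```python
# Sketch: flag tests whose Travis runtime is at least 5x the local runtime.
# Only the one timing pair reported above is included; "**" marks >= 10x.
timings = {
    "tests.test_nanshe.test_io.test_hdf5.test_search."
    "TestHDF5Searchers.test_get_matching_paths": (0.3991, 0.0678),
}

for name, (travis_s, local_s) in sorted(timings.items()):
    ratio = travis_s / local_s
    if ratio >= 5:
        marker = "** " if ratio >= 10 else ""
        print("%s%s: %.4fs / %.4fs = %.2fx" % (marker, name, travis_s,
                                               local_s, ratio))
```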
Opened the following issues to look into getting
Installing apt-get packages in the container environment can be accomplished this way ( http://docs.travis-ci.com/user/apt/ ).
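Per the Travis docs linked above, a container-safe apt install would look something like this in `.travis.yml`. This is only a sketch; the package name is a placeholder, and in the container environment only whitelisted packages can be installed.

```yaml
# .travis.yml fragment (sketch): apt installs in the container environment.
# The package name below is a placeholder, not our actual dependency list.
sudo: false
addons:
  apt:
    packages:
      - libhdf5-serial-dev
```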
…esources are better managed at setup and teardown. Related to jakirkham/nanshe#238.
Also, it is possible to cache our dependencies to speed up the build ( http://docs.travis-ci.com/user/caching/ ). However, we must use the container environment to do this.
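As a sketch of what that caching might look like, assuming pip's default download cache location, the fragment below caches downloaded packages between builds. Caching only works on the container-based (`sudo: false`) infrastructure, per the constraint noted above.

```yaml
# .travis.yml fragment (sketch): cache pip downloads between builds.
# Requires the container-based infrastructure (sudo: false).
sudo: false
cache:
  directories:
    - $HOME/.cache/pip
```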
Other thoughts include separating tests into a build matrix, for instance splitting regression tests from unit tests using environment variables ( http://docs.travis-ci.com/user/build-configuration/#The-Build-Matrix ). See this example ( http://docs.travis-ci.com/user/speeding-up-the-build/#Parallelizing-your-builds-across-virtual-machines ). Combined with nose attributes, the environment variables could simply name which attribute(s) to run ( http://nose.readthedocs.org/en/latest/plugins/attrib.html ). If this were done, it would be possible to combine the two coverage reports using coverage ( http://nedbatchelder.com/code/coverage/cmd.html#h_combining_data_files ). However, proper merging of results and correct submission hinges on a successful resolution of this issue ( travis-ci/travis-ci#929 ). A workaround might be to use the script from here ( https://github.com/dmakhno/travis_after_all ), but it requires the use of
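A minimal sketch of the matrix idea, assuming tests are tagged with nose attributes named `unit` and `regression` (our invention, not an existing convention in this repo): each matrix entry sets an environment variable, and the script passes it to nose's attrib plugin.

```yaml
# .travis.yml fragment (sketch): split the suite across two VMs.
# TEST_ATTR is a hypothetical variable; "unit"/"regression" are assumed
# nose attribute names attached to tests via @attr(...).
env:
  - TEST_ATTR=unit
  - TEST_ATTR=regression
script:
  - nosetests -a "$TEST_ATTR" --with-coverage
```

The per-job `.coverage` data files would then be merged with `coverage combine` before submission, which is where the travis-ci/travis-ci#929 issue above comes into play.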
Currently, it is taking around 12 mins to complete. This is not terrible, but it is twice as long as our current (master) test suite. Ideally, we want it to take a comparable amount of time to what it does now, though admittedly it may take a minute or so longer.