
Use binary search to find the failing packages faster? #44

Open · joeytwiddle opened this issue Jul 16, 2016 · 4 comments

joeytwiddle commented Jul 16, 2016

We have a big slow test suite, and a lot of dependencies. I fear running updtr will take all night. But perhaps there is a faster way, using a binary search:

  1. updtr updates all the packages to the latest versions.
  2. It runs the tests. If they succeed, then we are finished.
  3. Otherwise, at least one test failed. Revert half of the upgrades it just made, and test again.
  4. If the tests now succeed, updtr knows the problem upgrade(s) live in the half it just reverted, so it can focus on that half.
     Otherwise, if the tests still fail, updtr needs to keep testing both halves to track down the problem upgrade(s).
  5. Define this algorithm better, then try to explain it again. ;-) (There is a rough sketch below.)

I hope you get the idea. (If you have 32 dependencies, and one of them is breaking, a binary search could find it by running the test suite 5 times instead of 32 times.)
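
Something like this, in very rough TypeScript (the `applyUpgrades` and `runTests` helpers here are hypothetical, not part of updtr's actual API):

```ts
// A sketch, not updtr's real code. `applyUpgrades` is assumed to install the
// new versions for exactly the upgrades passed in and revert everything else;
// `runTests` resolves to true when the test suite passes.
type Upgrade = { name: string; from: string; to: string };

async function findBreaking(
  upgrades: Upgrade[],
  applyUpgrades: (ups: Upgrade[]) => Promise<void>,
  runTests: () => Promise<boolean>
): Promise<Upgrade[]> {
  await applyUpgrades(upgrades);
  if (await runTests()) return []; // this subset is fine on its own
  if (upgrades.length === 1) return upgrades; // isolated a culprit
  const mid = Math.floor(upgrades.length / 2);
  // Test each half separately (applying one half implicitly reverts the other).
  const left = await findBreaking(upgrades.slice(0, mid), applyUpgrades, runTests);
  const right = await findBreaking(upgrades.slice(mid), applyUpgrades, runTests);
  return [...left, ...right];
}
```

With a single breaking upgrade this runs the test suite O(log n) times; with several culprits it falls back to testing more subsets, as step 4 anticipates. Note that if two upgrades only break things in combination, each half passes on its own and this simple sketch will not isolate them.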

(Optional: Once a package has been identified as breaking the tests, updtr could even do a binary search on the versions to determine the exact version which caused the breakage.)
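
For that optional step, the classic first-bad-version bisection would look something like this (again a hypothetical sketch; it assumes breakage is monotonic, i.e. once a version fails the tests, every later version fails too):

```ts
// Hypothetical first-bad-version search. `failsWith` would install the given
// version and run the test suite, resolving to true if the tests fail.
async function firstBadVersion(
  versions: string[], // ascending order, e.g. from `npm view <pkg> versions --json`
  failsWith: (version: string) => Promise<boolean>
): Promise<string | undefined> {
  let lo = 0;
  let hi = versions.length - 1;
  let firstBad: string | undefined;
  while (lo <= hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (await failsWith(versions[mid])) {
      firstBad = versions[mid]; // remember it, then look for an earlier failure
      hi = mid - 1;
    } else {
      lo = mid + 1;
    }
  }
  return firstBad;
}
```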

joeytwiddle changed the title from "Use binary search to find failing upgrades faster?" to "Use binary search to find the failing packages faster?" on Jul 16, 2016
jhnns (Member) commented Jul 20, 2016

Nice idea! 👍

However, one thing we need to consider: if we have to revert all the files and re-install the old dependencies at each step, could npm end up doing more work overall, making it even slower than it is now?

matthaias (Member) commented

Also: updtr only does this for outdated modules. Maybe I'm wrong, but if you run updtr once a week, I think you won't have too many outdated modules?

joeytwiddle (Author) commented

@jhnns That's a valid concern. It really depends on how long your test suite takes to run, compared to how long npm takes. I have found npm runs somewhat faster once it has cached the packages, but even then the time is still not zero!

I suspect downgrading would be faster than upgrading, because the package should already be cached. But upgrading to the most recent version could be slow if npm checks the registry each time to determine what the most recent version is. To reduce registry queries, perhaps updtr could determine the most recent version of each dependency once, when it starts running, and after that ask npm to install those specific versions rather than asking it for the most recent. (Or perhaps npm already debounces identical registry queries for a short time; I don't know about that.) There is a rough sketch of this below.
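
To illustrate, here is a rough sketch of the "resolve once, then install exact versions" idea. `npm outdated --json` and `npm install <pkg>@<version>` are real npm commands; everything around them is an assumption about how updtr might wire them together, not its current behaviour:

```ts
// Sketch only: one registry round-trip up front, then exact installs that
// need no further "what is latest?" lookups.
import { execSync } from "child_process";

function npmOutdatedJson(): Record<string, { latest: string }> {
  try {
    const out = execSync("npm outdated --json", { encoding: "utf8" });
    return out ? JSON.parse(out) : {};
  } catch (err: any) {
    // `npm outdated` exits non-zero when packages are outdated, but it
    // still prints the JSON report on stdout.
    return err.stdout ? JSON.parse(err.stdout) : {};
  }
}

for (const [pkg, info] of Object.entries(npmOutdatedJson())) {
  // Installing an exact version avoids re-querying the registry for "latest".
  execSync(`npm install ${pkg}@${info.latest}`, { stdio: "inherit" });
}
```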

@matthaias has a good point: this could well be overkill if there are only a few updates to check. It might help on the first run (when many dependencies are behind), but offer little benefit if run regularly after that.

I just wanted to offer the suggestion when I thought of it. But if nobody else is crying out for it, then it's probably not worth pursuing.

jhnns (Member) commented Aug 31, 2016

I think this is a nice idea, but its real-world benefit is a bit uncertain, so it's personally not a high priority for me. If anyone wants to give it a try, go ahead :)

I'll leave it open for discussion.
