Coerce major wildcards #13535

Open · wants to merge 2 commits into master
Conversation

@Nikita240 Nikita240 commented Mar 5, 2024

What does this PR try to resolve?

Fixes #13534 #9029

This PR makes major wildcards in sub-dependencies coercible by a more restrictive dependency elsewhere in the graph, avoiding unnecessary duplicate dependencies.

It does this by presorting the remaining_candidates array to prioritize candidates that already have activations (if you'd like an easier diff to understand the change, see the first commit).
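To make the idea concrete, here is a minimal, self-contained sketch of the prioritization step (illustrative only: the free-standing helper and the string versions are stand-ins, not this PR's actual code, which partitions cargo's Summary candidates inside RemainingCandidates):

fn prioritize_activated<T>(candidates: &mut [T], is_activated: impl Fn(&T) -> bool) {
    // `sort_by_key` is a stable sort and `false` orders before `true`,
    // so already-activated candidates move to the front while each
    // group keeps its original highest-version-first order.
    candidates.sort_by_key(|c| !is_activated(c));
}

fn main() {
    // Candidate versions as the resolver sees them, newest first;
    // pretend "1.0.2" is already activated elsewhere in the graph.
    let mut candidates = vec!["2.0.0", "1.2.0", "1.0.2", "1.0.0"];
    prioritize_activated(&mut candidates, |v| *v == "1.0.2");
    assert_eq!(candidates, ["1.0.2", "2.0.0", "1.2.0", "1.0.0"]);
}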

How should we test and review this PR?

There are unit tests for the new behavior.

Additional information

There is a bit of a special case here that needs to be talked about. When there are multiple possible activated candidates that could be used to coerce the wildcard, a choice needs to be made.

I did the simplest and most intuitive implementation, which is to always choose the candidate with the highest SemVer.

An argument could be made for an alternative: if the root crate has a direct dependency on the package we are coercing, always pick whatever the root activates. This is nice because it seemingly allows maximum compatibility for the end user, but it feels very inconsistent, as it will inevitably favor binaries over libraries.
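For illustration, here is a hedged sketch of that tie-break written against the semver crate (a hypothetical helper, not the PR's code): among the already-activated versions that satisfy the requirement, pick the highest.

use semver::{Version, VersionReq};

// Among the already-activated versions that match the requirement,
// choose the highest SemVer; None means there is nothing to coerce to.
fn coerce_to_activated(req: &VersionReq, activated: &[Version]) -> Option<Version> {
    activated.iter().filter(|v| req.matches(v)).max().cloned()
}

fn main() {
    let req = VersionReq::parse("*").unwrap();
    let activated = [
        Version::parse("0.9.1").unwrap(),
        Version::parse("1.4.0").unwrap(),
    ];
    // Both activations match the wildcard; the highest SemVer wins.
    assert_eq!(
        coerce_to_activated(&req, &activated).unwrap(),
        Version::parse("1.4.0").unwrap()
    );
}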

@rustbot (Collaborator) commented Mar 5, 2024

Thanks for the pull request, and welcome! The Rust team is excited to review your changes, and you should hear from @epage (or someone else) some time within the next two weeks.

Please see the contribution instructions for more information. Namely, in order to ensure the minimum review times lag, PR authors and assigned reviewers should ensure that the review label (S-waiting-on-review and S-waiting-on-author) stays updated, invoking these commands when appropriate:

  • @rustbot author: the review is finished, PR author should check the comments and take action accordingly
  • @rustbot review: the author is ready for a review, this PR will be queued again in the reviewer's queue

@rustbot added the A-dependency-resolution (Area: dependency resolution and the resolver) and S-waiting-on-review (Status: Awaiting review from the assignee but also interested parties.) labels Mar 5, 2024
@epage (Contributor) commented Mar 5, 2024

r? @Eh2406

@rustbot assigned Eh2406 and unassigned epage Mar 5, 2024
@Eh2406 (Contributor) commented Mar 5, 2024

I will try and make time to think about this more carefully as soon as I can. In the meantime I'm gonna quote something Ed routinely says:

> When this is ready for review, I'd recommend reworking the commits
>
>   • An initial commit should exist that has passing tests, showing the current behavior
>   • A follow-up commit should change the behavior and update the tests so they still pass
>     • This helps show we are testing the right thing and makes the change in behavior very clear
>   • Any refactorings along the way should be separate commits

Specifically in this case I'm pretty sure some of these tests would already pass before this PR. Which is fine, more tests for the existing functionality are always helpful. But it's not clear as a reviewer if you expected there to be a behavior change.

@Nikita240 (Author):

> I will try and make time to think about this more carefully as soon as I can. In the meantime I'm gonna quote something Ed routinely says:
>
> When this is ready for review, I'd recommend reworking the commits
>
>   • An initial commit should exist that has passing tests, showing the current behavior
>   • A follow-up commit should change the behavior and update the tests so they still pass
>     • This helps show we are testing the right thing and makes the change in behavior very clear
>   • Any refactorings along the way should be separate commits
>
> Specifically in this case I'm pretty sure some of these tests would already pass before this PR. Which is fine, more tests for the existing functionality are always helpful. But it's not clear as a reviewer if you expected there to be a behavior change.

Understood, I will force-push the commit tree to update it as soon as I'm done brainstorming more tests.

@Nikita240 (Author):

> Specifically in this case I'm pretty sure some of these tests would already pass before this PR.

Yes, test_wildcard_minor would pass. I just wrote that next to test_wildcard_major to demonstrate how the current behavior differs and is inconsistent based on where the wildcard is placed.

@Nikita240 (Author):

Ok I fixed the commit tree.

Here is a summary of the tests:

New tests with current implementation

test test_range_major ... FAILED
test test_wildcard_major ... FAILED
test test_wildcard_major_coerced_by_indirect_subdependency ... FAILED
test test_wildcard_major_coerced_by_subdependency ... FAILED
test test_wildcard_major_duplicate_selection ... ok
test test_wildcard_minor ... ok

New tests with proposed implementation

test test_range_major ... ok
test test_wildcard_major ... ok
test test_wildcard_major_coerced_by_subdependency ... ok
test test_wildcard_major_coerced_by_indirect_subdependency ... ok
test test_wildcard_major_duplicate_selection ... ok
test test_wildcard_minor ... ok

There is a regression however:

test resolving_with_sys_crates ... FAILED

This is because we actually fixed an unnecessary duplicate dependency within that test.

To address this, I changed the assertion to remove the duplicate dependency and then added a new test that replicates the old test by explicitly pinning the requirements to generate duplicate dependencies (see this commit):

test resolving_with_sys_crates ... ok
test resolving_with_sys_crates_duplicates ... ok

@epage (Contributor) commented Mar 5, 2024

@Nikita240 to clarify what @Eh2406 asked, each commit should be passing tests. This helps to test the tests and makes changes in behavior clearer to reviewers.

From my quick look at the commits and with your last message, it sounds like this wasn't done.

#13505 is an example of this

  • 0955a80 adds tests, they pass
  • 0f9d4a3 changes behavior and updates tests

@Nikita240 (Author):

> @Nikita240 to clarify what @Eh2406 asked, each commit should be passing tests. This helps to test the tests and makes changes in behavior clearer to reviewers.
>
> From my quick look at the commits and with your last message, it sounds like this wasn't done.
>
> #13505 is an example of this
>
>   • 0955a80 adds tests, they pass
>   • 0f9d4a3 changes behavior and updates tests

Ahh, I misunderstood. Thanks

@Nikita240 (Author):

Rebuilt the tree again 👍🏻

@Eh2406 (Contributor) commented Mar 5, 2024

I got a chance to look this over. My brain is a little bit exploding with different ways I want to respond to this PR. Sorry if this comment ends up all over the place. So before I forget, thank you for working on this very tricky issue!

Edit: Got sucked into a meeting and so this comment is unaware of the past two hours of progress, although I don't think it affects things that much.

I'm going to try and respond from most detailed out toward most general.

The most detailed feedback is about how things are named and where comments go, which I'm largely going to save until after we've discussed the bigger questions. Except to thank you for working on tests. This stuff is tricky and hard to get right; I deeply appreciate your use of tests.

At the next level the critical question here is what property is this trying to uphold? If we can define it strictly enough we can add a proptest and discover all kinds of fun counterexamples. Even without that level of strictness we can add an assert to the code and watch proptest generate examples that should be standalone tests.

Specifically things get tricky with regard to what order operations are evaluated in. Most concretely, this code moves checking cx.activations.get from RemainingCandidates::next to ValidCandidates::new. I don't remember the structure of the loops well enough: what ensures cx.activations will not change in between new and next?

Similarly, in these small examples the wildcard requirements will get evaluated after the specific requirements, because we have a heuristic that requirements matching fewer versions get processed first. But this is not always true. We could have a situation where the direct dependency is a wildcard, and only after we've evaluated it do we discover that a transitive dependency adds a specific requirement. So I suspect that whatever property we're trying to uphold, this will only make it "heuristically" true and not "mathematically" true. Nothing wrong with that, the resolver involves a lot of load-bearing heuristics, but it's a different kind of discussion.

At the last level of abstraction is a question of design: is this "doesn't unnecessarily include newer versions" a property we even want cargo's resolver to have?

  • Pro: There are clearly a lot of people who've been asking us for this functionality; just look at the linked issues. Also, we already use a related heuristic when a new dependency is discovered: if you add foo = "*" to Cargo.toml it will select a version of foo that's already in your lock file, originally to allow more builds to work offline. (Perhaps this "optimization" should be backed out now that we have --offline.)
  • Con: We don't know how many people intentionally want "the latest version of their dependencies even if there's another version in the tree", because that is what cargo currently does, so they have no reason to complain to us.

@Nikita240 (Author) commented Mar 5, 2024

> The most detailed feedback is about how things are named and where comments go, which I'm largely going to save until after we've discussed the bigger questions.

I appreciate that. We can always work that out once there is consensus on the direction to take this.

> At the next level the critical question here is what property is this trying to uphold? If we can define it strictly enough we can add a proptest and discover all kinds of fun counterexamples.

We should indeed define this. Here is my attempt:

"When a dependency is explicitly requested to have a non-SemVer-compatible range, it's compatibility across SemVer boundaries should be resolved in the same exact way that a SemVer compatible range would be. For example: 0.* should follow the same rules as 0.1.*."

> Specifically things get tricky with regard to what order operations are evaluated in. Most concretely, this code moves checking cx.activations.get from RemainingCandidates::next to ValidCandidates::new. I don't remember the structure of the loops well enough: what ensures cx.activations will not change in between new and next?
>
> Similarly, in these small examples the wildcard requirements will get evaluated after the specific requirements, because we have a heuristic that requirements matching fewer versions get processed first. But this is not always true. We could have a situation where the direct dependency is a wildcard, and only after we've evaluated it do we discover that a transitive dependency adds a specific requirement. So I suspect that whatever property we're trying to uphold, this will only make it "heuristically" true and not "mathematically" true. Nothing wrong with that, the resolver involves a lot of load-bearing heuristics, but it's a different kind of discussion.

I share all of these concerns with you. I think it would be wise for me to refactor the code to ensure that these heuristics are strictly upheld (and to make it difficult to break in the future).

My goal with the current implementation was to get working code with a minimal change-set to make it simpler to grok what the proposal does. Refactoring the code to be more strict would require a lot of code changes, probably a few levels up from the current function, so it would have been a huge diff to try to process.

> At the last level of abstraction is a question of design: is this "doesn't unnecessarily include newer versions" a property we even want cargo's resolver to have?
>
>   • Pro: There are clearly a lot of people who've been asking us for this functionality; just look at the linked issues. Also, we already use a related heuristic when a new dependency is discovered: if you add foo = "*" to Cargo.toml it will select a version of foo that's already in your lock file, originally to allow more builds to work offline. (Perhaps this "optimization" should be backed out now that we have --offline.)
>   • Con: We don't know how many people intentionally want "the latest version of their dependencies even if there's another version in the tree", because that is what cargo currently does, so they have no reason to complain to us.

This is the main question we have to answer.

My opinion:

You have two possible conditions to optimize for:

  • Maximize SemVer
  • Minimize duplicates

Which one of these a user actually wants is likely based on whether the dependency is private or public, which, as far as I understand, is one of the concerns of the public-private-dependencies RFC.

However, until we have that, here are some assertions we can make:

  • With the current resolver implementation, when a library specifies version = ">0.1, <=0.10" it's essentially interpreted as version = "^0.10" by the resolver. Specifying version = ">0.1, <=0.10" only allows the user to manually update the lock file with cargo update --precise. This is likely unintuitive to most users (see the example after this list).
  • A library author is likely only going to specify non-SemVer-compatible ranges for a dependency if it's a public dependency.
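As an illustration of the first bullet, here is how that range behaves when each version is matched independently and the highest match wins (written against the semver crate, which cargo uses for version requirements; the version list is made up):

use semver::{Version, VersionReq};

fn main() {
    let req = VersionReq::parse(">0.1, <=0.10").unwrap();
    let available = ["0.2.0", "0.5.3", "0.10.0"]
        .iter()
        .map(|s| Version::parse(s).unwrap());
    // Picking the highest matching version means the wide range
    // degenerates to "^0.10" in practice.
    let picked = available.filter(|v| req.matches(v)).max().unwrap();
    assert_eq!(picked, Version::parse("0.10.0").unwrap());
}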

Hence, I think the following logic is a fair compromise:

  • Maximize SemVer, UNLESS the major version is unclear, in which case minimize duplicates while maximizing SemVer.

I basically view version = "0.*" or version = ">0.1, <=0.10" as an open invitation by the library author to coerce the dependency.

@Eh2406 (Contributor) commented Mar 6, 2024

> I basically view version = "0.*" or version = ">0.1, <=0.10" as an open invitation by the library author to coerce the dependency.

This seems pretty convincing to me, but I started a conversation with the rest of the cargo team on zulip.

> I share all of these concerns with you. I think it would be wise for me to refactor the code to ensure that these heuristics are strictly upheld (and to make it difficult to break in the future).

New, better abstractions would certainly help here. The existing ones are not great. That being said, if we find a way to make sure this "always" happens, then we have proven SAT == MAX-SAT. We need to be comfortable with the fact that there are going to be cases where the resolver does not give the most "optimal" solution.

> We should indeed define this. Here is my attempt:
>
> "When a dependency is explicitly requested to have a non-SemVer-compatible range, its compatibility across SemVer boundaries should be resolved in exactly the same way that a SemVer-compatible range would be. For example: 0.* should follow the same rules as 0.1.*."

I think this is a necessary but insufficient condition for the properties we care about. Both of those examples are already handled "the same exact way" currently: they match the largest version that makes the system work.

(On a side note I will be out for a few days and may not be able to respond promptly until next week.)

@Eh2406 (Contributor) left a comment

Just sharing some thoughts before you dig into polishing the implementation.

@@ -260,16 +260,8 @@ fn activate_deps_loop(
.conflicting(&resolver_ctx, &dep)
.is_some();

let mut remaining_candidates = RemainingCandidates::new(&candidates);

// `conflicting_activations` stores all the reasons we were unable to
Contributor:

This comment has a lot more to do with how conflicting_activations influences the entire loop, so I think it should stay and not move with the ConflictMap::new();.

/// `Candidate` is the candidate to attempt to activate, and the `bool` is
/// an indicator of whether there are remaining candidates to try or if
/// we've reached the end of iteration.
fn next(&mut self) -> Option<(Summary, bool)> {
Contributor:

If next does not take additional arguments, then it should probably be a proper iterator. (Or document why it can't implement that trait.)
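A minimal stand-in sketch of that suggestion (hypothetical types; the real code yields Summary values rather than integers):

use std::collections::VecDeque;

struct Candidates {
    // Stand-in for the real candidate list, highest version first.
    remaining: VecDeque<u32>,
}

impl Iterator for Candidates {
    // Mirrors the `(Summary, bool)` shape: the bool reports whether
    // more candidates remain after the one being returned.
    type Item = (u32, bool);

    fn next(&mut self) -> Option<Self::Item> {
        let candidate = self.remaining.pop_front()?;
        Some((candidate, !self.remaining.is_empty()))
    }
}

fn main() {
    let iter = Candidates { remaining: [3, 2, 1].into() };
    assert_eq!(iter.collect::<Vec<_>>(), [(3, true), (2, true), (1, false)]);
}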

let mut conflicting_activations = ConflictMap::new();

let mut activated_candidates: Vec<Summary> = Vec::with_capacity(candidates.len());
let mut non_activated_candidates: Vec<Summary> = Vec::with_capacity(candidates.len());
Contributor:

I'm a little confused why we need two vectors here (that get concatenated) instead of just sorting one vec. And if it's just a sort, can it be combined with the existing sort at

pub fn sort_summaries(

@Nikita240 (Author) commented Mar 6, 2024

It's not quite a sort. We're actually just splitting the vector into two buckets, while preserving the order of the elements.

While this can technically be done in place, you'd still need a tmp array to copy out segments. Here is the theoretical performance comparison:

N = activated candidates
M = total candidates

Current implementation with split arrays

item memcopies = 2 * M
array mallocs = 3

Current implementation if optimized

item memcopies = 2 * M - N
array mallocs = 1

In-place implementation

item memcopies = N * (M + 2)
array mallocs = 1

Author:

This is what the optimal version looks like, imo:

pub trait PrioritizeVecExt<T> {
    fn prioritize_by_predicate(&mut self, predicate: impl Fn(&T) -> bool) -> &mut Self;
}

impl<T: Clone> PrioritizeVecExt<T> for Vec<T> {
    fn prioritize_by_predicate(&mut self, predicate: impl Fn(&T) -> bool) -> &mut Self {
        let mut non_priority = Vec::with_capacity(self.len());

        let mut i = 0;
        for j in 0..self.len() {
            if predicate(&self[j]) {
                self[i] = self[j].clone();
                i += 1;
            } else {
                non_priority.push(self[j].clone());
            }
        }
        for k in 0..non_priority.len() {
            self[i + k] = non_priority[k].clone();
        }
        self
    }
}

#[test]
fn test_prioritize_by_predicate() {
    let mut v = vec![1, 2, 3, 4, 5, 6, 7, 8, 9];
    v.prioritize_by_predicate(|&x| x % 2 == 0);
    assert_eq!(v, vec![2, 4, 6, 8, 1, 3, 5, 7, 9]);
}

Contributor:

Yes but:

  • candidates.sort_by_key(|c| cx.activated.contains(c)) is a lot shorter to read.
  • It takes a lot of pointer (Summary) copying to compete with the cost of two allocations.
  • We already have code sort_summaries sorting this array.

Overall we should merge whichever solution is the easiest to understand.

Author:

Ok that's neat. I didn't realize sort_by_key would maintain segment order when used with a boolean predicate.

Contributor:

All the sort functions maintain insertion order whenever the keys compare equal. They each come with an _unstable variant that is a little faster but does not uphold this property. (And yes, it's unfortunate that "unstable" has a specific meaning in Rust and a different specific meaning in sorting algorithms.)
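A quick demonstration of that property with a boolean key (plain std, just to show the stable behavior being relied on):

fn main() {
    let mut v = vec![1, 2, 3, 4, 5, 6];
    // `false` sorts before `true`, and the stable sort keeps equal
    // keys in their original relative order: evens first, in order.
    v.sort_by_key(|x| x % 2 != 0);
    assert_eq!(v, [2, 4, 6, 1, 3, 5]);
}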

@Nikita240 (Author) commented Mar 11, 2024

Ok after poking around a bit I realized we cannot filter & sort in-place. The outer loop relies on the original candidates vector to generate the Error here:

debug!("no candidates found");
Err(errors::activation_error(
    &resolver_ctx,
    registry.registry,
    &parent,
    &dep,
    &conflicting_activations,
    &candidates,
    gctx,
))

That means if we're pre-filtering the array, we must alloc a new one and copy the pointers over.

On the topic of using sort_by_key(): after playing around with it, I actually think the way it's currently done is better, because we are able to both sort and filter based on the result of one comparison. If we did it as filter().sort_by_key() we'd be doing an unnecessary extra comparison on each item instead of relying on the else condition.

As to how to remove the extra allocation I'm currently doing: I could make the container something like a VecDeque, where the "prioritized" candidates are pushed to the back and the "non-prioritized" candidates are pushed to the front, then use a chained iterator that iterates in reverse until it reaches the "partition point", after which it iterates over the front elements normally. A sketch of this idea follows.
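A rough, hypothetical sketch of that VecDeque layout (one concrete orientation of the idea; the names are made up and this is not the PR's code):

use std::collections::VecDeque;

// Prioritized items go to the back (keeping insertion order); the rest
// go to the front (ending up reversed). One allocation holds both.
fn split_into_deque<T>(
    items: impl IntoIterator<Item = T>,
    is_prioritized: impl Fn(&T) -> bool,
) -> (VecDeque<T>, usize) {
    let mut dq = VecDeque::new();
    let mut front_len = 0;
    for item in items {
        if is_prioritized(&item) {
            dq.push_back(item);
        } else {
            dq.push_front(item);
            front_len += 1;
        }
    }
    (dq, front_len)
}

fn main() {
    let (dq, front_len) = split_into_deque(1..=6, |x| x % 2 == 0);
    // Walk the prioritized back segment in order, then the front
    // segment in reverse to restore its original order.
    let order: Vec<i32> = dq
        .iter()
        .skip(front_len)
        .chain(dq.iter().take(front_len).rev())
        .copied()
        .collect();
    assert_eq!(order, [2, 4, 6, 1, 3, 5]);
}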

Contributor:

The resolver is pretty fast these days. On one hand that took a lot of focus on micro-optimizations in these hot loops. On the other hand that means we can afford to write things "slower but more readable". Let's figure out what we want before geeking out too much about how to get all the performance out of it?

@@ -774,18 +757,50 @@ impl RemainingCandidates {
// semver-compatible versions of a crate. For example we can't
// simultaneously activate `foo 1.0.2` and `foo 1.2.0`. We can,
// however, activate `1.0.2` and `2.0.0`.
//
// Here we throw out our candidate if it's *compatible*, yet not
// equal, to all previously activated versions.
if let Some((a, _)) = cx.activations.get(&b_id.as_activations_key()) {
Contributor:

I brought this up in prose, but I want to mark it as a concern: what ensures that a relevant package is not added to activations between new and next?

@Nikita240 (Author) commented Mar 11, 2024

There is nothing preventing it, but it seemingly does not matter.

The old implementation would take a &mut ConflictMap in the next() method, but it would ONLY append new conflicts. It would not perform checks against or remove elements. As a result, it does not matter if we move the appending of conflicts to new() and do them preemptively.

Contributor:

I was not worried about ConflictMap getting corrupted, although maybe I should have been. If an item is missing from ConflictMap that did in fact prevent a candidate version from being tried that would cause big problems. Similarly, if we backtrack and construct a new ConflictMap it needs to know about all of the conflicts that were identified during the call to new.

The situation I was worried about is next returning a candidate that conflicts with a candidate in activations. Something like:

  • new is called for <=1.2. Because there is no version in activations self.prioritized_candidates is [1.2.0, 1.1.1, 1.1.0, 1.0.0, 0.9.0 .... ]
  • Any number of things happen including discovering new dependencies and backtracking and whatever.
  • new is called for >=1, <2. Because there is no version in activations self.prioritized_candidates is [1.999.999, 1.999.998, ... , 1.1.0, 1.0.0]
  • next is called on the second one suggesting candidate 1.999.999
  • since there are no problems with it that version is selected and added to activations.
  • Any number of things happen including discovering new dependencies and backtracking and whatever.
  • next is called on the first one incorrectly suggesting candidate 1.2.0 💥

@rustbot added the S-waiting-on-author (Status: The marked PR is awaiting some action (such as code changes) from the PR author.) label and removed the S-waiting-on-review (Status: Awaiting review from the assignee but also interested parties.) label Mar 6, 2024
@Eh2406 (Contributor) commented Mar 11, 2024

I am back. Let me know how I can help.

@bors (Collaborator) commented Sep 27, 2024

☔ The latest upstream changes (presumably #14583) made this pull request unmergeable. Please resolve the merge conflicts.
