
fix redis cluster test_recycled test #345

Merged (5 commits) on Aug 5, 2024

Conversation

@rgrfoss (Contributor) commented Aug 5, 2024

No description provided.

@rgrfoss mentioned this pull request Aug 5, 2024
@rgrfoss (Contributor, Author) commented Aug 5, 2024

Worked on my machine 🙃

@bikeshedder (Owner)
The code is nicer to look at but I don't see how it would change the outcome of the test. 🤔

It's just a refactoring, not a change to what the test does.

Maybe the test is flawed to begin with? I honestly don't know how Redis works in cluster mode. It almost looks like the connection switches servers in a random fashion?

@bikeshedder (Owner)
I'm actually surprised that this code compiles to begin with:

drop(client_id_1);
// ...
assert_eq!(
    client_id_1, client_id_2,
    "The Redis connection was not recycled"
);

I would have expected that Rust doesn't allow accessing a value after it's dropped. It works because client_id_1 implements Copy and is therefore not consumed by the drop(...) call. I would have expected clippy to complain about this no-op.
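
A minimal sketch (not from the PR) of why the drop is effectively a no-op for Copy types:

// Minimal sketch, not from the PR: drop() on a Copy type receives a copy,
// so the original binding stays usable and the call does nothing useful.
let client_id: i64 = 42;   // i64 implements Copy
drop(client_id);           // drops a copy, not the original binding
assert_eq!(client_id, 42); // still compiles and passes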

The Arc<Mutex<...>> also doesn't add anything to the test. The code runs as a single future with no concurrency. Therefore locking a Mutex in between the two Pool::get calls doesn't change the outcome.

@rgrfoss (Contributor, Author) commented Aug 5, 2024

Redis Cluster might redirect connections to different nodes, potentially resulting in different client IDs even if the connection is logically the same.

@bikeshedder (Owner)
> Redis Cluster might redirect connections to different nodes, potentially resulting in different client IDs even if the connection is logically the same.

Does it change servers in between query_async calls? If that's the case then this entire test is somewhat meaningless to begin with.

The latest code you committed doesn't test anything. It just creates a variable (wrapped inside an Arc<Mutex<...>>), performs a Redis query, and then changes its value. You could replace the Redis calls with any code that doesn't panic and it would still compile and run. It's a very complicated version of this test:

let mut reused = false;
reused = true;
assert!(reused);

You could compare the address of the underlying connection object via std::ptr::eq, using the reference obtained by dereferencing the deadpool wrapper object.
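
A rough sketch of that idea (hypothetical, not the code from this PR; whether the address stays stable across recycling depends on how deadpool stores the connection internally):

// Hypothetical sketch: record the address of the dereferenced connection
// from two consecutive Pool::get() calls and compare them. `pool` is assumed
// to be the cluster pool used elsewhere in the test.
let conn_1 = pool.get().await.unwrap();
let addr_1 = &*conn_1 as *const _ as usize;
drop(conn_1);

let conn_2 = pool.get().await.unwrap();
let addr_2 = &*conn_2 as *const _ as usize;
assert_eq!(addr_1, addr_2, "The Redis connection was not recycled");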

I think I remember why this test was added in the first place. Some version of deadpool-redis had a bug where recycling of connections always failed due to an error in the recycle method, so you would always get a fresh connection from the pool. This test just makes sure that never happens again.

@rgrfoss (Contributor, Author) commented Aug 5, 2024

Would you prefer comparing a set name or the address of the underlying connection object?

@bikeshedder (Owner)
It's pretty wild that the test using CLIENT SETNAME/GETNAME works while CLIENT ID doesn't.

If the underlying connection is changed, the GETNAME query should fail too, shouldn't it?
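
For reference, a SETNAME/GETNAME based check might look roughly like this (an assumed shape, not necessarily the exact test committed in this PR):

// Hypothetical sketch: set a client name on the first connection, return it
// to the pool, then check the name is still set on the next checkout.
let mut conn = pool.get().await.unwrap();
let _: () = redis::cmd("CLIENT")
    .arg("SETNAME")
    .arg("deadpool_test")
    .query_async(&mut conn)
    .await
    .unwrap();
drop(conn);

let mut conn = pool.get().await.unwrap();
let name: String = redis::cmd("CLIENT")
    .arg("GETNAME")
    .query_async(&mut conn)
    .await
    .unwrap();
assert_eq!(name, "deadpool_test", "The Redis connection was not recycled");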

@rgrfoss (Contributor, Author) commented Aug 5, 2024

Why you can't compare the client ID in this case, I don't know, but client IDs are not a reliable way to check connection reuse; it seems they can change even if the logical client remains the same.

@bikeshedder (Owner)
LGTM. I wonder why the CI pipeline didn't complain about the unused imports. That used to work. 🤔

I'll merge this PR as is. The test seems to be fixed! 👍

@bikeshedder merged commit 251aa11 into bikeshedder:master on Aug 5, 2024
64 checks passed
@bikeshedder (Owner)
Thanks a lot. 🥳

I just released deadpool-redis on crates.io:
