
fix: Notification to waiting clients during drop #201

Open · wants to merge 1 commit into main
Conversation

@vaikzs vaikzs commented Jun 3, 2024

Following up on @djc's intuition (re: #194): I noticed that the final call to spawn_replenishing_approvals(approvals) is incomplete because approvals is empty, so it returns early after several attempts to put_back, leaving 5 connections waiting. As a result, the call stack never reaches pool.notify.notify_one() at the end, and the pool.notify.notify_one() invocation made on each connection poll also fails to wake the waiters.

However, instead of adding pool.notify.notify_waiters() in put_back(), which would be redundant, replacing pool.notify.notify_one() in PoolInternals::put with pool.notify.notify_waiters() solves this: notify_waiters() wakes all currently waiting tasks, but no permit is stored for the next call to notified().await. I tested with the same test case (shown below, already included in the main branch) provided by @xortive:

#[tokio::test]
async fn test_broken_connections_dont_starve_pool() {
    use std::sync::RwLock;
    use std::{convert::Infallible, time::Duration};

    #[derive(Default)]
    struct ConnectionManager {
        counter: RwLock<u16>,
    }
    #[derive(Debug)]
    struct Connection;

    #[async_trait::async_trait]
    impl bb8::ManageConnection for ConnectionManager {
        type Connection = Connection;
        type Error = Infallible;

        async fn connect(&self) -> Result<Self::Connection, Self::Error> {
            Ok(Connection)
        }

        async fn is_valid(&self, _: &mut Self::Connection) -> Result<(), Self::Error> {
            Ok(())
        }

        fn has_broken(&self, _: &mut Self::Connection) -> bool {
            let mut counter = self.counter.write().unwrap();
            let res = *counter < 5;
            *counter += 1;
            res
        }
    }

    let pool = bb8::Pool::builder()
        .max_size(5)
        .connection_timeout(Duration::from_secs(10))
        .build(ConnectionManager::default())
        .await
        .unwrap();

    let mut futures = Vec::new();

    for _ in 0..10 {
        let pool = pool.clone();
        futures.push(tokio::spawn(async move {
            let conn = pool.get().await.unwrap();
            drop(conn);
        }));
    }

    for future in futures {
        future.await.unwrap();
    }
}
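To make the distinction between the two wakeup calls concrete, here is a minimal, hypothetical single-threaded model of tokio::sync::Notify's semantics (the struct and its fields are illustrative, not bb8 or tokio code): notify_one() wakes one waiter, or banks a permit if nobody is waiting, while notify_waiters() wakes every task currently waiting but stores no permit.

```rust
// Hypothetical model of tokio::sync::Notify's wakeup semantics.
// This is a sketch for illustration only; real Notify is async and
// thread-safe, but the permit bookkeeping follows the same rules.
#[derive(Default)]
struct NotifyModel {
    permit: bool,   // a banked wakeup, consumed by the next waiter
    waiters: usize, // tasks currently blocked in notified().await
}

impl NotifyModel {
    // notify_one: wake one waiter, or bank a permit for a future one.
    fn notify_one(&mut self) {
        if self.waiters > 0 {
            self.waiters -= 1; // exactly one blocked task resumes
        } else {
            self.permit = true; // saved for the next notified().await
        }
    }

    // notify_waiters: wake everyone waiting *right now*; no permit is stored.
    fn notify_waiters(&mut self) {
        self.waiters = 0;
    }

    // notified().await: consume a banked permit, or start waiting.
    // Returns true if it would complete immediately.
    fn notified(&mut self) -> bool {
        if self.permit {
            self.permit = false;
            true
        } else {
            self.waiters += 1;
            false // would block until some notify_* call
        }
    }
}

fn main() {
    let mut n = NotifyModel::default();
    // Five tasks block, but only one notify_one() arrives:
    // four of them stay stuck, mirroring the starvation in this PR.
    for _ in 0..5 {
        n.notified();
    }
    n.notify_one();
    assert_eq!(n.waiters, 4);
    // notify_waiters() releases all of them at once.
    n.notify_waiters();
    assert_eq!(n.waiters, 0);
    println!("ok");
}
```

Under this model, swapping notify_one() for notify_waiters() in PoolInternals::put wakes every waiter whenever a connection is returned, at the cost of some spurious wakeups for tasks that then fail to grab a connection.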

@vaikzs vaikzs changed the title fix: fixing notification to waiting clients fix: Notification to waiting clients during drop Jun 3, 2024
codecov bot commented Jun 4, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 74.90%. Comparing base (3190c75) to head (8a1949c).
Report is 29 commits behind head on main.

@@            Coverage Diff             @@
##             main     #201      +/-   ##
==========================================
- Coverage   74.95%   74.90%   -0.05%     
==========================================
  Files           6        6              
  Lines         519      518       -1     
==========================================
- Hits          389      388       -1     
  Misses        130      130              


djc (Owner) commented Jun 4, 2024

Sorry, I still don't completely understand why this is necessary. Why doesn't my call stack in #194 (comment) hold? If something is returning an empty approval iter in a state where that is not sufficient, why is that? Pointing at an existing test case doesn't help me because that test already passes today.
