[core] Converted GlobControlLock to a read-write lock. #3014
base: master
Conversation
@@ -191,7 +191,6 @@ srt::CUDTUnited::CUDTUnited()
    // be a problem in general.
    setupMutex(m_GCStopLock, "GCStop");
    setupCond(m_GCStopCond, "GCStop");
-   setupMutex(m_GlobControlLock, "GlobControl");
    setupMutex(m_IDLock, "ID");
I think we need to decide something about these calls. With the current locking implementation these are not needed and have been left here to allow implementation of the mutex tracker, which is still hanging around in one of the PRs. So, either add an appropriate empty function with that name to cover the stub, or we'll need to remove them all.
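A minimal sketch of the first option, assuming the new member is an srt::sync::SharedMutex and that setupMutex today exists only for sync::Mutex (both assumptions, not taken from this diff); an empty overload would keep the existing call sites compiling until the mutex tracker lands:

namespace srt { namespace sync {

// Hypothetical no-op overload covering the existing setupMutex() call sites.
// A mutex-tracker build would register the lock under the given name instead
// of ignoring it.
inline void setupMutex(SharedMutex& /*mtx*/, const char* /*name*/)
{
    // Intentionally empty: SharedMutex needs no extra runtime setup here.
}

}} // namespace srt::sync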
There was a problem with the initialization of a condition variable. POSIX differs in how you initialize one at global/static scope versus one created at runtime, and that is what setupCond does on POSIX builds. And there was a crash if that initialization was done in the constructor.
setupMutex is indeed not needed except for the mutex tracker.
void Condition::init()
{
    pthread_condattr_t* attr = NULL;
#if SRT_SYNC_CLOCK == SRT_SYNC_CLOCK_GETTIME_MONOTONIC
    // Make timed waits on this condition variable use the monotonic clock,
    // so they are not affected by wall-clock adjustments.
    pthread_condattr_t CondAttribs;
    pthread_condattr_init(&CondAttribs);
    pthread_condattr_setclock(&CondAttribs, CLOCK_MONOTONIC);
    attr = &CondAttribs;
#endif
    const int res = pthread_cond_init(&m_cv, attr);
    if (res != 0)
        throw std::runtime_error("pthread_cond_init monotonic failed");
}
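For context, the POSIX distinction mentioned above can be shown outside of SRT: a condition variable at global/static scope may use the static initializer, while one created at runtime (e.g. a class member) has to go through pthread_cond_init(), which is also the only way to attach attributes such as the monotonic clock. A standalone sketch, assuming a platform that provides pthread_condattr_setclock():

#include <pthread.h>
#include <time.h>
#include <stdexcept>

// Global/static scope: the static initializer is sufficient, but it cannot
// attach attributes such as CLOCK_MONOTONIC.
static pthread_cond_t g_cond = PTHREAD_COND_INITIALIZER;

// Runtime scope: objects constructed at runtime must call pthread_cond_init();
// attributes (here the monotonic clock) are passed via pthread_condattr_t.
void init_runtime_cond(pthread_cond_t* cv)
{
    pthread_condattr_t attr;
    pthread_condattr_init(&attr);
    pthread_condattr_setclock(&attr, CLOCK_MONOTONIC);
    if (pthread_cond_init(cv, &attr) != 0)
        throw std::runtime_error("pthread_cond_init failed");
    pthread_condattr_destroy(&attr);
}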
s->core().closeInternal();
enterCS(m_GlobControlLock);
HLOGC(smlog.Debug, log << "GC/removeSocket: DELETING SOCKET @" << u);
That was here to fix the problem of locking m_ConnectionLock after m_GlobControlLock. Why is that no longer necessary?
The GlobControl "resource" is now held in a shared manner protecting containers from modification, but there is no mutex being locked. Hence no need for this unlock-lock trick.
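To make the "unlock-lock trick" concrete for readers of the thread, here is a rough standalone sketch with standard-library stand-ins; it is not the SRT code, and the lock names are only borrowed from the diff:

#include <mutex>
#include <shared_mutex>

std::shared_mutex GlobControlLock; // stand-in for m_GlobControlLock
std::mutex        ConnectionLock;  // stand-in for the socket's m_ConnectionLock

// The "unlock-lock trick": ConnectionLock must not be taken while
// GlobControlLock is held exclusively, so the global lock is dropped,
// the inner lock taken and released, and the global lock re-acquired.
void unlock_lock_trick()
{
    GlobControlLock.lock();
    // ...
    GlobControlLock.unlock();
    {
        std::lock_guard<std::mutex> conn(ConnectionLock);
        // work that needs the connection lock (e.g. closing the socket core)
    }
    GlobControlLock.lock();
    // ... continue working on the containers ...
    GlobControlLock.unlock();
}
// Per the comment above, once the resource is only held in shared mode at
// this point, the drop-and-reacquire sequence becomes unnecessary.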
@@ -407,7 +407,12 @@ class CUDTUnited
    groups_t m_Groups;
#endif

-   sync::Mutex m_GlobControlLock; // used to synchronize UDT API
+   /// Guards all member containers of `CUDTUnited`. Protect UDT API from data races.
UDT???
-/// Guards all member containers of `CUDTUnited`. Protect UDT API from data races.
+/// Guards all member containers of `CUDTUnited`. Protect SRT API from data races.
@@ -407,7 +407,12 @@ class CUDTUnited
    groups_t m_Groups;
#endif

-   sync::Mutex m_GlobControlLock; // used to synchronize UDT API
+   /// Guards all member containers of `CUDTUnited`. Protect UDT API from data races.
+   /// Non-exclusive lock prohibits changes of containers (insert/remove), but allows modifications
"Shared lock" would be less confusing.
-/// Non-exclusive lock prohibits changes of containers (insert/remove), but allows modifications
+/// A shared (non-exclusive) lock prohibits changes of containers (insert/remove), but allows modifications
CUDTGroup* g = m_parent->m_GroupOf;
if (g)
{
    // TODO: Lock this group `g`??
Yes, that's likely a good idea. And also the shared lock suffices for that, while acquireSocketsGroup could be used here (that is, the constructor overload of GroupKeeper that uses this call).
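A purely hypothetical sketch of the pattern being suggested (the real GroupKeeper / acquireSocketsGroup signatures in SRT may differ): a small RAII keeper pins the group while the caller works on it under the shared m_GlobControlLock, so it cannot be dissolved underneath:

#include <atomic>

// Illustration only: a stand-in for CUDTGroup with a hypothetical busy counter.
struct Group
{
    std::atomic<int> busy{0};
    void addBusy()     { ++busy; }
    void releaseBusy() { --busy; }
};

// RAII keeper: pins the group for the duration of the scope so that it is
// not dissolved while the caller uses it.
class GroupPin
{
    Group* m_grp;
public:
    explicit GroupPin(Group* g) : m_grp(g) { if (m_grp) m_grp->addBusy(); }
    ~GroupPin()                            { if (m_grp) m_grp->releaseBusy(); }
    Group* get() const { return m_grp; }
};

// At the commented call site, the keeper would wrap m_parent->m_GroupOf,
// acquired while m_GlobControlLock is held in shared mode.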
@@ -8385,7 +8388,7 @@ void srt::CUDT::updateSndLossListOnACK(int32_t ackdata_seqno)
    {
        // m_RecvAckLock is ordered AFTER m_GlobControlLock, so this can only
        // be done now that m_RecvAckLock is unlocked.
-       ScopedLock glock (uglobal().m_GlobControlLock);
+       SharedLock glock (uglobal().m_GlobControlLock);
In this place you can consider using GroupKeeper instead of SharedLock on m_GlobControlLock - if possible.
One comment describes a serious problem here, please fix. Others just to consider.
@@ -2141,7 +2141,7 @@ vector<CUDTSocket*> CUDTGroup::recv_WaitForReadReady(const vector<CUDTSocket*>&
        }
    }

-   leaveCS(CUDT::uglobal().m_GlobControlLock);
+   CUDT::uglobal().m_GlobControlLock.lock_shared();
Shouldn't this be unlock_shared()?
Agh, right.
-CUDT::uglobal().m_GlobControlLock.lock_shared();
+CUDT::uglobal().m_GlobControlLock.unlock_shared();
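As an aside, this kind of slip is easy to make with manual lock_shared()/unlock_shared() pairs; a scoped guard removes the possibility entirely. A minimal sketch with the standard std::shared_mutex (SRT's own SharedLock presumably plays the same RAII role, but its exact interface is not shown in this diff):

#include <shared_mutex>

std::shared_mutex glob; // stand-in for m_GlobControlLock

void manual_pairing()
{
    glob.lock_shared();
    // ... read the socket/group containers ...
    glob.unlock_shared(); // easy to mistype as lock_shared(), as above
}

void raii_guard()
{
    std::shared_lock<std::shared_mutex> lk(glob);
    // ... read the socket/group containers ...
} // unlock_shared() happens automatically, even on early return or exception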
CUDTUnited::m_GlobControlLock is now a read-write lock (srt::sync::SharedMutex).

CUDTUnited::m_GlobControlLock guards all member containers of CUDTUnited:

- CUDTUnited::m_Groups - add/remove under exclusive lock, access under non-exclusive lock.
- CUDTUnited::m_Sockets - add/remove under exclusive lock, access under non-exclusive lock.
- CUDTUnited::m_PeerRec - records sockets from peers to avoid repeated connection requests; add/remove under exclusive lock, access under non-exclusive lock.
- CUDTUnited::m_ClosedSockets - add/remove under exclusive lock, access under non-exclusive lock.
- CUDTUnited::m_mMultiplexer - add/remove and access under exclusive lock.

❗Addresses #2393.
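For readers unfamiliar with the discipline described above, a minimal self-contained sketch using the standard std::shared_mutex (the PR itself uses srt::sync::SharedMutex, whose interface may differ): insertion and removal take the lock exclusively, while lookups take it in shared mode and can therefore run concurrently.

#include <map>
#include <mutex>
#include <shared_mutex>

class SocketRegistry
{
    mutable std::shared_mutex m_lock;    // plays the role of m_GlobControlLock
    std::map<int, int>        m_sockets; // stand-in for one of the containers

public:
    void add(int id, int value)
    {
        std::unique_lock<std::shared_mutex> lk(m_lock); // exclusive: container changes
        m_sockets[id] = value;
    }

    void remove(int id)
    {
        std::unique_lock<std::shared_mutex> lk(m_lock); // exclusive: container changes
        m_sockets.erase(id);
    }

    bool lookup(int id, int& out) const
    {
        std::shared_lock<std::shared_mutex> lk(m_lock); // shared: read-only access
        const auto it = m_sockets.find(id);
        if (it == m_sockets.end())
            return false;
        out = it->second;
        return true;
    }
};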