Never fails if no matching ICE pair #280
I'm pretty sure the case of no pairs is handled correctly, since in jitsi-videobridge we don't even signal remote candidates. From the logs it looks like there were matching ICE pairs, but starting their checks failed with
Is this using HostCandidateHarvester? Can you check if reverting #260 makes a difference?
I have tried version 3.0-57-gdec3a87, which is before #260. I also see the ICE state change to FAILED. This is the log output:
Yes. I use the following API calls (simplified). The server runs inside a Docker container which does not publish UDP port 7002, to simulate a misconfiguration by a customer. It is impossible for the server to connect to the browser, so it should fail.

```java
List<CandidateHarvester> harvesters = SinglePortUdpHarvester.createHarvesters( 7002 );
....
agent.setControlling( true );
agent.setTrickling( true );
IceMediaStream stream = agent.createMediaStream( "stream" );
agent.addStateChangeListener( evt -> {
    ....
});
agent.setUseDynamicPorts( false );
for( CandidateHarvester harvester : harvesters ) {
    agent.addCandidateHarvester( harvester );
}
Component component = agent.createComponent( stream, 0, 0, 0 );
sendLocalCandidatesToBrowser( component.getLocalCandidates() );
agent.startCandidateTrickle( iceCandidates -> {
    ....
} );
agent.startConnectivityEstablishment();
....
component.addUpdateRemoteCandidates( candidate );
component.updateRemoteCandidates();
```
I agree that with all pairs failing the Agent should transition to the FAILED state. I don't understand why it doesn't. Is your trickle callback getting called with
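The expected behavior above can be illustrated with a minimal, self-contained sketch of the listener pattern that `agent.addStateChangeListener(...)` uses. This does not use ice4j itself; the `IceState` enum and the property name are assumptions that only mirror `org.ice4j.ice.IceProcessingState` and the `PropertyChangeListener`-based delivery.

```java
import java.beans.PropertyChangeListener;
import java.beans.PropertyChangeSupport;
import java.util.concurrent.atomic.AtomicReference;

public class StateListenerSketch {
    // Assumption: mirrors the values of org.ice4j.ice.IceProcessingState.
    enum IceState { WAITING, RUNNING, COMPLETED, FAILED, TERMINATED }

    // Fires the given state transitions through a listener, the way a
    // state-change listener would receive them, and returns the last state seen.
    static IceState lastObservedState(IceState... transitions) {
        PropertyChangeSupport support = new PropertyChangeSupport(new Object());
        AtomicReference<IceState> observed = new AtomicReference<>();
        PropertyChangeListener listener = evt -> observed.set((IceState) evt.getNewValue());
        support.addPropertyChangeListener(listener);
        IceState previous = IceState.WAITING;
        for (IceState next : transitions) {
            support.firePropertyChange("IceProcessingState", previous, next);
            previous = next;
        }
        return observed.get();
    }

    public static void main(String[] args) {
        // With a misconfigured server, the listener should eventually see FAILED.
        System.out.println(lastObservedState(IceState.RUNNING, IceState.FAILED)); // FAILED
    }
}
```

The bug report is precisely that, in the misconfigured case, the listener only ever observes RUNNING and never FAILED.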
The stack trace from the log points me to the catch of NetAccessManager.SocketNotFoundException in ConnectivityCheckClient. It does not set the
To answer your questions:
I was not able to run the version
If I return
like in the old version, I can't find any code in the library that checks that all candidates have failed.
If there are no matching candidates because of a configuration mistake, then the agent should fail.
Agent should failed on SocketNotFoundException jitsi#280
I have created PR #281, which lets the agent fail after 5 seconds (5000 milliseconds) with a bad configuration.
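The timeout idea described above can be sketched as an application-level watchdog, independent of ice4j: if the state has not left RUNNING after a deadline, force it to FAILED. This is only a minimal illustration of the pattern under assumed names (`IceState`, `runWithWatchdog`), not the actual code of PR #281.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

public class IceWatchdog {
    // Assumption: a simplified stand-in for the ICE processing state.
    enum IceState { RUNNING, COMPLETED, FAILED }

    // If the state is still RUNNING after timeoutMillis, force it to FAILED.
    static IceState runWithWatchdog(AtomicReference<IceState> state, long timeoutMillis)
            throws InterruptedException {
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        timer.schedule(
                () -> state.compareAndSet(IceState.RUNNING, IceState.FAILED),
                timeoutMillis, TimeUnit.MILLISECONDS);
        timer.shutdown();
        // Wait for the scheduled check to run before reading the final state.
        timer.awaitTermination(timeoutMillis + 1000, TimeUnit.MILLISECONDS);
        return state.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // No connectivity check ever succeeds, so the state never leaves RUNNING
        // on its own; the watchdog turns that into FAILED.
        AtomicReference<IceState> state = new AtomicReference<>(IceState.RUNNING);
        System.out.println(runWithWatchdog(state, 100)); // FAILED
    }
}
```

`compareAndSet` ensures the watchdog never overwrites a state that already reached COMPLETED before the deadline.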
If the ICE candidates of the client and the server do not match, then the agent never goes into the state FAILED or TERMINATED. It stays forever in the state RUNNING. There are endless log entries with
will skip a check beat.
See the sample log entries. Is this a bug? Can I set a timeout? Must I check this myself after the last candidate from the client? If yes, how can I check it?
I use version: 3.0-68-gd289f12
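One way to do the manual check asked about above is to complete a future from the state-change listener once ICE succeeds, and to start a deadline once the last remote candidate has been added; a timeout then counts as failure. This is a self-contained sketch with assumed names (`awaitIceResult`, the `"COMPLETED"`/`"FAILED"` strings), not an ice4j API.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class ManualFailureCheck {
    // iceDone would be completed by the real state-change listener when ICE
    // reaches COMPLETED or TERMINATED (assumption about where you would hook in).
    static String awaitIceResult(CompletableFuture<String> iceDone, long timeoutMillis) {
        try {
            // Start the clock after the last remote candidate has been added.
            return iceDone.get(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            return "FAILED"; // treat an agent stuck in RUNNING as failed
        } catch (InterruptedException | ExecutionException e) {
            return "FAILED";
        }
    }

    public static void main(String[] args) {
        // Misconfigured case: the listener never fires, the future never completes.
        CompletableFuture<String> neverCompletes = new CompletableFuture<>();
        System.out.println(awaitIceResult(neverCompletes, 50)); // FAILED
    }
}
```

Until the library itself transitions to FAILED, this kind of external deadline is the only way for the application to break out of the endless RUNNING state.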