Does `keep_alive` behaviour work with WebRTC Transport?

In my rust-libp2p server using WebRTC Transport, keep_alive behaviour is enabled, so I expect the swarm to keep listening even when a client drops off.

But instead, a ListenerClosed event usually fires and the WebRTC listener closes, undesirably, when a browser client disconnects.

Event: ListenerClosed {
    listener_id: ListenerId(16861393669339383903),
    addresses: ["/ip6/::1/udp/42069/webrtc/certhash/uEiDHVx-evVihAOvIsWCX0Za1fcwbaHMBZKPbkyeUodTV2A"],
    reason: Err(Custom {
        kind: Other,
        error: UDPMux(Os {
            code: 10054,
            kind: ConnectionReset,
            message: "An existing connection was forcibly closed by the remote host."
        })
    })
}

I’m not sure if this is a bug, intended behaviour, or if my config is wrong somewhere. I’d love some insight!
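
For reference, the swarm is wired up roughly like this. This is a minimal sketch rather than the exact code from the repo; it assumes the 0.51-era API with the tokio features, and the port is just my local value:

// Minimal sketch of the server setup (not the exact repo code).
use futures::StreamExt;
use libp2p::core::muxing::StreamMuxerBox;
use libp2p::swarm::{keep_alive, NetworkBehaviour, SwarmBuilder, SwarmEvent};
use libp2p::{identity, ping, PeerId, Transport};
use libp2p_webrtc as webrtc;

#[derive(NetworkBehaviour)]
struct Behaviour {
    ping: ping::Behaviour,
    keep_alive: keep_alive::Behaviour,
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let id_keys = identity::Keypair::generate_ed25519();
    let peer_id = PeerId::from(id_keys.public());

    // WebRTC transport with a freshly generated certificate; its hash shows up
    // as the /certhash component of the listen address.
    let transport = webrtc::tokio::Transport::new(
        id_keys,
        webrtc::tokio::Certificate::generate(&mut rand::thread_rng())?,
    )
    .map(|(peer_id, conn), _| (peer_id, StreamMuxerBox::new(conn)))
    .boxed();

    let behaviour = Behaviour {
        ping: ping::Behaviour::default(),
        keep_alive: keep_alive::Behaviour::default(),
    };

    let mut swarm = SwarmBuilder::with_tokio_executor(transport, behaviour, peer_id).build();
    swarm.listen_on("/ip6/::1/udp/42069/webrtc".parse()?)?;

    loop {
        match swarm.select_next_some().await {
            SwarmEvent::NewListenAddr { address, .. } => println!("Listening on {address}"),
            event => println!("{event:?}"),
        }
    }
}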

I have posted the example as a simple ping client (js) with the ping server (rust) here, with the issue details in the README:

I have never used rust-libp2p, but keep-alive is a well-defined concept in networking: it is about keeping NAT records alive.
When keep-alive is enabled, you periodically send keep-alive probes (packets, frames, or whatever they are called in the protocol you are using) to ensure that the NAT does not forget about the connection.
This also lets you detect timeouts even when you are not sending data, because from time to time you send an empty packet to confirm the connection is still online.
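
To make that concrete, here is a minimal sketch of the idea with plain UDP, nothing libp2p-specific; the remote address and interval are made-up values for illustration:

// Generic keep-alive over plain UDP: periodically send a tiny probe so the
// NAT mapping for this socket does not expire.
use std::net::UdpSocket;
use std::thread::sleep;
use std::time::Duration;

fn main() -> std::io::Result<()> {
    let socket = UdpSocket::bind("0.0.0.0:0")?;
    socket.connect("203.0.113.10:42069")?; // hypothetical remote peer

    loop {
        // A tiny probe is enough to refresh the NAT binding, and a send error
        // is an early signal that the path is gone.
        socket.send(&[0u8])?;
        sleep(Duration::from_secs(15));
    }
}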

I would be extremely surprised if keep_alive in rust-libp2p isn’t what I described above. Whatever problem you have seems to be different and unrelated.

keep_alive is a swarm NetworkBehaviour intended to keep connections alive, yes.

Which is exactly what I want – to keep the WebRTC listener alive, even after a js client has disappeared.

So I did a bit more digging on this:

ListenerClosed is a TransportEvent, most likely coming from the ListenStream::close function in webrtc::tokio::Transport.

ListenStream also has a report_closed Option, which is None when ListenStream::new is called from transport.listen_on, but is set to Some in .close, which is how we end up with our TransportEvent::ListenerClosed event.
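
To illustrate, here is a self-contained toy of that pattern as I read it (the names are mine, not the actual rust-libp2p types): close only stashes the pending event, and the next poll reports it once.

// Toy version of the report_closed pattern, simplified.
struct ToyListenStream {
    report_closed: Option<String>, // pending "ListenerClosed" reason, None while open
}

impl ToyListenStream {
    fn new() -> Self {
        // Mirrors ListenStream::new: nothing to report yet.
        Self { report_closed: None }
    }

    fn close(&mut self, reason: &str) {
        // Mirrors ListenStream::close: stash the event instead of emitting it directly.
        if self.report_closed.is_none() {
            self.report_closed = Some(format!("ListenerClosed: {reason}"));
        }
    }

    fn poll_next(&mut self) -> Option<String> {
        // On the next poll, the stashed event is handed to the swarm.
        self.report_closed.take()
    }
}

fn main() {
    let mut listener = ToyListenStream::new();
    listener.close("UDPMux error");
    assert_eq!(listener.poll_next().as_deref(), Some("ListenerClosed: UDPMux error"));
    assert_eq!(listener.poll_next(), None);
}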

ListenStream::close is called when there is a UDPMux error:

// https://github.com/libp2p/rust-libp2p/blob/master/transports/webrtc/src/tokio/transport.rs#L351
Poll::Ready(UDPMuxEvent::Error(e)) => {
    self.close(Err(Error::UDPMux(e)));
    continue;
}

So my question becomes: is closing the JavaScript client triggering the UDPMux error because the client drops off? And if so, should we really be closing the listener without checking keep_alive first (in case we want to keep listening for subsequent clients)?

Maybe there is another element that I am missing; I would appreciate any insights on the design here, thanks!

You haven’t committed package.json in your repo :slight_smile:

Oh the package.json is in the web client folder: webrtc-ping/package.json at master · DougAnderson444/webrtc-ping · GitHub to keep it separate from the Rust code.

I’m confident the JavaScript code is working properly, and this is a Rust server issue instead. Unless js-libp2p is supposed to send some sort of graceful WebRTC connection close upon libp2p.stop()…

Possible Fix: I ended up forking the rust-libp2p repo and inserting a match at line 351, which confirms my suspicion: on any UDPMux error the swarm listener is closed. If it ignores ConnectionReset, then the WebRTC server keeps listening for any subsequent connections:

// libp2p_webrtc::tokio::transport.rs # Line 351
// (needs `use std::io::ErrorKind;` at the top of the module)
Poll::Ready(UDPMuxEvent::Error(e)) => {
    match e.kind() {
        // If a client drops off and resets the connection, leave the ListenStream open.
        ErrorKind::ConnectionReset => {
            log::debug!("ConnectionReset by remote client, but keep ListenStream open {e:?}");
            // TODO: trigger a poll to close any dangling connections if there is no
            // ping loop, but do not close the listener.
            continue;
        }
        // Close the listener on any other kind of UDPMux error.
        _ => {
            self.close(Err(Error::UDPMux(e)));
            continue;
        }
    }
}

Is there any design issue with keeping the listener open like this? This is the behaviour I am looking for.

My bad, I just did npm run dev.

I am missing something about what you are trying to do: if the other client closes the connection, what are you hoping to do anyway? It’s not like the other client is going to answer anything.
If this actually closes the listener for all connections, not just the one that got closed, that seems very fishy, as I would expect them to be different pieces of state, but what do I know?

From far away it seems that JS is closing the connection and that you are trying to keep the connection open from the Rust side. You can trick Rust into still thinking it’s open, but it’s still not going to work if JS refuses to talk to you anymore now that it thinks the connection is closed.

Hey Doug, looks like you’ve asked in the rust-libp2p repo as well. Awesome!
Linking here for discoverability: WebRTC listener always closes when any browser client disconnects · Issue #3574 · libp2p/rust-libp2p · GitHub

@Jorropo WebRTC Chat server :speech_balloon: :slight_smile:

If there are multiple clients, the remaining clients still need the server to keep listening for further chats, and that was exactly the problem!

I felt the same way :pray:, but we did figure out a way (see on GitHub) to make an exception for the “ConnectionReset” OS error in this case, so the fishiness should be fixed shortly :sushi:


To close the loop: this issue was resolved with libp2p 0.51.1 and libp2p-webrtc 0.4.0-alpha.3, and I’ve updated the repo here:
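
For anyone landing here later, the dependency bump that picked up the fix looks roughly like this; the feature flags are from my setup and may differ:

# Cargo.toml (sketch; only the versions are from this thread, the features are my assumption)
[dependencies]
libp2p = { version = "0.51.1", features = ["macros", "ping", "tokio"] }
libp2p-webrtc = { version = "0.4.0-alpha.3", features = ["tokio"] }
tokio = { version = "1", features = ["full"] }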