Differences between Yamux and Mplex for a user

I recently came across the following issue: Support both Yamux and Mplex · Issue #4318 · ipfs/js-ipfs · GitHub

What are the main differences and tradeoffs to consider between the two stream multiplexers: Yamux and Mplex?

It seems that Kubo is shifting from mplex to Yamux.

Mplex does not have backpressure. This means that if you send more data on one stream than the other peer is able to receive (across streams you still get TCP backpressure), the stream will reset itself due to a buffer overflow (not the security kind).

Example code:

func Handle(s network.Stream) {
  defer s.Close()
  var buffer [1024 * 1024]byte
  var err error
  var n int
  for err == nil { // keep reading until Read returns an error (e.g. EOF)
    n, err = s.Read(buffer[:])
    doSomethingWithData(buffer[:n])
    time.Sleep(time.Second) // throttle: read at most 1 MiB per second
  }
}

// sender code in the remote peer ...
func Send(s network.Stream, f *os.File) {
  defer s.Close()
  io.Copy(s, f)
}

Here one peer is reading data at 1 MiB/s, while the other peer is sending data at an uncapped speed.

Assuming the underlying network connection is capable of more than 1 MiB/s, you would expect sending a 16 MiB file to take 16 seconds, because that is the maximum speed of the receiver, and this is what happens with yamux. With mplex this copy will be cut short by a stream reset error because the receive buffer will overflow.
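To make the difference concrete, here is a minimal sketch of what the sending side observes; the timing and logging are only illustrative and not part of the original example. Over yamux, io.Copy is simply paced by the receiver's read loop; over mplex, it typically fails with a stream reset once the remote receive buffer overflows.

func SendAndObserve(s network.Stream, f *os.File) {
  defer s.Close()
  start := time.Now()
  n, err := io.Copy(s, f)
  if err != nil {
    // With mplex this is typically where the stream reset error shows up.
    log.Printf("copy aborted after %d bytes: %v", n, err)
    return
  }
  // With yamux a 16 MiB file takes roughly 16s, paced by the 1 MiB/s reader.
  log.Printf("sent %d bytes in %s", n, time.Since(start))
}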


Why does removing mplex matter?
For a while in Kubo we had mplex and yamux configured the wrong way around: nodes would wrongly prefer mplex over yamux. This was fixed some time ago, but those versions still represent a non-trivial number of nodes. If we turn off mplex, all of those nodes which now sometimes negotiate mplex will always negotiate yamux instead (because mplex won't be an option anymore).
This also pushes everyone else in the ecosystem to implement yamux (like java libp2p).
If protocols now apply backpressure on purpose but we continue to advertise mplex, we are lying to other nodes in the network: if they start using mplex like we tell them to (either because of misconfiguration or because they don't support anything better), things will not work (stream resets all over the place). Dropping mplex from Kubo makes it clear that backpressure on streams is a requirement to talk with us.


Gus and I want to add backpressure to multiple servers in Kubo. This means the server will purposely stop or slow down reading from the stream if load (or whatever limiting metric we use) gets too high, forcing the remote peer to block and wait before sending us more data; a sketch of this pattern follows below. However, if we continue to have mplex we will experience random disconnects on those streams, because mplex doesn't know how to handle this.
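As a rough illustration of that pattern (this is only a sketch, not Kubo code; the overloaded check is a made-up placeholder for whatever limiting metric the server uses): the handler simply stops calling Read while the node is overloaded. With yamux the remote writer blocks once the stream's flow-control window fills; with mplex the receive buffer overflows and the stream gets reset.

func HandleWithBackpressure(s network.Stream, overloaded func() bool) {
  defer s.Close()
  buf := make([]byte, 64*1024)
  var err error
  var n int
  for err == nil {
    for overloaded() {
      // Not reading is the backpressure signal: unread bytes fill the
      // stream's receive window and the remote writer has to wait.
      time.Sleep(100 * time.Millisecond)
    }
    n, err = s.Read(buf)
    doSomethingWithData(buf[:n])
  }
}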
Note that yamux only has backpressure on streams, not on stream opens.
That means that with yamux the remote node can tell you to slow down on already open streams, but it can't tell you to slow down the opening of new streams.
This is something that QMUX (QUIC's muxer) supports, which is why there is interest in running QMUX on top of TCP.
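To illustrate the distinction, here is a minimal sketch (hypothetical peer and protocol ID, and ignoring any local resource-manager limits): with yamux the remote side can stall our writes on streams that are already open, but nothing at the muxer level slows down a loop that just keeps opening new streams.

func openManyStreams(ctx context.Context, h host.Host, p peer.ID) ([]network.Stream, error) {
  var streams []network.Stream
  for i := 0; i < 1000; i++ {
    // With yamux the remote peer cannot ask us to slow down these opens;
    // a QUIC-style muxer can, by withholding stream credit.
    s, err := h.NewStream(ctx, p, "/example/1.0.0")
    if err != nil {
      return streams, err
    }
    streams = append(streams, s)
  }
  return streams, nil
}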


@Jorropo Where is Kubo in terms of removing mplex? It’s sad that we still have to deal with subpar stream muxers in 2023.


feat: remove MPLEX by default by Jorropo · Pull Request #9641 · ipfs/kubo · GitHub

I'll reopen it soon enough when I add backpressure in some stuff.
