Mplex does not have backpressure. This means that if you send more data than the other peer is able to receive on a single stream (across streams there is still TCP backpressure), the stream will reset itself due to a buffer overflow (not the security kind).
Example code:
func Handle(s network.Stream) {
	defer s.Close()
	var buffer [1024 * 1024]byte
	var err error
	var n int
	for err == nil {
		n, err = s.Read(buffer[:])
		doSomethingWithData(buffer[:n])
		time.Sleep(time.Second) // read at most 1 MiB per second
	}
}
// sender code in the remote peer ...
func Send(s network.Stream, f *os.File) {
	defer s.Close()
	io.Copy(s, f)
}
Here one peer is reading data at 1 MiB/s, while the other peer is sending at an uncapped speed (assuming the underlying network connection is capable of more than 1 MiB/s).
What you would expect is that sending a 16 MiB file takes 16 seconds, because that is the maximum speed of the receiver, and this is what happens with yamux. With mplex this copy will be cut short with a stream reset error because the receive buffer overflows.
Why does removing mplex matter?
For a while in Kubo we had mplex and yamux configured the wrong way around: nodes would wrongly prefer mplex over yamux. This has been fixed some time ago, but those nodes still represent a non-trivial share of the network. If we turn off mplex, all of those nodes which now sometimes negotiate mplex will always negotiate yamux instead (because mplex won't be an option anymore).
This also pushes everyone else in the ecosystem to implement yamux (like java libp2p).
If protocols now apply backpressure on purpose but we continue to offer mplex, we are lying to the other nodes in the network: if they start using mplex like we tell them to (either because of misconfiguration or because they don't support anything better), things will not work (stream resets all over the place). Dropping mplex from Kubo makes it clear that backpressure on streams is a requirement to talk with us.
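For illustration, this is roughly what a yamux-only node looks like when building a go-libp2p host directly; the option name and import path below follow the go-libp2p API as I remember it, so treat it as a sketch rather than the exact Kubo change:

package main

import (
	"github.com/libp2p/go-libp2p"
	yamux "github.com/libp2p/go-libp2p/p2p/muxer/yamux"
)

func main() {
	// Only offer yamux during muxer negotiation; mplex is simply not in
	// the list, so it can never be picked no matter what the peer prefers.
	h, err := libp2p.New(
		libp2p.Muxer("/yamux/1.0.0", yamux.DefaultTransport),
	)
	if err != nil {
		panic(err)
	}
	defer h.Close()
}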
Gus and I want to add backpressure to multiple servers in Kubo. This means the server will purposely stop or slow down reading from the stream if load (or whatever limiting metric we pick) gets too high, forcing the remote peer to block and wait before sending us more data. However, if we continue to support mplex we will experience random disconnects on those streams, because mplex doesn't know how to handle this.
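As a rough sketch of what such a handler could do (overloaded and process are hypothetical placeholders for the real limiting metric and the real work):

func HandleWithBackpressure(s network.Stream) {
	defer s.Close()
	buf := make([]byte, 64*1024)
	for {
		// While we are overloaded we simply stop reading. The unread bytes
		// sit in the yamux receive window, the window is not refilled, and
		// the remote peer blocks on its writes instead of getting a reset.
		for overloaded() { // hypothetical load metric
			time.Sleep(100 * time.Millisecond)
		}
		n, err := s.Read(buf)
		if n > 0 {
			process(buf[:n]) // hypothetical work
		}
		if err != nil {
			return
		}
	}
}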
Note that yamux only has backpressure on streams, not on stream opens.
That means that with yamux the remote node can tell you to slow down on already open streams, but it can't tell you to slow down the opening of new streams itself.
This is something that QMUX (QUIC's muxer) supports, which is why there is interest in running QMUX on top of TCP.
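For reference, this is the kind of knob QUIC exposes for that; the snippet below uses quic-go's config type purely to illustrate the idea of a limit on stream opens, so take the exact field as an assumption:

import "github.com/quic-go/quic-go"

var serverConfig = &quic.Config{
	// The peer may only have this many incoming streams open at once;
	// until we accept or close some, its attempts to open more block,
	// which is backpressure on opening streams, not just on their data.
	MaxIncomingStreams: 256,
}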