IO usage of libp2p

I replaced a naive p2p implementation with libp2p in a distributed service I’m building. A few hours after the deployment I started running out of burst credits for my attached AWS EBS volumes. Something seems to be generating a lot of IO; I didn’t have this problem before.

To be more specific: the reads are quite high, while the writes look normal.

There are 5 nodes that are all connected to each other.

Which parts of libp2p could affect IO performance? Are there any alternatives?

My node config looks like this:

const cfg: any = {
  peerId,
  addresses: {
    listen: [`/ip4/0.0.0.0/tcp/${port || 0}`],
  },
  modules: {
    transport: [TCP],
    streamMuxer: [Mplex],
    connEncryption: [NOISE, SECIO],
    dht: KadDHT,
    pubsub: Gossipsub,
    peerDiscovery: [Bootstrap],
  },
  config: {
    dht: {
      enabled: true,
      randomWalk: {
        enabled: true,
        interval: 60e3,
        timeout: 10e3,
      },
    },
    peerDiscovery: {
      bootstrap: {
        interval: 15e3,
        enabled: true,
        list: [...],
      },
    },
  },
};
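
For reference, the config is consumed roughly like this (a minimal sketch; it assumes js-libp2p 0.29-era APIs, and that peerId, port, and the bootstrap list are built elsewhere):

import Libp2p from 'libp2p'

async function startNode () {
  // cfg is the options object shown above
  const node = await Libp2p.create(cfg)
  await node.start()
  console.log('libp2p node started with id', node.peerId.toB58String())
  return node
}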

@Strernd, have you noticed a difference in IO after applying the memory leak fix mentioned in “Possible memory leak”?

I haven’t had a chance to do any profiling yet, but tuning the DHT config might help here, depending on how much network IO contributes to the total. Currently the random walk is configured to run every minute, which is pretty aggressive. When we update the DHT in JS, the random walk will be removed in favor of table refresh logic; in Go we run this no more frequently than every 10 minutes. I’d consider bumping the interval to at least 10 minutes. You can also increase the timeout (maybe 60 seconds), which will allow more peers to be discovered in a single query.
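
To make that concrete, here is a minimal sketch of what the adjusted DHT section could look like with those numbers; the dhtConfig name is just illustrative and would stand in for the config.dht block from the original post:

// Suggested DHT settings, same shape as config.dht in the original post.
const dhtConfig = {
  enabled: true,
  randomWalk: {
    enabled: true,
    interval: 600e3, // run the random walk every 10 minutes instead of every minute
    timeout: 60e3,   // allow up to 60 seconds so each walk can discover more peers
  },
}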

This turned out to be related to the memory leak downstream.

Apparently, if you run this on EC2 and the instance runs out of memory, it starts using EBS as swap, which causes a lot of IO operations on the EBS volume.
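
One way to confirm whether memory pressure is the culprit is to log process and system memory over time and watch whether the node approaches the instance’s RAM before swapping starts. This is a small illustrative sketch using Node’s built-in process.memoryUsage() and os APIs; the logMemory helper and the 30 second interval are arbitrary choices:

import * as os from 'os'

function logMemory (): void {
  const { rss, heapUsed, heapTotal } = process.memoryUsage()
  const toMb = (bytes: number) => Math.round(bytes / 1024 / 1024)
  console.log(
    `rss=${toMb(rss)}MB heapUsed=${toMb(heapUsed)}MB heapTotal=${toMb(heapTotal)}MB ` +
    `systemFree=${toMb(os.freemem())}MB of ${toMb(os.totalmem())}MB`
  )
}

// Log every 30 seconds while the node is running.
setInterval(logMemory, 30e3)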