I am new to libp2p and am trying to get familiar with it as a P2P library for a Golang project that I am working on.
With that in mind, I have heard that IPFS and libp2p have serious problems scaling up to hundreds or even thousands of concurrent nodes, and I would like to get feedback from the libp2p community on this if at all possible.
Any information or discussion on this would be truly appreciated.
Thanks and have a great day
Those were content-routing issues with the DHT.
The DHT has since been fixed and now works correctly.
Protocol Labs also runs Hydra nodes to help with routing (GitHub - libp2p/hydra-booster: A DHT Indexer node & Peer Router). At one point these were what stabilised the network, but they are not needed anymore; they still run because they make the network that bit faster, but the network has been tested without them and is now fine.
The IPFS network has ~11k DHT-server nodes and works fine. There are still open questions about scaling, but more in the ~100k-node range.
Thanks very much for the information.
Would it be fair to presume that there are not many Golang P2P networking libraries that can scale into the 100k range?
I have heard of a few other interesting ones as well, but since the project I am working on is a blockchain-based system, it is a bit hard to gauge the level of activity the system might see.
Although I do not really know whether these can scale well either, I have only come across a few others, such as:
Noise, from perlin-network on GitHub. (As a new user, this forum will not let me add more than 2 URLs.)
But I have not played with them much, since the current Go app has already been written against go-libp2p, and I find it has some nice features so far.
Oh no, that is very different: libp2p does not keep connections to every peer, nor is it that kind of overlay network.
With libp2p you do not need to be aware of all the other peers; you only need the ones you are actually talking to (with IPFS, for example, the ones you are downloading files from or sharing them with), plus some peers for content routing (finding out who has what and how to connect people).
In theory a 100k-peer network would run fine if every node only knew ~100 other peers; however, finding content and other nodes would be somewhat slow.
100k peers is not that much; IPv4 supports ~4.3 billion addresses and runs fine. The question is how to be smart about it.