Relay Query optimisation on DHT (https://github.com/libp2p/go-libp2p/issues/694)

Hey all,

Could someone recapitulate what exactly the solution was for the repeated DHT querying for relays?

  1. Reducing the number of relay points
  2. Static auto relays for PL operated relays.

I couldn’t quite follow the pubsub solution laid out by @adin.

Sorry in advance if the question is too obtuse or outdated.


Sounds interesting. Is there a libp2p glossary? Just today I was confronted with terms like “order”, “relay”, “PL”, and likely others… Even worse, I may have thought I understood something when I actually hadn’t.

Peer discovery is the topic I’ve found most interesting, and I’m curious how it’s handled when not all nodes implement every protocol.

From the commits, it looks like static auto relays are the only solution? Please correct me if I’m wrong.

The interim solution we chose was static relays. We’re currently landing hole-punching support, which should reduce the need for full relays in most cases.
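
For anyone landing here later, here is a rough sketch of what the static-relay configuration looks like in Go. It assumes a recent go-libp2p where the option is called libp2p.EnableAutoRelayWithStaticRelays (older releases exposed libp2p.StaticRelays together with libp2p.EnableAutoRelay instead), and the relay multiaddr below is a placeholder you would replace with a PL-operated or self-hosted relay:

```go
package main

import (
	"log"

	"github.com/libp2p/go-libp2p"
	"github.com/libp2p/go-libp2p/core/peer"
)

func main() {
	// Placeholder relay multiaddr; replace with a relay you control
	// (or a PL-operated one). Parsing will fail until you do.
	relayAddr := "/ip4/203.0.113.1/tcp/4001/p2p/<relay-peer-id>"

	pi, err := peer.AddrInfoFromString(relayAddr)
	if err != nil {
		log.Fatal(err)
	}

	// Point autorelay at the static relay instead of searching the DHT for one.
	h, err := libp2p.New(
		libp2p.EnableAutoRelayWithStaticRelays([]peer.AddrInfo{*pi}),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer h.Close()

	log.Println("host created with a static relay:", h.ID())
}
```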

Thanks @stebalien,

IIUC, the problem was the immense number of DHT queries issued to establish relay connections.
I’m not clear on how hole punching would help. Since hole punching also requires an outbound TCP connection to a publicly reachable address, how are we planning to learn the address status? Wouldn’t the same problem arise?

Are we planning to broadcast the AutoNAT status, or to add it as part of the multiaddr or the Identify protocol?

The number of DHT queries required to establish any connection is the same. The problem was that many people were accidentally enabling “autorelay” and then getting hammered.

Note: There were DHT issues, but that’s because unreachable peers (and peers behind relays) were joining as DHT “servers”. This has since been fixed.
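
If it helps, the “joining as DHT servers” part corresponds to the DHT’s operating mode. Here is a minimal sketch with go-libp2p-kad-dht, assuming the dht.ModeAuto option (which keeps a node in client mode until it looks publicly reachable):

```go
package main

import (
	"context"
	"log"

	"github.com/libp2p/go-libp2p"
	dht "github.com/libp2p/go-libp2p-kad-dht"
)

func main() {
	ctx := context.Background()

	h, err := libp2p.New()
	if err != nil {
		log.Fatal(err)
	}
	defer h.Close()

	// ModeAuto: start out as a DHT client and only switch to server mode
	// once the node appears publicly reachable, so NATed/relayed peers
	// don't end up in other peers' routing tables.
	kadDHT, err := dht.New(ctx, h, dht.Mode(dht.ModeAuto))
	if err != nil {
		log.Fatal(err)
	}
	defer kadDHT.Close()

	log.Println("DHT started in auto mode for", h.ID())
}
```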

Since hole punching also requires an outbound TCP connection to a publicly reachable address, how are we planning to learn the address status? Wouldn’t the same problem arise?

We learn our addresses from AutoNAT (and Identify), then send them to the peer in the “holepunch” protocol.
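
A minimal sketch of wiring that up on the go-libp2p side, assuming the libp2p.EnableHolePunching and libp2p.EnableNATService options (names may differ between releases):

```go
package main

import (
	"log"

	"github.com/libp2p/go-libp2p"
)

func main() {
	h, err := libp2p.New(
		// Start the hole-punching service: once a relayed connection to a
		// peer exists, both sides exchange the addresses they learned from
		// Identify/AutoNAT and attempt direct dials to each other.
		libp2p.EnableHolePunching(),
		// Answer AutoNAT dial-back requests from other peers so they can
		// determine their own reachability.
		libp2p.EnableNATService(),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer h.Close()

	log.Println("host with hole punching enabled:", h.ID())
}
```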