List of noob questions

Earlier on Matrix:
Hello, I just found out about it. I’ve been waiting for this, I think.
Questions:

  1. Does my node need to be online like a torrent?
  2. How feature-complete is it compared to FreeNet, which has email/facebook/dropbox edit: and github-pages?
  3. What server should I run if I want to add storage and network capacity (AutoNAT, WebSocket, p2p-circuit, etc.) to the network on a hosted VPS? I run FreeNode on a few boxes to contribute extra disk space.
  4. (Answered in an out-of-the-way FAQ.) I was curious about GnuNet until it looked like I needed to share my code with the rest of the network; does extending the established network with my own apps require code adoption?
  5. What about libp2p in game clients, where each node runs for an hour or so a day (will that cause network disruptions)?
  6. Is there an “evil” browser service that uses push notification and offline sync calls to process network requests?
  7. Anything like Freenet’s WebOfTrust? I thought I saw something, but tell me more.

…Someday I’ll get to 10 :slight_smile:

Does my node need to be online like a torrent?

Yes, unless other online nodes sharing this exact same file exist (exactly like torrent).

How feature-complete is it compared to FreeNet, which has email/facebook/dropbox edit: and github-pages?

IPFS doesn’t aim to be feature-complete like FreeNet.
IPFS is just a CDN: it shares static files (plus a few other things, like distributed naming using IPNS (free names, but random ones that don’t mean anything) or ENS (paid names on the Ethereum chain that actually make sense)), an SDK, fancy DAG files, …
You can build an email/facebook/private dropbox on top of IPFS, but it’s not included in IPFS by default; other people use IPFS to build those things.
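The “just a CDN that shares static files” part comes down to content addressing: a file is identified by a hash of its bytes, not by a location. Here is a minimal Python sketch of the idea (a toy illustration only — real IPFS CIDs use multihash/multibase encoding and chunk files into a DAG, which this skips):

```python
import hashlib

def toy_cid(data: bytes) -> str:
    """Toy content identifier: hex SHA-256 of the raw bytes.
    Same bytes -> same address, no matter who stores them."""
    return hashlib.sha256(data).hexdigest()

store = {}  # stand-in for the network: address -> content

def add(data: bytes) -> str:
    cid = toy_cid(data)
    store[cid] = data
    return cid

def get(cid: str) -> bytes:
    data = store[cid]
    # Retrieval is self-verifying: re-hash and compare to the address,
    # so it doesn't matter which untrusted peer handed you the bytes.
    assert toy_cid(data) == cid, "content does not match its address"
    return data

cid = add(b"hello ipfs")
print(cid == toy_cid(b"hello ipfs"))  # True: addressing is deterministic
```

This self-verifying property is why anyone can serve anyone else’s files without being trusted.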

What server to run if wanting to add storage and network capacity.

From 1 core and 3 GB of RAM up to 8 cores and 48 GB, depending on what you do with it.
There is no real recommendation, because you can make it run on 1 core and 1 GB if you really try (and do almost nothing with it), and 1 core with <10 MB is fine if you don’t run the networking part, for example. (You can run only the DAG and hashing part to keep the hashing security features, but delegate expensive things such as networking to an external node.)
My way of doing it is to start with 2 threads (one physical core) and 4 GB of RAM, and bump it if it crashes or stalls while I’m using my app.

For disk space, you need to store the binary (<100 MB) and a few kilobytes of metadata.
Then you need more space to store what you host, but if you download or host nothing, you don’t use any space (just like torrent).

I run FreeNode on a few boxes to contribute extra disk space.

That’s not really how IPFS works; there is no shared disk, you host what you want to host (for example, I myself pin things I like, such as XKCD and dist.ipfs.io, but you don’t just offer free disk space to the network, because the network doesn’t care).
See IPFS as a transport layer, like torrent: you don’t offer free space to torrent to host things on, because that’s not how torrent works. You must yourself add the torrents you like for others to download.
And if you are fancy, you can even write a script to host the new Ubuntu (or whatever) torrent automatically; it’s completely possible (and easier) to write such a script for IPFS.

If you like doing this kind of stuff without thinking much about it, run a Tor node instead (without an exit relay, if you don’t like receiving emails from angry people that don’t understand Tor :smiley: ), an I2P node, or something else…

There is also a list of collaborative clusters (a cluster of IPFS nodes where someone invites random people on the internet to help them pin their stuff): https://collab.ipfscluster.io/. Those are more of a set-and-forget affair (because the cluster owner will add new pins and remove old ones for you).

I was curious about GnuNet until it looked like I needed to share my code with the rest of the network, does extending the established network with my own apps require code adoption?

IPFS is MIT licensed, which is a much lighter license with far fewer requirements than the AGPL.
There is no code-sharing clause like the AGPL has.

What about libp2p in game clients, where each node runs for an hour or so a day (will that cause network disruptions)?

That works fine (as long as you use dht-client mode; if you don’t, it’s still fine, but you waste lots of bandwidth). In fact only ~16% of the network is long-lived nodes; most nodes spin up, do something, and stop 30 seconds after being started.

Is there an “evil” browser service that uses push notification and offline sync calls to process network requests?

There is no browser service, but there is a Go daemon for Linux, macOS (or any OS, really).
It doesn’t use push notifications, because you don’t need to trick your environment to do this kind of stuff.
IPFS is just a long-lived service, pretty much like Transmission.
(There is js-ipfs, which can run in browsers, but it’s less featured, even more so in the browser (you can’t run a full node in a browser and must rely on centralised tricks for now).)

Anything like Freenet’s WebOfTrust? I thought I saw something, but tell me more.

I don’t know of any, sorry; maybe, but I don’t know. :smiley:
I would find it weird if there were one, though.
IPFS is not a one-solution-fits-all project; it’s just a transport layer, pretty much like torrent, but in practice it’s more handy than torrent for downloading simpler things like websites, or just browsing the web.
(In the future IPFS will be better than torrent for multi-gig / petabyte files, at least that’s one of my goals :smiley:, but for now IPFS’s throughput is still pretty low. The issue is in your local node, which isn’t smart enough about how it downloads files and still works like a single-threaded task; however, the protocol is made to be able to download from 100+ different sources and saturate your connection link at incredible speed (if you want), like torrent does, the node just doesn’t do that yet.)
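The “download from 100+ different sources” idea can be sketched in a few lines: split the file into chunks, then fetch different chunks from different peers in parallel. This is a toy with in-memory “peers” standing in for remote nodes (not IPFS’s actual Bitswap scheduler, which also tracks peer speed and re-requests missing chunks):

```python
from concurrent.futures import ThreadPoolExecutor

# Three toy "peers", each holding the full set of chunks of one file.
file_chunks = [b"chunk-%d" % i for i in range(10)]
peers = [dict(enumerate(file_chunks)) for _ in range(3)]

def fetch(chunk_index: int) -> bytes:
    # Naive round-robin: spread chunk requests across peers so no
    # single source limits the aggregate download speed.
    peer = peers[chunk_index % len(peers)]
    return peer[chunk_index]

# Fetch all chunks concurrently; map() preserves chunk order.
with ThreadPoolExecutor(max_workers=4) as pool:
    downloaded = list(pool.map(fetch, range(len(file_chunks))))

assert b"".join(downloaded) == b"".join(file_chunks)
```

The protocol permits this kind of parallelism; the point above is that the current node implementation simply doesn’t schedule downloads this aggressively yet.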

Bonus point

Check out https://filecoin.io maybe; it’s a sister project of IPFS, made by the same company that maintains IPFS, that builds on the same tech as IPFS (IPLD, multiformats, libp2p, …) and lets you pay storage providers to host your files (and lets you / others retrieve them).
If IPFS replaces HTTP, Filecoin replaces AWS.

And if you don’t like blockchains or paying for stuff: IPFS is a consensus-free and blockchain-free network; Filecoin is another (and heavier) network built on top of IPFS.


I’ve no reason to be disappointed that IPFS isn’t the tool I’m looking for, as there is plenty libp2p does do. I have dreams of many Candy-Crush-like games that host leaderboards and exceptional play clips. The blocker was that at first there would only be a few nodes… It’s awesome that I can now leverage (usable?) node discovery and even a DHT.

Looks like what’s missing is a project implementing a modular node daemon, and all the pieces are just lying around.

  1. Is there a catalog of protocols (NetworkBehaviours) currently in use?
  2. I’m curious about Kademlia (that’s the variant of DHT, right?), but much of the Wikipedia page is over my head. Is there some documentation/media that teaches rather than explains it? Seems like something 3blue1brown would do a video on.
    1. How useful are general explanations about Kademlia in describing libp2p’s implementation?
    2. What’s retention currently like?
    3. Bringing this back to a network daemon, what can be done to improve retention? Would that be desirable?
    4. Are keys variable length?
    5. What are the key/value size limitations?
    6. How much can inserting be hammered: is it possible to play chess? BSD Games’ Hunt? Dance Dance Revolution? AIs playing Global Thermonuclear War? AIs playing tic-tac-toe?
    7. Would there be an issue with games inserting 1.4 KB 10 times per day per player?
    8. What can be done to improve performance wrt inserting chunks of data? Is that desirable?
      a. Higher frequency
      b. Larger size; assume Usenet levels of splitting the data into smaller chunks with parity (Par2), etc.
    9. Generally, what does reasonable usage look like? In the future, what could this look like?

Must-haves I’d be interested in implementing as NetworkBehaviours if need be: leaderboards, achievements, Clips of the Day, multiplayer lobbies; and obviously all those require profiles, so I’m already looking at WebOfTrust (in my opinion this is just a special type of multiplayer game).

Is there a catalog of protocols (NetworkBehaviours) currently in use?

You mean protocols built on top of libp2p?
I don’t know of one; that would be a cool thing to make.

I’m curious about Kademlia (that’s the variant of DHT, right?), but much of the Wikipedia page is over my head. Is there some documentation/media that teaches rather than explains it? Seems like something 3blue1brown would do a video on.

I’m writing an article about that, soon enough hopefully. :slight_smile: A 3blue1brown video would be awesome!
However, libp2p builds a lot on top of Kademlia; it’s more than a simple Kademlia DHT, it takes inspiration from Chord and BitTorrent’s DHT.

How useful are general explanations about Kademlia in describing libp2p’s implementation?

It depends; in practice I believe not. You don’t need to know how the DHT works, you just need to know that it’s a decentralised phonebook and what its API is.
The details of how it works matter very little.

What’s retention currently like?

Not much; I believe it’s 2 hours, maybe 2 days, I’m not sure.
It’s clearly built to store metadata.
Note that you can’t even store arbitrary records; nodes only accept provider-type records (a record where multiple nodes claim to provide something; when someone wants it, the tracking node returns the list of providers. It’s used for files, for example: all hosters of a file in theory join its provider record, and people that want the file go fetch it from them), IPNS records (which are mostly signed CNAMEs), and peer-routing records (public keys, …).

The DHT is made to not be usable for storing actual cold information.
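The provider-record mechanism with a short TTL can be modelled in a few lines. This is a toy sketch with made-up names (real go-libp2p records also carry signatures and peer addresses, and providers must re-publish before the TTL runs out):

```python
class ProviderStore:
    """Toy DHT-server-side store: CID -> {peer_id: expiry_time}.
    Records that are not re-published before their TTL lapse,
    which is exactly why the DHT is useless for cold storage."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self.records = {}

    def provide(self, cid: str, peer_id: str, now: float) -> None:
        # A peer announces "I host this CID"; the record gets a deadline.
        self.records.setdefault(cid, {})[peer_id] = now + self.ttl

    def find_providers(self, cid: str, now: float) -> list:
        # Expired entries are simply ignored: the network "forgets".
        peers = self.records.get(cid, {})
        return [p for p, expiry in peers.items() if expiry > now]

store = ProviderStore(ttl_seconds=2 * 3600)  # pretend the TTL is 2 hours
store.provide("QmFoo", "peerA", now=0)
store.provide("QmFoo", "peerB", now=0)
print(store.find_providers("QmFoo", now=3600))      # both still listed
print(store.find_providers("QmFoo", now=3 * 3600))  # [] - already forgotten
```

So the data itself never lives in the DHT; only short-lived pointers to the peers who host it do.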

Bringing this back to a network daemon, what can be done to improve retention? Would that be desirable?

I don’t know; I’m not a maintainer, but I don’t think so.
The ratio of DHT clients to DHT servers is huge, like 6~7×. (We only enable the DHT server on publicly reachable nodes; to check, nodes ask other random nodes for a dial-back (close all connections and try to regain them). If that fails, the DHT is only enabled in client mode, so you can use others to store records but won’t store records yourself.)
DHT nodes are already more CPU-intensive than they should be (the DHT part itself is a piece of cake, but the elliptic-curve crypto negotiation happening constantly, due to the constant flux of nodes walking the DHT, is hard).

Are keys variable length?

Yes; keys are actually hashed once more, so you are free to use variable lengths or even STRINGS!
(The distance function is changed from XOR(A, B) to XOR(SHA256(A), SHA256(B)).)
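That distance function is small enough to write out. A sketch of the idea, using the formula above (toy peer names; real implementations compare 256-bit digests directly rather than Python integers):

```python
import hashlib

def kad_distance(a: bytes, b: bytes) -> int:
    """Kademlia-style distance with pre-hashing, as described above:
    d(A, B) = XOR(SHA256(A), SHA256(B)), read as a big integer.
    Hashing first is what makes variable-length keys (or strings) work:
    everything is mapped into the same fixed 256-bit key space."""
    ha = int.from_bytes(hashlib.sha256(a).digest(), "big")
    hb = int.from_bytes(hashlib.sha256(b).digest(), "big")
    return ha ^ hb

# XOR distance is a metric: d(x, x) = 0 and it is symmetric,
# and keys of different lengths compare without any special casing.
assert kad_distance(b"some-key", b"some-key") == 0
assert kad_distance(b"a", b"much longer key") == kad_distance(b"much longer key", b"a")

# Routing then boils down to: among the peers you know, pick the one
# whose (hashed) ID is closest to the (hashed) target key, and ask it.
peers = [b"peer-1", b"peer-2", b"peer-3"]
closest = min(peers, key=lambda p: kad_distance(p, b"target-key"))
```

Smaller distance means “closer” in the DHT’s key space, which has nothing to do with network topology; it only decides which nodes are responsible for which records.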

What are the key/value size limitations?

Keys: pretty much whatever you want.
Values: I don’t know, but you don’t get to choose what you insert anyway, so…

How much can inserting be hammered: is it possible to play chess? BSD Games’ Hunt? Dance Dance Revolution? AIs playing Global Thermonuclear War? AIs playing tic-tac-toe?

For all of those games I would just use the DHT to negotiate a direct one-to-one stream (using pubsub, or the provider record).
Then you have a TCP-like stream to use however your heart wants (or multiple parallel ones; libp2p muxes many streams somewhat efficiently (it depends on the transport: multiple streams over QUIC are better than multiple streams muxed with yamux inside a single TCP stream)).
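The negotiate-then-stream flow can be sketched with a plain TCP socket and a dict standing in for the DHT lookup (all toy names, no libp2p involved; a real setup would publish a signed record and dial multiaddrs):

```python
import socket
import threading

# Step 1: the "DHT" is just a phonebook: peer id -> dial address.
phonebook = {}

def run_game_server() -> None:
    """A peer that accepts one connection and acks one chess move."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))                 # OS picks a free port
    srv.listen(1)
    phonebook["peer-bob"] = srv.getsockname()  # "publish" the record
    conn, _ = srv.accept()
    move = conn.recv(1024)
    conn.sendall(b"ack:" + move)               # reply on the same stream
    conn.close()
    srv.close()

t = threading.Thread(target=run_game_server)
t.start()
while "peer-bob" not in phonebook:  # wait for the "record" to appear
    pass

# Step 2: look the peer up once, then talk one-to-one over a raw stream;
# the "DHT" is never touched again during the game itself.
addr = phonebook["peer-bob"]
cli = socket.create_connection(addr)
cli.sendall(b"e2e4")
reply = cli.recv(1024)
cli.close()
t.join()
print(reply)  # b'ack:e2e4'
```

The DHT only carries the rendezvous; all the per-move traffic stays on the direct stream, which is why hammering the DHT with game state is unnecessary.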

The issues would arise more if you want to do those things async (imagine a node delivering your messages while you are offline). You can (using stuff such as OrbitDB), but you would need some kind of middle node that stays online while both end nodes are offline; offline messaging is not something IPFS or libp2p provides by default.

Would there be an issue with games inserting 1.4 KB 10 times per day per player?

Likely fine, but those records will get forgotten pretty quickly.

What can be done to improve performance wrt inserting chunks of data?

It’s not the current goal of the DHT; libp2p’s DHT is just a decentralised phonebook.
For the future, I don’t know. However, IPNS, which is the only exception to this and actually stores data in the DHT, is pretty bad: it’s slow and gets forgotten quickly. (IPNS is moving away from the DHT towards pubsub + DHT.)

Generally, what does reasonable usage look like?

For now it’s: use the DHT the way torrent uses trackers, find your first nodes, then do everything else one-to-one.
