There are a few points that we might have a disagreement on.
-
If routing is probabilistic, it will cause much larger jitter than you might expect, potentially something like 100 ms. This is because when packets are sent to different nodes, their neighbor sets and next hops differ, so the paths diverge. In case you don't know, routing is based on DHT space for security reasons, so different paths can have very different latencies, causing significant jitter. Similarly, probabilistic routing will turn packet delivery from mostly ordered into mostly unordered.
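A toy simulation makes the point concrete. The path latencies below are made-up values, not measurements: if each packet independently picks one of a few divergent DHT paths, the arrival spread (jitter) is the latency spread between paths, and the arrival order gets scrambled.

```python
from itertools import cycle

# Made-up latencies for three divergent DHT paths (assumed values).
# Cycling through them deterministically stands in for probabilistic
# per-packet path selection.
path_latency_ms = cycle([20, 120, 60])

arrivals = []
for seq in range(6):                      # send one packet every 10 ms
    send_time_ms = seq * 10
    latency_ms = next(path_latency_ms)    # this packet's path latency
    arrivals.append((send_time_ms + latency_ms, seq))

# Sort by arrival time to see the order the receiver observes.
arrival_order = [seq for _, seq in sorted(arrivals)]
print(arrival_order)  # -> [0, 3, 2, 5, 1, 4]: mostly unordered
```

With only a 100 ms spread between paths (120 ms vs. 20 ms), packets sent 10 ms apart arrive heavily out of order.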
-
Jitter and packet ordering are more important than you might think. In general, the NKN network is used in two ways. One is sending raw client/multiclient packets, like UDP packets (e.g. d-chat, nMobile, etc.), in which case rough ordering is much friendlier to the application side than unordered delivery. As an example, say we want to build a realtime voice communication app. If packets are mostly ordered, a packet can be added to the play buffer the moment it arrives. But if they are mostly unordered, about 2 * jitter or more of latency needs to be added before a packet can be buffered. The other way of using NKN is session mode (think of it as TCP), in which case packet ordering is even more important, because a session needs to deliver packets in strict order. In this case, packet ordering will significantly reduce session latency, increase throughput, and reduce client-side resource consumption.
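The voice-app example can be sketched in a few lines. The 80 ms latency and 100 ms jitter figures are assumptions for illustration, not NKN measurements:

```python
def playout_delay_ms(network_latency_ms, jitter_ms, mostly_ordered):
    # Mostly ordered: a packet can be handed to the play buffer the
    # moment it arrives, so no extra hold time is needed.
    # Mostly unordered: the receiver must hold packets roughly
    # 2 * jitter (or more) so late out-of-order packets can be
    # slotted back into sequence before playback.
    extra_buffer_ms = 0 if mostly_ordered else 2 * jitter_ms
    return network_latency_ms + extra_buffer_ms

print(playout_delay_ms(80, 100, mostly_ordered=True))   # -> 80
print(playout_delay_ms(80, 100, mostly_ordered=False))  # -> 280
```

With these assumed numbers, losing rough ordering more than triples the end-to-end voice delay.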
-
Having a cutoff will not make the network fragile. If the weight function is continuous (this is important), with a cutoff at the end, then near the cutoff point it simply degenerates to the current behavior where uptime is not a factor. And I don't think the current network is fragile.
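One possible shape for such a weight function, as a sketch. The linear ramp and 30-day cutoff here are assumptions for illustration, not NKN's actual reward scheme:

```python
def mining_weight(uptime_days, cutoff_days=30.0):
    # Hypothetical continuous weight: grows with uptime, then is flat
    # beyond the cutoff -- identical to today's uptime-independent
    # behavior for every node past the cutoff.
    if uptime_days >= cutoff_days:
        return 1.0
    return uptime_days / cutoff_days

print(mining_weight(15.0))   # -> 0.5  (halfway up the ramp)
print(mining_weight(30.0))   # -> 1.0  (at the cutoff)
print(mining_weight(365.0))  # -> 1.0  (same as any mature node)
```

Because the function is continuous at the cutoff, there is no reward cliff: a node one hour short of the cutoff earns almost the same weight as one past it.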
Besides, if you take a weighted approach, the node count will grow faster because new node operators won't be stymied by a lack of rewards for such a long period of time. (This is the #1 problem of NKN because it's directly tied to marketcap growth.)
This is definitely not right. Node count will always stabilize at the point where mining is just about to be profitable. To be more precise:
node count = total mining reward in NKN per month * NKN price / average node cost per month
The total mining reward in NKN per month is roughly constant, changing only slowly over time, and the average node cost per month is also roughly constant, so the node count depends only on token price. Choosing a different weight function will only affect who ends up mining, but won't change the network size.
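Plugging made-up numbers into the formula shows the equilibrium. The reward, price, and cost figures below are assumptions, not actual NKN data:

```python
# Assumed figures, for illustration only.
monthly_reward_nkn = 1_000_000      # total mining reward per month
nkn_price_usd = 0.10                # token price
node_cost_usd_per_month = 5.0       # average node cost per month

# Equilibrium: node count settles where mining just breaks even.
node_count = monthly_reward_nkn * nkn_price_usd / node_cost_usd_per_month
print(node_count)  # -> 20000.0
```

None of the three inputs depend on the weight function, so changing the weights redistributes rewards among nodes without moving the equilibrium node count; only a price change does.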
Also, why not send packets down duplicate routes? Packets cost like picodollars to deliver. Who cares if the cost of doing that more reliably is double the nonredundant cost? This would be antifragile and performant.
In the long term, bandwidth will be the major cost of running a node; it is already the major cost on some platforms. Sending a packet down 2 different routes doubles the bandwidth cost of every node in the network, which will cut the network size roughly in half once bandwidth is the dominant cost.
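The halving follows directly from the equilibrium formula earlier in this post. All figures below are assumptions; only the 2x cost ratio matters:

```python
def equilibrium_nodes(reward_nkn, price_usd, cost_per_node_usd):
    # Node count settles where mining is just about to be profitable.
    return reward_nkn * price_usd / cost_per_node_usd

baseline = equilibrium_nodes(1_000_000, 0.10, 5.0)
duplicated = equilibrium_nodes(1_000_000, 0.10, 10.0)  # 2x bandwidth cost

print(duplicated / baseline)  # -> 0.5: half the nodes at the same price
```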
Performance is important but ecosystem growth is king. Billions have been made by companies producing junk software far worse than anything NKN has to offer, simply because of adoption and standardization.
I definitely agree. But when we say adoption and standardization, we are talking about the developers and applications that use NKN. For them, network performance, developer friendliness, etc. are the relevant factors, which is why we should always make NKN easier to use and better performing.