Thinking about ideal nodes in relation to the optimal distribution of nodes and the processing or transfer (or both) of information:
The ideal isn’t random, it’s equidistant, where distance in a computer network is defined by data throughput. In Bitcoin the ideal also includes equal power over the state of the blockchain, so each node must have an equal hash rate and an equal connection to every other node. I call this ideal only because of how distance affects power, and because equal power maximizes the number of participants required to undermine the blockchain commons: 51% of all participants, meaning more than half of them would have to want to undermine the network.
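The claim that equal power maximizes the size of an undermining coalition can be sketched with a small, hypothetical calculation (the node counts and hash-rate values below are illustrative assumptions, not anything from a real network): with identical hash rates, a majority coalition needs the most members, while any skew lets a smaller coalition reach the same threshold.

```python
def min_coalition(hash_rates, threshold=0.51):
    """Smallest number of nodes whose combined hash rate meets the threshold.

    Greedily takes the most powerful nodes first, so the result is the
    minimum coalition size able to control `threshold` of total power.
    """
    total = sum(hash_rates)
    running = 0.0
    for i, rate in enumerate(sorted(hash_rates, reverse=True), start=1):
        running += rate
        if running >= threshold * total:
            return i
    return len(hash_rates)

equal = [1.0] * 100               # 100 nodes with identical power
skewed = [40.0] + [1.0] * 99      # one dominant node among 100

print(min_coalition(equal))    # 51 — more than half the participants
print(min_coalition(skewed))   # 32 — a much smaller coalition suffices
```

Under equal distribution the answer is always just over half the participant count, which is the maximum possible; any concentration of power strictly lowers it.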
It should then be easy to show that, by optimizing our evolution of technology and our global economy, we will over time tend (probably asymptotically) toward this ideal, which I think is one way of explaining what follows:
I think my thinking is the same, though: a network full of self-maximalists eventually becomes distributed, because those who maximize themselves at a net loss to the network lose out to those who don’t, owing to the relationship between the individual participant and the network. The environment selects against the network, and the network selects against its participants. So a participant that maximizes itself within the network while maintaining the network’s ability to grow is likely to persist, because the network is likely to. Of course, plenty of networks are destroyed from within by self-maximizing participants. Markets are essentially just the more-conscious abstraction of less-conscious actors. But while markets and networks are capable of more than the sum of their parts, they are limited by the arrangement of those parts.
Thinking about light as the ultimate or optimal information carrier, we might even think of stars as nodes. This might allow us to understand reality, or at least the universe, as some form of giant computer, albeit with no (or little) understanding of what this computer does or is capable of. Nonetheless, we can reason about theoretical computers and ideals involving the speed of light with little or no processing time or “distance”.
If we can imagine reality as a computer simulation, we might understand how light speed could SEEM limited (whether physically or theoretically) without actually being limited at all. It is likely impossible to describe this, though, without invoking the implicate order. Then light might not “move” so much as information “propagates” (as in a telecommunications network), just as much as it waves or travels.
I am not familiar with the term “explicate order”, but assuming it is the opposite of the implicate order, it should be easy to show that the explicate order resides within the concept, theory, or experience of the implicate order, much as the classical resides within our higher-level understanding of the quantum.
This suggests a possible theory of implicate collusion (or collision!) in which breaking theoretical ideals might then be possible. Again, this should probably be shown mathematically (and perhaps mathematics of the inter-universal kind).