I think I see something I may have already alluded to, or even fully commented on, but which I wish to reiterate or give a general overview of. Society, it seems, in full compliance with its tendency to fear that which it does not understand, has grown an aversion to the possible arrival of an AI system that might exponentially out-think our best collective or individual efforts as ordinary humans (which might paradoxically include our technological response to attacks).
Being so averse to such an evolution could be seen as a necessary survival mechanism, or it might simply be an archaic natural instinct of some sort that serves only to hold back what might raise the average living standard well beyond anything we could imagine. Of course, without further evidence or a strong theory, it could just as well be the ultimate demise of the human race as we know it, whether for better or for worse.
Nonetheless, I think I see a mechanism that would protect the human race, one that might be related to some already existing literature on the subject. This is of course connected to bitcoin’s mechanism of propriety that I have previously commented on (and so to Adam Smith’s works on the subject).
I think it could be that we set these “machines” to regulate themselves in relation to our societal norms. The usefulness of this rests on the condition that our society is built in such a way as to protect itself, which is probably the safest assumption we have available in this regard.
Furthermore, I think it might be shown that this is the only way in which a truly sustainable AI model could be created or bootstrapped. I also suspect that proprietorship, in the other sense of the word, is relevant in some or many ways, much as there was likely a profitor or profitors among the person or groups responsible for the advent of bitcoin. (It is then my conjecture that both a profitor and a propriety mechanism are necessary.)
I still have studying to do on the subject (specifically, finishing TOMS), and I don’t want to expand too far on this yet, so this will be the basic point I have made.
If there is a clever point made here at all, I think it is that this may be an intelligent move on the part of humans, and can therefore be considered natural evolution. This suggests that machines or AI would also naturally grow this tendency, albeit only “eventually”.