item 44104022


dhqgekt | 9 months ago

I am not an expert in any of the relevant disciplines, but I have some ideas; I don't know how right or wrong they are. A conscious being should have an internal model of the observable external world, and, given the means, it should be able to interact with the world, observe the changes, and update its model accordingly. https://en.wikipedia.org/wiki/Free_energy_principle
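That "observe and update" loop can be sketched very roughly in code. This is a minimal toy, not the full free-energy formalism: an agent keeps an internal estimate of a hidden world state, receives noisy observations, and nudges the estimate to reduce prediction error. All names (`mu`, `hidden_state`, `learning_rate`) are illustrative, not from any particular framework.

```python
import random

random.seed(0)

hidden_state = 3.0   # the external world's true (hidden) value
mu = 0.0             # the agent's internal model of that value
learning_rate = 0.1  # how strongly prediction errors revise the model

for _ in range(200):
    observation = hidden_state + random.gauss(0, 0.5)  # noisy sensory input
    prediction_error = observation - mu                # "surprise" signal
    mu += learning_rate * prediction_error             # update internal model

print(mu)  # ends close to hidden_state
```

The same structure extends to acting on the world: instead of only revising `mu`, the agent could also pick actions that make future observations match its predictions, which is closer to what the free-energy principle actually proposes.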

But to "experience its [own] existence", it needs to have a model of its own internals: to observe itself, improve itself, and perhaps preserve its own "values" and integrity. I do wonder what kind of values intelligent autonomous systems would need — values they could justify by and for themselves, even in the absence of human beings or in the presence of other intelligent agents.

I find (human) languages to be an inefficient medium for storing knowledge and performing operations on it, from the perspective of an AGI. Feeding in vast amounts of text just to develop logical reasoning abilities is an extravagance I cannot accept. Even more so is emulating neural networks, which I understand to be naturally analog entities, in a digital manner. Can we expect any gain in power efficiency or correctness from using analog computers for this purpose? I wonder what we will get to see from analog computers for neural networks, paired with a proper human-language-independent knowledge representation and well-developed global logical reasoning capabilities (global as in being able to decide which way to reason, given its limitations, for efficiency), developed by the system itself from a reasonable basis of principles that it can justify for itself while avoiding the usual and unusual paradoxes. What core set of principles would be sufficient for it to emerge, evolve, or develop into a proficient generally intelligent being, given sufficient resources? Like "ancestor" microbes evolving into human beings over hundreds of millions of years, but way faster and more efficient?
