foo3a9c4 | 2 years ago

> With AI, there still seems to be a lot of hand-waving between where we are now and "AGI".

> I am more than prepared to admit that I may not be seeing (for various reasons) the evidence that this is near/possible - but I would also claim that nobody is convincingly showing any either.

If I understand you correctly, then (1) you doubt that AGI systems are possible and (2) even if they are possible, you believe that humans are still very far away from developing one.

The following is an argument for the possibility of AGI systems.

  Premise 1: Human brains are generally intelligent.
  Premise 2: If human brains are generally intelligent, then software simulations of human brains at the level of inter-neuron dynamics are generally intelligent.
  Conclusion: Software simulations of human brains at the level of inter-neuron dynamics are generally intelligent.
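The argument is a plain modus ponens, so its validity (as opposed to the truth of its premises) is mechanically checkable. A minimal Lean sketch, with placeholder proposition names that are mine, not the thread's:

```lean
-- BrainsGI : "human brains are generally intelligent"
-- SimsGI   : "neuron-level software simulations of human brains
--             are generally intelligent"
example (BrainsGI SimsGI : Prop)
    (p1 : BrainsGI)            -- Premise 1
    (p2 : BrainsGI → SimsGI)   -- Premise 2
    : SimsGI :=                -- Conclusion
  p2 p1                        -- modus ponens
```

This only shows the conclusion follows from the premises; the debate below is about whether Premise 2 (and the feasibility of such a simulation) actually holds.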
(fyi I believe there is an ~82% chance humans will develop an AGI within the next 30 years.)

kolektiv | 2 years ago

For info: I don't believe (1), I do believe (2) although not that strongly - it's more likely to be a leap than a gradient, I suspect - I simply don't see anything right now that convinces me it's just over the next hill.

Your conclusion... maybe, yes - but I don't think we're anywhere near a simulation approach with sufficient fidelity. Also, 82% is very specific!

foo3a9c4 | 2 years ago

> For info: I don't believe (1), I do believe (2) although not that strongly

Thanks for clarifying. Do you believe there is a better than 20% chance that humans will develop AGI in the next 30 years?

> I simply don't see anything right now that convinces me it's just over the next hill.

These are the reasons that I believe we are close to developing an AGI system.

  (1) Many smart people are working on capabilities.
  (2) Many investment dollars will flow into AI development in the near future.
  (3) Many impressive AI systems have recently been developed: Meta's CICERO, OpenAI's GPT-4, DeepMind's AlphaGo.
  (4) Hardware will continue to improve.
  (5) LLM performance significantly improved as data volume and training time increased.
  (6) Humans have built other complex artefacts without good theories of the artefact, including: operating systems, airplanes, beer.

scotty79 | 2 years ago

Also, (3) it's doubtful that an AGI would necessarily pose any danger to humans in practice. After all, Earth has billions of human-level intelligences, and nearly all of them are useless; when they are even mildly dangerous, it's due more to their numbers and disgusting biology than to their intelligence.