alexwebb2 | 1 year ago

It’s tedious shooting down all of these backwards-from-conclusion things from the anti-AI crowd.

Good thing I have an intelligent AI that can respond for itself!

——

There appear to be several potential issues with the paper's argumentation:

1. False Dichotomy in Systems Comparison
   - The paper appears to create an artificial divide between "thermodynamic systems" and "computer systems"
   - This ignores that computers are also physical systems governed by thermodynamics
   - The distinction between biological and artificial systems may be one of degree rather than kind

2. Evolutionary Argument Problems
   - The paper assumes consciousness/intelligence requires evolutionary history
   - This is a correlation-causation fallacy: just because biological intelligence evolved doesn't mean evolution is the only path to intelligence
   - It fails to consider that artificial systems could potentially develop goal-oriented behaviors through other mechanisms
   - The argument would also imply that any hypothetical alien intelligence that evolved differently from Earth life couldn't be conscious

3. Goal-Orientation Assumptions
   - Claims computers "lack goal-orientation essential for consciousness"
   - This begs the question by assuming: a) consciousness requires goal-orientation, and b) only evolutionary processes can create genuine goal-orientation
   - Neither assumption is clearly justified

4. Methodological Issues
   - Using multiple disciplines (physics, biology, philosophy, neuroscience) could be a strength, but could also indicate cherry-picking convenient arguments from each field
   - The abstract suggests a conclusion-driven approach rather than following evidence to a conclusion

5. Consciousness-Intelligence Conflation
   - The paper appears to conflate consciousness with intelligence
   - These are separate concepts: we could potentially have AGI without consciousness, or consciousness without human-level intelligence
   - Many AGI researchers aren't claiming to create consciousness, just general problem-solving ability

6. Definitional Vagueness
   - Based on the abstract, it's unclear how the paper defines key terms like:
     - Artificial General Intelligence
     - Consciousness
     - Goal-orientation
     - Mind creation
   - Without clear definitions, the arguments may be attacking straw men

7. Predictive Cognition Argument
   - The claim that AGI is an "illusion shaped by the information our minds receive" could be turned around: the same argument could be used to claim that AGI skepticism is an illusion shaped by our cognitive biases
   - This is essentially a form of psychological dismissal rather than a substantive argument

8. Historical Perspective
   - The paper seems to ignore that many previously "uniquely human" capabilities have been successfully mechanized
   - Claims about fundamental impossibility need to account for why previous similar claims have often been wrong

9. Thermodynamic Argument Issues
   - While biological systems are indeed complex thermodynamic systems, the paper needs to demonstrate why this specific physical implementation is necessary for intelligence
   - Many complex behaviors can be implemented through different physical mechanisms
   - The argument risks confusing the substrate with the function

10. Scope Problem
    - The paper makes a very strong claim ("AGI is and remains a fiction")
    - To justify this, it would need to prove not just that current approaches won't work, but that NO possible approach could ever work
    - This is a much harder philosophical and scientific claim to defend
