top | item 41235743


ai4ever | 1 year ago

LLMs have unleashed the dreamer in each and every young coder. Now there is all sorts of speculation about what these machines can or cannot do. This is a natural phase of any mania.

These folks should all take a course in epistemology and realize that all knowledge is built up from symbolic components, not spit out by a probabilistic machine.

Gradually, reality will sync (intentional misspelling) in, and these imaginings will be seen for what they are: futile manic episodes.

visarga | 1 year ago

> These folks must all do courses in epistemology to realize that all knowledge is built up of symbolic components and not spit out by a probabilistic machine.

Knowledge ends up as symbolic representation, but it ultimately comes from the environment. Science is search: searching the physical world or other search spaces, but always search over an environment.

I think many people here almost forget that the training set of GPT was the hard work of billions of people over history, who researched and tested ideas in the real world and built up to our current level. Imitation can only take you so far; for new discoveries, the environment is the ultimate teacher. It's not a symbolic-processing thing, it's a search thing.

Everything is search. Protein folding? Search. DNA evolution? Search. Memory? Search. Even balancing while walking is search (where should I put my foot?). Science? Search. Optimizing models is a search for the best parameters to fit the data. Learning is data compression and a search for optimal representations.
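The "optimizing models is search" point can be made literal with a minimal sketch: a random search over a one-parameter model fit to a toy dataset. Everything here (the data, the model `y = w * x`, the search bounds) is invented for illustration.

```python
import random

# Toy dataset, roughly y = 2x. Invented for the example.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]

def loss(w):
    # Sum of squared errors for the one-parameter model y = w * x.
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys))

random.seed(0)
best_w, best_loss = 0.0, loss(0.0)
for _ in range(10_000):
    w = random.uniform(-10.0, 10.0)  # sample a candidate from the search space
    if loss(w) < best_loss:          # keep it only if it fits the data better
        best_w, best_loss = w, loss(w)

print(best_w)  # lands near 2.0
```

Gradient descent, evolution, and Bayesian optimization are just smarter ways of proposing the next candidate; the "search over an environment (here, the data)" framing is the same.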

Symbolic representations are very important in search: they quantize our decisions and make it possible to choose in complex spaces. Symbolic representations can be copied, modified, and transmitted; without them we would not get very far. Even DNA uses its own language of "symbols".

Symbols can encode both rules and data and, more importantly, can encode rules as data, so syntax becomes the object of meta-syntax. This is how compilers, functional programming, and ML models work: syntax creating syntax, rules creating rules. This dual aspect of "behavior and data" is important for getting to semantics and understanding.
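The "rules as data" idea fits in a few lines. Below is a hypothetical sketch (all names invented) where a rule is stored as plain data that can be inspected, copied, or transmitted, and is then compiled into executable behavior:

```python
def compile_rule(rule):
    # A rule is data: a tuple like ("add", n) or ("mul", n).
    # Compiling turns that data into behavior (a function).
    op, n = rule
    if op == "add":
        return lambda x: x + n
    if op == "mul":
        return lambda x: x * n
    raise ValueError(f"unknown rule: {op}")

# The same symbols act as data here...
pipeline = [("mul", 3), ("add", 1)]
# ...and as behavior here.
fns = [compile_rule(r) for r in pipeline]

x = 4
for f in fns:
    x = f(x)
print(x)  # 4 * 3 + 1 = 13
```

A compiler does the same thing at scale: source text is data until it is translated into rules (machine code) that then run.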

defamation | 1 year ago

My guy, you're so confident, yet you forget AlphaFold: it designs protein structures that don't exist.

Who's to say a model can't eventually be trained to work within the parameters the real world operates under, and to produce novel ideas and inventions much as a human does, just at a larger scope?

ai4ever | 1 year ago

Claims of inventing new materials via AI were debunked...

https://www.siliconrepublic.com/machines/deepmind-ai-study-c...

DeepMind is overselling its AI hand when it doesn't have to.

"Who's to say that..." could be the leading question for any "possibility" in the AI religion.

"Who's to say that god doesn't exist?" and the like are questions for which there are no tests, and which hence fall outside the realm of science and into the realm of religion.

mnky9800n | 1 year ago

If you look at the list of authors on those papers, you will see that most of them have PhDs in something protein-folding related. It's not that some computer scientists figured it out; it's that someone built the infrastructure and then gave it to the subject-matter experts to use.

the__alchemist | 1 year ago

AlphaFold doesn't solve the protein folding problem. It has practical applications, but IMO we still need to (and can!) build better ab initio chemistry models that actually simulate protein folding, or chemical reactions more generally.

JBorrow | 1 year ago

Models like AlphaFold are very different beasts. There's definitely a place for tools that suggest specific, verifiable products. Overarching models like "The AI Scientist" that try to do "end-to-end" science, especially when the end product is a paper, are significantly less useful.

gessha | 1 year ago

I'll believe it when I see it, and/or when I see the research path that gets there.

Judge a technology based on what it’s currently capable of and not what it promises to be.