eterevsky | 11 months ago

This article seems to argue from the way scientific discoveries are made by humans. Its gist seems similar to articles from the 80s claiming that computers would never play good chess, or from the 2000s claiming the same about go.

The general shape of these arguments is: "Playing chess or go well, or making scientific discoveries, requires a specific kind of strategic thinking, or the ability to form the right hypotheses. Computers don't do this, ergo they won't be able to play chess or make scientific discoveries."

I don't think this is a good line of reasoning. A scientific question can take one of the following shapes:

- (Mathematical) Here's a mathematical statement. Prove either it or its negation.

- (Fundamental natural science) Here are the results of some observations. What is the simplest possible model that explains all of them?

- (Engineering) We need to do X. What's an efficient way of doing it?

All of these questions could be solved in a "human" way, but it is also possible to train AIs to approach them without going through the same process as human scientists.
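
To make the second shape concrete, here's a minimal sketch (my illustration, not something from the article) of "given observations, find the simplest model that explains them": fit polynomials of increasing degree and pick the one with the best complexity-penalized score. The function name and the choice of BIC as the simplicity criterion are assumptions for the example.

    import numpy as np

    def simplest_model(x, y, max_degree=5):
        """Pick the polynomial degree that best trades fit quality
        against complexity, scored by BIC (lower is better)."""
        n = len(x)
        best_degree, best_bic = None, float("inf")
        for degree in range(max_degree + 1):
            coeffs = np.polyfit(x, y, degree)
            rss = float(np.sum((y - np.polyval(coeffs, x)) ** 2))
            k = degree + 1  # number of free parameters
            bic = n * np.log(rss / n + 1e-12) + k * np.log(n)
            if bic < best_bic:
                best_degree, best_bic = degree, bic
        return best_degree

    # Noisy quadratic data; the search should settle on degree 2.
    rng = np.random.default_rng(0)
    x = np.linspace(-3, 3, 50)
    y = 2 * x**2 - x + 1 + rng.normal(scale=0.5, size=x.shape)
    print(simplest_model(x, y))

Nothing about this requires reproducing a human scientist's reasoning; it's just search plus a simplicity criterion.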

sweezyjeezy | 11 months ago

> but it is also possible to train AIs to approach them without going through the same process as human scientists

With chess the answer was more or less to brute-force the problem space, but will that work with math and science? Is there a way to explore the problem space broadly with AI, especially in a way that goes beyond or even against the contents of its training data? I don't know the answer, but that seems to be the crucial question here.
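
For what "brute-force the problem space" looks like in the game case, here's a toy exhaustive search over the full game tree of Nim (take 1-3 stones, last move wins); purely illustrative:

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def winning(stones: int) -> bool:
        """True if the player to move can force a win."""
        if stones == 0:
            return False  # previous player took the last stone and won
        # A position is winning if some legal move leads to a
        # position that is losing for the opponent.
        return any(not winning(stones - take)
                   for take in (1, 2, 3) if take <= stones)

    print([n for n in range(1, 13) if not winning(n)])  # -> [4, 8, 12]

This works because the tree is finite and cheap to enumerate; the open question is exactly whether math and science offer anything analogous to enumerate.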