eterevsky | 11 months ago
The general shape of these arguments is: "Playing chess/go well, or making scientific discoveries, requires a specific way of strategic thinking or the ability to form the right hypotheses. Computers don't do this, ergo they won't be able to play chess or make scientific discoveries."
I don't think this is a very sound line of reasoning. A scientific question can take one of the following shapes:
- (Mathematical) Here's a mathematical statement. Prove either it or its negation.
- (Fundamental natural science) Here are the results of some observations. What is the simplest possible model that explains all of them?
- (Engineering) We need to do X. What's an efficient way of doing it?
All of these questions can be approached in a "human" way, but it is also possible to train AIs to tackle them without going through the same process as human scientists.
sweezyjeezy | 11 months ago
With chess the answer was more or less to brute-force the problem space, but will that work with math/science? Is there a way for AI to explore the problem space broadly, especially in a way that goes beyond or even against the contents of its training data? I don't know the answer, but that seems to be the crucial question here.
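For what it's worth, the "brute force" alluded to here can be sketched in a few lines. Below is a toy negamax search over the full game tree of tic-tac-toe — the same core idea (minus the heuristics, pruning, and hardware) behind classical chess engines. This is an illustrative sketch, not how modern engines or AlphaZero-style systems actually work.

```python
def winner(b):
    """Return 'x' or 'o' if that player has three in a row on board b, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]
    for i, j, k in lines:
        if b[i] != '.' and b[i] == b[j] == b[k]:
            return b[i]
    return None

def negamax(b, player):
    """Exhaustively search the game tree; value of b for `player`:
    +1 = forced win, 0 = draw, -1 = forced loss."""
    w = winner(b)
    if w is not None:
        return 1 if w == player else -1
    if '.' not in b:
        return 0  # board full, no winner: draw
    opponent = 'o' if player == 'x' else 'x'
    best = -1
    for i, cell in enumerate(b):
        if cell == '.':
            child = b[:i] + player + b[i + 1:]
            # Opponent's best reply, negated: what's good for them is bad for us.
            best = max(best, -negamax(child, opponent))
    return best

print(negamax('.' * 9, 'x'))  # perfect play from the empty board is a draw: 0
```

The point of the toy is that nothing here resembles "strategic thinking" — it just enumerates every continuation. The open question in the comment is whether anything analogous scales to the vastly larger, less well-defined search spaces of math and science.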