nritchie | 1 year ago

As a scientist, I like to believe that my greatest contribution is my fresh ideas. Maybe AI will be able to do this better than I can. If so, I'm not sure that I really want to be a scientist anymore. TBH, I'm in science because I enjoy the process of thinking, coming up with ideas and testing them. Eliminate any part and it just becomes another tedious job.

jszymborski | 1 year ago

As Sir Martyn Poliakoff likes to say (greatly paraphrasing), your mind is like a vegetable soup, and stirring it around surfaces all sorts of interesting bits.

I see this as a great way of stirring things up! Like, once I asked ChatGPT about some ideas I had, and it hallucinated something that gave me a great idea.

mnky9800n | 1 year ago

The truth is that as scientific problems become sufficiently complex, we will need to trust a computer and what it tells us. That computer acts as an abstraction layer for knowledge in the same way that Python abstracts away memory management, and this will let us conceive ideas that were otherwise impossible.

To put it another way: it used to be that a department of physicists or glaciologists or hydrologists was happy enough talking among themselves. Nowadays people are rather interdisciplinary, and often they are talking across departments trying to come up with new ideas. Eventually the complexity of a new idea will be such that you cannot conceive it without the help of a computer. At that point you will have to trust the output of the computer the same way you trusted your colleagues.

I suppose we already sort of do that with many computational tools: you trust the software you use to analyse data, and people use Python because its data-science ecosystem is well developed and well trusted. That's the problem with the current crop of LLMs: none of them has been able to build trust as a knowledge abstraction layer. Once one does, it will become very useful, because you can ask it to do all sorts of things.
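
Concretely, this is the kind of abstraction I mean; a minimal Python sketch (mine, purely illustrative):

    # Python: the runtime allocates and frees memory for you.
    values = [x * x for x in range(10)]  # no malloc, no sizeof, no free
    print(sum(values))                   # prints 285; unreferenced objects are
                                         # later reclaimed by the garbage collector

Nobody audits the allocator before running this; you trust it because the ecosystem has earned that trust.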

imtringued | 1 year ago

My problem with AI nutjobs like you is that you can construct an almost identical scenario with classical algorithms and technology, with no discernible change to the argument.

No human fully understands every inner working of a camera sensor, yet people trust a camera recording enough to make decisions from it.

The point is that you don't have to trust the output of the camera the same way you trust a colleague. You don't need to anthropomorphise the camera to make use of it.

Nothing changes. There is nothing special about LLMs. There is no need to worship them or think of them as anything other than tools.

spking | 1 year ago

AI can "play" the drums better and more precisely than me but I still enjoy the act of playing the drums. AI can or will probably drive a Formula 1 car better than Max Verstappen but he will still enjoy being a driver. I don't see why we can't coexist with machine counterparts.

mynameajeff | 1 year ago

From all the attempts I've seen in the past few years, we haven't been able to get an AI to complete a single lap of wheel-to-wheel racing at full speed on a real track[1]. I'm not sure they're exactly competing for a seat as a pay driver at Haas, let alone racing for Red Bull, anytime soon.

[1] https://youtu.be/TPzBH-7ckO0

numba888 | 1 year ago

> If so, I'm not sure that I really want to be a scientist anymore

I'm not sure you have many interesting options.