Mvandenbergh | 2 years ago

We (an engineering consultancy, so not software but physical infrastructure) have an internal ChatGPT so that people can use it for work.

I find that it is quite good at answering textbook type questions and giving background on things. Basically a kind of supercharged search engine.

So for example, if I knew nothing about water treatment technology, I could ask it "what are the typical stages of water treatment for groundwater from a borehole?" and it will give me a good answer. Sometimes when you ask more specific questions, it will give some weird answers. It was convinced that desalination was already the main source of drinking water in a particular country, probably because there have been a lot of new desal schemes proposed and so the input corpus has a lot of associations between that country and desalination.

I just asked it whether you could use nitrogen to cool a nuclear reactor (the correct answer is yes, if it's nitrogen enriched in the heavy isotope N-15), and it gave an OK answer but didn't mention that N-14 absorbs neutrons. This is fairly obvious to anyone in the field, but there is very little written about the idea in the likely input corpus, so ChatGPT doesn't know it.
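To give a sense of why the N-14 point matters, here is a rough back-of-the-envelope comparison. The cross-section figures below are approximate textbook values for thermal-neutron absorption, quoted from memory, so treat them as illustrative rather than authoritative:

```python
# Rough comparison of thermal-neutron absorption in nitrogen isotopes.
# Cross-sections in barns; approximate textbook values, illustrative only.
SIGMA_N14 = 1.8       # N-14 (n,p)C-14 thermal absorption, roughly 1.8 barns
SIGMA_N15 = 0.000024  # N-15 thermal absorption, roughly 24 microbarns

ratio = SIGMA_N14 / SIGMA_N15
print(f"N-14 absorbs on the order of {ratio:,.0f}x more thermal neutrons than N-15")
```

The many-orders-of-magnitude gap is why natural nitrogen (99.6% N-14) is a poor reactor coolant but N-15-enriched nitrogen could work, and also why this detail is exactly the kind of thing a model can only "know" if someone wrote it down.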

It cannot (and in fact, standard LLMs cannot, because of how they are architected) answer questions that require constructing counterfactuals or hypotheticals outside their input base. To be clear: from the point of view of an LLM, something covered in its input corpus is neither a counterfactual nor a hypothetical, even if it is counterfactual or hypothetical relative to the real world. So if nobody has ever written anything about using a particular technology for a particular purpose, no pure LLM will be able to answer questions about it. Other emerging AI technologies look like they will be able to do that, though.
