
singingfish | 1 year ago

The reasoning is quite subtle, and because I'm not a very coherent guy I have problems expressing it. In the LLM space there are a whole bunch of pitfalls around overfit (largely solvable with pretty standard statistical methods) and inherent bias in the training material, which is a much harder problem to solve. The fact that the internal representation gives you zero information on how to handle this bias means the tool itself cannot be used to detect or resolve the problem.
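
For concreteness, the "pretty standard statistical methods" for catching overfit usually boil down to holding out data the model never trains on and comparing scores. Here's a minimal sketch assuming scikit-learn and synthetic data; the model and numbers are illustrative, not from anything above:

    # Detect overfit the standard way: compare training score
    # against a held-out validation score. A large gap between
    # the two is the classic overfit signal.
    # Synthetic data and model choice are assumptions for the sketch.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 20))  # 500 samples, 20 mostly-noise features
    y = (X[:, 0] + rng.normal(scale=2.0, size=500) > 0).astype(int)

    X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

    # An unconstrained tree memorizes its training set...
    model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
    print(f"train accuracy: {model.score(X_train, y_train):.2f}")  # ~1.00
    print(f"val accuracy:   {model.score(X_val, y_val):.2f}")      # much lower

    # ...and the train/validation gap is directly measurable.
    # No comparable held-out test exists for bias baked into the
    # training corpus itself, which is the point being made here.

The gap is measurable because the held-out data gives you a reference the model never saw; bias in the training material has no such reference, so the same trick doesn't apply.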

I found this episode of the Nature Podcast, "How AI works is often a mystery — that's a problem" (https://www.nature.com/articles/d41586-023-04154-4), very useful in a 'thank goodness someone else has done the work of being coherent so I don't have to' way.

busyant | 1 year ago

Thank you.

That's a really interesting (and understandable) explanation.