top | item 47132471

pbmango | 7 days ago

This is very interesting. I don't see much discussion of interpretability in the day-to-day discourse of AI builders. I wonder if everyone assumes it is either already solved, or too out of reach to bother stopping and thinking about.

yogurt-male | 4 days ago

Mostly out of reach. There is a ton of research on this coming out every day, including both proposals for new methods and (often strong) critiques of older or recently proposed ones. Interpretability (especially for large, modern models) is very, very far from being a solved problem.

adebayoj | 6 days ago

Most interpretability techniques have yet to be shown useful in everyday model pipelines. However, the field is working hard to change this.