As applied today, the author's take strikes a chord with me. LLMs are a technical marvel, but their results feel cheap and tacky...it doesn't take much to catch a glimpse of the fallible man behind the curtain. They are lauded more for how impressive the illusion is, all things considered, than for what they are actually GOOD FOR. Weird hallucinating search, smart but untrustworthy coding intern...people hold these things up in a way that suggests we've arrived, instead of acknowledging their disappointment and saying "yes, there is a lot of power here, but we still haven't found the killer application, the thing that takes this from an impressive but flawed trick to indispensable".