item 38504892

reexpressionist | 2 years ago

"As with most software development, modern AI work is all about knowing your tools and when it's appropriate to use them." 100% agree. Ditto with just using the easiest to access models as initial proof-of-concept/dev/etc. to get started.

(I do agree with the overall sentiment of the TC article, although as noted by others below, there's some mashing of terminology in the article. E.g., I, too, associate GOFAI with symbolic AI and planning.)

There's another dimension, too, not mentioned in the article: Even with general-purpose LLMs, production applications still require labeled data to produce uncertainty estimates. (There's a sense in which any well-defined and tested production application is a 'single-task' setting, in its own way.) One of the reasons on-device/edge AI has gotten so interesting, in my opinion, is that we now know how to derive reliable uncertainty estimates with neural models (more or less independent of scale). As long as prediction uncertainty is sufficiently low, there's no particular reason to go to a larger model. That can lead to non-trivial cost/resource savings, as well as the other benefits of keeping things on-device.
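A minimal sketch of the idea, with one common approach (an entropy-based confidence cascade): use held-out labeled data to calibrate an uncertainty threshold for the small on-device model, then escalate to a larger model only when the small model's prediction is too uncertain. All names, the threshold rule, and the toy numbers here are illustrative assumptions, not from the comment.

```python
import numpy as np

def entropy(probs):
    """Predictive entropy of a softmax distribution -- a simple uncertainty score."""
    p = np.clip(np.asarray(probs), 1e-12, 1.0)
    return -np.sum(p * np.log(p), axis=-1)

def calibrate_threshold(small_model_probs, labels, target_accuracy=0.95):
    """Find the largest entropy threshold such that the predictions the small
    model keeps (entropy <= threshold) still meet the target accuracy on a
    held-out labeled calibration set."""
    scores = entropy(small_model_probs)
    correct = small_model_probs.argmax(axis=-1) == labels
    best = 0.0
    for t in np.sort(scores):          # from most to least conservative
        kept = scores <= t
        if kept.any() and correct[kept].mean() >= target_accuracy:
            best = t
    return best

def keep_on_device(probs, threshold):
    """True -> answer with the small model; False -> escalate to a larger one."""
    return entropy(probs) <= threshold

# Toy calibration set: 3-class softmax outputs from the small model + labels.
probs = np.array([[0.90, 0.05, 0.05],
                  [0.40, 0.35, 0.25],
                  [0.80, 0.10, 0.10],
                  [0.34, 0.33, 0.33]])
labels = np.array([0, 2, 0, 1])  # the uncertain rows are also the wrong ones
t = calibrate_threshold(probs, labels)
print(keep_on_device(probs, t))  # [ True False  True False]
```

In this toy run, the two confident (and correct) predictions stay on-device and the two near-uniform ones escalate, which is the cost-saving pattern described above. More principled versions of the same recipe (e.g., conformal prediction) give coverage guarantees rather than a point estimate of accuracy.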

wiricon | 2 years ago

Can you link to any methods for deriving reliable uncertainty estimates? Sounds useful.