item 38860980


necroforest | 2 years ago

> We aren't going to see more progress until we have a way to generalize the compute graph as a learnable parameter

That's a bold statement since a ton of progress has been made without learning the compute graph.


nomel|2 years ago

From my naive perspective, there seems to be a plateau that everyone is converging on, somewhere between the ChatGPT 3.5 and 4 levels of performance, with some suspecting that the implementation of 4 involves several expert models, which would already be extra sauce external to the LLM. Combined with the observation that generative models converge to the same output given the same training data, regardless of architecture (having trouble finding the link; it was posted here some weeks ago), this suggests that external secret sauce, outside the model, might be where the near-term gains are.
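The "several expert models" idea mentioned above is essentially a mixture-of-experts setup: a router inspects the input and dispatches it to a specialist. A minimal sketch, with entirely hypothetical experts and keyword-based routing (real MoE systems use a learned gating network over embeddings, not keyword matching):

```python
# Hypothetical expert models; in practice each would be a separate network.
def code_expert(prompt):
    return f"[code expert] handling: {prompt}"

def math_expert(prompt):
    return f"[math expert] handling: {prompt}"

def general_expert(prompt):
    return f"[general expert] handling: {prompt}"

# Assumed keyword triggers for each specialist (illustrative only).
EXPERTS = {
    code_expert: {"python", "bug", "function"},
    math_expert: {"integral", "prove", "sum"},
}

def route(prompt):
    """Score each expert by keyword overlap; fall back to the generalist."""
    words = set(prompt.lower().split())
    scores = {expert: len(keywords & words) for expert, keywords in EXPERTS.items()}
    best = max(scores, key=scores.get)
    if scores[best] == 0:
        return general_expert(prompt)
    return best(prompt)

print(route("prove this sum converges"))  # dispatched to the math expert
print(route("hello there"))               # no keywords match: generalist
```

The point is only that routing plus specialists is machinery outside any single model, which is what would make it "extra sauce, external to the LLM."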

I suppose we'll see in the next year!

uoaei|2 years ago

A ton of progress can be made climbing a tree, but if your goal is reaching the moon it becomes clear pretty quickly that climbing taller trees will never get you there.

nethi|2 years ago

True, but it is the process of climbing trees that gives you the insight into whether taller trees help or not and, if not, what to do next.

gpderetta|2 years ago

With enough thrust, even p̵i̵g̵s̵ trees can fly.

ActorNightly|2 years ago

We have made progress in efficiency, not functionality. Instead of searching Google or Stack Overflow or any particular documentation, we just go to ChatGPT.

Information compression is cool, but I want actual AI.

danielmarkbruce|2 years ago

The idea that there has been no progress in functionality is silly.

Your whole brain might just be doing "information compression" by that analogy. An LLM is, in a sense, learning concepts. Even Word2Vec "learned" that king - male + female = queen, and that's a small model that's really just one part (not exact, but similar) of a transformer.
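The king - male + female = queen analogy is just vector arithmetic plus a nearest-neighbor lookup. A toy sketch with hand-constructed 2-d vectors (real Word2Vec embeddings are learned from text and have hundreds of dimensions; these are picked so the analogy holds by construction):

```python
import math

# Toy embeddings: axis 0 ~ "royalty", axis 1 ~ "gender" (hand-made, not learned).
vectors = {
    "king":   (1.0,  1.0),
    "queen":  (1.0, -1.0),
    "male":   (0.0,  1.0),
    "female": (0.0, -1.0),
    "banana": (-1.0, 0.2),  # a distractor word far from the query
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def analogy(a, b, c):
    """Solve a - b + c = ? by nearest cosine neighbor, excluding the inputs."""
    query = tuple(x - y + z for x, y, z in zip(vectors[a], vectors[b], vectors[c]))
    candidates = [w for w in vectors if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(query, vectors[w]))

print(analogy("king", "male", "female"))  # prints "queen"
```

Excluding the input words from the candidate set mirrors standard practice in analogy evaluation, since the query vector is often closest to one of its own inputs.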

p1esk|2 years ago

Fascinating. What’s “actual AI”?