eladgil | 2 years ago

I agree this is a potential outcome. One big question is generalizability versus niche models. For example, is the best legal model a frontier model + a giant context window + RAG? Or is it a niche model trained or fine tuned for law?

Right now, at least, people seem to decouple measures of how smart a model is from its knowledge base, and for now the really big models seem smartest. So part of the question is how insightful / synthesis-centric the model needs to be versus effectively doing regressions....

CuriouslyC|2 years ago

Frontier model + RAG is good when you need cross-discipline abilities and general knowledge; niche models are best when the domain is somewhat self-contained (for instance, if you wanted a model that is amazing at role-playing certain types of characters).

The future is model graphs with networked mixtures of experts, where models know about other models and can call them as part of recursive prompts, with some sort of online training to tune the weights of the model graph.
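The "model graph with online-tuned weights" idea could be sketched roughly like this. Everything here is hypothetical: the experts are stubbed as plain functions, and the routing/update rule is just a simple illustrative choice, not a claim about how such a system would actually be trained.

```python
# Hypothetical sketch of a "model graph": each node is a model
# (stubbed here as a plain function), and a router keeps a weight
# per node that is tuned online from feedback.

class ModelGraph:
    def __init__(self, experts):
        # experts: dict of name -> callable(prompt) -> answer
        self.experts = experts
        self.weights = {name: 1.0 for name in experts}

    def route(self, prompt):
        # Pick the expert with the highest current routing weight.
        name = max(self.weights, key=self.weights.get)
        return name, self.experts[name](prompt)

    def feedback(self, name, reward, lr=0.1):
        # Online update: nudge the routing weight toward the reward.
        self.weights[name] += lr * (reward - self.weights[name])

# Toy experts standing in for actual models.
experts = {
    "law": lambda p: f"law-model answer to {p!r}",
    "general": lambda p: f"general-model answer to {p!r}",
}
graph = ModelGraph(experts)
name, answer = graph.route("summarize this indemnity clause")
graph.feedback(name, reward=0.2)  # answer judged poor; downweight
```

A real version would presumably route per-prompt (not globally) and let experts call each other recursively, but the weight-update loop is the part the comment is pointing at.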

sfink|2 years ago

> The future is model graphs with networked mixtures of experts, where models know about other models and can call them as part of recursive prompts, with some sort of online training to tune the weights of the model graph.

What's the difference between that and combining all of the models into a single model? Aren't you just introducing limitations in communication and training between different parts of that über-model, limitations that may as well be encoded into the single model if they're useful? Are you just partitioning for training performance? Which is a big deal, of course, but it just seems like guessing the right partitioning and communication limitations is not going to be straightforward compared to the usual stupid "throw it all in one big pile and let it work itself out" approach.

danielmarkbruce|2 years ago

Yup, it's unclear. The current ~consensus for legal, as an example, is "general-purpose frontier model + very sophisticated RAG/system architecture". I'm building something here using this idea and think it's 50/50 (at best) that I'm on the right path. It's quite easy to generate very clever-sounding but often wrong insights into various legal agreements (M&A docs, for example). Looking at the tokenization, the training data, the decoding, and the architecture (lots of guesses) of the big models, there are a lot of knobs that seem turned slightly incorrectly for the domain.
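The "frontier model + RAG" pattern being described reduces to: retrieve relevant passages, then pack them into the prompt of a general model. A minimal sketch, with toy keyword-overlap retrieval standing in for a real vector store, and with the clause texts and helper names entirely made up:

```python
# Minimal RAG sketch: keyword retrieval + prompt assembly.
# A production system would use embeddings and a vector index;
# word overlap is just an illustrative stand-in.

def _words(text):
    # Crude normalization: lowercase, drop periods and question marks.
    return set(text.lower().replace(".", " ").replace("?", " ").split())

def retrieve(query, clauses, k=2):
    # Rank clauses by word overlap with the query; keep the top k.
    q = _words(query)
    ranked = sorted(clauses, key=lambda c: len(q & _words(c)), reverse=True)
    return ranked[:k]

def build_prompt(query, clauses):
    # Assemble the context the frontier model would be asked to use.
    context = "\n".join(f"- {c}" for c in retrieve(query, clauses))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."

clauses = [
    "The indemnifying party shall hold harmless the indemnified party.",
    "Termination requires thirty days written notice.",
    "Governing law shall be the State of Delaware.",
]
prompt = build_prompt("What notice is required for termination?", clauses)
```

The domain-specific difficulty the comment points at lives mostly outside this sketch: how agreements are chunked, how defined terms are resolved across documents, and whether the base model's tokenization even handles legal boilerplate well.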

Some of the domains are so large that a specialized model might seem niche but the value prop is potentially astronomical.