top | item 37076609

krautt | 2 years ago

there is no technical reason why you couldn't unbolt the OpenAI interface and bolt in LLaMA. Moreover, once you have this, you need only load the model into memory once. Emulating different agents would be handled exclusively through the context windows that LLMs expect: each agent would just have its own evolving context window. Round-robin the submissions, repeat.

what's crazy to think about is what new things will become possible as context window sizes creep up.

discuss

dogcomplex | 2 years ago

Or, once the model can be trained (integrated with new information) on the fly, giving it a somewhat limitless context window.

__loam | 2 years ago

I think there are actual technical problems in doing this.