bluegatty | 1 day ago

We don't have the right abstractions in place to support true AI-driven work. We replaced ourselves, but we don't have the tools to go '1 layer up'.

enknee1 | 15 hours ago

Nailed it.

We desperately need a new set of abstractions for human- and AI-based knowledge.

I prefer a 'humans as a network of abstractions piloting an organic robot' perspective. Sans mathematical framework, this is an unsatisfying claim, I know... but just hear me out.

This allows for extreme complexity between individuals, and for language to act as a standard serial comm channel with high-dimensional abstractions embedded across words - a network of abstractions unto itself. Models of this network are embedded in books and 'live' in oral history.

LLMs, then, are just a much better model of the abstraction networks that span people through language (and often thought).

Notice that they're NOT people. And note that we are actively developing network science to accommodate the complexities inherent in examining both real-world and modeled versions of these networks.

As an example, the tools to layer up can be envisioned as more networks on top of these networks: reasoning and cognitive patterns captured in recursive, transformer-based LLMs. So a metacognitive model might actively generate a LoRA adapter for each prompt.
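To make that concrete, here's a minimal sketch of one way the 'LoRA per prompt' idea could look: a small hypernetwork maps a pooled prompt embedding to low-rank weight deltas that modulate a frozen base layer. Everything here (the class name, dimensions, and the prompt embedding source) is a hypothetical illustration under those assumptions, not an existing system:

    # Hypothetical sketch: a hypernetwork emits per-prompt LoRA factors
    # that adapt a frozen base layer. All names and dims are illustrative.
    import torch
    import torch.nn as nn

    class PromptConditionedLoRA(nn.Module):
        def __init__(self, d_model=512, rank=8, d_prompt=512):
            super().__init__()
            self.base = nn.Linear(d_model, d_model)  # frozen base weights
            self.base.weight.requires_grad_(False)
            # hypernetwork heads: prompt embedding -> low-rank factors A, B
            self.to_A = nn.Linear(d_prompt, rank * d_model)
            self.to_B = nn.Linear(d_prompt, d_model * rank)
            self.rank, self.d_model = rank, d_model

        def forward(self, x, prompt_emb):
            # generate a fresh low-rank delta (B @ A) for *this* prompt
            A = self.to_A(prompt_emb).view(self.rank, self.d_model)
            B = self.to_B(prompt_emb).view(self.d_model, self.rank)
            delta = B @ A                      # (d_model, d_model), rank-limited
            return self.base(x) + x @ delta.T  # base path + prompt-specific path

    layer = PromptConditionedLoRA()
    x = torch.randn(4, 512)            # token activations
    prompt_emb = torch.randn(512)      # pooled embedding of the current prompt
    print(layer(x, prompt_emb).shape)  # torch.Size([4, 512])

The point this is gesturing at: the adapter becomes data generated on the fly rather than a fixed set of trained weights - a network generating another network.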

Again, much math and research needed. But it's been a very useful set of abstractions thus far.