top | item 45175796


mmargenot | 5 months ago

At least within tech, there seem to have been explosive changes and development of new products. While many of these fail, things like agents and other approaches for handling foundation models are only expanding in use cases. Agents themselves are hardly a year old as part of common discourse on AI, though technologists have been building POCs for longer. I've been very impressed with the wave of tools along the lines of Claude Code and friends.

Maybe this will end up relegated to a single field, but from where I'm standing (within ML / AI), the way greenfield projects develop now is fundamentally different as a result of these foundation models. Even if development on these models froze today, MLEs would still likely start by feeding something to an LLM, just because it's lightning fast to stand up.



andy99|5 months ago

It's probably cliché, but I think it's both overhyped and underhyped, and for the same reason. The hype comes from "leadership" types who don't understand what LLMs actually do, and so imagine all sorts of nonsense (replacing vast swaths of jobs or autonomously writing code) while failing to understand how valuable a productivity enhancer and automation tool it can be. Eventually hype and reality will converge, but unlike e.g. blockchain, or even some of the less bullshit "big data" and similar trends, there's no doubt that access to an LLM is a clear productivity enhancer for many jobs.

mallowdram|5 months ago

AI was a colossal mistake. A lazy primate's total failure of imagination. It conflated the "conduit metaphor paradox" from animal behavior with "the illusion of prediction/prediction error/error minimization" from spatiotemporal dynamical neuroscience, in complete ignorance of the "arbitrary/specific" dichotomy in signaling from coordination dynamics. AI is a shortcut to nowhere. It's an abrogation of responsibility for the progress of signaling: where we needed to evolve our lax signals, it instead doubles down on them. CS destroys society through a pretense of efficiency, extracting value from signals. It's deeply inferior thinking.

kragen|5 months ago

What new non-AI products do you think wouldn't have existed without current AI? Because I don't see the "explosive changes and development of new products" you'd expect if things like Claude Code were a major advance.

spicyusername|5 months ago

At the moment, LLM products are like Microsoft Office: they primarily serve as tools that help solve other problems more efficiently. They do not themselves solve problems directly.

Nobody would ask, "What new Office-based products have been created lately?", but that doesn't mean Office products aren't a permanent, and critical, foundation of all white-collar work. I suspect it will be the same with LLMs as they mature: they will become tightly integrated into certain categories of work and remain there forever.

Whether the current pricing models or stock market valuations will survive the transition to boring technology is another question.

apwell23|5 months ago

> What new non-AI products do you think wouldn't have existed without current AI?

AI slop is a product

boringg|5 months ago

I think the payment model is still not there, which is making everything blurry. Until we figure out how much people have to pay to use it, and for all the services built on its back, it will remain challenging to work out the full value prop. On top of that, a lot of companies are going to go belly up when they have to start paying the real cost instead of the subsidized prices of the growth-acquisition phase.

jebarker|5 months ago

I don’t think a payment model can be figured out until the utility of the technology justifies the true cost of training and running the models. As you say, right now it’s all subsidized based on the belief it will become drastically more useful. If that happens I think the payment model becomes simple.

mallowdram|5 months ago

The immaterial units are arbitrary, so 'agents' are themselves arbitrary, i.e. illusory. They will not arrive except by being wet-nursed indefinitely. The developers neglected to notice the fatal flaw: there are specific targets, but automating the arbitrary never reaches them, never. It's an egregious, monumental fly in the ointment.