top | item 44084785


-__---____-ZXyw | 9 months ago

It is not "an agent" in the sense you are implying here: it does not will, want, or plan; none of those words apply meaningfully. It doesn't reason or think, either.

I'll be excited if that changes, but there is absolutely no sign of it changing. I mean, explicitly, the possibility of thinking machines is where it was before this whole thing started - maybe slightly higher, but more so because a lot of money is being pumped into research.

LLMs might still replace some software workers, or lead to some reorganising of tech roles, but for a whole host of reasons, none of which relate to machine sentience.

As one example - software quality matters less and less as users get locked in. If some juniors get replaced by LLMs and code quality plummets, causing major headaches and higher workloads for senior devs, then as long as sales don't dip, managers will be skipping around happily.


ruraljuror | 9 months ago

I didn't mean to imply AI was sentient or approaching sentience. Agency seems to be the key distinction between it and other technologies. You can have agency, apparently, without the traits you claim I imply.

-__---____-ZXyw | 9 months ago

Ah, OK, you must be using "agency" in some new way I'm not aware of.

Can you clarify what exactly you mean, then, when you say that "AI" (presumably you mean LLMs) has agency, and that this sets it apart from all other technologies? If this agency, as you define it, makes it different from all other technologies, presumably it must mean something pretty serious.