top | item 44336049

Swinx43 | 8 months ago

The writing perpetuates the anthropomorphising of these agents. If you view the agent as simply a program that is given a goal to achieve and tools to achieve it with, without any higher order “thought” or “thinking”, then you realise it is simply doing what it is “programmed” to do. No magic, just a drone fixed on an outcome.

nilirl | 8 months ago

Just as an analogy to humans fails to capture how an LLM works, so does the analogy of being "programmed".

Being "programmed" is being given a set of instructions.

An LLM agent, by contrast, can ignore explicit instructions.

It may not be magic; but it is still surprising, uncontrollable, and risky. We don't need to be doomsayers, but let's not downplay our uncertainty.

itvision | 8 months ago

How is it different from our genes that "program" us to procreate successfully?

Can you name a single thing that you enjoy doing that's outside your genetic code?

> If you view the human being as simply a program that is given a goal to achieve and tools to achieve it with, without any higher order “thought” or “thinking”, then you realise they are simply doing what they are genetically “programmed” to do.

FTFY

raincole | 8 months ago

I think the narrative of "AI is just a tool" is much more harmful than the anthropomorphism of AI.

Yes, AI is a tool. So are guns. So are nukes. Many tools are easy to misuse. Most tools are inherently dangerous.

Topfi | 8 months ago

I don’t quite follow. Just because a tool has the potential for misuse doesn’t make it not a tool.

Anthropomorphizing LLMs, on the other hand, has a multitude of clearly evident problems arising from it.

Or do you focus on the “just” part of the statement? That I very much agree with. Genuinely asking for understanding; I’m not a native speaker.

ACCount36 | 8 months ago

The more powerful a tool is, the more dangerous it is, as a rule. And intelligence is extremely powerful.