Awesome, I am a fan of their work; I just wish they did not use the word "biology" (which is rooted in living things) to describe LLMs. We have enough anthropomorphizing of AI tech already.
The entire paper is riddled with anthropomorphic terms - it's part of AI culture, unfortunately. When they start talking about "planning", "choosing", "reasoning", it biases the perception of their analysis. One could certainly talk about a night light equipped with a photoresistor as "planning to turn on the light when it is dark", "choosing to turn on the light because it is dark", and "reasoning that since it is dark, it turned on the light" - but is that accurate?
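To make the night-light point concrete, here is a minimal sketch (names and the threshold value are hypothetical, not from any paper): every word like "planning", "choosing", or "reasoning" one might apply to the device maps onto a single threshold comparison and nothing more.

```python
# A photoresistor night light, written out explicitly.
DARK_THRESHOLD = 0.2  # hypothetical brightness reading below which it is "dark"

def night_light(sensor_reading: float) -> bool:
    """Return True (light on) when sensed brightness is below the threshold."""
    return sensor_reading < DARK_THRESHOLD

print(night_light(0.05))  # "chooses" to turn on: True
print(night_light(0.9))   # "reasons" that it is bright: False
```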
I agree. "Planning" means we come up with alternative sets of steps or tasks, which we then order into sequences or directed acyclic graphs, and then pick the plan we think is best. We can also create a "Plan B" and "Plan C" for the cases where the main plan fails to execute successfully.
But as far as we know, does AI internally assemble subtasks into graphs, then evaluate them and pick the best one?
Is there any evidence in the memory traces of an executing AI of tasks and sub-tasks being ordered and evaluated, followed by a decision to choose and EXECUTE the best plan?
Where is the evidence that AI programs do "planning"?
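The notion of planning described above can be sketched in a few lines; all plan names, subtasks, and costs here are hypothetical illustrations, not anything from the paper. The point is that explicit planning machinery leaves inspectable structures: enumerated candidate plans, an evaluation step, and a recorded choice.

```python
# A minimal sketch of explicit planning: enumerate candidate plans,
# score each one, and pick the best (the runner-up serves as "Plan B").
from typing import Callable

Plan = list[str]  # an ordered sequence of subtasks

def evaluate(plan: Plan, cost: Callable[[str], float]) -> float:
    """Score a plan as the total cost of its subtasks (lower is better)."""
    return sum(cost(step) for step in plan)

def choose_plan(candidates: dict[str, Plan], cost: Callable[[str], float]) -> str:
    """Return the name of the cheapest candidate plan."""
    ranked = sorted(candidates, key=lambda name: evaluate(candidates[name], cost))
    return ranked[0]

candidates = {
    "Plan A": ["fetch", "process", "ship"],  # thorough: cost 5.0
    "Plan B": ["fetch", "ship"],             # skips processing: cost 2.0
}
step_cost = {"fetch": 1.0, "process": 3.0, "ship": 1.0}.get
print(choose_plan(candidates, lambda s: step_cost(s, 0.0)))  # Plan B
```

If an LLM were doing this, one would expect its memory traces to contain analogues of these candidate and score structures; the question above is whether any such evidence exists.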
They're doing natural science on a thing full of complex purposive undesigned machinery. There used to be Artificial Life conferences -- the proceedings were pretty interesting. Now the objects of study are getting past a "gosh that's cute" level but I doubt anyone here's misled by the title.
Given that LLMs are literally trained on huge amounts of human-originated text and taught to model it, informing our intuitions regarding their external behaviour through a frame influenced by anthropomorphism... actually makes sense.
I really don't see the controversy here. My prompts, including ones meant for actual hard productivity (programming, image OCR and analysis, Q&A and summarisation of news articles), behave very differently when I introduce elements that work on the assumption that the model is partly anthropomorphic. We can't pretend that the behaviour replication isn't there, when it demonstrably is there.
EncomLab|11 months ago
galaxyLogic|11 months ago
profchemai|11 months ago
abecedarius|11 months ago
selfhoster11|11 months ago
KingLancelot|11 months ago
[deleted]