Somewhat related, but here's my take on superintelligence or AGI. I have worked with CNNs, GNNs, and other old-school AI methods, but don't have the resources to build a real SOTA LLM, though I do use and tinker with LLMs occasionally.
If AGI or SI (superintelligence) is possible, and that is an if... I don't think LLMs are going to be the silver-bullet solution. Just as the real world has people dedicated to a single task in their field, like lawyers or construction workers or doctors and brain surgeons, I see the current best path forward as a "mixture of experts". We know LLMs are pretty good at what I've seen some refer to as NLP problems, where the model input is the tokenized string. However, I would argue an LLM will never build a trained model like Stockfish or DeepSeek. Certain model types seem suited to certain types of problems or inputs. True AGI or SI would stop trying to be a grandmaster of everything and instead know which method/model should be applied to a given problem. We still don't know whether it's possible to combine the knowledge of different types of neural networks, like LLMs, convolutional neural networks, and other deep-learning architectures, and while it's certainly worth exploring, it is foolish to pin all hope on a single approach. I think the first step would be to create a new type of model that, given a problem of any type, knows the best method to solve it, and that doesn't rely on itself but rather on a mixture of agents or experts. And they don't even have to be LLMs. They could be anything.
Where this really would explode is if the AI were able to identify a problem it can't solve and invent a new approach, or multiple approaches, because then we don't have to be the ones who develop every expert.
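A toy sketch of the dispatch idea (all names here are hypothetical, not anyone's real system): the "general" layer only decides which specialist handles a task, and the specialists need not be neural networks at all.

```python
# Toy "router + experts" sketch: the router only decides WHICH
# specialist handles a task; the specialists themselves can be
# anything (an LLM call, a chess engine, a plain algorithm).

def chess_expert(task):
    # stand-in for delegating to a dedicated engine like Stockfish
    return f"engine move for: {task}"

def math_expert(task):
    # a plain expression evaluator, no neural net involved
    return eval(task, {"__builtins__": {}}, {})

def language_expert(task):
    # stand-in for an actual LLM call
    return f"LLM answer for: {task}"

EXPERTS = {
    "chess": chess_expert,
    "math": math_expert,
    "language": language_expert,
}

def route(problem_type, task):
    # the "general" part is only knowing whom to ask
    return EXPERTS.get(problem_type, language_expert)(task)
```

The hard part the comment points at is exactly what this sketch dodges: classifying the problem type in the first place, and inventing a new expert when none of the registered ones fit.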
It could be part of an AGI, specifically the human interface part. That's what an LLM is good at. The rest (knowledge oracle, reasoning etc) are just things that kinda work as a side-effect. Other types of AI models are going to be better at that.
It's just that since the masses found that they can talk to an AI like a human they think that it's got human capabilities too. But it's more like fake it till you make it :) An LLM is a professional bullshitter.
There's a _lot_ of smoke and mirrors. Paste a sudoku into ChatGPT and ask it to solve it. Amazing, it does it perfectly! Of course, that's because it ran a sudoku-solving program that it pulled off GitHub.
Now ask it to solve step by step by pure reasoning. You'll get an intelligent-sounding response that seems correct, but on closer inspection makes absolutely no sense; every step has ridiculous errors like "we start with options {1, 7} but eliminate 2, leaving only option 3", and then at the end it just throws all that out, says "and therefore ...", and gives you the original answer.
That tells me there's essentially zero reasoning ability in these things, and anything that looks like reasoning has been largely hand-baked into it. All they do on their own is complete sentences with statistically-likely words. So yeah, as much as people talk about it, I don't see us as being remotely close to AGI at this point. Just don't tell the investors.
> However I would argue an LLM will never build a trained model like Stockfish or DeepSeek.
It doesn't have to, the LLM just needs access to a computer. Then it can write the code for Stockfish and execute it. Or just download it, the same way you or I would.
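A minimal sketch of what "access to a computer" means in practice. The canned string below stands in for a real model completion; the host program just executes whatever code the model emitted and reads back the result. (A real system would need proper sandboxing, not just a timeout.)

```python
# Minimal sketch of "give the LLM a computer": execute the code the
# model emitted in a separate interpreter and capture its output.
import subprocess
import sys

model_output = "print(sum(range(10)))"  # pretend the LLM wrote this

result = subprocess.run(
    [sys.executable, "-c", model_output],
    capture_output=True, text=True, timeout=10,
)
answer = result.stdout.strip()
```

This is essentially what code-interpreter features do: the "reasoning" about sums or chess moves happens in ordinary software, and the LLM's job is only to produce and invoke it.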
> True AGI or SI would stop trying to be a grandmaster of everything and instead know which method/model should be applied to a given problem.
Yep, but I don't see how that relates to LLMs not reaching AGI. They can already write basic Python scripts to answer questions, they just need (vastly) more advanced scripting capabilities.
But the G in AGI stands for General. I think the hope is that there is some as-yet-undiscovered algorithm for general intelligence. While I agree that deferring to a subsystem that is an expert in that type of problem is the best way to handle problems, I would hope that the central coordinator could not just delegate but also design new subsystems as needed. Otherwise, what happens when you run out of types of expert problem solvers to use (and still haven't solved the problem well)?
One might argue that a mixture of experts is just the best that can be done, and that it's unlikely an AGI would be able to design new experts itself. However, where do the limited existing expert problem solvers come from? We invented them. Human intelligences. So to argue that an AGI could NOT come up with its own novel expert problem solvers implies there is something ineffable about human general intelligence that can't be replicated by machine intelligence (which I don't agree with).
Once I was high and thought of hallucinations as "noise in the output". From that perspective, and the fact that LLMs are probabilistic machines, then halving the noise would probably involve 4x the computation needed. Which seems to track what I observe. Models are getting MUCH larger, but performance is practically at a standstill.
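For what it's worth, the 4x figure matches ordinary sampling error: the standard error of an average falls as 1/sqrt(N), so halving the noise takes roughly four times the samples. A quick simulation:

```python
# Sampling noise falls as 1/sqrt(N): quadrupling the number of
# samples roughly halves the standard error of the estimate.
import random
import statistics

def estimate_error(n_samples, trials=2000, seed=0):
    rng = random.Random(seed)
    means = [
        sum(rng.random() for _ in range(n_samples)) / n_samples
        for _ in range(trials)
    ]
    return statistics.stdev(means)  # spread around the true mean 0.5

e_small = estimate_error(100)
e_large = estimate_error(400)  # 4x the work...
ratio = e_small / e_large      # ...for roughly half the noise
```

Whether LLM hallucinations actually behave like independent sampling noise is the comment's speculation, not an established result; the simulation only shows why "half the noise for 4x the compute" is the default scaling for averaging-style processes.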
I don't get this line of thinking. AGI already exists - it's in our heads!
So then the question is: is what's in our heads magic, or can we build it? If you think it's magic, fine - no point arguing. But if not, we will build it one day.
Indeed! That's what I have been thinking for a while, but I never had the occasion and/or breath to write it down, and you explained it concisely. Finally some 'confirmation' 'bias'...
IMHO, the word agent is quickly becoming meaningless.
The amount of agency that sits with the program vs. the user is something that changes gradually.
So we should think about these things in terms of how much agency are we willing to give away in each case and for what gain[1].
Then the ecosystem question that the paper is trying to solve will actually solve itself, because it is already the case today that in many processes agency has been outsourced almost fully, and in others not at all. I posit that this will continue; just expect a big change in the ratios and types of actions.

[1] https://essays.georgestrakhov.com/artificial-agency-ladder/
An agent, or something that has agency, is just something that takes some action, which could be anything from a thermostat regulating the temperature all the way up to an autonomous entity such as an animal going about its business.
Hugging Face have their own definitions of a few different types of agent/agentic system here: https://huggingface.co/docs/smolagents/en/conceptual_guides/...
As related to LLMs, it seems most people are using "agent" to refer to systems that use LLMs to achieve some goal - maybe a fairly narrow business objective/function that can be accomplished by using one or more LLMs as a tool to accomplish various parts of the task.
> IMHO, the word agent is quickly becoming meaningless. The amount of agency that sits with the program vs. the user is something that changes gradually
Yes, the term is becoming ambiguous, but that's because it's abstracting out the part of AI that is most important and activating: the ability to work both independently and per intention/need.
Per the paper: "Key characteristics of agents include autonomy, programmability, reactivity, and proactiveness. [...] high degree of autonomy, making decisions and taking actions independently of human intervention."
Yes, "the ecosystem will evolve," but to understand and anticipate the evolution, one needs a notion of fitness, which is based on agency.
> So we should think about these things in terms of how much agency are we willing to give away in each case
It's unclear there can be any "we" deciding. For resource-limited development, the ecosystem will evolve regardless of our preferences or ethics according to economic advantage and capture of value. (Manufacturing went to China against the wishes of most everyone involved.)
More generally, the value of AI is not just replacing work. It's giving more agency to one person, avoiding the cost and messiness of delegation and coordination. It's gaining the same advantage seen where a smaller team can be much more effective than a larger one.
Right now people are conflating these autonomy/delegation features with the extension features of AI agents (permitting them to interact with databases or web browsers). The extension vendors will continue to claim agency because it's much more alluring, but the distinction will likely become clear in a year or so.
I think people keep conflating agency with agents, and they are actually two entirely different things in real life. Right now agents have no agency: they do not independently come up with new approaches; they're mostly task-oriented.
Maybe I just don't understand the article, but I really have no clue how they arrive at their conclusions, and I really don't understand what they are saying.
I think the 5 issues they provide under "Cognitive Architectures" are severely underspecified, to the point where they really don't _mean_ anything. Because the issues are so underspecified, I don't know how their proposed solution solves their proposed problems. If I understand it correctly, they just want agents (Assistants/Agents) with user profiles (Sims) on an app store? I'm pretty sure this already exists on the ChatGPT store. (sims == memories/user profiles, agents == tools/plugins, assistants == chat interface)
This whole thing is so broad and full of academic (pejorative) platitudes that it's practically meaningless to me. And of course, although completely unrelated, they throw in a reference to symbolic systems. Academic theater.

Of course it's going to be vague and presumptuous. It's more of a high-level executive summary for tech-adjacent folks than an actual research paper.

I suspect they'll follow up with a full paper with more details (and artifacts) of their proposed approach.
I think the goldilocks path is to make the user the agent and use the LLM simply as their UI/UX for working with the system. Human (domain expert) in the loop gives you a reasonable chance of recovering from hallucinations before they spiral entirely out of control.
"LLM as UI" seems to be something hanging pretty low on the tree of opportunity. Why spent months struggling with complex admin dashboard layouts and web frameworks when you could wire the underlying CRUD methods directly into LLM prompt callbacks? You could hypothetically make the LLM the exclusive interface for managing your next SaaS product. There are ways to make this just as robust and secure as an old school form punching application.
It's quite tedious to have to write (or even say) full sentences to express intent. Imagine driving a car with a voice interface, including accelerator, brake, indicators and so on. Controls are less verbose and dashboards are more information rich than linear text.
It's difficult to be precise. Often it's easier to gauge things by looking at them while giving motor feedback (e.g. turning a dial, pushing a slider) than to say "a little more X" or "a bit less Y".
Language is poorly suited to expressing things in continuous domains, especially when you don't have relevant numbers that you can pick out of your head - size, weight, color etc. Quality-price ratio is a particularly tough one - a hard numeric quantity traded off against something subjective.
Most people can't specify up front what they want. They don't know what they want until they know what's possible, see what other people have done, start to realize what getting what they want will entail, and then change what they want. It's why we have iterative development instead of waterfall.
LLMs are a good start and a tool we can integrate into systems. They're a long, long way short of what we need.
I had the same epiphany about LLM-as-UI trying to build a front end for an image-enhancer workflow I built with Stable Diffusion. I just about fully built out a Chrome extension and then realized I should just build a 'tool' that Llama can interact with and use Open WebUI as the front end.

quick demo: https://youtu.be/2zvbvoRCmrE
> I think the goldilocks path is to make the user the agent and use the LLM simply as their UI/UX for working with the system
That's a funny definition to me, because doing so would mean the LLM is the agent, if you use the classic definition for "user-agent" (as in what browsers are). You're basically inverting that meaning :)
> "LLM as UI" seems to be something hanging pretty low on the tree of opportunity.
Yes, if you want to annoy your users and deliberately put roadblocks in the way of making progress on a task. Exhibit A: customer support. They put the LLM in between to waste your time. It's not even a secret.
> Why spend months struggling with complex admin dashboard layouts
You can throw something together, and even auto-generate forms based on an API spec. People don't do this too often because the UX is insufficient even for many internal/domain-expert support applications. But you could, and it would be deterministic, unlike an LLM. If the API surface is simple, you can make it manually with HTML & CSS quickly.
Overuse of web frameworks has completely different causes than ”I need a functional thing” and thus it cannot be solved with a different layer of tech like LLMs, NFTs or big data.
I could not find an "Agents considered harmful" paper related to AI, but there is this one: "AgentHarm: A benchmark for measuring harmfulness of LLM agents" https://arxiv.org/pdf/2410.09024

This "Agents considered harmful" is not AI-related: https://www.scribd.com/document/361564026/Math-works-09

https://www.anthropic.com/research/building-effective-agents

"For many applications, however, optimizing single LLM calls with retrieval and in-context examples is usually enough."
It's just calling an LLM n times with slightly different prompts.
Sure, you get the ability to correct previous mistakes, it's basically a custom chain of thought - but errors compound and the results coming from agents have a pretty low success rate.
Bruteforcing your way out of problems can work sometimes (as evinced by the latest o3 benchmarks) but it's expensive and rarely viable for production use.
> It's just calling an LLM n times with slightly different prompts
It can be, but ideally each agent’s model, prompts and tools are tailored to a particular knowledge domain. That way tasks can be broken down into subtasks which are classified and passed to the agents best suited to them.
Agree RE it being bruteforce and expensive but it does look like it can improve some aspects of LLM use.
> It's just calling an LLM n times with slightly different prompts
That's one way of building something you could call an "agent". It's far from the only way. It's certainly possible to build agents where the LLM plays a very small role, or even one that uses no LLM at all.
With time, they will get a lot better. IMO, the biggest hurdle agents currently face is the lack of good function-calling implementations. LLMs should be used as reasoning engines, and everything else should be offloaded to tool use. This would drastically reduce hallucinations and errors in math and all the other areas.
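A small illustration of the offloading idea: instead of asking the model for the number, ask it for the expression and let a deterministic calculator tool produce the number. The `llm_extract_expression` stub below stands in for a real model call.

```python
# Offloading math to a tool: the model only translates the question
# into arithmetic; a tiny safe evaluator (the "calculator tool")
# computes the exact answer.
import ast
import operator

OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def calc(expr):
    # evaluate +, -, *, / over numeric literals only
    def ev(node):
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval").body)

def llm_extract_expression(question):
    # pretend the LLM translated the question into arithmetic
    return "1234 * 5678"

answer = calc(llm_extract_expression("What is 1234 times 5678?"))
```

The model can be wrong about which expression to compute, but it can no longer be wrong about the arithmetic itself, which is where LLMs most reliably slip.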
I can imagine really powerful agents this year or next in theory. Agents meaning (not a thermostat) a system that can go complete some async tasks on your behalf. But in practice I don’t have any idea how we will solve for prompt injection attacks. Hopefully someone cracks it.
The paper covers the technical details and logistics of the AI agents to come. But how are humans going to react to AI agents replacing human emotion and connection at scale? Tech culture's bias is to think only about the agents themselves, but this could become an issue.
Does anyone else get the sense that the definition has been bastardized by the conflation of the two concurrent previous uses of "agent"?
i.e. in AI, biology, and informatics, "Agent" typically meant something that had a form/self/embodiment; that could sense the environment and react to those perceptions; and that possibly could learn, adapt, or change to various degrees of complexity, which would optionally make it an "intelligent system".
Meanwhile, in common parlance, "Agent" meant someone who acts or behaves on behalf of another, adaptively, to accomplish something with some degree of freedom.
And this might explain why some people say agent/agentic necessarily refers to "tool use", or "being able to overcome problems on the happy path", or "something capable of performing actions in an infinite loop while reacting" (the latter two, in my opinion, conflate the meaning of "intelligent system" or "intelligent behavior"). Meanwhile, biologists might still refer to a seemingly inert single cell, or a group of bacteria in a colony, as an Agent (a more behaviouralist/chemical "look-deep-down" perspective).
I think a lot of the disappointment is that biologists/OG AI enthusiasts are looking for something truly adaptive and sensing, able to behave, "live" indefinitely, acquire or set goals, and perhaps, if intelligent, work with other agents to accomplish things (e.g. a "society"). Meanwhile, people who just want an "AI HR Agent" want something that can communicate, interview, discern good applicants, and book the interviews plus provide summary notes. These two things are very different. But both could use tools etc. (the key difference from ChatGPT that makes this new concept more useful than ChatGPT, alongside various forms of short-term memory rather than fresh-every-time conversations).
This paper does at least lead with its version of what "agents" means (I get very frustrated when people talk about agents without clarifying which of the many potential definitions they are using):
> An agent, in the context of AI, is an autonomous entity or program that takes preferences, instructions, or other forms of inputs from a user to accomplish specific tasks on their behalf. Agents can range from simple systems, such as thermostats that adjust ambient temperature based on sensor readings, to complex systems, such as autonomous vehicles navigating through traffic.
This appears to be the broadest possible definition, encompassing thermostats all the way through to Waymos.
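Under a definition that broad, a minimal agent really is just a sense-decide-act loop; a thermostat sketch makes the point:

```python
# A thermostat qualifies under the paper's definition: it senses the
# environment, decides, and acts with no human in the loop.

class Thermostat:
    def __init__(self, setpoint, hysteresis=0.5):
        self.setpoint = setpoint
        self.hysteresis = hysteresis
        self.heating = False

    def step(self, sensed_temp):
        # sense -> decide -> act, every tick
        if sensed_temp < self.setpoint - self.hysteresis:
            self.heating = True
        elif sensed_temp > self.setpoint + self.hysteresis:
            self.heating = False
        return self.heating

t = Thermostat(setpoint=20.0)
heating_on = t.step(18.0)   # too cold: heating turns on
heating_off = t.step(21.0)  # warm enough: heating turns off
```

Everything from here to a Waymo fits the same loop; what varies is how rich the sensing, the decision policy, and the action space are, which is precisely why the definition does so little discriminating work.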
You posted on X a while back asking for a crowdsourced definition of what an "agent" was and I regularly cite that thread as an example of the fact that this word is so blurry right now.
People have been talking about agents for at least 2 years. Remember when AgentGPT came out? How's that going so far? Agents are just LLMs with structured output, which often happens to be JSON describing a function to call and its arguments.
This whole idea of prompting an LLM and piping the output as the input (prompt) of another LLM and asking it to do something with it (like critique/edit it) and then piping the output of that LLM back to the first LLM along with instructions to keep repeating the process until some stop criteria is met seems to me to just be a money-making scheme to drive up token consumption.
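That loop is easy to sketch with both model calls stubbed out, which also makes the cost structure obvious: every round is more tokens, whether or not the critic ever converges, hence the fixed budget.

```python
# The generate -> critique -> revise loop, with both LLM calls
# replaced by stand-in functions.

def generate(prompt, feedback=None):
    # stand-in for the first LLM; revises the draft if given feedback
    draft = prompt.upper()
    return draft + "!" if feedback else draft

def critique(draft):
    # stand-in for the second LLM asked to review the draft
    return "ok" if draft.endswith("!") else "needs more emphasis"

def refine(prompt, max_rounds=5):
    draft = generate(prompt)
    for _ in range(max_rounds):      # stop criterion: budget exhausted
        feedback = critique(draft)
        if feedback == "ok":         # stop criterion: critic approves
            break
        draft = generate(prompt, feedback)
    return draft

final = refine("ship it")
```

Whether the extra rounds buy quality or just token spend depends entirely on whether the critic is actually better at judging than the generator is at generating, which the comment is right to question.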
Who wants to invest in my startup? It's a microagent service-architecture orchestration platform. All you do is define the inputs, write the agents' algorithms, apply agency by inputting a decision tree (ifs and conditionals), and then a function to format output! And the best part? You do all of it in YAML!

/sarcasm, hopefully obviously

Soon it will be AI Microservices
So was "mobile" 15 years ago. Companies are deploying hundreds of billions in capital for this. It's not going anywhere, and you'd be best off upskilling now instead of dismissing things.