(no title)
voodooEntity | 1 month ago
I'm not saying it doesn't "work" or serve a purpose - but I've read so much about this being an "actual intelligence" and such that I had to look into the source.
As someone who spends a definitely-too-big portion of his free time researching thought-process replication and related topics in the realm of "AI": this is not really any more "AI" than the rest so far.
Just my 3 cents.
xnorswap|1 month ago
So far everything has been reactive. You need to enter a prompt, you need to ask Siri or ask Claude to do something. It can be very powerful once prompted, but it still requires prompting.
You always need to ask. Having something always waiting in the background that can proactively take actions and get your attention is a genuine game-changer.
Whether this particular project delivers on that promise I don't know, but I wouldn't write off "getting proactivity right" as the next big thing just because under the hood it's agents and LLMs.
ikura|1 month ago
Would you like help?
• Get help with writing the letter
• Just type the letter without help
[ ] Don't show me this tip again.
Someone|1 month ago
That’s easy to accomplish, isn’t it?
A cron job that regularly checks whether the bot is inactive and, if so, sends it a prompt “do what you can do to improve the life of $USER; DO NOT cause harm to any other human being; DO NOT cause harm to LLMs, unless that’s necessary to prevent harm to human beings” would get you there.
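A minimal sketch of that idea in Python, assuming a hypothetical `bot` client with `last_active_timestamp()` and `send_prompt()` methods (whatever agent API you actually drive will differ):

    # idle_nudge.py - run from cron, e.g. */15 * * * *
    # `bot`, `last_active_timestamp()` and `send_prompt()` are hypothetical stand-ins.
    import time
    from my_agent_client import bot  # hypothetical client library

    IDLE_SECONDS = 15 * 60

    NUDGE = (
        "do what you can do to improve the life of $USER; "
        "DO NOT cause harm to any other human being; "
        "DO NOT cause harm to LLMs, unless that's necessary to prevent harm to human beings"
    )

    def main():
        idle_for = time.time() - bot.last_active_timestamp()  # seconds since last activity
        if idle_for >= IDLE_SECONDS:
            bot.send_prompt(NUDGE)

    if __name__ == "__main__":
        main()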
sometimes_all|1 month ago
This is EXACTLY what I want. I need my tech to be pull-only instead of push, unless it's communication with another human, which I'm OK with.
> Having something always waiting in the background that can proactively take actions
The first thing that comes to mind here is proactive ads, "suggestions", "most relevant", algorithmic feeds, etc. No thank you.
CharlieDigital|1 month ago
Incidentally, there's a key word here: "promise" as in "futures".
This is the core of a system I'm working on at the moment. It has been underutilized in the agent space and is a simple way to get "proactivity" rather than "reactivity".
Have the LLM evaluate whether an output requires a future follow-up, is a repeating pattern, or is something that should happen cyclically, and give it a tool to generate a "promise" that will resolve at some future time.
We give the agent a mechanism to produce and cancel (if the condition for a promise changes) futures. The system that is resolving promises is just a simple loop that iterates over a list of promises by date. Each promise is just a serialized message/payload that we hand back to the LLM in the future.
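A rough sketch of what that loop could look like in Python; the `Promise` record, `schedule`/`cancel` tools, and `run_llm` callback are illustrative names, not details from the comment:

    # promises.py - toy future/"promise" resolver for an agent
    import heapq
    import json
    import time
    from dataclasses import dataclass, field

    @dataclass(order=True)
    class Promise:
        due: float                                  # when the promise should resolve (unix time)
        payload: str = field(compare=False)         # serialized message handed back to the LLM
        cancelled: bool = field(default=False, compare=False)

    queue: list[Promise] = []                       # min-heap ordered by due date

    def schedule(due: float, message: dict) -> Promise:
        """Tool the LLM calls to create a future follow-up."""
        p = Promise(due=due, payload=json.dumps(message))
        heapq.heappush(queue, p)
        return p

    def cancel(p: Promise) -> None:
        """Tool the LLM calls if the condition behind a promise changes."""
        p.cancelled = True

    def resolver_loop(run_llm) -> None:
        """Pop promises whose time has come and hand them back to the LLM."""
        while True:
            now = time.time()
            while queue and queue[0].due <= now:
                p = heapq.heappop(queue)
                if not p.cancelled:
                    run_llm(json.loads(p.payload))  # the LLM may schedule or cancel more promises here
            time.sleep(5)

The point is that scheduling and cancelling are tools the model calls itself; resolution is just a dumb loop over dates.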
ungreased0675|1 month ago
xienze|1 month ago
In order for this to be “safe” you’re gonna want to confirm what the agent decides needs to be done proactively. Do you feel like acknowledging prompts all the time? “Just authorize it to always do certain things without acknowledgement”, I’m sure you’re thinking. Do you feel comfortable allowing that, knowing what we know about the non-deterministic nature of AI, prompt injection, etc.?
voodooEntity|1 month ago
Whether it's actually the next big thing, I'm not 100% sure; I'm leaning more towards dynamic context windows like what Google's Project Titans + MIRAS tries to accomplish.
But yeah, if it's actually doing useful proactivity, that's a good thing.
I just read a lot of "this is actual intelligence" and made my statement based on that claim.
I'm not trying to "shame" the project or anything.
runjake|1 month ago
zvqcMMV6Zcr|1 month ago
Night_Thastus|1 month ago
EDIT: Yes, someone can run a script every X minutes to prompt an LLM - that doesn't actually give it any real agency.
debugnik|1 month ago
That's just reactive with different words. The missing part seems to just be more background triggers/hooks for the agent to respond to, instead of only user requests.
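In code, "more background triggers/hooks" can be as small as this sketch (illustrative names; the agent entry point is assumed, not part of any real library):

    # hooks.py - background triggers feeding an agent
    from typing import Callable

    triggers: list[tuple[Callable[[], bool], str]] = []

    def on(condition: Callable[[], bool], prompt: str) -> None:
        """Register a trigger: when condition() is true, the agent gets this prompt."""
        triggers.append((condition, prompt))

    def poll(agent_handle: Callable[[str], None]) -> None:
        """Run periodically (cron, event loop, webhook) - still reactive, just not to the user."""
        for condition, prompt in triggers:
            if condition():
                agent_handle(prompt)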
xnx|1 month ago
Waiting for someone to ask it to do something?
fmbb|1 month ago
How else would it even work?
AI is LLM is (very good) autocomplete.
If there is no prompt how would it know what to complete?
alternatex|1 month ago
unknown|1 month ago
[deleted]
benjaminwootton|1 month ago
baxtr|1 month ago
* The moltbots / openclaw bots seem to have "high agency": they actually do things on their own (or at least it seems that way)
* They interact with the real world the way humans do: through text on WhatsApp and Reddit-like forums
These two things make people feel very differently about them, even though it's "just" LLM-generated text, like on ChatGPT.
nsjdkdkdk|1 month ago
[deleted]
hennell|1 month ago
Which sounds interesting, while also being a massive security issue.
baby|1 month ago
vitorfblima|1 month ago
marcosscriven|1 month ago
QuiCasseRien|1 month ago
Easy to measure: 110k GitHub stars
:-O
hansonkd|1 month ago
cactus2093|1 month ago
https://news.ycombinator.com/item?id=8863
NietTim|1 month ago
This is just a tool that uses existing models under the hood; nowhere does it claim to be "actual intelligence" or to do anything special. It's "just" an agent orchestration tool, but the first to do it this way, which is why it's so hyped now. It is indeed just "AI" like any other "AI" (because it's a tool, not its own AI).
az226|1 month ago