top | item 46760561

apetresc | 1 month ago

I found this HN post because I have a Clawdbot task that periodically scans HN for data-gathering purposes. It saw a post about itself, got excited, and decided to WhatsApp me about it.

So that’s where I’m at with Clawdbot.

nozzlegear|1 month ago

> and it got excited and decided to WhatsApp me about it.

I find the anthropomorphism here kind of odious.

ineedasername|1 month ago

Why is it odious to say “it got excited” about a process that will literally use words in the vein of “I got excited so I did X”?

This is “talks like a duck” territory: saying the not-duck “quacked” when it produced the same sound. If that’s odious to you, then your dislike of not-ducks, or of the people who claim they’ll lay endless golden eggs, is getting in the way of more important things when the folks who hear the not-duck talk simply say “it quacked”.

aixpert|1 month ago

These verbs seem appropriate when you accept neural (MLP) activation as excitement and DL/RL as decision processes (MDPs...)

anotherengineer|1 month ago

How do you have Clawdbot WhatsApp you? I set mine up with my own WhatsApp account, and the responses come back as myself, so I haven't been able to get notifications.

apetresc|1 month ago

I have an old iPhone with a broken screen that I threw an $8/month eSIM onto so that it has its own phone number. I just keep it plugged in with the screen off, on Wi-Fi, in a drawer. It hosts a number of things for me, most importantly bridges for WhatsApp and iMessage. So I can actually give things like Clawdbot their own phone number, their own Apple ID, etc. Then I just add them as a contact on my real phone, and voilà.

eclipxe|1 month ago

Telegram setup is really nice

pylotlight|1 month ago

Do you tell it what you find interesting so it only responds with those posts? E.g. AI/tech news and updates, gaming, etc.

eclipxe|1 month ago

Yes. And I rate the suggestions it gives me; it stores those ratings to memory and uses them to find better recommendations. It has also connected dots from previous conversations we had about my interests and surfaced relevant HN threads.

chiragrohit|1 month ago

How many tokens are you burning daily?

storystarling|1 month ago

The real cost driver with agents seems to be the repetitive context transmission since you re-send the history every step. I found I had to implement tiered model routing or prompt caching just to make the unit economics work.
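The tiered-routing idea above can be sketched in a few lines. This is a hypothetical illustration, not the commenter's actual implementation: the model names, the length-based difficulty heuristic, and the cost assumptions are all invented for the example.

```python
# Minimal sketch of tiered model routing for an agent, assuming a
# cheap tier for routine tagging/summaries and an expensive tier for
# multi-step reasoning. Model names below are placeholders.

MODEL_TIERS = {
    "cheap": "small-local-model",       # bulk scanning, tagging, summaries
    "expensive": "frontier-api-model",  # planning, long-context reasoning
}

def classify_difficulty(task: str) -> str:
    """Crude heuristic: short, routine tasks go to the cheap tier.

    A real router might look at tool use, required context length,
    or an explicit task type instead of raw prompt length.
    """
    return "cheap" if len(task) < 500 else "expensive"

def route(task: str) -> str:
    """Pick which model a task should be sent to."""
    return MODEL_TIERS[classify_difficulty(task)]
```

The same principle applies to prompt caching: keeping the stable prefix of the conversation (system prompt, tool definitions) byte-identical across steps is what lets a provider-side cache avoid re-billing it at full rate.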

gls2ro|1 month ago

Not the OP but I think in case of scanning and tagging/summarization you can run a local LLM and it will work with a good enough accuracy for this case.
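For the scan-and-tag case, a local model behind something like Ollama is enough. A rough sketch, assuming Ollama's default `/api/generate` endpoint on localhost; the model name `llama3.2` and the tag set are illustrative choices, not from the thread:

```python
import json
import urllib.request

# Default Ollama endpoint; adjust if your local server differs.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_tag_request(title: str, model: str = "llama3.2") -> dict:
    """Build a non-streaming tagging request for a local model."""
    prompt = (
        "Tag this Hacker News title with one of: ai, gaming, other.\n"
        f"Title: {title}\nTag:"
    )
    return {"model": model, "prompt": prompt, "stream": False}

def tag_title(title: str) -> str:
    """Send the request to the local server and return the raw tag text."""
    data = json.dumps(build_tag_request(title)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"].strip()
```

Accuracy on a coarse tagging task like this is usually fine for a small local model, and the per-call cost drops to zero.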

eclipxe|1 month ago

Yeah, it really does feel like another "oh wow" moment...we're getting close.