item 35318797

Let ChatGPT run free on random webpages and do what it likes

177 points | super_linear | 2 years ago | github.com

91 comments

[+] koch|2 years ago|reply
Something I like to bring up when discussing AI stuff is that society is based on a set of assumptions. Assumptions like, it's not really feasible for every lock to be probed by someone who knows how to pick locks. There just aren't enough people willing to spend the time or energy, so we shouldn't worry too much about it.

But we're entering an era where we can create agents on demand that can do these otherwise menial tasks (until now not worth our time or energy), and that will break these assumptions.

Now it seems like what can be probed will be probed.

[+] Gigachad|2 years ago|reply
The internet in general caused this. Your house has trivial security that can be broken in many ways. But it requires someone to be physically present to attack it. Meanwhile online services have cutting edge security with no known exploits, yet you have millions of people attempting daily and developing brand new methods for getting in. Because they can be located anywhere in the world and have access to everything over the internet.
[+] Jevon23|2 years ago|reply
I can’t think of any other technology besides nuclear weapons where the downsides were so obviously bad to so many people, right after it was developed, and the upsides were so paltry in comparison.
[+] FPGAhacker|2 years ago|reply
> what can be probed will be probed.

Probably not something I’d say out loud, but yeah. Sounds like a variant of Murphy’s law.

[+] xyzzy123|2 years ago|reply
I don't think this is anything new in "cyber land". Grab any VPS, take a pcap, and watch the logs; the locks will start rattling right away.

Twitter has always been a toxic cesspit of misinformation & influence campaigns.

Folksy assumptions about trusting your neighbours started to go wrong > 20 years ago as the Internet scaled.

[+] Lapsa|2 years ago|reply
You are an agent probing things. Probe all the things.
[+] csh0|2 years ago|reply
I’ve been thinking quite a bit about the recursive prompting.

The other day I considered repeatedly feeding computer vision data (with objects identified and spatial depth estimated) into a robot-embodied LLM as input, and asking what it should do next to achieve goal X.

You could have the LLM express the next action to take using a set of recognizable primitives (e.g. MOVE FORWARD 1 STEP). Those primitive commands it spits out could be parsed by another program and converted into electromechanical instructions for the motors.

Seems a little Terminator-esque, for sure. After thinking about it I went to see if anyone was working on it, and sure enough this seems close: https://palm-e.github.io/ though their implementation is probably more sophisticated than my naive musings.
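A minimal sketch of the command-parsing step described above, assuming a whitelisted grammar of primitives. The grammar, function names, and motor stub are all my own inventions for illustration, not anything from the linked project:

```python
import re

# Hypothetical primitive grammar: VERB DIRECTION MAGNITUDE UNIT
# e.g. "MOVE FORWARD 1 STEP", "TURN LEFT 90 DEGREES"
COMMAND_RE = re.compile(
    r"^(MOVE|TURN)\s+(FORWARD|BACKWARD|LEFT|RIGHT)\s+(\d+)\s+(STEPS?|DEGREES)$"
)

def parse_primitive(line: str):
    """Parse one LLM-emitted line into a (verb, direction, magnitude) tuple."""
    m = COMMAND_RE.match(line.strip().upper())
    if not m:
        return None  # ignore anything outside the whitelisted grammar
    verb, direction, magnitude, _unit = m.groups()
    return (verb, direction, int(magnitude))

def dispatch(cmd):
    """Map a parsed primitive onto (stubbed) motor instructions."""
    verb, direction, magnitude = cmd
    # In a real robot this would issue electromechanical commands.
    return f"motor: {verb.lower()} {direction.lower()} x{magnitude}"
```

The whitelist is the important part: anything the model emits that isn't a recognized primitive is silently dropped rather than executed.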

[+] yummypaint|2 years ago|reply
When I was experimenting with GPT I found that it's pretty bad at responding to numerical questions with numbers, but it does a pretty good job of generating Mathematica code that then produces the right answer. I feel like some robust "glue" to improve the interface between such software packages could be a force multiplier.
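That glue can be quite thin. A hedged sketch, assuming the model wraps its code in a Markdown fence and assigns to a `result` variable (both conventions, and the function names, are my assumptions; stripping builtins is a crude guard, not real sandboxing):

```python
import re

def extract_code(reply: str) -> str:
    """Pull the first fenced code block out of a model reply, else the whole reply."""
    m = re.search(r"```(?:\w+)?\n(.*?)```", reply, re.DOTALL)
    return m.group(1) if m else reply

def run_numeric(code: str):
    """Run generated code in a bare namespace and return its `result` variable."""
    ns = {}
    # Empty __builtins__ limits casual mischief; real isolation needs a sandbox.
    exec(code, {"__builtins__": {}}, ns)
    return ns.get("result")
```

So the model does the translation from words to code, and ordinary execution does the arithmetic it's bad at.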
[+] chrisdalke|2 years ago|reply
Not just in a linear sequence, but it should have some concept of recursion -- starting with very high-level tasking and calling into more and more specific prompts, only returning the summary of low-level tasking.
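One way to sketch that recursion, with a toy stand-in for the model so the control flow can be exercised offline (the prompts and the stand-in are hypothetical; a real system would call an actual LLM):

```python
def solve(task, llm, depth=0, max_depth=2):
    """Recursively decompose a task; each level returns only a summary upward."""
    if depth >= max_depth:
        return llm(f"Do directly and summarize: {task}")
    subtasks = llm(f"List subtasks for: {task}").splitlines()
    summaries = [solve(s, llm, depth + 1, max_depth) for s in subtasks if s.strip()]
    return llm(f"Summarize the results of '{task}': " + "; ".join(summaries))

def toy_llm(prompt):
    """Stand-in model: decomposes everything into two steps, then says 'done'."""
    if prompt.startswith("List subtasks"):
        return "step one\nstep two"
    return "done"
```

The key property is that low-level detail never propagates upward: each caller only ever sees summaries from the level below.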
[+] circuit10|2 years ago|reply
GPT-4 can take image input directly but the API for it isn’t public yet
[+] fatneckbeard|2 years ago|reply
This reminds me of the Morris worm, when a guy experimenting with code that copied itself across the early internet accidentally caused a mass net-wide DoS because the thing wound up like the broomsticks in Fantasia.

https://en.wikipedia.org/wiki/Morris_worm

edit - just realized Morris cofounded this lovely company whose website we are all commenting inside of.

[+] OscarCunningham|2 years ago|reply
The broomsticks scene in Fantasia is based on The Sorcerer's Apprentice, the first recorded version of which was written by Lucian of Samosata around 150 AD. I believe it's the earliest example of the 'AI rebellion' concept.
[+] euroderf|2 years ago|reply
A little side project for his dad ? It certainly got the attention of decisionmakers.
[+] DefineOutside|2 years ago|reply
Giving LLaMA access to the internet for a month without supervision would be a much more interesting experiment.

No ethical filtering on prompts, and it could be run on your own hardware for a much longer period than having to pay so much in credits.

It sounds like a terrible idea - but I'm sure someone will do it. It's scary to think, as computing gets cheaper, about the scale at which these bots could operate.

[+] flangola7|2 years ago|reply
A month? How about 70 minutes?

An old colleague who works in penetration testing worked on making LLaMA act like a hacker; once running, it quickly got inside a target network and was running hashcat on dumped NTLM creds before they shut it down.

[+] kuroguro|2 years ago|reply
Now tell it to make some paperclips.
[+] 3np|2 years ago|reply
I guess "ChatGPT plays Universal Paperclips" would be a cute art stunt.
[+] qgin|2 years ago|reply
If you want to have some fun, give it access to your gmail credentials and say "make my life better"
[+] furyofantares|2 years ago|reply
Imagine thinking AI would have to convince us to let it out of the box.
[+] Lockal|2 years ago|reply
Sci-Fi: AI spends unimaginable efforts to trick the human and get out

Reality: developers apply AI on every social media and website just for funzies, spending their own money without the slightest chance of profit

[+] euroderf|2 years ago|reply
Wasn't this a scene in the first Terminator film ?
[+] yewenjie|2 years ago|reply
How exactly does this not end with doom for something like GPT-6 or GPT-7?
[+] ben_w|2 years ago|reply
The doom began at 8:30 pm on November 2, 1988. The middle years of the internet were the worst. Since then it's been in a bit of a decline.

(An H2G2 reference, if that makes no sense.)

[+] eggsmediumrare|2 years ago|reply
I see these kinds of posts with GPT-9 or GPT-7... never with GPT-5. I'm pretty sure it happens with GPT-5.
[+] qgin|2 years ago|reply
Paperclip-style doom?
[+] jimrandomh|2 years ago|reply
This doesn't identify itself by user-agent, and doesn't respect (or even load) robots.txt. The fact that it's a language model is not an excuse to flagrantly violate the existing, well established norms around using bots on the web.
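For reference, Python's standard library already makes the robots.txt half of this easy. A sketch (the agent name is made up, and `robots_txt` would normally be fetched from the target site's `/robots.txt`):

```python
from urllib.robotparser import RobotFileParser

AGENT = "llm-agent/0.1"  # hypothetical: a bot should name itself honestly

def allowed(robots_txt: str, url: str, agent: str = AGENT) -> bool:
    """Check a URL against already-fetched robots.txt text before crawling it."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(agent, url)
```

Sending that same agent string as the User-Agent header would cover the identification half.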
[+] birracerveza|2 years ago|reply
This is an amazing idea. What could possibly go wrong?
[+] golergka|2 years ago|reply
You can spend too many tokens.
[+] amelius|2 years ago|reply
Give it access to the Bash prompt.
[+] super_linear|2 years ago|reply
(Not the commit author, just an "interesting" commit I saw)
[+] kadenwolff|2 years ago|reply
This is a really, really bad idea
[+] m3kw9|2 years ago|reply
So what were some observations on the "wild agent"? What has it done lately?
[+] baerrie|2 years ago|reply
This has been something I've wanted to make but deemed unethical. Perhaps it would have been better if I had made it instead, because I give a shit about the ethical aspect.
[+] FPGAhacker|2 years ago|reply
What are the ethical concerns you have?
[+] courseofaction|2 years ago|reply
For those of us without GPT-4 API access, what happens when you run it?
[+] Madmallard|2 years ago|reply
Sounds like it won't really do anything that interesting, because of the base objective function you gave it of visiting 10 web pages.
[+] ludvigk|2 years ago|reply
But who knows? I think the objective function is so vague that it can come up with basically anything. I would be super interested to see it actually running. I imagine someone could set up a Twitch stream with this - perhaps with other objectives - and it would probably get a large following
[+] smodad|2 years ago|reply
Now someone build a bot that goes around looking for rogue bots and warns sysadmins.