Something I like to bring up when discussing AI stuff is that society is based on a set of assumptions. Assumptions like, it's not really feasible for every lock to be probed by someone who knows how to pick locks. There just aren't enough people willing to spend the time or energy, so we shouldn't worry too much about it.
But we're entering an era where we can create agents on demand that can do these otherwise menial (and until now not worth our time or energy) tasks, and that will break these assumptions.
Now it seems like what can be probed will be probed.
The internet in general caused this. Your house has trivial security that can be broken in many ways, but breaking it requires someone to be physically present. Meanwhile, online services have cutting-edge security with no known exploits, yet millions of people attempt to break in daily and develop brand-new methods for getting in, because they can be located anywhere in the world and reach everything over the internet.
I can’t think of any other technology besides nuclear weapons where the downsides were so obviously bad to so many people, right after it was developed, and the upsides were so paltry in comparison.
I've been thinking quite a bit about this recursive prompting idea.
The other day I considered repeatedly feeding computer vision data (with objects ID'd and spatial depth estimated) into a robot-embodied LLM as input and asking what it should do next to achieve goal X.
You could have the LLM express the next action to take using a set of recognizable primitives (e.g. MOVE FORWARD 1 STEP). Those primitive commands it spits out could be parsed by another program and converted into electromechanical instructions for the motors.
Seems a little Terminator-esque, for sure. After thinking about it I went to see if anyone was working on it, and sure enough this seems close: https://palm-e.github.io/ though their implementation is probably more sophisticated than my naive musings.
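Roughly the loop I have in mind, sketched in Python with everything stubbed out (the vision stack, the LLM call, and the motor driver are all made-up stand-ins, not real APIs):

    import re

    PRIMITIVES = ("MOVE FORWARD", "TURN LEFT", "TURN RIGHT", "STOP")

    def describe_scene(frame):
        # Stand-in for the vision stack: objects ID'd plus estimated depth.
        return "door at 2.0 m, slightly left; chair at 0.5 m, to the right"

    def ask_llm(prompt):
        # Stand-in for the language model call.
        return "MOVE FORWARD 1 STEP"

    def send_to_motors(primitive, steps):
        # Stand-in for the layer that turns primitives into motor commands.
        print(f"motors: {primitive} x{steps}")

    def control_loop(goal, max_iters=3):
        for _ in range(max_iters):
            scene = describe_scene(frame=None)
            prompt = (f"Goal: {goal}\nScene: {scene}\n"
                      f"Reply with one of {PRIMITIVES}, e.g. 'MOVE FORWARD 1 STEP'.")
            reply = ask_llm(prompt).strip().upper()
            m = re.fullmatch(r"(MOVE FORWARD|TURN LEFT|TURN RIGHT|STOP)(?:\s+(\d+)\s+STEP)?", reply)
            if m is None:
                continue  # not a recognized primitive; ignore and try again next tick
            if m.group(1) == "STOP":
                break
            send_to_motors(m.group(1), int(m.group(2) or 1))

    control_loop("reach the door")

The point is just that the LLM never touches the motors directly; it only emits text that a dumb parser either recognizes as a primitive or ignores.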
When I was experimenting with GPT I found that it's pretty bad at responding to numerical questions with numbers, but it does a pretty good job at generating Mathematica code that then produces the right answer. I feel like some robust "glue" to improve the interface between such software packages may be a force multiplier.
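The glue I mean is roughly this, as a sketch: ask the model for code instead of a number, run the code in a real math engine, and keep only the numeric result. I've swapped Mathematica for sympy purely so the snippet is self-contained, and ask_llm is a made-up stand-in for whatever model API you use:

    import sympy

    def ask_llm(question):
        # Stand-in: a real call would prompt the model with something like
        # "Write a single sympy expression that answers: <question>"
        return ("sympy.integrate(sympy.exp(-sympy.Symbol('x')**2), "
                "(sympy.Symbol('x'), 0, sympy.oo))")

    def numeric_answer(question):
        code = ask_llm(question)
        # Run the generated code in the math engine (sandbox this in practice).
        expr = eval(code, {"sympy": sympy})
        return sympy.N(expr)

    print(numeric_answer("Integral of exp(-x^2) from 0 to infinity?"))  # ~0.8862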
Not just in a linear sequence, but it should have some concept of recursion -- starting with very high-level tasking and calling into more and more specific prompts, only returning the summary of low-level tasking.
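Something like this, roughly (ask_llm and the helpers are hypothetical stubs; the shape of the recursion is the point):

    def ask_llm(prompt):
        # Stand-in for a model call.
        return "step 1\nstep 2"

    def is_primitive(task):
        return task.startswith("step")

    def execute(task):
        return f"done: {task}"

    def run_task(task, depth=0, max_depth=3):
        if depth >= max_depth or is_primitive(task):
            return execute(task)  # the low-level tasking happens here
        breakdown = ask_llm(f"Break '{task}' into concrete subtasks, one per line.")
        subtasks = [s for s in breakdown.splitlines() if s.strip()]
        results = [run_task(s, depth + 1, max_depth) for s in subtasks]
        # Only a short summary of the lower level is returned upward,
        # so the high-level prompt never sees the low-level detail.
        return ask_llm("Summarize in one sentence:\n" + "\n".join(results))

    print(run_task("organize my photo library"))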
This reminds me of the Morris worm, when a guy experimenting with code that copied itself across the early internet accidentally caused a massive net-wide denial of service, because the thing wound up like the broomsticks in Fantasia.
The broomsticks scene in Fantasia is based on The Sorcerer's Apprentice, the first recorded version of which was written by Lucian of Samosata around 150 AD. I believe it's the earliest example of the 'AI rebellion' concept.
An old colleague who works in penetration testing worked on making LLaMA act like a hacker; once it was running, it quickly got inside a target network and was running hashcat on dumped NTLM creds before they shut it down.
This doesn't identify itself by user-agent, and doesn't respect (or even load) robots.txt. The fact that it's a language model is not an excuse to flagrantly violate the existing, well established norms around using bots on the web.
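For reference, following those norms is not much code; with the Python standard library it's roughly this (the bot name and URL are placeholders I made up):

    import urllib.robotparser
    import urllib.request
    from urllib.parse import urlsplit

    BOT_UA = "ExampleResearchBot/0.1 (+https://example.org/bot-info)"

    def polite_fetch(url):
        parts = urlsplit(url)
        # Actually load robots.txt for the site...
        rp = urllib.robotparser.RobotFileParser(f"{parts.scheme}://{parts.netloc}/robots.txt")
        rp.read()
        # ...and respect it.
        if not rp.can_fetch(BOT_UA, url):
            return None
        # Identify the bot via its User-Agent.
        req = urllib.request.Request(url, headers={"User-Agent": BOT_UA})
        return urllib.request.urlopen(req).read()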
This has been something I've wanted to make but deemed unethical. Perhaps it would have been better if I had made it instead, because I give a shit about the ethical aspect.
But who knows? I think the objective function is so vague that it can come up with basically anything. I would be super interested to see it actually running. I imagine someone could set up a Twitch stream with this - perhaps with other objectives - and it would probably get a large following.
FPGAhacker | 2 years ago
Probably not something I’d say out loud, but yeah. Sounds like a variant of Murphy’s law.
xyzzy123 | 2 years ago
Twitter has always been a toxic cesspit of misinformation & influence campaigns.
Folksy assumptions about trusting your neighbours started to go wrong > 20 years ago as the Internet scaled.
fatneckbeard | 2 years ago
https://en.wikipedia.org/wiki/Morris_worm
edit - just realized Morris cofounded this lovely company whose website we are all commenting inside of.
DefineOutside | 2 years ago
No ethical filtering on prompts, and it could be run on your own hardware for a much longer period of time than if you had to pay so much in credits.
It sounds like a terrible idea - but I'm sure someone will do it. It's scary to think of the scale these bots could operate at as computing gets cheaper.
Lockal | 2 years ago
Reality: developers apply AI to every social media site and website just for funzies, spending their own money without the slightest chance of profit.
ben_w | 2 years ago
(An H2G2 reference, if that makes no sense.)