fear-anger-hate | 2 years ago
Hackforums has been the place where skiddies sell overhyped shit to other skiddies for well over a decade; I can guarantee that absolutely no one there is training their own AI. Everything the article mentions, GPT-3.5 Turbo or GPT-4 can already do, and it wouldn't surprise me one bit if most of the stuff being sold on HF turned out to be glorified frontends for GPT-3.5 Turbo or some open-source LLM.
brucethemoose2 | 2 years ago
They claim it is a GPT-J (6B?) finetune. That's kind of plausible, as it's not that hard to make.
vorticalbox | 2 years ago
Malware doesn't need to be a great example of code; it just needs to get the job done.
ben_w | 2 years ago
My experience is with ChatGPT, which is apparently better than WormGPT, but I wouldn't know… 80% of the time ChatGPT works great; 10% of the time its code doesn't compile, but it can fix itself when given the error message; the other 10% it gets stuck in a loop of introducing as many issues as it fixes (which may be zero for both if it doesn't understand the problem).
It's still bad, because one should look at where the ball is going, not just where it is now; so, if you'll excuse the anthropomorphism, I hope there's another… WeaverGPT?… being "tasked" with digital security improvements.
(If you do anthropomorphise your AI you can get the Waluigi effect; I wonder if that works in both directions: could you take one prompted with "you are an evil AI who hacks on behalf of Dread Software Pirate Roberts" and turn it good with "Plot twist! Roberts just pretends to be evil"?)
https://www.lesswrong.com/posts/D7PumeYTDPfBTp3i7/the-waluig...
Der_Einzige | 2 years ago