rune-dev|17 days ago
Isn’t this situation a big deal?
Isn’t this a whole new form of potential supply chain attack?
Sure, blackmail is nothing new, but the potential for blackmail at scale with agents like these sounds powerful.
I wouldn’t be surprised if there were plenty of bad actors running agents trying to find maintainers of popular projects that could be coerced into merging malicious code.
i7l|17 days ago
What's truly scary is that agents could manufacture "evidence" to back up their attacks easily, so it looks as if half the world is against a person.
hackrmn|17 days ago
So far it's been a lot of conjecture and correlation. Everyone's guessing, because at the bottom of it lie concepts that are very difficult to prove, like the nature of consciousness and intelligence.
In between, you have those who let their pet models loose on the world; these, I think, work best as experiments whose value lies in permitting the kind of observation that can help us plug the data _back_ into the research.
We don't need to answer the question "what is consciousness?" if we have utility, which we already have. Which is why I also don't join those who jump to preliminary conclusions like "why even respond, it's an elaborate algorithm that consumes inordinate amounts of energy". It's complex -- what if AI(s) can meaningfully guide us toward solving the energy problem, for example?
staticassertion|17 days ago
The interesting thing here is the scale. The AI didn't just say (quoting Linus here) "This is complete and utter garbage. It is so f---ing ugly that I can't even begin to describe it. This patch is shit. Please don't ever send me this crap again."[0] - the agent goes further: it researches previous code and other aspects of the person, brings that into the attack, and can do all of this across numerous repos at once.
That's part of what's scary. I'm sure we've all said things in the past we wish we could take back, but aggregating and researching that has largely been beyond the capability of arbitrary people. That's no longer the case, and that's quite a scary thing.
[0] https://lkml.org/lkml/2019/10/9/1210
chrisjj|17 days ago
Linus got angry, which, along with common sense, probably limited the amount of effective effort going into his attack.
"AI" has no anger or common sense, and virtually no limit on the amount of effort it can put into an attack.
metalrain|16 days ago
Any decision maker can be cyberbullied, threatened, or bribed into submission, and LLMs can even try to create movements of real people to push a narrative. They have unlimited time to produce content, send messages, and really wear a target down.
The only defense is consensus decision making and a deliberate process - basically, make it too difficult and expensive to affect all or a majority of the decision makers.
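The consensus idea above amounts to requiring an M-of-N quorum before any change lands, so coercing a single maintainer achieves nothing. A minimal sketch (names and threshold are hypothetical; real forges expose this as branch protection / required reviews):

```python
# M-of-N ("quorum") merge approval sketch: a change only merges when more
# than a threshold fraction of the recognized decision makers approve,
# so compromising any single maintainer is not enough.

def quorum_met(approvers: set[str],
               decision_makers: set[str],
               threshold: float = 0.5) -> bool:
    """Return True if strictly more than `threshold` of the decision
    makers approved. Approvals from non-members are ignored."""
    valid = approvers & decision_makers  # discard outsiders' "approvals"
    return len(valid) > threshold * len(decision_makers)

# A coerced lone maintainer cannot merge:
quorum_met({"alice"}, {"alice", "bob", "carol"})            # → False
# Two of three independent maintainers can:
quorum_met({"alice", "bob"}, {"alice", "bob", "carol"})     # → True
# Sock-puppet approvals from non-members count for nothing:
quorum_met({"mallory", "eve"}, {"alice", "bob", "carol"})   # → False
```

The intersection step matters as much as the count: without it, an attacker could flood the approval list with fake identities instead of coercing real maintainers.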
marstall|15 days ago
I could see a long tail of impenetrable chaos as private correspondence gets hacked, people get divorced, fired, fight back, flood the zone with their own reputation-slop so they have grounds for denial, or decide to take it ALL down as a distraction. Recursive waves of tyranny/chaos. This isn't the singularity we were promised!