top | item 40268742

piecerough | 1 year ago

This is only going to get worse with large language models. Imagine a somewhat knowledgeable individual who could craft emails, messages, and even commits with a handful of prompts, all closely tied to the project.

smsm42|1 year ago

Maybe one day it will happen, but right now an LLM-generated persona would likely set off every alarm bell for a lot of people. LLMs have a very recognizable style, and it usually falls right into the uncanny valley.

int_19h|1 year ago

The "recognizable style" that people usually refer to is the default persona that most people are exposed to. However, the style can be changed quite drastically with some fairly simple prompting.

heavyset_go|1 year ago

It doesn't have to be completely automated, just enough to make the process of juggling multiple personas a bit smoother.

andy99|1 year ago

Do you have any evidence or real examples to support that? I hear people say similar things but see nothing to suggest LLMs are a particular threat.

TechDebtDevin|1 year ago

The real threat of LLMs is their potential to ruin your day if you use them to assist in your work.

kemotep|1 year ago

Are you asking for evidence that LLM’s can be used to write emails and chat messages?

andix|1 year ago

I don't think this is going to be a big issue. Attacks like these have to be high-profile. If you look at the xz backdoor, there was some top-notch engineering behind it.

If we ever reach a level of LLMs being able to do that, we won't need any open source contributors any more. We'd just tell the LLM to program an operating system and it would just do it.