
hdhdhsjsbdh | 10 months ago

Bit of a motte and bailey. Stitching living people into a human centipede is blatantly, obviously wrong and has no scientific merit. Understanding the effects of AI-driven manipulation is, on the other hand, obviously incredibly relevant and important and doing it with a small scale study in a niche subreddit seems like a reasonable way to do it.


OtherShrezzing | 10 months ago

At least part of the ethics problem here is that it'd be plausible to conduct this research without creating any new posts. There's already a huge volume of generative-AI content on Reddit, and a meaningfully large percentage of it follows predictable patterns: wildly divergent writing styles between posts, posting 24/7, posting multiple long-form comments in short time periods, usernames following a specific pattern, and dozens of other heuristics.

It's not difficult to find this content on the site. Creating more of it seems like a redundant step in the research. It added little to the research, while creating very obvious ethical issues.

hdhdhsjsbdh | 10 months ago

That would be a very difficult study to design. How do you know with 100% certainty that any given post is AI-generated? If the account is tagged as a bot, you aren't measuring the effect of manipulation from comments presented as real. And if you try to detect AI-generated comments yourself, any noise in your detection heuristic or model gets baked into your results.

photonthug | 10 months ago

> At least part of the ethics problem here is that it'd be plausible to conduct this research without creating any new posts.

This is a good point. Arguably, though, if we want people to take the next Cambridge Analytica or similar seriously from the very beginning, we need an arsenal of academic studies with results that are clearly applicable and very hard to ignore or dispute. So I can see the appeal of producing a paper abstract that says, specifically, "X% of people shift their opinions with minor exposure to targeted psyops LLMs".

alpaca128 | 10 months ago

Intentionally manipulating opinions is also obviously wrong, and it has no scientific merit either: you don't need a study to know that an LLM can successfully manipulate people. And for "understanding the effects", it doesn't matter whether they spam AI-generated content or analyse existing comments written by other users.

dmvdoug | 10 months ago

It’s the same logic. You’ve just decided that you accept it in some factual circumstances and not others. If you bothered to reflect on that, and had any intellectual humility, that might give you pause.