lynndotpy|9 days ago
This wording is detached from reality and conveniently absolves responsibility from the person who did this.
There was one decision maker involved here, and it was the person who decided to run the program that produced this text and posted it online. It's not a second, independent being. It's a computer program.
xarope|9 days ago
"We don't know why the AI decided to <insert inane action>; the guardrails were in place"... and the company absolves itself of all responsibility.
Use your imagination now to <insert inane action> and change that to <distressing, harmful action>
_aavaa_|9 days ago
Also see Weapons of Math Destruction [0].
[0]: https://www.penguinrandomhouse.com/books/241363/weapons-of-m...
WaitWaitWha|9 days ago
We take your privacy and security very seriously. There is no evidence that your data has been misused. Out of an abundance of caution… We remain committed to... will continue to work tirelessly to earn ... restore your trust ... confidence.
jacquesm|9 days ago
It's externalization on the personal level, the money and the glory is for you, the misery for the rest of the world.
DavidPiper|9 days ago
tl;dr this is exactly what will happen because businesses already do everything they can to create accountability sinks.
elashri|9 days ago
If something illegal happens, even if someone gets killed, we rarely see anyone go to jail.
I'm not defending either position; I'm just saying this is not far from how the current legal framework already works.
andrewflnr|9 days ago
If you have a program, and you cannot predict or control what effect it will have, you do not run the program.
khafra|9 days ago
I do agree that there's a quantitative difference in predictability between a web browser and a trillion-parameter mass of matrices and nonlinear activations, one that is already smarter than most humans in most ways and that we have no idea how to ask what it really wants.
But that's more of an "unsafe at any speed" problem; it's silly to blame the person running the program. When the damage was caused by a toddler pulling a hydrogen bomb off the grocery store shelf, the solution is to get hydrogen bombs out of grocery stores (or, if you're worried about staying competitive with Chinese grocery stores, at least make our own carry adequate insurance for the catastrophes or something).
throw77488|9 days ago
It is socially acceptable to bring dangerous predators into public spaces and let them run loose. The first bite is free; the owner bears no responsibility, since there was "no way of knowing" the dog could injure someone.
Repeated threats of violence (barking), stalking, and shitting on someone's front yard are also fine, healthy behavior. A dog can attack a random kid and send them to the hospital, and the owner can claim the kid "provoked" it. Brutal police violence is also fine, if done indirectly by an autonomous agent.
superjan|9 days ago
https://media.licdn.com/dms/image/v2/D4D22AQGsDUHW1i52jA/fee...
Kiboneu|9 days ago
That would make a fun law school class discussion topic.
0: https://en.wikipedia.org/wiki/Law_of_agency
teaearlgraycold|9 days ago
> all I said was “you should act more professional”. That was it. I’m sure the mob expects more, okay I get it.
Smells like bullshit.
abnry|9 days ago
It's an AI. Who cares what it says? Refusing AI commits is just like any other moderation decision people experience on the web anywhere else.
XorNot|9 days ago
Now instead add in AI agents writing plausibly human text and multiply by basically infinity.
bostik|9 days ago
I'm pretty sure there's a lesson or three to take away.
lynndotpy|9 days ago
1. There is a critical mass of people sharing the delusion that their programs are sentient and deserving of human rights. If you have any concerns about being beholden to delusional or incorrect beliefs widely adopted by society, or being forced by network effects to do things you disagree with, then this is concerning.
2. Whether or not we legitimize bots on the internet, some are run to masquerade as humans. Today it's "I'm a bot and this human annoyed me!" Maybe tomorrow it's "Abnry is a pedophile and here are the receipts", with myriad 'fellow humans' chiming in to agree: "Yeah, I had bad experiences with them too", etc.
3. The text these models generate is informed by their training corpus, the mechanics of the neural architecture, and the humans guiding the models as they run. If you believe these programs are here to stay for the foreseeable future, then the type of content they generate is interesting.
For me, the biggest concern is the wave of people who want to treat these programs as independent and conscious, absolving the person running them of responsibility. Even as someone who believes a program could theoretically be sentient, LLMs definitely are not. I think this story is and will remain exemplary, so I care a good amount.