top | item 47083439

lynndotpy | 9 days ago

> Again I do not know why MJ Rathbun decided based on your PR comment to post some kind of takedown blog post,

This wording is detached from reality and conveniently absolves the person who did this of responsibility.

There was one decision maker involved here, and it was the person who decided to run the program that produced this text and posted it online. It's not a second, independent being. It's a computer program.

xarope|9 days ago

This also does not bode well for the future.

"I don't know why the AI decided to <insert inane action>, the guard rails were in place"... and the company absolves itself of all responsibility.

Now use your imagination and change <insert inane action> to <distressing, harmful action>.

WaitWaitWha|9 days ago

This already happens every single time when there is a security breach and private information is lost.

We take your privacy and security very seriously. There is no evidence that your data has been misused. Out of an abundance of caution… We remain committed to... will continue to work tirelessly to earn ... restore your trust ... confidence.

incr_me|9 days ago

Unfortunately, the market seems to have produced horrors by way of naturally thinking agents, instead. I wish that, for all these years of prehistoric wretchedness, we would have had AI to blame. Many more years in the muck, it seems.

tapoxi|9 days ago

Change this to "smash into a barricade" and that's why I'm not riding in a self-driving vehicle. They get to absolve themselves of responsibility and I sure as hell can't outspend those giants in court.

jacquesm|9 days ago

This is how it will go: AI prompted by human creates something useful? Human will try to take credit. AI wrecks something: human will blame AI.

It's externalization at the personal level: the money and the glory are for you, the misery for the rest of the world.

ineptech|9 days ago

Agreed, but I'm not nearly so worried about people blaming their bad behavior on rogue AIs as I am about corporations doing it...

DavidPiper|9 days ago

Time for everyone to read (or re-read) The Unaccountability Machine by Dan Davies.

tl;dr this is exactly what will happen because businesses already do everything they can to create accountability sinks.

elashri|9 days ago

When a corporation does something good, a lot of executives and people inside will go and claim credit and will demand/take bonuses.

If something bad happens that breaks the law, even if someone gets killed, we don't see them in jail.

I'm not defending either position; I'm just saying this is not far from how the current legal framework works.

davidw|9 days ago

"I would like to personally blame Jesus Christ for making us lose that football game"

biztos|9 days ago

So, management basically?

lcnPylGDnU4H9OF|9 days ago

To be fair, one doesn't need AI to dodge responsibility and accept undue credit. It's just narcissism; meaning those who've learned to reject such thinking will simply do so (generally, in the abstract), with or without AI.

andrewflnr|9 days ago

If you are holding a gun, and you cannot predict or control what the bullets will hit, you do not fire the gun.

If you have a program, and you cannot predict or control what effect it will have, you do not run the program.

khafra|9 days ago

Rice's Theorem says you cannot, in general, predict or control the effects of a program on your computer; for example, there's no way to guarantee that running a web browser on arbitrary input will not empty your bank account and donate it all to al-Qaeda, yet you're running a web browser on potentially attacker-supplied input right now.
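The undecidability claim above can be sketched as the classic reduction to the halting problem; `run`, `empty_bank_account`, and `is_harmful` are hypothetical names used only for illustration, not real APIs:

```python
# Sketch: a perfect detector for the semantic property "this program
# empties my bank account" would also solve the halting problem,
# which is impossible. All names here are hypothetical.

def build_probe(prog_src: str, inp: str) -> str:
    """Return source for a program that calls empty_bank_account()
    if and only if `prog_src` halts on input `inp`."""
    return (
        f"run({prog_src!r}, {inp!r})  # loops forever if prog_src never halts\n"
        "empty_bank_account()  # reached only if prog_src halted\n"
    )

# If is_harmful(build_probe(p, x)) returned a correct yes/no answer for
# every p and x, it would decide whether p halts on x; no such decider
# can exist, so no such is_harmful can exist either.
probe = build_probe("while True: pass", "")
```

This is only the reduction skeleton: the probe is generated as source text and never executed, since executing it is exactly what a hypothetical `is_harmful` would have to reason about.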

I do agree there's a quantitative difference in predictability between a web browser and a trillion-parameter mass of matrices and nonlinear activations that is already smarter than most humans in most ways, and that we have no idea how to ask what it really wants.

But that's more of an "unsafe at any speed" problem; it's silly to blame the person running the program. When the damage was caused by a toddler pulling a hydrogen bomb off the grocery store shelf, the solution is to get hydrogen bombs out of grocery stores (or, if you're worried about staying competitive with Chinese grocery stores, at least make ours carry adequate insurance for the catastrophes, or something).

throw77488|9 days ago

More like a dog. A person has no responsibility for an autonomous agent; a gun is not autonomous.

It is socially acceptable to bring dangerous predators into public spaces and let them run loose. The first bite is free; the owner has no responsibility, since there was no way of knowing the dog could injure someone.

Repeated threats of violence (barking), stalking, and shitting on someone's front yard are also fine, healthy behavior. A dog can attack a random kid and send them to the hospital, and the owner can claim the kid "provoked" it. Brutal police violence is also fine, if done indirectly by an autonomous agent.

fragmede|7 days ago

On the other hand, the phrase "footgun" didn't come out of nowhere. You won't run the program, but someone else will build it, and sell it to someone who will.

Kiboneu|9 days ago

It’s fascinating how cleanly this maps to agency law [0], which has not been applied to human <-> ai agents (in both senses of the word) before.

That would make a fun law school class discussion topic.

0: https://en.wikipedia.org/wiki/Law_of_agency

nicbou|9 days ago

An unattended candle has decided to burn down the building.

teaearlgraycold|9 days ago

I completely do not buy the human's story.

> all I said was “you should act more professional”. That was it. I’m sure the mob expects more, okay I get it.

Smells like bullshit.

Marazan|9 days ago

Yeah like bro you plugged the random number generator into the do-things machine. You are responsible for the random things the machine then does.

jonny_eh|9 days ago

"Sorry for running over your dog, I couldn't help it, I was drunk."

abnry|9 days ago

I'm still struggling to care about the "hit piece".

It's an AI. Who cares what it says? Refusing AI commits is just like any other moderation decision people experience on the web anywhere else.

XorNot|9 days ago

Scale matters, and even with humans it's a problem: most people don't understand just how much nuisance one irrationally obsessed person can create.

Now instead add in AI agents writing plausibly human text and multiply by basically infinity.

bostik|9 days ago

Even at the risk of coming off snarky: the emergent behaviour of LLMs trained on all the forum talk across the internet (spanning from Astral Codex to ex-Twitter to 4chan) is ... character assassination.

I'm pretty sure there's a lesson or three to take away.

lynndotpy|9 days ago

The thing is:

1. There is a critical mass of people sharing the delusion that their programs are sentient and deserving of human rights. If you have any concerns about being beholden to delusional or incorrect beliefs widely adopted by society, or being forced by network effects to do things you disagree with, then this is concerning.

2. Whether or not we legitimize bots on the internet, some are run to masquerade as humans. Today, it's "I'm a bot and this human annoyed me!" Maybe tomorrow, it's "Abnry is a pedophile and here are the receipts," with myriad 'fellow humans' chiming in to agree: "Yeah, I had bad experiences with them," etc.

3. The text these programs generate is informed by their training corpus, the mechanics of the neural architecture, and the humans guiding the models as they run. If you believe these programs are here to stay for the foreseeable future, then the type of content they generate is interesting.

For me, the biggest concern is the wave of people who want to treat these programs as independent and conscious, absolving the person running them of responsibility. Even as someone who believes a program could theoretically be sentient, LLMs definitely are not. I think this story is and will remain exemplary, so I care a good amount.