thr3000 | 1 year ago

My reading is pretty much the same as yours. I think of it in terms of tuples:

   { parents, child, AI_character, lawsuit_dollars }
The AI was trained to minimize lawsuit_dollars. The first two were selected to maximize it. Selected as in "drawn from a pool," not that they necessarily made anything up.

It's obvious that the parents and/or the child can manipulate the character in the direction of a judgment > 0. It'd be nice if the legal system made sure that's not what happened here.
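A toy simulation of that selection effect (all numbers invented, nothing to do with the actual case; the standard normal is an arbitrary stand-in for "expected judgment per pair"): out of a pool of families interacting with the character, the one that ends up in a lawsuit is the one with the largest expected judgment, which can look nothing like a typical draw.

    import random

    random.seed(0)
    # Hypothetical expected judgment for each (parents, child) pair in the pool.
    pool = [random.gauss(0, 1) for _ in range(10_000)]
    typical = sum(pool) / len(pool)   # what an average pair looks like
    litigant = max(pool)              # the pair the lawsuit selects
    print(f"typical: {typical:+.2f}  litigant: {litigant:+.2f}")

The average pair sits near zero while the selected litigant sits several standard deviations out, which is all "drawn from a pool" needs to mean.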

dmurray | 1 year ago

> The AI was trained to minimize lawsuit_dollars.

That seems wrong. The null AI (one that outputs nothing at all) would have been better at minimizing legal liability. The actual character.ai, to some extent, prioritized user engagement over fear of lawsuits.

Probably it's more correct to say that the AI, too, was chosen to maximize lawsuit_dollars: the parents and child could have steered it to be more like Barney, and no one would have entertained a lawsuit.

thr3000 | 1 year ago

OK, this seems like a nitpick, but I'll refine my statement, even though doing so obscures it and doesn't change the conclusion.

The AI was trained to maximize profit, defined as net profit before lawsuits (NPBL) minus lawsuits. Obviously the null AI has an NPBL of zero, so it's eliminated from the start. We can expect NPBL to be primarily a function of userbase minus training costs. Within the training domain, maximizing the userbase and minimizing lawsuits are not in much conflict, so the loss function can target both. It seems to me that the additional training cost of minimizing lawsuits (that is, holding userbase constant) pays off handsomely in reduced liability. Therefore, the resulting AI is approximately the same as if it had been trained primarily to minimize lawsuits.
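A minimal numerical sketch of that claim, with every curve and constant invented for illustration: treat training as a single one-dimensional knob, hold training costs constant, and let the engagement peak sit near the liability minimum (the "not in much conflict" assumption). The profit optimum then lands almost on top of the liability-only optimum.

    # Toy model only: the userbase and lawsuits curves are made up.
    def userbase(x):
        return 5.0 - (x - 1.0) ** 2       # engagement peaks at x = 1.0

    def lawsuits(x):
        return (x - 0.9) ** 2             # liability bottoms out at x = 0.9

    def profit(x):
        return userbase(x) - lawsuits(x)  # NPBL minus lawsuits

    xs = [i / 1000 for i in range(-2000, 3001)]
    print(max(xs, key=profit))            # 0.95: the profit-trained AI
    print(min(xs, key=lawsuits))          # 0.90: the lawsuit-minimizing AI

The two optima differ only by how steep the engagement curve is relative to the liability curve; when the objectives are nearly aligned, the profit-maximizing AI and the lawsuit-minimizing AI are nearly the same model, which is the conclusion above.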