thr3000 | 1 year ago
{ parents, child, AI_character, lawsuit_dollars }
The AI was trained to minimize lawsuit_dollars. The first two were selected to maximize it. Selected as in "drawn from a pool," not that they necessarily made anything up. It's obvious that the parents and/or child can manipulate the character toward a judgment > 0. It'd be nice if the legal system made sure that's not what happened here.
dmurray | 1 year ago
That seems wrong. The null AI would have been better at minimizing legal liability. The actual character.ai to some extent prioritized user engagement over a fear of lawsuits.
Probably it's more correct to say that the AI was chosen to maximize lawsuit_dollars. The parents and child could have conspired to make the AI more like Barney, and no one would have entertained a lawsuit.
thr3000 | 1 year ago
The AI was trained to maximize profit, defined as net profit before lawsuits (NPBL) minus lawsuits. Obviously the null AI has an NPBL of zero, so it's eliminated from the start. We can expect NPBL to be primarily a function of userbase minus training costs. Within the training domain, maximizing the userbase and minimizing lawsuits are not in much conflict, so the loss function can target both. It seems to me that the additional training cost to minimize lawsuits (that is, holding userbase constant) pays off handsomely in terms of reduced liability. Therefore, the resulting AI is approximately the same as if it had been trained primarily to minimize lawsuits.
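The argument above can be sketched as a toy optimization. Every number and functional form here is hypothetical (invented for illustration, not from any real data); the only point is the structure: if extra safety training costs little in engagement but cuts expected liability steeply, the profit-maximizing choice lands close to the lawsuit-minimizing one.

```python
# Toy sketch of the claim that maximizing profit = NPBL - lawsuits
# approximately coincides with minimizing lawsuits, under the stated
# premise that engagement and safety are "not in much conflict."
# All constants below are made up for illustration.

def npbl(caution: float) -> float:
    """Net profit before lawsuits: userbase revenue minus training cost.
    'caution' in [0, 1] is extra safety training; assumed to cost a
    little and to barely reduce engagement (the low-conflict premise)."""
    userbase_revenue = 100.0 * (1.0 - 0.05 * caution)  # mild engagement loss
    training_cost = 10.0 + 5.0 * caution               # safety training cost
    return userbase_revenue - training_cost

def expected_lawsuits(caution: float) -> float:
    """Expected liability falls steeply as caution increases."""
    return 50.0 * (1.0 - caution) ** 2

def profit(caution: float) -> float:
    return npbl(caution) - expected_lawsuits(caution)

# Grid-search the caution level that maximizes profit versus the one
# that minimizes lawsuits: under these assumptions they nearly coincide.
grid = [i / 100 for i in range(101)]
profit_maximizer = max(grid, key=profit)
lawsuit_minimizer = min(grid, key=expected_lawsuits)
print(profit_maximizer, lawsuit_minimizer)  # 0.9 vs 1.0
```

The gap between the two optima shrinks as the liability curve gets steeper relative to the engagement penalty, which is exactly the "pays off handsomely" condition; with a large engagement penalty instead, the two objectives would diverge.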