rajvarkala | 2 months ago
Can't the same oversight mechanisms that apply to humans correct the flaws of AI agents? What do you think the catch is?
I am not saying things are clearly defined in most settings. But my accounting agent (a real person) gets paid only when he files my tax returns.
Neywiny | 2 months ago
Your accountant has to build in margin, which you pay for, to cover clients who stiff him on the bill or whom he has to take to court to argue that he performed the service as described in the contract. If you didn't hold that threat over his head, he would be able to charge less. Would he? Maybe not, I don't know the guy, but he could.
rajvarkala | 2 months ago
I think that is the core of the argument: it is about risk-sharing between buyer and seller. If the service is sold on outcomes, the seller carries all the risk; if it is sold on work put in, the buyer does.
On top of that, in some scenarios the outcomes themselves are fuzzy.
free_bip | 2 months ago
If you fine-tune a model and it starts misbehaving, what are you going to do to it, exactly? Put it on a PIP? Fire it? Of course not. AIs cannot be managed the same way as humans (and I would argue that's for the best). The best you can do is try a different model, but you have no guarantee that whatever issue your model has is actually solved in the new one.
lbreakjai | 2 months ago
There's no LLM equivalent.
rajvarkala | 2 months ago