(no title)
i_dont_know_ | 1 month ago
Is it this particular model from today? What if it's a minor release version change, is it a new entity, or is it only a new entity on major release versions? What about a finetune on it? Or a version with a particular tool pipeline? Are they all the same being?
I think the analogy breaks down pretty fast. Again, not to say we shouldn't think about it, but clearly the way to think of it is not "exactly a person"
kayo_20211030 | 1 month ago
To be clear, I believe that models are machines. They're clever, useful machines. We get sucked in. But they're just machines, and thus property. If I delete a model, in an effective sense, I've disposed of property. I have not destroyed anything that I would consider a "who", i.e. a person. I've just turned off the computer.

But, as the original piece points out, there are folks out there with a pathological (yes!) concept of AI as sentient entities - persons; well, let's say person-adjacent, at least. They have "relationships". Will they feel it's absolutely evil when they stop paying the subscription and the company "terminates" the model? Maybe they will, but that's their scrambled thinking, not mine. If one believes an AI is a person, one *does* have an ethical dilemma when it's turned off. You'd have an ethical obligation to stop the slaughter, wouldn't you?
If I take my sick dog to the vet to be put down because she has a cancer that's making her life miserable, I'm emotional, but ethically I feel it's the right thing to do. It's also lawful. I don't think I'd feel as comfortable ethically taking my grandmother in for the big exit. Also, it's not lawful in most places, even with informed consent. The distinction is the difference.