Do you care first to justify the assertions you made on the back of yours? I realize it's a common utilitarian habit to believe that the quantification they do is meaningful outside the purely reflexive sense, but that is a belief, requiring as much substantiation as any other before it's taken to model anything about the world.
I don't think I actually asserted anything, but let me give a shot at explaining:
The model is that higher values of G/B are better, where G is good years per capita and B is bad years per capita, so the ethical thing to do is to maximize G and/or minimize B. I called this "basic" because even though it tends to fall apart when scaled up, I think it captures what most people think of as ethics. It's good for making tactical decisions about things like "should I punch that guy" or whatever.
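To make the model concrete, here's a minimal Python sketch. The function name, the scenarios, and all the numbers are made up for illustration; the only thing taken from the model above is the ratio of good years per capita to bad years per capita.

```python
# Minimal sketch of the G/B model described above. All numbers are
# invented; an "outcome" is just total good years and bad years for
# some population.

def gb_ratio(good_years, bad_years, population):
    """Higher is better under this model: good years per capita
    divided by bad years per capita. (Note the population terms
    cancel, so this reduces to good_years / bad_years.)"""
    g = good_years / population
    b = bad_years / population
    return g / b

# Two hypothetical outcomes of a tactical decision:
punch = gb_ratio(good_years=60, bad_years=40, population=2)
walk_away = gb_ratio(good_years=90, bad_years=10, population=2)

# The "ethical" choice under the model is whichever has the higher ratio.
best = max([("punch", punch), ("walk away", walk_away)], key=lambda t: t[1])
print(best[0])  # walk away
```

One quirk worth noticing: because G and B are both per capita over the same population, the population cancels out, so as stated the model only ever compares total good years to total bad years.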
For #1, where ethics would be just a fiction we made up to support a social game, what I mean is: people in a group have to play a prisoner's dilemma game where they find a balance of cooperation and resource sharing. One way to reach that balance is to assume good will and treat others the way you want to be treated. (Right up until someone violates that, then it's hyperbolic response time...) If ethics is just something people made up to get everyone to maintain that balance, then "higher G/B is better" is rooted in the same fiction.
For #2: My definition of G and B included the good and bad experiences of animals, not just people. That means it's possible that, over a billion years or so, you could still achieve a higher G/B by going back to a "natural" world inhabited by sentient but less intelligent animals.
For #3: It could be that the G/B of a natural world would actually be lower than that of one dominated by humans, because several billion people living out their lives before the crash could produce a big enough G value to outweigh the eventual suffering.
For #4: If life as a human counted more towards G than the same amount of life as a beaver, then it could be a force multiplier for #3.
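Point #4 can be sketched with a weighted version of G. The species weights and year counts below are entirely invented; the point is only that the same total of good years scores very differently once a human year counts for more than a beaver year.

```python
# Sketch of #4: a human year contributes more to G than a beaver year.
# Weights and year counts are made up for illustration.

SPECIES_WEIGHT = {"human": 1.0, "beaver": 0.1}  # assumed weights

def weighted_g(good_years_by_species):
    """Sum good years across species, scaled by an assumed per-species weight."""
    return sum(SPECIES_WEIGHT[s] * years
               for s, years in good_years_by_species.items())

# Same total good years, distributed differently:
human_world = weighted_g({"human": 1_000, "beaver": 0})
natural_world = weighted_g({"human": 0, "beaver": 1_000})

print(human_world, natural_world)  # 1000.0 100.0
```

Under these assumed weights the human-dominated scenario scores ten times higher for the same raw year count, which is the "force multiplier" effect on #3.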
throwanem|1 year ago
13of40|1 year ago