Recursive Values
If I were to make a list of how people should behave, it would include things like
- avoid killing other animals, in particular humans
- help your friends when they are in need
and so on. The list should be weighted and pruned of redundancy, so that it can be used to assign a goodness value to every possible life. Suppose that is done. I wonder whether the list should contain (or entail) a rule saying that good people see to it that other people are also good:
- maximize the goodness in the world
Notice "goodness", not "pleasure", where "goodness" is defined by the very list this rule is on.
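To make the recursion concrete, here is a toy sketch in code. Nothing in it comes from the list above beyond the two sample rules: the weights, the dict representation of a life, and the depth cap (added only so the toy recursion terminates) are all made-up illustrations of a goodness function one of whose rules refers back to the goodness, by that same function, of the lives a person influences.

```python
# Toy sketch of a recursively defined goodness function: one weighted rule
# on the list refers back to the goodness, by this same list, of other lives.
# All rules, weights, and the depth cap are made-up illustrations.

def goodness(life, depth=3):
    """Weighted sum of toy rule scores for a life (a plain dict)."""
    if depth == 0:                 # cap the recursion so the toy example terminates
        return 0.0
    score = 0.0
    score += -10.0 * life["killings"]         # "avoid killing"
    score += 2.0 * life["friends_helped"]     # "help your friends when they are in need"
    # The recursive rule, "maximize the goodness in the world": credit this
    # life with the goodness (by this very list) of the lives it influences.
    score += 1.0 * sum(goodness(other, depth - 1) for other in life["influenced"])
    return score

friend = {"killings": 0, "friends_helped": 1, "influenced": []}
me = {"killings": 0, "friends_helped": 2, "influenced": [friend]}
print(goodness(me))   # 6.0: 4.0 for helping friends, plus 2.0 via the friend's goodness
```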
It certainly seems to me that it is a good thing to increase other people's goodness rank (say, by collecting money for charities or by convincing friends to eat organic food). And it is a good thing to make other people increase yet other people's goodness rank. The rule would also explain why it is acceptable to harm other people in order to prevent them from committing horrible crimes (even to shoot a suicide bomber on her way to kill many innocent people): doing harm decreases my goodness, but preventing the evil makes up for that.
Unlike a rule that says we should maximize pleasure or minimize suffering, this rule doesn't allow sacrificing innocent people for their organs. Does it allow killing innocent people (say, a bus driver) if that prevents other people from committing horrible crimes? That depends on the weighting: on whether the goodness gained by preventing the horrible crimes outweighs the badness of killing an innocent person. Which sounds like exactly the question one would expect to face.
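Continuing the toy sketch above, with made-up numbers, the point is just that the verdict flips with the assumed weights:

```python
# Continuation of the toy model, with made-up weights: does killing an
# innocent bus driver come out "worth it" if it prevents horrible crimes?
kill_penalty = -10.0      # toy weight of the "avoid killing" rule
gain_per_crime = 3.0      # toy goodness preserved per horrible crime prevented

for crimes_prevented in (2, 5):
    net = kill_penalty + gain_per_crime * crimes_prevented
    verdict = "permitted" if net > 0 else "not permitted"
    print(crimes_prevented, "crimes prevented:", net, "->", verdict)
# 2 crimes prevented: -4.0 -> not permitted
# 5 crimes prevented: 5.0 -> permitted
```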
What the rule doesn't explain is why I find it acceptable to kill somebody in order to prevent her from killing somebody else. It seems that the net effect on total goodness in this case is zero (either way, somebody ends up being killed), which can hardly compensate for the evil done by killing somebody. I'm not sure how to handle that.
Anyway, the recursive nature of this set of rules seems interesting.
Hey Wo,
If we're allowed to factor in expected "goodness rankings" and whatnot, then it shouldn't be too difficult to use this rule to explain why it might be acceptable to kill someone in order to prevent her from killing somebody else. Presumably, we wouldn't feel as motivated to prevent one criminal from killing another (unless, of course, there are values other than "goodness ranking" at stake).