11 Comments

I confess I am failing to follow a key step in the discussion. Why does having generally discoverable rational reasons to co-operate in certain circumstances mean moral realism is true? To me, it doesn’t count as moral if ‘co-operative’ actions are taken only because the expected value from a purely selfish perspective is positive. Ditto if the actions are chosen from fear of public punishment or from the selfish need to inspire others to take a course of action that is personally beneficial. Objective morality is true (it seems to me) only if there are reasons that mandate acting unselfishly in some circumstances.

author

I think this is a merely verbal dispute, but here are the reasons you could have for beneficence towards others:

1) **Agape/metta**: you just want, as a matter of preference or sentiment, for other beings to flourish. I think this is a really important factor in human behavior, and I think the objective considerations defended in this post give you reason to foster it, but I also don't think these sentiments can be objective - they're preferences that any given agent may or may not have. (Personal identity is a construct, so caring about your future self is just a highly partial version of this.)

2) **Social pressure/law**: the reputation economy, social signalling, conscious incentives, etc., as you note. These are intersubjective rather than objective; I don’t think this is what grounds moral realism, but rather that moral realism nudges these in better and better directions, all else being equal.

3) **The galaxy-brained game-theoretic considerations outlined above**. These are objective, but they are grounded in having some particular preferences (which might be altruistic, as per agape, or just wanting Doritos, or maximizing paper clips, and so on).

4) **(Insert unknown thing here)**. I think this is what you want, and I don’t have anything to rule it out definitively. But I don’t know what the metaphysical grounding would be, so until I do I wouldn’t defend moral realism on the basis of it. Some hedonic utilitarians say that pleasure and pain are intrinsically good and bad and that we have direct access to this, which seems like the most plausible account of this kind, but it seems more parsimonious to say that pleasure and pain are posterior to our wanting or avoiding particular experiences - which is part of why a preference-based, rather than hedonic or objective-list, conception of welfare seems right to me.

That said, I don’t think (3) is *so* galaxy-brained that it’s far off from ordinary moral reasoning. If you say “well, I could cheat on my taxes and almost certainly not get caught, but just imagine what would happen if everyone did that,” or “of course I went to save that drowning kid - it’s only what I’d hope someone else would do if I were in the same situation,” I wouldn’t call that non-moral reasoning. Our use of moral discourse involves a bit of all three, but the third is the kind I’d most expect aliens or Clippy to converge on.
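To make the game-theoretic point concrete, here is a minimal sketch (my own illustration, not anything from the post): in an iterated Prisoner's Dilemma, a conditional cooperator like tit-for-tat sustains mutual cooperation with its own kind while only being exploited once by an unconditional defector, so cooperation can pay purely on each agent's own payoff terms. The payoff values and strategy names are standard textbook choices, assumed for illustration.

```python
# Toy iterated Prisoner's Dilemma, standard payoffs:
# (my move, their move) -> my payoff
PAYOFFS = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(history):
    """Cooperate first, then copy the opponent's last move."""
    return history[-1] if history else "C"

def always_defect(history):
    """Defect unconditionally, regardless of history."""
    return "D"

def play(p1, p2, rounds=100):
    """Total scores for two strategies over repeated rounds."""
    h1, h2 = [], []  # moves each player has seen from the other
    s1 = s2 = 0
    for _ in range(rounds):
        m1, m2 = p1(h1), p2(h2)
        s1 += PAYOFFS[(m1, m2)]
        s2 += PAYOFFS[(m2, m1)]
        h1.append(m2)  # p1 remembers p2's move, and vice versa
        h2.append(m1)
    return s1, s2

# Two reciprocators sustain mutual cooperation throughout...
coop, _ = play(tit_for_tat, tit_for_tat)
# ...while a defector exploits a reciprocator only once.
exploited, defector = play(tit_for_tat, always_defect)

print(coop, exploited, defector)  # -> 300 99 104
```

Mutual reciprocation (300 points each) beats what a defector earns against a reciprocator (104), so an agent with purely selfish preferences still has reason to be the cooperating type - which is the shape of the "imagine if everyone did that" reasoning above.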


First of all, this post is really nicely written so great job! Second, I have a few concerns with some of your points:

1) It seems like people have enough differences in thought (and are not correlated enough) for your thoughts not to acausally influence other people’s thoughts. Should you go around giving people money because that will acausally increase the chances of you getting money? If you do think so, can I be first?

2) Even if you showed that it’s good for people to think moral realism is true and that people should cooperate accordingly, that does not mean it is actually true. Acting in this way would still be in one’s self-interest. Similarly, telling someone that they should be honest because they will be happier if they are does not make honesty a moral thing…

Tell me where you think I’m going wrong!

author

1) Yes, but everyone just giving each other money is less useful than giving it to the people who could most use it. So give it through GiveDirectly :) This also acausally makes it more likely that the Demiurge will be nice to us!

2) Also a fair consideration - see my reply to Eugene above :)


I’m now realizing that this post seems largely LW inspired, and I’m happy to see that more of us have taken over the Substack sphere.

1) I’m a little confused here. I just don’t think one would be better off giving to people through GiveDirectly (besides maybe social status and fuzzies). I don’t think this is true empirically. How would one know whether they are being acausally influenced, either positively or negatively, in practice? **How do these beliefs anticipate experience?**

2) Understood. I’m not sure I would call this moral realism as most philosophers use the term, but it seems to result in very similar outcomes, and I’m not a huge stickler about words. If it turns out your theory is correct, I’m willing to grant you the title of moral realism lol.

author

0) Largely, yes! I could also cite Korsgaard and Elster. No claims made to originality, though there's also no one neat source I can point to and say "yes, exactly that."

1) An empirical prediction would be that the orthogonality thesis is false and that advanced intelligences with reflective capacity would be somewhat influenced by considerations of this kind. But my main motivation for finding this convincing is theoretical, not empirical (like my belief that the world will exist a million years after my own death) - it seems to follow from what hard-nosed materialists are committed to already.

2) :) (I'm willing likewise to adopt whatever terminology one prefers - Korsgaard calls herself an intersubjectivist rather than a realist, though she has something more robust in mind than what some people mean by that term.)

Jun 20 · Liked by metaphysiocrat

It is true that moral talk is reason and not just manipulation. But some moral talk is manipulation. Sometimes what people claim is game theory is actually manipulation, just as some of what is accused of being manipulation is actually game theory.

An awful lot of moral talk is both game theory and manipulation. It is game theory between coalitions, hiding the question of whether the individual gets a good deal by joining the coalition.

author

Agreed entirely - to be clear, I was not saying moral talk is not manipulation, but there being objective reasons to appeal to allows for it to be more than *just* manipulation. (And to be clear, manipulation is not always bad either - it's a good thing that we gas each other up for moral action every so often.)


I am pretty skeptical of cheerleading. Most of the time it is the coalition saying "trust me, this is moral." This is a symmetric weapon that will be selected for the coalition exploiting the individual.


I got off the train fairly early in this discussion, in the section on nihilism and egoism. I'm puzzled as to what you have in mind when you talk about "reasons." For instance, you say:

"If you want to eat something tasty, the fact that Doritos would be tasty to you is a reason to eat them."

What do you mean when you say that something "is a reason" in this context? You go on to say:

"Still, they show that normativity can be derived from natural facts alone in a straightforward way."

I'm not sure you've shown this is true, in part because I don't know what you have in mind by normativity.
