I have observed that my priorities reveal some degree of suffering focus. This seems to be due to robustness considerations on two levels.

The influential article The Case for Suffering-Focused Ethics presents four common intuitions that, if a person shares them, indicate that they may have a suffering focus. I recommend reading it. It provides a few examples of intuitions that motivate some of my assessments below.

The reasons I see for why my ethical views qualify as suffering focused, however, follow from moral uncertainty about such intuitions and from my strategies for making robustly positive decisions despite that uncertainty. I default to weaker views because I’m insufficiently sure of stronger ones, and those weaker views seem to involve minimizing suffering.

This is in contrast to my impressions (1) that some people seem to perceive a suffering focus as more rather than less controversial than some alternative views, and (2) that “suffering focus” is a normative or prescriptive position rather than an a posteriori description and observation. (I think that impression 1 is limited to the effective altruism and rationality communities, where documents like Astronomical Waste have had a much greater influence on people’s values than in the population at large.)

Moral Uncertainty

I seem to subscribe to some form of moral antirealism. At least I would not expect that idealized agents – perfectly rational people with all the knowledge in the world, arbitrarily high intelligence, great eloquence, unlimited time, and a perfectly cooperative mindset – must end up agreeing on moral questions the way Aumann’s agreement theorem predicts they would on questions of fact. Nonetheless I feel moral uncertainty in that I don’t know which models of ethics I subscribe to. There are a number of candidates that I can often empathize with, but they are distinct models and often diverge.1

Enter the Parliamentary Model. I imagine that all the different moral theories I subscribe to are factions in a moral parliament. Then I intuit some rough fractions of the parliament and assign the factions to those fractions: a smaller faction of something deontological-looking, a larger faction of something preference utilitarian, and so on.2

Theoretically, you could now just crunch the numbers for every decision, but unfortunately my model is not that precise. The seat shares are only rough estimates, and I’m not even sure of the factions themselves, for example whether my deontological-looking faction is actually a smaller, say, Kantian one, or whether it’s a much larger two-level utilitarian one that includes the preference utilitarian faction among others. I imagine that many people face similar issues.

But the model is still helpful in that a lot of potential actions that I can take elicit near-unanimous votes (with abstentions). A vote on whether I want to start space colonization and fill the Hubble volume with happy people (or better yet, very simple, very happy, simulated beings) elicits some agreement, some shrugs, but also many loud voices urging me to consider the individuals that are suffering, that will be suffering, or that may be suffering if something goes wrong. It’s not met with unanimous approval. Neither would be a goal such as blowing up the planet.

But a lot of actions are met with unanimous approval (and maybe some abstentions), and those often involve cost-effectively minimizing suffering (not necessarily “reducing suffering,” since they may just decrease the rate at which suffering increases).
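To make the heuristic a bit more concrete, here is a minimal sketch of how such a vote might be tallied. The factions, seat shares, and votes below are invented placeholders for illustration; my actual parliament is far fuzzier than any dictionary of numbers.

```python
# Toy sketch of the unanimity-with-abstentions heuristic. Factions, seat
# shares, and votes are invented placeholders, not my actual parliament.

PARLIAMENT = {
    "preference utilitarian": 0.4,
    "hedonistic utilitarian": 0.3,
    "deontological-looking": 0.2,
    "virtue ethics": 0.1,
}
# Note: the seat shares aren't used by the heuristic below; ignoring them is
# exactly what the shortcut gives up relative to a fully quantified vote.

def no_opposition(votes):
    """Passes if no faction votes against (+1 approve, -1 oppose, None abstain)."""
    return all(votes.get(faction) != -1 for faction in PARLIAMENT)

space_colonization = {
    "preference utilitarian": 1,
    "hedonistic utilitarian": 1,
    "deontological-looking": -1,  # loud voices worried about suffering individuals
    "virtue ethics": None,        # a shrug
}
suffering_reduction = {
    "preference utilitarian": 1,
    "hedonistic utilitarian": 1,
    "deontological-looking": 1,
    "virtue ethics": None,        # abstention
}

print(no_opposition(space_colonization))   # False: not met with unanimous approval
print(no_opposition(suffering_reduction))  # True: approval plus abstentions
```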

This approach loses some of the minority protection perks of the fully quantified version of the moral parliament, but that seems unavoidable for me and probably for most people. And since I often have to make costly and hard-to-reverse decisions, it’s also psychologically beneficial not to feel opposition to them in my moral parliament, whether from a minority or the majority.

That is one completely internal consideration that led me to act in ways that seem suffering focused to me.

Moral Cooperation

Outside my own mind, there is another problem, an optimization problem. Goals such as making a cup of coffee are fairly attainable for most people who are likely to read this blog post.3 But a lot of moral goals are harder to attain than that. In fact, they may be maximization-type goals, so there may not be such a thing as attaining them in any absolute sense.

If a goal is hard to attain like that, you can usually attain more of it the more people cooperate with you and the fewer people hinder you.

But say your goal is somewhat outré. Few people will support such a goal and some may oppose it. To attain more of it, it may be useful to sacrifice some of its goal content in order to reap more cooperation or face less opposition. It’s a trade-off between alignment and leverage, and we need to find the right trade-off point.
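One way to picture the trade-off, with numbers that are entirely made up for illustration: watering a goal down lowers how well it matches what you actually want but raises how much support you can recruit, and you want the version that maximizes the product of the two.

```python
# Toy model of the alignment-leverage trade-off; the goal versions and their
# numbers are invented for illustration.

candidates = {
    # goal version: (alignment with the original goal, achievable leverage)
    "original, outré version": (1.0, 0.05),
    "moderately toned down":   (0.7, 0.40),
    "broadly popular version": (0.3, 0.90),
}

def expected_attainment(alignment, leverage):
    # Crude model: how much gets done times how much of it is what you wanted.
    return alignment * leverage

best = max(candidates, key=lambda name: expected_attainment(*candidates[name]))
print(best)  # "moderately toned down" wins under these made-up numbers
```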

Our above examples of blowing up the planet and filling the Hubble volume densely with highly efficient (and space-efficient), very happy beings are both a bit outré. So even if your personal moral parliament endorses one of them unanimously, you may need to tone it down significantly before it becomes tractable to find cooperation partners for it.

And again, in my experience, a lot of the highly uncontroversial and widely supported actions that one can take have to do with minimizing suffering.

Note

Note that I think this also extends, to some degree, into population ethics. For example, most people in my circles endorse a “Pro Choice” stance even in the case where (1) in world A the child is not born while in world B it has a net positive life, (2) the decrease in the parents’ well-being in world B compared to A is offset by the additional well-being of the child in world B, and (3) the resources freed by not having to raise the child in world A are not invested into an at least equal gain of happiness for an existing or marginal person. These assumptions are perfectly plausible for non-EA parents and run counter to the implications of classic utilitarianism (i.e., total utilitarianism). The “Pro Life” position is very similar so long as it allows abstinence. You have to turn to fringe movements like Quiverfull to find a group that decides in accordance with classic utilitarian population ethics.
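To spell out the arithmetic with made-up numbers that satisfy assumptions (1) to (3): total utilitarianism sums well-being across everyone who exists, so under these assumptions it prefers world B, which is exactly where the common Pro Choice intuition diverges from it.

```python
# Illustrative numbers only, chosen to satisfy assumptions (1)-(3) above.

# World A: the child is not born; the parents keep their resources, but per
# assumption (3) the freed resources don't buy anyone extra happiness.
world_a_total = 10.0              # parents' well-being

# World B: the child exists with a net positive life (assumption 1), and the
# parents' loss is smaller than the child's gain (assumption 2).
world_b_total = 8.0 + 5.0         # parents' reduced well-being + child's life

print(world_b_total > world_a_total)  # True: total utilitarianism favors B
```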

A bit of a special case is cooperation with near copies of yourself that nonetheless have significantly different moral goals. Here the concept of superrationality becomes relevant and can lend further support to general cooperativeness heuristics.


  1. Back when I hadn’t thought about metaethics and assumed that moral realism must be true, I predictably dedicated a lot of time to the search for the true morality and to the search for some form of test for the truth value of moral theories. That should probably be the top priority for moral realists. 

  2. There are two things that I’ve decided not to merge down to this level, namely my decision procedures (I’m a big fan of the integrity one) and heuristics that follow from considerations of cooperation (more on that in the second section). 

  3. If it is not for you, then I feel all the more honored that you decided to invest in reading my blog. 

