1. Anthropic Capture, Intelligence, and Trees

    I present a rather speculative argument whose most likely implication is that if we’re in a simulation, then the root is occupied by a superintelligence, and probably not a value-aligned one. If you’re new to the topic, this is probably not a good introduction, since I mostly wrote it for myself so as not to forget it all. I recommend Nick Bostrom’s Superintelligence instead.

    Read more
  2. The Attribution Moloch

    I argue that sufficient resource scarcity can exacerbate the effects of tiny differences in value alignment to the point where charities with almost identical goals will compete rather than cooperate. Further, a skewed perception of how impact is created, as well as mere ignorance, can cause prioritization to aggravate coordination failures.

    Read more
  3. Dissociation for Altruists

    Some people lack neither altruism nor awareness of effectiveness considerations, but the sheer magnitude of suffering that effective interventions would force them to confront is too painful for them to acknowledge. I give tips on how they can use dissociation to put their altruism on a more scalable basis.

    Read more