Initial thoughts on What We Owe The Future

I’ve been reading What We Owe The Future as part of an EA book group. Some tentative initial thoughts:

  • The book seems to conflate two versions of longtermism: longtermism as the philosophical position that almost all value likely exists in the distant future, and longtermism as a cause-area within EA.

Philosophical longtermism = the view that most moral value in our timeline exists in the (distant) future.

Thoughts on philosophical longtermism

  • You can optimize for world states and be indifferent to which specific people happen to come to exist in those worlds. This gets around any non-identity objections.
  • If you want to say we have obligations to future people specifically, rather than just obligations to optimize for better future world-states, you have to defend a bunch of really weird claims (e.g: is not having sex at a precise moment in time immoral because you deny a specific future person the right to life?).
  • My guess is the book will go for a world state optimisation argument. If it does, longtermism as a conclusion is pretty self-evident and shouldn’t require any large argumentative leaps.
  • I think the analogy between spatial location being morally irrelevant and temporal location being morally irrelevant is highly interesting. I’ve given it a lot of thought myself and have come to conclusions that are similar and potentially more radical (e.g: whether a mind is or will ever be physically instantiated is not a morally relevant factor).

Thoughts on longtermism as a cause-area

  • Debating 101: when someone is saying something that seems a) unclear or b) so obviously true that disagreeing with it would be stupid, you should be sceptical and try to pin down their position. What does longtermism as a cause-area actually mean? Does it mean x-risk? But we already care a great deal about x-risk. Does it mean trying to influence the far, far future? But then isn’t tractability a huge concern? How would a band leader in Africa 200k years ago have been able to predict the impact of their actions on today?
  • I worry quite a bit about corruption and staying honest. Most charities are highly inefficient because the charity sector, unlike the for-profit sector, lacks meaningful natural selection for effectiveness. EA partially tries to alleviate this by funding effective charities and improving collective epistemic norms. Don’t "longtermist" charities with impossible-to-gauge impacts inevitably mean that funding and prestige distribution just reverts to the standard model, where effectiveness doesn’t matter at all? Won’t that mean that, even if longtermism were hypothetically viable, our selection mechanism would be so broken that we’d pretty much only get useless charities?
