Last year, the Open Philanthropy Project wrote:

  A major goal of 2017 will be to reach and publish better-developed views on:

  • Which worldviews we find most plausible: for example, how we allocate resources between giving that primarily focuses on present-day human welfare vs. present-day animal welfare vs. global catastrophic risks.
  • How we allocate resources among worldviews.
  • How we determine whether it’s better to make a given grant or save the money for a later date.

This post is an update on that work.

We’ve since published an extensive report on moral patienthood that grew out of our efforts to become better informed on this topic. However, we still feel that we have relatively little to go on, to the point where the report’s author wasn’t comfortable publishing even his roughest guesses at the relative moral weights of different animals. Although he did publish his subjective probabilities that different species have “consciousness of a sort I intuitively morally care about,” these are not sufficient to establish relative weight, and one of the main inputs into these probabilities is simple ignorance/agnosticism ...

We will also likely want to ensure that we have substantial, and somewhat diversified, programs in policy-oriented philanthropy and scientific research funding, for a variety of practical reasons. I expect that we will recommend allocating at least $50 million per year to policy-oriented causes, and at least $50 million per year to scientific-research-oriented causes, for at least the next 5 or so years.

Many details remain to be worked out on this front. When possible, we’d like to accomplish the goals of these allocations while also accomplishing the goals of other worldviews; for example, we have funded scientific research that we feel is among the best giving opportunities we’ve found for biosecurity and pandemic preparedness, while also making a major contribution to the goals we have for our scientific research program. However, there is also some work that will likely not be strictly optimal (considering only the direct effects) from the point of view of any of the worldviews listed in this section. We choose these causes partly for reasons of inertia from previous decisions, the preferences of specialist staff, etc., as well as an all-else-equal preference for reasonable-length feedback loops (though we will always be taking importance, neglectedness, and tractability strongly into account).

This work has proven quite complex, and we expect that it could take many years to reach reasonably detailed and solid expectations about our long-term giving trajectory and allocations. However, this is arguably the most important choice we are making as a philanthropic organization: how much to allocate to each cause in order to best serve the many and varied values we find important.

Read the source article about cause prioritization at Open Philanthropy Project.