My impression is that the cause of AI safety has become increasingly mainstream, with many researchers unaffiliated with the above organizations working at least tangentially on it.
However, the situation is difficult for an external donor. Some of the organizations doing the best work are already well funded. Others, such as MIRI, seem to be doing a lot of good work, but (perhaps necessarily) it is significantly harder for outsiders to judge than last year, as there doesn’t seem to be a really heavy-hitting paper like there was then.
I see MIRI’s work as a long-shot bet that their specific view of the strategic landscape is correct; given that view, they are basically irreplaceable. GCRI’s and CSER’s work is more mainstream in this regard, but GCRI’s productivity is especially noteworthy given the order-of-magnitude difference in budget size.
As I have once again failed to reduce charity selection to a science, I’ve instead attempted to subjectively weigh the productivity of the different organizations against the resources they used to generate that output, and donate accordingly.
Read the full article on AI risk by Larks at Effective Altruism Forum