Rob: Since 2012, there's been a distinct movement towards focusing on existential risk and long-term future causes. But it's not as if there are many new arguments that we weren't aware of in 2012. We've just become more confident, or we're more willing to make the contrarian bet that thinking about the very long-term future is the most impactful thing to do.

I'm curious to know whether you think we should have made that switch earlier, and whether we were too averse to doing something weird and unconventional that people in wider society might not have respected or understood.

Nick: Yeah, that's a good question. I think some things have changed that have made focusing on the long-term future more attractive. But I mostly think we should've gone in more boldly on those issues sooner, and that we had a sufficient case to do so.

I think the main thing that has changed, with AI in particular, is that the sense of progress in the field has taken off. It's become a bit more tangible for that reason. But mostly, I do agree that we could've gone in earlier. And I regret that. If we had spent money to grow work in that field sooner, I think it would have been better spent than whatever the Effective Altruist community ends up spending its last dollars on. I wish the field were larger today. Yeah, so, I think that was a mistake.

I guess if you ask, "Why did we make that mistake?", I think you're probably pointing in roughly the right direction. It's uncomfortable to do something super weird. I don't know, though; I guess different people would have different answers.

If I ask myself why I didn't go into this more boldly earlier (and I include myself in the group of people who could have), I think there was a sense of: "Wow, that's pretty crazy. Am I ready for that? What are people going to think?"
