Previous: [[2021-08-01 A reply to V]]

---

Hullo Peter, and anyone else who might be reading this. Thank you for the characteristically thoughtful and scalable reply.

> What do you think?

What do I think? Let me try to figure it out.

I think that you essentially agree with my simplified and exaggerated, not to mention hastily drafted and overly combative #workingtheory of EA. I think your reply is kind of a “Yes, and?” (I’m simplifying and exaggerating again, I know.) I think I’m not grossly misrepresenting or misunderstanding EA, nor am I unsympathetic to the many good arguments in its favour.

![[Pasted image 20210821120109.png]]

I think that my little paragraph could in fact be read as a ringing endorsement: “EA seems to be the perfect ethical philosophy of our times.” A true product/market fit.

I think (I admit) that I took great pleasure in writing the previous paragraph. I think it’s kind of funny. But I also think it’s kind of true.

I think that many of what I’ll loosely refer to as my “issues” with EA are in fact not with EA itself, but with what I’ll loosely refer to as “our times.”

I think that critiques of EA and/or longtermism are less interesting than the question of why EA and “our times” are such a good match. (I also think EA and/or longtermism themselves are less interesting than that question, but that’s just me.)

I think EAs shouldn’t necessarily assume (not that I suppose they generally do) that critiques of EA are based on claims of its ultimate ineffectiveness as measured by its own goals. What passes for my own humble “critique” of EA is that it is, in fact, much too effective.

I think it’s interesting how EA stems from the mindset and worldview that created the very problems it aims to solve. (Let’s make that “somewhat interesting.” I don’t want to exaggerate or make any claims to originality or depth here. This is just one of the things that “I think.” You asked.)
I think it’s interesting that EA’s only documented engagement with Descartes is with his thoughts on animal cruelty.

I think you should read Heidegger. I think you would enjoy it. I also think you should read *One Thousand Years of Nonlinear History*. I think you would also enjoy it, but for different reasons.

I think EA’s claim to universality and to seeing from God’s eye view annoys me. But then Christianity (to which, as you know but your readers may not, I am quite partial) makes the very same claim, so there’s that. (Is it interesting how our times have replaced God’s Eye View with the View from Nowhere? Maybe? But I digress.)

I think the reference to Stalin in my draft theory was unnecessary. I’m sure there are gentler ways to make fun of people who think they’ll know how future people will judge them.

I think that when I hear someone use the word “humanity,” I reach for my gun.

I think it’s interesting how often EAs say “we should.” What do they mean?

I think that’s it for now.

Cheers from Stalinallee,
V, I think.