Where did the AI safety meme come from?
I've been lurking effective altruist communities for years, and recently I've noticed an unusual amount of brainpower and concern dedicated to AI risk. The topic has always been around, but until the past couple of years it seemed more like a "thing that could possibly become a problem" than a "thing that must be actively combated right now." I'm not sure I like the direction on AI that the EA movement seems to be taking. There's a lot of hypothesizing about possible existential risks from AI without any concrete evidence that we will ever build artificial general intelligence (however you choose to define that), or any evidence that such an intelligence would necessarily be harmful. Additionally, there doesn't seem to be much evidence that current research into AI safety will actually yield anything fruitful if AGI is created.
Meanwhile, there are plenty of actual, real, concrete, definite problems that the world faces! Millions of people die every year of preventable diseases! In light of that, donating to MIRI rather than buying malaria nets seems pretty ineffective. Maybe that donation to MIRI will lead to an extremely important insight that brings safe AGI to the world, but we have no way of knowing that in the moment. Meanwhile, donating money to treat hypovitaminosis A will, with near-certainty, materially improve the lives of the poorest among us. I don't mean to suggest that AI is unimportant, but a lot of this AI safety hype seems very... un-empirical: more the result of the sort of people interested in EA also being interested in AI, and less the result of rational analysis of existential risks.