Yudkowsky does not expect a global stop to AI development. He, and thousands of others (including me), are merely saying that if there is not a stop, then we are all dead.
Yudkowsky thinks we are all dead with probability .999. I don't know the subject nearly as well as he does, which makes it impossible for me to have much certainty, so my probability is only .9.
Also, it is only the development of ever-more-capable AIs that is the danger. If our society wants to integrate language models approximately as capable as GPT-4 thoroughly into the economy, there is very little extinction danger in that (because if GPT-4 were capable enough to be dangerous, we'd already all be dead by now).
Also, much as even the creators of GPT-4 failed to predict that it would turn out capable enough to score in the 92nd percentile on the bar exam, almost none of the people who think we are all dead claim to know exactly when it will happen, except to say that it will probably be some time in the next 30 years or so.
I've been in data science only a few years, but I fail to understand what people's issue with AI is. Best I can tell, it's ignorance coupled with fear-mongering.