But that's not the argument from the paper. The idea is that a superintelligence would have to understand the consequences of any program in order to check it for harm, and this is undecidable due to the halting problem. The argument is correct, but trivial and not worth a paper, in my opinion.
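For what it's worth, the reduction behind that claim is the standard one (essentially Rice's theorem): if you had a decider for "is this program harmful?", you could decide halting. A minimal sketch, where `is_harmful` is a hypothetical oracle and `do_harm` a placeholder for any harmful action (both names are illustrative, not from the paper):

```python
def make_wrapper(program_src: str, arg: str) -> str:
    # Build (as source text) a program that first runs P(arg),
    # then performs a "harmful" action. The wrapper is harmful
    # if and only if P(arg) halts.
    return (
        "def main():\n"
        f"    P = {program_src}\n"
        f"    P({arg!r})\n"
        "    do_harm()  # reached only if P(arg) halted\n"
    )

def halts(program_src: str, arg: str, is_harmful) -> bool:
    # If a total decider `is_harmful` existed, this would decide
    # the halting problem -- contradiction, so it can't exist.
    return is_harmful(make_wrapper(program_src, arg))
```

The code never runs the wrapper; it only shows that a harm-checker would double as a halting-checker, which is the whole (one-line) argument.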