Re Economics: binary compatibility is not very relevant for folks using open source software; most of the time source code is pretty portable, and Debian's riscv64 port seems to be above 98% built:
What happens if your RISC-V implementation doesn't meet the Debian requirements? Or, for that matter, provides some sweet instructions that speed up memcpy, or whatever, 100x?
Binary compatibility matters on anything that isn't compiled specifically for the machine.
Yocto/Gentoo might have been better examples if you want to argue that binary compatibility doesn't matter, particularly if you need to compile the bootstrap image just to get the machine to boot.
It just has to meet the standardised baseline that is already defined and that all Linux distros use. For all the extensions, Linux has various well-established mechanisms to detect and utilise varying CPU capabilities; these are necessary and already used on many other platforms.
As I'm sure you're aware, those mechanisms aren't ideal. There are performance, maintenance, and various other problems with them. They exist because distros are compiled to the lowest acceptable common denominator, and then libraries/etc. are swapped in as needed. For something like x86, this is an acceptable tradeoff because there is an expectation that a distro boots on a 25-year-old computer as well as on the latest AMD/Intel offerings with a ton of new instructions.
That said, if you rebuild many apps with -march=native, -flto, etc., there are frequently large performance benefits, because the compiler can selectively use things like AVX-512 for random code sequences where calling out to a library function, or checking for feature existence at runtime, would wipe out much of the perf advantage.
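In practice this whole-system rebuild is just a matter of setting the toolchain flags globally; on Gentoo that lives in make.conf (the settings below are typical examples, not prescriptive):

```shell
# /etc/portage/make.conf (Gentoo) -- illustrative settings only.
# -march=native lets the compiler use every instruction the build
# host supports; the resulting binaries are NOT portable to older
# CPUs, which is exactly the tradeoff being discussed.
COMMON_FLAGS="-O2 -march=native -flto"
CFLAGS="${COMMON_FLAGS}"
CXXFLAGS="${COMMON_FLAGS}"
```

The cost is that every binary is now tied to that one machine's feature set, which is why binary distros can't ship builds like this.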
A large part of the advantage of a new architecture would be avoiding all this crap. If it comes baked in from the start, that isn't a good sign, considering what it will look like in a decade or two.
https://buildd.debian.org/stats/