Not OP. I purchased 8 Cascade Lake servers for HPC after testing AMD's latest EPYC. One of the reasons is the Intel compiler: we use it in scientific software to get extra performance, and it shits all over AMD processors. I saw the same software take approx. 30 times longer to run because it was not an Intel CPU (that was the worst case; with GCC the penalty was much smaller). This is not AMD's fault. Just saying there are some of us who are stuck with the devil that is Intel.
Doesn't the Intel compiler let you disable the runtime CPU detection and generate an executable that unconditionally uses the instruction sets you explicitly enable? I know they also provide environment variable overrides for at least some of their numerical libraries.
It is a bit hit-or-miss to get it to do the right thing. We compile with arch-specific settings and add the features we'd like as well, but in spite of that it does not look like it is using all the facilities available (unverified claim, based on perf outcomes). My guess was that it ignored our flags once it did not see "GenuineIntel" in the vendor field. To be honest, I had to weigh the cost of my time spent figuring this out against the savings from going AMD. Two things made us stop and buy Intel:
1. Our major BOM cost is memory, not the CPU. So a 30% savings on the CPU is not 30% off the bill, but much less.
2. Even if we found a way to tip the scale in AMD's favour, our binaries still need to run on the rest of our Intel servers without a significant perf hit. So our liberty to change is limited.
It's sad, but the reality is that we had to buy more Intel. Luckily, their prices are far lower than on our last purchase, before AMD lit a fire under their asses. So there is that.
Maybe you have a problem where the compiler can use AVX-512 trivially, in which case yes, Intel is hugely better. We are all very lucky to have the crazy high-end hardware we have. I can't wait: in a few years I should be able to get a fairly cool Mac Mini equivalent with 32 cores, so that a JavaScript test suite can take less than 10 minutes to run...
There was an HN discussion a couple of weeks ago about whether the Intel CPU detection "feature" is an evil money grab or a legitimate way to prevent unexpected runtime behavior on AMD CPUs.
I'm not sure which discussion you're referring to; I've seen the topic come up many times. But I haven't seen a reasonably non-evil explanation for why the compiler should preemptively assume that AMD's CPU feature flags cannot be trusted, while Intel's can. Detecting known-buggy CPU models is fine, but assuming that future AMD CPUs are more likely to introduce bugs to AVX-whatever than future Intel CPUs is not something that I have seen justified.
Okay, so there was another article recently about a new fuzzing tool from Google that revealed thousands of bugs in Safari and open source projects.
I assume the exploitable edge cases are so numerous, and 100% test coverage so hard to achieve (is it even possible?), that it is hard enough for Intel to deal with correct execution on their own platform.