Or have them set up in a way that makes them hard to run full-system benchmarks on. I can think of a couple of financial firms whose clusters would rank in the Top 10, but a) as you suspected, they're too busy running the money printers to take them down for a few days to run benchmarks, and b) they're split up into more manageable little clusters, so the high-speed fabrics don't connect every node to every other node, which you need for an HPL run.
TOP500 and similar HPC benchmark lists only rank FP64 performance (HPL is an FP64 linear-algebra benchmark).
While both NVIDIA and AMD design their top GPU model for both FP64 and AI/ML workloads to save on design cost, you can do AI training on GPUs that only have high AI performance (like the RTX 4090 or its workstation counterpart, the RTX 6000) without meaningful FP64 hardware at all. The FP64 throughput of the RTX 4090 is negligible, worse than that of any decent cheap CPU; it is provided only for compatibility in testing.
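A rough back-of-envelope sketch of that claim, using approximate public spec figures (core counts, clocks, and the 1:64 FP64 ratio on consumer Ada are assumptions from published datasheets, not measurements):

```python
# Rough peak-throughput estimate illustrating why a consumer GPU like the
# RTX 4090 is irrelevant for FP64/HPL work. All figures are approximate
# public specs, not benchmark results.

CUDA_CORES = 16384        # RTX 4090 FP32 lanes
BOOST_CLOCK_HZ = 2.52e9   # approximate boost clock
FP64_RATE_RATIO = 1 / 64  # consumer Ada: 1 FP64 unit per 64 FP32 units

# Peak FLOPS = lanes x clock x 2 (a fused multiply-add counts as 2 ops)
fp32_tflops = CUDA_CORES * BOOST_CLOCK_HZ * 2 / 1e12
fp64_tflops = fp32_tflops * FP64_RATE_RATIO

# A modest 16-core CPU with AVX-512 at ~3 GHz:
# 8 FP64 lanes per vector, 2 FMA pipes per core, 2 ops per FMA
cpu_fp64_tflops = 16 * 3.0e9 * 8 * 2 * 2 / 1e12

print(f"RTX 4090 FP32 peak:  ~{fp32_tflops:.1f} TFLOPS")
print(f"RTX 4090 FP64 peak:  ~{fp64_tflops:.2f} TFLOPS")
print(f"16-core AVX-512 CPU: ~{cpu_fp64_tflops:.2f} TFLOPS FP64")
```

By this estimate the GPU's ~80+ TFLOPS of FP32 collapses to a bit over 1 TFLOPS of FP64, below what the hypothetical 16-core server CPU can do on paper, which is the point: great for AI training, useless for an HPL score.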
I mean, there is no rule that if you buy a supercomputer you have to benchmark it and submit the results. That said, in the days before AI, the number and type of players that had this amount of compute were also commonly the types to submit their scores to those benchmark lists.
Most players that had this amount of compute also tended to pay less than the equivalent corporate wage, so besides being a good way to stress-test your cluster (prior to general availability), submitting gives you the positive feeling of "I helped make this, I help run this."