The question posed specifies real coefficients; it looks like it was experimentally tested on random coefficients drawn from a uniform distribution on [-1, 1] and from a sort of scaled normal distribution.
This stuck out to me, and might be a place where our intuition breaks down. As a human, if you tell me, "give me a polynomial of degree n with coefficients drawn uniformly from [-1, 1]," I will likely produce one with a lot of zero coefficients, just because many of the polynomials you encounter in the wild have mostly zero coefficients.
In reality, if you grab a coefficient uniformly from [-1,1], the probability that it is 0 is 0.
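To make that concrete, here's a quick sketch (my own, not from the original experiment): draw a million coefficients uniformly from [-1, 1] and count the exact zeros.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# An exactly-zero draw is possible in floating point, but its probability
# is vanishingly small (roughly one in 2**64 per sample), so with a
# million draws we expect to see none at all.
zeros = sum(1 for _ in range(1_000_000)
            if random.uniform(-1.0, 1.0) == 0.0)
print(zeros)  # → 0
```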
Likely (I can’t find the test code) it was tested on coefficients selected from the IEEE doubles in [-1, +1], so from a subset of the numbers of the form m/2^n with m and n integers and |m| at most 2^53 or thereabouts.
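As a quick sanity check (my own sketch, not from the thread): Python's `float.as_integer_ratio()` shows that any sampled double really is a dyadic rational, i.e. its exact denominator is a power of two.

```python
import random

random.seed(1)
for _ in range(5):
    x = random.uniform(-1.0, 1.0)
    m, d = x.as_integer_ratio()   # exact representation: x == m / d
    # For any finite IEEE double the denominator is a power of two.
    assert d & (d - 1) == 0
    print(f"{x!r} = {m} / 2**{d.bit_length() - 1}")
```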
For this problem, I don’t see how that would give different results from picking any real number in [-1, +1], but that’s not a priori guaranteed. Mathematicians are well trained as nitpickers.
Why would that matter? I was thinking: “If you used 4-bit ‘IEEE’ floats, it wouldn’t be very surprising to see an anomaly like this. Would it be for 5-bit ones? 6-bit? What’s a large enough number of bits?”
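To get a feel for how coarse tiny formats are, here's a deliberately simplified model (my own; it ignores the real IEEE exponent/mantissa layout): treat a “p-bit” format as the grid of multiples of 2^-p in [-1, 1] and count the distinct values.

```python
# Toy model: a "p-bit" format is the grid { k / 2**p : k integer } in [-1, 1].
# Not actual IEEE encoding -- just a way to see how coarse small formats are
# compared with the ~2**53 grid that doubles give you in this range.
def grid_size(p: int) -> int:
    return 2 * 2**p + 1   # k ranges over -2**p .. 2**p inclusive

for p in (4, 5, 6, 53):
    print(f"{p:2d} bits -> {grid_size(p)} representable values in [-1, 1]")
```

With only a few dozen sample points, an artifact of the grid masquerading as a property of the polynomial wouldn't be shocking; at 2^53-scale resolution it would be much harder to arrange by accident.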
I also said “For this problem, I don’t see how that would give different results from picking any number in [-1, +1]” because I think 64 bits is (more than) large enough; but still, I think you’d need an argument if you want to draw a mathematically sound conclusion (“this is worth looking into”) from such a simulation.
True, the function could in principle have a structure such that sampling it at points of the form m/2^n gives a misleading impression of its behavior outside that set. I can't think of an actual function that behaves like that, though (IANAM).
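For what it's worth, one textbook-style construction does exist (my addition, and it's contrived — nothing like it arises from a polynomial with random coefficients): a function that vanishes on an entire dyadic grid while oscillating between the grid points.

```latex
% f is zero at every grid point x = m/2^{53}, yet swings to \pm 1 in between:
f(x) = \sin\!\left(2^{53}\, \pi\, x\right)
% Indeed f(m/2^{53}) = \sin(m\pi) = 0 for every integer m, so sampling f
% only on that grid would falsely suggest f \equiv 0.
```

A polynomial of modest degree can't oscillate that fast, which is presumably why no such pathology shows up in practice here.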