Their conclusion is that in big O notation, quantum annealing and the best classical algorithms are the same.
Parallelization would get you a factor of 1/k (k being the number of cores) in favor of the classical algorithm at best, while specialized hardware would give you a constant-factor boost without affecting the big O characteristics.
The problem is not "which is faster given a fixed n". Quantum computers are interesting for certain problems when n becomes larger and larger.
The real issue is whether there is an exponential difference between quantum annealing and classical algorithms, in big O terms.
Remember, that is the reason people are so interested in quantum computation: an asymptotic speedup, not some constant speedup.
O(f(n)) vs O(log(f(n))) and O(f(n)/k) vs O(log(f(n))) are essentially the same comparison as far as solving NP problems goes: the constant factor k does not change which side scales exponentially.
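To make the constant-factor point concrete, here is a minimal sketch with purely illustrative cost models (the functions and the 2^n / n forms are assumptions, not real benchmarks): a million-fold parallel speedup of an O(2^n) algorithm only shifts the tractable problem size by about log2(10^6) ≈ 20, whereas an exponentially faster algorithm changes the scaling itself.

```python
import math

def classical_steps(n, k=1):
    # Hypothetical O(2^n) classical cost, sped up by k parallel cores.
    return 2 ** n / k

def quantum_steps(n):
    # Hypothetical exponentially faster cost model, O(n) -- illustrative only.
    return n

# Dividing 2^n by k is just a shift in n: 2^n / k == 2^(n - log2(k)).
# So a million cores buys only ~20 extra units of problem size.
shift = math.log2(1_000_000)  # ~19.93
```

This is why the k in O(f(n)/k) is irrelevant to the big O comparison: it cancels out as soon as n grows by a fixed amount, while an exponential separation keeps widening for every larger n.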
> Parallelization would get you a factor of 1/k (k being the number of cores) in favor of the classical algorithm at best.
Exactly. The question is what the actual real-life speedup would be. If it is not easily parallelizable, then it becomes a much more interesting finding.
I am not familiar with these algorithms, but the name QMC would suggest that this is an embarrassingly parallel problem, so my interest might just stem from my ignorance.
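For what "embarrassingly parallel" means here, a minimal sketch (plain Monte Carlo estimation of π, not actual QMC): each worker draws its own independent samples, and the partial counts are simply summed at the end, with no communication between workers while they run.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def count_hits(n_samples, seed):
    # Each chunk has its own RNG and needs no data from any other chunk.
    rng = random.Random(seed)
    return sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
               for _ in range(n_samples))

def parallel_pi(total_samples, workers=4):
    # Split the samples across workers; combining results is a plain sum.
    per_worker = total_samples // workers
    with ThreadPoolExecutor(workers) as pool:
        hits = sum(pool.map(count_hits, [per_worker] * workers, range(workers)))
    return 4 * hits / (per_worker * workers)
```

Because the samples are independent, splitting the work across k machines gives close to the full 1/k wall-clock factor, which is why a Monte Carlo-style simulation is a plausible candidate for easy parallelization.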
> The real issue is whether there is an exponential difference between quantum annealing and classical algorithms
Yes, I know and that was not the focus of my comment. Sorry.
If you give the classical algorithm k cores, then to be fair you should give the quantum side k D-Waves. If you are only making a scaling argument, then there is no need to include terms that cancel.
If you want to make a comparison between currently available hardware, then the metric should be some combination of speed, power usage, space, cost, etc.