The danger is throwing something into production without understanding bias and variance, overfitting, or other important concepts, with potentially disastrous results.
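To make that concrete, here is a minimal sketch (plain NumPy, with made-up data I chose for illustration) of what overfitting looks like: a very high-degree polynomial nails the training data but falls apart on held-out data, while a simpler model generalizes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples from a simple underlying relationship: y = sin(2*pi*x) + noise
x_train = np.linspace(0, 1, 20)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, x_train.size)
x_test = np.linspace(0, 1, 200)
y_test = np.sin(2 * np.pi * x_test) + rng.normal(0, 0.2, x_test.size)

# Fit polynomials of increasing complexity and compare train vs. test error
for degree in (1, 3, 12):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```

The degree-1 fit underfits (high bias: bad on both sets), and the degree-12 fit overfits (high variance: near-zero training error, much worse test error). If you only ever looked at the training numbers, the worst model would look like the best one, which is exactly the trap.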
One cannot do ML without some basic theoretical knowledge of Statistics and Probability. That knowledge gives you the What and the Why behind everything. GIGO (garbage in, garbage out) is more true of ML than of other disciplines. The techniques are so opaque that if you don't know what you are doing, you can never trust the results.
One thing that made the Uber fatality possible was their over-confidence in their AI, which they apparently did not fully understand. They considered the car's built-in emergency collision braking system unnecessary and disabled it ...