This could be evidence that the oft-repeated trope that "open source" software is more secure because there are more "eyes" looking at it is false, or at least misleading.
If everyone using "open source" believes someone else is looking at the code for security problems, who is actually doing it? (My conjecture: not enough of the "good" people. Sad face.)
How many lines of code can a human security auditor actually process in a day? 2,000? 3,000? At what point does system complexity, or interaction with other components, cause the audit to break down? What is the rate of change of the code base?
How many capable human security auditors exist in the world? How many of them can you realistically hire? (A rough sketch of the scale follows below.)
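To make the scale concrete, here's a back-of-envelope sketch in Python. Every number in it is an assumption I've picked for illustration (codebase size, auditor throughput, churn), not a measurement:

    # Back-of-envelope: can human auditors keep up with a codebase?
    # All numbers are illustrative assumptions, not measured data.

    codebase_loc = 17_000_000       # assumed size of a large project
    loc_per_auditor_day = 2_500     # assumed throughput (the 2,000-3,000 guess above)
    churn_loc_per_year = 3_000_000  # assumed lines added/changed per year
    workdays_per_year = 220

    loc_per_auditor_year = loc_per_auditor_day * workdays_per_year

    # Auditor-years just to read the existing code once:
    initial_pass_years = codebase_loc / loc_per_auditor_year

    # Auditors needed full-time just to keep up with the churn:
    auditors_for_churn = churn_loc_per_year / loc_per_auditor_year

    print(f"~{initial_pass_years:.0f} auditor-years for a single pass")
    print(f"~{auditors_for_churn:.1f} full-time auditors just to track yearly churn")

Under those assumptions you get roughly 31 auditor-years for one full pass, plus 5-6 full-timers doing nothing but chasing churn, and that's before the complexity and component-interaction effects above make a "single pass" meaningless.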
My personal opinion on how to get a human to identify these problems: they have to be able to build very accurate mental models of how complex software behaves. My experience tells me this is a very difficult skill to teach to adults. My feeling is that it is tied to how an individual perceives, models, and transitions between real and abstract things.
Using computers to "solve" this problem seems like the answer, but doing so requires humans, and human interactions across entirely disjoint skill sets, to make it happen. I don't think the current way "security problems" are handled is going to suddenly make this happen. Opinion: "open source" software vendors will find it difficult to get any traction on this problem.
Hiring problem: if human security auditors find a problem and prevent it from ever becoming known, how do you quantifiably measure their effectiveness and justify the cost of employing them?
Fictional-Exec: "We've not had a security problem in two years, why are we paying XXXX for all these security people? What are they doing?"
TL;DR (at the end :-): This is not surprising, and other surprises will continue to happen. This is why the "security industry" has lots of players peddling some quite awful products. It is easy to backwards-sell a product if it has been "taught" a specific problem.
Even if the long-term result of this cycle is better software through embarrassing disclosure, it really does leave a negative impression on "why should this be made open source?".
The act of making something open source really can result in pain without any measurable gain. Measured that way, the equation doesn't come out in open source's favor.
I'll take "possible for someone to look it over and discover the flaw, but nobody does" over "impossible for a random person to look at it, but hopefully we can trust this company to do so" any day.