Sure, we found out about the faults in SSL (and Bash) eventually. In the case of Bash, some reckon Shellshock had been there for years.
So, as Six_Degrees says, the idea that simply because something is open source it will necessarily be scrutinised, and its flaws spotted, is itself flawed.
It would be interesting to know why this happens. Is that "many eyes" argument always false, or have things changed? For example, if the technology were still largely in the hands of CS grads and engineers, would more of them cast an eye over every bit of code they installed?
Is it the increasing democratisation of technology - driven in part, of course, by free software - that means there are people who simply take the packages and install them, because they trust that they will work and don't have the skills to do a technical review themselves?
Or is it the case that, even if we were all programmers, many of us would still happily install all that stuff? We have deadlines to meet, only so many hours in the day, and all those other pressures that mean, yes, we could perhaps look through the source code and build everything from scratch - but surely someone else has already done it?
The idea of scrutiny is a good one, on the face of it. But I'm tending to think that, while noble, it's something that might have worked in the academic world of the 90s, where people had the skills and the time to do it. The pressures of commerce and the 21st century make that much, much harder.
Commercial software has vulnerabilities disclosed too, so I'm not convinced that transparency in itself is really the issue.
Bugs are found when people look for them.