Signal to noise
The primary characteristic of big data is that, for any particular problem you want to apply it to, the proportion of interesting information compared to the amount of garbage you have to sift through is very, very small. Old-fashioned databases, designed when storage was expensive, were built to answer a small number of very well-defined questions: where does account number 8892784237 live? How many months in arrears are they? What was their latest purchase?
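To make the contrast concrete, here's a minimal sketch of the kind of narrow lookup those systems were built around - the table, columns and values are invented purely for illustration:

    import sqlite3

    # Hypothetical accounts table; schema and data are made up for this example.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (account_no TEXT, address TEXT, "
                 "months_in_arrears INTEGER, latest_purchase TEXT)")
    conn.execute("INSERT INTO accounts VALUES (?, ?, ?, ?)",
                 ("8892784237", "12 Example Street", 3, "kettle"))

    # One precise question, one precise answer - the old model in a nutshell.
    row = conn.execute("SELECT address, months_in_arrears, latest_purchase "
                       "FROM accounts WHERE account_no = ?",
                       ("8892784237",)).fetchone()
    print(row)  # ('12 Example Street', 3, 'kettle')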
While we're getting better at storing big data, which just increases the volume of stuff we collect, we're still in our infancy when it comes to extracting relevant, accurate and causally-linked intelligence from it (just ask the NSA or GCHQ). Sure, you can use it to foretell failures, à la HAL in 2001 - so long as your computer doesn't have an agenda of its own - but the number of false positives is very high. There is also the danger, with big data, of treating everyone as if they were Mr. or Mrs. Average, and of not designing enough flexibility into the reporting to allow for the possibility that some people are genuinely different, or deliberately give odd answers.
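Some rough back-of-the-envelope arithmetic shows why: when the thing you're looking for is rare, even an accurate detector buries the genuine hits under false alarms. The figures below are invented assumptions, purely for illustration:

    # Toy base-rate arithmetic - every figure here is made up for illustration.
    population     = 10_000_000  # records scanned
    base_rate      = 1e-5        # 1 in 100,000 records is a genuine "signal"
    sensitivity    = 0.99        # fraction of genuine cases the detector flags
    false_pos_rate = 0.01        # fraction of innocent records wrongly flagged

    true_hits  = population * base_rate * sensitivity            # ~99
    false_hits = population * (1 - base_rate) * false_pos_rate   # ~100,000

    precision = true_hits / (true_hits + false_hits)
    print(f"flagged: {true_hits + false_hits:,.0f}, "
          f"genuine: {precision:.2%}")  # roughly 0.1% of flags are genuine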
So while big data probably has some advantages in industrial processes - it can feed failure modes back to manufacturers so faults get designed out of products - it also leads to an increasing homogenisation when applied to people. Until we can advance the data selection processes to match our data collection abilities, the wrong attributes will keep being applied to the wrong people, just because we accidentally triggered something while buying granny's incontinence pants on Amazon.