"...a skilled hacker will always get in..."
In the common business model, where we rely on technology for protection, maybe. Probably, even. But we can do better. We HAVE to do better.
Our typical business security model is roughly equivalent to putting your front door on the side of the building and painting it purple, because no one will ever expect it to be there or to look like that. And stunningly enough, the average cyber thief is completely stumped by this, as they aren't overly clever (and are REALLY proud of themselves when they recognize the door on the side of the building, even though it is purple). The problem is...stopping 99.99% of the cyber-thief-wannabes is not enough when millions of attempts are being made...or when one person wants your data really badly.
My other analogy is:
You run a business with a fleet of vehicles driven by your employees. A few of your employees are responsible for an unusual number of "events" with those vehicles. Do you:
1) Fire the employees?
2) Reassign them to non-driving jobs?
3) Train them to drive better?
4) Put bigger bumpers on the vehicles?
In the IT world, we just put bigger bumpers on the vehicles, the one thing that most people would consider the only WRONG answer.
I hate the statement "You can't achieve perfect security" -- while it may be true, it is almost always used as an excuse to not even try. Just because you may SOMEDAY make a mistake behind the wheel of a vehicle isn't an excuse not to try your best to drive safely, nor is it a vindication for those who perpetually put themselves and others at risk.
Technology cannot counter stupid people and bad designs. You cannot take a horribly insecure application and rely on technology to make it "safe". You cannot antivirus/firewall/technology your way to security.
And yet, that's what we do. We implement bad designs, let untrained people have access to things they shouldn't, and managers threaten to terminate and replace any IT person who has the guts to say, "That's a bad idea from a security standpoint."
Realistically, security is almost never the first priority. In fact, it is usually close to dead-last, behind convenience, cost, something to stuff on my resume, and coolness.
I used to work for a large company which had a rigorous set of criteria for smartphones connected to the company network. At the time I started, only the Blackberry came close to meeting the requirements (central manageability, remote wipe, and full encryption, among others). Then we heard that the CIO personally owned five iProducts. Those of us at the grunt level knew what was coming, and sure enough, it did: iProducts were to be permitted onto the company network, even though they didn't (yet) meet most of the security /requirements/ for attachment. Our job was to figure out how to make the new iProducts as unbad as we could make them, not to say, "we've got bigger problems we need to solve first before you give us new problems."
We can do much better than we have. Step one will probably be liability for the people who allow data out. Not "We followed all these compliance steps so it isn't our fault" -- doesn't matter, YOU collected the data, you retained the data, you lost the data, IT IS YOUR RESPONSIBILITY. Simple.
Yes, I'm saying Schneier is wrong on this, and that puts me on the wrong side of a lot of people. But I feel he is. Can we make something 100% "secure"? Probably not. But we always need to try. And we can't take the half-a**ed attempts we've been making at something pathetically called "security" and say, "See? It doesn't work!"
We can't keep using the same insecure apps, no matter how "common". We can't keep using bad designs. We can't keep letting untrained, ignorant people play with dangerous tools like computers, and we can't keep taking a "Security Last" approach to design.