Having talked to many of the startups (stealth and non), I'd say they know a hell of a lot more than the basics. But that's a personal opinion from having talked to them. I recommend bringing your own experts to talk to the startup of your choice.
Want security? Next-gen startups show how old practices don't cut it
In case you hadn't noticed, IT security sucks. There is a chronic lack of people trained in IT security, people who will listen to IT security, and even a lack of agreement on how best to go about IT security. Fortunately, a new generation of startups are helping to tackle the issues. No matter how good a sysadmin you think …
COMMENTS
-
This post has been deleted by its author
-
-
Thursday 3rd September 2015 22:22 GMT Anonymous Coward
Re: They would be better off taking notice of the greybeards
It doesn't mean they are all that hot either. The older you are and the better your memory is, the more you can see what helps and what doesn't help. Speaking as an old greybeard, I am skeptical that there is one wham-doozle product that does it all. Sure, shifting from Windows to Linux can make you safer as long as you run them properly. If that secretary doesn't know how to do a chmod on that binary malware then it cannot run: it is saved on the system with the execute bit turned off. Linux has file systems with permission modes set on everything. That is called discretionary access control. But you are a darn fool if you depend solely on that to protect you. It is a good starting point, but security comes from one layer after another, like an onion. The more security layers you have, the more secure you are. Since when did that security maxim get lost from the lexicon?
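The execute-bit point can be sketched in a few lines of Python on a POSIX system. This is a toy illustration of the mechanism, not a security boundary in itself:

```python
import os
import tempfile

# A "binary" arrives on disk the way a mail attachment or download would:
# saved as plain data, with no execute bit set anywhere.
fd, path = tempfile.mkstemp()
os.write(fd, b"#!/bin/sh\necho this could be malware\n")
os.close(fd)

os.chmod(path, 0o644)                       # rw-r--r-- : no execute permission
runnable_before = os.access(path, os.X_OK)  # False: the kernel refuses to exec it

# Only a deliberate chmod (the step the secretary doesn't know how to do)
# makes it runnable.
os.chmod(path, 0o744)                       # rwxr--r--
runnable_after = os.access(path, os.X_OK)   # True

print(runnable_before, runnable_after)      # False True
os.unlink(path)
```

That said, plenty of attacks run through an interpreter (`sh script.sh`, office macros, browser exploits) and never need the execute bit at all, which is exactly why this is only one layer of the onion.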
-
-
Saturday 22nd August 2015 15:00 GMT Martin Maisey
Not fully convinced
Completely agree on the prevalence of eggshell security, I'm just not sure automated response is viable a lot of the time. Rapidly and automatically fusing lots of correlated sources (including honeypots etc.) with a recommended action for a human to eyeball - absolutely.
The reason is that if attackers can rely on an automated action being taken, they can manipulate this for their benefit. A standard example involves traditional burglar alarms that silently notify police: turn up on three consecutive days and do something that triggers the alarm without breaking in. The police will turn up once, twice at most, then decide they're dealing with a faulty alarm. On the fourth day, turn up and break in.
-
Saturday 22nd August 2015 15:34 GMT Trevor_Pott
Re: Not fully convinced
Humans are slow. Too slow.
By all means, have the decisions of the automated software reviewed by a human after they have taken effect, but do not sit around and wait for some human to wake up, have a shower, have some coffee, get to work, shoot the shit, look at the problem and make a decision. Even if you have a 24/7 staffed security NOC, humans are still too slow. Security compromises can spread faster than humans can react, and the number of malicious actors working to increase the speed of compromise spread is far - far - greater than the number of analysts. Or their reaction times.
If you want to put a hold on notifying police or customers, maybe that's a reasonable business decision. But isolation of compromised systems and services needs to be automatic. Gathering forensic information needs to be automatic. Getting that information wrapped up into a bundle, so that if the analyst pulls the trigger on sending it to the cops it is all ready to go, needs to be automatic.
Humans are just too damned slow. The only time they really should be involved here is when making decisions about how to interface with other humans, and in doing post-event analysis to ensure that the automatics didn't isolate a system/service as a false positive.
But you'll never convince me that we should simply wait around for a human being to decide if a compromise is valid before locking down threats in our datacenters. The threat landscape has just evolved beyond what humans can handle, even in 24/7 real time.
-
Saturday 22nd August 2015 17:30 GMT Martin Maisey
Re: Not fully convinced
Thanks for the answer, and for the Twitter conversations. I'm definitely going to take a look at the tech in more detail. The interesting thing for me is the rules that determine isolation, how robust these are in terms of false positives, and the interplay between confidentiality / integrity concerns and availability - the latter being, unfortunately, probably all your manager's manager cares about in the average enterprise (separate issue, that, and unfortunately one technology won't solve). I can't help but think that after a couple of false alarms, the response will be "the CEO's said turn the bloody thing off, or it will kill our business". Particularly if there's any way those events might be triggered as a DoS by - for example - disgruntled insiders etc.
By the way, I have for some time thought that honeypots are a much-undervalued component of most security solutions, as they reverse the attacker's advantage and generate really, really high-quality red flags for action. John Strand and Paul Asadoorian's book 'Offensive Countermeasures: The Art of Active Defense' is a nice read on this topic. Any security system that integrates them gets a gold star in my book.
But to continue playing devil's advocate for a moment, are most data-stealing attacks really so fast that they require fully automated isolation? Even in an eggshell datacentre, most require a human to do external recon, break an endpoint, work out what security is actually in place, establish C2, break internal systems to research the DC/app environment, run some scans, move laterally to obtain credentials, actually break the systems holding the sensitive data, then potentially stage the data to systems from which it can actually be exfiltrated. At each stage, they have to be really careful, as getting discovered may mean complete failure. I don't have hard numbers to hand, but I suspect most attacks evolve over days or weeks. Most internal security teams are horribly underfunded/undertooled, but could do a decent job if that were fixed.
On the other hand, DoS type attacks can happen quite quickly, as often the attacker doesn't care about leaving lots of tracks etc., particularly if they're out of reach of the long arm of the law/not thinking straight; they just want to damage the organisation.
-
Saturday 22nd August 2015 17:51 GMT a_yank_lurker
Re: Not fully convinced
I agree the process should be automated as much as possible. However, the article highlights that security is often an eggshell with nothing behind the shell. Breach the shell, or start already behind it (as an insider), and you can do a tremendous amount of damage.
Security best practices include a layered defense with strict limits on user permissions including admins, user training, and white-hat attacks. Layered defense assumes the outer defenses will be breached and there are more defenses set up behind the crust. Standard military defense doctrine is "defense in depth". Users need training to identify phishing attacks - in person, phone, fax, and email - and how to respond. Also, they need training about basic physical and electronic security - do not assume they know. Irregular, unannounced white-hat attacks will help identify weaknesses to be fixed.
-
Sunday 23rd August 2015 04:05 GMT Charles 9
Re: Not fully convinced
But unlike a military, a business needs to be able to, well, do business. At some point, the return on security diminishes because you stifle the business flow. That's why there's a sliding scale of security versus ease of use. Improving one necessarily stifles the other the way a locked door delays you getting into your own house.
Plus one needs to realize that no security measure can be fully effective, even in a practical sense, since there's always the threat of the trusted insider turned traitor. I mean, insiders defeated the Great Wall of China.
-
-
Tuesday 25th August 2015 21:58 GMT Anonymous Coward
Re: Not fully convinced
"Insiders defeating the Great Wall of China is literally an example of why you need defence in depth."
No, insiders are precisely the reason defense in depth can't work. Due to business hierarchy, someone (emphasis on ONE) has to be the ultimate arbiter of security, the one who gets the credit and the blame, the one who ultimately checks everything else in the system. Guess who your insider is likely to be.
It's like the UNIX world, where there is ultimately one user (root) who has access to everything: the user of last resort when something gets locked away and no one else can reach it. Lacking an ultimate authority may solve some problems but presents others.
-
Thursday 3rd September 2015 22:39 GMT Anonymous Coward
Re: Not fully convinced
root does not have access to everything on Unix/Linux. I had a boss who asked why he wasn't able to see a certain section of the file system that all the software engineers could see. A simple df -m showed it all: he was trying to look at an NFS mount from another system, and only the software engineering group could see that file system. Yes, you can add root as a member of the software engineering group and then you can see it.
SELinux or similar MAC (Mandatory Access Control) systems similarly limit what root can do. You have to transition to the right user or role before you can do some things, and there are some things that you just cannot do as root. You need to be both the right user and in the right group to work with the web server file system, for example. Unix has grown up and adopted some of the same strategies as its IBM mainframe brethren.
If you are concerned about the inside people that just means you need to do a more thorough background investigation first.
-
Friday 4th September 2015 09:54 GMT Charles 9
Re: Not fully convinced
But doesn't that create the risk of a lockout situation where no one can access it because, say, the owner doesn't exist anymore and there's no user of last resort?
"If you are concerned about the inside people that just means you need to do a more thorough background investigation first."
But then the infiltrators just do a better job of hiding their tracks. Trouble is, the infiltrator has the aggressor's advantage in the siege game.
-
-
-
-
-
-
Sunday 23rd August 2015 09:29 GMT Disko
Also
...tasking humans with guarding high-speed equipment that pushes more information and transactions per second than those same humans could process in a month is arguably silly, and it's not like monitoring stuff and doing incident-response dances is the most rewarding job. And why WOULDN'T it be better to have network security as automated as possible? Isn't automation the whole idea of IT? And then: any disruption, partial downtime, cost or nuisance from security systems kicking in on false positives, no matter how inconvenient, is imo far outweighed by the real risks of infection and compromise.
-
-
-
-
Saturday 22nd August 2015 15:35 GMT Trevor_Pott
If the attacker is able to compromise your datacenter enough to trigger an isolation event, isn't that exactly the sort of reason you should be locking things down? Better that you take the services offline than that you allow the compromise to spread or that you allow personally identifiable information to be extracted from the DC.
-
-
-
-
Saturday 22nd August 2015 16:08 GMT Trevor_Pott
Additional thoughts
It's worth noting that it really boils down to your take on "privacy before profits."
As I see it, lockdown events are acceptable life lessons. Data exfiltration events are not. If you properly invest in mitigation, lockdown events should be relatively rare. If you don't invest in automated incident response, data exfiltration is inevitable.
Yes, there are life-critical systems in the world. These should be designed so that data exfiltration is impossible because they simply don't have access to compromisable data. They should also have redundancies such that, if a compromise is detected, services are flipped over to a backup system designed by a completely separate provider so that the same vulnerabilities cannot be exploited.
In every other scenario, a lockdown event is absolutely preferable to a data exfiltration event. At least, as long as you believe in the principle of privacy before profits.
If you don't, then none of this really means anything and you're entirely likely to accept a perpetually compromised datacenter. You'll treat data exfiltration events that become generally known as life lessons and otherwise not care.
If the previous sentence describes you or your employer, you are part of the problem, and I really do hope you aren't in business all that long. In fact, I hope that legislation is enacted in your jurisdiction that drives you out of business for having that attitude.
A service lockdown is an acceptable life lesson. Data exfiltration is not. That's really all there is to it.
-
Saturday 22nd August 2015 22:41 GMT a_yank_lurker
Re: Additional thoughts
"a backup system designed by a completely separate provider so that the same vulnerabilities cannot be exploited" - How many offices are MS (or more rarely Mac) mono-cultures? In ecology species diversity is one sign of a healthy ecosystem. If a company used a variety of OSes in all areas including Windows, various Linux distros from different families (Debian, Slackware, Ubuntu, Arch, Redhat, SUSE, etc.), Macs, etc. Attackers would be slowed down if not sometimes stopped because the each OS has different vulnerabilities and quirks. The only common vulnerabilities would be applications installed on all devices such as web browser.
-
Sunday 23rd August 2015 05:40 GMT Charles 9
Re: Additional thoughts
Over a whole ecosystem, yes diversity is a plus. But within a clan (that is, within one group of a single species), diversity has to play second fiddle to compatibility (as in, the males and females need to be able to breed). Same in the office: diversity in software has to take second place to network communication; otherwise, things can't get done.
-
-
-
Saturday 22nd August 2015 16:37 GMT K
Things will head this way, it's inevitable...
I've spent the past 8 months overhauling security at an SME, taking it from a wide-open network to a whitelist-based security policy, completely overhauling desktop- and server-based security. The next step is implementation of application whitelisting - and I still have sleepless nights!
Trevor is right: people are too slow to recognise problems. Additionally, most SMEs simply don't have the tools or expertise to correlate all the events happening on their networks. The logical step is for companies to deploy SIEM, but there are very few SMEs that have the know-how and staff numbers to manage this effectively.
So any system that can begin automating the process, even if it means servers dropping offline and it turns out to be a false positive... that's still better than getting breached!
I just hope they make it affordable for SMEs.
-
Saturday 22nd August 2015 17:18 GMT Anonymous Coward
People "trained in IT security" are a lot of the problem
They focus on best practices that date from the 90s, or have been thoroughly discredited. My pet peeve is overly aggressive password change policies of 90, and sometimes as few as 30, days - which, when combined with ever more complex password standards (necessary, of course, as the computational power available for cracking increases), lead to a lot of average people writing down their passwords.
There's an over-reliance on firewalls for protection, ignoring the fact that more and more exploits are caused by people on the inside unknowingly bringing in the nasties via their web browsers (by visiting a site that uses Flash, for instance) - traffic which isn't subject to that stout firewall config that Security spends so much time agonizing over a minor change in.
They spend a lot of time doing things that are visible but don't really help much, so they can be seen as doing "something". But they're afraid to step on toes to effect changes in policy that would truly make a difference, like banning the use of USB sticks - which can not only be a vector for infection, but also provide an easy conduit for IP theft on a massive scale, as well as, all too often, data loss to the outside through carelessness or negligence.
If the training didn't turn out such homogeneous, like-minded people, who all do the same things and all leave the same exposures for miscreants to exploit, IT security would be in a better state.
-
This post has been deleted by its author
-
Saturday 22nd August 2015 19:48 GMT K
Re: People "trained in IT security" are a lot of the problem
I'd agree on the password changes up to a point, but forced changes do help: if your password has been compromised, then while damage has been done and data stolen, a forced password change will at least prevent further damage.
Also, security staff are not the problem - it's the corporate attitude that security is the responsibility of Person X or Department Y. Security is everybody's responsibility, especially the users'. After all, it's rarely the Security Admins who open the email saying "Invoice Attached".
Unfortunately, most companies provide about 30-60 minutes of "training" when a new employee starts, at a time when they are being inundated with "more important" information about the job they will be doing, so it goes in one ear and out of the other. Additionally, this training tends to be an HR check-box exercise which dictates to users: Don't Do This, Don't Do That. It's just a waste of time!
Companies need to invest properly in training staff about security, both for the office and at home - explaining why it's needed, how it impacts them personally, and then how they should approach it. Rinse and repeat every 6 months.
Actually, what we need is a national campaign to make bad security judgement socially unacceptable..
-
Saturday 22nd August 2015 21:38 GMT Anonymous Coward
Re: People "trained in IT security" are a lot of the problem
"I'd agree on the password changes, but frequent password changes do help mitigate damage - if your password has been hacked, whilst damage has been done and data stolen, a forced password change will mitigate further damage."
Actually, the main benefit is stopping your users re-using their "usual" password - the one on Facebook, Twatter and the dodgy shopping website where they get their pet meds.
Within 30/60/90 days an infiltrator will have managed to syphon everything out, even over a bumpkin broadband link.
Another handy tip: create a new group and don't give it any permissions - call it something like "DUFF-GROUP". Now, when you create new Service Accounts (you do use them, don't you???), make each a member of DUFF-GROUP only. Then add permissions directly to the account as needed for it to do its job, creating new groups if necessary for shared stuff, but don't re-use existing default groups such as Domain Users. Unix packages generally do this by default. One simple reason for this approach is that Domain Users, for example, are generally allowed access to RDS/Terminal Servers. Also remember that the username needs guessing as well as the password. I've recently diagnosed a nasty case of u=mail p=mail for a customer ...
-
Saturday 22nd August 2015 22:49 GMT K
Re: People "trained in IT security" are a lot of the problem
@gerdasj, agreed on everything you say, but it's still relevant for internal accounts. For example:
1) A company has web-facing services such as OWA or other business systems.
2) An employee leaves the company and there is a risk he/she is aware of other employees' credentials.
-
Sunday 23rd August 2015 20:33 GMT P. Lee
Re: People "trained in IT security" are a lot of the problem
>Within 30/60/90 days an infiltrator will have managed to syphon everything out, even over a bumpkin broadband link.
You'd be surprised how many people put in firewalls and then allow a fair amount of outbound traffic from their DMZ hosts: SMTP from their web hosts to their hardened but non-monitored mail servers, and DNS straight out to the internet, for example - both quite handy for exfiltration of data.
Another problem is encryption everywhere. It's quite hard to get your NIDS to do decryption and security checking fast enough to be usable. Actually, it isn't so much that it's really hard technically; the problem is that all the security vendor appliances are pitched at high-end customers, with prices to match, and are performance-limited to ensure upgrades every few years when they go "end of life".
-
-
Sunday 23rd August 2015 04:53 GMT Anonymous Coward
Re: People "trained in IT security" are a lot of the problem
"Also security staff are not the problem - Its corporate attitude that Security is the responsibility of Person X or Department Y. Security is everybodies responsibility, especially the users. After all. its rarely the Security Admins who opens the email saying "Invoice Attached"."
But security interferes with business. That makes it onerous right there, right up there with regulations. Since interference affects returns and the bottom line, there's an ingrained business culture pushing back against it.
-
Sunday 23rd August 2015 15:00 GMT Alan Brown
Re: People "trained in IT security" are a lot of the problem
> After all. its rarely the Security Admins who opens the email saying "Invoice Attached".
No, but that person can and will file an official complaint about being made to feel bad when security lectures her about having caused 3 man-days of fixup time.
Seriously, we have users like that. They will _override_ the anti-malware warnings and open attachments anyway, and if we lock the systems so they can't do it, they go to $BOSS and complain shrilly until he tells us to stop doing that.
-
Sunday 23rd August 2015 15:25 GMT Anonymous Coward
Re: People "trained in IT security" are a lot of the problem
"Seriously, We have users like that. They will _override_ the anti-malware warnings and open attachments anyway and if we lock the systems so they can't do it, they go to $BOSS and complain shrilly until he tells us to stop doing that."
So, unless this person is over IT's head, what's to stop the IT guy from going over the boss's head, say to the accountants or even to the board, and spell out the cost of the squeaky wheel in dollars, cents, hours, and minutes, and basically note that doing otherwise could mean explaining themselves to the investors?
-
Monday 24th August 2015 09:47 GMT rh587
Re: People "trained in IT security" are a lot of the problem
Make it a game, offer a carrot in addition to the stick - people who correctly identify White-Hat phishing attacks get a bottle of wine at the Christmas party, or gift vouchers or something.
Of course that requires them to have had at least a day (and not their first orientation day when it's in one ear and out the other) of training on identifying such attacks and secure/insecure practices.
There is the stick side, which is remedial training for people who aren't getting it, but there is also a sales element to this: sell these as useful skills which will keep the company safe, but also protect their personal details at home, help them mitigate phishing and browser-based attacks during home surfing, etc.
i.e. incentivise them to give a shit!
-
-
Sunday 23rd August 2015 16:03 GMT Anonymous Coward
Frequent password changes
It comes down to where you think you have the biggest exposure: the extra 270 days a stolen password stays valid if you force yearly password changes instead of four times a year, or people keeping stickies on their monitor / hidden in a desk drawer / in their wallet / in a note on their phone containing their password, until they finally have typed it enough times to remember it and no longer need the note.
As stated, if someone gets an employee's password and uses it to access the network, they've probably done most of their damage by the time the employee changes the password. Once inside the network, they can easily set up a way to get in from the outside (i.e. an outgoing SSH session with a tunnel) so they can get back in after the password has changed.
I'd much rather see people with a really high-quality password they NEVER change than a password that is changed regularly but written down in plaintext somewhere (well, maybe not, but I would if I could somehow ensure that the high-quality password was only used in that one place...). Suggest that to a security "pro" and they'd have a cow, because "best practices" - most couldn't even say why it became a best practice in the first place or defend its validity in 2015; they just know to quote their bible.
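The never-change-it trade-off is easy to put rough numbers on. A back-of-envelope sketch - the 94-character pool and the 7776-word Diceware-style list are illustrative assumptions, not anyone's actual policy:

```python
import math

def entropy_bits(pool_size: int, length: int) -> float:
    # Idealised entropy: `length` symbols drawn uniformly and independently
    # from the pool. Real human-chosen passwords fall well below this bound.
    return length * math.log2(pool_size)

# An 8-character "complex" password over upper/lower/digits/symbols (~94 glyphs)
short_complex = entropy_bits(94, 8)

# A five-word passphrase drawn from a 7776-word Diceware-style list
passphrase = entropy_bits(7776, 5)

print(f"8-char complex password: ~{short_complex:.0f} bits")   # ~52 bits
print(f"5-word passphrase:       ~{passphrase:.0f} bits")      # ~65 bits
```

Every extra bit doubles the cracking work, so on these assumptions the passphrase costs an attacker roughly 2^13 (about 8,000) times as much effort - and it's the one users can actually remember without a sticky note.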
-
Monday 24th August 2015 14:07 GMT Anonymous Coward
Re: Frequent password changes
I happen to be a "security pro" and I fully agree that strong passwords (read: passphrases) are more desirable than frequently changed ones. And I happen to have many friends in the industry who share the same opinion.
If your in-house "security pro" thinks otherwise, then you have a dud.
-
-
Monday 24th August 2015 08:37 GMT DanielN
Re: People "trained in IT security" are a lot of the problem
"Rinse and repeat it every 6 months."
Repeat in 6 hours. It improves learning. Studying the first time lines up a bunch of protein changes in the brain. The second time wires those changed synapses nice and strong.
For most things you can just let the training fade and let people learn from mistakes. How much does it really matter if Liz reloads the toner wrong a few times? But for security, optimized learning is worth it. If Liz uses a USB stick wrong, it can cost millions.
-
Thursday 3rd September 2015 22:51 GMT Anonymous Coward
Re: People "trained in IT security" are a lot of the problem
You are not thinking of the human being in that password change. The more often they have to change their password, the more likely it is that they will pick something easy. Which is better: a password hastily selected every three months that a strength checker says can be broken in several hundred hours, or a password set for - gulp - an entire year that a strength checker says would take a Cray working on the problem 33 years to crack? BTW, the password used to log in here traversed the Internet under a comparatively low-strength HTTPS cipher.
-
-
Sunday 23rd August 2015 08:06 GMT Robert Helpmann??
Re: People "trained in IT security" are a lot of the problem
I would argue that not enough people being trained in security is a major problem. I don't mean security professionals. I mean every user in the company environment ought to have at least a basic amount of training as to how they are supposed to behave and why and that it should be an integral part of corporate IT culture. In fact, while Trevor might lump this in with his Prevention category, I would argue that it is important enough to rate its own entry. When I evaluate a corporate IT product, I look at what training the company selling the product offers. Why would information security products be different in that regard?
-
Monday 24th August 2015 03:36 GMT Charles 9
Re: People "trained in IT security" are a lot of the problem
"They spend a lot of time doing things that are visible, but don't really help much, so they can be seen as doing "something". But they're afraid to step on toes to effect changes in policy that will truly make a difference, like banning the use of USB sticks that can not only be a vector for infection they can provide an easy conduit for IP theft on a massive scale as well as too often data loss to the outside through carelessness or negligence."
The reason they're afraid to step on toes is they're afraid one of those toes is someone above them who goes, "Who hired this clown?" IT security doesn't do much if the top brass don't see the point, and part of IT's job is making those same top brass see the point.
-
-
Saturday 22nd August 2015 21:17 GMT Anonymous Coward
Honey pot
Honeypots are a great tool for anyone to use. Create a new VLAN and put HoneyDrive (https://bruteforce.gr/honeydrive) on it. Lock down egress on that VLAN, obviously. Use port spanning to watch what happens if you like.
Get the fake SSH daemon running, then watch the logs and get the nice reports. In a week you will have a huge list of usernames and passwords that you can immediately ban on your network, on pain of <whatever>. That's one use and one tool; there are loads more.
If your budget is a bit limited, or you want to take security a bit more seriously than throwing £s at someone to do it for you, then look into these: Security Onion, Snort/Suricata, HAProxy, Squid and friends, Logstash/Kibana/Elasticsearch, Kali and many more. Spend at least 3 months full time, or 12+ part time, implementing that lot along with your ISO 27001 8) ..... and a lifetime of tweaking it all and improving it. It's never a done job.
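For a feel of how little is involved, here is a toy sketch of the fake-SSH idea in Python. A real honeypot daemon like the one bundled with HoneyDrive goes much further, recording credentials and full sessions; this just presents a banner and logs what connects (the banner string and port choice are illustrative assumptions):

```python
import socket
import threading

def fake_ssh_listener(host="127.0.0.1", port=0):
    # Port 0 lets the OS pick a free port for this demo; a real honeypot
    # would sit on port 22 of the locked-down VLAN described above.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    return srv

def handle_one(srv, log):
    # Accept a single connection, present a plausible SSH banner,
    # and record whatever banner the scanner announces in return.
    conn, addr = srv.accept()
    with conn:
        conn.sendall(b"SSH-2.0-OpenSSH_6.7\r\n")
        log.append((addr[0], conn.recv(256).strip()))

# Demo: probe our own honeypot and inspect the log.
log = []
srv = fake_ssh_listener()
port = srv.getsockname()[1]
t = threading.Thread(target=handle_one, args=(srv, log))
t.start()

probe = socket.create_connection(("127.0.0.1", port))
banner = probe.recv(256)                # the honeypot's fake banner
probe.sendall(b"SSH-2.0-probe\r\n")     # what a scanner would announce
probe.close()
t.join()
srv.close()

print(banner)   # the fake OpenSSH banner
print(log)      # one entry: the probe's source address and client banner
```

Anything that talks to this listener has, by definition, no legitimate business doing so - which is what makes honeypot alerts such high-quality red flags.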
-
This post has been deleted by its author
-
-
Sunday 23rd August 2015 14:11 GMT Mario Becroft
Total lack of SQL security in web apps
One thing that gets my goat...
The number one attack vector for just about every public-facing web site is bugs in the cruddy PHP front-end code (trust me, I've had to maintain the stuff; yes, there are exceptions that prove the rule), which has superuser access to the entire database (often with a plaintext password in a configuration file) and is rife with SQL injection and other vulnerabilities.
You have an SQL database. Your database has highly advanced, well-tested and secure role-based, schema-level, table-level, column-level, and row-level access control for the precise purpose of making your app secure. Damn well use it.
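To make the injection half of this concrete, a minimal sketch using Python's bundled sqlite3 (SQLite has no roles, so the schema/table/row-level access control rightly demanded above is left to a real server such as PostgreSQL; the table and payload here are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

hostile = "x' OR '1'='1"   # classic payload typed into a web form

# Vulnerable pattern: string concatenation lets the attacker rewrite your SQL.
rows_bad = conn.execute(
    "SELECT * FROM users WHERE name = '" + hostile + "'").fetchall()

# Safe pattern: a bound parameter is always treated as data, never as SQL.
rows_good = conn.execute(
    "SELECT * FROM users WHERE name = ?", (hostile,)).fetchall()

print(rows_bad)    # [('alice', 0)] - the payload matched every row
print(rows_good)   # [] - nobody is literally named "x' OR '1'='1"
```

Parameterised queries, plus an application role that can only SELECT from the tables it genuinely needs, would close off most of the vector described above.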
-
Monday 24th August 2015 13:06 GMT 0laf
Re: Total lack of SQL security in web apps
I'd add to that the vacant, dead-eyed stare of the vendor trying to sell you the aforementioned web service when you ask what assurance of security they can provide for their product. Has it been pen-tested? How quickly do you respond to identified vulnerabilities? Do you even do input validation?
Invariably the answer is "I'll get back to you" or "but we're 27k accredited!" or "it's in a datacentre with fences". Then the squirming begins on both sides as the vendor tries to justify storing sensitive info in an insecure system (which they refuse to secure) and the department that has already bought the product (but didn't speak to the security guy (me) until installation was due) starts to shit it that they've spent £110k on an outsourced lemon with a good salesman.
Nonetheless, it'll all be my fault for asking awkward questions late in a project I didn't know about.
-
-
Sunday 23rd August 2015 15:09 GMT Alan Brown
WRT webservers
The best way is to treat them as "disposable": sandboxed, with read-only access to resources and _unable_ to make outbound connections (firewalls go both ways).
Total agreement on PHP. The only thing worse is Twiki code, but both are only minor compared to the simple issue that 99.9%+ of website coders have no idea what security is.
-
Monday 24th August 2015 10:18 GMT 0laf
Sounds wonderful
Sounds wonderful - sorry, can't afford it.
Plus I can't afford to completely re-jig the 15-year-old network along your eggshell principle, never mind all the ancient legacy applications that would never cope with a properly secured network.
I can't even get money to put in proper monitoring, which is probably what one of these wonderful toys would need.
And even if I could change the network and put in SIEM, I wouldn't be permitted the downtime to do it, or get a body to run it.
So I suppose I'll get some money for a new firewall every five years and we'll continue to cross our fingers. As long as the iPads flow, the heads are happy for now.
-
Monday 24th August 2015 15:35 GMT yokelizer
Cynicism?
I think what everyone appears to be saying (or so I hope, as it reflects my feelings) is that these applications seem great, but we have been sold snake oil before - products that will solve all our problems except, we later find out, for 80% of the real-world ones.
At the end of the day, defense in depth comes from getting all the non-sexy, expensive basics right: patching, security zoning, two-factor authentication, explicit authorisation, etc.
These products may be great and a wonderful investment, but they will be additional to all the other stuff; they won't solve the world's security problems in isolation.