Re: That includes the firmware.
Getting the right core team together would be the make-or-break of the whole enterprise.
No, no, no and 10,000 times no. This is absolutely wrong. The whole point of security by design is to design out any single point of failure, including the failure of individuals. You don't need a stellar core team to run a secure, successful business. You need one to run a business that will rock Wall Street and perpetually exceed expectations.
There are thousands of large enterprises around the world that are well-run, stable, steady businesses doing things in a secure fashion. They don't make the news because they aren't prima donnas and they aren't high-stakes Wall Street derivatives plays, but many of them are household names.
If you design your business to rely on the charisma and personality of individual members of your corporate team you have already failed at information security. Everyone in a company is disposable. Even the CEO. That's proper security. Nobody can be indispensable. Nobody can be in a position to "leverage" the company. No one person - not even the CEO - can be allowed to have full security access to anything.
Policies, procedures and best practices determine how operations are carried out. Changes to those policies, procedures and best practices are researched, audited, vetted and tested before being implemented.
It means the company evolves slowly. It means they will never be on the bleeding edge. But it can mean - assuming the design is correct - that they will be secure.
Anyone who is "exceptional" is a threat to the stability of such a company. Exceptional individuals have no place in the smooth running of an organization. They may be useful in research and development, but not on the implementation side.
None of this is a dig, by the way. I'm almost certainly worse at this hoo-man stuff than you are. People can also be considered as exploitable flaws, however, and a bit of introspection does no harm.
People are exploitable flaws. But the biggest risks are in ongoing operations (and the people making those operations go). New equipment can be vetted, tested and verified before being put into service. Any behaviour that deviates from modeled behaviour can and should be analyzed. Equipment can be deployed in test/simulation environments before going into real ones.
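To make "deviates from modeled behaviour" concrete, here is a minimal sketch of one way to do it: build a statistical baseline from historical observations and flag anything far outside it. The metric, figures and threshold are all illustrative assumptions, not part of any real system.

```python
from statistics import mean, stdev

def deviates_from_baseline(history, observation, threshold=3.0):
    """Flag an observation falling more than `threshold` standard
    deviations away from the modeled (historical) behaviour."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return observation != mu
    return abs(observation - mu) / sigma > threshold

# Hypothetical daily outbound-traffic figures (MB) for one workstation.
baseline = [120, 130, 118, 125, 122, 128, 131, 119, 126, 124]

print(deviates_from_baseline(baseline, 127))   # a normal day -> False
print(deviates_from_baseline(baseline, 900))   # should be analyzed -> True
```

A real deployment would model many metrics at once and account for seasonality, but the principle is the same: anything outside the modeled envelope gets looked at.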
Individuals responsible for the design of equipment should be isolated from those designing the tests. Those implementing testing should be separate from those implementing production and from those who designed the tests. Those who deliver the goods should be separate from everyone. There should be an internal "chaos monkey" group whose job it is to try to break things. Talk to Netflix about it and you'll understand the benefits.
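The chaos-monkey idea can be sketched in a few lines: deliberately take a random piece of the system down and verify that monitoring catches it. The service names below are made up, and a real drill would act on actual infrastructure rather than a dictionary, but the shape of the exercise is this:

```python
import random

# Hypothetical service registry; True means the service is healthy.
services = {"auth": True, "billing": True, "search": True}

def chaos_monkey(services, rng):
    """Take one randomly chosen service down, the way Netflix's Chaos
    Monkey terminates random production instances to prove resilience."""
    victim = rng.choice(sorted(services))
    services[victim] = False
    return victim

def health_check(services):
    """The monitoring side: report anything that is down."""
    return [name for name, up in services.items() if not up]

rng = random.Random(42)  # seeded so the drill is reproducible
victim = chaos_monkey(services, rng)
assert health_check(services) == [victim]  # monitoring must see the outage
```

The value isn't the code; it's the organizational habit of breaking things on purpose before an attacker or an outage does it for you.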
But the people doing the day-to-day production work: the ones on the help desk, the ones with access to backups, administrative privs, commit privs, push privs, deploy privs...all these people are threats. They need to be categorized. They need to be maintained. They need to be well cared for, kept happy and - above all - their activities need to be closely monitored and documented so that if they attempt to screw up, you not only know about it, you can replace them at a moment's notice.
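"Monitored and documented" implies an audit trail that the monitored people themselves cannot quietly rewrite. One common way to get that property is a hash-chained log; the sketch below is a toy illustration (actor names and actions are invented), not a production logging system.

```python
import hashlib
import json

def append_event(log, actor, action):
    """Append a privileged action to a hash-chained audit log, so any
    later tampering with earlier entries breaks the chain."""
    prev = log[-1]["hash"] if log else "0" * 64
    entry = {"actor": actor, "action": action, "prev": prev}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)

def verify(log):
    """Recompute the chain; False means someone edited history."""
    prev = "0" * 64
    for e in log:
        body = {k: e[k] for k in ("actor", "action", "prev")}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if e["prev"] != prev or digest != e["hash"]:
            return False
        prev = e["hash"]
    return True

log = []
append_event(log, "ops-17", "deploy build 4211")
append_event(log, "ops-09", "restore backup to staging")
assert verify(log)

log[0]["action"] = "deploy build 9999"  # a cover-up attempt...
assert not verify(log)                  # ...is immediately visible
```

In practice you'd also ship the log off-host in real time, so an admin can't simply delete it, but the tamper-evidence idea is the core of it.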
That said, that doesn't mean you have to be the evil overlords. You make it clear to people up front that you are a secure environment. They will be monitored. The company doesn't care if they watch porn while waiting for something to break. The company doesn't care if they listen to music or drink coffee at their desks.
The company does have issues with communications with the outside world during office hours unless they agree to allow that communication to be monitored for corporate secrets getting out. If they want to type sweet nothings to their significant other, that's fine: but it goes through the corporate network, not their cell phone, and the content will be analyzed by computers.
Make sure the corporate policy doesn't prevent them from typing those sweet nothings, and that corporate policy prevents anyone other than the security team from accessing those messages. Respect privacy as much as possible and provide as relaxed an environment as possible, but make it clear that there are concessions to security.
If they don't act against the company's interests then they are guaranteed a job as long as they perform adequately. If the systems detect them acting against company interests, a specially qualified, vetted individual trained in discretion and personal-privacy ethics will examine their suspect events/traffic and determine whether they pose a risk to the company. The individual will be informed of the event, and whether the detection was a true or false positive will go back to the algorithm team to make the machine better.
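That review-and-feedback loop can be sketched in miniature: a detector flags events, a vetted reviewer labels each flag, and the labels accumulate as feedback for the algorithm team. The detection rule, names and numbers below are all illustrative assumptions.

```python
def detector(event):
    """Toy rule: flag any event that moves a lot of data out."""
    return event["bytes_out"] > 50_000_000

def review(event, reviewer_verdict, feedback):
    """Record the vetted reviewer's verdict as labeled feedback."""
    feedback.append({"event": event, "true_positive": reviewer_verdict})
    return reviewer_verdict

events = [
    {"user": "ops-17", "bytes_out": 80_000_000},  # big, but a sanctioned backup
    {"user": "ops-09", "bytes_out": 1_000},
]

feedback = []
flagged = [e for e in events if detector(e)]
for e in flagged:
    # Reviewer determines this was a sanctioned backup: a false positive.
    review(e, reviewer_verdict=False, feedback=feedback)

false_positive_rate = (
    sum(1 for f in feedback if not f["true_positive"]) / len(feedback)
)
print(false_positive_rate)  # -> 1.0: this toy rule clearly needs tuning
```

The point of the loop is exactly what the paragraph says: every reviewed alert, true or false, feeds back into making the machine better.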
That's the best design I have for keeping operations teams satisfied, but I am still not sure if it manages the balance quite well enough. And it is here, if there is a failure in my design or a breach of the company, that it will occur.
This is why I would personally bring experts in to pick apart various stages of my design.
That said, the design is based on a lot of research: the failures and successes of other companies. Every single security expert I've talked to - and most that I've read - is adamant that the biggest risk to any company is ongoing operations. Not procurement.
What's more, the procurement design discussed here ad nauseam is one that aligns not only with the best expert advice, but with game theory as well. I simply do not understand why you seem so obsessed with the idea of compromising devices as opposed to compromising the people who will be safeguarding and using those devices every day.