Or you could just...
Buy Symbian...
What does it take to build a classified smartphone? Demand clearly exists. Given how readily every iPhone and Android device is rooted, infected, and otherwise compromised, the answer isn't simply "better software." In the battle to secure our mobile endpoints, operating system tricks and mobile device management will only take …
So much promise, yet the end result will be smartphones that you can't remove the shitty operator-crippled firmware from, and computers that nickel-and-dime you for every little function because you can't install anything except through their app store. It may involve increased risk, but I'll take freedom over security.
It means "things you aren't allowed to know". I really don't think you need to divert yourself from your penguin-fondling time wastery with concerns over a type of phone which, by definition, isn't ever going to be on the general market where you could actually complain about not wanting to buy it.
Greenhills do a nice little OS called Integrity, which is probably one of the very few out there that stands a chance of actually passing anything like a serious standards-based accreditation.
Blackberry seem to have done a reasonably good job, with governments everywhere seemingly trusting them to a limited extent. Any additional tech from Intel, Greenhills or whoever would probably be able to improve on Blackberry's security, but there's an elephant in the room. It doesn't really solve the biggest issue out there regarding improved mobile security: who the hell is using this device? And just who is looking at the screen anyway? There's been many a thing tried (passwords, biometrics, secure tokens, you name it) but none of them really cut the mustard.
The biggest stumbling block for a secure mobile device is getting it to recognise when it's being used and/or looked at by someone other than the authorised holder. If you don't solve that problem then we'll just have repeats of the usual 'left it on a train' stories, appended with 'forgot to screen lock it...'. Passwords will get written down, secure tokens will get lost, and your biometrics aren't exactly private either unless you wear rubber gloves, a false mustache and funny glasses all the time.
Nothing in Intel's toolbox (nor in any one else's toolbox either) seems able to solve that.
"The biggest stumbling block for a secure mobile device is getting it to recognise when it's being used and/or looked at by someone other than the authorised holder."
Unclassified consumer smartphones will have increasingly high-resolution camera chips. There have been issues in the UK with confidential paper documents being carried print side out, photographed using high-end press cameras, and read.
Best ban using the classified phones outside, or on public transport or anywhere except a secure room.... I know, we could put a network cable on them. Perhaps have a special dock to rest them on when not in use... could be secure then.
"we'll just have repeats of the usual 'left it on a train' stories, appended with 'forgot to screen lock it...'. Nothing in Intel's toolbox (nor in any one else's toolbox either) seems able to solve that."
Managed policies enforcing timed lockout? Remote lock and wipe? Remote tracking (though that can be a two-edged sword)? McAfee Enterprise Mobility Management (McAfee EMM) purports to do this sort of stuff, though I would not care to guess at the quality of the product as I have not gotten my hands on it yet.
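For what it's worth, a timed-lockout policy of the sort these MDM suites enforce boils down to very little logic. Here's an illustrative Python sketch (this is not McAfee EMM's actual API; all names are hypothetical):

```python
import time

class LockoutPolicy:
    """Illustrative MDM-style policy: lock the device after a period of inactivity."""
    def __init__(self, timeout_seconds=300):
        self.timeout = timeout_seconds
        self.last_activity = time.monotonic()
        self.locked = False

    def record_activity(self):
        # Any user interaction resets the inactivity clock, unless already locked.
        if not self.locked:
            self.last_activity = time.monotonic()

    def tick(self):
        # Called periodically by the device; locks once the timeout elapses.
        if time.monotonic() - self.last_activity >= self.timeout:
            self.locked = True
        return self.locked

    def unlock(self, credential_ok):
        # Only a successful credential check clears the lock.
        if credential_ok:
            self.locked = False
            self.last_activity = time.monotonic()
        return not self.locked
```

The hard part, of course, isn't the logic; it's making sure the user can't simply disable the agent that runs it.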
"Managed policies enforcing timed lockout? Remote lock and wipe? Remote tracking (though that can be a two-edged sword)?"
Most of that relies on the idiot who has lost it phoning in to say they'd lost it. There have been plenty of cases where things left on trains have not been reported immediately, leading ultimately to loss. It doesn't really matter how good the technology is, the weak point is the squidgy bio thing carrying it. The timed lockout is OK up to a point, but can turn these things into a right pain in the rear for users.
"Best ban using the classified phones outside, or on public transport or anywhere except a secure room.... I know, we could put a network cable on them. Perhaps have a special dock to rest them on when not in use... could be secure then."
That hits the nail on the head. Mobile and secure is really hard because of the unreliability of the human using it. Locking it up in a secure room largely solves that problem, but then it's definitely not a mobile anymore.
"and you're biometrics aren't exactly private either unless you wear a rubber gloves"
For years, I have wondered whether the spooks in DC, Moscow, Tokyo, Israel, etc. used bars and dance clubs as a means of obtaining DNA, fingerprints, saliva, and hair samples...
Diplomats and their families may be exempt from certain checks, but that does not mean tableware is sloppily collected from and commingled with stuff from other tables.
With enough biometrics collection, this so-called "classified phone" or many other devices will be broken unless brain scans and body scans against a live user are matched.
And, when I read "classified phone" I imagine it is one that governments declare users are not supposed to violate, such as the gov/mil lingo on mil-spec safes and sensitive equipment or cabinets (at least with cabinet safety interlocks): "Contains no user-serviceable or field-replaceable parts! DO NOT TAMPER WITH, DEFEAT, DESTROY, OR DISABLE", except the label will have appended verbiage being something like:
"...the approved and restricted measures and countermeasures on this device at the risk of revocation of comms privileges, facing fines and prosecution, and being subject to being locked up in an underground reeducation and reassimilation center..." hahahaha
"With enough biometrics collection, this so-called "classified phone" or many other devices will be broken unless brain scans and body scans against a live user are matched."
What about iris prints? About the only way you can get those is by shooting the eye point-blank. Doing it on foot would be too obvious, I would think. So that leaves a third-party iris scanner, and there aren't too many places that actually use that level of security.
I'm positive. We're not just talking phones. We're talking drones, self-driving cars, and any appliance that is "install once, and walk away." Frankly, that's the majority of computers deployed today. In the near future, this will explode, growing to completely dwarf extant PCs, mainframes and mobiles.
If you are going to store (encrypted) classified data on the device for use offline, then no matter what they do I don't think it will be safe enough. (A country like China, or the US NSA, will be able to get past it if they want to.)
I suppose you would have to know what you needed to know was on the device to make it worth expending the effort.
The private key has to be somewhere to decrypt it.
(The entire SoC could be replaced with one without a locked bootloader, which would defeat forced signing of binaries. The way that JTAG lockout is typically employed is pretty useless; phone JTAG boxes manage to work regardless. Or you could read the flash chips directly.)
I think all classified stuff should be done Inspector Gadget style (This message will self destruct in 5 seconds.)
Just an application, perhaps with a hardware secure element, might be reasonable to do. (And stuff that can reasonably be done on the device to make it a harder target.)
It is still bad having this stuff being mobile at all. (Stuff like kidnapping the politician's children and hanging them over the edge of a building, then waiting at the home with the wife and a gun against her head, would crack quite a lot of people.)
There is too much risk for something like this to exist at all in my opinion.
Actually, moves have been made into tamper-detecting ICs. The private key could be held in such a way that attempting to etch down to read it would trip a chemical failsafe that destroys the key and bricks the device (in a classified environment, bricking would be considered an acceptable outcome because it means the enemy can't get at the data). If that key is only used within the IC itself (outside communications uses another set of keys), then the pins won't tell you anything, plus you can perform trace detection on those pins. So if you can't etch it and you can't trace the pins, where would you go next?
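The logic such a tamper-responsive part implements is simple even though the hard part is physical: destroy the key material the instant tamper is detected, so there is nothing left to etch down to. A rough Python model of the behaviour (real parts do this in hardware; the class and method names here are made up for illustration):

```python
class TamperResponsiveKeyStore:
    """Illustrative model of an in-IC key that is destroyed on tamper detection."""
    def __init__(self, key: bytes):
        self._key = bytearray(key)  # mutable so it can be zeroised in place
        self.bricked = False

    def on_tamper(self):
        # Zeroise the key and brick the device: the data becomes
        # unrecoverable, which is the acceptable outcome described above.
        for i in range(len(self._key)):
            self._key[i] = 0
        self.bricked = True

    def use_key(self) -> bytes:
        if self.bricked:
            raise RuntimeError("device bricked: key destroyed")
        return bytes(self._key)
```

The point of keeping the key inside the IC and only ever using it there is exactly as described: the pins carry nothing worth probing.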
...what it's proposing isn't just a secure mobile 'phone, it's the most secure communication system ever invented for the common man.
So, unless you're the ruler of a nuclear power, I suggest you just install https://silentcircle.com/web/silent-phone/
I seriously doubt the classified phone's target market is "the common man." It likely won't be as thin-and-light as the iPhone, nor quite as fast, as energy-efficient, etc. It will be close enough for jazz, but a notable amount more pricey. Proles don't need classified smartphones... but the gov't, military, emergency services, etc markets for mobile comms are booming worldwide.
For those of you moaning about this "being an infomercial," how about you put your time where your trolling is, and tell me just who, exactly, you feel has comparable tech right now. I agree wholeheartedly that there are far better OSes than Tizen for this. I don't think Intel is going to push Tizen as their spyphone OS.
Tizen was a learning experience; the OS powering the eventual phone will - I hope - be made by Research in Motion. There are surely others that could make the OS. What there aren't are people with the right hardware, other than Intel. That will include writing secure firmware, even embedding newer and more interesting security into silicon than exists right now.
So...who else is doing it? Give me names, I'll go get interviews. Intel's folks seem confident they have zero competition here. I'm inclined to agree. You won't make the "secure" device on wide open hardware, just by trying *really hard* with the software.
Perhaps you might want to do a comparison with Good Technology (http://www.good.com/). Their implementation can work with a CAC for authentication, which is nice for government types, but is not exactly going to make supported devices any smaller.
Additional infomercial material: Alternative Way To Secure Smartphones For Government (http://gov.aol.com/2012/01/17/alternative-way-to-secure-smartphones-for-government/)
I've worked with Good's MDM. It's still just "using software to try to get other software to behave in a mostly usable fashion." It doesn't come close to "tamperproof"; it is still as vulnerable as the operating system, bootloader, etc underneath. This isn't a hardware + software solution, it's just a series of band-aids, one on top of the other, trying to contain the bleeding. What's needed is to open the patient up and repair the artery itself.
This means – as I said in the article – security in silicon that works in conjunction with a well coded OS. Sealed storage, curtained memory, signed bootloaders/OSes/apps/patches, centrally monitored communications, secured heuristics (anti-malware in silicon/firmware) and remote bricking at a bare minimum. You just can't do that all in software. You need that security silicon to pull this stuff off.
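The "signed bootloaders/OSes/apps" part is a chain of trust: each boot stage only hands off to the next if its signature checks out against a key rooted in silicon. A toy Python sketch of the idea (real systems use asymmetric signatures such as RSA or ECDSA; HMAC with a hypothetical fused key stands in here for brevity):

```python
import hmac
import hashlib

# Hypothetical root key fused into the silicon at manufacture.
ROOT_KEY = b"fused-into-silicon"

def sign(blob: bytes) -> bytes:
    # Stand-in for the vendor signing a boot stage.
    return hmac.new(ROOT_KEY, blob, hashlib.sha256).digest()

def verify_boot_chain(stages) -> str:
    """stages is a list of (name, blob, signature) tuples, in boot order.
    Refuse to boot past the first stage whose signature fails."""
    for name, blob, sig in stages:
        if not hmac.compare_digest(sign(blob), sig):
            return f"halt at {name}"
    return "booted"
```

The point being: any stage you tamper with breaks the chain, and everything after it simply never runs.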
This comments thread is full of folks saying "but you can just use MDM, or use an off-the-shelf OS!" Yes, you can. For corporate level security. For classified security – or to meet "tamper proof" requirements for critical appliances – this simply isn't good enough. The contents of your executive's e-mail might lose a corporation a few million dollars. Maybe it even knocks the share price down a few cents. It doesn't cost lives.
A malfunctioning or captured UAV, the contents of certain classified documents, minehunters or self-driving automobiles gone mad…these can most certainly cost lives.
Every other day I'm reading about some new "secure" industrial control system compromised. iOS and Android compromised before the units even hit general availability. The top of the top of browsers, operating systems, etc cracked within hours when there is real money on the line…
…and you want to tell me you are going to solve this problem entirely in software?
I'll cheerily call up Good and see if they feel they can do "tamper proof" entirely in software. Anyone else who feels they can too. I'll need to see it to believe it.
>remote bricking
At least, I would expect self-bricking in the event of being unable to call home, receive a beacon signal or being tampered with.
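That self-bricking behaviour is a dead man's switch: miss the beacon for too long and the device wipes itself. A minimal Python sketch of the idea, with invented names:

```python
import time

class DeadMansSwitch:
    """Illustrative 'call home or self-brick' logic: if no beacon is heard
    within the grace period, the device wipes itself."""
    def __init__(self, grace_seconds):
        self.grace = grace_seconds
        self.last_beacon = time.monotonic()
        self.wiped = False

    def beacon_received(self):
        # A valid beacon from home resets the countdown.
        self.last_beacon = time.monotonic()

    def check(self):
        # Called periodically; wipes once the grace period is exceeded.
        if time.monotonic() - self.last_beacon > self.grace:
            self.wiped = True  # a real device would zeroise storage here
        return self.wiped
```

The obvious tension: set the grace period too short and every dead spot in coverage bricks the fleet; too long and a stolen device has a useful window.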
>cost lives
Yes, this security level is probably being drawn much wider and could include the emergency services and others. The thing that will concern our friends in Cheltenham is that, like Enigma, the more of these devices get deployed for mundane work, the greater the risk of compromise.
>security silicon
I wouldn't be surprised to see the first incarnation of this phone being based around an ARM design (lower power etc. than Intel's own current processor line up), although obviously only our friends are likely to know these details ...
Also Intel are probably not the only OEM in the race - based on the Nonstop Laptop Guardian (NLG), I would not rule out Alcatel-Lucent Bell Labs.
>commodity platform
The really clever bit will be to fit it all within an existing mass market smartphone chassis so that the phone doesn't draw attention to itself when in use...
So happy I will never have to use one of these phones (probably will never even see one). Maybe 5% of the people using them are actually doing something worthwhile for society, but the rest are mostly just cogs in the runaway military-industrial-complex death machine (especially true in the USA).
I don't know that this is true. Far more than "classified smartphones," the tech is useful for "smart appliances." Think of "smart radios" in an ambulance. Converging your GPS, communications, various flavours of application that require them to run a PC inside the vehicle today, etc. This is a perfect example of a situation where you need the power and flexibility of a "general purpose computer" (or a smartphone), but you absolutely don't need the end user loading apps, changing data, or siphoning off patient records.
A centrally provisioned/verified/etc OS/application stack on a completely locked down device would be a good fit.
What about automated buoys for scientific research? Minehunting subs or landmine-hunting automated ground vehicles? Autonomous cars are becoming "a thing"; what if we start building automated snow plows, street sanders/sweepers or other such robots to pick up the drudge work? Should these run on general-purpose PCs that are rootable, or on which we can reasonably easily change applications and move data around?
None of these applications are "science fiction" any more. I have seen the Google self-driving car with my own eyes, I have built UAVs with my own two hands. I have loaded software on to automated buoys and worked with a team to design automated landmine hunters.
Every one of these devices will need the same tech we would use to build a secure smartphone. Some – like the "smart radio" ambulanceputer – are designed for human interaction with the device. Others are automated. The point is that they are appliances. Appliances that need to do very complicated things which we still need "general purpose computers" for. Unlike a proper consumer-level general purpose computer, however, these appliances absolutely must be tamper-proof.
Today – and for the foreseeable future – that means Intel. For all those irked by this, call up AMD, your favourite ARM manufacturer, and anyone else good at putting things into silicon. Scream at them and get them to put some competing products into the field. I think it's a terrible thing that what promises to be the next wave of computing – billions of devices with "tamper proof" requirements – looks set to be dominated by one company.
I don't want that. You don't want that. Probably even Intel doesn't want that. (The critical antitrust eye of ultimate scrutiny is not your friend.) Unfortunately, nobody except Intel has the right mix of stuff to pull it off. So yes, it does mean "Intel Inside" our automated and tamper-proof robotic overlords. For several product cycles' worth of "kicking the tires and working the bugs out" on behalf of Intel's competitors, at the bare minimum.
The reason it is so bad is that this doesn't just apply to the military death machine. The potential addressable market here is far, far bigger than that.
You need to have authorization by three factors:
Something you have: Like a token
Something you are: Like a fingerprint
Something you know: Like a password.
Unless you have all three, you get nowhere. Of course having "portable" documents (those residing on the mobile device) is VERY problematic, so you need to transfer these EVERY time you want to access them. Anything else is VERY insecure.
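The three-factor rule reduces to a single conjunction: every factor must pass, and any one failure denies access. A trivial Python sketch (the factor checks themselves are the hard part and are stubbed out here as booleans):

```python
def authorize(token_present: bool, fingerprint_match: bool, password_ok: bool) -> bool:
    """Three-factor gate: something you have, something you are,
    something you know. All three must pass; any single failure denies."""
    return token_present and fingerprint_match and password_ok
```

Note this says nothing about the quality of each check; a gate like this is only as strong as its weakest factor.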
Maybe the solution is to go to police boxes to get the data. Yeah, that's the ticket!
And in doing so, end up with a phone that costs thousands of dollars each due to small market size versus the development costs, or you can design something that meets those requirements while also being more general purpose, to expand the potential market greatly.
Running a hypervisor on the phone, using the whole secure boot sequence, signed code, etc. seems a better way to go, then it has the capability to run not just the NSA version of Android or whatever you choose, but also a corporate approved load if you are say a Boeing engineer working on the next generation airplane. Then the thing has a chance of not costing $10,000/ea, not that this has ever stopped the US government.
Theoretically you could have a dual or software SIM and combine your personal phone and work phone, so you get that secure phone with the corporate-secret-protecting load for people with access to the types of data corporate spies want to steal, along with the phone that can run Facebook and Angry Birds you want, without carrying two phones.
For a secure data terminal, all you need is a small microcontroller, a display, a keyboard and a GSM module. That can be done quite cheaply even in low volumes.
Alternatively you take an off-the-shelf mobile phone, unlock the bootloader and run some special stripped down version of Cyanogenmod. Throw out everything you don't need, particularly the "stores" and things like the Flash-plugin.
Again, signatures are not a security feature. Having an open bootloader is, since it allows you to run more secure software than what the manufacturer intended.
I'm sorry, but no, code signing never was and never will be a security feature. If it was, we'd all be doing sensitive work on iPhones and Games consoles.
The only chance to get a secure system is to design a minimalistic system by non-idiots.
Let me elaborate on this. The more complex a system is, the more lines of code it contains. The more lines of code it contains, the more bugs there are. More bugs means there are more security relevant bugs.
Now imagine there being a buffer overflow in one of the many routines, for example one that checks the validity of a signature. Suddenly, simply submitting a file to be checked can make it execute code on your computer. This problem wouldn't have existed if that routine hadn't existed.
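You don't even need memory corruption for a verification routine to betray you; plain logic errors in the checker are enough. A deliberately buggy Python example (invented for illustration, though it's reminiscent of real-world signature-parsing flaws):

```python
def naive_verify(sig: bytes, expected: bytes) -> bool:
    # BUG: compares only the first len(sig) bytes of the expected
    # signature, so a truncated (or even empty) signature "verifies".
    # Every extra routine is another place for a flaw like this to hide.
    return expected[:len(sig)] == sig
```

An attacker who supplies an empty signature sails straight through, which is exactly the "more code, more security-relevant bugs" point.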
Since checking for errors in other code is hard, it's much simpler to just replace the complex general purpose system with a simpler limited-purpose one. This is what's commonly known as hardening. Unfortunately, if your kernel is signed, you cannot replace it with a kernel you just compiled yourself.
Particularly with mobile devices code signing is useless since there the physical access vector is most common. Once you have physical access you don't care about signed bootloaders, you can simply replace the keyboard and the screen with versions that report back to you. There already are replacement battery packs with radio-microphones for most types of mobile phone.
You are wrong. What you are describing is a phone that you, the end user, can "verify" is secure by running whatever software you want on it. This is the exact opposite of a secure device, from the perspective of people who own those devices, but have to have other people using them. For people who have classified data, or "tamper proof" requirements on devices they absolutely cannot have people hacking and cracking, then signed everything – along with many other features – is the only way to go.
Allowing end users to have any control whatsoever of their devices benefits a very small group of end users. It doesn't benefit the people who are trying to keep information secure and secret. It doesn't benefit governments or the "populace at large" who absolutely must have cars running known software so that you don't have your self-driving robocar running anything except obscenely over-tested self-driving software. It benefits ideologues, not those who need control over the endpoint.
Fully locked-down systems can be designed to require multiple authentication vectors, even to freak out if, as with your example, someone replaces a screen or battery. Your descriptions of how to crack a "secure device" are based on common operating systems which are not designed by paranoid people. A secure device should (and would) freak out if you swapped the battery: bricking the phone, reporting its location to the authorities and wiping all local data. Same with the screen, the keyboard or any of a dozen other things.
A secure phone with heuristics software in silicon or firmware would look for attempts to fuzz the system for a buffer overflow and....brick the phone, reporting back the location and wiping the data. In fact, anything out of the ordinary, expected operating procedure should result in bricking the phone, calling home with location, and wiping the data. The secure device doesn't belong to the end user. The secure device belongs to the organisation that purchased it. Any attempt to modify or alter the application loadout, hardware or so forth should result in a useless phone and an arrest. (Depending on the context, charges of treason and a bullet.)
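Put another way, the policy is a whitelist: anything outside the expected set of events gets the full response. A sketch of that default-deny posture in Python (event names and the device model are made up for illustration):

```python
# Hypothetical whitelist of expected device events; everything else
# is treated as tampering.
NORMAL_EVENTS = {"screen_on", "screen_off", "signed_app_launch"}

def handle_event(event: str, device: dict) -> dict:
    """Default-deny: any event outside the expected set triggers the
    full response: brick, phone home with location, wipe local data."""
    if event not in NORMAL_EVENTS:
        device["bricked"] = True
        device["reported_location"] = True
        device["data"] = None  # local data wiped
    return device
```

The whitelist approach is what distinguishes this from consumer anti-malware, which tries (and fails) to blacklist the bad stuff instead.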
In today's world, "hardened" systems are not a reality for mass production. That was fine and good when we were making a handful of industrial control systems and mainframes. We're going to be talking here about building nearly everything that humans use except (possibly) cutlery as compute-enhanced appliances. They have to be tamper-proof and cheap. Cheap means not paying a programmer to custom design software for every single one.
Instead, it means taking generic, well understood software that a huge security machine is involved in continually testing for issues and locking it down as much as humanly possible. In some cases you can do basic hardening (pulling out unnecessary kernel elements, etc.), but you aren't going to do real ground-up custom OSes and minimalist designs for each new unit and variant.
Even if you did, that "hardened", unique phone would still contain bugs and exploits. Worse, it would be a special flower, unique like all the others, and only the programmer who built it would (maybe) know how it was different from all the other special flowers. Trying to secure and render "tamper proof" thousands or millions of unique branches of the code base (which is ultimately what we're talking about in the next 20 years of computing, and the explosion of appliances that we are on the cusp of) is lunacy.
Even if you could find the manpower for it, the cost would be incomprehensible. Programmers are expensive. Silicon is not. By removing choice and control over the endpoints from the end user – using methods that even Apple wouldn't dare – these systems can be rendered secure. Apple doesn't have Intel's full suite of tech. Even if it did, it wouldn't use this level of paranoia, because it would probably violate several laws to prevent you so utterly from altering your device.
In the context of devices sold to be "tamper proof," there is no such legal consideration. Quite the opposite, most nations have laws that explicitly state trying to crack such a device leads to big time jail time. A "tamper proof" secure device doesn't belong to the end user.
So let's agree to disagree on the definition of secure in this context. Your definition is not the definition used by the people who will be buying the Intel Inside super spyphones and appliances.
Well the way it is done now usually means that the manufacturer of the device can determine what software runs on it. And the manufacturer usually doesn't care about security. They only care about their business model.
By the way, tamper-proofing a device has more to do with increasing its physical security (i.e. filling it with some sort of resin) than with secure boot loaders. Once you have access to the chips, you will always be able to bypass any "security".
That's the entire point. This is beyond what most would term "hardware" security. We're talking utterly paranoid PHYSICAL security. Epoxy and resins will likely be part of the solution, yes, but what about chemical and mechanical failsafes (meaning they can be tripped even with zero electricity) embedded into the device housing or deep in the internals that would trip on any attempt to get inside? Remember, bricking is preferable to unauthorized access. You can always issue another phone; you can't get the cat back in the bag.
"Any attempt to modify or alter the application loadout, hardware or so forth should result in a useless phone and an arrest. (Depending on the context, charges of treason and a bullet.)"
Herein lies the rub in implementing this sort of thing at the hardware level. Government agencies will want to set the response to tampering by policy. Policies are typically implemented at the software level for the simple reasons that they may change over time and that different agencies have different requirements.
What you really want is a phone with a fingerprint and retina-scan camera that works as a dumb terminal only as long as it is able to scan your retina while you are holding your thumb on the scanner and have entered a passphrase (to allow duress flagging). And you'll need a sealed unit with all sorts of electronic checks to make sure that the electrical properties of the screen haven't changed (someone mirroring the screen to a remote device) etc.
In short, mobile comms is very difficult to secure. Much easier to get someone to meet you at the bottom of the 99 Steps.
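On the duress-flagging point above: the trick is that the duress passphrase appears to grant access while silently raising an alarm, so whoever is holding the gun sees nothing unusual. A minimal Python sketch of the idea (function and argument names are hypothetical; real systems would also compare in constant time):

```python
def check_passphrase(entered: str, normal: str, duress: str):
    """Returns (access_granted, duress_alarm). The duress phrase
    grants (apparent) access but silently flags the session."""
    if entered == normal:
        return True, False
    if entered == duress:
        return True, True  # let them in, alert the authorities quietly
    return False, False
```

The session opened under duress would typically be fed plausible but worthless data while help is dispatched.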
No such thing.
Tamper resistant (to various degrees), tamper evident... but not tamper proof.
The proper description would be "tamper resistant meeting FIPS..." (or whatever your favored rating is). An unlimited-resource attack will always break a production device. Having said that, for any particular case the attack may be unrealistic; for instance, if the useful life of the one-shot nuke arming key is 30 seconds but the attack takes a minimum of 90 seconds, well, your data is safe from hardware/software assault. Time to take out the rubber hose... there is probably some human in the chain that is vulnerable. Or their spouse, or children, or dog...