A group of researchers presenting at Usenix last week turned up a startling new way to sneak malicious apps through the AppStore and onto iOS devices. By spreading malicious chunks of code through an apparently innocuous app for activation later, the researchers say they were able to evade Apple's test regime. The Georgia Tech …
The kids have absolutely no idea how BSD works ...
Errr... wrong thread?
While I can see the fun in writing some malleable, table-driven software, or possibly even bundling your own custom interpreter that you can later feed with instructions of evil intent, wouldn't it be easier to simply download malicious code with the next automatic app software update? Or doesn't the iPhone have those?
There are no automatic updates.
All updates are reviewed the same as new apps... otherwise there would be no point in having any review process if updates weren't then checked.
Also, while this may be a good thing for people to do when they are being paid to research this kind of thing, we should be pleased that at least some testing is done with iOS, as opposed to Android, which can install anything it likes.
During installation on Android a list of permissions that the app wants is displayed, and the user can accept or reject the app on that basis. This also applies to automatic updates, and newly sought additional permissions are singled out for special attention.
"During installation on Android a list of permissions that the app wants is displayed" - yes, but unfortunately most Android apps seem to want permissions for everything, so the list of permissions to check becomes just another "EULA" and people accept them without much thought. Sure, you can read the list and decide not to install an app that wants unwarranted permissions, but how many do?
I've read the permissions list and thought: "WTF are those?". You don't know what you are letting it do, so you let it pass. The alternative is to have no apps that do anything.
It just takes a bit of googling or even reading the short description for each to get some information about the permissions.
I'm a long-term Android user, and only just recently got an iAllFormAndNoFunction device. At least with Android I can see which apps are wanting to get hold of things like my device identity (IMEI, phone number, device ID) and avoid installing them. On the iDevice I have absolutely no idea what an app will want to do. (I'm aware that Apple ditched the idea of access to a device ID a while back, but I don't know if apps can get hold of equally "personally identifiable" data.)
As has been mentioned in various previous discussions on The Register, the Android system is somewhat brain-dead compared with that used in Symbian, whereby you could interactively choose to allow or deny an app access to various features. I've seen that my iDevice asks permission to share location information with the app, which is great, but that seems to be about all that you can selectively "filter".
"as opposed to Android, which can install anything it likes."
Not true at all, by default things are limited to Google Play.
And never mind the theory, how does practice compare? All claimed instances of Android malware have been on sites other than Google Play, so there is no evidence that iOS's method is more secure. But we do have plenty of examples of how Apple have used the power to block all kinds of applications that people might find useful.
Nokia Store has checks too - and as much as I loved Symbian, I have to say that as a user and developer, I much prefer the straightforward method of Android and Google Play, compared to the laborious and restrictive checks of someone else telling you what you can release for the platform.
"The message we want to deliver is that right now, the Apple review process is mostly doing a static analysis of the app, which we say is not sufficient because dynamically generated logic cannot be very easily seen."
And how do you expect Apple to be able to exercise every code path, for every possible input parameter ?
Correct. Thus, basically, the tests CANNOT catch malicious apps (except the most feeble ones). They can catch badly written apps though.
Why should they need to?
The point of a sandbox is that, once inside, the program has the run of the sandbox. But if you don't let ANYTHING in the sandbox post a tweet or access the contact list, then it doesn't matter what it ASKS to do - it gets nothing back.
Nobody can tell you an app is "safe". But they can tell you what it has permission to do. If it doesn't have permission to do X, Y, Z in the first place then all you have to worry about is the security of the sandbox - which you have to worry about anyway.
The problem only comes when people authorise an app to have the "can read contact list" and "can send SMS" permissions for apparently legitimate reasons (e.g. sending texts to you when your phone is lost, say) and don't realise that it could be used by a nefarious part of the app for whatever it wants to do (e.g. send spam text to your friends).
Nobody should be relying on some static analysis "test" to determine if an app is evil. It simply should never have permission to touch facilities that it does not need. And I've felt the same way about every program on every one of my computers for the last 20 years. I shouldn't have to know if a program is safe or not to run and be half as suspicious as I am - it simply should not be possible for a program to put itself into startup, access the network, or read my private files without my explicit permission.
Our focus on "security" is completely misguided, as was even MS's attempt to splat permissions into Windows (remember Vista's UAC? Well, I can still write a program that will put an entry into one of the many startup lists of a user on an unmodified and patched-up-to-date Windows machine. The only thing I've seen that can "fix" this is "Startup Monitor", which was written for Windows XP, and basically works by watching the relevant registry entries on a regular basis).
DON'T LET APPS DO ANYTHING MORE THAN THE ABSOLUTE BARE MINIMUM THEY NEED TO WORK. After that, it's the user's fault if they install something that has permission to do dangerous things.
"The point of a sandbox is that, once inside, the program has the run of the sandbox. But if you don't let ANYTHING in the sandbox post a tweet or access the contact list, then it doesn't matter what it ASKS to do - it gets nothing back"
That's fine unless the purpose of the app requires these things. Or the user doesn't care.
This was a news feed app. Most of them have "Retweet" buttons these days. As for the contact list, does it have a "Send article by email" option?
You certainly couldn't stop it from accessing websites, because as a news feed app, that would make it pointless.
"DON'T LET APPS DO ANYTHING MORE THAN THE ABSOLUTE BARE MINIMUM THEY NEED TO WORK. After that, it's the user's fault if they install something that has permission to do dangerous things."
For performance reasons, iOS makes most functions available to every app. Apple's static analysis stops code from calling functions it says it isn't using, and that's good enough to stop the researchers calling the extra functions directly. But the libraries they're permitted to call contain trampolines, and the researchers subvert these to reach the rest of the functions.
Apple could definitely fix this. But it's a lot of work and a performance hit. And iOS is a juicy target so this could become a problem as it's well within the capabilities of a spyware gang.
>For performance reasons, iOS makes most functions available to every app. Apple's static analysis stops code from calling functions it says it isn't using
OK, that makes sense, but...
How about having a table of "system function points" that the app is expected to call. Call it a registry (eeek, not my favorite item in Windows). Or an ACL. Whatever. Just something managed by system/app store during the install process, that is NOT under the control of the app.
Now, only put in things like SMS sending, contact list lookups and the like. When the app wants to do those things it will call the function, and the system, not the app, will look up the ACL before calling the function. No pre-provisioned ACL for that function? No play & maybe a call to the mothership that someone is being naughty.
Yes, there would be a cost, but an app shouldn't be firing SMSs at a high frequency, so a 0.1 second lag will not be noticed. For frequently used functions, like GPS or network access => return an access token of sorts (a randomized function pointer alias to the system function would do nicely) - you pay the first time in an app's session, not afterwards.
You could even allow the user to defer ACL grant to actual execution time.
Seriously, a granular permission system à la Android is something it would be nice for iPhones to have.
I suspect the main problem is not technical; it is asking the average iPhone user to consider what her app is doing. That doesn't play well with the "it just works" mantra. Just like old-style Outlook assumed that users "needed" VB macros, "just in case".
Wouldn't be surprised if a big Melissa/Blaster moment on any one mobile platform concentrated minds wonderfully.
The researchers appear to be talking about code that self-modifies (if I understand their meaning of re-arranging code gadgets).
So why not mark all executable memory regions read-only, and forbid any region that has ever been writeable from being marked executable?
Or am I missing something here ?
No, it's more like there is dormant code that is only executed in certain circumstances. The code isn't run during the tests (because it goes down the "clean side" of an "if", for example), so never triggers warnings, but can be "switched on" after talking home and seeing a certain flag, for example (when it goes down the "dirty" side of the same if in the same memory page).
The code is executable all the time. It just never gets executed in the tests. And designing a test to execute all executable code during the testing phase is almost impossible.
DEP is not the be-all-and-end-all of security. We've had it enabled on every machine on every OS since Windows XP SP2 / Linux 2.2/2.4 I believe. We still have viruses, drivers that crash and trash memory, and buffer overflows (DEP is never fine-grained enough to catch things like that).
Apps can't write self-modifying code (that's why apps can't use the superfast Just-In-Time compiler). But the authors include a vulnerability (a buffer overrun) that allows the server to smash the stack. Nothing is moved or rearranged, but return addresses are tweaked so that genuine code is called in a different order.
This is the same story I commented on, just with the words changed.
Has the Reg run out of Apple-bashing stories, so it's just rehashed an old one?
Or have the main article writers gone on holiday and left the work experience kids to run the ship, saying "Just rehash a few stories no-one will notice"?
"...such as stealthily posting tweets, taking photos, stealing device identity information, sending email and SMS, attacking other apps, and even exploiting kernel vulnerabilities..."
fairly sure that the user must first allow the program to send tweets/take photos/send SMS...
not sure how it would attack other apps as it's sandboxed - unless it jailbreaks the device first..
But I thought there was a ban on apps which received instructions from elsewhere and acted upon them? Hence no non-Safari browsers (with the exception of Opera Mini), emulators, interpreters, Scratch, etc...
Do they mean "functions" by any chance?
Obfuscated code is obfuscated!
There have been countless apps that have passed approval that did things the authors didn't disclose. Much of the vetting process is questions asked of the submitting authors, not extensive digging through the source. The risk is if they detect something amiss, even later from complaining users, they can pull the apps and dev account.