126 posts • joined 27 Jun 2007
You might want to double-check the security of that biometric scanner...
You see, the Windows login process is not designed to work with biometric data. So, you still have to set up a regular password for the account. The biometric scanner just provides easy access to that password.
The idea is good but the implementation is often awful. I once bought a third-party fingerprint scanner that could be attached to any computer via its USB port. The scanner came with software that would interface between the scanner and the Windows login process and let you log in with your fingerprint. Like you, I was fascinated and found it very convenient...
Until I discovered that AT EVERY LOGIN the software was appending to a TEXT file in the ROOT directory a complete copy of the environment variables and the password in CLEARTEXT!!!
When I complained (loudly) to the producer, all they could advise me was to use the NTFS file permissions to remove read access rights from that file. Morons!
Needless to say, I would never, ever buy ANYTHING from that company EVER again.
Re: Where does it go?
The banks charge extortionately the thieves too. But the thieves don't care - the money isn't theirs.
Yes, there is a trail. The money goes to a money mule who has been duped to send it further via Western Union. There the trail ends.
Re: Does it affect the Foxit standalone reader?
The PDF-reading and interpreting code is not vulnerable. The vulnerability is in how the browser plug-in interprets URLs it has to open and pass to the PDF reader.
Excellent criticism of Imperva's so-called study
Some of the juiciest quotes:
"imperva keeps shopping this quackery out to more and more media outlets where it gets gobbled up and regurgitated uncritically by writers/editors (who really ought to know better if reporting on this sort of topic is part of their actual job)"
"imperva has behaved like a dung beetle, persistently rolling this turd around, but somehow it keeps getting bigger like some katamari damacy of bullshit"
Re: Flawed study?
Excellent points, Alan! (Hi there, BTW. Long time, no see. Yes, I'm still alive.) Here are a few more:
1) If Imperva are selling a security product, then it is highly unethical for them to test (or even comment on the quality of) other people's security products. They are obviously biased. As the following points demonstrate, they are incompetent, as well.
2) They don't seem to distinguish between viruses and malware in general. Most of what they used in the tests was not viruses but various kinds of Trojans. Trojans don't "spread"; only viruses are able to replicate themselves. It is because of this lack of self-replication that their spread is low and the AV vendors haven't got samples or got around to implementing detection of them. With thousands of new malware variants appearing every day, the AV vendors are forced to concentrate on handling the more widespread threats first.
3) They don't seem to understand how AV works. There are two main kinds of AV solutions - malware-specific ones and generic ones. The malware-specific ones (commonly known as "scanners") are what most people think of when they talk about AV products. As their name suggests, such products detect KNOWN malware - known to their producers, that is. If it is not known to them, they won't detect it. Revealing the "troubling" fact that such products are not very good at detecting unknown malware is like saying that a screwdriver isn't a very efficient tool for nailing nails. It's true, but it is a completely pointless statement and only reveals the incompetence of the person saying it.
The generic AV products (of which there are various kinds - heuristic analyzers, behavior blockers, integrity checkers, etc.) try to detect malware not known to them by using some generic knowledge about its structure or behavior (like "if an executable file tries to modify another executable file, this is suspicious" or "if a set of executable files have one and the same code at the end and this code receives control when the file is executed, then they might be infected"). Unfortunately, it is mathematically provable that it is impossible to detect all possible viruses without causing false positives. (The proof is constructive - i.e., if you claim to have an algorithm that does it, the proof shows how to construct a virus for which the algorithm will fail.) In the above examples, the "executable modifying other executables" could be a compiler or a linker, and the files having common executable code at the end might be compressed and executing the decompressor at runtime. So, most AV products of the generic kind try to strike some kind of balance between detection and false positives.
Most AV packages nowadays try to combine products of both kinds. However, VirusTotal uses only the known-malware scanner part of them. Testing it with unknown malware is simply wrong.
Finally, even if Imperva's claim were true (which, I contend, it is not), would you rather use something that gives you a 5% chance of protection or nothing at all?
"That status could be in jeopardy, however, because the only solution to the spoofed-certificate problem, now that the cat is out of the bag, is to revoke the authority of some or all certificates issued by Turktrust."
How so?! All you have to do is revoke the mistakenly issued certificates (e.g., by publishing them in a certificate revocation list). Then they can no longer be used to certify fraudulent keys, and the keys already certified with them will become invalid, because the chain of trust will break.
That it might be advisable to revoke Turktrust's license for issuing certificates because of their demonstrated incompetence is a completely different matter.
Precisely. And PGPDisk goes as far as to disable hibernation by default. And clears the key from memory when no longer needed. And has a timeout after which it dismounts the disk (as does TrueCrypt).
Plus, if this tool can sniff the disk encryption key only when the drive is mounted - what is the point? If the drive is still mounted, you can simply copy its contents - the disk encryption software will decrypt it on-the-fly for you. Not to mention that it is much simpler to install a keylogger (even a hardware one) than to sniff the computer's memory.
This whole thing sounds like a lot of self-serving hype on the part of ElcomSoft.
The article lists a bunch of links to other El Reg articles. Why not to the original article by Prof. Jiang, from which the information was taken? Not nice, El Reg! Here is the link:
If you read the original article, you'll be able to spot another inaccuracy. The statement "without naming any of the products involved" is false. He quite clearly names them: Avast, AVG, TrendMicro, Symantec, BitDefender, ClamAV, F-Secure, Fortinet, Kaspersky, and Kingsoft; those are the products used by VirusTotal.
Now, regarding Google's approach. Scanning for known malware is acceptable for an anti-virus product that can be updated fast enough as new variants appear. A huge company like Google, which is not in the AV business to begin with, simply cannot be that agile. They should have opted for a more generic approach. The fact is that they have a ludicrously low detection rate, despite having had all the samples used in the test for quite some time.
However, even if you opt for such an approach, identifying the known malware by the hash of the APK file is utter idiocy! There are many known server-side polymorphic Android Trojans (there are no viruses for the Android platform yet). This means that each time you download a copy from the server that hosts them, you get a different APK file - because some random data files inside are changed every time. At the very least Google should have used a hash of the classes.dex file inside the APK file (which is the file containing the actual code; APK files are just ZIP archives).
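To make the point above concrete, here is a small sketch (my own illustration, not Google's code) of why a whole-file APK hash is useless against server-side polymorphism, while a hash of classes.dex inside the archive survives it. Since an APK is just a ZIP archive, the standard zipfile module is enough:

```python
# Toy demo: same code, different random padding file per "download".
import hashlib
import io
import os
import zipfile

DEX = b"dex\n035\x00" + b"\x00" * 32  # stand-in for real Dalvik bytecode

def make_apk(padding: bytes) -> bytes:
    """Build a toy 'APK': identical classes.dex, varying data file."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as z:
        z.writestr("classes.dex", DEX)
        z.writestr("assets/noise.bin", padding)  # changed on every download
    return buf.getvalue()

def dex_hash(apk_bytes: bytes) -> str:
    """Hash only the code file, ignoring the polymorphic padding."""
    with zipfile.ZipFile(io.BytesIO(apk_bytes)) as z:
        return hashlib.sha256(z.read("classes.dex")).hexdigest()

apk1 = make_apk(os.urandom(64))
apk2 = make_apk(os.urandom(64))

# Whole-file hashes differ on every download...
print(hashlib.sha256(apk1).hexdigest() != hashlib.sha256(apk2).hexdigest())
# ...but the classes.dex hash stays the same.
print(dex_hash(apk1) == dex_hash(apk2))
```

Of course, as soon as the bad guys recompile the code itself (see below), even the classes.dex hash breaks, which is why this is only the bare minimum.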
Even that is unreliable, of course. The proper way to do it is to parse the structure of the classes.dex file and properly identify the code inside. I have written a Perl script that does this and it's freely available. Perhaps Google should have consulted an anti-virus expert before putting their foot so firmly in their mouth. I know that many people think that AV stuff is easy, but the truth is that there are a lot of pitfalls and you need a lot of experience in this area, if you want to have a prayer of designing a reasonably good product...
As a general note, Android's approach to security is just plain stupid. Each application requests, at install time, a bunch of rights which few users really understand. The only choice is to grant them all - or not to install the app. The proper way to do it is to allow the user to select which rights to grant, and make it possible to revoke some of them or grant additional ones at any time after installing the app. This way you could refrain from granting any rights you feel suspicious about and later grant them if the app really needs them. (Perhaps I want to play this great game but don't want it to connect to the internet, even at the price of not being able to post my best score to my Facebook page.) Or, revoke some of the rights, if you get a suspicion that the app is doing something dodgy.
But this is a fundamental design flaw. It cannot be fixed without making all the existing apps incompatible - which means that Google isn't going to do it. So, we'll have to live with a fundamentally badly designed security, just like with the Windows platform. Not because the platform itself is inherently insecure (Windows isn't, either), but because idiotic design decisions make it way too easy for the user to screw up and install malware on it.
Elk Cloner wasn't a boot sector virus
Mikko is wrong here - technically, Elk Cloner wasn't a boot sector virus. It was an OS infector. The virus didn't touch the boot sector. Instead, it modified the operating system (called, unimaginatively, DOS), which resided on the first 3 tracks of the floppy disk (after the boot sector). Unlike the MS-DOS, which resided in files (that had to be, however, in fixed places on the disk), the Apple ][ OS was not visible from the file system; it occupied whole disk tracks. There were some unused sectors on these tracks - this is where the virus put itself into, besides modifying a few instructions of the OS to make sure that its code was called. (There were legitimate - non-malicious - variants of the OS where the "unused" sectors were used to add various useful extensions to the operating system, like a line editor for the command line with command history. The virus would damage these if it managed to infect the disks containing them, but that wasn't really a problem, because these OS dialects appeared much later, when the virus was no longer widespread.)
The Multics cookie monster wasn't a virus, since it did not replicate itself. It was just a joke program or, with some stretch of imagination, a Trojan Horse.
The CHRISTMA EXEC can be called a virus (well, a worm really) only with some stretch of imagination, since it resided in a text script that the user was supposed to execute manually. That is, when you got it, and started viewing it, you saw at the beginning a bunch of commands for drawing a Christmas tree and some text that said "reading this is no fun at all, simply execute it" (not the exact wording). If you did execute it, some code at the end (which the sender hoped you didn't see after the many lines drawing the picture) re-sent the file to all your contacts (after drawing the promised picture). Kinda like the joke e-mail that said "Check if today is Friday the 13th. If it is, delete all your files. If not, forward this message to all your contacts."
And, of course, it wasn't a PC virus.
Re: Do you mean
Of course not. AV packages are just applications. MACs have to be enforced by the OS (preferably - with hardware support), or they are useless. In addition, MACs enforce confidentiality, while malware tends to be an integrity problem. While a typical MAC system is very robust for protecting higher-classified information from being leaked to lower-ranked users, the integrity problems that the lower-ranked users have tend to move to (i.e., infect) the higher-ranked ones even faster than on a typical DAC (discretionary access control) system, where the disaster happens only after the virus manages to infect a high-ranked user.
No, I was talking about much simpler things. Behavior blocking ("why does Excel.exe suddenly want to open cmd.exe for writing?!"), integrity checking ("why the heck did the master boot record change?!"), heuristic analysis (dynamic like in "let's run this program in a sandbox and see if it does anything naughty" or static like "does the structure of this executable file suggest that it is obfuscated and tries to do something naughty when executed?").
I am not sure what you meant with your remark about open source. The only open source AV I know of is pure crap and is clearly made by people who don't have the slightest clue how to design a proper AV product. Or if you meant that I don't really know how AV products work, since I haven't seen their source, then I suggest that you google my name. Trust me, I *have* seen them from the inside and *know* how they work.
No. A zero-day threat is a threat not KNOWN to the producers of defenses. Just because they haven't seen it yet doesn't necessarily mean that they cannot detect and block it. Many security packages contain not only a known-malware scanner but also a range of generic tools that can detect not-yet-known threats. Such tools include heuristic analyzers, behavior blockers, integrity checkers, and so on.
Yes, I am aware that the unwashed masses still think that anti-virus products just look for scan strings (short sequences of bytes from the known malicious programs) but in reality we stopped relying exclusively on that method more than two decades ago. Nowadays the better products use scan strings only to trigger their other, more sophisticated malware detection and identification algorithms.
"The issue is not the number/frequency of releases of Firefox, but more likely the age and oddity(?) of the extensions you are using."
That's not true. The problem is the frequent release of MAJOR versions.
You see, when writing a plug-in for Firefox, you have to specify for which versions of it the plug-in works. Of course, if you specify either a particular version (e.g., 16.0.1), or a fixed range of versions (e.g., 4.0-16.0), then the plug-in will stop working as soon as a new version is released. However, it is possible to specify a fuzzy range of versions - e.g., 4.*-16.*. Then if the new releases change only the minor version number, the plug-in will continue working (unless, of course, it uses something that gets broken - but that's extremely unlikely in a minor version).
However, you can't specify wildcards for the major version number and because Mozilla change the major version number of their browser so often, lots of plug-ins stop working.
In addition, Mozilla take half of forever to approve a new plug-in (or a new version of an old plug-in), so even if you update the ones you've written immediately, it will take considerable time until their new versions become available to their users.
I know the man personally from his anti-virus days and I must agree with the Belize PM - he *is* bonkers. It's not a recent development, either.
I'm reasonably confident that he is not guilty of murder, though.
Much ado about nothing
1) Most browsers already implement the App Store model for extensions distribution. Google even went as far as to make installing extensions (or even user scripts!) from other sources a major pain in the butt.
2) This isn't, of course, a complete solution to the problem, since malicious extensions WILL find their way in the app store - as has happened with Android apps in Google Play.
3) Where exactly is the problem for the anti-virus developers?! The extension arrives as a file. Any file can be scanned before the browser is allowed to access it. If it contains known malware, access to it will be denied. If the malware is not known, it doesn't matter whether the virus scanner could scan it or not. Even the already installed extensions exist as files (or sets of files) on the file system of the computer and can be scanned.
The idea isn't even new; I remember somebody from Symantec covering this issue (as well as the "widgets" issue - as in Yahoo! Widgets, etc.) at some Virus Bulletin conference years ago.
"Bitcoin-mining botnets are big business for fraudsters. Most recently, Sophos estimated that the ZeroAccess botnet could potentially bring in more than $100,000 per day."
That's a fine example of the "inaccurate reporting" the Foundation is talking about. Sophos' estimate refers mostly to the clickfraud perpetrated by ZeroAccess. Although the zombies of the botnet can be configured to mine bitcoins, there is no way in hell anyone or anything - not even a botnet as large as ZeroAccess - can make $100k per day by mining bitcoins. Anybody implying that simply doesn't understand how bitcoin mining works.
According to which article? ElReg's? Since when do you get technically correct information from such sources? Always follow the links to the original articles and read the text there, if you want to know what the truth is.
The site being encrypted?! Don't make me laugh. It's just a blog with links to MediaFire. The links lead to password-protected ZIP archives, and the password is always one and the same and is mentioned in the blog posts. Of course, it is trivial for a human to obtain the contents of the archive. But it is trivial even for a bot to identify the contents - ZIP archives contain CRC-32 checksums of the uncompressed and unencrypted files.
However, according to Mila, the bot checks just the file NAMES.
Re: Copyright and malware
1) Are you able to assert credibly that ALL the bad guys already know how to do it? In the past, present and future? Have you thought that somebody who isn't a "bad guy" right now because he has no clue how to "do it" might become one if he gets ready-to-use malware from a public distribution site?
2) We in the AV industry are already fed up with fielding idiotic claims that it is we who make and release all this malware, so that we can sell our AV programs. Can you imagine what will happen if one of us actually started a public malware distribution site?
Copyright and malware
While LeakID's claim is indeed utterly bogus (a FRENCH company trying to enforce a US law in SLOVENIA or wherever Mila is, and filing the claim improperly, at that), there IS a reason why we do these things differently in the anti-virus industry. (Mila isn't part of the AV industry. She's just a blogger who has a collection of mobile malware samples available for download. In fact, she rather takes exception when it is implied that she's part of the AV industry.)
No self-respecting AV company will ever make their malware collection publicly available for download. There are several perfectly valid copyright violation cases that can arise if this is done, as I have tried to explain to her in the comments on her blog post. Plus, of course, there is the responsibility issue.
Re: more info please
You really should read the explanation and description of the algorithm on Kaspersky's blog (referenced near the end of the ElReg article). It can't be explained simpler than that, sorry.
The virus knows that it has found the right file because the cryptographic hash of the file name matches a value hard-coded in the virus. But since crypto hashes are not reversible, we can't know what the file name is just by knowing the hash. And when the right name is found, the virus uses a DIFFERENT crypto hash of it as a decryption key. So, we can't find the key without finding the file name.
It is like this. Suppose that a secret agent has been given a locked case with instructions what to do. He doesn't have a key to the case, and doesn't know where to find it, but he's given a pretty good description of the key. So, he wanders around aimlessly, looking for the key. You have captured the agent and have interrogated him. He has told you everything he knows - but he can't tell you what he doesn't know. He's clueless regarding his secret instructions. So, you have two choices. Either start wandering aimlessly around, looking for the key by its description (which the agent has told you), or try to break the locked case, which is very hard to do.
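The mechanism can be sketched in a few lines (my own toy model; per Kaspersky's write-up Gauss used RC4 and MD5, while this demo uses SHA-256 and XOR simply to stay self-contained - and, unlike the real thing, generates the secret in-script so it can run at all):

```python
# "Clueless agent" sketch: the sample carries only a HASH of the trigger
# (here, a file name) plus a payload encrypted under a DIFFERENT hash of
# that same trigger. Without the trigger, neither can be recovered.
import hashlib
from itertools import cycle

def h(data: bytes, salt: bytes) -> bytes:
    return hashlib.sha256(salt + data).digest()

SECRET_NAME = b"some_rare_filename.ocx"  # known only to the intended target
TARGET_HASH = h(SECRET_NAME, b"find")    # hard-coded in the "virus"
PAYLOAD = b"top secret instructions"
KEY = h(SECRET_NAME, b"key")             # a different hash => decryption key
CIPHERTEXT = bytes(a ^ b for a, b in zip(PAYLOAD, cycle(KEY)))

def agent_tries(candidate: bytes):
    """What the malware does on each file name it encounters."""
    if h(candidate, b"find") != TARGET_HASH:
        return None                       # wrong system: stay dormant
    key = h(candidate, b"key")
    return bytes(a ^ b for a, b in zip(CIPHERTEXT, cycle(key)))

print(agent_tries(b"calc.exe"))           # None: payload stays locked
print(agent_tries(SECRET_NAME))           # the right name unlocks it
```

An analyst who captures the sample gets exactly the "locked case and key description" situation from the analogy: the hash describes the trigger, but doesn't reveal it.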
Re: Textbook clueless agent
The very same. :-) It's nice that someone still remembers a dinosaur like me.
Re: That Is Only True, If
If RC4 had such a weakness, it would be considered a "toy" cipher, not a real one. :-)
I am not a crypto expert, BTW. I'm a computer virus expert. Crypto is just a hobby of mine and I'm nothing but an informed amateur there.
Re: Textbook clueless agent
Something like this has most probably been done already - a custom program, offered to the victims, that checks the user's file system for file names that would produce the correct hash. This is exactly the first step Gryaznov took when trying to crack the Cheeba code - and it yielded nothing, so he used better means. Even if this succeeds, I would still classify it as "luck" and wouldn't rely on it.
Textbook clueless agent
From the description on Kaspersky's blog, this is a textbook implementation of Bruce Schneier's "clueless agents" idea. Virus writers had discovered it on their own in the early DOS days, but the encryption used then was sloppy (essentially a trivial Vigenère variant) and easily breakable. The people behind the Gauss thingy were obviously pros and implemented it properly - as I predicted would happen in a paper of mine presented at the RSA crypto conference in Tokyo in 2004.
There is no hope of breaking the code except by luck (i.e., the anti-virus researchers happen to stumble upon an infected system that contains the file names the virus is looking for) or by breaking the RC4 cipher, which isn't doable by amateurs (i.e., it requires the resources of a nation-state). That, or unexpected advances in cryptanalysis, discovering holes in the RC4 cipher - but I wouldn't bet on that happening any time soon, either.
James Riordan and Bruce Schneier, "Environmental Key Generation Towards Clueless Agents," Mobile Agents and Security, Springer-Verlag, 1998, pp. 15-24.
Dmitry Gryaznov, "Analyzing the Cheeba Virus," EICAR Conference, 1992, pp. 124-136.
Dr. Vesselin Bontchev, "Cryptographic and Cryptanalytic Methods Used in Computer Viruses and Anti-Virus Software," RSA Conference, 2004.
"Moral crime"?! You've got to be kidding. It was a military action. According to international law, all wars of aggression are crimes. Not "moral" crimes - crimes pure and simple. Iran did not attack the USA or Israel in any way - cybernetically or otherwise. Therefore, any kind of military action against it was criminal.
As for the "how do we know the plant wasn't making nukes" quip - stop swallowing the mainstream media propaganda. We KNOW that the plant wasn't making nukes, because all the major intelligence outfits of the USA and the UK told us so. Iran isn't trying to make a nuclear weapon. Iran gave up the idea years ago. The spiritual leader of Iran issued a religious ban against nuclear weapons. It's just the politicians of the USA and Israel who are hungry to find any reason to attack Iran, change its regime and steal its oil. Worked so well in Iraq, didn't it? Oh, wait a minute...
Mikko's comment is disingenuous, too. Decades ago "decisions were made" to use weapons of mass destruction - poisonous gas, nukes, etc. (Remember which was the only country to use nukes? Against civilian targets, at that?) Does that also mean that their use "does not matter" - i.e., is not a crime against humanity?!
Oh, and about "proportional response". The USA (and some other countries) have stated that they will consider a cyber-attack on their infrastructure as an act of war and will feel free to respond with conventional weapons. Does that mean that Iran has now the right to bomb the USA and Israel?
Arrggh, what the world has become! :-( When I started working in this field, viruses were just malicious pranks created by juveniles. I was just helping their innocent victims. Nowadays malware is a weapon used by organized crime and the military (is there really a distinction between the two?!). I don't want to be part of this any more! :-(((
Re: The spam uses a *fixed* message ID...
What makes you think that the message ID is always one and the same? The wording of an ElReg article? Since when do you get reliable information from there? Go read the original articles.
At this point it is impossible to tell who is right. It is certainly possible that the messages are sent from a mobile botnet - but there is not enough evidence to prove it. It is also possible that the messages are sent from PCs and are faked to look as if sent from mobile devices. Finally (and my bet is on that) it is possible that vulnerabilities exist in some mobile app for accessing Yahoo! Mail and the spammers have used this vulnerability to create a bunch of accounts and are sending the spam from there.
Typical incompetent crap
This article contains the typical incompetent crap you can expect from an ElReg article. The only surprising thing is that the blame does not lie only with the stupid journo who has written it (as is usually the case) but also with the AV companies from whose blogs he has taken the info.
You see, folks, there ain't no such thing as "Android Instagram SMS Trojan". This thing is not specific to the Instagram app in any way. You see, the scam works like this.
There is a site (a whole network of sites, actually), which claims to be a repository of Android apps - mostly free ones. It's not a market, technically - it's just a site from where you can download apps. The site is Russian. Why would anyone want to use a dodgy site instead of the Google Market/Play or whatever it is called this Thursday? Beats me. We're talking Russia, remember? Maybe they don't have easy enough access to all these apps - remember, Google restricts them by country. Maybe it's too slow or expensive to connect to the genuine market. Maybe they just don't know better. Whatever.
Any time the (l)user tries to download an app from these Russian sites, no matter which app s/he has specified, s/he gets something completely different. It is actually a "downloader app". This app sends 3 SMS messages to premium numbers (some variants even say that they will do so, although they don't clearly specify the numbers and the costs) and then downloads the real app that the user has ordered. Which app that is is written in a data file inside the APK file (APK files are ZIP archives) of the "downloader app" - but the code of the "downloader app" is one and the same, no matter which particular (genuine) app the user has ordered.
In addition, random data files are added to the APK file of the downloader, in order to fool AV programs that depend on whole-file checksums. This is done automatically before every download of the "downloader app".
But that's not all. In addition, very often (almost every workday) the code of the "downloader app" is edited manually, some trivial changes are made in it (e.g., the classes are renamed, some lines are switched around, variables are defined, etc.) and the "downloader app" is recompiled. This is done in order to fool AV programs that checksum the file inside the APK archive that contains the actual code (classes.dex).
So, basically, the thing uses server-side polymorphism. It's a downloader and it is stupid to name it after one particular app that the original researcher has initially downloaded without thinking or analyzing the thing.
It's not really new, either. It's called FakeSMSInstaller and has been around for several months already. But since a new variant appears almost every day, some poor excuse for an AV researcher has decided that they have found something genuinely new. Not so, grasshopper!
Re: So AV firms forgot how to read x86 assembly?
So, you have forgotten how to read English? "These guys" have no problem reading the x86 disassembly and understanding what the code DOES. What they are wondering is what language it was originally written in and compiled from. It definitely wasn't hand-written x86 assembly.
From the looks of it, my guess would be one of the relatively less-widely used object-oriented languages. Maybe compiled Python or Forth... Compiled Perl might be worth looking at, although personally I think it's unlikely.
The 11k number is utterly bogus
"AV-Test reckons there were more than 11,000 strains of Android malware"?! You've got to be kidding! There are just a few hundreds of them. Apparently, the AV-Test folks do not understand what a "strain" (or "variant") is. They probably have 11,000 SAMPLES in their collection, many of which contain one and the same malware variant.
Malware sandwiches have been with us since the time of the Jerusalem virus (remember that one?).
Even more interesting (but similarly not new), some computer viruses can "mate" and exchange malicious code, resulting in new, previously unknown variants. Used to happen a lot in the MacOS world (that was before Apple switched to a Unix-based OS for the Macs, for you youngsters out there) and the macro virus world.
But self-replicating malware (i.e., viruses) is mostly irrelevant nowadays. Most of the infections are caused by various kinds of Trojan horses (i.e., malware that does not replicate itself).
So, I'd classify this "news" item as "yet another AV company seeking attention".
Sadly, that's not practical. There are thousands of new apps or updates of old apps uploaded every day. It is not humanly possible to examine carefully each one of them before allowing them to the Market. This is precisely why Google doesn't want to do it.
Some anti-virus companies routinely download apps from there (and from many alternate markets) and scan them for malware, but since the scanning is pretty much automated, it is not guaranteed to detect everything.
In fact, even manual examination won't detect everything, as Charlie Miller demonstrated by getting his malicious app into Apple's walled garden...
You are very right about the first part - it is a big problem that the Android security paradigm does not allow the user to choose which of the requested privileges to grant to the app. (And be later able to grant or revoke any other privileges.)
Sadly, you are wrong about the second part - it is not practical to implement this without a complete re-design of Android.
AV and malicious apps
While the apps in question are indeed not viruses (they are Trojans at best; no viruses for Android exist yet, while at least two viruses exist for - jailbroken - iPhones), the existing anti-virus programs for Android do detect malware (including Trojans) - not only viruses. In particular, some anti-virus programs detected these particular apps long before Google got wise and removed them.
So, having a good anti-virus on your phone isn't a bad idea, after all. Emphasis on "good", though. Most of those out there suck.
Of flaws and men
In that particular aspect, Google is even more invasive than Apple, as far as I know. It can not only delete from your phone an app that you have installed from the official Android Market, but it can also force the installation on your phone of an app residing on that Market, without your consent. Thank goodness, the removal works only for apps from the official Android Market - not for just anything that you have installed on your phone.
As far as we know, Apple at least can't force-feed you apps. Of course, maybe they can and we just don't know it yet...
Of course, there is a bright side to the force-feeding, too. One of the security companies, Lookout, has a product, called Plan B, which makes use of this "feature". Suppose you've lost your Android phone without taking any measures to protect it - like installing some security software on it. Then you can force-feed it Lookout's Plan B (all you need is the Gmail credentials for accessing the Android Market with that particular phone) and then lock it, locate it, wipe it, etc.
Correspondingly, the dark side is that if a malicious app makes it into the Android Market, anybody who can steal your Gmail credentials for your phone can force this malicious app to be installed on your phone without your consent.
Lagostrod is not the only one
Apparently, another (or maybe the same?) publisher of dodgy scamware apps is "Miriada Production", see this:
I have yet to get a sample of the scamware, but I strongly suspect that it is related to a set of scamware apps used on a group of Russian sites. They all carry a bunch of supposedly free apps, but when you try to get one of them, you essentially get a downloader, which warns you in very vague terms that it is going to send a premium SMS (it doesn't tell you exactly how much it is going to cost; in fact it sends 3 of them) and then proceeds to download the real app that you wanted.
Those Russian apps use server-side polymorphism, though - something which, I suspect, is not possible for malware uploaded to the official Market. The code of the apps (the classes.dex file inside the APK package) is modified by hand almost every day and the data inside the APK package is modified automatically for every download.
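The practical consequence of server-side polymorphism is that plain file-hash blacklisting breaks down. A toy sketch of the idea (the byte strings here are invented stand-ins, not real samples): two downloads of the "same" app differ, so their classes.dex fingerprints never match.

```python
import hashlib

def dex_fingerprint(classes_dex: bytes) -> str:
    """SHA-256 of the classes.dex payload inside an APK."""
    return hashlib.sha256(classes_dex).hexdigest()

# Synthetic stand-ins: the server varies the bytes on every download.
download_1 = b"dex code" + b"\x00padding-A"   # first download
download_2 = b"dex code" + b"\x00padding-B"   # same app, next download

print(dex_fingerprint(download_1) == dex_fingerprint(download_2))  # False
```

This is why detection of such families has to key on something more stable than the file hash - code structure, behaviour, or the distribution site itself.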
While true, it only means that CarrierIQ is a user-land rootkit, not a kernel-land rootkit.
Well, it's a matter of choice, really.
You can have a completely closed system, be allowed to run only what the system producer thinks is good for you, be relatively safe from malware and be left without recourse if something bad happens (like, the producer screws up big time).
Or you can have an open system, vulnerable to malware (because it is just as open to the bad guys too), which leaves the responsibility for your protection mostly on your own shoulders, and have the freedom to run on it whatever you want (including malware) and get quick help from more knowledgeable enthusiasts whenever a need arises.
I realize that each of these two alternatives appeals to different kinds of people. Me, I'd take malevolent freedom over benevolent dictatorship any time - but not everybody might feel the same.
Define "hiding". Do you know how many processes are running on your average PC, which aren't immediately obvious? Should AV programs "warn" about each one of them too?
Not defending CarrierIQ (or the carriers pre-installing it) here - I personally think that it is a huge privacy violation - but AV programs have to be more particular than reporting anything you don't immediately see.
Not CarrierIQ. The carrier. It is the carrier who instructs CarrierIQ what data to collect and send. Yes, it is remotely triggerable (configurable, more exactly). Adding new functionality - no, but there is plenty of existing functionality.
The fascist government doesn't need to mandate the use of CarrierIQ. First of all, they can go directly to the carrier (with a secret court order or just with a big gun, depending on how fascist the government is) and require access to all the phone-related traffic of the victim, CarrierIQ or not. Second, the GSM phones use the A5 encryption algorithm, which isn't that difficult to crack in real-time. I've seen offers from security companies that have devices doing it within 0.3 seconds.
I guess you've missed the message that the iPhone was found to record your whereabouts and keep a week's worth of information of this kind on the phone (accessible to anyone with physical access and a bit of knowledge) and send it to Apple too.
We know it very well. It IS a purely diagnostic tool. Nobody is disputing this. The problems are two:
First, in order to provide exhaustive and useful diagnostic information, it can collect vast amounts of privacy-sensitive data. (I wrote "can collect" as opposed to "collects", because what it actually collects can be configured by the carrier.) This is a HUGE breach of privacy.
Second, it tries to hide while doing so and it cannot be easily turned off. This tends to annoy people. If it had an opt-in policy (as opposed to the current no-opt-out policy), if it clearly explained what exactly it collects, for what purpose, what it sends to the carrier and how that can help the user, nobody would have had any problems with it.
Also, it is a bit unfair that CarrierIQ gets all the blame. After all, they are just a software company making a diagnostic tool. This tool does exactly what their customers - the carriers - want. People should direct their ire towards the carriers who have been shipping it pre-installed, instead. Why aren't they telling their customers what kind of information they are collecting on them and why there is no easy way to opt out?
Ah, you are mistaken. It doesn't use any undocumented Android functionality. The reason why it cannot be removed (without rooting the device) is that it comes pre-installed by the carrier and resides in an area of the device's memory to which the user doesn't have write access on a non-rooted device. It is no more and no less difficult to remove than any of the other pre-installed apps.
Some of the questions raised in the article are relatively easy to answer.
1) Why are some AV companies reluctant to label Carrier IQ as malware and, most importantly, add detection of it in their main scanners and, even if they do implement detection, they do it in a separate app? Well, dunno about Kaspersky, but Lookout comes pre-installed by several carriers on their phones. Most of these carriers also pre-install CarrierIQ. Imagine now if the pre-installed malware scanner starts reporting out-of-the-box that the phone contains malware. What will happen? The carriers will drop the AV product, of course - leading to financial losses for its producer. Ergo, the producer isn't going to detect CarrierIQ as malware with its main product.
2) Why don't they offer removal? CarrierIQ comes pre-installed by the carrier, which means that it resides in the firmware, among the other pre-installed apps. The only way to remove any of those is by rooting the phone. A security company can't afford to do this routinely on the phones it processes - or its own product would be classified as malware by some.
3) Why weren't the AV products detecting CarrierIQ heuristically, using the fact that it requires many dodgy privileges? Unfortunately, Android's privileges are not granular enough to be usable as a base for good heuristics. By this I mean that you can't easily pick a set of privileges and say that if an app requires them, then it is suspicious. There has been a rather deep study of this issue (an AV company comparing the privileges used in the known malicious apps and in the apps on the Android Market) and the conclusion was that it is not possible to determine the maliciousness of an application from the set of privileges it requires.
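By way of a toy illustration (the permission sets below are invented for the example, not taken from that study): a legitimate SMS app and an SMS Trojan can request almost the same privileges, which is why a simple "suspicious permission set" check misfires.

```python
# Made-up permission sets for a benign SMS app and an SMS Trojan.
benign_sms_app = {"INTERNET", "READ_CONTACTS", "SEND_SMS", "RECEIVE_SMS"}
sms_trojan     = {"INTERNET", "READ_CONTACTS", "SEND_SMS",
                  "RECEIVE_BOOT_COMPLETED"}

def jaccard(a: set, b: set) -> float:
    """Overlap between two permission sets (1.0 = identical)."""
    return len(a & b) / len(a | b)

print(round(jaccard(benign_sms_app, sms_trojan), 2))  # 0.6
```

With that much overlap, any threshold strict enough to catch the Trojan also flags the legitimate app - the false-positive rate kills the heuristic.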
DiBona is so full of it
While it is true that there are snake oil salesmen in the mobile security business (which field of business doesn't have them?!) - like scanners with pitiful detection rates and overblown estimates of the number of Android malware programs out there - this DiBona chap is so full of it that it's not even funny.
Smart phones are not "inherently more secure than PCs". Just like with the PCs, the weakest link is the user. The user would install anything from anywhere without ever stopping to think. And it's kinda difficult to protect people from themselves, you know? No solution is fool-proof, because the fool is always bigger than the proof...
Mobile malware hasn't caused "much of a problem"? OK, let us assume, for the sake of argument, that it has hit only ONE user (in reality, thousands have been hit, but humor me). That certainly wouldn't be "much of a problem", compared to the millions of smart phones out there, right? Now, stop and think for a moment. What if that ONE user was YOU? Do you still think that protection for mobile devices is useless because malware "isn't much of a problem"?
No major cell phone has a virus problem?! I guess he doesn't count Nokia as a major brand of cell phones, then. In the early days of Symbian (S60) - the OS that most Nokia smart phones used - many mobile viruses spread across such phones over Bluetooth and MMS.
Regarding the "no Linux desktop has a real virus problem" crap, with the risk of being flamed by all the Linux fanbois here, I'd say that it again depends on how you define "no" and "a real virus problem".
One more point regarding the "snake oil salesmen". Please note that many (most?) Android security vendors offer their scanners for FREE and only sell for money their other, non-malware-related services, like backing up the information on the phone into the cloud, tracking the phone, locking the phone and so on. You can hardly call a "snake oil salesman" somebody who is giving you their product for free. Or is Mr. DiBona actually claiming that the other security services are worthless?!
Now, speaking of worthless and incompetent stuff, how about a long and hard look into the Android security model, huh?
1) Android, out-of-the-box would install and run any signed app (if configured to use alternate markets). Signed by anyone, I mean. As opposed to that, the iPhone would run only apps signed by Apple. That's not necessarily a good thing - personally I'd take malevolent freedom over benevolent dictatorship any time - but it does have a negative impact on security.
2) Android is plagued by bugs, exploited by the various rooting exploits, the fixes for which take ages to reach the end user. This is not only Google's fault - much of the blame falls on the mobile operators - but the fact is that Apple's model provides better security in this aspect too.
3) Android has the same user-incomprehensibility problem that has plagued the Windows security software for ages. You download an app. It tells you that it requires X, Y and Z rights. The vast majority of people have absolutely no clue what these rights really mean and why the app might need them. Android's description of them is pitiful. The responsibility for making a correct security decision is dumped entirely on the user. In such a situation, most users will fail to make the correct decision.
Why is it not possible to grant only some of the rights that the app requests?!
Why is it not possible to change later the rights granted to an installed app?!
1) While occasionally malware has made it into the Android Market, the vast majority of such malware comes from alternate markets and stand-alone APK files distributed by various Web sites.
2) If malware has been installed on the user's phone from the Android Market, Google has the capability to remove it without the user's consent. Remove it from the user's phone, I mean - not just from the Android Market. However, this capability is not present if the malware has been installed from alternate sources.
3) Lookout is exaggerating a bit, IMHO. The known variants of Android malware are about half of what they state. 400+ - not 1000.
4) It is most definitely not true that the Android applications store model "lacks signing". Just the opposite - every app must be signed, or it cannot be installed on a non-rooted device. The problems are elsewhere: (a) the apps are signed by their producer, not by Google (for comparison, the iPhone apps are signed by Apple) and (b) there is no review process. Arguably, the app access rights model is also flawed. It relies on the user being able to decide whether to install an app that requires specific rights. Most people don't even understand what these rights mean and just allow them. In addition, there is no way of granting only some of the requested rights to the app and later granting more rights or revoking some, if necessary.
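A rough way to see the signing point for yourself: an APK is just a ZIP archive, and a (JAR-style, v1) signed one carries its producer's signature files under META-INF/. A sketch using a synthetic archive in place of a real APK (the entry names are the conventional ones, but everything else here is made up for illustration):

```python
import io
import zipfile

def v1_signature_entries(apk_bytes: bytes) -> list:
    """List the JAR (v1) signature entries inside an APK (a ZIP archive)."""
    with zipfile.ZipFile(io.BytesIO(apk_bytes)) as z:
        return [n for n in z.namelist()
                if n.startswith("META-INF/")
                and n.endswith((".MF", ".SF", ".RSA", ".DSA"))]

# Synthetic stand-in for a signed APK.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("classes.dex", b"\x64\x65\x78")             # app code
    z.writestr("META-INF/MANIFEST.MF", "Manifest-Version: 1.0\n")
    z.writestr("META-INF/CERT.SF", "Signature-Version: 1.0\n")
    z.writestr("META-INF/CERT.RSA", b"...")                # producer's cert

print(v1_signature_entries(buf.getvalue()))
```

The point being: the signature is always there, but the certificate inside CERT.RSA is the producer's own (typically self-signed), so it proves who packaged the app - not that anyone vetted it.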
"Android Malware "problem" is being vastly over-exaggerated."
Percentage-wise, the Android malware problem is growing faster than any other malware problem. While there are no viruses for this platform yet, the number of new Trojan horses discovered has reached several per day (on average). Given that there were just a handful in existence about a year ago, this is tremendous growth. And it is just a matter of time until a full-fledged worm appears. (We have botnets already.)
"The bottom line is if you have left that "allow non-marketplace apps" tickbox unticked, and you quickly check the app permissions and download count before installing, then you have nothing to fear."
You are very much mistaken here. First of all, there have been several cases of malware appearing on the "official" marketplace (and Google had to pull it very fast). Second, there are ways to bypass the permissions protection by using already existing apps. For instance, a malicious app might not request the permission to access the Internet, yet still do so by using an already installed browser app that can act as a server.
"The only people saying otherwise are those trying to sell you an app to protect you."
The test was of FREE protecting apps, in case you haven't noticed. The results sound about right, too - in the AV business, when you get a free anti-virus program, you usually get what you pay for. (Sadly, the opposite is not necessarily true - buying a paid AV program doesn't guarantee you quality.)
I wholeheartedly agree with your remark about iOS, though.
Of course not. We, the anti-virus people, are not dumb users. We have special, in-house developed tools for extracting such information.
What is steganography?
"The features include steganographic processes that encrypt stolen data and embed it into image files before sending it to attacker-controlled servers, an analysis by NSS researchers found."
Actually, if you bother to follow the link to the NSS report, you'll see that its authors, being knowledgeable researchers, don't use the word "steganography" at all. And rightfully so, because Duqu doesn't use it. Obviously, the ElReg reporter has heard the buzzword from somewhere, has half-understood it, hasn't even looked at the Duqu code, and has decided to include this buzzword in his article to make it more "juicy".
What Duqu does, is APPEND (not "embed") the collected and encrypted information at the end of JPG images. The reason for this is to conceal the fact that it is sending such information from casual observers of the 'net traffic. However, if somebody is actually LOOKING for this info in these JPG images, it is blindingly obvious that it is there.
As opposed to that, when REAL steganography is used, the information is encoded by toggling single bits in the image. If it is done right, it is practically IMPOSSIBLE to detect that hidden information is present in the image, unless you have the original image to compare it with. Calling what Duqu does "steganography" is like calling wearing sunglasses a "professional disguise".
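The difference is easy to demonstrate. A toy sketch (not Duqu's actual code - just the append trick on a fake minimal JPEG): anyone who scans for bytes past the JPEG end-of-image marker recovers the "hidden" payload immediately.

```python
JPEG_EOI = b"\xff\xd9"  # JPEG end-of-image marker

def append_payload(jpeg_bytes: bytes, payload: bytes) -> bytes:
    """Duqu-style concealment: just tack the payload onto the file end."""
    return jpeg_bytes + payload

def find_appended_payload(data: bytes) -> bytes:
    """Everything after the last EOI marker is the smuggled data."""
    pos = data.rfind(JPEG_EOI)
    return data[pos + 2:] if pos != -1 else b""

# Minimal stand-in for a JPEG (SOI ... EOI); a real file is just longer.
fake_jpeg = b"\xff\xd8" + b"image data" + JPEG_EOI
stuffed = append_payload(fake_jpeg, b"stolen secrets")
print(find_appended_payload(stuffed))  # b'stolen secrets'
```

Real LSB steganography, by contrast, spreads the payload over the low-order bits of the pixel data, so there is nothing after the EOI marker to find - which is exactly why conflating the two is sloppy.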
As for the similarity between Duqu and Stuxnet, it is more appropriate to say that one of the components of Duqu is very similar to one of the components of Stuxnet. But the similarity ends here.
Don't be misled by incompetent journalists
Facebook has absolutely nothing to do with this. As usual, the journalists (not just ElReg's) reporting a technical issue have screwed up.
My guess is that the only one of the people mentioned who is playing Mafia Wars on Facebook is the "anonymous defence official" - the source quoted by Associated Press. And since "military installation hit by Mafia Wars virus" sounds sexy, all the stupid journos have jumped on the bandwagon.
In reality, the computers of the drone program have been hit by a keylogger. A keylogger logs whatever the user types - most interestingly, login passwords to web sites. Yes, it could be the password to your Mafia Wars account. Or to your GMail account. Or your bank account (which is usually the real target). Or whatever.
It does not mean that you are playing Mafia Wars on that computer. Or that you're using it to check your GMail account. Or to do Internet banking. The only thing it means is that your computer has been infected by a keylogger.
Which is, actually, much worse. If one of the infected computers is used to log in to a classified account without using anything besides a user name and a password (i.e., no smart card, biometric scanner or the like), the attackers now have access to that classified account.
How did the military get infected? Certainly not by playing Mafia Wars. Most likely, the infection came from a USB drive. The drone pilots often bring updates to maps, etc. on USB drives.
Why wasn't autorun disabled on these computers? Incompetence.
"How is it targeted?"
It isn't. Doesn't have to be. It's a Trojan, remember? Not a virus. It doesn't spread by itself. It has to be installed on the computer of the victim.
"Any electronic equipment that is taken out of your sight by German customs must be assumed to be compromised."
The-he-heee... They could always try. As long as Germany doesn't outlaw encryption (like France) they will fail. The best they could do is to boot from an external medium (and that won't be easy, either - they will have to bypass the BIOS password) and install either an MBR or a BIOS rootkit. Which I'll detect during the proper boot process. ;-)
"Better not have any commercial confidential information on it either - clean it before travelling."
Better have it properly protected.
"Basically treat travel to Germany like travel to the USA."
Trust me, it's nothing of the sort. German security is generally polite and competent.