Google invited developers to its London office for one of three workshops (the others being in Munich and Tel Aviv) to spread the word and teach developers how to write for its new OS. Another event will be held in Boston on February 23rd (check the blog for an announcement). Here's what they told us. The mantra for Android …
"We all know what minimum specs mean". Hah! Don't extrapolate minimum-spec-itis on a PC onto phone/embedded platforms.
About the biggest killer on a minimum-spec PC is all the swapping due to insufficient RAM, which is why doubling the RAM on a slow PC typically has a more dramatic impact than doubling the CPU speed.
None (or very few) of these phone/mobile devices do any swapping; they don't have swap storage at all. Sure, they drop and reload clean pages, but that isn't swapping.
Instead, minimum-spec devices just won't have all the capabilities of premium-specced machines.
These days a gig of NAND flash is only single-digit dollars so the cost of upping flash is pretty low.
Been dabbling a bit with Android, and I'm reasonably impressed!
Google seemed on the back foot
I attended too, and during the presentation (which turned into more of a debate) the Google guy seemed pretty hammered with questions - most of which questioned or dismissed Google's hippie mantra of "peace love and tolerance^H^H^H^H^H^H^H^H^Hopenness". Sadly, in my opinion, Google wasn't able to provide any significant assurances about how it's all going to work safely and securely.
Optional 'self-signing' of applications (with no provision for trusted CA keys etc.) is merely there to group apps from the same provider together, rather than to provide any security. So there's no ability to actually trust/know whether an application is really from Google, IBM or HackMe Inc.
Network operators aren't going to like handsets which they cannot lock down. Orange traditionally lock down their smartphones, stopping some 3rd party apps from working. Networks also love removing (and if possible blocking re-installation of) VoIP apps.
Google seemed completely oblivious to the UK market, where most handsets are network-branded, modified and heavily subsidised. They seemed to be designing around the SIM-free market, where the owner purchases the handset outright and therefore wants full control over what goes on, and can be taken off, the handset. Sadly, that market in the UK is probably niche at best. Combine that with large businesses wanting to lock down handsets for business use (to stop them taking data off-site etc.), and it's a strange stand to be taking. Everything seems to be going down the route of more security - not less - and much of it is being pushed by government and business requirements.
And the question I raised there wasn't adequately answered - merely glossed over with something like "trust us, there will be something in there". Everything seemed to be "we don't want to annoy the users with allow/deny requests", but with the repeated caveat of "all this is far from finished".
From how Android works, it appears that apps announce what intents (actions) they offer. These intents are then managed by the Android OS, so when someone wants to perform an intent (pick a photo, send an SMS, etc.), the OS routes that intent to the available applications, possibly giving the user a choice of apps to pick from.
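As an illustration, an application declares the intents it can handle with a filter in its manifest - roughly like this hypothetical AndroidManifest.xml fragment (the activity name is made up; the action/category/data names are standard Android ones):

```xml
<!-- Hypothetical fragment: this activity announces it can handle the
     DIAL action for tel: URIs, so the OS may route any "dial a number"
     intent to it - alongside any other app making the same claim. -->
<activity android:name=".MyDialler">
  <intent-filter>
    <action android:name="android.intent.action.DIAL" />
    <category android:name="android.intent.category.DEFAULT" />
    <data android:scheme="tel" />
  </intent-filter>
</activity>
```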
My question was: with all the extensibility and the lack of security/code signing, what's to stop HackMe Inc writing an application which looks like the Google/Android dialler (or maybe a Google text message app) but is actually dialling premium-rate numbers and/or sending premium-rate SMS while you're sleeping? When presented with two apps with the same name, look and feel, how can you know which is the real Google app and which is the dodgy one? Without signing you can never be sure, and you get the possibility of on-handset phishing.
If an app can make a call, it can be a dialler. If it can dial, it can dial something different (or at different times). If there's no trusted signing of apps, there can be no way of assigning 'trusted' intents/actions to 'trusted' apps - i.e. no way of controlling which apps can make a call.
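The worry can be sketched with a toy model of intent resolution - plain Java, not the real Android API, with made-up package names. Two apps register the same action, and resolution alone gives no way to tell them apart:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of intent resolution - NOT the real Android API.
// Apps register the actions they offer; resolving an action returns
// every registered handler, with nothing to rank a genuine app above
// an impostor that registered the same action.
public class IntentRouter {
    private final Map<String, List<String>> handlers = new HashMap<>();

    public void register(String app, String action) {
        handlers.computeIfAbsent(action, k -> new ArrayList<>()).add(app);
    }

    public List<String> resolve(String action) {
        return handlers.getOrDefault(action, Collections.emptyList());
    }

    public static void main(String[] args) {
        IntentRouter router = new IntentRouter();
        router.register("com.google.Dialler", "android.intent.action.CALL");
        router.register("com.hackme.Dialler", "android.intent.action.CALL");
        // Both apps match the CALL action; without signing, the chooser
        // has no basis for telling them apart.
        System.out.println(router.resolve("android.intent.action.CALL"));
    }
}
```

With trusted signing, resolve() could at least filter or rank handlers by who vouched for them; without it, every match is equally "genuine".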
The Google security model appears to be one of two extremes (they don't appear to have decided yet):
1) nothing other than the bundled dialler can make a call (quite limiting for a supposedly extensible and open platform); or
2) at the other end of the scale, anything can make a call, possibly with a warning first (which is very lame if someone ships a real, genuine dialler replacement and the user gets asked on every call).
The normal/expected model is sensible: apps signed by a trusted third party (just the network if you're Orange - Orange only trust their own keys for the 'full trust' API) run without warnings. Developers can write apps which do most things without restriction, but if you want to do absolutely anything on the device, then get it tested and signed.
I think Android is really interesting; however, I can really see it having problems with its (lack of a) security model.
Re-inventing the wheel
Android is an interesting platform, but why did they have to re-invent the wheel? As far as developers are concerned, it fragments the market: should I build my app for Android, for Java ME, or both? I don't particularly mind the custom VM - you can always build for different targets - but ignoring Java ME means you now need different code for Java ME and Android phones. Most developers will likely build for the established platform first, and for Android only if there really is a market. But without applications I can't see how it will build a significant market segment: the good old chicken-and-egg situation any new platform faces, and one that has killed many a brilliant technology in the past.
I would also be a little put off by the lack of low-level support for programming directly atop Linux. That said, limited access is something we're all very accustomed to on phones, and this phone isn't going to change that.
Also, I'm not convinced there is a legitimate need to replace Java bytecode with Google bytecode. They've created a new tool chain that's incompatible with the old; there had better be a good technical reason for it.
With just-in-time compilation (of Java bytecode or otherwise), the bytecode gets compiled down to native machine code, so in theory optimising compilers would reduce either form of bytecode to the same machine code. However, a register-based bytecode will probably favour one register architecture (in both register count and width) and be at odds with all the others - not to mention designs with register stacks, such as Itanium. A stack-based bytecode doesn't make that assumption.
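For a flavour of the difference, adding two ints looks roughly like this in each form (illustrative listings written from memory, not disassembler output):

```
// stack-based JVM bytecode for: return a + b;
iload_1        // push local slot 1 (a) onto the operand stack
iload_2        // push local slot 2 (b)
iadd           // pop both, push a + b
ireturn        // return top of stack

// register-based Dalvik-style bytecode for the same method
add-int v0, v1, v2   // v0 = v1 + v2, operands named explicitly
return v0
```

The stack form names no registers at all; the register form bakes a register file into the instruction encoding, which is the assumption being questioned above.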
Dalvik is *NOT* open source
Google has not published any source code for Dalvik, or for any of the other software running in the emulator, except for WebKit and the kernel.
So at the moment this is just proprietary software, even though there is at least a promise of it becoming open source at some point (when?).
Dalvik will be open source
At the Future of Mobile conference, Google said that everything goes open when devices ship - and that timing is outside their control.
You need to remember that the first two Symbian devices (the Philips Ileum and the Ericsson R380) were shipped as closed devices.
There is no point in talking about the US or UK markets.
Google and any other people involved in mobile software will only be looking to profit from the explosion that will come in India and China.
In China people save for weeks to buy the latest phone, and they are crazy about typing on their phones - they can't afford voice calls.
So I think Google and others will look to capitalise in emerging markets not the US/UK - even though this is where the initial development and growth will be aimed.
I already run Android on real hardware
... on the sharp zaurus!
1/ download Google's SDK, which includes the qemu virtual machine image
2/ extract the files from that qemu image
3/ rebuild your kernel with the modules Android requires
4/ create a chroot environment under Debian or Angstrom and fire up Android
Still immature and architecture does not seem fully thought out
I was also at that Android "hackathon". The questions I asked had no real answer.
The framework looks interesting because it lets applications smoothly share functionality. One application can take the user into another without the user being aware of the switch between applications. As far as they are concerned, they are just moving between steps in a single workflow.
However, that has implications for API, standardisation, security and testing that do not yet seem fully developed.
The capabilities of one application become an API for other applications on the device. Google seem to be relying on the market to standardise on those APIs, even for functionality that the user would expect to be core to the device (todo list, calendar, media player, etc.). The alliance members are contractually obliged not to fork the platform APIs and deploy incompatible platforms, but services provided by applications are a grey area.
The ability of applications to share functionality also affects testing. An application can expose instrumentation interfaces that let it be controlled when deployed on the device for functional testing. But if an application can seamlessly switch to functionality in other applications and back, instrumentation interfaces that correspond to intents must also be published and standardised, or it is impossible to test the application.