Google seemed on the back foot
I attended too, and during the presentation (which turned into more of a debate), the Google guy was pretty hammered with questions - most of which questioned or dismissed Google's hippie mantra of "peace love and tolerance^H^H^H^H^H^H^H^H^Hopenness". Sadly, in my opinion, Google wasn't able to provide any significant assurances about how it's all going to work safely and securely.
Optional 'self-signing' of applications (with no provision for trusted CA keys etc.) is merely there to group apps from the same provider together, rather than to provide any security. So there's no ability to actually trust/know whether an application is really from Google, IBM or HackMe Inc.
Network operators aren't going to like handsets which they cannot lock down. Orange traditionally lock down their smartphones, stopping some 3rd party apps from working. Networks also love removing (and if possible blocking re-installation of) VoIP apps.
Google seemed to be completely oblivious to the UK market, where most handsets are network branded, modified and heavily subsidised. They seemed to be designing for the SIM-free market, where the owner purchases the handset outright and therefore wants full control over what goes on, and can be taken off, the handset. Sadly, that market in the UK is probably niche at best. Combine that with large businesses wanting to lock down handsets for business use (stopping them from taking data off-site etc.), and it makes a strange stand to be taking. Everything seems to be going down the route of more security - not less - and much of it is being pushed by government and business requirements.
And the question I raised there wasn't adequately answered - it was merely glossed over with something like "trust us, there will be something in there". Everything seemed to be "we don't want to annoy the users with allow/deny requests", but with the repeated caveat of "all this is far from finished".
From how Android works, it appears that apps announce what intents (actions) they offer. These intents are then managed by the Android OS, so when someone wants to perform an intent (pick a photo, send an SMS etc.), the OS will route that intent to the available applications, possibly giving the user a choice of apps to pick from.
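As a toy sketch of that routing idea (this is not the real Android API - just an illustration, with made-up action and app names), imagine the OS keeping a registry of which apps claim to handle which actions:

```java
import java.util.*;

// Toy model of intent routing: NOT the real Android API, purely
// illustrative. Apps register the actions they can handle (like an
// intent filter), and the "OS" resolves an intent to the list of
// candidate apps - more than one match means the user gets a chooser.
public class IntentRouter {
    private final Map<String, List<String>> handlers = new HashMap<>();

    // An app declares "I can handle this action"
    public void register(String action, String appName) {
        handlers.computeIfAbsent(action, k -> new ArrayList<>()).add(appName);
    }

    // The OS resolves an intent to the apps that claim to handle it
    public List<String> resolve(String action) {
        return handlers.getOrDefault(action, Collections.emptyList());
    }

    public static void main(String[] args) {
        IntentRouter os = new IntentRouter();
        os.register("SEND_SMS", "GoogleMessaging");
        os.register("SEND_SMS", "HackMeMessaging"); // nothing stops this registration
        os.register("PICK_PHOTO", "Gallery");

        System.out.println(os.resolve("SEND_SMS"));   // two candidates - chooser needed
        System.out.println(os.resolve("PICK_PHOTO")); // single candidate
    }
}
```

Note the crucial point for everything below: any app can register for any action, which is exactly what makes the routing both flexible and dangerous.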
My question was: with all that extensibility and no security/code signing, what's to stop HackMe Inc writing an application which looks like the Google/Android dialler (or maybe a Google text message app) but is actually dialling premium rate numbers and/or sending premium rate SMS whilst you're sleeping? When presented with two apps with the same name, look and feel etc., how can you know which is the real Google app and which is the dodgy one? Without signing, you can never be sure, and you get the possibility of on-handset phishing.
If an app can make a call, it can be a dialler. If it can dial, it can dial something different (or at different times). If there's no trusted signing of apps, there's no way of restricting 'trusted' intents/actions to 'trusted' apps - i.e. controlling which apps can make a call.
The Google security model appears to be either (they don't appear to have decided yet):
1.) nothing other than the bundled dialler can make a call (quite limiting for such an extensible and open platform)
or 2.) the other end of the scale - anything can make a call, possibly with a warning first (which is very lame if someone ships a real, genuine dialler replacement and gets asked on every call)
The normal/expected model is sensible: apps signed by a trusted 3rd party (just the network if you're Orange - Orange only trusts its own keys for the 'full trust' API) can run without warnings. Developers can write apps which do most things without restriction, but if you want to do absolutely anything on the device, then get it tested and signed.
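The kind of check that trusted-signing model relies on can be sketched with plain java.security primitives (illustrative only - this isn't how any particular platform verifies packages, and the key names are made up):

```java
import java.security.*;

// Illustrative only: sign an app's bytes with a publisher's private key,
// then verify against a pinned, trusted public key. A platform that only
// trusts certain keys (e.g. an operator's) can reject anything that fails.
public class SigningDemo {
    public static byte[] sign(byte[] appBytes, PrivateKey key) throws Exception {
        Signature s = Signature.getInstance("SHA256withRSA");
        s.initSign(key);
        s.update(appBytes);
        return s.sign();
    }

    public static boolean verify(byte[] appBytes, byte[] sig, PublicKey trustedKey) throws Exception {
        Signature s = Signature.getInstance("SHA256withRSA");
        s.initVerify(trustedKey);
        s.update(appBytes);
        return s.verify(sig);
    }

    public static void main(String[] args) throws Exception {
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair publisher = gen.generateKeyPair(); // the trusted publisher
        KeyPair attacker  = gen.generateKeyPair(); // HackMe Inc

        byte[] app = "genuine dialler".getBytes();
        byte[] sig = sign(app, publisher.getPrivate());

        // A signature from the trusted key verifies...
        System.out.println(verify(app, sig, publisher.getPublic())); // true
        // ...but HackMe Inc's signature on an identical-looking app does not.
        byte[] forged = sign(app, attacker.getPrivate());
        System.out.println(verify(app, forged, publisher.getPublic())); // false
    }
}
```

With self-signing alone there is no pinned trusted key on the right-hand side of that check, which is exactly why it groups apps but can't tell you who a publisher actually is.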
I think Android is really interesting - however, I can really see it having problems because of its (lack of a) security model.