735 posts • joined Wednesday 23rd April 2008 16:59 GMT
Hmmm, are you sure? It depends on how big they are and how much they've sunk into the hardware side of things. Mind you, the phone hardware world is coalescing on everything being pretty much the same, so building any decent quality hardware is probably more of a marketing choice than an engineering challenge.
If they're small (=cheap), enthusiastic and talented, and get the balance of the software just right so as to appeal to the talented enthusiast pro or amateur, they could (in their terms) do very nicely indeed. You don't have to become Apple big if your investors aren't expecting or demanding that. The problem with being Apple-sized is that it's way too easy to shrink by frightening amounts, just as is happening to Apple now.
I like the Z10, I have one. If you want to write your own software it's a fairly good place to be.
Re: Latency, it's all about latency
Yes, it certainly is all about the latency.
The computing industry hasn't really done much to solve the problem of slooooow memory for many years; well, forever really. Everything we've got (caches, DDR, QPI, Hypertransport, this idea from ScaleMP, Flash disk caches, etc) is about working round the problem of memory being too small and too slow for the CPUs we have. It is a massively difficult problem to solve, and it doesn't look like it will be solved any time soon.
I program along the lines of Communicating Sequential Processes - forces me to get the scalability built in straight away, but takes a lot of thinking up front. Worth it in the end.
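The CSP style described above can be sketched in Python with threads that share no mutable state and communicate only over queues standing in for CSP channels (a minimal illustration, not anyone's production code — the process names are made up):

```python
import threading
import queue

def producer(out_ch):
    # A sequential process that owns its own data and only sends messages.
    for n in range(5):
        out_ch.put(n)
    out_ch.put(None)  # sentinel: end of stream

def squarer(in_ch, out_ch):
    # A second sequential process; no shared mutable state with the producer.
    while (n := in_ch.get()) is not None:
        out_ch.put(n * n)
    out_ch.put(None)

def run_pipeline():
    a, b = queue.Queue(), queue.Queue()
    threading.Thread(target=producer, args=(a,)).start()
    threading.Thread(target=squarer, args=(a, b)).start()
    results = []
    while (r := b.get()) is not None:
        results.append(r)
    return results

print(run_pipeline())  # squares of 0..4: [0, 1, 4, 9, 16]
```

Because each stage touches only its own data, adding more stages (or running them on more cores, or even more machines) doesn't change the design — which is exactly the built-in scalability the post is talking about.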
Re: Still pugging something
"that nobody really wants."
Hmmm, I don't think you've read the article properly, nor do I think you understand programmers either.
If you look at what's been going on in CPU design over the last 15 years you can clearly see that the CPU manufacturers have concluded that the vast majority of programmers are not prepared to confront the un-scalability of the software they write.
What Happened When Someone Built a Pure NUMA CPU
For example, the Cell processor in the PS3 is perhaps the ultimate physical expression of the benefits of properly embracing NUMA in your software. The Cell doesn't give you the option - its maths cores (the SPEs) are unable to directly address each other's memory - pure NUMA. This obliges the programmer to write software that is wholly NUMA aware. If you do that and know what you're doing you can get performance that even today Intel's biggest chips are only just challenging.
Hiding NUMA from the programmer
Whereas Intel, when they finally went NUMA, hid that from the programmer by making QPI synthesise an SMP environment. It took a lot of silicon, and their design panders to the 'average' use case of one machine running several different programs and needing good performance for each of them.
Which Design Strategy Sold Best?
Now, any kind of market analysis will show that Intel got it right, and that IBM, Sony and Toshiba got it wrong. Sure, Sony put the Cell in the PS3 and have sold a bundle of those, but the number of programmers who can fully exploit the CPU is really very low. IBM realised that too, which is why they dropped it a few years back, much to my great annoyance in the world of high capacity mass-parallelism signal processing, where such architectures are very familiar and exciting.
So what does that analysis tell you? It tells us that programmers, mostly, cannot / do not / aren't allowed to spend time and effort properly architecting their code for true scalability.
So let's carefully analyse what ScaleMP have actually done with their hypervisor. In effect they've done an Intel. Intel, for multi-socket boxes, have a bunch of cores connected on a network (the QPI) that allows any of them to access any memory anywhere else as if it were a true SMP system. All that ScaleMP have done is written a hypervisor that, if you squint only a little bit, provides a bunch of virtual cores connected on a network (the Infiniband) that allows any of them to access any memory anywhere else, as if it were a true SMP system.
Given that, and the clearly continued success of SMP (synthesised or not) in the modern NUMA world, how can you say that "nobody really wants" it? I think ScaleMP will do quite well once system developers realise what it is.
Probably not. It's all BlackBerry hardware, and the OS (BB10) has already been approved. Any additional examination is likely to be quite minor and incremental.
Apple, MS and BlackBerry are in a reasonably good place for maintaining an active listing on this list. MS has a hardware spec that goes along with WP8, so as long as the handset manufacturers stay within that (ie don't add a port labelled Debug Here And Slurp All The Data, or something) then it should be relatively easy to keep WP8 approved. Apple and BlackBerry control their hardware anyway and so are in a good place.
However Android is in a bad place; each handset manufacturer is effectively its own OS provider, as Google don't provide them with pre-built binaries. Samsung might get Android version X.Y.Z certified on their hardware, but that probably won't read across to an HTC handset running the 'same' version.
The UC APL list is quite interesting. There's 4 Androids, only one of which is Android 4. There's a couple of Apples, but only iOS 5. BlackBerry's entries are BB6 and BB7, and any device that runs it. And now BlackBerry have BB10 on there too (or will do as soon as anyone updates the website).
So of all the vendors only BlackBerry have their latest products and software approved. Not a bad position to be in.
"Ironically, web apps are now making something of an HTML-5-and-4G-networks-fuelled comeback, as those two innovations make it possible to deliver an experience far closer to that of a native app than was possible in the far-off days of 2007."
Maybe, but at what cost in power consumption? I like good battery life.
Re: What was UMA architecture then?
In their current APUs the GPU doesn't interact with memory in the same way as the CPU does. That's in spite of the fact that they're on the same die and ultimately share the same DDR3 memory bus. In that sense the arrangements are slightly Non Uniform, and you have to copy data in order to get it from one realm to another.
This new idea means that the GPU and CPU interact with memory in exactly the same way, and that makes a big difference. Software is simpler because a pointer in a program in the CPU doesn't need to be converted for the GPU to be able to use it. That helps developers. More importantly the "GPU job setup time" is effectively zero because no data has to be copied in or out first. That speeds up the overall job time.
I like it!
Re: Could really poke a finger in Microsoft's eye...
"That would give Apple a leg up on Microsoft on a few fronts."
You're a bit behind the times, aren't you!? The whole ethos, raison d'être, and point of Microsoft's Win8 strategy is that apps are the same(ish) on mobile, tablet and desktop/laptop. AFAIK the source code for a Metro app will (by and large) compile up and run on all of those platforms and interact with the user in the same way. Who needs binary compatibility when you've got source code compatibility?
Regardless of how well MS have actually managed to pull this off, if anything is to be said about who has got a leg up on who in this regard then you'd have to conclude that MS are well ahead of Apple.
Having said that, I personally (and seemingly many others) think that MS's strategy (and by extension your suggestion about running iOS apps on OS X) is not really workable. Tablet and mobile are totally different to desktop and laptop. Judging by my own experience and that of everyone I know who has tried, Win 8 is not for the heavy duty content creator, worker, programmer, etc. I've not tried it on a tablet / mobile in earnest, but I could see it working quite well there.
Of course, that doesn't mean to say that Apple won't do it, but I can't see it becoming a mainstream way to produce and consume apps on an Apple desktop.
Re: Brits complaining about space?
"Isn't that a bit rich considering you can barely put a lego figure on a balloon?"
Ultimately there's not much money to be made by having a launch vehicle. Even the North Koreans can build one of those. So the commercial launch market is basically one of high costs and low margins in the long run.
However the satellites are much more difficult to make properly, and the real money is to be made from building them. We've got very good at those and the country has made a bundle of cash that way.
Same in the aero industry. The difficult bits of aircraft are engines, wings and avionics. The rest of it is basic metal bashing and/or variations on papier mâché. And guess what? Rolls-Royce has a hugely profitable order book, Airbus make their wings here and a lot of avionics comes from the UK too. That's cash in the bank, and it's not straightforward for a new competitor to emerge.
McLaren have a nice order book for the MP4-12C, but they could end up making far more money from licensing their carbon fibre manufacturing technique. They now make a single piece hollow section carbon fibre chassis in four man-hours. AFAIK everyone else, including the aviation industry, can't do hollow sections easily and takes a lot longer.
Having said that, we're spectacularly capable of sitting back, resting on our laurels and getting caught out by someone else who's invested and innovated.
Re: American jobs
"Actually didn't the Italians manufacture a fair amount of the ISS?)"
And the Russians, and Canada and other Europeans. The 'I' in ISS does stand for International...
Re: Raspberry Pi Phone
"A true geek would insist on a phone that was based on a real time operating system. That rules out all of the popular phones, and Windows phone as well."
Setting aside the inevitable debate about popularity, Blackberry's BB10 is based on QNX which is first and foremost a hard real time OS.
"...but for "hard" (i.e. actual) real-time performance you need to re-write Linux, which various people have done."
Indeed. The PREEMPT_RT patch set is one of the major efforts. I use it, and it's pretty good. It's a pretty hard RTOS in that the context switch times seem to be pretty stable. It's not as good as, say, VxWorks (which I've also used a lot), which was designed from the ground up for that purpose and has lightning fast context switch times.
Still, I gather that people have applied PREEMPT_RT to various ARM Linuxes, so there's no fundamental reason why one couldn't homebrew an Android featuring it.
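For the curious: getting a thread into a real-time scheduling class on Linux (PREEMPT_RT or otherwise) goes through the standard sched_setscheduler(2) interface, which Python exposes directly on Linux. A minimal sketch — raising yourself to SCHED_FIFO needs root or CAP_SYS_NICE, so this falls back gracefully when unprivileged:

```python
import os

def try_go_realtime(priority=10):
    """Attempt to move this process into the SCHED_FIFO real-time class.
    Needs root or CAP_SYS_NICE; returns the policy actually in effect."""
    try:
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))
    except PermissionError:
        pass  # unprivileged: stay on the default time-sharing scheduler
    return os.sched_getscheduler(0)

policy = try_go_realtime()
print("SCHED_FIFO" if policy == os.SCHED_FIFO else "SCHED_OTHER (default)")
```

On a stock kernel a SCHED_FIFO thread can still be held up by long non-preemptible kernel paths; PREEMPT_RT's whole job is to shrink those, which is where the stable context switch times come from.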
Once I started developing multithreaded apps for an RTOS I started bitterly regretting the fact that most operating systems aren't real time (Windows, Mac, etc). RT means that you can do a really good job of anything involving media or human interaction.
I'm liking my Z10 a lot so far. For the waverers out there it's certainly well worth a look.
One of BB's biggest problems is the shops. The sales staff don't care, and it's difficult to get a proper go on one to see what it's like outside of the limited demo app. Because BB do some things so differently to Apple and Android you need that proper look to see why it's worthwhile and to decide whether it's what you really want.
Anyway, BB are clearly having a good go at surviving and reviving.
Don't Beggar Thy Supplier
Apple might have to be a bit careful. $1.6 billion is a lot of money, even for Foxconn. Apple would seem to be playing hardball, but they can't do that too much. A contract won't count for much if Foxconn ever decides that working for Apple isn't worth the hassle, especially in China...
Re: And yet, and yet ...
"What's different about rtf+pics and Word documents+pics?"
Rtf writers tend to store pictures as hexadecimal text representations (the default, though binary is theoretically supported) of some underlying supported picture type, and quite often they default to uncompressed windows bitmaps. And they often throw in a metafile copy of the original. So you can find that a 500k jpeg can turn into tens of megabytes of text. It depends on the program one is using, but for maximum compatibility with other programs the file size gets pretty bloaty. Office doesn't have to do that, so it can minimise picture storage space.
Ok so it's not a big deal with storage so cheap, but it can be a right pain in the arse if you're doing something old fashioned like emailing the file to someone.
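To put some rough numbers on the bloat: take a 1920x1080 photo that's 500kB as a JPEG, store it as an uncompressed 24-bit bitmap, hex-encode it (two ASCII characters per byte), and throw in the metafile copy on top. The figures here are illustrative back-of-envelope stuff, not measurements of any particular RTF writer:

```python
def rtf_picture_size(width, height, bytes_per_pixel=3, with_metafile=True):
    """Rough size of a picture stored the worst-case RTF way:
    uncompressed bitmap, hex-encoded, optionally duplicated as a metafile.
    Headers are ignored for simplicity."""
    bmp = width * height * bytes_per_pixel
    hex_encoded = bmp * 2                      # two hex chars per byte
    return hex_encoded * (2 if with_metafile else 1)

rtf_mb = rtf_picture_size(1920, 1080) / 1024 / 1024
print(f"500 kB JPEG -> roughly {rtf_mb:.0f} MB of RTF text")
```

So a half-megabyte JPEG can balloon to tens of megabytes of text, which is exactly the "tens of megabytes" pain described above.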
Re: And yet, and yet ...
@john f***ing stepp
"Since various models of Microsoft Word are incompatible with one another"
Well, MS do supply a free plugin for Office 2000 onwards that allows them to open and save newer Office documents. Been using it for ages, works very well. They can't be accused of screwing their customers on that particular score. Ok so newer features don't get replicated in older versions but that's almost always not a problem; it's rare for a user to stretch even 10% of office 2k...
It's much better than sending RTFs around the place. Have you seen how big RTFs can get when you start including pictures?
My view is that the anarchic nature of Linux's desktops makes it hard to roll it out across a corporation. Got a bunch of people using Gnome2 and want to upgrade to the latest distro version? Then there's a lot of work or training to be done. Even MS have forgotten that corporate customers don't like change of that sort; no one is buying into Win8 for that reason. Only Apple so far seems to have retained the view that desktop is desktop and mobile is different. If that situation persists then their corporate penetration may start increasing.
Re: Dynamic Balancing already exists - FLAW
If the rotor is flexing as it spins then that will absorb energy. Not a problem on a 4x4, you're driving. But in an energy storage device they will be throwing energy away into heating up the rotor as it flexes.
So the question is, does it flex continuously, or does it settle into a shape and stay in that shape? If the latter then only a small amount of energy will be lost. If the former (which I suspect will be the case), then it won't work very well.
Using a fluid as the weight is a bad idea for the same reason - energy will be lost in stirring (thus heating) the fluid. And that will be a continuous loss, not a one-off as it settles into shape.
Round buildings don't work
The furniture doesn't fit...
Re: Memristor vs. Flash
"On the other hand, Flash has a real-world history to go by"
Yes, but Flash's history isn't exactly that brilliant. In fact it's bloody awful. All that fussing with wear levelling, block erasing, error correcting and all that fretting about whether it's gonna just forget what's been written is a pain in the rear. Its only plus points are its moderately useful capacity and its non-volatility. Otherwise, in absolute terms, it is fairly painful to use.
Whilst it's true that we ordinary mortals haven't got our hands on a memristor to evaluate it for ourselves, HP have been publishing all sorts of promising looking papers on their performance, longevity, etc. Remember, even if HP have to back off from what they say they can do by a factor of 10 (speed, capacity, longevity, take your pick) they're still going to comprehensively obliterate Flash.
Re: > scrawl "except for ZFS which is ok so far as we're concerned" somewhere in the middle of GPL2
"I get to sleep in Bazza's bed wearing big muddy boots"
The wife got there first with the muddy boots...
This is excellent news for everyone, and well done to the guys/girls who've done this. Thank you :)
However I do think it's a real shame that licensing concerns prevent the inclusion of this in the Linux kernel. Whatever those concerns are they're surely relatively petty in comparison to the benefits we'd all get. Surely for something as significant as ZFS some rules could be changed specifically to accommodate it. Couldn't Linus or whoever just scrawl "except for ZFS which is ok so far as we're concerned" somewhere in the middle of GPL2?
It's still open source code. It's not as if anyone's going to be chasing anyone else for money if they use it. It seems unnecessarily obstinate, a bit like refusing a fantastic Christmas present simply because one's favourite Aunt has used gift paper that you didn't like... It didn't seem to worry anyone in the FreeBSD camp.
Hmmm, can I hear the drone of a cloud of hornets rushing towards me from a recently upset nest?
"I mean £500 for a Blackberry that doesn't even come with encryption as standard anymore."
Err, you're not entirely correct. BES is still wholly supported, encrypted of course. MS Exchange ActiveSync is encrypted. All the other email types (Hotmail, IMAP, POP and SMTP) all support TLS, SSL, etc.
So they can all be encrypted, just not in the manner to which you were accustomed with BIS. Me? I'm using ActiveSync with an MS Exchange server - works really well.
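For the IMAP/POP/SMTP case, the TLS in question is just the standard STARTTLS upgrade on mail submission. A minimal sketch using Python's smtplib (the host, port and credentials are placeholders, not a real account):

```python
import smtplib
import ssl
from email.message import EmailMessage

def send_over_tls(host, port, user, password, msg):
    """Submit mail over an encrypted channel: connect in the clear,
    then upgrade with STARTTLS before authenticating."""
    ctx = ssl.create_default_context()
    with smtplib.SMTP(host, port) as s:
        s.starttls(context=ctx)   # everything after this is encrypted
        s.login(user, password)
        s.send_message(msg)

msg = EmailMessage()
msg["From"], msg["To"], msg["Subject"] = "a@example.com", "b@example.com", "hello"
msg.set_content("Sent over TLS, much like a BB10 IMAP/SMTP account would.")
# send_over_tls("smtp.example.com", 587, "user", "secret", msg)  # needs a live server
```

Note this protects the hop to the mail server, which is what the BB10 account settings give you; it's not the end-to-end BIS-style arrangement the original post was missing.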
Re: Still popular here
Big difference is that Blackberry's Balance is quite a good BYOD solution. AFAIK WinPhone is no more BYOD-friendly than a corporate laptop...
They're on debenhamsplus.com at £148 for the 64GB model, £87 for the 16GB. £87 is getting close to being the same as a tank of diesel for the car...
PC World has the 64GB model at £139 - gone up £10!
The Register, unlike the Daily Telegraph, hasn't got a loss making print operation to support. And I suspect that the staff count at The Register is considerably smaller than the Telegraph's. If that means the hacks at the Reg can still get a few beers like this one <--- on expenses then I see no risk of them putting up a paywall...
Re: Not gonna happen
"Canadian government will not allow this."
Perhaps, but they couldn't prevent it without nationalising the company. They, and many other western governments, are able to control their mobile security to quite a high extent by being with Blackberry. However, that's been heavily subsidised by other Blackberry customers.
Quite a lot of those other customers aren't likely to care who owns Blackberry, so the market pressure for the company to stay non-Chinese is going to be quite low. Only then will the governments learn the true cost of the capability that Blackberry currently provides.
I like the idea! Your Flash cached hdd will already have a RAM cache too. The only thing missing is the supercap UPS, and one could probably solder one of those on oneself. I might just try it myself.
Re: Different access models
@David D Hagood
That's certainly true for Flash, but not so for things like memristor memory where every bit is individually read/writable. Ok, memristor isn't here yet, though HP (yes, Hewlett Packard!) are reportedly close to coming to market.
Memristor is quite interesting. It's fast (1GHz), so by the time you apply the same DDR tricks to it, it can be the same speed as today's memory SIMMs. It's not block erased like Flash has to be. It can scale - HP say they *could* do 1 petabit per square centimetre (HDDs manage a few gigabits in the same area). If all that comes to fruition a PC could have just one memory SIMM and no other storage of any sort whatsoever. And it's non volatile. The same goes for other technologies like phase change memory AFAIK.
Bit of a game changer.
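A quick sanity check on those density figures (taking "a few gigabits" as 5Gbit/cm², an arbitrary choice within the range quoted above):

```python
PETA, GIGA = 10**15, 10**9

memristor_bits_per_cm2 = 1 * PETA   # HP's claimed density
hdd_bits_per_cm2 = 5 * GIGA         # "a few gigabits" - 5 chosen arbitrarily

ratio = memristor_bits_per_cm2 / hdd_bits_per_cm2
bytes_per_cm2 = memristor_bits_per_cm2 / 8
print(f"~{ratio:,.0f}x denser, ~{bytes_per_cm2 / 10**12:.0f} TB per cm^2")
```

Roughly five orders of magnitude denser than a disk platter, and about 125TB in a square centimetre — which is why "just one memory SIMM and no other storage" isn't as mad as it sounds.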
Re: Have I missed the point?
"So I *think* that means it works *almost* like you want already. When you load an exe, the exe file itself and all it's dll files are mapped into RAM as they are accessed. If your disk is fast, then this process will also be fast."
Nearly, just go one step further.
The act of loading a .exe is to transfer something into memory to be executed. It involves a process not unlike linking, in that all the system call references are finally pointed to the actual system library routines, etc, etc.
But if the application is already in CPU mapped memory there's not much point in going through this step every time you want to run the application. You might just as well do that when you install it, so 'running' the application becomes merely a matter of branching to the app's first machine op code instruction.
We've been there before. In the old old days you could buy ROM chips to plug into your BBC Micro. To run whatever they contained you just told the CPU to start reading op codes from the beginning of the ROM, and off it went.
With memory mapped storage the idea of a block based file system is an artificial encumbrance left over from the old days when we didn't have enough memory. The only reason the idea persists in a wholly memory mapped age is to do with OS architectures. It's a pretty big job to change how the OSes work.
It might never happen. As other posters have pointed out there's plenty of end user benefit to be had in *not* having an app in memory, ready to go at a moment's notice (security, control, etc). All of those benefits would have to be re-established if storage became the modern equivalent of plugging in ROMs, and it probably isn't worth it from a cost-benefits point of view.
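The underlying mechanism already exists for data: memory mapping makes a file's bytes directly addressable rather than read through block I/O. A small sketch (the file and its "op codes" are obviously pretend):

```python
import mmap
import os
import tempfile

def map_and_peek(path, n):
    """Map a file read-only and return its first n bytes straight
    from the mapping - no read() call, no intermediate buffer."""
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            return bytes(mm[:n])

# Create a small file standing in for an installed application image.
fd, path = tempfile.mkstemp()
os.write(fd, b"pretend these bytes are op codes")
os.close(fd)
print(map_and_peek(path, 7))   # b'pretend'
os.remove(path)
```

Execute-in-place is essentially this applied to code: the pages fault in on demand, so "loading" an already-linked application could reduce to branching to the first instruction, BBC Micro ROM style.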
Re: the Unix vs. Windows battle in Linux
"things like NetworkManager have become unmaintainable."
I'm glad it's not just me. NetworkManager is *horrible*, always has been in my opinion, and needs serious work. Setting up a network connection shouldn't be that difficult.
I think it's a good example of why text configuration files can be problematic. Get the text file wrong somehow and the user friendly tools can fail. And someone's manpage is a weak way of defining the API between a daemon and a GUI, and there's no real means of strongly validating anything. Coping with API changes can be a nightmare.
A rigid config API (which is all that the Windows Registry is) makes it harder to screw up the data that is stored, and so long as the user land GUI isn't buggy it is reliable. But if there's damage repairing it is fiendishly difficult.
Pros and cons both ways. Perhaps XML config files with well controlled schema would be a good way forward. That's what I do with the software I write.
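As a flavour of the XML-config-with-schema approach: full XSD validation needs a library like lxml, but even a poor man's check of required elements catches the "text file wrong somehow" failures early. A minimal stdlib sketch (element names invented for illustration):

```python
import xml.etree.ElementTree as ET

REQUIRED = {"interface", "address", "gateway"}   # the 'schema', illustratively

def load_net_config(xml_text):
    """Parse a network config and reject it unless every required element
    is present. Real code would validate against a proper XSD schema."""
    root = ET.fromstring(xml_text)
    found = {child.tag: child.text for child in root}
    missing = REQUIRED - found.keys()
    if missing:
        raise ValueError(f"config missing: {sorted(missing)}")
    return found

cfg = load_net_config(
    "<net><interface>eth0</interface>"
    "<address>192.168.1.10</address>"
    "<gateway>192.168.1.1</gateway></net>")
print(cfg["interface"])  # eth0
```

The point is that a malformed or incomplete file is rejected at load time with a clear error, instead of a GUI tool silently failing the way NetworkManager's can.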
Re: Yeah, I feel it to bro.
' "the fragmentation of Linux". Why does that not bother me at all.'
Linux has at least three different ways of packaging up software, the worst of which is a tarball autoconf tools thing. I wish the community would settle on just one. It's unnecessary, causes a lot of work, or means that you aren't reaching 2/3rds of the potential user base, or means you're dependent on distros adopting your wares.
I also hate the way in which Linux distros rely on an Internet connection to resolve package dependencies; you try doing that when your Internet is down.
At least software for Windows (and I guess Mac) tends to come with everything it needs on the CD. That's achievable on Windows and Mac because the developers can be sure of what is already on the machine. But with Linux the developers have no way of knowing that, not even within a single distro.
Re: This remains one of the Mac's best selling points
I do something similar, which works very well for me. I run Win 7, and VmWare Player hosts a number of Linux and Solaris VMs. Ok, so no OS X in that, but I have a nice desktop that works pretty well and all the Unix I need. I do the same thing at work, but there I use VM Workstation.
I have just one colleague who actually runs Linux as his desktop. He's sometimes left in the lurch.
I find Linux very frustrating. For example Centos / Redhat 6.2 just doesn't work properly Java-wise out of the box, and as a result Eclipse CDT just doesn't work in a fresh install from DVD. Seems like no one at Redhat actually tried it before shipping it. Now I can cope with that, but how on earth is a newbie supposed to deal with unnecessary problems like that?
Re: Doesn't go far enough
That incompatibility problem with CDMA / CDMA2000 phones arose because of where CDMA came from. It started off as a proprietary standard from Qualcomm, wasn't public, and every network using it did it slightly differently. Nothing wrong with that, that's just how it happened. GSM was always a wholly public and very complete standard, so the compatibility of handsets across networks was guaranteed. Both set 'market moods' for what customers expected. The introduction of European standards in the US was always going to upset the mood there once people realised that the barriers to network swapping were now artificial, not technological.
GSM is kinda limited to a cell size of 35 kilometers (can be 70 in Range Extended mode). In the wide open spaces of the US that could be inconvenient to set up. CDMA, CDMA2000 and UMTS all use the same radio modulation scheme which undoes that range limit (power permitting) but introduces other problems like cell breathing, which makes network planning difficult. LTE seems to have done it properly, blending GSM's simple network planning with high spectral efficiency and 100km cell size if that's what's wanted.
Pity they forgot about voice calls.
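The 35km figure falls straight out of GSM's timing advance mechanism: the TA field has 64 steps (0-63), each worth one bit period of about 3.69µs, and the propagation delay covers the distance twice (out and back). Working it through:

```python
C = 299_792_458            # speed of light, m/s
BIT_PERIOD = 48 / 13e6     # one GSM bit period: 48/13 us, ~3.69 us
TA_STEPS = 63              # timing advance field maximum

step_m = C * BIT_PERIOD / 2          # /2: the delay is a round trip
max_range_km = TA_STEPS * step_m / 1000
print(f"TA step ~{step_m:.0f} m, max cell radius ~{max_range_km:.1f} km")
```

Each step is about 553m, giving a maximum of just under 35km — and Range Extended mode works around the field's limit rather than the physics, hence roughly double.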
Re: Doesn't go far enough
Here in the uk it depends on who you get your phone from.
If you go to Car Phone Warehouse or Phones4U all their phones are unlocked anyway, apart from iPhones. I'm pretty sure they give 2 year warranties too, apart from iPhones. Spot the pattern? Anyway, that's what I was told in their shops just a couple of weeks ago.
So if you're going for a contract, it's possibly worth going to those shops in preference to O2, Vodafone, Three and EE.
"It's common sense."
Not that you get a lot of that in American politics. Nor in any other country's politics.
Re: I wonder
"Quite the opposite: They had a problem. It was fixed."
That's nuts. You wouldn't buy a car if the salesman says that it'll break down all the time, he doesn't know why, but hitting it with a hammer seems to fix it...
A problem occurred. They don't know why at this point in time. They managed to find a workaround for this trip, but if it happens again they might not be so lucky. Whatever caused this problem is currently not fixed; they've not even had a chance to look at it yet. They will fix it, but until they start having regular flights without serious problems like this their credibility isn't so high.
I admire their tenacity (ie, Elon's big fat wallet) and ambition, but those alone do not make for a reliable rocket. You do actually have to get the design, build and prep right as well.
Re: I wonder
"If all the nay and doomsayers ("Disaster for Space X", "massive setback" etc) posting on previous register stories on this are now thinking they wish they had waited a bit to see how it all panned out?"
In the context of their ambitions, this trip has definitely not been a good thing. Manned flight on that thing? At the moment, no way. They've not even begun to establish any credibility in that line at all; quite the opposite.
More troubling is that this seems to be a problem in preparing the vehicle - blocked helium lines. It didn't happen last time, it's happened this time. It doesn't cost that much money to be consistent, yet so far they've been inconsistent. That means they've not got a satisfactory build process being rigidly adhered to. That tells their customers that launching with them is, currently, a bit of a gamble; it mightn't be built or prepared properly, even if the basic design is OK. Now, in the light of that, go ask your insurance company about that premium reduction.
In the space business you can't rely on 'getting away with it', especially when it comes to manned flight. Ask NASA. Space X have got away with it this time, but really that's not doing anything to build up a good reputation.
Oh, and this trip isn't finished yet. It's not yet been recovered post splashdown in the ocean. Only then will they have got away with it.
I do wish them success. It's a relatively new team with clearly a lot to learn, and I hope they do. However unless they rigorously analyse all defects they may well get to a place where they're regularly launching and being successful, but it might be that they've no real knowledge as to why. Nothing desperately wrong with that, but it would make introducing upgrades really difficult. The true measure of a good engineering team is when they can punch out another design with minimal difficulty.
We'll soon know if they're going about it the right way; the defect rate should drop off rapidly. If not, they'll keep having these defects cropping up.
Re: Give them a break!
@Voland's right hand,
Re: Soyuz - it is indeed very good. As good an example of "if it ain't broke..." as you could hope to find.
It doesn't really compete with Ariane 5 (Low earth orbit: A5 21,000 kg, Soyuz R7 5,500 kg. Geo transfer orbit 9,600 kg vs 2,400 kg), but it's still mighty impressive.
Re: Give them a break!
@Destroy All Monsters
"This bazza man is in process engineering, I see."
'Fraid not; digital signal processing's my thing. But I'll bluff my way through almost anything...
Re: Give them a break!
Ah well, so far as Ariane is concerned, very few have failed. Their success rate for Ariane 5 is currently 63 out of 67, and they've not had a problem since 2002. That is the current gold standard. Ariane 4 was phenomenally reliable too. They got so good at building those they ended up not bothering to test fire the upper stage engines prior to launch, and they weren't failing.
That's the success rate Space X have to aim at, and so far they're a long way from that. Being cheap isn't good enough if the failure rate is poor; customers and their insurers don't like losing expensive satellites at launch. As for manned space flight? I wouldn't....
Musk's strategy is bold, but it does depend on working out the problems. So far they seem to be having different problems every time they launch; inconsistency is not very encouraging. This failure is a new one, it didn't happen last time. That means they've not built this one to the same standard as the last one. Unless they can cure that weakness in their manufacturing they'll never get it right.
The Blackberry Z10 can sort of be used as a desktop. It's got HDMI out, and you can pair a Bluetooth mouse and keyboard with it. You get a mouse cursor and everything. Plug it into a TV or monitor, et voila; a desktop machine.
BB's Docs To Go isn't the complete Office package (simple edits of Word, Excel and PowerPoint), but it's free with the device. That, plus the very good email system, and you've got a fairly useful thing.
"... needs to be beaten to death with the Salmon of Correction."
The way this week's been going it needed a dose of the surreal.
Thanks for telling us of your spoons.
"Yes, but the ultimate goal is that you don't ever have to go home. You will be able to do everything you normally do, including watching TV with the security camera's over the net, that it will be possible for you to stay at work 24/7."
Such is modern life. We have ansamachines to speak to people we don't want to talk to. We have PVRs to watch TV we don't want to watch. We have freezers to store food we have little intention of eating. We have social networks for dealing with people we don't want to meet in person. We have inboxes for messages from people we don't want to hear from. Now we're heading towards having machines doing most of the living in our houses that we're not in most of the time.
It's all very convenient!
Re: The problem is not authentication or lack thereof
"Strict product liability laws that force manufacturers to fix bugs would be a first step, and it's encouraging the FCC recently compelled HTC to release Android security updates for phones they'd just as soon not want to support"
Yeah right, good luck enforcing that. And in the context of Cerf's point, you obviously don't know where air con units (and everything else) are manufactured. Ever heard of a place called China? Updates could be more dangerous than the stock firmware.