Understanding why an accident has occurred is going to become a whole lot more difficult. The police and their expert investigators are pretty good at diagnosing mechanical causes of accidents. They're not going to stand a chance when it comes to investigating a hack attack on a car. A good hack attack would leave no log entries anyway.
The manufacturers aren't going to want accidents investigated properly in case they are held liable for a poor design that is easy to hack in the first place. They're not interested now, and I doubt their attitude will change. [true example: A friend's car set off its own airbags whilst driving down the motorway. Despite that she was able to keep control and get off the road. Complaints to the manufacturer went utterly unanswered. Had she lost control and been killed, consider the scene that the police would find: a crashed car, airbags deployed, and a corpse. Nothing would have pointed to the true timeline of events, and it would likely have been blamed on driver error. No one knows how many times this has happened]
Which all means that drivers are going to find it very difficult to persuade either the authorities or the manufacturers or the insurers that the cause of a crash was some external hack. The driver will likely get the blame, especially if they are killed in the accident. The only way to get something done would be if hack attacks happen too many times to be ignored. By which time it will be too late for a lot of people.
On the whole I think we'd be better off without such levels of comms and automation in cars. The one that makes me laugh the most is "remote shutdown and tracking of a stolen vehicle". It's going to be easier to nick the cars in the first place via the inevitable flaws in the software. And all the thief needs is a 3G jammer to stop you tracking and stopping the car.
Re: Simpler Solution?
"Perhaps its from a philosophical point of view, that the GPL is their preferred license."
Perhaps, perhaps not. They are a money making profit machine, so they'd be quite motivated to use whatever it takes to maximise that. However they have open sourced quite a lot of the things they've done, which is certainly to their credit, so who knows!
Re: Simpler Solution?
"maybe if they used BSD then the memory *would* be a bottleneck, and a slower one than the linux networking bottleneck."
All right, suppose that BSD was crap at things like memory, scheduling, I/O, etc. If so, how does it manage to overcome all that to deliver better network performance than Linux? A network stack is a pretty thorough workout for pretty much everything the OS has to offer.
Short answer - FreeBSD is not slow at those other things.
Re: Simpler Solution?
"FreeBSD may have superior network performance, but Linux has superior performance in most other metrics that matter for a kernel."
Maybe, but they've seemingly identified network performance as their bottleneck; hence their saying they want to improve on that. Other kernel performance metrics (memory allocation, context switch times, etc, etc) are evidently not limiting their system performance, so FreeBSD as-is would bring them a performance benefit (assuming they've not got dependencies on the specifics of Linux).
It's not surprising that their bottleneck is network performance. As soon as you start handling vast amounts of data the system I/O performance is king and almost nothing else matters in comparison.
For instance one of the biggest problems GPUs have in super computers is that they're not directly addressable node to node. To get data from one GPU to another in a different node it has to go via a PCIe bus to a CPU/memory, back across the PCIe bus to some sort of NIC, across some sort of interconnect (Myrinet, whatever), from the destination NIC across another PCIe bus into another CPU/memory and finally across that PCIe bus again one last time to the destination GPU. Great compute performance, terrible I/O, resulting in sustained performance not being anything like as good as peak performance (though of course that's very application dependent).
Very good indeed.
As per title!
You need to read more Feynman. If a theory / law / hypothesis doesn't fit correctly measured physical results then it doesn't matter how complete or satisfying that theory is, it's wrong.
NASA's chaps would be well aware of the career limiting ridicule that would ensue if they reported a result as unexpected and 'ridiculous' as this without very careful checking. They've already done a control experiment and got another unexpected result. The very fact that they've published this at all implies that an awful lot of work has gone into checking their experimental set up, and they still can't explain it away.
And anyway, measuring force is such a trivial thing to do with very good accuracy that there's hardly anything to check.
Re: I'm waiting with bated breath...
"We did X and observed Y. We were surprised by Y. Can anyone help us confirm that we didn't overlook unknown factor(s) Z? Thank you!"
Indeed, and in fact it's pretty rare that a theoretician has successfully predicted a result that has been confirmed experimentally. In particle physics it's happened, I think, only twice. Normally an experimenter demonstrates beyond doubt that something weird is happening, and the theoreticians spend the next few years thinking up an explanation and then even longer dreaming up reasons why they hadn't thought of it first.
For entertainment go and ask a theoretical physicist to explain the Mpemba Effect, and don't let them bluff their way out of the challenge.
Yep, it's a steam punk star drive alright.
Plus if they do actually do a superconducting version there ought to be a lot of mist floating around as well. That'd drive the SPF (Steam Punk Factor) off the top of the scale.
If they are going to super cool it they might as well just add a few steam nozzles and noise valves just to convince everyone that something is causing it to work. As humans we're just not ready to believe that something can sit there working without flame / smoke / ear splitting noise / deep visceral rumbling / significant humming / a lot of sparks and stuff. Ion drives (which we all know do actually work) produce nothing more convincing than a slight blue glow, which is barely enough to believe in at all.
Re: And have you met Mr XTP?
"FEC is a waste of time, bw and cost if you have a great SNR.
FEC can be a waste of time and doesnt fix things if the SNR is higher than expected."
I suggest you learn something about communications theory. You can never, ever, eliminate noise-generated bit errors in a system by increasing SNR. And that clever chap Mandelbrot showed us that it doesn't really make sense to talk about an average bit error rate either.
No matter how good your SNR is you have to have a way of dealing with error. Parity checking with retransmission is one way, FEC is another, etc. Even then you're only improving the chances of correct operation, not guaranteeing it.
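Since the thread is about FEC, a minimal sketch may help show what "dealing with error" looks like in practice. This is the classic Hamming(7,4) scheme, purely illustrative (real links use far stronger codes such as Reed-Solomon or LDPC): three parity bits protect four data bits, letting the receiver locate and flip any single corrupted bit without a retransmission.

```python
def hamming74_encode(d):
    """d: list of 4 data bits -> 7 code bits laid out [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # parity over code positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # parity over code positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # parity over code positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """c: list of 7 code bits -> 4 corrected data bits."""
    c = list(c)
    # Recompute each parity; the syndrome is the 1-based position of any error.
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 * 1 + s2 * 2 + s3 * 4
    if syndrome:                    # non-zero -> flip the offending bit back
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
code = hamming74_encode(data)
code[4] ^= 1                        # simulate one noise-induced bit flip
assert hamming74_decode(code) == data
```

Flip two bits, though, and the decoder "corrects" the wrong position; which is exactly the point above about only ever improving the odds of correct operation, never guaranteeing it.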
Re: Compiler and runtime(s) also guaranteed defect free?
They could use Green Hills' compiler; that's very good and is formally developed. It's the foundation of their INTEGRITY operating system (see the Wikipedia entry), which as a customer I think is also very good. Not cheap, but it is value for money, if you see what I mean.
I suppose this new microkernel is a direct competitor to the kernel inside INTEGRITY, though of course there's more to a complete and usable OS than just the kernel.
”All of this misses one point, the decision of who to use is not just a price based one. As far as features are concerned, AWS is a still light years ahead of the others. ”
However one of the most fundamental points of marketing is that cleverness and 'quality' don't sell. Look at the iPhone: it didn't have multitasking, didn't even have copy and paste, and ran a fairly rubbish power-hungry OS; put it in a nice case with a large screen and it sold by the million. Android is still a pretty low quality piece of software with many disastrous flaws; it sells by the bucket load. BlackBerry's BB10 is a fantastic OS with a well thought out design that actually allows you to do many quite complicated things easily; no one bought it. VHS vs Betamax: VHS won because it was cheap and no one cared for Betamax's quality. Itanium was quite a good chip; no one cared.
History shows us that if your customers need to be geniuses to see why your offering is better than anyone else's then your customer base is at best 5% of the overall market. The other 95% are either too lazy or stupid to work out what the best solution is for themselves and will resort to judgements on price and trivial differentials such as looks, feel, etc. AWS may well be light years ahead of anyone else's cloud; most of the market won't care or understand why.
Re: Fundamental limits
”There is room for differentiation. There are workloads that benefit from different types of computing, (e.g. search and FPGA's), that the cloud providers could offer.”
Not really. Someone somewhere else will be offering the same blend of chips (FPGA, GPU, CPU). If a cloud provider is buying the 'best' chips in the open market then the others can also buy the very same ones.
The only way to truly differentiate is to make their own chips and be better at that than anyone else. It's the only way to get exclusivity. But getting into the chip design business is a huge challenge and fraught with pitfalls. Being 'compatible' means a struggle to be better than the original. Being incompatible means no one will have any software or firmware to run on your cloud and any efficiencies you managed to achieve don't count for anything.
The fundamental limits on the costs of running a cloud are hardware, bandwidth and energy. Once providers have all thinned down to those costs alone there's not going to be much to choose between them. Energy ain't getting any cheaper... Are we going to see Google Nuclear before too long? Amazon Atomics anyone?
He kinda has to
"Nadella is banging the "cloud" drum as loud as he can..."
He's basically obliged to do that. The term 'cloud' has become a buzzword that the majority really believe in, including MS's investors. If he doesn't bang the "cloud" drum as loud as he can then in the eyes of MS's investors he's letting them down. And in the USA that's a sure way to get sued to smithereens.
"If, however, I'm not completely batshit bananas here then many - if not most - businesses agree with me that we have yet to be convinced that this is the future we want to buy into. "
I'm sensing only sanity in your entire comment. No bananas in sight.
Being a buzzword doesn't mean that "cloud" is right. Like you I know that it's a terrible solution for a very large proportion of business users out there.
I also think that it will prove to be a bad thing for consumers too. Data sovereignty and law matter just as much to individuals as to businesses. Who is to say whether pictures or comments that are perfectly normal and acceptable in, say, a European culture won't fall foul of law now or in the future in, for example, the USA? There are already problems with companies like Facebook imposing American prudishness on their European users.
Putting one's pics and stuff into someone else's cloud means that you're dependent on that country's politicians not passing draconian laws to your detriment. Ok so that might not ever become a problem, but uploading a picture today means taking a lifelong bet. Even trivial Tweets can come back to haunt you. That sort of thing can be career limiting.
And who is to say that clouds won't evaporate? There's no guarantee of one's personal data being permanently stored by a cloud. If you've got your lifetime's collection of photos in MS's cloud and MS suddenly goes bankrupt, where are your photos now? Most consumers won't have a clue about that possibility and will actually believe the advertising.
"They are betting on the cloud - and mobile - to the point that they are willing to simply throw away all previous segments and businesses, and along with it any hope of being viewed by the general public, sysadmins, or people who sign the cheques as different (let alone better) than Oracle."
Unfortunately Apple has shown the market that business customers don't generate as much profit as the consumer market. MS have to follow suit. Investor pressure again; they want a piece of that action. In MS's investors' eyes business customers can go hang if the consumer market is going to deliver 10 times as much profit.
As it happens I don't think that MS are capable of generating a consumer market like Apple have. They're just not that sort of company. If sales figures are anything to go by, Windows 7 and XP (i.e. old stuff) are what you use at work whilst Android & iOS (i.e. not Windows) are what you actually buy for yourself as a consumer. MS's investors can't see that and even if they did they probably wouldn't care. Investors aren't into keeping shares long term; all they want is for the price to rise in the short term so they can make a quick buck.
If taken to the extremes this strategy will drive business computing users increasingly towards Linux. I like and use Linux professionally. It's so close, but unfortunately I don't think it is yet the universal business tool that many would like it to be.
A Windows Domain is a very good way of controlling a business's desktops. Linux can use a domain for authentication and other services and can indeed serve it (Samba 4), but it's pretty hopeless in comparison to Windows when it comes to controlling what desktop users can and cannot do.
In a business setting you quite often need the level of control that Windows gives you. Total user freedom is sometimes something you have to take away so that your business can be seen to be complying with the various laws that control different business sectors.
If Linux did get (free?) management tools akin to a Windows Domain then I think that it would be game over for MS. Even if they did reverse their strategy and try to keep their traditional markets as a plan A.5 I think that the business sector would tell them to get stuffed.
"The American military has previously shown interest in sending messages over the air, using "mobile optical links" which are "imperative for secure quantum communications capabilities"."
Maybe I've not quite got the hang of this, but I'd have thought that putting enough laser power into the atmosphere to ionize it and create a plasma channel would in itself cause a fairly bright flash of light. Presumably that would be fairly easily seen or detected from a long way away.
I don't suppose that would be entirely popular with soldiers, sailors or airmen; secure communications that cannot be intercepted but nonetheless give your position away with a big bright flash to everyone within your field of view...
Re: It does affect OSX
"Correlation != causation"
Indeed not, though given that the code base is (presumably) quite similar for the OS X variant there's a good chance that it is the same issue.
"Not sure what you're really referring too but I had to make a guess I would think this is about Android and vendors not shipping updates?"
Yep, that's the one.
"OS level updates can't be pushed out via Play."
And that doesn't look like a very clever idea now, does it? Come to think of it, Google must have looked at Linux, OS X and Windows (to name but a few) with their auto updating mechanisms and decided that pushable updates were a bad idea. What on earth made them think they would never have to do the same?
"The Android system partition is read only for a good reason."
And what is that reason? Judging by the amount of malware in the Android ecosystem it's certainly nothing to do with stopping bad things running on a handset.
"I guess what Google needs to do is either get more vendors shipping vanilla builds that Google will manage the over the air updates for or split the system partition up a bit so that vendors can add their junk in there but google can offer partial OTA updates for vital security updates. Kernel updates will be tricky as usually SoC vendors are very lazy. They'll get some old crap version of Linux working, release that as a BSP and forget about it. So if fixes for major issues are pushed to the Linux mainline it may take forever for those fixes to actually appear in the kernels for all of the devices out there."
Or you could do it properly, which is what Microsoft have tried to do (and mostly succeeded): define a hardware architecture to which manufacturers must comply, so that MS can push out updates as and when necessary. Just like they do on PCs.
The Internet of Things is going to suffer quite badly too unless some major players take control and set up a reasonable hardware standard to which everyone can comply.
Well, at least we seem to have a code base where bugs can be found, located and fixed fairly quickly. So much better than OpenSSL, where the 'fixing' part never really was achievable.
The most important feature that anything secure must have is the ability to rapidly update and deploy in the face of bugs. Kudos to the LibreSSL guys for bringing that back. Now, if only Google can learn that lesson too...
Re: Can anyone explain? I'm genuinely curious.
"As much as it's clearly to everyone's benefit to have a competitor to x86, I don't understand what the business case is for investing in SPARC equipment. Wouldn't x86 be faster, cheaper and better supported for any sort of workload at this point?"
As Keith21 said, it depends on your tasks. For some really, really big tasks, 'cheaper' means cutting the power bill and forget everything else. And if your enormous task requires a lot of one sort of operation to be performed it's worth optimising that in silicon because you can slash the power bill. On a really big setup power is lots of $millions a year, so a few expensive boxes that can halve that are worthwhile.
I don't know much about databases, but I know a little about IBM's POWER. They added a decimal maths co-processor, i.e. a core extension that does maths much as you would do it on paper. This is very different from the traditional floating point unit in that it has (so I understand) arbitrary precision. What's the point?
Well, when you're doing calculations for international finance you're basically doing currency conversions, which are floating point math. And if you're dealing in $Billions, conventional floating point arithmetic isn't accurate enough; you can be a few cents out. That's unacceptable. So the software has to do the math long hand.
Doing that on x86 takes forever (= a lot of power used), whereas on POWER there's a co-processor that does it far quicker. And if you're building the foreign exchange system for an entire country that's a big enough system for you to be worried primarily about power consumption as your major cost. And having the system scalability as Keith21 explained means that you can do the whole job in one machine at high efficiency.
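For a feel of why the long-hand decimal maths matters, here's a toy sketch in Python (figures invented): a million ten-cent fees summed in binary floating point versus exact decimal. POWER's decimal unit does this kind of arithmetic in hardware; on general-purpose x86 you pay for it in software, as with Python's decimal module here.

```python
from decimal import Decimal

# 0.1 has no exact binary representation, so each float addition carries
# a tiny error and the running total drifts. The Decimal total stays exact.
fp_total = sum(0.1 for _ in range(1_000_000))
dec_total = sum(Decimal("0.10") for _ in range(1_000_000))

print(fp_total)     # not exactly 100000.0; already a rounding headache
print(dec_total)    # exactly 100000.00
```

A fraction of a cent per sum sounds harmless until you're reconciling billions across an entire country's foreign exchange, at which point "a few cents out" is a show-stopper.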
And guess what; one of IBM's big markets is banking. Oracle's big market is databases. They're both doing elaborate things in their silicon to target very specific markets.
What is Memory?
It's quite interesting to examine what 'memory' is nowadays. Although we talk about 32TB of RAM in some sort of SMP configuration, actually it's synthesised from high speed internal networks (not Ethernet, at least, not yet) between processor nodes and memory controllers. This sort of architecture has percolated down to x86; Intel has QPI and AMD uses Hypertransport. These are similar in concept to current mainframe architectures, it's just that they don't scale up to thousands of cores.
If you ever start hanging round the HPC world you quickly realise that most of it is all about I/O speed, not CPU speed, provided the CPU is basically 'right' (and Oracle's announcement is basically about getting the CPU right; they did the I/O ages ago). Get the I/O right and you can pile up the CPUs until you have the necessary performance. Get the I/O wrong and you cannot do that. This was the reason why AMD briefly had the upper hand over Intel when Opteron first came out; Hypertransport was way better. For a good example of getting the I/O right take a look at the K computer and its six-dimensional torus interconnect, and note how power efficient it is.
Good to see SPARC being updated still; I hope it does well.
Making SPARC good for their database is a natural thing for them to do. IBM do the same for their POWER processors, which have features that are good for international financial applications. For really big systems that sort of thing can make a significant difference to the power bill.
HP aren't able to do the same anymore; Intel must surely be super reluctant to do anything to Itanium. I notice that most of the op codes that gave some sort of benefit to Itanium have now found their way into Xeon, I expect Itanium to go no further, and HP will become just another x86 box shifter with a crummy line in 19" rack rails.
Re: Point of Order
It is extraterrestrial from your average Martian's point of view. 'Extraterrestrial' is merely an incomplete translation resulting from The Register's shameless use of Google Translate in ripping off the Martian Times' article about the rock.
"Yes this sucks but its the app writers fault not google, they request ludicrous permissions for their apps."
It's not ludicrous from their commercial point of view. If they can make more money by doing so then they will. They have to make a living after all, and Android is a crummy platform to try and sell software on given that piracy is appallingly easy.
Google have a slight problem. If they improve the end user's control of permissions then the free apps will disappear because the app writers will lose their profit making model. And without major changes to Android it will remain ludicrously trivial to pirate paid-for apps. In short, Google have carelessly pushed out an underdeveloped, badly thought out mobile ecosystem that will one day cause catastrophic damage to their reputation, and it's too well entrenched now for them to make the necessary changes.
Re: "lost hull integrity"
"Why not just say 'it sunk'?"
Or why not just say "it went kaboom"? Oh, wait...
Re: Huh? Ransomware?
"Tesla have been doing this for a while now so linux in a car is hardly earth-shattering news.
And I won't mention Apple CarPlay."
There's nothing wrong with Linux as such, just like there's nothing wrong with QNX (which is what Apple CarPlay runs on). They're OSes much like any other OS; they're pretty good.
The trouble starts when you put a network connection in and run a bunch of poorly written and oh so very exploitable apps on top. Then you need an automatic update system, staff to look for and fix problems across all versions of the software, and so on. That's a very expensive thing to do.
Plus it's not like the mobile industry where you can get away with dropping support for year old models. People will be expecting the support to last as long as it does for the rest of the car. That's really expensive.
The economic impact is potentially quite high. Say some script kiddie found a way to stop all Fords working and actually did so. In a country like the USA that means half the work force aren't going to work that day. That kind of thing shows up in GDP figures really quickly.
"The beardy types will be spluttering their coffee all over their terminals at the thought of ransomware on linux."
Well, there is already ransomware for Android, and that's Linux isn't it?!
"From what I can make out though, the plan is for linux to control the non-essential stuff like nav, climate control, bluetooth - not the stuff that's critical to making a car move. Unless I read TFA wrong I don't see any mention of a car's ECU running linux?"
Well, if the infotainment system is displaying data like fuel economy that has to come from the ECU. Which means there's a data connection between the two. That may be a path for an attack on the ECU.
"Nope. We'll probably just reinstall the code."
Yeah right, like a busy parent is going to be happy doing that instead of taking the kids to school. Plus their paired mobile gets a good going over too.
Whether or not an infotainment hack could get as far as the engine management doesn't really matter; it's still going to really piss people off.
There's also a worrying clue in its name: INFOtainment. These things display information like fuel economy and so forth which they're getting from the engine management. Which means that there's a data connection from the engine management to the infotainment system. And that too can be hacked (unless it's a one-way link). So a hacked infotainment system could easily be a hacker's gateway to the engine management.
It's basically Samsung's Tizen. And we all know how popular that is...
I can't help but feel that we're sleepwalking into another bunch of unnecessary security woes, just like has happened with the Internet of Things. Nor is that thought aimed solely at automotive grade linux.
The last thing any of us want is a flat battery caused by some bitcoin mining malware that's found its way onto the infotainment system in our cars. It's happening to our thermostats, smartTVs, set top boxes, fridges, etc. Why would our cars be immune?
Also a car system would be a rich target for cryptolocker type malware; "You wanna drive this car then you're gonna have to pay". That kind of threat doesn't work so well for, say, a set top box; we'd just throw it away. But we won't throw away our car just like that.
So if the car industry wants to pick this up then they're going to have to get smart with continual updates, software security expertise, and all the expensive things necessary to keep an Internet connected system safe these days. They're not used to providing that level of support for software.
Re: Re no one can pay for anything without Uncle Sam says OK
Er, cash? Cheque? EBanking? Direct Debit?
Yet another stunning success for NASA, ESA and ASI. The pictures from the surface of Titan were particularly impressive.
This doesn't instil a reputation for permanence for anything Google-ish. Why would anyone choose to use their services and apps for business use when there's no guarantee that it will be available in six months time?
At least with software running on your own hardware you're more in control of its demise...
It's all about the software. It doesn't matter if it's a mobile phone, desktop, server, mainframe or supercomputer, the software is always King.
The Japanese K machine, 4th on the list, is the highest placed pure CPU computer. I know it uses a bespoke interconnect but there's probably a ton of software for it, or at least lots of source code that can be easily adapted. That might make it the 'quickest' computer out there because no one is wasting time writing software...
That barrier is now gone and NVIDIA says three vendors have products ready to roll that bring GPU-assisted co-processing to market. The three are: .....
Separately but at the same event, AppliedMicro also announced that its ARM-based X-Gene “Server on a Chip” is now in a state of “readiness” and that “ … development kits [are] available immediately, and production [will be] silicon available imminently.”
Hmm, if that little collection of news doesn't get Intel quaking in their boots then I don't know what will.
Thing is, there's not a lot Intel can do. They could buy (for example) AppliedMicro and shut it down, but that would merely encourage all the others: it would signal that Intel sees ARM based servers as viable. Alternatively they could buy them and keep the ARM line going, but that would say the same thing too.
And where would their "x86 can do anything including low power" stance be then? Intel aren't going to change their development direction so far as I can see, but it is surely a risky strategy. What if ARM really does turn out to be a better server chip than Intel?
Thing is, if Intel did make ARMs they'd be the best in the world. Intel are very good at silicon manufacturing, and it would give them a tremendous advantage in the ARM market.
"A British mechanical giraffe evaded American secret services, infiltrated the White House and got close enough to Barack Obama to bite his head off."
Sounds worse than Austin Powers...
Polishing a turd?
No doubt that there's room for improvement in the science of running bytecode. However the Wikipedia article on Dalvik reports that ART currently isn't necessarily any faster, and programs take up more storage room.
Ahead-Of-Time compilation is surely an obvious thing to do; isn't that what any 'proper' compiled language like C/C++ does?
Being so obvious, one wonders why no one has done it before. Is it because most End User License Agreements generally forbid permanent translation of the software into another form? That means you cannot take a collection of object code called 'the software' and convert it into another CPU's op codes. So whilst that still allows you to do interpretation or just-in-time compilation (there's no permanent storage), such a clause won't let you do ahead-of-time compilation.
Of course Google is in ultimate control of all things Android and therefore has the ability to make the developers go along with Ahead of Time compilation.
New Psion 5
Definitely. There must be a new one of these!
Re: They do everything this way
One cost problem that I'm sure Amazon has not foreseen is the risk they run by offering virtualised desktops as a service. Basically they want companies to provide their staff with Amazon Web Services desktops instead of a real PC at their desk / server room.
The only trouble for Amazon is that a large part of their costs will be electricity. And consumption will be down to what exactly those desktop users do with the machines. And if the users do what most people do it will involve a lot of web browsing, and quite a lot of Googling. So Google will be in control of quite a lot of the CPU cycles that Amazon's virtualised desktops end up running.
There's not a lot Amazon could do about it. They could slow down the VM so that it executes fewer CPU cycles per second, but then it would be less responsive and users would start complaining as they walk away from the service. To address this sort of thing properly would mean peering inside the VM to see if its high CPU usage is purely down to Google's latest front page doodle before dynamically reducing its clock rate. But that's very invasive, and I'm not sure users would tolerate that either.
If Google offered virtualised desktops as a service (do they?) then they're in an advantageous position. They could selectively deliver less CPU intensive Google Doodles to their desktop users whilst delivering electricity hungry ones everywhere else.
Seems you have attracted a delusional down voter.
Hadn't they seen the stories about how Android AV software is powerless (thanks to Google's design) to actually do anything about any malware it finds?
Re: Would someone PLEASE explain to me...
Call me cynical, but I suspect it has more to do with advertising revenue than anything else.
All the major players want you to not have a file system (at least, not one that you see). They want you to use their file system in their cloud. Ideally they would like you to want to use it, so they make it free, they add some crude sync features, etc. But just to make sure they make it harder or impossible to use the file system on your device / PC / Mac / whatever.
The catch is that once your stuff is in their cloud they can (because you accepted the EULA which, at clause 754.1.a.iv, says so) rummage through your stuff and sell the resulting advertising data.
It's not just Apple. The latest versions of Office + Windows save files to SkyDrive (or whatever it's called now) by default.
Paid-for cloud services (Dropbox?) may be less inclined to rummage through your stuff, though shareholder profit pressure will erode that fairly quickly, I should imagine.
In Apple's case I suspect that their original motive to hide the file system from you on iPhone was more to do with DRM control of music on your device. If you can't see the files, it's harder to rip them off. However, I suspect that nowadays their motivation is more to do with advertising than DRM.
Anti-competitive? Protocol Changes?
One can argue that no net neutrality is anti-competitive. Imagine if net neutrality hadn't existed back when Google were getting started and their robot Web crawling traffic had been suppressed. Where would they be now? Nowhere.
New stuff needs new protocols and without the ability to push that new data around the net it will be difficult for them to get traction.
Another thought; any protocol can become a transport for any other protocol (e.g. iSCSI, FCoE, etc). So if everything else became layered on top of, say, https (ie, 'normal' and encrypted), how would any ISP or network owner be able to do traffic shaping? They wouldn't be able to distinguish different types of traffic because the data they're carrying wouldn't be readable by them. Hopefully.
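To illustrate the layering point, here's a minimal sketch (everything in it is made up for illustration; this is not a real tunnelling protocol) of framing arbitrary binary protocol data as an ordinary HTTP POST. Once sent over TLS, a traffic-shaping middlebox would see only a generic encrypted web request:

```python
# Minimal sketch: frame arbitrary protocol bytes as an HTTP/1.1 POST.
# Sent over https, a middlebox sees only opaque encrypted traffic --
# it can't tell iSCSI-like data from ordinary web browsing.

def frame_as_http_post(payload: bytes, host: str = "example.com") -> bytes:
    """Wrap an arbitrary payload in a generic-looking HTTP POST request."""
    headers = (
        f"POST /sync HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Content-Type: application/octet-stream\r\n"
        f"Content-Length: {len(payload)}\r\n"
        f"\r\n"
    ).encode("ascii")
    return headers + payload

# Some made-up binary protocol data -- could be anything at all.
inner_protocol_message = bytes([0x01, 0x02, 0xFF]) + b"SCSI-ish opcodes"
wire_bytes = frame_as_http_post(inner_protocol_message)
print(wire_bytes[:20])
```

The point being that the ISP's shaping kit only ever sees the outer layer; the `/sync` path and payload here are stand-ins for whatever the inner protocol happens to be.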
Yep, it was a neat solution. RIM's problem was that Apple had already trained everyone in what to expect from a tablet. So when RIM came along saying "Actually there's a different way" (and it was a very different way indeed!) no one listened or understood. Their loss, but an even bigger loss for RIM.
Nowadays the idea is moot because the PIM / Email client software on the Playbook is perfectly capable of connecting to all sorts of things in its own right (Exchange, IMAP, etc), though arguably with less security for the corporate employer than the Playbook / BB6/7 / Bridge way. And alas the Playbook line isn't going anywhere either, so that's it.
It's much the same with BlackBerry Balance on BB10 phones. It's a really neat idea, it's pretty well bullet proof from the point of view of corporate security and personal privacy, it's a very good BYOD solution, far in advance of what everyone else is doing. Also it's the first actually useful Multi Level Security System I've ever heard of (all the others have been terrible usability kludges). And once again very few people out there have ever heard of it let alone know what it could do for their corporate users. Yet with pretty much every Android app now working just fine in BB10, you really can have a mix of personal fun and properly good (i.e. accredited) corporate security.
And other things like the Magic Roundabout in Swindon, the Arc de Triomphe in Paris, or indeed any half-busy roundabout or T junction in the UK / anywhere in Europe, and indeed driving anywhere in any Italian city. I don't think it's going to deal with those things very well at all (except by staying perfectly stationary).
Re: Not just a blow to Microsoft's attempts to assure non-US customers
Perhaps it's time for the Family Cloud. I'll explain...
There's a Linux distro called Zentyal that comes with an open source clone of MS Exchange called OpenChange. There's something else in it too called SOGo that apparently adds ActiveSync, CalDAV, CardDAV, etc; ideal for mobiles. This plus a light dusting of a domain name and dynamic DNS could form the basis of a small home server that offers cloud-like things (storage, mail, contacts, etc), and could connect to and sync with other home servers at your parents', brother's, etc.
In short, how hard would it be to do a strictly peer-to-peer small scale cloud that is hosted on small home servers in our own family homes with access restricted to the family + selected friends? Not very: the right ingredients seem to exist, though no doubt there'd be a bunch of work to do. But it would mean that you and your whole family know exactly where your data is at any one time.
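As a rough sketch of the peer-to-peer sync part (everything here is hypothetical; a real family cloud would use something like rsync over SSH with proper conflict handling), the core "last writer wins" logic between two family servers, modelled as two local directories, is tiny:

```python
# Rough sketch: last-writer-wins sync between two "family servers",
# modelled here as two local directories. Hypothetical example only --
# a real setup would use rsync/SSH and proper conflict handling.

import os
import pathlib
import shutil
import tempfile

def sync_newest(dir_a: str, dir_b: str) -> None:
    """For every file seen on either side, copy the newer version over the older."""
    names = ({p.name for p in pathlib.Path(dir_a).iterdir()} |
             {p.name for p in pathlib.Path(dir_b).iterdir()})
    for name in names:
        a, b = pathlib.Path(dir_a, name), pathlib.Path(dir_b, name)
        if not a.exists():
            shutil.copy2(b, a)          # only B has it
        elif not b.exists():
            shutil.copy2(a, b)          # only A has it
        elif a.stat().st_mtime > b.stat().st_mtime:
            shutil.copy2(a, b)          # A's copy is newer
        elif b.stat().st_mtime > a.stat().st_mtime:
            shutil.copy2(b, a)          # B's copy is newer

# Demo: two directories standing in for two family homes.
home_a, home_b = tempfile.mkdtemp(), tempfile.mkdtemp()
pathlib.Path(home_a, "addressbook.vcf").write_text("old contacts")
pathlib.Path(home_b, "addressbook.vcf").write_text("new contacts")
pathlib.Path(home_b, "calendar.ics").write_text("family calendar")
os.utime(pathlib.Path(home_a, "addressbook.vcf"), (1000, 1000))  # older
os.utime(pathlib.Path(home_b, "addressbook.vcf"), (2000, 2000))  # newer
sync_newest(home_a, home_b)
```

After the sync both homes hold the newer address book and the calendar; the fiddly bits a real system needs (simultaneous edits, deletions, access control for "family + selected friends") are exactly the "bunch of work to do".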
Oh, and if there were such a thing and it worked well that would be a real alternative to the big US owned services like MS, Google, Apple, Amazon, etc. This court ruling is bad news for those companies because it completely undermines their attempts to portray your data as being safe and sound in their custody. What I've outlined above is a way for an alternative to be provided without the need to build huge data centres all over the world.
Re: I thought Space-X were supposed to be making space flight cheaper...
Joking apart, it does seem somewhat odd. When SpaceX first got going their aim seemed to be to have a very cheap way of manufacturing rockets, meaning that the rockets themselves could be disposable yet profitable. That was even reflected in their engine design.
Now it seems to be all about re-usability. So does that mean that they've discovered that rocket science isn't that cheap after all?
Re: Why permit the secrecy
"If, for example 85% of smartphones on the market infringe 200 Microsoft patents & require a licence is it not arguable that these licences should be frand?"
They're not taking anyone to court (unlike Apple); they're licensing. They're not refusing to license to anyone. If $8 per handset is anything like accurate, that's quite reasonable. Sounds like it's fair, reasonable and non-discriminatory already.
Re: I assume ...
There's nothing stopping anyone doing a decent ext4 file system driver for Windows, and it could become something that everyone just knows they have to install.
"You are seriously thinking too small. If it had NOT been bought by Facebook, I would say that within five years a majority of PC gamers would not have been using normal screens any more."
Oh I quite agree with you, there's no doubt it would all be very attractive for gamers, and from what I've seen so far it would be very good.
I'd go even further than that: I have multiple monitors hooked up to my development machine, but there's never enough screen real estate to have all the dozens of debug, code and app windows open all at once. Imagine having several dozen monitors rigged up into a rough hemisphere-like arrangement with oneself sat at the centre. Tricky and expensive to achieve. However a Rift could do you a virtual one of those with ease, and it would be fabulous. I want one of those quite badly.
"@bazza: You don't know what Oculus Rift is? Check some Youtube videos and contemplate what almost was."
@vociferous, I won't bother following up your recommendation. I've tried one of the pre-production prototypes. Cool, yes. Finished, no. Heading the right way, certainly. Perfectly pitched to reel in the wealthy and compulsive Zuckerberg, yes.
I don't know whether that was exactly their game plan, but $2billion now is a handsome return on their efforts. They've probably had a lot of fun doing it, and now they don't even need to go through the depressing process of marketing their product.
For any start up, getting bought out is most certainly factored into the business plan as a possibility. With someone like Zuckerberg around it's well worth having buy-out as a primary goal.
As for Zuckerberg he's now got to make more than $2billion out of it. That might be quite difficult.
No one is going to use Facebook in 3D from their mobile. Facebook ain't the gaming platform of choice and Sony, Microsoft and Steam aren't going to give him any slack. It would make sense if he bought Steam too, but I suspect that they're not for sale at any price. And Facebook owning Steam sounds like a disaster anyway.
He could just market devices himself, but exactly how does that get more people spending more time in Facebook? It's just an elaborate peripheral. That surely isn't Facebook's primary business; people using Rifts on Steam/XBone/PS4/PC games are not going to be directed towards Facebook by those platforms. The world of CAD, engineering and science might be an additional marketplace but that's not a mass market.
This is a golden time for start ups. Make up some "cool" idea. Start developing it, make it look possible, lay on a demo. Do a bit of corporate twerking in Zuckerberg's direction and collect the $Nbillion that he'll send your way after a casual chat over a mediocre coffee.
Google and Apple aren't far behind I suspect, but Facebook really does throw its cash around like it's going out of fashion. Are they the biggest corporate suckers, or is that still HP?