* Posts by bazza

2208 posts • joined 23 Apr 2008

Google Native Client: The web of the future - or the past?

bazza Silver badge

@Thomas Wolf, encore

I've done a bit more digging.

Jazelle exists, but isn't widely used. Japanese phones seem to use it a fair bit, but that would appear to be that. Seems a shame. I tried to find out whether Blackberries use it with no success. Given their pretty good battery life, perhaps they do.

Hardware acceleration has made everything else on ARM pretty good - video/audio codecs, GPUs with adequate grunt, etc. etc. So why not Java?

If the chip manufacturers (TI's OMAP, Qualcomm's Snapdragon, etc.) don't put it on then no one can use it. And given that a large fraction of the mobile market (Android & iOS) doesn't support Java anyway, why bother to lay down silicon that's not going to be used?

Seems a shame - hardware accelerated Java could provide a really nice solution to the problem of write once run anywhere in the mobile space, but I guess there's too many vested interests to prevent it ever taking off. There's Apple with the iTunes store and Google with the Android store for a start; and neither of those parties want to open up their platforms to apps from just anywhere...

bazza Silver badge

@Thomas Wolf

Almost right.

A lot of ARM devices implement Jazelle, ARM's hardware Java byte code execution engine. In essence you can execute Java byte code natively alongside ARM instructions: there's an ARM instruction (BXJ, if memory serves) that switches the core into Java state, so that what it fetches next from memory is treated as Java byte code; it's as seamless as that.

All of a sudden Java doesn't seem so stupid on a mobile platform, does it? Though I don't know if any of the Java ME environments out there or Android's Dalvik use it.

bazza Silver badge


"The world+dog is moving away from languages like C and C++ for a reason."

Not entirely correct. Those who really want the ultimate in performance are using them in a big way. Many datacentre people are wondering if C/C++ is a better bet than PHP, etc., from the point of view of electricity bills. And a surprisingly large fraction of the HPC community is still using Fortran. Almost all OSes are written in C/C++ one way or another. Big applications like database engines, CAD packages, CFD modellers, etc. are not written in JavaScript.

bazza Silver badge

@John Miles 1

Careful - you'll be turning JavaScript into MATLAB, and you reeeeeeeeeeeeeeeally don't want to do that if you want high performance!

Other languages have done just that. Motorola extended C (and hence C++) on PowerPC with AltiVec, adding new types like "vector float" and "vector int". If you wanted to add four floating point values to another set of four, it was a simple operation along the lines of ans_vec = vec_add(vec1, vec2), guaranteed to complete in a single clock cycle. A very good way to easily get stunning performance out of quite slow clock rate PowerPCs (equivalent to a x4 on the clock rate if you were really good).

I think that deeeeeep down in the Intel C compilers there's a very similar idea hidden away from view but still accessible if you go looking. Intel seem much more focused on providing complete libraries of useful routines that hopefully mean you as the programmer don't have to get that low level. But the low level stuff is still there somewhere.

bazza Silver badge



"Even assuming the sandbox is secure, the fact that the code is processor dependent makes it a really dumb idea."


Well, except that currently that's what you have to do to get ultimate performance. Until either x86 (eek!) or ARM or PowerPC or SPARC or MIPS (all much nicer) achieve a complete world wide instruction set monopoly we're stuck with that. And if ultimate performance isn't needed then you'd probably use Java, JavaScript, etc.

In effect Google are trying to provide a completely standardised API for native app developers on all platforms, so that apps are write-once-compile-many-debug-once. History has shown that such things tend to fall to the lowest common denominator, which is a sure-fire way of not being able to exploit the maximum potential of any given hardware platform - which rather defeats the whole point. I'd be surprised if they could make that outperform a really well written .NET or Java (dare I say even JavaScript? On second thoughts, meh) virtual machine *and* keep it truly platform independent.


"With PNaCl, Emscripten wouldn't be necessary and apps could benefit from near native execution speeds"


Yes, but PNaCl on top of LLVM gets away from the main thrust of NaCl which is to be purely native on the client. If PNaCl becomes their main effort then really they're just trying to compete with any other VM based cross platform ecosystem like .NET, Java, etc. Why bother doing that when they're years behind all of those?

bazza Silver badge


"Very good and insightful post."

Thank you very much :-)


I believe that was exactly Blizzard's point, especially considering that not every single native application is a "properly written, decently compiled piece of native code."


Well, maybe. Blizzard is right in that a piece of JavaScript can be run better by having a better interpreter as well as the developer actually improving the source code's own efficiency. But I suspect that Blizzard is being rather optimistic if he thinks that an interpreter can make JavaScript better than ordinary native code.

For example, imagine that they were to develop a JavaScript engine that automatically spots opportunities for parallelisation in the source code. Fantastic!!! That can all be vectored up, maybe executed on a GPU if it's big enough to warrant it, amazing!

However, all those tricks will also exist in the native application world too. Many already do (loop unrolling, etc.; the native C/C++ compiler writers have been trying pretty hard over recent years, especially Intel). All you have to do is set the right compiler switches to tell it to do what it can, et voila, a faster application is produced. And ATI and Nvidia are trying very hard to make useful APIs (OpenCL and CUDA) available to developers to simplify the task of doing really big number crunching.

So there's nothing special about JavaScript that means that there are some magical optimisations that can automatically be applied that couldn't also be applied to C, C++ or indeed any other language. And if they are applied to a native application at compile time, that's almost always going to be better than suffering the overhead of an interpreter. Conceivably one might run the interpreter in a separate thread on a different CPU core to get round this, but that consumes a core's runtime which might otherwise be dedicated to executing application code.

Similarly I think Google are crazy if they think they can successfully and usefully abstract all the fancy high performance computing APIs that are currently available to native application developers. For instance, will they make NVidia's CUDA or ATI's OpenCL available as a standard part of the NaCl environment? If not then already they're way behind the curve. It will likely always be the case that as APIs for high performance come along (like CUDA and OpenCL) NaCl will always be playing catch up, won't be able to support them on all platforms, or will just not bother.

The only way to achieve better performance on given hardware than is achievable through compilers / interpreters spotting the obvious or off-loading this 'n' that to a GPU is to have explicit parallelisation in the original source code. This has traditionally been perceived as very difficult, so most people and indeed almost the entire computing industry has tried to avoid tackling this head on.

There is some progress though. Scala (for those who've not heard of it, a JVM language that interoperates closely with Java) brings the old fashioned Communicating Sequential Processes paradigm from 1978 (!!!) back to life. This simplifies the business of developing applications that are inherently parallel. It takes a big shift in how one goes about designing a computer programme, but trust me it's worth it. This is (currently) a much better starting point than trying to get a compiler or interpreter to work it out for itself. Likewise OpenCL and the like are making it easier to exploit the mass parallelisation available in a GPU.

bazza Silver badge

Sorry, but quite long...

Good article, thank you.

There’s a whole lot of horse shit being spouted all over by the various people quoted in the article. For instance:

"While JavaScript is a fabulous language and it just keeps getting better, there is a lot of great software that isn't written in JavaScript, and there are a lot of software developers that are brilliant, but they would rather work in a different language,"

Entirely wrong. JavaScript is merely an adequate language for certain purposes. Programmers use other languages for sound technical reasons (performance, libraries, etc), not just because they’d rather not use JavaScript. If Brad Chen thinks that all programmers should somehow want to use JavaScript (or maybe some other single language) then he’s starting off on the wrong foot.

And just who is Linus Upson trying to kid:

"One of the key features of the web is that it's safe to click on any link. You can fetch code from some unknown server on the internet,"

So Google have never got stung by a dodgy web link then? There have never been holes in JavaScript interpreters have there?

"Before, when you downloaded a native application, you had to install it and you had to trust it. With Native Client, you can now download native code, just like you download JavaScript and run it, and it's every bit as safe."

That may be true, but they’re carefully chosen words. “Every bit as safe” doesn’t mean perfectly safe.

And how about this little gem:

"You've added this arithmetic to make sure the jumps are in range. But the real issue is that if it's really clever, a program will arrange for a jump to jump past that arithmetic," says Morrisett. "You might protect one jump but not the next one."

So Morrisett is saying that someone might just do a little manual hacking to insert op codes in order to achieve something nefarious? It depends on where the verification is performed. If it’s done on the client as the code is running, then this whole NaCl sandbox idea will fall to the oldest hacking trick in the book. And using x86’s segment registers is mad. In today’s world of virtualisation there are many fine instructions on x86 from Intel and AMD to make strong sandboxing realistic, yet Google are choosing to ignore all that in favour of an archaic monstrosity from the dark ages of computer architecture history?

And Google haven’t done an ARM version yet. Haven’t they seen the mobile revolution happening just down the corridor in the Android department, in Apple’s shack, at Microsoft and literally everywhere else? Not having an ARM version is soon going to look pretty stupid if it doesn’t look stupid already… And isn’t PNaCl just mimicking Microsoft’s .NET and Sun’s Java? Does the world really need another one?

“Chrome will only accept Native Client applications distributed through the Chrome Web Store, and Google will only allow Native Client apps into the store if they're available for both 32-bit x86 and 64-bit x86”

So NaCl won’t be the web then. Users won’t be able to click on any link out there in the Web and get a NaCl app because they’ll have to visit a Google run store? That sounds *very* inconsistent with what Linus Upson was saying earlier.

But hang on, Chris Blizzard is talking junk as well:

“Once you download the native code, there's no opportunity for browser optimizations.”

Err, isn’t that the whole point of native code? Isn’t it supposed to be fully optimised already, no room for improvement without a hardware upgrade? No amount of software re-jigging inside a browser is ever going to make a properly written decently compiled piece of native code run any quicker than it already does?

“A source code–based world means that we can optimize things that the user hasn't even thought of, and we can deliver that into their hands without you, the developer, doing anything.”

Hmmm. I wonder how many web site authors, plug-in developers and the like have spent feverish hours in the middle of the night trying to fix a web site or plug-in to cope with Mozilla changing something YET AGAIN. Hasn’t Blizzard heard about the debacle over Firefox version numbers? His statement is correct only if the ‘optimisations’ don’t affect the standard, but Mozilla (and everyone else, I guess) hasn’t exactly agreed what the standard is, nor kept to it:

"What are you going to do about version compatibility? What are you going to do about DLL hell?”

Indeed. What are you going to do about plug in hell?

And this is a real beauty:

“Chen and Upson also point to efforts like the Emscripten project, which seeks to convert LLVM bit code to JavaScript. Even if Native Client isn't available in other browsers, Upson says, this would allow all Native Client applications to span the web.”

So we’re going to write in C++. That’ll get compiled to LLVM bit code. Ordinarily that would get executed in some sort of VM, just like .NET and Java, in which case I might as well have chosen C# or Java in the first place. But just in case that VM is missing, the LLVM bit code will get re-compiled to JavaScript, which in turn will get interpreted down to x86 op codes. IN THE NAME OF HOLY REASON HOW IS THAT SUPPOSED TO BE A GOOD IDEA? Sorry for the shouty shouty, and I’m not religious in any way either, but sometimes things make me snap. It’s not April 1st is it? No, good; I thought I’d better check.

Right, enough of the rant. Web apps (Java, JavaScript, whatever) are web apps. Native apps are native apps. They serve different purposes. NaCl is another Google effort to corner more online advertising revenue by setting up another app store ecosystem that doesn’t actually deliver any tangible benefit to the end user. All this talk of ‘trust’ doesn’t matter two hoots. In both models you have to trust either the app developer or Google. Why is trusting the app developer worse than trusting Google? You could even argue that a single point of trust is worse - just look at the problems we've had when a single CA (DigiNotar) got hacked.

Unless they pull their fingers out very quickly NaCl is going to wither and die as the consumer world transitions wholesale to ARM. This transition is likely going to be driven like we’ve never seen before by Microsoft bringing out Win8/ARM.

On the face of it Linux (well, Android), Apple’s and Microsoft’s propositions are far more sensible (though Oracle might do for Android yet in the law courts). Java and .NET do a decent enough job. Microsoft will also have to do a decent job of making the x86 / ARM choice a non-issue to native developers (and the word is that they’re doing quite well on that front). Apple has made it relatively simple for developers of native apps to target the whole Apple eco system.

Battery life is going to be king for many years to come, and NaCl looks like a very bad way of extending battery life to me, not least because it’s currently stuck in the land of x86. If Win8/ARM machines start issuing forth in large numbers and last whole days on a single charge, who’s going to want a power hungry x86 machine running anything, least of all Chrome and NaCl?

Linux.com pwned in fresh round of cyber break-ins

bazza Silver badge

@AC, re: @Captain Scarlet

"Clearly this problem is with configuration/implementation of the security on the Linux systems involved, probably with a little user complacency thrown in for good measure and not a fundamental problem with the quality of Linux."

I'm not sure that it is clear. It is clear that a privilege escalation has occurred, but I wasn't aware that anyone had said how it was accomplished. If it is a kernel problem, then like wow, that's a big deal. An unknown kernel bug allowing such escalation is a big worry for any OS, not just Linux. But even if it is just a config problem, what's going on there? Why are they still offline?

Apple seeks product security boss after iPhone loss

bazza Silver badge


Or they could just choose to chill out a bit and be less obsessively secretive. Would that really dent their sales in any measurable way whatsoever?

Anyway, Apple products are pretty predictable - shiny, lacking some useful buttons and features that everyone else has been doing for years (FM radio, anyone?), pricey, designed to lock you into an ecosystem built to make yet more money, and occasionally suffering from form over function (antennagate?).

Skype: Microsoft's $8.5 billion identity tool

bazza Silver badge

@Dazed and Confused: Seems to do that already?

I've got skype on PCs, phones, etc. They all ring when someone calls me, and when I answer I'm speaking to whoever.

How Apple's Lion won't let you trash documents

bazza Silver badge


I doubt I'll be the first to point this out but here goes anyway. Did I miss Mac OSX being transitioned from FreeBSD to VAX VMS? Are Mac users going to have to get used to typing PURGE?

I reckon that there's a high chance that the less technically experienced users out there are going to get veeeeeeery confused by this. The thought of trying to explain a complicated version control system and when it does what it does and why it does it to my Aunt is not an appealing prospect!

I shall snigger from afar....

'Satnavs are definitely not doomed', insists TomTom man

bazza Silver badge

@Alex King

"Contrary to an earlier poster, nobody does (or should) give two hoots about gain, antenna patterns or whatnot. If it works, is easy to use and does the job then that's the point."

Except that if you as a consumer wanted to choose between them based on GPS reception performance that is the information you need. Without it all you can do is pick one at random.

This forum has many people saying that they've suffered GPS drop outs. When someone is lost in a city with no GPS reception they do care. Shame they didn't think about that when they were buying it in the shop.

But because the industry is effectively silent on the matter there is no commercial pressure. Sure, it works quite a lot of the time but we would all like it to work better.

TomTom certainly used to care - my ancient old TomTom easily outperforms any phone I've ever seen in terms of GPS signal reception. When you're driving around the Peripherique and motorways in Paris through all those half obscured almost-tunnels you really don't want a GPS drop out; you will miss your exit! My TomTom hasn't let me down yet, but every phone I've seen packs in at the first hint of overhead obscuration.

bazza Silver badge

@mikeyt: crims aren't that bright

Cars used to get broken into if the windscreen had the marks from the sucker on it!

I suspect that when some idiot breaks into a car these days they're not doing it specifically for the satnav; satnavs just aren't fashionable enough any more.

bazza Silver badge

@Mark 63

I like my watch to work when my mobile battery goes flat...

You're right about the mp3 player market, hardly anything decent still on the market. I'm still using an ancient iRiver iHP-120 in the car, still works very well indeed, and the little cable remote control is just perfect - don't need to look at it even. Much better than fumbling around with a crappy touchscreen on a smartphone. Two headphone jacks (surprisingly useful, you can have great fun with a pal in an airport departure lounge listening to rude songs sniggering away without anyone else being able to hear), loads of different codecs, optical line in/out, FM radio (Apple still don't put radios in theirs, do they? Why? I mean, why oh why oh why do the f*****g idiots not just put a damned 5cent fm radio chip in their goddamn shiny toys? How long can a fit of pique over them not thinking of it first go on for?).

Anyway, I digress. Apple's success has really lowered people's expectations of what they think is technically achievable. It's no longer really commercially feasible for other manufacturers to push out superior products because not enough people understand the benefits of the technology anymore. Form is now more important than function.

Isn't it time for the competition authorities to take a serious look at Apple's dominant position before satnav is reduced to nothing more than an eBook atlas?

bazza Silver badge

@AC, re TomTom

I've got quite an old TomTom (a One v3 with Euro maps) that I find very useful indeed. Its maps are a little out of date, but not disastrously so. I quite happily go all over Europe and it's not let me down once. On a recent family holiday in rural France I was the only one to make it direct to the remote farmhouse we were staying in, with no difficulty at all. It even knew about the driveway. Everyone else, with mobiles, newer satnavs that had cheap / partial euro maps, etc., spent hours driving round the countryside lost, either because they couldn't get a mobile signal or because the roads weren't on the map.

I was vaguely thinking of getting a newer one, but from what I've read here today I think I'll stick with the one I've got. I don't want to use a phone either, because they're expensive to buy and aren't quite as good (worse GPS in my experience, reliance to some extent on mobile coverage, voice too quiet, stupid things like auto screen blanking that the app can't control, can't make a phone call and navigate at the same time, etc. etc.). If they just made a slightly newer One v3 then I'd buy that.

Why oh why does shiny mediocrity succeed over old fashioned yet effective clunkiness? Do people want to be stylish more than they want to get to their destination with ease? Why would anyone buy a £400 smartphone and use it to navigate, suffering the inevitable compromises, when a four year old £100 TomTom arguably does a better job?

I suspect that it works this way:

Punter: "Does this smart phone do satnav?"

Sales dude: "Of course"

Punter: "And it is nice and shiny too..."

whereas it should work this way:

Punter: "What's the GPS receiver sensitivity in dBm?"

Sales dude: "Errrr"

Punter "And what's the GPS antenna pattern like?"

Sales dude: "Welllllll"

Punter: "What's the peak antenna gain?"

Sales dude: "4?"

Punter: "And what's the average time-to-update for map corrections from the date the road layout changed?"

Sales dude: "blurb blurb"

Punter: "and what's the map resolution? And what's the average time from traffic jam forming to autorecalculation of my route?"

etc. etc.

To make a useful comparison between satnavs, either phone or standalone, this is the sort of data that's actually needed. But none of the companies supplies it. So a level of mediocre performance has become the accepted norm, and the general public will use the half baked products in ignorance of the fact that they could be a *lot* better than they currently are. And the trouble with mediocrity is that it has a way of letting you down just when you really, really want the damn thing to work properly.

bazza Silver badge


"That's assuming you want to overpay for a preinstalled system that will cost you many times more to fix if it has trouble down the road."

Built in satnav has the potential to be very, very good. It can exploit car data (wheel speed, steering angle, etc.) to provide a more reliable position fix than GPS alone. Shame that no one seems to do a good one. So why spend all that money on something that doesn't produce as good a result as something like a TomTom?

I wish there was an effective standard for these things in cars. DIN radio slots aren't the answer; it's not like every car comes with a spare DIN slot just waiting for you to put the upgrade of your choice in, and they're far too big. What would be very nice is a smaller slot that provided all the pertinent car data (wheel speed, steering angle, GPS antenna, etc.) in a standardised way. The satnav manufacturers could then provide units that would fit any car without having to stick them to the windscreen. Then there would be *real* competition in the satnav market.

bazza Silver badge


"My TomTom needs 30 seconds (sometimes more) to get a GPS lock, while my n900 has one instantly. Reason: my smartphone can access the internet, and get a good hint on its location for a fast kick-start of the GPS"

Funny, if I keep the satellite almanac data on my TomTom up to date (by plugging it into the internet via a lappie every now and then, like the book says to) it gets a complete lock within a few seconds. And it'll do that anywhere on the world's surface. I'd like to see your n900 achieve that outside of mobile phone coverage. Also, a phone's approximate initial lock is only OK so long as there aren't two closely spaced roads to pick from...

"Also, the analogy with cameras is completely off: the big difference between a phone camera and a dedicated one is optics and sensorsize. The impact of the GPS antenna size isn't quite as big."

Not sure about that either. I've yet to find a phone with a GPS as sensitive as almost any satnav's. My TomTom gets a GPS signal almost anywhere *inside* my house; phones don't. That gives a more reliable GPS lock in practice, something that's quite important in the urban jungle.

"Since a dedicated GPS receiver and a smartphone share a lot of common components, merging them seems like the obvious step. Goodbye dedicated GPS."

Indeed, and I think that a lot of the recent models of satnavs have 3G in them to get live traffic updates, roadwork information, etc. So there is a lot of hardware commonality between phones and satnavs. But some of the little features of satnavs that are missing from phones (like better sensitivity, no need for cell coverage, live traffic data that automatically alters the route) add up to something that the dedicated road warrior benefits significantly from.

I suspect though that the majority of the market will be phones, so the market costs for dedicated satnavs won't be sustainable and we'll all take a step back in capability like it or not. As for built in satnavs that benefit from speed and steering data direct from the car's own controls (and a lot of them have inertial sensors too), well they're already very expensive.

bazza Silver badge

@AC, re: Fair enough

"prefer a simple, single gadget to lugging a bagfull of toys."

I don't know how many people lug their satnav around. The normal place to find a satnav would likely be in the glove box in the car, not the driver's pocket / hand bag.

Apple's ex-cop and the case of the lost iPhone 5

bazza Silver badge

@AC, rip off USA

"Also, a monitor that's comparable to the 27" iMac's costs at least $1000 and that's in the US, where electronics are typically cheaper."

Do more Googling. They're about £490 from Hazro (IPS too!). Shows how much of a rip off Apple are who charge about twice that for something no better.

bazza Silver badge

@Barry Shitpeas: illegal activity?

Er, aren't you missing the point? Surely even in the good ol' US of A it is illegal to impersonate a police officer? And doing so to gain illegal entry to someone's home and threaten the residents is surely an aggravating factor?

What on earth were the SFPD officers thinking when they agreed to go along with this? With the Apple guys handing out a phone number this was always going to come out.

And it doesn't say much for whatever GPS tracking is in the phone if Apple went to the wrong house.

Kernel.org Linux repository rooted in hack attack

bazza Silver badge
Thumb Up

@sabroni: Indeed

Not holding my breath at all. No point really.

bazza Silver badge

Always going to be a problem?

The security of a distributed development effort like Linux kernel is going to be only as strong as the weakest link in the chain. With hundreds of contributing individuals out there on the internet it's always going to be difficult to ensure that they're each as careful / prepared / patched / etc. as everyone else. Humans as individuals aren't very good at being so consistently self disciplined.

Whereas in an internet-isolated development environment (in which I imagine the likes of Windows are developed) there's a BOFH, rules, corporate oversight, contracts of employment, and no direct internet connection. To attack such a setup means getting a suitably motivated person in on the inside. That's much harder to achieve I should think. It's certainly less convenient for the attacker.

Perhaps the OSS community needs to be a bit more open minded? I don't know for sure, but I suspect that all the main servers holding the Linux source are running Linux. A homogeneous collection of servers is much easier to compromise on a large scale than a heterogeneous set. If kernel.org used something else (FreeBSD? Windows even?) as well as Linux to host the source then an attacker's life would be much harder. With reference to the canine world, mongrels are much more resilient than pure-breds. It won't stop some individual developer's personal machine being hacked and leaking passwords, but it does complicate the matter of how to exploit that to attack the servers. Microsoft famously turned to Linux servers when a serious problem emerged with Windows a few years back. Perhaps it's time to return the compliment?

OK, it's not good PR to say that you don't totally trust your own OS, but then we're clearly past that now aren't we? Doesn't this hack underline that? Wouldn't it be quite mature to acknowledge that nothing, not even Linux, is perfect? Surely it's better to provide a more robust offering than maybe being a little bit fanbois-ish about the perfection of one's own creation?

As for 17 days, isn't that a mighty long time to notice that something's wrong on such an important set of servers? Was everyone away on holiday?

Mac Lion blindly accepts any LDAP password

bazza Silver badge

@Dibbley: El Reg, immediate action needed

Come on El Reg, this is a desperate situation. We need to get this hard pressed person an icon with several pints and a stiff whisky to follow. An icon with a single solitary pint is nowhere near enough. This is clearly a dedicated professional with a lot on their plate.

It sounds like you're the only one standing between your CEO / majority stock holder and ruin. Good luck!

Microsoft unveils file-move changes in Windows 8

bazza Silver badge

@Si 1: Real men?

Real hairy chested, weather beaten, gruff talking, wizened old men use XTree Gold, or possibly its very welcome Windows clone, ZTreeWin.

Woman in strop strip for Bermuda airport customs

bazza Silver badge


It seems he's spent most of the past 10 years in a Scottish jail for the same reason. Now that doesn't sound very proportionate in comparison to, for example, Al Megrahi who did a mere 8 years for bombing the Pan Am jumbo killing 270 people.

'Devastating' Apache bug leaves servers exposed

bazza Silver badge


"I know, isn't it terrible?".

Yes, it is if you're an app developer trying to support many users of many versions of many distributions. And how is an ordinary home user supposed to choose a Linux distro? For a start, which one's best? Which one does everything they need?

Ask yourself why Google chose do what they did with Android instead of just slapping a mobile friendly shell on top of an existing distribution. Surely that would have been much easier?

"Did you know there is *more than one command line shell*? Worse yet, there is more than one programming language even within the same language family! These things can be compiled for machines with totally different architectures."

Great if you're a sysadmin or developer. Totally and utterly irrelevant bollocks if you're an ordinary desktop user.

bazza Silver badge

@Eddie Edwards: Not a joke

Red Hat should have gotten rid of RPM a decade ago - apt/deb is much, much better, so why do Red Hat not use it? Ubuntu got rid of GNOME as their default with not much warning. How many different APIs are there for sound in Linux, and which ones are supported in every single version of every single distribution? CUPS has at least brought some consistency to printing, but it's still weird that a programme has to be able to render in PS to print and in something else to display the same thing on screen. Mozilla are releasing new versions of Firefox quicker than plugin writers can cope with, and are planning on ditching version numbers as a result. These things might not matter to sysadmin types, but they do matter to developers aiming at ordinary desktop users.

Whereas MS have said, three years in advance, that XP will cease to be supported in 2014. Apple gave massive warning of the deprecation of Carbon. Older versions of Office are still updated, but there's a definite end of life. In short, the knife gets wielded every now and then, and the planning is often quite considerate of users' needs. 'Better' does not mean quicker.

bazza Silver badge

@Ian McNee

"Yeah, those servers, they don't matter much, no point them being secure and reliable, it's not like they deal with anything important like financial transactions over the internet...hey...wait a minute..."

So you support my point then? Sysadmins can and do cope with Linux's fragmentation, and Linux has met with success in the server room. Even I cope with Linux's fragmentation on a daily basis, and it's infuriating. Linux doesn't succeed on the desktop because you still have to be something of a sysadmin to run it on a desktop. For example, do you *really* expect the average desktop user to know how to install an rpm-packaged piece of software on an Ubuntu box, or to know what to do with a tarball? Get real. If the Linux world wants to succeed in the desktop arena, it's going to have to sort that kind of problem out.

And as for servers (Linux or otherwise) being secure and reliable, it seems that if they've been running Apache these last 4 years they've been anything but that. They've all been sat there just waiting for someone to send them some dodgy http requests, and it's only luck that no one did. How many sysadmins have spent the last 4 years telling their bosses that their important Apache servers doing important financial transactions on the internet are secure, protected against denial of service attacks, etc, etc?

"Actual studies based on what happens in the real world show that bugs & vulnerabilities in OSS are fixed significantly faster than in proprietary code. End of."

Given the magnitude and timescales of this particular problem in Apache, perhaps those studies' findings should be revised? I mean, MS have had their share of problems, but to be in a situation where vast swathes of the internet could have been brought to their knees with a few only slightly dodgy HTTP requests, without even the need for a DDoS attack, is pretty spectacular.
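And "only slightly dodgy" really is the right phrase. Published reports of the flaw described requests carrying a single Range header stuffed with lots of overlapping byte ranges, each of which vulnerable Apache versions buffered separately, chewing through memory. A rough sketch of the idea (function name and exact range pattern are my own illustration, not the actual exploit code):

```python
# Sketch of the kind of Range header the flaw responded badly to:
# one header, many overlapping byte ranges. Vulnerable Apache versions
# allocated buffers for each range separately, exhausting memory.
def overlapping_range_header(n=100):
    """Build a Range header containing n overlapping byte ranges."""
    ranges = ",".join("5-%d" % i for i in range(n))
    return "Range: bytes=0-," + ranges

print(overlapping_range_header(5))
# Range: bytes=0-,5-0,5-1,5-2,5-3,5-4
```

Nothing malformed about it at the HTTP level, which is exactly why it sailed past everything for years.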

bazza Silver badge

@Solomon Grundy: Pretty good job?

"The OSS guys do a pretty darn good job of producing some pretty great software for free."

Monetarily free is nice for the rest of us. But just how good a job have they done in this particular instance, if a reported problem with huge consequences for a very large fraction of the internet was left unfixed for many, many years?

The OSS people do get some soft benefits in return for their work - high reputation, consultancies, etc. This incident is a good example of how such benefits are just as vulnerable to bad news as cash flow is for a large company.

"The OSS community overall demonstrates project management skills that almost any big company should like to emulate"

I would dispute that. The glaring counter-example to your statement is the world of Linux. I think that the handling of Linux (not just the kernel, I mean the whole shebang) by the OSS community has been terrible, really, on an absolute scale.

They do project management well in the sense that a bunch of guys decide to do something, and a result is delivered with enthusiasm over a reasonable timescale. The part of project management that is definitely not being done at all is deciding whether the work was necessary in the first place, or deciding (with global agreement) that the new thing will wholly replace something old.

Take a look round the world of Linux. Fragmentation abounds as far as the eyes can see. There are umpteen different distributions, a variety of different desktops, different package management systems, etc. etc. FFS how on earth can a choice of software package management systems be a good thing? Ok, someone once decided that an improvement was needed, but why would anyone keep using the old one?

I would say that at best Linux is a hodgepodge of competing ideas that has met some success in certain areas (servers) where this doesn't matter too much. But in the desktop arena it's in a terrible mess, and it's no wonder that most of the world's desktops and laptops are Windows or OS X. Clearly, to the average user (and thus to app developers too) consistency really does matter. Linux has gained some popularity in the mobile sector only because a big organisation (Google, with Android) has come along and imposed its ideas on a global scale.

I think that the proprietary world is much better at wielding the knife to cut out old stuff and sticking with just one or two ways in which things are done. That's because it's expensive otherwise, and bad for sales. The same pressure is not being felt by the OSS community.

"Every software project has tons of bugs and decisions have to be made whether to work on improving functionality or fixing rarely encountered issues."

Clearly in this case no effective triage system was in place for assessing the criticality of issues. If there had been this would have been fixed many years ago.

bazza Silver badge

Not a good day for open source

One of the key advantages of open source is supposed to be that anyone can fix a bug and in all likelihood someone will do so quickly.

It seems that open source communities can be as lazy as closed software companies after all. I suspect the reason this has happened is that Apache has had a reputation for rock-solid reliability for so long that people assumed there couldn't be any bugs worth fixing. Clearly not the case.

So how many other severe bugs are there lurking in the source code?

Nervous Samsung seeks Android Plan F. Or G, H ....

bazza Silver badge

Wheels coming off the Android band wagon?

Or is a wheel bearing just beginning to squeak?

It's clear now that patent wars are more or less the most powerful commercial tool these companies have. Apple are indeed doing quite well on this front, despite the fact that there seems to be little about an iSomething that is obviously novel and without precedent. So even if Samsung do go down some other path, what's to guarantee that the end result will be fireproof from a patents point of view?

The whole patent system, principally in the US, is clearly the major issue here. It must surely be pushing up costs, and that gets passed on to the consumer. Will the US political system ever work that out?

If the patent situation in the US gets much worse it could result in non-US companies abandoning the US market, even though it is large. After all, there are 5.8 billion people elsewhere. The result would be an unintended policy of isolationism, and that really won't be any good at all for the US population.

Why does Apple sue Samsung anyway? Doesn't Samsung manufacture Apple's ARM processors?

Dish eyes 4G LTE wireless network

bazza Silver badge


"LightSquared's plan was clearly insane..."

So when does insanity become genius? *If* Lightsquared do get to deploy a national terrestrial service then they will have got themselves some prime bandwidth without having to pay top dollar.

It should also be a lesson to all spectrum users. Just because the adjacent frequency bands are apparently clear doesn't mean that you can assume they always will be.

Jesus Phone gives Sprint redemption 'this October'

bazza Silver badge

"guess when the iPhone rumor broke?"


HP: webOS will still run PCs and printers

bazza Silver badge

@Asgard: Symbian had other problems

I agree that Nokia have been a poor custodian of Symbian ever since they got hold of it (EPOC32) from Psion. Some things written a long time ago by ex-Psion people are quite clear on that point.

However, it's well known that Symbian is a difficult OS to develop native apps for, far harder than OSes that have come from mainstream, mains-powered hardware. The reasons for the difficulty are clear: achieving ultimate performance on a battery-powered device mandated a way of doing things at odds with the normal programming paradigms we all learnt when young. This really showed through in final products. Even today Symbian phones generally have very good battery life in comparison to iOS or Android driven machines.

I reckon that Nokia were never able to assemble a large enough team of programmers who *really* knew Symbian. In essence they could not put enough development into it to allow it to compete on bling, user interfaces, etc. as well as on the purely technical matters of battery consumption and RAM requirements. I suspect that the reason they didn't have a big enough pool of the right sort of programmer was money; acquiring programmers with such rare skills is expensive in salary and/or training. Maybe if they had got it right straight away there would now be a much bigger pool of programmers, but they didn't, so there isn't.

But in a way Symbian is beyond rescuing. Even if Nokia could salvage the mess and turn out a decent user experience, there's almost no point anymore. People are now completely used to having to charge up their fondleslab once a day or more. And people want to download apps, and those apps aren't going to be native Symbian apps. It's too hard and time consuming to be worthwhile for the average mobile phone eye candy app developer. So they will have to be written in something hideous like Javascript, and bang goes all those carefully crafted power saving design features.

In a way it's a bad sign for the whole computing industry. A fundamental requirement of portable devices is a long time between charges, even if we've gotten used to charging once or twice a day. Given the poor rate of improvement in batteries this really means less power consumption, which is something that the rest of the computing world would like too. So far the truly successful means of achieving this have been:

1) better chips

2) that's it.

So far software has not really played a significantly successful role in reducing power consumption, and arguably the modern trendy things like Javascript have made it worse. Yet Symbian shows that if you do get the software right you can make significant improvements without having to do anything at all to chip or battery design. Are we as an industry just too lazy to actually pursue that 'free' performance boost?

Here lies /^v.+b$/i

bazza Silver badge

Iain M Banks?

c:\>restore.exe a: c:\*.*

iPhone 5 to include Japanese earthquake warning system

bazza Silver badge
Thumb Up

@Joseph Haig

Ach, dammit, you got there first!

Oracle's Sparc T4 chip: Will you pay Larry's premium?

bazza Silver badge

@Paul 77, @Chemist

@Paul 77

Why would you do that when you could just access a server across a network? Back when I first started you just accessed some server from an X terminal. Alright, you'd probably have to use a Linux PC instead of an X terminal these days, but otherwise nothing's changed.

@ Chemist

You are most likely correct. But by Linux I suspect you really mean Linux on x86/x64. Nothing wrong with that per se, but there can be very good technical reasons why x86 might not fit the bill. Not every academic (or developer for that matter) is best off hosting their work on x86, and it's their hard luck if they don't look around to see what else is available. As Kebabbert points out, Sparc T3 is very good for crypto. That might be handy if you're hosting a large website with https-only access. Similarly, anyone doing large amounts of decimal (*not* binary floating point) maths really needs to take a look at POWER, which is why IBM do quite well in the banking / financial services sector.
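The decimal point (no pun intended) is easy to demonstrate: binary floating point can't represent most base-10 fractions exactly, which is why financial code wants true decimal arithmetic, in software or in hardware. A quick sketch in Python, though the same point holds in any language:

```python
from decimal import Decimal

# Binary floating point: 0.1 and 0.2 have no exact base-2 representation,
# so the sum is not exactly 0.3. Unacceptable when the units are pounds.
print(0.1 + 0.2 == 0.3)                                   # False

# True base-10 arithmetic gets it right.
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```

Do that in software across millions of transactions a day and you soon see the appeal of a chip that does it natively.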

David May, parallel processing pioneer

bazza Silver badge

@Vic: Well I never

Well, I didn't know that! I never used Occam itself, but I'd no idea that the C compiler was done that way.

On the whole I didn't like the dev tools very much. Debugging was always a nightmare. I always felt that Transputers needed some sort of separate debugging bus rather than relying on the Transputer channels themselves. But JTAG hadn't been invented then.

bazza Silver badge

Ah, Transputers!

I cut my professional teeth on Transputers. Still using the same message passing principles 20 years later.

Android app logs keystrokes using phone movements

bazza Silver badge


I downloaded the app. Then I went and found a big hammer and did some frenzied typing, screaming "Log that you bastard" over and over and over, just like a proper fanbois! App's a load of crap :)

Sent from my desktop.

Microsoft begins cagey Windows 8 disclosures

bazza Silver badge

@Synja, because...

...if you did put X86 instructions into an ARM, what you'd end up with is an X86 / ARM combined. Then you can kiss goodbye to all the advantages ARM has in terms of power consumption, core size, cost, performance etc.

The reason why Intel is in such a fix in the mobile space is because X86 is a very bad starting point when it comes to making a low power chip with acceptable compute performance. It's fine when you have power to spare (in a desktop for instance), and indeed Intel's desktop/server/laptop chips are pretty quick. To date Intel have relied on being better at silicon manufacturing processes to stay ahead of the competition, but these days that isn't enough to keep X86 competitive in the low power sector. That's why pretty much every phone / tablet out there is running an ARM.

If Intel really wanted to make chips that are competitive on power consumption they are pretty much obliged to make changes to their instruction set. Then it wouldn't be an X86 anymore!

Intel's other problem is that the server world is beginning to wake up to the cost advantages of lower power consumption. If the server people get an appetite for ARM then Intel will lose a massive amount of market share.

bazza Silver badge

@ DrXym, sounds reasonable

"The host OS could precompile the LLVM bitcode into an actual native executable and cache it somewhere"

Not a bad way to go. It'd take some nifty tech to ensure that the result is 'as good as' a properly compiled native app, but there are some very clever people out there these days. I suppose MS's other point is that any .NET app should run as-is with zero modifications, provided that MS can make the .NET CLR work equally well on ARM as on X86. They've already had some practice at that - there certainly used to be a 'try it if you dare' C# implementation for embedded ARMs.

Apple changed shape of Galaxy Tab in court filing

bazza Silver badge

@AC, re (¯`·._.·(¯`·._.·(¯`·._.· LMAO ·._.·´¯)·._.·´¯)·._.·´¯)

Oh crap, there'll be ASCII art all over El Reg now...

\ ______|\_____*-----------------------

\/ _ ¦¦ '_\



That's supposed to be shark with a frikin laser beam, but I'm not so good at this, proportional fonts blah blah.

Has Google wasted $12bn on a dud patent poker-chip?

bazza Silver badge

@vic 4

"I'd say one of the main reasons is so they didn't have to license it from Sun."

I reckon it was to build a locked in app store thing to capture more of the advertising market. Using straight Java would have allowed any old Android customer to use any old app market without troubling Google's servers and their accompanying adverts.

LightSquared blasts GPS naysayers in FCC letter

bazza Silver badge


The point I sought to make in my previous post is that any mobile phone service, be it satellite or terrestrial, has a quite powerful transmitter (i.e. the mobile phone itself) in exactly the wrong location. It's right next to the customer, and presumably very close to the customer's GPS receiver. That is far more significant to a GPS receiver than a base station a few miles away or a satellite in orbit.

The subsequent point I made is that presumably that has always been the case long before Lightsquared came along, but the old satellite mobile bands that Lightsquared took over were never popular enough for the problem to come to widespread attention. Only now that GPS is a mass market and that Lightsquared are reviving usage of the adjacent band has the problem been highlighted.

It would be interesting to know if a Lightsquared mobile phone has a GPS receiver that works when the phone is also transmitting to the Lightsquared network (for example when using something like Google Maps). If it does then the GPS industry is clearly talking horse shit.

bazza Silver badge


Of course, it depends on where the nearby transmitter is. The most likely source of the most troubling transmissions would be a customer's Lightsquared mobile phone, not the Lightsquared base station. You don't have to go very far from any base station before the received signal strength is really quite low; 1/r^2 is a powerful attenuator! The mobile phone, being so close to the customer, is almost always the stronger signal, relatively speaking.
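To put rough numbers on that (the distances are illustrative, not measured figures): compare a handset 10 cm from a GPS antenna with a base station 2 km away, both transmitting equal power, under simple 1/r^2 free-space loss:

```python
import math

def near_field_advantage_db(r_near_m, r_far_m):
    """dB by which the nearer transmitter wins, assuming 1/r^2 path loss
    and equal transmit power at both ends."""
    return 10 * math.log10((r_far_m / r_near_m) ** 2)

# Handset at 0.1 m vs base station at 2000 m:
print(round(near_field_advantage_db(0.1, 2000)))  # 86 (dB)
```

An 86 dB difference is roughly a factor of four hundred million in received power, which is why the handset in your pocket matters far more to an adjacent-band receiver than the mast down the road.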

You always *can* build a filter with the required level of isolation, but for very high levels of isolation it is going to be physically quite big and/or expensive. This is not ideal for a GPS receiver, which is by definition a portable device. Military equipment doesn't mind filters being expensive, but large filters can be as much of a no-no there as in civilian applications.

It is very difficult for the frequency planners to know what to do. They could allocate frequency bands in such a way that there are adequate guard bands either side to protect against any conceivable transmitter. But that would be enormously wasteful of bandwidth, and it depends on an accurate understanding of what technological developments there may be in decades to come. Not a straightforward task, and something that humans have proven to be very bad at. Remember the 640k RAM limit?

But in this particular case I think that even the original use of the adjacent bands for a satellite mobile service was arguably wrong. Sure, the satellites for that service were a long way away (in orbit, in fact), so their transmissions were never going to be powerful enough on the ground to affect GPS receivers. But those satellite mobile phones themselves would have to be quite powerful to be received by their satellites. It's quite possible that they would have interfered with GPS receivers in much the same way as Lightsquared's transmission apparently does. I suspect that this has been going on all the time. But seeing as the original satellite mobile service is defunct it sounds like it wasn't so popular in the first place. So maybe the problem was always there, just not badly enough for it to come to widespread attention.

So if that is the case, who is to blame for the mess? Lightsquared? Not really, they've acquired the rights to a band; they're not transmitting outside that band, so they're sticking to the rules. However, I do agree with the view that Lightsquared should have known better. In effect they trusted the FCC to know their stuff when they asked for the band. Was that commercially a wise choice? Probably not. And I bet that the receivers on Lightsquared's mobile phones are just as vulnerable to out of band interference [similar bands, similar electronic constraints to achieve significant out of band filtering]. They're getting away with it too, because the adjacent band is GPS, which has no terrestrial based transmitters. I bet that if GPS were replaced with some sort of mobile telephony service, Lightsquared would be complaining just as loudly as the GPS crew are today.

How about the frequency planners at the FCC? Depends on whether they were obliged to consider cheap, mass market GPS receivers when the bands were allocated all those years/decades ago. Back when the bands were first considered (in the 1980s?) it would hardly have been imaginable that we'd have mobile phones, never mind mobiles with GPS in them. And the rules on operating bands are quite clear: if your receiver is open to receiving transmissions from adjacent bands, that's your problem, not the FCC's.

How about consumers and their want of cheap, small GPS receivers? No, not really; they're not the designers of the equipment they've bought.

How about the GPS receiver manufacturers? Largely yes - they've got away with ignoring the effects of transmissions in adjacent bands for many years now when they had no right to assume that those bands would be forever quiet. The FCC rules (and the rules from frequency planners everywhere else in the world) are very clear in black and white on paper about that, and always have been. And if the manufacturers had paid strict attention to the FCC rules then we would likely not now have things like GPS in phones.

So what happens now? Personally I don't think that the GPS manufacturers deserve to get away with it. However, the real question is does the paying public deserve the right to access the GPS service in the way they do? Yes, they do! Affordable, convenient and functional GPS does make the lives of the general public better in a very big way, and improving the lives of the public as a whole must be a governmental goal.

I think that the FCC should buy out Lightsquared's band allocation (a pricey proposition), recover the cash through a one off levy on any GPS manufacturer with (currently) non-compliant kit, and place bigger guard bands either side of GPS.

iPhone 4 prototype journo off the hook

bazza Silver badge

Big problem for the case

Apple could hardly claim to have suffered economic damage as a result of the premature outing of the iPhone 4. That wouldn't have helped the chances of a prosecution succeeding.

Google points finger at human after robo car accident

bazza Silver badge

@Matt Bryant, quite right

What's worse is that the designer isn't in the car when it crashes so they're slightly less motivated to pay attention!

A worrying trend is that insurance companies are beginning to see automated car systems as a way of reducing accident claims. The old "computers don't make mistakes" attitude of the unthinking policy makers will make it very difficult for an individual driver to prove that the automatics were at fault. I'm not seeing any commitment to equip systems and accident investigators with the tools (eg independent black boxes that the police and owner can read, not just the manufacturer) they need to be able to diagnose a systems fault. Without such things the 'driver' is likely to get the blame every time. Not for me thank you!

Researchers poke gaping holes in Google Chrome OS

bazza Silver badge

Target Improbable

Given that Google are trying to build a new execution environment from (almost) scratch in a very short period of time, it's inevitable that problems are going to creep in.

The traditional OSes have been developed over decades and they're still not right yet. What's so special about Google's approach to make it likely that ChromeOS is trouble free in such a short period of time? Personally speaking I won't be touching it with a barge pole.

Google's only motivation for developing ChromeOS is to capture more of the advertising market. They're a commercial, profit driven company just like every other. ChromeOS is a dangerous strategy because it succeeds only if a substantial number of people can be persuaded that it provides a level of service and security above that which is offered by the more conventional platforms (Win/Mac/*nix). It will be difficult to provide such assurances if security researchers keep finding massive holes like this. And by going way beyond the scope of other things like Google Docs, gmail, etc. they're taking on a much bigger task and are less likely to succeed.

Note to Apple: Be more like Microsoft

bazza Silver badge

@Volker Hett

Heard of Microsoft's Anytime Upgrade? Ten minute job at most per machine to do what you've accomplished with fresh installs. And I guess Exchange is actually just about the same for 2008 and 2008R2.
