* Posts by bazza

2094 posts • joined 23 Apr 2008

Dell signals Windows 8 fondleslab range

bazza
Silver badge

re: Hot News: Dell touting to be Microsoft's and Intel's bitch

Er, haven't you seen all the fuss about Windows 8 on ARM? Microsoft didn't spend $millions on buying an ARM *architecture* license (a rare thing indeed) for nothing, you know.

MS may well end up with all bases covered with everything from ARM phones/tablets with fancy GUIs to servers with a healthy dose of corporate integration throughout. That could be a hard thing to resist. It could wipe out Blackberry. If the offering attracts enough content it will make a big dent in Apple/Android too.

0
0

Sony asks for 1.6m LCD TVs to be returned

bazza
Silver badge

@Mike Richards, it's getting tricky

"Today it's sub-standard components in television."

It's becoming increasingly difficult for manufacturers to source reliable components. There's a large number of knock-off fake components being manufactured by dodgy rip-off merchants (mostly in China, it has to be said). These are finding their way into the component supply chains, and it can be very hard to spot the fakes. Ironically, even Chinese manufacturers are falling prey to this problem. The fakes are naturally of lower quality, often don't meet the original specifications, or sometimes are just an empty package with the right printing on the top!

I don't know if this is what's happened to Sony with these tellys. But given the scale of the issue it will become increasingly common for consumer electronics to fail early, or potentially be dangerous in some way.

0
0

Chaos feared after Unix time-zone database is nuked

bazza
Silver badge
WTF?

Only in America...

That is all

37
3

Judge cracks down on Bayesian stats dodginess in court

bazza
Silver badge

@AC, re: An enlightening book

That case the BBC is raising is indeed madness. What is the world coming to when the default view is that someone must have committed some crime if we cannot otherwise explain a sequence of events?

There's already a couple of criminal offences (perjury and perverting the course of justice) on the books that are woefully under-applied when it comes to considering the care with which some experts have assisted the legal system. With expertise comes responsibility, especially in relation to understanding the true limits of their own knowledge. If the scientific experts involved in the legal system can't be relied upon to remember that most important of scientific tenets, perhaps the thought of facing criminal charges might focus their minds somewhat.

For example, imagine you have performed no research on the exact question at hand (e.g. can the environmental / biological factors causing SIDS persist in the family home?). Imagine further that you have no peer reviewed work to back up a statistics-backed assertion that you're about to make and are not formally qualified as a statistician. How hard is it to stop, think a bit and say "I don't know."?

Similarly, if the Court and legal officials don't understand the scientific process, why are they allowed to accept and act on the word of a single expert witness? Have they never heard of scientific consensus?

0
0
bazza
Silver badge

Adversarial vs. Inquisitorial

I don't think that the adversarial system here in the UK works at all for scientific evidence. It's too easy for both sides to put forward 'experts' who disagree, and where does that leave a jury?

The legal system should be asking the scientific world how evidence is assessed. The scientific world would rightly say 'peer review and consensus'. No consensus, no conviction. That is an inquisitorial process, and the people involved in the legal system don't like that one little bit.

The SIDS cases were appalling. A non-expert in statistics presented unchallenged 'facts' that were in fact horseshit. At no point was he required to show that he was qualified to do so. Why wasn't he charged with perjury? Why weren't the Court's officials charged with gross negligence???
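
For anyone who wants to see the sums, here's a sketch of the blunder using the widely reported figures from the Sally Clark case (the conditional probability below is illustrative, not a measured value):

```python
# The expert's arithmetic: take the odds of one SIDS death and square them,
# assuming two deaths in one family are independent events.
p_one = 1 / 8543                 # reported odds for a family like the Clarks
p_naive = p_one ** 2             # ~1 in 73 million, the figure given in court

# But SIDS deaths in one family are correlated (shared genetics, shared
# environment), so a second death is far more likely given a first.
p_second_given_first = 1 / 100   # illustrative guess, not a measured value
p_dependent = p_one * p_second_given_first

print(f"independence assumed: 1 in {1 / p_naive:,.0f}")
print(f"with dependence:      1 in {1 / p_dependent:,.0f}")

# And even then, quoting P(two deaths | innocent) as if it were
# P(innocent | two deaths) is the prosecutor's fallacy: double murder by a
# mother is also vanishingly rare, and the jury needs the two compared.
```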

0
0
bazza
Silver badge
Thumb Up

Mean, median or mode?

That is all

0
0

Check your machines for malware, Linux developers told

bazza
Silver badge
Thumb Up

@tim

It's a mystery to me why anyone would down vote such sage advice. Here's a counterbalancing up vote.

1
0
bazza
Silver badge

@Santa from Exeter, @Destroy All Monsters

@Santa from Exeter

http://www.theregister.co.uk/2011/08/31/linux_kernel_security_breach/

Paragraph 3:

“Intruders gained root access on the server Hera,” kernel.org maintainers wrote in a statement posted to the site's homepage shortly after Hawley's email was leaked. “We believe they may have gained this access via a compromised user credential; how they managed to exploit that to root access is currently unknown and is being investigated.”

That's from the horse's mouth, so to speak. If you don't like what *they're* saying, tough sh*t.

@Destroy All Monsters

"You are implying that there is some new trick going here."

Yes, it's fair to say that I am. But given the length of time it's taken so far to find out what mechanism the exploit used I'd have thought that they would have been able to test for and eliminate the known tricks by now. In contrast, something new could take ages and ages to discover. Presumably the attacker was competent enough to clean up log files to hide their methods.

If one is responsible for a business critical system running on Linux then one is going to have to at some point consider the likelihood of such an inference being correct. I guess that the lack of reports of mass compromises of Linux servers on the web is encouraging, but it is hardly a guarantee.

Ok, so the damage done to the Linux source code is nil (the widespread distribution and signing of Linux source code has been well done). But I think that the real problem is the means by which the attack was carried out. I genuinely hope that it turns out to be an oversight of configuration on the part of the sysadmins at kernel.org. But I personally find the cagey nature of how this is being reported less than reassuring. I've never bought into the argument for non-disclosure until a fix is ready. If that takes a long time then all the users are ignorant of their vulnerability whilst the attacker has a free run. At least give the users a chance to secure their own systems by telling them what's going on. We all hammered Microsoft for such behaviour.

It's interesting to analyse the motives of the attacker. Money? Not likely from kernel.org I'd have thought. Altering the Linux source code? Unrealistic, maybe, and building in a secret backdoor would seem superfluous given the mastery they'd already have to have over Linux and many other things to achieve that. Maybe a naive and doomed attempt at altering the source code? Could be. Showing off? Who knows. Purely as an attack vector on kernel.org users and similar? Seems to be slim pickings to be had from that. Dry run for a later attack against some other Linux website? Not exactly a discreet way to practise.

2
0
bazza
Silver badge

Guarantees?

1) At least one of the developers was careless or unlucky enough to get compromised

2) Does that guarantee that they won't get compromised again?

3) If just one person doesn't do the checks then the whole thing may start all over again

4) We still don't know what the compromise mechanism actually was

5) We have to conclude that the compromise route is still partially open to an attacker

0
0
bazza
Silver badge

Laugh

Linux's invulnerability turns out to be an illusion, just like for every other OS. Vociferous proponents now have egg on their faces, and for the moment it's not washing off.

I'm not sure about superior breeding. Microsoft have dealt with security, bugs, etc. quite well over the past few years. Windows went through security hell, but seems to have emerged stronger from the experience.

The lack of information on this flaw in Linux is beginning to look very shabby indeed. It's an open source OS, everyone out there should be able to examine the code for the flaw. Looks like the only person who did was the attacker.

The best information we have seems to be that authorised users on Linux boxes can achieve privilege escalation to get root access, and that there is no way of stopping them doing so. That state of affairs doesn't really recommend Linux to anyone does it?

3
10

Ten reasons why you shouldn't buy an iPhone 5

bazza
Silver badge
FAIL

@gmichael225

"1 is a compromise to maximise battery efficiency, and besides, a decent battery should outlast the phone."

Wow, you really have been conned by Apple. Properly designed / built electronics will always outlast a battery. There's a gazillion old Nokias, cars, TVs, radios, computers (even some Apple ones), planes, tanks, ships, etc. etc. out there that prove the point quite adequately.

Your view is clear evidence of how Apple have succeeded in getting the gullible to believe that an expensive piece of equipment failing is 'reasonable'. It isn't. You're being deliberately led by the nose into buying new and expensive hardware every year.

Just admit that you *want* a new one every year (nothing wrong with that, it's your money not mine), but don't foolishly attempt to justify it with spurious arguments.

1
1
bazza
Silver badge
Childcatcher

Flame proof underwear?

LP is clearly feeling brave today. 171 posts for and against already...

0
0

Ellison: 'There'll be nothing left of IBM once I'm done'

bazza
Silver badge

re K machine

+++

"We didnt build it but we use a similar architecture for our processors" does not sound like terribly good PR.

+++

Well, it's not bad publicity either, and it is a small attention grabber.

I completely agree that Oracle are quite capable of cooking up their own bad PR! Like them or loathe them, SPARC getting improved is only a good thing, but I think we'd all prefer a more modest communicator of that news... And indeed it will take more than a single benchmark to outdo IBM.

1
0
bazza
Silver badge
Thumb Up

@tom 99

Seems you're one of the select few people I know who have properly read up on modern CPU technology.

IBM are in many ways infuriating. They're always coming up with neat tricks like this which have applications way beyond IBM's core business, but the only way you can get it is to buy a bloody banking system off them. It is admirable how they stick to what they do best and don't waste shareholders' profits on supporting the few small fry like me who'd like to use their tech for something other than an entire country's credit card transaction processing...

2
0
bazza
Silver badge

@kebabbert

"Again, if you are doing it right, you never use floating numbers in finance. Every calculation is done with integers, and you keep track of the number of decimals separately. No rounding will occur. No floating numbers are needed. As I said, I work in a large finance company."

Hmm, well I'm not sure that you've wholly understood what that part of POWER is. As you say, floating point is no good. I gather that the *decimal* accelerator (not the FPU) in POWER is doing the sort of arbitrary-precision sums you've outlined. Hardware acceleration of that is obviously going to bring benefits, or so IBM would have you believe. And it's difficult to disagree with the evidence of their sales figures.
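
To make the distinction concrete, a little sketch (Python purely for illustration; scaled integers in COBOL or BigDecimal in Java do the same job, and it's this decimal arithmetic that POWER accelerates in silicon):

```python
from decimal import Decimal

# Binary floating point cannot represent 0.10 exactly, so pennies leak:
total = sum(0.10 for _ in range(1000))
print(total)                  # 99.999999999998..., not 100.0

# Scaled integers: keep amounts in the smallest unit, track the scale apart.
total_pence = sum(10 for _ in range(1000))   # 10 pence, a thousand times
print(total_pence == 10000)   # True: exactly 100.00, no rounding anywhere

# Decimal types do that bookkeeping for you, exactly:
print(sum(Decimal("0.10") for _ in range(1000)))   # Decimal('100.00')
```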

"Those are trivial calculations, done in COBOL on Mainframes. Not very sexy."

Not sexy, but clearly very profitable. Profits don't have to be earnt in a sexy way, they just have to be big! I'd say that on an absolute scale skyscrapers, suits and MBAs are not really significantly sexier...

Yes, I'm familiar with the technology that the high speed trading world uses, and I'd certainly agree with you on the inappropriateness of mainframes in that role! But generally I think that Solaris/Linux on even top end server and network hardware is behind the curve when it comes to low latency, largely because they're stuck with stodgy sluggardly interconnects like Ethernet, Myrinet and Infiniband.

The high performance embedded signal processing world has been much more focused on low latencies than the mainstream server world. The 'unconventional' interconnects found in that domain (VXS's and OpenVPX's sRIO, and external interconnects like sFPDP) are all about low latency. That's because it's a key driver in the sorts of applications (radar, etc) implemented using such hardware. If you've not done so already, it's worth a bit of investigation.

Anyway, hurry up and earn your profits. There's a good chance that the whole high speed trading thing will get banned soon, especially if it gets fingered for causing a major market wobbly. I'm pretty sure that no one in the finance industry could say whether or not it meets the Nyquist stability criterion, but those of us who know what that means generally think that it doesn't and don't believe that anyone's checked to see either.

Even if it doesn't fall over it's doomed to stagnate, eventually. As soon as you've all bought premises as close as possible to the stock exchange and have all chosen the optimum hardware and algorithms for the job, you'll all be as good at it as each other and there won't be any technological advantage left to exploit. Anyone started checking to see if they're plugged in to port number 1 on the Exchange's Ethernet switch?

On the other hand, if stagnation spurs the finance industry into developing even lower latency kit (the mainstream IT industry won't, they care merely about throughput) then I would be quite grateful.

3
0
bazza
Silver badge

K Machine?

"Also, IBM "has" currently 212 systems in the top500 (yes, really), Oracle "has" 12."

Maybe, but the fastest of them all is SPARC based. Half million+ SPARC cores and counting...

Ok, so Fujitsu makes the chips, but it's still good PR for Oracle and the SPARC community.

0
0
bazza
Silver badge

Ambitious

I admire the corporate ambition on display. But I think that there's a lot more to IBM than just integer performance. I think that Ellison is underestimating IBM and their appreciation of their customers' needs.

There's plenty of hardware out there (Sparc, Itanium, x64) that should theoretically mean that IBM's hardware offering is lacking in appeal in one way or other (performance, cost, or whatever). But apparently that's not reflected in IBM's sales figures.

I think that IBM are actually quite subtle as to what they put in to their hardware designs. My favourite example is the decimal arithmetic hardware acceleration on the POWER processors. That's absolutely perfect for massive banking applications having to process international transactions. As Boltar rightly points out in the post above, ordinary floating point is not accurate enough. Almost no one outside that niche knows that it's in there. But it shows that IBM have really thought about banking applications all the way down to the CPU design. And guess what - IBM sell to a *lot* of banks and financial processing outfits.

Whether or not Ellison understands that point I don't know, but it is important. IBM clearly has means of offering cost effective systems to their customers in ways where individual benchmarks are irrelevant. The customer ultimately cares about only service-per-dollar. This has allowed IBM to sell a surprising amount of mainframe gear for many decades now. So much so that everyone seems to have given up saying that the mainframe is dead.

Anyway, whilst Ellison may witter on about Java, there's a shed load of COBOL out there, new and old.

Having said that, I do like what they've done with SPARC.

9
0

HTC Android handsets spew private data to ANY app

bazza
Silver badge

@TheRegistrar: An update? Are you kidding?

And exactly what speedy and pervasive update mechanism is available to HTC to ensure that every HTC phone out there would actually receive such an update? Ah yes, none.

0
1
bazza
Silver badge

From HTC:

All your data are belong to us.

0
2

Pandemonium as Microsoft AV nukes Chrome browser

bazza
Silver badge

Win7-x64 + current Chrome

All's OK here...

2
0

Firefox devs mull dumping Java to stop BEAST attacks

bazza
Silver badge

@mangobrain: Chicken and Egg

>>>>>>>>>>>>

The problem here is that the server side also needs to support TLS 1.1/1.2, which OpenSSL - probably used in the majority of Apache HTTPS servers - doesn't. If the server only supports up to TLS 1.0, then whatever the client advertises support for, the version will end up downgraded to 1.0 as part of the initial negotiation.

<<<<<<<<<<<<

Yes indeed, but that's no excuse for Mozilla not implementing TLS 1.2. Even MS have that in IE9, it's just that it's not switched on by default. I gather that Opera supports it too.

The sensible solution is to implement TLS 1.2 in all browsers. That would allow website operators to upgrade and start mandating it for secure connections without losing their users. A sensible solution has a feeling of inevitability to it, especially if some market-viable browsers already support it. For example, it would be viable right now for online banking sites to say that you have to switch on TLS 1.2 in IE9 or use Opera, and bar Mozilla and Chrome. It would cause a lot of phone calls, but they could do that right now.

If Mozilla are going to be lazy buggers and say 'not our problem' then Firefox risks getting labelled as being insecure by design. These musings from the Mozilla dev team might be indications that they're not taking the issue seriously, but this is not the first time that's happened.

But if I may get back to your good point about OpenSSL: what is the OpenSSL community doing in not supporting TLS 1.1/1.2? It's like they've heard of it, agree that it offers better security, but frankly can't be bothered to incorporate it because they've not got the time or inclination. TLS 1.2 was defined by RFC 5246 in August 2008 (outrageous quoting from Wikipedia). That's more than three years ago. I don't think that that counts as a hearty demonstration of proactive steps to maintain the worth and reputation of their software. They're essentially conceding that they're quite happy to be outdone by Microsoft...
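
For the record, pinning the floor of the negotiation is a one-liner on the client side. A sketch using the modern Python ssl module (which postdates all this; in 2011 neither NSS nor OpenSSL shipped TLS 1.2 support at all, which is rather the point):

```python
import socket
import ssl

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse TLS 1.0/1.1 downgrade

with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())   # 'TLSv1.2' or better, or the handshake fails
```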

1
0

Schoolteachers can't teach our kids to code, say engineers

bazza
Silver badge

Tuppence ha'penny's worth

About the only useful thing they taught on O level 'Computer Studies' was some veeery basic architectural stuff, and the fact that computers could be programmed. The rest of it was up to us kids really. There were those who were self starters who went and taught themselves a bit of BASIC, maybe Pascal, etc, mostly on Spectrums and the few PCs that were around. Then there were those who just weren't interested. I left school knowing C quite well, which in those days was pretty much the most useful thing you could know!

For those of us who were interested, it worked just fine. Electronics Engineering at university added some polish (i.e. learnt architectures properly), but the whole point of university is that it is certainly up to oneself to learn.

0
0

New flash RAM tech promises 99% energy drop

bazza
Silver badge

Memristor?

I wonder how it will stack up against HP's memristor that's also being developed. Things could get mighty interesting in the storage market if either of these technologies get perfected...

2
0

LightSquared to magic away GPS interference in 2 weeks

bazza
Silver badge

@bwalzer: Er, yes.

If LightSquared's transmitters were so terrible that "Things like intermodulation and IF images" were causing them to radiate outside their allocated band then they wouldn't have a leg to stand on. But by all accounts that's not the case here; no one is saying that LightSquared's transmissions will intrude in an unlicensed or unreasonable way into the allocated GPS band.

The interference to GPS receivers is caused by their not defining their receiving band with a filter good enough to reject out-of-band signals that will, if LightSquared start operating, be commonly encountered. All receivers are vulnerable to high power interfering signals causing non-linear responses within the receive chain components. That's why when you design a radio you take a look at the expected operating environment (maybe glance at the frequency allocation tables) and decide how much filtering is going to be needed against legitimate and commonplace adjacent signals. And you are right - guard band practicality is an important factor for the allocating authorities.

It's worth questioning whether today's filtering requirement for GPS is so very different to that theoretically required years ago when the 1525-1559MHz band was allocated to satcoms. To set the historical context for the current debate I think that it is important to analyse the practicality of the previous use of the 1525-1559MHz band for satcoms long before LightSquared came along. And the issues are not generally evident from the online material covering this topic. For example, the last slide at:

http://www.pnt.gov/public/2011/03/munich/hegarty.pdf

shows the frequency allocations and a stylised filter response. It suggests that "Low Power On Earth Satcom Emissions" don't cause interference with GPS, and that "15kW Base Station Emissions" do. However such an analysis is valueless without considering the distance between the interference source and the GPS receiver.

So consider this: a single LightSquared 15kW base station may indeed cause operating problems for a large number of poorly filtered GPS receivers over a wide area. However the low power 2W-ish transmission of a satcoms mobile phone would still have been able to cause problems, just over a much shorter distance (perhaps tens of meters?).
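
Rough numbers to back that up (free-space path loss, isotropic antennas, mid-band 1545MHz; a back-of-envelope sketch, not a link budget):

```python
import math

def fspl_db(d_metres, f_mhz):
    # Free-space path loss: 20*log10(d_km) + 20*log10(f_MHz) + 32.44
    return 20 * math.log10(d_metres / 1000) + 20 * math.log10(f_mhz) + 32.44

def rx_dbm(p_tx_watts, d_metres, f_mhz=1545.0):
    return 10 * math.log10(p_tx_watts * 1000) - fspl_db(d_metres, f_mhz)

print(rx_dbm(15000, 870))   # 15kW base station 870m away: ~ -23dBm
print(rx_dbm(2, 10))        # 2W satcoms handset 10m away:  ~ -23dBm
```

In other words, a 2W handset ten metres from your dashboard looks much the same to a GPS front end as a 15kW base station the best part of a kilometre away.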

And consider where the satcoms mobile phone might have been used; on a train, on a plane, in a car with SatNav, etc. etc. If they had become as ubiquitous as terrestrial mobiles are today they'd be everywhere all the time. It is perfectly possible that the satcoms phone itself could have caused equally significant problems for GPS just by being physically close to critically important GPS receivers. Anyone who has ever placed a GSM mobile phone near a set of loudspeakers and heard the ticketer-ter ticketer-ter sound it makes will understand my point, especially considering that a set of loudspeakers isn’t designed to be a highly sensitive radio receiver like GPS.

What I seek to show above is that the question of how much guard band to have between GPS and satcoms was just as relevant then as it is now between GPS and LightSquared. It is highly likely that practicality of the chosen guard band was considered long ago by the FCC when the satcom band was first allocated. That consideration ought (hindsight?) to have taken into account a close encounter between a GPS and a satcom phone. That scenario is, from the point of view of a single GPS receiver, not so very different to a more distant encounter with a LightSquared base station. So if the guard band then was considered to be practical, why not now? Maybe the FCC didn't consider such a scenario back then, or maybe they never imagined that everyone and everything would be using GPS receivers for some purpose or other.

By endeavouring to develop an adequate filter LightSquared are seeking to show that the FCC got it right and that the GPS industry are too damned lazy and cheap to do their own jobs properly. The irony of the GPS industry having to license a filter design from LightSquared would be memorable...

However, if the GPS industry is proved right and a filter with adequate out-of-band rejection can't be built, then that really would mean that the FCC got it wrong, and arguably always had got it wrong long, long ago when the 1525-1559 band was first allocated for satcoms.

In both outcomes it's not LightSquared's fault (though you could argue that they should have known better than that). I don't think that it would really be the FCC's fault either. The bands were allocated to satcoms long before anyone thought that every mobile phone, car, etc. would have a GPS receiver in it, so the need for miniaturisation of filters with sharp cut-offs didn't exist. It's not really the GPS industry's fault either. Hardly anyone was actually using the satcoms band. If they had been then we'd have sorted this out years ago. But the solution (whatever it turns out to be) is going to cost a lot of money, and that's always going to come from the customers one way or other.

Some people have pointed out that this is a purely North American problem. But we'd all like our phones (including its GPS) to work properly when we go there.

2
0
bazza
Silver badge

Surely Admiral of the Fleet (RN) is better?

0
0
bazza
Silver badge

Not that

Better filters on LightSquared's transmissions won't help. They don't intrude on the licensed GPS band anyway. The problem is that cheap GPS receivers don't bother to exclude the non-GPS bands from their radio receivers. The rules from the FCC and similar bodies round the world are quite clear; if you don't design equipment to reject other people's legitimate transmissions, that's your problem not theirs.

LightSquared are, I suspect, commissioning the design of these filters to demonstrate that the GPS industry is lying in their claims that such filters are not possible. If LightSquared get these filters going (and there is no particular electronic reason why they won't) then the GPS industry will be obliged to shut up and start designing their mass market equipment properly, just like the rules say they should.

What's more, if LightSquared lay claim to and enforce the rights to the filter design, the GPS industry might have to pay a license fee to them for every new GPS receiver built. That will fill LightSquared's coffers nicely. It would be *quite* ironic, especially as this is a problem solely of the GPS industry's making.

1
1
bazza
Silver badge

@John Sager

"Now that GPS has had augmentation and integrity features added by SBAS services such as WAAS & EGNOS, it can be used as a primary navigation aid by aircraft, so that's going to have a severe impact on the aviation industry if GPS can be randomly unreliable due to interference, continent-wide in the US."

Anyone suggesting that GPS is usable as a primary navigation aid for aircraft hasn't done enough failure mode analyses. Any radio system is susceptible to interference / deliberate jamming. They've merely done a commercial analysis that says it's cheaper than, for example, an inertial navigation system. It's especially worrying if the GPS equipment manufacturers haven't bothered to secure their receivers against legitimate out of band signals.

The aviation industry is already experiencing significant problems with kids (presumably kids) shining laser pointers at aircraft coming in to land. Given the ease with which GPS jammers can be bought, how long before some kid tries one out near an airfield for a laugh?

0
3

Autodesk shifts design apps to the cloud

bazza
Silver badge

@Mr Young: Yep.

Industrial espionage made easy. Don't hack the company, hack the cloud they use and get *all* of their IPR, for old and soon-to-be-released products, in one easy step. And for all their competitors.

Can you imagine Apple (for example) risking all their IPR to someone else's cloud? I severely doubt it.

How on earth do accountants manage to override the concerns of engineers when it comes to the importance of such things? Do accountants (for it is always they) not realise the company-killing value that their designs represent? Great if you can save a few bucks on software tools, but it doesn't look so clever if all your design work has been pinched by a competitor. I'm also astonished at how many companies will quite happily connect their design/engineering networks up to the internet. Do they get a kick out of risking their entire business to a remote network hack that they cannot guarantee to be able to prevent? Poor assessment of corporate risk all round I fear...

0
0

Dinosaur-murdering space boulder family found innocent

bazza
Silver badge

Georgia?

Has the Georgia state parole board looked into this? I think we should be told...

0
0

Boffins step closer to steam-powered Babbage computer

bazza
Silver badge
Pint

@alannorthants

Only because it takes 5 minutes to execute the first 10 instructions of the BIOS...

0
0

Android bug lets attackers install malware without warning

bazza
Silver badge

Updates?

"One of the hopes for Android a few years back was that it would be a viable alternative to Apple's iOS, both in terms of features and security. With the passage of time, the error of that view is becoming harder to ignore. By our count, Google developers have updated Android just 16 times since the OS debuted in September 2008."

Google may have updated Android 16 times, but I bet the number of updates actually delivered to end users by the manufacturers and networks, with all those varied handsets and configurations to support, is far, far lower than that.

All it will take is for some massively unacceptable hack to take place (e.g. all Android phones disabled by some virus) and suddenly the buying public will vote with their wallets and buy something else. Seems that Android is, amongst all the mobile platforms out there, significantly vulnerable to that. Are SE, HTC, etc. wise to base their entire business on such fragile foundations?

2
1

Hackers break SSL encryption used by millions of sites

bazza
Silver badge

@Ken Hagan. Title not optional

"But yeah, this doesn't necessarily affect any other use of SSL."

I'm half wondering if the basic technique is re-usable. These chaps have used Javascript as a way of targeting SSL/TLS sessions in use by a web browser. But I'm guessing (without any real knowledge) that the basic technique could be re-packaged as, for example, a trojan which might intercept any SSL/TLS traffic. Any thoughts?

0
0
bazza
Silver badge

Time taken?

"In an email sent shortly after this article was published, Rizzo said refinements made over the past few days have reduced the time required to under 10 minutes."

What did he do, install the latest Chrome/Firefox/IE with their faster Javascript interpreters?

1
0

IBM pitches overclocked Xeons to Wall Street

bazza
Silver badge

Hot chips

I wonder how that lot compares latency-wise to the 5.0GHz POWER6 servers?

From what I hear the serious players in that high frequency trading game are busily locating servers as close as possible to the exchanges to get shorter propagation times down the cables. It's 5ns-ish per meter down a cable you know. Being several miles away really costs!
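
The sums are trivial but they add up fast (a quick sketch, taking 5ns/m as read):

```python
NS_PER_METRE = 5          # signal in fibre/coax travels at roughly 2/3 of c
for miles in (1, 5, 20):
    delay_us = miles * 1609.34 * NS_PER_METRE / 1000
    print(f"{miles:>2} miles -> {delay_us:.0f} microseconds each way")
# 1 mile ~ 8us; 20 miles ~ 161us. An eternity in high frequency trading.
```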

0
0

Intel demos ultra low-juice chippery

bazza
Silver badge

@ Ken Hagan

Well, I guess it depends on what you call an interesting compute problem ;-)

When you stop and look at the high performance floating point compute jobs that your average man on the street actually wants done (and is therefore 'interesting', at least from an industrial competition point of view), it's things like video / audio codecs, and to a lesser extent 3D graphics and games physics. And that's about it. Most people's high performance floating point requirements *are* very parallel indeed. That's why Nvidia and ATI have successfully sold so many billions of very parallelised GPUs, and why almost every smart phone out there has one too. In that sense they really have taken over the world.

My only point really is that whatever Intel/AMD can achieve with a general purpose CPU, someone like NVidia, Qualcomm, ARM, etc. is likely to surpass once they've mastered the comparable silicon manufacturing techniques. That has consistently been the case up to now, and the commercial realities of today are clear evidence of that. And now there's things like CUDA and OpenCL which are threatening to take even more floating point workload away from the CPU.

Until Intel can get the performance / Watt to a level where the x86 battery life is meaninglessly long or the electricity bill insignificant, they're not going to get a look in. Maybe these low operating voltages will get them there, but I doubt it. And who wants 100GFLOPS in a handheld device anyway?

0
0
bazza
Silver badge

Starting from the wrong place

"Let's take an example of a hundred-gigaFLOPS system today," he said. "If you want that performance, it will require about 200 watts of power."

Well, it might take 200 Watts worth of Intel hardware to get 100GFLOPS. But there's plenty of industry examples that already out-perform that. Take the Cell processor - that weighed in at about 250GFLOPS for 80 Watts (i.e. 32 Watts per 100GFLOPS). And I wouldn't mind betting that most GPUs that get up to 100GFLOPS (i.e. all of them these days?) use much less than 200 Watts. And just how many ARM SOCs do you need to get 100GFLOPS? They seem to deliver *enough* performance on very little juice indeed.
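
Expressed as performance per watt, which is the metric that actually matters here (peak vendor figures, so a pinch of salt is advised):

```python
systems = {
    "Intel's own example (100 GFLOPS @ 200 W)": 100 / 200,
    "Cell (250 GFLOPS @ 80 W)": 250 / 80,
}
for name, gflops_per_watt in systems.items():
    print(f"{name}: {gflops_per_watt:.2f} GFLOPS/W")
# 0.50 vs 3.12 GFLOPS/W: the starting point Intel describes is ~6x behind.
```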

I think this is Intel missing the point again. If you *really* want to deliver a workload with the absolute minimum of power consumption, starting off with the x86 as the basis for delivering it is not necessarily going to be the optimum solution. Intel are very good at forcing silicon manufacturing towards ever more impressive transistor performance, but everyone else catches up sooner or later and just builds ARMs using the same tricks. And ARMs seem to have an inherent architectural advantage when it comes to Performance/Watt metrics.

Where this may just save Intel (at least for a while) is in the world of servers. If they can point to significant power savings in the data centre then the operators will be replacing their equipment as quickly as they possibly can.

0
0

Google Native Client: The web of the future - or the past?

bazza
Silver badge
Unhappy

@Thomas Wolf, encore

I've done a bit more digging.

Jazelle exists, but isn't widely used. Japanese phones seem to use it a fair bit, but that would appear to be that. Seems a shame. I tried to find out whether Blackberries use it with no success. Given their pretty good battery life, perhaps they do.

Hardware acceleration has made everything else on ARM pretty good - video/audio codecs, GPUs with adequate grunt, etc. etc. So why not Java?

If the chip manufacturers (TI/OMAP, Qualcomm/Snapdragon, etc) don't put it on then no one can use it. And given that a large fraction of the mobile market (Android & iOS) don't support Java anyway, why bother to put down silicon that's not going to be used?

Seems a shame - hardware accelerated Java could provide a really nice solution to the problem of write once run anywhere in the mobile space, but I guess there's too many vested interests to prevent it ever taking off. There's Apple with the iTunes store and Google with the Android store for a start; and neither of those parties want to open up their platforms to apps from just anywhere...

1
0
bazza
Silver badge

@Thomas Wolf

Almost right.

A lot of ARM devices implement Jazelle, which is ARM's Java byte code execution engine alongside the ARM instruction set. In essence you can execute Java byte code natively alongside ARM instructions. There's an ARM op code that says that the next instruction to load from memory will actually be Java byte code; it's as seamless as that.

All of a sudden Java doesn't seem so stupid on the mobile platform, does it? Though I don't know if any of the Java ME environments out there or Android's Dalvik use it.

1
0
bazza
Silver badge

@Def

"The world+dog is moving away from languages like C and C++ for a reason."

Not entirely correct. Those who really want the ultimate in performance are using them in a big way. Many datacentre people are wondering if C/C++ are a better bet than PHP, etc. from the point of view of electricity bills. And a surprisingly large fraction of the HPC community are still using Fortran. Almost all OSes are in C / C++ one way or another. Big applications like database engines, CAD packages, CFD modellers, etc. are not written in Javascript.

0
0
bazza
Silver badge

@John Miles 1

Careful - you'll be turning JavaScript in to MatLab, and you reeeeeeeeeeeeeeeally don't want to do that if you want high performance!

Other languages have done just that. Motorola extended C (and hence C++) on PowerPC with new types like "vector float" and "vector int". If you wanted to add four floating point values to another set of four values then it is a simple operation along the lines of ans_vec = vec_add(vec1,vec2), guaranteed to complete in a single clock cycle. A very good way to easily get stunning performance out of quite slow clock rate PowerPCs (equivalent to a x4 on the clock rate if you were really good).
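
For anyone without an AltiVec-era PowerPC to hand, the same idea via numpy (an analogy, not Motorola's API; numpy's vectorised operations end up on the CPU's SIMD units in much the same way):

```python
import numpy as np

vec1 = np.array([1.0, 2.0, 3.0, 4.0], dtype=np.float32)
vec2 = np.array([0.5, 0.5, 0.5, 0.5], dtype=np.float32)
ans_vec = vec1 + vec2        # four single-precision adds in one operation
print(ans_vec)               # [1.5 2.5 3.5 4.5]
```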

I think that deeeeeep down in the Intel C compilers there's a very similar idea hidden away from view but still accessible if you go looking. Intel seem much more focused on providing complete libraries of useful routines that hopefully mean you as the programmer don't have to get that low level. But the low level stuff is still there somewhere.

0
0
bazza
Silver badge

@DrXym

>>>>

Even assuming the sandbox is secure, the fact that the code is processor dependent makes it a really dumb idea.

<<<<

Well, except that currently that's what you have to do to get ultimate performance. Until either x86 (eek!) or ARM or PowerPC or SPARC or MIPS (all much nicer) achieve a complete world wide instruction set monopoly we're stuck with that. And if ultimate performance isn't needed then you'd probably use Java, JavaScript, etc.

In effect Google are trying to provide a completely standardised API for native app developers on all platforms so that apps are write-once-compile-many-debug-once. History has shown that such things tend to fall to the lowest common denominator, which is a sure-fire way of not being able to exploit the maximum potential of any given hardware platform, which rather defeats the whole point. I wouldn't be surprised if they couldn't make that outperform a really well written .NET or Java (dare I say even Javascript? On second thoughts, meh) virtual machine *and* keep it truly platform independent.

>>>>

With PNaCl, Emscripten wouldn't be necessary and apps could benefit from near native execution speeds"

<<<<

Yes, but PNaCl on top of LLVM gets away from the main thrust of NaCl, which is to be purely native on the client. If PNaCl becomes their main effort then really they're just trying to compete with any other VM-based cross platform ecosystem like .NET, Java, etc. Why bother doing that when they're years behind all of those?

0
0
bazza
Silver badge

@DZ-Jay

"Very good and insightful post."

Thank you very much :-)

>>>>

I believe that was exactly Blizzard's point, especially considering that not every single native application is a "properly written, decently compiled piece of native code."

<<<<

Well, maybe. Blizzard is right in that a piece of JavaScript can be made to run better by a better interpreter as well as by the developer actually improving the source code's own efficiency. But I suspect that Blizzard is being rather optimistic if he thinks that an interpreter can make JavaScript faster than ordinary native code.

For example, imagine that they were to develop a Javascript engine that automatically spots opportunities for parallelisation in the source code. Fantastic!!! That can all be vectored up, maybe executed on a GPU if it's big enough to warrant it, amazing!

However, all those tricks will also exist in the native application world too. Many already do (loop unrolling, etc. The native C/C++ compiler writers have been trying pretty hard over recent years, especially Intel). All you have to do is set the right compiler switches to tell it to do what it can, et voila, a faster application is produced. And ATI and Nvidia are trying very hard to make useful APIs (OpenCL and CUDA) available to developers to simplify the task of doing really big number crunching.

So there's nothing special about JavaScript that means that there are some magical optimisations that can automatically be applied that couldn't also be applied to C, C++ or indeed any other language. And if they are applied to a native application at compile time that's likely always to be better than suffering the overhead of an interpreter. Conceivably one might run the interpreter in a separate thread on a different CPU core to get round this. But that is consuming a core's runtime which might otherwise be dedicated to executing application code.

Similarly I think Google are crazy if they think they can successfully and usefully abstract all the fancy high performance computing APIs that are currently available to native application developers. For instance, will they make NVidia's CUDA or ATI's OpenCL available as a standard part of the NaCl environment? If not then already they're way behind the curve. It will likely always be the case that as APIs for high performance come along (like CUDA and OpenCL) NaCl will always be playing catch up, won't be able to support them on all platforms, or will just not bother.

The only way to achieve better performance on given hardware than is achievable through compilers / interpreters spotting the obvious or off-loading this 'n' that to a GPU is to have explicit parallelisation in the original source code. This has traditionally been perceived as very difficult, so most people and indeed almost the entire computing industry has tried to avoid tackling this head on.

There is some progress though. Scala (for those who've not heard of it, a language that runs on the JVM and interoperates with Java) brings the old fashioned Communicating Sequential Processes paradigm from 1978 (!!!) back to life. This simplifies the business of developing applications that are inherently parallel. It takes a big shift in how one goes about designing a computer program, but trust me it's worth it. This is (currently) a much better starting point than trying to get a compiler or interpreter to work it out for itself. Likewise OpenCL and the like are making it easier to exploit the mass parallelisation available in a GPU.
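
A flavour of the CSP style, sketched here in Python with a queue standing in for a channel (Scala does it with rather more elegance; the point is that processes share nothing and only communicate):

```python
import queue
import threading

chan = queue.Queue()             # the channel: the only shared object

def producer():
    for i in range(5):
        chan.put(i * i)          # send
    chan.put(None)               # end-of-stream sentinel

def consumer():
    while (msg := chan.get()) is not None:   # receive, blocking
        print("received", msg)

threads = [threading.Thread(target=f) for f in (producer, consumer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```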

0
0
bazza
Silver badge

Sorry, but quite long...

Good article, thank you.

There’s a whole lot of horse shit being spouted all over by the various people quoted in the article. For instance:

"While JavaScript is a fabulous language and it just keeps getting better, there is a lot of great software that isn't written in JavaScript, and there are a lot of software developers that are brilliant, but they would rather work in a different language,"

Entirely wrong. JavaScript is merely an adequate language for certain purposes. Programmers use other languages for sound technical reasons (performance, libraries, etc), not just because they’d rather not use JavaScript. If Brad Chen thinks that all programmers should somehow want to use JavaScript (or maybe some other single language) then he’s starting off on the wrong foot.

And just who is Linus Upson trying to kid:

"One of the key features of the web is that it's safe to click on any link. You can fetch code from some unknown server on the internet,"

So Google have never got stung by a dodgy web link then? There have never been holes in JavaScript interpreters have there?

"Before, when you downloaded a native application, you had to install it and you had to trust it. With Native Client, you can now download native code, just like you download JavaScript and run it, and it's every bit as safe."

That may be true but they're carefully chosen words. "Every bit as safe" doesn't mean perfectly safe.

And how about this little gem:

"You've added this arithmetic to make sure the jumps are in range. But the real issue is that if it's really clever, a program will arrange for a jump to jump past that arithmetic," says Morrisett. "You might protect one jump but not the next one."

So Morrisett is saying that someone might just do a little manual hacking to insert op codes in order to achieve something nefarious? It depends on where the verification is performed. If it’s done on the client as the code is running, then this whole NaCl sandbox idea will fall to the oldest hacking trick in the book. And using x86’s segment registers is mad. In today’s world of virtualisation there are many fine instructions on x86 from Intel and AMD to make strong sandboxing realistic, yet Google are choosing to ignore all that in favour of an archaic monstrosity from the dark ages of computer architecture history?
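
To be fair to Google, the published NaCl scheme does answer this objection, at least on paper. A toy model of the mask-and-align trick (Python standing in for what the compiler emits before every computed jump):

```python
SANDBOX_SIZE = 1 << 24     # code region size, a power of two
BUNDLE = 32                # fixed instruction bundle size, a power of two

def masked_target(addr):
    addr &= SANDBOX_SIZE - 1    # the jump cannot leave the sandbox...
    addr &= ~(BUNDLE - 1)       # ...or land mid-bundle. The mask and the jump
                                # sit in one bundle, so you cannot jump past
                                # the arithmetic to the bare jump itself.
    return addr
```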

And Google haven’t done an ARM version yet. Haven’t they seen the mobile revolution happening just down the corridor in the Android department, in Apple’s shack, at Microsoft and literally everywhere else? Not having an ARM version is soon going to look pretty stupid if it doesn’t look stupid already… And isn’t PNaCl just mimicking Microsoft’s .NET and Sun’s Java? Does the world really need another one?

“Chrome will only accept Native Client applications distributed through the Chrome Web Store, and Google will only allow Native Client apps into the store if they're available for both 32-bit x86 and 64-bit x86”

So NaCl won’t be the web then. Users won’t be able to click on any link out there in the Web and get a NaCl app because they’ll have to visit a Google run store? That sounds *very* inconsistent with what Linus Upson was saying earlier.

But hang on, Chris Blizzard is talking junk as well:

“Once you download the native code, there's no opportunity for browser optimizations.”

Err, isn’t that the whole point of native code? Isn’t it supposed to be fully optimised already, no room for improvement without a hardware upgrade? No amount of software re-jigging inside a browser is ever going to make a properly written, decently compiled piece of native code run any quicker than it already does.

“A source code–based world means that we can optimize things that the user hasn't even thought of, and we can deliver that into their hands without you, the developer, doing anything.”

Hmmm. I wonder how many web site authors, plug-in developers and the like have spent feverish hours in the middle of the night trying to fix a web site or plug-in to cope with Mozilla changing something YET AGAIN. Hasn’t Blizzard heard about the debacle over Firefox version numbers? His statement is correct only if the ‘optimisations’ don’t affect the standard, but Mozilla (and everyone else I guess) hasn’t exactly agreed what the standard is nor kept to it:

"What are you going to do about version compatibility? What are you going to do about DLL hell?”

Indeed. What are you going to do about plug-in hell?

And this is a real beauty:

“Chen and Upson also point to efforts like the Emscripten project, which seeks to convert LLVM bit code to JavaScript. Even if Native Client isn't available in other browsers, Upson says, this would allow all Native Client applications to span the web.”

So we’re going to write in C++. That’ll get compiled to LLVM bit code. Ordinarily that would get executed in some sort of VM, just like .NET and Java, in which case I might have chosen to use C# or Java in the first place. But just in case that VM is missing, the LLVM bit code will get re-compiled to JavaScript, which in turn will get interpreted to x86 op codes. IN THE NAME OF HOLY REASON HOW IS THAT SUPPOSED TO BE A GOOD IDEA? Sorry for the shouty shouty, and I’m not religious in any way either, but sometimes things make me snap. It’s not April 1st is it? No, good; I thought I’d better check.

Right, enough of the rant. Web apps (Java, JavaScript, whatever) are Web apps. Native apps are native apps. They serve different purposes. NaCl is another Google effort to corner more online advertising revenue by means of setting up another app store ecosystem that doesn’t actually deliver any tangible benefit to the end user. All this talk of ‘trust’ doesn’t matter two hoots. In both models you have to trust either the app developer or Google. Why is trusting the app developer worse than trusting Google? You could even argue that a Single Point of Trust is worse - just look at the problems we've had when a single CA (Diginotar) gets hacked.

Unless they pull their fingers out very quickly NaCl is going to wither and die as the consumer world transitions wholesale to ARM. This transition is likely going to be driven like we’ve never seen before by Microsoft bringing out Win8/ARM.

On the face of it Linux (well, Android), Apple’s and Microsoft’s propositions are far more sensible (though Oracle might do for Android yet in the law courts). Java and .NET do a decent enough job. Microsoft will also have to do a decent job of making the x86 / ARM choice a non-issue to native developers (and the word is that they’re doing quite well on that front). Apple has made it relatively simple for developers of native apps to target the whole Apple eco system.

Battery life is going to be king for many years to come, and NaCl looks like a very bad way of extending battery life to me, not least because it’s currently stuck in the land of x86. If Win8/ARM machines start issuing forth in large numbers and last whole days on a single charge, who’s going to want a power hungry x86 machine running anything, least of all Chrome and NaCl?

17
1

Linux.com pwned in fresh round of cyber break-ins

bazza
Silver badge

@AC, re: @Captain Scarlet

"Clearly this problem is with configuration/implementation of the security on the Linux systems involved, probably with a little user complacency thrown in for good measure and not a fundamental problem with the quality of Linux."

I'm not sure that it is clear. It is clear that a privilege escalation has occurred, but I wasn't aware that anyone was saying how it had been accomplished. If it is a kernel problem, then like wow, that's a big deal. An unknown kernel bug allowing such escalation is a big worry for any OS, not just Linux. But even if it is just a config problem, what's going on there? Why are they still offline?

2
1

Apple seeks product security boss after iPhone loss

bazza
Silver badge

Or...

Or they could just choose to chill out a bit and be less obsessively secretive. Would that really dent their sales in any measurable way whatsoever?

Anyway, Apple products are pretty predictable - shiny, lacking in some useful buttons and features that everyone else has been doing for years (FM radio, anyone?), pricey, designed to lock you into an ecosystem built to make yet more money, and occasionally suffering form over function (antennagate?).

5
0

Skype: Microsoft's $8.5 billion identity tool

bazza
Silver badge
Happy

@Dazed and Confused: Seems to do that already?

I've got Skype on PCs, phones, etc. They all ring when someone calls me, and when I answer I'm speaking to whoever.

2
0

How Apple's Lion won't let you trash documents

bazza
Silver badge
FAIL

VAX VMS?

I doubt I'll be the first to point this out but here goes anyway. Did I miss Mac OSX being transitioned from FreeBSD to VAX VMS? Are Mac users going to have to get used to typing PURGE?

I reckon that there's a high chance that the less technically experienced users out there are going to get veeeeeeery confused by this. The thought of trying to explain a complicated version control system and when it does what it does and why it does it to my Aunt is not an appealing prospect!

I shall snigger from afar....

2
0

'Satnavs are definitely not doomed', insists TomTom man

bazza
Silver badge

@Alex King

"Contrary to an earlier poster, nobody does (or should) give two hoots about gain, antenna patterns or whatnot. If it works, is easy to use and does the job then that's the point."

Except that if you as a consumer wanted to choose between them based on GPS reception performance that is the information you need. Without it all you can do is pick one at random.

This forum has many people saying that they've suffered GPS drop outs. When someone is lost in a city with no GPS reception they do care. Shame they didn't think about that when they were buying it in the shop.

But because the industry is effectively silent on the matter there is no commercial pressure. Sure, it works quite a lot of the time but we would all like it to work better.

TomTom certainly used to care - my ancient old TomTom easily outperforms any phone I've ever seen in terms of GPS signal reception. When you're driving around the Peripherique and motorways in Paris through all those half obscured almost-tunnels you really don't want a GPS drop out; you will miss your exit! My TomTom hasn't let me down yet, but every phone I've seen packs in at the first hint of overhead obscuration.

0
0
bazza
Silver badge
Unhappy

@mikeyt: crims aren't that bright

Cars used to get broken into if the windscreen had the marks from the sucker on it!

I suspect that when some idiot breaks into a car these days they're not doing it specifically for the satnav; they're just not fashionable enough any more.

0
0
bazza
Silver badge

@Mark 63

I like my watch to work when my mobile battery goes flat...

You're right about the mp3 player market, hardly anything decent left on it. I'm still using an ancient iRiver iHP-120 in the car, still works very well indeed, and the little cable remote control is just perfect - don't need to look at it even. Much better than fumbling around with a crappy touchscreen on a smartphone. Two headphone jacks (surprisingly useful, you can have great fun with a pal in an airport departure lounge listening to rude songs, sniggering away without anyone else being able to hear), loads of different codecs, optical line in/out, FM radio (Apple still don't put radios in theirs, do they? Why? I mean, why oh why oh why do the f*****g idiots not just put a damned 5-cent FM radio chip in their goddamn shiny toys? How long can a fit of pique over them not thinking of it first go on for?).

Anyway, I digress. Apple's success has really lowered people's expectations of what they think is technically achievable. It's no longer really commercially feasible for other manufacturers to push out superior products because not enough people understand the benefits of the technology anymore. Form is now more important than function.

Isn't it time for the competition authorities to take a serious look at Apple's dominant position before satnav is reduced to nothing more than an eBook atlas?

4
0
