* Posts by bazza

1926 posts • joined 23 Apr 2008

Jesus Phone gives Sprint redemption 'this October'

bazza
Silver badge

"guess when the iPhone rumor broke?"

12:00pm?

1
0

HP: webOS will still run PCs and printers

bazza
Silver badge

@Asgard: Symbian had other problems

I agree that Nokia have been a poor custodian of Symbian ever since they got hold of it (EPOC32) from Psion. Some things written a long time ago by ex-Psion people are quite clear on that point.

However, it's well known that Symbian is a difficult OS to develop native apps for, far harder than OSes that have come from mainstream mains-powered hardware. The reasons for the difficulty are clear: achieving ultimate performance on a battery-powered device mandated a way of doing things at odds with the normal programming paradigms we all learnt when young. This really showed through in final products. Even today Symbian phones generally have very good battery life in comparison to iOS- or Android-driven machines.
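
For anyone who hasn't had the pleasure, the canonical example of that 'different way of doing things' is the active object: one thread, cooperative, event driven, no blocking and no busy waiting, so the CPU can sleep between events. A rough sketch from memory follows; the names (CActive, CActiveScheduler, RunL, DoCancel, SetActive, RTimer) are Symbian's own, but the rest is illustrative only and skips the two-phase construction, cleanup stack and error handling a real Symbian app would need, so don't expect it to build against an SDK as-is.

#include <e32base.h>

// Illustrative only: a timer-driven active object that does a little work once a second.
class CHeartbeat : public CActive
    {
public:
    CHeartbeat() : CActive(EPriorityStandard)
        {
        iTimer.CreateLocal();             // should really check the returned error code
        CActiveScheduler::Add(this);
        }
    ~CHeartbeat()
        {
        Cancel();
        iTimer.Close();
        }
    void Start()
        {
        iTimer.After(iStatus, 1000000);   // ask for a completion event in one second...
        SetActive();                      // ...then the thread sleeps until it arrives
        }
private:
    void RunL()                           // the active scheduler calls this on completion
        {
        // do a small slice of work here, then re-queue the request
        Start();
        }
    void DoCancel()
        {
        iTimer.Cancel();
        }
    RTimer iTimer;
    };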

I reckon that Nokia were never able to assemble a large enough team of programmers who *really* knew Symbian. In essence they could not put enough development into it to allow it to compete on bling, user interfaces, etc. as well as on the purely technical matters of battery consumption and RAM requirements. I suspect that the reason why they didn't have a big enough pool of the right sort of programmer was money; acquiring programmers / developers with such rare skills is expensive in salary and/or training. Maybe if they had got it right straight away there would now be a much bigger pool of programmers, but they didn't, so there isn't.

But in a way Symbian is beyond rescuing. Even if Nokia could salvage the mess and turn out a decent user experience, there's almost no point anymore. People are now completely used to having to charge up their fondleslab once a day or more. And people want to download apps, and those apps aren't going to be native Symbian apps. It's too hard and time consuming to be worthwhile for the average mobile phone eye candy app developer. So they will have to be written in something hideous like Javascript, and bang goes all those carefully crafted power-saving design features.

In a way it's a bad sign for the whole computing industry. A fundamental requirement of portable devices is a long on-time, even if we've gotten used to having to charge up once or twice a day. Given the poor rate of improvement in batteries this really means less power consumption, which is something that the rest of the computing world would like too. So far the truly successful means of achieving this have been:

1) better chips

2) that's it.

So far software has not really played a significant role in reducing power consumption, and arguably the modern trendy things like Javascript have made it worse. Yet Symbian shows that if you do get the software right you can make significant improvements without having to do anything at all to chip or battery design. Are we as an industry just too lazy to actually pursue that 'free' performance boost?

0
0

Here lies /^v.+b$/i

bazza
Silver badge

Iain M Banks?

c:\>restore.exe a: c:\*.*

0
0

iPhone 5 to include Japanese earthquake warning system

bazza
Silver badge
Thumb Up

@Joseph Haig

Ach, dammit, you got there first!

0
1

Oracle's Sparc T4 chip: Will you pay Larry's premium?

bazza
Silver badge

@Paul 77, @Chemist

@Paul 77

Why would you do that when you could just access a server across a network? Back when I first started you just accessed some server from an X terminal. Alright, you'd probably have to use a Linux PC instead of an X terminal these days, but otherwise nothing's changed.

@ Chemist

You are most likely correct. But by Linux I suspect you really mean Linux on x86/x64. Nothing wrong with that per se, but there can be very good technical reasons why x86 might not fit the bill. Not every academic (or developer for that matter) is best off hosting their work on x86, and it's their hard luck if they don't look around to see what else is available. As Kebabbert points out, Sparc T3 is very good for crypto. That might be handy if you're hosting a large website with HTTPS-only access. Similarly, anyone doing large amounts of decimal (as opposed to binary floating-point) maths really needs to take a look at POWER, which is why IBM do quite well in the banking / financial services sector.
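
To illustrate the decimal point with a few lines of C++ you can try anywhere (nothing POWER-specific here): ordinary binary floating point can't represent 0.10 exactly, which is exactly why the finance people want decimal arithmetic done properly in hardware or libraries.

#include <cstdio>

int main()
{
    double sum = 0.0;
    for (int i = 0; i < 10; ++i)
        sum += 0.10;                                  // ten lots of ten pence
    std::printf("%.20f\n", sum);                      // 0.99999999999999988898..., not 1
    std::printf("%s\n", sum == 1.0 ? "exactly 1.00" : "not exactly 1.00");
    return 0;
}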

0
0

David May, parallel processing pioneer

bazza
Silver badge

@Vic: Well I never

Well, I didn't know that! I never used Occam itself, but I'd no idea that the C compiler was done that way.

On the whole I didn't like the dev tools very much. Debugging was always a nightmare. I always felt that Transputers needed some sort of separate debugging bus rather than relying on the Transputer channels themselves. But JTAG hadn't been invented then.

1
0
bazza
Silver badge
Happy

Ah, Transputers!

I cut my professional teeth on Transputers. Still using the same message passing principles 20 years later.

0
0

Android app logs keystrokes using phone movements

bazza
Silver badge

Aaaaiiiiiieeeeeeeee!

I downloaded the app. Then I went and found a big hammer and did some frenzied typing, screaming "Log that you bastard" over and over and over, just like a proper fanbois! App's a load of crap :)

Sent from my desktop.

0
0

Microsoft begins cagey Windows 8 disclosures

bazza
Silver badge

@Synja, because...

...if you did put X86 instructions into an ARM, what you'd end up with is a combined X86/ARM. Then you can kiss goodbye to all the advantages ARM has in terms of power consumption, core size, cost, performance etc.

The reason why Intel is in such a fix in the mobile space is because X86 is a very bad starting point when it comes to making a low power chip with acceptable compute performance. It's fine when you have power to spare (in a desktop for instance), and indeed Intel's desktop/server/laptop chips are pretty quick. To date Intel have relied on being better at silicon manufacturing processes to stay ahead of the competition, but these days that isn't enough to keep X86 competitive in the low power sector. That's why pretty much every phone / tablet out there is running an ARM.

If Intel really wanted to make chips that are competitive on power consumption they are pretty much obliged to make changes to their instruction set. Then it wouldn't be an X86 anymore!

Intel's other problem is that the server world is beginning to wake up to the cost advantages of lower power consumption. If the server people get an appetite for ARM then Intel will lose a massive amount of market share.

2
0
bazza
Silver badge

@ DrXym, sounds reasonable

"The host OS could precompile the LLVM bitcode into an actual native executable and cache it somewhere"

Not a bad way to go. It'd take some nifty tech to ensure that the result is 'as good as' a properly compiled native app, but there are some very clever people out there these days. I suppose MS's other point is that any .NET app should run as-is with zero modifications, provided that MS can make the .NET CLR work equally well on ARM as on X86. They've already got some practice at that - there certainly used to be a 'try it if you dare' C# implementation for embedded ARMs.
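
Something along these lines, very roughly (hypothetical cache path, and assuming a stock clang that can turn a .bc bitcode file into a native executable; all the verification, sandboxing and signing a real OS would insist on is hand-waved away):

#include <cstdio>
#include <cstdlib>
#include <filesystem>
#include <string>

namespace fs = std::filesystem;

// Toy 'compile once, cache the native binary' launcher. Needs C++17.
int main(int argc, char** argv)
{
    if (argc < 2) { std::fprintf(stderr, "usage: runbc app.bc\n"); return 1; }
    fs::path bitcode = argv[1];
    fs::path cached  = fs::path("/tmp/bc-cache") / bitcode.stem();   // hypothetical cache location
    fs::create_directories(cached.parent_path());
    // Recompile only if the cached native binary is missing or older than the bitcode.
    if (!fs::exists(cached) || fs::last_write_time(cached) < fs::last_write_time(bitcode))
    {
        std::string cmd = "clang " + bitcode.string() + " -o " + cached.string();
        if (std::system(cmd.c_str()) != 0) return 1;
    }
    return std::system(cached.string().c_str());
}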

0
0

Apple changed shape of Galaxy Tab in court filing

bazza
Silver badge

@AC, re (¯`·._.·(¯`·._.·(¯`·._.· LMAO ·._.·´¯)·._.·´¯)·._.·´¯)

Oh crap, there'll be ASCII art all over El Reg now...

\ ______|\_____*-----------------------

\/ _ ¦¦ '_\

/\____________/

/

That's supposed to be a shark with a frikin' laser beam, but I'm not so good at this, proportional fonts blah blah.

1
0

Has Google wasted $12bn on a dud patent poker-chip?

bazza
Silver badge

@vic 4

"I'd say one of the main reasons is so they didn't have to license it from Sun."

I reckon it was to build a locked in app store thing to capture more of the advertising market. Using straight Java would have allowed any old Android customer to use any old app market without troubling Google's servers and their accompanying adverts.

1
4

LightSquared blasts GPS naysayers in FCC letter

bazza
Silver badge

@Trollslayer

The point I sought to make in my previous post is that any mobile phone service, be it satellite or terrestrial, has a quite powerful transmitter (i.e. the mobile phone itself) in exactly the wrong location. It's right next to the customer, and presumably very close to the customer's GPS receiver. That is far more significant to a GPS receiver than a base station a few miles away or a satellite in orbit.

The subsequent point I made is that presumably that has always been the case long before Lightsquared came along, but the old satellite mobile bands that Lightsquared took over were never popular enough for the problem to come to widespread attention. Only now that GPS is a mass market and that Lightsquared are reviving usage of the adjacent band has the problem been highlighted.

It would be interesting to know if a Lightsquared mobile phone has a GPS receiver that works when the phone is also transmitting to the Lightsquared network (for example when using something like Google Maps). If it does then the GPS industry is clearly talking horse shit.

1
0
bazza
Silver badge

@Number6

Of course, it depends on where the nearby transmitter is. The most likely source of the most troubling transmissions would be a customer's Lightsquared mobile phone, not the Lightsquared base station. You don't have to go very far from any base station before the received signal strength is really quite low. 1/r^2 is a powerful attenuator! The mobile phone, being so close to the customer, is almost always the stronger transmission, relatively speaking.
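
To put some rough numbers on it (entirely made up, and assuming nothing cleverer than 1/r^2 spreading): compare a 0.2W handset half a metre from your GPS antenna with a 20W base station 2km away.

#include <cmath>
#include <cstdio>

int main()
{
    const double pi = 3.141592653589793;
    // Illustrative figures only: 0.2 W handset at 0.5 m, 20 W base station at 2 km.
    double handset = 0.2  / (4.0 * pi * 0.5 * 0.5);         // power flux in W/m^2 at the GPS antenna
    double base    = 20.0 / (4.0 * pi * 2000.0 * 2000.0);
    std::printf("handset %.3e W/m^2, base station %.3e W/m^2, difference %.0f dB\n",
                handset, base, 10.0 * std::log10(handset / base));
    return 0;
}

On those crude numbers the handset in your pocket arrives at the GPS antenna some 50dB stronger than the distant base station.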

You *can* always build a filter with the required level of isolation, but for very high levels of isolation it is going to be physically quite big and/or expensive. This would not be ideal for a GPS receiver, which is by definition a portable device. Military equipment doesn't mind filters being expensive, but large filters can be just as much of a no-no there as in civilian applications.

It is very difficult for the frequency planners to know what to do. They could allocate frequency bands in such a way that there are adequate guard bands either side to protect against any conceivable transmitter. But that would be enormously wasteful of bandwidth, and it depends on an accurate understanding of what technological developments there may be in decades to come. Not a straightforward task, and something that humans have proven to be very bad at. Remember the 640k RAM limit?

But in this particular case I think that even the original use of the adjacent bands for a satellite mobile service was arguably wrong. Sure, the satellites for that service were a long way away (in orbit, in fact), so their transmissions were never going to be powerful enough on the ground to affect GPS receivers. But those satellite mobile phones themselves would have to be quite powerful to be received by their satellites. It's quite possible that they would have interfered with GPS receivers in much the same way as Lightsquared's transmission apparently does. I suspect that this has been going on all the time. But seeing as the original satellite mobile service is defunct it sounds like it wasn't so popular in the first place. So maybe the problem was always there, just not badly enough for it to come to widespread attention.

So if that is the case, who is to blame for the mess? Lightsquared? Not really, they've acquired the rights to a band; they're not transmitting outside that band, so they're sticking to the rules. However, I do agree with the view that Lightsquared should have known better. In effect they trusted the FCC to know their stuff when they asked for the band. Was that commercially a wise choice? Probably not. And I bet that the receivers on Lightsquared's mobile phones are just as vulnerable to out of band interference [similar bands, similar electronic constraints to achieve significant out of band filtering]. They're getting away with it too, because the adjacent band is GPS, which has no terrestrial based transmitters. I bet that if GPS were replaced with some sort of mobile telephony service, Lightsquared would be complaining just as loudly as the GPS crew are today.

How about the frequency planners at the FCC? Depends on whether they were obliged to consider cheap mass-market GPSs when the bands were allocated all those years/decades ago. Back when the bands were first considered (in the 1980s?) it would hardly have been imaginable that we'd have mobile phones, never mind mobiles with GPS in them. And the rules on operating bands are quite clear: if your receiver is open to receiving transmissions from adjacent bands, that's your problem, not the FCC's.

How about consumers and their want of cheap, small GPS receivers? No, not really; they're not the designers of the equipment they've bought.

How about the GPS receiver manufacturers? Largely yes - they've got away with ignoring the effects of transmissions in adjacent bands for many years now when they had no right to assume that those bands would be forever quiet. The FCC rules (and the rules from frequency planners everywhere else in the world) are very clear in black and white on paper about that, and always have been. And if the manufacturers had paid strict attention to the FCC rules then we would likely not now have things like GPS in phones.

So what happens now? Personally I don't think that the GPS manufacturers deserve to get away with it. However, the real question is does the paying public deserve the right to access the GPS service in the way they do? Yes, they do! Affordable, convenient and functional GPS does make the lives of the general public better in a very big way, and improving the lives of the public as a whole must be a governmental goal.

I think that the FCC should buy out Lightsquared's band allocation (a pricey proposition), recover the cash through a one off levy on any GPS manufacturer with (currently) non-compliant kit, and place bigger guard bands either side of GPS.

3
0

iPhone 4 prototype journo off the hook

bazza
Silver badge

Big problem for the case

Apple could hardly claim to have suffered economic damage as a result of the premature outing of the iPhone 4. That wouldn't have helped the chances of a prosecution succeeding.

3
1

Google points finger at human after robo car accident

bazza
Silver badge

@Matt Bryant, quite right

What's worse is that the designer isn't in the car when it crashes so they're slightly less motivated to pay attention!

A worrying trend is that insurance companies are beginning to see automated car systems as a way of reducing accident claims. The old "computers don't make mistakes" attitude of the unthinking policy makers will make it very difficult for an individual driver to prove that the automatics were at fault. I'm not seeing any commitment to equip systems and accident investigators with the tools (e.g. independent black boxes that the police and owner can read, not just the manufacturer) they need to be able to diagnose a system fault. Without such things the 'driver' is likely to get the blame every time. Not for me thank you!

0
0

Researchers poke gaping holes in Google Chrome OS

bazza
Silver badge

Target Improbable

Given that Google are trying to build a new execution environment from (almost) scratch in a very short period of time, it's inevitable that problems are going to be incorporated.

The traditional OSes have been developed over decades and they're still not right yet. What's so special about Google's approach to make it likely that ChromeOS is trouble free in such a short period of time? Personally speaking I won't be touching it with a barge pole.

Google's only motivation for developing ChromeOS is to capture more of the advertising market. They're a commercial, profit driven company just like every other. ChromeOS is a dangerous strategy because it succeeds only if a substantial number of people can be persuaded that it provides a level of service and security above that which is offered by the more conventional platforms (Win/Mac/*nix). It will be difficult to provide such assurances if security researchers keep finding massive holes like this. And by going way beyond the scope of other things like Google Docs, gmail, etc. they're taking on a much bigger task and are less likely to succeed.

0
0

Note to Apple: Be more like Microsoft

bazza
Silver badge

@Volker Hett

Heard of Microsoft's Anytime Upgrade? Ten minute job at most per machine to do what you've accomplished with fresh installs. And I guess Exchange is actually just about the same for 2008 and 2008R2.

2
1

ARM scooping in cash but remains cautious

bazza
Silver badge

@Ramiro

Ah yes, but who would be allowed to buy them? The competition authorities wouldn't let any of the established manufacturers (Intel, TI, etc) buy them. If they were bought by, say, Intel it would give them a very powerful monopoly on a plate, not something that any of us should relish.

1
0

MacBook batteries susceptible to hack attacks

bazza
Silver badge

@Hardcastle The Ancient: Costs, I expect

"If you /must/ have a brain in a battery, why isn't it mask programmed? Just how smart does a battery need to be?"

Saves having to spend money on doing a mask for every single different battery design, much cheaper. Of course, 'cheaper' is a word that has both short and long term considerations. Business doesn't do long term very well, and a pricey round of court cases can turn previous short term profit gains into an expensive option.

0
0
bazza
Silver badge

@LPF, not necessarily so...

Macs aren't exactly immune to remote code execution attacks. It wouldn't take much more than a booby-trapped website (www.makemymacgoboom.com? Anyone bought that one yet?) to run the necessary code on the Mac of anyone who happened to visit it.

Human nature being what it is, it will only be a matter of time before some script kiddie tries to detonate Macs all around the world simultaneously courtesy of a trojan payload with a timed trigger, "just for the fun of it".

0
1
bazza
Silver badge
Mushroom

Questions questions questions!

If zapware were to get on to a laptop, would Apple honour a warranty? And if the battery could be set to become dangerous, with whom would the liability rest?

If battery fires are a real possibility Apple would need to sort that out sooner rather than later. Millions of laptop batteries going up in smoke would almost certainly lead to expensive court cases at the very least, with deaths at the other end of the scale of possibilities. Sounds like they ought to be able to push out a fix as a software update. Also airlines would certainly be well advised to consider whether Mac laptop batteries were safe enough to be allowed on flights.

But hang on a mo - has anyone checked to see if this is a feature of laptop batteries in general? I don't suppose PC laptop batteries are so very different.

5
0

Adobe releases lengthy list of Apple Lion woes

bazza
Silver badge

@trstooge, Good for you

"*The* key app on my Macs is the browser."

Is there an app for that? ;-) I hope your internet connection stays up. Also, if the only application you use is the browser, why have a Mac at all? Sounds like all that expensive OS-X shininess is being hidden by crappy Javascript apps...

Fair play to you though, if it's working for you that's great. But not everyone can work (or even play) just in a browser. Personally speaking I would not like to depend on the reliability of an ISP or Google or any other online app provider in order to carry out my profession.

It's actually an old Windows application that I use (called Select Yourdon), not an MS-DOS programme. Though there is an old Burr-Brown filter design DOS programme that's occasionally useful to dig out and run now and then. For some of us there really are old applications that are necessary. It's nice not to have to keep antique hardware going just because it's the only thing that runs a vital and irreplaceable antique application.

Adobe are merely pointing out that old programmes that were written around APIs that were current at the time are now broken. This is because those APIs are now missing from the new OS X. I was merely pointing out that MS seems to have a better track record when it comes to keeping old APIs available. If long term stability is important, then perhaps MS are a better bet.

2
2
bazza
Silver badge

@Buck Futter: Point missed

Looks like Adobe have done some pretty comprehensive regression testing on their major applications going back years and have produced an honest report on how well they work on Apple's latest and greatest. That's the kind of thing that you'd expect a company with a good long term view of customer support to do; look after users of older products long after those products were superseded.

Whereas Apple seem to do very little regression testing, indeed they seem to actively trash older stuff. It's a reasonable commercial strategy - it forces committed customers to spend to upgrade. And I can understand why Apple might think that the purity of the OS's design shouldn't be polluted by crufty code from the past; it should be clean, perfectly formed and 'Apple' in every way...

However, that doesn't do actual users (both developers and end users) any favours at all. It won't be just Adobe users who'll be stung by this; other older applications using deprecated APIs will presumably be broken too. It doesn't convey a message from Apple of long term stability, which is something that is actually quite important to a lot of people.

Perhaps that's why boring old Microsoft have done quite well. I can still run a quite useful CASE tool from 1993 on a modern Win7 machine without any difficulty.

Apple can't afford to piss off the developer community too much. Where would Mac be without Office, Adobe, and a few other key apps? Pretty much nowhere. Apple can't do these things on their own; they have to support the developer community in doing it for them to keep the Mac platform attractive. Shiny boxes that don't actually do anything are no use to man nor shareholder.

16
13

Microsoft rolls out One Big Windows strategy

bazza
Silver badge

@WonkoTheSane

My MP3 player runs Rockbox, and Rockbox is not Linux.

SkyHD boxes do run Linux, but from what I hear they are annoyingly unreliable. Probably not Linux's fault, but it's not helping either. And it's not as if you can get in there and fix the bugs yourself easily.

How's the fish bowl?

0
3

Heat sink breakthrough threatens ventblockers

bazza
Silver badge
Thumb Up

@Anton Ivanov

"The thing works so well because of conductivity across a disturbed boundary layer which is being kept in that condition by the heatsink spin. At a couple of thousand RPMs it is likely to be on par with a lot of heat transfer pastes."

Yes, I think that's a far better explanation of what's going on. It's better to move the heat by moving and constraining the air, rather than rely on the heat conducting through the air.

I still wish I'd thought of it!

1
0
bazza
Silver badge
Happy

@AC

Indeed, but that powers only the hard disk. I was thinking of the whole machine, or at least enough of it to close a few files properly. A VME chassis has an ACFAIL line, which can be used in embedded applications to do some vital stuff in the dying microseconds of the PSU's capacitor charge; quite a useful notification in some circumstances!

I thought that head parking was achieved through purely mechanical means, in that the forces exerted on the head (via the air cushion) by the spinning disk have a tendency to push the head arm off the disk. But there's clearly enough energy in the disk to do it electronically too.

Just a thought - that's not something that SSDs can really do, is it, unless they have a decent amount of capacitance somewhere. So is a power cut a slightly greater problem for an SSD than for an HDD? Whatever volume checking an OS performs after a power loss, it'd be much nicer to find that all the sectors had been correctly written.

0
0
bazza
Silver badge
Pint

Like all really good ideas...

...it's one where everyone will say "I could have thought of that".

The really clever bit is in the thin air layer. Sure, air is normally a terrible conductor of heat, but when the layer is so thin its thermal resistance is much reduced*. Thus, from the point of view of the heat, the rotating impeller is thermally 'attached' to the base plate (or at least much more so than if it were, say, 1mm away).
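
Back-of-envelope, with made-up numbers and ignoring the boundary-layer effects mentioned elsewhere in the comments: the thermal resistance of a still-air gap is roughly R = thickness / (conductivity x area).

#include <cstdio>

int main()
{
    // Illustrative numbers only.
    const double k_air = 0.026;                // W/(m*K), still air
    const double area  = 30.0e-4;              // ~30 cm^2 of overlapping plate area, in m^2
    double r_1mm  = 1.0e-3 / (k_air * area);   // a 1 mm gap
    double r_10um = 1.0e-5 / (k_air * area);   // a 10 micron gap
    std::printf("1 mm gap: %.1f K/W   10 micron gap: %.2f K/W\n", r_1mm, r_10um);
    return 0;
}

Two orders of magnitude, just from making the gap thin.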

I for one hope he/they make a pile of cash out of that. Clever ideas like that need rewarding. And besides, something like that spinning away at several thousand RPM has got to sound just a little bit like a turbine, and that'd be a cool noise for any PC to make.

There must be a pile of kinetic energy built up in that spinner. That could be used as a little energy reserve; lose the mains power, and the spinner becomes a generator providing just enough electricity for a cleanish shut down. Bit like an F1 car's KERS. My idea (unless someone else thought of it first)!!!

*Just like the metal-loaded epoxy used to connect some flexi circuits; it's a terrible conductor over any sizeable distance, but when used in thin enough layers that doesn't amount to much.

5
0

Monolithic supers nab power efficiency crown

bazza
Silver badge

Accountants

Depends on how the accountants that plague these projects set up the budgets. If the electricity is paid for out of a different bucket to the one you have available to buy hardware, why 'waste' the hardware budget on efficient kit?

0
0
bazza
Silver badge

Not a PS3 fanboi...

...because I don't own one, but I will apologise a little bit for rising to your bait. But I do like the Cell's internal architecture a lot. Much more elegant and responsive than the sledge hammer that is a GPU. Hard to programme properly, a huge amount of grunt available if you can programme it, probably extremely satisfying once mastered.

To my eternal regret I've not had to make use of one at all.

But being a SPARC fan, I am cheering on the K machine at Riken. Just goes to show that performance is as much bound up in good inter-processor comms as it is in CPU speed. All those GPU based machines seem to be terrible from the point of view of mean/peak performance ratios; sounds like their GPUs are being starved of work. Now if someone bolted the K machine's interconnect right into the middle of a GPU, think about what sort of awesome machine could be built! Though I'd still prefer a network of Cells...

0
0

MS to WinXP diehards: Just under 3 more years' support

bazza
Silver badge

Thirteen years...

...is pretty generous support (all things considered) to get for a proprietary OS where the updates have been free for all that time. Anyone who bought an XP retail licence all those years ago is going to end up having had a pretty good deal, considering they would have been able to port it on to new hardware several times by then.

Whether you'd want to have been stuck with it for all those years is another matter. I prefer Win7 these days, definitely a better product than XP.

How many Linux distributions can claim to have a re-install free upgrade path from that far back? Not many I'd guess. My personal experience of upgrading between major editions of Ubuntu has been patchy at best. XP may have been boring all this time, but it has done (mostly) a job that its users have wanted it to do.

1
1

'Lion' Apple Mac OS X 10.7: Sneak Preview

bazza
Silver badge
Stop

ARRRRGGHHH! More Creeping Tabletisation!

Launchpad is definitely a lurch towards running on a tablet. It's a worrying trend, heed it well.

Explanation - all the big development money in the industry seems to be going in to tablets and mobiles. Not that I care particularly for Apple, but the others (Ubuntu, Microsoft, etc) are all doing that too. Are we beginning to see the end of the line for making the life of the desktop / laptop user better?

Us desktop users aren't necessarily doing hip and trendy things - corporate droids mostly these days I'd guess - but we still like a nice working environment! Just because there aren't major profits to be made out of clear thinking hard nosed IT departments whose favourite line is "what do you need that for?" doesn't mean that there aren't users desperate for an upgrade. Personally I find Win7 quite good, but I'd hate to think that that's the end of the line; actually got XP at work.

0
0

One per cent of world's web browsing happens on iPad

bazza
Silver badge
Paris Hilton

Redressing the balance

Posting this from Firefox on a Solaris 10 VM on VMWare on Win7 64bit on an AMD CPU.

Nope, the pie charts ain't changed noticeably.

1
0

IBM to snuff last Cell blade server

bazza
Silver badge

@asdf, missing out

"PowerPC wont die but at least it is largely gone from general computing."

But not gone from other fields. Sectors like telecoms, military computing, etc. tend to care less about architectural backwards compatibility and more about performance, power consumption, etc. By being willing to switch around a bit they can exploit whatever is best at the moment. PowerPC is in fine form in the telecoms world, but has slipped a bit in the high performance embedded world.

Whereas 'general computing' has been stuck in the Intel rut for decades now. The trouble is that the battery powered and server farm sectors of 'general computing' have already chosen ARM or are threatening to do so. Why is that relevant? Well, it signifies a greater willingness on the part of the vendors to look beyond the world of x86 for the performance that sells. Doing that once means they have to keep doing it should something better come along in order to retain a competitive edge. It's entirely possible that PowerPC will be the chip of choice, and it might not be too hard for some vendor to go for it. The trouble is that such an endeavour will always be commercially driven; an offer of cheaper Intel chips might be just as commercially advantageous as switching to another architecture to get a performance edge.

ARM is having a quite interesting impact on the market. They own the mobile market, and they may end up owning the laptop market too (MS porting Windows, Apple talking about an ARM laptop). They may also end up owning a large chunk of the server market if vendors see useful performance per Watt figures for ARM servers. So where would that leave the great big hulking chips that Intel and AMD are churning out? With a somewhat smaller market I would imagine, apart from power desktop users, and there aren't really many of them. So will Intel/AMD keep developing these very powerful chips if the ARM architecture starts taking over the server market too? Possibly not, or not at the same pace.

So where might users who do actually need fast general purpose compute performance turn? With Freescale seemingly tarting up the PowerPC line with some recent announcements, and the embedded market there to support it, there might yet be a commercial rationale to move high performance computing over to PowerPC. Adobe may yet have to dust off their old Photoshop source code. I can remember in the early days of Intel Macs that Photoshop was slower than on the G5s because there was no AltiVec unit to exploit for all those image processing functions.

Regardless, it seems likely that the end users are going to have to get used to underlying architectures changing more than once every 20 years. We should be grateful. It'll mean more performance (hopefully) and less power consumption, and who cares what instruction set lies beneath?

6
0
bazza
Silver badge

Fog of War

Pity. I've had my eye on Cell for years, but the roadmap uncertainty has been quite off-putting. Maybe Freescale's newly announced multicore PowerPCs will make up the difference. Regardless, Cell was certainly a programming challenge, and not one where any old programmer could achieve maximum performance on their first afternoon. Perhaps that's the real reason why IBM have backed away from it. I gather that there are some in the games industry who have got to grips with it (and all that horsepower presumably makes a difference), so maybe Sony will continue with Cell. Who knows.

1
0

ITU Gen Sec: Why not speaking English can be a virtue

bazza
Silver badge
Coat

Google Translate?

Eh bien, parle anglais est clairement la meilleure option pour les communications mondiales. Considérez française - tout à fait exact, mais pas très bon pour faire des blagues po Par exemple,

"Pourquoi le poulet at traverser la route rapidement?

Pour éviter de devenir Coq au Vin ".

Aha, aha, aha. C'est la cause de mes côtés pour séparer en anglais, en français, mais l'Tumbleweed souffle passé.

0
0

Apple's new Final Cut Pro X 'not actually for pros'

bazza
Silver badge

Is The Register selling tickets?

Because I'm enjoying this show immensely!

Just to stoke some flames, I think that a new version that can't read the previous version's files at all is crap software. Imagine telling a software developer that Apple's new dev tools won't import or compile their existing source code. I'm sure the dev would be justified in being furious.

MS did a good thing (after complaints I imagine) with Office2k7 in producing a plugin for older versions to allow them to read the new file formats. That's the best possible philosophy when making big changes to a file format.

Having lit that firework I shall now retire to a safe distance.

4
0

Quantum crypto felled by 'Perfect Eavesdropper' exploit

bazza
Silver badge

@Destroy All Monsters: Are you sure?

"It's far more likely that factoring turns out to be in P than that QM falls over, really."

Are you really really sure?

Firstly, as Ken Hagan said elsewhere, the only 'quantum' part of quantum cryptography is the detectors. But as the original article indicated, these were prone in this particular case to incorrect operation in the face of relatively simple attacks. That is nothing to do with whether or not quantum mechanics is valid. It is merely our inability to reliably measure quantum states in the face of a simple attack.

Secondly, whilst quantum mechanics has indeed been shown to be a theory well matched to physical observations, it is still a theory. Richard Feynman had a good few things to say about theories, and he should know. Seek out the videos of his lectures on quantum electrodynamics that he gave in New Zealand; they're very good and I think they're still freely streamable. And indeed the semiconductor junctions on which we all now depend are devices exploiting quantum effects. But my point is that quantum mechanics is just a theory, no more, albeit one that seems to work very well.

Although QM is pretty good, it is reasonable to suggest that it may not be completely correct. Firstly, I don't think anyone has managed to make QM and relativity fit together. Both have a wealth of experimental data to suggest that they're along the right lines, but they remain theoretically un-united. So *something* is wrong somewhere. One of those 'somethings' is the behaviour of Pioneer 10, which isn't quite where it ought to be according to both Newton's and Einstein's theories of gravity, which otherwise seem to work quite well in keeping the planets in the right places. Nor are galaxies quite the right shape. And does a quantum state change instantaneously or over a finite period of time when an observation is made? It's quite an important question for QKD. But some of the experiments I've read about are hinting that the answer is the latter not the former, suggesting that there may be a hole in the basic premise of QKD.

So knowing that something is wrong somewhere in the theoretical models of why stuff happens, would you ever base the security of your system on it? The *only* assurance we have that it is correct is in effect a bunch of scientists saying "it looks OK to me". Whereas logical encryption algorithms like AES, DES, etc. all exist within the rules of mathematics, which are much better understood, because mankind made up the rules.

As Pete H pointed out they are still vulnerable in their actual physical implementations, but provided the logical implementation is correct and an attacker is unable to get physical access to either end then their strengths and weaknesses are deterministic solely within the mathematical framework in which they are defined. It could be that we don't understand the maths right. But that's a much more straightforward thing to worry about than being totally certain that we understand the physics.

0
0
bazza
Silver badge

@Destroy All Monsters: quick thought experiment

Just to follow up on my previous response to your most welcome post, imagine asking a physicist the following question.

"Would you bet the life of your first born child on Newton's law of gravity ultimately being proved correct in return for £million?"

120 years ago you would get quite a few saying yes. Immediately after Einstein's general theory of relativity was published you would still get some saying yes. Today, I dearly hope for the future social well being of the world that none would say yes.

I think that if I rephrased the question along the lines of "Would you bet the life of your first born child that quantum mechanics is completely correct in return for £billion (inflation)?" you might not get a 100% 'yes' rate. And if that's really the case, why should we bet the security of our communications?

0
0
bazza
Silver badge
Happy

@Pete H, not really

Your argument applies to instances where an attacker has physical access to one or other end of the communication link. Sure, if someone is in a position to do a power analysis on the encrypting device there's potentially a physical weakness to be exploited. However, the discussion so far has really been about intercepting the communications link between the two ends and whether or not there is an exploitable physical weakness. With purely logical algorithms like AES the intercepted signal is solely noughts and ones, so there is nothing to exploit beyond weaknesses in the maths. As you say, that is a bloody hard job these days. But quantum cryptography extends the physical weaknesses to all aspects of the encryption system - both ends *and* the communication link. Not a very desirable move perhaps?

0
0
bazza
Silver badge

@Remy Redert re:Dead in the water

Of course other encryption systems suffer from early problems, but you're missing my point.

The strengths and weaknesses of systems like DES and AES can be determined purely analytically, and their implementations are open to truly large scale testing and examination by anyone with the urge to download the spec and look at the source code. Whatever the weaknesses in the algorithms are, we can point to them and say definitively what they are, how hard they are to exploit, etc. Anyone can look at one aspect of an algorithm and say things like "you'd have to find the prime factors of that number there" and know that that would be a complete and definitive statement on the merits of that part of the algorithm. One can then objectively assess how hard it would be to perform said feat, keep an eye out for papers with titles like "prime factor finding" and generally be comfortable. And the same goes for implementations. This is because things like DES, AES, etc. are entirely logical systems that operate in rule sets created by man with no physical influences.

The problem with quantum cryptography is that the security of a key transfer relies entirely on the behaviour of physical processes, namely the quantum entanglement itself as well as the single photon sources and detectors. Knowing whether or not we have a complete understanding of these physical processes is much harder to be sure about. Mankind has been constantly revising its opinions of nature for millennia, and I don't suppose we're going to stop doing that anytime soon.

So far the problems that have been encountered with quantum cryptography are related to the physical properties of the detectors and photon generators (it turned out that single photons weren't always on their own...). No great surprises there - matter does not always behave as we tell it to! This latest problem is just another instance of our misunderstanding the physical properties of one electro-optic component in the system. I doubt that one can ever prove analytically that the components are designed and implemented correctly. All one can ever say is that N tests have shown them to work properly, but N can never be a truly large number. And should one test each and every photon detector, or just a sample of the production run?

But what about entanglement itself, and the impossibility of messing with it? There are several bunches of physicists who are questioning whether this is in fact correct or not. It looks like the rule that you can't measure the state of an entangled photon without affecting the state is more of an assumption than a proven fact. It's easy to say that it is hard to make such measurements, but to the best of my knowledge no one has quite yet been able to completely rule it out. Some very elegant experiments are being planned by academics to explore this. Some have already been done with electrons which showed that you can 'sniff' their quantum state, repair the damage done to the state, and repeat until you know everything. Not good news so far, except that quantum cryptography uses photons.

My point is that all an experimentalist can say is that their particular experimental design could or could not measure states without disturbing them, but that says nothing about someone else's experiment. Saying "I can't do it" doesn't prove that no one else can. Yet for quantum cryptography to be guaranteed you have to prove the rule. As I said above, some results are already known for experiments with electrons which would suggest the issue is more one of experimental design, not hard physical facts. So where would quantum cryptography be if someone successfully designed and performed the right experiment? It is not guaranteed that they won't be able to do so. Certainly, if someone *does* manage to do it (which would be impressive because it would mean our quantum model of the world is wrong; Nobel prize in the post) quantum cryptography would be finished.

And it's worth pointing out that quantum cryptography is in fact ordinary symmetric cryptography that relies on a physical trick to securely exchange the key. That still doesn't stop someone getting the design and implementation of the actual encryption/decryption algorithm wrong.

9
0
bazza
Silver badge
FAIL

Dead in the water

So it seems that the limit on the security of quantum cryptography is nothing to do with the entanglement of photons, but is wholly dependent on the electronic behaviour of the detectors used to test the integrity of that entanglement. This trick has been possible because of a loophole that no-one had spotted previously. OK, so they'll plug this loophole, but who says there won't be more? That sounds like something that you can never be completely sure about.

So what exactly is the point of quantum cryptography then?

2
1

SpaceX goes to court as US rocket wars begin

bazza
Silver badge
Pint

Better Fix

"...just need a few more good lurches".

0
0

Oracle seeks 'billions' with Google Android suit

bazza
Silver badge
Thumb Down

@AC, re: It's a shame

>Ellison is a nut job along with steve jobs

Er, are you saying that Google aren't? Oh boy, they've hoodwinked you well! The *only* reason Google structured Android the way it did was to create a closed software ecosystem. Java-ish, but not Java enough to be able to run the apps elsewhere. That app lock-in just encourages people to use Google services, for which Google get ad money.

Google have taken a gamble on bending someone else's intellectual property to suit their own money making scheme, and it may yet backfire quite spectacularly. They dress it up as open source "from the very bottom of their heart", but that just disguises their corporate profit driven strategy. Google's trick is that most people don't see where the money is coming from. Apple's trick is that despite the obviously high prices people don't seem to care. All companies that have shareholders are obliged to take steps to increase profits, and we shouldn't be surprised to find that some of them are quite good at doing so without a blatant flow of cash.

The closest I've seen to a large company properly donating to the open source world is IBM, and Sun too in the good ol' days. IBM have put $billions of effort into Linux, from which everyone has benefitted. They make money out of it through server and services sales, but otherwise the rest of us use their contributions without a penny heading IBM's way. Sun developed DTrace and ZFS and gave them away under their own licence. They haven't cropped up as such in the Linux world because GPL2 isn't compatible with the licence Sun wrote. You'd have to be very cynical indeed to blame Sun for that! They have been picked up by FreeBSD though. I'm sure there are other good examples too.

Google have open sourced quite a lot, but they're a bit tardy about it with Android, and everything they've done is clearly aimed at capturing more of the search and on-line advertising market. They're not especially good at it though. You think Android is first class; all I see is version fragmentation, unfixed bugs, a heavy steer to doing everything through Google's websites, and yet another app ecosystem that makes it difficult to port apps to another platform. Crap. Look at the hounding HTC got just recently when they said that they wouldn't put an already out-of-date Android on HTC Desires. A sign of happy Google customers? Hardly.

9
8
bazza
Silver badge

@Matt Bucknall

It's an interesting thought. It's even more interesting to wonder why Google went for Java in the first place.

Suppose the Google design requirements for Android went something like this;

1) cheap to sling together

2) app development not in C/C++, but in a pervasive and slightly trendy language

3) closed eco system - Android apps run only on Android

The answer to 1) is Linux - they could rip that off as much as they like. Android has clearly been slung together with not much thought given to updates, quality, security, etc. Java would have been a good answer to 2) but 3) gets in the way. Solution - bend Java a bit by using Dalvik, et voila! And it was cheap as chips too - they didn't have to grow a whole ecosystem from the ground up.

Only trouble is Dalvik might not turn out to be cheap at all, and might prove very expensive.

So what do they migrate to? Native, with/without something like Qt? Javascript? I suspect that for most developers the former would be yeurk.

2
2

Time to say goodbye to Risc / Itanium Unix?

bazza
Silver badge

Portability?

>The _BIG_ advantage of Linux is portability. Source code

>written on X86 should compile and run happily on

>MIPS/ARM/Power/Big Iron/Itanic/Whatever comes along.

Really? Maybe, provided you've got all the right libraries installed, the right versions of those libraries, the right GCC setup, and that your distribution's fs layout is along the same lines as the one used by the software developer, etc. etc. And then you may also have to worry about hardware architectural issues such as endianness. And then you have to wonder whether the software you have just compiled is actually running as the writer intended, or is there a need for some thorough testing?
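
Endianness is a favourite for catching out 'portable' code. A toy example - it compiles and runs everywhere, but what it prints depends entirely on the machine underneath:

#include <cstdio>
#include <cstdint>

int main()
{
    // Reinterpreting a 32-bit value byte by byte: prints 78 56 34 12 on a
    // little-endian x86 box, 12 34 56 78 on a big-endian POWER or older MIPS
    // box, so any code that serialises integers this way behaves differently
    // on different architectures.
    uint32_t v = 0x12345678;
    const unsigned char* p = reinterpret_cast<const unsigned char*>(&v);
    std::printf("%02x %02x %02x %02x\n", p[0], p[1], p[2], p[3]);
    return 0;
}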

The idea of distributing a source code tarball and then expecting ./configure, etc. to work first time for everyone on every platform is crazy. Pre compiled packages are a joke in the Linux world too; deb or rpm? Why is there more than one way of doing things? There is no overall benefit to be gained.

It is asking a lot of a software developer to maintain up-to-date rpms, debs and tarballs for each version of each linux distribution on each platform. Quite understandably they don't do it. If we're lucky the distribution builders do that for them.

0
0
bazza
Silver badge

@Steven Knox

You're right to suggest that some sort of performance metric should be calculated for a candidate IT solution, but you can't tell everyone what their metrics should be.

Google apparently use a metric of searches per Watt. Sensible - searches are their business, energy is their highest cost. A banking system is more likely to be measured in terms of transactions per Watt-second; banking systems are sort of real time because there is an expectation of performance, but energy costs will be a factor too. But ultimately it is for the individual business to decide what is important to them. For example a bank somewhere cold might not care about cooling costs!

I think that it is safe to conclude from IBM's sales figures that a fair proportion of businesses are analysing the performance metrics of x86, RISC, etc. and are deciding that a mainframe is the way ahead. IBM sell so much kit that not all their customers can be wrong!

0
0

Google pits C++ against Java, Scala, and Go

bazza
Silver badge

@ Destroy all monsters; Less of the little one, more of the old one

My whole point is that there's nothing really new to SCALA's concurrency models. Both the Actor and CSP concurrency models date back to the 1970's. Pretty much all that fundamentally needs to be said about them was written back then. Modern interpretations have updated them for today's world (programmers have got used to objects), but the fundamentals are still as was.

[As an aside I contend that a Communicating Sequential Process is as much an 'object' as any Java class. It is encapsulated in that its data is (or at least should be) private. It has public interfaces, it's just that the interface is a messaging specification rather than callable methods. And so on].
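
For anyone who has only met the shared-memory-and-locks model, here's a throwaway C++ sketch of the idea (nothing to do with Scala or Occam themselves, buffered rather than Occam's strictly synchronous rendezvous, and with all error handling left out): the 'process' owns its state, and the only way in or out is a message on a channel.

#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>

// A toy blocking message channel.
template <typename T>
class Channel
{
public:
    void send(T value)
    {
        std::lock_guard<std::mutex> lock(m_);
        q_.push(std::move(value));
        cv_.notify_one();
    }
    T receive()
    {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !q_.empty(); });
        T value = std::move(q_.front());
        q_.pop();
        return value;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<T> q_;
};

int main()
{
    Channel<int> in, out;
    // The 'process': its running total is private to its own thread and is
    // only ever reached through the messaging interface.
    std::thread accumulator([&] {
        int total = 0;
        for (;;)
        {
            int v = in.receive();
            if (v < 0) break;             // a negative value means 'shut down'
            total += v;
            out.send(total);
        }
    });
    for (int i = 1; i <= 3; ++i)
    {
        in.send(i);
        std::printf("running total: %d\n", out.receive());
    }
    in.send(-1);
    accumulator.join();
    return 0;
}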

No one in their right mind would choose to develop a programme as a set of concurrent processes or threads. It's hard, no matter what language assistance you get. The only reason to do so is if you need the performance.

CSP encouraged the development of the Transputer and Occam. They were both briefly fashionable from the late 80s to the very early 90s, when the semiconductor industry had hit a MHz dead end. A miracle really, given that their dev tools were diabolically bad even by the standards of the day. There was a lot of muttering about parallel processing being the way of the future, and more than a few programmers' brows were mightily furrowed.

Then Intel did the 66MHz 486, and whooosh, multi GHz arrived in due course. Everyone could forget about parallel processing and stay sane with single threaded programmes. Hooray!

But then the GHz ran out, and the core count started going up instead. Totally unsurprisingly, all the old ideas crawl out of the woodwork and get lightly modernised. The likes of Bernard Sufrin et al do deserve credit for bringing these old ideas back to life, but I think there is a problem.

Remember, you only programme concurrent software if you have a pressing performance problem that a single core of 3GHz-ish can't satisfy. But if that's the case, does a language like SCALA (that still interposes some inevitable inefficiencies) really deliver you enough performance? If a concurrent software solution is being contemplated, perhaps you're in a situation where ultimate performance might actually be highly desirable (like avoiding building a whole new power station). Wouldn't the academic effort be more effectively spent in developing better ways to teach programmers the dark arts of low level optimisation?

1
1
bazza
Silver badge
Pint

@Ken Hagan

Thank you Ken; one's smugness was indeed primarily derived from Google implying that C/C++ programmers were superior beings...

My beef with proponents of languages like SCALA and node.js is that yes, whilst they are well developed (or on the way to being so) and offer the 'average programmer' a simpler means of writing more advanced applications, they do not deliver the highest possible performance. This is what Google has highlighted. Yet there is a need for more efficiency in data centres, large websites, etc. Lowering power consumption and improving speed are increasingly important commercial factors.

But if that's the case, why not aim for the very lowest power consumption and the very highest speed? Why not encourage programmers to up their game and actually get to grips with what's actually going on in their CPUs? Why not encourage universities to train software engineering students in the dark arts of low level programming for optimum computer performance? C++, and especially C, forces you to confront that reality and it is unpleasant, hard and nasty. But to scale as well as is humanly possible, you have to know exactly what it is you're asking a CPU+MMU to do.
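
A trivial illustration of the kind of thing I mean (the exact numbers will vary from machine to machine, but the gap is usually striking): two loops doing identical arithmetic, one walking the matrix in memory order, the other striding across it, so one is kind to the caches and TLB and the other isn't.

#include <chrono>
#include <cstddef>
#include <cstdio>
#include <vector>

int main()
{
    const int n = 4096;
    std::vector<double> m(static_cast<std::size_t>(n) * n, 1.0);

    auto time_it = [](const char* name, auto&& f)
    {
        auto t0 = std::chrono::steady_clock::now();
        double s = f();
        auto t1 = std::chrono::steady_clock::now();
        auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count();
        std::printf("%-12s sum=%.0f  %lld ms\n", name, s, static_cast<long long>(ms));
    };

    time_it("row order", [&] {                     // walks memory contiguously: cache friendly
        double s = 0.0;
        for (int r = 0; r < n; ++r)
            for (int c = 0; c < n; ++c)
                s += m[static_cast<std::size_t>(r) * n + c];
        return s;
    });

    time_it("column order", [&] {                  // stride of n doubles per step: cache hostile
        double s = 0.0;
        for (int c = 0; c < n; ++c)
            for (int r = 0; r < n; ++r)
                s += m[static_cast<std::size_t>(r) * n + c];
        return s;
    });

    return 0;
}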

From what I read the big successful web services like Google and Amazon are heavily reliant on C/C++. We do hear of Facebook, Twitter, etc. all running into scaling problems; Facebook decided to compile php (yeeuuurk!) and Twitter adopted SCALA (a half way house in my opinion). The sooner services like them adopt metrics like 'Tweets per Watt' (or whatever) the sooner they'll work out that a few well paid C++ programmers can save a very large amount off the electricity bill. Maybe they already have. For the largest outfits, 10% power saving represents $millions in bills every single year; that'd pay for quite a few C/C++ developers.

A little light thumbing through university syllabuses reveals that C/C++ isn't exactly dominating degree courses any more. It didn't when I was at university 22 years ago (they tried to teach us Modula 2; I just nodded, ignored the lectures and taught myself C. Best thing I ever did). Google's paper is a clear demonstration that the software industry needs C/C++ programmers, and universities ought to be teaching it. Java, SCALA, Javascript, node.js plus all the myriad scripting languages are easy for lazy lecturers to teach and seem custom designed to provide immediate results. However, immediate results don't necessarily add up to well engineered scalable solutions. Ask Facebook and Twitter.

7
1
bazza
Silver badge

@ Rolf Howarth; Not always...

"My favourite adage is still that CPU cycles are cheaper than developer cycles!"

Not when you're having to build your own power stations to run your data centre they're not.

http://www.wired.com/epicenter/2010/02/google-can-sell-power-like-a-utility/

1
1
