* Posts by bazza

1926 posts • joined 23 Apr 2008

Google pits C++ against Java, Scala, and Go

bazza
Silver badge
Angel

Caution - old git moan

Once upon a time C and C++ were the primary languages of choice for software development. A C/C++ programmer was an 'average' software developer because those were almost the only languages used. Now Google are saying that they're effectively superior to the 'average programmer'!

Sorry about the gap, was just enjoying a short spell of smugness.

@sT0rNG b4R3 duRiD. Concurrency in C is just fine; it's no better or worse than any other language that lets you have threads accessing global or shared memory.

I don't know about you, but I prefer to use pipes to move data between threads. That eliminates the hard part: concurrent memory access. It involves underlying memcpy()s (for that's what a pipe is in effect doing), which runs contrary to received wisdom on how to achieve high performance.

But if you consider the underlying architecture of modern processors, and the underlying activities of languages that endeavour to make it easier to have concurrency, pipes don't really rob that much performance. Indeed, by actually copying the data you can eliminate a lot of QPI / Hypertransport traffic, especially if your NUMA PC (for that's what they are these days) is not running with interleaved memory.

It scales well too. All your threads become loops with a select() (or whatever the Windows equivalent is) at the top, followed by sections of code that do different jobs depending on what's turned up in the input pipes. However, when your app gets too big for the machine, it's easy to turn pipes into sockets, threads into processes, and run them on separate machines. Congratulations, you now have a distributed app! And you've not really changed any of the fundamentals of your source code. I normally end up writing a library that abstracts both pipes and sockets into 'channels'.
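A minimal sketch in C of what one of those thread loops looks like, assuming POSIX pipes and select(); the two descriptors and the processing stubs are purely illustrative:

#include <sys/select.h>
#include <sys/types.h>
#include <unistd.h>

/* One worker thread's main loop: block in select() on the read ends of its
   input pipes, then act on whichever 'channel' has data waiting. */
static void worker_loop(int cmd_fd, int data_fd)
{
    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(cmd_fd, &rfds);
        FD_SET(data_fd, &rfds);
        int maxfd = (cmd_fd > data_fd ? cmd_fd : data_fd) + 1;

        if (select(maxfd, &rfds, NULL, NULL, NULL) < 0)
            break;                        /* error or interrupted: give up */

        char buf[4096];
        if (FD_ISSET(cmd_fd, &rfds)) {
            ssize_t n = read(cmd_fd, buf, sizeof buf);
            if (n <= 0) break;            /* writer closed the pipe: shut down */
            /* ... act on the command ... */
        }
        if (FD_ISSET(data_fd, &rfds)) {
            ssize_t n = read(data_fd, buf, sizeof buf);
            if (n > 0) { /* ... process the block of data ... */ }
        }
    }
}

Swap the pipe descriptors for socket descriptors and the loop itself doesn't change, which is what makes the pipes-to-sockets, threads-to-processes migration so painless.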

Libraries like OpenMPI do a pretty good job of wrapping that sort of thing up into a quite sophisticated API that allows you to write quite impressive distributed apps. It's what the supercomputer people use, and they know all about that sort of problem with their 10,000+ CPU machines. It's pretty heavyweight.

If you're really interested, take a look at

http://en.wikipedia.org/wiki/Communicating_sequential_processes

and discover just how old some of these ideas are, and realise that there's nothing fundamentally new about languages like node.js, Scala, etc. The proponents of these languages who like to proclaim their inventions haven't really done their research properly. CSP was in effect the founding rationale behind the Transputer and Occam. And none of these languages do the really hard part for you anyway: working out how a task can be broken down into separate threads in the first place. That does need the mind of a superior being.

15
1

Microsoft unveils Windows Phone 7 8

bazza
Silver badge
Alert

@mraak Gnome 3? iOS?

It's worth mentioning that neither of these paid much attention to Gnome 2 or OSX. For the former that's quite important - how many Linux tablets are there in comparison to the number of actively used Linux desktops? Not many. For the latter it seems unimportant; Macs still run Mac OSX. But if Apple decide that, actually, people should use MacBooks and iMacs the iOS way too (and really the desktop metaphor has been one big horrible mistake that's costing Steve Jobs some money, and whose stupid idea was it anyway?) there may be similar gnashing of teeth. [But not too much, because fanbois are fawning, cult crazed pillocks for whom St Jobs can do no wrong. Oops, did that come out aloud?].

The thing that worries me is that desktop machines may be deprived of the 'desktop metaphor'. Imagine if iOS's 'tablet metaphor' became dominant. How would we look at two applications' windows at once?

There seems to be an unseemly rush for tablet friendliness in operating systems. Presumably MS, Apple and Gnome are chasing commercial success / popularity (delete as appropriate), for suddenly tablets are where it's at for some reason or other. But those of us who actually have to use computers for work may be cut out of it. Boo and hiss. For instance, imagine trying to use a CAD package to make up a drawing from a sketch in a customer's email if you can't place the CAD application's window next to the email's window.

Tablet friendliness in OSes is an indicator of a general trend in computing that should be becoming quite worrying for a wide range of professionals. Undoubtedly the commercial drive is towards battery powered small portable devices. Apple really has made billions out of that market, and everyone wants a major slice of that pie. Apple's commercial success is a very powerful indicator that the majority of people are reasonably happy with a machine that browses, emails, can do iPlayer (there's an app for that...) and YouTube, and not a lot else. Most corporate users can get by quite happily with a tiny little ol' PC just about capable of running Office.

But there are many professions out there that require a decent amount of computing grunt and a couple of large screens. For example, DVD compilers, graphic artists, large scale application developers, scientists and engineers, CAD, etc. etc. Gamers count too in my argument. Trouble is, whilst there are plenty of diverse professions requiring computing grunt, there aren't that many professionals doing them. They don't represent a significant portion of the overall purchasing population, so they don't figure highly in the strategic planning of corporations like Apple, MS, Intel, AMD.

So what do we see? ARM suddenly threatening to take over the world; tabletiness creeping into OSes; R&D money being spent on battery powered devices with small screens and no keyboards; high prices for machines (especially laptops) that have decent screens and high performance.

I've no idea how far the trend away from cheap high performance computing will go. To a large extent the market for high performance desktop users is heavily subsidised by the massive server market. Intel and AMD have to produce high performance chips for servers, so it's not very expensive to spin those out for the desktop users too. But Intel and AMD may wind up losing a big share of the server market to ARM as well; it's a distinct possibility. That would drive the prices for big powerful chips right up, and certainly dull AMD and Intel's enthusiasm for spending billions of dollars on developing better ones.

All this could make life much more expensive and under resourced for the high performance desktop user. I'm not looking forward to it!

Just in case anyone thinks that it will never come to this, think again. Commercial pressures do mean that these large companies are not at all charitable when it comes to niche (but still largish) market segments. If Intel/AMD/whoever decides at some distant point in the future that there's only a couple of billion in revenue to be made from designing a new 16 core 5 GHz chip but tens of billions from 2 core 1 GHz chips running at 0.05W, they will make the latter. They wouldn't be able to justify making the faster chip to their shareholders. Worse than that, they'd likely get sued by their shareholders if they chose otherwise. The question then would be will they still manufacture their fastest design just for the sake of the niche high performance market? Maybe, but it's not guaranteed, and it likely wouldn't come cheap.

3
3

Danish embassy issues MARMITE WAFFLE

bazza
Silver badge

<Sharks with frickin lasers>Title

Just fishing to see if I can discover another icon. Oh, and down with Denmark, the marmite menacers.

0
0

Brit expats aghast as Denmark bans Marmite

bazza
Silver badge

Piat d'or?

Have seen it in France, and far enough away from the Channel ports to probably not be aimed at the booze cruisers.

0
0
bazza
Silver badge

It's a good idea

Made sense to use diesel fuel. There's a lot of that on ships!

0
0

Intel rewrites 'inadequate' roadmap, 'reinvents' PC

bazza
Silver badge

@nyelvmark. Eh?

>ARM is not a processor

Where have you been during the smart phone revolution? Things like Android phones, iPads, etc. have shown that whilst an ARM might not be the fastest chip out there, it's certainly plenty fast enough for browsing, email and some simple amusements, which is all that most people want to do. The operative word there is 'most'. It shows where the majority of the market is. It shows where the money is to be made. Companies are interested in making money, end of. Any bragging rights over having the fastest CPU are merely secondary to the goal of making money.

So clearly compute speed is not as big a marketing advantage as all that. The features that allow one product to distinguish itself from others are power consumption and size. And that's where ARM SoCs come in streets ahead of Intel.

Intel at last seem to have realised this and have been caught on the hop by the various ARM SoC manufacturers and decisions by Microsoft and Apple to target ARM instead of / as well as Intel. So they're responding with their own x86 SoC plans, and will rely on their advantage in silicon processing to be competitive. And they may become very competitive, but only whilst everyone else works out how to match 22nm and 14nm.

It's a mighty big task for Intel. They have to completely re-invent how to implement an x86 core, re-imagine memory and peripheral buses, choose a set of peripherals to put on the SoC die, the lot. There's not really anything about current Intel chips that can survive if they're to approach the power levels of ARM SoCs.

Also a lot of the perceived performance of an ARM SoC actually comes from the little hardware accelerators that are on board for video and audio codecs, etc. There's a lot of established software out there to use all these little extras, and the pressure to re-use those accelerators on an x86 SoC must be quite high. So there's a risk that an x86 SoC will be little more than a clone of existing ARM SoCs, except for swapping the 32,000ish transistors of the ARM core for the millions needed for an x86.

And therein lies the trouble: the core. The x86 instruction set has all sorts of old fashioned modes and complexity. To make x86 half decent in terms of performance Intel have relied on complicated pipelines, large caches, etc. These are things that ARMs can get away with not having, at least to a large extent. So can Intel simplify an x86 core so as to be able to make the necessary power savings whilst retaining enough of the performance?

The 8086 had 20,000ish active transistors, but was only 16bit and lacked all of the things we're accustomed to in 32bit x86 modernity. Yet Intel have to squeeze something approaching today's x86 into little more than the transistor count of an 8086! I don't think that they can do that without changing the instruction set, and then it won't be x86 anymore. They'll have to gut the instruction set of things like SSE anyway and rely on hardware accelerators instead, just like ARM SoCs. If Intel's squeezing is unsuccessful and they still need a few million transistors, then as soon as someone does a 14nm ARM SoC, Intel are left with a more expensive and power hungry product.

The scary thing for Intel is that the data centre operators are waking up to their need to cut power bills too. For the big operators the biggest bill is power. So they should be asking themselves how many data centre applications actually need large amounts of compute power per user? Hardly any. Clearly there's another way to slice the data centre workload beyond massive virtualisation. If some clever operator shows a power saving by having lots of ARMs instead of a few big x86 chips, that could be game over for Intel in the server market.

In a way it's a shame. Compute performance for the masses is increasingly being delivered by fixed task hardware accelerators. Those few of us (e.g. serious PC gamers, the supercomputer crowd, etc) who do actually care about high speed single thread general purpose compute performance may become increasingly neglected. It's too small a niche for anyone to spend the billions needed for the next chip.

4
0

Linux kernel runs inside web browser

bazza
Silver badge
Pint

@David Hicks, absurdity

OK so it doesn't work on an iSomething yet. But how long before we see Steve Jobs start trashing Javascript?

The language is only going to get faster on Androids, Blackberries, etc. and as it does so the opportunities for things like this to become more serious and more capable will only increase.

Now it would be an absurd way to run whatever software you like on a phone. You'd need a pretty good network connection for storage (I'm presuming that Javascript can't store data locally on an iPhone). The battery consumption is going to be terrible in comparison to running an equivalent native application. But with such restrictive practices emanating from His Jobness it is quite possible that absurdity will not be such a high barrier after all.

3
5

Intel: Windows on ARM won't run 'legacy apps'

bazza
Silver badge

The New Legacy?

It's quite possible that MS will manage to arrange things so that simply recompiling the source code will produce a working ARM version of an existing x86 application. It's different from the Mac's migration from PPC to x86, where there was an endianness change.

But with x86 to ARM there isn't, and that makes a pretty big difference to the porting task. All MS really need to do is to ensure that C structures (or the equivalent for your chosen language) are packed in memory in the same way and that's quite simple to achieve. Sure, there'll be testing to be done, but I'd put a whole £0.05 on that testing being confirmatory rather than fault finding, provided MS get it right.
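By way of illustration, a compile-time check of the sort that makes the 'same packing on both builds' guarantee concrete (the struct, its expected offsets and the C11 static_assert usage are purely illustrative, not anything MS have shown):

#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* A record shared between the x86 build and the ARM build. Fixed-width types
   plus compile-time checks catch any difference in padding or size. */
struct record {
    uint32_t id;
    uint16_t flags;
    uint8_t  type;
    uint8_t  spare;
    uint64_t timestamp;
};

static_assert(offsetof(struct record, timestamp) == 8, "padding differs between builds");
static_assert(sizeof(struct record) == 16, "struct size differs between builds");

If either assertion fires on one target but not the other, you've found a porting bug before running a single test.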

MS have already shown some good evidence for it really being that simple. They showed Office 10 printing to an Epson printer. I don't know about Office 10 (.NET?), but the printer driver was simply recompiled and just worked. If a recompiled driver just worked OK, there's a good chance for applications too.

And of course, .NET, Java, Javascript, Python (and so on) apps would just run anyway.

There will be an emphasis on software providers actually bothering to recompile their applications. But if it really is that easy then open public beta testing will probably be an attractive way of keeping porting costs down.

1
0

HP breakthrough to hasten flash memory's demise?

bazza
Silver badge

@Michael Xion - Staff?

Who was it that said that cats don't have owners, they merely have staff to look after them? All too true!

1
0
bazza
Silver badge

@Graham Wilson - an old fogey like me?

"I remember the days of 'Rolls-Royce'-type spectrum analysers and other such world-class test and measurement equipment from HP"

Ah yes, still got some truly ancient (nearly 30 years old) HP test gear in the lab, still in regular use, still perfectly OK. Still like Agilent stuff even if it is all just PCs in a funny box with some funny hardware, an ADC and some funny software.

Once had a battle with the bean counters keen to divest themselves of equipment that had long since 'depreciated'. It was tempting to 'bin' (well, hide) them, let them buy new ones (from Agilent, probably), and then restore the old ones from their hiding place and have twice the kit.

It was difficult to get them to realise that buying new ones could never ever be cheaper than just keeping the ones we'd already got, no matter how cleverly they stacked up their capital items budgets. Won in the end.

Bean counters of the world, take heed; spending no money really is cheaper than spending some money. Message ends. Message restarts; not all 'electronics equipment' is worthless in three years' time. Message ends again. Oh, and sometimes the engineers do have good ideas as to how money should be spent.

1
0
bazza
Silver badge

You mean...

That those visionaries The Goodies might have been on the right lines with Kitten Kong?

Scary!

0
0
bazza
Silver badge

Oh how easy it is to forget

Much as I recall with annoyance that HP didn't ship W2k drivers for the Deskjet I'd bought (ain't bought HP since on principle), that they divested themselves of what is now Agilent thus discarding their soul, that they acquired then binned most of DEC's finest and ditched their own processor line too, that a once fine company reduced itself to little more than a commodity x86 box maker with an expensive line of ink and toner on the side, it's nice to see that there remains at least a spark of creativity.

If HP can pull this off then I might even cheer, and I'll probably be grateful one way or another. It will be an enormous breakthrough. If it works, how can it fail to take over from almost every non-volatile storage mechanism that mankind currently has? That's an enormous market, and it could all belong to HP in years to come.

But you have to wonder why HP's management decided over the years that all that R&D heritage and expertise wasn't worth it. Look at IBM - there's a company that's still not afraid to spend on fundamental R&D, and look at how well they do. If HP can do this with whatever's left in their R&D budget, what might they have achieved if they'd kept all that they'd once had?

Bean counters. Bastards.

9
0

Intel's Tri-Gate gamble: It's now or never

bazza
Silver badge

Intel making ARMs?

I think we'd all win then, even Intel.

0
0

Jaguar hybrid supercar gets green light

bazza
Silver badge
Paris Hilton

In this hallowed place...

...is one allowed to say hhhhhhhhhhhhhhhhottttttttttttttttttttttttttttttttttttttttttttttttttt?

I think I'd like one quite a lot.

Paris, coz it's the closest corresponding icon. Or am I too old now?

BTW, do we need a Pippa Middleton icon?

0
0

Apple reportedly plans ARM shift for laptops

bazza
Silver badge

@Ken Hagan

Well, transistor for transistor and clock for clock comparisons do count. The ARM core, even today, is still about 32,000 transistors. Intel won't tell us how many transistors there are in the x86 core (just some vague count of the number on the entire chip), but it's going to be way more than 32,000. So if you're selling a customer N mm^2 of silicon (and this is what drives the cost and power consumption), you're going to be giving them more ARM cores than x86 cores.

Then you add caches and other stuff. On x86 there is a translation unit from x86 to whatever internal RISCesque opcodes a modern x86 actually executes internally. ARMs don't need that. x86 has loads of old fashioned modes (16bit code anyone?) and addressing schemes, and all of that makes for complicated pipelines, caches, memory systems, etc. ARM is much simpler here, so fewer transistors are needed.

What ARM are demonstrating is that whilst x86s are indeed mightily powerful beasts, they're not well optimised for the jobs people actually want to do. x86s can do almost anything, but most people just want to watch some video, play some music, do a bit of web browsing and messaging. Put a low gate count core alongside some well chosen hardware accelerators and you can get a part that much more efficiently delivers what customers actually want.

That has been well known for a long time now, but the hard and fast constraints of power consumption have driven the mobile devices market to adopt something other than x86. No one can argue that the x86 instruction set and all the baggage that comes with it is more efficient than ARM, given the overwhelming opinion of almost every phone manufacturer out there.

On a professional level, needing as much computational grunt as I can get, both PowerPC and x86 have been very good for some considerable time. ARM's approach of shoving the maths bits out into a dedicated hardware coprocessor will do my professional domain no good whatsoever! It's already bad enough splitting a task out across tens of PowerPCs / x86s; I don't want to have to split it out even further across hundreds of ARMs.

2
0
bazza
Silver badge

@JEDIDIAH

Yes you are correct, and indeed users of other sorts of phones don't run into performance limitations either.

What the market place is clearly showing is that most people don't want general purpose computing, at least not beyond a certain level of performance. After all, almost any old ARM these days can run a word processor, spreadsheet, web browser and email client perfectly well, and hardware accelerators are doing the rest.

Intel are clinging on to high performance for general purpose computing, and are failing to retain enough of that performance when they cut it down to size (Atom). ARM are in effect saying nuts to high performance and are focusing only on those areas of computing that the majority of people want.

Those of us who do want high performance general purpose computing are likely to be backed in to a shrinking niche that is more and more separated from mainstream computing. The high performance embedded world has been there for years - very slow updates to Freescale's PowerPC line, Intel's chips not really being contenders until quite recently and even then only by luck rather than judgement on Intel's part. It could be that the likes of nVidia and ATI become the only source of high speed maths grunt, but GPUs are currently quite limited in the sorts of large scale maths applications that work well on them and aren't pleasant or simple to exploit to their maximum potential. Who knows what the super computer people are going to do in the future.

1
0
bazza
Silver badge

Yes, but not quite

That's true if you ignore the efficiency of the instruction set and hence the number of clock cycles needed to perform a given task. x86 is terrible - not its fault, it's ancient and of its time 30 years ago, and Intel have worked wonders keeping it going this long. But the ARM instruction set is much more efficient (it is RISC after all), so clock for clock, transistor for transistor, ARM will normally outperform x86. Intel might have some advantage in floating point performance, but with codecs being run on GPUs / dedicated hardware, who really does much floating point calculation these days?

You can see some of the effects of x86 from the performance Intel have managed to extract from Atom. That is, not very much. And all for more power and less throughput than ARMs of a similar clock rate are achieving.

1
0

IBM preps Power7+ server chip rev

bazza
Silver badge

That's not how IBM operate

You're still not getting the point. IBM don't really sell POWER chips. They don't really sell computers either.

What IBM do sell is apparently quite expensive business services (which does include some hardware), and looking at their profitability you'd have to say that they're clearly value for money. IBM's silicon needs merely reflect their need to keep selling business services. If they can do that with what some might argue are old fashioned, out of date silicon processes and chip designs, then IBM will be quite happy with that. Indeed, busting a gut to build a faster chip that doesn't help sell more business services would be commercially idiotic.

Developing their own POWER chips does allow IBM to tailor the silicon to the needs of the business services that they sell. For example, ever wondered why the POWER chips have a decimal maths FPU as well as a standard FPU? Why would IBM go to all that effort when no one else ever has?

It's because for banking / financial applications the standard double precision FPUs on Intel/AMD chips are not accurate enough, so you have to do the maths a different way. For example, doing $$$billions of currency conversions needs to come out right to the last snippet of a cent, and double precision binary floating point maths doesn't get you that.

On an Intel chip you have to do that decimal maths in software (a bit like the bad old days of having an 8086 without the 8087...). It's slow and time consuming. But a POWER processor does the decimal maths for you, so it ends up being much quicker than an Intel x64. Which means for certain banking applications IBM can offer a service that's much quicker / cheaper / power efficient than someone offering a solution based on Intel processors. And so far that 'niche' market is big enough for IBM to make very impressive profits indeed.
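A trivial C illustration of the underlying problem (the figures are only there to show the drift, not taken from any real banking code):

#include <stdio.h>

int main(void)
{
    /* 0.10 has no exact binary representation, so repeated additions drift. */
    double total = 0.0;
    for (int i = 0; i < 1000000; i++)
        total += 0.10;                /* add ten cents a million times */

    printf("%.10f\n", total);         /* close to, but not exactly, 100000 */
    printf("%.20f\n", 0.10);          /* shows the representation error itself */
    return 0;
}

Decimal floating point (or fixed point integer arithmetic) avoids that drift; POWER just happens to do the decimal version in hardware.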

Basically there is a level of sophistication to IBM's business model that escapes most people's attention, and is completely different to Intel's. You just can't read anything of consequence into comparisons of IBM and Intel hardware. You can compare IBM's and HP's business service offerings, and I'd say that HP (who just happen to base theirs on x64/Itanium) don't appear to be as good. If IBM ever decide that they're better off with Intel or anything else, they'd drop POWER quicker than you can blink.

So that's IBM's cool headed, commercially realistic side. Then they go crazy and do things like the IBM PC (a long time ago, but still crazy considering what IBM's core business was), or the CELL processor. The CELL processor in particular promised the world tremendous compute power, got used in the PS3 and had the high performance embedded market buzzing with anticipation.

And then they dropped it just like that, because it turned out that most of the rest of the world was taking far too long to learn how to use what is unarguably the most complicated and 'different' chip that anyone has produced in recent years, so they weren't selling enough to make it worth their while. Grrrrrr! Pity in a way - I think that if they had persisted then they would have cleaned up eventually, because there are aspects of the CELL which are far superior to the GPUs that have started filling the void.

IBM are a tremendous technology company, but don't often give the niche markets the things that they could. That's capitalism for you!

0
0
bazza
Silver badge

@Steve Button

Second all that.

Plus not to forget the success of PowerPC in the almost invisible embedded market, where Freescale have been earning good money in the telecommunications sector for their PowerQUICCs. And then there is the good old 8641D which, despite running at a miserable 1.3GHz, is still quicker at some sorts of useful maths than Intel chips.

0
0
bazza
Silver badge

And still going off topic even further...

I hear from distant contacts in the games industry that they're just beginning to get to grips with the PS3's CELL processor (for it is a mightily complicated beast) and discovering with joy the raw power that's available to them. I fear that Sony might just ditch the CELL processor, and that would be a great pity.

The problem they've had is that to program a CELL well you need a background in high speed real time multi-processing applications. The systems that get used for building modern military radars and the like are architecturally very similar to the CELL processor, and indeed the CELL processor has found some uses in those fields. The problem has been that not many people in the games industry have come from those highly specialised application areas, so the games industry has had to re-learn some of their tricks.

0
0

New top-secret stealth choppers used on bin Laden raid

bazza
Silver badge
Coat

Do their helicopters...

...have comfy chairs?

0
0

Intel PC hegemony facing ARMy attack

bazza
Silver badge
Welcome

Fun!

Certainly very interesting times. Just imagine the inquiry inside Intel should ARM succeed in snatching a large and damaging market share; just how did a pokey little design house from somewhere flat, cold and wet, without even a small fab to their name, manage to out-manoeuvre the mighty Intel? I would be very interested to know whether ARM's design team staff count is larger or smaller than Intel's.

If this does indeed come to pass, ARM will definitely have been a 'slow burner'. It's taken 20ish years to get this far, not exactly the fastest growth curve we've ever seen.

x86 has had a very impressively long run so far, but the fantastic growth of mobile and datacentre applications has really underlined the penalty of the x86 architecture: power consumption. Intel are trying to keep up with clever silicon manufacturing processes, but you can't escape the fact that an ARM chip implemented on the same processes is smaller, cheaper and lower power. They once had an ARM license (StrongARM / Xscale) but disposed of it and haven't managed to compete since. Big mistake?

Intel could win if they bought ARM and wiped them out or renamed the ARM instruction set as x86-lite. I'm amazed that they haven't tried to do so as yet. It would raise the mother of all Competition Commission / SEC antitrust inquiries, and I don't think that Intel would win that one.

2
0

Mozilla refuses US request to ban Firefox add-on

bazza
Silver badge
Pint

@farizzle

Cuba? Thought that they spoke Spanish there...

Obviously it has to be Britain. Apart from the climate. And the US extraditability. But for those two inadequacies (anyway made up for by the beer and cheese), it's the perfect place.

3
4

ARM jingling with cash as its chips get everywhere

bazza
Silver badge

long term

Chances are that ARM will be earning those pennies on those designs for a very long time indeed. Intel won't be earning much on 4 year old chips... ARM are clearly in it for a long time to come, and those pennies will keep adding up.

1
0

Apple component lock-ups jump 40%

bazza
Silver badge

@RichyS

The end result is the same as if they did buy the parts and stock them in a warehouse.

It's interesting that not even Apple can rely on the market lifetime of components and have to make this sort of up front commitment to secure a supply. Nintendo ran into the same problem with the Wii in the early days. Because the sales were way in excess of predictions they wanted to expand production, but couldn't because the component manufacturers themselves couldn't keep up.

Component obsolescence is an increasingly major issue for everyone, and I know some small manufacturers who now routinely make lifetime buys of everything needed to make a product (maybe not the passives like resistors, etc). For small-ish production runs it's simpler to have bonded stores of every unique component, no matter how unlikely it is to go out of production. Warehouse space can be a lot cheaper than a redesign to deal with obsolescence. It's just-in-time delivery, but with a coarser timescale. It also gives you some control over when your redesign for obsolescence takes place, rather than it being sprung on you as a surprise just because some chip suddenly becomes unavailable.

Of course, Apple are big enough that they can probably get any component that's ever been sold remanufactured. But it wouldn't be as cheap.

1
0

iPhones secretly track 'scary amount' of your movements

bazza
Silver badge

A couple of different view points

I have a couple of slightly different view points:

1) Apple must surely know that people might not want their location at all times to be logged. Sure, there may be a benefit (better battery life, smaller mobile data bill or whatever) for users with the phone doing this. But from a PR point of view surely it would be better to tell the users what's going on under the hood, maybe having an option to stop it, etc.

2) With Apple having servers that dish up the information on request in the first place, there is an interesting consequence for the network operators. The networks are traditionally shy about the exact locations of all their cell stations. A network armed with the locations of a rival's cell stations can work out all sorts of things about their rival's network capacity, operating overhead, etc. etc. That counts as priceless commercial information, allowing them to accurately undercut the rival.

So what's to stop Vodafone (for example) buying O2 iPhones and using them to get a complete map of O2's cell network and thereby deriving performance information for O2's entire cell network? Or have the network operators accepted that their competitors know everything about their networks' costs and performance?

And we do need a popcorn icon.

2
0

White iPhone 4 out by month's end

bazza
Silver badge
Alert

White...

...is so yesterday. Seen it, done it, bought the matt emulsion. Aubergine's the in colour, apparently. The 70s have returned!

Note however that the comments pages of The Reg are not likely to contain the soundest of fashion advice.

0
0

What will we do with 600MHz?

bazza
Silver badge

See?

The range of comments here shows just how little thought about "what comes next" OFCOM have had. Mesh? Perhaps. Data? Maybe. TV? In this Youtube world you must be crazy. Radio? FM will never die. No one here has come up with a credible killer application that will make everyone think "I gotta get one of those".

To make widespread use of such a broad band someone somewhere would have to come up with the ASICs and other parts to exploit it. Any kind of usage based on anything else is going to be inefficient, expensive or both. No one is going to invest in a spectrum maximising ASIC just because little old Britain has a bit of bandwidth unused around about 600MHz. There just isn't the market size to support it. The USA has a similar problem - no one does truly good CDMA based phones because only 250million Americans use it whilst the other 5 billion in the world use GSM/UMTS. The best we can hope for is that Europe too frees up the same band, and then perhaps someone might be bothered to make something of it.

The analogue->digital TV switch over has been a widely accepted fact largely because Freeview is definitely better. The BBC deserves credit for resurrecting the whole idea. So the fact that there's bandwidth at 600MHz with nothing to do is sort of immaterial - most people have already benefited.

The analogue->digital radio switch over however is surely in deep trouble. There is no apparent true benefit to the end user. There is no perceived alternative use for the bandwidth. Like 600MHz, it will be a valueless 'asset' stuck on the government books. I suspect that most countries aren't bothering to even try any more. So what on earth does OFCOM think anyone will want the old FM bands for even if they do manage to push us all on to DAB? Mobile phones? TV? Answer - nothing unless there's a large international market for the requisite ASICs and parts. If people lose their good FM service and are forced to re-equip for a worse DAB service, there will be hell to pay if no good comes from the surplus FM bands.

1
0

Penguin Computing overclocks Opterons for Wall Street

bazza
Silver badge

@David Dawson

Indeed, but I think that they already are. At least according to some of the rumours I've heard. Some outfits are using the kind of exotic gear normally associated with high speed low latency large scale signal processing. In London they're apparently all buying premises closer to the stock exchange because the cable lengths are shorter. How crazy is that!

It can only be a matter of time before governments outlaw the business. All these traders combined add up to a massive accident just waiting to happen in the blink of an eye.

1
0

Google 'clamps down' on world of Android partners

bazza
Silver badge

Oh dear

Sounds like Google are trying to push the genie back into the bottle. Too late. They're beginning to pay the price of some very poor decisions made 4 years ago. Either they leave things as they are (i.e. anarchy) or upset the manufacturers a lot. And if they do get the mythical being back into its glassy home, just what would the difference be between Android and Windows Mobile from the point of view of the manufacturers?

Well, for starters MS impose a hardware spec which makes it practical to have different manufacturers with one OS. Works for PCs, should work for mobiles. Handset manufacturers can build to that spec, and in theory MS look after everything else. Google doesn't, though they probably will (but you can smell it coming a mile off). If they do, then any handset built now will likely become unsupportable. What's the betting that Honeycomb makes it on to very few existing handsets?

4
2

Facebook HipHop serves 70% more traffic on same hardware

bazza
Silver badge

@peyton? @Bob Gateaux

GCC vs ICC - it's quite widely acknowledged that Intel's compiler does a significantly better job on Intel hardware than gcc.

I've seen bits of C compiled as a .NET app run 80 times slower than the same source code compiled by icc as a native app. Alright, it was quite maths intensive code, but even so.
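For the curious, the sort of tight maths kernel where the compiler difference shows up is nothing exotic; a sketch (the function and its use are purely illustrative), which you would time after compiling the same file with gcc -O3 and icc -O3:

#include <stddef.h>

/* A plain dot product: small, tight, and entirely at the mercy of how well
   the compiler vectorises and schedules the floating point loop. */
double dot(const double *a, const double *b, size_t n)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += a[i] * b[i];
    return sum;
}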

C# / Java may indeed be better languages than PHP, but they're by no means ideal when you need to scale to truly large setups like Google, Facebook, etc. If a C# app achieves 90% of the performance of the equivalent C++ app, that can amount to $Millions extra in hardware and electricity costs. Noble sentiments of purity of languages soon go flying out the window when faced with the real and expensive problems of building vast systems. No software engineer can successfully argue that automatic garbage collection (a convenience for lazy programmers in a hurry) is worth $Millions.

By developing HipHop Facebook are avoiding the decision to completely re-implement in C++, but it remains to be seen whether HipHop is a good enough 'bodge' (I use that term positively) to allow them to scale and retain the PHP base.

2
1

Mac OS X daddy quits Apple

bazza
Silver badge

Not just one person

There was talk a while back that Jon Ives (he of iDesign fame) would be leaving too. Impressive though St. Jobs is, he can't do it all himself. If too many of the important people leave then that might mean Apple have a serious recruitment problem. The share markets get all worried about Jobs' health, but really it's the other people in the company that matter most.

I don't think that anything about internal Apple politics can be reliably inferred from news such as this. People do move on all the time - fact of life. Sooner or later we all get itchy feet and want a change, and there's nothing special about the fruit themed toy factory that says otherwise.

1
0

Nvidia rushes to ARMs

bazza
Silver badge

...But not quite

Well OK, it depends on your point of view. GPU+ARM = something quite exciting, provided that all your heavy duty sums are GPU-able. If you have a set of sums that are a mixture of GPU friendly and GPU unfriendly operations, then this gets you nowhere; you'd still need some separate, meaty CPU.

ARM's not bad, but it is first and foremost a design that is just about quick enough to support small-ish compute jobs (OSX, Android). Any kind of compute performance in ARM tends to be hardware acceleration of standard things (codec work?).

With my high speed sums hat on I would like to have seen a PowerPC core in there instead of an ARM. It's a bigger CPU so it can do more, it isn't an x86, and it can support more workloads in its own right (for example Altivec is still pretty good, no matter what Intel might have you think).

But alas there's not enough of a market to support that. Everything is mobile these days, so ARM it is.

0
1

EA coughs to Dragon Age II user ban 'mistake'

bazza
Silver badge

software licenses

They can try to do it by putting conditions in the software license. However, that might be classed as an unfair contract (at least here in the UK). A complaint to the local trading standards officer or a test case in the courts would probably secure a refund and possibly compensation.

The tricky bit is that compulsory membership does not mean you actually have to contribute to the forum. One is still free to contribute elsewhere.

On the whole companies would be better off not making forum membership compulsory because their actions to ban users are vulnerable to judicial review probably leading to compensation payouts.

0
0

Calxeda boasts of 5 watt ARM server node

bazza
Silver badge

@dr jim, assumptions in your post

Firstly, you are assuming that everybody's workload consists of VMs. That is not universally the case. Plenty of workloads out there would fully saturate the CPU power of these A9s. That's when the performance per watt is heavily in ARM's favour. Secondly, it would be odd if this proposed system relied on external switches. I anticipate something along the lines of VXS or OpenVPX, i.e. internal interconnect switching.

And on the topic of virtualisation, I suggest you mug up on the ARM A15, which does support virtualisation, and it's only a matter of time before that gets OS support (if it's not already there). Then there will be nothing left for Intel to brag about except for outright performance per thread. But the supercomputer boys seem to prefer AMD for that. And on that topic, I think AMD should license ARM cores sooner rather than later.

2
0
bazza
Silver badge

Olé!

Intel are behaving like an old matador who is gradually realising that they've used up their last trick, and that the new bulls (with ARM banners fluttering from their horns) are turning out to be unexpectedly hard to deceive.

Intel really are running out of time, and if they don't do something dramatic very quickly they might suddenly find themselves with a much reduced server market. Power consumption is rapidly becoming hugely important in the server world, and so far Atom doesn't appear to make the grade. ARM based chips are clearly quite capable - the performance of mobile devices is ample demonstration of that - so why stick to x86?

2
0
bazza
Silver badge

@Arnold Lieberman

GPGPUs will likely be a GPU (i.e. something good at large scale parallel floating point calculations) with some sort of CPU as a front door.

ARM

low power consumption, not hugely quick (surprisingly good though), but easy to have a lot of them

GPGPU

high power consumption, very quick for Large Sums (but not worth it unless you can exploit the parallelism of the GPU), and not likely to be as simple to have a lot of them

Well suited to their target workloads, but those workloads are very different.

1
0

Apple handcuffs 'open' web apps on iPhone home screen

bazza
Silver badge

surely deliberate?

How can this be anything but deliberate? Surely having two Javascript engines and choosing between them based on a URL starting file:// or http:// is more work than not?

I can't imagine what sort of architectural mess must underlie iOS that would amount to this genuinely being a bug.

BTW I hear that iOS clocks have gone wrong again on the change to daylight saving time in the US. What's going on there?

10
3

Windows 7 customers hit by service pack 1 install 'fatal error' flaws

bazza
Silver badge

@bill 36

I've had perfectly ordinary Ubuntu installations self destruct after installing perfectly ordinary automatic updates. Took some serious hacking to sort out, and was way beyond what an ordinary user could be expected to do. Not very impressed.

MS have had, in my personal experience, fewer update problems.

Considering how important the ability to automatically update is to the adoption rates of an operating system, it's amazing how badly they go. An OS with a reliable update mechanism will gain a reputation for continued improvement, and things can only get better. An OS with a reputation for not getting successfully updated will be deeply unpopular because users know they'll get left high and dry. This never used to be a problem in consumer land - bugs meant crashes that customers just fixed by turning off/on. But as functionality increases the power cycle fix becomes less acceptable.

MS will get this one right sooner or later; they have generally done so in the past for mainstream windows. MS also have to get WP7 updates right too. Apple are just b*****ds because they use updates as a mechanism to piss off customers with older kit by leaving bits out for no sound technical reason, and even then they don't always get them right. Android is an update joke.

6
7

Sixth Japanese nuclear reactor loses cooling

bazza
Silver badge

@C 2

"There is no energy *shortage*, only an artificially created crises"

I like that line a lot. You are quite right of course. Energy production is riven by vested interests, human ineptitude and profit motives.

Given that a reliable large scale source of clean electricity would make most of the world's political problems go away, one does wonder why the politicians don't put more money into getting it. Of course, they only look 4 to 5 years into the future.

One place I am permanently puzzled by is France. Forty to fifty years ago they decided to do nuclear power, high speed rail and space. They have the cheapest and cleanest electricity in Europe (mostly nuclear, no major accidents so far), they have a high speed rail network that is marvellous to use, and they have the world's most successful satellite launching business. So how on earth did all that survive the intervening 40 years without becoming a victim of political infighting and economic downturn? Never mind the stereotypical views of the French, they have clearly got the ability to make these grandiose projects actually work. ITER is in France, and maybe that is a good thing.

2
0
bazza
Silver badge

@C 2

"Just an FYI people, if you want to know exactly how viable solar is,"

Ah, I've got to disagree with you there. Yes, technically a solar based energy generation system might solve the world's energy problem, but geo-politically it is not an option. Currently the world can get its oil/gas from cloudy and sunny places. Solar based energy production rules out the cloudy places. The bulk of the reliably sunny places are politically problematic. No country in Europe would want to be beholden to, for example, Libya. And anyway, if a country suddenly gains limitless solar energy it is quite likely that some of that would be used to irrigate their desert, thus cooling it down. Then the clouds and rain come back.

Likewise not everyone has geothermal options.

That's why thorium / fusion might be real winners. Clears out the political distortions of world energy supply quite nicely.

1
1
bazza
Silver badge

Yes indeed!

Quite a few engineers have suggested that in the past, and there are many things to recommend it.

Uranium / plutonium fission was picked in the early days to generate plutonium for bombs, and to kick off the nuclear fuel cycle (which is what Sellafield was originally all about). Going back to first principles and choosing not to make bombs from the waste products means thorium is surely very viable.

I know that the EU got asked by CERN to fund thorium reactor research. The EU, bless 'em, pushed the proposal over to some Frenchie for evaluation. His view was meh, won't work, so it wasn't funded. Turns out he worked for the French nuclear industry with a uranium PWR design to flog. Conflict of interest or what.

India is putting some work into it too. India has HUGE reserves of thorium...

Getting back to the situation in Japan, things are pretty bad. But it is pretty impressive how so far, despite huge levels of abuse thrown at these things, the actual vessels themselves don't seem to have been breached. Let's hope it stays that way.

There are going to be some interesting design reviews coming out of this. One is surely why were all the emergency cooling systems sufficiently low down to be affected by the tsunami (I am assuming that inundation is the root problem here). Put it all on the roof, out of the water's way. Another is that these problems are seemingly arising because of insufficient electricity to run the cooling gear. One does wonder what the situation would be if they had just kept the reactors running. Of course I don't know if that was even a viable option after the tsunami struck, and it might certainly have been a gamble after the quake shook it all about in the first place.

Also I wonder how well the staff themselves are coping. They must be under a lot of stress, and people don't often make the right decision under such circumstances. I wonder how long it took to transition from an attitude of 'can we save the reactor intact' to 'can we just stop a containment breach no matter what the cost'? No one wants to be the one to make that call, especially when such a transition inevitably means an acknowledgement of some sort of failure, some deviation from the acceptable norm. It is especially difficult to make such an admission in Japanese society.

7
0
bazza
Silver badge

A good question

But it's one that the world has already answered with a resounding YES, and the world will want more of it tomorrow. Electric cars are the really stupid idea at the moment because they just move the problem elsewhere (and are arguably less efficient when you take electricity distribution losses into account). If you want to run the world's cars on electricity alone, an awful lot of power stations are going to have to be built. They can't be gas/oil/coal. Renewables won't do it either.

It is worth asking what the truly essential uses of electricity are. Hospitals? Not much argument there. Schools? Well, they managed 100 years ago without. Homes? We all like our central heating and stoves and fridges, but really everything else is not essential to life. Factories? Ah, now we're getting geo-economic / geo-political with that one.

Now for something controversial, especially in this forum. Large scale computerisation does not in my view actually make things better. It generally means that crap people/businesses can get away with being crap because they've got a computer system. Crap clerks don't actually need to know anything anymore, they read out what's on the screen ("Computer says no"). Crap retailers don't need to have a nose for their markets because the market analytics plot all sorts of graphs for them. Crap investors tap huge data repositories and have vast market models just so that they don't have to have an intuition or gut instinct for business. And so on. All of that takes a lot of electricity to run, and I suspect that it's expanding at a phenomenal rate. Domestic consumption of electricity is probably actually falling as fridges, TVs, etc. become ever more efficient. Commercial consumption outside of manufacturing is probably the thing that's driving demand for electricity.

Can't be bothered to google for relevant data. Anyone got any supporting facts?

1
1

Japanese earthquake sparks nuclear emergency

bazza
Silver badge

On the other hand...

Looks significantly more serious now...

0
0
bazza
Silver badge

@45rpm

Oh dear, another knee jerker who won't understand risk.

Amount of radioactive material released so far in this event:

Nil

Amount of radioactive material released by your average coal fired power station:

Tens of tons/year.

Coal is often about 1 part per million uranium, and a half decent power station can get through 35 million tons of it in a year. 1ppm x 35x10^6 tons = 35 tons. That goes straight up the chimney of course! So which one do you want to live within 100 miles of? I've not seen anti-nuclear protesters outside a coal fired station before. CO2 protesters aplenty though!

UNSCEAR is pretty clear that Chernobyl has had more of a psychological impact than a statistical change in death rates. Even including Chernobyl, nuclear has had significantly less real impact on the environment than any other large scale energy generation scheme that mankind has dreamt up.

Assuming you are strongly in favour of electric vehicles / trains, just where do you propose the electricity for that comes from without burning up a lot of coal, oil and gas in powerstations? Wind turbines and solar panels will not be the answer on a calm cloudy day... With current schemes seeking to supply only small percentages of current demand, how much of the landscape would have to be covered up with turbines and PV panels to make all the cars and trucks move too?

The only renewables scheme I've seen that makes sense is the one the Spanish are pursuing, namely solar towers with molten salt heat stores. The salt store provides a measure of guaranteed supply. Not a bad idea, provided you can distribute electricity from sunny places to cloudy places well enough.

Nuclear fusion is a much neglected strand of energy policy; more money is put into dubious renewables schemes than into ITER. Nuclear fusion, if it can be made to work, will definitely be a significant game changer.

The scientific crowd working on nuclear fusion have a phenomenal track record. Over the past 30 years of effort they have met every deadline and exceeded primary goals. JET was tremendously successful. Yet the world's governments dish out the money in a very paltry manner. The UK government alone put £150billion into the financial industry, yet ITER is projected to cost just €16billion. It would seem that to the UK a few bankers are worth ten times the technology for limitless energy.

9
2

iPad 2: Apple forced to make carrier concessions

bazza
Silver badge

@AC, "The coolest thing to do"

I can understand your point of view. Delivering the iPad concept very well is certainly what Apple is hoping to achieve (though my personal experience of their products' reliability is poor), and their commercial success thus far is certainly powerful confirmation of that.

"Not adding all those bells and whistles that frankly don't fit in the concept means more focus for making sure what it does it does better than everybody else. And that is what apple does best."

Indeed. But other companies can deliver that concept too, and they're likely to extend it to allow people to plug in USB devices, SD cards, etc as well. Apple's software superiority (I'll ignore its apparent unreliability for now) is not something that Apple will be able to maintain forever. Microsoft and Google will one day (there's an 'if' there of course) match Apple's software completeness and user friendliness. When they do so but also offer additional things like USB, Apple might have to think again about their concept.

I think that the moribund state of OSX on desktop / laptop Macs might be a sign of things to come. Having used both Win7 and OSX in various guises, I subjectively think that Win7 is certainly on a par with OSX (and superior in certain aspects - the taskbar in particular). MS have caught up, and are cheaper. I know that Apple have been busy with their iSomethings, but really. Have Apple run out of ideas for OSX?

Microsoft may be this huge great slow moving beast, and Google's Android is laden with many severe problems, and Apple can currently run rings round the pair of them. But if and when Apple run out of new ideas in the mobile arena too (are both Steve Jobs and Jon Ives on the way out?), it's likely / inevitable that others will catch up, and simply surpass Apple by adding USB, SD, etc.

Content, however, is another matter. iSomethings thrive on content. But what if that content is Facebook, or Flickr, or whatever and all those services and apps all become available on any half decent tablet (even an MS slab)? What exactly would Apple then be bringing to the party other than a cool looking slab that can't even read the SD card out of my camera?

Apple know this full well, hence their very restrictive model for deploying apps and music on to iSomethings, which serves very nicely to make it hard for developers to support someone else's platform at the same time. Current market share + restrictive practices does serve to lock developers (and hence content) into the iPlatform.

Their mistake, in my humble opinion, is to limit their current market share by cutting out those people who would quite like very simple and otherwise inconsequential things like an SD slot and maybe, just maybe, a USB interface. Even I might buy one then! Not doing so just leaves a hook for someone else to build a market share. But their commercial success thus far means that such user demand cannot raise even a tiny blip on future-scope.

0
0
bazza
Silver badge

Difficulty with cool

I think that Apple will have to start bolting on things like USB, memory card slots, etc. Everyone else will, and it is difficult to make a thin, featureless black slab 'cooler' than everyone else's thin, featureless black slabs whilst not beating them on functionality.

On top of all that, it won't be hard. The new OMAP from TI is just the sort of CPU that will get used in tablets. It has SATA, USB2, even USB3, memory card interfaces, plus shedloads of other I/O. And so do/will all the other SoC ARMs from everyone else. So it won't be hard for everyone else to add these things. Arguably it will cost Apple more because, having set itself along the road of doing its own CPUs, it will also have to 'out-ARM' the likes of TI, Marvell, Qualcomm, etc. That will be very difficult indeed.

The only thing Apple have in their favour is that they control their own OS, iOS 4.3. That means that they could offer a more reliable, well thought out software experience for users. They sort of do, but then go and spoil it with a range of pointless restrictions. Those restrictions may appeal, or at least not matter, to US and European customers, but there are surely many times more potential customers worldwide who do actually want their devices to inter-operate.

0
0

Microsoft blows Windows Phone update, again

bazza
Silver badge

@Philippe

You make a good point. My Windows XP installation is only 3.05GByte, and presumably that's littered with backups from updates, etc. etc. Slightly amazing that the WP7 update install needs 4GByte.

2
0

Android malware attacks show perils of Google openness

bazza
Silver badge

@AC, Openness

Yeah right*. Try telling that to the non-technical majority who don't know what you're on about but do worry that their bank details might have been compromised and their accounts emptied.

Openness is fine so long as there's a quick way to propagate updates. Google forgot that part completely. As Android currently stands, virus writers can have a field day because it takes far too long (if it ever happens at all) for customers' handsets to get updated.

*apologies if you were being ironic...

8
4
bazza
Silver badge
Thumb Up

@wathend

Yes, I agree completely with you. It was indeed only a matter of time. Google's naivety has been truly staggering.

Open source 'works' because anyone can review code, find bugs and issue fixes which people can adopt. By that mechanism problems are found, dealt with, and everything improves surprisingly quickly.

The bit Google forgot about was the "fix adoption" part. The likelihood of the latest Android updates actually being rolled out to users' mobiles by the networks is effectively nil. If they do roll out an update it's nearly always months behind the release date, during which time the virus writers have had a field day. And there will always be vulnerabilities in the latest version. People are buying phones, probably with security bugs in them, knowing that they will almost certainly never get fixed during the two year contract they've just signed up to (or whatever).

Updates are a necessity that Apple, Microsoft, RIM and Nokia have recognised. Microsoft's less than perfect update the other day certainly tarnished their reputation, and they need to get the next one very right indeed. Apple have the occasional update whoopsie, but then again product faults in the Apple market seem to make no difference anyway.

The reporter wrote:

"The episode demonstrates the ugly predicament confronting consumers of smartphone apps..."

and then completely failed to mention BlackBerry. RIM are becoming interesting - very much a closed shop (it's all theirs), there's the BlackBerry World App Store, and a robust reputation for security. Dismissed by many as a businessman's phone with nothing exciting at all, it is often forgotten about. Yet the Torch is getting pretty good reviews, there's quite a lot of apps for it, etc. I got one only after stumbling across it whilst shopping round. It's close to being a complete Apple alternative without Apple's restrictive zeal, but without the problems of Android and Microsoft. If you can't stand Apple then it's almost the perfect phone.

Getting back to this Android virus problem, I wonder how much trouble there's going to be for the manufacturers that have backed Android as their only option? This sort of problem could be a company killer if the world's population suddenly decides they don't want Android at all. For desktop Windows, MS had a monopoly (in effect) which bought them time to get serious about improving Windows' security. Google doesn't have that luxury - people can and probably will stop buying it just like that if it gets a bad reputation.

4
2
