* Posts by Peter Gathercole

2924 posts • joined 15 Jun 2007

Voting chaos in not-fit-for-purpose electoral system

Peter Gathercole Silver badge

Online voting

OK, so you need to authenticate, possibly we should give everybody a physical means of identifying themselves (after all, numbers and ID strings can be copied), and mandate a way of electronically reading these securely on someone's own PC.

So you are now supporting ID cards, with card readers, attached to PCs with trusted and supported operating systems with DRM built in - say Windows running either Vista or Windows 7. This is what the industry advisers engaged by any government will say. A win for Microsoft and the PC makers, don't you think?

What happens to people who can't or won't invest to do this? There will not be sufficient demand for polling stations, so would you install PCs in Post Offices or Libraries (oops, none of these left), or possibly Pubs (rapidly going the same way in small villages)? Will we have an underclass of people who can't vote because they live in the country and have limited transport options?

I really don't think this is what you want.

Open source R in commercial Revolution

Peter Gathercole Silver badge

R not 'created' in 1996, more like copied

R is an open-source re-implementation of the tool S, which was part of AT&T's software toolchest long before 1996 (Wikipedia says 1975) and was originally developed at Bell Labs.

This is acknowledged in the documentation for R.

I was using S in 1988, and it was not new then. I was interested in it because it has a number of similarities with APL (A Programming Language) that is often cited as the first interactive computing environment.

Javascript guru calls for webwide IE6 boycott

Peter Gathercole Silver badge

How much of a problem?

Although I am a long-term technical specialist, I'm (almost proudly) mostly web-ignorant. I can use the web well, but don't expect me to write any HTML or XML without a tool - hey, I'm a core UNIX specialist, not a web designer!

Looking at it with the eye of a novice then (hand-grenade time, this is flame bait), what is the problem with IE6?

I know, I've read that it does not conform to standards, its HTML implementation is poor, and web designers generally bitch about it all the time. But from a user's perspective, pages appear when asked, and it works in places where Firefox, Chrome, and Safari still don't. (I run Windows 2000 Pro in VirtualBox, and IE6 under Wine, and also have Windows 2000 Pro in a little-used partition on my laptop for those awkward sites that insist on IE as a browser, especially when they need Silverlight or WMP as a backend for media - I must update to XP to allow IE7; it is licensed for that.)

I don't want to buy Windows 7 for my laptop, which is still working, so I don't need to change it. And I'm sure that many people who are mostly infrequent users feel the same. Ubuntu does 98% of what I need on a 2GHz Pentium 4 mobile.

I would actually like to see web sites that are strictly bound to one browser, or that are PC-unfriendly enough to splatter large Flash animations across their pages, or that require a 1280x1024 minimum screen to display, blacklisted by the community for a few days. As a Linux and Firefox/Chromium user on laptops with 1024x768 screens, this would be a far better protest as far as I am concerned.

What I am also appalled at is the attitude of Giorgio Sardo, who obviously has an agenda of selling Windows and, indirectly, new PCs. Microsoft should be forced by legislation to provide modern browsers on their legacy OSes for as long as a sizable number of such systems (say, greater than 25% of the whole) are still in common use!

If IE6 stops working, then I suspect that there will be an anguished cry from tens of thousands of users, and a sudden lack of space in the Electrical section of the local municipal dump, not a large number of people installing Firefox.

Applesoft, Ogg, and the future of web video

Peter Gathercole Silver badge


I used to keep all of my ripped audio in Ogg Vorbis on my laptop. That was until I started using a high-capacity media player. The time (and space) required for transcoding from Vorbis to MP3 for large parts of my collection began to get annoying, and due to a strangeness in Amarok and the transcode plugin, I ended up with both Vorbis and MP3 copies on my laptop. This ended up confusing Amarok, so I have now switched to saving all new rips in MP3. I don't like it, but I value my time and disk space more than a principle.

Shocking, really.

Peter Gathercole Silver badge

Good point......but

Software algorithms are not industrial processes. They can be innovated by you in your office or bedroom, by Johnny when he is not in school, or by your wife if she has the skill. It's potentially low cost, and easily doable by almost anybody.

Having software patents prevents you from doing this, because if you unknowingly infringe a patent, you are not allowed to use your own innovation. Are you prepared to do the searches to make sure that that clever snippet of code that morphs your cursor when you move over an icon or window does not infringe any one of thousands of patents? And what if you want to show off your extreme cleverness to your friends - are you prepared to indemnify them against possible lawsuits?

It's not the same as the way of physically holding data on a CD, or the process of masking transistors onto silicon, or any one of the hardware related patents you hold up as examples, because as an ordinary user, you will not be in a position to produce an industrial process in the same way as you can write software.

Patents can and should protect and encourage innovation, but the whole system has been corrupted to allow large corporations to make sure that no-one else can innovate. It is possible to own a patent, and then to grant an irrevocable right of use without fee to anybody. This is what everybody is hoping that Google will do with the patents they have just acquired.

Sharing bank PINs leaves consumers at risk

Peter Gathercole Silver badge

@full number

More and more merchant receipts show only 4 of the 16 digits. It's stupid to print them all.

Peter Gathercole Silver badge

What I want to know is...

If an ATM defaults to reading the mag stripe, where is the PIN stored? Is there a one-way hash algorithm in the ATM that reads a key from the card which, together with the PIN, can be used to generate a non-reversible cryptographic signature whose authority can be checked in the ATM?
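To illustrate the one-way idea (this is a hedged sketch only - real card schemes use bank-derived PIN verification values and hardware security modules, not this exact construction, and the function names here are my own):

```python
import hashlib
import hmac

def pin_verification_value(card_key: bytes, pin: str) -> bytes:
    """Derive a one-way verification value from a per-card key and the PIN.

    The card key acts as a salt, so the same PIN on two different cards
    yields different values, and the PIN cannot be recovered from the output.
    """
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), card_key, 100_000)

def check_pin(card_key: bytes, entered_pin: str, stored_value: bytes) -> bool:
    """Verify an entered PIN against the stored one-way value."""
    candidate = pin_verification_value(card_key, entered_pin)
    # Constant-time comparison avoids leaking timing information.
    return hmac.compare_digest(candidate, stored_value)
```

The point is that the verifier only ever holds the hashed value: even someone who reads the stored value off the card or terminal cannot walk backwards to the PIN.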

I would prefer to have different numbers for the same card for ATM and Merchant Services transactions. This would be much safer than using the same number everywhere.

Peter Gathercole Silver badge

Reason for not handling card

is not so they can't read it, it is so they cannot put it through a card skimmer. There are not that many people with eidetic memories (for goodness sake, I can't remember a new phone number for more than a few seconds).

I'm fairly certain that a high enough res. camera or two would be able to capture the name, dates, long card number, and the security code on the back even if the till operative did not handle the card. This is enough to use the card for Internet transactions.

The scam used to be to skim the card, send the details to a country that does not have UK-style chip and PIN, clone the card, and use it to pay for goods in that country. And if you are also able to grab the PIN by shoulder surfing, you can use the cloned card to get cash out of a non-chip-and-PIN ATM abroad as well.

Now that all you need is the visible information from the card for card-holder-not-present transactions, the whole system is open to abuse. This is the reason why we have the Verified by Visa and the SecureWhatever-it-is for Mastercard for Internet transactions. But this is not needed for card payments over the phone, so don't do it.

The insistence of banks that the PIN must be kept private should be communicated to retailers who put their merchant devices in fixed installations in plain sight (Tesco, I'm singling you out here, but I'm sure most other supermarkets are also guilty of this). I'm certain that on most occasions I could observe, with reasonable accuracy, the PINs of the two customers ahead of me in the queue. This makes the whole system a joke.

Dell Studio 17 touchscreen notebook

Peter Gathercole Silver badge


You don't get an idea of the size of this monster until you get to the pictures of the touchscreen being used. I'm certain that, because of the size alone, this system will never appear on my laptop-replacement shortlist. I suppose I should have guessed, it having a 17" screen, but if it is so wide, why have they not made space for the missing keys?

Researchers spy on BitTorrent users in real-time

Peter Gathercole Silver badge


They use this information themselves to check that you are still authorized to use ADSL, so it is no hardship for them to log it.

Microsoft defends death of free video in IE 9

Peter Gathercole Silver badge

co.uk not clear.

Not sure I agree. Whilst computer programs are not patentable, it is still being argued whether a software technique can be regarded as an invention, and thus patented. UK and EU law appear to be at odds here.

Also, while you may have a standard, there is nothing that says a standard is unencumbered by patents. I do not believe that H.264 is either a free or an open standard in the generally accepted meaning; it is just that the license fees have been waived until 2015. This should ring alarm bells for anybody with half a brain cell.

Peter Gathercole Silver badge

Flash video

I'm fairly certain that most Flash videos are H.264-encoded with a Flash UI wrapped around them, which calls a decoder in the Flash runtime. This is one of the reasons why performance is so dire on non-Windows platforms, as Adobe show no real interest in anything other than the mainstream.

If we intend to keep the low power/cost end of the computing platforms alive (such as phones, pads and netbooks), we absolutely need the decoder part of a codec in the browser, not just language interpreters that allow you to run a decoder.

I've been playing around recently, trying to use a different backend for Flash video, specifically mplayer with the correct modules. This works great (and much faster), until you hit a site that tries to query the version of Flash in the browser plugin (like iPlayer), whereupon it falls down in a heap.

Peter Gathercole Silver badge

Can we tell where this is going yet?

This targeting of Ogg/Theora is the most blatant example of standards land-grab by patent that we have seen so far.

It would appear that Microsoft/Apple et al. are not deploying their patent IP to generate income at this time, but merely to stifle an alternative technology that might deprive them of an effective monopoly.

I say effective, because the cross-licensing that big IP holders engage in has the ability to deflect anti-monopoly legislation, because a consortium of co-operating companies is not deemed to be a monopoly under the current rules.

Of course, once they have this effective monopoly, they can then leverage it for revenue generation. We can only hope that Google is prepared to defend the codec that Theora is based on.

As pointed out, if Microsoft and Apple are successful, then it is a grim portent of what is to follow.

Ubuntu's Lucid Lynx: A (free) Mactastic experience

Peter Gathercole Silver badge

USB wireless keyboard

Plug in a normal keyboard, drop into the BIOS during startup, and turn on Legacy USB support. Then reboot to see whether Grub understands the keyboard.

Grub is a minimal OS where size is a real issue. It relies on the BIOS settings being right!

On the subject of Fort Knox: just because it is your machine does not make it a good idea to have global write access on the whole filesystem as a non-privileged user. That way lies being pwned by the first haxor who makes it through your browser with malicious JavaScript or a Flash applet (and it will happen, even though you are using Firefox on Linux). I understand your sentiment, I just don't think you understand system security. Please attend the Security 101 class.
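As a Security 101 exercise, here is a minimal sketch (Python, function name my own) of how you might audit a tree for the kind of world-writable files the poster is effectively asking for:

```python
import os
import stat

def world_writable(root: str):
    """Yield paths under `root` that any user on the system may write to."""
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.lstat(path).st_mode
            except OSError:
                continue  # vanished or unreadable; skip it
            # Skip symlinks, and sticky-bit directories like /tmp,
            # which are world-writable by design but restrict deletion.
            if stat.S_ISLNK(mode) or (stat.S_ISDIR(mode) and mode & stat.S_ISVTX):
                continue
            if mode & stat.S_IWOTH:
                yield path
```

Run it over `/` on a sanely configured system and the list should be nearly empty; every entry it prints is a file that malicious code running as *any* user could overwrite.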

Peter Gathercole Silver badge


I know that this is a trivial change, but the default background for Lucid reminds me of the early days of colour television when the tube would become magnetized leading to unpleasant blotches of colour.

It's different, I admit, but not pretty by any stretch of the imagination.

Peter Gathercole Silver badge

Troll alert! David Lawrence

I hope that this was deliberate flamebait!

The only reason I've had to do something like this in the last 5 years on Dapper or Hardy is when I have tried to get some hardware working when the vendor has not done anything to make it work under Linux themselves.

Remember, when you install a new piece of hardware on Windows, you have this nice shiny round thing with the hardware (it's called a driver CD), that the vendor has put a lot of work into to make it work in Windows. If they bothered to do the same for Linux, you would never have to touch the kernel. Try getting an HP printer working in Windows without the install diskette. It's nearly impossible if it is a printer Windows does not already know about.

For joe average, who wants to write documents, or browse the web, or even plug a printer in (unless it's a Lexmark - spit), everything they need is likely to already be in the distribution, or at least in the repository.

And claiming that you have to reboot twice is rich if you are coming from a Windows environment. Just installing a system from Windows media, plus the driver loads, will require you to boot your machine many, many more times - even for Windows 7 - than a recent Linux will. (I built a Windows 7 system over Christmas myself. Easier than XP by far - nearly as easy as the Lucid install I did on my workhorse laptop this morning!)

Go on, give it a try. Build a Windows and a Ubuntu system from scratch, and report back to the thread if you dare.

Peter Gathercole Silver badge

re. strictly for geeks...FAIL FAIL FAIL with a side order of FAIL

If there is one thing that UNIX-like OSes do *MUCH* better than MS, it is the consistent directory naming scheme.

Remember that Linux, like UNIX, is a multi-user OS, so all of *YOUR* files should be kept under *YOUR* home directory, not scattered across the *SHARED* part of the filespace. This is what makes it possible for UNIX-like systems to share their userspace across a networked filesystem, compared with the absolute CRAP of roaming profiles on networked Windows systems.

If you look under your home directory on Lucid (or most of the earlier Ubuntu releases), you will find directories called Desktop, Documents, Music, Pictures, Videos and Downloads. They are all yours, and will never be interfered with by another user logging on under a different name. If you want more, just create them in your home directory with whatever names you like (I have a local bin, a local lib and a local tmp just for me, but then I am a dyed-in-the-wool UNIX user).
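The "just create them in your home directory" step can be sketched like this (a minimal Python illustration; the directory names are just the ones I happen to use, and mode 0700 keeps them private to the owner):

```python
import os

def make_private_dirs(home: str, names=("bin", "lib", "tmp")):
    """Create per-user directories readable and writable only by their owner."""
    created = []
    for name in names:
        path = os.path.join(home, name)
        os.makedirs(path, mode=0o700, exist_ok=True)
        # makedirs' mode is filtered by the umask, so re-assert it explicitly.
        os.chmod(path, 0o700)
        created.append(path)
    return created
```

Because everything lives under the one home directory, backing up, migrating, or NFS-mounting a user's entire world is a single-subtree operation.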

This is one of the fundamental strengths: as a non-privileged user, you only have write access to the files under your home directory and a restricted number of temporary directories. So if (and when - even on Linux) you get exposed to some malicious code, only your files are at risk, not the system as a whole!

Revealed: Public sector's web gravy train

Peter Gathercole Silver badge

and wash the cars

No. They teach the children, run the libraries, clean the streets, collect refuse (unless this is outsourced), inspect the environment, enforce parking regulations, grit the roads, examine planning applications, man the help lines and front-office services, manage the care home provision (and in some cases run the homes themselves), run the leisure centres and swimming pools, see children across the road, maintain the street signage, run the electoral roll and elections, collect and manage council tax, and on top of all of that, they manage themselves and their presentation to the people.

And I'm sure I have missed a whole lot out.

I might have mixed up local and county council responsibilities, but the councils actually do a huge amount to maintain our society. Whether they do it well or not is another matter...

Peter Gathercole Silver badge


If they were able to make a value decision themselves about benefit vs. cost, then I would agree with Mark65.

But they are being told in no uncertain terms by a meddlesome government what they must do, with strict timescales and financial penalties. Their hands are tied, they generally do their best (which may not actually be very good, because councils are not in the Internet business), and probably spend more money than they need on failed work and private sector 'consultants'.

Councils can be run like a business, although one with a tied customer base. The belief of government and their (paid) advisers is that this is the way councils should be run, as private-sector management *MUST* be better than public-sector. The problem with this (as you have suggested) is that councils are not really commercial businesses, and will only lose customers if they actually move geographically.

Peter Gathercole Silver badge

Please look at the scale

What are you comparing a council to?

If you do like-for-like comparisons between a county council with maybe 30,000 workers serving around 400,000 households with 850,000 residents (these are the approximate figures for Norfolk) against a corporation with 30,000 employees and close on a million customers, I think that the figures may be surprising.

How much do you think a small bank, or possibly IBM UK, spends on its web sites? I'm sure that the comparison would be very interesting. I would not be surprised if the councils spend less.

And look at the services they are being forced to supply (by government regulation) on the web, even if only to tell people their rights and entitlements. Housing, social services, schools, care provision, refuse collection, environmental health, roads, planning, enforcement of regulations, business rates, council tax, court services, local business development. And I'm sure I've missed many out. All the information has to be correct within guidelines.

It's a big, big problem, quite beyond the experience of most people (and probably most councils) to comprehend. This leads to the problem being treated like eating an elephant - one bite at a time - which, as we all know, leads to inefficiencies.

Southwest One is an example where private-sector companies come in and manage to spend more money doing less than the councils ever did.

Lucid Lynx fights 'major' X-Server memory leak

Peter Gathercole Silver badge


My god (if I had one), you're a better man than me!

Never sleeping or eating or going to the loo. You really never have an opportunity to log out!

I think you need to move your long-running tasks into a batch process that can run divorced from a GUI, and learn to set up your environment quickly.

I have met members of the "it takes too long to set everything up when I log in" brigade, and all I can say to them is to stop whining, and learn to automate their post-login setup process. I tend to use a personally enforced logout/login to keep the number of windows I have open manageable.

Peter Gathercole Silver badge



I think we have identified a REAL geek here! Obviously does not have a Significant Other who complains about the noise produced and electricity consumed (while using the tumble drier even when the sun is shining!)


I agree it does not NEED shutting down (see my comment on my firewall), but I pay huge amounts to the electricity company already.

What I am hoping for now is a power supply that draws less than a watt when in standby, a motherboard that responds correctly to WOL requests, and an ACPI subsystem that allows Linux to suspend and resume correctly. I can start my RS/6000 44P remotely; why have I not been able to get any of my Linux boxes to do the same?

I'll look at this again with Lucid, as I've recently replaced the motherboard in my workhorse system.
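For reference, the WOL request mentioned above is nothing exotic at the software end: it's a "magic packet" of 6 bytes of 0xFF followed by the target MAC address repeated 16 times, usually broadcast over UDP. A minimal sketch (Python, function names my own - the hardware/BIOS side is where my boxes fall down, not this bit):

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet for the given MAC address."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    # 6 bytes of 0xFF, then the MAC repeated 16 times = 102 bytes.
    return b"\xff" * 6 + mac_bytes * 16

def send_magic_packet(mac: str, broadcast: str = "255.255.255.255", port: int = 9):
    """Broadcast the magic packet over UDP (port 9, the discard port, is customary)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))
```

The NIC itself pattern-matches for this payload while the machine is in standby, which is why it only works if the PSU keeps standby power on the NIC and the BIOS/firmware has WOL enabled.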

Peter Gathercole Silver badge

Maybe I should have checked, but...

...I was taking the article at face value, and it says that the patches were rolled into the release candidate. The release candidate is supposed to be what is released unless something seriously bad is found.

The reason why it is important is because there was bad press and lingering stories of woe for a couple of weeks after Karmic was released, and Canonical cannot really afford the same for Lucid.

Peter Gathercole Silver badge

Important indeed

This is newsworthy because we are two days away from GA, and this is not a bug in a beta, it is a bug found in the Release Candidate. Generally speaking, it is unusual to get late fixes in the RC, because of the alpha and beta programs.

One question, however. I know why you might want to keep a server on for months at a time (my firewall system gets restarted about 4 times a year when I have a loss of power), but why stay logged in for all that time? Every time you log out and in (at least on Dapper, Hardy and Jaunty, and various RedHat and Mandriva distros), the X server is re-started, recovering the lost memory.

Gene that allows growing a new head identified

Peter Gathercole Silver badge

This make me feel uneasy

Surely, playing around at this level could well cause cancers and teratomas (some of the pictures of these just make me feel sick!).

After all, it is a breakdown of the controls on cellular division that causes both of these conditions.

Microsoft wins big in Chinese piracy lawsuit

Peter Gathercole Silver badge

Bad comparison

OpenOffice is not written as a commercial product. The fact that it is so good is a tribute to the people who have contributed their time and other resources, and the companies who believe that they can make other sales as a result of being apparently altruistic (Old Sun comes to mind). They are entitled to give their effort away if they choose.

But just giving away software is not a business model, which makes it an unfair comparison. In this respect, I do actually agree with Microsoft. They have invested effort producing the software, they are entitled to get reward for their effort if people want to use it. It would be their right to give it away, but it is not a duty for them to allow anybody to use it. It is more an argument of value and worth.

Now I'm not saying that people should stop using OpenOffice, but just that they be aware that free at the point of use does not mean free to produce. I also disagree with the prices that they charge, but I agree about their right to charge something.

You could almost turn the tables, and claim that Microsoft's commercial product is being undermined by the supply of a free alternative. This is very similar to the argument that Netscape used against Microsoft giving away Internet Explorer (OK, OpenOffice is not an integral part of any OS, but that was not what was initially argued).

HP: last Itanium man standing

Peter Gathercole Silver badge

Not that simple

As I remember it, the HP/Intel tie-up looked like a good fit, even though in retrospect I would say that Intel took HP for a ride.

At the time HP decided to jump to a jointly developed chip with Intel, it was engaged in an arms race with IBM. Tim said that it was Itanium that made IBM invest in Power, but in actual fact this investment pre-dated Itanium by at least 5 years. IBM bumped development of Power, PowerPC, RS64, and Power2 through Power7 at various times, but it has been an almost continuous process, if we overlook the stumble with the 64-bit PowerPC 620 processor.

The original RIOS-based IBM POWER system, the RISC System/6000, was launched in 1990, and had been under development for at least 5 years before that. The driver was to be an industry leader in the Open Systems marketplace, as IBM had at last recognized that there was money to be made.

When first announced, the RS/6000 model 530 killed everything on the market stone dead, it was so much faster. HP had PA-RISC running in their MPE/iX line at the time, but it was not a single microprocessor, being built from discrete logic. The RS/6000 caused a huge stir, both because it was so much faster and because IBM put significant marketing weight behind the new systems. Sun were immediately knocked off the top of the workstation market, and never really managed to get back up there, and DEC invested heavily to produce a really hot chip in the Alpha, which, as a result of the need for speed, was significantly flawed and never really delivered on its promise.

HP rushed out systems to counter the RS/6000, based on the single-chip implementation of PA-RISC and running HP-UX. These were the HP 9000 models 720, 730 and 750, and the race was then on between IBM and HP to see who could have the fastest system. This reached its peak in the late 1990s, when some models of RS/6000 had marketing lives of less than 6 months.

This was tremendously expensive, and HP, who did not have a big chip-fabrication division, valiantly struggled to keep up, but was ultimately doomed to fail.

The way I remember the Itanium being pitched was that Intel were going to take on the development of the PA-RISC single microprocessor replacement, keeping most of the instruction set, but putting in features that would allow the processor to also run x86 binaries, and enhancing the x86 architecture for 64 bit. Intel would get access to HP's IP for the PA-RISC (which included high clock rate silicon and cache IP), and would use their considerable chip making skill to drive the product forward. HP would get a class-leading processor to keep their workstations and servers going. At least that is what was said by Intel.

What actually appeared to happen was that they designed Itanium to be their own processor, with less emphasis on making it a PA-RISC replacement, and more on trying to make it an upgrade path for 32bit x86 servers. They delivered it late, and the product did not live up to their claims as either a PA-RISC replacement, or a 64 bit x86 migration path. Intel attempted to use some of the IP to produce high speed x86 processors, but botched it with the Pentium 4, which was ultimately a dead-end.

Because of the delay, the world in general, and HP in particular, started looking elsewhere. HP appeared to lose interest in the UNIX marketplace, allowing both their own products and the subsumed products from DEC/Compaq (and, to a lesser extent, Tandem) to fall into the legacy category. They produced Itanium-based servers, but they were never up there with IBM, except in the very-large-system market. Only customer pressure has kept many of the OSes alive.

In the meantime, IBM has been left with the only non-Intel/AMD UNIX offering that is actively being developed, and as a result has kept market share. Even though there has been no real competitive pressure, IBM has used the convergence of the AS/400 and RS/6000 lines, and to a lesser but significant extent the z series, to move the architecture forward. They have borrowed from other IBM systems (and their competitors) to introduce type 1 hypervisors, hosted application partitions, and pretty much unrivaled virtualization capabilities. The supported filesystems have scaled, and the support for other technologies such as SAN and SVN has gone hand-in-hand with other IBM products.

'Beauty with antimatter bottom' created out of pure energy

Peter Gathercole Silver badge

...and made sure it was really hot

I'm off to find a teacake, and some perspective.

BOFH: Forgive and forget

Peter Gathercole Silver badge

Bit of a strange one today...

I'm struggling to work out whether the PFY was the cause of Simon's disappearance, or whether he did it on his own. I mean, if the PFY had actually attempted to dispatch the BOFH, how come Simon managed to claim the accident compensation (after all, you cannot make a claim for yourself if you are 'dead'), especially if your life-support machine keeps getting broken?

I feel that this story could have been much more (although maybe it is, and I am just commenting too early).

And why the requirement to stitch the PFY up with a potential fraud if you intend to let him back to work? It would make him more likely to try to get a 'promotion', especially if Simon is using that information for some form of blackmail.

Anyway, I'm hoping that there are more details of this story to come.

IBM debuts new Power7 iron this week

Peter Gathercole Silver badge

What a climbdown

Shock news....

Matt Bryant, using an amazing piece of ass-covering, claimed that his original post was just troll bait, rather than a real comment.

Unfortunately, the more sensible members of the Register commenting community were able to see through this with apparent ease, identifying Matt as one of the trolls that he claims to be targeting.

Peter Gathercole Silver badge


WTF are you talking about!

Any AIX code that is compiled with the compiler defaults will work. It just does, and has done for years.

There is a good chance (greater than 80%) that if you pulled a binary compiled for RS64 or Power2, or even the original RIOS chipsets from a system running AIX 4.3.2 or later, and placed it *WITHOUT CHANGE* on an AIX 6.1 system running Power7, it would run.

*IF* you are talking about extracting every ounce of performance from some code, then I agree that if you have optimized the code for cache size, processor affinity, the particular properties of the floating-point pipeline (as is being done where I work currently) or any number of other factors, you will need to re-compile it with the relevant options to get the best performance. But even then, the Power6-optimized code will probably run on Power7. This is not new, nor specific to IBM processors, and it hasn't been ever since I was working on PDP-11s and VAXes.

It is acknowledged that the execution profile of Power7 will be different from that of Power6. The design requirements for the processor were different, and I talk to some worried people at the moment about how scalable their model is. We currently have some code that tops out at about 768 processors, and gets worse if you increase the number above this. If the overall clock speed is dropped without a corresponding increase in the number of instructions per clock cycle, then this code may well be slower on Power7 than Power6.

But it's not certain. The speed of the level 3 cache, together with the shallower pipeline required for slower clock speeds, may just offset the drop in overall clock speed (pipeline stalls become less of a problem). And as the bottleneck in the code is inter-thread communication, the high bandwidth between cores on the same die, and on the same QCM, may also offer realistic hope of a performance gain. The interconnect between the nodes in an IH supernode should also perform better than what is currently used. And I believe that the number of in-flight speculative execution threads made possible by the additional execution units may reduce pipeline stalls even further.

Remember the Pentium 4 vs. Pentium 3 debacle, where a Pentium 3 at the same clock speed outperformed the Pentium 4 when it was first launched, and the follow-up Pentium M and D processors dropped the clock speed and again outperformed the Pentium 4.

The talk of in-order vs. out-of-order is bogus. The results for the same stream of instructions should be the same regardless of whether execution is in-order or out-of-order; if they're not, the processor is broken. The difference is that out-of-order execution may allow the hardware instruction scheduler to make better use of the available execution units, leading to more instructions per clock.

And anyway, the only applications that are really affected by the clock speed are speed-demon, floating-point-hungry, single-threaded research workloads. From my 30+ years of experience, this is a very small (but admittedly valuable) part of the AIX customer base.

For your commercial workload customer, the difference between the Power5, Power6 and Power7 architectures and instruction sets will be largely ignorable. What will be more important is the number of cores, the number of simultaneous threads that can be executed, and the memory constraints of the hardware. For these customers, the application providers may not even have optimized versions of their code for the different processors, but ship a one-size-fits-all distribution. Power7 is probably a big step forward for them. I know application providers who still compile on Power4 hardware with a generic set of compiler switches to allow the code to run on the entire processor set.

In fact, AIX is like this. The version of AIX 5.3 that runs on the Power6 575's (the current speed freak machine) is EXACTLY THE SAME as that which runs on an RS64 44P 170 (same install disks, I know, because I have done it!). There is no distinction on FixCentral for patches. It is just all the same.

Your comments about AIX 5.3 are interesting. Yes, there are some features, like Turbo mode, that will not be available to AIX 5.3. But AIX 5.3 is still supported (not sure if there is an end-of-life date yet; I would expect one if and when AIX 6.2 or AIX 7.1 is available), and there is likely to be a new Technology Level (TL12?) for Power7 that will add some of the new features. IBM have always kept the latest two releases of AIX under active support, and normally publish withdrawal from marketing for an OS about two years before support ends. This means that there are at least two years of AIX 5.3 support left, not that I would recommend anybody installing Power7 use AIX 5.3 at this time.

The instruction features are more likely to be conditioned by the compilers (which, incidentally, are currently OS-version agnostic: the Fortran and C compilers are the same packages for AIX 5.3 and AIX 6.1). What's even more impressive is that you can compile Power6 code on, say, a Power5 system, drop it onto a Power6 box, and expect it to run as well as if it had been compiled on a Power6 system.

All of this does not sound like the scenario you paint. Maybe you ought to work in a big AIX shop sometime, and see what binary compatibility is really like.

BTW. I cannot say where I am working, but we have more than one cluster in the first 100 of the November 2009 Top 500 Supercomputer list.

Brit astrophysical model scoops £1.1m at poker

Peter Gathercole Silver badge

Is there a course on writing ambiguous headlines?

Because this would be a prime candidate for a case study.

Completely accurate, but you have to actually read the article before you understand what it means!

Android on an iPhone? There's an app for that

Peter Gathercole Silver badge
Thumb Up


Sounds like a way for early adopters of the iPhone who have resisted the urge to upgrade to recover some of the value by making a market for their second-hand devices.

I expect an upsurge of these early iPhones appearing on eBay.

'Gossips' say Apple will acquire ARM

Peter Gathercole Silver badge

@AC. My bad.

Intel only announced that Atom was running Android last week (apparently, I appear to have missed it. Must have been because I had a busy week doing some real work).

But as to Atom and Windows, may I point you in the direction of what is happening in the NAS space, where Atom is already pushing ARM out. Almost all of the recent devices run Windows Media Server on Atom, possibly because this is of use to people with Windows PCs, but also because Atom's power draw is now sufficiently low that the ARM advantage is being eroded.

If Apple were to buy ARM and restrict advances in the architecture, do you really not think that the real winner would be Intel?

Peter Gathercole Silver badge

Not an immediate collapse

ARM do not produce processors. They license the technology, and guide its onward development.

The current generations of ARM processors made by the likes of Qualcomm and Marvell would still be produced under existing licenses that Apple could do little to change (unless the Qualcomm, Marvell etc. lawyers were asleep at the wheel).

What they could do would be to stifle innovation for new developments and licenses, keeping the best for themselves and allowing the rest of the world to struggle on with what they have already.

But I would hold up what happened to MIPS and Alpha (and to an extent PA-RISC) as examples of what happens when a company not primarily in the business of designing processors has control of a processor architecture. And SPARC may be going the same way (do you really think that Oracle are interested in investing significant sums to progress SPARC beyond what we have already?).

I will rue the day when Intel and AMD (lumped together because they produce code-compatible processors), and possibly IBM if they decide to continue developing Power, are the only game in town.

Peter Gathercole Silver badge


ATMEL are ARM licensees. They have a series of ARM7 and ARM9 based products.

Below this, they have the AVR Microcontrollers, which are much simpler beasts.

Do you think that there is much overlap? I don't think so.

So they have the fab, and they have what looks like their own range of micro-controllers. How long would it take to develop their 32-bit AVR to provide the ARM functionality? By which time, everybody else would have switched to Atom or whatever lower-power processor Intel has in the pipeline.

Peter Gathercole Silver badge

This is bad news

An independent ARM allowed the processor to become ubiquitous. If Apple buy ARM and then restrict the technology, it gives Intel a clear playing field to clean up.

And with Atom comes Windows...

Google's going to have to port Android to Intel!

Reverse-engineering artist busts face detection tech

Peter Gathercole Silver badge


Is this a reference (gripping hand) to "The Moat Around Murcheson's Eye", aka "The Gripping Hand", by Larry Niven and Jerry Pournelle?

If so, well done that man! Excellent book!

Epic Fail: How the photographers won, while digital rights failed

Peter Gathercole Silver badge

Answer to your first question

On paper. The paper system still exists, and will continue to do so for those exceptions (like not having an Internet feed) that will continue to exist in the future.

Just because Internet filing is the preferred route does not mean that it will be the only one.

School secretly snapped 1000s of students at home

Peter Gathercole Silver badge


...cannot be used for this type of wake-up remotely, unless the system generating the WOL packet is either on the same physical (wired) network, or there is some form of MAC level routing set up.

By definition, WOL packets have to be at the MAC level (if the laptop is off, DHCP cannot allocate IP addresses), and these do not route through standard IP routers.

And this also means that the laptop cannot be on a wireless network, as WOL does not work over an 802.11abgn network.
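
For the curious, the format is simple: a magic packet is six bytes of 0xFF followed by the target's MAC address repeated sixteen times, 102 bytes in all. A minimal sketch of its construction (the MAC here is made up):

```shell
# Build a Wake-on-LAN "magic packet", expressed as a hex string:
# 6 bytes of 0xFF, then the target MAC repeated 16 times.
mac="001122334455"            # hypothetical target MAC, colons stripped
packet="ffffffffffff"         # the 6-byte synchronisation stream
i=0
while [ $i -lt 16 ]; do
    packet="${packet}${mac}"
    i=$((i+1))
done
echo "${#packet}"             # 204 hex digits, i.e. 102 bytes
```

Turning that hex into raw bytes and broadcasting it over UDP (conventionally port 9) needs a proper tool such as etherwake or wakeonlan, and, as above, the broadcast only reaches machines on the same layer-2 segment.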

My suspicion is that when the laptop is on, there is a VPN set up to allow the laptop to access, and be accessed by, the school systems, regardless of the networks, routers and firewalls between the school and the laptop. This will be a software VPN, which relies on the OS, which means that the system has to be on.

Unless someone has produced a LAN card integrated in these laptops that does IP and VPN in the NIC itself while in standby mode. If they have, I suggest that these would be laptops to avoid, as who knows who would be able to snoop.

Fedora 13 - Ubuntu's smart but less attractive cousin

Peter Gathercole Silver badge

How long ago

...was your last Linux install?

I've put Hardy and Jaunty on lots of systems, and generally it just works.

Almost every wireless card I've used (and I've got many rattling around in drawers at home) is recognized without need of a vendor supplied installation disk.

The last problem I had was the hacked Atheros chipset in my EeePC 701 with Hardy (fixed by a specific module from the community), but by the time Jaunty came along, it worked without problems.

What impressed me recently was when I took my mule system, and replaced the motherboard, which resulted in different processor, support chipsets, graphics adapter, memory, network - well pretty much everything besides the wireless card (it's a deskside system some distance from the core of the home network) and the media peripherals.

The existing Hardy install (yes, Ubuntu 8.04: two years old, but kept up to date) barely batted an eyelid. It recognized the onboard Nvidia graphics (it previously had an ATI AGP card), asked to install the correct driver for it, and came up as if nothing had changed. It just coped with the fact that the support chips changed from a VIA set to an Nvidia nForce set, and that the processor changed from an AMD Athlon XP to a Pentium Dual Core.

The last time I did this with Windows XP, I had so many problems, mainly because the Windows 'you've changed your machine, are you still entitled to run Windows' checks caused me to have to call Microsoft to re-authorize the retail version of XP (which is allowed to be moved between systems as much as you want). And the specific IDE drivers for the original motherboard refused to let me access the optical drive to enable me to load the correct ones from the driver CD packaged with the new motherboard to fix the problem.

I've not used Vista, but I built a Windows 7 system last Christmas. I was genuinely impressed by how easy it was to install, and it is clearly a step change from XP. The install I did was on pretty much generic hardware, so I would hope that it would be quite easy.

But comparing the installs of Linux and Windows is largely bogus, because almost nobody outside of the technical community actually installs Windows on any system. They buy it pre-installed, and just use it until it becomes so cluttered and slow that they discard the whole system. To somebody who has never installed a system, it will always be a traumatic operation to partition their disk and install a completely foreign OS with no experience of building systems. This probably explains many of the 'tried it, found it so difficult that I just switched back to Windows' type of comments.

Many of these people would find a second or third install so trivial compared to the first that they would change their view.

Peter Gathercole Silver badge

CPU cycles

If you have a modern GPU with OpenGL support, this is almost exclusively handled there (fading, bouncing, wobbling etc.)

Unless you've harnessed your GPU to do calculations, you've probably got ample spare cycles there.

I leave Compiz on on my laptop, just to try to get people to notice it and increase user awareness of Linux. It doesn't really chew up a lot of CPU cycles on my lowly 2GHz Pentium 4 Thinkpad T30 unless you are using rapidly changing pixmaps (like video), and I don't normally do this when I am working.

The ATI Mobility Radeon in this Thinkpad is not up to water or flame backgrounds, but it runs a mean desktop cube.

Administrator access: Right or privilege?

Peter Gathercole Silver badge

sudo lockdown

It's perfectly possible to lock down sudo so that you cannot run any shell, and there are many books around that will also show how to prevent user escapes from allowed commands (shell escapes from vi, for example).
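
As an illustration of the idea (the username, command path and service name are all invented, and /etc/sudoers should only ever be edited via visudo), a locked-down entry might look something like this:

```
# /etc/sudoers fragment (illustrative only; edit with visudo).
# alice may restart one service as root; the NOEXEC tag stops that
# command from spawning further programs, which is exactly how shell
# escapes (e.g. :!sh from vi) work.
alice   ALL = (root) NOEXEC: /usr/sbin/service apache2 restart
```

Because no entry grants a shell, "sudo -s" or "sudo su -" simply fails for this user. One caveat from the documentation: sudo's noexec mechanism works by library preloading, so it cannot restrain statically linked binaries, which is the sort of detail those books cover.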

This is another advantage of UNIX and UNIX-like OS's: there's lots of documentation and experience 'out there'. When your only avenue to reliable knowledge is a vendor's training program, you become their technical and economic hostage. This is one reason some vendors like changing their product frequently: it gives them the opportunity to sell their training over and over again.

Peter Gathercole Silver badge

Another view.

I'm sure I don't agree. Yes, UNIX is nearly 40. Yes, there are uglies in the way that you administer it, and also in the crude security model, but what are you holding up as a shining example of something better? I've seen administration tools that looked prettier, but they generally end up being so locked down as to be largely useless, or so complex to set up (I'm thinking of CDE with its cross-system authentication here) that you have to be a real propeller-head to get them working.

UNIX has seen off so many alternatives, and still lives on, while everyone else learns the hard way over-and-over again that hidden complexity leads to difficult-to-manage systems. The more layers of 'gloss' you add to 'simplify' administration, the more problems you build in when it goes wrong. (I'm coining Gathercole's Law as being "Apparent simplicity causes hidden complexity" )

If you need something better for users, then Gnome and KDE will provide something just as pretty as other OS's (and a product from the 1980s called Looking Glass, which predates usable Windows systems, also springs to mind), so the so-called unfriendly* command line is not necessary for those who don't need it. Sometimes you ought to look and see what is possible with the simplicity of the shell command line as practised by real power users. It may not LOOK pretty, but it is elegant and functional.

I have frequently stunned managers and younger colleagues by piping together several small tools with simple stream processors (think awk or sed) to achieve in a matter of minutes things that they were prepared to commit days of work to. This is especially true in clusters or networks of near-homogeneous systems, which is where UNIX excels.
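
As a trivial sketch of the kind of thing I mean (the 'log' here is fabricated on the spot), finding the most frequent first field in a file takes one line of classic tools:

```shell
# Fabricate a tiny sample "log": field 1 is a client name.
printf 'alpha get\nbeta get\nalpha put\nalpha get\nbeta put\n' > /tmp/demo.log

# Classic pipeline: extract field 1, count the occurrences, rank them.
awk '{print $1}' /tmp/demo.log | sort | uniq -c | sort -rn | head -1
# top line: alpha, with a count of 3
```

Swap the sample file for a real web or mail log and the same one-liner gives you your busiest clients, which is exactly the sort of thing people were prepared to write a program for.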

It is a testament to the original design criteria of the shell and the base UNIX command set that most of the commands I use on a daily basis came out of Bell Labs' Version 7 UNIX in the late 1970s. These have been augmented over the years, but you would still recognize that system as UNIX today. This may mark me out as a dinosaur, but hey! I'm still working, and I appear to have the respect of my peers, who keep asking me to do things they cannot work out an easy way to do.

In my view, what is wrong with the example quoted WAS a UNIX design flaw, that of allowing spaces in filenames (space should have been made a banned character). But the very flexibility of the shell and filesystem interface, allowing almost any character in filenames, has allowed multi-byte character set languages to be integrated into UNIX with comparatively little effort.
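
The flaw is easy to demonstrate (the filename is invented): an unquoted shell variable undergoes word-splitting, so one file containing a space becomes two arguments:

```shell
touch '/tmp/my file.txt'
f='/tmp/my file.txt'

# Unquoted: the shell hands ls two names, /tmp/my and file.txt,
# neither of which exists.
ls $f 2>/dev/null || echo "unquoted: ls was handed two bogus names"

# Quoted: the variable stays one argument, and the file is found.
ls "$f" >/dev/null && echo "quoted: found it"
```

Every generation of administrators seems to rediscover this the hard way, usually inside a find or rm loop.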

(*) Often, the reason the command line was seen as unfriendly is that most users were too lazy to learn the dozen or so commands that were the core set needed to do their job. They got frightened because two- and three-letter abbreviations were not close enough to English (e.g. cat, short for concatenate: an English word, but one many people are not familiar with). This was a matter of perception and training. Possibly the only OS that got the command line right was VAX/VMS with DCL, which allowed you to use full command names, or any unique abbreviation. But this made the command processor one of the largest tasks in the system, and it was still not English!

P.S. I'm really not looking forward to the time when role-based security (already present in the few genetic UNICES left, and also in Linux since the 2.6 kernel) becomes the norm. I predict that we will see stories of administrators who don't fully understand the importance of local privileged accounts locking themselves out of their systems when the LDAP or Active Directory servers cannot be contacted to authenticate them to fix the problem.

Guy Kewney, pioneer, guru, friend - RIP

Peter Gathercole Silver badge


I've been reading Guy's work for over 30 years, since the early issues of PCW. He was one of the computing journalists who made it worth buying a magazine just because he had written an article for it.

I cannot say how much I will miss seeing his clear and concise style of writing.

I especially remember his reviews of the original Acorn Archimedes in Byte, where he was able to do a quality job of reviewing a world-class product in a US publication. This is one of the few issues of Byte that I have kept in my keepsakes collection, and it will become all the more treasured as a result.

My condolences to Lucy, and everyone else who had the privilege of knowing him personally.

New Reg comments system ready to launch

Peter Gathercole Silver badge

I'm honoured..

.. by having amanfromMars respond directly!!

I've often wondered whether he was speaking in code rather than translated Martian. I think that this post clarifies the matter, don't you?

Peter Gathercole Silver badge

I sincerely hope...

... that this is another April Fools story.

Do you really believe that paying to post comments is something that the majority of commentards will actually do? It's just a bit of light relief, and a way to see whether other people are of like mind.

Next you will be saying that BOFH will be only available to subscription holders.

Hang on. This isn't over 250 words yet, is it?


Peter Gathercole Silver badge

Yawn. Pinch, Punch etc.


We've all been playing Half-Life and quipping about the LHC for too long for this to be remotely regarded as amusing.

Greenpeace fears clouds will turn earth brown

Peter Gathercole Silver badge


I know the term is used in a broad sense, but I would suggest that it is animal husbandry that actually contributes to greenhouse gases. In its strictest sense, agriculture is the farming of plants, which I would guess is actually a net CO2 consumer, and only produces methane if something goes wrong.

Mind you, without an animal sector of farming, the following would happen:

1. We would have to work out what to do with millions of tons of straw every year.

2. Without animal fertilizer, you would have to learn to rotate crops or fortify land with artificial fertilizer (which is related to oil, as it requires energy and some oil by-products), or suffer a large drop in productivity.

3. Upland/marginal/river-margin land would become unproductive (a sheep can graze on land you cannot get a tractor on).

4. You would have to learn to live without other foods, not just burgers. Think milk, cheese, yogurt, cream, bacon etc.

5. If you include chickens (major methane producer) in the mix, you also lose eggs, the cheapest and most widely eaten protein source.

6. There would be a serious protein shortage, which would require serious management of the population's diet in the West.

7. The countryside becomes huge expanses of either fallow scrubland (uneconomical to grow crops on) or vast belts of arable monoculture.

8. Nature abhors a vacuum: natural animal life (admittedly a lower greenhouse gas producer) will move in where the farmed animals used to be.

9. And finally, if the human population were to move to a pulse- and/or brassica-based diet (think Brussels sprouts or cabbage for the latter), then WE would probably become the largest producer of methane (at least in my experience!)

Royal Navy starts work on new, pointless frigates

Peter Gathercole Silver badge

Type 12 (improved)

This was the Leander. It was built on the basic hull form and machinery of the Type 12 Whitby and Rothesay frigates, which were specialist AS and AA frigates.

Interestingly, the last batch of Leanders (the 'broad-beam' ships) were actually completed with different specialist configurations (Ikara AS, or Sea Wolf AA, and I believe that some had Exocet) in place of the twin 4.5" turret, and many older vessels were converted during their lifetime.

So we went from specialist to general, then back to specialist.

After this, the Navy wobbled a bit. The Type 21 (Amazon) was a commercial design, regarded as poor because it did not have sufficient upgrade potential built in; but the Type 22 (Broadsword) was a Navy design, and the later 'stretched' versions were so successful that some remain in service today. As I understood it, the extra space was not for additional weapons but for command and control capabilities, giving these vessels the additional radar and tactical control facilities to allow a conflict to be run from on board.

Looking back, it seems strange now to consider that the Leander class ran to 26 ships. Nowadays, the entire major surface fleet is not much larger.

And the other thing to note is that today's destroyers are the size of WW2 cruisers, and the frigates the size of flotilla-leader destroyers. There are effectively no vessels in the small frigate/corvette category, as these do not have full ocean-going capabilities without sacrificing either speed or weapons for endurance.

In reply to everybody saying that modern warships are armoured: that is not really the case. Whilst they do have survivability design features, a single missile or largish bomb will take a frigate out, and they would suffer quite badly from shrapnel damage from a near miss. What they do have is the ability to operate in nuclear or biologically contaminated fields of war (so-called ABC: Atomic, Biological or Chemical), and the Type 45 destroyer is intended to be a 'stealth' ship, with surfaces and water sprays designed to scatter radar, to make it look smaller on radar than it actually is. Imagine how large a container ship, with its large flat sides, must look.

These features are most comically illustrated by the so-called Kryten turret (the improved Mk 8), which looks, naturally, like the head of Kryten from Red Dwarf.

In addition, a warship must be able to move at least as fast as the rest of the group it is with (including allied navies), and to be able to react rapidly, which is why they switched from oil-fired steam turbines, with a start-up time measured in hours, to gas turbines and/or diesels, allowing a ship to be under way in a matter of minutes. Keeping the lighter gas turbine fuel safe in combat is MUCH more difficult than keeping heavy fuel oil, which requires more design work.

And Lewis's ill-informed musing about carriers being able to protect themselves does not take into account amphibious warfare vessels like HMS Ocean, which do not have all the trappings of a full carrier.

Putting it bluntly, frigate-sized ships are much more cost-effective and useful in any number of different, and possibly unexpected, environments than helicopter-carrying RFAs.

So Lewis. 3/10, could do better.

Biting the hand that feeds IT © 1998–2019