* Posts by Peter Gathercole

2953 posts • joined 15 Jun 2007

Windows Vista has been battered, says Wall Street fan

Peter Gathercole Silver badge
Thumb Up

Let's agree...

... that Vista on new hardware designed for it is stable and capable, but unless your old system REALLY rocked, it is probably better off with XP.

I do not want to enter a flame war. If you like Vista, stick with it. If you don't, use XP (or Linux). It really is horses for courses.

But it would be nice if Microsoft would continue to produce security patches for XP for all of the people who have perfectly capable XP systems that will not take Vista, or who do not want to pay for what is probably an unwanted upgrade.

I would hate to think that people started discarding perfectly serviceable systems because their bank, or some other organisation they deal with, decides that XP is not secure enough once it goes out of support. Think of the damage to the environment from all the plastic, heavy metals and CRT screens.

They cannot even be used with their Microsoft OS if they are donated to charity, as the Windows EULA does not allow it.

Maybe there is an opening for offering recycled kit with Ubuntu loaded on it at budget prices.

Most 'malfunctioning' gadgets work just fine, report claims

Peter Gathercole Silver badge
Thumb Down

5% faulty

Surely, even 5% should be too high for new products. That's 1 in 20 items faulty.

Is it really the case that it is regarded as costing less to ship a faulty device from China, and dispose of it in Europe, than testing it before it is shipped? They sure as hell don't rework and fix faulty devices in Europe unless they are high cost items.

Or maybe the shipping causes the damage. So much for the value of the blister pack and expanded polystyrene.

Asus announces 10in, HDD-equipped Eee PC

Peter Gathercole Silver badge

Something not right

I don't know. These bigger EeePCs just do not look right after the 701. I wish that they had produced a model with a screen with more pixels, without the bezel, but in the same case.

I can cope very well with the keyboard on a 701, but the screen just does not have enough space, even for some of the default menus.

Five misunderstood Vista features

Peter Gathercole Silver badge

@Roger Barrett

I have a couple of XP systems that have games installed and run by the kids.

There is a significant problem with the security model, particularly with XP Home running on NTFS, where you need admin rights to install DLLs and other config files on the system, and the permissions are then set so that you also need admin rights to read/change those files, and to write any saved games. It does not affect FAT32, because that filesystem has none of the extended access controls that give you the security (and problems) that running as a non-administrative user provides.

The additional problem with XP Home is that you do not have the extended policy editors for users and groups, or to manipulate the file attributes on NTFS. I suspect that even if XP Pro was being used by the majority of users, they would not want to get involved with this type of administration anyway.

It is possible to do some of the work with cacls from a command prompt, but it is very hard work. I have not found any way with XP Home (without installing additional software) to manipulate the user and group policies at all.

My current solution is to create an additional admin user, and then hide it from the login screen (with a registry key change). The kids can then do a right-click, and then a "run as" to this user for the games.
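For reference, the hiding trick is a single registry value. A sketch (assuming the hidden admin account is called, say, "gamesadmin" - a name I have made up for illustration) would be:

```
Windows Registry Editor Version 5.00

; Hide the (hypothetical) "gamesadmin" account from the XP Welcome
; screen. The account still exists and can be used via "run as".
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\SpecialAccounts\UserList]
"gamesadmin"=dword:00000000
```

Deleting the value (or setting it to 1) makes the account visible again.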

This is still insecure for a number of reasons. They already know that they can use this account to run any command with "run as", and the system is still as vulnerable to security flaws within a game, but it is a half-way house.

Unfortunately, it does not appear to work 100% of the time. I tried using it to install "Blockland", a game that lets you build a first-person world out of something similar to Lego, and it would appear that there is an access-rights inheritance feature (read: problem) that I don't know enough about to fix. It installed OK when run directly from an admin login, so I did not pursue it.

I must admit that this type of problem scares me, especially if similar issues exist in SELinux (I normally have SE disabled), but I guess that I am just resisting change to a system I understand well. Role-based authorisation is definitely the way forward, but it is just so difficult to accept this type of change.

Fedora 9 - an OS that even the Linux challenged can love

Peter Gathercole Silver badge

@AC about cat - if you are still reading

This is UNIX we are talking about; almost all things are possible, although I suspect that your ksh loop may well run slower than cat.

I take your point about 'command' and 'program'; sloppy thinking on my part. But that sloppy thinking runs through the entire UNIX history. Check your Version 7, System V, BSD, AIX or any other documentation, and you will see that 'cat' appears in the "Commands" section of the manual (run "info cat" on a GNU Linux system, and see the heading: Section 1, "User Commands").

Interestingly, your one-liner does not work exactly as written on AIX (a genetic UNIX), as its echo does not have a -e flag. Still, you probably don't want that flag if you are trying to emulate cat. I have used echo like this in anger, when nothing but the shell was available (booting an RS/6000 off the three recovery floppy disks to fix a problem, before CD-ROM drives were in every system).
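For anyone curious, a portable version of that shell-only cat would look something like this (a sketch using printf rather than echo, since echo's flags are not portable; note that unlike real cat it drops a final unterminated line):

```shell
# Emulate 'cat file' using only shell built-ins.
# IFS= and read -r stop the shell mangling leading whitespace and
# backslashes; printf avoids echo's non-portable flags.
fakecat() {
    while IFS= read -r line; do
        printf '%s\n' "$line"
    done
}

printf 'hello\nworld\n' > /tmp/fakecat.$$
fakecat < /tmp/fakecat.$$    # prints the two lines back, like cat
rm -f /tmp/fakecat.$$
```

It works, but for anything bigger than a few lines the real cat will leave it standing.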

I was not really ranting, I was trying to put a bit of perspective on the comments, from a historical point of view. I'll bet you would find a need to complain if cat was really not there on a distro.

Sorry, I did miss the lighthearted comment. Still, just a bit of fun between power users, eh!

Myself, I try to stick to a System V subset (vanilla, or what), mainly because it is likely to work on almost all UNIX from the last 20 years. When you have used as many flavours as I have, it's the only way.

Yes, I've been around and yes, I am still making a living out of UNIX and Linux, so don't feel I'm a complete dinosaur yet. And I am also open enough to use my name in comments (sorry, could not resist the dig).

Peter Gathercole Silver badge

Wireless and Linux

I can get Hermes, Orinoco, Prism, Atheros and Centrino (ipw2200) chipsets working out of the box with any Linux that supports the GNOME Network Manager (which I have installed on Ubuntu 6.06 - it's in the repository). I can get the Ralink and derived chipsets working for WEP without too much trouble, but it takes some effort to get WPA working, which most people will not be able to sort out themselves.

Where there is a weakness is in the WPA supplicant support. Atheros and Centrino chipsets with Gnome Network manager will do it, and in a reasonably friendly way.

I am using a fairly backward Ubuntu release, so I suspect that it will be a little easier in later releases. I know that the normal network system admin tool in the menu does not work with WPA at all in Ubuntu 6.06.

Where the problem lies is that with a card intended for Windows, the user gets the nice little install CD, which takes away from them all the hassle of deciding which chipset is being used.

Modern Linux distributions probably have the ability to drive almost all of the chipsets used, out of the box, and also have the NDIS wrappers as a fallback, but you need to be able to identify which chipset is in use to make useful decisions.
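As a concrete (if contrived) sketch of what I mean by identifying the chipset: it is the PCI vendor:device ID, not the marketing name on the box, that the drivers actually match on. Something like the following digs it out (parsing a canned lspci -nn style line, so the example is self-contained; on a real machine you would run `lspci -nn | grep -i net`):

```shell
# A sample line of the sort 'lspci -nn' prints for a wireless card
# (canned here rather than read from real hardware).
sample='02:00.0 Network controller [0280]: Intel Corporation PRO/Wireless 2915ABG [8086:4224]'

# Extract the vendor:device ID - the [xxxx:xxxx] pair at the end.
# The class code [0280] does not match, as it has no colon pair.
id=$(printf '%s\n' "$sample" | grep -oE '\[[0-9a-f]{4}:[0-9a-f]{4}\]' | tr -d '[]')
printf '%s\n' "$id"    # prints: 8086:4224
```

Feed that ID into a web search and you will usually find which kernel module is supposed to drive it.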

If manufacturers provided the details of the internal workings of the card (basically the chipset details), or even gave the same degree of care to installing their products on different Linux distros, as they do on different Windows releases, then I'm sure that there would be less discontent amongst non-hardcore Linux users.

I know that this is hampered by the plethora of different distributions out there (see my earlier comments), but it should not be rocket science.

An additional complication is that if you go into your local PC World (assuming it is still open after Thursday) and ask for a Wireless PC-Card using the Atheros chipset, you will get a blank look from the assistants, as they will understand "Wireless" and may understand "PC-Card" (but you might have to call it a PCMCIA card), but Atheros might as well be a word in Greek (actually, it probably is).

And it is complicated by manufacturers who have multiple different products, with the same product ID, using completely different chipsets (if you are lucky, you may get a v2 or v3 added to the product ID on the card itself, but not normally on the outside of the box).

If you definitely want to get wireless working, I suggest that you pick up one of the Linux magazines (-Format or -User) and look for adverts from suppliers who will guarantee to supply a card that works with Linux, or keep to the Intel Centrino wireless chipset that fortunately is in most laptops with Pentium processors.

If your laptop uses mini-PCI cards (under a cover, normally on the bottom of the laptop) for wireless expansion, then there are many people on eBay selling Intel wireless cards for IBM Thinkpads (2915ABG) that will probably work. That's what I am using, and it works very nicely indeed.

Peter Gathercole Silver badge

Who needs cat?

I know that this is absolutely geeky, but cat is a command (like ls, find, dd, ed etc.) which has been in UNIX since its inception. I have been using it since the 1976/77 release of Bell Labs Version 6 for the PDP-11, long before ksh and bash (in fact, the Version 6 shell was *really* primitive, only being able to use single-character shell variables, for example).

It actually does a lot more than you think. Look up the -u flag, and with a couple of stty settings, you can make a usable, if very basic, terminal application (one cat in the background, one in the foreground).

Try doing a "cat *.log > logfiles.summary" using your ksh one-liner.

How about "ssh machine cat remotefile > localfile" for a simple file copy.

Also, cat has an absolutely minuscule memory footprint (the binary is just over 16KB on this Linux system I'm using).

It is one of the fundamental Lego-style building blocks that make UNIX so powerful. Whilst it is true that other tools are around that can do the same job, you cannot remove it, because of compatibility with old scripts. And you can guarantee it is there all the time on any UNIX variant (try running a script that starts "#!/bin/ksh" on a vanilla Linux system). And it is in every UNIX standard from SVR2 (SVID, anybody?) and POSIX 1003.1 to whatever the current (2006?) X/Open UNIX standard is.

Remember that the UNIX ethos is "efficient small tools, used together in pipelines". Even things like Perl are anathema to UNIX purists, because they do everything in one tool.

I think you need to see a real UNIX command-line power user at work. I have literally blown people's minds by doing a task in a single pipeline of cut, awk, sed, sort, comm, join etc. that they thought would take hours of work using Excel or Oracle.

Mine is the one with the treasured edition of Lions' annotated Version 6 UNIX kernel in the pocket.

Proud to be celebrating 30 years of using UNIX!

Peter Gathercole Silver badge

@AC and others

I used to be a committed Red Hat user for my always-with-me workhorse laptop(s), from 4.1 through 9.1 (or was it 2), but when Fedora came along, I got fed up with the speed at which Fedora changed. You just could not use a Fedora release for more than about 9 months and still expect the repositories to remain for that package that needed a library you had not yet installed.

Also, when you update, you pick up a new kernel, and all of the modules that you had compiled need frigging or recompiling (my current bugbear is the DVB-T TV adapter I use).

I switched to Ubuntu 6.06 LTS mainly because I liked the support that they promised, and have delivered. Also the repositories are extensive, and are maintained.

Here I am again, two years later, and I can remain on Dapper if I want to (for quite a while), but I am finding that it is taking longer and longer for new things to be back-ported, and I have had problems getting Compiz/Beryl (or whatever the merged package is called) working with GLX or the binary drivers from ATI.

I am going to go to 8.04 (LTS again) for the same reasons as before, and I am removing the last remains of Red Hat 9 from my trusty Thinkpad T30 (the disk has moved/been cloned several times, keeping the installation the same, just on different hardware - ain't Linux good).

I wish there was an upgrade from 6.06 to 8.04, but I guess that one re-install every two years is not too much to put up with, especially as I choose to keep my home directory on a separate partition.

I may give Fedora 9 a try on USB stick, just to see how things have changed, but I think that Ubuntu is still my preferred choice. This is mainly because I use my laptop as a tool, not as an end in itself. I just do not have the time to be fiddling all the time.

I know that this is petty, but I feel that we absolutely need ONE dominant Linux distro, so that we can achieve enough market penetration to make software writers take note. Ubuntu is STILL the best candidate for this as far as I can see, because of its ease of use, good support and extensive device support.

If the Fedora community wants to come up with a long-term release strategy, then I think they could move into this space. But most non-computerate users will generally keep the OS a system came delivered with for the lifetime of the machine; if they have to perform a major upgrade, most will discard the machine and buy a new one. This means that we need distributions with an effective lifetime of several years to get the needed penetration.

Tux, obviously.

PC World, Currys staff to be dumped in DSGi rescue plan?

Peter Gathercole Silver badge

@Ivan Headache

Length of cable supported depends on the SCSI variant being used.

I used Fast-Wide Differential SCSI-2 cables that long and yes, they were in spec, and yes, they worked. LVD SCSI-3 allows even longer cables, I believe.

I seem to remember that SCSI-1 allowed a maximum length of 3 metres from terminator-to-terminator, but on many early midrange systems, the external SCSI port was on the same bus as the internal devices, and once you measured the internal cables that ran to all the drive bays, you often had less than 1 metre available for external devices.

The biggest problem was that all the different SCSI variants used different connectors and terminators, and even when you used the same variant, manufacturers often had their own (often proprietary) connectors (IBM, hang your head in shame!)

I hope that some of the smaller Currys stores survive, although this is unlikely, as they are about the only electrical retailer left on the high street (I know there is the Euronics consortium of retailers, but somehow they are just not the same). Sometimes you just have to pop out and buy a toaster/hoover bag/unusual battery in an emergency, and if I have to trek 2x25+ miles to the nearest large town, the only one that will benefit is HM Treasury, due to the fuel duty. I know I won't.

I'm with most of you on PC World, however. I have often had to bite my tongue when in one of the stores listening to advice being given to customers. I once had an open row with the supposed network expert about the benefits of operating a proper firewall on a dedicated PC vs. the built-in, inflexible app that is in most routers. Eventually I had to play the "I'm an IT and network consultant, and I know what I'm talking about" card just to shut him up.

Tux, because he makes UNIX-like OSs available to all.

BOFH: The Boss gets Grandpa Simpson syndrome

Peter Gathercole Silver badge

PDP 11/34!

I'm hurt.

I loved all of the PDP 11/34s I used. Of course they were not as reliable as more modern machines (or even 11/03s and 11/44s), but then they were built out of 74LS-series TTL on a wire-wrap backplane with literally thousands of individually wrapped pins. If I remember correctly, the CPU was on five boards, with the (optional) FPU on another two. Add DL11 or DZ11 terminal controllers, RK11 or RP11 disk controllers, and TM11 tape controllers, and you had a lot to go wrong.

I suspect that all of the Prime, DG, IBM, Univac, Perkin-Elmer and HP systems of the same time frame had similar failure rates, especially as they were not rated as data-centre-only machines, and would quite often be found sitting in closed offices or large cupboards, often with no air-conditioning.

It was quite normal for the engineers to visit two or three times a month, and we had planned preventative maintenance visits every quarter.

But the PDP-11 instruction set was incredibly regular (I used to be able to disassemble it while reading it), and it was the system that most universities first got UNIX on. It had some quirks (16-bit processor addressing mapped to 18- or 22-bit memory addressing using segment registers [like, but much, much better than, what Intel later put into the 80286], the Unibus Map, separate I&D space on higher-end models). OK, the 11/34 had to fit the kernel into 56K (8K was reserved to address the UNIBUS), but with the Keele Overlay Mods to the UNIX V7 kernel, together with the Calgary device buffer modifications, we were able to support 16 concurrent terminal sessions on what was, on paper, little more powerful than an IBM PC/AT.

It was a ground-breaking architecture that should go down as one of the classics, along with IBM 360, Motorola 68000 and MIPS-1.

Happy days. I'll get my Snorkel Parka as I leave.

Is the earth getting warmer, or cooler?

Peter Gathercole Silver badge

Chasing the money

A lot of you commenting on a scientific gravy train obviously don't know how scientific grants are awarded.

If you are a research scientist in a UK educational or Government-sponsored science establishment, you must enter a funding circus to get money for your projects. This works by you outlining a proposal for the research you want to carry out, together with the resources required. This then enters the evaluation process run by the purse-string holders (UK Government, science councils, EU funding organisations etc.). Inevitably, the total of all the proposals would cost more money than is available (just look at the current UK physics crisis), so a choice must be made.

The evaluation panels are made up of other scientists with reputations (see later), but often also contain civil servants, or even Government ministers. They look at the proposals and decide which ones they are prepared to fund. As there is politics involved, there is an agenda to the approvals.

If there is a political desire to prove man-made climate change, the panel can choose to only approve the research that is likely to show that this is the case.

So as a scientist, if you want to keep working (because a research scientist without funding is unemployed - really, they are), you make your proposal sound like it will appeal to the panel. So if climate change is in vogue, you include an element of it in every proposal.

The result is funded research which starts with a bias. And without a research project, a scientist does not publish credible papers, does not get a reputation, and is not engaged in peer review, one of the underlying fundamentals of the science establishment. Once all of the scientists gaining reputations in climate study come from the same pro-climate-change background, the whole scientific process gets skewed, and doubters are just ignored as having no reputation.

If there were more funding available, it is more likely that balanced research would be carried out; but at the moment, the only people wanting to fund research against man-made climate change are the energy companies, particularly the oil companies. This research is discounted by the government-sponsored scientists and journalists as being biased by commercial pressures.

More money + less Government intervention = more balanced research. Until this happens, we must be prepared to be a little sceptical of the results. We ABSOLUTELY NEED correctly weighted counter-arguments to allow the findings to be believable.

Please do not get me wrong. I believe in climate change, but as a natural result of causes we do not yet understand properly (and may never, as suggested by the work of the recently deceased Edward Lorenz), one of which could well be human. Climate change has been going on for much longer than the human race has been around, and will continue until the Earth is cold and dead.

I am a product of the UK Science education system to degree level, and have taught in one such establishment too, so please pass me the tatty corduroy jacket, the one with the leather elbow patches.

Lenovo ThinkPad X300 sub-notebook

Peter Gathercole Silver badge

Defrag'ing NTFS

Like many modern filesystems, NTFS has the concept of complete blocks, and partial blocks.

Sequential writes will result in complete blocks being used for all but the last block. In order to maximise disk space, the remaining bit at the end of the file is written to a partial block, leaving the rest of the block containing it available for other partial blocks. Confusingly, these partial blocks are called fragments. I don't know about the NTFS code in XP and Vista, but with other OSs, the circumstances in which fragments are promoted to full blocks are fairly rare under normal operation, so over time the number of full blocks split into fragments will increase.

When reading a file that has been extended many times, you end up with blocks in the middle of the file that, instead of being stored as whole blocks, are in multiple fragments. Each fragment needs a complete block read, so a single block divided into four fragments (for example) needs four reads instead of one, probably with four seeks as well.

When you defrag a filesystem, these fragments will be promoted to whole blocks (by effectively performing a sequential re-write of the whole file), significantly increasing performance.

You also find that filesystems that are run over 90-95% full end up with a significant amount of the free space being in fragments, with few full blocks available. Certain types of filesystem operation just will not work with fragments (operations that try to write whole blocks, like those used by databases, for example). This also affects a number of UNIX variants.

So long as the OS treats the SSD as a disk, using the same code as for spinning disks, the same problems will happen, so you will need to defrag it just like an ordinary disk. Why should it be any different? What may happen is that the performance of a fragmented solid-state disk may not degrade as much as that of a spinning disk, as I would guess that a seek on an SSD is almost a no-op.

Standalone security industry dying, says guru

Peter Gathercole Silver badge


Come on, do you really think that a security 'expert' who goes into an organisation and just comes up with 'nothing to look at here' is going to be trusted?

They HAVE to find something to justify their own existence, even if it is that you have to video everybody everywhere. The better you (and the previous expert consultants) are at the job in hand, the more trivial the next vulnerabilities become. And because they are just trying to find one or two things, they will stop once they have those one or two. Of course, this assumes that all the basics are covered.

It's when they start complaining that the screens can be read over the video links that they asked for, and wondering whether the CCTV wires are Tempest-compliant or could be intercepted between the camera and the monitoring station, that you really have to worry.

My view is: let a couple of minor but visible, easily fixable holes be found. Take the resultant report, fix them in no time flat, and everyone will be happy. You will get a 'found something, had it fixed, everything OK now' report, and they will go away happy, knowing that they have done the job. You will then not have to fix the trivial new vulnerabilities that they would otherwise have had to find.

I think the BOFH would agree to this plan. Either that, or there will be some more mysterious accidents with lift doors opening at the wrong time!

Microsoft kicks out third Windows XP service pack

Peter Gathercole Silver badge


Maybe you ought to put pressure on your favourite game and device manufacturers to support Linux, rather than asking Linux to work like Windows. My IBM Thinkpad works flawlessly with a straight off-the-CD Ubuntu install, including the trackpad, the wireless and the display adapter. The only thing I have not tried is the modem, but who uses that nowadays anyway?

The problem with much of the Linux software that needs to match the kernel version is that the developers did not understand the correct methods for making their software kernel-version independent. As long as you remain in a major branch (like 2.6), it is possible to make your modules version independent.

Even if a module is compiled against a particular kernel minor version, it is often possible to copy the module into the correct location for any new kernel that you install. I admit that this is not something that is done automatically when you install a new kernel, but it's not that difficult either. If you have compiled the module and kept the build directory, try doing a "make install" to see whether that will install it in the correct location.
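If "make install" is not available, the manual copy amounts to something like this (a dry-run sketch that just prints the commands rather than running them; the module name and the "extra" directory are my illustrative choices, not a universal convention):

```shell
# Print (rather than run) the steps to register an already-built
# out-of-tree module with the currently running kernel.
MODULE=mymodule.ko                      # hypothetical module file
DEST="/lib/modules/$(uname -r)/extra"   # a common home for out-of-tree modules

echo "mkdir -p $DEST"
echo "cp $MODULE $DEST/"
echo "depmod -a"    # rebuild the module dependency map so modprobe can find it
```

Drop the echoes (and run as root) to do it for real; the depmod step is the one people usually forget.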

Unfortunately, Nvidia do not appear to be able to do this with their 3D module, something that almost everybody trying to get Compiz running on a system with an Nvidia card will fall over.

Hitachi to go it alone on discs after all

Peter Gathercole Silver badge

Bad Blocks?

All disks have bad blocks, you just don't see them because they are mapped out. If you have a drive that cannot do that automatically, you may like to try re-mapping all the bad blocks using a suitable utility (normally provided by the drive manufacturer, but you could try mhdd). Of course, back your data up before trying to rescue the disk.

Once the badblock map is written, new bad blocks are normally caused by head contact with the platters, so don't jog your computer.

All of the IBM Ultrastar disks (and other IBM server disks, back to Redwing) I have used have automatically re-mapped bad blocks. I'm not saying that I have never had to replace Ultrastar disks, but it has generally been because of electronics, motor or actuator failure. What I have found is that they mostly fail when they are stopped or started. Keeping them running 24x7, I have seen them run for literally years at a time (in the case of years without stopping, they were Spitfire 1GB SCSI disks - and the OS was AIX).

The Deskstar 'click of death' was a problem with the voice-coil motor failing to perform the head preload during power-up; the click was the head being moved to the end-stops. Deskstars were not the only disks with this type of problem. If you google click-of-death, you will find that other manufacturers have had similar problems in the past.

The underlying story is that you can have it cheap, big, or reliable. Currently cheap and big appear to be more important (to us!) than reliable. And the other thing is that the more expensive server members of a disk family are probably worth the money.

BBC vs ISPs: Bandwidth row escalates as Tiscali wades in

Peter Gathercole Silver badge

ISPs own fault

If the ISPs sell bandwidth they cannot deliver, who is to blame?

I know it may mean higher prices for us all, but I would much prefer to pay more to buy a service that delivers what I have been sold, than get a service that is unusable for much of the day.

Why should the BBC, or ITV, or Channel 4 or Channel 5, or Sky, or YouTube or its clones (who all have video-on-demand services) have to pay for anything except the bandwidth between them and their ISP?

The ISPs are asking for an unworkable charging model. The only thing that might make the BBC's situation slightly different is that the high-demand material may be slightly more predictable than some of the other content providers'.

Guitar maker Gibson thrashes out more robo-axes

Peter Gathercole Silver badge
Thumb Down

Is this what you want?

I can understand having a guitar that is always in tune whenever you pick it up, but to correct the tuning of what you are playing?

How will it cope with bending a note, or slides, or playing with a bottleneck?

You might as well put it through a post-instrument DSP dynamic tuning corrector between the axe and the amp.

UK.gov will force paedophiles to register email addresses

Peter Gathercole Silver badge

Government thinking (possibly an oxymoron?)

Of course, the Government could provide a state-sponsored email system, and force everybody to use that for all email....

....no wait. You would have to prevent use of out-of-nation email servers. OK, let's block SMTP and have a block list for foreign webmail servers. The ISPs can do this for us without cost if we mandate it by legislation...

...hang on, we then need to block tunneled and anonymised connections. OK also block anything that is encrypted....

...but that will block SSL. Never mind, Phorm will work so much better if SSL is not used. And once the Interwebnet tubes are unencrypted, we can filter content from abroad, and we won't have to worry about terrorists picking up bomb plans from foreign subversive sites.

Hell, let's just ban the Interwebnet. But wait, aren't we trying to push down costs by using it for tax and other government systems....

...and so on.

Anybody for a Police State?

New banking code cracks down on out-of-date software

Peter Gathercole Silver badge

Check the Ts & Cs

I think that if you look at the terms and conditions of most online banking services, you will find that they have a list of known and supported OS/Browser combinations, and I would be surprised if any Linux platform is listed. This gives them an immediate get-out from most Linux users.

My primary bank would like me to install agent software on my machine (at least, last time I looked) to access their online banking system. Of course, this is Windows-based.

And the AC who was talking about Linux viruses has obviously not taken into account how short the Wikipedia page about Linux viruses actually is, nor has he looked at the viruses listed. Many of them are old definitions, some are for products not involved with browsing, and virtually none of them will cross the user/system boundary unless you are stupid enough to be running the vector as a privileged user (root).

I'm not saying that Linux is invulnerable, and the increased evidence of Flash/Java/JavaScript cross-platform attacks is worrying, but a well-maintained Linux system is probably safe from the most prevalent attack vectors. About the only place where Firefox is likely to be vulnerable, assuming it is installed into a system-defined location (rather than in home directories), is via a plugin. It is just NOT POSSIBLE as a non-system user to install such things as keyloggers, DNS redirectors and default-route redirectors on a Linux system, if system privilege is guarded well.

Of course, Linux is just as vulnerable to social engineering (i.e. phishing) attacks, but that is because the user is being targeted, not the OS or browser. In theory it is possible to install anti-phishing plugins in Firefox, but such defences are only as good as the block database being referenced.

I'm just waiting for the banks to insist on content filters being mandatory for their services. When that happens, the simple port-filter firewalls implemented by most routers (and Linux iptables/ipchains firewalls) will not satisfy their requirements, and we will be further beholden to Microsoft.

By the Power of Power, IBM goes Power System

Peter Gathercole Silver badge

Oops, not Boca Raton

Not Boca Raton, that was PC, PS/2 and OS/2. I meant Rochester, of course.

And I missed out OS/2 on PPC, which was also seen as possibly running on the same merged PPC platform, with a common microkernel and OS personality layers put on top for OS/400, AIX and OS/2.

Peter Gathercole Silver badge

A long time coming

I remember having a presentation from an IBM bod from Montpellier describing a merged architecture about 20 years ago (I wonder if the non-disclosure agreement is still enforceable?). This was a unified backplane with common components, into which you plugged the relevant processor card, scheduled by a hardcoded VM implementation that on reflection looked like the current hypervisor. It used common memory between all processors, with IO performed through the VM. The project was at that time called Prism, a term that has been used more recently just in the mainframe world for a hardcoded VM implementation (maybe that is a spin-off from the same research project).

I also remember about 15 years ago when it was announced that the Boca Raton people had taken the PowerPC roadmap, and inserted the ppc 615 (I think) processor to run OS/400, extended with additional instructions to assist the running of that OS (and in the process, I understand, rescued the floundering PowerPC family, because Austin were having difficulty getting the ppc 620 (the first 64 bit member on the roadmap) running). Everyone in IBM was talking about merged product lines again then.

The smaller mainframes have long used microcoded 801 (the IBM RISC chip before RIOS and PowerPC) and PowerPC cores in such systems as the 9371 and the air-cooled small zSeries systems. I wonder if the full unification of the product lines is still on someone's roadmap? I'm sure that I was in a machine room some time in the last couple of years and had difficulty differentiating a p670 from a small mainframe that was close to it on the floor.

In reality, a lot of the memory, disks, tape drives and I/O cards have been almost identical between the AS/400-RS/6000 and [pi]Series for many years (back to when the RS/6000 was launched), with the only real differences being the controller microcode, the feature numbers and the price!

Intel enrols second-gen Classmate PC

Peter Gathercole Silver badge
Thumb Up

Seems familiar

I know the common roots, but do the specs not look almost identical to an EeePC now?

Yahoo! cuddles Google's bastard grid-child

Peter Gathercole Silver badge


Sorry, I thought that "granulising data" was reasonably obvious and didn't think it needed explaining, so assumed that it was the other terms that needed explaining. My bad.

Peter Gathercole Silver badge

@Sid re. SMP vs.MPP

SMP=Symmetric Multi-Processing

MPP=Massively Parallel Processing

With an SMP box, there is a single OS image that schedules applications across the processors. If you write threaded code, then most SMP implementations will schedule threads on separate processors without you having to write code that explicitly takes into account the fact that there are multiple processors.

With MPP, there are multiple OS images in the cluster, and you have to write to an API that will allow different units of work to be placed on different systems. This means you have to make the application much more aware of the shape of the cluster. This also means that if not written carefully, you may not get better performance by adding additional nodes into the cluster.
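To make the SMP point concrete, here is a minimal sketch of my own (not from the original post): the code just creates threads, and the single OS image decides which processors they run on; nothing in the program names a CPU.

```python
import threading

def partial_sum(chunk, results, idx):
    """Sum one slice of the data; the OS places this thread on any free processor."""
    results[idx] = sum(chunk)

data = list(range(1_000_000))
n_threads = 4
chunk = len(data) // n_threads
results = [0] * n_threads

# SMP style: spawn threads and let the scheduler spread them across cores.
threads = [
    threading.Thread(target=partial_sum,
                     args=(data[i * chunk:(i + 1) * chunk], results, i))
    for i in range(n_threads)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sum(results))  # same answer as a single-threaded sum(data)
```

On an MPP cluster you could not do this: each node has its own OS image and memory, so the equivalent program would have to partition the data explicitly and ship each chunk to a node through a message-passing API.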

Unfortunately, too many IBM SP/2 implementations were not really parallel-processing clusters, more like lan-in-a-can systems (goodness, where did I dredge that term up from).

But what Google does is a quantum leap up from what SP/2s were capable of, and are much more like Mare Nostrum and Blue Gene/L.

Asus Eee PC 900 flips one at MacBook Air with multi-touch input

Peter Gathercole Silver badge

Screen res?

Is it really 1024x768? This is not a wide screen resolution, but one often used for 4x3 aspect laptop screens. If you were to keep the same aspect ratio, it would be 1024x614. 1280x660 would also be about right.
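The arithmetic is just width multiplied by the aspect ratio; a throwaway check of my own (the function name is mine, only the quoted resolutions come from the post):

```python
def height_for(width, aspect_w, aspect_h):
    """Vertical pixel count that preserves the given aspect ratio."""
    return round(width * aspect_h / aspect_w)

# 4:3 at 1024 wide gives the quoted 768
print(height_for(1024, 4, 3))  # 768
# the original Eee 701's 800x480 panel is 5:3; at 1024 wide that is 614
print(height_for(1024, 5, 3))  # 614
```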

BBC Micro creators meet to TRACE machine's legacy

Peter Gathercole Silver badge

No better machine

The BEEB was clearly the most useful teaching computer, possibly of all time.

It was accessible to people who were only prepared to learn Basic, and also to those who were prepared to use assembler. You could teach structured programming on it without any modification, and it also had languages like Forth, Pascal, Logo, and LISP available. Although the networking was rudimentary (and fantastically insecure), it allowed network file and print servers to be set up very easily and cheaply (proto-Ethernet CSMA/CD for PCs at the time came in at hundreds of pounds per PC, plus the fileserver). Although it did not run VisiCalc or WordStar (the business apps of the time), it was still possible to use View or Wordwise, and ViewSheet or ViewStore, to teach office concepts. And it was possible to have the apps in ROM for instant startup.

I ran a lab of 16 BBCs to teach computer appreciation, and we had a network with a fileserver (and 10MB hard disk!), robot arms, cameras, graphics tablets, speech synths, speech recognition units, touch screens, pen plotters, mice and more. This was around 1983. Show me another machine of that time that could do all of this. And all for a cost of less than £25K (which included building custom furniture).

I wish that schools still used systems that empowered their staff to develop custom-written software to teach their students. Nope. Only PCs.

I know many people (me included) who were prepared to pay for one of these machines at home. A classic.

So what's the easiest box to hack - Vista, Ubuntu or OS X?

Peter Gathercole Silver badge

@Don Mitchell

I think if you read the CERTs, you will find that a large number of the Linux vulnerabilities are theoretical, unexploited problems that have been identified by examination of the code. Do you really think that the buffer-overrun security problems were all discovered by experimentation? Many of these problems have not even got example exploit code published.

So, which do you trust more? The code that has been examined and found to have possible theoretical problems (which are fixed reeeal quick), or the code that has definite exploits published, and may not get patched for months? Just imagine how many problems would be found in Windows if the code were open, given how many are discovered by experimentation.

Please don't just count the exploits, examine them in detail, and you then won't compare apples and oranges.

Arthur C. Clarke dead at 90

Peter Gathercole Silver badge

Minehead mourns the loss of one of its famous sons

What I always liked about his writing was that it was science fiction grounded in science fact. Unlike many other authors, all of his innovations seemed possible given one or two advances.

Who will provide the realistic grand visions now?

Bag tax recycled into eco-PR slush

Peter Gathercole Silver badge
IT Angle

Bag use

OK, so I will still have to use plastic bags to put my rubbish in (I line all of the bins in my house, one per room, with supermarket carrier bags), and also to sort my recycling into (cans, glass etc.) but instead of re-cycling the bags from the supermarket, I will have to buy them instead.

I will probably still use a similar number of bags, and these will still probably end up in landfill. But guess who will benefit: the supermarkets. Instead of giving me bags, they will now be able to SELL me them. A cost item becomes one generating profit.

And there is another environmental downside. Currently, supermarket bags degrade over time in landfill, but the polythene bin bags used to replace them probably won't. I would also like to know what happens to the bag-for-life bags once the supermarkets have swapped them for new ones. Are they recycled? What are the energy costs of the recycling process vs. the costs of making disposable bags?

All I am saying is that nothing is simple or obvious.

The 'green' car tax grabs that don't add up

Peter Gathercole Silver badge

Flame fodder

I'm not a climatologist, but I have to say that I believe that the current published science of climate change is skewed towards proving that we are to blame.

The way this works is that research money has to be justified in advance by the researcher before the research starts. So researcher A asks for money to find why the Polar Bear population is reducing. Researcher B asks for money to research how man-made global warming is affecting Polar Bear populations.

Faced with the titles of these two research projects, the politicians (who ultimately hold the purse strings in most western states) decide that the latter is in keeping with their green agenda, so it has political value as well as scientific. So the research starts off with a biased premise, skewing the perception when the results are presented out of context. It's like saying that all scientists who are looking for man-made climate change agree that their research concludes it is happening. What a surprise that they have found what they were looking for. Hey, they've justified their funding. This is the IPCC to a tee, and dissenting voices are shouted down.

Now, please don't get me wrong. Climate change is happening, and we are contributing to it in many different ways. But from what I have gleaned, we are at the end of an ice age, the amount of geological change is reducing, affecting the long-term carbon cycle (look it up), and the Earth may be returning to a more 'normal' (in geological time frames) temperature after about two million years of cold. This would probably happen even if we were to stop producing carbon dioxide tomorrow. I reference the BBC series Earth Story (ep. 6) to help support this claim.

I agree that we should reduce fossil fuel use, not to reduce carbon in the atmosphere, but because they are precious resources which will never be replaced naturally in any useful (to us) timeframe.

I'll just don my asbestos coat.

DAB: A very British failure

Peter Gathercole Silver badge

DAB styling

The main styling problem is actually a power thing. DAB radios are power-hungry, which equates to large batteries, which leads to large sets. It does not really matter how you disguise it; it will look retro. I have a Pure Elan RV40, which does not look like a '50s radio, but is large (and has two speakers!)

I also use headphone-only DAB radios (one branded KISS, picked up in a catalogue clearance shop, and a Roberts Robi iPod attachment), and only get a few hours' listening on either one. This is just a fact of life. I live with it. I regard it as an acceptable price for the diversity I cannot get on FM.

I would want to ask how people would like to package radios in a way that was not retro? Can anybody point to a stylish modern FM radio? I will then be able to point to a Roberts or Pure device that looks similar.

Peter Gathercole Silver badge

Gordon Bennett

If only half of the effort went in to defending DAB as has gone into these comments, it would be alive and well!! On some of the comments from others, here is what I think.

If you cannot advance the clock, set the alarm an hour earlier (Duh!)

DAB has DECODING delays in the receiver (listen to two different DAB radios at the same time, and hear the time signals at different times). This makes it impossible to correct by broadcasting it early. Same is true for Digital TV vs. Analog TV.

DAB is as good as the aerial. Good aerial == strong signal == no errors or dropouts.

DAB radios have quite a lot of processing power inside, which is power-hungry. I'm sure the person who complained about power consumption would really like to go back to listening to AM on crystal sets, which can be made to work WITHOUT A BATTERY! If battery life worries you, get rechargeable batteries.

Planet Rock plays music A LOT of people like (including me). But it won't suit everybody. And listen to something other than the rock blocks. I hear new-to-me stuff all the time.

Much of the BBC 7 content was recorded in mono (and some on acetate disks, not tape!), so stereo is not required for all of the material.

I now notice hiss on FM much more than before I listened to DAB.

FM and AM will never die, as they are the official emergency information route for national emergencies, simply because they need less infrastructure to broadcast and receive (can you imagine what the EMP from a nuke would do to every satellite receiver?)

GCAP have lost the plot, and are just chasing as much money as they can get.

There. Take that. My coat has the Roberts Robi hanging from the pocket.

Mac security site littered with malware

Peter Gathercole Silver badge

For goodness sake...

How many times do we have to have this same argument: Windows vs. Mac vs. Linux?

There is no perfect solution to the problem as long as you have mechanisms to make the use of a system easier. Easier on the surface == complex under the covers. It does not matter if it is the sudo model in OS X or Linux, the role-based security model of Vista, or the "let's just do it" model of XP running as administrator. The basic problem still exists: you need to do something out of the ordinary, and you either trust it, or ask some form of question.

In every case, unless the user is really on the ball, there is always the chance that something nasty could get through. The Unix model (different from popular Linux distros) of putting the code in your own non-privileged space is about the only robust model there is, as you are very unlikely, in a properly run system, to import anything that will affect anyone other than yourself. That's not to say that a 'bot or a trojan will not get through, but other users of the system are unlikely to be compromised. I am deliberately ignoring the lack of binary compatibility, which is not what I am arguing.

Of course, this means that everyone who wants to use a particular browser extension or version of Java will have to install it themselves, and it is possible for things to be run when you are not logged in (just put it in cron), but this is quite easy to spot.

So, lets just agree that it is a knotty problem, accept that different OSs do it differently, and leave it at that.

DARPA releases 'Blackswift' hyperplane details

Peter Gathercole Silver badge

@Charles Manning

I really don't think that ARPA (as it was then) was spec'ing a worldwide commercial network.

Its research was in self-healing communication networks useful for military communications, where many parts of the 'net might be taken out. This would be (and is) used on closed, encrypted networks with no public access.

Also, don't tar the original research into packet switching with the poor implementation that plagues many applications now on the Internet.

Of course, there were weaknesses in the original design, such as the DoS SYN attacks or man-in-the-middle data-capture attacks that are possible, but the security layers that leak so badly are definitely above the one provided by the basic ARPA design.

If you look at the original suite of applications that were demonstrations of the work (telnet, tenex, ftp, mail), they were useful, and people used them, even if they were basically insecure. The world was a more simple place, and generally the networks they were used on were internal to single organisations. Even then, firewalls were mooted (the first firewall I was aware of pre-dated the Web by several years).

The concept of the World Wide Web (which is just a service running on the Internet) was NOT part of the ARPA research.

The fact that we are still using it, warts and all, justifies the strength of its original design, and it is only likely to be replaced by a derivative work (IPv6).

Cassini to surf Enceladus's icy plumes

Peter Gathercole Silver badge
Paris Hilton

Just a cosmic car (or spaceprobe) wash.

They obviously decided that Cassini was dirty.

With the wax and polish, please.

Siemens kicked off UK government contract

Peter Gathercole Silver badge


Was your mainframe running DOS?

What - DOS/360 maybe? But that would imply an IBM 360/370 mainframe (although I guess that it would run under VM on newer kit), much older than 1989.

Where is the icon for an old IT fart? A crumpled suit would probably do.

Canonical fires up box Landscaping business

Peter Gathercole Silver badge
Thumb Up

Supporting Steve George

I really appreciate Steve George making it clear what Ubuntu and Canonical stance is all about. And before you read on, I am just a Ubuntu user, and have no links with Canonical at all.

All of you who think that a company like Canonical can put the resources into making Ubuntu the first real open alternative to Microsoft, without being able to leverage a return, can only think that money grows on trees.

They have a service based business model, and these services will include bespoke tools. The GNU Public License does not prevent such software from being written to run on top of Linux, nor does it prevent these tools from using, say, the libraries that ship in a Linux distro. This means that these tools can remain closed, and provide a commercial advantage. Canonical does not HAVE to put EVERYTHING they work on back into Ubuntu (provided that it is their own work that they are selling and not modified GPL'd code). That's what the GPL allows.

I applaud Canonical for putting back into the open as much as they do, and for sponsoring Ubuntu development, but they do have to become an economically viable company at some point. As long as they keep to their principles, what is wrong with that?

Where Canonical can benefit is by making these tools and services good enough for people to want to use them. By making sure that Ubuntu is adopted as widely as possible, they gain a larger potential client base. But what makes them different is that they are not shoving their software or services down people's throats. Ubuntu users have chosen it because it is good, it is free, and it does not come with strings attached (Red Hat and Novell/SUSE take note). People can pay for support if they want or need it, but there is no stigma to using the software, downloading the patches, and not paying anything if that is what they choose to do.

All I can hope is that enough people want services to enable Canonical to achieve their goal.

Palm OS-based Centro arrives in UK

Peter Gathercole Silver badge

Shame on you, Reg

... for using stock photos. I'm sure that SprintTV is not available in the UK!

Tool makes mincemeat of Windows passwords

Peter Gathercole Silver badge

@Dave re pointers

If you can take a dump of the entire memory, then time is not a problem for mining data. Of course you would not be able to break into the machine in a hurry, but that is only one possibility.

And I believe that my point still stands. If the kernel can find the information, then so can another tool specifically written to follow the same evidence trail. Once you know the rules, you can code an analytical tool to apply them. All you need is a device like an EeePC (but with a firewire port) with tools intelligent enough to recognise the OS in question and apply the relevant rules. A serious hacker will have a toolkit with the rules built in, ready and waiting. All in seconds.

My guess is that those people who think it is too hard have never delved under the covers of a real OS to understand how they work. And I know I am being a pedant, but I do not see the difference in this context between 'abstraction' and 'obfuscation'.

Peter Gathercole Silver badge

@Kenny Millar

Useful comment, but not completely valid. The OS always has to be able to find this information, so it has pointers that can themselves be found (paging tables with known base addresses, etc.). All you have done is add an extra level of abstraction, which may deter some people, but not those with serious knowledge, or access to clever tools. Of course, this may make OSes with their source code visible more vulnerable.

Peter Gathercole Silver badge

Busmastering DMA controllers the problem?

I must admit that I am years behind the times, but when I studied DMA controllers in detail, the OS programmed the memory mapping registers on most architectures to limit the DMA controller's access to just the memory that it needed. This was before the advent of busmastering controllers, but I cannot see how failing to limit the memory region, or allowing the controller to access the memory management registers, could ever have been a good idea.

In the normal scheme of things, a DMA write operation needs the controller to know where it is safe to write the information, even if it is taking control of the bus in an unsolicited manner. Of course, read operations are not as critical, but again, for a DMA controller to do anything useful, it is necessary for it to be told where to look.

As a result, allowing the controller carte-blanche to the memory map of the system should never really be necessary. Surely this means that the DMA access for firewire must be a mis-feature at the very least, even if it is not a flaw in the design. Or is it really a problem with the northbridge memory controller in a PC?

Maybe someone can enlighten me about why you would want to be able to allow a DMA controller full access to the memory, except to allow a box to be owned in this manner.

BTW, this is also an old story. Apparently the technique was presented at Ruxcon in 2006.

Asus Eee PC gives Sony the willies

Peter Gathercole Silver badge
Paris Hilton

@observant AC

It is faked!! And obviously a pre-production mock-up, as the shape is all wrong.

Nooo, not Ms Eee, whose shape looks just right!

IBM gives mainframe another push

Peter Gathercole Silver badge


I think you will find it is 64 larger birds (Emus, maybe?) under the covers. It says so in the article.

Judge greenlights lawsuit against Microsoft

Peter Gathercole Silver badge
Thumb Down

A certain large supermarket...

... was seen selling a 'complete' system (screen, keyboard etc.) with Vista Basic installed that, on inspection, did not actually have sound capabilities listed on the external packaging, and no speakers included.

I'll bet that this had a motherboard with sound built-in, that did not have Vista drivers available for the chipset used.

I pity the people who bought these systems. I'll bet they did not expect to not get sound!

Sky Broadband puts the fault into default Wi-Fi security

Peter Gathercole Silver badge

Cracking WEP

I wondered what I was doing wrong. I was just working from the original aircrack manual pages.

Of course (having looked around a bit), it would help if my laptop did not have an Intel 2200BG chipset. I know I'm wimping out here, but for a quick investigation, I am not going to go down the route of re-compiling the ipw2200 modules. Oh well, looks like I need the BackTrack 2 live CD.

Peter Gathercole Silver badge

Have you tried to crack WEP

It is important that a WEP password cannot be guessed algorithmically, because even 64-bit WEP is enough to deter casual bandwidth-stealers.

I made an attempt to crack a 64 bit WEP key on one of my wireless routers recently, just to see how long it would take.

I used airmon, airodump and aircrack, and read that I would need something over 200,000 packets before aircrack would be guaranteed to recover the key. I found that it was not the power of the machine running the crack, but the amount of traffic on the network, that determined the time taken to crack the key.

After running the whole weekend, I had nowhere near enough packets with just surfing running on the network (I admit it was a quiet, but not idle network), so I suspect that most war-drivers will not bother to hang around to attempt to crack your 64 bit WEP unless you are a big-time P2P user, or throw large media files around your wireless network.
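For a feel of the numbers, here is my own back-of-envelope sketch (assuming plain passive capture with no packet injection; the traffic rates are illustrative guesses, not measurements from my test):

```python
def hours_to_collect(packets_needed, packets_per_second):
    """Wall-clock hours needed to sniff enough packets at a given traffic rate."""
    return packets_needed / (packets_per_second * 3600)

# Light web surfing might average only a handful of packets per second...
print(round(hours_to_collect(200_000, 5), 1))    # 11.1 hours
# ...while a busy network shifting hundreds of packets per second falls fast.
print(round(hours_to_collect(200_000, 500), 2))  # 0.11 hours
```

Which is why a quiet home network can survive a weekend of capture, while a heavy P2P or media-streaming household would give the key away in minutes.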

Of course, the 15 year old h4x0r or script-kiddie in the next road, trying to get porn without their parents knowing might be a different matter.

Warner Bros revs up live action Akira

Peter Gathercole Silver badge

Not sure whether it will work in US

There are several very Japanese concepts in Akira that will not easily translate to the US as-is. There are two ways round this. Either New Manhattan society will be made to look like post-apocalypse Neo-Tokyo, or they will change the story.

Which is it likely to be? Hmmmmm.

Die for Gaia, save the planet?

Peter Gathercole Silver badge
Paris Hilton

Population control

I've always wondered why the human race (or at least the British population) has not degraded.

If you look at the demographics of which part of society is having the most kids, in Western society, you will find that the best educated, highest earning portion of the society is the one having the least number of children.

If you go down to the Chav end, they are having the most (this is by observation, not statistics, but my gut feeling is that it is true).

So in theory, assuming that ability and education follow down the generations (educated people are more likely to make sure that their children are educated than non-educated people), why has the population of these societies not ended up at the chav end of the spectrum?

Oh. Maybe it has. Hence dumbing down everywhere. And here is another example. Paris! (OK, not so good as the Hiltons are slightly rich, but as good a reason for the icon as any!)

Heathrow 777 crash: 'No anomalies in the major aircraft systems'

Peter Gathercole Silver badge
IT Angle

...fly again

It is almost certain that the plane won't fly again.

Aircraft bodies are designed to cope with the stress of taking off, flying, and normal landing. A mushy landing resulting in undercarriage collapse and wing damage is likely to stress elements of the plane beyond their tolerances, leading to structural defects in many of the strength elements of the airframe. Damage will have been done to the wing fabric and possibly the roots, undercarriage, engine pods, and body (where it touched the ground).

If it were to fly again, there would need to be tests performed on all of the major structural elements to prove that they were not compromised. This would probably cost more than replacing the plane. In addition, it would need new wings and engines (which is possible, but expensive). For an example, see how much it has cost to return Vulcan XH558 to flying condition, and this was a much more simple aircraft.

In addition, the investigation will be probing all of the wiring and control systems, so these would need to be re-worked. If you have ever seen how much wire there is in a modern plane, and at what point in the construction it is put in, you would realise that it cannot be replaced. Hell, car companies don't like replacing the wiring loom in a car!

It is likely that most of the relevant parts of the plane will be kept until some time after the investigation is closed, and if they are ever released, re-usable parts will enter the spares pool of BA or Boeing (after being bought back from the insurers). The remaining airframe will probably become an engineering, fire, or evacuation training mule.

BA will not suffer, as the planes are insured.

BitTorrent busts Comcast BitTorrent busting

Peter Gathercole Silver badge

£1387 per month...

... but how many people want/need to download 2.5TB in a month, which is approximately how much you can shift at 100% of an 8Mb/s line?

I pay a premium for my line at this speed, and I would like to be able to get the speed I pay for, but only in bursts, and not necessarily during the peak hours. I clearly don't get it.

Looking at my firewall traffic graphs, it looks like during busy times I average about 40KB/s, which equates to about 320Kb/s, something like 25 times slower than my theoretical maximum. And the peak (averaged over 15-minute periods) is only 125KB/s, which equates to about 1Mb/s. Still 8 times less than I pay for.
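The 2.5TB figure is easy to verify; a quick sketch of my own (assuming a 30-day month and decimal units):

```python
def tb_per_month(line_mbps, days=30):
    """Maximum transfer for a line running flat out, in decimal terabytes."""
    bytes_per_second = line_mbps * 1_000_000 / 8   # 8 bits per byte
    return bytes_per_second * days * 86_400 / 1e12

# An 8Mb/s line saturated for a whole month:
print(round(tb_per_month(8), 2))  # 2.59
```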

And now, it is very suspicious that my incoming SSH sessions hang within seconds of me starting them, which looks like VM is doing something antisocial with my traffic.

Biting the hand that feeds IT © 1998–2019