* Posts by Peter Gathercole

2924 posts • joined 15 Jun 2007

Dell thinks young and colorful with business notebook refresh

Peter Gathercole Silver badge
Thumb Down

Thanks Mike and Chris

I think I can understand, having learned to program on 80x25 ASCII terminals. After that, anything seems like a luxury.

On the keyboard front, as I look at my Thinkpad T30 (probably one of the best laptop keyboards around), which admittedly does not have a numeric pad, the 15.1" diagonal 4:3 screen allows sufficient width for most of the keys to be full-sized, and I'm used to the way that IBM places the cursor and other extra keys (which is actually reasonably close to a full-size keyboard layout). The result is a laptop only marginally larger than an A4 pad.

Most of my time is spent writing documentation, and I like to see a whole A4 page at a time. This is why vertical size is important to me. I also use multiple terminal sessions for sysadmin work, and can choose various font sizes to get 2 or 4 windows on a 4:3 screen at a resolution of 1024x768 without having to resort to a magnifying glass. I'm sure I could cope with 1440x900 (more vertical space than my 1024x768), but I would prefer 1400x1050 (a real Thinkpad resolution) with the screen filling the lid. Even more pixels!

Still not convinced.

Of course, maybe the extra horizontal space is actually required for the extra bumph Microsoft have put in Aero!

Peter Gathercole Silver badge
Thumb Down

Screen aspect ratio

Can someone please explain why the computer industry is so keen on widescreen laptops? The only real reason I can see is watching DVDs, but as the horizontal resolution of a DVD is at most 720 pixels, I can cope with using only the middle two thirds of my existing 4:3 screen to watch them.

I cannot for the life of me see why you would want either a bulkier laptop, fewer vertical pixels, or smaller pixels for a business laptop that you carry with you all the time.

If anything, I would like *more* vertical pixels. Please, someone, enlighten me, because I'm mightily pissed off every time I wander anywhere that is selling laptops now.

16-card GPU bangs-per-buck mega shoot out

Peter Gathercole Silver badge
Boffin

45 degrees?

No, not necessarily 45 degrees. Do a least-squares regression (or similar), and then all of the cards below the line offer better value than the average, and those above are worse. The further a card is from the line, the more extreme its performance vs. price is relative to the 'average'. But the line could be at any angle, depending on the scales you choose.
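To be concrete about 'the line' (a sketch, assuming price is plotted as $y$ against performance as $x$): the least-squares fit is $\hat{y} = a + bx$ with

$b = \sum_i (x_i - \bar{x})(y_i - \bar{y}) \,/\, \sum_i (x_i - \bar{x})^2, \qquad a = \bar{y} - b\bar{x}$

and each card's residual is $r_i = y_i - (a + b x_i)$; a negative residual means the card is cheaper than the trend predicts for its performance.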

Secret of invisibility unravelled by US researchers

Peter Gathercole Silver badge
Boffin

See and be seen.

Come on. I don't think they have said that it is invisible in all directions. If you could make it so that light from *behind* gets bent around to the front, but light from in front gets absorbed totally (i.e. no reflections), then you should be able to see out the front without being seen from the front. Of course this would not be full invisibility (a la thermoptic camouflage, to all of us GITS fans).

Anyway, if you bent 75% of the light, and prevented the reflection of the other 25% (by absorbing it, some of it in your own eyes), you probably would be able to see well enough, and would be fairly well hidden (there would still be a 'dull' spot). The problem would be linearity: making sure that the diverted light rays followed the same path they would have taken if they were not diverted. And even then, there would probably be a detectable phase shift of the light due to the longer path!
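To put a rough number on that last point: a path lengthened by $\Delta L$ gives a phase shift $\Delta\phi = 2\pi\,\Delta L / \lambda$, so for visible light ($\lambda \approx 500$ nm) even one micron of extra path is about two full cycles.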

Full invisibility is still sci-fi, and will be for some time.

BT slams bandwidth brakes on all subscribers

Peter Gathercole Silver badge
Thumb Down

BT retail or wholesale

It's funny. I am a Virgin Media ADSL customer. The service comes via BT Wholesale. What is being described is exactly the situation I see, and have been blaming on Virgin. If BT are applying this policy to their wholesale customers (i.e. you buy your ADSL service from another ISP, but the ADSL link to your local exchange is run by BT), then I have been unjustly accusing Virgin. At almost precisely 23:00, the overall download speed jumps from around 50-60KB/s (as measured by my firewall traffic monitors) to 200-600KB/s (and sometimes faster). I would put it down to people going to bed, were it not for the abrupt change at such an obvious time. And I see a progressive slowdown in the morning as well.

My house's traffic (I don't call it mine, because we are a switched-on house of six internet users, with more than one computer/network device per household member) consists of Steam, WoW, Fantasy Star Unlimited (and other) games, Wii and DS internet-connected games, Skype, mail, torrents (Linux distros - honest), tunnelled services through SSH (inbound and outbound, including SMB file access and printing), BBC iPlayer, Sky Anytime, YouTube and other Flash video sites, internet radio, system updates (Ubuntu, Mac, and Windows), system updates for the purposes of my business (AIX and related fixes from IBM Fix Central), VPN access to my clients' systems, and, oh, I nearly forgot, some web browsing.

So the vast majority of my traffic will be legal and justified use (I can't always keep track of exactly what everyone is doing), and much of it will NOT be done over port 80. I suspect that many people actually have far more non-port-80 traffic than they think. So if they are shaping all my traffic because of my torrent downloads, I think I have a right to be upset.

Official: Eee PC range to expand

Peter Gathercole Silver badge
Thumb Up

eeePC 701

I bought my 701 on the day it became available, and I have used it nearly every day since to supplement my IBM Thinkpad, for a whole host of different things.

For me, the primary thing is the size. It is still so small compared to almost everything on the market. I don't think I would have bought it if it were larger, or if it were more expensive.

I have dumped Xandros, however, as I keep getting the system into a state where it won't boot, because there is a strange problem with the UnionFS committing transient files to the read-only copy so that you cannot recover the disk space. I'm sure that there must be a config problem there somewhere.

I cannot believe that the larger/more expensive models will actually have the same WOW factor as the original, and that is what sold it.

Microsoft to kill Windows with 'web-centric' Midori?

Peter Gathercole Silver badge
Coat

@Mitch Russell

Not sure that it was network bandwidth that killed diskless workstations from Sun, IBM, Apollo, Whitechapel et al. After all, you normally only boot a system once a day (paging excepted).

The reason why diskless was appealing was that disk prices then were high. I remember being quoted over £500 to add a 60MB disk to a Sun 3/50. When it became cheap to put a disk into a diskless system, all the cost advantages went away, and you were left with all the bandwidth costs and no advantages. Add to that the increasing concern about the security of NFS, and suddenly things began to look scary.

As a sysadmin, however, the fact that you had total control of a diskless system (like implementing patches once for all your diskless clients) was very attractive. And it gave you a really good reason to say No! to users wanting root access to the system on their desk. Also, remember that all the systems looked EXACTLY THE SAME, so if a desktop system blew up, you told the user to switch desks, or dropped another one on the desk, and the user would not know the difference. Streets ahead of Roaming Profiles.

I'm really a little sad that no-one has resurrected the idea using a virtualisation technology, although I believe that it fits the UNIX model better than MS's current offerings.

I'm sure that you could have a kernel stored in flash that would check on boot to see whether it needed to update itself, and then attach all its resources from LOCAL servers. Sounds like an interesting Linux-based research project, although I'm sure that some of the old X terminals used to do something similar.
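Something like this minimal sketch of the boot sequence is what I have in mind (the server name, URL and reflash helper are all made up for illustration):

  latest=$(wget -qO- http://bootserver/kernel.version)   # hypothetical version file on a LOCAL server
  if [ "$latest" != "$(uname -r)" ]; then
      reflash_kernel "$latest"                           # hypothetical helper: write the new kernel to flash and reboot
  fi
  mount -t nfs -o ro bootserver:/export/root /mnt/root   # then attach resources from the local server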

Mine is looking very tattered after having been worn for so long!

Lateral thought saves sizzling server

Peter Gathercole Silver badge
Thumb Up

Random number generator

Come on, all of you. All you need is a Bambleweeny 57 Sub-Meson Brain attached to an atomic vector plotter suspended in a strong Brownian motion producer, say, a really hot cup of tea! (RIP Douglas)

I had an experience with an educational computer-controlled robot arm that used IR sensors as optical shaft encoders for the motors (it was a really good design of arm that did not use stepper motors, as was the rage at the time, but proper electric motors, so it was much faster and more impressive, and had six separate independent movements). It worked really well, but unfortunately the IR emitter/detectors were covered in translucent plastic, which when used in direct sunlight caused ALL of the active motors to run to the end-stops of their respective movements. The whole arm contorted, dumped itself off the bench, and led to red faces and a difficult-to-justify repair bill!

Microsoft slams 'sensationalist' Vista analysis

Peter Gathercole Silver badge

What I want to know...

... is how many of the machines that were surveyed are older than 18 months.

Most businesses will not upgrade the OS on an existing PC. It makes no sense, for two reasons: one is that the system is already partway through its lifetime (and asset depreciation), so why spend new money to replace an OS that still works; the other is that the machine will probably be less productive with Vista than it currently is with XP. Add to this the fact that a new system comes automatically with Vista, and may well be cheaper in real terms than the system it replaces, and you will realise that very few businesses will do anything other than move to Vista when they replace the PC.

So the real question should be how many of the business systems deployed since Vista hit the market are still running XP. Anybody any idea?

How government will save you from P2P deviance

Peter Gathercole Silver badge
Black Helicopters

@ all the ACs

There really ARE legal P2P uses. Linux distros being the one I use, but I'm sure that one of the major IT manufacturers proposed a P2P method to distribute fixes, and the BBC dabbled with it for the original iPlayer.

But for all you anonymous 'experts' out there, how do you 'ban' P2P? You can stop Grokster, Kazaa, eDonkey, Overnet, BitTorrent, LimeWire (does it still exist?) along with all the *CURRENT* P2P applications, but hey, TCP/IP, which the net runs on, allows point-to-point datastream connections between two machines. All you have to do is come up with another P2P protocol which has not been seen yet, and you have got around the filter. Or is there a magic piece of technology that I don't know about that can look at a random data packet and go "Aha, this is part of a P2P stream. Quash"?

And if the P2P designers really wanted to be clever, it would be possible to devise a UDP/IP protocol using stateless connections, with out-of-order packets routed via multiple hosts using different ports, possibly with each packet encoded differently. Block that!

It becomes a technology war, with the side with the most and cleverest deep hackers winning. I would place my money on the P2P designers, quite honestly, as these people work without financial reward (the other side needs salaried people). And if it is decided that such applications become illegal to write, then you end up with a locked-down Internet, where a new technology like the World Wide Web (as it was when it was new) can never happen again. I think people forget what the Internet was like using ftp, Archie and Gopher. There *will* be new killer apps that will change the Internet overnight that we cannot yet imagine.

I'm sure Microsoft, Google et al. would love it if Governments gave them the power to decide what we can run on the Internet. Think of the revenue-generating power that they would be handed.

And the AC who believes that it can be filtered at source just does not understand how P2P and computers in general work. What constitutes the start of a transmission, if you are getting parts of the work from a dozen different P2P systems around the Internet? P2P is NOT a client-server model (the clue is in the name: Peer-to-Peer).

Goodbye, freedom. Stop the planet, I want to get off.

Intel Classmate PC lands in UK for £239

Peter Gathercole Silver badge
Thumb Down

@matt

You obviously have never watched an effective teacher teach a class using computers, even if you appear to be in education.

It starts with instructions on how to start the app, read from a crib sheet (or often the teacher or teaching assistant setting the programs up before the class starts), and then continues with using the app, which is probably OS-agnostic. What use is detailed Windows knowledge in this type of class?

In the UNIX/Linux world, it is possible to set up specific user accounts that just launch the required application (see the sketch below). So the instructions become "At the login prompt, type the name of the application, and watch it launch". No specific user IDs per child, you can lock the ID down so that even if the kids find a way to break out they can do no damage, and the teachers DO NOT NEED TO KNOW ABOUT THE SPECIFICS OF THE COMPUTER.
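A minimal sketch of what I mean (the account and application names are examples only): give the class account a login 'shell' that just runs the application.

  #!/bin/sh
  # /usr/local/bin/classapp - wrapper set as the login shell for a 'pupil' account.
  # 'exec' replaces the shell, so quitting the application ends the session.
  exec /usr/bin/gcompris

Create the account with something like "useradd -m -s /usr/local/bin/classapp pupil" (you may need to list the wrapper in /etc/shells, depending on how the kids log in).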

Your points about supportability only count if your support staff are only trained in Windows. Why should they not learn about Linux? It's not as if your average teacher who uses Windows XP Home or Vista Basic at home is likely to be able to support a network of Windows systems without additional training. And unlike your average IT professional, they don't CARE whether it is Windows or not, just that it works like the manual and procedures say it should. They are TEACHERS, for goodness sake!

It is possible to lock Linux down so that it will never change until some deliberate action changes it. Try doing that on XP, or even Vista, where some program or other will require admin rights, and this is likely to open vectors for system corruption in the classroom. If you are really that concerned, you can effectively make your Linux PC a thin client, or even both thin and fat, determined by which user you log in as.

And as for applications, UNIX/Linux programs work from network shares much better than Windows ones, and have done since Sun said that "The network IS the computer" in the 1980s. There are LOTS of people who understand how to write applications for UNIX-like OSs that pick up all of their code and configs from relative or non-specific paths, making the way that the shares and mounts are accessed less important. This means that the apps DO NOT EVEN HAVE TO BE INSTALLED ON THE PCs. Just mount the share or remote filesystem read-only during system boot (see the example below), and go.
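For example, a single /etc/fstab line on each PC would do it (server name and paths are hypothetical):

  appserver:/export/apps   /opt/apps   nfs   ro,intr   0 0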

And the best teaching software is bespoke, or at least written specifically to support the subject. The BBC Micro, rest its silicon chips, had huge amounts of subject-led programs available, often written by the very people who used it. When schools started installing IBM compatibles, many, many teachers found that there were too few subject-led programs available, and the PC was too complex and had too few development tools to allow the teachers themselves to write the simple but specific programs they needed. This may have changed now, but there was a generation of teachers who cut their teeth on BEEBs but felt that the new computers in their school were inaccessible or of limited use for their subjects.

I've seen too much "computer" teaching end up being the teaching of particular packages, often Windows ones. Computers SHOULD be used as a tool to support other subjects, not as an end in themselves, except in ICT classes. And those should teach more about HOW computers work, rather than just how to use them.

I've now left the education field, but I have three children at various levels of education, and nothing much seems to have changed since I was there. I have heard of schools who have embraced open systems very successfully, and only use Windows for the few packages for which no reasonable alternatives are available, but in most schools the Windows momentum is difficult to deflect.

World fails to end as Palm ships Treo smartphone with Wi-Fi

Peter Gathercole Silver badge
Alert

Where's the PalmOS version?

Call me a bigot, but I won't buy anything for myself running any type of MS Windows unless I have no option. And I'll think long and hard about it even then.

Roll on the Linux version with a PalmOS front end and a DragonBall or ARM emulator, but hurry, my 650 is beginning to go west.

IBM's eight-core Power7 chip to clock in at 4.0GHz

Peter Gathercole Silver badge

Elite?

Don't know how many people remember Elite on the 32K BBC B, but it was a revelation when it first came out. Realtime hidden-line removal on an 8-bit micro running at 2MHz, with not a GPU in sight! When I first saw it I was amazed, as I had been playing around on the BEEB in assembler doing 3-D wireframe, and I could only get simple objects (mainly cubes and other regular shapes), without hidden-line removal, running at 2-3 frames a second. But I think that the main problem was that I was using the OS line-drawing primitives, whereas Elite used a quick and very dirty algorithm.

I would love to know how they did the hidden-line stuff so fast on such a limited system. They weren't even using colour switching to hide the drawing (Elite ran split-screen, with the top 3/4 running effectively in mode 4 [actually a hybrid mode 3], 1 bit per pixel == 2 colours, and the bottom in mode 5, 2 bits per pixel == 4 colours), except when you were using a 6502 second processor, when it ran in mode 1 all the time. It really used the available hardware to its best.

This Power7 monster IBM is proposing sounds like you will need serious communication skills to get the best from it. Makes the p4 and p5 stuff I am working on at the moment look a bit lame.

US retailers start pushing $20 Ubuntu

Peter Gathercole Silver badge
Coat

@Gerhardt

I don't think that anything in the Ubuntu package manager will actually uninstall modules that you have built, but what it will do is update the kernel and not rebuild the modules that you have added. New kernel versions mean new modules directories, which will not contain your modules. The old ones will still be there in the /lib/modules/<version> directory, and if you boot the relevant kernel (ever wondered what all those extra entries in the GRUB boot screen were?), they will still work.

I do agree that this is difficult behaviour to get to grips with. I put 8.04 on my main laptop (previously running Dapper 6.06) the weekend after the full release, and have had at least 4 minor kernel upgrades since, which have meant that I have had to re-compile (or at least re-copy) the aironet module that I use for my Three 3G network dongle to speed up network access (it's patched with the USB ID of the dongle).

Provided that the kernel update is a minor release (the 4th number of the version number), there is an extremely good chance that your module will work without re-compiling. Alternatively, you can lock the kernel and kernel-module packages so that they will not be upgraded, but this means that they will not get any patches. Fire up the Synaptic package manager from the System->Administration menu, and press <F1> to get some help.

To locate where the module you want is installed, assuming you know the name of the module, you can use "find /lib/modules -name '*mod-name*' -print" (where mod-name is the name of your module). You can then identify the version of the running kernel with "cat /proc/version", and work out from this where to copy the module. Please note that this is all command-line stuff, and is not a full procedure, but with the correct amount of applied thought you should be able to work it out. Something like the sketch below.
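As a rough example (the version numbers and module path are illustrative only; check yours with the find command above):

  old=2.6.24-16-generic                # the kernel the module was built for
  new=$(uname -r)                      # the kernel you are running now
  sudo cp /lib/modules/$old/kernel/drivers/net/aironet.ko \
          /lib/modules/$new/kernel/drivers/net/
  sudo depmod -a "$new"                # rebuild the module dependency map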

Sorry, I know that this is not a tech-assistance forum. I'll try to keep just to comments in future.

Alan Sugar leaves Amstrad

Peter Gathercole Silver badge
Thumb Up

AMSTRAD and VIGLEN

Sir Alan bought Viglen out when they got into financial difficulties sometime in the '90s, as I recall. All of the BBC stuff was pre-Sugar, although it was the BBC stuff that made them famous. I still have a working 40/80-switchable double-sided drive that I bought back in 1983. Bare TEAC drive with a plastic sheath case, a plastic back plate, and 2 cables with the correct ends on.

Viglen became a reputable supplier of reasonable PCs to business after they moved on from the BBC stuff. I was surprised when they had one of the first 486DX systems reviewed in the UK.

Amstrad used to make real hi-fi separates before the card-boxy things. I had an IC2000 amp and IC3000 tuner, which were paper-covered chipboard and plastic, but the metal chassis and electronics weren't actually that bad. Beef up the power supply with a large electrolytic capacitor to knock out the hum, and you had 25 watts RMS per channel which could drive significant amounts of current.

Following that, they had metal-cased separate amps, tuners and cassette decks, in silver and black, but I thought they looked a bit tacky.

They also did a strange turntable, which looked a little like a Rega Planar (wooden plinth, speed change by moving the belt by hand, external belt drive), but had a three-armed platter with hexagonal pads on the ends of the arms that did not support the LP at all. I wondered what one would have sounded like with a Rega glass platter sitting on it.

You really need an Old Fart icon here!

Intel says 'no' to Windows Vista

Peter Gathercole Silver badge
Thumb Up

Steam

The Steam games are very difficult to pirate, as they are tied to single-use product activation keys. It is possible to install them on more than one computer, but if you are connecting to the Steam servers for multi-player games, then you have to log on, and each activation key is registered against a single sign-on ID. It won't allow you to register a key against two accounts.

If you try to set up a LAN game with the same copy on two PCs, again they can tell, and the second one won't start.

Trojan heralds OS X's 'new phase of exposure to malware'

Peter Gathercole Silver badge
Stop

Pi is Pi Gordon?

It will be equal to itself (this is axiomatic), but it is NOT 3.14159, although you could make this statement true by saying it is 3.14159 rounded to 5 decimal places, or to 6 significant figures.

Pi is an irrational number (i.e. it cannot be represented as a fraction, and so the sequence of digits neither terminates nor repeats), so it is not possible to represent it completely accurately on paper or in a computer.

But, back to the story. All of you who state that it is impossible to have a completely secure OS are generalising. It should be possible to make a completely secure OS, but the costs of doing it make the feat impractical. But UNIX-like OSs have a distinct advantage over pre-Vista versions of Windows, because the security model that has existed in UNIX-like OSs for over 30 years expects that most work is done as a non-privileged user that does not have access to large parts of the system.

Even a patchy webserver can be made to run as a non-privileged user, with read-only data, so the system as a whole is unlikely to be compromised.

Of course, if you have a means to administer/patch the OS, social engineering can ALWAYS be used to compromise the system. I'm not saying that these OSs are completely secure, but they have fundamental advantages.

If you were to have a system with no mechanism to patch the OS, and the OS was stored in ROM and could not be changed, and there was no way to re-vector OS calls, and you were not able to run any code that was not shipped with the OS, and you made the system functionally frozen, and you put an encrypted filesystem in place, encrypted by a physical dongle, then it is unlikely that anybody would break in. But this would be more like an appliance than a general-purpose computer. Then again, maybe that is what the majority of current users need.

Putting any ease-of-use feature in an OS (although you could argue that the user interface is separate from the OS proper) puts a system at risk. Obviously, any remote desktop tool has the scope to be a way into a system, and having a general-purpose scripting language could also make a system vulnerable.

Lenovo throws arms and legs around SMBs

Peter Gathercole Silver badge
Thumb Down

Think's ain't what they used to be!

I agree.

I've had Thinkpads for 10 years or so, mainly bought re-conditioned, and everything after the T23 is flimsy. I still have a 10+ year old 380XD running as a firewall.

Granted, bits always fell off eventually, but none ever let me down by failing to work on site until I got a T30 (the first model built in the Far East, I believe). The T41s and T42s appeared a bit better, but I do not like the T60s at all, especially the widescreen ones, which is why I haven't bought one!

I really don't know what I will get next. Never mind. My current machine (again a T30, because the disk and DVD writer could just be swapped over) runs Ubuntu 8.04 quite well enough for the moment, as long as the one remaining working memory socket stays soldered to the motherboard. I won't be upgrading the Windows partition to Vista, though.

What I learned from a dumb terminal

Peter Gathercole Silver badge
Happy

How about this true story

About 15 years ago, I was working in a major UNIX vendor's support centre, and took a call about the colours being wrong on the screen. After going through all of the X colour (color?) maps and everything else I could think of, we found that red was coming out blue, and vice versa, while green was OK. I suggested, in desperation, that the customer unplug the monitor from the computer and plug it back in.

After some noises from the end of the phone, an amazed customer came back on the line, saying that when he removed the plug, it was incredibly stiff, and he found that it had been plugged in upside down! Quite how that could have happened by accident is beyond me.

P.S. It was not a 15-pin high-density D-shell VGA connector; it was a D-shell with three mini-coax connectors in it, one each for red, green, and blue, with sync on green, like below.

---------
\ o o o /
 -------

When plugged in upside down, the mini-coax plugs connected, but the surrounding D-shell must have been well and truly bent out of shape!

IBM 'advises' staff to opt for a Microsoft Office-free world

Peter Gathercole Silver badge
Thumb Up

@Mark Rendell

It's funny: all the collaborative things you mention were pretty much invented by IBM. The document sharing, mail, and shared calendar features were first seen in PROFS, or NOSS, the internal office system that IBM used. At the time, it was all done on mainframes and 3270 terminals, but it eventually was ported to OS/2. I'm talking about a product (the mainframe version) that existed BEFORE PCs were actually made.

It's funny that it took a long time for some of these features to appear in Lotus Notes, which was supposed to replace PROFS. But I think that most things now appear in Domino, when used with an up-to-date Notes client. The problem is that most people see Notes as just a quirky mail client, rather than the revolutionary collaborative tool and application platform that it actually was and is. But I will admit that Notes used to wind me up when the replication I asked for didn't happen, leaving all my outgoing mail stuck on my Thinkpad.

Peter Gathercole Silver badge
Linux

History

I was in IBM 12 years ago, and at that time it was OS/2 and Lotus SmartSuite that came loaded automatically on any Wintel system. If you wanted Windows and Office, you had to have a really good business case, and Office on OS/2 was not really well supported (probably due to Microsoft using secret API calls that OS/2's Windows support did not cover).

When OS/2 fell from grace at IBM, there was a time when SmartSuite on Windows was tried, but as most of IBM's customers were Office users, document exchange became a problem (there were SmartSuite filters to open and write Office formats, but they were not included by default, and were not 100% effective). It became a straight choice for users between SmartSuite and Office, and Office won, as in the rest of the world.

IBM then tried to make SmartSuite (and the Lotus Notes client, the email part of Notes) more popular with a giveaway programme on magazine cover disks, but that did not work either, so the package died, albeit a slow, lingering death.

So for about 10 years, IBM has been using Office exclusively, buying corporate licences at whatever cost Microsoft felt like charging them.

If IBM can make even some of their own users give up Office, so that a smaller licence fee needs paying, then they can only gain. And with ODF being a hot topic at the moment, it gives the possibility of some free news coverage. Not sure how the targeted users will react, however.

I avoided Office, using SmartSuite after I left IBM, and switched to StarOffice and then OpenOffice when I decided to use Linux as my primary OS (I'm a UNIX consultant). And now, when I have to use Office as part of my work on client-provided systems, you cannot imagine how annoying and difficult I find it. The lack of any common sense in things like font handling and styles when cutting and pasting between documents, everything moving around when new releases come out, and some very strange behaviour when trying to adjust complex numbered lists just astound me. I could list at least 50 things that I cannot stand. Quite how the software passes ergonomic testing escapes me.

So I think that IBMers should embrace a move from Office. Let's break Microsoft's monopoly. Of course, I would actually like to move back to the Memorandum Macros and troff, or possibly LaTeX, for documents, and who needs spreadsheets anyway!

Windows Vista has been battered, says Wall Street fan

Peter Gathercole Silver badge
Thumb Up

Let's agree...

... that Vista on new hardware designed for it is stable and capable, but unless your old system REALLY rocked, it is probably better off with XP.

I do not want to enter a flame war. If you like Vista, stick with it. If you don't, use XP (or Linux). It really is horses for courses.

But it would be nice if Microsoft would continue to produce security patches for XP for all of the people who have perfectly capable XP systems that will not take Vista, or who do not want to pay for what is probably an unwanted upgrade.

I would hate to think that people started discarding perfectly serviceable systems because their bank or some other organisation they deal with decides that XP is not secure enough once it goes out of support. Think of the damage to the environment from all the plastic, heavy metals and CRT screens.

They cannot even be used with their Microsoft OS if they are donated to charity, as the Windows EULA does not allow it.

Maybe there is an opening for offering recycled kit with Ubuntu loaded on it at budget prices.

Most 'malfunctioning' gadgets work just fine, report claims

Peter Gathercole Silver badge
Thumb Down

5% faulty

Surely, even 5% should be too high for new products. That's 1 in 20 items faulty.

Is it really the case that it costs less to ship a faulty device from China and dispose of it in Europe than to test it before it is shipped? They sure as hell don't rework and fix faulty devices in Europe unless they are high-cost items.

Or maybe the shipping causes the damage. So much for the value of the blister pack and expanded polystyrene.

Asus announces 10in, HDD-equipped Eee PC

Peter Gathercole Silver badge
Linux

Something not right

I don't know. These bigger EeePCs just do not look right after the 701. I wish that they had produced a model with a screen with more pixels, without the bezel, but in the same case.

I can cope very well with the keyboard on a 701, but the screen just does not have enough space, even for some of the default menus.

Five misunderstood Vista features

Peter Gathercole Silver badge
Linux

@Roger Barrett

I have a couple of XP systems with games installed that are run by the kids.

There is a significant problem with the security model, particularly with XP Home running on NTFS, where you need admin rights to install DLLs and other config files on the system, and the permissions are then set so that you need admin rights to read/change those files and write any saved-game files. It does not affect FAT32, because that filesystem has no extended access controls to give you the security (and problems) that running as a non-administrative user provides.

The additional problem with XP Home is that you do not have the extended policy editors for users and groups, or the tools to manipulate the file attributes on NTFS. I suspect that even if XP Pro were being used by the majority of users, they would not want to get involved with this type of administration anyway.

It is possible to do some of the work with cacls from a command prompt, but it is very hard work. I have not found any way with XP Home (without installing additional software) to manipulate the user and group policies at all.

My current solution is to create an additional admin user, and then hide it from the login screen (with a registry key change). The kids can then do a right-click, and then a "Run as..." to this user for the games.
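From memory, the key involved is the one below ('gamesadmin' is just an example account name, and as always, take care when editing the registry):

  reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\SpecialAccounts\UserList" /v gamesadmin /t REG_DWORD /d 0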

This is still insecure for a number of reasons. They already know that they can use this account to run any command with "Run as...", and the system is still just as vulnerable to security flaws within a game, but it is a half-way house.

Unfortunately, it does not appear to work 100% of the time. I tried using it to install Blockland, which is a game that allows you to create a first-person role-playing game in a world built out of something similar to Lego, and it would appear that there is an access-rights inheritance feature (read: problem) that I don't know enough about to fix. It installed OK when run directly from an admin login, so I did not pursue it.

I must admit that this type of problem scares me, especially if similar issues exist in SELinux (I normally have SELinux disabled), but I guess that I am just resisting change to a system I understand well. Role-based authorisation is definitely the way forward, but it is just so difficult to accept this type of change.

Fedora 9 - an OS that even the Linux challenged can love

Peter Gathercole Silver badge
Boffin

@AC about cat - if you are still reading

This is UNIX we are talking about; almost all things are possible, although I suspect that your ksh loop may well run slower than cat.

I take your point about 'command' and 'program'; sloppy thinking on my part. But that sloppy thinking runs through the entire UNIX history. Check your Version 7, System V, BSD, AIX or any other documentation, and you will see that 'cat' appears in the "Commands" section of the manual (run "info cat" on a GNU Linux system, and see the heading: Section 1, "User Commands").

Interestingly, your one-liner does not work exactly as written on AIX (a genetic UNIX), as echo there does not have a -e flag. Still, you probably don't want that flag if you are trying to emulate cat. I have used echo like this in anger, when nothing but the shell was available (booting an RS/6000 off the three recovery floppy disks to fix a problem, before CD-ROM drives were in every system).

I was not really ranting; I was trying to put a bit of perspective on the comments, from a historical point of view. I'll bet you would find a need to complain if cat really was not there on a distro.

Sorry, I did miss the lighthearted comment. Still, just a bit of fun between power users, eh!

Myself, I try to stick to a System V subset (vanilla, or what), mainly because it is likely to work on almost any UNIX from the last 20 years. When you have used as many flavours as I have, it's the only way.

Yes, I've been around and yes, I am still making a living out of UNIX and Linux, so don't feel I'm a complete dinosaur yet. And I am also open enough to use my name in comments (sorry, could not resist the dig).

Peter Gathercole Silver badge
Boffin

Wireless and Linux

I can get Hermes, Orinoco, Prism, Atheros and Centrino (ipw2200) chipsets working out of the box with any Linux that supports the GNOME Network Manager (which I have installed on Ubuntu 6.06 - it's in the repository). I can get the Ralink and derived chipsets working for WEP without too much trouble, but it takes some effort to get WPA working, which most people will not be able to sort out themselves.

Where there is a weakness is in the WPA supplicant support. Atheros and Centrino chipsets with GNOME Network Manager will do it, and in a reasonably friendly way.

I am using a fairly backward Ubuntu release, so I suspect that it will be a little easier in later releases. I know that the normal network system admin tool in the menu does not work with WPA at all in Ubuntu 6.06.

Where the problem lies is that with a card intended for Windows, the user gets a nice little install CD, which takes away all the hassle of deciding which chipset is being used.

Modern Linux distributions probably have the ability to drive almost all of the chipsets in use out of the box, and also have the NDIS wrappers as a fallback, but you need to be able to identify the chipset to make useful decisions.
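If you want to find out what you actually have, the standard tools will usually tell you (assuming a Linux that has them installed):

  lspci | grep -i -E 'network|wireless'   # internal, mini-PCI and most PC-Card adapters
  lsusb                                   # USB adapters; look up the vendor:product ID shown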

If manufacturers provided the details of the internal workings of the card (basically the chipset details), or even gave the same degree of care to installing their products on different Linux distros as they do on different Windows releases, then I'm sure that there would be less discontent amongst non-hardcore Linux users.

I know that this is hampered by the plethora of different distributions out there (see my earlier comments), but it should not be rocket science.

An additional complication is that if you go into your local PC World (assuming it is still open after Thursday) and ask for a wireless PC Card using the Atheros chipset, you will get a blank look from the assistants: they will understand "wireless" and may understand "PC Card" (but you might have to call it a PCMCIA card), but Atheros might as well be a word in Greek (actually, it probably is).

And it is complicated by manufacturers who have multiple different products, with the same product ID, using completely different chipsets (if you are lucky, on the card itself you may get a v2 or v3 added to the product ID, but not normally on the outside of the box).

If you definitely want to get wireless working, I suggest that you pick up one of the Linux magazines (-Format or -User) and look for adverts from suppliers who will guarantee to supply a card that works with Linux, or stick to the Intel Centrino wireless chipsets that are fortunately in most laptops with Pentium processors.

If your laptop uses mini-PCI cards for wireless expansion (under a cover, normally on the bottom of the laptop), then there are many people selling Intel wireless cards on eBay for IBM Thinkpads (2915ABG) that will probably work. That's what I am using, and it works very nicely indeed.

Peter Gathercole Silver badge
Coat

Who needs cat?

I know that this is absolutely geeky, but cat is a command (like ls, find, dd, ed etc.) which has been in UNIX since its inception. I have been using it since the 1976/77 Bell Labs release of Version 6 for the PDP-11, long before ksh and bash (in fact, the Version 6 shell was *really* primitive, only being able to use single-character shell variables, for example).

It actually does a lot more than you think. Look up the -u flag: with a couple of stty settings, you can make a usable, if very basic, terminal application (one cat in the background, one in the foreground).
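Something like this sketch, assuming a serial line on /dev/ttyS0 (the device and speed are examples):

  stty -F /dev/ttyS0 9600 raw -echo   # raw mode, no echo, on the serial line (GNU stty syntax)
  cat /dev/ttyS0 &                    # background cat prints whatever arrives
  cat -u > /dev/ttyS0                 # unbuffered cat sends what you type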

Try doing a "cat *.log > logfiles.summary" using your ksh one-liner.

How about "ssh machine cat remotefile > localfile" for a simple file copy.

Also, cat has an absolutely minuscule memory footprint (the binary is just over 16KB on this Linux system I'm using).

It is one of the fundamental Lego-style building blocks that make UNIX so powerful. Whilst it is true that other tools are around that can do the same job, you cannot remove it, because of compatibility with old scripts. And you can guarantee it is there, all the time, on any UNIX variant (try running a script that starts "#!/bin/ksh" on a vanilla Linux system). And it is in every UNIX standard from SVR2 (SVID, anybody?) and POSIX 1003.1 to whatever is the current X/Open UNIX standard (2006?).

Remember that the UNIX ethos is "efficient small tools, used together in pipelines". Even things like Perl are anathema to UNIX purists, because they do everything in one tool.

I think you need to see a real UNIX command-line power user at work. I have literally blown people's minds by doing a task in a single pipeline of cut, awk, sed, sort, comm, join etc. that they thought would take hours of work using Excel or Oracle.

Mine is the one with the treasured edition of "Lions' annotated Version 6 UNIX kernel" in the pocket.

Proud to be celebrating 30 years of using UNIX!

Peter Gathercole Silver badge
Linux

@AC and others

I used to be a committed RedHat user for my always-with-me workhorse laptop(s), from 4.1 through 9.1 (or was it 2), and when Fedora came along, I got fed up with the speed at which Fedora changed. You just could not use a Fedora release for more than about 9 months and still expect the repositories to remain for that package that needed a library you had not yet installed.

Also, when you update, you pick up a new kernel, and all of the modules that you had compiled need frigging or recompiling (my current bugbear is the DVB-T TV adapter I use).

I switched to Ubuntu 6.06 LTS mainly because I liked the support that they promised, and have delivered. Also the repositories are extensive, and are maintained.

Here I am again, two years later, and I can remain on Dapper if I want to (for quite a while yet), but I am finding that it is taking longer and longer for new things to be back-ported, and I have had problems getting Compiz/Beryl (or whatever the merged package is called) working with GLX and the binary drivers from ATI.

I am going to go to 8.04 (LTS again) for the same reasons as before, and I am removing the last remains of RedHat 9 from my trusty Thinkpad T30 (the disk has moved/been cloned several times, keeping the machine the same, just on different hardware - ain't Linux good).

I wish there were a direct upgrade from 6.06 to 8.04, but I guess that one re-install every two years is not too much to put up with, especially as I choose to keep my home directory on a separate partition.

I may give Fedora 9 a try on USB stick, just to see how things have changed, but I think that Ubuntu is still my preferred choice. This is mainly because I use my laptop as a tool, not as an end in itself. I just do not have the time to be fiddling all the time.

I know that this is petty, but I feel that we absolutely need ONE dominant Linux distro, so that we can achieve enough market penetration to make software writers take note. Ubuntu is STILL the best candidate for this as far as I can see, because of its ease of use, good support, and extensive device support.

If the Fedora community want to come up with a long-term release strategy, then I think that they could move into this space. But most non-computerate users will generally keep the same OS that a system came delivered with for the lifetime of the machine, and if they have to perform a major upgrade, most will discard the machine and buy a new one. This means that we need distributions with an effective lifetime of several years to get the needed penetration.

Tux, obviously.

PC World, Currys staff to be dumped in DSGi rescue plan?

Peter Gathercole Silver badge
Linux

@Ivan Headache

The length of cable supported depends on the SCSI variant being used.

I used Fast-Wide Differential SCSI-2 cables that long, and yes, they were in spec, and yes, they worked. LVD SCSI-3 allows even longer cables, I believe.

I seem to remember that SCSI-1 allowed a maximum length of 3 metres from terminator-to-terminator, but on many early midrange systems, the external SCSI port was on the same bus as the internal devices, and once you measured the internal cables that ran to all the drive bays, you often had less than 1 metre available for external devices.

The biggest problem was that all the different SCSI variants used different connectors and terminators, and even when you used the same variant, manufacturers often had their own (often proprietary) connectors (IBM, hang your head in shame!).

I hope that some of the smaller Currys stores survive, although this is unlikely, as they are about the only electrical retailer left on the high street (I know there is the Euronics consortium of retailers, but somehow they are just not the same). Sometimes you just have to pop out and buy a toaster/hoover bag/unusual battery in an emergency, and if I have to trek 2x25+ miles to the nearest large town, the only one who will benefit is HM Treasury, due to the fuel duty. I know I won't.

I'm with most of you on PC World, however. I have often had to bite my tongue when I am in one of the stores listening to advice being given to customers. I once had an open row with the supposed network expert about the benefits of operating a proper firewall on a dedicated PC vs. the built-in, inflexible app that is in most routers. Eventually, I had to play the "I'm an IT and network consultant, and I know what I'm talking about" card just to shut him up.

Tux, because he makes UNIX-like OSs available to all.

BOFH: The Boss gets Grandpa Simpson syndrome

Peter Gathercole Silver badge
Coat

PDP 11/34!

I'm hurt.

I loved all of the PDP 11/34s I used. Of course they were not as reliable as more modern machines (or even 11/03s and 11/44s), but then they were built out of 74LS-series TTL on a wire-wrap backplane with literally thousands of individually wrapped pins. If I remember correctly, the CPU was on five boards, with the (optional) FPU on another two. Add DL or DZ11 terminal controllers, RK11 or RP11 disk controllers, and TM11 tape controllers, and you had a lot to go wrong.

I suspect that all of the Prime, DG, IBM, Univac, Perkin-Elmer and HP systems of the same time frame had similar problem rates, especially as they were not rated as data-centre-only machines, and would quite often be found sitting in closed offices or large cupboards, often with no air conditioning.

It was quite normal for the engineers to visit two or three times a month, and we had planned preventative maintenance visits every quarter.

But the PDP-11 instruction set was incredibly regular (I used to be able to disassemble it while reading it), and it was the system that most universities first got UNIX on. It had some quirks (16-bit processor addressing mapped to 18- or 22-bit memory addressing using segment registers [like, but much, much better than, what Intel later put into the 80286], the Unibus map, separate I&D space on higher-end models). OK, the 11/34 had to fit the kernel into 56K (8K was reserved to address the UNIBUS), but with the Keele overlay mods to the UNIX V7 kernel, together with the Calgary device buffer modifications, we were able to support 16 concurrent terminal sessions on what was, on paper, little more powerful than an IBM PC/AT.

It was a ground-breaking architecture that should go down as one of the classics, along with IBM 360, Motorola 68000 and MIPS-1.

Happy days. I'll get my Snorkel Parka as I leave.

Is the earth getting warmer, or cooler?

Peter Gathercole Silver badge
Coat

Chasing the money

A lot of you commenting on a scientific gravy train obviously don't know how scientific grants are awarded.

If you are a research scientist in a UK educational or Government-sponsored science establishment, you must enter a funding circus to get money for your projects. This works by you outlining a proposal for the research you want to carry out, together with the resources required. This then enters the evaluation process run by the purse-string holders (UK Government, science councils, EU funding organisations etc.). Inevitably, the total of all of the proposals would cost more money than is available (just look at the current UK physics crisis), so a choice must be made.

The evaluation panels are made up of other scientists with reputations (see later), but often also contain civil servants, or even Government Ministers. They look at the proposals and see which ones they are prepared to fund. As there is politics involved, there is an agenda to the approvals.

If there is a political desire to prove man-made climate change, the panel can choose to only approve the research that is likely to show that this is the case.

So as a scientist, if you want to keep working (because a research scientist without funding is unemployed - really, they are), you make your proposal sound like it will appeal to the panel. So if climate change is in vogue, you include an element of it in every proposal.

The result is funded research which starts with a bias. And without a research project, a scientist does not publish credible papers, does not get a reputation, and is not engaged in peer review, one of the underlying fundamentals of the science establishment. Once all of the scientists gaining reputations in climate study come from the same pro-climate-change background, the whole scientific process gets skewed, and doubters are just ignored as having no reputation.

If there were more funding available, it is more likely that balanced research would be carried out, but at the moment the only people wanting to fund research against man-made climate change are the energy companies, particularly the oil companies. This research is discounted by the government-sponsored scientists and journalists as being biased by commercial pressures.

More money + less Government intervention = more balanced research. Until this happens, we must be prepared to be a little sceptical of the results. We ABSOLUTELY NEED correctly weighted counter-arguments to allow the findings to be believable.

Please do not get me wrong. I believe in climate change, but as a natural result of causes we do not yet understand properly (and may never, as the work of the recently deceased Edward Lorenz suggests), one of which could well be human. Climate change has been going on for much longer than the human race has been around, and will continue until the Earth is cold and dead.

I am a product of the UK Science education system to degree level, and have taught in one such establishment too, so please pass me the tatty corduroy jacket, the one with the leather elbow patches.

Lenovo ThinkPad X300 sub-notebook

Peter Gathercole Silver badge
Boffin

Defrag'ing NTFS

Like many modern filesystems, NTFS has the concept of complete blocks, and partial blocks.

Sequential writes will result in complete blocks being used for all but the last block. To maximise disk space, the remaining bit at the end of the file is written to a partial block, leaving the rest of that block available for other partial blocks. Confusingly, these partial blocks are called fragments. I don't know about the NTFS code in XP and Vista, but with other OSs the circumstances in which fragments are promoted to full blocks are fairly rare under normal operation, so over time the number of full blocks split into fragments will increase.

When reading a file that has been extended many times, you end up with blocks in the middle of the file that, instead of being stored as whole blocks, are in multiple fragments. Each fragment needs a complete block read, so a single block divided into four fragments (for example) needs four reads instead of one, probably with four seeks as well.

When you defrag a filesystem, these fragments are promoted to whole blocks (by effectively performing a sequential re-write of the whole file), significantly increasing performance.

You also find that filesystems that are run over 90-95% full end up with a significant amount of the free space being in fragments, with few full blocks available. Certain types of filesystem operation just will not work with fragments (operations that write whole blocks, like those used by databases, for example). This also affects a number of UNIX variants.

As long as the OS treats the SSD as a disk, using the same code as for spinning disks, the same problems will happen, so you will need to defrag it just like an ordinary disk. Why should it be any different? What may happen is that the performance of a fragmented solid-state disk may not degrade as much as that of a spinning disk, as I would guess that a seek on an SSD is almost a no-op.

Standalone security industry dying, says guru

Peter Gathercole Silver badge
Pirate

Voodoo

Come on, can you really think that a security 'expert' who goes into an organisation and just comes up with "nothing to look at here" is going to be trusted?

They HAVE to find something to justify their own existence, even if it is that you have to video everybody everywhere. The better you (and the previous expert consultants) are at the job in hand, the more trivial the next vulnerabilities become. And because they are just trying to find one or two things, they will stop once they have those one or two. Of course, this assumes that all the basics are covered.

It's when they start complaining that the screens can be read over the video links that they asked for, and asking whether the CCTV wires are Tempest-compliant or could be intercepted between the camera and the monitoring station, that you really have to worry.

My view is: let a couple of minor but visible, easily fixable holes be found. Take the resultant report, fix them in no time flat, and everyone will be happy. You will get a "found something, had it fixed, everything OK now" report, and they will go away happy, knowing that they have done the job. And you will not have to fix the trivial new vulnerabilities that they would otherwise have had to find.

I think the BOFH would agree with this plan. Either that, or there will be some more mysterious accidents with lift doors opening at the wrong time!

Microsoft kicks out third Windows XP service pack

Peter Gathercole Silver badge
Linux

@timM

Maybe you ought to put pressure on your favourite game and device manufacturers to support Linux, rather than asking Linux to work like Windows. My IBM Thinkpad works flawlessly with a straight off-the-CD Ubuntu install, including the trackpad, the wireless and the display adapter. The only thing I have not tried is the modem, but who uses that nowadays anyway?

The problem with much of the Linux software that needs to match the kernel version is that the developers did not understand the correct methods for making their software kernel-version independent. As long as you remain in a major branch (like 2.6), it is possible to make your modules version-independent.

Even if a module is compiled against a particular kernel minor version, it is often possible to copy the module into the correct location for any new kernel that you install. I admit that this is not something that is done automatically when you install a new kernel, but it's not that difficult either. If you have compiled the module, and kept the build directory, try doing a "make install" to see whether that will put it in the correct location.
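Roughly this (the paths are examples, and it assumes the module's Makefile builds against /lib/modules/$(uname -r)/build, as most do):

  cd ~/src/mymodule       # wherever you kept the build directory
  make clean && make      # rebuild against the new kernel's headers
  sudo make install       # copy the .ko into place for the running kernel
  sudo depmod -a          # rebuild the module dependency map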

Unfortunately, Nvidia do not appear to be able to do this with their 3D module, something that almost everybody trying to get Compiz running on a system with an Nvidia card will fall over.

Hitachi to go it alone on discs after all

Peter Gathercole Silver badge
Happy

Bad Blocks?

All disks have bad blocks; you just don't see them because they are mapped out. If you have a drive that cannot do that automatically, you may like to try re-mapping all the bad blocks using a suitable utility (normally provided by the drive manufacturer, but you could try mhdd). Of course, back your data up before trying to rescue the disk.
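Under Linux, two quick ways to look at a suspect drive (back up first; smartctl needs the smartmontools package, and /dev/sda is an example device):

  sudo badblocks -sv /dev/sda                    # read-only scan for unreadable blocks
  sudo smartctl -A /dev/sda | grep -i realloc    # SMART count of sectors already remapped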

Once the badblock map is written, new bad blocks are normally caused by head contact with the platters, so don't jog your computer.

All of the IBM Ultrastar disks (and other IBM server disks, back to Redwings) I have used have automatically re-mapped bad blocks. I'm not saying that I have never had to replace Ultrastar disks, but it has generally been because of electronics, motor or actuator failure. What I have found is that they mostly fail when they are stopped or started. Keeping them running 24x7, I have seen them run for literally years at a time (in the years-without-stopping case, they were Spitfire 1GB SCSI disks - and the OS was AIX).

The Deskstar 'click-of-death' problems were caused by the voice-coil motor failing to perform the head preload during power up. The click was the head being moved to the end-stops. Deskstars were not the only disks with this type of problem. If you google click-of-death, you will find that other manufacturers have had similar problems in the past.

The underlying story is that you can have it cheap, big, or reliable. Currently cheap and big appear to be more important (to us!) than reliable. And the more expensive server members of a disk family are probably worth the money.

BBC vs ISPs: Bandwidth row escalates as Tiscali wades in

Peter Gathercole Silver badge

ISPs own fault

If the ISPs sell bandwidth they cannot deliver, who is to blame?

I know it may mean higher prices for us all, but I would much prefer to pay more to buy a service that delivers what I have been sold, than get a service that is unusable for much of the day.

Why should the BBC, or ITV, or Channel 4 or Channel 5, or Sky, or YouTube or its clones (who all have video-on-demand services) have to pay for anything except the bandwidth between them and their ISP?

The ISPs are asking for an unworkable charging model. The only thing that might make the BBC situation slightly different is that its high-demand material may be slightly more predictable than some of the other content providers'.

Guitar maker Gibson thrashes out more robo-axes

Peter Gathercole Silver badge
Thumb Down

Is this what you want?

I can understand having a guitar that is always in tune whenever you pick it up, but to correct the tuning of what you are playing?

How will it cope with bending a note, or slides, or playing with a bottleneck?

You might as well put it through a post-instrument DSP dynamic tuning corrector between the axe and the amp.

UK.gov will force paedophiles to register email addresses

Peter Gathercole Silver badge
Joke

Government thinking (possibly an oxymoron?)

Of course, the Government could provide a state-sponsored email system, and force everybody to use that for all email....

....no wait. You would have to prevent use of out-of-nation email servers. OK, let's block SMTP and have a block list for foreign webmail servers. The ISPs can do this for us without cost if we mandate it by legislation...

...hang on, we then need to block tunnelled and anonymised connections. OK, also block anything that is encrypted....

...but that will block SSL. Never mind, Phorm will work so much better if SSL is not used. And once the Interwebnet tubes are unencrypted, we can filter content from abroad, and we won't have to worry about terrorists picking up bomb plans from foreign subversive sites.

Hell, let's just ban the Interwebnet. But wait, aren't we trying to push down costs by using it for tax and other government systems....

...and so on.

Anybody for a Police State?

New banking code cracks down on out-of-date software

Peter Gathercole Silver badge
Linux

Check the Ts & Cs

I think that if you look at the terms and conditions of most online banking services, you will find that they have a list of known and supported OS/Browser combinations, and I would be surprised if any Linux platform is listed. This gives them an immediate get-out from most Linux users.

My primary bank would like me to install agent software on my machine (at least, it did last time I looked) to access their online banking system. Of course, this is Windows-based.

And the AC who was talking about Linux viruses has obviously not taken into account how short the Wikipedia page about Linux viruses actually is, nor has he looked at the viruses listed. Many of them are old definitions, some are for products not involved with browsing, and virtually none of them will cross the user/system boundary unless you are stupid enough to be running the vector as a privileged user (root).

I'm not saying that Linux is invulnerable, and the increasing evidence of Flash/Java/JavaScript cross-platform attacks is worrying, but a well-maintained Linux system is probably safe from the most prevalent attack vectors. About the only place where Firefox is likely to be vulnerable, assuming it is installed into a system-defined location (rather than in home directories), is via a plugin. It is just NOT POSSIBLE as a non-system user to install such things as keyloggers, DNS redirectors, and default-route redirectors in a Linux system if system privilege is guarded well.
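
A trivial illustration of that boundary (the plugin path here is illustrative):

    /* As an ordinary user, check whether the system plugin directory
       is writable - on a properly set-up box it won't be, so nothing
       running as you can drop a trojaned plugin in there. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        const char *dir = "/usr/lib/firefox/plugins";   /* illustrative */
        if (access(dir, W_OK) == 0)
            printf("%s is writable - system privilege is NOT guarded\n", dir);
        else
            printf("%s is not writable by this user - as it should be\n", dir);
        return 0;
    }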

Of course, Linux is just as vulnerable to social engineering (i.e. phishing) attacks, but that is because the user is being targeted, not the OS or browser. In theory, it is possible to install anti-phishing plugins in Firefox, but such defences are only as good as the block database that is being referenced.

I'm just waiting for the banks to insist on content filters being mandatory for their services. When that happens, the simple port-filter firewalls implemented by most routers (and Linux iptables/ipchains firewalls) will not satisfy their requirements, and we will be further beholden to Microsoft.

By the Power of Power, IBM goes Power System

Peter Gathercole Silver badge
Unhappy

Oops, not Boca Raton

Not Boca Raton, that was PC, PS/2 and OS/2. I meant Rochester, of course.

And I missed out OS/2 on PPC, which was also intended to run on the same merged PPC platform, with a common microkernel and OS personality layers put on top for OS/400, AIX and OS/2.

Peter Gathercole Silver badge
Boffin

A long time coming

I remember having a presentation from an IBM bod from Montpellier describing a merged architecture about 20 years ago (I wonder if the non-disclosure agreement is still enforceable?). This used a unified backplane with common components, into which you plugged the relevant processor card, scheduled by a hardcoded VM implementation that on reflection looked like the current hypervisor. It used common memory between all processors, with I/O performed through the VM. The project was at that time called Prism, a term that has been used more recently just in the mainframe world for a hardcoded VM implementation (maybe that is a spin-off from the same research project).

I also remember, about 15 years ago, when it was announced that the Boca Raton people had taken the PowerPC roadmap and inserted the ppc 615 (I think) processor to run OS/400, extended with additional instructions to assist the running of that OS (and in the process, I understand, rescued the floundering PowerPC family, because Austin were having difficulty getting the ppc 620 - the first 64-bit member on the roadmap - running). Everyone in IBM was talking about merged product lines again then.

The smaller mainframes have long used microcoded 801 (the IBM RISC chip before RIOS and PowerPC) and PowerPC cores in such systems as the 9371 and the air-cooled small zSeries systems. I wonder if the full unification of the product lines is still on someone's roadmap? I'm sure that I was in a machine room some time in the last couple of years and had difficulty differentiating a p670 from a small mainframe that was close to it on the floor.

In reality, a lot of the memory, disks, tape drives and I/O cards have been almost identical between the AS/400-RS/6000 and [pi]Series for many years (back to when the RS/6000 was launched), with the only real differences being the controller microcode, the feature numbers and the price!

Intel enrols second-gen Classmate PC

Peter Gathercole Silver badge
Thumb Up

Seems familiar

I know they have common roots, but do the specs not look almost identical to an Eee PC now?

Yahoo! cuddles Google's bastard grid-child

Peter Gathercole Silver badge
Happy

@Sid

Sorry, I thought that "granulising data" was reasonably obvious and didn't think that it needed explaining, so I assumed that it was the other terms that needed explanation. My bad.

Peter Gathercole Silver badge
Boffin

@Sid re. SMP vs.MPP

SMP=Symmetric Multi-Processing

MPP=Massively Parallel Processing

With an SMP box, there is a single OS image that schedules applications across the processors. If you write threaded code, then most SMP implementations will schedule threads on separate processors without you having to write code that explicitly takes into account the fact that there are multiple processors.
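
As a sketch of the SMP case (the worker is made up for illustration), plain POSIX threads will do; the single OS image decides which processor each thread runs on, and the code never has to name one:

    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4

    static void *worker(void *arg)
    {
        long id = (long)arg;
        printf("worker %ld: the scheduler picks my processor\n", id);
        return NULL;
    }

    int main(void)
    {
        pthread_t t[NTHREADS];
        for (long i = 0; i < NTHREADS; i++)
            pthread_create(&t[i], NULL, worker, (void *)i);
        for (long i = 0; i < NTHREADS; i++)
            pthread_join(t[i], NULL);
        return 0;
    }

(Compile with -pthread.)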

With MPP, there are multiple OS images in the cluster, and you have to write to an API that will allow different units of work to be placed on different systems. This means you have to make the application much more aware of the shape of the cluster. This also means that if not written carefully, you may not get better performance by adding additional nodes into the cluster.
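
The MPP equivalent, sketched with MPI (again illustrative, not production code): every copy of the program runs on its own node, discovers its rank, and the programmer carves the work up explicitly:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which node am I? */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many of us?  */
        printf("node %d of %d: working on my slice of the data\n",
               rank, size);
        /* any data exchange between nodes is explicit message passing */
        MPI_Finalize();
        return 0;
    }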

Unfortunately, too many IBM SP/2 implementations were not really parallel-processing clusters, more like 'LAN-in-a-can' systems (goodness, where did I dredge that term up from?).

But what Google does is a quantum leap up from what SP/2s were capable of, and is much more like Mare Nostrum and Blue Gene/L.

Asus Eee PC 900 flips one at MacBook Air with multi-touch input

Peter Gathercole Silver badge

Screen res?

Is it really 1024x768? That is not a widescreen resolution, but one often used for 4:3-aspect laptop screens. Keeping the same aspect ratio as the original Eee's 800x480 panel, it would be 1024x614, and 1280x768 would also be about right.
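
The arithmetic, for anyone who wants to check it (this assumes the original Eee's 800x480 panel shape):

    #include <stdio.h>

    int main(void)
    {
        const double aspect = 800.0 / 480.0;   /* 5:3, the Eee 701 panel */
        printf("1024 wide -> %.0f lines\n", 1024.0 / aspect);  /* ~614 */
        printf("1280 wide -> %.0f lines\n", 1280.0 / aspect);  /* 768  */
        printf("1024x768  -> %.2f:1 (4:3 is 1.33:1)\n", 1024.0 / 768.0);
        return 0;
    }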

BBC Micro creators meet to TRACE machine's legacy

Peter Gathercole Silver badge
Coat

No better machine

The BEEB was clearly the most useful teaching computer, possibly of all time.

It was accessible to people who were only prepared to learn BASIC, and also to those who were prepared to use assembler. You could teach structured programming on it without any modification, and it also had languages like Forth, Pascal, Logo and LISP available. Although the networking was rudimentary (and fantastically insecure), it allowed network file and print servers to be set up very easily and cheaply (proto-Ethernet CSMA/CD for PCs at the time came in at hundreds of pounds per PC, plus the fileserver). Although it did not run VisiCalc or WordStar (the business apps of the time), it was still possible to use View or Wordwise, and ViewSheet or ViewStore, to teach office concepts. And it was possible to have the apps in ROM for instant startup.

I ran a lab of 16 BBCs to teach computer appreciation, and we had a network with a fileserver (and a 10MB hard disk!), robot arms, cameras, graphics tablets, speech synths, speech recognition units, touch screens, pen plotters, mice and more. This was around 1983. Show me another machine of that time that could do all of this. And all for a cost of less than £25K (which included building custom furniture).

I wish that schools still used systems that empowered their staff to develop custom-written software to teach their students. Nope. Only PCs.

I know many people (me included) who were prepared to pay for one of these machines at home. A classic.

So what's the easiest box to hack - Vista, Ubuntu or OS X?

Peter Gathercole Silver badge
Linux

@Don Mitchell

I think if you read the CERTs, you will find that a large number of the Linux vulnerabilities are theoretical, unexploited problems that have been identified by examination of the code. Do you really think that the buffer-overrun security problems were all discovered by experimentation? Many of these problems have not even got example exploit code published.

So, which do you trust more: the code that has been examined and found to have possible theoretical problems (which are fixed reeeal quick), or the code that has definite exploits published, and may not get patched for months? Just imagine how many problems would be found in Windows if the code were open, when this many are discovered by experimentation.

Please don't just count the exploits; examine them in detail, and then you won't be comparing apples and oranges.

Arthur C. Clarke dead at 90

Peter Gathercole Silver badge

Minehead mourns the loss of one of its famous sons

What I always liked about his writing was that it was science fiction grounded in science fact. Unlike many other authors, all of his innovations seemed to be possible given one or two advances.

Who will provide the realistic grand visions now?

Bag tax recycled into eco-PR slush

Peter Gathercole Silver badge
IT Angle

Bag use

OK, so I will still have to use plastic bags to put my rubbish in (I line all of the bins in my house, one per room, with supermarket carrier bags), and to sort my recycling (cans, glass etc.) into, but instead of recycling the bags from the supermarket, I will have to buy them.

I will probably still use a similar number of bags, and these will still probably end up in landfill. But guess who will benefit: the supermarkets. Instead of giving me bags, they will now be able to SELL me them. A cost item becomes one generating profit.

And there is another environmental downside. Current supermarket bags degrade over time in landfill, but the polythene bin bags used to replace them probably won't. I would also like to know what happens to the bags-for-life once the supermarkets have swapped them for new ones. Are they recycled? And what are the energy costs of the recycling process vs. the costs of making disposable bags?

All I am saying is that nothing is simple or obvious.
