* Posts by Peter Gathercole

2952 posts • joined 15 Jun 2007

Oi, Elon: You Musk sort out your Autopilot! Tesla loyalists tell of code crashes, near-misses

Peter Gathercole Silver badge

Re: Say what you like about Teslas @bob

I drive a lot on un-lit roads (it's a hazard of living in a rural environment), and it is not just drivers who have their lights set too high that bother me.

The super-bright LED lights on cars coming in the opposite direction are enough to upset night vision even when they're adjusted correctly and not on high-beam. They're just too bright.

What surprised me a while back was that these super-bright lights are also being put on pushbikes. This is just wrong, especially when they are set to flash. Even if they don't flash, when you come across one, you have to look hard to see past them to make sure they are not a car with one light not working (and thus difficult to see how much of the road they occupy.)

And don't get me started on the stupidity that allows manufacturers to put indicator lights next to or surrounded by high brightness side lights, especially if the sidelight has to turn off when the indicator turns on to allow the indicator to be seen. You get a light that just appears to go from white to orange, without the required change in contrast. Why are they even allowed in the homologation tests!

A new Raspberry Pi takes a bow with all of the speed but less of the RAM

Peter Gathercole Silver badge

Re: I swerved the PoE hat @defiler

I'm pretty certain that RiscOS can only use one of the four cores, and it certainly won't use the 64-bit instruction set, so keeping it on the B is probably a good use of that board.

OK Google, why was your web traffic hijacked and routed through China, Russia today?

Peter Gathercole Silver badge

Re: So much for the original intent of the ARPANET

The original thinking for ARPANET did not include BGP. I believe that the alternative routing strategies were provided by static routing, with route preferences and hop counts providing alternate pathing.

For some history, look up RIP, one of the earliest widely deployed routing protocols (its BSD implementation, routed, shipped with 4.2BSD in 1983).

But RIP would never cope with today's massively complicated Internet. Since classful routing gave way to CIDR, allowing re-use of the previously reserved network ranges to keep IPv4 going, the routing tables that the core routers have to hold are HUGE.
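As a rough illustration of how a distance-vector protocol like RIP picks routes, here is a toy sketch (function name and data made up for illustration; the real protocol adds split horizon, timeouts and a 15-hop limit):

```python
# Toy distance-vector route selection: keep the lowest-hop-count
# route heard for each destination, RIP-style.
def best_routes(advertisements):
    """advertisements: list of (destination, next_hop, hop_count)."""
    table = {}
    for dest, next_hop, hops in advertisements:
        if dest not in table or hops < table[dest][1]:
            table[dest] = (next_hop, hops)
    return table

table = best_routes([
    ("10.0.1.0/24", "gw-a", 3),
    ("10.0.1.0/24", "gw-b", 2),   # fewer hops, so this one wins
    ("10.0.2.0/24", "gw-a", 1),
])
print(table["10.0.1.0/24"])  # ('gw-b', 2)
```

With a handful of routes this is trivial; the point of the comment above is that it stops being trivial when the table holds hundreds of thousands of CIDR prefixes.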

But considering how BGP hijacking has been known about for a long time, I'm surprised that it has taken this long for a key based trust system to be introduced.

Can your rival fix it as fast? turns out to be ten-million-dollar question for plucky support guy

Peter Gathercole Silver badge

Many years ago..

I went for an interview at Sequoia (they made large-for-the-time UNIX systems), and was presented with a test.

"There is a problem in the printing system that causes lpd to crash and corrupt the print queue. See if you can spot where the problem is likely to be," they said, before leaving me at the console of one of their test systems, with the root password.

I found the problem in about 10 minutes. I then proceeded to spend the time until they checked back to come up with a patch, work out how their compilation system worked, and compiled it ready for deployment.

I did all of this, worked out what I would do to test it, and ended up twiddling my thumbs for some time until they decided to check back with me.

From their reaction, I don't think they expected me to even find it, but I knew my way around both the System V and BSD UNIX source trees quite well. They made great noises about how well I would fit into their support team, how good it would be if I could join them, and how the local Managing Director wanted a chat with me, before admitting that they could not even match the package I was getting where I was working at the time (even though the job agent knew exactly what I would need).

So I left furious, as I would not have turned up for the interview if I had known the maximum package they were offering. I think that the agent was using me as a foot-in-the-door for other candidates.

I was not happy with the agent, even though we had been on quite good terms.

My hoard of obsolete hardware might be useful… one day

Peter Gathercole Silver badge

Hmmm. Maybe it is time to..

I only recently ditched all of my ISA and EISA sound, graphics and communication boards. You know, from the time that PCs didn't even have a serial port on the motherboard.

I conceded that not even having any motherboards from the era to put them in probably meant that they were surplus to requirements.

I must get round to ditching all of the <1GB drives sometime, but I've just got to check that there's nothing important on them....

Peter Gathercole Silver badge


If you ever do get round to ditching your 1970s vinyl, give me a call. I will consider coming and picking it up.

Many of them are better (or at least more authentic) than the 're-mastered' compressed copies that you can get on digital download.

This just in: What? No, I can't believe it. The 2018 MacBook Air still a huge pain to have repaired

Peter Gathercole Silver badge


It's funny.

30 years ago, I would have agreed with you about ESD damage.

But since then, although I have taken hundreds of laptops and other computers apart, many of which I used myself, I don't think that I can attribute any of the (relatively few) failures to ESD damage.

I do take minimal static precautions, like having something earthed close to me that I will touch periodically, and before I touch the processor, but I don't completely follow the rules, and I don't use an anti-static strap.

I know, you're going to quote cumulative static damage, which may be true, but I think that chip design, for all its modern complexity, has meant that unless you really zap stuff, it's likely to survive with only moderate precautions.

And I think that this is true across the computing spectrum. In my last post, I was working with hardware engineers on supercomputers, and they were not that different, even when changing very complicated assemblies (but of course, there was plenty of grounding around when working on equipment that was still connected to the power infrastructure, as was the case with these systems.)

Monster mash: Spectra Logic's tape library now twice the beast it was

Peter Gathercole Silver badge

Many organizations do

Pretty much anybody who has a need to collect and keep vast and ever growing quantities of data.

My most recent experience is of meteorological data. As forecasts run at ever increasing resolutions, the amount of generated data that needs to be kept grows at an ever increasing rate.

Docker invites elderly Windows Server apps to spend remaining days in supervised care

Peter Gathercole Silver badge

Nothing is new

Looks like the same concept as AIX 5.3 vWPARs that allow you to run apps from old AIX versions on modern Power boxes with supportable levels of AIX.

Only been around for about 7 years.

Macs to Linux fans: Stop right there, Penguinista scum, that's not macOS. Go on, git outta here

Peter Gathercole Silver badge

Re: Great plan Timmy. @Caffinated Sponge

My previous post on this was a little incomplete. I had not realized that on Intel Secure Boot systems, there is a 'shim' bootloader, signed against a Microsoft certificate, that can isolate grub and the kernel from Secure Boot. This shim does additional signature checking, and holds certificates maintained by the sysadmin to allow locally compiled versions of grub to be booted. So only the shim needs to be signed against a certificate in the Secure Boot database in the UEFI.

But my original point is that the certificates installed in the Secure Boot system are entirely under the control of the hardware vendor. For the UEFI used on Intel systems to boot Windows, the main certificate holder is Microsoft. Microsoft has come up with this method to allow some Linux distributions to get grub or another bootloader signed against the shim certificates.

UEFI does have a facility to install new certificates, but I think many systems have this disabled so the only certificates that can be used are those that were installed when the system was created.

I suspect that on latest Apple hardware, the only certificate holder is Apple, and only Apple certificates are installed.

If Apple choose not to sign the shim bootloader, then you can't run Linux. There's nothing the Linux community can do about it; it's completely at Apple's discretion.

I think that the cryptography involved in signing with a certificate is sufficiently advanced that you can't 'steal' a certificate. There is magic (read: a cryptographic hash, signed with the certificate holder's private key) in the signature that confirms the signed code has not been tampered with. So the only solution is to obtain a correct signature for your code, and if Apple don't want to grant one, then tough.
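For illustration, the tamper-check idea can be sketched with textbook RSA and a deliberately tiny, insecure key (real Secure Boot uses X.509 certificate chains and far larger keys; everything below is a toy):

```python
# Toy code-signing sketch: hash the code, "sign" the hash with a private
# exponent, verify with the public exponent. Textbook RSA, insecure key.
import hashlib

p, q = 61, 53
n = p * q           # public modulus (3233)
e, d = 17, 2753     # public / private exponents

def digest(code: bytes) -> int:
    # Reduce the SHA-256 hash mod n so it fits the tiny key.
    return int.from_bytes(hashlib.sha256(code).digest(), "big") % n

def sign(code: bytes) -> int:               # done by the vendor (private key)
    return pow(digest(code), d, n)

def verify(code: bytes, sig: int) -> bool:  # done by the firmware (public key)
    return pow(sig, e, n) == digest(code)

boot = b"bootloader image"
sig = sign(boot)
print(verify(boot, sig))                  # True
print(verify(b"tampered image", sig))     # almost certainly False: hash changed
```

Changing even one byte of the "bootloader" changes the hash, so the old signature no longer verifies, which is exactly why you can't lift a signature from signed code and reuse it.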

I can totally see why some people want to be able to prove that their system is secure, and is only running software from a recognized source (I won't say trusted, because I think that some OS vendors have abused any trust that they once may have had), but the mechanism used is a double-edged sword which allows these organizations to eliminate rival and alternative OS installations.

So far Microsoft have been prepared to play fair. But there is absolutely nothing that says that they will remain that way. Remember, the last E in EEE is Extinguish...

Peter Gathercole Silver badge

Re: Great plan Timmy. @AC

You've missed the critical point here.

It is not down to the Linux community to get their code certified, it is for Apple to include the existing certificates that Linux can use into the certificate store in the secure boot system.

If Apple do not want to include a certificate Linux installs can use, there's pretty much nothing that the Open Source community can do to make Linux run without breaking Secure Boot.

This was evident when the original Palladium security system was being mooted back in 2002. Some people, like Ross Anderson, spotted this and rang the warning bells, but not many people heeded them.

Just because Windows 10 currently requires a PC to have the ability to turn Secure Boot off does not mean this won't change in the future, and having Secure Boot even present means that at some point it could be enabled, restricting the choice of every owner of a PC that has it.

I agree that you can choose to not buy Apple hardware. But generally speaking, it's nicely engineered (or has been in the past, not so sure now) and used to be a good choice if you were prepared to pay the premium.

IBM sits draped over the bar at The Cloud or Bust saloon. In walks Red Hat

Peter Gathercole Silver badge

Re: That's all very well, but ...

Hey, maybe they could RA Lennart!

Or even better, put him in a quiet corner where he can't wreck any more of the Linux/UNIX legacy in the name of making it like an offering of another large IT company.

Zip it! 3 more reasons to be glad you didn't jump on Windows 10 1809

Peter Gathercole Silver badge

@Timmy B

I have not really had any problems printing to mfds from manufacturers like Epson, HP and Brother, although using the fax or scanner functions on remotely connected mfds can be a bit of a problem. For the Brother I'm currently using, I had to download and install their Linux printer definitions, plus a script to install them, but that's not really that much different from Windows.

I did have a terrible battle with a cheap Lexmark mfd from Linux some years ago, but they don't appear to make devices for the home market any more, and I'm fairly certain that their laser printers can be driven as generic PostScript or PCL devices without any additional software.

When I got my first HP mfd, I plugged it into my laptop, which was running Hardy Heron (8.04), using USB, and was amazed to find that Ubuntu recognised the device and created both print and scanner devices for it that allowed me to use it immediately, with almost no intervention from me (I think I may have had to tell it that the paper size was A4).

So I would be interested in hearing which manufacturers' mfds you're struggling with?

Roughly 30 years after its birth at UK's Acorn Computers, RISC OS 5 is going open source

Peter Gathercole Silver badge

Re: it was a joy to work in and ahead of it's time for creating structured code @Mage

I have often thought about what it was that made the micro revolution happen in the 1980s.

My thoughts are that one of the reasons was the immediacy of getting something done that hooked the youth of the '80s. Rocking up to a machine, typing a four or five line program followed by RUN, and having colours splatted all over the screen, or random beeps coming from the speaker says to a newbie "look, you can do magical things", and they're hooked, almost in no time flat.

BASIC was the best tool at the time for this first step. Quick to learn, easy to remember, and immediate.

I look at what is necessary to learn Pascal, Modula and the other compiled languages. First you have to learn the editor. Then you have to write the code. Then you work out how to compile, and only then (assuming that you don't get any cryptic errors from the compiler), you get to see the results. Even using IDEs puts too much complexity in the first step before you achieve anything.

Most of the youth of today will turn off after exhausting their limited attention span at the point that you have to invoke the compiler. And this IMHO is the problem with most modern languages used for teaching.

Add to this the need to learn quite complex language constructs before being able to write syntactically correct code in things like Python, currently the poster boy of teaching languages, and you will turn off more kids than you attract, even if they are quite able.

I saw this in the early '80s. I worked in a UK Polytechnic, and had several intake years on HNC and HND computing courses coming in having learned BASIC on Spectrum, VIC-20 and C64 systems (amongst others), who sat down at a terminal, learned how to log in and use an editor like EDT, and started writing Pascal, complaining bitterly that this was not what they thought computing was all about, and why was it so complicated! Once they got over the hump, they were fine, but some did not get that far.

Similarly, my father learned to program on Spectrum and BBC micros, and as a retirement present in about 1992 was given an 8086 MS-DOS PC. One of the first things he asked me was "How do I write a program that draws pictures and plays sounds?" (things he had been doing for years to aid his teaching), and I had to say that this was not built into MS-DOS, or even GW-BASIC, without extra software packages.

I don't believe that he ever wrote another program again.

Your comment about Forth is interesting. I learned Forth as an additional language (PL/1, APL, C and Pascal were my primary languages then) back in the 1980s (ironically on a BBC Micro with the HCCS Micro Forth ROM, not Acornsoft Forth), and I would say that it is an extremely poor language for a newbie to learn programming in. The stack-based arithmetic system is completely non-intuitive to someone who has not already studied computing (good grief, most people have difficulty understanding and using named variables in a computer program), and although you can define meaningful words in the dictionary, most of the primitives are terse and impossible to guess the meaning of without reading the manual. And even getting to the point where you could define a word would tax most kids I have known.
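To show why the stack model trips newcomers up, here is Forth's "3 4 + 2 *" arithmetic sketched as a toy RPN evaluator (written in Python rather than Forth, purely for illustration):

```python
# Toy reverse-Polish evaluator: operands go on a stack, operators pop
# their arguments off it -- the arithmetic model Forth uses.
def rpn(words):
    stack = []
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b}
    for w in words.split():
        if w in ops:
            b = stack.pop()   # note the order: the second operand comes off first
            a = stack.pop()
            stack.append(ops[w](a, b))
        else:
            stack.append(int(w))
    return stack[-1]

print(rpn("3 4 + 2 *"))  # 14
```

A beginner has to hold the stack in their head to see why "3 4 + 2 *" means (3 + 4) * 2, which is exactly the hump described above.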

At least most Fortran/Algol/BASIC/COBOL based languages have keywords closely matching English, and sometimes mathematical notation. And BASIC scores well in not having strict typing, something that becomes more important as you get more proficient, but a real barrier to someone just learning.

So in my view, as a first stepping stone, BASIC is a good start to gain the concepts of programming, followed on by a move to a more comprehensive language. And BBC BASIC was one of the fastest and best.

Could it be bettered? Yes, I'm sure it could, but Javascript and Python are not it!

Peter Gathercole Silver badge

Am I sure?

I'm no expert, although I have looked into the memory layout of RiscOS because I was interested.

It may be that over the different versions of RiscOS, new features were included, but Wikipedia indicates RiscOS 2 did not have virtual addressing, and I saw nothing in the remaining history to indicate that it was added later.

It is quite true that MEMC did have memory protection capabilities, but from what I have read, it was not used in the earlier versions of RiscOS, although I am sure that it was used in RISC iX.

I find it hard to believe that the current versions of RiscOS do not have memory protection, but my original post was really about RiscOS under Acorn's custodianship.

Peter Gathercole Silver badge

...and it did!

If you look at the core implementation of modern Intel and many other processors, including the zSeries mainframe, they have microcoded RISC processors in them.

And that is not taking into account the remaining RISC processors, ARM, IBM PowerPC (although this is the most un-RISC RISC processor I've ever seen), RISC-V, and MIPS derived processors that are still available.

And I don't think that the micro-controllers and PIC processors that you find embedded in many millions of devices would exist without the research done for RISC processors.

Peter Gathercole Silver badge

Re: "Underpowered"?

According to Wikipedia, the BLiTTER functionality was added to the ST range in late 1989, with the introduction of the STE models.

This is Wikipedia, I know, but for things like this it is mostly correct.

The A400 model of the Archimedes was launched in June 1987.

The Master 128 was a continuation of the 8-bit BBC microcomputer range, which is why it was still available for continuity purposes for schools, even after the Archimedes was launched.

Peter Gathercole Silver badge

Re: RiscOS really was magnificent but...

Whilst it was co-operative multitasking with no interrupt driven scheduler, the points at which a process could lose the CPU were built into many of the system calls, including the ones to read keystrokes from the keyboard and the mouse position.

What this meant was that if you were doing any I/O through the OS, there were regular points where control could be wrested from a process.

That's not to say that it was not possible to write a process that would never relinquish the CPU, but most normal processes are not written like that.

The real issue (IIRC) is that the earlier versions of RiscOS did not enforce any memory space virtualisation or separation. All processes had to be written as position independent code that could sit anywhere in memory, and used the OS to manage their data space. This meant that in this day and age, RiscOS would be regarded as a really insecure OS.
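The yield-on-system-call idea can be sketched with Python generators (a toy model with made-up task names; the real RiscOS Wimp poll mechanism differed in detail):

```python
# Co-operative multitasking sketch: a task only loses the CPU when it
# reaches a yield point, standing in for an OS call like a keyboard read.
def task(name, n):
    for i in range(n):
        # ... do some work ...
        yield f"{name} step {i}"   # control can be taken away here

def scheduler(tasks):
    log = []
    while tasks:
        t = tasks.pop(0)
        try:
            log.append(next(t))    # run until the task's next yield point
            tasks.append(t)        # round-robin: back of the queue
        except StopIteration:
            pass                   # task finished; drop it
    return log

print(scheduler([task("A", 2), task("B", 2)]))
# ['A step 0', 'B step 0', 'A step 1', 'B step 1']
```

A task that never reaches a yield point (a busy loop with no OS calls) would hog the CPU forever, which is exactly the failure mode of a co-operative system.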

Peter Gathercole Silver badge


You make the comment that the graphics on Archimedes was underpowered, but you have to put in historical context.

The original Archimedes hails back to 1987. At that time, the Atari 520ST was available and the Amiga 500 was released. The Atari had no graphics assist beyond some sprite handling; the Amiga had a blitter which automated block transfers of memory, but only in the first 512K of memory.

The Archimedes was able to do everything that the others could just using the power of the ARM processor, and was not at a serious disadvantage.

And at the same time in PC land, you had the CGA, EGA and early VGA adapters (plus the third party graphics cards) that did almost no processing on their own, and provided a dumb frame buffer that was manipulated by the underpowered (compared to the ARM) main processor.

As the ARM was an efficient full 32-bit RISC processor (as opposed to the 16- or 32-bit register, 16-bit data path of the Intel and 68000 based systems) with good memory access and a high clock speed, it was able to drive a frame buffer as well as or better than almost everything else available at the time. The Amiga had some advantage due to its blitter, but IIRC it had some serious limitations in what you could do with it.
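What a blitter automates is essentially this rectangular copy within a frame buffer, done in hardware rather than by the CPU (a toy sketch with made-up dimensions; real blitters also handled masks, shifts and logic operations):

```python
# Toy blit: copy a w x h block within a flat frame buffer, one
# scanline segment at a time. One byte per pixel for simplicity.
WIDTH, HEIGHT = 8, 4
fb = [0] * (WIDTH * HEIGHT)

def blit(fb, sx, sy, w, h, dx, dy):
    for row in range(h):
        src = (sy + row) * WIDTH + sx
        dst = (dy + row) * WIDTH + dx
        fb[dst:dst + w] = fb[src:src + w]   # copy one scanline segment

# paint a 2x2 block at the origin, then blit it to (4, 2)
for y in (0, 1):
    for x in (0, 1):
        fb[y * WIDTH + x] = 7
blit(fb, 0, 0, 2, 2, 4, 2)
print(fb[2 * WIDTH + 4], fb[3 * WIDTH + 5])  # 7 7
```

On the Archimedes, loops like this ran on the ARM itself, fast enough to keep up with the dedicated hardware in its rivals.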

Where it fell behind was when the clock speed of the Intel processors started being pushed up into the tens-of-MHz range, mainly because Acorn did not have the resources to build higher speed ARMs and ARM based systems. But this was a financial limitation, not a technical one.

And of course Acorn never got into the graphics co-processor market that only hit the mainstream after Acorn was split up.

Erm... what did you say again, dear reader?

Peter Gathercole Silver badge

To be human.

If a human had produced the transcription of the statement, they may well have missed out the "erm", or possibly qualified it with some descriptive text.

But when you get a machine performing the transcription, all of these little hesitations, repetitions and deviations are reproduced verbatim.

Sounds like Norman is a fan of "Just a Minute"!

Your pal in IT quits. Her last words: 'Converged infrastructure...' What does it all mean? We think we can explain

Peter Gathercole Silver badge

Re: I think I get it now @Pascal

I think you need to change point 2. I think it should be:

2) Complicate everything by giving existing technology a new incomprehensible name and make sure to constantly repeat that it is different and important.

Microsoft yanks the document-destroying Windows 10 October 2018 Update

Peter Gathercole Silver badge

Re: "were made available for other OS" @ defiler

Just because different GUIs are available on Linux, it does not prevent applications with different look-and-feel from running simultaneously on a system.

While it is true that a mixture of user interfaces on screen at the same time may look messy, it's no reason to say that you can't run an application on a Linux system if you are using a different UI from the one the application expects (and even if this is the case, you can have multiple GUIs installed, and switch between them).

I always have the Gnome and KDE support libraries installed (along with several others) on my primary workstation, and this means that I can run an application meant for almost any Linux GUI.

Buttons and menus may appear different from one app to another, but if an application is important, you get to know where to look for that app. Not ideal, but not a show-stopper.

Peter Gathercole Silver badge

Re: "were made available for other OS" @AC

You don't understand the relationship between the GPL and LGPL.

GPL is not a barrier to writing commercial software on Linux, as all the bits you need (most development libraries, compilers and GUI support) are published under LGPL or other fairly permissive license, which allow you to compile, link and ship code without the GPL requiring you to open-source your code.

Properly packaged Linux packages can have quite good portability within a system architecture (like x86-64), although sometimes the version checking for some of the libraries throws up unnecessary problems. But that's not much different from DLL-hell on Windows.

Going forward, packaging formats like Snap and AppImage will make Linux portability even better.

WLinux brings a custom Windows Subsystem for Linux experience to the Microsoft Store

Peter Gathercole Silver badge

Re: Why?

You might not, but Microsoft are hoping people who might have experimented with Linux will do it this way, rather than setting up a dual boot system (which may result in Windows never being booted again!)

And if the user gets it running well enough, why would you even consider installing a native Linux distro?

Microsoft is Embracing Linux. The rest will follow!

That scary old system with 'do not touch' on it? Your boss very much wants you to touch it. Now what do you do?

Peter Gathercole Silver badge

Re: Put it all together...

Worse than that, it was called the All-aggro!

Dead retailer's 'customer data' turns up on seized kit, unencrypted and very much for sale

Peter Gathercole Silver badge

Re: How's this different than normal?

Normally, kit like this is sold by the liquidator or administrator to settle debts, pay creditors (after lining their own pockets, of course, as preferred creditors).

Put the onus on them to clean the data from any kit that is sold on, and let them pass that obligation on to any disposals company that is engaged to clear a site. Make the liquidator liable to a penalty if customer data leaks from a company they've closed down.

It will probably mean more perfectly usable kit being destroyed rather than recycled, and possibly make IT equipment more of a liability than an asset, but it is perfectly doable.

Ubuntu flings 14.04 LTS users a security lifeline, chats some more about Hyper-V

Peter Gathercole Silver badge

Why stick to 14.04?

One should point out that 14.04 is the last LTS release that does not have systemd as the default init (it is still there, but upstart is running as process 1).

This may make some shops stay with 14.04 for a bit longer.
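For anyone wanting to confirm which init is sitting at process 1 on a given box, a quick check is (Linux-specific, since it reads /proc; the path parameter is just to make the function testable):

```python
# Report the name of the process running as PID 1 -- "upstart" on a
# default 14.04 install, "systemd" on 16.04 and later.
import os

def init_name(path="/proc/1/comm"):
    with open(path) as f:
        return f.read().strip()

if os.path.exists("/proc/1/comm"):   # skip quietly on non-Linux systems
    print(init_name())
```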

Apple hands €14.3bn in back taxes to reluctant Ireland

Peter Gathercole Silver badge

What I want to know

is what the Irish Government intends to do with the ~3000 Euros per head of their population (there's approximately 4.77 million Irish residents).
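The back-of-the-envelope arithmetic, for anyone checking:

```python
# EUR 14.3bn spread over roughly 4.77 million residents.
windfall = 14.3e9
population = 4.77e6
per_head = windfall / population
print(round(per_head))   # 2998 -- call it ~3000 euros each
```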

I'm sure that Irish tax payers would love a tax rebate, or even money put into the country's infrastructure.

I know, they could use it to build the border with NI, when we hit the WTO rules next year.

Watt the heck is this? A 32-core 3.3GHz Arm server CPU shipping? Yes, says Ampere

Peter Gathercole Silver badge

Re: Drivers ?

Probably crossed terms.

ISA also means Instruction Set Architecture, which is what the ARM ISA is.

Nowadays, pretty much all devices work through PCIe 3, so device drivers are much less of an issue than they were.

Most people building x86 systems see the legacy BIOS, keyboard, serial and parallel ports as being something that ought to be culled from modern systems (some have done it already), and I really don't think you mean that there is still support for the 8-bit 'ISA' adapters that were in the original IBM 5150 PC.

Euro bureaucrats tie up .eu in red tape to stop Brexit Brits snatching back their web domains

Peter Gathercole Silver badge

Re: Eh? @Taiwas

"Deal or no deal - they can't decide in a unified way, which they truly want"

This was always going to be the case. There could never be a deal that would keep a majority of the Government, Parliament or the Electorate (on both sides) happy. I feel that this is what the remain side are expecting to happen if there is a public vote on the resultant deal.

If you quantify the likely factions you have:

Those who want to remain in the EU.

Those who want to leave under any circumstance, and would sever ties tomorrow if they could.

Those who want to leave because of immigration, but want to keep the remainder of the EU advantages.

Those who are ambivalent about immigration, but want to get rid of some of the EU regulations.

Those who want to leave to enable the UK to trade better with the rest of the world.

Those who want to leave because the direction of the EU is towards a federal state, which they don't want (this is my position).

Now, take any combination of these, and see whether you can get a majority. Tricky, isn't it?

If there is a referendum on a deal, any deal, it will be voted down. This is why the PM will resist a further vote (even in parliament), because she will lose. The remain camp hope that this will mean that we will stay in! But it is more likely that we will leave on WTO rules, with the hardest of Brexits, and no transitional period.

The Government have an impossible task, which is why they cannot come to an agreement. It's not (all) their fault!

A basement of broken kit, zero budget – now get the team running

Peter Gathercole Silver badge


In my first job programming (in RPG II) in a card-and-batch environment in local government, I got frowned at for working out how to use the JCL to check whether a compile had completed without errors or warnings, so I could have a test run immediately following it in the same card-deck.

Saved me at least 20 minutes per iteration (and sometimes much more), and normally meant that I had twice the number of decks in the queue than all of the other programmers (you had to work on more than one programme because of the turnaround time in the batch queue).

Although the powers-that-be were merely disapproving of this, when I spent time trying to work out how to use the archive manager (analogous to SED, IIRC) to patch virtual card decks (rather than having the whole deck re-punched, patched, and then re-added to the archive, I kid you not), I was hauled aside for being 'disruptive'.

So at the end of the first year, when I was told I didn't merit a pay rise from the stupidly low starting salary, I told the manager exactly what I thought about RPG II (I think I described it as a jumped-up macro assembler - I had previously been programming in PL/1, APL and C on UNIX at university), and said I would be looking around for another job immediately!

Probably was a good move, actually, because I ended up working at a Polytechnic deep in the guts of UNIX V6 and 7 on a non-standard SYSTIME PDP11, which really set my career.

Microsoft: You don't want to use Edge? Are you sure? Really sure?

Peter Gathercole Silver badge

Re: Block IE and Edge @Updraft

For my own use, I am completely Windows free.

I do dual-boot my laptop, and I did boot into Windows a few weeks back to check (and fix) it after I migrated the whole system to an SSD (and I put the latest Windows patches on), but other than that, all my day-to-day computing is Linux or UNIX only.

I agree that it is about the applications you run, but to claim re-training your staff is a reason is just FUD.

Windows has some serious advantages at an Enterprise level (AD, Group Policy, SharePoint [or whatever it's called now]), but for many organizations a sensible and properly resourced Linux implementation is possible. Where I've seen this, though, it often only takes one person in authority (or who can be influenced) to push for a return-to-Windows strategy for this to happen (see the background to the Munich City travesty - exactly why did Microsoft choose to become a major employer close to Munich?)

No, unfortunately Microsoft still have far too much influence, so (native) Linux will never have its day on the desktop. Chromebooks, however...

Dust off that old Pentium, Linux fans: It's Elive

Peter Gathercole Silver badge

Re: If it's snappy on old kit... @insane_hound

Yes, 640x480 in 16 colours comes out at exactly 150KB for the primary frame buffer, assuming 4 bits per pixel.
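The arithmetic, for anyone checking:

```python
# 640 x 480 pixels at 16 colours is 4 bits per pixel.
bytes_needed = 640 * 480 * 4 // 8
print(bytes_needed)           # 153600 bytes
print(bytes_needed // 1024)   # exactly 150 KB
```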

You would have struggled to get a dual-frame buffer working, but for 2D single buffer, it was ample.

When it comes to things like graphics intensive games, sometimes they used to render a frame in main memory, and blit it out to the graphics card's frame buffer, so for some purposes, it may have been necessary to increase main memory.

Remember that unlike today, when the graphics card can do complete object rendering with texture mapping, light sourcing and now even full-blown ray tracing, early VGA and SVGA cards did not have a huge amount of intelligence, so often the main processor did most of the work.

Trainer regrets giving straight answer to staffer's odd question

Peter Gathercole Silver badge

Yup, that does it

When I ran my own company, a family friend asked me to price up a repair for a Packard Bell desktop, and put it on headed notepaper.

I found out (after I had written a 'no economical repair possible' report - Packard Bell systems had proprietary power supplies and motherboards) that he used it in an insurance claim, and he admitted to switching the power supply to 110 volts to deliberately break it (after all this time, there's no comeback, as unfortunately he is no longer with us).

The mobo and graphics card were fried, as well as the power supply, but the Pentium 120 that was on the mobo survived (this indicates how long ago it was), and went on to run fanless in my built-from-scrap-parts firewall system for several years.

Peter Gathercole Silver badge

I wish I knew that 15 years ago

One of my kids spilled something orange and very sticky over one of my two IBM Model M keyboards (Tizer or Irn Bru, they did not own up to it so I never knew for certain).

At the time, I had not heard that Model Ms could survive a dishwasher, so I went through the entire process of stripping it down (boy, you need some deep sockets) and cutting off the melted plastic rivets that hold together the plastic case containing the rockers, springs and membranes, and then suffered the problem of the conductive tracks peeling off when I opened the membrane up to wash it.

I cleaned, attempted to repair the tracks with conductive paint, and reassembled the keyboard, adding small nuts and bolts to replace some of the plastic rivets, but unfortunately it never completely worked again, so the keycaps, space bar and cable were salvaged, and with deep reluctance, it was consigned to the recycling centre.

About 6 months after I had failed to repair it, I heard about the dishwasher trick (and now I know that Unicomp sell replacement membranes as well), but it was too late. I was mortified. Needless to say, there is a no-sticky-drinks rule whenever the kids come anywhere close to my remaining Model M.

But I know all about how a Model M is made

Rubrik says bye to global sales boss

Peter Gathercole Silver badge

I hope that the picture is a visual pun...

...as the cube is attributed to Rubik.

SCO vs. IBM case over who owns Linux comes back to life. Again

Peter Gathercole Silver badge

Re: I thaught Novell owned the property @ Daniel von Asmuth

IBM had and have valid AT&T UNIX source licenses, and were part of Project Monterey, which included the Santa Cruz Operation (before Caldera bought them), so I think that it is very likely that IBM also had an SVR4 source license.

I would assume that, unless IBM's SVR4 source license explicitly prohibited SVR4 code from appearing in AIX, IBM behaved entirely appropriately with regard to AIX.

The initial product of Project Monterey that IBM produced was a version of AIX 5L running on Itanium. This was harmonized with the release of AIX 5.1 on Power, so that the new features in 5L were also in 5.1 (which some people, even in IBM, did call AIX 5.1L or AIX L 5.1).

I actually did a bit of investigation on an AIX 5L Itanium system, and decided it looked like AIX on Power, walked like AIX on Power, and quacked like AIX on Power, so it was just another AIX platform (apart from some features that were still missing), and promptly decided to deliberately lose interest until Itanium systems running AIX 5L appeared in the marketplace, which they never did.

What IBM were accused of was not copying UNIX code to AIX, but of copying UNIX code into their contributions to Linux. TSG (ex. Caldera) claimed they had a right to rescind IBM's UNIX source licenses on the strength of this accusation, something that was not possible as the licenses were in perpetuity, and then tried to accuse all IBM's AIX customers of running UNIX variants illegally. IBM promised to defend any AIX customers from TSG's claims of running UNIX illegally if they ever were taken to court, so TSG never carried out any of their threats to AIX customers.

TSG's management were idiots!

No do-overs! Appeals court won’t hear $8.8bn Oracle v Google rehash

Peter Gathercole Silver badge

Re: On the one hand @bazza

The SCO case originally hinged around SCO's assertion that IBM included parts of the source code, obtained under the UNIX source code licenses that they held for IX and AIX, in the code they contributed to Linux (particularly LVM code).

What became apparent is that the only code in Linux that came from a UNIX source tree was from ancient UNIX (Edition/Version 7), which SCO themselves had put into the public domain under a fairly unrestricted license. When SCO, with full access to the AIX source tree, were unable to demonstrate anything more than a general resemblance in the TTY and other device switch code (which was basically a series of C switch [case] statements that made no sense to write any other way), that part of the case collapsed.

It then became muddied, because Novell successfully challenged SCO's claim to be the copyright holder of the UNIX source in the first place!

Apart from the entertainment value, I'm so glad that those cases have finally been put to bed.

In this case, I thought that Google had been accused of directly copying the include files (and only those files) that essentially defined the API between the application and the runtime. I thought that Java had actually been published under a fairly permissive license by Sun (as they were very keen to get it adopted as a pervasive language), so I'm actually surprised that this case has come to this conclusion. But I suppose it's Oracle, so all reason goes out of the window as greed takes over.

Not that Google's any better these days.

Experimental 'insult bot' gets out of hand during unsupervised weekend

Peter Gathercole Silver badge

At University at Durham (and Newcastle)..

..they ran a slightly obscure OS on their S/370 called MTS (the Michigan Terminal System).

Unusually for a mainframe OS of the time (I was there from 1978 on), it drove interactive terminal sessions, and our use was controlled by accounting limits. Not surprisingly, these limits were, well, limiting.

I found two ways around this. One was that if you allocated a temporary disk (which allowed you to borrow disk for that session, but which disappeared when you logged out), and then explicitly relinquished it, the space would be added to your permanent disk allocation!

The other was that when a new year started, the initial passwords on the subsidiary computing students' accounts were predictable, so you found one, but didn't change the password. You then watched for any activity. If after a suitable time you did not see the account being used (which was possible, as subsid. students tended to swap courses), you could then appropriate the account.

This was how I managed to get enough interactive time to become (I believe) the first person (at Durham, anyway) to complete, with a perfect 500-point score, the version of the original Colossal Cave adventure with the Repository ("A crowd of dwarves burst through the hole in the wall, shouting and cheering..." or something similar).

Strange, almost immediately, the game stopped working at Durham. Coincidence?

Abracadabra! Tales of unexpected sysadmagic and dabbling in dark arts

Peter Gathercole Silver badge

Re: Case sensor

May have been stiction. One batch of 1GB IBM Spitfire disks had the wrong lubricant in the drive bearings, which would vaporize and condense on the platter. When the drive was stopped and the head parked, the head sometimes stuck to the condensed lubricant, which would prevent the disk spinning up.

A quick shock would free the head and allow the disk to spin up.

I believe some other drive manufacturers also had this problem as well.

Ex-UK comms minister's constituents plagued by wonky broadband over ... wireless radio link?

Peter Gathercole Silver badge

Re: @Tim11

I should have included the 'capitalism, red in tooth and claw' option, but in reality this is not an option for any government that wants to put essential services online while expecting people in rural locations to be able to pay the cost of their connectivity.

In reality, putting it on a profit basis will make rural locations less habitable1, because people will not be able to afford to live there for an increasing number of reasons.

No. In the UK, government has to treat broadband provision as an essential guaranteed service if it wants to reduce the cost of running government, and the telecom providers and media companies, who are looking for a connectivity inversion for their future business models, want it too.

1 Hey, you say, leave the country to those who can afford it! But a lot of farming (take out the farm owners and just look at their workers) and land management is a subsistence economy that pays just above minimum wage, and people on minimum wage cannot afford the high transport costs and lack of amenities, to which you can now add the high cost of doing business with government agencies like DWP and HMRC. As soon as the land is not managed, it will be hugely less attractive to tourists and people looking for second homes in the country.

Peter Gathercole Silver badge


This is the problem with a universal service in the modern age.

It used to be that all the easy customers would be charged a little bit more for their services, and the surplus would be used to provide a service for those people who needed a more expensive solution, without them having to pay more.

But now we must have 'value for money' and 'maximize shareholder return', and suddenly, you're not allowed to put in a non-profit making solution.

The only ways that this can be overcome are by re-nationalizing Openreach or BT as a whole and giving its near-monopoly back (shudder), or putting regulations in place that enforce a guaranteed minimum service for all customers.

But that last solution is unpopular with suppliers, because it limits maximum profit, plus someone in the future will come in and offer just the easy customers a cost-plus-small-profit service, undercutting Openreach on the very services it needs to cross-subsidise the more expensive customers.

It's the tension that exists everywhere in regulated capitalist systems, unfortunately.

Connected car data handover headache: There's no quick fix... and it's NOT just Land Rovers

Peter Gathercole Silver badge


Check the size of the product from the multipack.

If it's crisps, snacks or chocolate, especially if bought from Poundland or Iceland, the individual pack size from the multipack is probably smaller than the packs bought individually. This is the reason they're not supposed to be sold separately, so that the manufacturer does not get blamed for reducing the pack size.

The manufacturers do this to try to make the multipacks appear better value than they actually are!
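The effect is simple unit-price arithmetic. A quick sketch (every price and pack size below is invented purely for illustration):

```python
# Compare the apparent per-bag saving of a multipack with the real
# per-gram saving, when the multipack bags are smaller than the
# individually sold ones. All figures are made up.
def pence_per_gram(price_pence, grams):
    return price_pence / grams

single_bag = pence_per_gram(60, 32.5)         # one full-size 32.5g bag at 60p
multipack = pence_per_gram(240, 6 * 25.0)     # six smaller 25g bags for 2.40

per_bag_saving = 1 - (240 / 6) / 60           # looks like a 33% saving per bag...
per_gram_saving = 1 - multipack / single_bag  # ...but only ~13% per gram
print(f"per bag: {per_bag_saving:.0%}, per gram: {per_gram_saving:.0%}")
```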

Peter Gathercole Silver badge

Re: This needs some input from the DVLR @Lee D

The ability to use your passport photo as part of the driving license application has been there since about 2006. I was part of the project that implemented it.

I don't think that it is that radical to tack a bit of function onto the already existing process of registering a change of ownership of a vehicle. All of the generation of the V5C is already there, and it would be relatively trivial to add something like a code generation step and notification of change of ownership to the manufacturer (although it would have to include neither the previous owner nor the new owner information, for data protection reasons).

Techie's test lab lands him in hot water with top tech news site

Peter Gathercole Silver badge

"Surely you don't mean that!"

"I do, and don't call me Shirley"

Ad watchdog: Amazon 'misleading' over Prime next-day delivery ads

Peter Gathercole Silver badge


Prime is a multi-faceted offering. The two most obvious benefits are next-day delivery, which is not available on all Prime items (and it is possible to tell this from the listings - they nearly always say "not eligible for next day delivery"), and free shipping. There are other things too, like Prime-only items, which are only available if you are a Prime member, and some music and video media available to stream.

So it is possible to have an item on which you pay no shipping charges, but which will be delivered several days after the order. It's still a Prime item.

What I have problems with is when you order something that is "next day delivery", and you get an expected delivery date, a dispatch notification, and even tracking information that says it will be delivered, right up to the end of the delivery window, when you suddenly get a "we're so sorry, we have not been able to deliver" message (although you don't always get that).

I'm sure that the multiple instances of this that I've experienced are largely a result of the delivery company (normally Hermes; DHL always seem to deliver) pushing their delivery drivers beyond what is achievable. Normally, a swift complaint to Amazon results in a "free" month of Prime, but that's not much consolation when I needed the item on a particular day.

Three more data-leaking security holes found in Intel chips as designers swap security for speed

Peter Gathercole Silver badge

Re: Looking at the wrong holes @Warm Braw

I think you're not following current deployments. "multiple dedicated machines" do not exist in large organizations any more. They're all doing virtual machine deployments because the hardware vendors are selling these expensive super-sized systems with the express intent of them being carved up into VMs.

And here is the rub. If you cannot trust your process/vm hardware separation, you're in real trouble.

Of course, we could go back to an operating model where we have hundreds of discrete systems rather than a couple of very large systems with dozens of VMs, but space, power, cabling etc would take us back more than a decade, and the loss of flexible sizing would result in wasted resource due to having to have different sized discrete systems for different workloads.

Multi-user, multi-tasking systems have relied on access separation ever since they were invented more than 40 years ago. Pulling this out from under current operating systems would mean going back to the drawing board for OS design, even if it were possible.

Google keeps tracking you even when you specifically tell it not to: Maps, Search won't take no for an answer

Peter Gathercole Silver badge

Re: Nobody saw this coming? @Geoffrey W

From personal experience, if you're trying to move a flock of sheep from one place to another, it is quite often the case that if you can get a couple to move with purpose to where you want them to go, most of the rest of the flock will follow.

I've not done any in-depth sociological research, but I moved flocks of sheep hundreds of times when my father-in-law owned a sheep farm...

ZX Spectrum reboot latest: Some Vega+s arrive, Sky pulls plug, Clive drops ball

Peter Gathercole Silver badge

Re: What we need

The 6502 and Z80 clock issue is really a precursor to the great RISC vs. CISC debate.

Generally, the 6502 would execute a simple instruction in about 2 clock cycles. The Z80 required between 4 and 13 clock cycles per instruction, depending on what the instruction was (this is from memory), so although it generally had a faster clock speed and more sophisticated instructions, for many of the simpler operations that these processors typically ran, the 2MHz 6502 in the BBC performed tasks faster than the 3.5MHz Z80 in a Spectrum.

Memory access was also simpler on the 6502, which enabled it to work with slower memory than the Z80, mainly because memory and CPU clock speeds were linked together.

For complex workloads, the Z80 could run rings around a 6502, but in order to do that, you would need to have work that needed 16 bit registers, and used the complex instructions to their maximum benefit.

The Z80 was more memory efficient (so long as you used all of the instructions) although clever use of the indexed addressing modes of the 6502 could save memory, and allowed you to use zero page memory almost as registers on a 6502, negating some of the benefit of the Z80's more generous register set. The Z80 also had the basic support for bank-switched memory and port driven I/O, neither of which the 6502 had.

It's also worth remembering that processors of this age executed instructions strictly in the sequence they were written, with no overlapping or super-scalar execution, and all memory reads and writes went straight to RAM, with no caching or pre-fetch of instructions or data.

So the Z80 was a more sophisticated processor, but not necessarily a faster one than the 6502.
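Taking the from-memory cycle counts above at face value, the back-of-the-envelope comparison can be sketched like this (the cycles-per-instruction figures are illustrative averages from the comment, not measured data):

```python
# Rough effective instruction rate, assuming a fixed average number
# of clock cycles per instruction (CPI). CPI values are taken from
# the estimates above, not from measurement.
def instructions_per_second(clock_hz, cycles_per_instruction):
    return clock_hz / cycles_per_instruction

bbc_6502 = instructions_per_second(2_000_000, 2)      # 2MHz BBC Micro 6502
spectrum_z80 = instructions_per_second(3_500_000, 4)  # 3.5MHz Spectrum Z80, best case
print(f"6502: {bbc_6502:,.0f}/s  Z80: {spectrum_z80:,.0f}/s")
# Even at the Z80's best case of 4 cycles per instruction, the
# slower-clocked 6502 comes out ahead on simple instruction mixes.
```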

Drink this potion, Linux kernel, and tomorrow you'll wake up with a WireGuard VPN driver

Peter Gathercole Silver badge

Re: Because it's still a module?

I only read the article, and that contains "pouring his open-source privacy tool directly into the Linux kernel".

I see that it will remain a module, but the impression I get from this is that they are trying to get it compiled directly into the kernel. Maybe the article is misleading.

I actually have no problems with it remaining as a module, either official or unofficial, but there are only a very limited number of scenarios I can see where having it actually compiled in the kernel will benefit users or administrators.

Biting the hand that feeds IT © 1998–2019