* Posts by Peter Gathercole

2951 posts • joined 15 Jun 2007

Windows XP point-of-sale machine gets nasty sniffle. Luckily there's a pharmacy nearby

Peter Gathercole Silver badge

Re: Couldn't a Pi do the job these days ? @Roland5

But if you are a POS kit provider, you fork the kernel and the necessary parts of the toolchain away from the mainline distro, and then employ an in-house support team to maintain it yourself. Hey, you can even strip it down to its bare minimum so that it runs on lower-spec hardware and has a reduced attack surface.

Provided you resource it well enough, you can keep it running and in *your* support for as long as you want, regardless of what the main-line distro does.

In case you haven't twigged, this is the primary difference between using Open and Closed source software. And if you are not re-distributing the software, merely putting it on your own hardware that you support, you don't even have to make the source of your changes available (although there would probably be no real point in not making it available).

Radio gaga: Techies fear EU directive to stop RF device tinkering will do more harm than good

Peter Gathercole Silver badge

Re: Gonna ask what may be a stoopid question here...

More to the point, what does this say about Linux on laptop/desktop systems with WiFi adapters?

The WiFi adapters in these systems are just as much an RF generator as an access point. Does this mean that Linux cannot be used with WiFi without some vendor support?

At face value, this is what the article suggests, as the quotes state "software", not just "firmware".

That way madness lies!

Ubuntu 14.04 LTS media released with APT fix as end of support nears

Peter Gathercole Silver badge

Re: Thanks, but no thanks ...

You do realise that you could replace Unity, using packages from the repos, with something like LXDE, KDE, Gnome, or even Gnome fall/failback, the latter of which makes the system look quite a lot like Gnome 2 (although the plugin apps need re-writing).

There is even Cinnamon and Mate in the repos, and probably a lot of others that are either ancient or not in the mainstream.

I never liked Unity, but I found a way to carry on using Ubuntu without having to use it.

TalkTalk kept my email account active for 8 years after I left – now it's spamming my mates

Peter Gathercole Silver badge

Had a similar problem with Virgin

I moved away from Virgin (just a dial-up and then ADSL account, not TV or cable), but my email address was still active for over a year after the account was closed.

Got to the point where their webmail portal was not accessible, but my fetchmail POP3 scripts still worked both picking up and sending mail.

Even though the account was supposedly closed, it was still used to send spam out after Virgin leaked the details (mail address and password) in one of their data breaches, and it was also used to hijack my Facebook account (which I did not notice because I hardly use Facebook at all).

When I tried to get them to take action, all I got was the "Sorry, you're no longer a customer" spiel, although the mail address was eventually shutdown even for POP access.

Bad news: Google drops macOS zero-day after Apple misses bug deadline. Good news: It's fiddly to exploit

Peter Gathercole Silver badge

Re: Is this a problem?

"More recent Unixes" is a relative term compared to Unix Edition 7, which is ancient.

The un-linking of running executables is probably the behavior of Unixes with demand paging (as opposed to swapping), so SVR3 (SVID issue 2) or later for AT&T Unixes, and probably BSD 4.2. I'm talking mid-1980s. It worked like I described it purely to prevent the type of flaw that appears to be reported here.

The behavior of changing running executables is unlikely to be documented in any of the SVIDs or Posix standards, as these tend to document interfaces, not implementations. I no longer have access to any internal System V design documentation, so I don't know whether there was any documented design for this area of Unix.

What I described was what I've observed in SVR2 and SVR3, and later in AIX and SunOS. I found the behavior curious, so looked into exactly what was going on (and at the time I had access to the System V source as part of my job at AT&T).

Some of the behavior I describe is documented on the unlink(2) and exec(2) man pages, although I have not found the behavior for running files documented. I did find this stack exchange question, which describes what I've said in other words.

I also remember the behavior of mmap() being described in quite some detail in the SVR4 developer conference in 1988, which went into concurrent access to memory mapped files, and I do still have my notes, so could look up that, but I think that there should be some SunOS and/or BSD documentation around on the 'net somewhere.

But I was positing that the vulnerability was not with mmap(), so the mmap() documentation is not really appropriate here.

Hope this is of some interest,

Peter Gathercole Silver badge

Is this a problem?

I'm struggling to see why this is a problem, if it is working exactly as described and affecting memory-mapped files.

If you forget about the memory-mapped element of this exploit: if two processes have the same file open, one for read and one for write or read/write, and the writing process writes to the file, then when the other process reads the region of the file that was written, the new data should be used (at least on a Posix-compliant filesystem; for filesystems like Lustre on clusters, this may not be true).
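A minimal sketch of that read-after-write coherence, using two independent descriptors on the same file (the filename and contents here are just for illustration):

```python
import os
import tempfile

# Two independent descriptors on the same file: one writer, one reader.
path = os.path.join(tempfile.mkdtemp(), "shared.dat")
with open(path, "wb") as f:
    f.write(b"old data")

writer = os.open(path, os.O_WRONLY)
reader = os.open(path, os.O_RDONLY)

# The writer overwrites the start of the file...
os.pwrite(writer, b"NEW", 0)

# ...and a Posix-compliant filesystem makes the new bytes immediately
# visible through the other descriptor, with no re-open needed.
data = os.pread(reader, 8, 0)
print(data)  # b'NEW data'

os.close(writer)
os.close(reader)
```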

Even if you take the memory-mapped nature of the file into consideration, if the underlying file on disk is altered, I would expect the copies of changed pages of the memory-mapped file to be invalidated when the corresponding block is written to disk, reflecting the change on disk, and meaning that if a page is referenced again, it will be fetched from disk to pick up the changes.
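On systems with a unified page cache (Linux certainly; for Posix this is the MAP_SHARED behavior), you can see that coherence in a few lines - the names here are illustrative:

```python
import mmap
import os
import tempfile

# Map a page of a file MAP_SHARED, then modify the file with write(2)
# behind the mapping's back.
path = os.path.join(tempfile.mkdtemp(), "mapped.dat")
with open(path, "wb") as f:
    f.write(b"A" * mmap.PAGESIZE)

fd = os.open(path, os.O_RDWR)
mapping = mmap.mmap(fd, mmap.PAGESIZE, mmap.MAP_SHARED)

os.pwrite(fd, b"BBBB", 0)  # change the underlying file

# With a unified page cache, the mapping sees the change without any
# explicit invalidation by the application.
view = bytes(mapping[:4])
print(view)  # b'BBBB'

mapping.close()
os.close(fd)
```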

I suspect that the problem is not to do with memory-mapped files, but more to do with demand-paging (or swapping on non-paging Unixes) of the text space of executable binaries. In this case, if a page of the text of a process is aged or pushed out (or never loaded in the first place), rather than being copied to the swap space, it is assumed that the text image on disk will not change, and can thus be re-loaded from the disk.

I know that Mac OS does not run a Unix kernel, so may actually handle paging differently, but the Unix model for this has been well understood since before Mac OS X existed. Memory-mapped files and the mmap() call were first described (but not implemented) in BSD 4.2 documentation, first implemented in SunOS 3.2, and made it mainstream via SVR4 into Posix. Mach did implement mmap() before Apple used it in Mac OS X.

For archaic Unix systems (I saw this behavior on Unix Edition 7), the file on disk would not be allowed to be changed while it was currently executing (you got an ETXTBSY error even if you were root and had write permission).

On more current Unixes, the currently executing old file is unmapped from the filesystem, and a new copy of the file is created (with a new i-node, and possibly using copy-on-write) to hold the modified contents. The running old unlinked file remains physically on disk, but not linked into any directory, just so that currently un-loaded pages of the process can be reloaded in the same state as they were when it was initially run.

The link count in the in-core copy of the i-node describing the executable is incremented each time the executable is run, and decremented when a copy of the process finishes. When the link count on the in-core copy of the i-node drops to zero as the last process exits, the file is finally deleted from the filesystem, and the space released back to the free-map. A new copy of the process will pick up a fresh copy of the file from disk (I'm ignoring sticky text, which is just so old nobody uses it any more).
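The unlinked-but-open behavior is easy to demonstrate from user space; this sketch shows the ordinary-file analogue of what I described for executables (path and contents invented):

```python
import os
import tempfile

# An unlinked file's data stays on disk while something still holds it
# open, analogous to the way the pages of a running executable stay
# reloadable after the file is replaced.
path = os.path.join(tempfile.mkdtemp(), "busy.bin")
with open(path, "wb") as f:
    f.write(b"program text")

fd = os.open(path, os.O_RDONLY)
os.unlink(path)                    # removed from the directory...

assert not os.path.exists(path)    # ...no name points at it any more...
still_there = os.pread(fd, 12, 0)  # ...but the i-node and data remain
print(still_there)  # b'program text'

os.close(fd)  # last reference gone: the space is now released
```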

I suspect that it is this latter behavior with the text of executable files that is the problem, not memory mapped files opened for read or write.

When it comes to scripts or other interpreted files, I'm not sure that the situation is the same. I would expect that the script itself would be read off disk in its entirety and held in the data segment, and paged to the paging disk as data if the real memory was needed, but I could be wrong here, and I've not got the time to read any source. I would welcome comments from anybody who has information about how the contents of interpreted scripts are held in memory during execution.

Three-quarters of crucial border IT systems at risk of failure? Bah, it's not like Brexit is *looks at watch* err... next month

Peter Gathercole Silver badge

Re: Cheer up, what's the worst that could happen?

The backstop only kicks in if the current May deal is accepted and comes into effect, and only then if we get to the end of an additional negotiation on the exact trade relationship without an agreement on those terms. This period is either 19 months or two years, I can't remember which.

People appear to completely forget this next phase of the deal if it comes into force. During this additional period, all the current rules will continue as they are.

If we hit leave with no deal, there is no backstop.

Lenovo kicks down door of MWC, dumps a stack of sexy new ThinkPads

Peter Gathercole Silver badge

It wasn't me who originally raised the point, but I continually find that my error rate on the chicklet-style keyboard is much higher than on the older-style keyboards (I use a chicklet keyboard on my work-supplied Thinkpad, and the old style on my personal T400, which is still going strong running Ubuntu).

If I had to say what the difference is, I would say that it is the fact that the older pattern of keys had a more pronounced edge, and a taper to actually reduce the effective size of the key, so that you could feel how close your finger was to the center, and thus where your fingers are relative to the keyboard in general. With the mostly flat key caps of the chicklet type, I do not get that feedback, and wander to the edge of the keys, and can end up pressing two keys. I can feel that, but the fact that I then have to go back and correct the mistake slows down my typing.

I'm sure that this is an artifact of the way I type, but after using computer keyboards for 40+ years (almost exclusively on keyboards with curved tops and pronounced edges, some even more so than Thinkpad keyboards), I doubt that I am likely to change my typing style.

I remember many old keyboards (DEC VT100 and VT220, IBM 3278 etc.) whose keys were significantly dished, giving very positive location feedback. Flat keys of the type I first saw on Apple systems are just wrong.

IBM so very, very sorry after jobs page casually asks hopefuls: Are you white, black... or yellow?

Peter Gathercole Silver badge

Re: Not really someone to hire

Oops. You could not call Germany a company...

Of course I meant country.

Peter Gathercole Silver badge

Re: Not really someone to hire

Hypothetically, I wonder if the sales pitch went like this:

German bureaucrat: We'd like to buy some tabulating and sorting systems.

IBM sales person: Great! We sell those. What will you be using them for?

German bureaucrat: Well, we want to start sorting out which of our people might be Jews.

IBM sales person: Oh. Why's that?

German bureaucrat: We were thinking about exterminating them.....

I don't think that it would have gone anything like this.

All the Germans had to do was say that they were carrying out a census, without expanding on the reason for doing it, and IBM would have been none the wiser. It's only after the fact that we can see that IBM systems were used to single out the Jews and other racial groups.

And if you are flinging mud, maybe you ought to look at who financed Germany, a nearly bankrupt company in the early 1930's. Significant names such as Rockefeller, Morgan, DuPont, General Motors, and Ford were all involved, through financing deals and share ownership, in German companies such as Interessen-Gemeinschaft Farbenindustrie (the group that gave significant backing to Hitler's election campaign), Focke-Wulf, AEG, Siemens, ITT and Volkswagen.

So is IBM any worse than the rest of the American corporate system, merely because they supplied sorters that had multiple uses, but just happened to be used for a war crime after the machines were sold?

Linus Torvalds pulls pin, tosses in grenade: x86 won, forget about Arm in server CPUs, says Linux kernel supremo

Peter Gathercole Silver badge


It is easy to forget that Microchannel was not just an x86 technology.

Both RISC System/6000 (early Power platforms) and AS/400 used Microchannel, and neither of these needed reference disks. I can't get away without mentioning the baby mainframe 9370 as well, but it used a PS/2 Model 80 as its I/O controller, so does not really count in this context.

In many ways, Microchannel worked a lot like PCI. Each card had a baked-in ID string that was readable during the configuration stage of booting, to allow the OS to configure the support.

RS/6000 and AS/400 systems did have an advantage, though, because both OS's were controlled by IBM, so in much the same way that Apple has now, IBM controlled both the hardware and software layers of the systems. Before using a new card, you had to upgrade or patch the OS to include support for the new card, which provided the configuration method for the ID string for the card.

For PS/2 systems, the reference disk included an ADF (Adapter Description File), which was a text file with a description of the adapter, and which actually sounds a lot like the OS9 text description file that the previous poster referred to. I think that the BIOS in PS/2 systems would load the information for the installed cards from the reference diskette to set up the IRQs and memory, and store that in the NVRam, so that the BIOS would do the basic device configuration before the OS bootstrap.

Data-spewing Spectre chip flaws can't be killed by software alone, Google boffins conclude

Peter Gathercole Silver badge

Re: that's funny

I'm a little uncertain about how timing memory access actually leaks data.

It is a very strong indicator about whether the data value you've just read was pre-fetched, in one of the caches, or retrieved from memory. This information may be valuable in deciding whether the value you've just read could have been pre-fetched from a different context, and thus whether it could be data from another process, but that's about all it does.

Pretty much all of the variants of Spectre that I've read about are to do with data from another address space being in either renamed registers from a branch not taken, or in some other cached data structure. The timing of the read will tell you this, but not the value itself.
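For what it's worth, the usual way the timing is turned into data is a probe array: the secret selects which cache line gets touched speculatively, and timing each line afterwards reveals which one it was. Here is a toy simulation of just that last step - the cache, the secret and the timings are all invented, and this is not a real attack:

```python
# Toy model of a cache-timing probe: speculative execution has touched
# probe_array[SECRET], so that one line is "fast" to access afterwards.
# Timing every line then recovers the secret value indirectly.
SECRET = 42

cached = {SECRET}  # lines brought into the cache speculatively

def access_time(index):
    # A cache hit is much quicker than a miss; real attacks use a
    # high-resolution timer (rdtsc etc.) instead of these fake numbers.
    return 10 if index in cached else 200

# The attacker never reads the secret directly: it only times accesses.
timings = [access_time(i) for i in range(256)]
recovered = min(range(256), key=lambda i: timings[i])
print(recovered)  # 42
```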

The flaws exist because the data in the caches may have been placed there while the processor was running at a higher privilege state, and possibly should not be visible after the processor has left that state. With Meltdown, this allowed a process to read data from its own address space which should have been protected, and which, unfortunately, was mapped to part of the kernel address space. Mapping the kernel memory into the address space of a user process was a bad idea, whatever protections you set, and should have been seen as such from the get-go (incidentally, PDP11, VAX and s/370 versions of UNIX never did this, and IBM learned this with AIX on the Power line sometime around AIX 4.1 in the mid 1990's).

Although some of the Spectre variants suggest it may be possible to read data from another process address space, or from a system call, most of them appear to be ways of reading protected data from their own address space.

The attack documented in the article, of one thread reading data from another thread's memory, should not surprise anyone. Remember that when threads, or lightweight processes, were introduced, the whole concept was to allow more than one processor to work in a single process's address space. That is what it was designed for! (For reference, with early multiprocessor systems, each process could only consume up to one processor's worth of CPU resource, never more, even if the other processors were idle.) Allowing the system to schedule more than one processor per process, using threads as the schedulable entity, lifted this restriction.

But it was implicit that each of the threads was running in a single address space, so in theory had access to all of the other threads' memory (which meant that several contention issues with shared data structures had to be worked around).

Running client-side executable code from a server you don't control in a thread, without further protection, was insane in the first place, and now that those protections are seen to be flawed, client side code execution must be banned, or at least relegated to a separate process address space, damn the performance consequences.

The basic rule should be that if it is not code you control, you should run it as at least a separate process, or even in a lower security ring (for suitably equipped processors). The enforcement of process address spaces by the MMU is well understood, and caches and other cached structures should be invalidated across different process contexts.

Return of the audio format wars and other money-making scams

Peter Gathercole Silver badge

Re: MiniDisk? Bah!

You can still get 78 styluses for several well known cartridges. I was surprised when I found that Ortofon still had a 78 stylus for many of their removable stylus cartridges in their catalogue.

Many BSR decks that used to be in 60's and 70's record players (like the popular Dansette) and phonograms had a dual purpose stylus with an LP stylus on one side, and a 78 stylus on the other. You could rotate it to select the one you wanted to use, and there was a plastic tab attached to help you rotate it, and display which was currently in use.

Most 78s encode the sound laterally, as do mono LPs, just in a much wider groove. Stereo LPs have the left and right channels encoded separately, one on each wall of the groove, with the walls of the groove at 90 degrees from each other, 45 degrees from the vertical.

Playing a 78 disk with a stereo cartridge generally means poor-quality sound: with the wider profile of the groove, the narrower stereo stylus drags along the bottom of the groove, where the dirt and dust accumulate, whereas on a stereo LP it sits touching the edges of the much narrower groove and does not touch the bottom. Also, on a low-arm-mass LP turntable, the whole arm would go up and down, rather than the stylus moving relative to the body of the cartridge.
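The 45/45 arithmetic is simple enough to sketch; this little example (all values invented) shows why a mono signal in both channels produces purely lateral stylus motion:

```python
import math

# 45/45 stereo cut: each channel is recorded on one groove wall, the
# walls at 45 degrees to the vertical. Resolving the two wall motions
# gives the lateral and vertical components of the stylus movement.
def stylus_motion(left, right):
    lateral = (left + right) / math.sqrt(2)
    vertical = (left - right) / math.sqrt(2)
    return lateral, vertical

# Identical signals in both channels (i.e. a mono record) move the
# stylus purely side to side, which is why mono playback stays
# compatible with a stereo cartridge.
lat, vert = stylus_motion(1.0, 1.0)
print(round(vert, 9))  # 0.0
```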

Patch this run(DM)c Docker flaw or you be illin'... Tricky containers can root host boxes. It's like that – and that's the way it is

Peter Gathercole Silver badge

Re: Sure they're just warming up

Got a long way to go before you get to something like this.

V: Voilà! In view, a humble vaudevillian veteran, cast vicariously as both victim and villain by the vicissitudes of Fate. This visage, no mere veneer of vanity, is a vestige of the vox populi, now vacant, vanished. However, this valorous visitation of a by-gone vexation stands vivified, and has vowed to vanquish these venal and virulent vermin vanguarding vice and vouchsafing the violently vicious and voracious violation of volition. (he carves a "V" into a sign) The only verdict is vengeance; a vendetta, held as a votive, not in vain, for the value and veracity of such shall one day vindicate the vigilant and the virtuous. (giggles) Verily, this vichyssoise of verbiage veers most verbose, so let me simply add that it is my very good honor to meet you and you may call me V.

Evey: Are you like a crazy person?

V: I'm quite sure they will say so.

(Can't remember whether this is in the original comic strip, or whether it was invented just for the film, but it captures the idea of what was written by Alan Moore - quite the wordsmith)

Mini computer flingers go after a slice of the high street retail Pi

Peter Gathercole Silver badge

Re: Not Just a Store

I used to have a novel trick for the ZX81s that used to be on display in WH Smiths.

I typed in a REM at line 10 that contained some code that changed the value of the Z80's I register, and then called the address (in the way that you could embed machine code in the first line of the program, which was always at a fixed address).

The effect of this was to scramble the text on the screen, as the I register contained the top byte of the address of the character table used to generate the display. It did not matter whether you cleared the screen; any new text on the screen was garbled.

You could get some really bizarre effects, like offsetting the characters almost like rot-13, and unless you knew what had been done, it looked like the computer had crashed.

(Incidentally, using this trick, if you put some static RAM on the ULA side of the bus isolating resistors, addressed to one of the gaps in the address map, and pointing the I register at it, you could actually have a programmable character set. On a ZX81!)

Unfortunately, it did not survive pulling the power cord out and plugging it back in again, and I must confess that this was not a childhood prank, as I was 22 at the time!

Reliable system was so reliable, no one noticed its licence had expired... until it was too late

Peter Gathercole Silver badge

Re: A byte for the year

Actually, most business software at the time implemented integers, including dates, as packed Binary Coded Decimal, storing one decimal digit in a 4-bit nibble, so two digits in a single 8-bit byte.

This was the case for a lot of COBOL and RPG programs, two of the languages most often used for business applications.

Many machines actually included instructions to do arithmetic in packed BCD, including s/360, VAX, Burroughs, and even the 68000 (this list comes from Wikipedia, but I knew about s/360 and VAX).

Other systems using 12, 24 and 36 bit words would store 3, 6 and 9 decimal digits in their word. Some of the 24 and 36 bit word length systems also used 6 bit characters, with 4 or 6 characters encoded in a single word.
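For anyone who hasn't met packed BCD, a minimal sketch (the function names are my own invention):

```python
# Pack a decimal number into BCD: one digit per 4-bit nibble, two per
# byte, the way much COBOL- and RPG-era software stored its numbers.
def to_packed_bcd(number, digits):
    # digits should be even, so the value fills whole bytes
    text = str(number).zfill(digits)
    out = bytearray()
    for i in range(0, len(text), 2):
        hi, lo = int(text[i]), int(text[i + 1])
        out.append((hi << 4) | lo)
    return bytes(out)

def from_packed_bcd(data):
    return int("".join(f"{b >> 4}{b & 0x0F}" for b in data))

# A two-digit year fits in a single byte, which is exactly how the
# Y2K problem got baked in.
packed = to_packed_bcd(991231, 6)   # 31st December (19)99
print(packed.hex())                 # '991231'
print(from_packed_bcd(packed))      # 991231
```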

Honestly, youngsters today. No sense of history! They think x86 is the be-all and end-all of processors.

Treaty of Roam: No-deal Brexit mobile bill shock

Peter Gathercole Silver badge

Re: Draft statutory instruments

The reason why draft statutory instruments are required is because there is no hope whatsoever that all of these amendments could be debated by both houses in this century, let alone before March 29th. They need to be draft now so that they can be waved into effect with a single vote on March 29th if a deal does not get agreed. We don't (and in fact can't) bring them into force now, because the EU bills are still in effect.

The main purpose of the process is that each EU article enacted into UK law needs individual scrutiny to change the wording to remove references to the European courts and the other legal entities in the EU, and replace them in the legislation with the appropriate UK bodies. This is what prevents the wholesale bulk change of the relevant laws.

Whilst there is the possibility that other changes could be made during this process, that was not the stated intent of the legislation that was passed to allow this all to be done as SIs.

Like many other Brexit related processes, we as non civil servants don't actually find out about the things that need to be done unless we look. The whole process is hugely complex, and there will still be things to be done long after March 29th. We really needed the whole of the two year period (and potentially more) to put in place the mechanisms, whatever the outcome (and the deal has this built in to the transition period), but not enough has been done for a WTO exit.

The thinking was that a no-deal exit was unthinkable, so didn't need to be planned for. Unfortunately, the people who thought this were over optimistic...

(Much as I don't want it, I can still see Article 50 being rescinded on March 28th, because the enormity of the mess if a deal is not found is only beginning to sink in to politicians' heads!)

Accused hacker Lauri Love to sue National Crime Agency to retrieve confiscated computing kit

Peter Gathercole Silver badge

Re: Re. Why does he want five-year-old kit back?

Believe it or not, it is perfectly possible that kit a mere 5 years old, even a PC, would still be usable. I presume that Mr. Love used Linux. As such, the systems suffer less resource-rot than if they were Windows systems.

A five year old machine could be running a Core 2 Duo or Quad, or an early i3, i5 or i7 processor, and may have 8 or 16GB (or more) of memory and a good-enough graphics card.

I am still using equipment of that era and older (my primary laptop is a T400 Core 2 duo at 2.2 GHz with 4GB of memory, manufactured in 2010 IIRC) and it runs Ubuntu 16.04 quite fast enough to do everything I need to do. I have much older equipment that is still doing useful work where computing power is not the primary requirement.

I get so fed up, especially in these days of over-powerful systems, with the assumption that any computer made more than a couple of years ago is useless. It's all hype, trying to make computers more like fashion items, and pushed by the system vendors and OS suppliers to try to create a rapid, repeating market for their wares.

A classic example is Windows 7, which still does the job, with MS pushing a still much disliked new version very hard, with dire threats of support ending for the older versions. The way forward for many people ought to be Linux (rather than the recycling centre) for kit that may have many years of useful life in it.

It probably won't happen, because I can see the computers-as-a-service, totally controlled by the vendors and with a monthly charging model, steaming down the track towards us, with nothing apparently able to divert it, and the ordinary Joe or Josephine just rolling over and accepting it!

Peter Gathercole Silver badge

Re: Interesting angle.

IIRC, the system in the UK was changed a few years ago to allow a suitably qualified solicitor/lawyer (note not a barrister) to present in civil (and I believe in some less serious criminal) cases.

I have a family solicitor friend who took the training, and presented cases in court several times before he retired recently.

It was a good move for the legal system, because employing a barrister means paying for their time to learn about the case (possibly from the solicitor, who has already discovered a lot of the information) as well as for their time in court, on top of the solicitor's time. Allowing the solicitor, who should already have a lot of the case details, to represent in court saves whoever is paying the final bill an expensive tier of legal costs.

The people who lose out are the recently qualified barristers who would cut their teeth on the low-grade civil cases that are their bread and butter until they have earned a reputation.

Windows Defender update: So secure, it wouldn't let Secure-Boot Windows PCs, er, boot

Peter Gathercole Silver badge

Re: My personal Windows 10 update....

I've not used Mint recently (I built a system using it when Unity was being shoved down our throats by Canonical, but actually settled on Gnome Fall/Failback which I'm still using today), but is it not the case that Cinnamon and Mate are in the Ubuntu repository, so you can use one of those in preference to Unity or Gnome to make Ubuntu look more like Mint?

I'm actually surprised that Mint has a Grub issue and Ubuntu doesn't, as Mint is a downstream distro of Ubuntu. Does Mint have a different policy regarding the kernel version than Ubuntu?

Apple: You can't sue us for slowing down your iPhones because you, er, invited us into, uh, your home... we can explain

Peter Gathercole Silver badge

Re: Do they still do it? @MattUK

As a complete guess, maybe the LG android version is actually doing an FSTRIM occasionally, whereas the Samsung version is not.

When it comes to this level of detail, it is perfectly possible for one manufacturer to decide on different management style than another.

For general-purpose Linux systems (I regard Android as a specialized Linux), it may be that you would have an FSTRIM performed in a cron job for certain types of flash memory. Maybe LG have something similar, and Samsung don't. Or maybe they use different flash controllers that handle the flash housekeeping differently.

This is all guesswork on my part.

Peter Gathercole Silver badge

Re: Do they still do it? @MattUK

Even in these days of FSTRIM being in the Android filesystem handling, flash memory handling still causes slowdowns as a device ages.

Because of the way flash memory is emptied (erased, in large allocation units which are multiples of the filesystem block size), there is a preference built in for it to mark the data deleted, but not actually erase and re-use it. This is because it's probably going to have to shuffle data in the rest of the allocation unit around before it can erase the allocation unit. The more flash RAM you have, the longer it will take to use it all and get to this point.

So all the time that you have still-to-be used flash memory, the OS will be quite speedy.

As soon as almost all of the empty flash memory allocation units have been used, the OS has to do some serious housekeeping to shuffle all of the data out of an allocation unit so it can be erased prior to being re-used. AFAIK, this is deferred until the last possible time, at which point the performance will hit a brick wall.
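A toy model of that brick wall, with all the sizes and costs invented purely for illustration:

```python
# Toy model of flash block management: deletion only marks pages dead,
# and an allocation unit must be erased as a whole before reuse. Writes
# are cheap until the free units run out, then garbage collection bites.
FREE, LIVE, DEAD = "free", "live", "dead"

class FlashUnit:
    def __init__(self, pages=4):
        self.pages = [FREE] * pages

def write_page(units):
    """Write into the first free page; erase (GC) a unit if none left."""
    cost = 1
    for unit in units:
        if FREE in unit.pages:
            unit.pages[unit.pages.index(FREE)] = LIVE
            return cost
    # No free pages anywhere: pick the unit with the most dead pages,
    # shuffle its live data aside (not modelled) and erase it whole.
    victim = max(units, key=lambda u: u.pages.count(DEAD))
    cost += 10 * len(victim.pages)        # the expensive housekeeping
    victim.pages = [FREE] * len(victim.pages)
    victim.pages[0] = LIVE
    return cost

units = [FlashUnit() for _ in range(2)]
costs = [write_page(units) for _ in range(8)]  # fill every page: cheap
# Mark some old data deleted, then write once more: the wall is hit.
units[0].pages = [DEAD] * 4
costs.append(write_page(units))
print(costs)  # [1, 1, 1, 1, 1, 1, 1, 1, 41]
```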

FSTRIM was supposed to prevent this slowdown, but I get the impression that when it was introduced in Android, it didn't actually make a huge difference.

When you wipe the phone and re-flash the original image, it does a bulk erase of the entire flash memory, after which it will run quick again, and start the whole process over.

In flash SSDs, rather than phone memory, because the flash controller is allowed to consume power when the storage device is idle (which a phone won't do, in order to increase battery life), it spends its idle time performing the housekeeping early, to make sure that you get maximum performance when it is needed.

This is done at a controller level, as there is block address renumbering going on (for wear leveling and to overcome a process called write disturbance, and also to allow for spare blocks to extend the lifetime of the device), so the data can be copied into a new allocation unit without the OS or filesystem being aware that the data is now in a different portion of the memory.

The Apple Mac is 35 years old. Behold the beige box of the future

Peter Gathercole Silver badge

Re: Typical el Reg @sprograms

Hmm. System 3. Not sure that AT&T actually has a certification program for that, and even if it did, System 3 (or SIII as it is often written) was a long time before OS X.

I don't dispute that OS X has obtained UNIX branding, but I think it started at UNIX 98.

Ah. Now I see. You are referring to UNIX 03. This is *NOT* UNIX System 3, which was the predecessor of System V, and was available on AT&T hardware (things like the 3B20) from about 1982.

I actually really dislike the fact that the latest UNIX branding is called UNIX Version 7. To me, that is the Bell Labs. PDP11 release from 1979, also known as UNIX Seventh Edition. It is of passing interest that in the tuhs archive it's under a branch entitled V7, but as I review the documentation, it is almost universally called UNIX Seventh Edition. I always referred to it as Version 7, though, as did everyone around me at the time. But I also note that the official Digital port of Unix Edition 7 was called V7M or V7M/11. So I don't feel too far out on a limb.

Pedantic Grammar alert, natch.

The lighter side of HMRC: We want your money, but we also want to make you laugh

Peter Gathercole Silver badge

Re: Child benefit... @weallneedhelp

Hmmm. That's very interesting. Although I followed the news when the change in policy was being argued and announced, I don't ever remember that being mentioned.

That article suggests that if you claimed and then opted out, you remain earning credits. It is suggesting that you should not simply decide never to claim in the first place.

I wonder where to get clarification on this.

Peter Gathercole Silver badge

Re: Child benefit... @Robert

When it comes to aggregating all income, it does not matter whether it is being used to pay the mortgage, just as long as the appropriate amount is spent on the kids.

But it could well be argued that helping keep a roof over their heads or feeding them is actually beneficial to them.

Only where an equivalent amount is being spent on unnecessary items could you really make a case for it being wasted.

There were many unjust things that happened during rationing. It's just the way things work when people do not follow the purpose and intent of the policy. People tend to be selfish, that's just human nature. Society can mostly overcome the selfishness, but never truly enforce it without undermining itself.

Peter Gathercole Silver badge

Re: Child benefit... @AC

I was not suggesting that you did not agree, rather pointing out how stupid the HMRC systems are.

I also agreed that if the child benefit bill needed to be cut, that my family should be in the group for whom it was removed, but that still made it a little unwelcome (who actually likes losing income that they've previously received).

I think that it should be possible for the person earning the income to inform the DWP that child benefit should not be paid to their spouse, but that is not the way it works (I suppose that there is some justification, to avoid vexatious claims). Child benefit was always paid to the woman, supposedly so that she had control of it to spend on the children rather than it being pissed up the wall by a stereotypical drunken layabout husband. And as a result, it is the woman who has to say she doesn't want it any more.

But I wonder how many women are not told by their husbands that they are higher-rate tax payers, and the women neglect to say that they're receiving child benefit? Do HMRC actually perform any checks to see whether people are not declaring that the household is receiving child benefit?

Peter Gathercole Silver badge

Re: Child benefit... @AC

What you are supposed to do is tell your wife to let the DWP, or whoever it is that pays child benefit, know that one of the bread-winners is a higher-rate tax payer, so that the benefit is stopped.

No, it didn't work for me either. My wife kept saying she'd call and do it, right up to the point that our youngest child left education. She didn't want to lose the income that was under her sole control (which actually often didn't get spent on the children, but that's another story).

It's effectively you paying your wife the equivalent amount as the child benefit, but by a really, really inefficient route. And on top of that, the fact that in UK law your tax affairs are your concern, not your spouse's (and vice versa), means that if your partner won't tell you what child benefit they receive, you're effectively guessing (with a little bit of assistance from HMRC in the self assessment process) what the amount is, with no check to say whether you are correct!

Definitely a broken system.

Oh, SSH, IT please see this: Malicious servers can fsck with your PC's files during scp slurps

Peter Gathercole Silver badge

Re: When your whole backup solution is centered around SCP transfers...

Thanks for clearing that up. I know now.

I was thinking that rsync was a little older than it appears to be (the original release was dated 1996, now I look). That means that I was using it very soon after it was released.

Peter Gathercole Silver badge

Re: When your whole backup solution is centered around SCP transfers...

Hmm. Not totally sure about this, but I think that rsync is layered on a transport layer that you specify, and this used to be rsh/rcp in the old trusting days of computing, and is normally replaced by ssh/scp.

I don't know if it uses ssh as a raw transport, and does all of its file handling itself, or whether it hands things off to scp. I'm sure someone here knows.

So it may be that rsync is just as compromised!

Bish, Bash... gosh! Good ol' Bourne Again Shell takes a bow as it reaches version five-point-zero

Peter Gathercole Silver badge

Re: Bourne Again Shell (Bash – geddit?) @tfb

I was using SVR2 when I first came across ksh (ksh85, I think it was) in 1987. It was available as an exptool for AT&T related companies.

ksh88 became accepted as part of R&D UNIX for AT&T companies (and I think it was purchasable commercially for 3B2 systems), and shipped as a standard shell in SVR4 in 1989 (and thus available in SunOS 4). I think it made its way into a lot of AT&T-licensed and derived UNIX systems, including AIX 3.1 onwards. I remember that I also came across it on a DGUX box and Digital UNIX systems.

It became the basis for the POSIX shell (which was effectively ksh88 with a few tweaks), and I think that Bash was written to be a POSIXly compliant shell, which makes ksh a direct ancestor of Bash.

Oh, and in answer to the csh question: what planet are you from? I know that some weird people using BSD used csh (but many people on BSD didn't), but really it bore almost no relation to the Bourne shell. In my experience, most people only really used it because of the history features. And most of them only used it as an interactive shell, and wrote most of their scripts in the Bourne shell, even though csh was intended as a programming shell.

I'm not saying that it could not be used, but when I compiled it up on a Bell Labs. V7 UNIX box from a BSD 2.3 tape in something like 1982, it felt so foreign and wrong that I quickly went back to the normal shell.

Peter Gathercole Silver badge

Re: Bourne Again Shell (Bash – geddit?)

Probably the case that Bash was not the successor to the Bourne Shell, but to ksh, the Korn Shell, but the naming sort of still fits.

I think it would be an interesting exercise for many of the commenters here to actually try to use the original Bourne Shell as their main shell for a period of time, so that they could appreciate how much a step up ksh, the Posix Shell and bash actually are.
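For anyone curious, a quick way to feel the difference is to try constructs that the original Bourne shell simply didn't have. A hedged sketch (these run under bash or any POSIX-compatible sh; the claims about what V7 sh lacked are from memory):

```shell
# Two things ksh/POSIX sh/bash added that the V7 Bourne shell lacked
# (illustrative; assumes a POSIX-compatible /bin/sh or bash):

# 1. Built-in arithmetic -- the Bourne shell had to fork the external
#    expr(1) command for this.
echo $((6 * 7))

# 2. $(...) command substitution -- the Bourne shell only had `backquotes`,
#    which are awkward to nest.
outer=$(echo "inner: $(echo nested)")
echo "$outer"
```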

And for those who have hair-shirt tendencies, the Bell Labs. UNIX version/edition 6 shell (if you can format it - it appears it's not compatible with the groff version of the man macros installed on this Redhat system) is a real eye-opener!

Begone, Demon Internet: Vodafone to shutter old-school pioneer ISP

Peter Gathercole Silver badge

Re: Historical accuracy

Where I was working, we had a commercial arrangement with EUnetGB when it was part of the University of Kent.

I thought that EUnetGB was bought by PIPEX sometime in 1992 or thereabouts.

I was running a leased line to Canterbury at the time, and had to re-work the Cisco AGS router configuration one evening during a managed transition to ensure continued Internet access.

Peter Gathercole Silver badge

Re: netcom.co.uk

I managed to get PPP working with Redhat (original Redhat, not RHEL) with the information that Virgin.net provided back in 1997. I did not think that it was too difficult (I also managed to get it working with Breathe without much difficulty). I managed to set Linux up as a router as well, to share the connection with other systems, and even managed to get Internet Connection Sharing working in Windows (95 I think it was).

Neither of them provided specific instructions for Linux, but IIRC it was perfectly possible with the information that they provided for Windows.

I also managed to get Dial-on-Demand working with Smoothwall (a dedicated Linux firewall) a little later, providing protected network access on demand to the proto home network that connected the systems in my home office together. Seamlessly transitioned to ADSL as soon as it was available in my area without having to re-work any of the individual PCs other than the Smoothwall when the change happened.

All those memories...

Google Play Store spews malware onto 9 million 'Droids

Peter Gathercole Silver badge

Re: Do phones still have an IR port?

It's a flaw in the review systems. They should all have separate ratings for not only the quality of the item purchased but also the customer service. This would allow someone to grade it as "1" for the item, but give a "5" for the way the seller responded to the problem.

Excuse me, sir. You can't store your things there. Those 7 gigabytes are reserved for Windows 10

Peter Gathercole Silver badge

Re: WinSXS

I'm not sure how Windows does it, but on UNIX and Linux with most of the common filesystem types (ufs, extX etc.), the system cannot tell the difference between the original file and a hard-link to it (in fact, there is no difference, both directory entries point to the same i-node in the same way, the system does not even record which link was made first as the dates on a file are stored in the i-node, not the directory entry).

The only thing that anything can tell is the number of hardlinks to a file.

It can also be difficult to identify where the other (hard) links actually are in the filesystem without doing a file tree walk. Sometimes ncheck (if it is installed) can be used, but this utility, dating back to ancient UNIX, is often not installed on a system (or may not even be present).

Of course, Windows may do it differently (as do some of the more advanced filesystem types on *nix), I just don't know as I don't really follow Windows that much.
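To make the point concrete, here's a small sketch (assuming GNU coreutils for `stat -c`) showing that two hard links are genuinely the same file, with only the link count hinting that another name exists:

```shell
# Two hard links are indistinguishable: both directory entries point at
# the same i-node, and only the link count reveals another name exists.
tmpdir=$(mktemp -d)
echo "data" > "$tmpdir/original"
ln "$tmpdir/original" "$tmpdir/copy"   # hard link, not a copy

stat -c '%i %h' "$tmpdir/original"     # i-node number and link count
stat -c '%i %h' "$tmpdir/copy"         # same i-node, same count (2)

rm -r "$tmpdir"
```

Finding where that other name lives still needs the file tree walk mentioned above (e.g. `find / -inum N`).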

Peter Gathercole Silver badge

Re: 32GB HP Monstruosities @Dave

The original Atoms were a bit pants, but the current 64 bit ones not so.

Intel appears to have changed the meaning of the Atom range since first introduction. Initially, they were processors intended to be soldered onto a board (rather than in a socket), but they still needed external logic to create a system.

Recently, Atom appears to be used as a branding for SoCs.

The most recent generations of Atom use the same processor architecture as Celeron and Pentium Silver processors, and there are ranges of clock speeds and capabilities available in each family.

In terms of what they can do, a lot will depend on what you want them to do. They will never be good systems for processor-intensive operations, but for something that needs a low-power processor with moderate performance, they are quite capable. I had a laptop with an Atom x7-E3950 in it, and was very surprised by the speed of the system compared to my (admittedly aging) 3rd-gen i5 Thinkpad.

Peter Gathercole Silver badge

Re: 32GB HP Monstruosities @Dave

You are aware, of course, that Intel have reused the Pentium name for the low grade processors that would previously been called "Celeron", aren't you?

But you ought to also be aware that the 4 core Atom-X 64 bit processors can be really punchy little things, capable of doing a lot of work.

Techie basks in praise for restoring workforce email (by stopping his scripting sh!tshow)

Peter Gathercole Silver badge

Re: I learnt to test my WHERE clauses on a DELETE with a SELECT first @GrumpenKraut

No. You will find that -print0 is not in the SVID version of find. There is a typo, though, as I missed out the filename!

I take your point about -print0, even though whitespace being actually allowed in filenames in pretty much all UNIX-like filesystems is part of the problem.

If I ruled the world, the set of characters that would be illegal in filenames would be much larger! I really hate when filesystems are shared between UNIX-like OS's and Windows (Samba, NFS etc.), and I see filenames created by Windows including spaces, asterisks and other characters. And that is not to mention when different multi-byte character encoding systems were used, prior to UTF8 becoming dominant.

But then, many people think I'm far too old to make relevant comment about modern computing.

Peter Gathercole Silver badge

Re: I learnt to test my WHERE clauses on a DELETE with a SELECT first

Grrr. Creeping featurism!

The 'correct' use should be:

find <dir> -name -exec rm {} \;

(the starting directory should be required, I don't know when GNU made it optional). I admit that it spawns more processes, but if you're really worried about this, then:

find <dir> -name -print | xargs rm

These work on pretty much all unix-like operating systems whereas the GNU ones don't, which is why they trip off the fingers when I'm working.
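For completeness, the whitespace problem that the GNU -print0/-0 pair addresses can be sketched like this (GNU find and xargs assumed; the filenames are made up):

```shell
# Plain -print | xargs splits on whitespace, so a name like
# "old file.tmp" becomes two bogus arguments. The NUL-delimited
# pipeline copes with spaces in names:
tmpdir=$(mktemp -d)
touch "$tmpdir/plain.tmp" "$tmpdir/old file.tmp"

find "$tmpdir" -name '*.tmp' -print0 | xargs -0 rm

ls -A "$tmpdir"    # both files gone
rm -r "$tmpdir"
```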

Mine is the copy with the SVID in the pocket!

Commodore 64 owners rejoice: The 1541 is BACK

Peter Gathercole Silver badge

Re: Emulated peripherals

Now there is an ironic circle.

The ARM instruction set was originally modelled on an Econet of BBC micros, and then the first ARM 1 development board was a Tube second processor!

Acorn really did have some exceptional engineers, and the BEEB was an exceptional system for its time.

Incidentally, there were 80x86 (I think it was an 80286) second processors, as well as Z80 and even a NS32016 (originally intended to run Xenix) available as second processors. These were even packaged as business oriented machines as the ABC (Acorn Business Computer) range.

Corel – yeah, as in CorelDraw – looks in its Xmas stocking and discovers... Parallels

Peter Gathercole Silver badge

Parallels Workstation for Linux

I actually bought a copy of Parallels Workstation for Linux back in about 2004, when VirtualBox either did not exist, or was very immature.

Parallels was excellent, allowing me to run Windows on Linux in seamless mode, and was very efficient.

Unfortunately, after they pulled the product (for Linux) and the Linux kernel interface changed over time, the kernel modules for Parallels (which were provided in source, to be compiled against the kernel headers) would not compile, and although I had a bit of a poke at them to try to get them working, I could not do it in my available time, so I eventually (and reluctantly) switched to VirtualBox.

Although I still use VirtualBox now, I would still consider buying a new copy for Linux if they re-introduced it (rather than paying for the commercial extensions to VirtualBox for the better bits like high speed USB).

IIRC, they also had/have a containerization product (it was available long before Docker et al.) that they touted for Cloud applications a few years ago. Might Corel have been wanting that technology?

College PRIMOS prankster wreaks havoc with sysadmin manuals

Peter Gathercole Silver badge

Econet security

Unfortunately, the Econet implementation in the original BBC Micros had very little in the way of security.

The station ID was coded using a set of dip-switches under the top cover, on the keyboard PCB, but this was read into a location in page 0 of the RAM. As the 6502 and BBC OS did not have any concept of a privilege mode, it was possible to change the station ID with a simple command.

There was a vague idea of a privileged user when you logged into the file server (and there was some minimal user-separation on the fileserver), but again, the user ID and whether it was privileged or not was stored in page 0, and could easily be changed.

It was the nature of the machines. There was no real way of securing what was effectively an open workstation.

When I administered an Econet Level 3 network, I very quickly established that there was nothing that could be secured on the network, and told the teaching staff to only store course records on floppy, never on the fileserver.

It was a shame really, because it was a rather nice system (with one or two drawbacks, like security and very slow byte-by-byte access to files using OSRDCH and OSWRCH).

Sysadmin’s plan to manage system config changes backfires spectacularly

Peter Gathercole Silver badge

Re: SCCS hits you

The problem (or maybe it's a strength) with SCCS is that you have embedded tags that are expanded, normally with dates, versions etc., as the file is checked out read-only. With SCCS, they are surrounded by % or some such. (RCS uses similar but incompatible tags; I'm not sure about other systems.)

The problem is that in some cases these tags can mean something to other tools, which may also expect to use % as a special character, in which case deploying an un-checked-in copy may cause undesirable effects.

Of course, one solution to this is to use it with "make", which would allow you to perform additional processing around the versioning system. I'm not sure I remember how I did it, but I'm pretty certain when I used make and SCCS in anger, I had a method where I could spot that it was not checked in. Make is slightly aware of SCCS.

But of course, you can't meaningfully compare SCCS with modern tools. I'm sure it wasn't the first versioning system around, but it must have been one of the earliest, dating back to the early 1970s. It was not meant for vast software development projects with many people working on them, but for its time, it did a pretty good job (Bell Labs. used it to develop UNIX).

Each iteration of version control since, like CVS, RCS, arch, Subversion, Git et. al. has expanded on the functionality, meaning that as the grandaddy of them all, SCCS cannot come out favorably in any comparison.

But I still use it on occasion, as it is normally installed on AIX, even when nothing else is.

NHS supplier that holds 40 million UK patient records: AWS is our new cloud-based platform

Peter Gathercole Silver badge

Re: Bullshit Alert

OK. Thanks for your scenario. You're using only cloud storage, I can understand that. Encrypted as it goes to/from the cloud, and never actually used in the cloud. Cheap storage, but doing any volume analysis will be very expensive in data transfer costs.

But actually running the application in the cloud? Or using cloud-based desktop (not mentioned here, I'm extrapolating)? In these cases, the keys need to be in the cloud.

OK. Encrypted region within a cloud domain? You're trusting that the cloud provider cannot be coerced into handing the data and the keys over to some TLA or hacker, backed up by a warranty which will not exceed the cost of the service (even if you can prove that the data's been nabbed). This cannot be considered a good move.

Peter Gathercole Silver badge

Re: Bullshit Alert

"...the keys aren't available"

Someone please correct me. If this data is encrypted but being used by cloud-based analysis applications, then those cloud-based applications must have the keys necessary to access the data (I accept that, using the data from, for example, GP surgeries, there is scope for keys to be on the surgery's systems and presented with every request, but that does not cope with the bulk analysis mentioned here).

And they're in the same cloud, so if someone really wants the data, they half-inch the data and the keys (OK, you could go down the rabbit hole of needing a key to decrypt the key store in the cloud, but how many times do you go round this loop before you must store a key somewhere readable?).

So where is the security?

I'm sure I must have missed something, so I'm asking for someone to point out where I'm being stupid?

Support whizz 'fixes' screeching laptop with a single click... by closing 'malware-y' browser tab

Peter Gathercole Silver badge

Build your own PC

I've been building my kids' gaming PCs for a couple of generations of machines.

A few years ago, I was building one to wrap and put under the tree at Christmas for my youngest son.

The build went fine, and the system was working perfectly, so I checked and tightened all the screws, and put the cover on, and then wrapped it.

Christmas morning. Wrapping paper comes off, and the system was connected up. The power button was pressed, and... nothing. No lights, or fans. Nothing. I spent the rest of Christmas day going through the build, including replacing the power supply and removing all of the adapters. Nothing. A disappointed son returned to using his really underpowered old machine that struggled to play his games.

Eventually decided that the motherboard must have failed between testing and unwrapping (unlikely, but the only thing I could think of). Online on Christmas evening to order another motherboard, with the most expensive delivery option to get it as soon as they could get it to me.

The day after Boxing Day, it arrives (yes, really). Out with the first board, in with the second. This would fix it! Only it didn't. No change.

I was baffled. I ended up doing a case-less build on the kitchen table, using a switch wiring set from a decommissioned case, the new power supply and the first board. Surprisingly, everything powered up without problem. Put it back in the case: nothing.

So, thought I, maybe the switch set? Left the board in place, and used the set I'd used in the case-less build. Everything worked!

Finally I had a clue, so I checked the wires in the case. Remember when I said I had tightened the screws? Well, I had been careless, and the wires to the power button were caught between the case and a flash card reader that was where the floppy would normally be. During the build, everything was working fine. As soon as I tightened the screws, the sharp metal edges scissored the wires to the power switch, cutting them both. Result, no contact to turn the power on. A quick swish of the soldering iron, some heat shrink, and a happy son that could finally get his new gaming rig running.

So the moral of the story is, even if it was working before closing the case, check that it still works before re-packaging.

Still, I found a use for the extra motherboard, building a franken-machine from spare parts I had knocking around (which included the case from my son's old machine), which with a wave of a Ubuntu CD (no unused Windows licence available) became my first machine that I didn't have to share with my family!

Amazon's homegrown 2.3GHz 64-bit Graviton processor was very nearly an AMD Arm CPU

Peter Gathercole Silver badge

Re: Interesting comparison... @ToddRundgrensUtopia

Throwing terms like NUMA around in multicore systems without sufficient qualification can be completely misleading (and this is separate from the abomination that is the term "non-Non-Uniform-Memory-Access" used here).

NUMA is normally not used at a chiplet level, but at a complete system level. I certainly can see that each quad core 'cluster' chiplet could have Uniform Memory Access (see what I did there) to its local memory for each of its 4 cores, but at a system level (or even a chip level), this will almost certainly be a NUMA architecture.

I spent some time working with IBM Power 7125 575 and 775 systems, and know that as processor count increases, coherent cache and memory access becomes exponentially more difficult.

Peter Gathercole Silver badge

Re: Interesting comparison...

SciMark is inherently a single threaded benchmark, so it really measures single core performance, which would make sense given 2x performance with 1.5x clock speed and an architectural bump.

Once you factor in the four times core count, it will be much more useful in a datacentre environment with real-world workloads.

It's interesting that it's a non-NUMA design. This normally causes memory bus contention issues with multi-core designs, so I wonder what they've done to allow 16 cores to access the same memory without blocking.

Blighty: We spent £1bn on Galileo and all we got was this lousy T-shirt

Peter Gathercole Silver badge

Re: FUD Central Nervous System @amanfrommars 1

That nearly made sense to me.

Am I going mad?

Oi, Elon: You Musk sort out your Autopilot! Tesla loyalists tell of code crashes, near-misses

Peter Gathercole Silver badge

Re: Say what you like about Teslas @bob

I drive a lot on un-lit roads (it's a hazard of living in a rural environment), and it is not just drivers who have their lights set too high that bother me.

The super-bright LED lights on cars coming in the opposite direction are enough to upset night vision even when they're adjusted correctly and not on high-beam. They're just too bright.

What surprised me a while back was that these super-bright lights are also being put on pushbikes. This is just wrong, especially when they are set to flash. Even if they don't flash, when you come across one, you have to look hard to see past them to make sure they are not a car with one light not working (and thus it is difficult to see how much of the road they occupy).

And don't get me started on the stupidity that allows manufacturers to put indicator lights next to or surrounded by high brightness side lights, especially if the sidelight has to turn off when the indicator turns on to allow the indicator to be seen. You get a light that just appears to go from white to orange, without the required change in contrast. Why are they even allowed in the homologation tests!

Biting the hand that feeds IT © 1998–2019