* Posts by Peter Gathercole

2924 posts • joined 15 Jun 2007

Watt the heck is this? A 32-core 3.3GHz Arm server CPU shipping? Yes, says Ampere

Peter Gathercole Silver badge

Re: Drivers ?

Probably crossed terms.

ISA also means Instruction Set Architecture, which is what the ARM ISA is.

Nowadays, pretty much all devices work through PCIe 3, so device drivers are much less of an issue than they were.

Most people building x86 systems see the legacy BIOS, keyboard, serial and parallel ports as being something that ought to be culled from modern systems (some have done it already), and I really don't think you mean that there is still support for the 8-bit 'ISA' adapters that were in the original IBM 5150 PC.

Euro bureaucrats tie up .eu in red tape to stop Brexit Brits snatching back their web domains

Peter Gathercole Silver badge

Re: Eh? @Taiwas

"Deal or no deal - they can't decide in a unified way, which they truly want"

This was always going to be the case. There could never be a deal that would keep a majority of the Government, Parliament or the Electorate (on both sides) happy. I feel that this is what the remain side are expecting to happen if there is a public vote on the resultant deal.

If you quantify the likely factions you have:

Those who want to remain in the EU.

Those who want to leave under any circumstance, and would sever ties tomorrow if they could.

Those who want to leave because of immigration, but want to keep the remainder of the EU advantages.

Those who are ambivalent about immigration, but want to get rid of some of the EU regulations.

Those who want to leave to enable the UK to trade better with the rest of the world.

Those who want to leave because the direction of the EU is towards a federal state, which they don't want (this is my position).

Now, take any combination of these and see whether you can get a majority. Tricky, isn't it?

If there is a referendum on a deal, any deal, it will be voted down. This is why the PM will resist a further vote (even in parliament), because she will lose. The remain camp hope that this will mean that we will stay in! But it is more likely that we will leave on WTO rules, with the hardest of Brexits, and no transitional period.

The Government have an impossible task, which is why they cannot come to an agreement. It's not (all) their fault!

A basement of broken kit, zero budget – now get the team running

Peter Gathercole Silver badge


In my first job programming (in RPG II) in a card-and-batch environment in local government, I got frowned at for working out how to use the JCL to check whether a compile had completed without errors or warnings, so I could have a test run immediately following it in the same card-deck.

Saved me at least 20 minutes per iteration (and sometimes much more), and normally meant that I had twice as many decks in the queue as all of the other programmers (you had to work on more than one program because of the turnaround time in the batch queue).
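The same gating idea - only run the test step if the compile step came back clean - looks like this in modern terms. A Python sketch only; the commands here are stand-ins, and the original used JCL condition-code checks rather than anything like this:

```python
import subprocess

# Stand-in for the compile step; in the original this was an RPG II compile
# whose condition code the following JCL step tested.
compile_step = subprocess.run(["true"])

if compile_step.returncode == 0:
    # Only submit the test run when the compile reported no errors,
    # saving a full trip around the batch queue.
    test_step = subprocess.run(["true"])
    print("test step ran")
```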

Although the powers-that-be were merely disapproving of this, when I spent time trying to work out how to use the archive manager (analogous to SED, IIRC) to patch virtual card decks (rather than having the whole deck re-punched, patched, and then re-added to the archive, I kid you not), I was hauled aside for being 'disruptive'.

So at the end of the first year, when I was told I didn't merit a pay rise from the stupidly low starting salary, I told the manager exactly what I thought about RPG II (I think I described it as a jumped-up macro assembler - I had previously been programming in PL/1, APL and C on UNIX at university), and said I would be looking around for another job immediately!

Probably was a good move, actually, because I ended up working at a Polytechnic deep in the guts of UNIX V6 and 7 on a non-standard SYSTIME PDP11, which really set my career.

Microsoft: You don't want to use Edge? Are you sure? Really sure?

Peter Gathercole Silver badge

Re: Block IE and Edge @Updraft

For my own use, I am completely Windows free.

I do dual-boot my laptop, and I did boot into Windows a few weeks back to check (and fix) it after I migrated the whole system to an SSD (and I put the latest Windows patches on), but other than that, all my day-to-day computing is Linux or UNIX only.

I agree that it is about the applications you run, but to claim re-training your staff is a reason is just FUD.

Windows has some serious advantages at an Enterprise level (AD, Policy Director, Sharepoint [or whatever it's called now]), but for many organizations a sensible and properly resourced Linux implementation is possible. Where I've seen this, though, it often only takes one person in authority (or who is able to be influenced) to push for a return-to-Windows strategy for this to happen (see the background to the Munich City travesty - exactly why did Microsoft choose to become a major employer close to Munich?)

No, unfortunately Microsoft still have far too much influence, so (native) Linux will never have its day on the desktop. Chromebooks, however...

Dust off that old Pentium, Linux fans: It's Elive

Peter Gathercole Silver badge

Re: If it's snappy on old kit... @insane_hound

Yes, 640x480x16 comes out at exactly 150KB for the primary frame buffer, assuming 4 bits per pixel.

You would have struggled to get a dual-frame buffer working, but for 2D single buffer, it was ample.

When it comes to things like graphics intensive games, sometimes they used to render a frame in main memory, and blit it out to the graphics card's frame buffer, so for some purposes, it may have been necessary to increase main memory.

Remember that unlike today, where the graphics card can do complete object rendering with texture mapping, light sourcing and now even full-blown ray tracing, early VGA and SVGA cards did not have a huge amount of intelligence, so often the main processor did most of the work.
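The 150KB figure mentioned above is easy to verify; a quick Python sketch of the arithmetic, nothing more:

```python
# Frame buffer size for a given display mode: width x height pixels at a
# given colour depth, converted from bits to KiB.
def framebuffer_kib(width, height, bits_per_pixel):
    return width * height * bits_per_pixel / 8 / 1024

# 640x480 at 16 colours (4 bits per pixel), as discussed above:
print(framebuffer_kib(640, 480, 4))   # 150.0
# Doubling the depth to 256 colours (8bpp) doubles the requirement:
print(framebuffer_kib(640, 480, 8))   # 300.0
```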

Trainer regrets giving straight answer to staffer's odd question

Peter Gathercole Silver badge

Yup, that does it

When I ran my own company, a family friend asked me to price up a repair for a Packard Bell desktop, and put it on headed notepaper.

I found out (after I had written a 'no economical repair possible' report - Packard Bell systems had proprietary power supplies and motherboards) that he used it in an insurance claim, and he admitted to switching the power supply to 110 volts to deliberately break it (after all this time there's no comeback, as unfortunately he is no longer with us).

The mobo and graphics card were fried, as well as the power supply, but the Pentium 120 that was on the mobo survived (this indicates how long ago it was), and went on to run fanless in my built-from-scrap-parts firewall system for several years.

Peter Gathercole Silver badge

I wish I knew that 15 years ago

One of my kids spilled something orange and very sticky over one of my two IBM Model M keyboards (Tizer or Irn Bru, they did not own up to it so I never knew for certain).

At the time, I had not heard that Model Ms could survive a dishwasher, so I went through the entire process of stripping it down (boy, you need some deep sockets), and cutting off the melted plastic rivets that hold together the plastic case containing the rockers, springs and membranes, and then suffered the problem of the conductive tracks peeling off when I opened the membrane up to wash it.

I cleaned, attempted to repair the tracks with conductive paint, and reassembled the keyboard, adding small nuts and bolts to replace some of the plastic rivets, but unfortunately it never completely worked again, so the keycaps, space bar and cable were salvaged, and with deep reluctance, it was consigned to the recycling centre.

About 6 months after I had failed to repair it, I heard about the dishwasher trick (and now I know that Unicomp sell replacement membranes as well), but it was too late. I was mortified. Needless to say, there is a no-sticky drink rule whenever the kids come anywhere close to my remaining Model M.

But I know all about how a Model M is made

Rubrik says bye to global sales boss

Peter Gathercole Silver badge

I hope that the picture is a visual pun...

...as the cube is attributed to Rubik.

SCO vs. IBM case over who owns Linux comes back to life. Again

Peter Gathercole Silver badge

Re: I thaught Novell owned the property @ Daniel von Asmuth

IBM had and have valid AT&T UNIX source licenses, and were part of Project Monterey, which included the Santa Cruz Operation (before Caldera bought them), so I think that it is very likely that IBM also had an SVR4 source license.

I would assume that, unless IBM's SVR4 source license explicitly prohibited code from SVR4 to appear in AIX, that IBM behaved entirely appropriately with regard to AIX.

The initial product of Project Monterey that IBM produced was a version of AIX 5L running on Itanium. This was harmonized with the release of AIX 5.1 on Power, so that the new features in 5L were also in 5.1 (which some people, even in IBM, did call AIX 5.1L or AIX L 5.1).

I actually did a bit of investigation on an AIX 5L Itanium system, and decided it looked like AIX on Power, walked like AIX on Power, and quacked like AIX on Power, so it was just another AIX platform (apart from some features that were still missing), and promptly decided to deliberately lose interest until Itanium systems running AIX 5L appeared in the marketplace, which they never did.

What IBM were accused of was not copying UNIX code to AIX, but of copying UNIX code into their contributions to Linux. TSG (ex. Caldera) claimed they had a right to rescind IBM's UNIX source licenses on the strength of this accusation, something that was not possible as the licenses were in perpetuity, and then tried to accuse all IBM's AIX customers of running UNIX variants illegally. IBM promised to defend any AIX customers from TSG's claims of running UNIX illegally if they ever were taken to court, so TSG never carried out any of their threats to AIX customers.

TSG's management were idiots!

No do-overs! Appeals court won’t hear $8.8bn Oracle v Google rehash

Peter Gathercole Silver badge

Re: On the one hand @bazza

The SCO case originally hinged around SCO's assertion that IBM included parts of the source code obtained under IBM's UNIX source code license that they held for IX and AIX in the code they contributed to Linux (particularly LVM code).

What became apparent is that the only code in Linux that came from a UNIX source tree was from ancient UNIX (Edition/Version 7), which SCO themselves had put into the public domain under a fairly unrestricted license. When SCO, with full access to the AIX source tree, were unable to demonstrate anything more than a general resemblance in the TTY and other device switches (which were basically a series of C switch [case] statements which made no sense to write any other way), that part of the case collapsed.

It then became muddied, because Novell successfully challenged SCO's claim to be the copyright holder of the UNIX source in the first place!

Apart from the entertainment value, I'm so glad that those cases have finally been put to bed.

In this case, I thought that Google had been accused of directly copying the include files (and only those files) that essentially defined the API between the application and the runtime. I thought that Java had actually been published under a fairly permissive license by Sun (as they were very keen to get it adopted as a pervasive language), so I'm actually surprised that this case has come to this conclusion. But I suppose it's Oracle, so all reason goes out of the window as greed takes over.

Not that Google's any better these days.

Experimental 'insult bot' gets out of hand during unsupervised weekend

Peter Gathercole Silver badge

At University at Durham (and Newcastle)..

..they ran a slightly obscure OS on their S/370 called MTS (the Michigan Terminal System).

Unusually for a mainframe OS of the time (I was there from 1978 on), it drove interactive terminal sessions, and our use was controlled by accounting limits. Not surprisingly, these limits were, well, limiting.

I found two ways around this. One was that if you allocated a temporary disk (which allowed you to borrow disk space for that session, but which disappeared when you logged out), and then explicitly relinquished it, the space would be added to your permanent disk allocation!

The other was that when a new year started, the initial passwords on the subsidiary computing students' accounts were predictable, so you found one, but didn't change the password. You then watched for any activity. If after a suitable time you did not see the account being used (which was possible, as subsid. students tended to swap courses), you could then appropriate the account.

This was how I managed to get enough interactive time to become (I believe) the first person (at Durham, anyway) to complete, with a perfect 500-point score, the version of the original Colossal Cave adventure with the Repository ("A crowd of dwarves burst through the hole in the wall, shouting and cheering..." or something similar).

Strange, almost immediately, the game stopped working at Durham. Coincidence?

Abracadabra! Tales of unexpected sysadmagic and dabbling in dark arts

Peter Gathercole Silver badge

Re: Case sensor

May have been stiction. One batch of 1GB IBM Spitfire disks had the wrong lubricant in the drive bearings, which would vaporize and condense on the platter. When the drive was stopped and the head parked, it sometimes stuck to the condensed lubricant, which would prevent the disk spinning up.

A quick shock would free the head and allow the disk to spin up.

I believe some other drive manufacturers had this problem as well.

Ex-UK comms minister's constituents plagued by wonky broadband over ... wireless radio link?

Peter Gathercole Silver badge

Re: @Tim11

I should have included the 'capitalism, red in tooth and claw' option, but in reality this is not an option for any government that is wanting to put essential services online, and expecting people in rural locations to be able to pay the cost of their connectivity.

In reality, putting it on a profit basis will make rural locations less inhabitable1, because people will not be able to afford to live there for an increasing number of reasons.

No. In the UK, government has to consider broadband provision as an essential guaranteed service if they want to reduce the cost of running government, and the telecom providers and media companies, who are looking for a connectivity inversion for their future business models, want it too.

1 Hey, you say, leave the country to those who can afford it! But a lot of farming (take out the farm owners and just look at their workers) and land management is a subsistence economy that pays just above minimum wage, and people on minimum wage cannot afford the high transport costs and lack of amenities - now add the high cost of doing business with government agencies like DWP and HMRC - and as soon as the land is not managed, it will be hugely less attractive to tourists and people looking for second homes in the country.

Peter Gathercole Silver badge


This is the problem with a universal service in the modern age.

It used to be that all the easy customers would be charged a little bit more for their services, and the surplus would be used to provide a service for those people who needed a more expensive solution, without them having to pay more.

But now we must have 'value for money' and 'maximize shareholder return', and suddenly, you're not allowed to put in a non-profit making solution.

The only ways that this can be overcome are by re-nationalizing Openreach or BT as a whole and giving its near-monopoly back (shudder), or putting regulations in place that enforce a guaranteed minimum service for all customers.

But that last solution is unpopular with suppliers, because it limits maximum profit, plus someone in the future will come in and offer just the easy customers a cost-plus-small-profit service, undercutting Openreach on the services it needs to cross-subsidise the more expensive customers.
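The cross-subsidy squeeze described above can be put into rough numbers. All the figures in this Python sketch are invented, purely to illustrate the mechanism:

```python
# Toy cross-subsidy model: a universal flat price covers both cheap and
# expensive customers; a cherry-picker serving only the cheap ones undercuts it.
easy_customers, hard_customers = 900, 100
easy_cost, hard_cost = 10.0, 50.0   # per-customer cost of provision

# Flat universal price that exactly covers the total cost:
flat_price = (easy_customers * easy_cost + hard_customers * hard_cost) / (
    easy_customers + hard_customers)
print(flat_price)        # 14.0 - easy customers pay 4 over cost

# An entrant serving only the easy customers at cost plus 20%:
entrant_price = easy_cost * 1.2
print(entrant_price)     # 12.0 - undercuts the universal provider
```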

It's the tension that exists everywhere in regulated capitalist systems, unfortunately.

Connected car data handover headache: There's no quick fix... and it's NOT just Land Rovers

Peter Gathercole Silver badge


Check the size of the product from the multipack.

If it's crisps, snacks or chocolate, especially if bought from Poundland or Iceland, the individual pack size from the multipack is probably smaller than the packs bought individually. This is the reason they're not supposed to be sold separately, so that the manufacturer does not get blamed for reducing the pack size.

The manufacturers do this to try to make the multipacks appear better value than they actually are!

Peter Gathercole Silver badge

Re: This needs some input from the DVLR @Lee D

The ability to use your passport photo as part of the driving license application has been there since about 2006. I was part of the project that implemented it.

I don't think that it is that radical to tack a bit of functionality onto the already existing process of registering a change of ownership of a vehicle. All of the generation of the V5C is already there, and it would be relatively trivial to add something like a code generation step and notification of change of ownership to the manufacturer (although it would have to exclude both the previous owner's and the new owner's information for data protection reasons).

Techie's test lab lands him in hot water with top tech news site

Peter Gathercole Silver badge

"Surely you don't mean that!"

"I do, and don't call me Shirley"

Ad watchdog: Amazon 'misleading' over Prime next-day delivery ads

Peter Gathercole Silver badge


Prime is a multi-faceted offering. The two most obvious benefits are next-day delivery, which is not available on all Prime items (and it is possible to tell this from the listings - they nearly always say "not eligible for next day delivery"), and free shipping. There are other things, like Prime-only items, which are only available if you are a Prime member, and some music and video media available to stream.

So it is possible to have an item on which you pay no shipping charges, but which will be delivered several days after ordering. It's still a Prime item.

What I have problems with is when you order something that is "next day delivery", and you get an expected delivery date, a dispatch notification, and even tracking information that says it will be delivered, right up to the end of the delivery window, where you suddenly get a "we're so sorry, we have not been able to deliver" message (although you don't always get that).

I'm sure that the multiple instances of this that I've experienced are largely a result of the delivery company (normally Hermes, DHL always seem to deliver) pushing their delivery drivers beyond what is achievable. Normally, a swift complaint to Amazon results in a "free" month of Prime, but that's not much consolation when I needed the item on a particular day.

Three more data-leaking security holes found in Intel chips as designers swap security for speed

Peter Gathercole Silver badge

Re: Looking at the wrong holes @Warm Braw

I think you're not following current deployments. "multiple dedicated machines" do not exist in large organizations any more. They're all doing virtual machine deployments because the hardware vendors are selling these expensive super-sized systems with the express intent of them being carved up into VMs.

And here is the rub. If you cannot trust your process/vm hardware separation, you're in real trouble.

Of course, we could go back to an operating model where we have hundreds of discrete systems rather than a couple of very large systems with dozens of VMs, but space, power, cabling etc would take us back more than a decade, and the loss of flexible sizing would result in wasted resource due to having to have different sized discrete systems for different workloads.

Multi-user, multi-tasking systems have relied on access separation ever since they were invented more than 40 years ago. Pulling this out from under current operating systems would mean going back to the drawing board for OS design, even if it were possible.

Google keeps tracking you even when you specifically tell it not to: Maps, Search won't take no for an answer

Peter Gathercole Silver badge

Re: Nobody saw this coming? @Geoffrey W

From personal experience, if you're trying to move a flock of sheep from one place to another, it is quite often the case that if you can get a couple to move with purpose to where you want them to go, most of the rest of the flock will follow.

I've not done any in-depth sociological research, but I moved flocks of sheep hundreds of times when my father-in-law owned a sheep farm...

ZX Spectrum reboot latest: Some Vega+s arrive, Sky pulls plug, Clive drops ball

Peter Gathercole Silver badge

Re: What we need

The 6502 and Z80 clock issue is really a precursor to the great RISC vs. CISC debate.

Generally, the 6502 would execute each instruction in about two clock cycles, although there were a few that only needed one. The Z80 required between 4 and 13 clock cycles per instruction depending on what the instruction was (this is from memory), so although it generally had a faster clock speed and more sophisticated instructions, for many of the simpler operations that these processors typically ran, the 2MHz 6502 in the BBC performed tasks faster than the 3.5MHz Z80 in a Spectrum.

Memory access was also simpler on the 6502, which enabled it to work with slower memory than the Z80, mainly because memory and CPU clock speeds were linked together.

For complex workloads, the Z80 could run rings around a 6502, but in order to do that, you would need to have work that needed 16 bit registers, and used the complex instructions to their maximum benefit.

The Z80 was more memory efficient (so long as you used all of the instructions) although clever use of the indexed addressing modes of the 6502 could save memory, and allowed you to use zero page memory almost as registers on a 6502, negating some of the benefit of the Z80's more generous register set. The Z80 also had the basic support for bank-switched memory and port driven I/O, neither of which the 6502 had.

It's also worth remembering that processors of this age executed instructions strictly in the sequence they were written, with no overlapping or super-scalar execution, and all memory reads and writes went strictly to the RAM, with no caching or pre-fetch of instructions or data.

So the Z80 was a more sophisticated processor, but not necessarily a faster one than the 6502.
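A back-of-envelope comparison of the two chips, using the ballpark cycle counts above (and taking the Spectrum's Z80A at its usual 3.5MHz), can be sketched in Python. The cycle counts are the rough figures from the discussion, not datasheet-exact:

```python
# Effective throughput in millions of instructions per second, given a clock
# speed (MHz) and an average cycles-per-instruction figure.
def mips(clock_mhz, cycles_per_instruction):
    return clock_mhz / cycles_per_instruction

bbc_6502 = mips(2.0, 2)     # ~1.0 MIPS for typical 6502 instructions
z80_best = mips(3.5, 4)     # ~0.875 MIPS even for the Z80's cheapest ops
z80_worst = mips(3.5, 13)   # ~0.27 MIPS for its most expensive ones
print(bbc_6502, z80_best, z80_worst)
```

Which is the point of the comment: despite the higher clock, the Z80's per-instruction cost could leave it behind a 2MHz 6502 on simple work.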

Drink this potion, Linux kernel, and tomorrow you'll wake up with a WireGuard VPN driver

Peter Gathercole Silver badge

Re: Because it's still a module?

I only read the article, and that contains "pouring his open-source privacy tool directly into the Linux kernel".

I see that it will remain a module, but the intention I get from this is that they are trying to make it compiled directly into the kernel. Maybe the article is misleading.

I actually have no problems with it remaining as a module, either official or unofficial, but there are only a very limited number of scenarios I can see where having it actually compiled in the kernel will benefit users or administrators.

Peter Gathercole Silver badge


The Linux kernel is already heading towards bloat central.

The module mechanism was invented so that you could add functionality without having to include it in the kernel as a compile time option.

The only possible reason for doing this would be if it were needed during system startup, for example if you wanted to network boot through a VPN. This would be problematic, especially as you would have to find some way of loading the certificates or keys before having a filesystem available.

It seems to me that some Linux developers need to remember the design decisions that were made to try to make running Linux easier!

'A word processor so simple my PA could use it': Joyce turns 30

Peter Gathercole Silver badge

Re: guitar tutor

I think you'll find BASIC is older than Forth, so 'old school' would be better applied to both.

(I had a 2nd language ROM in my BBC Micro for Forth back in 1982)

Now that's a dodgy Giza: Eggheads claim Great Pyramid can focus electromagnetic waves

Peter Gathercole Silver badge

Re: And...

No, because they're triangular based pyramids.

All of the ones that have 'unexplained' effects are square based, and aligned to the earth's magnetic field!

But if you look at a pyramid tea bag, you will see that it is just a tube of paper, with the top and bottom seams glued/welded at right angles to each other in parallel planes, with the perpendicular aligned along the centre of the tube.

All this time I thought that it was some clever folding machine, only to discover it's almost no different from a machine making normal square tea bags.

Peter Gathercole Silver badge

Re: It was aliens wot did it

"There are pyramids in my head

There's one underneath my bed

And my lady's getting cranky...."

Sysadmin trained his offshore replacements, sat back, watched ex-employer's world burn

Peter Gathercole Silver badge

Re: Logic bombs @razorfish

Not the end of the world. Reload your offline backups that are stored in a remote location.

Remember, online copies are *not* backups, for exactly the reason you specify!

BBC websites down tools and head outside into the sun for a while

Peter Gathercole Silver badge

Re: Scary

There was more than just the test card.

During the rollout of colour TV in the UK in 1967 or so, BBC2 carried a number of test programmes, which I believe were called "Trade Test Transmissions". They were basically colourful short documentaries, broadcast at fixed times of the day, so that TV installers had something predictable to set the colour up on the TV they were installing (colour TVs were still mainly valve driven, were fiddly to set up, took ages to warm up from cold, and generated large amounts of heat).

I happened to be ill for a while that year, and off school for a week or two, and I remember three of them. One was called "Ride the White Horses", and was about power boat racing, another was called "Skyhook", and was about helicopter cranes, and one was called "Birth of a Rainbow", about rainbow trout farming.

There were more, but I can't remember them. Wikipedia has a list.

Looking at YouTube, they have the one about rainbow trout, and some of the others, but not the other two I remember.

UK.gov commits to rip-and-replacing Blighty's wheezing internet pipes

Peter Gathercole Silver badge

Re: Not wanting to state the obvious @EnviableOne

C4 and the license fee.

I see my mistake. There was a plan 10 years ago for C4 to receive some TV license money, but it was never carried out.

S4C, which is not C4, does receive license payers' money, though.

Peter Gathercole Silver badge

Re: Not wanting to state the obvious @Martin an gof

That's true for standard definition, but BBC HD channels do not carry any local content. It's national, with local content slots showing a message saying that the content is not available in your region.

Peter Gathercole Silver badge

Re: Not wanting to state the obvious @EnviableOne

Your argument falls down, because Channel 4 take a slice of the license fee.

Peter Gathercole Silver badge

Re: Not wanting to state the obvious @JDeaux

I'm pretty certain that you do not need a TV license to watch Netflix. You only need one to watch broadcast TV.

There is a concise wording of what broadcast means in the TV license, which I can't be bothered to dig out, but it's basically along the lines of watching a program at the same time as it is being broadcast, whatever transmission medium you're using.

So, for example, if you watch something that is being served out using one of the catch-up services before it's finished its broadcast (it has to overlap the broadcast), then you need a TV license. If you wait until it's finished, and then watch it on a catch-up service, then a TV license is not needed (but remember the +1 TV channels).
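That overlap rule - catch-up viewing needs a license only if it overlaps the broadcast window - is really just an interval test. This is a toy Python illustration of my reading of the rule, not an official definition:

```python
# True if the viewing window overlaps the broadcast window (times are
# minutes from an arbitrary epoch).
def license_needed(watch_start, watch_end, broadcast_start, broadcast_end):
    return watch_start < broadcast_end and watch_end > broadcast_start

# Start watching on catch-up ten minutes before the broadcast ends:
print(license_needed(50, 90, 0, 60))    # True - viewing overlaps broadcast
# Wait until the broadcast has finished before watching:
print(license_needed(70, 110, 0, 60))   # False - no overlap, no license
```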

They've also broadened the scope, as they've defined computers and other devices as TVs for the purpose of watching broadcast programmes.

As far as I am aware, Netflix do not broadcast any content, so it is all on-demand. No TV license needed. If Netflix were to start carrying 'Live' programs, sent to multiple users simultaneously, you may need one, however.

NowTV, which carries broadcast channels alongside its catch-up content, does require a license.

BBC iPlayer is a bit of an exception though, as they have added a specific requirement to have a TV license in order to use any aspect of iPlayer. This is actually more like a no-fee commercial contract. They justify this because you can watch programs on iPlayer at the same time as they are broadcast, but I actually object quite strongly to what the BBC is doing in this area.

I can see the nature of 'broadcast' being changed or challenged again in the near future, because of the nature of multicast services on the Internet. For example, many road traffic cameras provide real-time video to whoever wants to see them. Does this count as a 'broadcast'? And of course, as the technology gets cheaper, we are beginning to see small live TV stations being run out of bedrooms using cloud services, in the same way that we get small Internet radio stations. Will these count? Who knows.

ReactOS 0.4.9 release metes out stability and self-hosting, still looks like a '90s fever dream

Peter Gathercole Silver badge

Slight aside

I recently tried getting RedHat 4.1 (that's RedHat, not RHEL or Fedora) from a PCW cover disk from something like 1997 running in VirtualBox.

It was quite a frustrating experience, because although the install worked OKish (once I'd sorted out how to run the loader program without a floppy disk drive, and then access the CD from the loader program - bootable CDs and ATAPI were still in the future back then), trying to get XFree86 working was difficult.

Back in the day, graphics and mouse support was much less standardized on PCs, and that version of XFree86 needed to be told a lot about the graphics adapter hardware. The autoprobe cannot identify the VirtualBox display adapter from its ancient database of cards that it knows about (as an example, when I originally used this as my main Linux distro, I had an ATI Mach64 card which worked quite well).

I had minor success setting it up as a dumb SVGA adapter with no hardware assists, but I also had problems with the mouse as presented by VirtualBox. And I did not get around to attempting to get the sound working (I used to use an ISA SoundBlaster 16, which I did get working with OSS, I think). I don't know what the VirtualBox sound adapter looks like.

Even though the virtual machine was running on my 2nd gen Core 2 Duo Thinkpad (not a stellar performer by today's standards, but a couple of orders of magnitude faster than the machine I first ran it on), because there was no hardware assist from the graphics adapter, the screen handling was sloooooow compared to when I used it on a Pentium 100 with the Mach64.

I wish I could get it running better, because I would love to be able to show people how simple the GUI looked on old UNIX/Linux systems (I know, I could run fvwm2 on Ubuntu, but it would be so much more authentic on an ancient Linux).

Spectre rises from the dead to bite Intel in the return stack buffer

Peter Gathercole Silver badge

Re: Asking (possibly) dumb question

In this case, I don't think that it is a timing issue (the timing-issue leaks were about deciding whether a data cache contained a value, or whether the system had to go and fetch it from main memory).

In this case, what is being done is that a buffer that caches the return address from functions is being controlled, so that when returning from a function or sub-routine, it is possible to change the return address from the next instruction after the call (the normal result) to an arbitrary address controlled by the malicious code.

Normally, the processor would go to the stack frame stored in main memory to fetch the return address (and changing this is the primary technique in a 'stack smashing' attack, as the stack is in the address space of the user process), but it looks like Intel and ARM have found a way of keeping this return address in a faster cache, so that the return can save some clock cycles. If you can arrange to change entries in this buffer cache to point to some malicious code, and get the processor to return to this code while still in kernel mode, in theory you can get access to memory that would normally be protected.

The write-up and description of the Return Stack Buffer and this vulnerability is quite involved, and I'm not sure I fully understand it, because in ARM at least, there appear to be two buffers, one of which is a conditional buffer that tracks predictive returns, which can be manipulated using 'branch not taken' types of speculative attack.

As I said earlier, invalidating the RSB (assuming the processor has a suitable instruction) on a syscall or context switch should limit this type of attack to the current process, which is still not that good, but should prevent the leaking of information from other processes or the kernel.
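The mechanism described above can be sketched as a toy model. This is purely illustrative - the real RSB is a hardware structure, and the size, wrap-around behaviour and addresses here are assumptions - but it shows how an unmatched CALL leaves an attacker-chosen prediction on top of the buffer:

```python
class ReturnStackBuffer:
    """Toy model of a return-address predictor: a small circular buffer."""

    def __init__(self, size=16):
        self.size = size
        self.entries = [0] * size
        self.top = 0

    def push(self, ret_addr):
        # On a CALL: record the address of the instruction after the call.
        self.top = (self.top + 1) % self.size
        self.entries[self.top] = ret_addr

    def pop(self):
        # On a RET: this is only the *prediction*. The architectural return
        # address still comes from the stack frame in main memory, but the
        # processor speculatively executes at the predicted address first.
        predicted = self.entries[self.top]
        self.top = (self.top - 1) % self.size
        return predicted

rsb = ReturnStackBuffer()
rsb.push(0x1000)        # victim CALLs a function, expecting to return to 0x1000
rsb.push(0x4141)        # attacker executes a CALL with no matching RET...
predicted = rsb.pop()   # ...so the victim's RET now predicts 0x4141, not 0x1000
```

If that speculative window at the poisoned address can touch protected memory before the mismatch with the real stack is detected, you have the leak described above - which is also why invalidating the buffer on a context switch closes the cross-process case.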

Peter Gathercole Silver badge


I've only had a short think about this, but it strikes me that the main problem here is that the contents of the Return Stack Buffer persists across context switches.

If whatever OS kernel is being used invalidated the RSB when context switching between different processes/threads, this might affect performance, but it should prevent this type of leak between processes. Any performance impact would only be when a process is re-scheduled.

Switching to kernel mode (a system call) would be a bit more problematic, as system calls happen frequently. You would not really want to invalidate the RSB on every syscall, but I would have thought that there should be something that the syscall interface could do to sanitize the RSB it inherits from the process. But the separation of the kernel and process address spaces in the Meltdown fixes should really limit the damage.

As I say, I've not read the full papers yet, so there may be something I haven't considered.

No big deal... Kremlin hackers 'jumped air-gapped networks' to pwn US power utilities

Peter Gathercole Silver badge

Re: More detail please

Um. If somebody/anybody has remote access to a network, then it is not "air-gapped".

A properly air-gapped environment has absolutely no communications connections with any other environment, and is completely self-contained in one location.

Anything else should probably be described as "firewalled" (assuming that there are firewalls in place!)

Sysadmin sank IBM mainframe by going one VM too deep

Peter Gathercole Silver badge

Re: Just to mudddy the waters a trifle ... @jake

The ! path separator was adopted by UUCP mail in the UNIX world, and remnants of it still persist in the canned sendmail configurations shipped with real UNIX systems.

I used to be att!ihnp4!hvmpa!gatherp within the AT&T name space (I've googled this, so I don't think it's leaking any info).

ihnp4 used to be a very good machine on which to base mail path addresses, because it seemed to be connected to everything, and I was surprised to find a reference to an ihnp4-based ! mailpath in a document about 10 years ago. I think the name has finally gone now, but it persisted for a long time, with new systems adopting the name as hardware was retired.

"ih" stood for Indian Hill, an AT&T development lab in Chicago where AT&Ts switching systems (telephone exchanges) were developed, amongst other things. "np" stood for network processor, and 4 was the number of the system.

It was quite interesting, because the way that the ! separator specified a mail path meant you could explicitly route mail through various systems to check connectivity. I frequently used to generate a mail loop back to myself, bouncing a mail off a distant server to check that a particular route worked.

It's not that long ago (well, actually ~20 years - but that does not seem that long to me) that you were able to use ! mail routing with sendmail (with a normal rule-set) on TCP/IP networks, but as sendmail got replaced and explicit mail routing fell out of favor, it stopped working.
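A bang path is just a !-separated list of relay hops ending in a user name, so parsing one is trivial. A minimal sketch, using the address quoted above:

```python
def parse_bang_path(addr):
    """Split a UUCP bang path into its relay hops and the final user name."""
    parts = addr.split("!")
    return parts[:-1], parts[-1]

hops, user = parse_bang_path("att!ihnp4!hvmpa!gatherp")
# hops -> ['att', 'ihnp4', 'hvmpa']  (each machine relays to the next in order)
# user -> 'gatherp'                  (the mailbox on the final machine)
```

Because the sender spells out every hop, you can see why the explicit loop-back routing described above worked: you simply listed the distant server in the middle of the path and your own machine at the end.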

Peter Gathercole Silver badge

Keyboards with #, £ and $ started appearing in the UK at about the same time as 8-bit characters were becoming popular on mini-computers and PCs (the very early '80s). See my earlier post about keyboards and 7-bit ASCII to understand what happened before that time.

Once 8-bit code-pages became popular, the lower 128 (well, 95 really because of the 33 non-printing characters) code points of almost all internationalized code-pages were the same as US ASCII (X3.4-1986), and any character not in US ASCII was pushed into the top 128 (well, 127 really, because position 255 was normally delete or something).

This meant that there were many, many code pages to cope with different characters for different languages, not just the UK, and a corresponding set of keyboards. Here is an interesting page from IBM, who IMHO were the first company to really start standardizing keyboard layouts for different countries (the 'enhanced' keyboard many of us will be typing on is basically an IBM layout from the PC-AT era, although DEC's international LK-201 keyboards were of a similar time-frame).

Note the references to the 101, 102 and 106 keyboards pre-date the addition of the 'windows' keys.

Peter Gathercole Silver badge

Re: Just to mudddy the waters a trifle ...

That's interesting. I never thought about the roots of the words (although there is something similar about pound weight and lb as a symbol).

I always assumed it was because in the early days of terminals, the US 7-bit ASCII table only had space for 95 printable characters, which were filled with characters suitable for US data processing. This did not include currency symbols for other geographies.

For many terminals and printers intended for use in the UK, there was a toggle or DIP switch, or sometimes a menu setting that normally replaced the # symbol with a £ symbol (although some replaced $ with £). Same numeric code, different presentation. This is what I thought was the basis for hash/pound.

I remember writing shell scripts with comments that appeared with the £ symbol at the front. In hindsight, it must have looked very strange, but at the time, it was just normal.

When 8-bit ASCII with extended character sets started being used, life was a nightmare, because the number of different code-pages (CP437 and ISO 8859-1 and -15, anybody?) proliferated, with different code pages on different devices, making interoperability extremely difficult.

I don't know how other OSes dealt with this, but IBM came up with quite complicated input and output methods on AIX for most devices that allowed you to specify a translation table that could be used to make it all work. Setting these up was quite complicated, though, and not many customers actually used them correctly (or in some cases, at all!)

It was only with the adoption of the various Unicode UTF character encoding schemes that things started to get a little easier.
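The mess is easy to demonstrate with a couple of the code pages mentioned above - the £ sign lands on a different code point in each, and the same byte means different characters depending on which page a device assumed (these particular encodings all ship with Python, which is how the sketch checks itself):

```python
pound = "£"

# The same character maps to different bytes in different encodings:
assert pound.encode("cp437") == b"\x9c"      # IBM PC code page 437
assert pound.encode("latin-1") == b"\xa3"    # ISO 8859-1 (8859-15 agrees here)
assert pound.encode("utf-8") == b"\xc2\xa3"  # Unicode UTF-8

# Conversely, the same byte decodes to different characters:
assert b"\xa3".decode("latin-1") == "£"
assert b"\xa3".decode("cp437") == "ú"
```

Send a Latin-1 £ to a CP437 printer and you got ú - exactly the kind of cross-device garbling those AIX translation tables were there to prevent, and exactly what a single Unicode code point for £ makes go away.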

Fukushima reactors lend exotic nuclear finish to California's wines

Peter Gathercole Silver badge

Re: They missed a source

I get the Asimov reference, but it was not R.Daneel Olivaw or R.Giskard Reventlov's plan. It was Levular Mandamus who set up the nuclear intensifier. The two robots merely did not stop the plan, in order to invigorate the human race.

This caused the demise of R.Giskard, as he did not have the flexibility to work around the first and zeroth law.

I always found it strange that R.Daneel was able to invent and invoke the zeroth law, and then partly ignore it to allow 'harm' to the humans on Earth for their long-term benefit. He was quite an early humaniform robot (at least in the 'Spacer' era, and ignoring Tony and the Georges Nine and Ten), so why was his positronic brain so adaptable?

Peter Gathercole Silver badge

Obligatory XKCD


Samsung’s new phone-as-desktop is slick, fast and ready for splash-down ... somewhere

Peter Gathercole Silver badge


The Tek 4010 had a relatively small screen (~11" diagonal, IIRC), so I would not call that "horrendously large", although it was normally set on its own floor-standing pedestal. Its more capable cousin, the 4014, was larger.

It was a storage scope, so the screen 'remembered' what had been drawn without the screen processor redrawing it (unlike a raster CRT monitor, which has to continually repaint the screen). Over the course of a minute or two the image started to degrade, and it could not be scrolled. You had to clear the screen and draw the next one.

I used to use it to do work in APL, as it could draw all of the over-struck Greek characters, and I actually wrote a 4010 graphics emulator (it was a very simple protocol) in BBC BASIC, which was fast enough to keep up on a 9600 baud serial link.

UNIX troff (a text formatter) had a post-processor that would allow ditroff output to be drawn on the screen of a 4014 for proofreading, before the days of high-definition terminal screens. I believe it's still there in groff on GNU/Linux, even though it's not needed any more.

'Fibre broadband' should mean glass wires poking into your router, reckons Brit survey

Peter Gathercole Silver badge

Re: It was always fibre though, right?

Certainly, there was a time when most of the trunk or core network was fibre, while the analogue phone system beyond the local exchange was copper. That was the concept of System X.

I worked for a company that was providing BT with telephone exchanges in the late '80s, and they were definitely selling fibre trunk equipment then. According to Wikipedia, the last trunk connection in the UK was switched to fibre in 1990.

Peter Gathercole Silver badge

Re: Eir, Vodafone & Sky

Actually there is a simple definition of what 'broadband' actually means.

It means that there are multiple carrier frequencies running over the transmission media. Basically frequency division multiplexing (FDM).

In the dim and distant past, when thick and thin wire Ethernet or Token Ring was the main data networking technology, this used time division multiplexing, sometimes called narrow-band, because there was only one digital data stream, and each network station got a share of the total available.

Cable TV started using multiple frequencies over coaxial cable to provide analogue cable TV, which was the first time many people outside of the comms. industry would have come across 'broadband' (unless you count ordinary OTA analogue TV).

One place I worked, we had a data network that ran over coaxial cable with multiple data channels being carried (multiplexed) on different frequencies down the cable (actually it was a hybrid system, because there was TDM being done on each of the FDM channels).

All *DSL systems are broadband, there being multiple carrier frequencies being sent down the 'phone line. DOCSIS is broadband because of there being multiple carrier frequencies over the cable.

Interestingly, most Fibre is also broadband, because the carriers use multiple frequencies of laser light down the Fibre, although it is not clear to me whether this is what is delivered in FTTP (it definitely is in the backhaul or core network). I did read a description that suggested that the down path on FTTP was shared TDM spread over multiple FDM carrier frequencies, and the up path was FDM, with each customer having their own frequency. This means that it is possible to split and combine the different feeds using passive optical splitter nodes at the pole/distribution point, and only need expensive powered switches at the cabinet.

I'm not sure whether 4G counts as broadband. There are definitely multiple carrier frequencies being transmitted and received, so I suppose that it must by the definition I gave at the beginning of this post.
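The FDM idea running through all of the above - independent signals sharing one medium on different carrier frequencies, each recoverable on its own - can be sketched numerically. The frequencies and amplitudes here are arbitrary illustrative values, not anything from a real *DSL or DOCSIS spec:

```python
import math

# Two "channels" multiplexed onto one medium as different carrier frequencies:
# frequency (in cycles per sample window) -> amplitude.
channels = {5.0: 1.0, 9.0: 0.5}

N = 1000  # samples across one window
samples = [sum(a * math.sin(2 * math.pi * f * n / N)
               for f, a in channels.items())
           for n in range(N)]

def demodulate(samples, f):
    """Recover one channel's amplitude by correlating with its own carrier."""
    n_total = len(samples)
    return 2 * sum(s * math.sin(2 * math.pi * f * n / n_total)
                   for n, s in enumerate(samples)) / n_total

# Each carrier can be picked back out independently of the others:
# demodulate(samples, 5.0) ≈ 1.0 and demodulate(samples, 9.0) ≈ 0.5
```

Because sinusoids at different frequencies are orthogonal over a full window, each receiver sees only its own channel - which is why many customers (or many data channels on that coaxial network) can share the one cable.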

Oracle wants to improve Linux load balancing and failover

Peter Gathercole Silver badge

IBM's RDMA over HFI and IB

ran RDMA over an abstracted network device, with the resilience built into the underlying network. This allowed the network layer to adjust to failures without the RDMA setup being exposed to the changes.

Seemed to work quite well on AIX, and I believe that the Linux support (for the P7 775 9125-F2C) worked the same (or even better!). I'm pretty sure that IBM would have put their work back into the kernel.

Evil third-party screens on smartphones are able to see all that you poke

Peter Gathercole Silver badge

Re: What? @DougS

Interesting thought. I don't know enough about Android device drivers to know whether they can accept microcode from the device that is being configured, but I think it is unlikely.

Doing a bit of research, it does indeed appear that the interface for most screen controllers is I2C, SPI or SSI, and these do not require, and do not allow, code to be injected into the device driver. It may be possible that the screen would react to a request for the device parameters from the device driver, but that would be at the device driver's request.

On the subject of recovering the device: these replacement screens are provided at the lowest cost, and the chance of someone actually following up on a warranty after they've had the device for a year is very low. Generally, phone repairers working at this low end just buy direct from China via eBay, and are unlikely to return a faulty device unless there was a compelling reason, so they would have to be complicit (the cost of returning stuff to China is much greater than sending it from China). I'm not saying it couldn't happen, but...

Peter Gathercole Silver badge

Re: What?

It would not surprise me at all if the touch screen has its own ARM core, or at least a microcontroller or PIC.

All of these could run additional code that captures all sorts of information, but I have to ask, what does it do with the data after it's been collected?

I very much doubt that whatever the controller is, that it has access to the higher levels of function like the IP stack, or even access to the main memory of the phone or whatever device the touch screen is fitted to.

Chances are that there is a defined protocol that the screen and main processor use (it's probably based on I2C or something similar) to allow the OS to identify what is touched and when, and unless this is a lot more functional than it needs to be, passing out-of-band data is not something that is likely to be very easy. You would have to have some system component added to Android (the applications are abstracted from controlling the hardware) to read the data.
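As a sketch of what such a fixed protocol looks like from the driver's side, here is a decoder for a hypothetical touch-report packet. The 5-byte layout, the field order and the status bit are all invented for illustration - every real controller has its own register map - but the point stands: the packet has room for coordinates and a status flag, and nothing else:

```python
def decode_touch_report(report):
    """Decode a hypothetical 5-byte touch report read over the bus:
    [status, x_hi, x_lo, y_hi, y_lo]."""
    status, x_hi, x_lo, y_hi, y_lo = report
    touched = bool(status & 0x01)            # bit 0: finger down
    x = (x_hi << 8) | x_lo                   # 16-bit X coordinate
    y = (y_hi << 8) | y_lo                   # 16-bit Y coordinate
    return touched, x, y

# A touch at (400, 600):
event = decode_touch_report(bytes([0x01, 0x01, 0x90, 0x02, 0x58]))
# event -> (True, 400, 600)
```

With a fixed-size report like this there is simply nowhere in-band for a compromised screen to smuggle extra data to an app, unless the driver itself is modified to ask for it - which is the point made below about the demonstration phone.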

So even loading a dodgy app that looks to see whether one of these hacked screens is fitted is unlikely to be able to query the screen controller to get that information.

What I would like to see is what special modifications were made to the demonstration phone to allow this demonstration. I'll bet they included a modified Android driver to talk to the screen. If that's the case, then I'll breathe easily regarding the screens I've replaced on several phones.

Who fancies a six-core, 128GB RAM, 8TB NVMe … laptop?

Peter Gathercole Silver badge

Re: Still lightweight @Duncan

But the VT220 could do 132x24 text resolution!

ZX Spectrum reboot firm boss delays director vote date again

Peter Gathercole Silver badge

Debt collectors

This is from memory, but in order to send in debt collectors, a process has to be followed which requires the complainant to file a claim in the court. There is then a period during which the defendant can respond to the claim.

Depending on this response, the court can then issue a court order for the outstanding amount to be paid. This has another waiting period, and it is only after the defendant fails to pay in the time specified that the complainant can engage a court-recognized agent (sheriff or debt collector) to attempt to enforce the debt or recover goods to cover the debt.

If that is not possible, then the complainant can file to have the defendant bankrupted.

All in all, the process takes months, not days, and this is probably what Indiegogo have found. It will be interesting to see whether the process has been started. If it has, the information should be in the publicly available court record. Anybody any idea which court they might be using?

As I say, this is from memory, learned after a friend of mine returned a car to a car dealer as not fit for purpose, and attempted to get a refund of the money he paid for it.

As far as the gender pay gap in Britain goes, IBM could do much worse

Peter Gathercole Silver badge

Re: What gender gap though? @Lusty.

Hmm. All this time, I've got it wrong. I'm so humbled on this.

I thought I had a good grasp of mean, median and mode, but I obviously don't.

I'm thinking I should withdraw my original comment, but that would make this comment trail a bit difficult to follow.

Biting the hand that feeds IT © 1998–2019