* Posts by Peter Gathercole

2924 posts • joined 15 Jun 2007

Supreme Court punts on Microsoft email seizure decision after Cloud Act passes US Congress

Peter Gathercole Silver badge

Re: As has been noted...

But ultimately, if the data is on your kit, in your buildings, you have the recourse of air-gapping it, turning it off, or putting it through a crusher, all of which will prevent any further data loss. Try getting any of the cloud providers to surrender or destroy the disks or tapes that have held your data when you move away from their service.

You also have much more control about how the data is protected, rather than relying on the promises of one or more third parties, possibly in other countries.

For example, you get to choose the number and type of security boundaries so that you are not so reliant on one single firewall, OS or network router/switch supplier, and you can vet your staff, and take appropriate disciplinary action if they go astray.

Gemini: Vulture gives PDA some Linux lovin'

Peter Gathercole Silver badge

@AC re phone and data.

Ubuntu would be a good place to start for Linux on an Android phone, because Canonical have already done it, albeit abandoned now.

I have a Nexus 4 with Ubuntu Touch on it, and although it is built on top of an Android kernel, it is supposed to be a full Linux (although the display is Mir and Unity). Opening a terminal session does make it look like a more complete Linux than doing the same on an Android device. Phone and data work fine. I've not attempted to put an external keyboard on it (I can't remember if the Nexus 4 supports On-the-Go USB).

In order to run normal apps other than the ones compiled for Unity, it is necessary to have some form of compatibility layer installed to provide something approaching a normal X11 display, but I never managed to get it working. Maybe I should attempt it again.

VMs: Imperfect answers to imperfect problems, but they're all we have

Peter Gathercole Silver badge

Re: Multitasking @lleres

I am aware that prior to Edition 4, UNIX on the PDP-7 supported only one user at a time, but it had the concept of multiple users (just one logged in at a time) from earlier than that.

Bearing in mind that in the beginning, it was a side-of-the-desk project, borrowing a system that did not belong to them, it is not surprising that it took a short while to become fully multitasking with multiple concurrent users.

The early '70s was before my (computing) experience. I first used UNIX Version (Edition) 6 at Durham University in England in October 1978 (yay, the 40th anniversary of my first using UNIX is coming up). I had used ICL George 3 or 4 and TENEX as a guest a few months earlier, and MTS at the same time as UNIX, but shared-access computers were a real rarity at the time, especially outside universities and other research establishments.

UNIX must have been quite the breakthrough for those who came across it at the time.

Peter Gathercole Silver badge

..imperfect problem.

I think 'imperfect problem' is a poor moniker, as it implies that there is a 'perfect problem', which surely must be an oxymoron.

In my view, a perfect problem is one that does not exist!

Peter Gathercole Silver badge

Re: Multitasking @lleres


I don't recognize your categorization of "Unix die-hards" being proponents of real time computing.

UNIX was made multitasking almost from the beginning in order to allow several people to share what was an expensive and scarce resource. At that time UNIX was not, and it never has been, a proper 'real-time' operating system like DEC's RT-11 or RSX-11 (note that there have been real-time extensions, like AT&T UNIX RTR, but they were never really mainstream).

In fact, completely counter to what you said, the movers and shakers of UNIX (Dennis, Ken, Doug and Joe - although Brian was less involved) had all taken an active role, to varying degrees, in the Multics project. Multics was multi-user and multi-tasking, and the desire when creating UNIX was to preserve many of the good things in Multics on much smaller and less costly systems than Multics needed.

So as a result, UNIX was written, pretty much from the ground up, as a multi-user and multi-tasking system.

In my view, if IBM had chosen a cut-down OS based on UNIX rather than what Microsoft provided, the whole computing world would have been better off. As it was, proper multi-tasking did not appear on desktop-class machines for many years, and Windows was only dragged into the multi-user world very late indeed.

But I take the point made in the article that the poor implementation of many computer OSs and applications does not provide sufficient isolation between applications. A properly designed OS with the correct resource fences (for CPU, memory and I/O) should really do everything that is currently being done by a type 2 hypervisor. Basic UNIX has always provided process and memory separation; AT&T-derived UNIXes had a 'fair share scheduler' back in the 1980s to enforce CPU limits, and AIX has had Workload Manager (WLM) since AIX 4.3.3, which is used by WPARs (Workload Partitions - much like Solaris Containers) to limit CPU, memory and I/O resource use.

A proper OS should enforce memory separation (UNIX has since it was re-written on the PDP-11), although the current Meltdown flaw has shown that Linux (note, Linux is not UNIX) has taken some (in hindsight, and IMHO) poorly-thought-out efficiency shortcuts, like mapping most of the kernel memory space into each process. UNIX never did this, at least not on the PDP-11, S/370, VAX, and the Motorola- and SPARC-based Sun platforms that I know most about.

It would be interesting to look at Intel UNIX ports like SunOS for the i386, AIX PS/2 (damn, I should know this for this platform), Xenix/386, Interactive UNIX, Microport UNIX and UnixWare to see whether those platforms properly separated the kernel address space from user-land.

Here's the list of Chinese kit facing extra US import tariffs: Hard disk drives, optic fiber, PCB making equipment, etc

Peter Gathercole Silver badge

Re: Liquid elevators

Try an Archimedes Screw.

Linus Torvalds says new Linux lands next week and he’s sticking to that … for now

Peter Gathercole Silver badge

Re: As for every release @teknopaul

...in vms...

Has the Intel port of OpenVMS got that far already?

Ohhhhh. He meant VMs, didn't he?

Probe: How IBM ousts older staff, replaces them with young blood

Peter Gathercole Silver badge

That is so true.

Who knew? Fabric access NVMe arrays can work with Spectrum Scale

Peter Gathercole Silver badge

Don't know what the fuss is about

As long as storage can be mapped into a *nix device, Spectrum Scale can use it.

What Spectrum Scale can achieve is not just high speed access, but very high bandwidth to single filespaces by multiple clients. Historically, it has achieved this by a high degree of parallelism across relatively slow (disk speed) storage.

That's why it is popular in supercomputer installations where speed and file-store size are both important.

What using NVMe will do is reduce the latency, and increasing the individual device read speed will help reduce the amount of parallelism that is needed to obtain the required performance.

Spectrum Scale also allows managed, tiered access to storage of differing performance.

The art will be to organize it to get the maximum benefit from that speed.

10 PRINT "ZX81 at 37" 20 GOTO 10

Peter Gathercole Silver badge

Re: Memories... @travellingman

I'm really not sure how much of an asset the Cambridge Programmable would have been in an exam over a normal scientific calculator.

It did not have any stored-memory capability, so you would either have to take it in powered on, and risk the battery running out, or remember any program that you wanted to use - not that much of a problem, however, with only 32 (or was it 36?) programmable steps.

I did use a high-function Commodore SR4190R in a physics exam at university to do some linear regression that I could not remember the formula for. I worked out the results, then reverse-engineered the calculations to fit, so I could present my 'workings'. Non-programmable scientific calculators were allowed, but I suppose it was cheating (a bit). I don't actually think that that exam added much to my overall degree.

Peter Gathercole Silver badge

Re: Memories... @IMG

I'll see your HP RPN calculator (mine was an HP-45), and raise you (because of difficulty in fitting anything useful in the limited memory) a Sinclair Cambridge Programmable and a Commodore PR-100 (I also had a Texas TI-57 programmable at one time, but it went wrong after about 2 weeks, and I got my money back).

I've forgotten all of the other calculators I've had across the years. I still have a TI-58 as an ultimate fallback, but I mainly use my 'phone now.

Interestingly enough, in the past couple of weeks, I've had to remind a colleague about the fact that some calculators did arithmetic hierarchy (generally TI and possibly Rockwell), and some didn't (Sharp, Commodore/CBM, early Casio, and most cheap 4/5 function calculators). HP were pretty much a law unto themselves, using RPN.

Peter Gathercole Silver badge

Re: Gateway Drug

I added an external keyboard using a Tandy membrane keyboard, suitably modified by scraping tracks and repainting with conductive paint. This was attached by a ribbon cable long enough that the ZX81 and rampack were some distance from the keyboard. Never had a wobbly-rampack crash after that.

I also added a 7400 TTL on a small board in the Sinclair rampack to re-map the 1K of static memory to a usable memory address (which I used for small machine-code assists to BASIC), and also added another 1K of static memory under the keyboard, attached to the ULA side of the bus isolation resistors. This allowed me to change the I register, which held the base address of the character generator table, to point at an address in this RAM. This gave me a fully programmable character set! So my ZX81 was probably the only one with 18K of RAM!

I also had a sound board from QuickSilver which provided 4-channel (3 + white noise) sound using an AY-3-8910 sound generator, added to the video signal using an external modulator. QuickSilver also produced a point-addressable graphics board, but I think that worked by doing a similar trick to mine with the RAM: writing all the characters out to the screen, and manipulating the pixels in each character cell. I believe that it came with some M/C routines in an additional ROM that allowed basic line-draw commands.

I had great fun getting it to produce harmonized music while drawing on the screen at the same time. The only problem was that in 'slow' mode, BASIC was just a bit too slow to make it seamless.

Although it looked a bit Heath-Robinson, it drew some interest in the computer club of which I was a member.

Paul Allen's six-engined monster plane prepares for space deliveries

Peter Gathercole Silver badge

Re: Gerry Anderson thought of it first

I always wondered why the tyres of lifting body 1 weren't scuffed when the wing-tips angled down. The main body was not still on its stilts when LB1 was attached.

I made a mean Lego model of Zero-X when I was a kid. It was about 15" long, and used nearly all of the Lego that we had. The colours were wrong, of course, and as all large Lego models were, it was a bit fragile.

Unfortunately, I did not take any pictures, because I didn't have a camera at the time.

Intel gives Broadwells and Haswells their Meltdown medicine

Peter Gathercole Silver badge

Re: New processor? - NO! @chasil

The retpoline fix, IMHO, is not a complete mitigation for Spectre V2.

What has been described is a change to the compiler/compiler flags used when the kernel was compiled.

As I understand it, retpoline will prevent a call from generating speculative execution at the time of the call, so now that the kernel has been compiled with this fix, it will not have any speculative execution occurring whenever it performs a call.

But what runs on these systems is more than just the kernel. Compile time fixes need to be performed on all code that runs on the system, and the kernel is just part of a running system.

You would also have to re-compile the whole of the repository if you wanted to roll this out to an Ubuntu system, and that pre-supposes that you don't have any code compiled without the retpoline options present on your system.

But even this is not enough. If there was a remote execution vulnerability that allowed executable code to be dropped onto your system and executed, then you have ABSOLUTELY NO CONTROL over whether that has a retpoline fix in it, and you can bet your last dollar that the code would not have the workaround.

Add to that the possibility of locating sequences of bytes that form valid code for performing a Spectre type 2 attack on the system already, and you should be able to see that retpoline fixes in the kernel are seriously not enough to mitigate this attack.

Just my 2 pennies' worth.

Nokia tribute band HMD revives another hit

Peter Gathercole Silver badge

Re: I still have a 7110... @msknight

Definitely had a spring, although the tracks that connected the microphone in the slide were the weakest point. As soon as the contacts got dirty (and they were exposed to the air), the microphone stopped working.

It was an easy fix, but tedious to do frequently.

OpenBSD releases Meltdown patch

Peter Gathercole Silver badge


Just think how I feel.

In October, I will celebrate the 40th anniversary of logging on to a UNIX system for the first time.

Cue up the real grey-beards...

If you haven't already killed Lotus Notes, IBM just gave you the perfect reason to do it now, fast

Peter Gathercole Silver badge

Re: CVE-2018-1383 @Seven

I took the efix apart yesterday (publicly available to anybody and can be examined using anything that understands tar), the description is "ABSTRACT=CAA clcomd fix", and the only thing that is shipped with it is a replacement for /usr/sbin/clcomd.

Whilst it is true that this fileset is shipped as part of AIX (although only usable on Standard and Enterprise editions, not Express), it is only needed on systems that are clustered in some way. I know it is needed by PowerHA SystemMirror (HACMP), but I suspect that it may also be used by some of the other cluster services like Spectrum Scale (GPFS), and maybe other things that use RMC/RSCT, although it is not used for communication with the HMC.

The published APARs contain virtually no information about the nature of the vulnerability, so it would require internal knowledge to definitively know what the problem is.

Maybe the AC who replied to you actually has seen something to confirm your guess.

I am currently involved in running a mixed estate of clustered and non-clustered (PowerHA) AIX systems, and clcomd is generally not running on the non-clustered systems.

Home taping revisited: A mic in each hand, pointing at speakers

Peter Gathercole Silver badge

Re: C90

Ah, the AD90. I got through dozens of the things. Much higher quality than the D90s, and better than the Maxell equivalents (IMHO), but much cheaper than the pseudo-chrome SA90s. The equalization bias was such that they tended to produce a slightly bright sound on most Hi-Fi, so it was best to use a record deck that did not produce too much surface noise.

I remember splicing an extra 5 minutes onto some tapes to record the two sides of some LPs onto a single side of an AD90 (although the TDKs had about 46 minutes of tape as measured on my JVC KD720 HiFi deck). I think one of them was Genesis' Wind and Wuthering, with Meat Loaf's Bat out of Hell on the other side (if any record company is reading, I have since bought both on CD, so you still made a sale!)

In general, most LPs were under 20 minutes a side, so would fit on one side of an unadulterated AD90.

I remember there being a country-wide shortage of AD90s sometime around 1980 because it was the tape of choice for most home-tapers.

I avoided Scotch/3M and BASF tapes, because they shed oxide even when new! And I would not touch unbranded tapes at all. Even good C120 tapes suffered from print-through, and tended to jam even on good tape decks.

Intel adopts Orwellian irony with call for fast Meltdown-Spectre action after slow patch delivery

Peter Gathercole Silver badge

Re: No replacement

But packaging a Coffee Lake+ in a Socket 1150/1 package (at the volume of Core Quads produced) may be cheaper, especially if you consider a like-for-like replacement of the mobo and memory in some gaming rigs will cost a similar amount to the processor!

Last year I did a just-behind-the-leading-curve rebuild of one of my sons' gaming rigs, and the cost of the mobo and memory was easily more than the processor.

Peter Gathercole Silver badge

Re: No replacement

But they could produce latest generation chips without the design flaw, and package them in the older chip packages. As most Core and Xeon processors are in sockets, it would be possible to do a one-for-one replacement, although you would either have to be happy taking the systems apart yourself, or paying someone to do it.

They could get approximate performance parity by tweaking the clock multiplier and possibly disabling some cores and L0/1 cache, and I dare say they could also turn off some of the newer features (as they already do for current-generation Celeron and the recently re-launched Pentium processors) so that end users did not get the benefit of features not present in the older CPUs. They would have to do something with the ID info, because some mobos may struggle to configure the newer chips without a firmware upgrade.

I think that the only thing they might have problems with was the TDP. Underclocking later generation CPUs would use less power, but I think that they should be generous enough to allow people to benefit from that.

But it would be pretty expensive, so I have no expectation that Intel will do this.

Wileyfox goes TITSUP*: Smartmobe maker calls in the administrators

Peter Gathercole Silver badge

Cheap phones

I recently bought a Blackview Chinese special from Amazon in the Black Friday sales. It cost the princely sum of just over £43 new.

It's not a premium smartphone, but the screen res. is the same as the mid-range HTC it replaced. It's only 3G and does not have NFC, but I found that I was not using NFC, and 3G is quite fast enough for the limited amount of mobile browsing I do.

It's got the same Flash and RAM, and Android 7 appears to do a much better job of managing the available memory than previous releases. The battery is removable and large at 2800mAh, and easily lasts more than two days the way I use it. It has a microSD slot as well as the SIM slots.

Also, the call and audio quality is much better than the HTC.

But the main reason I bought it was that I had been carrying around two phones because of coverage problems (the other a Nexus 4 running Ubuntu Touch - which worked really well), and this Blackview is a dual-SIM phone that means I only have to carry one.

All in all, it's a perfectly good phone for almost nothing. The only thing I still intend to check is whether there is any traffic from baked-in apps on the phone, but I've not noticed anything yet.

Crowdfunding small print binned as Retro Computers Ltd loses court refund action

Peter Gathercole Silver badge

Re: IIRC Sinclair "pre sold" hardware then used the money to get the mfg running.

Amstrad did have several lines that qualified as Hi-Fi.

I had an IC2000 20W RMS per channel amp and an IC3000 tuner, and they definitely met the Hi-Fi criteria of power, distortion etc, although they were paper-covered chipboard and plastic. The design of the electronics wasn't too bad, however.

You would not class them as anything other than budget kit, but they were definitely better than the majority of music centres available at the same budget.

The problem is that Hi-Fi has become elitist in the past few years, probably because there is no real market for low-to-mid range separates. I picked up a What Hi-Fi recently to look at a 'budget' turntable review, and was shocked to discover that the cheapest turntable they had in this 'budget' review cost something like £350, and it also included 'tables costing in excess of £700. The only way this is budget is if you regard transcription turntables as the norm.

Lenovo literally has a screw loose – so it's recalled flagship Carbon X1 ThinkPads

Peter Gathercole Silver badge

Re: That will be your hard drive

I think that all Carbon X1s have SSDs.

Just saying.

Open source turns 20 years old, looks to attract normal people

Peter Gathercole Silver badge

Re: Amiga


According to Wikipedia (I know, I know), the 1BSD add-on tape for Bell Labs UNIX version 6 was released in 1978. I'm sure that some of the utilities would have been used internally within UCB before this, so I'm assuming that's where you were.

At this time, it was a series of add-on tools plus one or two kernel modifications and fixes/patches, shipped as source to be applied on top of an existing Bell Labs V6 UNIX installation, not an OS in its own right.

I, too, have personal experience. I used Ingres, which was shipped with 2BSD in 1979. Later, I looked after a Bell Labs UNIX V7 installation with a BSD 2.6 tape (which I wish I had taken a copy of), again for Ingres.

It was a lot of fun investigating the other software that was on the tape, although most of it required the separate I&D address space extension to compile and run, which my PDP-11/34 did not have. There was, however, an overlay loader included that I had a play with, and I managed to get ex running, although vi would not. I did not pursue it, because Newcastle University produced a multi-platform simple screen editor for student use, which we were given a copy of (how's that for Open Source).

I think that 3BSD which shipped in 1979 was the first distribution that contained a complete OS, although you still needed to be a Bell Labs/AT&T source code licensee, because it still contained base UNIX code.

I also ran RSX-11M on my PDP-11, and we were a member of DECUS, using the DECUS C compiler, RUNOFF, and a number of other utilities which were available for free (well, at the cost of the media) to members.

NASA finds satellite, realises it has lost the software and kit that talk to it

Peter Gathercole Silver badge

Re: Those who do not understand Unix are condemned to reinvent it, poorly.

Got to be a bit careful.

In the current UNIX standards maintained by The Open Group, some older system and library calls are being deprecated, so it is not certain that modern UNIXes will be able to directly compile older UNIX software.

Generally, though, they have been replaced by more functional equivalents, which can be pulled in via preprocessor macros. But it requires work.

Peter Gathercole Silver badge

Re: It was also HARDWARE that no longer exists.

Isn't a hardware archive called a museum?

Apple whispers farewell to macOS Server

Peter Gathercole Silver badge

Re: lpr

Real story.

Back in the mists of time (1982?), I was working on a PDP-11 running UNIX V7, trying to get 22-bit addressing working with the Calgary kernel buffer mods on a non-separate-I&D PDP-11/34 (it was a SYSTIME special, with non-standard features). I had the system in single-user mode, i.e. running as root with no lpd.

I got into the habit of doing "cat file > /dev/lp0" (the standard name for an lp11 parallel printer driver) to print things I wanted on paper.

Unfortunately, the character /dev entry for the OS disk was rp0...

Even though "l" and "r" are some distance from each other on the keyboard, you can guess what I did. Overwrote the bootstrap, superblock and the first 20 odd K of the inodes (inode 0 was the one for /) on my experimental disk pack, which had my current modifications to the kernel on it with no backup.

Fortunately, it did not prevent me from bringing the system back to normal operation by swapping the (removable) disk packs over, but it did give me several hours work to recover my changes.

Newsflash: Car cyber-security still sucks

Peter Gathercole Silver badge


Ah, but yours is an Engine Control Unit (ECU) not an Electronics Control Unit (ECU). See the difference?

I do wish people would not re-use acronyms, especially in the same field!

New Sky thinking: Media giant makes dish-swerving move on Netflix territory

Peter Gathercole Silver badge

Re: TV Licence?

It will still need a licence if you are watching broadcast TV via the internet.

There is a definition of what broadcast TV means knocking around, but I can't be arsed to dig it out. It's something to do with watching it (however you do it) while the same content is being transmitted on a broadcast medium, and covers all broadcast types.

This is different from on-demand, where you request specific content independently of whether it is being sent to other users.

Peter Gathercole Silver badge


Apart from maybe having more channels, is this that much different from NowTV, which is already a Sky service in the UK?

Intel’s Meltdown fix freaked out some Broadwells, Haswells

Peter Gathercole Silver badge

Re: Remembering Snowden... @Zippy

Even if all of the authorized applications are re-written, you still have to accept that if executable malware can be dropped onto a system, this would still be able to exploit Spectre.

The only way that you can be 100% sure that a system is not susceptible is by either having fixed hardware, OR by having absolute and total control of every piece of executable code on the system.

Recompiling your authorized code is only a partial solution.

Self-driving cars still do not exist even if we think they do

Peter Gathercole Silver badge

Re: They kinda do and kinda don't

Close to my home, there is a quarter-mile passing lane (meant to allow passing of agricultural vehicles) that ends and immediately drops to 40, then 30 in a short distance before entering a village.

The number of idiots who, seeing the passing lane but not looking at where they will have to pull in again, accelerate to more than the national speed limit, and then don't know what to do when the more restricted limits arrive, drives me crazy.

It's amazing there aren't more accidents. It's one of the places where a speed camera would actually be welcome. Unfortunately, there is not one, and there's no place where a camera van can park, so nobody gets caught!

IBM melts down fixing Meltdown as processes and patches stutter

Peter Gathercole Silver badge

Re: Incapacitated By Meltdown

I'm not sure if I was the first person to use "It's Broken, Mate", but I don't think I saw it used before I used it.

Woo-yay, Meltdown CPU fixes are here. Now, Spectre flaws will haunt tech industry for years

Peter Gathercole Silver badge

Re: DEC Alpha @Paul Crawford

You must also remember that HP was heavily invested in Itanium, as they had contributed their PA-RISC and EPIC technology to Intel, supposedly to make it easier for Intel to build an architecture that HP could move HP-UX to easily.

As it turned out, Intel used a lot of the IP to make their x86 processor line run faster, and were late delivering the server grade Itanium chip that HP wanted (and which were not as easy to port to as HP expected).

IIRC, it was so bad that HP developed two further generations of PA-RISC, which were, in fact, some of the best processor designs HP ever made, just to allow them to have competitive systems to sell while Intel faffed around with Itanium.

So Intel took HP for a ride, and then ditched them once they had gained the IP they were after.

We translated Intel's crap attempt to spin its way out of CPU security bug PR nightmare

Peter Gathercole Silver badge

Re: Mixed signals on CPU's @herman

I don't believe MS-DOS ever ran on the 8080 processor, which was 8-bit.

The lowest would have been the 8088, as in the original 5150 IBM PC, which is a 16-bit processor with an 8-bit data bus and a 20-bit address bus, allowing 1MB of physical memory.

Kernel-memory-leaking Intel processor design flaw forces Linux, Windows redesign

Peter Gathercole Silver badge

Re: 68000 versus 8086

ULAs (Uncommitted Logic Arrays) were a UK innovation, mainly designed by Ferranti.

They allowed some of the layers of the wafers to be a common design, with the last few acting as a customization to get the chip to do what was needed. You could think of them as a half-way house to an FPGA, but with the configuration baked into the last few layers of silicon rather than set after manufacture.

I don't believe that any US company really bought into using ULAs, but they were used, as already pointed out, for the ZX81, BBC Micro, ZX Spectrum and Acorn Electron to reduce the chip count.

But production problems with ULAs were one of the main reasons why several of these systems were delivered late. Ferranti eventually disappeared into Marconi, which was sold off when that company went bust, so the technology disappeared.

Peter Gathercole Silver badge

Re: 68000 versus 8086 @Robbert Sneddon

Whilst I don't have the experience of the 68000 that you obviously had, I believe that there were a significant number of support chips from the 680X 8-bit family that worked with the 68000.

I'm trying to think back, but I'm sure I saw a working 68000 system around the time that the IBM PC was new. Of course, that could have been because small companies were more agile than IBM, but I think that the IBM PC was a very quick development which didn't start until 1980, a year after the 68000 was released.

No, I think that the reason why IBM went with Intel was mainly cost.

If the 68000 had been chosen for the single-tasking, floppy-disk-only original IBM PC, I think that we would have had multi-tasking desktop systems much earlier, because the 68000 was designed from the outset as a 32-bit family of processors, rather than the 8/16-bit kludge that the 8088 and 8086 were (and which became worse still with the 32-bit and 64-bit evolutions).

Peter Gathercole Silver badge

Re: How convenient @CrazyOldCatMan

You may be right about soldered processors in Ultra books or NetBooks, but I can assure you that many business laptops from people like IBM/Lenovo, HP and Fujitsu still have their processors in sockets.

It's just the ones where the supplier does not care about maintenance and also tend to use glue to hold the systems together that don't.

Peter Gathercole Silver badge

Re: Hmmm... @Roo

I think that you should say there *were* plenty of non-x86 processors out there. There really aren't any more, with just AMD (an x86 derivative, which may not be affected), Power, zSeries, ARM, the tail end of SPARC, and I suppose Itanium (just) still being around.

You could also say, I suppose, that there is a MIPS processor around still, but you'd have to buy it from the Chinese.

A lot of other architectures never made the switch to 64-bit (although Alpha was always 64-bit). Architectures we've lost include Alpha, PA-RISC, VAX, all mainframes apart from zSeries, DG Eclipse, Motorola 68000 and 88000, Nat. Semi. 32XXX, Western Electric 32XXX, various Honeywell, Burroughs and Prime systems, and various Japanese processors from NEC, Hitachi and Fujitsu.

This is largely the cost of wanting cheap, commoditized hardware. You end up with one dominant supplier, and suffer if they get something (or even a lot of things) wrong.

Peter Gathercole Silver badge

Re: Hmmm... @Primus Secundus Tertius

The reason why PDP-11 could respond so quickly to interrupts was this facility to switch address spaces without having to save the register context. On other architectures, in order to take an interrupt, the first thing that you need to do is save at least some of the address registers, and then restore them after you've handled the interrupt.

IIRC, the PDP-11 not only had duplicate address mapping registers, but also had a duplicate set of some of the GP registers, so you had to do pretty much nothing to preserve the process context that's just been interrupted. This is what made the interrupt handling very fast.

The time to return from the interrupt was entirely down to the path length of the code handling the interrupt. The actual return mechanism was as quick as the calling mechanism. There were unofficial guidelines about how long your interrupt code should take, which I believe were conditioned by the tick length for the OS. If you took too long, you would miss a clock-tick, which would result in the system clock running slow.

In addition, I also believe that there were a small number of zero-page vectors left unused by either UNIX or RSX-11M (the version of RSX I was most familiar with), which allowed you to add your own interrupt handlers for certain events.

Peter Gathercole Silver badge

Re: Hmmm... @J.G.Harston

That is true, but for bulk data moves it was mitigated by DMA directly from disk into memory-mapped buffers in the process address space, using the UNIBUS address mapping registers, which allowed raw DMA transfers to addresses outside the kernel address space.

Of course, not all PDP-11 models had the UNIBUS map (or, I presume, a similar QBUS feature), but pretty much everything after an 11/34 did. I had an unusual 11/34e that also had 22-bit addressing, which made it much more useful.

Peter Gathercole Silver badge

Re: Hmmm... @AC

That's true. An address space change would have to disable speculative execution, because the processor would otherwise have to predict which address space it would be executing in.

Actually, thinking about it, it still has to: if the mapped page is protected from view, there still needs to be some mechanism to lift the protection so that the branch of code can be speculatively executed before the decision is taken. In theory, the results of the branch-not-taken should be discarded as soon as the decision is made, so that the information gathered cannot be used. Maybe there is something in the combination of speculative execution and instruction re-ordering (not mentioned yet) which allows data to be extracted from later in the pipeline.

Maybe this is the problem, and if it is, it's probably a design flaw rather than a bug. Interesting.
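
A hedged sketch of the pattern being discussed: the bounds check below can be speculatively bypassed, and although the architectural results of the branch-not-taken are discarded, the dependent load leaves a cache footprint that can be measured afterwards. This follows the published Spectre variant 1 gadget shape; the array names and sizes are illustrative, not from any real code.

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative data: names and sizes are made up for this sketch. */
uint8_t array1[16];
uint8_t array2[256 * 512];
size_t array1_size = 16;

void victim(size_t x)
{
    if (x < array1_size) {            /* branch predictor may guess "taken"  */
        uint8_t value = array1[x];    /* speculative, possibly out-of-bounds */
        (void)array2[value * 512];    /* load indexed by the fetched value   */
    }                                 /* leaves a measurable cache footprint */
}
```

Architecturally the out-of-bounds read never happens, which is why this sits in the territory of a design flaw in speculation rather than a straightforward bug.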

Peter Gathercole Silver badge

@ lsatenstein

I think you'll find that Core, i series and Xeon processors are all installed in sockets.

Atom processors are designed in packages intended to be soldered onto system boards. Everything else is in sockets that allow the processor to be replaced. But the problem here is that Intel keep changing the socket design, so you just can't put new processors into old motherboards.

This means that if you are upgrading a system piecemeal, rather than all at once, you end up having to replace not only the processor, but also the motherboard and probably the memory as well.

I would very much like Intel to be forced to support older sockets for longer, so you could give a system a relatively non-intrusive processor upgrade without having to tear the whole system down.

Peter Gathercole Silver badge

Re: Hmmm...

I think we need to return to the PDP-11, where you had an alternative set of memory management registers for user and supervisor (kernel) mode. When you issued the instruction to trigger the syscall, the processor switched mode, triggering an automatic switch to the privileged-mode registers and mapping in the kernel to execute the syscall code.

This meant that it was not necessary to have part of the kernel mapped into every process.

IIRC, S/370-XA and the Motorola 68000 with an MMU also had a similar feature. I do not know about the other UNIX reference platforms like the VAX (the BSD 3.X & 4.X development platform) or the WE320XX (AT&T's processor family used in the 3B family of systems, the primary UNIX development platform for AT&T UNIX for many years), but I would suspect that they had it as well.

I first came across the need to have at least one kernel page mapped into user processes on IBM Power processors back in AIX 3.1, where page 0 was reserved for this purpose. In early releases, it was possible to read the contents of page 0, but sometime around AIX 3.2.5, the page became unreadable (and actually triggered a segmentation violation if you tried to access it).

Brazil says it has bagged Royal Navy flagship HMS Ocean for £84m

Peter Gathercole Silver badge

Re: Whats in a name

When the RN was at its zenith, warships were named after all sorts of things. The aforementioned HMS Pansy was almost certainly a Flower-class corvette, all of which were named after, um, flowers.

It used to be that capital warships were named after famous people, characters from mythology, or an adjective (like Victorious).

Lesser ships have been named after all sorts of things, like counties and towns, and as you get down to the more numerous types (destroyers, frigates etc.), classes followed a letter: the Amazon class all started with "A", with names from all sorts of word categories (e.g. Amazon, Antelope, Ambuscade, Arrow, Active, Alacrity, Ardent, Avenger).

With the smaller number of warships recently, there has been a desire to keep certain names going (for example Victorious, Vanguard, Audacious, and Ajax), although for submarines, they are apparently following letters as well.

IIRC, Ocean was quite unusual, as there had only been one previous HMS Ocean, which was a Colossus class aircraft carrier.

One interesting part of Royal Navy tradition is that battle honours for namesake ships are carried across to the new ship, and I believe that the wardroom silver- and crystal-ware is also moved to the new ship.

If this is the case, you can imagine there having to be significant storage space for the wares from all the ship names that are no longer in use!

AI smarts: IBM pushes out 'faster than X86' POWER9 servers

Peter Gathercole Silver badge

Re: POWER to the people! @AC

Late to the comment trail, I know, but AS/400 was the hardware platform, and used to have its own processor types, although it adopted (and some say saved) IBM's PowerPC processor platform, with Rochester picking up 64-bit systems with the Amazon (RS64) processor when Austin dropped the ball with the failed PowerPC 620, which barely saw the light of day outside IBM.

IBM i was previously called OS/400.

One reason that IBM i persists is because it is a very business-friendly system. Before things like Apache and the other open software packages were grafted on top of the POSIX compatibility layer, many organizations did not employ specialist system admin/operations staff. It was sufficiently simple and menu-driven that the general running could be given to ordinary admin staff with little training, and all of the hardware-type stuff was handled by IBM CEs.

But it is a proprietary system, and you have almost complete vendor lock-in, which is why most consultancies will suggest ditching them. But that does not mean that they cannot still be the best solution for some companies.

Yes, your old iPhone is slowing down: iOS hits brakes on CPUs as batteries wear out

Peter Gathercole Silver badge

Re: Economy

I think you need to look at bigclivedotcom's channel on YouTube for his battery tear-downs.

There are differences between expensive and cheap rechargeable batteries, but they are probably much less than you might think, and it's the embedded electronics that are often the biggest difference. As long as there is some charging protection and over-current fuses, both of which are now *very* cheap to add to a battery (using Chinese produced single chip solutions), they might fail, but not catastrophically. Things have moved on hugely in the last few years.

Of course, if you buy the cheapest, there are likely to be more corners cut, but I've bought replacement batteries for phones and laptops from Chinese sellers for years, and not had any problems.

The only faulty phone battery that I've had was a branded Nokia battery for a 7110 (although it could have been a counterfeit, it was bought from a high street phone accessories shop), which suffered an internal short and overheated, although it did not explode or catch fire.

When it comes to SD cards, buying them from supermarkets is nearly as cheap as on-line, and will very rarely give any trouble at all.

Peter Gathercole Silver badge

Re: Battery shape?

The chemistry of the ordinary batteries that start a fossil-fuel car and of those that run EVs is very different.

Starter batteries, which are single batteries, need to provide a very high current (40-100 amps depending on the type and size of engine) for a few tens of seconds, and then get charged over the next 20 minutes or so using relatively unsophisticated charging from a generally rather poor-quality supply.

EV batteries need to provide reasonably constant current draw for a few hours, and are then charged using sophisticated charging hardware from a clean supply normally over a number of hours. There are multiple batteries that each contribute to the overall current, and you can do some clever things with switching them from parallel to series for short bursts of power when accelerating.

This means that the chemistry and physical design of starter and EV batteries is very different, even though they look similar from the outside, and also means that starter batteries tend to age faster.
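
The parallel/series trade-off mentioned above is just arithmetic: cells wired in series add their voltages at the same current, while strings wired in parallel add their current capability at the same voltage. A minimal sketch, using made-up cell figures (3.7 V, 50 A per cell), not the specification of any real pack:

```c
/* Pack characteristics for n identical cells, wired either way. */
struct pack {
    double volts;   /* terminal voltage of the combination */
    double amps;    /* deliverable current of the combination */
};

/* Cells in series: voltages add, current is that of one cell. */
struct pack series(double cell_v, double cell_a, int n)
{
    return (struct pack){ cell_v * n, cell_a };
}

/* Cells in parallel: currents add, voltage is that of one cell. */
struct pack parallel(double cell_v, double cell_a, int n)
{
    return (struct pack){ cell_v, cell_a * n };
}
```

Switching a group of strings from parallel to series briefly raises the pack voltage, which is the kind of trick referred to for short bursts of acceleration.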

Merry Xmas, fellow code nerds: Avast open-sources decompiler

Peter Gathercole Silver badge

Re: This is game-changing stuff

I was going to say something very similar, and add to it by saying that decompilers are not exactly new.

I know it's a bit clumsy, but various debuggers have disassemblers built into them to turn lumps of machine code into something more readable.

I mean, dbx and gdb have been around a good long time, and I used adb, cdb and in fact the original db (on UNIX edition 6) 35+ years ago.

Whilst I would not want to decompile a complete software suite using one of these tools, investigating interesting bits of code has always been possible.

Biting the hand that feeds IT © 1998–2019