Just think how I feel.
In October, I will celebrate the 40th anniversary of logging on to a UNIX system for the first time.
Cue up the real grey-beards...
I took the efix apart yesterday (it is publicly available to anybody, and can be examined with anything that understands tar). The description is "ABSTRACT=CAA clcomd fix", and the only thing shipped with it is a replacement for /usr/sbin/clcomd.
Whilst it is true that this fileset is shipped as part of AIX (although it is only usable on the Standard and Enterprise editions, not Express), it is only needed on systems that are clustered in some way. I know it is needed by PowerHA SystemMirror (HACMP), but I suspect that it may also be used by some of the other cluster services like Spectrum Scale (GPFS), and maybe other things that use RMC/RSCT, although it is not used for communication with the HMC.
The published APARs contain virtually no information about the nature of the vulnerability, so it would require internal knowledge to definitively know what the problem is.
Maybe the AC who replied to you actually has seen something to confirm your guess.
I am currently involved in running a mixed estate of clustered and non-clustered (PowerHA) AIX systems, and clcomd is generally not running on the non-clustered systems.
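The tar inspection mentioned above can be sketched like this. An AIX interim fix (.epkg.Z) is just a compressed tar archive, so any tar can list it; since the real efix file names and contents will differ, this simulates one with a locally built archive:

```shell
# Stand-in for the efix payload (real efixes ship /usr/sbin/clcomd itself)
mkdir -p /tmp/efix_demo
printf 'stand-in for the replacement clcomd binary\n' > /tmp/efix_demo/clcomd
# Build a tar archive, then list its contents without extracting
tar -cf /tmp/efix_demo/fix.tar -C /tmp/efix_demo clcomd
tar -tf /tmp/efix_demo/fix.tar
```

On a real efix you would first uncompress the .epkg.Z, then run the same `tar -tf` listing.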
Ah, the AD90. I got through dozens of the things. Much higher quality than the D90s, and better than the Maxell equivalents (IMHO), but much cheaper than the pseudo-chrome SA90s. The bias/equalization was such that they tended to produce a slightly bright sound on most hi-fi systems, so it was best to use a record deck that did not produce too much surface noise.
I remember splicing an extra 5 minutes onto some tapes to record the two sides of some LPs onto one side of an AD90 (although the TDKs had about 46 minutes of tape, as measured on my JVC KD720 hi-fi deck). I think one of them was Genesis' Wind & Wuthering, and I had Meat Loaf's Bat Out of Hell on the other side (if any record company is reading, I have since bought both on CD, so you still made a sale!)
In general, most LPs were under 20 minutes a side, so would fit on one side of an unadulterated AD90.
I remember there being a country-wide shortage of AD90s sometime around 1980 because it was the tape of choice for most home-tapers.
I avoided Scotch/3M and BASF tapes, because they shed oxide even when new, and I would not touch unbranded tapes at all. Even good C120 tapes suffered from print-through, and tended to jam even on good tape decks.
But packaging a Coffee Lake+ part in a Socket 1150/1151 package (at the volume of Core quads produced) may be cheaper, especially when you consider that a like-for-like replacement of the mobo and memory in some gaming rigs will cost a similar amount to the processor!
Last year I did a just-behind-the-leading-curve rebuild of one of my son's gaming rigs, and the cost of the mobo and memory was easily more than the processor.
But they could produce latest generation chips without the design flaw, and package them in the older chip packages. As most Core and Xeon processors are in sockets, it would be possible to do a one-for-one replacement, although you would either have to be happy taking the systems apart yourself, or paying someone to do it.
They could get approximate performance by tweaking the clock multiplier and possibly disabling some cores and cache, and I dare say they could also turn off some of the newer features (as they already do for current generation Celeron and the recently re-launched Pentium processors) so that end users did not get benefits the older CPUs never had. They would have to do something with the CPUID info, because some mobos may struggle to configure the newer chips without a firmware upgrade.
I think that the only thing they might have problems with is the TDP. Underclocking later-generation CPUs would use less power, but I think that they should be generous enough to allow people to benefit from that.
But it would be pretty expensive, so I have no expectation that Intel will do this.
I recently bought a Blackview Chinese special from Amazon in the Black Friday sales. It cost the princely sum of just over £43 new.
It's not a premium smartphone, but the screen resolution is the same as the mid-range HTC it replaced. It's only 3G and does not have NFC, but I found that I was not using the NFC, and 3G is quite fast enough for the limited amount of mobile browsing I do.
It's got the same Flash and RAM, and Android 7 appears to do a much better job of managing the available memory than previous releases. The battery is removable and large at 2800mAh, and easily lasts more than two days the way I use it. It has a microSD slot as well as the SIM slots.
Also, the call and audio quality is much better than the HTC.
But the main reason I bought it was that I had been carrying around two phones because of coverage problems (the other a Nexus 4 running Ubuntu Touch - which worked really well), and this Blackview is a dual-SIM phone that means I only have to carry one.
All in all, it's a perfectly good phone for almost nothing. The only thing I still intend to check is whether there is any traffic from baked-in apps on the phone, but I've not noticed anything yet.
Amstrad did have several lines that qualified as Hi-Fi.
I had an IC2000 20W RMS per channel amp and an IC3000 tuner, and they definitely met the hi-fi criteria for power, distortion etc., although they were paper-covered chipboard and plastic. The design of the electronics wasn't too bad, however.
You would not class them as anything other than budget kit, but they were definitely better than the majority of music centers that were available at the same budget.
The problem is that Hi-Fi has become elitist in the past few years, probably because there is no real market for low-to-mid range separates. I picked up a What Hi-Fi recently to look at a 'budget' turntable review, and was shocked to discover that the cheapest turntable they had in this 'budget' review cost something like £350, and it also included 'tables costing in excess of £700. The only way this is budget is if you regard transcription turntables as the norm.
I think that all X1 Carbons have SSDs.
According to Wikipedia (I know, I know), the 1BSD add-on tape for Bell Labs UNIX version 6 was released in 1978. I'm sure that some of the utilities would have been used internally within UCB before this, so I'm assuming that's where you were.
At this time, it was a series of add-on tools plus one or two kernel modifications and fixes/patches, shipped as source to be applied on top of an existing Bell Labs V6 UNIX installation, not an OS in its own right.
I, too, have personal experience. I used Ingres, which shipped with 2BSD in 1979. Later, I looked after a Bell Labs UNIX V7 installation with a BSD 2.6 tape (which I wish I had taken a copy of), again for Ingres.
It was a lot of fun investigating the other software that was on the tape, although most of it required the separate I&D address space extension to compile and run, which my PDP-11/34 did not have. There was, however, an overlay loader included that I had a play with, and I managed to get ex running, although vi would not. I did not pursue it, because Newcastle University produced a multi-platform simple screen editor for student use, which we were given a copy of (how's that for Open Source?).
I think that 3BSD which shipped in 1979 was the first distribution that contained a complete OS, although you still needed to be a Bell Labs/AT&T source code licensee, because it still contained base UNIX code.
I also ran RSX-11M on my PDP-11, and we were a member of DECUS, so we used the DECUS C compiler, RUNOFF, and a number of other utilities which were available for free (well, at the cost of the media) to members.
Got to be a bit careful.
In the current UNIX standards maintained by The Open Group, some older system and library calls are deprecated, so it is not certain that modern UNIX systems will be able to directly compile older UNIX software.
But generally, they have been replaced by more functional equivalents, which can be pulled in via feature-test preprocessor macros. But it requires work.
Isn't a hardware archive called a museum?
Back in the mists of time (1982?), I was working on a PDP-11 running UNIX V7, trying to get 22-bit addressing working with the Calgary kernel buffer mods on a non-separate-I&D PDP-11/34 (it was a SYSTIME special, with non-standard features). I had the system in single-user mode, i.e. running as root with no lpd.
I got into the habit of doing "cat file > /dev/lp0" (the standard name for an lp11 parallel printer driver) to print things I wanted on paper.
Unfortunately, the character /dev entry for the OS disk was rp0...
Even though "l" and "r" are some distance from each other on the keyboard, you can guess what I did. I overwrote the bootstrap, superblock and the first 20-odd KB of the inodes (inode 0 was the one for /) on my experimental disk pack, which held my current modifications to the kernel, with no backup.
Fortunately, it did not prevent me from bringing the system back to normal operation by swapping the (removable) disk packs over, but it did give me several hours work to recover my changes.
Ah, but yours is an Engine Control Unit (ECU), not an Electronic Control Unit (ECU). See the difference?
I do wish people would not re-use acronyms, especially in the same field!
It will still need a license if you are watching broadcast TV via the internet.
There is a definition of what broadcast TV means knocking around, but I can't be arsed to dig it out. It's something to do with watching it (however you do it) while the same content is being transmitted on a broadcast medium, and covers all broadcast types.
This is different from on-demand, where you request specific content independently of whether it is being sent to other users.
Apart from maybe having more channels, is this that much different from NowTV, which is already a Sky service in the UK?
Even if all of the authorized applications are re-written, you still have to accept that if executable malware can be dropped onto a system, it would still be able to exploit Spectre.
The only way that you can be 100% sure that a system is not susceptible is by either having fixed hardware, OR by having absolute and total control of every piece of executable code on the system.
Recompiling your authorized code is only a partial solution.
Close to my home, there is a quarter-mile passing lane (meant to allow passing of agricultural vehicles) that ends and immediately drops to 40, then 30 in a short distance before entering a village.
The number of idiots who, seeing the passing lane but not looking at where they will have to pull in again, accelerate to more than the national speed limit, and then don't know what to do when the lower limits arrive, drives me crazy.
It's amazing there aren't more accidents. It's one of the places where a speed camera would actually be welcome. Unfortunately, there isn't one, and there's no place where a camera van can park, so nobody gets caught!
I'm not sure if I was the first person to use "It's Broken, Mate", but I don't think I saw it used before I used it.
You must also remember that HP was heavily invested in Itanium, as they had contributed their PA-RISC and EPIC technology to Intel, supposedly to make it easier for Intel to build an architecture that HP could move HP-UX to easily.
As it turned out, Intel used a lot of the IP to make their x86 processor line run faster, and was late delivering the server-grade Itanium chips that HP wanted (which were also not as easy to port to as HP expected).
IIRC, it was so bad that HP developed two further generations of PA-RISC, which were, in fact, some of the best processor designs HP ever made, just to allow them to have competitive systems to sell while Intel faffed around with Itanium.
So Intel took HP for a ride, and then ditched them once they had gained the IP they were after.
I don't believe MSDOS ever ran on the 8080 processor, which was 8-bit.
The lowest would have been an 8088, as per the original IBM 5150 PC, which is a 16-bit processor with an 8-bit data bus and a 20-bit address bus, allowing 1MB of physical memory.
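The 1MB figure follows directly from the 20 address lines, since each line doubles the addressable range:

```shell
# 20 address lines give 2^20 distinct byte addresses
echo $((1 << 20))     # 1048576 bytes, i.e. 1MB
```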
ULAs (Uncommitted Logic Arrays) were a UK innovation, mainly designed by Ferranti.
They allowed some of the layers of the wafer to be a common design, with the last few acting as a customization to get the chip to do what was needed. You could think of them as a half-way house to an FPGA, but with the configuration baked into the last few layers of silicon rather than set after manufacture.
I don't believe that any US company really bought into using ULAs, but they were used, as already pointed out, for the ZX81, BBC Micro, ZX Spectrum and Acorn Electron to reduce the chip count.
But production problems with ULAs were one of the main reasons why several of these systems were delivered late. Ferranti eventually disappeared into Marconi, which was sold off when that company went bust, so the technology disappeared.
Whilst I don't have the experience of the 68000 that you obviously had, I believe that there were a significant number of support chips from the 680X 8-bit family that worked with the 68000.
I'm trying to think back, but I'm sure I saw a working 68000 system around the time that the IBM PC was new. Of course, that could have been because small companies were more agile than IBM, but I think that the IBM PC was a very quick development which didn't start until 1980, a year after the 68000 was released.
No, I think that the reason why IBM went with Intel was mainly cost.
If the 68000 had been chosen for the single-tasking, floppy-disk-only original IBM PC, I think that we would have had multi-tasking desktop systems much earlier, because the 68000 was designed as a 32-bit family of processors from the outset, rather than the 8/16-bit kludge that the 8088 and 8086 were (and which became worse still with the 32-bit and 64-bit evolutions).
You may be right about soldered processors in Ultra books or NetBooks, but I can assure you that many business laptops from people like IBM/Lenovo, HP and Fujitsu still have their processors in sockets.
It's just the ones where the supplier does not care about maintenance, and also tends to use glue to hold the systems together, that don't.
I think that you should say there *were* plenty of non-x86 processors out there. There really aren't any more, with just AMD (which is an x86 derivative, but may not be affected), Power, zSeries, ARM, the tail end of SPARC, and I suppose Itanium (just) still around.
You could also say, I suppose, that there is a MIPS processor around still, but you'd have to buy it from the Chinese.
A lot of other architectures never made the switch to 64 bit (although Alpha was always 64 bit). Architectures we've lost include Alpha, PA-RISC, VAX, all Mainframe apart from zSeries, DG Eclipse, Motorola 68000 and 88000, Nat. Semi. 32XXX, Western Electric 32XXX, various Honeywell, Burroughs and Prime systems, and various Japanese processors from NEC, Hitachi and Fujitsu.
This is largely the cost of wanting cheap, commoditized hardware. You end up with one dominant supplier, and suffer if they get something (or even a lot of things) wrong.
The reason why PDP-11 could respond so quickly to interrupts was this facility to switch address spaces without having to save the register context. On other architectures, in order to take an interrupt, the first thing that you need to do is save at least some of the address registers, and then restore them after you've handled the interrupt.
IIRC, the PDP-11 not only had duplicate address mapping registers, but also a duplicate set of some of the GP registers, so you had to do pretty much nothing to preserve the context of the process that had just been interrupted. This is what made the interrupt handling very fast.
The time to return from the interrupt was entirely down to the path length of the code handling the interrupt. The actual return mechanism was as quick as the calling mechanism. There were unofficial guidelines about how long your interrupt code should take, which I believe were conditioned by the tick length for the OS. If you took too long, you would miss a clock-tick, which would result in the system clock running slow.
In addition, I also believe that there were a small number of zero-page vectors left unused by both UNIX and RSX-11M (the version of RSX I was most familiar with) that allowed you to add your own interrupt handlers for certain events.
That is true, but for volume data moves it was mitigated by DMA from disk directly into memory-mapped buffers in the process address space, using the UNIBUS address mapping registers, which allowed raw DMA transfers to addresses outside of the kernel address space.
Of course, not all PDP-11 models had the UNIBUS map (or, I presume, a similar QBUS feature), but pretty much everything after an 11/34 did. I had an unusual 11/34e that also had 22-bit addressing, which made it much more useful.
That's true. Handling an address space change would have to disable speculative execution, because the processor would also have to predict which address space it would be in.
Actually, thinking about it, it still has to, because if the mapped page is protected from view, there still needs to be some mechanism to lift the protection to allow speculative execution of the branch of code before the decision is taken. In theory, the results of the branch-not-taken should be discarded as soon as the decision is made, so that the information gathered could not be used. Maybe there is something in the combination of speculative execution and instruction re-ordering (not mentioned yet) which allows data to be extracted from later in the pipeline.
Maybe this is the problem, and if it is, it's probably a design flaw rather than a bug. Interesting.
I think you'll find that Core, i series and Xeon processors are all installed in sockets.
Atom processors are designed in packages intended to be soldered onto system boards. Everything else is in sockets that allow the processor to be replaced. But the problem here is that Intel keep changing the socket design, so you just can't put new processors into old motherboards.
This means that if you are upgrading a system piecemeal, rather than all at once, you end up having to replace not only the processor, but also the motherboard and probably the memory as well.
I would very much like Intel to be forced to support older sockets for longer, so you could give a system a relatively non-intrusive processor upgrade without having to tear the whole system down.
I think we need to return to the PDP-11, where you had an alternative set of memory management registers for user and supervisor (kernel) mode. When you issued the instruction to trigger the syscall, the processor switched mode, triggering an automatic switch to the privileged-mode registers and mapping in the kernel to execute the syscall code.
This meant that it was not necessary to have part of the kernel mapped into every process.
IIRC, S/370-XA and the Motorola 68000 with an MMU also had a similar feature. I do not know about the other UNIX reference platforms like the VAX (the BSD 3.X & 4.X development platform) or the WE320XX (AT&T's processor family used in the 3B family of systems - the primary development platform for AT&T UNIX for many years), but I would suspect that they had it as well.
I first came across the need to have at least one kernel page mapped into user processes on IBM Power processors back in AIX 3.1, where page 0 was reserved for this purpose. In early releases, it was possible to read the contents of page 0, but sometime around AIX 3.2.5, the page became unreadable (and actually triggered a segmentation violation if you tried to access it).
When the RN was at its zenith, warships were named after all sorts of things. The aforementioned HMS Pansy was almost certainly a Flower-class corvette, all of which were named after, um, flowers.
It used to be that capital warships were named after famous people, characters from mythology, or an adjective (like Victorious).
Lesser ships have been named after all sorts of things, like counties and towns, and as you get down to the more numerous types (destroyers, frigates etc.), names within a class followed a letter: the Amazon class all started with "A", with names from all sorts of word categories (e.g. Amazon, Antelope, Ambuscade, Arrow, Active, Alacrity, Ardent, Avenger).
With the smaller number of warships recently, there has been a desire to keep certain names going (for example Victorious, Vanguard, Audacious, and Ajax), although for submarines, they are apparently following letters as well.
IIRC, Ocean was quite unusual, as there had only been one previous HMS Ocean, which was a Colossus class aircraft carrier.
One interesting part of Royal Navy tradition is that battle honors for namesake ships are carried across to the new ship, and I believe that the wardroom silver- and crystal-ware is also moved to the new ship.
If this is the case, you can imagine there having to be significant storage space for the wares from all the ship names that are no longer in use!
Late to the comment trail, I know, but AS/400 was the hardware platform, and it used to have its own processor types, although it adopted (and some say saved) IBM's PowerPC processor platform, with Rochester picking up 64-bit systems with the Amazon (RS64) processors when Austin dropped the ball with the failed PowerPC 620, which barely saw the light of day outside of IBM.
IBM i was previously called OS/400.
One reason that IBM i persists is because it is a very business friendly system. Before things like Apache and the other open software packages were grafted on top of the POSIX compatibility layer, many organizations did not employ specialist system admin/operations staff. It was sufficiently simple and menu driven that the general running could be given to ordinary admin staff with little training, and all of the hardware type stuff was handled by IBM CEs.
But it is a proprietary system, and you have almost complete vendor lock-in, which is why most consultancies will suggest ditching them. But that does not mean that they cannot still be the best solution for some companies.
I think you need to look at bigclivedotcom's channel on YouTube for his battery tear-downs.
There are differences between expensive and cheap rechargeable batteries, but they are probably much less than you might think, and it's the embedded electronics that are often the biggest difference. As long as there is some charging protection and over-current fuses, both of which are now *very* cheap to add to a battery (using Chinese produced single chip solutions), they might fail, but not catastrophically. Things have moved on hugely in the last few years.
Of course, if you buy the cheapest, there are likely to be more corners cut, but I've bought replacement batteries for phones and laptops from Chinese sellers for years, and not had any problems.
The only faulty phone battery that I've had was a branded Nokia battery for a 7110 (although it could have been a counterfeit, it was bought from a high street phone accessories shop), which suffered an internal short and overheated, although it did not explode or catch fire.
When it comes to SD cards, buying them from supermarkets is nearly as cheap as on-line, and will very rarely give any trouble at all.
The chemistry of ordinary car batteries that start a fossil-fuel car, and of those that run EVs, is very different.
Starter batteries, which are single batteries, need to provide a very high current (40-100 amps, depending on the type and size of engine) for a few tens of seconds, and then get charged over the next 20 minutes or so with relatively unsophisticated charging and generally rather poor-quality power.
EV batteries need to provide reasonably constant current draw for a few hours, and are then charged using sophisticated charging hardware from a clean supply normally over a number of hours. There are multiple batteries that each contribute to the overall current, and you can do some clever things with switching them from parallel to series for short bursts of power when accelerating.
This means that the chemistry and physical design of starter and EV batteries is very different, even though they look similar from the outside, and also means that starter batteries tend to age faster.
I was going to say something very similar, and add to it by saying that decompilers are not exactly new.
I know it's a bit clumsy, but various debuggers have decompilers built into them to turn lumps of machine code into something more readable.
I mean, dbx and gdb have been around a good long time, and I used adb, cdb and in fact the original db (on UNIX edition 6) 35+ years ago.
Whilst I would not want to decompile a complete software suite using one of these tools, investigating interesting bits of code has always been possible.
The idea of starting the period of restricted change half way through December is to allow time for any cockups that do get in before the freeze to be fixed before the real Christmas shutdown.
In most cases, it is not really a full 'freeze', because changes to fix operational problems may still have to be made, but it is really to hold back on any non-essential service affecting changes that may inadvertently cause a problem. Many organizations still allow changes in their non-customer facing systems.
Steam catapults need a steam plant. Not necessarily nuclear.
Britain invented the steam catapult just after WWII. This was way before nuclear propulsion was an option.
But you are, in a way, quite right. We don't have any ships with steam turbines any more (probably the last built was the Type 82 destroyer HMS Bristol), so there isn't any serious steam generation in HMS QE or PoW (these are IEP - Integrated Electric Propulsion, with diesel and gas turbines driving generators and electric motors), and there is not enough electrical generation for EMALS, although I think that EMALS actually uses a kinetic storage device to charge up and rapidly dump the electrical power that is needed to launch aircraft.
When I visited the machine room in Claremont Tower at Newcastle University in 1978 or '79, they had a drum acting as the swap space on the IBM System/360 Model 65.
What I remember is that the side was replaced by a perspex panel, and you could see the multiple fixed heads arranged around the spinning drum, so there was no seek component of the access time, merely the rotation time of the drum.
At Durham, we had a PDP-11/34e with RK05 drives in a DEC 19" rack, and when the drives were busy, the whole rack rocked forward and backward quite violently as the voice coils moved.
Later, I looked after a system with 80MB SMD drives. The worst that we had happen was the platter brakes seizing, making one hell of a racket, and a minor head-crash. We did have one pack that had the bottom guard platter bent making it a little unbalanced, which used to sing, but we only used that to hold an infrequently updated system backup.
Was not a rumble filter, was modified motor mounts, a different profile belt and a pulley to match the belt profile. All hardware, no electronics.
The motor now floats much like a Rega.
Um. Queller drive. Space 1999 Series 1?
Let me look it up.
Ah yes. S1E6, "Voyager's Return". Excellent. Have an upvote.
I think you may have me confused with one of my then colleagues, probably Jan-Simon (surname withheld, as I haven't talked with him to check; he was previously at Imperial College, so knew Sun kit really well), or Paul (ex-ICL and OSF), who were doing the sysadmin at the UK AIX Systems Support Centre, although 1991 would be around the time that I started picking up what Jan-Simon was doing as he was preparing to leave, so it is possible.
Jan-Simon had big shoes to fill, and I was standing on the shoulders of giants when I moved in to try to manage what he was instrumental in setting up. I believe he's with Google now.
I'm flattered that you even remember the team. Few people either inside or outside of IBM do now.
I did not take it any further, as I've been busy myself (mainly work related - still a wage-slave even if as a 'consultant').
If you have actual contacts who may know, I am still interested. I should have followed up with TPM, but I think I got into his bad books when he was posting HPC stuff on the Register, as I was a bit picky with corrections to one of his articles about the UK Met Office, where I was working at the time.
Probably all in my mind, and he probably wouldn't remember anyway, but me agonizing about the past and being too self-critical are a couple of my failings.
Way back, you wrote a post that implied that we had crossed paths at some point in the past, but try as I might, the only contact I remember with anybody on the US West Coast was when I was working on UTS on Amdahl mainframes while at AT&T. Most of the contact I've had with people in the US has been East Coast, in AT&T and IBM (and possibly DEC, thinking back).
Anyway, glad the fires are less of a problem. I have a traveling friend who was caught up in them with his family a little, but they're OK too.
Both "sudo su" and "sudo sh" have problems, in that they will not load the root environment or run root's profile.
You really need "sudo su -" to get the full effect as if you had logged on.
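The login-shell difference can be illustrated without needing root at all: a login-style invocation sources profile files, while a plain shell does not. The profile file below is a stand-in for root's real profile, which is what "su -" would read.

```shell
# Stand-in for root's profile
profile=/tmp/demo_profile
echo 'FROM_PROFILE=yes; export FROM_PROFILE' > "$profile"

# Plain shell (like "sudo sh"): the profile is never read
sh -c 'echo plain:${FROM_PROFILE:-unset}'

# Login-style (like "sudo su -"): the profile is sourced first
sh -c ". $profile; echo login:\$FROM_PROFILE"

rm -f "$profile"
```

The first line prints `plain:unset`, the second `login:yes`, which is exactly the environment difference the post describes.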
EE don't provide updates to phones that they supplied, so why do you think they will provide OTA updates for one that they didn't?
If your phone uses stock Moto firmware, it should be possible for you to get the code to put it on yourself.
EE have a bad habit of getting tweaked firmware from the handset manufacturers for the phones they supply, with a slightly different EE specific model number, and a different ROM id. They then do not provide the updates, and the changed ROM id prevents you putting the stock manufacturer ROM on them without significant effort.
I've been caught by this twice with phones that came from them. I'm now seriously looking at not upgrading my current phone, but buying an unlocked phone and dropping to a SIM-only deal.
The only way you could claim that the PoW battleship was obsolescent would be to say that battleships as a whole were obsolescent. The KGV-class PoW was a modern ship, having been launched in 1939, commissioned in 1941, and sunk in December 1941.
By battleship standards, it was modern, with contemporary propulsion, protection and armament.
The design was hampered by the London Naval Treaty, which put significant limits in the way of a good ship.
British battleships were not designed to fight in close quarters in range of land based aircraft, they were designed to fight other surface ships. That said, the British experience of fighting in the North Atlantic, North Sea and Mediterranean showed that they could still serve a useful purpose in protecting against and deterring enemy warships, even while under air attack.
If the Home Fleet had not existed, German naval raiders would have torn the Atlantic convoys to shreds in the areas where land-based aircraft could not reach.
WWII was the cusp of the change to air dominated warfare, but in that conflict, it was still necessary to have significant surface ships as well as aircraft.
The previous Prince of Wales was the second ship of the five-ship King George V class of battleships. The other ships, in order of launch, were King George V, Duke of York, Anson and Howe.
There are some references to Vanguard being a KGV class, but in reality it had more similarities to the cancelled Lion class, which was an evolution of the KGVs.
It really does depend on exactly what you're doing with an HPC.
If you're doing any type of simulation, then HPC comes down as much to communication - shunting data around between processors/nodes - as to computation.
The flow is generally a computation cycle followed by a communication cycle to prepare for the next computation cycle.
Until you specialize your communications into silicon, moving data around is much better done using a general-purpose CPU than an FPU/APU.
A proper HPC system is a balance of multiple different technologies.
Unfortunately, real aircraft are not that strong.
This was the first Thunderbirds episode shown, so dates from around 1965, over fifty years ago.
The effects still stand up now. Good old British brute force, ignorance and an explosives license at its best.
Interestingly, in the episode "Terror in New York", Thunderbird 2 is crash-landed using foam to ease the landing, something that is now done in reality.
But that's the point. Basic System 5 (n)awk, sed, ksh88 and the other tools will probably never change, and that means that it will always work as you expect.
No matter how good the writers of gawk et al. are, there will always be enough differences to trip you up once in a blue moon, normally when you can least afford the time to problem-solve. Also, the exact version number becomes important as the tools evolve.
I know this is a very backward-looking view, but it's served me well over the last 35 or so years (before that, you're talking the original, much more limited awk from UNIX edition 7, and probably the Bourne shell or maybe csh - did you know that somewhere in 2BSD, circa 1979, there was a shell called vsh which worked uncannily like the later Norton Commander utility?).
If you want a real laugh, dig out the Edition 6 shell documentation at TUHS! Two-character variable names, with a significant number reserved, no functions, very minimal looping constructs, and a much less usable mechanism for variables to be inherited by child processes.
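The portability point can be made concrete. Sticking to POSIX awk constructs, as below, runs identically under nawk, mawk and gawk; lean on a gawk-only extension such as gensub() and the same one-liner fails the moment it meets a plain System V awk:

```shell
# Portable: sub() is POSIX awk, so this works under any conforming awk
echo "one two" | awk '{ sub(/two/, "2"); print }'   # prints: one 2
# Non-portable (gawk extension, shown as a comment only):
#   awk '{ print gensub(/two/, "2", "g") }'
```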
The problem with laptop batteries is that if they are at the stage where they can't even provide power for 30 minutes, at the end of that period, the voltage will take a sudden dive, effectively crashing the laptop.
As the warning is based on either the battery history and/or the voltage delivered by the battery, it often does not give the system enough time to spot and report a battery issue before it's too late!