* Posts by Peter Gathercole

4213 publicly visible posts • joined 15 Jun 2007

Middleweight champ MX Linux 23 delivers knockout punch

Peter Gathercole Silver badge

Also, it's generally accepted in real production environments that you don't allow direct root login except on the console of the system.

This is not a problem when the console is on the system right in front of you, but with production UNIX systems, the console is normally next to the system in your secure machine room. This makes it useful in emergencies, and does not expose your root password across terminal serial lines or networks (Oh, and turn off remote access to your KVM switch - I've come across systems where they apply the rule of no remote login for root, only for the KVM to remain accessible, sometimes without an additional password, across the network, and sometimes via telnet!)

Peter Gathercole Silver badge

Or use "su", which has been around basically forever. It works (although I've come across some niche uses of Linux, like the Smoothwall firewall distro, where it doesn't), but you need to know the password of the user you're changing to (unless you're already root).

Peter Gathercole Silver badge

Re: @Dr. Syntax

I call your Edition 7 (I know, I used the term Version 7 for years, before I realized that the correct way to refer to it was by the edition of the manual), and raise you Edition 6!

I remember different functional users for different functions, and I also remember /etc/group with group passwords, a feature that still nominally exists, but fell out of favour when secondary group sets were adopted from BSD. (Yes, in Edition 7, and even up to SVR3, you were originally only a member of one group at a time, and could switch between groups with the newgrp command, with access conditioned by the group password - ever wondered why /etc/group entries have a password field?) I also remember that back then these systems were reasonably small (my Edition 6 systems had about 12 terminals and a user base of a couple of hundred, so administration was largely a one-man band).
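For the curious, a sketch of the old-style format (the group name, members and hash are invented for illustration) - the second field is the group password that newgrp checks:

  research:aX8PwHashed:150:alice,bob    # name:password:GID:member-list
  $ newgrp research                     # prompts for the password if you're not in the member list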

I'm not saying sudo is perfect. The system that I mentioned called op (or op.access) allowed you to put in some argument checking if you wanted it, something you have some control over in sudo too, though it's crude, mainly checking whether arguments are present or missing. But hey! Something is better than nothing (which is what you get when using a superuser shell).

Also, you talk about 2FA? If you want 2FA, configure your pluggable authentication modules (PAM) for it; sudo can use it as well (although I think what you're really talking about is two separate user authentications, yours and root's, which is not really 2FA, just two passwords).

The way Ubuntu and other distros configure sudo out-of-the-box is similar to the way that Windows works without domains set up, so it is useful for people transitioning to Linux from home systems. Once they know better, they may opt to change, but many won't. But basically, if you don't like sudo, very little has changed: set a root password and empty the sudoers file, and you have what you want.
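If you do go that way, the steps are roughly this (a sketch; run them from your existing sudo-capable account before you lock yourself out):

  $ sudo passwd root    # give root a real password while you still have sudo rights
  $ sudo visudo         # then comment out or remove the user and group rules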

Peter Gathercole Silver badge

@nematoad

You are aware that sudo was written by admins, just like you?

It was invented to allow admins to give more granular access to some facilities in a more flexible way than UNIX groups on a UNIX-like system without giving full superuser access.

If you've ever run a UNIX-like OS in a larger environment, especially one running production services, you will be glad that you can give facilities such as creating user accounts, managing filesystem mounting, starting and stopping application software etc. to lower grade administrators and operators without handing out the crown-jewels to people who could be dangerous with too much power. Remember some UNIX systems in the past have been mainframe-class systems, and have had hundreds of users and run many applications on the same system. They're not all run as workstations.

Yes, it can be laborious to configure the sudoers file, but there are tricks you can do (like setting up rules to allow members of various groups access to different sets of commands and integrating with LDAP or Kerberos) to allow you to almost achieve RBAC type functionality.
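To give a flavour (group names and command paths invented for illustration), a sudoers fragment along these lines gives each group of staff its own command set:

  %useraccts   ALL = (root) /usr/sbin/useradd, /usr/sbin/usermod, /usr/sbin/userdel
  %operators   ALL = (root) /usr/bin/mount, /usr/bin/umount, /usr/local/bin/appctl
  %dbadmins    ALL = (oracle) ALL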

I agree that the standard sudoers file which most Linuxes ship, which effectively only sets up administrator and non-administrator accounts, is too blunt; but used properly it can be a very effective tool, especially for systems that are managed by many people. And even using it as an admin with full access, sudo allows me to protect myself by running as a normal user, and jumping over the privilege hurdle only when I need to. Admins who work as root all the time are asking for finger trouble to kill their systems!

I wonder just how many people badmouthing sudo have never really investigated just how it can be controlled to make it more useful, and have just taken the out-of-the-box configuration at face value. I really wouldn't want to go back to an environment where the only way of administering systems was by logging on or su-ing as root, or setting suid bits all over the place. (Before sudo became widely adopted, I used a similar system called "op", and I also know that there have been other very similar solutions, both freeware and commercial, that do similar things, so it's a problem that has generated tools like sudo more than once.)

Peter Gathercole Silver badge

Re: Devuan again

I find Synaptic better than anything that's been tried more recently. If I'm building a new deb based system, it's pretty much the first thing I install after the first reboot.

Soft-reboot in systemd 254 sounds a lot like Windows' Fast Startup

Peter Gathercole Silver badge

It's very close in my world. I've just started bringing my daily driver laptop more up to date from Ubuntu 16.04 to something still in support (mainly because software is now doing OS checks, and stopping working after the OS leaves mainstream support), and now when I use it (it's currently at 20.04) I'm noticing broken things left, right and centre. Almost everything I've tried to do ends up with having to fix something before I can finish the job.

The current annoyance is that when using X.org, I'm getting background corruption, and every now and then I get my terminal sessions just flashing junk on the screen, before reverting to readable text. This is liveable with, but is incredibly annoying (and it's not just on one terminal program, it seems to affect them all!). Oh, and the fact that Network Manager keeps dropping out, leaving me with no working network connections.

20.04 should already be quite mature, so I can't really understand why these problems still exist. I can't say that they are systemd related, but some of the problems I have had (like DNS name resolution failing after one upgrade) have been. Taken together with other Linux moves away from UNIX, it's making me want to ditch Linux completely.

I'm just wondering whether to do it now before putting Ubuntu 22.04 on the system.

Peter Gathercole Silver badge

The proprietary UNIX systems I use every working day still have separate /, /usr, /var, /home and /opt filesystems.

In the case of AIX, this is largely because of the diskless boot operations that still want / and /usr to be completely read-only (although I've come across IBM supplied software that broke those rules - it came as a great surprise to the people who wrote some AIX hardware maintenance task scripts that they could not write to /usr because it was read-only!)

Oh, and in case you wonder whether this is actually used any more, the IBM Power 775 supercomputer systems running AIX ran their compute nodes as diskless systems, though I suppose that even this was more than 5 years ago.

Nobody would ever work on the live server, right? Not intentionally, anyway

Peter Gathercole Silver badge

Re: A life saver

Oooh. Naughty naughty. Embedded terminal control sequences.

I suppose most people use something that looks like an ANSI X3.64 (ISO 6429) terminal now, but I always get jumpy when I see such things, as not all terminal emulators implement the same colours or the same supersets of the standard control sequences. In addition, my experience with the new MS Windows Terminal indicates that it is missing several capabilities that appear on an xterm (which is how it identifies itself), so things like vi are unusable. Hopefully fixed in later versions.
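For anyone wondering what "embedded" means here: an ANSI X3.64 colour change is just bytes in the output stream, e.g.

  $ printf '\033[31mred text\033[0m\n'    # ESC [ 31 m selects red, ESC [ 0 m resets

and it's exactly these bytes that different emulators render differently (or not at all).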

Peter Gathercole Silver badge

My ex-college room-mate was talking to me a few years after we graduated about wanting something to help his (secondary school) students prepare data from experiments, using the BBC Micros that the school used.

I took this as a challenge, and wrote him a program that could take multiple data sets and plot them against auto-scaled axes, with the ability to load, save, print, edit and even merge two different data sets together, along with the ability to apply a function to the data before plotting it. Not as extensive as something like Gnuplot or SPSS became, but suited for the classroom (and this was something like 1983).

I was working in education (further education in my case), and had been writing demonstration code (mainly in ISO standard Pascal) that HNC/D computing students could use as examples of good programming style, so I decided to make this BEEB program a Rolls-Royce of coding, on the assumption that the kids would take it apart to see how it worked, and I wanted them to have as good an example as possible. It used every good feature I could think of that BBC Basic provided, and even though the memory was tight, managed to leave enough comments in so people could work out what it was doing.

I got some very complimentary comments back from staff at my mate's school, with their IT technician saying that he had never seen such a well-written program. Strangely, a program with very similar capabilities to mine appeared in educational software catalogues soon after, but as I had explicitly written in the comments that I was making it freely available, I don't think I could complain (not that I could prove it was my code).

I keep meaning to find it again to see whether it stands up against my memory, but it's buried somewhere in the hundreds of ageing 5.25" disks that I accumulated over the years. I was going to go through them to write them to SD card (I bought an SD card filesystem for my BEEB and refurbished it and another one just for this purpose) before they became unreadable, but it seems such a huge task.

Peter Gathercole Silver badge

Budget HiFi

It's true. If you spend tens of thousands of pounds, you will (probably) get a good audio system.

But you don't need to spend that much!

A couple of thousand, concentrating on the turntable, cartridge and speakers can get you something that will exceed most people's needs (and their preparedness to arrange a room for optimal listening!)

If you spend very carefully, and cherry pick what you get, especially if you're prepared to buy second-hand, you can get something very listenable for under a thousand, and if push comes to shove, I could probably put together a reasonable sounding complete system, new, to play LPs, for under £500, but there would be compromises (the speakers, especially, and you would end up with a turntable sporting an AT-3600L cartridge, which is an absolute bargain).

My vintage kit, which currently includes a modded Pro-Ject turntable(1) with a classic Ortofon cartridge(1), Cambridge Audio CD player(2) and DAC(2), rebuilt NAD receiver(2), ancient JVC cassette deck(1) (this is the last component of my original HiFi that I'm still using), and a range of speakers from KEF(2), Wharfedale(2) and a niche brand called Keesonic(1) (I use different speakers for different types of music), could probably be bought in its entirety second-hand for well under a grand. I've collected it over 40+ years, some new(1), some second-hand(2), replacing bits when they break or I find something better, and I think it sounds more than good enough (although the itch is always there), and it certainly surprised visitors when they used to come into the house.

Over the years I've actually accumulated more than a complete system's worth of components, trying things to see if I could improve the sound. Must get around to advertising some of it for sale while there is still a market for it!

Creator of the Unix Sysadmin Song explains he just wanted to liven up a textbook

Peter Gathercole Silver badge

I got along fine using mailx for years, even after pine and elm came along, especially when the AT&T Toolchest version of mailx (which would handle attachments, as long as you had a suitable handling program and mailcap configured) was made available outside of Bell Labs.

What stopped me was when more people started sending mails formatted as HTML than didn't, some not even deigning to provide a plain text version of the content. The person who thought HTML mail was a good idea should have been strangled at birth!

On the record: Apple bags patent for iDevice to play LPs

Peter Gathercole Silver badge

Re: Had to have been filed 2021-04-01?

I would have believed that you were doing what you say you did, right up until the point I realised you had typed "vinyls". That is a much more recent term than the decade before CDs became common.

AMD Zenbleed chip bug leaks secrets fast and easy

Peter Gathercole Silver badge

Re: Parsing the data @Claptrap314

Well, on a system with larger than 8-bit GP registers, almost all load and store instructions work a word at a time!

In a word-aligned system, it's actually much more difficult to address individual bytes in a word. Some systems have hardware support for this (but it's still slower); in others, it involves a load, an AND and, three times out of four on 32-bit architectures (or seven times out of eight on 64-bit ones), a shift. If there are fused instructions for AND and shift, this can be made slightly more efficient.
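For anyone who hasn't had to do it by hand, here's a minimal sketch in C of that load/AND/shift dance (function name invented; little-endian byte order and 32-bit words assumed):

  #include <stddef.h>
  #include <stdint.h>

  /* Fetch byte 'i' from word-aligned memory: one aligned word load,
     a shift to bring the wanted byte to the bottom of the register
     (needed three times out of four on a 32-bit machine), then an
     AND to mask off the rest. */
  uint8_t load_byte(const uint32_t *mem, size_t i)
  {
      uint32_t word = mem[i / 4];        /* aligned word load */
      unsigned shift = (i % 4) * 8;      /* bit offset of the byte */
      return (uint8_t)((word >> shift) & 0xFF);
  }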

So for these types of architecture (almost everything after the 8-bit microprocessors), it is vital for efficiency to access strings as words.

This is hidden on most architectures by the compilers. It's only really people who write assembler (or, even more hair-shirt, machine code) who see this level of detail. The exception is when using pointers, where word-aligned data structures can be critical on some architectures.

But you're using the general instruction set to do this, not any SIMD extensions.

Peter Gathercole Silver badge

Re: Parsing the data

The example of strlen() was actually in the article. If we were still living in the 8-bit ASCII world, I would accept that this is a scenario. But we're not. UTF-8, or sometimes ISO8859, is the order of the day, and just counting along an array of bytes until you get to a zero byte is no longer enough to work out the character length of a string. It's not that simple any more (although I'm sure a single byte with value 0x00 still represents a NUL in most collating sequences other than EBCDIC).
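To illustrate the difference (a sketch, not glibc's actual code): the classic byte scan still finds the end of the string, because UTF-8 never uses 0x00 inside a multi-byte sequence, but it no longer tells you how many characters you have:

  #include <stddef.h>

  /* Byte length: all strlen() promises is the number of bytes before the 0x00. */
  size_t byte_len(const char *s)
  {
      const char *p = s;
      while (*p) p++;
      return (size_t)(p - s);
  }

  /* Character count under UTF-8: continuation bytes look like 10xxxxxx,
     so count only the bytes that start a character. */
  size_t utf8_chars(const char *s)
  {
      size_t n = 0;
      for (; *s; s++)
          if (((unsigned char)*s & 0xC0) != 0x80)
              n++;
      return n;
  }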

I'm also not sure how vector instructions can make conditional decisions on part of a vector register. I very much doubt that it is efficient to treat each byte in a vector register individually with SIMD instructions, so for both of these reasons I doubt that SIMD is being used for strlen(). Maybe I should look at the source; after all, it's generally available.

I can definitely see that doing 256-bit load/save operations, using the registers but not the SIMD arithmetic instructions, could be a real timesaver (so strcpy, or better, strncpy), but UTF-8 still makes life a bit more tricky. It would be interesting to see whether this is more efficient than the variable-length instructions (basically implied loops) that were explicitly added to many instruction sets for this purpose (although I'm not sure x86-64 has them). I'm too out of touch with modern processors.

Actually, I really ought to know this, but exactly how does the application development toolset cope with building programs to run on the same basic architecture but with different ISA features? For what I know best, Power and AIX, unless you tell the compiler otherwise, it will create code that only includes the common subset of instructions for the processor family. This counts double for libraries, as otherwise you would need to provide multiple libraries for each set of extended instructions for a particular processor model or generation, and if you generate code that includes, say, some of the Power 10 specific features, the resultant code will not run on the older processors (or will result in processor traps and software emulation of the later instructions).

The only way that I can think this would work is if there were multiple versions of library routines in a library, and the runtime loader decided which version to link to at execution time, depending on the model of processor it was running on. I don't think you could make this decision during the program flow itself, because otherwise the code would spend as much time deciding which variant to run as running it! Maybe someone can enlighten me here.
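For what it's worth, glibc answers this pretty much as guessed above: it ships several builds of routines like strlen() and uses 'IFUNC' symbols, so the runtime loader picks one based on the CPU's features. A rough sketch of the idea in plain C (function names invented; the GCC __builtin_cpu_* calls are real):

  #include <stddef.h>

  /* Plain byte-at-a-time version. */
  static size_t strlen_scalar(const char *s)
  {
      const char *p = s;
      while (*p) p++;
      return (size_t)(p - s);
  }

  /* Stand-in for a hand-vectorized build of the same routine. */
  static size_t strlen_wide(const char *s) { return strlen_scalar(s); }

  static size_t strlen_choose(const char *s);

  /* Starts out pointing at the chooser; the first call probes the CPU
     and latches the selected implementation, so the decision is made
     once, not on every call in the program flow. glibc's IFUNC resolver
     does the equivalent when the program is loaded. */
  static size_t (*strlen_impl)(const char *) = strlen_choose;

  static size_t strlen_choose(const char *s)
  {
  #if defined(__x86_64__) && defined(__GNUC__)
      __builtin_cpu_init();
      strlen_impl = __builtin_cpu_supports("avx2") ? strlen_wide
                                                   : strlen_scalar;
  #else
      strlen_impl = strlen_scalar;
  #endif
      return strlen_impl(s);
  }

  size_t my_strlen(const char *s) { return strlen_impl(s); }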

Peter Gathercole Silver badge

Re: Parsing the data

The description of this got me thinking. The write up seems to suggest that this vulnerability affects just the vector registers, and I wondered how the vector registers were actually used.

The vector units in Zen processors are designed to do single operations over multiple data values (Single Instruction Multiple Data or SIMD).

So how many string operations are likely to be processed as vector operations? I'm no expert, but the way that I thought these units were used was mainly for mathematical operations, like array processing. I'm sure that there are some applications where you may apply integer arithmetic operations to strings of characters (things like encryption and compression spring to mind but I'm sure there are inventive people out there who can think of novel uses), but nowadays many processors feature specific instructions not involving the vector units to do these operations, although I suppose that these instructions may use the vector registers as they are convenient large registers (256 bit in Zen2, I believe). This means that they can hold a maximum of 32 8-bit bytes, long enough for a password, but probably too short to hold the entirety of a decent length key or certificate.

So the question in my mind is how often the vector units are actually used to process data that could be easily identified as useful. Although it may be possible to get quite large amounts of data, I'm not sure that it would be that easy to consolidate the data from each vector register capture into larger units of data from consecutive areas of memory (I don't think that the vector contains anything indicating the address of the data in memory), and I don't think that the software implementing the capture has much control over what the vector registers were used for after being freed up.

But user IDs and passwords or pass phrases may be damaging enough.

Want to live dangerously? Try running Windows XP in 2023

Peter Gathercole Silver badge

Re: XP64 felt not merely usable, but good: fast, responsive

Some, but probably not that much. The AV indirection layers that effectively examine every packet in and out of the system, and even look into disk traffic in real time, are probably a bigger resource hog.

I wonder if stripping these things out of a modern Windows (if it is even possible) would be another dangerous but interesting experiment.

Peter Gathercole Silver badge

Re: Why? Really, why? @RAMChYLD

That's pretty much rubbish.

What initially drove PC development was IBM cashing in on departments in companies wanting to get away from the stranglehold that their Data Processing departments had over the computing resources.

On top of that, smaller companies that could not justify even a departmental mini-computer were able to do business tasks on PCs. This market was opened up by Apple (with the Apple II) and others with Z80 based CP/M systems, and parts of IBM wanted to be able to grab some of that market (although other parts felt that the dominance of the Mainframe was threatened by those systems, which ultimately proved correct).

At the time, games were a very small part of the PC market (if you ever played games on a PC with CGA graphics and the built-in speaker for sound, you will know that a Commodore 64 was a better games platform!)

I know it was a little different in the US, but the cost of a PC in the early days dwarfed that of 8-bit micros by a factor of at least 5.

PCs ate into the home market when the very cheap clones started to appear (like the Amstrad PC1512 in the UK, and probably the Tandy 1000 in much of the rest of the world, although there were others), which were only 2-3x the price of the 8-bit micros, but rapidly took over as one common platform (ish) versus all of the disparate 8-bit systems available at the time. Once there was effectively a single platform, that is when mainstream games development came to PCs, and the rise of fast graphics adapters and high function sound cards cemented the PC's place alongside consoles. I would place this in the early 90's.

Peter Gathercole Silver badge

Re: Why? Really, why?

References?

UNIX had segregated virtual address spaces from at least Edition 6 in 1976, which used the segmentation design of the mid-to-large PDP-11 models very effectively (I used V6 on a PDP-11/34 in 1978, and these were really not large systems). These were isolated virtual address spaces, which meant that as a normal user, you didn't have the ability to look into the address space of another process, and you appeared to have a linear address space for your program, regardless of where it was in memory.

UNIX was a multi-user OS in the same time frame, and although the privilege model was quite simplistic, it did give significant protection to both OS components and other users when used correctly. Of course, it was not the multi-level ring design that came from Multics (which VMS pretty much adopted), but that was part of the design ethos of UNIX to not be Multics.

What UNIX did not get until after VMS was demand page storage virtualization, which appeared in BSD 4.2, and was rapidly added to System V (I think SVR3 was the first mainline release from AT&T which did it, but R&D UNIX 5.2.6 had it), and people like Sun pretty much made it a feature of their early releases. Other UNIX vendors also incorporated it at a similar time (early '80s).

I can imagine that IBM may have said something to this effect to increase FUD in the very early days of UNIX, but remember that they did have UNIX ports on the Mainframe in the '70s (IX), and of course AIX on 6150 and PS/2 in the '80s, and then the RISC System/6000 so even as they were disparaging towards UNIX, they were also adopting it! What they were afraid of was small, capable department level office systems that were cheaper than anything they were prepared to offer.

Peter Gathercole Silver badge

Re: Chromium 86 based browser for XP

And even typing *WORD, and being in a word processor instantly (if you have the ROM fitted)

But you may have some problems trying to use Google or watch YouTube (and, yes, I have seen the video of a project to watch YouTube videos on a C-64), and even playing your MP3s is more than a bit tricky.

Peter Gathercole Silver badge

Re: My takeaway from this article...

Linkers no longer do what they used to.

When you used a linker to produce a statically linked binary using proper libraries (it's interesting that ancient versions of UNIX used the "ar" command to manage libraries, which was extended to create "tar"), the linker would extract each needed .o file from the library before linking it into the binary. The scope of the .o file was exactly what the creator of the object wanted: you want a single subroutine, put it into its own .o file; you want a large library of subroutines, include them all in one .o. The linker would include the whole of the .o file in the resultant binary (which, because it was statically linked, would load quickly, and so long as the system-call layer from the OS did not change, remained largely OS version independent!)
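For anyone who never did it, the flavour of the thing (library and file names invented):

  $ ar rcs libutil.a strfns.o datefns.o    # a static library is just an archive of .o files
  $ cc -o prog main.o libutil.a            # the linker takes only the members that resolve a
                                           # symbol, but each member it takes comes in whole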

When dynamic linking started to be used, because of the way that it was integrated into the memory segment model of many systems, it was not really possible to do this on a per-object-file basis. Many shared libraries occupied an entire memory segment, so it was best to make them as large and all-encompassing as possible to reduce the number of memory segments used. The linker really now just becomes a dependency checker, making sure that all of the symbols required by the program are actually present in the included library, and much of the work has to be done at run time!

And anyway, they were all shared, so what did it matter if you pulled in the whole library! You would save space because there would only be one copy of, say, the C library for all of the applications running on the system.

This can cause problems. I don't know about Windows, but on Linux and most UNIX systems with shared libraries, it can be difficult to work out the true size of a program. Most simple tools will tell you the size of the code segment(s) in use by a program, but they will count the entirety of the code including the shared libraries. This means that if you are looking at several programs that all use the same shared library, that shared library will actually be counted multiple times.

There are more sophisticated tools to look at exact memory usage (on AIX there is svmon), but actually untangling what they say is difficult.

Bring this up to date. Modern applications are built on layers and layers of tools, and each layer will include its own variable, self-modifying code (think things like Perl modules or Java class libraries) or more traditional shared libraries. And if these are arranged as dynamically linked libraries, with the premise that each library has to include as much as possible, these applications become absolutely huge!

Add to this the memory space segregation that is required for threaded applications now because of security concerns (remember, the original SunOS Lightweight processes and the following Posix thread model shared the code and data space between threads to make them, um, lightweight), and we now have a situation where every tab in the browser is effectively a full heavyweight process, not really the thread that used to drive it.

To add insult to injury, current containerized formats like Snap, Flatpak and AppImage ship the entire library dependency set in the container for each application. So a large application packaged like this will have its own extensive copy of the Linux userland, meaning that you can have many, many versions of things like the C library running on your system. This is the diametric opposite of the original thinking behind shared libraries!

I want to go back to the days of the more-static syscall interface and static linking of applications, rather than this bloatfest that modern day application development forces us to carry! It does have its problems, but just think how fast everything would run!

World's most internetty firm tries life off the net, and it's sillier than it seems

Peter Gathercole Silver badge

When I worked for a major IT company's support centre, a colleague's 'workaround' for a particularly severe security issue, entered into the problem tracking system, was exactly that.

"Turn it off, unplug it, put it in a locked cupboard and throw away the key".

Officially it did not go down well with the team it was escalated to, but unofficially, they were amused.

Peter Gathercole Silver badge

@Katrinab

Actually not.

General purpose computing, with terminals and latterly stand-alone and LAN-connected PCs, was used in businesses before the Internet really existed (or at least before companies' data processing systems were attached to the Internet - remember the NSFnet links [part of the original backbone of the Internet] were only opened up to commercial organisations in 1991).

And before that, company records were stored for decades using paper, but the type that had holes punched into it that could be processed by card readers, discriminators and sorters.

And you're forgetting how much business was done via fax.

Peter Gathercole Silver badge

Yes, but generally speaking they had to be in the immediate physical vicinity of the paper, or at least have someone complicit who was.

Now, they can be on the other side of the world!

Someone just blew over $190k on a 4GB first-gen iPhone

Peter Gathercole Silver badge

Re: Historical iPhone @LateAgain

I don't agree that it was the first phone that was independent of a network.

I bought a Palm Treo 650 before the iPhone was a thing, and even though it was an Orange branded device, it was able to take SIMs from different networks (the lock in was a contractual one, not a technical one).

But you have to admit that, before devices like the iPhone, Palm Treos, Nokia Communicators et al., there was actually little reason to take your handset with you when you switched phone providers. It's only when these high-value phones came along that people wanted to take their handset with them.

Peter Gathercole Silver badge

Pascal for the BBC micro.

Um, I think you needed to read the manual. The BBC Micro OS (even 0.90) was perfectly capable of doing updates to the middle of files on DFS, NFS and all later filesystems, and if it was Acornsoft Pascal (which came in two ROMs), it was a full ISO-validated Pascal, and you had full access to all of the file options that the BBC OS and filesystems provided.

It's been such a long time since I looked at Pascal on the BBC micro that I would have to drag down my copy of the Acornsoft Pascal guide to check the syntax, but I see no reason why it could not be done.

Strict standards compliant Pascal was a very limited language. It was always intended to be. There was strict typing, and the I/O you could do was limited, mainly to fixed-length records, and you had to jump through hoops in order to do anything like direct memory access or type casting between different data types (Pascal has pointers, but they are vastly different to what something like 'C' provides). But this was by design, because it was really a teaching language, and intended to teach good programming practices before students learned poor ones, and was meant to be a stepping stone to other languages. The people who wrote the extended Pascals did not really understand the purpose of the language.

Peter Gathercole Silver badge

16 years? I bought a Li-Ion powered tool, and left it on the shelf for 2 years, and when I eventually came to use it, the battery was totally flat and would not charge at all.

Cautionary tale I guess. Periodically re-charge all your battery tools, even if you've not used them.

Antitrust clouds continue to gather over Microsoft's European business

Peter Gathercole Silver badge

Re: Cry me a river

If the playing field was level, you would be right.

But in case you hadn't noticed, this article was not about products from other companies not being competitive, but about MS's ability to make it more expensive to try to compete, thus reducing their ability to try.

When it comes to Slack, it's about MS being able to give away a product to remove a whole market segment that other companies inhabit. And when their competitors' products are no longer viable, and Teams has become ubiquitous, MS can ramp up the price, either of Teams itself, or more likely of the subscription for MS Office 365, or whatever it's called nowadays.

This is why it's being examined by the competition watchdogs.

Stolen Microsoft key may have opened up a lot more than US govt email inboxes

Peter Gathercole Silver badge

Russian dolls.

Jokes aside, we now get to the nub of the matter. You can encrypt it at rest... and then you have to store the encryption key somewhere to allow you to use it. And if you use the data in the cloud, then the keys have to be stored somewhere in the cloud as well.

Client-side encryption can fix this, but you then have to have a means of distributing/generating the client side key, and we're back to the same old problem of signing the new key with something that you can trust on the Internet, which then makes the signing key vulnerable, (and of course, you can't process the data in the cloud!)

If you have it locked to one device, then sharing data is difficult, which negates many of the perceived benefits of the cloud. And you have to have some form of back-door, because devices can and do fail (and Apple appear to want to replace complete motherboard assemblies including the SSD storage if a device fails in warranty). So where do you secure your device storage in case of failure? Why, the cloud, of course!

Without some completely inviolate signing method, the cloud can never be completely secure.

Douglas Adams was right: Telephone sanitizers are terrible human beings

Peter Gathercole Silver badge

Re: Real Sanitizers

Radio, books, LP, TV series, and (unfortunately) the film.

With due respect to the attempts by Eoin Colfer and Dirk Maggs, the follow-up radio series (after the Secondary Phase) were just a pastiche of the original, even the ones penned by Douglas himself.

First of Tesla's 'bulletproof' Cybertrucks clunks off production line

Peter Gathercole Silver badge

Re: Joke

The Rover 500 was designed to be a vehicle to replace both the Rover 200 and 400 models, which both relied heavily on Honda designs (the 200 was based on a restyled earlier 213/6 model, and both included large numbers of components built by Honda), which reduced the profit-making potential. The design was flexible, and could be the basis for models ranging from small family cars to replace the 200, up to larger hatchbacks and saloons to replace the 400 and 600 models.

When it was clear that the Rover 500 was not going to be funded by BMW, it was necessary to tweak the 200 and 400 models to become the 25 and 45, styled to match the 75, but in reality still Honda designs under the covers.

When BMW divested themselves of Rover, Honda decided to stop supplying the parts they were making for the 25 and 45 models, and there had to be a hasty but very cheap re-working of various parts of both the 25 and 45, which spilled over to produce the revitalized MG ZR, ZS and ZT models based on the 25, 45 and 75. But Phoenix had no money to create a new car (now that the Mini and the 500 were denied to them) to replace the ageing models, so the writing was on the wall, and when they tried to forge links with Chinese car companies, it seems to me that the two companies involved dangled a carrot just long enough for Rover to go bust, and then waltzed in and bought the remains for a much lower price than the collaborations would have cost them.

One story I heard was that in the days following the final collapse of Rover, Honda sent in engineers to remove all of the machinery used to produce any parts of the 25 and 45 models that were derived from their designs, just to prevent the Chinese companies from getting their hands on the jigs and presses (even though Honda had long ago retired the Concerto and Civic models the Rovers were derived from). This would have been the reason why variants of the Rover 75 (which had no Honda parts) appeared in China, but no 25/45 models ever did (although the production issues of the K-Series engine were finally fixed, and the result had a long and successful life as the SAIC Kavachi and NAC N-Series engines in vehicles made for the far-eastern markets).

Peter Gathercole Silver badge

Re: Joke

I'm not sure the first gen. new MINI was a re-skinned BMW.

When BMW bought the Rover Group, there were apparently design concepts, and later mockups and even some engineering designs, for five possible Mini revival cars already created by the Rover design team.

When BMW sold the remains of Rover to Phoenix, they retained all of the work that had been done on the new Mini, together with the almost production ready design of the Rover 500, a car whose styling and some design features looked suspiciously like the BMW 1 series, although that is disputed by BMW. They also retained the Cowley factory (the most modern production facility that Rover had), which is where the new MINI is produced.

Even the original engine was not a traditional BMW design, being an engine produced in Brazil under a consortium originally set up by the Rover group and Chrysler to produce small, efficient engines that would run with a high percentage of ethanol, to make it suitable for the US (and other) markets.

So there is a very good chance that the New MINI is in reality the last successful effort that came out of Rover.

Since then, as pointed out already, the MINI has become an oxymoron, not being very small at all, and is now probably a re-skin of other BMW models.

Goodbye Azure AD, Entra the drag on your time and money

Peter Gathercole Silver badge

The ones that annoy me are the vehicles where they've taken a small family hatchback, put big wheels on to make it taller, and added large skirts or bulging body trim to make it look 'butch', but left much of the mechanics and interior fit the same.

So you come up to one, and see a tall car with a large footprint, and then you climb in, and it's just as small and cramped as the original vehicle they adapted! What is the point!

Peter Gathercole Silver badge

Re: Time to rename it and make it just part of the Entra brand

But the people who know still call IBM Spectrum Protect by the name Tivoli Storage Manager. And many of the underlying tools still have names referencing the previous incarnation, ADSTAR Distributed Storage Manager (ADSM).

Ditto Spectrum Scale (aka GPFS, or MMFS [Multi-Media File System], or even TigerShark [some tools are still called ts...]), and System Mirror PowerHA (HACMP). Names tend to persist in technical circles.

When talking about documentation, unless Microsoft is going to tear through the API, you can probably get away with a comment in the introduction to your documentation, at least in the short term, along the lines of "The product previously known as Azure AD has been renamed by Microsoft as Entra. For all references to Azure AD, read Entra."

The only time I use an IBM tool's 'proper' name is when I'm talking to a marketing or licensing droid.

Three signs that Wayland is becoming the favored way to get a GUI on Linux

Peter Gathercole Silver badge

It's not really memory or CPU in the case of the display system (I was talking in a more general sense), but hardware wise, the actual display hardware also changes, and is continuing to change rapidly.

The people doing the most promising Linux port to Apple M1 and M2 hardware have said that they do not have enough resource to both re-write the X.org backend display driver for the new silicon and also do a Wayland compositor. So they've opted to just do Wayland. And Apple never did a native X11 driver for their display silicon for MacOS, so are not going to do one now.

Unfortunately, the different vendors of display hardware all do it their own way, and are developing their own silicon, with different numbers of cores and basic graphic primitives, different levels of abstraction for higher graphics operations, and even different ways to send commands to the display hardware. And they don't keep it the same over time. New cards from Nvidia, AMD, Intel et al. just don't work with the old drivers, and keeping up with the changes if the vendors themselves don't do it is very difficult for community development projects, especially if the specs aren't published by the vendors.

This is, unfortunately, the way things will go forward, and an indication that Linux is moving further and further away from being a UNIX-like OS to being something distinct from it.

Peter Gathercole Silver badge

Re: How to do this with Wayland? Don't know!!

It doesn't have it, and it's not going to have it as part of the Wayland protocol.

What there is is an X server called XWayland that runs on top, using the Wayland protocol to talk to the compositor (think display driver). But it's not perfect, even though it is mainlined in X.org's X server. And I believe that it has problems with remote sessions, and many things like window re-parenting and keymap modifications, and possibly cut-and-paste, will not work exactly the same with anything outside of the XWayland display. I've not played around with it much, so I don't really know.

Having got used to using X for the last 30 years, this seems to be a backwards step to me.

Peter Gathercole Silver badge

Re: At least systemd worked…

You had to update the device driver, didn't you?

Both AMD and Nvidia retire the drivers for old cards. Chances are your card is no longer in the AMD universal driver, and you're back in un-accelerated VESA mode. Reinstall the old driver and pin it so that it doesn't get upgraded, or switch to the open drivers.

Peter Gathercole Silver badge

It's less that the foundations are crumbling, and more that change is happening on the hardware side of things.

If we had stability - if software didn't become more bloated and resource hungry, and new programming and application paradigms didn't keep appearing - then the existing tools would work fine forever.

But change is happening. More powerful systems, with more and different cores, more memory, more network bandwidth, and new storage designs appear, and existing software needs to change to accommodate this. I don't think anybody thought, when Linux was first being written, that we would have the monster 64-bit systems with multi-gigabyte RAM and terabyte disks.

What is happening here is that the new, young things doing the programming don't want to learn legacy, they want the new, shiny, and feel that re-inventing the wheel rather than just changing the tyres is the way to go.

Oracle pours fuel all over Red Hat source code drama

Peter Gathercole Silver badge

Re: @Mickey9fingers

I think there is a time limit. I believe they have to keep providing the source to the binaries that you already have for a period of 2 years, but where that 2 years starts is probably a moot point. And they can charge a reasonable media and transport fee for providing it (this goes back to the time of distributing code on physical media like tape or some form of disk). Nowadays the Internet is perceived to be the way to do it, but GPL2 goes back to when the Internet was quite primitive.

But after terminating the support contract, they have no obligation to provide either updated binaries or the associated source code for the new binaries.

Peter Gathercole Silver badge

Re: Opensolaris anyone?

I presume you're talking about MacOS here.

Well, it's a bit debatable whether MacOS is a true UNIX. Yes, it passes the Single UNIX Specification verification suite (or at least it did about 10 years ago), so there is merit in calling it UNIX. But, and in my view this is a big BUT, it is not a genetic UNIX, i.e. one derived from AT&T sources.

The kernel is some strange mashup of the Mach kernel and some stuff developed by Apple to produce their XNU kernel. From the Mach side, it inherited code from BSD 4.3, but this is mainly AT&T free except for the ancient stuff, due to the settlement between AT&T and UCB.

The MacOS CLI userland is straight from BSD, and has quite a few differences from AT&T-derived UNIXes, and quite a different feel to it. People who know BSD will be quite at home, but to me, mainly used to System V and its ports, it feels more like the archaic UNIX Edition 7 than SVR3/4. And besides that, most people very, very rarely use the shell on a Mac. They use a proprietary GUI (not even X11 based), which is what users mostly see.

So yes, it is a UNIX system, but few people actually use it as such.

If you look, the main reason why UNIX did not survive on the desktop was cost. In the mid '90s, I was putting AIX systems on people's desks, both using thin clients (IBM X-Stations and PowerPC RS/6000 model 220, 230 and 250s) and also full-fat systems like 43Ps. But the cost was HUGE compared with even IBM-priced PCs. Using 6091-19 monitors (the 1091-17 monitors were cheaper, but late to market) at close to £2K a pop, IBM Model M keyboards at £108, over £40 for a three-button mouse, and then adding the cost of the system unit itself, again running into thousands, it was just cheaper to put a decent Thinkpad and a good sized monitor in front of the user (and more in line with IBM's desktop policy).

IBM was expensive, but I think that all of the UNIX vendors put a premium on their workstations. They just thought that customers would pay for the perceived better performance. None of them ever produced systems running their proprietary UNIX at a price where they could compete even with high-end PCs (look at IBM's PS/2 models 70 and 80 running AIX 1.2, and Sun's i386 systems running SunOS/SVR4, and compare prices - and they were Intel systems!) Even the OS licenses were much more expensive than Windows.

When you look at the route Apple took, they only put a UNIX on their Mac systems after being unable to offer decent systems with their original, completely proprietary OS, and effectively bought in a UNIX platform when they acquired NeXT. By that time, commodity processors had achieved enough power and features to comfortably run a UNIX-like OS.

I think that the point at which UNIX really lost out was when IBM decided on the 8088 for their PC platform (and the rest of the desktop world followed IBM). If, by a twist of history, the 68000 had been ready enough (and at the right price) to be used, I suspect that by the mid to late 1980's, desktop PCs would have had all of the necessary hardware to run UNIX, and I think we would have seen UNIX rise instead of Windows, because at the time it was just so much more mature and capable as an operating system than was MS-DOS.

Peter Gathercole Silver badge

Re: Opensolaris anyone?

Ah. A mistake in my post. HP's RISC was Precision Architecture, or PA-RISC. Where did I get confused with Prism?

Peter Gathercole Silver badge

Re: Opensolaris anyone? @containerizer

I actually got to look at some of the source. The kernel really was a merged system. It had quite a lot of both Sun and AT&T code in it, but one of the major features was that they tried very hard to remove all #ifdef'd code, so that much of the internals had to be rewritten to make this happen.

When it comes to userland, there were multiple versions of tools that used different flags. You could select your 'personality' by setting the order of the directories in your path to pick up your preferred flavour. The same thing could be done to select the library functional flavours with the LD_LIBRARY_PATH when using dynamic linked libraries (yes, I'd forgotten about that until just now!)

So I wouldn't say that SVR4 is SunOS, nor would I say it was SVR3+. It was bits of all of these things.

Peter Gathercole Silver badge

Re: Opensolaris anyone?

Part of the problem is that while Sun defined SPARC, they did not initially want to make processors. They wanted silicon foundries to license and actually bake SPARC processors, in a similar fashion to what ARM do now.

But the problem is that only a couple of chip makers decided to pick up SPARC designs, and eventually Sun had to commission processors for their own use, although Fujitsu did manufacture SPARC processors for a long time.

As a result, although SPARC performed well for its time, when HP started to up the clock speed of their Prism architecture, IBM produced the Power processors and DEC produced Alpha, SPARC did not stay competitive. They tried going massively parallel for a while, and then tried to up the thread count of the processors by increasing the number of integer units in each core, but they never really produced a killer SPARC implementation again. Fujitsu did, and Sun/Oracle did use some Fujitsu chips in their 64-bit lines, but they just did not have the resources to remain competitive. The so-called ROC chip, which was supposed to hold much promise, never saw the light of day. It was a long and slow death.

Eventually Sun ran out of money, and were consumed by Oracle, probably the worst place they could end up! I know that IBM currently has a bad rap sheet because of Red Hat, but it has often crossed my mind what would have happened if IBM had bought Sun.

Peter Gathercole Silver badge

Re: Opensolaris anyone? @naive

When looking at SVR4, you have to remember when it was defined, and what it was trying to achieve.

It was intended to be Unified UNIX, allowing software to be easily ported from SVR3, BSD4.2, SunOS and Xenix. It was backed by AT&T, Sun, the original SCO, ICL, Amdahl and Fujitsu (although differentiating between the last three was difficult), and also included IIRC Dell, Acer and a number of other PC manufacturers. But it did not have buy-in from IBM, DEC or HP, who saw it as a risk, and set up the Open Software Foundation, with their own brand of 'standardized' UNIX, in competition.

SVR4 was cumbersome, to a degree, because of all of its roots. The initial reveal was in 1987 (I was at one of the developer conferences), and I then got to play with some Sun boxes with it on in 1988.

To me, with a Bell Labs/AT&T background, it felt very much like AT&T UNIX, but at the time I was using R&D UNIX, which had already got many of the features that AT&T contributed to SVR4. It had some really nice features. I particularly liked the ABI (Application Binary Interface), which meant that shrink-wrapped software for a particular processor architecture could be installed on any SVR4 system with the same architecture, even from different manufacturers. I saw the same packages installed on SPARC systems from Sun and ICL, and also (another package, obviously) on Sun i386 systems and another Intel system - was it a Dell?

Because of the time it was set up, many of the things that we take for granted, such as user interfaces like Motif, CDE and the various spin-offs, just hadn't been invented (and Windows was at version 1 or 2). So comparing it to systems from even a few years later is actually disingenuous. If I remember correctly, they decided to use a Display PostScript rendering model rather than X11 (well, I suppose X10 really) because the latter was not really mature.

SVR4 was a product of its time and IMHO was good, but was left to wither on the vine because it just did not get traction after IBM, HP and DEC all pushed into the UNIX marketplace with their proprietary UNIX systems (yes, IBM and HP, having been fundamental in setting up OSF, abandoned it after poisoning Unified UNIX, and kept their own flavours of UNIX; only DEC had a real OSF product). Sun were really the only company that gained mainstream popularity with SVR4, although ICL/Amdahl/Fujitsu did try, and Sun put a very SunOS-like view on their systems.

When the remains of UNIX Software Laboratories (set up by the SVR4 founders to promote SVR4) ended up with Novell, and the rights were licensed on to The SCO Group (who had bought some of the IP from The Santa Cruz Operation), it was only the UnixWare line that remained alongside Solaris. Where it is now is anybody's guess.

Peter Gathercole Silver badge

Re: Opensolaris anyone? @naive

They have. Power and AIX have moved on in the last decade or two. There are Power systems under the OpenPower banner that do not have the Power hypervisor burned in, but use one of the Open Firmware projects, and can be managed as an adjunct to a VMware environment, managed by many of the same tools. These systems only run Linux though, as far as I am aware.

Interestingly, Power10 systems have a RESTful API to the hypervisor and the newer soft HMCs, so they can also be managed with more standard tools, to a degree. There is a cloud offering of AIX on Power in Azure and IBM Cloud (although I don't think there is anything in AWS or Google Cloud), managed by an outfit called Skytap.

IBM have obviously been thinking about this, because there are a number of changes to AIX that have been made to allow blank, vanilla system images to be cloned at the storage level and pick up their configuration at boot time. This is very clever, but tripped us up when we built some traditional systems using NIM, which lost all their network customizations whenever we rebooted them, because this option was enabled by default!

I had a brief play with it early last year, and it was interesting, but IMHO unlikely to be used like a cloud service (although the possibility is there, with fast system deployment and spin-up). The problem we found is that accessing Azure storage blobs was not as easy as it should be, and although you had rapid deployment, the best option was to internalize all the storage within the Skytap environment as virtual fibrechannel storage.

The evaluation I was involved with fell down when we were working out how to import a couple of tens of terabytes of data, and could not work out how to do it to AIX storage in the Skytap environment within the outage window that the end client had outlined. I also think that it looked like it would be quite costly in the long term compared to migrating to in-house managed systems, but I don't know for certain, because I was on the technical side, not the financial side.

It all felt a little not-quite-ready, and as such, I fear it will miss the boat. Hopefully, in the last year, they've ironed out some of the issues, and it is a bit more slick.

Peter Gathercole Silver badge

@Larry the Lobster

But that is it. Once you do it you will have no support, no further updates, no access to future source code releases, no possibility of becoming a Red Hat customer ever again.

Distributing the code is a one-shot option that you will not be able to repeat. All in accordance with the GPL and RH's contract.

I don't like it, but that's how it is.

Peter Gathercole Silver badge

Re: Opensolaris anyone? @containerizer

What you said is only partly correct. There are things that the last few true UNIXes standing do that Linux just doesn't, just as chuckufarley said above.

I maintain AIX systems. The RAS features of AIX on Power are unmatched by anything Linux does. There are predictive failure reports, automatic disabling of failing components (keeping a system running even when it's in a little trouble), dynamic replacement of many failed components, live LPAR migration from one system to another, and many others, and they've even borrowed back from Linux by implementing live kernel patching!

What this enables you to do is keep AIX systems running for a long time without outage. And that is the crux. With Linux, it's all about scalability and resilience by going wide, with many small(er) systems in parallel such that losing one because of maintenance or a hardware issue will only have a marginal effect on the service, rather than keeping a single large OS instance running.

But that requires your application to be written a certain way, and not all workloads conveniently fit into that deployment model. I know, the monolithic deployment mechanism of small numbers of high-powered systems is seen as old hat nowadays, but I sometimes wonder whether that is just fashion - that's the way the commodity hardware works, so that's the way applications should be written. But the monolithic model is older and well proved, and has survived many challenges to its use over the years (and has even subsumed many models, such as the Open Systems move in the '90s and '00s). As long as you can keep scaling up the systems to keep up with the workload, it is still a perfectly valid way of running systems, as many banks and other financial institutions will tell you, if you ask.

Whilst I have seen them, and modern hardware platforms do allow it, you just don't really see the mainframe-challenging single instance Linux systems being used. It's much more normal to slice these large systems up into virtual machines than to deploy a huge Linux system (I know, Power systems are also sliced up, but there it is much more normal to see a couple of large LPARs doing the grunt work, with smaller LPARs surrounding them for the application interface and management functions, with system-image isolation).

AIX is dying. I know that (but I'm close to retirement, so I care less than I did). Slowly, Linux is getting some of the features that the financial orgs want (and I think that one of the few benefits of IBM/Red Hat is that Power RAS features may end up being better supported in Linux), but Linux is also becoming different from UNIX. It's now almost impossible to back-port a modern piece of software from Linux to a traditional UNIX (goodness, it's becoming difficult in some cases to port to Linux systems that don't run systemd!)

I can see a further slow tail-off of traditional UNIX, and whether it will stabilize in a niche like mainframes have is an interesting thing to watch for, but I don't think it will.

Oracle have done almost nothing with Solaris and SPARC systems since they bought the technology. It is very, very unlikely IMHO that we will ever see new iterations of SPARC processors, now that Fujitsu seem to have transferred their attention to 64-bit ARM processors. There is nowhere for any customers still running Solaris to go other than to another platform. HP-UX is all but dead. IBM AIX and Power? Well, for as long as IBM invests in the Power roadmap, I think AIX will survive in some form, but it will become increasingly legacy as time goes by.

Turning a computer off, then on again, never goes wrong. Right?

Peter Gathercole Silver badge

Re: PC Engineers...

When I was talking about ANFS, it was really the file block caching that it would do that I meant. That was effectively a change to the client that improved the entire experience. Reading a data file through the original NFS actually transferred the file one byte at a time, because NFS did not provide any memory pages for doing block reads (unlike DFS, which read whole sectors at a time). This made file reads and writes/updates very slow. I actually wrote a small piece of 6502 assembler that trapped the OSBGET and OSBPUT vectors and implemented a single-page buffer using OSGBPB to speed up file access for several BBC languages.

This speeded up the from-file compile time for Acornsoft Pascal by about 20 times, and also reduced network contention (making our BBC Micro lab a much more useful resource), as the 'net would slow to a crawl when transferring one byte at a time.
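
For anyone curious, the idea translates to something like the following C sketch - the 6502 original is long gone, and names like raw_block_read and buffered_get are mine, standing in for the OSGBPB block transfer and the trapped OSBGET call respectively:

```c
#include <stdio.h>

/* A C restatement of the single-page buffer idea. raw_block_read()
 * stands in for an OSGBPB-style whole-block transfer; buffered_get()
 * replaces the byte-at-a-time OSBGET call. All names are mine. */

#define PAGE 256                     /* one 6502-sized page */

static FILE *fp;                     /* stands in for the open network file */
static unsigned char buf[PAGE];
static size_t buf_len = 0;           /* valid bytes currently buffered */
static size_t buf_pos = 0;           /* next byte to hand back */

/* One transaction fetches a whole page instead of a single byte */
static size_t raw_block_read(unsigned char *dst, size_t n)
{
    return fread(dst, 1, n, fp);
}

/* Returns the next byte, or -1 at end of file */
static int buffered_get(void)
{
    if (buf_pos == buf_len) {        /* buffer exhausted: refill it */
        buf_len = raw_block_read(buf, PAGE);
        buf_pos = 0;
        if (buf_len == 0)
            return -1;
    }
    return buf[buf_pos++];
}

int main(void)
{
    int c;
    fp = fopen("example.pas", "rb"); /* hypothetical source file */
    if (!fp)
        return 1;
    while ((c = buffered_get()) != -1)
        putchar(c);
    fclose(fp);
    return 0;
}
```

The point is simply that one network transaction now fetches 256 bytes rather than one, which is where the compile-time gain came from.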

I am aware that there was not a one-to-one mapping between the filesystem the BBC used and the one actually running under the covers on the server. The Acorn Econet Level 3 server used ADFS under the covers, which matched reasonably closely what NFS and ANFS would do. What was under the covers in the SJ fileserver, I don't know, but as Acorn NFS was just so simple, it could have been anything provided it kept to the API. I think, though, that the file naming convention and directory structure were probably defined by the server, not the client, and were exposed to the client, so the server was not totally transparent to the NFS client.

Peter Gathercole Silver badge

Re: PC Engineers...

The main problem with early UNIX filesystems was that UNIX was already fully multi-tasking and multi-user, even in the '70s. This often meant that if you, for example, deleted a file so that its space was put back on the free list, it was possible for that space to be used immediately by another user creating or extending a file (I had to keep explaining this when users of CP/M and MS-DOS asked me to undelete a file they had just deleted when using UNIX). And as even relatively low-powered systems would support 8-16 users, if the system stopped unexpectedly without writing out the dirty disk blocks and inodes, it could be unclear which file an orphaned block (one that was neither on the free list nor in the blocks allocated to files) belonged to. Worse, a block could sometimes appear in the block lists of two files.
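
If it helps to see why undelete was a non-starter, here's a toy C model of a shared free list (entirely my own simplification - nothing like the real V6/V7 code): the moment one user's delete returns blocks to the list, another user's write can claim them.

```c
#include <stdio.h>

/* Toy model of a shared free list - my own simplification, nothing
 * like the real V6/V7 filesystem code. */

#define NBLOCKS 16

static int free_list[NBLOCKS];
static int n_free = 0;

static void free_block(int blk) { free_list[n_free++] = blk; }
static int  alloc_block(void)   { return n_free > 0 ? free_list[--n_free] : -1; }

int main(void)
{
    /* User A deletes a three-block file: its blocks rejoin the free list */
    free_block(101); free_block(102); free_block(103);
    printf("user A deleted blocks 101, 102, 103\n");

    /* Before A can ask for an undelete, user B extends a file and is
     * handed the same blocks, which are promptly overwritten */
    printf("user B was given block %d\n", alloc_block());
    printf("user B was given block %d\n", alloc_block());

    /* A's data is now partly gone; there is nothing left to undelete */
    printf("blocks still on the free list: %d\n", n_free);
    return 0;
}
```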

The tool to fix such a problem before fsck (fsck was first provided outside of AT&T on the V7 Addendum tape which was sent with later Edition 7 distributions for PDP-11s) was icheck, which was not interactive. You would get a list of problems, and then go through fixing them with rm and clri, and then have to run icheck again and rinse and repeat until you had sorted out all of the issues. And then you had to sort out which files you'd lost or damaged. (I made sure that I did most of this on a hard-copy terminal when I had to, so I had a record of what had been done)

The commands I referred to were icheck, which would check all the blocks listed against files and reconstruct the free list from the blocks explicitly not in files; dcheck, which would check whether all of the allocated inodes had appropriate directory entries and that the link counts were correct (useful for cleaning up pipe-files); and ncheck (not really a check), which would go through all the inodes and list the directory entries that referenced each inode, or report an orphaned file. Interestingly, ncheck still exists on later UNIXes like AIX, and is often useful for looking at a filesystem that may have directories and files hidden by over-mounted filesystems or point-to-point mounts, without unmounting the covering object. Icheck and dcheck were eventually completely replaced by fsck, but remained for several versions.

Some wag at AT&T created an exp-tool called ipatch (arrrrr!) to help with cleanup, which would allow you to change individual values of an inode without having to dive in with a full blown fsdb session.

These sorts of things just didn't happen with single-user OSes, although that was obviously not the case with the system you're talking about (I knew I recognised it, but I have to admit that I had to look it up before it clicked) - but that was a file server, so the circumstances were a little different. With BBC Micros running ordinary NFS or DNFS, file operations were quite often full loads and saves (although Econet did support opening files for read, write and even update), but if I remember correctly, most operations were whole-block reads or writes, and files were normally contiguous blocks on a disk (at least on the Level 3 Econet servers I had experience of - defragging the disk was a real pain!). I believe things were a little different with ANFS, but I had little experience of that.

Peter Gathercole Silver badge

If a system requires you to log in on the console as a specific account, I'd argue that it isn't a proper server at all - more like a workstation running a peer-to-peer application!

Man who nearly killed physical media returns with $60,000 vinyl turntable

Peter Gathercole Silver badge

Re: That's a decent enough home office setup..

I'd actually upgrade the entire Essential. I never did like the bare-bones design of that turntable!

Mind you, my upgraded Debut II is probably not a lot better (although it's sporting my vintage Ortofon VMS20E-II on a Debut-III arm with an add-on acrylic platter to compensate for the lower body height - sounds sublime)

Rather than looking for a Cambridge Audio amp, I'm still happy with a rebuilt NAD 7020, although I have a C-A CD player and DAC.

What I can't get over nowadays is how expensive Pro-Ject decks are. When originally bought, my Debut II cost £108 (I came across the receipt recently). Current Pro-Ject turntables seem to start at north of £350.

Looking at the Linn, I think I preferred the looks of the original squared-off corners. Rounded corners (yes, really) don't look right on a Sondek!