* Posts by Peter Gathercole

4215 publicly visible posts • joined 15 Jun 2007

Battle of the retro Unix desktops: NsCDE versus CDE

Peter Gathercole Silver badge

Re: Digital alchemy

Nope. Can't stand the things. Too itchy.

But when I've been lazy, and not shaved for a few days, it is quite grey.

Peter Gathercole Silver badge

Re: RAM usage @m4r35n357

Don't know what you were doing back in the mid-90s when CDE was the default, but I would say that the minimum for a UNIX workstation back in the day was 1024x768 at 8-bit colour, and the majority of the systems I was using at the time had 1280x1024 with at least 16-bit colour visuals.

The 640x480x8 may have been a common resolution for PCs, but not for UNIX workstations.

On the subject of where the memory went, it was not normal to have shared memory for the processor and graphics adapter. Anything running on a true workstation would probably have its own display adapter memory, which is often what limited the resolution/colour depth. So the lost memory was probably not down to the screen.

For the version using 800+MB, I would suggest that the author should look at the options used to compile the source. I'll bet that the binaries are not stripped, the diags. are all turned on by default, and the optimizer was not running aggressively enough.

In the mid '90s, we had CDE running to X-Stations, with about 12 per IBM RS/6000 320H, each of which only had 80MB of memory, with some remaining free for other processes. So CDE was really not that heavy.

Peter Gathercole Silver badge

Digital alchemy

"Its terminal emulator can't handle modern apps such as htop or the Tilde text editor".

This is a problem of lack of understanding of terminals and terminal emulators by the system administrator. Similar problems are faced when using PuTTY or terminal emulators on Linux systems to access proper legacy UNIX systems.

The issue is that not all terminal emulators are vt220, xterm et al. compatible, and in fact xterm is not a very safe setting for the TERM environment variable used to select the terminfo entry, as there have been just soooo many mostly compatible, but ultimately different, versions of "xterm" over the years.

I believe that the correct setting for TERM with the CDE terminal emulator should be "dtterm", but I suspect that many Linux systems do not have a dtterm terminfo entry, so people fall back to xterm, or xterm-256color, or something similar. This will almost certainly not match the capabilities of the dtterm terminal emulation.

The common problems are:

Function keys not being recognized correctly

Non-7-bit-ASCII characters not working correctly, especially box draw characters

Colour support being very spotty

Some cursor movement operations not working correctly

Many of these problems can be fixed in one fell swoop by identifying the location of the dtterm terminfo file, and making sure there is a copy in the appropriate place for the hosting OS that you are using (unless, like me, you add local additions to the terminfo database).
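
For example, something like this (a minimal sketch: it assumes the source system has infocmp from its curses package, and that ~/.terminfo is searched, as it is with ncurses):

    # On a system that has the dtterm entry (an AIX/CDE box, say):
    infocmp dtterm > dtterm.ti

    # Copy dtterm.ti across, then compile it into your private
    # terminfo database on the Linux box (no root needed):
    tic -o ~/.terminfo dtterm.ti

    # Then, inside the dtterm session:
    export TERM=dtterm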

The one that will possibly still cause a problem is the font that is used, as I'm pretty certain that you will have to have iso8859 fonts in your font path, rather than just UTF-8 ones, unless the version of dtterm in NsCDE has been altered for UTF-8.

I've just fired up an AIX 5.3 system, and installed an original version of CDE on it, and then run a dtterm via X11 onto a RHEL 8.6 system, and things work pretty much OK, although I don't have "-dt-interface user-medium-r-normal-m" at any size in my default font path.
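
You can check what the X server can actually see with the standard tools (the directory here is illustrative; font paths vary by distro):

    # List any CDE interface fonts known to the server:
    xlsfonts -fn '-dt-*'

    # Add a directory of iso8859 bitmap fonts to the font path:
    xset fp+ /usr/share/fonts/X11/misc
    xset fp rehash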

You cannot imagine how frequently I want to take a UNIX or Linux administrator who accepts wrong box draw characters or unrecognized function keys as something normal, and try to shake some knowledge into them, because it is nearly always user error, not a problem with the system! UNIX was written to allow *LOTS* of different terminal types to use the system correctly, and Linux mostly inherited the capabilities.

Things have become both simpler and, at the same time, more complex with the commodification of UNIX-like OSs, such that what was once well known appears almost like alchemy nowadays.

Chinese booster rocket tumbles back to Earth: 'Non-zero' chance of hitting populated area

Peter Gathercole Silver badge

Re: I would say ....

That's a very nominal point. Neither Bletchley nor Newport Pagnell was immediately subsumed into Milton Keynes. It was more properly built on land between Bletchley and Newport Pagnell, and has now grown out to meet them.

Basingstoke old town (not Old Basing) has now almost completely disappeared under the new(er) shopping centres. When I first visited it in the late 1960s, before they had finished building the original pedestrianized shopping centre (it always stank of chlorine because the swimming pool was right under the middle of the shopping centre), there were still a number of older buildings. There may be one or two left on Wote Street and London Street, but I doubt there is a lot.

Since the original shopping centre was built in the mid-1960s, the centre of Basingstoke has been re-developed at least twice, with the 'new' market square disappearing under Alders, which has itself long since disappeared.

Although I lived in Basingstoke for six years in the mid 1990s, when I visited it a few years ago, I barely recognised the place, and it had not got better!

Peter Gathercole Silver badge

Re: I would say ....

Umm. Couple of things. I know both Basingstoke and Milton Keynes quite well, and they are streets apart!

Firstly, Basingstoke is older. Most of the new town was built in the 1960's. Milton Keynes is a couple of decades newer, so if they were actually clones, Milton Keynes would be a copy of Basingstoke, but read on.

Secondly, Basingstoke is built as a series of rings, with inner and outer ring roads, and the town originally built in radial segments. Milton Keynes is built on a grid system, with north-south and east-west roads.

Thirdly, when Basingstoke was built, many buildings went up, whereas Milton Keynes was built to be very low profile (OK, there have been some taller buildings built more recently).

Lastly, Basingstoke was built around an older village as a London relocation town (think slum clearance), with a large number of the houses built as council houses, so if you find some Basingstoke 'locals' (not the original Hampshire residents, but those who moved in in the '60s and '70s), you will find various London dialects being very prevalent. Milton Keynes was a true green field 'New Town' with people moving there from all over the country because of jobs, and they ended up buying their houses, and it is thus much more cosmopolitan.

Since it was built, Basingstoke has got worse, with many of the green spaces that were originally left to make life a bit more palatable being filled in by new buildings, although they did tunnel under or fly over most of the roundabouts. Milton Keynes is actually quite pleasant in comparison.

Tavis Ormandy ports WordPerfect for UNIX to Linux

Peter Gathercole Silver badge

Re: Who needs it when ... ?

Au contraire.

With emacs, you could load electric nroff mode or one of the electric LaTeX modes, so you could format more than just plain text from directly in emacs.

People sometimes forget that the justification for the purchase of the first PDP-11s by the Bell Labs UNIX team (they started by borrowing a mainly unused system) was as a document processor for patent preparation. Not WordPerfect or Wordstar, but still high quality document preparation for its time. And Documenter's Workbench remained an important feature of UNIX well into the System V days.

Emacs was a bit later, of course.

I wrote my first year university project write-up using roff on UNIX edition 6.

Copper shortage keeps green energy, tech ventures grounded

Peter Gathercole Silver badge

Re: BT Cabling Home Phones

They've been pulling copper from under the streets for decades!

In the '80s, I worked for a telecom equipment supply company that was working with BT replacing the long-distance trunks with fibre. I was told by one of the marketing people that the scrap value of the copper recovered, even then, was enough to pay for the fibre, the equipment added to the exchanges, and the work of laying the fibre and recovering the copper, and still turn a profit.

In most places, it's only the so-called last mile that is still copper, and some of that will be aluminium.

But it's a diminishing return as you get closer to the last mile. I'm sure that once you get to FTTP provision, the cost of recovering the copper cable will not be worth it, at least not until all of the last-mile in an area has been replaced and they can just rip the whole lot out wholesale.

Peter Gathercole Silver badge

Re: That's a lot

My middle son works rebuilding steam locomotives, and they recently had a copper firebox custom made for a rebuild they were doing.

It was a huge lump of crafted copper weighing over half a tonne, which he said was worth tens of thousands of pounds (but that was probably the manufactured price), and was kept under strict security until they fitted it. Half a tonne of copper is a lot of metal!

Peter Gathercole Silver badge

The thing about gold is that up until about 50 years ago, gold was regarded as persistent, i.e. not actually consumed. You would always be able to melt down and reuse a very high percentage of the gold coinage and jewellery that it was used in.

Since we started putting it into electronic devices, it's become a consumable, very difficult to recover, partly because of it being a noble metal, but mainly because in each device, a very small amount is actually present, making recovery uneconomical. But the number of devices is huge, so the amount of gold consumed ends up being considerable.

I know that there are efforts being made to recover rare earth and precious metals from some electronic devices, but a huge amount will still end up in landfill.

Copper is actually easier to recycle. For things like cable with near pure copper in it, you just throw it into a furnace. The insulation burns away (and the fumes must be filtered, of course), but you end up with mostly molten copper that can then be re-refined. Things like copper-clad aluminium, steel-armoured copper cable, and devices where copper is integrated into the manufacture of the device (like a motor), as well as copper used in an alloy (like brass), are more challenging.

2050 carbon emission goals need nuclear to succeed, says International Energy Agency

Peter Gathercole Silver badge

Re: Evidence!! Here!!

Hmm. That link shows that they're building a demonstrator that will come into operation in 2027, which will do one pulse per day.

It really looks interesting, but I would guess that this technology is actually still a decade away from commercial power generation.

Peter Gathercole Silver badge

Re: This looks like it ends the 'need' for nukes

Well, it does until they ban burning anything other than 'ready-to-burn' wood in solid fuel stoves, because burning anything else will contribute to carbon emissions, is generally an inefficient way to generate heat, and gives no control over the chemicals that are emitted from burning the plastic and other materials that go into packaging.

There have already been noises about controlling wood burners, particularly in urban areas. See the UK Environment Bill 2020.

I don't like it myself, and I think it will impact a lot of people who rely on foraged wood in rural locations, especially as relying on electricity has been shown to be a bad thing over the recent storms, but such is the way of things.

Peter Gathercole Silver badge

As has been pointed out before, the amount of lithium in a lithium ion battery is small, and in a form which does not in itself burn. It's certainly not in the form of metallic lithium.

Some of the electrolytes do (BigClive used to enjoy pulling apart batteries in his YouTube videos, some of which went up in flames spontaneously), but the biggest danger in battery fires is the energy stored in the battery. It makes attempting to put the fire out with water difficult and dangerous, and can cause the water to electrolyse, liberating hydrogen and oxygen which will then recombine (burn) without any external air.

On top of this, under certain circumstances, it can actually be a shock risk to the people attempting to put the fire out.

The current ways of extinguishing a large battery fire appear to be to drown or bury it, or if you can't do either of those, cordon it off and let it burn out.

With tanked LPG or hydrogen, you may get a bit of a kaboom, but it will be over very quickly once it has dissipated (probably far more quickly than a petrol fire). And there are contained hydrogen storage methods and ways of using it to generate power without setting light to it (think fuel cells).

Graphical desktop system X Window just turned 38

Peter Gathercole Silver badge

Re: What I like about X

If you are rendering into pixmaps and transferring them to the server, X has been able to do partial updates of what has changed since, I don't know, X11R4 maybe, or possibly R3; basically forever, given the way that computing has changed.

X was always optimized to do this so that it could cope with expose events, backing store, save-unders and opaque window moves. It was all thought out to cope with many client programs from different remote systems on a single display, over networks that could carry a fraction of the capacity that modern networks carry.

The client program could be told by the server to redraw just part of its window, with the shape and size of the damaged area delivered in the expose event. The client could decide whether it could redraw just the part indicated, or whether it had to redraw the entire window. The X server kept track of which windows had been obscured and would need to be redrawn.

The X primitives allowed for a clip region which could be set to the boundaries of the expose event, which would minimize the amount of redrawing effort needed (things outside the clip region would be culled from the operations that needed to be done, without the client having to decide itself).

X comes from the days of 10Mb/s networking, and was written, even with local client pixmap rendering, to be very frugal in network use.

I always get the impression that the people who claim that X is arcane, inefficient and outdated are all from the Windows generation where the client program controlled the whole presentation of a window as the norm, without them actually understanding what they were asking to be removed.

The font model comes to mind. The X font model was quite well thought out, and actually allowed the client program to be very frugal about its memory use for mainly text-only operations. All it had to know was the metrics of the font, not the complete bitmaps or scalable model; the server sorted that out. But many client program writers disliked the fact that they were subservient to the X server for the available fonts, even though, with the X11 font server support and the later inclusion of spline-based scalable fonts, an application could add and use custom fonts quite easily. But they never bothered learning how to use them, and instead resorted to rendering the whole window themselves, relegating X to just a pixmap transfer protocol.
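
You can still see the server-side model from the command line (standard X11 utilities; "fixed" is just a convenient alias to query):

    # Ask the server, not the client, what it knows about a font;
    # -ll prints the properties and metrics a client would use:
    xlsfonts -ll -fn fixed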

If a pixmap transfer protocol is all you use it for, then pretty much any transfer method will work. But X was always much more than that!

Peter Gathercole Silver badge

Re: What I like about X

How is re-implementing the toolkits the same as re-writing everything?

The whole idea of re-writing just the toolkits is that it is a one-time operation per toolkit that will enable many applications to be run without further effort from the application owners.

Peter Gathercole Silver badge

Re: Baby and bathwater come to mind.

I think you need to watch this in order to understand what the developers of Wayland were thinking.

I'm not going to watch it again, because it really sums up the attitude of the developers, and every time I watch it, I feel my blood pressure start to rise.

Peter Gathercole Silver badge

Re: Baby and bathwater come to mind.

I ought to point out that using Web technologies to render graphical applications is also one of the items on my hate list, alongside Wayland, PulseAudio and, dare I say it, systemd.

It's come a long way from its origins, but browsers were meant to handle largely static, mainly one-way communication. Extending that into an application display framework always seemed like an ugly fudge to me, and I think the bloat and inefficiency in our modern web browsers, which are now probably the biggest resource hogs on any workstation, just shows how poor a decision that was.

Just imagine if X had been kept modern, with proper access control and encryption, while keeping the efficient graphics transport across the network; application rendering in web pages would not have been necessary. And I believe that we would all have been much better off.

Peter Gathercole Silver badge

Re: What I like about X

Well, I'm going to argue against form here.

Most applications do not deal with X directly now (and they really haven't since the early days of widget sets). They use various toolkits and graphics libraries that isolate them from the underlying rendering infrastructure.

Provided that you can do a complete re-implementation of the toolkits to use Wayland, in theory you don't need to change the applications. Just re-link the application objects with the new toolkits, and off you go. You may even be able to do this with dynamic linking at run-time, rather than compile time.
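
GTK, for one, already demonstrates this at run time: the same binary can pick its backend via an environment variable (a minimal illustration, assuming a GTK 3 application such as gedit with both backends compiled in):

    # The same binary, no recompile, different display protocols:
    GDK_BACKEND=x11 gedit       # render via X11 (or XWayland)
    GDK_BACKEND=wayland gedit   # render via Wayland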

I know this is an idealistic argument, and that some applications may still need to be a little knowledgeable about the way that X works, but for most desktop-type applications that is probably not the case, and other, more graphics-intensive operations use a variant of GL to render complex objects anyway.

I'm not arguing for Wayland here, just pointing out that it is possible.

Peter Gathercole Silver badge

Re: Secure network connection @steelpillow

Well, you've got the client/server bit the right way around, but generally speaking, a client on the remote system opens a session to the server, not the other way around.

Of course, you may be thinking of starting applications from the window manager, but it is possible to run the window manager itself on the remote system (as is often the case if you are running an X-terminal or an X Station). It's even possible to run a remote window manager in a managed window inside your local X server (as a nested window manager).

X does not have a mechanism for the X server itself to open a new session. A client can (and the window manager is just a client, after all), and that other client has to have a mechanism outside of X to do some form of RPC if it runs on a different machine. The X11 protocol is just a transport, not an RPC mechanism.

When I was working in a purely X environment, it was normal for a menu item in the local window manager to run a command on a remote machine using SSH, or some kerberised RPC, and fire up a client session on the remote system back to the local X server. It's as secure as the RPC tool you use. I presume that in your scenario, the admin had a secure method of getting from their workstation to the remote server as well.

X Windows was written in a more trusting age, and there were glaring security holes. Xauth and cookie support were added to get some degree of access security, but for years now, people have tended to run X11 sessions through SSH tunnels, so the X protocol is encrypted across the network and SSH performs the authentication.
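
The usual incantation (assuming X11Forwarding is enabled in the remote sshd_config; user@remotehost is obviously illustrative):

    # -X tunnels X11: sshd sets DISPLAY to a proxy display on the
    # remote end, and the X traffic rides the encrypted channel
    ssh -X user@remotehost
    # ...then, in that remote shell:
    xclock &    # appears on your local X server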

It would have been possible to extend cookie support to include better cryptographic authentication, and an encryption layer in the X protocol itself would have been pretty easy to add, but it wasn't, presumably because SSH tunnels seemed good enough.

After all, it fitted into the UNIX way quite well.

Peter Gathercole Silver badge

Re: "Wayland's privacy controls..."

I think it means that you can't just plug in between the application and the display, as you easily can in X11 (reparenting just one window, or even the whole screen by inserting between the display manager and the root window), although this always was a bit of a security issue if done incorrectly.

This might make screen sharing a little more difficult, but any application will still be able to record a session within the application itself; that won't change.
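
By way of contrast, under X11 the interception the Wayland developers worry about takes one stock command (xwd ships with the standard X utilities):

    # Any client with access to the display can dump the whole screen:
    xwd -root -out screen.xwd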

Peter Gathercole Silver badge

Baby and bathwater come to mind.

"One of the central functions of X is that it works over a network connection, something that Wayland by design does not do"

This is THE deal-breaker for me. Beyond just using a local display, this is the core X11 function that I use, and I use it quite frequently (last weekend, I actually had VirtualBox running on a remote system, using X to display it where I happened to be). It still works reasonably well, but some applications wanting to talk to the display via D-Bus or one of the other in-system communication methods do not work (so why require it in the applications?)

XWayland and the various types of VNC or RDP do work, but I find them obnoxious, jerky and prone to blocking, and as they are basically remote control applications for the console of the remote box, they are just a broken idea for multi-user systems. I've not tried waypipe, but if it works like SSH tunnelling of X, it may work OK. IMHO, for everything I do, X11 works far better, and can cope with different display resolutions on server and client (whichever way round you want to see this). I also work with non-Intel legacy systems, and for those X11 is still a must.

This really is a case of throwing the baby out with the bath water.

Back in the last century, I worked in an environment where we had powerful centralized servers with IBM X Stations on people's desks. It worked really well for the majority of what needed to be done, and passably well for most of the other tasks, with the exception of running Windows applications (which was not a huge issue at the time). It was easy to administer (just a handful of servers serving a community of 100+ workstations), and meant that people could pretty much work on any desk without having to move any hardware around.

I regard the move to desktop Windows workstations and now laptops as a really regressive move.

Google, Oracle cloud servers wilt in UK heatwave, take down websites

Peter Gathercole Silver badge

Re: "As a result of unseasonal temperatures in the region"

It's actually worse than just them trimming to the lowest spec: their pricing is set so that it is, on the surface, only just cheaper than on-prem provision (and even then, I think that a number of organizations are eyeing their cloud bills with some consternation, trying to identify the savings they were promised).

Once you factor in designing the service to be multi-zone, the affordability equation changes quite markedly.

I know. You should be making services site-failure tolerant anyway, but if you control everything from the building and plant to the infrastructure, you have a better way of ensuring that you adequately spec the installation.

What was the Azure story a few weeks back? MS had a problem in one region where they could keep existing load running, but could not spin up new images that were not already executing. What would happen if you were multi-zone, or even multi-cloud, and your fallback resided in that affected zone? Think you could spin up your backup zone with no resource?

Service designers have been lured into thinking that there is always spare capacity in the cloud. Recent events seem to suggest that this is not the case, and the cloud is actually finite! Whodathunkit!

The cloud is just another person's computer (and one where you have no say in how it's installed).

Peter Gathercole Silver badge

Re: The lost art of conversation

No need to be crude!

NASA's CAPSTONE silence down to a software flaw

Peter Gathercole Silver badge

Re: Whats Happens If

You end up putting in mutually checking fault systems. Preferably an odd number of them, greater than one.

Peter Gathercole Silver badge

Re: How to write space software

I always worked on three plans. Plan A, contingency plan B where there was a chance that the work could still be completed with some additional steps, and the back-out plan.

Of course, where I had a problem, there was always a conflict between plan B and the back-out plan. Where you have a time-critical service, the service managers don't really like using the contingency plan if it eats into the time necessary to restore the system to its condition before the work.

This is not quite so easy when your asset is in a remote (in this case, a really remote) location.

Russian Debian-derivative Linux slinger plans IPO

Peter Gathercole Silver badge

Re: That Explains Things...

The Second World War was defined by technological innovation. Think through the following:

ASDIC/Sonar

Radar

Aircraft performance (compare the Spitfire 1B with the versions that were flying later in the war)

Guided munitions (V1, V2 etc.)

Bomb technology (Tallboy, Grand Slam) with the associated advances in the aircraft to carry them

Precision bombing aids

and, as the final clincher

The atomic bombs.

Modern warfare is defined by technology. The recent surprise has been that Ukraine has been able to stand toe-to-toe with what was thought to be one of the most advanced military machines in the world. Either the Russians have been holding back, or their forces were hugely overestimated, by everyone including themselves.

Of course, there is much to be said for too much complexity in weapons. There are many stories of advanced systems that just do not work in the field, so it is a balance between cost and complexity and effectiveness.

Choosing a non-Windows OS on Lenovo Secured-core PCs is trickier than it should be

Peter Gathercole Silver badge

The problem is that Microsoft can in theory restrict or refuse support for Windows on a system with Linux also installed in a dual boot configuration.

Not a problem for me, I've been mainly Microsoft-free for years, but it would be for anybody who still needs a dual boot environment.

Systemd supremo Lennart Poettering leaves Red Hat for Microsoft

Peter Gathercole Silver badge

Re: one step forward

No. EulerOS is a branded UNIXtm, not GNU/Linux in general.

It doesn't run the other way, unless Huawei made no changes to their Linux distro.

There's nothing on Huawei's home page that explains whether they did anything to get it through UNIX 03 branding, but I suspect that they had to do something: probably a number of possibly non-open-source additions that provide the parts of the verification suite that standard GNU/Linux doesn't pass.

If the Linux kernel plus the standard toolset passed UNIX 03, then I pretty much guarantee that Red Hat, SUSE et al. would have paid for the verification tests.

As the other vendors never have, I suspect that Huawei did some tweaking before submitting to the test. Does anybody know someone who is using EulerOS to ask for the source code?

Peter Gathercole Silver badge

Re: one step forward

Actually, I think that z/OS is less UNIXtm than all of the others.

IBM’s first cloudy mainframes scheduled to launch this week

Peter Gathercole Silver badge

Re: Alien Chit Chat/NEUKlearer HyperRadioProActive IT Chatter which is neither Chaff nor Fluff

Wow. That took even more decoding than the normal amfM1 post. I'm glad he included the reference. It's very deep.

Has the AI been upgraded to include cultural references?

Original Acorn Arthur project lead explains RISC OS genesis

Peter Gathercole Silver badge

Re: RISCiX

The two things that RiscIX needed that the A310 did not have were a hard disk and more memory. I think it had all of the rest of the required facilities (I don't think that the Ethernet was essential).

These were things that most UNIX systems needed, although the amount of memory required would have been more on a RISC processor than a CISC. At this time, MC680X0 based UNIX workstations would probably have had 1-2MB.

It would not surprise me if, using a SCSI podule and disk and aftermarket memory expansion boards (the A310 had space for two podules, I think, as long as the riser was fitted, so Ethernet could have been provided as well), it could have been made to run, but it's unlikely anybody would try in this day and age.

Totaled Tesla goes up in flames three weeks after crash

Peter Gathercole Silver badge

Re: Deja vu again @nobody

And they will then want to pass the costs on and charge a disposal fee for scrapping vehicles.

Scrap yards are businesses. The scrap value of the materials must exceed the costs of breaking them for the businesses to be viable. If the value of the scrap materials does not, they will want to charge for the disposal of the vehicle.

And if you can't afford to scrap a car, I can see a lot of accidental vehicle fires at the end of many vehicles' lives.

Record players make comeback with Ikea, others pitching tricked-out turntables

Peter Gathercole Silver badge

Re: Stop wasting you’re time! @Fender strat

Unfortunately, I can't actually afford the Les Paul Custom I would like, but I do, like you, have a Columbus knock-off of one.

It's really a bit of a decoration now, as I am mainly a classical player when I do play, but I did have a bit of a go at improving it as best I could, as something to do. You can't really put lipstick on that pig, though (it has a ply body with a moulded, not carved, top, and a bolt-on neck).

I stopped when I realized that I would have to reposition the bridge to be able to get it to tune properly (on later model Les Pauls, Gibson angled the Tune-o-matic bridge to effectively lengthen the lower strings, which would then allow the correct fine adjustment using the bridge adjusters), but this knock-off has a bridge parallel to the nut. It plays OK, but it's not great, and the bottom E is never perfectly in tune at the higher frets. Lighter strings and reducing the action helped, but it just won't tune perfectly.

When I was at college, I played a 12-string in a band where the main guitarist played a Columbus Les Paul Custom, the same model (but not the same one, unless by incredible coincidence), which suffered the same problem. I understood guitar set-up less well then, but now I know what the problem was.

Peter Gathercole Silver badge

Re: That vinyl sound

If you can learn to listen for the sonic artifacts, then that means that they are there, and the music is not the same.

Seems to me you've argued against yourself there.

Peter Gathercole Silver badge

Re: It gets more fun... @AC

I can tell where you're coming from. I've seen so many sets of speakers where the suspension of the drivers has rotted away over time, and now the cones are almost free-floating, only held in place by the secondary suspension close to the voice coil.

But...

I have three pairs of vintage speakers: Keesonic Kubs, Mission 760s and Wharfedale Diamonds (not sure which versions, but quite early ones), and none of these have physically damaged cones or suspension, and I've previously had others as well. The Keesonics I've owned from new (bought 1979), and they still sound good to my ears, even compared to the more modern speakers that I've bought and since got rid of.

I've not tested the rigidity of the cones, but they still seem pretty stiff.

Not all power supplies in vintage amps suffer the power problems you talk about. Many of them (such as my NAD) were designed to be able to supply high currents in bursts to allow for low-impedance speakers, and anyway, I never listen at antisocial volumes that challenge even the modest 20 watts RMS the amp's rated at. I wonder whether modern switch-mode power supplies, although on paper technically better, can actually do the same.

It's also been re-capped.

The effect of time on semiconductors seems to generate much discussion, but from what I read, provided the transistors are not run at the edges of their thermal envelope, they should perform pretty well for several decades. It actually seems that modern devices may degrade faster, because of the tight integration and packaging, and because they are run much closer to their limits now that we 'understand' the material properties better.

So don't discard the best of the vintage stuff. It still can be good.

Peter Gathercole Silver badge

Re: Digital transmission?

Damned auto-correct. Rumble is inaudible, i.e. can't be heard!

Peter Gathercole Silver badge

@Old69

Techmoan on Youtube has reviews of several linear tracking turntables.

Peter Gathercole Silver badge

Re: Stop wasting you’re time! @Fender strat

With the deliberate intention of starting a completely different flame war, would that be an early '60s Strat, one of the absolutely terrible ones from the '70s, or one of the posy signature editions that they like turning out now?

And is a US-made one better than a Mexican one? Or how about a Japanese one vs. a Chinese one (all made by Fender)?

My vote would be for a Gibson Les Paul Custom from the late '50s.

Peter Gathercole Silver badge

Re: Digital transmission?

This is a very nuanced area.

I have 45-year-old vinyl that has been kept reasonably well, and it still sounds fantastic.

My HiFi is fairly vintage, and all pretty budget (but good budget), although components and parts have been replaced over the years.

The turntable started life as a Pro-Ject Debut II in about 2001, but has had a new motor suspension which seriously reduced the rumble (made it almost inalienable at normal volumes), has had the arm from a Debut III fitted, and the cartridge replaced with my vintage Ortofon VMS20E cartridge (unfortunately using a quality aftermarket stylus, as the original is no longer made). The most recent change was replacing the heavy rubber mat I used with 6mm of acrylic disk, which has made a huge difference to the accuracy of the bass.

For CD, I used to use a Technics CD changer, and at that time, I felt that vinyl copies of the same album were clearer than the CD. But I discovered two things. When I replaced the CD player with a vintage Marantz one, with a Cambridge Audio external DAC, the sound playback from the CDs jumped in quality using the external DAC, even compared to the built-in DAC of the Marantz. The other thing is that modern CD pressings, especially 'remastered' ones, often sound terrible compared to the vintage CD pressings. And many modern albums sound really bad as well, mainly because the levels are set so high that they sometimes clip, and they rarely use the full, much greater dynamic range of CD (everything is loud, nothing is quiet).

Yes, vinyl is a flawed medium. Yes, the quality of the turntable is important, and the cartridge and stylus even more so (finer styli sit more deeply in the groove, and are more immune to surface scratches, but suffer more from debris in the bottom of the groove). Yes, badly kept vinyl suffers from dirt and damage. Yes, the dynamic range of vinyl is lower than CD's. Yes, there is distortion caused by the non-linear path of pivoted tone arms. Yes, the tracking speed of vinyl changes from the outer to the inner grooves.

But even given all of this, vinyl can still sound superb; many respected-brand CD players can mangle the music, and modern audio engineering and production can misuse the supposedly better capabilities of CD just as badly.

BTW, I bought one of the cheap (£89) Dual manual turntables from Lidl a while back, just to see what it was like (it's not the Dual of old, however, merely a badged Chinese TT, which is available under several names), and I was pleasantly surprised with the quality of playback, especially when the felt mat was replaced by my heavy rubber one. The original cartridge was quite good (an Audio Technica AT3600L, which has had very good reviews for a rock-bottom budget cartridge), and it also takes better cartridges quite well. Only the poor initial set-up of the arm and the built-in phono pre-amp let it down. Turning the pre-amp off and feeding it into the phono input of my NAD amp cured nearly all of the problems from the pre-amp. This shows that there is still a cheap way into vinyl.

Bipolar transistors made from organic materials for the first time

Peter Gathercole Silver badge

Re: Gatekeeping @My-Handle

When I first read your use of 'scale', I was thinking volume production.

But it is really size that is the issue.

It is possible to create single devices (transistor or diode) using technology that is available in the home, after all, the labs that originally created transistors weren't that much more sophisticated than a modern day school physics lab.

But these devices will be slow, unreliable, and very large, just as they were when they were first made.

Even using the scale of integration that went into 7400 series TTL (commonly used chips from the '60s, and still used in some forms today) would be beyond all but the most dedicated home enthusiast. The requirements to mask, dope and sputter the gates in silicon, even at the scale of original TTL, require you to work on die sizes a few millimetres across. It's not impossible, but well beyond most home enthusiasts. And these devices typically have a couple of dozen transistors, and a very simple design with only a few layers.

When I was studying electronics at university in the early '80s, we had the facilities to build chips in the semiconductor fabrication lab there, but the process was quite involved, using optical lithography, and could only build devices with a few thousand gates. We undergraduates did not get access unless we did a relevant final year project.

Now look at more complicated chips, with millions of gates, and the area you need to use to build the gates either becomes way too big to handle conveniently, with the associated signal propagation speed, power and heat problems, or you have to start working at a much smaller scale.

Even if we were able to print organic semiconductors, we would need something much better than your average ink-jet printer to deposit the materials, although that is what they do for large-panel OLED screens nowadays, if I understand the technology correctly.

Know the difference between a bin and /bin unless you want a new doorstop

Peter Gathercole Silver badge

Re: tar(macadam) joy

I don't believe this. Tar will overwrite files that match a file in the archive, but won't remove any files that aren't in it.

If you back up an empty directory, on restore, it will check whether the directory named in the archive exists, and if it does, it may change the permissions back to what is in the archive, but it won't delete it.

If you used rsync, things may have been different, but tar? No.
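
Easy enough to demonstrate with a throwaway test (any common tar should behave this way):

    mkdir demo && touch demo/a
    tar -cf demo.tar demo    # archive holds demo/ and demo/a
    touch demo/b             # b is not in the archive
    tar -xf demo.tar         # overwrites a with the archived copy...
    ls demo                  # ...but b is still there: a  b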

Peter Gathercole Silver badge

@stiine

And what do you do about case sensitivity?

I once had to stage an AIX service pack through a Windows server (using Windows as your gateway system is a REAL PAIN), and it squashed the case on the file names, such that when they arrived on the destination AIX system, not only did the files have the wrong character case in the name and wouldn't install, but two files differed only in the case of some of the characters, and one file over-wrote the other!

Nowadays, I tar or pax the files together, bsplit (or split -b) the resultant huge files (did I say that I use Linux on my workstations?), and send them through like that (as long as you make the files a multiple of 4 characters long - some Windows comms packages [like the Windows FTP server] silently pad files out to the nearest 4 characters - did I say using Windows as your gateway system was a pain?), and then re-assemble the files at the other end and unpack.
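
Roughly like this (a sketch: chunk size and file names are purely illustrative):

    # Sending side:
    tar -cf fixes.tar updates/       # or: pax -wf fixes.tar updates/
    split -b 1048576 fixes.tar fixes.part.   # fixes.part.aa, .ab, ...

    # ...push the fixes.part.* files through the gateway, binary mode...

    # Receiving side:
    cat fixes.part.* > fixes.tar
    tar -xf fixes.tar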

Now some of you may say "Why are UNIX-like OSs case sensitive?", to which I reply: why is Windows not? In my view, having case-insensitive file names is the real limitation, not the other way round. (I had this discussion with D. Evnulll, the pseudonym of the writer of the Hands On UNIX column in Personal Computer World magazine years ago, and he really showed his colours as a migrant from another OS.)

Peter Gathercole Silver badge

Re: We can do better than 8.3 these days, can't we?

As Adrian said above, the slots in the directory file were 8 characters and 3 characters, and the dot was not stored but implied by the OS.

If you look at that other commonly used operating system of the time, UNIX, you could have extensions on file names, but there was nothing special about the extension; it was just part of the file name, and the dot was just another character in it. It was normal to use things like .c, .o etc., and you could have more than one dot as well (like the directory link ".."). Only a small subset of ASCII characters, including "/", were not allowed to be part of a name. Spaces, punctuation, non-printing characters et al. could be in file names, and have been causing problems for UNIX users for years, but I would not have changed it. One learned about the "-b" flag in ls.

IIRC, the UNIX filesystem shipped with Edition 7 of UNIX only allowed 14 characters in a file or directory name (it was to do with an entry in a directory file being limited to 16 bytes, with two bytes used for the 16-bit inode number). That changed sometime before UFS on System V, but I can't remember when.

Filesystems based on the Berkeley Fast Filesystem also had the 14-character limitation lifted.

To check, I would have to dig through the code on tuhs, but I can't be bothered at the moment.

Peter Gathercole Silver badge

Re: We can do better than 8.3 these days, can't we?

Actually, as the dot was implied on MS-DOS and DEC Files-11 filesystems, it only took 11 characters.

Peter Gathercole Silver badge

Re: Surprised that you weren't commenting on the limey use of "the boot"

In the context of computers, boot is a contraction of bootstrapping, which itself comes from "pulling itself up by its bootstraps".

It's basically a chain of small programs, each loading a larger one, ending with the operating system itself.

I suspect, but I don't know for certain, that the term boot for a car actually originates in the term caboose, which was a storage compartment at the back of a horse-drawn carriage or coach (as well as other things).

I'm prepared to be shouted down about that.

But why trunk? That is a piece of luggage, an elephant's nose, or, when paired, worn by men as swimming apparel.

Atos, UK government reach settlement on $1 billion Met Office supercomputer dispute

Peter Gathercole Silver badge

Re: Weather Forecasting

Back in the '80s, ECMWF used IBM mainframes as the systems to collect the data. In 1978, I was at a presentation where I was told that NUMAC's IBM 370/168 was the same as the system they used as a front-end processor (FEP) to the Cray-1 at ECMWF.

For the seven years I worked at the Met Office in Exeter, they upgraded their mainframes from a sysplexed z990 to a sysplexed z196 to a cluster of two z14s.

They use mainframes as data gathering and forecast distribution systems, not because of the computing power, but because they don't crash.

During a critical power problem while I was there, we had to do a load-shedding operation to prolong the critical environment as the UPS ran out of power (I can't remember why the diesel generators wouldn't start up, but we were just on the UPS). Surprisingly, the supercomputers were the first to be shut down, in the sequence test, collaboration, non-forecasting and then the forecasting computer (they used to keep two systems, one running the live forecasts, and the other able to pick up the live forecast, but normally running research work - it's a bit different now since they installed the current Crays). The last system to be shut down was to be the mainframe.

The reason for keeping the mainframe, storage and comms systems running was that some data cannot be gathered later if it is missed.

Concerns that £360m data platform for NHS England is being set up to fail

Peter Gathercole Silver badge

Re: DIY

Love your references to ITIL and PRINCE2.

Very good set of practices, very often paid lip service to, even though they were 'required' for government projects.

And I believe that both have been around since 'the mid '90s' that you use as a break point between good and not-so-good.

NATS is an interesting example. It's not fair to say that the system in use at that time could not be replaced; a like-for-like replacement could have been done quite easily. But the scope of the new system was so much greater than the original, and this is often the case with replacement projects. This, the fact that the integration problems of the US-based software with the different areas of ATC in the UK were more extensive than originally expected, plus the issue of the contract being moved from one contractor to another twice during the implementation, all contributed to the late delivery.

Meteoroid hits main mirror on James Webb Space Telescope

Peter Gathercole Silver badge

Re: Disappointing

Given the speed and energies they're talking about, I really doubt that a shield would have been particularly effective. I'm sure that this was less of an impact, and more of a through-and-through, with the meteoroid passing straight through the mirror.

I'm wondering whether the L2 point will actually be busier than open space. After all, if it's a point of gravitational neutrality, will debris have gathered there since the formation of the Earth and Moon?

If there is more debris, maybe the life of this telescope will be short.

I love the Linux desktop, but that doesn't mean I don't see its problems all too well

Peter Gathercole Silver badge

Re: Choosing to choose

It really depends on how you define a 'workstation'.

It began to be used as a term to describe the so-called 3M systems defined by CMU, and became applied to Sun systems, which had large graphical screens, local disk and significant processing power. But even in the '80s, they weren't all UNIX. The Xerox Star systems weren't, and some of the ones from US educational establishments were non-UNIX workstations based around Lisp and Smalltalk.

Even the Apollo workstations did not run UNIX, although DomainOS was rather UNIX-like.

DEC's VAXstations appeared towards the end of the '80s, but if you discount the more powerful IBM PCs, IBM's main workstation offering was actually UNIX based, the 6150 running AIX.

Most of the other workstation vendors, like Whitechapel, SGI, Torch, NeXT, Evans & Sutherland, and others, have disappeared from memory, followed by DEC; Sun and HP are, or soon will be, ex-workstation manufacturers.

Peter Gathercole Silver badge

Snaps not (yet) essential.

Currently, you can still turn off Snap, at least up to about 20.04, and download the normal debs, but that will persist only as long as Canonical keep the debs in the repositories.

I dislike snaps intensely, mainly because I would like the output from "mount" to fit on one screen, and although I would accept them for niche or specialist application software, I can't abide them being used for packages that run the system, like GNOME or Firefox.
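
In the meantime, the noise is at least easy to filter (plain shell; findmnt comes with util-linux, and the filesystem types listed are just examples):

    # Hide the snap squashfs loop mounts:
    mount | grep -v squashfs

    # Or only list the filesystem types you care about:
    findmnt -t ext4,vfat,tmpfs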

I know what they're for. I know that it makes it easier for software vendors to ship packages for a smaller number of target systems, and I applaud that facility for non-system packages, so Linux has another opportunity to pick up paid-for applications that may make it more acceptable for end users.

But for parts of the distro itself, they are completely inappropriate.