* Posts by Peter Gathercole

2924 posts • joined 15 Jun 2007

Oh, SSH, IT please see this: Malicious servers can fsck with your PC's files during scp slurps

Peter Gathercole Silver badge

Re: When your whole backup solution is centered around SCP transfers...

Thanks for clearing that up. I know now.

I was thinking that rsync was a little older than it appears to be (the original release was dated 1996, now that I look). That means that I was using it very soon after it was released.

Peter Gathercole Silver badge

Re: When your whole backup solution is centered around SCP transfers...

Hmm. Not totally sure about this, but I think that rsync is layered on a transport that you specify; this used to be rsh/rcp in the old trusting days of computing, and is normally replaced by ssh/scp.

I don't know if it uses ssh as a raw transport, and does all of its file handling itself, or whether it hands things off to scp. I'm sure someone here knows.

So it may be that rsync is just as compromised!

Bish, Bash... gosh! Good ol' Bourne Again Shell takes a bow as it reaches version five-point-zero

Peter Gathercole Silver badge

Re: Bourne Again Shell (Bash – geddit?) @tfb

I was using SVR2 when I first came across ksh (ksh85, I think it was) in 1987. It was available as an exptool for AT&T related companies.

ksh88 became accepted as part of R&D Unix for AT&T companies (and I think it was purchasable commercially for 3B2 systems), and shipped as a standard shell in SVR4 in 1989 (and thus available in SunOS 4). I think it made its way into a lot of AT&T-licensed and derived UNIX systems including AIX 3.1 onwards. I remember that I also came across it on a DGUX box and Digital UNIX systems.

It became the basis for the POSIX shell (which was effectively ksh88 with a few tweaks), and I think that Bash was written to be a POSIXly compliant shell, which makes ksh a direct ancestor of Bash.

Oh, and in answer to csh, what planet are you from? I know that some weird people using BSD used csh (but many people on BSD didn't), but really it bore almost no relation to the Bourne shell. In my experience, most people only really used it because of the history features. And most of them only used it as an interactive shell, and wrote most of their scripts in the Bourne shell, even though csh was intended as a programming shell.

I'm not saying that it could not be used, but when I compiled it up on a Bell Labs. V7 UNIX box from a BSD 2.3 tape in something like 1982, it felt so foreign and wrong that I quickly went back to the normal shell.

Peter Gathercole Silver badge

Re: Bourne Again Shell (Bash – geddit?)

Probably the case that Bash was not the successor to the Bourne Shell, but to ksh, the Korn Shell, but the naming sort of still fits.

I think it would be an interesting exercise for many of the commenters here to actually try to use the original Bourne shell as their main shell for a period of time, so that they could appreciate how much of a step up ksh, the POSIX shell and bash actually are.

And for those who have hair-shirt tendencies, the Bell Labs. UNIX version/edition 6 shell (if you can format it - it appears it's not compatible with the groff version of the man macros installed on this Redhat system) is a real eye-opener!

Begone, Demon Internet: Vodafone to shutter old-school pioneer ISP

Peter Gathercole Silver badge

Re: Historical accuracy

Where I was working, we had a commercial arrangement with EUnetGB when it was part of the University of Kent.

I thought that EUnetGB was bought by PIPEX sometime in 1992 or thereabouts.

I was running a leased line to Canterbury at the time, and had to re-work the CISCO AGS router configuration one evening during a managed transition to ensure continued Internet access.

Peter Gathercole Silver badge

Re: netcom.co.uk

I managed to get PPP working with Redhat (original Redhat, not RHEL) with the information that Virgin.net provided back in 1997. I did not think that it was too difficult (I also managed to get it working with Breathe without much difficulty). I managed to set Linux up as a router as well, to share the connection with other systems, and even managed to get Internet Connection Sharing working in Windows (95 I think it was).

Neither of them provided specific instructions for Linux, but IIRC it was perfectly possible with the information that they provided for Windows.

I also managed to get Dial-on-Demand working with Smoothwall (a dedicated Linux firewall) a little later, providing protected network access on demand to the proto home network that connected the systems in my home office together. Seamlessly transitioned to ADSL as soon as it was available in my area without having to re-work any of the individual PCs other than the Smoothwall when the change happened.
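The Linux-as-router setup described above boils down to two knobs, forwarding plus NAT. A sketch with iptables (the interface names ppp0/eth0 are assumptions, and it needs root, so treat it as a config fragment rather than something to paste blindly):

```shell
# Enable IPv4 forwarding so the box will route between interfaces
sysctl -w net.ipv4.ip_forward=1

# Masquerade LAN traffic (eth0 here) out of the dial-up/ADSL side (ppp0)
iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE
iptables -A FORWARD -i eth0 -o ppp0 -j ACCEPT
iptables -A FORWARD -i ppp0 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT
```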

All those memories...

Google Play Store spews malware onto 9 million 'Droids

Peter Gathercole Silver badge

Re: Do phones still have an IR port?

It's a flaw in the review systems. They should all have separate ratings for not only the quality of the item purchased but also the customer service. This would allow someone to grade it as "1" for the item, but give a "5" for the way the seller responded to the problem.

Excuse me, sir. You can't store your things there. Those 7 gigabytes are reserved for Windows 10

Peter Gathercole Silver badge

Re: WinSXS

I'm not sure how Windows does it, but on UNIX and Linux with most of the common filesystem types (ufs, extX etc.), the system cannot tell the difference between the original file and a hard link to it. In fact, there is no difference: both directory entries point to the same i-node in the same way, and the system does not even record which link was made first, as the dates on a file are stored in the i-node, not the directory entry.

The only thing that anything can tell is the number of hardlinks to a file.

It can also be difficult to identify where the other (hard) links actually are in the filesystem without doing a file tree walk. Sometimes ncheck (if it is installed) can be used, but this utility, dating back to ancient UNIX, is often not installed on a system (or may not even be available for it).

Of course, Windows may do it differently (as do some of the more advanced filesystem types on *nix), I just don't know as I don't really follow Windows that much.
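A quick illustration of the point on Linux (assuming GNU coreutils and findutils; the filenames are made up):

```shell
# Create a file and a second hard link to it
echo "data" > original.txt
ln original.txt link.txt

# Both directory entries point to the same i-node...
ls -li original.txt link.txt     # identical inode numbers

# ...and the link count is the only thing the i-node records
stat -c 'links=%h inode=%i' original.txt

# Finding the other names still means a tree walk, e.g. with GNU find:
find . -samefile original.txt    # or: find . -inum "$(stat -c %i original.txt)"
```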

Peter Gathercole Silver badge

Re: 32GB HP Monstruosities @Dave

The original Atoms were a bit pants, but the current 64-bit ones not so much.

Intel appears to have changed the meaning of the Atom range since first introduction. Initially, they were processors intended to be soldered onto a board (rather than in a socket), but they still needed external logic to create a system.

Recently, Atom appears to be used as a branding for SoCs.

The most recent generations of Atom use the same processor architecture as Celeron and Pentium Silver processors, and there are ranges of clock speeds and capabilities available in each family.

In terms of what they can do, a lot will depend on what you want them to do. They will never be good systems for processor-intensive operations, but for something that needs a low-power processor with moderate performance, they are quite capable. I had a laptop with an Atom x7-E3950 in it, and was very surprised by the speed of the system compared to my (admittedly aging) 3rd gen i5 Thinkpad.

Peter Gathercole Silver badge

Re: 32GB HP Monstruosities @Dave

You are aware, of course, that Intel have reused the Pentium name for the low grade processors that would previously have been called "Celeron", aren't you?

But you ought to also be aware that the 4 core Atom-X 64 bit processors can be really punchy little things, capable of doing a lot of work.

Techie basks in praise for restoring workforce email (by stopping his scripting sh!tshow)

Peter Gathercole Silver badge

Re: I learnt to test my WHERE clauses on a DELETE with a SELECT first @GrumpenKraut

No. You will find that -print0 is not in the SVID version of find. There is a typo, though, as I missed out the filename!

I take your point about -print0: whitespace is indeed allowed in filenames in pretty much all UNIX-like filesystems.

If I ruled the world, the set of characters that would be illegal in filenames would be much larger! I really hate when filesystems are shared between UNIX-like OS's and Windows (Samba, NFS etc.), and I see filenames created by Windows including spaces, asterisks and other characters. And that is not to mention when different multi-byte character encoding systems were used, prior to UTF8 becoming dominant.

But then, many people think I'm far too old to make relevant comment about modern computing.
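To illustrate the -print0 point conceded above: GNU find's NUL-terminated output exists precisely because of those whitespace-laden names (a sketch assuming GNU find and xargs; the filenames are invented):

```shell
mkdir -p demo
touch "demo/plain.tmp" "demo/with space.tmp"

# Plain -print | xargs would split "with space.tmp" into two bogus
# arguments:
#   find demo -name '*.tmp' -print | xargs rm     # breaks on the space

# NUL-terminated names survive whitespace intact:
find demo -name '*.tmp' -print0 | xargs -0 rm
ls demo
```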

Peter Gathercole Silver badge

Re: I learnt to test my WHERE clauses on a DELETE with a SELECT first

Grrr. Creeping featurism!

The 'correct' use should be:

find <dir> -name -exec rm {} \;

(the starting directory should be required, I don't know when GNU made it optional). I admit that it spawns more processes, but if you're really worried about this, then:

find <dir> -name -print | xargs rm

These work on pretty much all unix-like operating systems whereas the GNU ones don't, which is why they trip off the fingers when I'm working.
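For completeness, and with the filename pattern that the reply above notes was accidentally omitted, the portable forms look like this; the *.bak pattern and the directory are illustrative:

```shell
mkdir -p work
touch work/a.bak work/b.bak work/keep.txt

# One rm process per matching file; works on pretty much any unix-like:
find work -name '*.bak' -exec rm {} \;

touch work/c.bak
# Fewer processes via xargs (fine while filenames contain no whitespace):
find work -name '*.bak' -print | xargs rm

ls work
```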

Mine is the copy with the SVID in the pocket!

Commodore 64 owners rejoice: The 1541 is BACK

Peter Gathercole Silver badge

Re: Emulated peripherals

Now there is an ironic circle.

The ARM instruction set was originally modelled on an Econet of BBC micros, and then the first ARM 1 development board was a Tube second processor!

Acorn really did have some exceptional engineers, and the BEEB was an exceptional system for its time.

Incidentally, there were 80x86 (I think it was an 80286) second processors, as well as Z80 and even a NS32016 (originally intended to run Xenix) available as second processors. These were even packaged as business oriented machines as the ABC (Acorn Business Computer) range.

Corel – yeah, as in CorelDraw – looks in its Xmas stocking and discovers... Parallels

Peter Gathercole Silver badge

Parallels Workstation for Linux

I actually bought a copy of Parallels Workstation for Linux back in about 2004, when VirtualBox either did not exist, or was very immature.

Parallels was excellent, allowing me to run Windows on Linux in seamless mode, and was very efficient.

Unfortunately, after they pulled the product (for Linux) and the Linux kernel interface changed over time, the kernel modules for Parallels (which were provided in source to be compiled with the kernel headers) would not compile, and although I had a bit of a poke at them to try to get them working, I could not do it in my available time, and I eventually (and reluctantly) switched to VirtualBox.

Although I still use VirtualBox now, I would still consider buying a new copy for Linux if they re-introduced it (rather than paying for the commercial extensions to VirtualBox for the better bits like high speed USB).

IIRC, they also had/have a containerization product (it was available long before Docker et al.) that they touted for Cloud applications a few years ago. Might Corel have been wanting that technology?

College PRIMOS prankster wreaks havoc with sysadmin manuals

Peter Gathercole Silver badge

Econet security

Unfortunately, the Econet implementation in the original BBC Micros had very little in the way of security.

The station ID was coded using a set of dip-switches under the top cover, on the keyboard PCB, but this was read into a location in page 0 of the RAM. As the 6502 and BBC OS did not have any concept of a privilege mode, it was possible to change the station ID with a simple command.

There was a vague idea of a privileged user when you logged into the file server (and there was some minimal user-separation on the fileserver), but again, the User ID and whether they were privileged or not was stored in page 0, and could easily be changed.

It was the nature of the machines. There was no real way of securing what was effectively an open workstation.

When I administered an Econet Level 3 network, I very quickly established that there was nothing that could be secured on the network, and told the teaching staff to only store course records on floppy, never on the fileserver.

It was a shame really, because it was a rather nice system (with one or two drawbacks, like security and very slow byte-by-byte access to files using OSRDCH and OSWRCH).

Sysadmin’s plan to manage system config changes backfires spectacularly

Peter Gathercole Silver badge

Re: SCCS hits you

The problem (or maybe it's a strength) with SCCS is that you have embedded tags that are expanded, normally with dates, versions etc., as the file is checked out read-only. With SCCS, they are surrounded by % or some such. (RCS uses similar but incompatible tags; I'm not sure about other systems.)

The problem is that in some cases these tags can mean something to other tools, which may also expect to use % as a special character, in which case deploying an unchecked-in copy may cause undesirable effects.

Of course, one solution to this is to use it with "make", which would allow you to perform additional processing around the versioning system. I'm not sure I remember how I did it, but I'm pretty certain when I used make and SCCS in anger, I had a method where I could spot that it was not checked in. Make is slightly aware of SCCS.
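A minimal sketch of that sort of check, assuming SCCS-style keywords: a read-only `get` expands %W%, %G% and friends, so a file that still contains raw keywords is an editing copy that was never checked back in. The filenames and contents here are invented:

```shell
# Simulate an un-checked-in copy (raw keywords) and a clean get
# (keywords expanded into module/version/date strings).
printf 'version: %%W%% of %%G%%\n' > suspect.c
printf 'version: prog.c 1.4 19/01/11\n' > clean.c

# Flag anything still carrying raw %X% keywords before deployment
for f in suspect.c clean.c; do
    if grep -q '%[A-Z]%' "$f"; then
        echo "$f: unexpanded SCCS keywords - not a clean get"
    else
        echo "$f: looks deployable"
    fi
done
```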

But of course, you can't meaningfully compare SCCS with modern tools. I'm sure it wasn't the first versioning system around, but it must have been one of the earliest, dating back to the early 1970s. It was not meant for vast software development projects with many people working on them, but for its time, it did a pretty good job (Bell Labs. used it to develop UNIX).

Each iteration of version control since, like CVS, RCS, arch, Subversion, Git et al., has expanded on the functionality, meaning that as the granddaddy of them all, SCCS cannot come out favorably in any comparison.

But I still use it on occasion, as it is normally installed on AIX, even when nothing else is.

NHS supplier that holds 40 million UK patient records: AWS is our new cloud-based platform

Peter Gathercole Silver badge

Re: Bullshit Alert

OK. Thanks for your scenario. You're using only cloud storage, I can understand that. Encrypted as it goes to/from the cloud, and never actually used in the cloud. Cheap storage, but doing any volume analysis will be very expensive in data transfer costs.

But actually running the application in the cloud? Or using cloud-based desktop (not mentioned here, I'm extrapolating)? In these cases, the keys need to be in the cloud.

OK. Encrypted region within a cloud domain? You're trusting that the cloud provider cannot be coerced into handing the data and the keys over to some TLA or hacker, backed up by a warranty which will not exceed the cost of the service (even if you can prove that the data's been nabbed). This cannot be considered a good move.
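The storage-only scenario conceded in the first paragraph can at least keep the keys entirely client-side. A sketch with openssl (the passphrase handling and filenames are illustrative only, not how you would manage keys in production):

```shell
# Encrypt locally before upload; only ciphertext ever reaches the cloud.
echo "patient record 12345" > record.txt
openssl enc -aes-256-cbc -pbkdf2 -salt \
    -pass pass:local-only-secret \
    -in record.txt -out record.enc
# ...upload record.enc; the provider never sees record.txt or the key...

# Decrypt on retrieval, again entirely client-side:
openssl enc -d -aes-256-cbc -pbkdf2 \
    -pass pass:local-only-secret \
    -in record.enc -out record.out
cmp record.txt record.out
```

But the moment the analysis runs server-side, as argued above, the keys have to follow the data into the cloud.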

Peter Gathercole Silver badge

Re: Bullshit Alert

"...the keys aren't available"

Someone please correct me. If this data is encrypted but being used by cloud-based analysis applications, then those cloud-based applications must have the keys necessary to access the data. (I accept that, using the data from, for example, GP surgeries, there is scope for keys to be on the surgery's systems and presented with every request, but that does not cope with the bulk analysis mentioned here.)

And they're in the same cloud, so if someone really wants the data, they half-inch the data and the keys (OK, you could go down the rabbit hole of needing a key to decrypt the key store in the cloud, but how many times do you go round this loop before you must store a key somewhere readable?).

So where is the security?

I'm sure I must have missed something, so I'm asking for someone to point out where I'm being stupid?

Support whizz 'fixes' screeching laptop with a single click... by closing 'malware-y' browser tab

Peter Gathercole Silver badge

Build your own PC

I've been building my kids' gaming PCs for a couple of generations of machines.

A few years ago, I was building one to wrap and put under the tree at Christmas for my youngest son.

The build went fine, and the system was working perfectly, so I checked and tightened all the screws, and put the cover on, and then wrapped it.

Christmas morning. Wrapping paper comes off, and the system was connected up. The power button was pressed, and... nothing. No lights, or fans. Nothing. I spent the rest of Christmas day going through the build, including replacing the power supply and removing all of the adapters. Nothing. A disappointed son returned to using his really underpowered old machine that struggled to play his games.

Eventually decided that the motherboard must have failed between testing and unwrapping (unlikely, but the only thing I could think of). Online on Christmas evening to order another motherboard, with the most expensive delivery option to get it as soon as they could get it to me.

The day after Boxing Day, it arrives (yes, really). Out with the first board, in with the second. This would fix it! Only it didn't. No change.

I was baffled. I ended up doing a case-less build on the kitchen table, using a switch wiring set from a decommissioned case, the new power supply and the first board. Surprisingly, everything powered up without problem. Put it back in the case: nothing.

So, thought I, maybe the switch set? I left the board in place, and used the set I'd used in the case-less build. Everything worked!

Finally I had a clue, so I checked the wires in the case. Remember when I said I had tightened the screws? Well, I had been careless, and the wires to the power button were caught between the case and a flash card reader that was where the floppy would normally be. During the build, everything was working fine. As soon as I tightened the screws, the sharp metal edges scissored the wires to the power switch, cutting them both. Result, no contact to turn the power on. A quick swish of the soldering iron, some heat shrink, and a happy son that could finally get his new gaming rig running.

So the moral of the story is, even if it was working before closing the case, check that it still works before re-packaging.

Still, I found a use for the extra motherboard, building a franken-machine from spare parts I had knocking around (which included the case from my son's old machine), which, with a wave of an Ubuntu CD (no unused Windows licence available), became my first machine that I didn't have to share with my family!

Amazon's homegrown 2.3GHz 64-bit Graviton processor was very nearly an AMD Arm CPU

Peter Gathercole Silver badge

Re: Interesting comparison... @ToddRundgrensUtopia

Throwing terms like NUMA around in multicore systems without sufficient qualification can be completely misleading (and this is separate from the abomination that is the term "non-Non-Uniform-Memory-Access" used here).

NUMA is normally not used at a chiplet level, but at a complete system level. I certainly can see that each quad core 'cluster' chiplet could have Uniform Memory Access (see what I did there) to its local memory for each of its 4 cores, but at a system level (or even a chip level), this will almost certainly be a NUMA architecture.

I spent some time working with IBM Power 7125 575 and 775 systems, and know that as processor count increases, coherent cache and memory access becomes exponentially more difficult.

Peter Gathercole Silver badge

Re: Interesting comparison...

SciMark is inherently a single threaded benchmark, so it really measures single core performance, which would make sense given 2x performance with 1.5x clock speed and an architectural bump.

Once you factor in the four times core count, it will be much more useful in a datacentre environment with real-world workloads.

It's interesting that it's a non-NUMA design. This normally causes memory bus contention issues with multi-core designs, so I wonder what they've done to allow 16 cores to access the same memory without blocking.

Blighty: We spent £1bn on Galileo and all we got was this lousy T-shirt

Peter Gathercole Silver badge

Re: FUD Central Nervous System @amanfrommars 1

That nearly made sense to me.

Am I going mad?

Oi, Elon: You Musk sort out your Autopilot! Tesla loyalists tell of code crashes, near-misses

Peter Gathercole Silver badge

Re: Say what you like about Teslas @bob

I drive a lot on un-lit roads (it's a hazard of living in a rural environment), and it is not just drivers who have their lights set too high that bother me.

The super-bright LED lights on cars coming in the opposite direction are enough to upset night vision even when they're adjusted correctly and not on high-beam. They're just too bright.

What surprised me a while back was that these super-bright lights are also being put on pushbikes. This is just wrong, especially when they are set to flash. Even if they don't flash, when you come across one, you have to look hard to see past them to make sure they are not a car with one light not working (and thus difficult to see how much of the road they occupy.)

And don't get me started on the stupidity that allows manufacturers to put indicator lights next to or surrounded by high brightness side lights, especially if the sidelight has to turn off when the indicator turns on to allow the indicator to be seen. You get a light that just appears to go from white to orange, without the required change in contrast. Why are they even allowed in the homologation tests!

A new Raspberry Pi takes a bow with all of the speed but less of the RAM

Peter Gathercole Silver badge

Re: I swerved the PoE hat @defiler

I'm pretty certain that RiscOS can only use one of the 4 cores, and certainly won't use the 64-bit instruction set, so keeping it on the B is probably a good use of that machine.

OK Google, why was your web traffic hijacked and routed through China, Russia today?

Peter Gathercole Silver badge

Re: So much for the original intent of the ARPANET

The original thinking for ARPANET did not include BGP. I believe that the alternative routing strategies were provided by static routing, with route preferences and hop counts providing alternate pathing.

For some history, look up RIP, which was deployed sometime around 1969.

But RIP would never cope in today's massively complicated Internet. Since class-based routing broke down to allow re-use of the previously reserved network ranges that have been freed up to keep IPv4 going, the routing tables that the core routers have to know are HUGE.

But considering how BGP hijacking has been known about for a long time, I'm surprised that it has taken this long for a key based trust system to be introduced.

Can your rival fix it as fast? turns out to be ten-million-dollar question for plucky support guy

Peter Gathercole Silver badge

Many years ago..

I went for an interview at Sequoia (they made large-for-the-time UNIX systems), and was presented with a test.

"There is a problem in the printing system that causes lpd to crash and corrupt the print queue. See if you can spot where the problem is likely to be," they said, before leaving me at the console of one of their test systems, with the root password.

I found the problem in about 10 minutes. I then proceeded to spend the time until they checked back to come up with a patch, work out how their compilation system worked, and compiled it ready for deployment.

I did all of this, worked out what I would do to test it, and ended up twiddling my thumbs for some time until they decided to check back with me.

From their reaction, I don't think that they expected me to even find it, but I knew my way around both the System V and BSD UNIX source tree quite well. They made great noises about how I would fit into their support team, and how it would be really good if I could join them, and the local Managing Director wanted a chat with me, before admitting that they could not even match the package I was getting where I was working at the time (even though the job agent knew exactly what I would need).

So I left furious, as I would not have turned up for the interview if I had known the maximum package they were offering. I think that the agent was using me as a foot-in-the-door for other candidates.

I was not happy with the agent, even though I was on quite good terms.

My hoard of obsolete hardware might be useful… one day

Peter Gathercole Silver badge

Hmmm. Maybe it is time to..

I only recently ditched all of my ISA and EISA sound, graphics and communication boards. You know, from the time that PCs didn't even have a serial port on the motherboard.

I conceded that not even having any motherboards from the era to put them in probably meant that they were surplus to requirements.

I must get round to ditching all of the <1GB drives sometime, but I've just got to check that there's nothing important on them....

Peter Gathercole Silver badge


If you ever do get round to ditching your 1970s vinyl, give me a call. I will consider coming and picking it up.

Many of them are better (or at least more authentic) than the 're-mastered' compressed copies that you can get on digital download.

This just in: What? No, I can't believe it. The 2018 MacBook Air still a huge pain to have repaired

Peter Gathercole Silver badge


It's funny.

30 years ago, I would have agreed with you about ESD damage.

But since then, although I have taken hundreds of laptops and other computers apart, many of which I used myself, I don't think that I can attribute any of the (relatively few) failures to ESD damage.

I do take minimal static precautions, like having something earthed close to me that I will touch periodically, and before I touch the processor, but I don't completely follow the rules, and I don't use an anti-static strap.

I know, you're going to quote cumulative static damage, which may be true, but I think that chip design, for all its modern complexity, has meant that unless you really zap stuff, it's likely to survive with only moderate precautions.

And I think that this is true across the computing spectrum. In my last post, I was working with hardware engineers on supercomputers, and they were not that different, even when changing very complicated assemblies (but of course, there was plenty of grounding around when working on equipment that was still connected to the power infrastructure, as was the case with these systems.)

Monster mash: Spectra Logic's tape library now twice the beast it was

Peter Gathercole Silver badge

Many organizations do

Pretty much anybody who has a need to collect and keep vast and ever growing quantities of data.

My most recent experience is of meteorological data. As forecasts work at ever increasing resolutions, the amount of generated data that people want to keep grows at an ever increasing rate.

Docker invites elderly Windows Server apps to spend remaining days in supervised care

Peter Gathercole Silver badge

Nothing is new

Looks like the same concept as AIX 5.3 vWPARs that allow you to run apps from old AIX versions on modern Power boxes with supportable levels of AIX.

Only been around for about 7 years.

Macs to Linux fans: Stop right there, Penguinista scum, that's not macOS. Go on, git outta here

Peter Gathercole Silver badge

Re: Great plan Timmy. @Caffinated Sponge

My previous post on this was a little incomplete. I had not realized that on Intel Secure Boot systems, there is a 'shim' bootloader signed against a Microsoft certificate that can isolate grub and the kernel from Secure Boot. This shim will do additional signature checking, and have certificates maintained by the sysadmin to allow locally compiled versions of grub to be booted. So only the shim needs to be signed against a certificate in the Secure Boot database in the UEFI.

But my original point is that the certificates installed in the Secure Boot system are entirely under the control of the hardware vendor. For the UEFI used on Intel systems to boot Windows, the main certificate holder is Microsoft. Microsoft has come up with this method to allow some Linux distributions to sign against the shim certificates, and allowed them to get grub or other bootloader signed with the shim certificate.

UEFI does have a facility to install new certificates, but I think many systems have this disabled so the only certificates that can be used are those that were installed when the system was created.

I suspect that on latest Apple hardware, the only certificate holder is Apple, and only Apple certificates are installed.

If Apple choose not to sign the shim bootloader, then you can't run Linux. It's nothing the Linux community can change, it's completely at Apple's discretion.

I think that the cryptography involved in signing with a certificate is sufficiently advanced that you can't 'steal' a certificate. There is magic (read - a cryptographic checksum) in the signature that will check that the code that contains the certificate signature has not been tampered with. So the only solution is to obtain a correct signature for your code. If Apple don't want to grant one, then tough.

I can totally see why some people want to be able to prove that their system is secure, and is only running software from a recognized source (I won't say trusted, because I think that some OS vendors have abused any trust that they once may have had), but the mechanism used is a double-edged sword which allows these organizations to eliminate rival and alternative OS installations.

So far Microsoft have been prepared to play fair. But there is absolutely nothing that says that they will remain that way. Remember, the last E in EEE is Extinguish...

Peter Gathercole Silver badge

Re: Great plan Timmy. @AC

You've missed the critical point here.

It is not down to the Linux community to get their code certified, it is for Apple to include the existing certificates that Linux can use into the certificate store in the secure boot system.

If Apple do not want to include a certificate Linux installs can use, there's pretty much nothing that the Open Source community can do to make Linux run without breaking secure boot.

This was evident when the original Palladium security system was being mooted back in 2002. Some people, like Ross Anderson, spotted this and rang the warning bells, but not too many people heeded the warnings.

Just because Windows 10 currently requires a PC to have the ability to turn Secure Boot off, this may change in the future, and having secure boot even present means that at some point, it could be enabled, restricting the choice of every owner of a PC with it in.

I agree that you can choose to not buy Apple hardware. But generally speaking, it's nicely engineered (or has been in the past, not so sure now) and used to be a good choice if you were prepared to pay the premium.

IBM sits draped over the bar at The Cloud or Bust saloon. In walks Red Hat

Peter Gathercole Silver badge

Re: That's all very well, but ...

Hey, maybe they could RA Lennart!

Or even better, put him in a quiet corner where he can't wreck any more of the Linux/UNIX legacy in the name of making it like an offering of another large IT company.

Zip it! 3 more reasons to be glad you didn't jump on Windows 10 1809

Peter Gathercole Silver badge

@Timmy B

I have not really had any problems printing to mfds from manufacturers like Epson, HP and Brother, although using the fax or scanner functions on remotely connected mfds can be a bit of a problem. For the Brother I'm currently using, I had to download and install their Linux printer definitions, plus a script to install them, but that's not really that much different from Windows.

I did have a terrible battle with a cheap Lexmark mfd from Linux some years ago, but they don't appear to make devices for the home market any more, and I'm fairly certain that their laser printers can be driven as generic PostScript or PCL devices without any additional software.

When I got my first HP MFD, I plugged it into my laptop, which was running Hardy Heron (8.04), using USB, and was amazed to find that Ubuntu recognised the device and created both print and scanner devices for it that allowed me to use it immediately, with almost no intervention from me (I think I may have had to tell it that the paper size was A4).

So I would be interested in hearing which manufacturers' MFDs you're struggling with.

Roughly 30 years after its birth at UK's Acorn Computers, RISC OS 5 is going open source

Peter Gathercole Silver badge

Re: it was a joy to work in and ahead of it's time for creating structured code @Mage

I have often thought about what it was that made the micro revolution happen in the 1980s.

My thoughts are that one of the reasons was the immediacy of getting something done that hooked the youth of the '80s. Rocking up to a machine, typing a four or five line program followed by RUN, and having colours splatted all over the screen, or random beeps coming from the speaker says to a newbie "look, you can do magical things", and they're hooked, almost in no time flat.

BASIC was the best tool at the time for this first step. Quick to learn, easy to remember, and immediate.

I look at what is necessary to learn Pascal, Modula and the other compiled languages. First you have to learn the editor. Then you have to write the code. Then you work out how to compile, and only then (assuming that you don't get any cryptic errors from the compiler), you get to see the results. Even using IDEs puts too much complexity in the first step before you achieve anything.

Most of the youth of today will turn off after exhausting their limited attention span at the point that you have to invoke the compiler. And this IMHO is the problem with most modern languages used for teaching.

Add to this the need to learn quite complex language constructs before being able to write syntactically correct code in things like Python, currently the poster boy of teaching languages, and you will turn off more kids than you attract, even if they are quite able.

I saw this in the early '80s. I worked in a UK Polytechnic, and had several intake years on HNC and HND computing courses coming in having learned BASIC on Spectrum, VIC-20 and C64 systems (amongst others). They sat down at a terminal, learned how to log in and use an editor like EDT, and started writing Pascal, complaining bitterly that this was not what they thought computing was all about, and why was it so complicated! Once they got over the hump, they were fine, but some did not get that far.

Similarly, my father learned to program on Spectrum and BBC micros, and as a retirement present in about 1992 was given an 8086 MS-DOS PC. One of the first things he asked me was "How do I write a program that draws pictures and plays sounds?" (things he had been doing for years to aid his teaching), and I had to say that it was not built into MS-DOS, or even GW-BASIC, without extra software packages.

I don't believe that he ever wrote another program again.

Your comment about Forth is interesting. I learned Forth as an additional language (PL/1, APL, C and Pascal were my primary languages then) back in the 1980s (ironically on a BBC Micro with the HCCS Micro Forth ROM, not Acornsoft Forth), and I would say that it is an extremely poor language for a newbie to learn programming in. The stack-based arithmetic system is completely non-intuitive to someone who has not already studied computing (good grief, most people have difficulty understanding and using named variables in a computer program), and although you can define meaningful words in the dictionary, most of the primitives are terse and impossible to guess the meaning of without reading the manual. And even getting to the point where you could define a word would tax most kids I have known.
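To illustrate why the stack-based arithmetic trips newcomers up, here is a minimal sketch (in Python, purely for illustration; it is not how a real Forth works internally) of evaluating Forth-style postfix input. The reader has to mentally track the stack to see that `2 3 4 + *` means the same as the infix `2 * (3 + 4)`:

```python
def rpn_eval(tokens):
    """Evaluate a Forth-style postfix token list using an explicit stack."""
    stack = []
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a // b,  # Forth's / is integer division
    }
    for tok in tokens:
        if tok in ops:
            b = stack.pop()   # operands come off in reverse order,
            a = stack.pop()   # which is one of the mental hurdles
            stack.append(ops[tok](a, b))
        else:
            stack.append(int(tok))
    return stack.pop()

# Forth: 2 3 4 + *  is the same as the infix  2 * (3 + 4)
print(rpn_eval("2 3 4 + *".split()))  # prints 14
```

Nothing in `2 3 4 + *` tells a beginner where the brackets would go; that information only exists on the stack.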

At least most Fortran/Algol/BASIC/COBOL based languages have keywords that closely match English and sometimes mathematical notation. And BASIC scores well in not having strict typing, something that becomes more important as you get more proficient, but is a real barrier to someone just learning.

So in my view, as a first stepping stone, BASIC is a good start to gain the concepts of programming, followed on by a move to a more comprehensive language. And BBC BASIC was one of the fastest and best.

Could it be bettered? Yes, I'm sure it could, but JavaScript and Python are not it!

Peter Gathercole Silver badge

Am I sure?

I'm no expert, although I have looked into the memory layout of RiscOS because I was interested.

It may be that over the different versions of RiscOS, new features were included, but Wikipedia indicates RiscOS 2 did not have virtual addressing, and I saw nothing in the remaining history to indicate that it was added later.

It is quite true that MEMC did have memory protection capabilities, but from what I have read, it was not used in the earlier versions of RiscOS, although I am sure that it was used in RISC iX.

I find it hard to believe that the current versions of RiscOS do not have memory protection, but my original post was really about RiscOS under Acorn's custodianship.

Peter Gathercole Silver badge

...and it did!

If you look at the core implementation of modern Intel and many other processors, including the zSeries mainframe, they have microcoded RISC processors in them.

And that is not taking into account the remaining RISC processors, ARM, IBM PowerPC (although this is the most un-RISC RISC processor I've ever seen), RISC-V, and MIPS derived processors that are still available.

And I don't think that the micro-controllers and PIC processors that you find embedded in many millions of devices would exist without the research done for RISC processors.

Peter Gathercole Silver badge

Re: "Underpowered"?

According to Wikipedia, the BLiTTER functionality was added to the ST range in late 1989, with the introduction of the STE models.

This is Wikipedia, I know, but for things like this it is mostly correct.

The A400 model of the Archimedes was launched in June 1987.

The Master 128 was a continuation of the 8-bit BBC microcomputer range, which is why it was still available for continuity purposes for schools, even after the Archimedes was launched.

Peter Gathercole Silver badge

Re: RiscOS really was magnificent but...

Whilst it was co-operative multitasking with no interrupt driven scheduler, the points at which a process could lose the CPU were built into many of the system calls, including the ones to read keystrokes from the keyboard and the mouse position.

What this meant was that if you were doing any I/O through the OS, there were regular points where control could be wrested from a process.

That's not to say that it was not possible to write a process that would never relinquish the CPU, but most normal processes are not written like that.

The real issue (IIRC) is that the earlier versions of RiscOS did not enforce any memory-space virtualisation or separation. All processes had to be written as position-independent code that could sit anywhere in memory, and had to use the OS to manage their data space. This means that, in this day and age, RiscOS would be regarded as a really insecure OS.
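The yield-on-system-call behaviour described above can be sketched with Python generators. This is an illustration of the general co-operative multitasking technique, not RiscOS's actual implementation: each task runs until it makes a "system call" (here, a `yield`), at which point the scheduler may hand the CPU to the next task.

```python
from collections import deque

def scheduler(tasks):
    """Round-robin co-operative scheduler: a task only loses the CPU
    when it relinquishes it at a yield point (a 'system call')."""
    ready = deque(tasks)
    trace = []
    while ready:
        task = ready.popleft()
        try:
            syscall = next(task)   # run until the next yield point
            trace.append(syscall)
            ready.append(task)     # control was relinquished; requeue
        except StopIteration:
            pass                   # task finished, drop it
    return trace

def task(name, steps):
    for i in range(steps):
        # Imagine real work between yields. Reading keystrokes or the
        # mouse position is where control can be wrested away.
        yield f"{name}:read_input:{i}"

print(scheduler([task("A", 2), task("B", 2)]))
# prints ['A:read_input:0', 'B:read_input:0', 'A:read_input:1', 'B:read_input:1']
```

A task that loops forever without ever reaching a yield point would hog the CPU indefinitely, which is exactly the failure mode described above.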

Peter Gathercole Silver badge


You make the comment that the graphics on Archimedes was underpowered, but you have to put in historical context.

The original Archimedes hails back to 1987. At that time, the Atari 520ST was available and the Amiga 500 was released. The Atari had no graphics assist beyond some sprite handling; the Amiga had a blitter, which automated block transfers of memory, but only in the first 512K.

The Archimedes was able to do everything that the others could just using the power of the ARM processor, and was not at a serious disadvantage.

And at the same time in PC land, you had the CGA, EGA and early VGA adapters (plus the third party graphics cards) that did almost no processing on their own, and provided a dumb frame buffer that was manipulated by the underpowered (compared to the ARM) main processor.

As the ARM was an efficient full 32-bit RISC processor (as opposed to the 16- or 32-bit register, 16-bit data path of the Intel and 68000-based systems) with good memory access and a high clock speed, it was able to drive a frame buffer as well as or better than almost everything else available at the time. The Amiga had some advantage due to its blitter, but IIRC it had some serious limitations in what you could do with it.

Where it fell behind was when the clock speeds of the Intel processors started being pushed up into the tens of MHz, mainly because Acorn did not have the resources to build higher-speed ARMs and ARM-based systems. But this was a financial limitation, not a technical one.

And of course Acorn never got into the graphics co-processor market that only hit the mainstream after Acorn was split up.

Erm... what did you say again, dear reader?

Peter Gathercole Silver badge

To be human.

If a human had produced the transcription of the statement, they may well have missed out the "erm", or possibly qualified it with some descriptive text.

But when you get a machine performing the transcription, all of these little hesitations, repetitions and deviations are reproduced verbatim.

Sounds like Norman is a fan of "Just a Minute"!

Your pal in IT quits. Her last words: 'Converged infrastructure...' What does it all mean? We think we can explain

Peter Gathercole Silver badge

Re: I think I get it now @Pascal

I think you need to change point 2. I think it should be:

2) Complicate everything by giving existing technology a new incomprehensible name and make sure to constantly repeat that it is different and important.

Microsoft yanks the document-destroying Windows 10 October 2018 Update

Peter Gathercole Silver badge

Re: "were made available for other OS" @ defiler

Just because different GUIs are available on Linux, it does not prevent applications with different look-and-feel from running simultaneously on a system.

While it is true that a mixture of user interfaces on screen at the same time may look messy, it's no reason to say that you can't run an application on a Linux system if you are using a different UI from the one the application expects (and even if that were the case, you can have multiple GUIs installed and switch between them).

I always have the Gnome and KDE support libraries installed (along with several others) on my primary workstation, and this means that I can run an application meant for almost any Linux GUI.

Buttons and menus may be different from one app to another, but if an application is important, you get to know where to look. Not ideal, but not a show-stopper.

Peter Gathercole Silver badge

Re: "were made available for other OS" @AC

You don't understand the relationship between the GPL and LGPL.

The GPL is not a barrier to writing commercial software on Linux, as all the bits you need (most development libraries, compilers and GUI support) are published under the LGPL or other fairly permissive licences, which allow you to compile, link and ship code without the GPL requiring you to open-source it.

Properly packaged Linux packages can have quite good portability within a system architecture (like x86-64), although sometimes the version checking for some of the libraries throws up unnecessary problems. But that's not much different from DLL-hell on Windows.

Going forward, packaging formats like Snap and AppImage will make Linux portability even better.

WLinux brings a custom Windows Subsystem for Linux experience to the Microsoft Store

Peter Gathercole Silver badge

Re: Why?

You might not, but Microsoft are hoping people who might have experimented with Linux will do it this way, rather than setting up a dual boot system (which may result in Windows never being booted again!)

And if the user gets it running well enough, why would they even consider installing a native Linux distro?

Microsoft is Embracing Linux. The rest will follow!

That scary old system with 'do not touch' on it? Your boss very much wants you to touch it. Now what do you do?

Peter Gathercole Silver badge

Re: Put it all together...

Worse than that, it was called the All-aggro!

Dead retailer's 'customer data' turns up on seized kit, unencrypted and very much for sale

Peter Gathercole Silver badge

Re: How's this different than normal?

Normally, kit like this is sold by the liquidator or administrator to settle debts and pay creditors (after lining their own pockets, of course, as preferred creditors).

Put the onus on them to clean the data from any kit that is sold on, and let them pass that obligation on to any disposals company engaged to clear a site. Make it a penalty on the liquidator to allow customer data to leak from a company they've closed down.

Will probably mean more perfectly usable kit being destroyed rather than recycled, and possibly make the IT equipment more of a liability than an asset, but perfectly doable.

Ubuntu flings 14.04 LTS users a security lifeline, chats some more about Hyper-V

Peter Gathercole Silver badge

Why stick to 14.04?

One should point out that 14.04 is the last LTS release that does not have systemd as the default init (it is still there, but upstart is running as process 1).

This may make some shops stay with 14.04 for a bit longer.

Apple hands €14.3bn in back taxes to reluctant Ireland

Peter Gathercole Silver badge

What I want to know

is what the Irish Government intends to do with the ~3,000 Euros per head of population (there are approximately 4.77 million Irish residents).
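The per-head figure is easy enough to check as a back-of-the-envelope calculation (shown in Python just to make the arithmetic explicit):

```python
# Sanity check of the rough per-head figure from the article's numbers.
back_taxes = 14.3e9   # EUR 14.3bn handed over by Apple
population = 4.77e6   # approximately 4.77 million Irish residents

per_head = back_taxes / population
print(round(per_head))  # prints 2998, i.e. roughly 3,000 Euros per head
```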

I'm sure that Irish tax payers would love a tax rebate, or even money put into the country's infrastructure.

I know, they could use it to build the border with NI, when we hit the WTO rules next year.

Biting the hand that feeds IT © 1998–2019