* Posts by Peter Gathercole

2953 posts • joined 15 Jun 2007

A Reg-reading techie, a high street bank, some iffy production code – and a financial crash

Peter Gathercole Silver badge

Re: QA's fault @Phil re: lint

This code is not a no-op. It will change the value in the TOTAL_EXPOSURE variable each time it goes around the loop, so there is no reason for lint to pick it up.

And even though it would not be efficient, the code does leave the value of the last POSITION.EXPOSURE in the TOTAL_EXPOSURE variable. I can't see why someone would code this, but it is possible (especially if the variable names were less meaningful) that this was the intended result.

Peter Gathercole Silver badge

Re: Or... @John H Woods

The issue with what you said is contained in the term "modern language".

I don't believe that the article said anything about when the error was coded. At one time C, Pascal, Algol, PL/1, FORTRAN et al. were all regarded as modern languages, and none of them had a construct to auto-sum the elements of an array without a loop. But I suspect that you already know something about older languages, as you give a snippet in Smalltalk.

And then there is APL, one of the oldest high level languages around, which would allow you to sum across a slice of an array in a single operation, to the point where there is not even an explicit loop construct in the language (don't ask me to write the code; it's nearly 40 years since I wrote any APL in anger, and I don't want to work out how to represent the special APL symbols here).

Looking at problems with a different perspective often gives different answers.

Peter Gathercole Silver badge

@A Non e-mouse

The shortcuts ++, --, += and -= were designed to allow the code to map on to the instruction set of the PDP-11 (and probably the PDP-7 before this), because there were auto-increment and auto-decrement instruction modes on these processors.

This made it possible for a skilled programmer to write code that would generate fewer instructions (and thus be faster), rather than seeing whether the compiler would spot the possible short-cuts.

Remember, when B (the successor to BCPL and forerunner of C) was written, the machines Ken and Dennis had were only just capable of running a compiler at all, and code optimization was completely out of the question.

The systems were really slow. When I got my first UNIX Version 6 (and later Version 7) system to look after, a long time after UNIX was first written, compiling the kernel took over 4 hours, and I had relatively fast disks. I never did get around to recompiling the tool set; I just used what came in from the distribution tape. It got to the point where I would touch many of the .o files, the libraries and the bits I had not changed, just to fool Make into not going the whole hog and compiling everything.

This direct mapping of high level code to machine instructions is why many people used to refer to C as a two-and-a-half generation language, and suitable for writing efficient code for operating systems.

Nowadays, when systems are so obscenely fast as to make compiling code a relatively trivial operation, adding optimizers into the compiler so that these short cuts are not necessary is a no-brainer. They could be deprecated, but they're written into the standards, and C has spawned a huge number of C-like languages that have taken much of C's syntax verbatim.

Peter Gathercole Silver badge

Re: QA's fault @Phil

Why would a code analyzer pick this up?

The snippet looks sufficiently like C for me to generalize, and what is written is quite valid code, just not doing what was intended.

A code analyzer like lint will recognize things like wrong argument counts or argument types, code that will never be run, or integer/pointer/data object size mismatches. And when it comes to lint, most modern C compilers and their optimizers will do a better job than lint did, if the correct options are turned on.

Unless you have a meta-language in which you code the requirement separately to check the code, you will not pick up logic errors like this. And if you have such a meta-language, firstly the problem has to be correctly coded in this meta-language, and secondly, if it could check the code, it could write the code in the first place, so why employ a programmer?

Finally: Historic Eudora email code goes open source

Peter Gathercole Silver badge

Re: Email is fundamental to modern life

Whilst I respect David Harris's position regarding Linux, I suspect that if he is still working with Pegasus, he needs to at least update his blog regarding his position. It's dated April 2005.

Reading it, I don't think he's really understood the GPL and LGPL. Just producing a free package that runs on Linux does not necessarily mean that the package needs to be open-sourced or published under the GPL, as long as it is written correctly. It is perfectly possible to produce binary-only software for free distribution under another license, or even commercially, as long as you do not incorporate any GPL code in your code-base. Most of the C and C++ libraries required to compile your own packages are published under the LGPL, which allows them to be linked either statically or dynamically into a binary package.

This fact annoys some of the Open-Source stalwarts who want to convert the whole world to software that is free and open (RMS, I'm looking at you), but the licenses were written the way they were for a reason.

I appreciate that if he uses a third-party editor as part of the package, then he would have to get some agreement on that, but Linux repositories are full of editors which, provided they are run as separate processes, can be called quite freely from another program without any licensing issues. Using one as a widget may be a little more problematic, although much of, say, Qt or GTK+ is published under the LGPL, so there will be editor widgets in them somewhere.

The issue of support is only one of degree. At the time of writing the blog, he was doing it for Windows, so doing the same for Linux, once the learning curve had been climbed, would not be significantly different, just more work.

But given the date on the blog, and the overall age of the software, I suspect that he is just not interested in porting the product, and if this is the case, when Microsoft starts removing some of the legacy APIs in Windows, the Windows package may be doomed in the long term.

Opening it up to other developers is the only real way to keep the package alive over the longer term. And if Mercury is actually a functional email server, then a Linux port, even a commercial one, would be really welcome.

Dixons to shutter 92 UK Carphone Warehouse shops after profit warning

Peter Gathercole Silver badge

Re: Are Dixons...

I don't think I have any horror stories, but I have to admit that I did not make a habit of buying from them after my first experience.

Their own-brand products were built down to the lowest quality they could get away with. I had a Prinztronic mini scientific calculator bought for my birthday when I was 16, as the SMP maths syllabus allowed calculators at A-level (but not in the exam). In function it was exactly the same (and I mean exactly) as the Sinclair Cambridge Scientific (not the RPN one), and a similar size, but believe it or not, the Sinclair was built better!

Instead of engraved, moulded or even screen-printed legends on the buttons, the Prinztronic had transparent plastic buttons with a printed sheet underneath that you read through the button. In addition, the metal domes for the contacts were held on to the PCB with adhesive tape, rather than the sealed sandwich the Sinclair calculator used.

I regularly had to dismantle the thing, clean the contacts and replace the tape after one or more of the buttons stopped working, and I ended up re-drawing the legends on the paper sheet when it wore out. I guess most people would have tossed it, but I fix things to keep them working (and still do!)

I kept it going for a couple of years until I persuaded my parents to get me a Commodore SR4190R for University (another birthday present), a much better calculator. This was not bought from Dixons.

Peter Gathercole Silver badge

Re: notably National Living Wage @AC

I think you missed out a * 52.

What you've done is worked out the weekly increase in the total wage bill, not the annual increase. So,

30K (number of NLW employees) * 0.33 (hourly increase) * 40 (hours per week) * 52 (weeks per year) = 20,592,000 (yes, that's over 20 million.)

Divide by 300,000,000 and multiply by 100 to get percentage = 6.9%

This is still quite small, but more than the insignificant figure you quoted, and definitely more than the annual rate of inflation. A business cannot take even this loss of profit for a number of years without it having an effect (on the dividend and share price, at least).

In practice, what is happening is that people above the NLW are not getting any increase until the rising NLW reaches their wage, at which point they will be swept up. I predict that we will see the number of jobs at, or close to, the NLW go up significantly over the next few years.

Can't pay Information Commissioner's fine? No problem! Just liquidate your firm

Peter Gathercole Silver badge

Re: Liquidate company to avoid paying

In some cases, liquidating the company is the only option. If there is no chance that the company's finances could cover the fine, then the company is technically insolvent (i.e. not able to satisfy its creditors, which include the body issuing the fine), and entering a CVA or starting insolvency proceedings is probably the correct thing to do.

Where the problem really lies is when a company that is solvent and potentially able to pay the fine is voluntarily wound up. In cases like this, the fine should still be paid, because the ICO should be registered as a creditor, and if the company is solvent, then all the creditors should be paid. My suspicion is that the directors will actually find some way of extracting money from the company before starting the insolvency proceedings, in a way that makes the company insolvent but allows them to pocket the cash.

There can only be a small window where a company knows that a fine is likely, and declares itself insolvent before the fine is issued, where they might get away with this, but as there is a relatively lengthy process to identify creditors, I'm not sure that this would really work.

In cases where a potentially solvent company is voluntarily wound up as insolvent, the directors should already be liable, because deliberately driving a company insolvent must be at least negligence, if not corporate misconduct.

I suspect that it is just too difficult to prosecute these cases.

Welcome to Ubuntu 18.04: Make yourself at GNOME. Cup of data-slurping dispute, anyone?

Peter Gathercole Silver badge

Re: Dude @Camilla

Most people have dynamically allocated IP addresses provided by their ISP. The ISP can identify the account from the IP address and the time, but whether the IP address is enough for the ISP and everyone else probably depends on how long the lease time is for the dynamic IP address.

But even the account owner's name does not definitely identify the user by itself, unless only one person uses the connection. For example, during the week I stay in a shared flat with four other people, and the broadband account is in the landlord's name.

Of course, if you pay for a static IP, then yes, it is likely that you will be easier to identify, and of course by combining the IP address with other information (like the cookies in your browser, and whether you're logged in to a Firefox or Google account) many more things can be found out about you (I'm pretty sure Firefox ties together multiple devices I use by profiling the usage pattern, even though I don't enable the sync feature).

Expect this last behavior to increase as time goes by.

Das blinkenlights are back thanks to RPi revival of the PDP-11

Peter Gathercole Silver badge

Re: The PDP-11 lives on

A regular instruction set was really a requirement in the early days of computing, as grouping the instructions allowed you to reduce the amount of logic in the instruction decoder, as did using the same addressing modes for different instructions.

What I found really interesting about the PDP-11 instruction set was that the stack pointer and program counter were implemented just like the general purpose registers, a fact that became obvious if you looked at the op codes generated for jump and stack manipulation instructions.

Remember that the CPUs of the PDP-11/70 and others of that generation were mainly constructed from 7400-series TTL in normal DIL packages, which explains why there were so many boards. IIRC, 'my' 11/34a had four boards for the CPU, one of which was the FP-11 floating point processor, and another the 22-bit memory controller (it was a SYSTIME special; PDP-11/34s did not normally have 22-bit addressing).

Peter Gathercole Silver badge

Re: How noisy are the cooling fans? @Jake

My slipstick is a Faber Castell log-log slide rule. I would like to say that it is the same one that was bought for me in 1971 when I went to senior school, but that got lost in one of many house moves, and I had to do a like-for-like replacement from eBay.

Although I think I still know how to use all of the scales (it's got around 20 different ones), I don't do the type of maths that it's best suited for very often.

I have one of my grandfather's slide rules, probably dating back to the 1930s, that he would have used at the RAE in Farnborough (one of the UK's primary aircraft research institutions). It's engraved polished ivory on wooden slides, but it feels so fragile that I don't play with it very much.

When it comes to abacuses, not me. I used a blackboard and chalk and counting gates when counting sheep and hay bales on my father-in-law's farm before he retired.

Meet Asteroid, a drop-in Linux upgrade for your unloved smartwatch

Peter Gathercole Silver badge

Re: Is Linux the best starting place for a watch OS?

Well, Ubuntu Touch, which was dropped by Canonical, has got a second life as a community-supported Linux phone OS, although it does still use the Android kernel.

If your phone is not already on the support list (and I admit it's not huge), there are people who will help you attempt a port.

Off with e's head: E-cig explosion causes first vaping death

Peter Gathercole Silver badge

Re: Here we go again

Some of these devices have stupidly large batteries.

My suggestion is set a maximum capacity limit on the batteries, so that they have less energy to dump if they go wrong. Could still injure, as any rechargeable battery could, but less likely to kill.

But I realize that even alkaline AA cells are an explosion risk, and can get really hot when shorted. bigclivedotcom should have used his "explosion containment pie dish" when merely dismantling one of a certain discount supermarket's AA rechargeable batteries, but instead ended up burning his bench!

Britain to slash F-35 orders? Erm, no, scoffs Lockheed UK boss

Peter Gathercole Silver badge

Re: The curse of the F-35......... @CliveS

I am aware of the kinetic storage that is used. But you're still talking about diverting a significant amount of the available power into the catapult while you are recharging it, probably at the same time as you're trying to drive the ship forward, and maybe operating the weapon systems.

Also, the figure I quoted was meant to be the total of both gas turbine generators and all four diesel generators. I'm not sure that you can gang all this power together, and I admit that I got the sum wrong: the total is actually 118.8MW, not the 82.4MW I gave (I only counted one of the gas turbine generators).

What I was contrasting was the fact that HMS QE has less total power than the existing Yank carriers that are regarded as having too little electrical power to operate EMALS.

And, yes, I do understand that Nimitz and Ford class carriers have four catapults, whereas they were only considering fitting one on the QE class. But if you were intending to exclusively use non-STOL aircraft, would you really want to rely on a single aircraft launch system when aircraft are your primary defense?

Peter Gathercole Silver badge

Re: The curse of the F-35.........@John Brown

If I remember my Biggles, you might want Camels rather than Pups. I think Pups were trainers...

Peter Gathercole Silver badge

Re: The curse of the F-35......... @Aladdin Sane

You might think that.

HMS Queen Elizabeth has a total electrical generation capability of 82.4MW, which provides power for moving the ship and all other electrical demands on board.

The Gerald R. Ford class which will have EMALS will have about 600MW of electrical generation, which is in addition to the steam for the turbines that move the ship (i.e. none of the electrical power is used to move the ship).

The Nimitz carriers have about 200MW of electrical generation, which is also in addition to the steam turbines that move the ships, and even that is considered too little for fitting EMALS to the current carriers.

(All figures are from Wikipedia)

So, you still think that QE has enough spare power for an EMALS catapult?

IBM bans all removable storage, for all staff, everywhere

Peter Gathercole Silver badge

Re: Poorly thought through

When IBM built their own laptops, and for a few years after the sale of the Thinkpad brand to Lenovo, IBMers working in secure environments within IBM, or on customers' own secure sites (generally those requiring some form of government security clearance), had to have Thinkpads without webcams.

Now they are buying from third parties, they do not have the same control over the devices they can get (and they don't want to have laptops built to their specification), so the users are instructed to cover the camera lens.

In addition, phones with cameras used to be banned (if you had one, you had to leave it outside the secure area). Now, as IBM no longer buys phones for its workers at all (the worker provides the phone, IBM provides a SIM), the prohibition is that you must not use a camera within one of these secure areas.

All in all, less control rather than more.

Ubuntu sends crypto-mining apps out of its store and into a tomb

Peter Gathercole Silver badge

Re: The problem is the mindset behind it

That is one of the things I miss most about PalmOS. You just knew that you would have a note application, a very functional calendar application, a calculator application and a contacts application (which was integrated with the phone on Treo devices).

They were always in the same place, always worked the same, and the data was portable between devices without having to hand the data over to Google, Apple, Microsoft, or your 'phone vendor when you upgraded your device.

Even with the web or internet based sync tools, I've always found problems going from one Android device to another.

Windows Notepad fixed after 33 years: Now it finally handles Unix, Mac OS line endings

Peter Gathercole Silver badge

Re: Notepad++ @Baldrickk

I said 'by default'.

Of course you can change these things. What modern software doesn't allow you to change everything about it?

But I just like to do the work I'm paid for, rather than fiddling around configuring the tools I have to use (note: I use about 12 different locked-down Windows environments via Citrix, and I would have to change them all separately). And, as I'm a UNIX/Linux person (I have been since long before DOS, let alone Windows, existed) without huge amounts of in-depth Windows experience (yes, I use Linux exclusively on my home systems), I do not find Windows and Windows software intuitive to configure.

And, yes again, all systems have to have a default set of settings. I just don't agree with a significant number of those made by modern developers (like hard-coding ANSI escape sequences into scripts and documents!)

Peter Gathercole Silver badge

Re: Emacs... no Vi.... no Ed!

Upvoted for reference to EDT.

Peter Gathercole Silver badge

Re: Notepad++ @veti

Except that Notepad++ has many 'features' turned on by default, like tabs, character counts, syntax highlighting, a minimalistic font and, especially, launching with the files that you had open last time you had it open.

Well, actually, my biggest gripes are the syntax highlighting and opening with the files you used last time. When I open an editor, I expect to see either a blank file or the file I passed as an argument or opened it with.

I know I'm a Luddite in some respects, but I really dislike coloured text supposedly highlighting something in whatever colours the developer thought were a good idea, especially when, like the default alias for 'ls' set up on many Linuxes, I run with a different background colour that makes the developer's choice stupid.

Second wave of Spectre-like CPU security flaws won't be fixed for a while

Peter Gathercole Silver badge

Basically, client side code execution is always a risk, because it provides a mechanism to run code from a server that you have no control of, on your system.

The problem is that without it, a lot of our interactions on the web would look like they did back in HTML2 and earlier, where all that could be done had to be done as tables, and any complicated tasks had to be rendered into a pixmap on the server side, and sent to the browser to display.

My view is that Javascript provides too much control. AFAICT, as originally specified, it was supposed to be interpreted, which should have made it quite difficult to issue a stream of machine code instructions that had not been generated by the interpreter or JIT compiler. And if you can make it generate specific vulnerable code, fixing the interpreter or JIT compiler to prevent this is much easier than fixing the processor (it is interesting that the BPF packet-filter JIT compiler in the Linux kernel could be manipulated to generate code to demonstrate these flaws!)

Of course, injecting executable machine code directly into a machine via buffer-overruns or in images or other binary blobs through poorly written client side processes would still be a vector for executing malicious code, and if you have direct control to import and execute code through a direct user session on a system, then there is nothing much you can do to protect yourself from processor flaws. Running a non-x86 architecture would provide some mitigation, but only from vulnerabilities that affected x86 processors.

It is at this point that having trusted executables, preventing you from running imported code, could be a help, but that would not work with anything that used self-modifying, or JIT compiled code or on a system that is used for development (if you can compile code on a system, it is extremely difficult to negate processor flaws).

If you're a Fedora fanboi, this latest release might break your heart a little

Peter Gathercole Silver badge

Re: Nvidia cards are fine @Lee

And when, during the update process on Ubuntu, does one get to read these release notes?

Oh, you can read the changelog in Synaptic, I suppose (can you do this in the Software Center?), but I wasn't explicitly updating the graphics driver; I was just allowing the automatic update process to install the updates that were in the repository. As far as I was aware, it was silent.

So what do ordinary users do? Freeze the graphics drivers (if they know how to do this) so they don't get updated, and vet every graphics update manually? This will work until a kernel update that requires a new graphics driver module, and then the result will be, again, that the graphics stop working.

And if you do spot it, switching to the legacy driver is not obvious. Graphics drivers are normally in use while you are running the GUI, so in my experience it is necessary to stop the GUI and work in console mode, which is also not obvious.

It's exactly what opponents of Linux complain about: you need to know a lot about what you are doing if you want to run Linux on the desktop with the vendor drivers, which is why I recommend that non-technical Linux users stick to the open-source drivers.

Peter Gathercole Silver badge

Re: Nvidia cards are fine

The problem with proprietary drivers for Nvidia or ATI hardware is that they both silently remove support for older chipsets.

On two occasions with Ubuntu, one Nvidia, one ATI, I've put some updates on a system (one was a dist-upgrade, the other just a normal in-release set of updates), rebooted, and been faced with either a text login screen or a 640x480 16-colour screen.

In both cases, support for the graphics card in the system had been removed from the proprietary drivers, so the system defaulted to the best mode it could manage. This is far from ideal, especially when Linux is promoted as suitable for older hardware.

Nowadays, I recommend that users switch to the Open Source drivers before doing any major updates, and as I don't have any major reason to use heavy 3d applications, I use them all the time.

My PC is on fire! Can you back it up really, really fast?

Peter Gathercole Silver badge

Re: Magic blue smoke...

Again, not IT related, but we had an engineer working on the Asteroids machine in the JCR bar at Uni. (back when Asteroids was actually current).

He decided to replace the huge main electrolytic smoothing capacitor in the power supply.

He got it the wrong way round (quite a feat of carelessness, but not impossible)

One hell of a bang, and a lot of smoke!

I can't remember whether that machine ever worked properly again.

Leave it to Beaver: Unity is long gone and you're on your GNOME

Peter Gathercole Silver badge

Re: Upgrade, but not right now?

10.02?

What was I thinking when I posted this?

10.04 (Lucid Lynx).

Peter Gathercole Silver badge

Re: Upgrade, but not right now?

This is normal for Ubuntu. It's been like this since at least 10.02.

For the first few months, you need to install from scratch if you want the new release. I guess this is because when installing from scratch, it is easier to know the eventual state of the system, whereas upgrading starts from an unknown state, so more testing is required to make sure that they get all of the dependencies right.

This is doubly so from a 2 year old LTS release rather than the previous non-LTS release.

I'm wondering what my best upgrade path is from 14.04 (I am still not convinced by systemd, and I'm actually using Gnome Fallback). I think I will have to go through 16.04 before 18.04. Still, at least a year of support left for 14.04.

McDonald's tells Atos to burger off: Da da da da da, we're lobbing IT ...

Peter Gathercole Silver badge

Normal marketing speak to mislead..

..is "Our product is made from 100% beef".

Yes. 5% of the total product is 100% beef.

In fairness, this is not what McDonald's claims, although being pedantic, the presence of small amounts of seasoning with the beef is enough to make the claim of "100% beef" wrong.

How about, "our patties are 100% beef, to the nearest integer percent".

Not quite got the same impact, has it.

BT pushes ahead with plans to switch off telephone network

Peter Gathercole Silver badge

Re: Yeah right @Hoppy

If I remember correctly, ISDN basic rate specified a 144Kb/s link, which could carry two voice calls, each using 64Kb/s, plus a 16Kb/s signaling channel.

Also IIRC from my POTS training, analog phone lines used to be filtered to around 3.4KHz (and sampled at 8KHz in digital exchanges), which was regarded as plenty high enough to carry voice communications.

Supreme Court punts on Microsoft email seizure decision after Cloud Act passes US Congress

Peter Gathercole Silver badge

Re: As has been noted...

But ultimately, if the data is on your kit, in your buildings, you have the recourse of air-gapping it, turning it off, putting it through a crusher etc., which will prevent any further data loss. Try getting any of the cloud providers to surrender or destroy the disks or tapes that have held your data when you move away from their service.

You also have much more control about how the data is protected, rather than relying on the promises of one or more third parties, possibly in other countries.

For example, you get to choose the number and type of security boundaries so that you are not so reliant on one single firewall, OS or network router/switch supplier, and you can vet your staff, and take appropriate disciplinary action if they go astray.

Gemini: Vulture gives PDA some Linux lovin'

Peter Gathercole Silver badge

@AC re phone and data.

Ubuntu would be a good place to start for Linux on an Android phone, because Canonical have already done it, albeit abandoned now.

I have a Nexus 4 with Ubuntu Touch on it, and although it is built on top of an Android kernel, it is supposed to be a full Linux (although the display is Mir and Unity). Opening a terminal session does make it look like a more complete Linux than doing the same on an Android device. Phone and data work fine. I've not attempted to put an external keyboard on it (I can't remember if the Nexus 4 supports On-the-Go USB).

In order to run normal apps other than the ones compiled for Unity, it is necessary to have some form of compatibility layer installed to provide something approaching a normal X11 display, but I never managed to get it working. Maybe I should attempt it again.

VMs: Imperfect answers to imperfect problems, but they're all we have

Peter Gathercole Silver badge

Re: Multitasking @lleres

I am aware that prior to Edition 4, UNIX on the PDP-7 supported only one user at a time, but it had the concept of multiple users, albeit one at a time, from earlier than that.

Bearing in mind that in the beginning, it was a side-of-the-desk project, borrowing a system that did not belong to them, it is not surprising that it took a short while to become fully multitasking with multiple concurrent users.

The early '70s were before my (computing) time. I first used UNIX Version (Edition) 6 at Durham University in England in October 1978 (yay, the 40th anniversary of my first using UNIX is coming up), although I had used ICL George 3 or 4 and TENEX as a guest a few months earlier, and MTS at the same time as UNIX. Shared-access computers were a real rarity at the time, especially outside universities and other research establishments.

UNIX must have been quite the breakthrough for those who came across it at the time.

Peter Gathercole Silver badge

..imperfect problem.

I think 'imperfect problem' is a poor label, as it implies that there is a 'perfect problem', which surely must be an oxymoron.

In my view, a perfect problem is one that does not exist!

Peter Gathercole Silver badge

Re: Multitasking @lleres

??

I don't recognize your categorization of "Unix die-hards" being proponents of real time computing.

UNIX was made multitasking almost from the beginning, in order to allow several people to share what was an expensive and scarce resource. But UNIX was not, and never has been, a proper 'real-time' operating system like DEC's RT-11 or RSX-11 (note: there have been real-time extensions, like AT&T UNIX RTR, but they are not really mainstream).

In fact, completely counter to what you said, the movers and shakers of UNIX (Dennis, Ken, Doug and Joe, with Brian less involved) all took an active role in the Multics project. Multics was multi-user and multi-tasking, and the desire when creating UNIX was to preserve many of the good things in Multics on much smaller and less costly systems than Multics needed.

So as a result, UNIX was written, pretty much from the ground up, as a multi-user and multi-tasking system.

In my view, if IBM had chosen a cut-down OS based on UNIX rather than what Microsoft provided, the whole computing world would have been better off. As it was, proper multi-tasking did not appear on desktop-class machines for many years, and Windows was only dragged into the multi-user world very late indeed.

But I take the article's point that the poor implementation of many computer OSs and applications does not provide sufficient isolation between applications. A properly designed OS with the correct resource fences (for CPU, memory and IO) should really do everything that is currently being done by a type 2 hypervisor. Basic UNIX has always provided process and memory separation; AT&T-derived UNIXes had a 'fair share scheduler' back in the 1980s to enforce CPU limits, and AIX has had Workload Manager (WLM) since AIX 4.3.3, which is used by WPARs (Workload Partitions - much like Solaris Containers) for limiting CPU, memory and I/O resource use.
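The idea behind a fair-share scheduler can be sketched in a few lines. This is a hypothetical simulation, not the AT&T or AIX WLM implementation: each group of processes is given an entitlement in "shares", and at every tick the scheduler runs whichever group is furthest below its entitlement.

```python
# Hypothetical fair-share scheduling sketch: groups receive CPU time in
# proportion to their configured shares, regardless of how many runnable
# processes each group contains.

def fair_share_schedule(shares, ticks):
    """Return how many of `ticks` CPU slices each group receives."""
    total = sum(shares.values())
    usage = {g: 0 for g in shares}
    for _ in range(ticks):
        # Run the group with the lowest usage-to-entitlement ratio.
        g = min(shares, key=lambda g: usage[g] / (shares[g] / total))
        usage[g] += 1
    return usage

# Two groups with a 3:1 entitlement end up with 3:1 of the CPU:
print(fair_share_schedule({"batch": 1, "interactive": 3}, 100))
# → {'batch': 25, 'interactive': 75}
```

The real schedulers also decay past usage over time so that an idle group can burst above its share later, but the proportional-entitlement idea is the same.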

A proper OS should enforce memory separation (UNIX has since it was re-written on the PDP-11), although the current Meltdown flaw has shown that Linux (note, Linux is not UNIX) has taken some (in hindsight, and IMHO) poorly thought-out efficiency shortcuts, like mapping most of the kernel address space into each process. UNIX never did this, at least not on the PDP-11, S/370, VAX, and Sun Motorola and SPARC platforms that I know most about.

It would be interesting to look at Intel UNIX ports like SunOS i386, AIX PS/2 (damn, I should know this for this platform), Xenix/386, Interactive UNIX, Microport UNIX and UnixWare to see whether those platforms properly separated the kernel address space from user-land.

Here's the list of Chinese kit facing extra US import tariffs: Hard disk drives, optic fiber, PCB making equipment, etc

Peter Gathercole Silver badge

Re: Liquid elevators

Try an Archimedes Screw.

Linus Torvalds says new Linux lands next week and he’s sticking to that … for now

Peter Gathercole Silver badge
Joke

Re: As for every release @teknopaul

...in vms...

Has the Intel port of OpenVMS got that far already?

Ohhhhh. He meant VMs, didn't he?

Probe: How IBM ousts older staff, replaces them with young blood

Peter Gathercole Silver badge

That is so true.

Who knew? Fabric access NVMe arrays can work with Spectrum Scale

Peter Gathercole Silver badge

Don't know what the fuss is about

As long as storage can be mapped into a *nix device, Spectrum Scale can use it.

What Spectrum Scale can achieve is not just high-speed access, but very high bandwidth to a single filespace from multiple clients. Historically, it has achieved this through a high degree of parallelism across relatively slow (disk-speed) storage.

That's why it is popular in supercomputer installations where speed and file-store size are both important.

What using NVMe will do is reduce the latency, while the higher read speed of each device will reduce the amount of parallelism needed to obtain the required performance.
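A rough back-of-envelope calculation shows the effect (the device speeds are illustrative assumptions, not Spectrum Scale specifics): to hit a given aggregate bandwidth, a parallel file system must stripe across enough devices that their combined streaming rate meets the target.

```python
import math

# Illustrative sizing sketch: how many devices must a parallel file
# system stripe across to reach a target aggregate bandwidth?
# (1 GB = 1000 MB here; sustained streaming rates are assumptions.)

def devices_needed(target_gb_s, per_device_mb_s):
    """Smallest device count whose combined rate meets the target."""
    return math.ceil(target_gb_s * 1000 / per_device_mb_s)

# 100 GB/s aggregate from ~200 MB/s spinning disks vs ~3000 MB/s NVMe:
print(devices_needed(100, 200))   # 500 disks
print(devices_needed(100, 3000))  # 34 NVMe devices
```

Which is the point above: NVMe does not change what the file system does, it just needs an order of magnitude fewer devices in parallel to get there.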

Spectrum Scale also allows managed tiered access to storage of different performance.

The art will be to organize it to get the maximum benefit from that speed.

10 PRINT "ZX81 at 37" 20 GOTO 10

Peter Gathercole Silver badge

Re: Memories... @travellingman

I'm really not sure how much of an asset the Cambridge Programmable would have been in an exam over a normal scientific calculator.

It did not have any way of storing a program when powered off, so you would either have to take it into the exam powered on and risk the battery running out, or remember any program that you wanted to use - not that much of a problem, however, with only 32 (or was it 36) programmable steps.

I did use a high-function Commodore SR4190R in a physics exam at university to do some linear regression that I could not remember the formula for. I worked out the results, then reverse-engineered the calculations to fit so I could present my 'workings'. Non-programmable scientific calculators were allowed, but I suppose it was cheating (a bit). I don't actually think that that exam added much to my overall degree.

Peter Gathercole Silver badge

Re: Memories... @IMG

I'll see your HP RPN calculator (mine was an HP-45), and raise you (because of difficulty in fitting anything useful in the limited memory) a Sinclair Cambridge Programmable and a Commodore PR-100 (I also had a Texas TI-57 programmable at one time, but it went wrong after about 2 weeks, and I got my money back).

I've forgotten all of the other calculators I've had across the years. I still have a TI-58 as an ultimate fallback, but I mainly use my 'phone now.

Interestingly enough, in the past couple of weeks I've had to remind a colleague that some calculators did arithmetic hierarchy (generally TI, and possibly Rockwell), and some didn't (Sharp, Commodore/CBM, early Casio, and most cheap 4/5-function calculators). HP were pretty much a law unto themselves, using RPN.
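The difference is easy to demonstrate. A quick sketch (the function name is mine, purely illustrative): an "arithmetic hierarchy" calculator applies * and / before + and -, while a simple chain calculator evaluates strictly left to right as you key the operations in.

```python
# Chain evaluation, as on a cheap four-function calculator: each operator
# is applied to the running total as soon as the next number is entered.

def chain_eval(tokens):
    """Evaluate [num, op, num, op, num, ...] strictly left to right."""
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b, "/": lambda a, b: a / b}
    acc = tokens[0]
    for op, val in zip(tokens[1::2], tokens[2::2]):
        acc = ops[op](acc, val)
    return acc

expr = [2, "+", 3, "*", 4]
print(chain_eval(expr))  # 20: a chain calculator does (2 + 3) * 4
print(2 + 3 * 4)         # 14: Python, like a TI, applies * first
```

Same keystrokes, two different answers - which is exactly why it was worth knowing which kind of calculator you were holding.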

Peter Gathercole Silver badge

Re: Gateway Drug

I added an external keyboard using a Tandy membrane keyboard, suitably modified by scraping tracks and repainting with conductive paint. This was attached by a ribbon cable long enough that the ZX81 and rampack were some distance from the keyboard. I never had a wobbly-rampack crash after that.

I also added a 7400 TTL chip on a small board in the Sinclair rampack to re-map the 1K of static memory to a usable memory address (which I used for small machine-code assists to BASIC), and also added another 1K of static memory under the keyboard, attached to the ULA side of the bus isolation resistors. This allowed me to change the I register, which holds the base address of the character generator table, to point at an address in this RAM. This gave me a fully programmable character set! So my ZX81 was probably the only one with 18K of RAM!

I also had a sound board from Quicksilva which provided four-channel (3 + white noise) sound using an AY-3-8910 sound generator, added to the video signal using an external modulator. Quicksilva also produced a point-addressable graphics board, but I think that worked by doing a similar trick to mine with the RAM, writing all the characters out to the screen and manipulating the pixels in each character cell. I believe that it came with some M/C routines in an additional ROM that provided basic line-draw commands.

I had great fun getting it to produce harmonized music while drawing it on the screen at the same time. The only problem was that in 'slow' mode, the BASIC was just a bit too slow to make it seamless.

Although it looked a bit Heath-Robinson, it drew some interest in the computer club of which I was a member.

Paul Allen's six-engined monster plane prepares for space deliveries

Peter Gathercole Silver badge

Re: Gerry Anderson thought of it first

I always wondered why the tyres of Lifting Body 1 weren't scuffed when the wing-tips angled down. The main body was not still on its stilts when LB1 was attached.

I made a mean Lego model of Zero-X when I was a kid. It was about 15" long, and used nearly all of the Lego that we had. The colours were wrong, of course, and as all large Lego models were, it was a bit fragile.

Unfortunately, I did not take any pictures, because I didn't have a camera at the time.

Intel gives Broadwells and Haswells their Meltdown medicine

Peter Gathercole Silver badge

Re: New processor? - NO! @chasil

The retpoline fix, IMHO, is not a complete mitigation for Spectre V2.

What has been described is a change to the compiler/compiler flags used when the kernel was compiled.

As I understand it, retpoline prevents an indirect call from generating speculative execution at the point of the call, so now that the kernel has been compiled with this fix, it will not have any speculative execution occurring whenever it performs such a call.

But what runs on these systems is more than just the kernel. Compile time fixes need to be performed on all code that runs on the system, and the kernel is just part of a running system.

You would also have to re-compile the whole of the repository if you wanted to roll this out to an Ubuntu system, and that pre-supposes that you don't have any code compiled without the retpoline options present on your system.

But even this is not enough. If there was a remote execution vulnerability that allowed executable code to be dropped onto your system and executed, then you have ABSOLUTELY NO CONTROL over whether that has a retpoline fix in it, and you can bet your last dollar that the code would not have the workaround.

Add to that the possibility of locating sequences of bytes already on the system that form valid code for performing a Spectre V2 attack, and you should be able to see that retpoline fixes in the kernel alone are seriously not enough to mitigate this attack.

Just my 2 pennies' worth.

Nokia tribute band HMD revives another hit

Peter Gathercole Silver badge

Re: I still have a 7110... @msknight

Definitely had a spring, although the tracks that connected the microphone in the slide were the weakest point. As soon as the contacts got dirty (and they were exposed to the air), the microphone stopped working.

It was an easy fix, but tedious to do frequently.

OpenBSD releases Meltdown patch

Peter Gathercole Silver badge

@Zippy

Just think how I feel.

In October, I will celebrate the 40th anniversary of logging on to a UNIX system for the first time.

Cue up the real grey-beards...

If you haven't already killed Lotus Notes, IBM just gave you the perfect reason to do it now, fast

Peter Gathercole Silver badge

Re: CVE-2018-1383 @Seven

I took the efix apart yesterday (it is publicly available to anybody and can be examined with anything that understands tar); the description is "ABSTRACT=CAA clcomd fix", and the only thing shipped with it is a replacement for /usr/sbin/clcomd.

Whilst it is true that this fileset is shipped as part of AIX (although only usable on Standard and Enterprise editions, not Express), it is only needed on systems that are clustered in some way. I know it is needed by PowerHA SystemMirror (HACMP), but I suspect that it may also be used by some of the other cluster services like Spectrum Scale (GPFS) and maybe other things that use RMC/RSCT, although it is not used for communication with the HMC.

The published APARs contain virtually no information about the nature of the vulnerability, so it would require internal knowledge to definitively know what the problem is.

Maybe the AC who replied to you actually has seen something to confirm your guess.

I am currently involved in running a mixed estate of clustered and non-clustered (PowerHA) AIX systems, and clcomd is generally not running on the non-clustered systems.

Home taping revisited: A mic in each hand, pointing at speakers

Peter Gathercole Silver badge

Re: C90

Ah, the AD90. I got through dozens of the things. Much higher quality than the D90s, and better than the Maxell equivalents (IMHO), but much cheaper than the pseudo-chrome SA90s. The equalization bias was such that they tended to produce a slightly bright sound on most hi-fi, so it was best to use a record deck that did not produce too much surface noise.

I remember splicing an extra 5 minutes onto some tapes to record the two sides of some LPs onto a single side of an AD90 (although the TDKs had about 46 minutes of tape as measured on my JVC KD720 hi-fi deck). I think one of them was Genesis' Wind & Wuthering, and I had Meat Loaf's Bat out of Hell on the other side (if any record company is reading, I have since bought both on CD, so you still made a sale!)

In general, most LPs were under 20 minutes a side, so would fit on one side of an unadulterated AD90.

I remember there being a country-wide shortage of AD90s sometime around 1980 because it was the tape of choice for most home-tapers.

I avoided Scotch/3M and BASF tapes, because they shed oxide even when new! And I would not touch unbranded tapes at all; even good C120 tapes suffered from print-through, and tended to jam even on good tape decks.

Intel adopts Orwellian irony with call for fast Meltdown-Spectre action after slow patch delivery

Peter Gathercole Silver badge

Re: No replacement

But packaging a Coffee Lake+ part in a Socket 1150/1151 package (at the volume at which Core quads are produced) may be cheaper, especially when you consider that a like-for-like replacement of the mobo and memory in some gaming rigs will cost a similar amount to the processor!

Last year I did a just-behind-the-leading-curve rebuild of one of my son's gaming rigs, and the cost of the mobo and memory was easily more than the processor.

Peter Gathercole Silver badge

Re: No replacement

But they could produce latest generation chips without the design flaw, and package them in the older chip packages. As most Core and Xeon processors are in sockets, it would be possible to do a one-for-one replacement, although you would either have to be happy taking the systems apart yourself, or paying someone to do it.

They could get approximate performance by tweaking the clock multiplier and possibly disabling some cores and L0/1 cache, and I dare say they could also turn off some of the newer features (as they already do for current generation Celeron and the recently re-launched Pentium processors) so that end users did not get the benefit of those not in the older CPUs. They would have to do something with the ID info, because some mobos may struggle to configure the newer chips without a firmware upgrade.

I think that the only thing they might have problems with is the TDP. Underclocking later-generation CPUs would use less power, but I think that they should be generous enough to allow people to benefit from that.

But it would be pretty expensive, so I have no expectation that Intel will do this.

Biting the hand that feeds IT © 1998–2019