* Posts by Peter Gathercole

2924 posts • joined 15 Jun 2007

As far as the gender pay gap in Britain goes, IBM could do much worse

Peter Gathercole Silver badge

Re: What gender gap though?

That is exactly why I started my "At the risk of being branded.." with the comment that it is a bad measurement.

The median pay difference works out the median pay for all men and for all women, and then compares those two figures. (Strictly, the figure I calculate below is the midrange, the midpoint of the lowest and highest salaries, rather than the true statistical median, but it illustrates the point.)

Let's do a thought experiment.

A company has 10 women and 10 men. The 10 women all earn £25,000. Nine of the men earn £20,000, and one earns £50,000.

The women's figure is (naturally) £25,000; the men's figure works out as £35,000 ((20,000 + 50,000)/2). The average (mean) woman's salary is £25,000 and the average (mean) man's salary is £23,000 ((9 × 20,000 + 50,000)/10).

So by this median measure, this company is terrible. It has a £10,000 difference between men and women in favor of men, which corresponds to a 40% pay difference in favor of men! Quick, do something.

But the more realistic measurement, comparing the mean salaries, shows that women on average are being paid £2,000 more than the men, and looking at the mode of the salaries, the women are quite a bit better off than the men. On these measures the company is seriously discriminating against the men.

This is a contrived example, but it is designed to show just how misleading this measure actually is.
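To make the arithmetic concrete, here is a quick Python sketch of the toy figures above (note that the £35,000 figure is strictly the midrange, the midpoint of the lowest and highest salaries, while the true statistical median of the men's salaries comes out at £20,000 — the helper name is mine):

```python
from statistics import mean, median

# Toy salaries from the thought experiment above.
women = [25_000] * 10
men = [20_000] * 9 + [50_000]

def midrange(values):
    """Midpoint of the lowest and highest values (the 'bounding figures' measure)."""
    return (min(values) + max(values)) / 2

print(f"women: mean {mean(women)}, median {median(women)}, midrange {midrange(women)}")
print(f"men:   mean {mean(men)}, median {median(men)}, midrange {midrange(men)}")
```

The three statistics tell three different stories about exactly the same twenty salaries, which is the whole point of the thought experiment.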

Peter Gathercole Silver badge

Re: At the risk of being branded misogynist... @Jellied Eel

<sarcasm>So you are advocating positive discrimination for women, are you? I thought all forms of discrimination were frowned upon</sarcasm>.

The UK legislation provides that women will get all pay rises that happen for all workers, like rate-of-inflation rises. But how many technical companies actually pay any rate-of-inflation rise at all? Pay increments are nearly always based on achievement, and someone not working does not achieve anything. That is still not against the law.

What happens to any in-work qualifications obtained during the time the woman was away? Is she expected to study for them while on maternity leave? Or should she be paid for an achievement she's not earned?

As I've said elsewhere, there are exceptions, as per your 'geek squad' example. I don't dispute that women can be exceptional in a job role, but I'm talking about the general, not the exceptional.

And I think that your point about recruiting reflects my point that whatever you do now will take years to make a significant difference.

Peter Gathercole Silver badge

Re: At the risk of being branded misogynist... @Lusty

OK, let's look at it another way.

Let's assume that it is time served that is rewarded, not age. In the scenario I outlined, a woman who takes three years out will still be three years behind on time served compared to her male counterpart. There's still a disadvantage there.

I accept equal pay for equal work, but most people would like pay increments without having to change role, and someone who has been doing the same job for a number of years may expect to be paid more than someone who has only just started in it. Experience counts in these environments, especially if, as in the technology sector, there are no automatic pay increments and any pay rise has to be justified by achievement. Taking three years off does not achieve anything work-wise, so will not earn pay increments.

Equality has to be equal in all aspects of a job, including experience.

On the subject of keeping up to date, I'm wondering whether you have children yourself. With a young baby in the house (especially if you are the sole carer during the day), it is incredibly difficult to concentrate on anything for longer than their sleep cycle, they are incredibly demanding, and sometimes you just may want to catch up on some sleep as well.

I have four children, and my daughter is currently on maternity leave after her first child. Looking after a baby for 8 months is nothing like taking a vacation for a couple of weeks. You just don't get the breaks. And as my daughter found out when she asked, if you want to work to 'keep your hand in', and can arrange child care, the maternity leave rules impose strict limits on how much you can work without losing the whole of the maternity benefit! So any keeping up will be done at your own time and expense.

I'm not trying to put women down. I have some women friends who do an incredible job of balancing a successful work career and family, but they are an exception, and really have to work far harder than their male counterparts just to keep up, and this is for no reason other than biology and society.

I accept that there are always exceptions, with both high achievers and low achievers, and that some people make it into senior management positions at quite young ages (although probably as career managers, rather than ones building a management position on technical knowledge), but if you were to survey the age at which people reach certain positions in management, I'm absolutely certain that it would be skewed towards middle-aged and older people. Experience counts, and if it were not like this, it would destroy the traditional career path people have grown used to.

Peter Gathercole Silver badge

At the risk of being branded misogynist...

... there are a number of problems in society that make it unlikely that there will ever be full equality, at least in the very misleading median pay gap measurements.

The problems are mainly about the biological nature of the family, i.e. women are much more intimately involved with the process of having a family.

Let's consider the best case scenario. A woman is in a company, being paid the same for the same role as their male counterpart. The woman decides to have a child, and then works up until a few weeks before the baby is due. She has the baby, and then takes the maternity leave offered, and stays off work for a further 8 months or so, as she is encouraged to do for the sake of her baby.

Let's assume she can return to the same job she had before (which isn't always the case).

She's now been away from the workplace for three-quarters of a year. Her male counterparts have had that extra service, seniority (and probably pay increments), and the woman has to come back to re-learn certain aspects of her job. And in a fast moving industry such as IT, she's also got to catch up on the new developments. If she's in a customer facing role, her customers will have got used to a new face, and she will have to get back in with them, or find new customers.

If we assume that her partner will really share all child care duties, this means that she is actually likely to be a year behind her male colleagues. And if it is not equal, this is another impediment.

Multiply this by two or three, and this is the barrier she has to overcome.

But more realistically, career women end up by choice having their children close together. So it may be that instead of several one-year gaps, one per child spread over several years, they end up with a single two or three year gap.

If this happens, it will be much more difficult for her to catch up with her peers, and her original job role may not even exist! So returning to work will be much more difficult. Also, in the past, women have been able to retire earlier than men, removing experienced women from the pool of talent.

Add to this the significant number of women who, because of unequal child care or just personal preference, decide to down-grade or completely sidestep their previous careers, such that they will be unavailable to be considered for high level jobs, and the situation gets worse.

Another aspect of senior technical jobs is that the climb to the higher reaches of the board takes 20-30 years, so the women able to be appointed to these roles now will have joined in the '90s, a time when there were fewer women in technical jobs. We will have to wait 20 more years for any action we take on recruitment now to kick in.

I don't think that there is any real surprise that many of the women currently high up in industry have chosen not to have children, have had them early and restarted their career in their early 20's, or have been able to out-source their child care at the risk of damaging their family life.

Until we have a complete shake-up of society, this pattern of family life will persist, and I can see very little that will significantly alter this in the near future.

Boffins want to stop Network Time Protocol's time-travelling exploits

Peter Gathercole Silver badge

Re: Time NTP was upgraded (See what I did there!)

Unfortunately, things like blockchain, and a lot of historical trading and other financial systems, absolutely need reliable sub-second accuracy in order to record the absolute time of transactions and make sure that the correct sequence is recorded. It is here that, for example, making a transaction look like it happened later (or earlier!) than it actually did could invalidate the transaction (think if someone were able to delay your registration of a newly mined bitcoin, and claim it as their own merely because they could subvert the time your system apparently mined it).

I worked in the electricity distribution industry some time back, and they had a requirement for accurate sub-second time as well, not that I ever asked why (the fact that I was compiling the xntpd source to include the RCC8000 time clock tells you how long ago that was).
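For context, the reason NTP can carry sub-second accuracy at all is its 64-bit timestamp format: 32 bits of seconds since 1 January 1900 plus 32 bits of binary fraction, giving a resolution of about 233 picoseconds. A minimal Python sketch of the conversion to Unix time (function name is mine):

```python
NTP_UNIX_DELTA = 2_208_988_800  # seconds between 1900-01-01 and 1970-01-01

def ntp_to_unix(ntp_timestamp: int) -> float:
    """Convert a 64-bit NTP timestamp (32.32 fixed point) to a Unix epoch float."""
    seconds = ntp_timestamp >> 32          # whole seconds since 1900
    fraction = ntp_timestamp & 0xFFFFFFFF  # binary fraction of a second
    return (seconds - NTP_UNIX_DELTA) + fraction / 2**32

# Example: NTP seconds 3_700_000_000 with a half-second fraction.
ts = (3_700_000_000 << 32) | (1 << 31)
print(ntp_to_unix(ts))  # 1491011200.5
```

Subvert either half of that value in transit and every timestamped transaction on the receiving system shifts with it, which is exactly the attack surface the boffins are worried about.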

Now NHS Digital is going after data on private healthcare too

Peter Gathercole Silver badge


I wish that people would stop conflating care.data with sharing data within the NHS.

Care.data was all about sharing supposedly anonymized data from the NHS outside of the NHS, with people like medical research organizations, drug companies and insurance companies.

Whilst I have no problem with the first of these, I have some concerns with the second, and violent objections to the last, and there are other companies outside of all three of these categories that were being considered for access.

It was demonstrated that the anonymization could easily be undone by combining the anonymous data with well known data and information from social media.

I'm all for making the NHS more efficient by sharing data across different groups within the NHS. Then, for example, my son would not have arrived at his booked hospital appointment for a pretty rare eye condition (and thus one easily identified from this and the region the records were for), only to find that all of his previous records had been misplaced, and could he please tell the consultant what he was there for!

SUSE Linux Enterprise turns 15: Look, Ma! A common code base

Peter Gathercole Silver badge

Re: numbering

Although, to be fair, Solaris was originally a software bundle title (a bit like how IBM packages multiple products under the WebSphere, Tivoli and now Spectrum brands) which contained SunOS 5 along with a variety of other software packages.

I think you're right that Solaris 2 was the point at which the OS generally got to be referred to as Solaris.

I believe the break between SunOS 4.x and SunOS 5 came when Sun rebased SunOS on SVR4, which it had co-developed with AT&T; after UNIX System Laboratories (USL) was wound up, Sun was able to take back complete control over the internals of SunOS.

Peter Gathercole Silver badge

Re: How about Windows skipping 'Windows 9'?

Seven ate Nine.

Peter Gathercole Silver badge

Re: SuSE Linux

I'm pretty sure StarOffice was bought by Sun, not SuSE, and they then took out any proprietary licensed software (I believe the biggest item was the Adabas D database component) to create the open-source OpenOffice product. StarOffice (with the database component) remained a product that Sun would sell, at least for a while.

Oracle then upset the OpenOffice community by ignoring it after they bought Sun, which led to the LibreOffice fork, and then Oracle, who had no real interest in OpenOffice, gave it to the Apache Software Foundation.

IBM took a fork of OpenOffice to try to produce a product more compatible with MS Office, which I believe they called Lotus Symphony (although there had been a previous Lotus Symphony product, a spreadsheet on steroids, back in the '80s). Symphony died the same death as SmartSuite (which I actually quite liked) as a result of IBM apathy.

I don't know where this actually leaves StarOffice. I guess it's still an Oracle product, but whether it is still available is an interesting question.

Ubuntu reports 67% of users opt in to on-by-default PC specs slurp

Peter Gathercole Silver badge

Re: Really small systems

I've not put 18.04 on any of my machines yet, but I do have a casual-use system in the bedroom running 16.04: an Acer dual-core Atom netbook clocked at about 1.6GHz with 1GB of memory and an 8GB SSD. The SSD is abysmally slow, so I run it off a normal install (not a live distro) on a 32GB micro memory card in a USB reader.

It works OK for browsing and YouTube videos, but I would not want to use it for anything serious. And Firefox's lax memory management means that it is necessary to stop and start Firefox on a relatively regular basis. I can't believe how frequently Firefox just grows to consume all the available memory, regardless of how much you have (it's driven my normal 8GB CoreDuo Thinkpad into paging more times than I care to remember).

I'd like to run the Netbook off an SD card in the MMC slot, but the BIOS does not support booting from that device, and I've not (yet) managed to get the boot partition on the SSD to successfully boot the kernel from the MMC (it's something to do with the modules loaded into the GRUB image - I'll get there).

Now Microsoft ports Windows 10, Linux to homegrown CPU design

Peter Gathercole Silver badge

Garbage recycling analogy

Whilst your analogy is clever, it doesn't really describe mainstream modern processors.

What you've ignored is hardware multi-threading like SMT or hyperthreading.

To modify your model, this provides more than one input conveyor to try to keep the backend processors busy, and modern register renaming removes a lot of the register contention mentioned in the article. This allows the multiple execution units to be kept much busier.

The equivalent to the 'independent code blocks' are the threads, which can be made as independent as you like.

I've argued before that putting the intelligence for keeping the multiple execution units busy into the compiler means that code becomes processor-model specific. This is the reason it is necessary to put the code annotation into the executable: to allow the creation of generic code that will run reasonably well on multiple members of the processor family.

Over time, the compiler becomes unwieldy and lags the processor timelines, blunting the impact of hardware development. But putting the instruction-scheduling decision making into the silicon, as in current processors, increases the complexity of the die, which becomes a clock-speed limit.

I agree that this looks like an interesting architecture, and that there may be a future in this type of processor, but don't count out the existing processor designs yet. They've got a big head start!

Intel confirms it’ll release GPUs in 2020

Peter Gathercole Silver badge


I would go one stage further. I can see the GPU becoming not just a co-processor on the same die, but a set of execution units in a super-scalar processor. Once this happens, writing code for the GPU will be much easier, as compilers will be able to target it directly, rather than relying on the somewhat haphazard methods being used now.

Intel chip flaw: Math unit may spill crypto secrets from apps to malware

Peter Gathercole Silver badge

Re: Pedantic spelling

This was the subject of conversation on at least two Radio 4 programmes in the last couple of months (one of which was More or Less on Friday 11th May - available as a podcast), and the general conclusion, from representatives of various linguistic and maths related institutions, was that both terms were correct.

This was backed up by several references to documents, both from England and America going back a couple of hundred years, and as a result it is largely personal choice as to which is used.

I'm actually with you on Maths, but it is an interesting listen.

Peter Gathercole Silver badge

Re: Performance on maths code?

Just a guess, but I expect that it's a small bit of code that sanitizes the floating point registers on a context switch. Bound to have some performance impact, but probably only a small one.

My second guess is that if a process has not used any floating point registers, the OS makes no attempt to save and restore them, nor to clean them on a return from a context switch (saving space in the stack frame, and not running the code to copy the registers). If this is the case, then any code that does use floating point registers will do a save/restore when a context switch occurs anyway, so there may not be any performance impact at all for code that uses floating point instructions.
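That second guess — lazy FPU state handling — can be sketched as a toy simulation (purely illustrative Python; the names and structure are invented for the sketch, not taken from any real kernel):

```python
# Illustrative simulation of lazy FPU state handling on a context switch.
class Task:
    def __init__(self, name, uses_fpu):
        self.name = name
        self.uses_fpu = uses_fpu
        self.saved_fpu = None  # per-task copy of the FPU registers

def context_switch(prev, next_task, hw_fpu):
    """Switch from prev to next_task, touching FPU state only when needed."""
    if prev.uses_fpu:
        prev.saved_fpu = hw_fpu.copy()   # save the registers for later restore
        for i in range(len(hw_fpu)):     # scrub them so nothing leaks onward
            hw_fpu[i] = 0.0
    if next_task.uses_fpu and next_task.saved_fpu is not None:
        hw_fpu[:] = next_task.saved_fpu  # restore next_task's own registers

hw = [3.14, 2.71, 0.0, 0.0]              # the hardware FPU register file
crunch = Task("crunch", uses_fpu=True)   # a floating-point-heavy process
shell = Task("shell", uses_fpu=False)    # an integer-only process
context_switch(crunch, shell, hw)
print(hw)  # the integer-only task never sees crunch's values
```

For an FPU-using task the save/scrub/restore happens anyway on each switch, which is why the fix may cost integer-only workloads nothing at all.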

Which? calls for compensation for users hit by Windows 10 woes

Peter Gathercole Silver badge

@AC re: Free.

Firstly, it's not free to all users. Some people have actually paid for it, and on new PCs, I'm sure the OEM will have paid something to put a valid Windows license on them.

Secondly, the extreme measures they used to try to persuade users of previous versions of Windows to upgrade may weaken any 'user choice' arguments they may try to use.

But doesn't Microsoft palm off all support for Windows on OEM systems to the manufacturer of the PCs? I think they only offer direct support to people who buy retail or enterprise licenses.

IBM to GTS: We want you to 'rotate' clients every two years

Peter Gathercole Silver badge

Re: Making It Worse?!

IBM support have pretty rigid rules about what a Sev.1 is, and they normally demand a 24-hour contact from the integration team or end customer (often, if you are working on a customer account, the support contract will be a customer-specific one, not one for the IBM account team).

If they attempt three contacts out-of-hours that are not answered, they will automatically drop the severity, on the quite reasonable assumption that if the customer is not prepared to work 24 hours a day, why should IBM?

When I worked in IBM support, not only was there a severity that was set by the customer, but there was a priority which was set by the support team. Not too sure whether that happens now, but calls could be graded S1P3, which meant that it was important to the customer, but IBM did not judge it a high priority.

Also, when I was working, support calls were expected to be defect-only. If the problem was obviously a how-to, we were supposed to try to sell some consultancy, although this does not work very well when you're talking to an IBM account team (you know how it goes: "this work is sooo important, and will bring in $$$ to IBM [but not to the support team], so you've just got to make it Sev.1").

Monday: Intel touts 28-core desktop CPU. Tuesday: AMD turns Threadripper up to 32

Peter Gathercole Silver badge

Re: Maths co-processor?

The Tube wasn't even really a bus. It was a fast synchronous I/O port that kept the original BBC Micro running, but as a dedicated I/O processor handling the screen, keyboard, attached storage and all the other I/O devices the BEEB was blessed with, while the processor plugged into the Tube did all of the computational work without really having to worry about any I/O. All of the documented OSCLI calls (which included storage, display and other control functions) worked correctly across the Tube, so if you wrote software to use the OSCLI vectors, it just worked.

When a 6502 second processor was used, it gave access to almost the whole 64KB of available memory, and increased the clock speed from 2MHz to 3MHz(+?) IIRC. Elite was written correctly against those vectors, and ran superbly in mode 1 without any of the screen jitter that resulted from the mid-display mode change on a normal BEEB (the top was mode 4 and the bottom was mode 5, to keep the screen down to 10KB of memory). It worked really well, and even better with a BitStik as the controller.

I also used both the Acorn and Torch Z80 second processors, and I know that there were Intel 80186 (running DOS), NS32016 (running UNIX, used in the Acorn Business Computer range) and ARM second processors built as well.

Peter Gathercole Silver badge

Re: Intel was fudging

I think you would be surprised about how closely related the Power and Mainframe processors are nowadays.

With the instruction set micro- and millicoded, the underlying execution engines rely on some very similar silicon.

Oh, and there have been relatively recent Power6 and Power7 water-cooled systems, the 9125-F2A and -F2C systems, but only a relatively small number of people either inside or outside of IBM will have worked on them (I am privileged to be one of them). These were water-cooled to increase the density of the components rather than to push the ultimate clock speed. The engineering was superb.

And... they were packaged and built by Poughkeepsie, next to the zSeries development teams, and use common components like the WCU and BPU from their zSeries cousins.

There was no Power8 system in the range, because of the radical change to the internal bus structures in the P8 processor. I don't know whether there will be a Power9 member of the family, because I'm no longer working in that market segment.

Peter Gathercole Silver badge

Re: Intel was fudging

Yes, but even IBM has backed off from pushing the clock speed to add more parallelism.

The Power6 processor had examples clocked at 4.75GHz, but the following Power7's clock speed was reduced to below 4GHz (though the number of SMT threads went from 2 to 4, and more cores were put on each die, again 2 to 4). Power8 kept the speed similar, but again increased both the SMT and the cores per die.

In order to drive the high clock speeds in Power6, they had to make the processor perform in-order execution of instructions. For most workloads, adding more execution units, reducing the clock speed, and putting out-of-order execution back into the equation allowed the processors to do more work, but could be slower for single-threaded processes.

The argument about compiler optimization really revolves around how well the compiler knows the target processor. Unfortunately, compilers generally produce generic code that will work on a range of processors in a particular family, rather than a specific model, and then rely on run-time hardware optimization (like OoO execution) to get the best out of the processor.

In order to get the absolute maximum out of a processor, it is necessary to know how many and what type of execution units there are, and to write code that will keep them all busy as much of the time as possible. Knowing the cache size(s) and keeping them primed is also important. SMT or hyperthreading is really an admission that generic code cannot keep all of the execution units busy, and that you can get useful work done by having more than one thread executing in a core at the same time.

I will admit that a very good compiler, targeting a specific processor model that it knows in detail, is likely to produce code that is a good fit. But often the compiler is not that good. You might expect the Intel compilers to reflect all Intel processor models, but my guess is that there is a lead time for the compiler to catch up with the latest members of a processor family.

I know a couple of organizations that write hand-crafted Fortran (which generates very deterministic machine code - which is examined) where the compiler optimizer rarely makes the code any faster, and is often turned off so that the code executes exactly as written. This level of hand optimization is only done on code that is executed millions of times, but the elimination of just one instruction in a loop run thousands of millions of times can provide useful savings in runtime.

As long as an organization believes that hand-written code delivers better executables, it can justify the expense of producing it. It's their choice, and a generalization about the efficiency of compiler-generated code is not a reason to stop when faced with empirical evidence. Sometimes, when pushing the absolute limits of a system, you have no choice but to make the code as efficient as possible using whatever means are available.

US govt mulls snatching back full control of the internet's domain name and IP address admin

Peter Gathercole Silver badge

Re: internet freedom @Ole re: alternate root

Whilst it is true you can do this for a DNS alternative, it is not possible with the numeric IPv4 or IPv6 address spaces. This is because no single organization controls the routing tables outside of its own networks. You certainly could give your network any IP address you wanted, but persuading an upstream ISP to route to that set of addresses without it being properly registered isn't going to happen (and the problem only gets worse as you get further from your network).

It would be possible to use VPNs across the current Internet proper to tunnel a private address space, but you could not really call that an alternative Internet. At best, you would regard it as a parasitic network, relying on the thing you want to replace for its existence.

To really set up an alternative Internet, you would need an alternative global router network, which would be very expensive to set up. But some global companies do run trans-national intranets, like most of the owners of the class A and many of the class B address ranges. But these are (again) not really an alternative to The Internet.

Foolish foodies duped into thinking Greggs salads are posh nosh

Peter Gathercole Silver badge

Food resembling other food

I was in a chain Italian restaurant and ordered the veal (I know, it was one of those "I've got to try it once" moments overriding any ethical thoughts), and I was very disappointed to get a plate containing something that looked and tasted like a Bernard Matthews turkey steak with half a tin of Heinz spaghetti in tomato sauce and a quarter of a bag of Florette small-leaf salad.

Maybe it was, and I was just duped!

A Reg-reading techie, a high street bank, some iffy production code – and a financial crash

Peter Gathercole Silver badge

Re: QA's fault @Phil re: lint

This code is not a no-op. It will change the value in the TOTAL_EXPOSURE variable each time it goes around the loop, so there is no reason for lint to pick it up.

And even though it would not be efficient, the code does leave the value of the last POSITION.EXPOSURE in the TOTAL_EXPOSURE variable. I can't see why someone would code this, but it is possible (especially if the variables were less meaningfully named) that this was the intended result.
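The gist of the bug can be sketched hypothetically in Python (the loop shape and figures are invented for illustration; only the variable names echo the article's snippet):

```python
positions = [1200.0, 850.5, 3300.25]  # invented example exposure figures

# Buggy version: plain assignment inside the loop overwrites the running
# total, so it ends up holding only the last position's exposure.
total_exposure = 0.0
for exposure in positions:
    total_exposure = exposure        # bug: '=' where '+=' was intended

buggy_result = total_exposure        # just the final element

# Intended version: accumulate across the loop.
total_exposure = 0.0
for exposure in positions:
    total_exposure += exposure

correct_result = total_exposure      # the genuine total

print(buggy_result, correct_result)
```

Both versions are perfectly legal code and neither is a no-op, which is exactly why a mechanical checker sails straight past it.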

Peter Gathercole Silver badge

Re: Or... @John H Woods

The issue with what you said is contained in the term "modern language".

I don't believe that the article said anything about when the error was coded. At one time, C, Pascal, Algol, PL/1, FORTRAN et al. were all regarded as modern languages, and none of them had a construct to sum the elements of an array without a loop. But I suspect that you already know something about older languages, as you give a snippet in Smalltalk.

And then there is APL, one of the oldest high-level languages around, which would allow you to sum across a slice of an array in a single operation, to the point where there is not even an explicit loop construct in the language (don't ask me to write the code; it's nearly 40 years since I wrote any APL in anger, and I don't want to work out how to represent the Greek characters necessary to show it here).
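In Python terms, that APL-style whole-array reduction (the spirit of APL's +/) looks something like this (the data is an invented example):

```python
# Summing without an explicit loop statement in the source code.
exposures = [
    [100.0, 200.0, 300.0],
    [10.0, 20.0, 30.0],
]

row_totals = [sum(row) for row in exposures]        # sum along each row
col_totals = [sum(col) for col in zip(*exposures)]  # sum down each column
grand_total = sum(map(sum, exposures))              # sum of everything

print(row_totals, col_totals, grand_total)
```

With the looping buried inside the reduction operator, there is simply nowhere for an accumulate-versus-assign slip to hide.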

Looking at problems with a different perspective often gives different answers.

Peter Gathercole Silver badge

@A Non e-mouse

The shortcuts ++, --, += and -= were designed to allow the code to map onto the instruction set of the PDP-11 (and probably the PDP-7 before it), because there were auto-increment and auto-decrement addressing modes on these processors.

This made it possible for a skilled programmer to write code that would generate fewer instructions (and thus be faster), rather than seeing whether the compiler would spot the possible short-cuts.

Remember, when B (the successor to BCPL and forerunner of C) was written, the machines Ken and Dennis had were only just capable of running a compiler at all, and code optimization was completely out of the question.

The systems were really slow. When I got my first UNIX Version 6 (and later Version 7) system to look after, a long time after UNIX was first written, compiling the kernel took over 4 hours (and I had relatively fast disks), and I never did get around to recompiling the tool set; I just used what came in from the distribution tape. It got to the point where I would touch many of the .o files and the libraries for the bits I had not changed, just to fool Make into not going the whole hog and compiling everything.

This direct mapping of high level code to machine instructions is why many people used to refer to C as a two-and-a-half generation language, and suitable for writing efficient code for operating systems.

Nowadays, where the systems are so obscenely fast as to make compiling code a relatively trivial operation, adding optimizers into the compiler such that these short cuts are not necessary is a no-brainer, so they could be deprecated, but they're written into the standards, and C has spawned a huge number of C-like languages that have taken much of C syntax into themselves verbatim.

Peter Gathercole Silver badge

Re: QA's fault @Phil

Why would a code analyzer pick this up?

The snippet looks sufficiently like C for me to generalize, and what is written is quite valid code, just not doing what was intended.

A code analyzer like lint will recognize things like the argument or argument types being wrong, code that will never be run, or integer/pointer/data object size mismatches. And when it comes to lint, most modern C compilers and their optimizers will do a better job than lint if the correct options are turned on.

Unless you have a meta-language in which you code the requirement separately to check the code, you will not pick up logic errors like this. And if you have such a meta-language, firstly the problem has to be correctly coded in it, and secondly, if it could check the code, it could have written the code in the first place, so why employ a programmer?

Finally: Historic Eudora email code goes open source

Peter Gathercole Silver badge

Re: Email is fundamental to modern life

Whilst I respect David Harris's position regarding Linux, I suspect that if he is still working with Pegasus, he needs to at least update his blog regarding his position. It's dated April 2005.

Reading it, I don't think he's really understood the GPL and LGPL. Producing a free package that runs on Linux does not necessarily mean that the package needs to be open-sourced or published under the GPL, as long as it is put together correctly. It is perfectly possible to produce binary-only software for free distribution under another license, or even commercially, as long as you do not incorporate any GPL code in your code-base. Most of the C and C++ libraries required to compile your own packages are published under the LGPL, which allows them to be linked either statically or dynamically into a binary package.

This fact annoys some of the Open-Source stalwarts who want to convert the whole world to software that is free and open (RMS, I'm looking at you), but the licenses were written the way they were for a reason.

I appreciate that if he uses an editor from a third party as part of the package, then he would have to get some agreement on that, but Linux repositories are full of editors, which provided they are run as separate processes, can be called quite freely from another program without any licensing issues. Using it as a widget may be a little more problematic, although much of, say, Qt or GTK+ are published under LGPL, so there will be editor widgets in them somewhere.

The issue of support is only one of degree. At the time of writing the blog, he was doing it for Windows, so doing the same for Linux, once the learning curve has been followed, would not be significantly different, just more.

But given the date on the blog, and the overall age of the software, I suspect that he is just not interested in porting the product, and if this is the case, when Microsoft starts removing some of the legacy APIs in Windows, the Windows package may be doomed in the long term.

Opening it up to other developers is the only real way to keep the package alive over the longer term. And if Mercury is actually a functional email server, then a Linux port, even a commercial one, would be really welcome.

Dixons to shutter 92 UK Carphone Warehouse shops after profit warning

Peter Gathercole Silver badge

Re: Are Dixons...

I don't think I have any horror stories, but I have to admit that I did not make a habit of buying from them after my first experience.

Their own brand products were built down to the lowest quality they could get away with. I had a Printztronic Mini Scientific calculator bought for my birthday when I was 16 as the SMP Maths syllabus allowed calculators at A-level (but not in the exam). In function, it was exactly the same (and I mean exactly) as the Sinclair Cambridge Scientific (not the RPN one) and a similar size, but believe it or not, the Sinclair was built better!

Instead of engraved or molded (or even screen printed) legends on the buttons, the Printztronic had transparent plastic buttons, with a printed sheet underneath that you read through the button. In addition, the metal bubbles for the contacts were held on to the PCB with adhesive tape, rather than the sealed sandwich the Sinclair calculator used.

I regularly had to dismantle the thing, clean the contacts and replace the tape after one or more of the buttons stopped working, and I ended up re-drawing the legends on the paper sheet when it wore out. I guess most people would have tossed it, but I fix things to keep them working (and still do!)

I kept it going for a couple of years until I persuaded my parents to get me a Commodore SR4190R for University (another birthday present), a much better calculator. This was not bought from Dixons.

Peter Gathercole Silver badge

Re: notably National Living Wage @AC

I think you missed out a * 52.

What you've done is worked out the weekly increase in the total wage bill, not the annual increase. So,

30K (number of NLW employees) * 0.33 (hourly increase) * 40 (hours per week) * 52 (weeks per year) = 20,592,000 (yes, that's over 20 million.)

Divide by 300,000,000 and multiply by 100 to get percentage = 6.9%
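The arithmetic can be checked in a few lines of C (figures as in the post: 30,000 employees, a 33p hourly rise, a 40-hour week, 52 weeks, against £300M of profit):

```c
/* Annual wage-bill increase from the NLW rise, and its share of profit */
double nlw_annual_increase(void)
{
    return 30000.0 * 0.33 * 40.0 * 52.0;   /* employees * rise * hours * weeks */
}

double nlw_share_of_profit(void)
{
    return nlw_annual_increase() / 300e6 * 100.0;   /* percent of £300M */
}
```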

This is still quite small, but more than the insignificant figure you quoted, and definitely more than the annual rate of inflation. A business cannot take even this loss of profit for a number of years without it having an effect (on the dividend and share price, at least).

In practice, what is happening is that people above NLW are not getting any increase until the rising NLW reaches their wage, at which point they will be swept up, and I predict that we will see the number of jobs that are at, or close to, the NLW go up significantly over the next few years.

Can't pay Information Commissioner's fine? No problem! Just liquidate your firm

Peter Gathercole Silver badge

Re: Liquidate company to avoid paying

In some cases, liquidating the company is the only option. If there is no chance that the finances of a company could pay the fine, then the company is technically insolvent (i.e. not able to satisfy the creditors, which include the body issuing the fine), and entering a CVA or starting insolvency proceedings is probably the correct thing to do.

The real problem is when a company that is solvent and potentially able to pay the fine is voluntarily wound up. In cases like this, the fine should still be paid, because the ICO should be registered as a creditor, and if the company is solvent, then all the creditors should be paid. My suspicion is that the directors will actually find some way of extracting money from the company before starting the insolvency proceedings, in a way that makes the company insolvent but allows them to pocket the cash.

There can only be a small window where a company knows that a fine is likely, and declares itself insolvent before the fine is issued, where they might get away with this, but as there is a relatively lengthy process to identify creditors, I'm not sure that this would really work.

In cases where a potentially solvent company is voluntarily wound up as insolvent, the directors should already be liable, because actions deliberately driving a company insolvent must be at least negligence, if not corporate misconduct.

I suspect that it is just too difficult to prosecute these cases.

Welcome to Ubuntu 18.04: Make yourself at GNOME. Cup of data-slurping dispute, anyone?

Peter Gathercole Silver badge

Re: Dude @Camilla

Most people have dynamically allocated IP addresses provided by their ISP. The ISP can identify the account from the IP address and the time, but whether the IP address is enough for the ISP and everyone else probably depends on how long the lease time is for the dynamic IP address.

But even the account owner name does not definitely identify the user by itself, unless only one person uses it. For example, during the week I stay in a shared flat with four other people, and the broadband account is in the landlord's name.

Of course, if you pay for a static IP, then yes, it is likely that you will be easier to identify, and of course by combining the IP address with other information (like the cookies in your browser, and whether you're logged in to a Firefox or Google account) many more things can be found out about you (I'm pretty sure Firefox ties together multiple devices I use by profiling the usage pattern, even though I don't enable the sync feature).

Expect this last behavior to increase as time goes by.

Das blinkenlights are back thanks to RPi revival of the PDP-11

Peter Gathercole Silver badge

Re: The PDP-11 lives on

A regular instruction set was really a requirement in the early days of computing, as grouping the instructions allowed you to reduce the amount of logic in the instruction decoder, as did using the same addressing modes for different instructions.

What I found really interesting about the PDP11 instruction set was that the stack pointer and program counter were implemented just like the general purpose registers, a fact that became obvious if you looked at the op. codes generated for jump and stack-manipulation instructions.
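This shows up directly in the instruction encoding. In the standard PDP-11 double-operand format (a 4-bit opcode followed by two 6-bit operand specifiers, each a 3-bit mode and a 3-bit register), the PC is simply register 7, so `MOV R0,PC` (octal 010007) is a perfectly legal jump. A small decoder sketch:

```c
/* Pull the fields out of a PDP-11 double-operand instruction word.
 * Layout (binary): oooo mmmrrr mmmrrr -- opcode, source mode/reg,
 * destination mode/reg.  R6 is the SP and R7 is the PC. */
unsigned pdp11_src_reg(unsigned word)  { return (word >> 6) & 7; }
unsigned pdp11_dst_reg(unsigned word)  { return word & 7; }
unsigned pdp11_dst_mode(unsigned word) { return (word >> 3) & 7; }
```

Decoding 0010007 gives destination mode 0, register 7: a plain register move whose target just happens to be the program counter.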

Remember that the CPU of the PDP11/70 and others of the same generation were mainly constructed from 7400 series TTL in normal DIL packages, which explains why there were so many boards. IIRC, the CPU on 'my' 11/34a was four boards for the CPU, one of which was an FP-11 floating point processor, and another of which was the 22-bit memory controller (it was a SYSTIME special, PDP11/34s did not normally have 22-bit addressing).

Peter Gathercole Silver badge

Re: How noisy are the cooling fans? @Jake

My slipstick is a Faber Castell log-log slide rule. I would like to say that it is the same one that was bought for me in 1971 when I went to senior school, but that got lost in one of many house moves, and I had to do a like-for-like replacement from eBay.

Although I think I still know how to use all of the scales (it's got around 20 different ones), I don't do the type of maths that it's best suited for very often.

I have one of my Grandfather's slide rules, probably dating back to the 1930s, that he would have used at the RAE in Farnborough (it was one of the UK's primary aircraft research institutions). It's engraved polished ivory on wooden slides, but feels so fragile that I don't play with it very much.

When it comes to abacuses, not me. I used a blackboard and chalk and counting gates when counting sheep and hay bales on my father-in-law's farm before he retired.

Meet Asteroid, a drop-in Linux upgrade for your unloved smartwatch

Peter Gathercole Silver badge

Re: Is Linux the best starting place for a watch OS?

Well, Ubuntu Touch, which was dropped by Canonical, has got a second life as a community supported Linux phone OS, although it does still use the Android kernel.

If your phone is not already on the support list (and I admit it's not huge), there are people who will help you attempt a port.

Off with e's head: E-cig explosion causes first vaping death

Peter Gathercole Silver badge

Re: Here we go again

Some of these devices have stupidly large batteries.

My suggestion is to set a maximum capacity limit on the batteries, so that they have less energy to dump if they go wrong. They could still injure, as any rechargeable battery could, but would be less likely to kill.

But I realize that even alkaline AA cells are an explosion risk, and can get really hot when shorted. bigclivedotcom should have used his "explosion containment pie dish" when just dismantling one of a certain discount supermarket's AA rechargeable batteries, but instead ended up burning his bench!

Britain to slash F-35 orders? Erm, no, scoffs Lockheed UK boss

Peter Gathercole Silver badge

Re: The curse of the F-35......... @CliveS

I am aware of the kinetic storage that is used. But you're still talking about diverting a significant amount of the available power into the catapult while you are recharging it, probably at the same time as you're trying to drive the ship forward, and maybe operating the weapon systems.

Also, the figure I quoted was the total of both gas turbine generators, and all four diesel generators. I'm not sure that you can gang all this power together, but I admit that I did get the sum wrong. The total is actually 118.8MW, not 82.4MW (I only counted one of the gas turbine generators).

What I was contrasting was the fact that HMS QE has less total power than the existing Yank carriers that are regarded as having too little electrical power to operate EMALS.

And, yes, I do understand that Nimitz and Ford class carriers have four catapults, whereas they were only considering fitting one on the QE class. But if you were intending to exclusively use non-STOL aircraft, would you really want to rely on a single aircraft launch system when aircraft are your primary defense?

Peter Gathercole Silver badge

Re: The curse of the F-35.........@John Brown

If I remember my Biggles, you might want Camels rather than Pups. I think Pups were trainers...

Peter Gathercole Silver badge

Re: The curse of the F-35......... @Aladdin Sane

You might think that.

HMS Queen Elizabeth has a total electrical generation capability of 82.4MW, which provides power for moving the ship and all other electrical demands on board.

The Gerald R. Ford class which will have EMALS will have about 600MW of electrical generation, which is in addition to the steam for the turbines that move the ship (i.e. none of the electrical power is used to move the ship).

The Nimitz carriers have about 200MW of electrical generation, which is also in addition to the steam for steam turbines that moves the ships, and that is considered too little to consider fitting EMALS on the current carriers.

(All figures are from Wikipedia)

So, you still think that QE has enough spare power for an EMALS catapult?

IBM bans all removable storage, for all staff, everywhere

Peter Gathercole Silver badge

Re: Poorly thought through

When IBM built their own laptops, and for a few years after the sale of the Thinkpad brand to Lenovo, IBMers working in secure environments within IBM, or on customer's own secure sites (generally those requiring some form of government security clearance) had to have Thinkpads without webcams.

Now they are buying from third parties, they do not have the same control over the devices they can get (and they don't want laptops built to their specification), so the users are instructed to cover the camera lens.

In addition, phones with cameras used to be banned (if you had one, you had to leave it outside of the secure area). Now, as IBM no longer buy phones for their workers at all (the worker provides the phone, IBM provide a SIM) the prohibition is that you must not use a camera within one of these secure areas.

All in all, less control rather than more.

Ubuntu sends crypto-mining apps out of its store and into a tomb

Peter Gathercole Silver badge

Re: The problem is the mindset behind it

That was one of the things I most miss about PalmOS. You just knew that you would have a note application, a very functional calendar application, a calculator application and a contacts application (which was integrated into the phone on Treo devices).

They were always in the same place, always worked the same, and the data was portable between devices without having to hand the data over to Google, Apple, Microsoft, or your 'phone vendor when you upgraded your device.

Even with the web or internet based sync tools, I've always found problems going from one Android device to another.

Windows Notepad fixed after 33 years: Now it finally handles Unix, Mac OS line endings

Peter Gathercole Silver badge

Re: Notepad++ @Baldrickk

I said 'by default'.

Of course you can change these things. What modern software doesn't allow you to change everything about it?

But I just like to do the work I'm paid for, rather than fiddling around configuring the tools I have to use (note, I use about 12 different locked down windows environments using Citrix, and I would have to change them all separately). And, as I'm a UNIX/Linux person (and have been UNIX since long before DOS, let alone Windows existed) without huge amounts of in-depth Windows experience (yes, I use Linux exclusively on my home systems), I do not find Windows and Windows software intuitive to configure.

And, yes again, all systems have to have a default set of settings. I just don't agree with a significant number of those made by modern developers (like hard-coding ANSI escape sequences into scripts and documents!)

Peter Gathercole Silver badge

Re: Emacs... no Vi.... no Ed!

Upvoted for reference to EDT.

Peter Gathercole Silver badge

Re: Notepad++ @veti

Except that Notepad++ has many 'features' turned on by default, like tabs, character counts, syntax highlighting, a minimalistic font and especially launching with the files that you had open last time you had it open.

Well, actually, my biggest gripes are the syntax highlighting and opening with the files you used last time. When I open an editor, I expect to see either a blank file, or the file I passed as an argument or opened it with.

I know I'm a Luddite in some respects, but I really dislike having coloured text supposedly highlighting something with the choice of colours that the developer thought was a good idea, especially when, like the default alias for 'ls' set up on many Linuxes, I run with a different background colour which makes the developers choice stupid.

Second wave of Spectre-like CPU security flaws won't be fixed for a while

Peter Gathercole Silver badge

Basically, client side code execution is always a risk, because it provides a mechanism to run code from a server that you have no control of, on your system.

The problem is that without it, a lot of our interactions on the web would look like they did back in HTML2 and earlier, where all that could be done had to be done as tables, and any complicated tasks had to be rendered into a pixmap on the server side, and sent to the browser to display.

My view is that Javascript provides too much control. AFAICT, as originally specified, it was supposed to be interpreted. This should have made it quite difficult to issue a stream of machine code instructions that has not been generated by the interpreter or JIT compiler. And if you can make it generate specific vulnerable code, fixing the interpreter or JIT compiler to prevent this is much easier than fixing the processor (it is interesting that the BPF packet-filter JIT compiler in the Linux kernel could be manipulated to generate code to demonstrate Spectre!)

Of course, injecting executable machine code directly into a machine via buffer-overruns or in images or other binary blobs through poorly written client side processes would still be a vector for executing malicious code, and if you have direct control to import and execute code through a direct user session on a system, then there is nothing much you can do to protect yourself from processor flaws. Running a non-x86 architecture would provide some mitigation, but only from vulnerabilities that affected x86 processors.

It is at this point that having trusted executables, preventing you from running imported code, could be a help, but that would not work with anything that used self-modifying, or JIT compiled code or on a system that is used for development (if you can compile code on a system, it is extremely difficult to negate processor flaws).

If you're a Fedora fanboi, this latest release might break your heart a little

Peter Gathercole Silver badge

Re: Nvidia cards are fine @Lee

And when, during the update process on Ubuntu, does one get to read these release notes?

Oh, you can read the changelog in synaptic (can you do this in the Software Center?), I suppose, but I wasn't explicitly updating the graphics driver, I was just allowing the automatic update process to install the updates that were in the repository. This means it was silent as far as I was aware.

So what do ordinary users do? Freeze the graphics drivers (if they know how to do this) so they don't get updated, and vet every graphics update manually? This will work until a kernel update that requires a new graphics driver module, and then the result will be, again, that the graphics stop working.

And if you do spot it, switching to the legacy driver is not something that is obvious. Graphics drivers are normally in use when you are running normally, so in my experience, it is necessary to stop the GUI, and work in console mode. This is something that is also not obvious.

It's exactly what opponents of Linux complain about, you need to know a lot about what you are doing if you want to run Linux on the desktop with the vendor drivers, and this is why I recommend to non-technical Linux users to use the open-source drivers.

Peter Gathercole Silver badge

Re: Nvidia cards are fine

The problem with proprietary drivers for Nvidia or ATI hardware is that they both silently remove support from older chip sets.

On two occasions with Ubuntu, one Nvidia, one ATI, I've put some updates on a system (one was a dist-update, and the other was just a normal in release set of updates), rebooted, and been faced with either a text login screen, or a 640x480 16 colour screen.

In both cases, the support for the graphics card that was in the system had been removed from the proprietary drivers, so the system defaulted to the highest system it could use. This is far from ideal, especially when Linux is seen as suitable for older hardware.

Nowadays, I recommend that users switch to the Open Source drivers before doing any major updates, and as I don't have any major reason to use heavy 3d applications, I use them all the time.

My PC is on fire! Can you back it up really, really fast?

Peter Gathercole Silver badge

Re: Magic blue smoke...

Again, not IT related, but we had an engineer working on the Asteroids machine in the JCR bar at Uni. (back when Asteroids was actually current).

He decided to replace the huge main electrolytic smoothing capacitor in the power supply.

He got it the wrong way round (quite a feat of carelessness, but not impossible)

One hell of a bang, and a lot of smoke!

I can't remember whether that machine ever worked properly again.

Leave it to Beaver: Unity is long gone and you're on your GNOME

Peter Gathercole Silver badge

Re: Upgrade, but not right now?


What was I thinking when I posted this?

10.04 (Lucid Lynx).

Peter Gathercole Silver badge

Re: Upgrade, but not right now?

This is normal for Ubuntu. It's been like this since at least 10.02.

For the first few months, you need to install from scratch if you want the new release. I guess this is because when installing from scratch, it is easier to know the eventual state of the system, whereas upgrading starts from an unknown state, so more testing is required to make sure that they get all of the dependencies right.

This is doubly so from a 2 year old LTS release rather than the previous non-LTS release.

I'm wondering what my best upgrade path is from 14.04 (I am still not convinced by systemd, and I'm actually using Gnome Fallback). I think I will have to go through 16.04 before 18.04. Still, at least a year of support left for 14.04.

McDonald's tells Atos to burger off: Da da da da da, we're lobbing IT ...

Peter Gathercole Silver badge

Normal marketing speak to mislead..

..is "Our product is made from 100% beef".

Yes. 5% of the total product is 100% beef.

In fairness, this is not what McDonald's actually claim, although, being pedantic, the presence of small amounts of seasoning with the beef is enough to make the claim of "100% beef" wrong.

How about, "our patties are 100% beef, to the nearest integer percent".

Not quite got the same impact, has it.

BT pushes ahead with plans to switch off telephone network

Peter Gathercole Silver badge

Re: Yeah right @Hoppy

If I remember correctly, ISDN specified a 144Kb/s link, which could carry 2 voice calls, each using 64Kb/s, and a 16Kb/s signaling channel.

Also IIRC from my POTS training, analog phone lines used to have a filter at 8kHz, which was regarded as plenty high enough to carry voice communications.
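The Basic Rate numbers add up as described (two 64Kb/s bearer channels plus the 16Kb/s signalling channel):

```c
/* ISDN Basic Rate Interface: 2B + D */
int isdn_bri_rate_kbps(void)
{
    int b_channels = 2 * 64;   /* two 64 kbit/s bearer (voice/data) channels */
    int d_channel  = 16;       /* one 16 kbit/s signalling channel */
    return b_channels + d_channel;
}
```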
