* Posts by Peter Gathercole

2953 posts • joined 15 Jun 2007

Drink this potion, Linux kernel, and tomorrow you'll wake up with a WireGuard VPN driver

Peter Gathercole Silver badge

Re: Because it's still a module?

I only read the article, and that contains "pouring his open-source privacy tool directly into the Linux kernel".

I see that it will remain a module, but the intention I get from this is that they are trying to make it compiled directly into the kernel. Maybe the article is misleading.

I actually have no problems with it remaining as a module, either official or unofficial, but there are only a very limited number of scenarios I can see where having it actually compiled into the kernel will benefit users or administrators.
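
For what it's worth, you can check which way a particular kernel build has gone with something like this (the config symbol is the one used by the mainline option, so adjust if your distro calls it something else):

  # '=m' means built as a module, '=y' means compiled directly into the kernel
  grep CONFIG_WIREGUARD= /boot/config-$(uname -r)

  # and if it is a module, modinfo will happily describe it
  modinfo wireguard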

Peter Gathercole Silver badge

Why?

The Linux kernel is already heading towards bloat central.

The module mechanism was invented so that you could add functionality without having to include it in the kernel as a compile time option.

The only possible reason for doing this would be if it were needed during system startup, for example if you wanted to network boot through a VPN. This would be problematic, especially as you would have to find some way of loading the certificates or keys before having a filesystem available.

It seems to me that some Linux developers need to remember the design decisions that were made to try to make running Linux easier!

'A word processor so simple my PA could use it': Joyce turns 30

Peter Gathercole Silver badge

Re: guitar tutor

I think you'll find BASIC is older than Forth, so 'old school' would be better applied to both.

(I had a 2nd language ROM in my BBC Micro for Forth back in 1982)

Now that's a dodgy Giza: Eggheads claim Great Pyramid can focus electromagnetic waves

Peter Gathercole Silver badge

Re: And...

No, because they're triangular-based pyramids.

All of the ones that have 'unexplained' effects are square-based, and aligned to the earth's magnetic field!

But if you look at a pyramid tea bag, you will see that it is just a tube of paper, with the top and bottom seams glued/welded at right angles to each other in parallel planes, with the perpendicular aligned along the centre of the tube.

All this time I thought that it was some clever folding machine, only to discover it's almost no different from a machine making normal square tea bags.

Peter Gathercole Silver badge

Re: It was aliens wot did it

"There are pyramids in my head

There's one underneath my bed

And my lady's getting cranky...."

Sysadmin trained his offshore replacements, sat back, watched ex-employer's world burn

Peter Gathercole Silver badge

Re: Logic bombs @razorfish

Not the end of the world. Reload your offline backups that are stored in a remote location.

Remember, online copies are *not* backups, for exactly the reason you specify!

BBC websites down tools and head outside into the sun for a while

Peter Gathercole Silver badge

Re: Scary

There was more than just the test card.

During the rollout of colour TV in the UK in 1967 or so, BBC2 carried a number of test programmes, which I believe were called "Trade Test Transmissions". They were basically colourful short documentaries, broadcast at fixed times of the day, so that TV installers had something predictable with which to set up the colour on the TVs they were installing (colour TVs were still mainly valve driven, were fiddly to set up, took ages to warm up from cold, and generated large amounts of heat).

I happened to be ill for a while that year, and off school for a week or two, and I remember three of them. One was called "Ride the White Horses", and was about power boat racing, another was called "Skyhook", and was about helicopter cranes, and one was called "Birth of a Rainbow", about rainbow trout farming.

There were more, but I can't remember them. Wikipedia has a list.

Looking on YouTube, they have the one about rainbow trout, and some of the others, but not the other two I remember.

UK.gov commits to rip-and-replacing Blighty's wheezing internet pipes

Peter Gathercole Silver badge

Re: Not wanting to state the obvious @EnviableOne

C4 and the license fee.

I see my mistake. There was a plan 10 years ago for C4 to receive some TV license money, but it was never carried out.

S4C, which is not C4, does receive license payers' money, though.

Peter Gathercole Silver badge

Re: Not wanting to state the obvious @Martin an gof

That's true for standard definition, but BBC HD channels do not carry any local content. It's national, with local content slots showing a message saying that the content is not available in your region.

Peter Gathercole Silver badge

Re: Not wanting to state the obvious @EnviableOne

Your argument falls down, because Channel 4 take a slice of the license fee.

Peter Gathercole Silver badge

Re: Not wanting to state the obvious @JDeaux

I'm pretty certain that you do not need a TV license to watch Netflix. You only need one to watch broadcast TV.

There is a concise wording of what 'broadcast' means for the TV license, which I can't be bothered to dig out, but it's basically along the lines of watching a programme at the same time as it is being broadcast, whatever transmission medium you're using.

So, for example, if you watch something that is being served out via one of the catch-up services before it has finished its broadcast (it has to overlap the broadcast), then you need a TV license. If you wait until it has finished, and then watch it on a catch-up service, then a TV license is not needed (but remember the +1 TV channels).

They've also broadened the scope, as they've defined computers and other devices as TVs for the purpose of watching broadcast programmes.

As far as I am aware, Netflix do not broadcast any content, so it is all on-demand. No TV license needed. If Netflix were to start carrying 'Live' programs, sent to multiple users simultaneously, you may need one, however.

NowTV, which carries channels that are broadcast alongside its catch-up content, does require a license.

BBC iPlayer is a bit of an exception though, as they have added a specific requirement to have a TV license in order to use any aspect of iPlayer. This is actually more like a no-fee commercial contract. They justify this because you can watch programs on iPlayer at the same time as they are broadcast, but I actually object quite strongly to what the BBC is doing in this area.

I can see the nature of 'broadcast' being changed or challenged again in the near future, because of the nature of multicast services on the Internet. For example, many road traffic cameras provide real-time video to whoever wants to see them. Does this count as a 'broadcast'? And of course, as the technology gets cheaper, we are beginning to see small live TV stations being run out of bedrooms using cloud services, in the same way that we get small Internet radio stations. Will these count? Who knows.

ReactOS 0.4.9 release metes out stability and self-hosting, still looks like a '90s fever dream

Peter Gathercole Silver badge

Slight aside

I recently tried getting RedHat 4.1 (that's RedHat, not RHEL or Fedora) from a PCW cover disk from something like 1997 running in VirtualBox.

It was quite a frustrating experience, because although the install worked OKish (once I'd sorted out how to run the loader program without a floppy disk drive, and then access the CD from the loader program - bootable CDs and ATAPI were still in the future back then), trying to get XFree86 working was difficult.

Back in the day, graphics and mouse support was much less standardized on PCs, and that version of XFree86 needed to be told a lot about the graphics adapter hardware. The autoprobe cannot identify the VirtualBox display adapter from its ancient database of cards that it knows about (as an example of its age, when I originally used this as my main Linux distro, I had an ATI Mach64 card, which worked quite well).

I had minor success setting it up as a dumb SVGA adapter with no hardware assists, but I also had problems with the mouse as presented by VirtualBox. And I did not get around to attempting to get the sound working (I used to use an ISA SoundBlaster 16, which I did get working with OSS, I think). I don't know what the VirtualBox sound adapter looks like.

Even though the virtual machine was running on my 2nd gen Core 2 Duo Thinkpad (not a stellar performer by today's standards, but a couple of orders of magnitude faster than the machine I first ran it on), because there was no hardware assist from the graphics adapter, the screen handling was sloooooow compared to when I used it on a Pentium 100 with the Mach64.

I wish I could get it running better, because I would love to be able to show people how simple the GUI looked on old UNIX/Linux systems (I know, I could run fvwm2 on Ubuntu, but it would be so much more authentic on an ancient Linux).
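
For anyone who fancies trying the same thing, this is roughly what the fallback 'dumb SVGA' setup in /etc/X11/XF86Config looked like on XFree86 3.x. It's reconstructed from memory, so treat the exact values as illustrative rather than gospel:

  Section "Pointer"
      Protocol    "PS/2"
      Device      "/dev/mouse"
  EndSection

  Section "Monitor"
      Identifier  "Generic"
      HorizSync   31.5-48.5
      VertRefresh 50-70
      # plus the usual stack of Modeline entries for the monitor
  EndSection

  Section "Device"
      Identifier  "Unaccelerated SVGA"
  EndSection

  Section "Screen"
      Driver      "svga"
      Device      "Unaccelerated SVGA"
      Monitor     "Generic"
      SubSection "Display"
          Depth   8
          Modes   "800x600" "640x480"
      EndSubSection
  EndSection

No hardware acceleration at all, which is why it crawls, but it is usually enough to get twm or fvwm on screen.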

Spectre rises from the dead to bite Intel in the return stack buffer

Peter Gathercole Silver badge

Re: Asking (possibly) dumb question

In this case, I don't think that it is a timing issue (the timing issue leaks were used to decide whether a data cache contained a value, or whether the system had to go and fetch it from main memory).

In this case, what is being done is that a buffer that caches the return address from functions is being controlled, so that when returning from a function or sub-routine, it is possible to change the return address from the next instruction after the call (the normal result) to an arbitrary address controlled by the malicious code.

Normally, the processor would go to the stack frame stored in main memory to fetch the return address (and changing this is the primary technique in a 'stack smashing' attack, as the stack is in the address space of the user process), but it looks like Intel and ARM have found a way of keeping this return address in a faster cache, so that the return can save some clock cycles. If you can arrange to change entries in this buffer cache to point to some malicious code, and get the processor to return to this code while still in kernel mode, in theory you can get access to memory that would normally be protected.

The write-up and description of the Return Stack Buffer and this vulnerability is quite involved, and I'm not sure I fully understand it, because on ARM at least, there appear to be two buffers, one of which is a conditional buffer that tracks predicted returns, which can be manipulated using 'branch not taken' types of speculative attack.

As I said earlier, invalidating the RSB (assuming the processor has a suitable instruction) on a syscall or context switch should limit this type of attack to the current process, which is still not that good, but should prevent the leaking of information from other processes or the kernel.

Peter Gathercole Silver badge

RSB

I've only had a short think about this, but it strikes me that the main problem here is that the contents of the Return Stack Buffer persist across context switches.

If whatever OS kernel is being used invalidated the RSB when context switching between different processes/threads, then this may affect performance, but should prevent this type of leak between processes. Any performance impact would only be when a process is re-scheduled.

Switching to kernel mode (a system call) would be a bit more problematic, as system calls happen frequently. You would not really want to invalidate the RSB on every syscall, but I would have thought that there should be something that the syscall interface could do to sanitize the RSB it inherits from the process. But the separation of the kernel and process address spaces in the Meltdown fixes should really limit the damage.

As I say, I've not read the full papers yet, so there may be something I haven't considered.

No big deal... Kremlin hackers 'jumped air-gapped networks' to pwn US power utilities

Peter Gathercole Silver badge

Re: More detail please

Um. If somebody/anybody has remote access to a network, then it is not "air-gapped".

A properly air-gapped environment has absolutely no communications connections with any other environment, and is completely self-contained in one location.

Anything else should probably be described as "firewalled" (assuming that there are firewalls in place!)

Sysadmin sank IBM mainframe by going one VM too deep

Peter Gathercole Silver badge

Re: Just to mudddy the waters a trifle ... @jake

The ! path separator was adopted by UUCP mail in the UNIX world, and remnants of it still persist in the canned sendmail configurations shipped with real UNIX systems.

I used to be att!ihnp4!hvmpa!gatherp within the AT&T name space (I've googled this, so I don't think it's leaking any info).

ihnp4 used to be a very good machine on which to base mail path addresses, because it seemed to be connected to everything, and I was surprised to find a reference to an ihnp4-based ! mailpath in a document about 10 years ago. I think the name has finally gone now, but it persisted for a long time, with new systems adopting the name as hardware was retired.

"ih" stood for Indian Hill, an AT&T development lab in Chicago where AT&Ts switching systems (telephone exchanges) were developed, amongst other things. "np" stood for network processor, and 4 was the number of the system.

It was quite interesting because, due to the way that the ! separator specified a mail path, you could explicitly route mails through various systems to check connectivity. I frequently used to generate a mail loop back to myself, bouncing a mail off a distant server to check that a particular route worked.

It's not that long ago (well, actually ~20 years - but that does not seem that long to me) that you were able to use ! mail routing with sendmail (with a normal rule-set) on TCP/IP networks, but as sendmail got replaced and explicit mail routing fell out of favour, it stopped working.
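
To give a flavour of it, a loop back to myself from my old address would have looked something like this (the intermediate host name is made up for illustration):

  mail ihnp4!somedistanthost!ihnp4!hvmpa!gatherp

Each host in the path looks at the leading component, hands the message to that neighbour, which strips its own name off the front and repeats, so a path that ends back at your own machine and login proves that every hop along the way is working.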

Peter Gathercole Silver badge

Keyboards with #, £ and $ started appearing in the UK at about the same time as 8-bit characters were becoming popular on mini-computers and PCs (the very early '80s). See my earlier post about keyboards and 7-bit ASCII to understand what happened before that time.

Once 8-bit code-pages became popular, the lower 128 (well, 95 really because of the 33 non-printing characters) code points of almost all internationalized code-pages were the same as US ASCII (X3.4-1986), and any character not in US ASCII was pushed into the top 128 (well, 127 really, because position 255 was normally delete or something).

This meant that there were many, many code pages to cope with different characters for different languages, not just the UK, and a corresponding set of keyboards. Here is an interesting page from IBM, who IMHO were the first company to really start standardizing keyboard layouts for different countries (the 'enhanced' keyboard many of us will be typing on is basically an IBM layout from the PC-AT era, although DEC's international LK-201 keyboards were of a similar time-frame).

Note the references to the 101, 102 and 106 keyboards pre-date the addition of the 'windows' keys.

Peter Gathercole Silver badge

Re: Just to mudddy the waters a trifle ...

That's interesting. I never thought about the roots of the words (although there is something similar about pound weight and lb as a symbol).

I always assumed it was because, in the early days of terminals, the US 7-bit ASCII table only had space for 96 characters, and those were filled with characters suitable for US data processing. This did not include currency symbols for other geographies.

For many terminals and printers intended for use in the UK, there was a toggle or DIP switch, or sometimes a menu setting that normally replaced the # symbol with a £ symbol (although some replaced $ with £). Same numeric code, different presentation. This is what I thought was the basis for hash/pound.

I remember writing shell scripts with comments that appeared with the £ symbol at the front. In hindsight, it must have looked very strange, but at the time, it was just normal.

When 8-bit ASCII with extended character sets started being used, life was a nightmare, because the number of different code-pages (CP437 and ISO 8859-1 and -15, anybody?) proliferated, with different code pages on different devices, making inter-operability extremely difficult.

I don't know how other OS's dealt with this, but IBM came up with quite complicated input and output methods on AIX for most devices that allowed you to specify a translation table that could be used to make it all work, but setting these up was quite complicated, and not many customers actually used them correctly (or in some cases, at all!)

It was only with the adoption of the various Unicode UTF character encoding schemes that things started working a little more easily.
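
You can still see the old problem easily enough today with a couple of lines of Python, which carries codecs for several of those code pages (the byte values below are the real assignments; the rest is just illustration):

  # the pound sign sat at different code points in different code pages
  print(b'\x9c'.decode('cp437'))      # 0x9C is '£' in CP437, the old DOS code page
  print(b'\xa3'.decode('iso8859-1'))  # 0xA3 is '£' in ISO 8859-1
  print(b'\xa3'.decode('cp437'))      # the same 0xA3 byte comes out as 'ú' in CP437

Move a file between two devices that disagree on the code page and every non-ASCII character gets that treatment, which is exactly the mess those translation tables were trying to paper over.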

Fukushima reactors lend exotic nuclear finish to California's wines

Peter Gathercole Silver badge

Re: They missed a source

I get the Asimov reference, but it was not R.Daneel Olivaw or R.Giskard Reventlov's plan. It was Levular Mandamus who set up the nuclear intensifier. The two robots merely did not stop the plan, in order to invigorate the human race.

This caused the demise of R.Giskard, as he did not have the flexibility to work around the first and zeroth laws.

I always found it strange that R.Daneel was able to invent and invoke the zeroth law, and then partly ignore it to allow 'harm' to the humans on Earth for their long-term benefit. He was quite an early humaniform robot (at least in the 'Spacer' era, and ignoring Tony and Georges Nine and Ten), so why was his positronic brain so adaptable?

Peter Gathercole Silver badge

Obligatory XKCD

Radiation

Samsung’s new phone-as-desktop is slick, fast and ready for splash-down ... somewhere

Peter Gathercole Silver badge

Re: WIMP

The Tek 4010 had a relatively small screen (~11" diagonal IIRC), so I would not call that "horrendously large", although it was normally set on its own floor-standing pedestal. Its more capable cousin, the 4014, was larger.

It was a storage scope, so the screen 'remembered' what had been drawn without the screen processor redrawing it (unlike a raster CRT monitor, which has to continually repaint the screen). Over the course of a minute or two the image would start to degrade, and it could not be scrolled. You had to clear the screen and draw the next one.

I used to use it to do work in APL, as it could draw all of the over-struck Greek characters, and I actually wrote a 4010 graphics emulator (it was a very simple protocol) in BBC BASIC, which was fast enough to keep up on a 9600 baud serial link.

UNIX troff (a text formatter) had a post-processor that would allow ditroff output to be drawn on the screen of a 4014 for proof reading, before the days of high definition terminal screens. I believe it's still there in groff on GNU/Linux, even though it's not needed any more.

'Fibre broadband' should mean glass wires poking into your router, reckons Brit survey

Peter Gathercole Silver badge

Re: It was always fibre though, right?

Certainly, there was a time when most of the trunk or core network was fibre, while the analogue phone system beyond the local exchange was copper. That was the concept of System X.

I worked for a company that was providing BT with telephone exchanges in the late '80s, and they were definitely selling fibre trunk equipment then. According to Wikipedia, the last trunk connection in the UK was switched to fibre in 1990.

Peter Gathercole Silver badge

Re: Eir, Vodafone & Sky

Actually there is a simple definition of what 'broadband' actually means.

It means that there are multiple carrier frequencies running over the transmission media. Basically frequency division multiplexing (FDM).

In the dim and distant past, when thick and thin wire Ethernet or Token Ring was the main data networking technology, this used a single shared channel (sometimes called baseband or narrow-band working - effectively time-division sharing), because there was only one digital data stream, and each network station got a share of the total available.

Cable TV started using multiple frequencies over coaxial cable to provide analogue cable TV, which was the first time many people outside of the comms. industry would have come across 'broadband' (unless you count ordinary OTA analogue TV).

One place I worked, we had a data network that ran over coaxial cable with multiple data channels being carried (multiplexed) on different frequencies down the cable (actually it was a hybrid system, because there was TDM being done on each of the FDM channels).

All *DSL systems are broadband, there being multiple carrier frequencies being sent down the 'phone line. DOCSIS is broadband because of there being multiple carrier frequencies over the cable.

Interestingly, most Fibre is also broadband, because the carriers use multiple frequencies of laser light down the Fibre, although it is not clear to me whether this is what is delivered in FTTP (it definitely is in the backhaul or core network). I did read a description that suggested that the down path on FTTP was shared TDM spread over multiple FDM carrier frequencies, and the up path was FDM, with each customer having their own frequency. This means that it is possible to split and combine the different feeds using passive optical splitter nodes at the pole/distribution point, and only need expensive powered switches at the cabinet.

I'm not sure whether 4G counts as broadband. There are definitely multiple carrier frequencies being transmitted and received, so I suppose that it must by the definition I gave at the beginning of this post.

Oracle wants to improve Linux load balancing and failover

Peter Gathercole Silver badge

IBM's RDMA over HFI and IB

IBM ran RDMA over an abstracted network device, with the resilience built into the underlying network. This allowed the network layer to adjust to failures without the RDMA setup being exposed to the changes.

Seemed to work quite well on AIX, and I believe that the Linux support (for the P7 775 9125-F2C) worked the same (or even better!). I'm pretty sure that IBM would have put their work back into the kernel.

Evil third-party screens on smartphones are able to see all that you poke

Peter Gathercole Silver badge

Re: What? @DougS

Interesting thought. I don't know enough about Android device drivers to know whether they can accept microcode from the device that is being configured, but I think it is unlikely.

Doing a bit of research, it does indeed appear that the interface for most screen controllers is I2C, SPI or SSI, and these do not require, and do not allow, code to be injected into the device driver. It may be possible that it would react to a request for the device parameters from the device driver, but that would be at the device driver's request.

On the subject of recovering the device, these replacement screens are provided at the lowest cost, and the chance of someone actually following up on a warranty after they've had the device for a year is very slim. Generally, phone repairers working at this low end just buy direct from China over eBay, and are unlikely to return the faulty device unless there was a compelling reason, so they would have to be complicit (the cost of returning stuff to China is much greater than sending it from China). I'm not saying it couldn't happen, but...

Peter Gathercole Silver badge

Re: What?

It would not at all surprise me if the touch screen has its own ARM core, or at least a microcontroller or PIC.

All of these could run additional code that captures all sorts of information, but I have to ask, what does it do with the data after it's been collected?

I very much doubt that whatever the controller is, that it has access to the higher levels of function like the IP stack, or even access to the main memory of the phone or whatever device the touch screen is fitted to.

Chances are that there is a definite protocol that the screen and main processor use (it's probably based on I2C or something similar) to allow the OS to identify what is touched and when, and unless this is a lot more functional than it needs to be, passing out-of-band data is not something that is likely to be very easy. You would have to have some system component added to Android (the applications are abstracted from controlling the hardware) to read the data.

So even loading a dodgy app that looks to see whether one of these hacked screens is fitted is unlikely to be able to query the screen controller to get that information.

What I would like to see is what special modifications were made to the demonstration phone to allow this demonstration. I'll bet they included a modified Android driver to talk to the screen. If that's the case, then I'll breathe easily regarding the screens I've replaced on several phones.

Who fancies a six-core, 128GB RAM, 8TB NVMe … laptop?

Peter Gathercole Silver badge

Re: Still lightweight @Duncan

But the VT220 could do 132x24 text resolution!

ZX Spectrum reboot firm boss delays director vote date again

Peter Gathercole Silver badge

Debt collectors

This is from memory, but in order to send in debt collectors, a process has to be followed which requires the complainant to file a claim in the court. There is then a period during which the defendant can respond to the claim.

Depending on this response, the court can then issue a court order for the outstanding amount to be paid. This has another waiting period, and it is only after the defendant fails to pay in the time specified that the complainant can engage a court-recognized agent (sheriff or debt collector) to attempt to enforce the debt or recover goods to cover the debt.

If that is not possible, then the complainant can file to have the defendant bankrupted.

All in all, the process takes months, not days, and this is probably what Indiegogo have found. It will be interesting to see whether the process has been started. If it has, the information should be in the publicly available court record. Anybody any idea which court they might be using?

As I say, this is from memory, learned after a friend of mine returned a car to a car dealer as not fit for purpose, and attempted to get a refund of the money he paid for it.

As far as the gender pay gap in Britain goes, IBM could do much worse

Peter Gathercole Silver badge

Re: What gender gap though? @Lusty.

Hmm. All this time, I've got it wrong. I'm so humbled on this.

I thought I had a good grasp of mean, median and mode, but I obviously don't.

I'm thinking I should withdraw my original comment, but that would make this comment trail a bit difficult to follow.

Peter Gathercole Silver badge

Re: What gender gap though?

That is exactly why I started my "At the risk of being branded.." with the comment that it is a bad measurement.

The median pay difference works out the median pay (the middle of the upper and lower bounding figures) for all men and all women, and then compares the two figures.

Let's do a thought experiment.

A company has 10 women and 10 men. The 10 women all earn £25,000. Nine of the men earn £20,000, and one earns £50,000.

The median woman's salary is (naturally) £25,000, the median men's salary is £35,000 ((20,000+50,000)/2). The average (mean) woman's salary is £25,000 and the average (mean) men's salary is £23,000.

So by this median measure, this company is terrible. It has a £10,000 difference between men and women in favor of men, which corresponds to a 40% pay difference in favor of men! Quick, do something.

But the more realistic measurement, comparing the mean salaries, shows that women on average (mean) are being paid £2,000 more than the men, and looking at the mode of the salaries, the women are quite a bit better off than the men. The company is seriously discriminating against the men.

This is a contrived example, but it is designed to show just how misleading this measure actually is.

Peter Gathercole Silver badge

Re: At the risk of being branded misogynst... @Jellied Eel

<sarcasm>So you are advocating positive discrimination for women, are you? I thought all forms of discrimination were frowned upon</sarcasm>.

The UK legislation provides that women will get all pay rises that happen for all workers, like rate-of-inflation rises. But how many technical companies actually pay any rate-of-inflation rise at all? Pay increments are nearly always based on achievement, and someone not working does not achieve anything. That is still not against the law.

What happens to any in-work qualifications obtained during that time the woman was away? Is the woman expected to try to study them while on maternity leave? Or be paid for an achievement they've not earned.

As I've said elsewhere, there are exceptions, as per your 'geek squad' example. I don't dispute that women can be exceptional in a job role, but I'm talking about the general, not the exceptional.

And I think that your point about recruiting reflects my point about what you do now will take years to actually make a significant difference.

Peter Gathercole Silver badge

Re: At the risk of being branded misogynst... @Lusty

OK, let's look at it another way.

Let's assume that it is time served that is rewarded, not age. In the scenario I outlined, a woman who takes three years out will still be three years behind on her time served compared to her male counterpart. There's still a disadvantage there.

I accept equal pay for equal work, but most people would like pay increments without having to change role, and someone who has been doing the same job for a number of years may expect to be paid more than someone who has only just started in the same job. Experience counts in these environments, especially if, like in the technology sector, there are no automatic pay increments, and any pay rise has to be justified by achievement. Taking three years off does not achieve anything work wise, so will not earn pay increments.

Equality has to be equal in all aspects of a job, including experience.

On the subject of keeping up to date, I'm wondering whether you have children yourself. With a young baby in the house (especially if you are the sole carer during the day), it is incredibly difficult to concentrate on anything for longer than their sleep cycle, they are incredibly demanding, and sometimes you just may want to catch up on some sleep as well.

I have four children, and my daughter is currently on maternity leave after her first child. Looking after a baby for 8 months is nothing like taking a vacation for a couple of weeks. You just don't get the breaks. And as my daughter found out when she asked, if you want to work to 'keep your hand in', and can arrange child care, the maternity leave rules impose strict limits on how much you can work without losing the whole of the maternity benefit! So any keeping up will be done at your own time and expense.

I'm not trying to put women down. I have some women friends who do an incredible job of balancing a successful work career and family, but they are an exception, and really have to work far harder than their male counterparts just to keep up, and this is for no reason other than biology and society.

I accept that there are always exceptions, with both high achievers and low achievers, and that some people make it into senior management positions at quite young ages (although probably as a career manager, rather than one building a management position on technical knowledge), but if you were to survey the age people are when they get to certain positions in management, I'm absolutely certain that it will be skewed towards middle-aged to older people. Experience counts, and if it were not like this, it would destroy the traditional career path people have grown used to.

Peter Gathercole Silver badge

At the risk of being branded misogynst...

... there are a number of problems in society that make it unlikely that there will ever be full equality, at least in the very misleading median pay gap measurements.

The problems are mainly about the biological nature of the family, i.e. women are much more intimately involved with the process of having a family.

Let's consider the best case scenario. A woman is in a company, being paid the same for the same role as their male counterpart. The woman decides to have a child, and then works up until a few weeks before the baby is due. She has the baby, and then takes the maternity leave offered, and stays off work for a further 8 months or so, as she is encouraged to do for the sake of her baby.

Let's assume she can return to the same job she had before (which isn't always the case).

She's now been away from the workplace for three-quarters of a year. Her male counterparts have had that extra service, seniority (and probably pay increments), and the woman has to come back to re-learn certain aspects of her job. And in a fast moving industry such as IT, she's also got to catch up on the new developments. If she's in a customer facing role, her customers will have got used to a new face, and she will have to get back in with them, or find new customers.

If we assume that her partner will really share all child care duties, this means that she is actually likely to be a year behind her male colleagues. And if it is not equal, this is another impediment.

Multiply this by two or three, and this is the barrier she has to overcome.

But more realistically, career women end up by choice having their children close together. So it may be that instead of several one-year gaps, one per child spread over several years, they end up with a single two or three year gap.

If this happens, it will be much more difficult for her to catch up with her peers, and her original job role may not even exist! So returning to work will be much more difficult. Also, in the past, women have been able to retire earlier than men, removing experienced women from the pool of talent.

Add to this the significant number of women who, because of unequal child care or just personal preference, decide to down-grade or completely sidestep their previous careers, such that they will be unavailable to be considered for high level jobs, and the situation gets worse.

Another aspect of high function technical jobs is that the climb to the higher reaches of the board takes 20-30 years, so the women able to be appointed to these roles now will have joined in the 90's, a time when there were fewer women in technical jobs. We will have to wait 20 more years for any current actions we take on recruitment to kick in.

I don't think that there is any real surprise that many of the women currently high up in industry have chosen not to have children, have had them early and restarted their career in their early 20's, or have been able to out-source their child care at the risk of damaging their family life.

Until we have a complete shake-up of society, this pattern of family life will persist, and I can see very little that will significantly alter this in the near future.

Boffins want to stop Network Time Protocol's time-travelling exploits

Peter Gathercole Silver badge

Re: Time NTP was upgraded(See what I did there!)

Unfortunately, things like Blockchain, and a lot of historical trading and other financial systems absolutely need reliable sub-second accuracy in order to record the absolute time of transactions to make sure that a successful sequence is recorded. It is here that, for example, making a transaction look like it happened later (or earlier!) than it actually did could invalidate the transaction (think if someone were able to delay your registration of a newly mined bitcoin, and claim it as their own merely because they could subvert the time your system apparently mined it).

I worked in the electricity distribution industry some time back, and they had a requirement for accurate sub-second time as well, not that I ever asked why (the fact that I was compiling the xntpd source to include the RCC8000 time clock tells you how long ago that was).

Now NHS Digital is going after data on private healthcare too

Peter Gathercole Silver badge

care.data

I wish that people would stop conflating care.data with sharing data within the NHS.

Care.data was all about sharing supposedly anonymized data from the NHS outside of the NHS, with people like medical research organizations, drug companies and insurance companies.

Whilst I have no problem with the first of these, I have some concerns with the second, and violent objections to the last, and there are other companies outside of all three of these categories that were being considered for access.

It was demonstrated that the anonymization could easily be undone by combining the anonymous data with well known data and information from social media.

I'm all for making the NHS more efficient by sharing data across different groups within the NHS, because then for example my son would not have arrived at his booked appointment at hospital for a pretty rare eye condition (and thus easily identified from this and the region that the records were for) only to find that all of his previous records had been misplaced, and could he tell the consultant what he was there for!

SUSE Linux Enterprise turns 15: Look, Ma! A common code base

Peter Gathercole Silver badge

Re: numbering

Although, to be fair, Solaris was originally a software grouping title (a bit like how IBM packages multiple software products under the WebSphere, Tivoli and now Spectrum brands) which contained SunOS 5, along with a variety of other software packages.

I think you're right that Solaris 2 was the point at which the OS generally got to be referred to as Solaris.

I believe the break between SunOS 4.x and SunOS 5 was when Sun decided to separate SunOS from SVR4 after UNIX System Laboratories (USL) was wound up. This allowed them to take back complete control over the internals of SunOS.

Peter Gathercole Silver badge

Re: How about Windows skipping 'Windows 9'?

Seven ate Nine.

Peter Gathercole Silver badge

Re: SuSE Linux

I'm pretty sure StarOffice was bought by Sun, not SuSE, and they then took out any proprietary licensed software (I believe that the biggest item was the Adabas database component) to create the open source OpenOffice product. StarOffice (with the database component) remained a product that Sun would sell, at least for a while.

Oracle then upset the OpenOffice community by ignoring it after they bought Sun, which led to the LibreOffice fork, and then Oracle, who had no real interest in OpenOffice, gave it to the Apache Foundation.

IBM took a fork of OpenOffice to try to produce a more compatible product (with MS Office), which I believe they called Lotus Symphony (although there had been a previous Lotus Symphony product, a spreadsheet on steroids, back in the '80s). Symphony died the same death as SmartSuite (which I actually quite liked) as a result of IBM apathy.

I don't know where this actually leaves StarOffice. I guess it's still an Oracle product, but whether it is still available is an interesting question.

Ubuntu reports 67% of users opt in to on-by-default PC specs slurp

Peter Gathercole Silver badge

Re: Really small systems

I've not put 18.04 on any of my machines yet, but I do have a casual-use system in the bedroom that runs 16.04: an Acer dual-core Atom netbook clocked at about 1.6GHz with 1GB of memory and an 8GB SSD, although the SSD is abysmally slow, so I run it off a normal install on a 32GB USB micro memory card reader (not a live distro).

It works OK for browsing and YouTube videos, but I would not want to use it for anything serious. And Firefox's lax memory management means that it is necessary to stop and start Firefox on a relatively regular basis. I can't believe how frequently Firefox just grows to consume all the available memory, regardless of how much you have (it's driven my normal 8GB CoreDuo Thinkpad into paging more times than I care to remember).

I'd like to run the Netbook off an SD card in the MMC slot, but the BIOS does not support booting from that device, and I've not (yet) managed to get the boot partition on the SSD to successfully boot the kernel from the MMC (it's something to do with the modules loaded into the GRUB image - I'll get there).

Now Microsoft ports Windows 10, Linux to homegrown CPU design

Peter Gathercole Silver badge

Garbage recycling analogy

Whilst your analogy is clever, it doesn't really describe mainstream modern processors.

What you've ignored is hardware multi-threading like SMT or hyperthreading.

To modify your model, this provides more than one input conveyor to try to keep the backend processors busy, and modern register renaming removes a lot of the register contention mentioned in the article. This allows the multiple execution units to be kept much busier.

The equivalent to the 'independent code blocks' are the threads, which can be made as independent as you like.

I've argued before that putting the intelligence to keeping the multiple execution units busy in the compiler means that code becomes processor model specific. This is the reason why it is necessary to put the code annotation into the executable, to try to allow the creation of generic code that will run reasonably well on multiple members of the processor family.

Over time, the compiler becomes unwieldy, and late compared to the processor timelines, blunting the impact of hardware development. But putting the instruction scheduling decision making process into the silicon as in current processors increases the complexity of the die, which becomes a clock speed limit.

I agree that this looks like an interesting architecture, and that there may be a future in this type of processor, but don't count out the existing processor designs yet. They've got a big head start!

Intel confirms it’ll release GPUs in 2020

Peter Gathercole Silver badge

@Lee

I would go one stage further. I can see the GPU becoming not just a co-processor on the same die, but execution units in a super-scalar processor. Once this happens, writing code for the GPU will be much easier, as the compilers will include the ability to compile code directly, rather than the rather haphazard methods being used now.

Intel chip flaw: Math unit may spill crypto secrets from apps to malware

Peter Gathercole Silver badge

Re: Pedantic spelling

This was the subject of conversation on at least two Radio 4 programmes in the last couple of months (one of which was More or Less on Friday 11th May - available as a podcast), and the general conclusion, from representatives of various linguistic and maths related institutions, was that both terms were correct.

This was backed up by several references to documents, both from England and America going back a couple of hundred years, and as a result it is largely personal choice as to which is used.

I'm actually with you on Maths, but it is an interesting listen.

Peter Gathercole Silver badge

Re: Performance on maths code?

Just a guess, but I expect that it's a small bit of code that sanitizes the floating point registers on a context switch. Bound to have some performance impact, but probably only a small one.

My second guess is that if a process has not used any floating point registers, the OS makes no attempt to save and restore them, nor to clean them on a return from a context switch (saving space in the stack frame, and not running the code to copy the registers). If this is the case, then any code that does use floating point registers will do a save/restore when a context switch occurs anyway, so there may not be any performance impact at all for code that uses floating point instructions.

Which? calls for compensation for users hit by Windows 10 woes

Peter Gathercole Silver badge

@AC re: Free.

Firstly, it's not free to all users. Some people have actually paid for it, and on new PCs, I'm sure the OEM will have paid something to put a valid Windows license on them.

Secondly, the extreme measures they used to try to persuade users of previous versions of Windows to upgrade may weaken any 'user choice' arguments they may try to use.

But doesn't Microsoft palm off all support for Windows on OEM systems to the manufacturer of the PCs? I think they only offer direct support to people who buy retail or enterprise licenses.

IBM to GTS: We want you to 'rotate' clients every two years

Peter Gathercole Silver badge

Re: Making It Worse?!

IBM support have pretty rigid rules about what a Sev.1 is, and they normally demand a 24-hour contact from the integration team or end customer (often, if you are working on a customer account, the support contract will be a customer-specific one, not one for the IBM account team).

If they attempt 3 contacts out-of-hours that are not answered, they will automatically drop the severity on the quite reasonable assumption that if the customer is not prepared to work 24 hours a day, why should IBM.

When I worked in IBM support, not only was there a severity that was set by the customer, but there was a priority which was set by the support team. Not too sure whether that happens now, but calls could be graded S1P3, which meant that it was important to the customer, but IBM did not judge it a high priority.

Also, when I was working there, support calls were expected to be defect-only. If the problem was obviously a how-to, we were supposed to try to sell some consultancy, although this does not work very well when you're talking to an IBM account team (you know how it goes - "this work is sooo important, and will bring in $$$ to IBM [but not to the support team], so you've just got to make it Sev.1").

Monday: Intel touts 28-core desktop CPU. Tuesday: AMD turns Threadripper up to 32

Peter Gathercole Silver badge

Re: Maths co-processor?

The Tube wasn't even really a bus. It was a fast synchronous I/O port that kept the original BBC Micro running, but as a dedicated I/O processor handling the screen, keyboard, attached storage and all the other I/O devices the BEEB was blessed with, while the processor plugged into the Tube did all of the computational work without really having to worry about any I/O. All of the documented OS calls (OSWRCH, OSBYTE, OSWORD, OSCLI and so on, covering storage, display and other control functions) worked correctly across the Tube, so if you wrote software to use the OS vectors, it just worked.

When a 6502 second processor was used, it gave access to almost the whole 64KB of available memory, and increased the clockspeed from 2MHz to 3MHz(+?) IIRC. Elite was written correctly, and ran superbly in mode 1 without any of the screen jitter that was a result of the mid-display scan mode change (the top was mode 4 and the bottom was mode 5 on a normal BEEB, to keep the screen down to 10KB of memory). Worked really well, and even better with a BitStik as the controller.

I also used both the Acorn and Torch Z80 2nd processors, and I know that there were Intel 80186 (running DOS), NS32016 (running UNIX, used in the Acorn Business Computer range) and ARM 2nd processors built as well.

Peter Gathercole Silver badge

Re: Intel was fudging

I think you would be surprised at how closely related the Power and mainframe processors are nowadays.

With the instruction set micro- and millicoded, the underlying execution engines rely on some very similar silicon.

Oh, and there have been relatively recent Power6 and Power7 water-cooled systems, the 9125-F2A and -F2C systems, but only a relatively small number of people either inside or outside of IBM will have worked on them (I am privileged to be one of them). These were water-cooled to increase the density of the components rather than to push the ultimate clock speed. The engineering was superb.

And... they were packaged and built by Poughkeepsie, next to the zSeries development teams, and use common components like the WCU and BPU from their zSeries cousins.

There was no Power8 system in the range, because of the radical change to the internal bus structures in the P8 processor. I don't know whether there will be a Power9 member of the family, because I'm no longer working in that market segment.

Peter Gathercole Silver badge

Re: Intel was fudging

Yes, but even IBM has backed off from pushing the clock speed to add more parallelism.

The Power6 processor had examples being clocked at 4.75GHz, but the following Power7 clock speed was reduced to below 4GHz (but the number of SMT threads went from 2 to 4, and more cores were put on each die, again 2 to 4). Power8 kept the speed similar, but again increased both the SMT and cores per die.

In order to drive the high clock speeds in Power6, they had to make the processor perform in-order execution of instructions. For most workloads, putting in more execution units, reducing the clock speed, and putting out-of-order execution back into the equation allowed the processors to do more work, but could be slower for single-threaded processes.

The argument about compiler optimization really revolves around how well the compiler knows the target processor. Unfortunately, compilers generally produce generic code that will work on a range of processors in a particular family, rather than a specific model, and then relies on run-time hardware optimization (like OoO execution) to actually use the processor to the best it can.

In order to get the absolute maximum out of a processor, it is necessary to know how many and what type of execution units there are in the processor, and write code that will keep them all busy as much of the time as possible. Knowing the cache size(s) and keeping them primed is also important. SMT or hyperthreading is really an admission that generic code cannot keep all of the execution units busy, and that you can get useful work done by having more than one thread executing in a core at the same time.

I will admit that a very good compiler, targeting a specific processor model that it knows about in detail is likely to be able to produce code that is a good fit. But often the compiler is not this good. You might expect the Intel compilers to reflect all Intel processor models, but my guess is that there is a lead time for the compiler to catch up to the latest members on a processor family.
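
As a rough illustration of the difference (the source file name is made up, and the exact flags vary by compiler and version), with GCC you can either build for the whole family or for the exact model you are sitting on:

  # generic build: runs on any x86-64, instruction scheduling left largely to the hardware
  gcc -O2 -march=x86-64 -mtune=generic hotloop.c -o hotloop_generic

  # model-specific build: the compiler schedules for the particular core it detects
  gcc -O2 -march=native hotloop.c -o hotloop_native

The second binary may not even run on an older machine in the same family, which is exactly the portability trade-off that makes distributions ship the generic version.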

I know a couple of organizations that write hand-crafted Fortran (which generates very deterministic machine code - which is examined) where the compiler optimizer rarely makes the code any faster, and is often turned off so that the code executes exactly as written. This level of hand optimization is only done on code that is executed millions of times, but the elimination of just one instruction in a loop run thousands of millions of times can provide useful savings in runtime.

As long as an organization believes that hand-crafted code delivers better executables, they may justify the expense of doing it. It's their choice, and a generalization about the efficiency of compiler-generated code is not a reason to stop when faced with empirical evidence. Sometimes, when pushing the absolute limits of a system, you have no choice but to make the code as efficient as possible using whatever means are available.

US govt mulls snatching back full control of the internet's domain name and IP address admin

Peter Gathercole Silver badge

Re: internet freedom @Ole re: alternate root

Whilst it is true you can do this for DNS to create an alternative root, it is not possible with the numeric IPv4 or IPv6 address spaces. This is because a single organization does not control the routing tables outside of its own networks. You certainly could give your network any IP address you wanted, but persuading an upstream ISP to route to that set of addresses without it being properly registered isn't going to happen (and the problem only gets worse as you get further from your network).

It would be possible to use VPNs across the current Internet proper to tunnel a private address space, but you could not really call that an alternative Internet. At best, you would regard it as a parasitic network, relying on the thing you want to replace for its existence.

To really set up an alternative Internet, you would need an alternative global router network, which would be very expensive to set up. But some global companies do run trans-national intranets, like most of the owners of the class A and many of the class B address ranges. But these are (again) not really an alternative to The Internet.

Foolish foodies duped into thinking Greggs salads are posh nosh

Peter Gathercole Silver badge

Food resembling other food

I was in a chain Italian restaurant, and ordered the veal (I know, it was one of those "I've got to try it once" moments overriding any ethical thoughts), and I was very disappointed to get a plate containing something that looked and tasted like a Bernard Matthews turkey steak with half a tin of Heinz spaghetti in tomato sauce and a quarter of a bag of Florette small-leaf salad.

Maybe it was, and I was just duped!
