* Posts by Peter Gathercole

2953 posts • joined 15 Jun 2007

Part of CAP IT system may be scrapped after digital fail – MPs

Peter Gathercole Silver badge

Re: The last time I was involved in paper maps for field registration...

The problem was not getting the maps, it was getting the maps at the same time as all of the other farmers in the area doing the same! You've never seen such a group of grizzled, wind-burnt old codgers outside of a farm deadstock sale.

Of course all of the large farmers just sent one of their workers to queue, or got their farm-agent to do it for them. As an IT specialist, I felt most out of place, not being able to talk of the field yield, soil heaviness, milk quota, lambing figures and the myriad of other farming jargon.

It did make me think how unintelligible we must be to other people sometimes!

Peter Gathercole Silver badge

The last time I was involved in paper maps for field registration...

...what ended up happening was very long queues at the local Ordnance Survey offices, with farmers trying to purchase the relevant maps to send off as the deadline approached.

You could not use any of the popular and readily available scales; you had to use the 1:10,000 scale (about 6 inches to the mile), which shows field boundaries and which was only available in person from an OS local office. The queues were incredible. I spent over 8 hours in one trying to get three sheets for my father-in-law's farm.

That was some time ago, but it was a real pain. I hope that what they've introduced is better now, because I understand that the OS local offices are no longer there!

TOP500 Supers make boffins more prolific

Peter Gathercole Silver badge

Re: Chemists are... @YAAC

I was writing in the present tense, so I was commenting on what I've seen, not on what should be done.

But I seriously doubt that you are correct. If you get some computer scientists on board, they will want to write in something like Python if they've just coded as part of their degree; C++, Java, or derivatives if they've been taught formal programming languages; or something obscure like Haskell or Erlang if they are working in the field of functionally correct programs.

Like it or not, writing efficient HPC code is still best done in a relatively simple language like Fortran, because you can get so close to the machine code actually being executed that if necessary you can tweak it at the assembler level to wring out the last few clock cycles in critical parts of the code. Depending on which HPC segment you're looking at, owning an HPC is normally not just about running your code fast, it's about running it as fast as you possibly can.

Don't believe any hype that for this type of programming an IDE is ever going to generate more efficient code than something closer to the bare metal. And I don't think that you will get any computer scientist seriously considering Fortran as a language to work in unless they are already involved in the HPC field.

I am involved in the field myself at the moment (as a mere system admin), and I talk to people involved in solving big problems using HPCs, and this is what I am told by people actively writing for such systems.

Peter Gathercole Silver badge

Re: Chemists are...

But you will probably find that programs written by chemists to solve chemical problems are more effective than those written by computer scientists to solve the same problems, because the chemists understand the problem, whereas the computer scientists may write better code but don't understand what they are trying to solve.

BT bemoans 'misconceived' SUPERFAST broadband regs

Peter Gathercole Silver badge

Re: This sort of crap @Colin

The National Grid is not a good comparison.

National Grid moves electricity from the electricity generating companies to the regional electricity distribution network operators, so they own and operate a large part of the pylons that you see striding across the country.

The regional electricity distribution network operators (in my region, that is Western Power Distribution) own and operate all of the local distribution network, which in the telecommunication industry is often called (inaccurately) the Last Mile, which links all of the consumers.

I agree that we could see a similar tiered structure in the UK for telecommunications, where you would have regional companies operating all the hardware and lines linking the exchanges and the consumers, and trunk companies operating the inter-exchange networks, with the overall service being provided by virtual operators who use the facilities operated by the other two, but I think that is a very different proposition to the one you propose, one that the US tried to follow and failed at.

Peter Gathercole Silver badge

Re: BT Fibre @streaky

They count FTTC as Fibre.

Drug drone not high enough: Brit lags' copter snared on prison wire

Peter Gathercole Silver badge

Re: *shakes head* @werdsmith

And yet, although Rye Hill is a more modern prison, it was originally built as a borstal in the 1960s, when there was less opposition to this type of development. Since then, its role has been changed more than once, probably with modifications added each time to make it more suitable for its new purpose.

It's always easier to get agreement to extend an existing facility like this than it is to build a new one from scratch.

Peter Gathercole Silver badge

Re: *shakes head* @Bob

Unlike the US, where prisons tend to be isolated outside population centres and can afford to have wide perimeters with more than one fence (at least, that is what I see on US telly), in the UK older prisons, many of them built in the Victorian era, are often in towns or cities, where there is just no space for a wide perimeter.

It is often the case that there are public roads right next to the perimeter wall. With prisons such as these, it is impossible to prevent the public getting close enough to catapult small amounts of contraband over the wall. Here are the co-ordinates of HMP Bedford: 52.139530, -0.469831. Look at the satellite image in Google Maps or something to see the problem.

You might say that it is poor planning to have prisons in towns, but the UK is smaller and has a much higher population density than the US and many other countries. If prisons are built away from a population centre, first of all there is a huge outcry about using precious green-field sites for prisons; then, if they do get built, a new population centre grows up around the prison, because people working in the prison tend to like living close to where they work. The prison then has to expand because of the increasing prison population, so it tends to grow towards its perimeter, using up the space.

Helium-filled drive tech floats to top of HGST heap

Peter Gathercole Silver badge

"before those persistent helium molecules find a way through"

I grant that having a medium to 'fly' the heads is important, but surely the major problem is not preventing the helium getting out, but preventing other atmospheric gases getting in.

Because nitrogen, oxygen, argon and carbon dioxide are much larger molecules (or atoms, in argon's case), it's much easier to prevent them entering the drive.

Provided you can prevent other gases entering, the rate of helium leakage will fall anyway as the pressure inside the drive drops, so I can see them having quite a long life.
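A rough kinetic-theory sketch supports this. Graham's law says the rate of effusion through a small opening scales as one over the square root of the molar mass (this is only a back-of-the-envelope approximation for leakage; the figures below are just standard molar masses, not anything drive-specific):

```python
from math import sqrt

# Graham's law: effusion rate through a small leak scales as 1/sqrt(molar mass).
# Standard molar masses in g/mol.
MOLAR_MASS = {"He": 4.0, "N2": 28.0, "O2": 32.0, "Ar": 40.0, "CO2": 44.0}

def rate_relative_to_helium(gas: str) -> float:
    """Effusion rate of `gas` as a fraction of helium's, other things being equal."""
    return sqrt(MOLAR_MASS["He"] / MOLAR_MASS[gas])

for gas in ("N2", "O2", "Ar", "CO2"):
    print(f"{gas} effuses at {rate_relative_to_helium(gas):.2f}x helium's rate")
# N2 comes out at 0.38x and CO2 at 0.30x: helium gets out roughly three times
# faster than the atmospheric gases get in through the same gap.
```

So even by this crude measure, keeping the air out is the easier half of the problem.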

Standard General bids to save RadioShack from oblivion

Peter Gathercole Silver badge

Re: " from selling a Bentley to selling a Ford to selling a used Vespa.”

It depends how far back you look.

Having found a site with archived catalogues (which I posted a link to in a comment on a previous story), it is quite clear that from the 1930s through to the early 1960s they had an extensive set of products which would have appealed to anybody interested in shortwave radio or in building and repairing things, especially as I suspect that they did most of their business by mail order.

If I were to pin down the point where they started going downhill, I would suggest that it was when they tried to become a general electrical retailer (HiFi, TV) with a physical presence in most large towns (US cities), which saw them expand and have to compete with all the other electrical retailers who were doing the same thing. They were briefly successful, riding the wave of early home and small-business computers, but after that it was all downhill, IMHO.

They were not able to make their brands, such as Realistic or Micronta, as recognisable as the Japanese ones (Technics, Sony etc.), and a lot of the stuff they sold could easily be recognised as rebranded OEM products, often more expensive than the originals.

Aged 18-24? Don't care about voting? Got a phone? Oh dear...

Peter Gathercole Silver badge

Re: Targetting young people with specific policies is a waste

I think allowing 16-year-olds to vote in the recent Scottish independence referendum was a cynical ploy to try to win the vote.

It is well documented that the young tend to be naive, idealistic, and often have a very simple view of the world. It is very common for young people to have radical views which would chime with the notion of an independent Scotland.

People are often militant when they are young, and mellow over the years to become more conservative (with a small c), regardless of their politics.

The number of times I hear young people demanding that they are people too, and need representing. My answer to that is "Well, I've been young, and now I'm older. I know much more about how you feel, because I've been where you are, and so have my children. You're still young, and never have been old, and don't yet know what that's like, so you don't yet have a balanced view of the world."

I know that this could be taken to absurdity so that only the oldest people in society are allowed to vote, but I feel that there is a level of experience sufficient to allow you to look both ways, and you've not got that at 16.

Peter Gathercole Silver badge

Targetting young people with specific policies is a waste

The problem with young people is that they become older, and the political process takes time.

Take policies targeted at students. Most students of voting age are on two- or three-year courses, and have been since last autumn. Any government elected in May this year will not be able to change policy this academic year, and probably will not manage it in the next either, meaning that any change would likely come in for the academic year starting September 2017. Any student now in their second or third year will not be affected by policies brought in by the next government, so targeting them is a waste of effort.

In addition, a government making new policies on, say, student loans will not change the conditions for existing students, because that may materially change the affordability of being a student, and this is recognised as being unfair.

It's difficult getting young people to engage. My four kids (ages 19-29) are all registered to vote, but I suspect that none of them will, not because they are apathetic, but because they have not yet learned enough about life to be able to identify what will affect them in the future. And I think my kids are of at least average intelligence.

They just do not follow what is happening in the country or the wider world, and they don't trust (probably wisely) the spoon-fed election propaganda from the politicians, as they realise that this is just a sop. I keep getting asked what my views are, and being of a liberal persuasion (note the small 'l'), I don't want to influence them with my thinking too much.

Backing up cloud applications is never easy but Asigra gets it done

Peter Gathercole Silver badge

Backup is not what I think of when I hear Data Protection

I'm not sure whether Canada uses different terminology, but to me, Data Protection means making sure that data is not revealed where it is not meant to be, that the data retained is appropriate to the reasons for which it is being kept, and that it is correct.

This is the opening paragraph of the Wikipedia article on "Information privacy":

Information privacy, or data privacy (or data protection), is the relationship between collection and dissemination of data, technology, the public expectation of privacy, and the legal and political issues surrounding them.

And UK and EU legislation for "Data Protection" is all about this.

Still, this isn't a UK-centric news site, is it?

Apple boots Windows 7 out of Boot Camp

Peter Gathercole Silver badge

Well, it makes some sort of sense.

You can no longer buy Windows 7, so of course when you get your new Mac, you can't get a brand new, shiny, shrink wrapped Windows 7 license anyway, can you.

Of course, they've overlooked the fact that you may already own a transferable Windows 7 license, but nobody would put a 'used' version of Windows on their shiny shiny, would they. That's just so dirty.

BOFH: Mmm, gotta love me some fresh BYOD dog roll

Peter Gathercole Silver badge

Re: Vax? VAX!

It's definitely an acronym: Virtual Address eXtension, IIRC.

If it's been in storage, I would expect that it would take a while to get running again, and you wouldn't get any support from Digital Compaq HP.

BBC gives naked computers to kids (hmm, code for something?)

Peter Gathercole Silver badge

Re: Sorry to be a killjoy, but it's dumb.

In the early '80s I built and ran a 'computer appreciation' lab at a UK Poly, back when computers were relatively rare things, and peripherals like robot arms, speech synthesisers, light pens, plotters etc. even more so. I do understand that physical things are more meaningful, but only for capturing initial interest.

Once the novelty wore off, for the young people who 'got' what it was all about, the physical nature was less important, and for those who didn't, it was no longer interesting.

In fact, when given the opportunity to use any or all of this great kit in their end-of-year project, none of them were interested, and they all settled on pure software projects. Everybody involved in setting up and running the lab was very disappointed.

Peter Gathercole Silver badge

Re: Sorry to be a killjoy, but it's dumb. @werdsmith

I agree. I'm not making the comparison with the BBC micro, other people are. I just pointed out how stupid that comparison is.

Before this will be useful even as a wearable gadget, it will need a case and some power. That will make it much more bulky, and mean an additional outlay before it can be used like this.

I can see a niche for it, but not as something given to 'every schoolchild' of a certain age, especially if all the majority do is connect it up to a PC running some cute development environment that allows it to display on the 5x5 LED grid a character moonwalking or dancing (Barclays are involved) at the click of a few on-screen buttons. Is that really going to be enough of an achievement to capture a child's long-term interest?

At best, it's going to be a novelty hook that may hold the attention of a small part of their target audience for long enough for them to learn something useful, but I fear that the number who take advantage will be tiny.

Peter Gathercole Silver badge

Sorry to be a killjoy, but it's dumb.

As is comparing it to the BBC Micro of old.

What made the BBC Micro attractive was that it was a fully functional computer (for its time) in its own right. You could start with a simple language, drawing lines on the screen, making laser-zap sounds and prompting for the child's name, and then move on, using the same system, all the way through to structured programming, assembler, industrial control, simple office applications, and data gathering and display. A cheap laptop with a suitable application is going to be far better.

This microbit is a toy that cannot exist without another computer being involved. It's going to be seen by the majority of kids as 'just another thing they plug into their PC with flashing lights' in the same vein as computer controlled toy missile launchers or bluetooth controlled RC cars. It will hold their attention for a few hours (if they get it working) and then be discarded.

I'm not saying that it will not get any traction. There are bright teachers (and some self-taught kids) who will do some tremendous things with it, I'm sure. Just don't expect this to be a significant part of teaching advanced computer interaction to the generation of kids who will receive it.

They'd do better going back to Logo-controlled turtles, or a KIM-1 development kit.

'Rowhammer' attack flips bits in memory to root Linux

Peter Gathercole Silver badge

Re: ECC is not enough

I did not mention Hamming codes because I learned about them over 30 years ago (they were one of the first things taught on my CS course at uni), I was not certain the term was still in use, and I could not be arsed to spend any time reading up on the specific techniques used nowadays while writing the comment.

Peter Gathercole Silver badge

Re: ECC is not enough

If you could predictably determine which bits would be flipped, you might be able to do this, but most ECC memory has multiple bits for error correction per 64- or 128-bit word, and uses something rather more sophisticated than plain parity. The ECC I've seen normally allows single-bit corruption per word to be fixed and double-bit corruption to be detected, with stronger schemes correcting more. You would have to flip the bits in a pattern where the ECC bits would not flag an error, and in order to do this you would need to know how the ECC bits are calculated, which is probably memory-vendor specific.
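To illustrate why this is hard: here is a minimal sketch of the textbook Hamming SECDED scheme (not any particular vendor's ECC, which as noted is likely proprietary). A single flipped bit produces a syndrome that points at the bad position; two flips give a non-zero syndrome with even overall parity, which is detectable but not correctable:

```python
def secded_encode(data_bits):
    """Hamming-code a list of 0/1 data bits, plus an overall parity bit (SECDED).
    Index 0 holds the overall parity; power-of-two positions hold Hamming parity."""
    r = 0
    while (1 << r) < len(data_bits) + r + 1:
        r += 1
    code = [0] * (len(data_bits) + r + 1)
    bits = iter(data_bits)
    for pos in range(1, len(code)):
        if pos & (pos - 1):                       # not a power of two: data position
            code[pos] = next(bits)
    for p in range(r):                            # each parity bit covers positions with bit p set
        mask = 1 << p
        code[mask] = sum(code[i] for i in range(1, len(code))
                         if i & mask and i != mask) % 2
    code[0] = sum(code[1:]) % 2                   # overall parity, for double-flip detection
    return code

def secded_decode(code):
    """Return ('ok', None), ('corrected', pos) or ('detected', None)."""
    syndrome = 0
    for i in range(1, len(code)):
        if code[i]:
            syndrome ^= i                         # XOR of set positions points at the flip
    overall = sum(code) % 2
    if syndrome == 0 and overall == 0:
        return ("ok", None)
    if overall == 1:                              # odd overall parity: exactly one bit flipped
        return ("corrected", syndrome)            # syndrome 0 means the parity bit itself
    return ("detected", None)                     # even parity, non-zero syndrome: two flips
```

Flipping one bit of an encoded word and decoding it reports the flipped position; flipping two reports an uncorrectable error, which the OS then has to handle as described below.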

I don't know how Linux handles uncorrectable ECC errors, but other OSs normally take exception, and depending on what was running at the time the ECC error occurred would either kill the process, or if it were in Kernel mode, panic the kernel. I've even seen this take out an entire system running VMs in a type 1 hypervisor, if the error occurs while executing hypervisor code.

As a result, if you are using ECC memory, you would have to get a correct pattern every time or else things will happen that will be noticed.

Peter Gathercole Silver badge

Probably difficult to exploit in real systems

I was initially sceptical about this, because I could not see how you could predict which other memory pages would be affected by a particular rowhammer, but the report adds much detail to the issue. If you are really interested, give it a read.

There are quite a lot of "Hopefully this..." type statements, so the authors acknowledge a degree of luck in triggering the exploit.

The report also acknowledges that the exploit will work best on a system that is fairly idle, as it requires the process to essentially fill all of the available memory first with known data to identify related memory pages, and then with page table entries created using mmap().
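The 'fill with known data' step is easy to illustrate. This is only the bookkeeping half of the attack - the hammering itself needs native code, cache flushes and knowledge of physical memory layout, none of which Python gives you - but it shows how spraying pages with a marker pattern lets the attacker locate exactly which page and bit a flip landed in (the marker value and page count here are arbitrary):

```python
PAGE_SIZE = 4096
MARKER = 0b01010101          # arbitrary fill byte; any known pattern works

def spray(n_pages):
    """Fill a region of memory with a known pattern, one 4 KiB 'page' at a time."""
    return [bytearray([MARKER] * PAGE_SIZE) for _ in range(n_pages)]

def find_flips(pages):
    """Scan the sprayed region, reporting (page, offset, flipped-bit mask) tuples."""
    return [(pno, off, byte ^ MARKER)
            for pno, page in enumerate(pages)
            for off, byte in enumerate(page)
            if byte != MARKER]

pages = spray(64)
pages[3][100] ^= 0x04        # simulate a single bit flipped by hammering
print(find_flips(pages))     # -> [(3, 100, 4)]: page 3, offset 100, bit 2
```

The real exploit then frees a page it knows it can flip and tries to have the kernel reuse it for a page table, which is where the scheduling luck described below comes in.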

In a machine with other workload, the likelihood of being able to control enough memory to allow this to be reliable is seriously reduced (it requires a page that is identified, then freed to be immediately used by the system for a page table page, something that could not be guaranteed on a busier system, especially as the page freed with madvise() + MADV_DONTNEED would probably generate a context switch).

All the time you are rowhammering essentially unpredictable pages (part of the exploit is to hammer memory lines until you can find one that affects a memory page you control), you could also be creating other problems on the system including unpredictably modifying running code and data structures.

The more you have to do this, the more likely it is that you will trigger another unpredictable action which would attract attention.

There is also tacit acknowledgement that this attack in its published form relies on certain features of the Intel x86-64 architecture (specific instructions, like CLFLUSH, that allow rapid toggling of memory bypassing the cache), although it does suggest ways of triggering the bit flip on other architectures.

Don't get me wrong. It's a clear issue, and one that is exacerbated by the fact that Linux, being open, is less difficult to craft an exploit for because the internals are better understood, but I believe that exploits in the wild are likely to be few.

Clinton defence of personal email server fails to placate critics

Peter Gathercole Silver badge

Re: Convenience

I would imagine that she has 'staff' to do it for her. All she had to do was make sure they were paid, and that was probably being done by someone else anyway.

The convenience was only having to carry around one device, with one account on automatic login.

Quantum computers have failed. So now for the science

Peter Gathercole Silver badge

Re: For Markets in a Pickle and Heading for a Mass Flash Crash

My deity. He's got a Blog.

I don't understand anything he says there either.

Web protection: A flu mask for the internet

Peter Gathercole Silver badge

Re: Dumbed-down readership? @AC

Well, someone had to say it, and unusually, I had the chance to be the first.

Cleverness is relative. I only claim to have been a Register reader for some years, not that I am particularly 'clever' or 'bright', although I do value my silver badge. IMHO, this type of article is poor compared to previous ones, so I was commenting on whether the calibre of reader The Register is attracting is declining.

Peter Gathercole Silver badge
Thumb Up

Re: different access levels for ... departments. @Andy Non

And they let you attach USB memory devices that had been used on non-secure systems?

Goodness, how lax!

Peter Gathercole Silver badge

Dumbed-down readership?

This article does not appear to be aimed at the traditional Register reader.

Add to that the fact that it proselytises cloudy services, and it is, IMHO, a below-par article.

Is it really the case that the Register's readership now includes people who could derive benefit from this type of overview article? If so, I need to find another site to get my tech news.

Google promises proper patch preparation after new cloud outage

Peter Gathercole Silver badge

Re: I hope that... @Aitor

All good points... provided that the SLA with their customers reflects a realistic amount of downtime for the service. I've been involved in implementing services that are designed to be able to cope with failures, and this is what I feel the cloud providers should be aiming at.

The problem as I see it is that cloud providers sell their service as resilient ("just look at all the places we can move your services to"), giving customers an expectation, without actually investing in the infrastructure that allows this expectation to be met.

Cost is an issue, of course. Cloud providers must match a realistic expectation of total system availability with the cost, with different tiers of pricing. Otherwise, providing cloud services will become, in a currently popular phrase, a race to the bottom, with price being the ultimate factor in the choice.

Unfortunately, the people buying the service may not actually really understand what they are being sold, and if someone with real experience of service continuity tries to point out the deficiencies, they will be branded alarmist, or protectionist (of their own jobs), and be sidelined.

Peter Gathercole Silver badge

I hope that...

... the outage did not affect the whole of the GCN, just parts of it.

If it did affect it all, then it would appear that they've got at least one too many single points of failure.

Whilst I know that certain parts of the core infrastructure are difficult to make completely redundant, multiple network fabrics that can be run in isolation for resilience is not a new idea, nor is only working on one of your fabrics at once a particularly taxing notion.

I feel that all cloud providers should effectively have the best High Availability features for their infrastructure. Don't rely on the MTBF figures provided by your equipment suppliers as a realistic measure of the availability of the service run on the kit, because unexpected 'stuff' happens.

Broadband routers: SOHOpeless and vendors don't care

Peter Gathercole Silver badge

Re: "having a modem and a router as separate devices" @LDS

I mentioned Raspberry Pi, because I happen to have a B+ sitting around not doing a lot at the moment. For me, it's all about not spending too much money.

My VDSL sync speed to the cabinet is around 80Mb/s, and I have managed to get speed tests of ~50Mb/s when directly connected via GigE to the router, so I am a little uncertain that USB2-connected Ethernet adapters (theoretically capable of the required speeds, but I'm always sceptical) can hack it.

I've just scavenged a newer old laptop back from one of the kids (they weren't using it as it would not game), and am going to try a firewall distro that supports CardBus, with a gigabit CardBus Ethernet card that I have lying about. IPFire looks like a suitable distro.

Peter Gathercole Silver badge

Re: "having a modem and a router as separate devices"

Yes, I meant Raspberry Pi.

The problem is that I don't want to spend too much (any?) money (you can call me a skinflint if you like, I won't take exception). I've got used to using otherwise discarded kit (It's a Thinkpad T20 at the moment) for my firewall, and it's got to the point where I don't actually really have anything powerful enough to comfortably do this job without having to resort to a re-purposed deskside machine. Being a full-time Linux user, I'm still finding my go-to Thinkpad T43 good enough with Ubuntu 14.04 (LTS with Gnome-fallback), and have not had to buy a more modern laptop to free up anything remotely powerful.

I've now actually looked at IPFire on Intel processors, and it may be good enough on the laptop I'm currently using as a firewall, when used with a PC Card Ethernet device. This is where Smoothwall was lacking: it didn't support PC Card devices without serious modification involving a rebuilt kernel - the stock kernel does not support PC Card modules.

But I then have the problem that the best Wireless Access Point I have (in the Bright Box) is outside of the firewall!

Peter Gathercole Silver badge

Re: Actually, a router is a router, and "modems" no longer really exists...

There used to be single-function devices that took Ethernet (or USB) on one side and a DSL link of some sort on the other. They allowed you to run PPPoE on another device (firewall or computer) directly to whatever was in the exchange; no 'routing' was done in the device at all. You might have called them 'repeaters' or 'bridges', although both of those names have fallen into disuse. These devices could not remotely be called routers, and although 'modem' was not really accurate either, it was the term commonly used.

Some ADSL routers could operate in bridge mode, doing the same, and not actually offering any IP routing. This may still be possible, I've not looked recently.

But nowadays, what you get is a multi-function device that does lots of things, including routing, wireless access, firewall, DHCP, VPN and, in some cases, print and file sharing. This is one angle of the story: the more functionality you build into a device, the more likely it is to have a security vulnerability.

Peter Gathercole Silver badge

"having a modem and a router as separate devices"

Before I had FTTC installed, I used to have an ADSL router connected to the Smoothwall firewall "Red" network, and all other comms kit on the "Green" network, including the wireless router(s), powerline Ethernet and all the systems.

My wife, looking at the pile of 'things' next to the 'phone line, always complained about the space and power (really), and whenever we had an interruption to the Internet she blamed it on the fact that we did not do it the same way as everyone else, even though she has no more idea of comms and security than a potato.

Unfortunately, and to my shame, I've had to drop the idea of running a separate firewall. The hardware I have available just can't keep up with the speed of the network, and having just a single box powered on all the time appears quite attractive at the moment.

I need to spend some real money sometime! Anybody know if IPFire on an RPi is any good?

UK Supreme Court waves through indiscriminate police surveillance

Peter Gathercole Silver badge

Re: @Steven

I know this. That's why I know that AC comments appear on "My Posts". What you don't know is how many of my comments are posted under my name, and how many are anonymous!

As there is no HTTPS offering, you or any other boss (or your network specialists running the firewall(s)) could examine any postings made from inside your company (or even on the "Dirty" wireless network that you offer to your employees) without any difficulty at all. So it's not even protecting people from their bosses!

What I was trying to say is that in terms of anonymity to the security services or the Register staff, the AC box has no meaning.

Peter Gathercole Silver badge

Re: It's Over for Democracy

Stating the bloody obvious here: posting as an anonymous coward does not make the comment truly anonymous. All it does is prevent other readers seeing who posted it.

In order to post a comment, any comment, you have to be logged in. Even if you click the "Post anonymously" box, the comment is still recorded against your ID. Just look in your "My Posts". So anybody at El Reg can tie any comment to the email address registered against the account.

Anybody who really thinks that they are anonymous just by clicking the box is deluding themselves, as I'm sure everyone here knows.

Microsoft comes right out and says backup software is dead

Peter Gathercole Silver badge

Re: oh really!?

Deduplication is fine if your data contains significant amounts of duplicated data.

Where I work at the moment, we generate 6-8TB of UNIQUE data (scientific data collected from the wild plus results of modelling derived from that data) that needs backing up (via incremental forever) every day.

It's relatively compressible, and TSM compresses the 6-8TB down to 2-3 tapes' worth (1TB uncompressed tapes), but I would be very surprised if there is much duplication at the block level, and less at the file level.
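The distinction is easy to demonstrate with a toy block-level estimator (the block size, hash and stand-in data below are arbitrary choices of mine, not anything TSM actually does; random bytes model the "no repeated blocks" property, though unlike real scientific data they don't compress either):

```python
import hashlib
import os
import zlib

def dedup_ratio(data: bytes, block: int = 4096) -> float:
    """Fraction of fixed-size blocks that are unique; 1.0 means dedup gains nothing."""
    hashes = {hashlib.sha256(data[i:i + block]).digest()
              for i in range(0, len(data), block)}
    n_blocks = (len(data) + block - 1) // block
    return len(hashes) / n_blocks

unique = os.urandom(1 << 20)          # stand-in for never-repeating scientific data
repetitive = b"VMIMAGE." * (1 << 17)  # stand-in for cloned VM images and the like

print(dedup_ratio(unique))            # ~1.0: every block distinct, dedup is useless
print(dedup_ratio(repetitive))        # ~0.004: almost everything deduplicates away
print(len(zlib.compress(repetitive)) / len(repetitive))  # tiny: compresses well too
```

Deduplication only wins when identical blocks recur across the data set, whereas compression works within each block, which is why the scientific data above compresses well yet dedupes barely at all.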

For this type of data, cloud backup is really not an option, and cloud storage does not work because of bandwidth issues. Powerful though the collective resources of your cloud provider are, they don't match the requirements of HPC workloads.

UK spaceport, phase two: Now where do we PUT the bleeding thing?

Peter Gathercole Silver badge

Re: "no UK Government ever spends any money west of Bristol"

Plymouth had an airport. It just didn't have passengers wanting to use it.

Unfortunately, the market will ultimately dictate whether an airport is viable, and it looks like Plymouth wasn't.

I'm not sure whether Newquay is viable, either.

Peter Gathercole Silver badge

Re: "no UK Government ever spends any money west of Bristol"

You have no idea how ironic it is that I forgot to include the Met Office!

Peter Gathercole Silver badge

"no UK Government ever spends any money west of Bristol"

Well, not strictly true.

You've got Plymouth Dockyards, RNAS Yeovilton, and the UK Hydrographic Office which are directly MOD or MOD trading funds, so west of Bristol is too broad.

But Cornwall? Isn't that a different country! It certainly feels like it sometimes. I mean, for some time last year, there was not even a mainline railway running to it!

Telly behemoths: Does size matter?

Peter Gathercole Silver badge

Re: When I was a kid

I had a colleague who used to work on TV and monitor design. He said that the automatic degaussing circuit that used a PTC thermistor often drew up to 100A from the mains. The only reason it did not trip the breaker or fuse on the ring main was because it only drew that amount of current for a fraction of a second, less time than it took the breaker (or a fuse) to operate.

I had no reason to disbelieve him.
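A rough sanity check bears him out. Every figure below is an assumption for the sake of the arithmetic (a typical UK ring-final breaker rating, an assumed pulse length), not a measurement from the anecdote, but it shows why a brief 100A pulse slips past a breaker's thermal element, which responds to accumulated I²t rather than instantaneous current:

```python
# Illustrative arithmetic only: all figures are assumptions,
# not measurements.
rated = 32.0    # A: a typical UK ring-final breaker rating
inrush = 100.0  # A: the peak degauss current from the anecdote
pulse = 0.1     # s: assumed pulse length before the PTC heats up

# A breaker's thermal element integrates I^2*t over time (assuming
# the peak stays below the magnetic element's instant-trip threshold).
pulse_i2t = inrush ** 2 * pulse      # let-through of the degauss pulse
one_second_rated = rated ** 2 * 1.0  # one second at the rated current

print(f"degauss pulse: {pulse_i2t:.0f} A^2.s")
print(f"1 s at rating: {one_second_rated:.0f} A^2.s")
# The pulse delivers about as much I^2*t as a single second at rated
# current, far inside the thermal element's multi-second tolerance.
```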

I also know that when I worked for a big bank, if the power went out on the floor I was on, we were told to turn off all the large CRT monitors before power was restored, because the degauss surge from so many monitors starting at the same time would trip the breaker again as soon as it was reset. Apparently, although the building was designed with computers in mind, the architects did not expect the large CRTs that support people asked for. As a result, the floor rings were right at the safe limit under normal operation.

That floor was the first to get flat panel LCD monitors when the refresh happened.

Peter Gathercole Silver badge

Re: When I was a kid

Degaussing circuits were not fitted to late-'60s and early-'70s valve TVs. These would often, over time, acquire a permanent magnetic field around the chassis or the tube itself, leading to psychedelic colours at the edges of the screen. You got a TV engineer out with a magical degauss coil that he waved over the screen to make it work properly.

To compensate for the earth's magnetic field, these TVs actually had small bar magnets mounted on the chassis around the tube on bendable 'stalks'. These would be painstakingly adjusted until the Test Card showed no distortion.

My mother was obsessed with keeping a cabinet TV. They used to rent a Baird from Radio Rentals right from when BBC 2 first started transmitting in colour (around 1967 IIRC - That was a bad year for me because of illness, and I was off school for some time, and I got hooked on the Trade test transmissions which were broadcast on the hour for the benefit of TV installers - White Horses, Skycrane, and trout farming come to mind). Towards the end of its life in the '80s, the tube was so badly magnetized that it would not demagnetize, no matter how many times the degauss coil was passed over the TV. Of course, it could be that Radio Rentals no longer had any working degauss coils in their toolboxes!

Eventually, Radio Rentals pleaded with my parents to stop calling in faults and let them take it away, because they could no longer fix it. They provided a Ferguson in its place, which just did not hack it with my mother as it was made from paper-covered chipboard, rather than real wood!

Right up until the end of her life, my mother still complained that the sound and colour (maybe because it was not psychedelic!) of whatever TV they had was poor compared to the Baird TV. I think it was an ideological thing, however, as this was even with the sound passed through external amplification.

And the buggiest OS provider award goes to ... APPLE?

Peter Gathercole Silver badge

Re: This is not a football match. @h4rm0ny

It is entirely possible that it could be done as a community project, but the resource involved would probably be too much for a one-man band, or even a small group of people doing it in their spare time, and the necessity to test it against the plethora of distros would be a similarly mammoth task.

It's easy to have a community project that adds a veneer over the top, because you can break the tool down into modules that drive the documented tools. Getting in at the fundamental layers, where the different distros tend to differ from each other, and where the documentation has not been maintained, or in some cases not even written, is a much harder task, and requires much more research and testing.

It would be difficult to get such a layer accepted to the extent that the major distro owners would adopt and maintain this common approach in preference to their own distro specific tool.

If we had had a situation where a fully free Linux had become a de facto standard, then if that distro's maintainer was altruistic, they could have incorporated something like this and hoped that it would be picked up by other distros. But it seems unlikely that the increasingly fragmented Linux world will settle on a dominant distro (hell, systemd risks fracturing the community even more than it currently is).

What with Canonical, a company that was being portrayed as a bit of a white knight a few years back, going in a direction that is unlikely to be followed by other distros, I think the time for a dominant distro is fading into the past. Mint is unfortunately reliant on Ubuntu, and RedHat always had an agenda to try to leverage support contracts from their users. SuSE, which looked like its independence was under threat, appears to have weathered the storm but has lost followers. Debian appears to be going with systemd, which will alienate a lot of people (and will be a nightmare to administer using a tool such as I am proposing).

I suppose that Lennart Poettering (systemd) could take on an administration tool that would plug into systemd and extend it to cover other sysadmin tasks, but I for one would not trust him to run such a project without making it almost completely unusable/unsupportable.


Peter Gathercole Silver badge

Re: This is not a football match. @h4rm0ny

System administration is one of those areas where Linux has suffered because of the diversity of the distros.

The one-size-fits-all processes like useradd will do the basic job at hand on the local system, and are pretty similar across all versions of Linux. Once you get beyond this, each distro has its own idea of how to streamline this and other admin tasks, and most of those mechanisms are distro specific. In some cases they are proprietary and closed source, to try to generate a revenue stream, and do not interoperate.

There is not even a consistent package management format across all versions of Linux.

It is very difficult for a new Open Source package to come along and streamline this. What is needed is a tool that goes in at a low enough level to manipulate the configuration files/databases/objects fundamental to Linux, to provide a consistent system management layer in all distros.

What you actually get (like with Puppet) is a whole load of distro specific methods layered on top of and driving the specific interfaces for each distro. This works, but is high maintenance, which often means that it becomes paid-for software (again, Puppet is an example of this).
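The veneer approach is easy to sketch, and the sketch itself shows why it is high maintenance. The tool names and command tables below are illustrative assumptions (a minimal dispatch layer, not Puppet's real internals): every distro family, and every renamed native tool, means another entry someone has to maintain.

```python
import shutil

# Illustrative veneer layer: map each distro family's native package
# tool onto a common "install" verb. Each new distro family needs
# its own entry, which is exactly the maintenance burden described.
PACKAGE_TOOLS = {
    "apt-get": ["apt-get", "install", "-y"],    # Debian/Ubuntu family
    "dnf":     ["dnf", "install", "-y"],        # Fedora/RHEL family
    "zypper":  ["zypper", "install", "-y"],     # SuSE family
    "pacman":  ["pacman", "-S", "--noconfirm"], # Arch family
}

def install_command(package):
    """Return the native command line to install `package`, or None
    if no recognised package tool is on the PATH."""
    for tool, argv in PACKAGE_TOOLS.items():
        if shutil.which(tool):  # first native tool found wins
            return argv + [package]
    return None

print("would run:", install_command("openssh-server"))
```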

There are two ways this could happen. One is if the major distros decide to collaborate and produce a common administration interface. The other is for a standardisation body to add the specification of such an interface, and have the distros adopt that standard.

The former is unlikely to happen, as the distro specific sysadmin stuff is where people like RedHat and Canonical make some of their money. The latter cannot happen as there is no accepted Linux standard or even standardisation authority, and even if there were, it would be dominated by the commercial distro maintainers, because they are the only people who might have resources to invest in a standard, and then we are back to the former point.

So what we have left is paid-for software or home-grown scripts put together by sysadmins which do the job, but are seen as being messy.

I can see no way of moving this forward unless someone with big pockets and a lot of influence with the distro maintainers decides to take it on.

Britain needs more tech immigrants, quango tells UK.gov

Peter Gathercole Silver badge

I think that as long as there are native people with relevant skills available, this type of request should be squashed. Does the Government even try to assess whether there is a real skills shortage, or do they trust the very people who are asking and would likely benefit through smaller wage bills?

How to independently measure whether there are people available to do the jobs? Well, how about a Register!

Anybody in El. Reg interested in creating a list of people with specific job skills who are currently available to present to the Government to counter claims of shortage of skills? I'd probably be prepared to pay a reasonable amount to appear on such a list when I'm not in work!

I've used the Joke icon because of the pun, but maybe it's not a joke.

Samb-AAAHH! Scary remote execution vuln spotted in Windows-Linux interop code

Peter Gathercole Silver badge


Unless I've missed something here, the steps of forking another process and performing a setuid/seteuid are still separate calls. It's not the fork() that is the problem, it is the fact that in order to perform the setuid()/seteuid(), the process changing its credentials must be running as root.

So you have a root-owned samba process that forks another root-owned samba process, which then changes its credentials to those of the user.

This is the way it works for all traditional UNIX processes that acquire a user's credentials: things like login, sshd, telnetd, ftpd etc. As people point out, it is a fundamental feature/flaw, depending on how you look at it.
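That fork-then-drop pattern is short enough to sketch. This is a hedged illustration of the general daemon pattern, not Samba's actual code: the function name and parameters are my own, and the key point is that setuid() is the privileged step (an unprivileged process may only "change" to the uid it already has).

```python
import os

def serve_as_user(uid, gid, work):
    """Sketch of the classic daemon pattern: a root-owned parent
    forks, and the CHILD drops to the user's credentials before
    doing the user's work. Names and signature are illustrative."""
    pid = os.fork()
    if pid == 0:            # child process
        os.setgid(gid)      # drop the group first, while still root
        os.setuid(uid)      # irreversible: the child is now the user
        work()              # runs with the user's credentials
        os._exit(0)
    os.waitpid(pid, 0)      # parent keeps root for the next client
    return pid

# Demonstrable without root: setuid() to your *own* uid is always
# permitted, so this runs unprivileged too.
pid = serve_as_user(os.getuid(), os.getgid(), lambda: None)
print("child", pid, "ran and exited")
```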

This is changed significantly if SELinux is turned on (or another RBAC system on other UNIXes), whereby you need to have the correct roles assigned to a process for it to be able to perform actions, including syscalls. Thus, I think that Linux already has a more controllable authorisation system; it's just not turned on in most systems, as it's foreign to the way that most Linux/UNIX sysadmins think.

Even though I understand the concept, I'm one of the sysadmins who've never set up a RBAC/SELinux system in anger, so I still have to go through the learning curve for this.

'Utterly unusable' MS Word dumped by SciFi author Charles Stross

Peter Gathercole Silver badge

Re: complex documents


Peter Gathercole Silver badge

Re: I admit, I am Word Processor inept.

You reverse engineer an existing Word document to work out how to use Word!

I would say that this is close to an impossibility, especially using styles as an example.

I've seen Word documents that have dozens of what look like identically named styles, caused by someone tweaking a particular element in a paragraph (like indenting it), which leads to a new modified style being created with the same or a very similar name.

I once spent the best part of a week cleaning up a long, operational document that had been pieced together by cut-and-paste from other documents which had something like 100 different styles in it. All of the source documents were supposed to have been written using the same template, but a lot had had the styles changed in minor ways at the whim of the author. And Word kept the modified styles when doing cut-and-paste!

I'm a real throwback. I did most of my technical writing in the past in troff with memorandum macros, and I used to use SCCS as the change control (and make to control the whole process). I suppose if I was writing more than I do at the moment, I would probably take a similar tack with LaTeX and a modern change control package, although I do find for my purposes Git or Subversion are too complex. As it is, for short documents and letters, I tend to use Libre all the time, because I can pretty much guarantee that it is either already available or can be installed. Such is the advantage of free software.

Expired router cache sends Google Cloud Engine TITSUP

Peter Gathercole Silver badge

Re: Remind me again @Lee D

The difference is that if an in-house service goes down, you can investigate the problem after the fact, and try to do something to prevent that same problem from happening again, and this includes disciplining anybody responsible for the design or operation of the service.

You have nothing like that level of control with cloud services. You might hope that the service provider may learn from the experience and do the same type of review that you might, but there is probably nothing in your contract that forces them to do so, and this probably includes actually being given an accurate and complete report of the issue. Unless there are specific uptime targets in your contract, it may ultimately be that the only lever you have over the provider is to threaten to leave them, with whatever the fallout that will cause.

I don't doubt that there are some services where this is a perfectly acceptable risk, but there are many, many others where this is just not the case. Couple that with the uncertainty regarding control of access to your data, and these things together make it quite unlikely that I would recommend putting any business critical service in the cloud.

Got $600 for every Win Server 2003 box you're running? Uh-oh

Peter Gathercole Silver badge

Re: Over a barrel. ¿lots more security patches for RHEL?

"That's like claiming IE stats don't count because Microsoft got the original code from Spyglass.."

Um. No. It's really not.

Red Hat do not 'own' all of the packages. They do not claim that they maintain all of the packages. You are falling into the same trap that I showed was false in a previous post. Please refer to that.

But to re-iterate. Red Hat own the compilation and packaging of many of the packages in their repositories. They do not own the maintenance of the packages themselves. They could fork a package if they wanted (it's Open Source after all), but in most cases they don't want to for perfectly valid reasons. Use Firefox as an example, which is in the distro, but is maintained by the Mozilla Foundation.

In contrast, Microsoft claim IE as their own package. They maintain it. They employ staff explicitly to maintain it, and they would be super-pissed if someone else tried to publish a derivative of IE, or claim some IP over it.

It appears to me that you are deliberately trying to confuse the issue, unless you really have a fundamental mental block about what Open Source is all about.

Peter Gathercole Silver badge

Re: SIP server @AC

It seems to me that several of the distros include packages like Asterisk and Sems in their repositories, and Glassfish/Sailfin appear to be Open Source packages shipped as jar files that will not need compiling. Now I don't know what you were trying to achieve, but did you look?

I realise that you may have been wanting features that are not in builds of packages in the repositories, particularly if you want interoperability with some commercial products (vendors just love to include proprietary or bleeding edge extensions which often cause problems with Open Source packages).

If the package you were wanting was part of a commercial product, even if it were a free component, then did you try suggesting that the vendor provide the same degree of support for OSs other than Windows as they do for Windows? Sometimes what people see as a deficiency in Linux is really with the vendor of a particular package being unwilling to provide adequate support for Linux platforms, and that is hardly the fault of the distro maintainer, or the Linux community as a whole!

Lenovo shipped lappies with man-in-the-middle ad/mal/bloatware

Peter Gathercole Silver badge

Re: I Wonder

There always were different ranges of Thinkpads.

Go for the T series (or an X series if you want a compact laptop).

When IBM owned the brand there were at least the R series, which was plasticky, and the A series, which was larger and heavier. Before that, they were numbered, with the 300 range being budget and made of plastic, and the 700 range being the business systems.

Lenovo have dropped all of the old IBM ranges except the T and X, and have re-branded some of their other ranges as Thinkpads to cash in on the name.

I have a work T420, and apart from the appalling new 'island' keys, it seems as robust as the older systems.

The T used to stand for Titanium (actually an alloy containing titanium) that was used in the chassis to stiffen the screen/lid, which, along with clever interlocks between the lid and base, led to their reputation for being extremely robust. The hinges certainly last longer than most other laptops'.

Biting the hand that feeds IT © 1998–2019