* Posts by Peter Gathercole

2953 posts • joined 15 Jun 2007

Microsoft has Windows Server running on ARM: report

Peter Gathercole Silver badge

Re: Apps?

Microsoft 'bought' Insignia Solutions (or at least took out a pretty much exclusive license) for their SoftPC technology, which allowed 'foreign' binaries to run on a particular architecture, a feature called Windows-on-Windows (WOW).

This meant that you could have shrink-wrapped Windows applications that would run on all Windows platforms. I doubt that the technology was maintained once Windows became x86 only.

Peter Gathercole Silver badge

Re: Linux ahead(as per usual)

There were systems you could buy that ran Windows NT on Alpha.

But it is clear that the majority of support for them came direct from Digital, not MS.

I did see an IBM Power system (I think it was a prototype model 40P) running Windows NT 3.51.

Doctors urged to adopt default opt-out approach to care.data scheme

Peter Gathercole Silver badge

Re: If it was respected @AC

This is not about sharing data for patient care. That should already be being done under a different initiative. Care.data is about sharing data with non-clinicians who perform fundamental, mainly statistical research to correlate and synthesize new conclusions from data that is already held. That should be a good thing.

At least in theory.

The problem here is that the list of organisations allowed to apply for access to the data goes far beyond the NHS, and indeed beyond pure medical research. I believe that insurance companies (supposedly for actuarial reasons) and drug companies (probably to assess whether a condition was worth developing a drug for) were the sort of commercial organisations that were applying for access.

Long arm of the saur: Brachially gifted dino bone conundrum solved

Peter Gathercole Silver badge
Thumb Up

Re: Easily explained

Besides thumbs up and down counts, this type of comment could do with a groan count!

Is your home or office internet gateway one of '1.2 MILLION' wide open to hijacking?

Peter Gathercole Silver badge

And this is why...

...I run an additional hardware firewall separate from my ADSL router.

It's long been an axiom of any 'proper' security that you have multiple layers, each provided by a different vendor.

Even if each of them has its own vulnerabilities, it seriously deters casual hackers if, once they've breached one line of defence, there's a new and different one to knock down.

Some may see it as a challenge, but most will just give up.

UNIX greybeards threaten Debian fork over systemd plan

Peter Gathercole Silver badge

Re: the "fun" part about systemd

Unfortunately, laptops in particular vary quite a lot in the chipsets that are included. There is a lot of tuning required to get a Linux system stable when suspending and resuming.

There is a whole subsystem called pm-utils (ironically modelled on sysv init) which allows you to tweak the suspend and resume system for the particular model of laptop. I tend to run IBM/Lenovo Thinkpads, for which there are a significant number of profiles which work quite well.

Where I've had problems is with the models with Radeon Mobility graphics adapters when KMS is enabled, and I've also had a problem with the sample rate of pulseaudio not being restored properly.

But with KMS turned off (Ubuntu releases between 8.04 and 12.04), if you can ignore the audio issues, suspend works quite well. 14.04 appears to have fixed the sound sampling issue.

Hibernate is more problematic, as on Thinkpads it is necessary to have a FAT primary partition on the hard disk to contain the hibernate file. Before I upgraded my Windows partition to Win2K, it used to work fine, but all those years ago, when I converted to NTFS, I found that the hibernate code in the Phoenix BIOS could not handle the newly formatted NTFS partition. As the 'old' boot record format cannot have more than 4 primary partitions (WinXP now, current Ubuntu, last/next Ubuntu and an extended partition containing the rest), I don't have a spare primary partition just for a FAT filesystem.

Peter Gathercole Silver badge

"Haven't used it much yet"

And there is your problem.

You really know that it's not the right approach when you find your first system that either does not complete the boot process, or even worse, sometimes does but sometimes does not.

You then have this impenetrable black hole to try and debug, which may "appear to be well-documented", but does not tell you what is happening.

Once you've seen it, the "huge pile of little shell scripts" is easy in comparison. The naming convention is only funny if you don't understand how the shell performs globbing.
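The naming convention can be made concrete: a sysv-style init runs each runlevel directory through an ordinary shell glob, and the S##/K## prefixes exist purely so that lexical expansion order is the start order. A minimal sketch (the demo directory and script names are invented for illustration):

```shell
# Sketch of sysv-rc start ordering via shell globbing.
# The directory and script names here are hypothetical.
demo=$(mktemp -d)
touch "$demo/S10network" "$demo/S20syslog" "$demo/S99local" "$demo/K80nfs"

# A glob such as S* expands in lexical order, so S10 runs before S20
# before S99; the K* (kill) scripts are simply not matched at startup.
for script in "$demo"/S*; do
    echo "would start: ${script##*/}"
done

rm -rf "$demo"
```

Once you see that the ordering is nothing more than glob expansion, the two-digit prefixes stop looking funny and start looking obvious.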

Doctor Who's Flatline: Cool monsters, yes, but utterly limp subplots

Peter Gathercole Silver badge

Bad Wolf

Bad Wolf was introduced in a very subtle way.

It was not rammed down our throats, as in 'Here's the arc you're looking for'. It was more 'Hang on a second, didn't we see something like that a few weeks back?'. And it sort of made sense, with Rose, while she controlled the power of the Tardis, touching all of her timeline with the Doctor to leave some clues as to what had to happen.

I wonder why she didn't see any evidence of Clara, though. Oh, of course, no multi-series arc (Babylon 5, why could you not have had more influence on other series?).

Peter Gathercole Silver badge

Re: Did anybody notice...

Yes. Probably a Scientific but could have been a Programmable. Need to check the stills. And it still worked! The display was clearly visible at one point.

Hope they didn't ruin it.

Sky's tech bets pay off: Pay TV firm unveils blazing growth for Q1

Peter Gathercole Silver badge

Re: Defining Free

Hmm. The BARB figures are interesting, and it horrifies me to see just how skewed TV viewing in the UK actually is towards a few high-profile programmes like The Great British Bake Off, The X Factor, Downton Abbey etc.

But it does raise the question of why something like 40% of households (based on 10 million Sky subscribers and 25 million households in the UK, although there are very broad statistical flaws here) decide to spend money with Sky. And that does not include Virgin Media customers.

There must be something pretty compelling in the 2% of viewing time for pay channels to justify this expense. Obviously, some of that is going to be sport, and maybe the relatively easy-to-access catch-up and on-demand services, together with the bundled boxes, could be helping maintain their customer base. Of course, even Sky customers will watch free-to-air services some of the time. As with phones, possibly Sky customers just don't like the up-front cost of buying the box.

I have both Freeview hard disk recorders and streaming services available on my TVs, as well as Sky, and have also been through two generations of USB Freeview stick and played around with other on-line TV services, and I still find that the go-to service in our household is Sky. Maybe we're trying to justify spending the money, but as I said, although it is quite expensive, I still regard it as reasonable value for money just for the content I can't (legally) get anywhere else.

Interestingly enough, whenever my wife and I have 'spirited conversations' about what we spend money on, she always brings up the Sky subscription as an unnecessary expense (which is significantly less than she spends on cigarettes in a month), and I have to remind her that she is the one to be found most frequently watching the pay channels! In fact, I would almost not miss it, because I get so little time to watch the slightly less mainstream pay TV channels that I find interesting (documentaries, arts, Syfy, but also the movie channels).

Peter Gathercole Silver badge


How are you defining "free content"?

If it's content that is available free-to-air on other services (Freeview or Freesat), then I would dispute your figure of 90%. I have well over 200 TV channels available on Sky, but only about 30 available on Freeview and approx 160 on Freesat. All have at least some +1 channels, so not all of those channels contain unique content.

If you are saying that it is available through the Sky infrastructure without having a Sky subscription, then I may be in slightly closer agreement with you, but try removing your Sky subscription card and see how many channels you can no longer get.

For my ~£60 a month for a Sky HD package, in addition to the Freeview channels, I get Sky 1, Sky Atlantic, Sky Living, all of which contain content not available anywhere else in the UK, and I also get SyFy, Sky Arts, a host of documentary channels, access to 'golden' channels like Watch, a moderate selection of movie channels (although not as good as they were) and also a whole host of on-demand content which I would not pay any extra for. On top of that, they gave me the box(es) for free (they replaced my original SkyHD box without cost when they rolled out the on-demand services).

I don't agree with the way that they spread the desirable content across as many packages as they can to maximise the number of packages you need to buy, and I certainly don't agree with the gouging of their customers with regard to sports channels, but I don't think it is such bad value.

If they still existed (and this is mostly the reason why they don't), I certainly would no longer rent any DVDs from places like Blockbuster, and I've noticed that the number of DVDs I buy has dropped significantly since Sky installed their on-demand service. So in recent years, the amount of money I've spent on content has actually declined as Sky have brought on their services. This seems good to me!

I am reluctant to become a triple-play customer, because I don't actually like Sky's business model much, but I don't really object to getting TV from them.

It's 2014 and you can still own a Windows box using a Word file or font

Peter Gathercole Silver badge

Re: Why would you PARSE FONTS in the kernel? @AC - Linux drivers

My recollection is that xdm actually could switch UID when it ran on a system. I believe that it was a configurable option, and you could specify an X server restart (partly to change the UID, but also to set the server to a known state with no client programs left over from the last user) during the login process on a device that allowed it. Obviously not on an X terminal, though.

It's later graphical login processes like gdm and lightdm that changed this.

Unfortunately I no longer have anything old enough running to confirm this.

Peter Gathercole Silver badge

Re: Windows 10, for those interested @AC

Whilst shellshock is/was a really worrying problem, I don't think that any serious web site will actually run any bash CGI scripts.

Yes, I know that the problem will persist across other binaries as long as they preserve the environment variables, whenever a bash is started as a child, and that the system() call will almost certainly start a shell, so there is still danger there, but I would be startled if Google, Amazon et al. were ever vulnerable. The patching they did was mainly to be absolutely sure.

SOHO or SMB web sites may be vulnerable, of course, so I am not downgrading the risk, but I think that your implied assertion that all Linux web servers will by default be vulnerable is overstating the problem.
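For reference, the widely published probe for the original shellshock bug (CVE-2014-6271) demonstrates exactly this environment-variable propagation: a vulnerable bash parses an exported variable whose value begins '() {' as a function definition and executes the code appended after it.

```shell
# Classic shellshock probe: a patched bash prints only "this is a test";
# a vulnerable bash additionally prints "vulnerable".
env x='() { :;}; echo vulnerable' bash -c 'echo this is a test'
```

Any program that exports attacker-controlled data into the environment and then spawns a shell (CGI headers being the notorious case) is the delivery mechanism; the probe is harmless on a patched system.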

Peter Gathercole Silver badge

Re: Why would you PARSE FONTS in the kernel? @AC - Linux drivers

Actually, although a small part of the video driver system is in the kernel, the majority of the driver runs as plug-in modules to the X server process (not kernel modules), which is a user-land process, not in the kernel. This makes graphics drivers different from, say, a driver for a disk adapter.

The bits in the kernel are there to allow the X server process to access the video hardware at a register/DMA level, and are pretty generic glue code. All of the smarts are in the X server, and that is the code that is most likely to have a problem. This means that it is unlikely that you can crash a Linux box with a graphics driver, although you may make it difficult to use on the directly attached monitor (other access methods are available!)

In fact, if you try hard, you don't even have to run the X server as root. Generally speaking, modern distributions do run the X server as root because it is started up before the graphical login starts, and that needs X, but if you disable the graphical login, log in as an ordinary user using a text-based authentication method, and then run up an X server (using something like startx), it works just fine.

I would actually like the graphical login methods to switch away from root during the login process. It can be done, but is likely to introduce a visible glitch as the X server restarts during the login process. But as we will end up with Wayland or Mir in the near future, changing the way that X11 is used seems a bit pointless.
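The claim about the server's UID is easy to check on any given box; a small sketch (the process names X and Xorg are assumptions, and vary by distribution):

```shell
# Print the user each X server process runs as; fall back to a note on
# a headless box so the pipeline always succeeds.
ps -eo user,comm \
    | awk '$2 == "X" || $2 == "Xorg" { print $1, $2 }' \
    | grep . || echo "no X server running"
```

On a stock distribution with a graphical login you will typically see root; after a startx session from a text console you should see your own user instead.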

Take CTRL! Shallow minds ponder the DEEP spectre of DARK CACHE

Peter Gathercole Silver badge

Re: wouldn't be multi-tasking the same way

There were serial terminals that provided two or more serial ports allowing them to be connected to two different systems (or the same system twice!).

Ones I came across included the HP2392, Falco 5220, and I believe that Wyse and Esprit also had models that did the same.

But none of the normal terminals that I came across allowed direct cut-and-paste between different sessions, although I could not say that there were none that did.

I should note that the AT&T Blit, running on UNIX with 'layers' backing it up, allowed virtual terminals on the same machine using an RS232 or Starlan serial connection (there's a video copyrighted 1982 on YouTube), and it did come with a mouse! AT&T also had a session manager called screen that allowed a process on a UNIX system to masquerade as several terminals, maintaining screen state, and allowed you to switch between them. This worked on any terminal with sufficient curses support.

Peter Gathercole Silver badge

Re: CTRL-C @dan1980

"the command prompt came from a time when mouse-control was not really there"

No. It came from a time before mice were an option. I know that the mouse was first demonstrated to the world in 1968, but they did not appear on general purpose computers until the Xerox Star, AT&T Blit, Sun 1, and Apple Lisa, all in the early 1980s. The first PC mouse appeared around 1983.

'ordinary' terminals with CLI interfaces go back much further than that!

Peter Gathercole Silver badge

Re: CP/M applications @Mage

It pre-dates CP/M as well.

I first came across Control-C as interrupt in the default settings for the UNIX Version 7 TTY driver (although it was 'soft', and could be redefined with the stty command), which would have been around 1979. Prior to that, the standard interrupt was communications Break. But I don't want to talk about the V6 TTY driver. That ugliness is best consigned to history.

I'm fairly certain that Digital Equipment Corporation (DEC) used Control-C for interrupt in RSX and RSTS as well.

Control-Z to suspend was a feature that came from BSD UNIX that introduced TTY job control sessions that allowed you to have backgrounded programs that you could switch between with the fg and bg commands.
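Both points are still visible from any modern shell: the interrupt and suspend characters remain 'soft' tty settings, and fg/bg manipulate exactly this BSD-style job control. A quick sketch:

```shell
# Show the current 'soft' bindings for interrupt (^C) and suspend (^Z);
# degrade gracefully when there is no controlling terminal.
stty -a 2>/dev/null | grep -oE '(intr|susp) = [^;]+' || echo "no tty"

# BSD-style job control: background a job, list it, then signal it by
# job spec (interactively you would use fg %1 or bg %1 instead).
sleep 5 &
jobs        # e.g. [1]+  Running    sleep 5 &
kill %1
wait        # reap the job; plain wait returns 0
```

Redefining the interrupt character with something like `stty intr '^X'` still works today, just as it did with the V7 driver.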

OMG, we're gonna be invaded by NEXUS 6 replicants – no, wait. It's just a rumored phablet

Peter Gathercole Silver badge

Re: CPU to be an Snapdragon 805, really? @ Jedibeeftrix

Apart from playing the numbers game, just why is 64-bit a good idea in a tablet? It's not like it's got many gigabytes of RAM to manage (3GB can be managed in a 32-bit address space), or that it needs to handle large integer or high-precision floating-point numbers.

Just because Apple thinks that a 64-bit processor is a good idea in a hand-held device does not mean that it is a good idea at the moment.

Sir Tim Berners-Lee defends decision not to bake security into www

Peter Gathercole Silver badge

Re: HTTP or HTML? @Mage

The whole concept of wide area networking security was a moot point when it came to early email systems. UUCP was the best that there was (UUCP is the UNIX to UNIX Copy Program, not just a mail system, although one of the most common uses was mail, and another was remote printing).

Everybody knew it was not secure, because it was a store-and-forward scheme, such that any of the intermediate systems had access to the content. That was just the way it worked, and everybody knew and accepted it.

If you look at basic UUCP, it ran over serial communication lines, often over analogue telephone lines using modems. The concept of it being secure was never even thought about. It was easy to tap a telephone line and feed the captured data through a modem, so it was obvious that there was no security. If you wanted to send something securely, you encrypted it and then uuencoded the result.

There was an encrypted UUCP system which used the UNIX crypt technology. I cannot remember the exact details or what it was called, but it was in the AT&T BNU, and it effectively meant that the data was not transmitted in the clear. It was still vulnerable on the intermediate host systems, though.

Saying that it should have been secure is like saying early cars should have been built with roofs, windows and locks. But they weren't.

Anyway. The TELEX system was about as insecure as you can imagine, so that is not a particularly good example.

BTW. All of the people saying that X.400 should have been the default mail system should remember that SMTP was defined in RFC 821 several years before the initial recommendations for X.400, which expected to run over X.25 transport systems and so were a bit weak on the security side as well.

Official: Turing's Bombe better than a Concorde plane

Peter Gathercole Silver badge


Look again at what was being celebrated. It is the best historical artefact, so unfortunately, it is limited to what actually still exists.

I agree that HMS Dreadnought was clearly a revolutionary ship, and rendered the rest of the world's battleships obsolete almost overnight, but Dreadnought herself was rapidly overtaken by subsequent ships that introduced the 13.5" and then the 15" main gun, fuel oil in place of coal, superheated steam boilers and improved protection. Notable British Dreadnought follow-on ships included the Iron Duke class and then the Queen Elizabeth class, which was IMHO probably the peak of the British Super Dreadnoughts. Subsequent ships moved away from the classic Dreadnought layout, culminating in the fast battleships built by various navies to counter ships like KMS Bismarck and IJN Yamato.

HMS Dreadnought herself only had a life of around 13 years, which is a very short time for a capital ship, and managed to miss Jutland, but does have the distinction of being the only battleship to have ever sunk a submarine!

Peter Gathercole Silver badge

Re: HMS Belfast

I was going to ask the same question. Belfast was one of a subclass of the Town, or Southampton class of large light cruisers. The primary difference was that during the building of the ship, an extra 22 foot section was added between the forward superstructure and the forward funnel.

The original intention was to allow the ships to carry more (16 vs. 12) six-inch guns, but as the quadruple turrets were never built, they ended up with the same main armament as the original ships. They could cover a target with continuous fire, but were not really any better than the rest of the class.

This left the two ships (Edinburgh and Belfast) longer than the so-called heavy cruisers, and as long as the smaller battleships (like the Royal Oak class), without significant armour or heavy guns.

I also think that the extra section spoilt the very handsome lines of the 'Towns', giving them an awkward, lop-sided silhouette, certainly nothing worthy of accolade.

But I suppose that as there is little preserved of the glory days of the British Navy, we should be glad we still have Belfast.

I would have liked to see either Vanguard, the last British battleship, or the Audacious-class Ark Royal (not the Harrier carrier) preserved, but alas, they are gone.

Linux systemd dev says open source is 'SICK', kernel community 'awful'

Peter Gathercole Silver badge


I've been thinking about this a bit more. What we are seeing are the first signs of battle-lines being drawn up between two different factions. The divide is whether Linux should stay as mainly a UNIX clone, or whether it should become a new operating system based on UNIX but no longer adhering strictly to the UNIX ethos.

I'm getting old. I've been working with UNIX for 36 years. I'm definitely in the "UNIX clone" camp. I really don't relish learning what would rapidly become a new operating system. I fear that the complexities would effectively produce a technocracy of the only people who understand the inner workings of the new OS, to the exclusion of people on the 'outside'.

I think that the systemd people will be in the "New Operating System" camp. I don't know which camp Linus would sit in. If he is in the UNIX clone camp (and this was really how he started Linux in the first place), I think that people who want to move away from the UNIX roots should fork the kernel, and really make it a new OS. According to the rules as I understand them, they would no longer be allowed to call it Linux, however.

If they do not want to take on the responsibility of maintaining their own kernel, they really should listen to the influential people who do control the existing one, and that means paying some heed to what Linus says rather than trying to browbeat the development team or slip poorly coded patches into the kernel source, because it does not work the way they think it should.

With the direction Canonical want to take Ubuntu, and the friction between the kernel developers and some other projects in the community about the future direction of what a core GNU/Linux system should look like, I can really see there being a schism on the horizon.

Peter Gathercole Silver badge

Re: Unpaid volunteers in a lot of cases @AC

You're looking at this the wrong way. The problem with your argument is that you think that systemd is better than what preceded it. Many of us who have long UNIX and Linux experience do not believe that the advantages of systemd, mainly faster boot times, outweigh the horrible, horrible complexities that it introduces.

Just because someone has come up with an interesting alternative to init and the traditional rc scripts does not mean that it is automatically better.

I blame the fact that a lot of people have grown up with Windows as their learning platform. In that model, complexity, opaqueness and proprietary lock-in are a way of life, and too many young (and not so young) programmers producing Open Source software accept that that is the way to produce a system.

One of UNIX's real advantages was that there were serious efforts to keep it simple. Systemd does not fit in that model, nor (as others have pointed out) does most of the sound system in Linux (not just pulseaudio, but the other things that came before it), or several other additions.

Where systemd has crossed Linus's path is that, although there is a kernel/utility separation in Linux (systemd is not really part of the kernel, but part of the utility toolset), the systemd developers were demanding changes in the kernel, and abusing some of the management and logging facilities of the kernel in ways that were never envisaged. That caused what looked like kernel problems, such as the system hanging on boot.

Linus did not agree that the kernel needed changing, and certainly did not agree with the way that the logging facility was being used, and pushed back in his own inimitable style.

As he is the custodian of the kernel, not of the utility tools, that is his prerogative.

What’s the KEYBOARD SHORTCUT for Delete?! Look in a contextual menu, fool!

Peter Gathercole Silver badge

Re: Talking of 1-2-3 @Deryk

I only used the term ASCII because I believe that it was more immediately understandable than "serial" or "asynchronous". I am well aware that there were many terminals that were normally used as asynchronous serial terminals that had form-filling capability. But I would suggest that outside of some proprietary applications that mandated particular terminal types, almost all ASCII terminals were used as asynchronous serial devices, so much so that the terms are almost synonymous. These devices rarely used the form-filling functions, even if they had them.

By the early '80s, which is when Lotus 1-2-3 came to the fore, terminals were normally IBM 3270 or 5250 compatible, and did indeed use EBCDIC, or serial terminals that nearly all used ASCII, such as the Lear Siegler ADM3A, Wyse 50/60, DEC VT100, Beehive etc. There were dozens of manufacturers, all of whom gave up as cheap PCs could also be terminals with the correct software.

Peter Gathercole Silver badge

Re: Text editting

I knew I should have qualified that. Curses was an API abstraction layer allowing people to write software without having to know what terminal type was going to be used. It was written by Ken Arnold at UC Berkeley, and was shipped with BSD, before being re-implemented in System III Unix by AT&T.

Interestingly, the Wikipedia article asserts that strictly speaking vi predated curses, and curses heavily borrowed code from vi. After all this time, you learn something new.

Peter Gathercole Silver badge

Re: Text editting

400 baud was an odd speed. The standard speeds were 75, 150, 300, 600, 1200, 2400, 4800 and 9600. Some terminals would do 19200, but that was generally frowned upon because of the interrupt load on the server. Faster speeds came about when people started running multiplexors or PPP for internet access.

But yes, that was one of the reasons why vi commands were so terse, and for the requirement that curses optimise screen updates. Vi was written to be able to work over the slowest of lines with the most basic of terminals. All you needed was full-duplex communication, the alphanumeric keys and some punctuation. The terminal had to have some form of direct cursor addressing and at least a home and clear-screen command that could be encoded in termcap. But even then, there were some terminals that were just too brain-dead to be used for vi. I seem to remember some comments in ancient termcaps about a super-beehive terminal and maybe one of the Ann Arbor terminals.

What was most concerning was terminals that would not flow-control properly, so there was a mechanism for encoding timing delays into the capability strings so that curses would not overwhelm a terminal, preventing corrupted screens.
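The capability database described here survives today as terminfo, and tput gives shell-level access to the same entries curses compiles against; direct cursor addressing, clear-screen and width map to the cup, clear and cols capabilities (terminal type vt100 is assumed purely for illustration):

```shell
# Emit the raw escape sequences a vt100 terminfo entry defines; cat -v
# makes the ESC bytes printable.
TERM=vt100 tput cup 5 10 | cat -v    # direct cursor addressing, e.g. ^[[6;11H
TERM=vt100 tput clear | cat -v       # clear-screen sequence
TERM=vt100 tput cols                 # advertised width (cols#80 for a vt100)
```

A terminal whose entry lacked cup was exactly the 'too brain-dead for vi' case: curses had no way to address the screen directly.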

Peter Gathercole Silver badge

Re: Talking of 1-2-3

Chances are that the visible part of the sheet was sent as a 3270 form, and you would have been able to move between the cells/fields with tab and/or arrow keys, filling in multiple cells, and once all of the fields were how you wanted, you could hit enter and transmit all of the cells up at once, and have the sheet recalculate. This would have been quite familiar to a mainframe user, but completely foreign to anybody used to instant update.

I know that having grown up on full-duplex ASCII terminals on UNIX, DEC and other systems, moving into a 3270 world when I joined IBM frustrated the hell out of me until I worked out the best way to do it. But once the concepts were understood, it worked pretty well, only differently.

The reason for it working the way it did was because 3270 terminals had quite a lot of function built in, and would allow local editing of data on the screen without any involvement from the mainframe or terminal controller. This meant that you could attach a lot of terminals to a mainframe without it melting down, and that interacting with a remote terminal down low speed telecommunication lines was bearable, with only the download and upload screen refresh being slow.

For full-duplex ASCII terminals, the computer was involved in the most basic of functions, and ended up having to echo every key typed back to the terminal. Interrupt handling per keystroke sapped the life out of a lot of mini-computers unless they were good at it (like the PDP11 was).

PCs, where the computer had the keyboard and screen locally attached were a different proposition, and naturally lent themselves to update per keypress type applications.


Peter Gathercole Silver badge

Re: "TVs these days are a lot harder to repair than TVs of old"

As an aside, I have been told, and I think I believe a lot of it, that when you look at the lifetime claims of compact fluorescent lightbulbs (CFLs), the lifetime quoted is actually the expected lifetime of the tube.

Within the bulb, you also have an inverter to generate the voltages necessary to drive the tube (it's in the large white plastic blob between the screw/bayonet fitting and the tube, and makes the bulb difficult to fit in some light fittings). These invariably contain similar capacitors, such that when the CFL fails, the tube is often OK, but the inverter has stopped working. This is, I believe, why they do not appear to last as long as the claimed lifetimes.

Unfortunately for LED bulbs, until we get low voltage lighting supplies in houses, they will have to have similar electronics to produce a low voltage DC source in the bulb, and will also suffer premature failures.

Peter Gathercole Silver badge

"TVs these days are a lot harder to repair than TVs of old"

Whilst we have moved away from the failure rate of valve TVs, it is well known that a very significant number of modern TV failures are caused by capacitor break-down in the power supply. It's normally within the ability of anybody who can learn to wield a soldering iron and screwdriver to unplug the TV from the mains, ignore the "No user serviceable parts inside" label, take the back cover and shielding off, spot the bulging capacitor(s), and replace them (fortunately, the capacitors are unlikely to give a serious shock in a modern TV).

Alternatively, there is a scrap industry that works like the car breakers. Companies break TVs up into their working component boards, and sell them at a fraction of the price of a new TV. Ebay and the Amazon Market place are great places to find such businesses.

My 7-year-old 32" cheap (for the time!) no-brand HD TV has now been repaired at least twice like this, and I have a 26" Acer that I bought over 10 years ago that is still going strong after several bouts of maintenance.

There is still a place for someone who can fix TVs. Whether it is workable as a means of earning a living, I'm not so sure.

So long Lotus 1-2-3: IBM ceases support after over 30 years of code

Peter Gathercole Silver badge


"look at Linux, same mistake".

That statement makes it sound like there is one person or organisation in control of Linux who could fill that gap.

I'm sure that you realise that it's just not like that. Linus was interested in creating a UNIX clone, originally for his own use. He did not really have any ambitions for the desktop. It's true that someone like RedHat or Canonical could attempt to fill that gap, but most of the Open Source projects just don't have the resources to produce something on the scale of a full-blown office productivity suite.

The one realistic candidate, StarOffice, came from a proprietary, commercial package that was offered for free, non-commercial use on various platforms after being re-written in C++. When Sun purchased the company, they forked StarOffice to create OpenOffice, which had some of the copyright-encumbered components removed (particularly the database component, which was IIRC a cut-down ADABAS implementation). Sun kept StarOffice on their product catalogue as a commercial product, but as time went on, they had difficulty committing serious resource to its development.

And Oracle's purchase of Sun was the death knell for StarOffice, and a serious setback for the development of OpenOffice. Whether the fork to produce LibreOffice will be enough to kick-start attempts to make it a serious contender for deployment at Enterprise level (it's already perfectly capable for SOHO or most SME uses) remains to be seen.

If you have the odd few tens-of-million dollars (or more) to develop a new, compatible competitor for MS Office, I'm sure that the whole world would wish you well! I'm sure that there really is a niche for a cross-platform, commercial suite, but trying to play catch up with Microsoft will always be a difficult task. Maybe you should invest in Corel, and try to get WordPerfect and Quattro ported, but I suspect that even this would be a quite herculean task!

Microsoft's nightmare DEEPENS: Windows 8 market share falling fast

Peter Gathercole Silver badge

I agree...

...but I object to the categories, particularly "Cheapskates".

This does not take into account the low end of the income demographic, where just obtaining a PC was a major challenge in the first place. These people may be faced with a decision like "Do I replace the (working) PC, or do I pay all of the electricity bill, the rent and do the shopping?".

These are not cheapskates. They may not fully understand the issues but are mainly not ignoramuses, and they are certainly not doing it to prove a point (the "brave"). These are people who effectively have no choice other than to keep a machine with XP, or give up on the Internet completely.

I can see the tail-off of XP systems being very slow.

Music-mad Brits drive up hardware sales too – claims BPI

Peter Gathercole Silver badge

Re: "Copyright exception allowing them to make a private copy of a music CD"

In the UK, there was no exemption for media conversion (backup), but it was commonly accepted that there was no point in trying to prosecute someone for copying their LPs to cassette for use in the car.

Nothing in the digital age had changed that until this recent change, so technically it was still against copyright law, and this included ripping CDs for use in an MP3 player or computer. There is no fair-use provision in UK copyright legislation.

There had been various suggestions about formalising exceptions, but none had made it into an amendment to the copyright legislation until now.

Hackers thrash Bash Shellshock bug: World races to cover hole

Peter Gathercole Silver badge

Re: OpenBSD for the win @Michael Wojcik

I never got to see the source code for AIX V1 on any platform, or V2 on the RT (actually, I think I did have a login on one of the machines that used to hold it, but I never looked). But the preceding port (IX) that was done by Locus was pretty much a pure SVR2 port, first onto the S/370. That was used as the base for AIX on the RT, even if they did re-write parts.

I have had access to various Bell and AT&T distributions from Edition 6 through to R&D UNIX 5.3, and whatever was layered over SunOS 4.0.3 for the R&D additions to that OS.

I would never have said that AIX for the RS/6000 was ever SVR4. AIX 3.1 was definitely only SVID version 1 compliant, which meant that it was really only an SVR2 implementation.

The more modern features were mainly added through the OSF side of things, because IBM was in that camp, not the SVR4 camp.

The convergence really came with the UNIX 98/SUS2 accreditation of AIX 4.3.1, but as this is an interface specification, the underlying code could be written any way you wanted provided that it complied with the interface definitions.

Indeed, if you go through the include files for a current version of AIX, you will find almost no copyright statements left for Bell Labs, AT&T, USL, Novell, XOPEN or The Open Group. This does not prove how little AT&T code is left in there, but it does give some indication.

Peter Gathercole Silver badge

Re: OpenBSD for the win @iEgoPad

I'm thinking in terms of shellshock here. No OS is totally secure, and I have acknowledged that often in other posts.

The business of reading another process's environment variables is not totally true anyway. You could read the environment that was passed into a process, but any variables defined after the process started were invisible.

That behaviour was not unique to AIX, but appeared in several other UNIX-like OSs (I've just checked, and the same behaviour is in RHEL 6.5), and it has definitely been fixed now on AIX (in 2008 - I can get you the APAR numbers if you want), so that you can only see the initial environment of processes you own. That is, unless you're thinking of something other than the "ps ewww" output that pretty much every other UNIX-like OS also suffers from.

I think that you should look at some of the AT&T - or even better, the Bell Labs - UNIX source. It's not perfect, but compared to some of the bloatware and spaghetti that is contributed to open-source projects, including Linux, it's a model of concise and well-documented code.

Peter Gathercole Silver badge

Re: Wanted : amputation patch

No. It's really not like sourcing another file. It's more like ksh FPATH libraries, or the shell rc files (normally .kshrc or .bashrc), where the functions are automatically defined every time a shell starts.

I'm uncertain about the feature of exporting functions. I see it could be useful, but I've lived for so long without using it that I don't see it as essential. In fact, I get really pissed off about the amount of pollution that infects a user's environment in most Linux distributions. I mean, just log on and type env, set or typeset +f and see how much crap is in there!

Peter Gathercole Silver badge

Re: OpenBSD for the win @iEgoPad

"They'll be calling vacuum cleaners Hoovers next!"

That's the most stupid thing I've heard on this thread. Linux is not UNIX. Linux is not even POSIX compliant in almost all of its various distros. There ain't no way that you can call Linux UNIX, even if they look superficially similar.

Oh, and by the way, I'm an AIX zealot, and am feeling a bit smug. AIX was derived from AT&T code, and I use ksh as my default shell, and will not allow any bash scripts to be deployed to action service requests on any system I am in control of!

Peter Gathercole Silver badge

Re: If you do not sanitize CGI input @DainB

I was initially sceptical, but looking into the problem, it appears that Apache and other web-facing services actually do accept some information and then pass it into CGI scripts as environment variables. Variables such as REMOTE_IDENT and REMOTE_USER are examples documented in RFC 3875.

This is where the problem lies. If Apache, or whatever, does not do any checking (and why should it? It would have to second-guess what is meant to be passed into CGI programs), and allows the variable to be set up in the way described, and then for any reason spawns a bash, the extra code will get executed as part of the bash setup, before any script starts running.

It is not that the function gets set up and can be called, it's that the extra code appended to the function definition gets executed at the time bash starts.

Look again at the proof of the bug at the shell level. I set the following environment variable up. Note this is just a variable, not a function at this time, and I can actually be running any shell (csh would need a setenv command to set it up).

never_run='() { : ; } ; echo "Vulnerable"'

This sets up an environment variable in the same pattern as bash uses to allow exported functions to work. When a bash starts, it effectively evals all of the environment variables in the new shell, which will either set a variable up in the new shell or, in this case, define a function 'never_run' which would execute a null shell command and then exit. What the function does is completely irrelevant. What is important is that while the new shell evals the string to set the function up, it then also runs the code after the second semicolon. It does this immediately, not when the function is called. So in the example above, if I export the new variable

export never_run

and then start a bash, any bash, it will execute the code. So,

$ bash -c date
Vulnerable
Fri Sep 26 11:03:57 BST 2014

runs it, as does just entering a new interactive shell with bash by itself. Note that I've never run the function it set up. This is where the danger lies. I'm sure that when bash was written, it seemed like an elegant way of exporting functions to subshells. It is clever, but obviously not thought through.

Apache is a limited example, as it should run as a non-root user and, if set up properly, will run in a chrooted environment (not that this will prevent all information leakage). But the same exploit may be available in any poorly written service that passes user-specified data on to another command through environment variables.
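The path from a request header to the bash startup code can be sketched harmlessly. The fragment below (an illustration, not a live exploit; the header name and echoed strings are my own choices) mimics what a CGI server does per RFC 3875: copy a client-supplied header verbatim into an environment variable, then spawn a shell. On an unpatched bash the injected command after the function definition runs the moment the shell starts; a patched bash ignores it.

```python
import os
import subprocess

# Simulate a CGI gateway copying the User-Agent header, unchecked,
# into the environment of a handler process (as RFC 3875 prescribes).
env = dict(os.environ)
env["HTTP_USER_AGENT"] = '() { :; }; echo INJECTED'

# The handler itself only intends to run its own command.
result = subprocess.run(["bash", "-c", "echo handler ran"],
                        env=env, capture_output=True, text=True)

# A patched bash prints only "handler ran"; a vulnerable one would
# print "INJECTED" first, before the handler's command executes.
print(result.stdout)
```

Note that the handler never references HTTP_USER_AGENT at all: on a vulnerable bash the damage is done during shell startup, exactly as described above.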

DVLA website GOES TITSUP on day paper car tax discs retire

Peter Gathercole Silver badge

Re: Abolish it @Down not across

As soon as the tax runs out, then it becomes an offence to store the car on the road, obviously. The car is no longer taxed so you fail the "a taxed and insured vehicle" test!

That does not alter the fact that it's an anomaly. I don't understand why, of the three things you need to legally drive a car on the road, they've not made it a requirement to have an MOT in order to keep it on the public highway. It's just inconsistent.

Peter Gathercole Silver badge

Re: Abolish it

The same ANPR systems that the Police use to detect untaxed vehicles on the road are also used to detect that an uninsured vehicle is on the road.

It is now illegal (and has been for a couple of years) to have an uninsured vehicle on the road, even if it is parked and not being driven.

So we have the strange situation where an untaxed or uninsured vehicle must be stored off the road, but at the moment, a taxed and insured vehicle without MOT can be parked on the road, but must not be driven.

I'm sure they will fix this deficiency at some point.

Peter Gathercole Silver badge

Re: A little common sense is called for... @Martin

You can still queue up at the Post Office. They will take your money however you want to pay it, and inform the DVLA (they've had a direct route to the DVLA for many years). The only difference is that you won't get a round piece of paper to put in your car!

I too don't understand. The old site (whose backend servers I did some work on some years ago) coped very well. The rate of transactions is quite predictable. Whilst there is normally a surge at the end/beginning of the month, it should not be that different with the new system.

Sounds like there is some misinformation flying around here.

Supercapacitors have the power to save you from data loss

Peter Gathercole Silver badge

Re: Does not compute

It's because the drive flags back to the RAID adapter that the write is complete before it has actually been committed to disk. The RAID controller will invalidate and delete its copy in its battery-backed DRAM, and the only copy that exists for the period until the disk write is complete is in the DRAM in the drive.

If the RAID set is large enough, you could hope that only one drive's copy is lost, which would allow the data to be reconstructed in RAID modes 5, 6 or 10 (but probably not RAID 1), but I would not want to bet the farm on it.
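The window of loss can be shown with a deliberately crude toy model (the block name and caches are invented for illustration; real controllers are far more involved): the drive acknowledges a write while the data still sits in its volatile cache, the controller trusts the ack and drops its battery-backed copy, and a power cut in between leaves no non-volatile copy anywhere.

```python
# Three tiers of the write path, modelled as plain dicts.
controller_cache = {"block42": b"payload"}  # battery-backed DRAM (safe)
drive_cache = {}                            # volatile DRAM in the drive
platter = {}                                # non-volatile media

# The drive accepts the write into volatile cache and acks immediately,
# before flushing to the platter.
drive_cache["block42"] = controller_cache["block42"]
write_acknowledged = True

# The controller trusts the ack and frees its battery-backed copy.
if write_acknowledged:
    del controller_cache["block42"]

# Power fails before the drive flushes: the volatile cache is gone.
drive_cache.clear()

# No copy survives anywhere.
print("block42" in platter or "block42" in controller_cache)
```

The fix in the article's supercapacitor context is to give the drive enough stored energy to finish the flush, which closes exactly this window.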

Soundbites: News in brief from the Wi-Fi audiophile files

Peter Gathercole Silver badge


Oops. Can't do arithmetic. 11.025 KHz. Still, does not alter the case significantly.

Peter Gathercole Silver badge


You've been reading the Wikipedia article on the Nyquist Frequency, and particularly the section on Aliasing sinusoidal waveforms.

This is a very special case, and does not mean that you can reconstruct any waveform from samples taken at just twice the frequency of its highest component. It's really pointing out the minimum sampling rate that allows you to differentiate between one sine wave and another at an integer multiple of its frequency. The important thing is that you have to know that it is a sine wave before you start.

There are many special cases, and the one that I like to think of is a sine wave at 1/4 of the sampling frequency, which at 44.1 KHz sampling would make the frequency of the sine wave 11.25KHz, well within the hearing range of most people. This would mean that if sampled at exactly 90 degree intervals, you would get something between a perfect sawtooth and a square wave. Of course, if you know it is a sine wave, you can reconstruct it, but on a CD player it would be stupid to assume that everything you play will be a sine wave, so it tends to use some mathematical spline to smooth the waveform, and this is what will be fed to the analogue part of the system. Different implementations of CD use different smoothing functions, but none of them can perfectly reconstruct the original signal in every case.

As has been pointed out, this is a pathological case, but it illustrates that digital sampling can never be anywhere close to perfect unless the sampling rate is many times the maximum frequency, certainly more than twice, whereas a mechanical system could be perfect within a range of frequencies, even though it is unlikely to be so because of material physics.

Monitors monitor's monitoring finds touch screens have 0.4% market share

Peter Gathercole Silver badge

Re: Obsession with tablets @localzuk

The assertion is a leap of faith without anybody doing the proper market analysis.

What marketeers are seeing is PC sales, particularly desktops, slowing down (because people are happy with what they have) at the same time as tablet sales have increased. They put 2+2 together and get something close to 10, and then predict that tablets are replacing PCs.

I totally agree with you. I've been saying for a long time that technological pressures to replace desktop and laptop systems have effectively been removed from the equation. Systems have become too powerful. Any non-budget machine built in the last 5-7 years will still be very usable today (my current laptop is a 9 year old Thinkpad running Ubuntu with Gnome Flashback). To paraphrase, if it ain't broke, don't replace it!

The manufacturers were hoping that with XP out of support, many people would ditch older but still serviceable machines, leading to new sales. It hasn't happened. Lots of people I know still keep their Vista, Win7 and even XP systems running for real PC work, especially if they have augmented their IT provision with a phablet or tablet for media consumption. And when they do replace a system because it breaks, a member of my extended family is doing quite good business selling refurbished ex-corporate systems at a significantly lower price than a new system. Computers are getting even more like cars!

I still say that there should be a push from someone like Which? to encourage people to see whether their Core systems can have their life extended still further by installing Linux once the security situation for XP and Vista becomes untenable (i.e. when banks and on-line shopping emporia stop letting IE8 and earlier, and older versions of Firefox, connect).

Edge Research Lab to tackle chilly LOHAN's final test flight

Peter Gathercole Silver badge

The secret is...

... to prevent the batteries dropping below a certain critical temperature. So the batteries powering the heater must be inside the heated enclosure.

It's really a bit of a shame that the internal resistance of the batteries is not a bit higher. If it were, the act of powering the electronics might generate enough heat to keep the batteries warm, or at least slow down the cooling rate!

But according to the specs, Energizer Lithium should be good to around -40 C, so I'd be a bit surprised if they would be a problem for most of the ascent.
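A back-of-envelope calculation shows why the internal resistance does so little warming. The current draw and per-cell resistance below are illustrative assumptions (of the right order for a lithium AA cell under a modest load), not measured values from the LOHAN electronics.

```python
# Self-heating from internal resistance: P = I^2 * R per cell.
current = 0.2       # amps drawn by the electronics (assumed)
r_internal = 0.15   # ohms per cell (assumed, typical order for lithium AA)

heat_per_cell = current ** 2 * r_internal  # watts dissipated inside the cell

print(f"{heat_per_cell * 1000:.1f} mW of self-heating per cell")
```

A few milliwatts per cell is nowhere near enough to offset radiative and convective losses at altitude, which supports keeping the batteries inside the heated enclosure rather than relying on self-heating.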

Run little spreadsheet, run! IBM's Watson is coming to gobble you up

Peter Gathercole Silver badge

Watson is not a single computer any more

While what we saw on Jeopardy! could clearly be seen as a computer cluster running as a single service, what IBM have now is an analytics application that runs as a fenced cloud service. This means that it runs on just your data, and that data is separated from another company's data, as much as anything is fenced in a cloud service.

So, if you trust company data separation in the cloud, you're just as safe using the IBM Cognitive Computing service as any other cloud service.

I'm not saying how safe I feel that is, however...

'Windows 9' LEAK: Microsoft's playing catchup with Linux

Peter Gathercole Silver badge

Re: And does anyone actually use this in Linux?

At work I support four separate HPC clusters. I have one virtual desktop allocated to each, so that I can have all the windows on each cluster grouped together. When you have hundreds of nodes, most of which should be identical but often have specific problems, that grouping is invaluable.

I have another four, one for a full-screen mail session, one for a full screen web-browser (with multiple tabs), another for various monitoring tools, and one used for anything else that takes my fancy (typically local windows on my workstation).

Counting the open windows I have today (which has been a quiet day), I have 18 windows open, scattered across all 8 desktops. On busy days, I can have between 30 and 40 open windows. I can switch between workspaces easily and know that all of the windows open on one desktop relate to one particular facet of my work. I would hate to fit all of that into even 2 or 3 monitors, even if I were prepared to sacrifice the desk space.

I've been working in a similar fashion to this for nearly 25 years!

I use virtual desktops at home as well on my personal laptop, mainly to separate out different things I am doing at the same time. For example, at the moment I am working out how to typeset music while referring to on-line tutorials (full screen musical notation editor without intruding window decorations in one desktop, browser in another, rapidly switching between them by pressing two keys).

Honestly, unless you are incredibly single-minded and can really concentrate on just one thing at once, I believe that almost anybody could benefit from multiple desktops.

Peter Gathercole Silver badge

Re: FFS!

vtwm is normally described as the virtual tab window manager. The relationships between the various twm family members are documented here.

Peter Gathercole Silver badge


I was using vtwm on UNIX in 1990. Both CDE and OS/2 Warp had it from 1994.

Vtwm was interesting because, rather than separate 'desktops', what it gave you was a scrollable/snappable view over a much larger desktop than the size of the screen. This meant that you could have a huge desktop that you could move the visible screen over. Coupled with hotkeys to control the window manager rather than on-screen buttons, it made a very usable and flexible environment. I did find the source for it a while ago, and compiled it up again, but I'm afraid that I'm now corrupted by the need to support freedesktop extensions from more modern window managers.

And IIRC, the AT&T 5620 Blit had some rudimentary multi-view extensions to Layers in the mid '80s.

I can't remember whether the Sun 3 that I played with in the early/mid 80's had a virtual extension to SunView. I think that they preferred icon boxes to contain multiple minimised windows that you could open and close as a group.

Every billionaire needs a PANZER TANK, right? STOP THERE, Paul Allen

Peter Gathercole Silver badge

Re: For heavens sake

He could do a James May.

Get someone to life-size the plastic kit, and have fun building it. He could also put 'himself' into the driving position.

Biting the hand that feeds IT © 1998–2019