Feeds

* Posts by Peter Gathercole

1734 posts • joined 15 Jun 2007

WAR ON PORN: UK flicks switch on 'I am a pervert' web filters

Peter Gathercole
Silver badge
FAIL

@J.G.Harston

Your post makes no sense. Individual users on a normal shared home network do not 'log on' to the network (even security conscious people such as I do not operate a RADIUS server at home). ADSL connections are almost always always-on, logged in using credentials stored in the ADSL router, and individual machines just connect to the network (using a pre-shared key), get a DHCP address (if this is how they are configured), and off they go. Your post shows a remarkable lack of understanding.

What was being said on the radio this morning was that the first time a user from a household connects after the control is turned on, they will be presented with the pop-up, which would prevent further web access until the level of filtering had been selected. The way I understood it was that it would be from whatever device attempts to access the web first. This could be from one of the kids' computers, logged in as their own account on the system.

In this day and age, people do not share a single computer. I have (believe it or not) more than 30 devices in the house that can connect to the network and browse the net (computers, laptops, phones, tablets and consoles), and on a regular basis, I would expect to see at least 15 connect on a daily basis (7 active computer users in the house, each with more than one device).

It is possible that it could be made per-device, but that would need something like cookies, and would thus only affect browser traffic. But this would not work, because I regularly clear out the cookies on my systems, and it would also mean that the kids' computers would be allowed to set their own policy.

In my case it is mostly academic. The youngest member of my household is 17, so strictly speaking does currently count as a child, but they will be 18 by the time these controls are likely to kick in. But in a household with a scattering of laptops and tablets, the kids will often have their own devices, and could see the request to set the filtering first.

3
1
Peter Gathercole
Silver badge

I was listening, on Radio 4 this morning on the way to work, to the discussion about the 'pop-up' or 'splash screen' that would come up.

Neither of the people interviewed who were supporting it said anything about how they were going to make sure that it was the account holder who clicked 'allow'. What if the kids saw it first?

I like my internet to be unfiltered, and I would love to see how the ISPs are intending to implement this. I suspect a DNS filter, plus a reverse IP lookup and subsequent DNS check in a content filter at the ISP (which gets around the use of alternate DNS servers), and direct blocking of specific known IP addresses. Extend this to IP addresses that do not reverse-resolve (just to be on the safe side), and it would be possible to do what is being talked about.
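Something like the following shows the reverse-lookup idea (a rough sketch in Python with a made-up blacklist; this is speculation about the sort of check an ISP could run, not a description of any real implementation):

    # Hypothetical sketch: decide whether to block a connection based on the
    # reverse-DNS name of the destination IP, regardless of which DNS server
    # the customer actually used to resolve it.
    import socket

    BLACKLIST = {"blocked-example.test"}   # made-up domain for illustration

    def should_block(dest_ip):
        try:
            name, _, _ = socket.gethostbyaddr(dest_ip)   # reverse (PTR) lookup
        except socket.herror:
            return True   # no reverse record: block 'just to be on the safe side'
        return any(name == d or name.endswith("." + d) for d in BLACKLIST)

    print(should_block("192.0.2.1"))   # TEST-NET address, unlikely to have a PTR record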

But all of this is very intrusive, and will probably rely on blacklists in order to work. And it will have to be stateful in order to be remotely efficient. This means that over and above what the ISPs already keep, there will be mine-able information, and also there will be the ability to control what the country sees by controlling the blacklist.

4
0

Ubuntu forums breached, 1.8m passwords pinched

Peter Gathercole
Silver badge

@AC 21:35

If you can't differentiate between the OS and an application that runs on the OS (the forum software), then I suggest that you go and get some education.

Any application that runs its own authentication mechanism, regardless of the OS it runs on, has the same degree of vulnerability.

I have an account on that site, but it is using the lowest grade of password that I use, so any site that may share the same password is probably not going to see any serious consequences for me.

6
0

Hackers crippled HALF of world's financial exchanges - report

Peter Gathercole
Silver badge

Re: Bomb Proof @plrndl

That may have been how it was designed, but that does not mean that is the way it now works.

The current Internet has a number of very serious pinch-points, where disruption would not necessarily damage total connectivity, but would cripple performance. Certain organisations and particular buildings around the world are regarded as hubs, and have a disproportionate amount of the connectivity for a region, country or for international traffic.

But that is not what this article is about. If you are a stock or futures trader, and either your systems or the systems that you need to talk to on t'internet are DDoSd, then you may be unable to trade. If this happens, and the news leaks, then your share price may take a tumble, and you may also end up losing company value as well as revenue. Ditto any company that relies on connectivity to trade or operate, and there are a large number of those.

1
0

CONFIRMED: Driverless cars to hit actual British roads by end of year

Peter Gathercole
Silver badge

"most likely be configured to perform boring, tricky tasks like parking"

I thought there were cars that pretty much did this already.

6
0

Virtualisation extremist? Put down that cable and step away slowly

Peter Gathercole
Silver badge

Re: There was technology max maximise hardware usage before virtualisation

Generally completely agree with you.

But there are situations where it is useful, and also where it is essential.

It's useful to allow two different operating systems to run on the same hardware. Back in the late 1970s, the University I was at turned off their IBM 360/65 running OS/360, and migrated the workload onto a proto-VM on their 370/168. Normally the 370 was running MTS (look it up), but by using a VM, it could also do the legacy OS/360 work at the same time.

Currently, you might do the same to run Windows next to Linux on the same system.

In addition, many enterprise OSs running today were initially designed more than a couple of decades ago. Back then, 2 CPUs in a system was novel outside of the Mainframe world, so the same OS facing a machine with 1024 CPUs may struggle. OK, the OS should have been updated, but when these OSs were written, people probably did not foresee such large systems (640KB, anybody?), and built in serious limitations that require a lot of work to overcome. Unfortunately, these OSs are often becoming legacy for the vendors, so it seems unlikely that the necessary work to overcome the limitations will be done. So often it makes sense to divide up your workload into separate OS instances, and stick each into its own VM.

4
0

PM writes ISPs' web filter ads for them - and it must say 'default on'

Peter Gathercole
Silver badge

Re: DNS look up @Irongut

They can knobble this as well. All they have to do is block TCP and UDP to port 53 destined for any systems other than their own DNS servers, either in the router they supply to you, or within their infrastructure.
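Something like the following shows the decision logic (a rough sketch in Python; the resolver range is made up, and a real ISP would do this in router firmware or core routers, not in a script):

    # Hypothetical policy check: only allow DNS traffic (TCP/UDP port 53) if it
    # is destined for the ISP's own resolvers.
    from ipaddress import ip_address, ip_network

    ISP_RESOLVERS = ip_network("198.51.100.0/24")   # made-up resolver range

    def allow_packet(dst_ip, dst_port, protocol):
        if protocol in ("tcp", "udp") and dst_port == 53:
            return ip_address(dst_ip) in ISP_RESOLVERS
        return True   # everything else passes; this only knobbles DNS

    print(allow_packet("8.8.8.8", 53, "udp"))          # False: third-party resolver blocked
    print(allow_packet("198.51.100.10", 53, "udp"))    # True: ISP's own resolver allowed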

Would be hugely unpopular with most of the readers of this site, but would make no difference to the majority of their customers.

0
0

How the clammy claws of Novell NetWare were torn from today's networks

Peter Gathercole
Silver badge

Re: Don't forget X

I have no knowledge of Netware myself, but if you are talking X11, then it's UNIX, not Linux. Linux had X11 servers and clients (of course), but X11's home was UNIX (and to an extent, some proprietary OSes like VMS).

If it was X11, then what it gave you was the ability to run the GUI administration client programs remotely on any workstation with an X11 server (if you are unfamiliar with it, the server controlled the screen, keyboard and mouse, and programs that attached to this X11 server were clients, wherever they ran), meaning that you would have the ability to remotely administer the Netware server, long before RDP, VNC, or Citrix were on the scene.

X11 servers were available for UNIX and Linux workstations, OS/2 and even Windows NT and later systems, as well as thin clients from people like NCD and Tektronix, so there were a wide variety of workstations that you would have been able to use.
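To illustrate the client/server split, here is a minimal sketch (assuming the third-party python-xlib package, a reachable X11 server with access control opened, and a made-up display name):

    # The remotely running program is the X11 client; it connects to the X11
    # server that owns the screen, keyboard and mouse, and queries it.
    from Xlib import display

    d = display.Display("remotehost:0")   # the workstation running the X server
    screen = d.screen()
    print("Display is %d x %d pixels" % (screen.width_in_pixels,
                                         screen.height_in_pixels))
    d.close()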

People tend to forget what an enabler X11 was.

6
0

The IT crowd: Fiercely loyal geeks or 'inflexible, budget-padding' creeps?

Peter Gathercole
Silver badge

Re: No, No,Thrice No

I was involved in reviewing and updating part of the platform security standards at a large UK bank, and I can tell you that the IT department are the police, not the legislators.

What happens is that a security policy is defined by either an IT security department, or by specialist consultants. This states things in very broad language, such as controlling user access and data flow between security zones. They don't specify technologies, protocols or methods.

The IT department gets this deliberately woolly and poorly defined policy (by definition, as it will be architecture independent), and then has to try and implement it.

Security people are all about saying no to things that they don't understand. The business people want to be able to do anything without restrictions. There is a natural and totally understandable conflict here.

The IT department has to work out what the business users really need, rather than what they want, and then convince the IT security people, who always have a veto, that it is safe. This normally means that the IT architects are caught between an irresistible force and an immovable object. And invariably, one end of the process thinks that the IT department has failed.

Having come up with a design that they have fought tooth and nail to be able to implement, and done so at the lowest cost possible, often in completely unreasonable timescales, the IT department then has to defend the decisions taken to the users, who very rarely believe that security is there for anything other than stopping them doing their job.

Unfortunately, the group with the most influence are the people who feel that they earn the money for the company, even though they are the least qualified.

It's a no win situation.

12
0

Microsoft lathers up Windows 8.0 Surface RT for quick price shave

Peter Gathercole
Silver badge

Re: I came close re. MS Office Home and Student

It used to allow three installs.

The current incarnation only allows one, and is more expensive.

0
0

Sysadmins: Everything they told you about backup WAS A LIE

Peter Gathercole
Silver badge

Full tests are good

I did most of the technical design for the backup/recovery and disaster recovery of UNIX systems at a UK Regional Electricity Company back in the late '90s.

The design revolved around a structured backup system built on an incremental-forever backup server and a tape library.

One of the requirements of getting the operating license for the 1998 deregulated electricity market in the UK was passing a real disaster recovery test. A representative of the regulator turned up on a known day, and said "Restore enough of your environment to perform a transaction of type X". The exact transaction was not known in advance.

We had to get the required replacement hardware from the recovery company, put it on the floor, and then follow the complete process to recover all the systems from bare metal up. This included all of the required infrastructure necessary to perform the restore.

First, rebuild your backup server from an offsite OS backup and tape storage pool, and reconstruct the network (if necessary). Then rebuild your network install server using an OS backup and data stored in the backup server. Then rebuild the OS on all the required servers from the network install server and data from the backup server. All restores on the servers had to be consistent to a known point-in-time to be usable. Then run tests, and the requested transaction.

And where possible, do this using people other than the people who designed the backup process, from only the documentation that was stored offsite with backups, using hardware that was very different from the original systems (same system family, but that was all).

Apart from one (almost catastrophic) error in rebuilding the backup server (the install admin account for the storage server solution had been disabled after the initial install), the process worked from beginning to end. The inspector was informed of the error, but allowed us to fix it and continue, because we demonstrated while he was there a permanent change that prevented the problem from recurring. There was much running around with tapes (the kit from the DR company did not have a tape library large enough!), and a frantic 2 days (the time limit to restore the systems), but it was good fun and quite gratifying to see the hard work pay off. I would recommend that every system administrator goes through a similar operation at least once in their career.

We were informed afterwards that we were the only REC in the country to pass the test first time, even with my little faux pas!

When the supply and distribution businesses split, we used the DR plan to split the systems, so such plans are not only useful in disasters. I've since done similar tests at other companies.

2
0
Peter Gathercole
Silver badge
Meh

Re: Point 3 is wrong

My view is that it depends entirely on how much has changed in the OS since it was installed, and that is probably determined by the function of the system being backed up.

I've worked in an environment where every server in the server farm is a basic install with scripted customisations, with all the data contained in silos that can be moved from one server to another (the bank I used to work for had been doing this on a proprietary UNIX since the turn of the century, before Cloud was fashionable). These systems can be re-installed rather than restored.

I've also worked in environments where each individual system has a unique history that is difficult to replicate or isolate. These systems need to be restored.

One example of this latter category is the infrastructure necessary to reinstall systems in the former category!

There just is not one fixed way of doing things. Each environment is different.

2
0

Snowden leak: Microsoft added Outlook.com backdoor for Feds

Peter Gathercole
Silver badge
Facepalm

Re: Don't blame Microsoft but... @ShelLuser

Bloody bloody. I must be slipping.

I actually read the whole of Section 9 of the service agreement policy to see the link with GiTS before the obvious smacked me in the face!

1
0

UK Post Office admits false accusations after computer system cockup

Peter Gathercole
Silver badge

Re: Keeping the beaurocracy alive... @beck13

I was the one who brought up Tax discs, and I did refer to the Post Office being used to obtain Tax discs, although I did not sufficiently discriminate between the Post Office and Royal Mail. My mistake.

My other points about the Post Office in rural areas still stand IMHO.

If it were profitable for TNT et al. to put a last-mile delivery service in, they would. They don't, so it can be assumed that they have judged that it is not worth it. IIRC, Royal Mail originally said that they would at best break even doing the last mile (although that is really not descriptive of what is done), and would more likely end up doing it at a loss. Unfortunately, they were forced to do this in order to allow other companies to break the total monopoly that Royal Mail had for many years.

It is probable that residents of most medium sized or larger towns could live without a local Post Office day-to-day. It is similarly likely that rural areas need Post Offices more. But I would bet that many of the people who say that they can live without it probably do not know what they could use it for. They are for far more than just buying stamps.

0
0
Peter Gathercole
Silver badge

"There is no such thing as a Tax disk" @David Cherry

You might like to tell the DVLA and the gov.uk websites that.

https://www.gov.uk/browse/driving

1
0
Peter Gathercole
Silver badge

Re: Keeping the beaurocracy alive... @Me

Damn. Bloody Americanisms. Of course I meant disc.

1
0
Peter Gathercole
Silver badge

Re: Keeping the beaurocracy alive... @AC 8:13

If you can live without a mail service, then I suspect that for you the Post Office is irrelevant.

But I also suspect that when you need your next car tax disk (assuming you drive), you may find one of the Post Office and Royal Mail services useful, either to collect in person or to deliver the disk. And if you don't drive then you are not typical, and your comment is irrelevant.

Or you may want your next bank card to be securely delivered, or that job application for which the employer wants documentary evidence and you want tracking, or any number of other things for which a physical delivery is required.

What you may not realise is that people like TNT and DHL (I think) and others actually use the Royal Mail for last-hop delivery, because they can't be bothered to raise the money to put a national delivery mechanism in place for themselves. If there was no Royal Mail to do this, these alternative services would become much more expensive.

And for many people, particularly in rural areas, Post Offices fulfil the function of bank, basic shop, newsagent and social hub, when no other shop would remain open.

Royal Mail and the Post Office are not perfect organisations (especially in light of this report), and their role is definitely diminishing, but if they were to disappear overnight, you, along with everybody else, would notice at some point.

17
1

Universities teach us a thing or two about BYOD

Peter Gathercole
Silver badge

Re: Security???

You're missing the fact that these are not single networks, but networks of networks, with fenced links between them, and at arm's length from the core University networks. The only really complex part is the distributed user authentication that allows access to the core systems.

It really is a case of divide and conquer.

2
0
Peter Gathercole
Silver badge

Re: Does this really count as BYOD? @John H

If you look at large corporate BYOD programs, one of the conditions is often that you surrender a lot of control of your own device. This normally means purchasing hardware from a list, installing company-supplied tools like VPN, encryption and AV, and having additional administrator accounts created. It certainly challenges the idea of it being your device.

What most Universities do is to have an open(ish) student network (or, in fact, many of them, often firewalled from each other and the main University campus network), together with a portal or gateway on each that allows them restricted access to the central file servers and other facilities of the core University networks. In addition, there is firewalled access to the Internet.

I don't see why that model cannot be used by business. It keeps your core network safe, while providing much of the access that is required by the user.

My kids were always told that it was their responsibility to make sure that their systems were adequately secured, and the only assistance given by the college was to perform standalone virus scans. If the system failed the scan, they were offered one of the free AV packages, and told to either install and run it, or get someone to do it for them. Their machines/accounts were blacklisted until they had been proved to be virus-free.

0
0

Samsung Galaxy S3 explodes, turns young woman into 'burnt pig'

Peter Gathercole
Silver badge

Re: Increased energy density leads to increased risk @Craigie

But in order to liberate that energy from a chocolate bar, you need to oxidise (i.e. burn) it in one way or another, and you need atmospheric oxygen, so you ought to take the mass of that into account as well.

Chocolate can be made to burn if you try hard enough, but I'd love to see you 'recharge' your burnt chocolate bar.

But the nature of a battery means that you cannot take the cheap route of just setting light to it. I suspect that the calorific value of oxidising the components of a battery may be even higher than the rated re-usable capacity of a battery.

In short, you're not comparing like figures.

3
0

What happens on G-Cloud stays on G-Cloud

Peter Gathercole
Silver badge

Do I spot a supplier tie-in?

In order to use this, you have to be an Office365 registered user?

OK, this is currently just for UK Government employees and information partners, and I know that I have to temper my dislike of Microsoft's business practices, but it feels like Microsoft merely has to wait for all UK Government on-line services to use this mechanism before it can sign up the entire UK adult population to a subscription service.

Where's the openness, fairness and competition?

1
0

Fedora back on track with Schrödinger's cat

Peter Gathercole
Silver badge

Re: Hang on a sec

The difference is that while a Linux update will reboot a system once, there is a good chance that if you are updating Windows with other components (like hardware drivers), Windows will reboot more than once, sometimes many more times. It's got better than it used to be, but.....

Updating a kernel of any operating system on-the-fly is difficult, regardless of whether it is a desktop or a server system.

The problem is that the kernel is more than just another programme, and is being used all the time by running processes, and one of the things the kernel does is to track and allocate resources to the running processes. In theory it is possible to replace the kernel while it is running without disrupting the processes that it is controlling, but to get it right under all circumstances is difficult, time-consuming to test and thus costly.

A micro kernel implementation may be easier to update, but that assumes that you can re-bind running processes to new instances of a service on-the-fly. But even if you can do this, it is likely that there is one or more components that will require a system re-start if they are updated (the thread scheduler is one example).

With modern on-the-fly service migration, it may be possible to boot the new kernel in a different VM, and then migrate processes into the new VM, but most people just put up with losing their system for 10 minutes.
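Stock tools don't migrate individual processes between kernels, but the nearest everyday equivalent, live-migrating a whole VM to a host that has already booted the new kernel, looks roughly like this with the libvirt Python bindings (a sketch only; the host URIs and guest name are made up):

    # Hypothetical sketch: move a running guest to a hypervisor that is
    # already running the patched kernel, so the guest sees no outage.
    import libvirt

    src = libvirt.open("qemu:///system")                   # current hypervisor
    dst = libvirt.open("qemu+ssh://patched-host/system")   # hypervisor on the new kernel

    dom = src.lookupByName("my-guest")                      # made-up guest name
    dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)
    print("Guest now running on the patched host")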

1
0

Hanslope Park: Home of Britain’s ‘real-life Q division’

Peter Gathercole
Silver badge

Blooming heck!

I used to drive past there every day for months without even knowing what it was!

0
0

Broadband rivals 'pleased' over Ofcom's market shake-up plans. Maybe too pleased

Peter Gathercole
Silver badge
Black Helicopters

Re: Router Costs @Why Not?

That's one of the reasons why I always provision my own router. It's a cost I bear, but one I believe is reasonable to maintain independence from any ISP.

I don't trust them not to put some nasty spying functions in their firmware to leak information about my network and the devices installed on it.

Paranoid, me?

Yes, probably.

0
0

MSX: The Japanese are coming! The Japanese are coming!

Peter Gathercole
Silver badge

Re: alt-Speculation @Me

That should have been Advanced Workstation Division (AWD) in Austin.

0
0
Peter Gathercole
Silver badge

Re: alt-Speculation

Not sure that the 801 ROMP was really intended for PC machines. It was originally intended to be the CPU for a dedicated word-processor, but was picked up by the Advanced Workstation Team in IBM Austin to fill a niche as a technical workstation for education and engineering use. It was most successful as a CATIA workstation, either on its own, or as a front-end to a mainframe using Distributed Services. It always had weak floating point performance until the advanced floating point processor was available late in its life. It was an important stepping stone to the RS/6000, p Series and Power systems, and the PowerPC processor, though.

Although the 6150 was originally marketed as a 6150 RT PC, it was never a PC per se. There is folklore that suggests that it was going to be used as a PC, but looking at the reason why the 5150 was rushed out of the door as a quick-and-dirty temporary solution to stop the likes of Apple and various Z80 CP/M systems from dominating the market, it would never have been ready in the timescales required. That's why IBM used off-the-shelf components and a ready-made OS and Basic for the system.

0
0
Peter Gathercole
Silver badge
Meh

Re: alt-Speculation

Of course, I was referring to the non-I&D PDP-11s, which I think the LSI-11 was. I think that the J-11 and F-11 may have been separate I&D machines, but that only allows you to double the process address space, and even then with serious limitations (64KB text space and 64KB data).

0
0
Peter Gathercole
Silver badge
Happy

Re: alt-Speculation

As much as I love the PDP-11 as an architecture, it would still have run out of steam in the late '80s. The problem was the memory model, and the mixed-endian nature of the system.

Without further architectural evolution (which was the VAX-11 in 1978), the PDP-11 was limited to 64KB processes (unless you used overlays) mapped into an overall 22-bit (4MB) maximum address space.

Don't get me wrong. It was a magic architecture, and because of the orthogonality of the ISA, I used to be able to decode PDP-11 machine code directly from octal dumps on paper. But it was a '70s architecture, not an '80s one.

The '80s should have belonged to Motorola 68000, NS16032 or 32032 (a very nice instruction set), or possibly ARM, running UNIX derivatives.

Just imagine if the IBM PC had had a 68000 with enough of a cut-down UNIX back in 1982. As soon as hard-disks became available (PC-XT time scales), we would have had multi-tasking full UNIX systems on the desktop, a bit like the AT&T 3B1.

PDP-11s survive (even to the current day and into the future according to a recent El-Reg article) because they are fine industrial controllers for systems that do not need large amounts of code to perform their function.

4
0
Peter Gathercole
Silver badge
Happy

Re: It was training in autism.

But Acorn User also produced a barcode scanner for the BBC, and printed their programmes as barcodes as well as listings that could be scanned in, complete with checksumming.

They had special yellow pages in the middle of the magazine so that you could find them easily.

2
0

Bank details - PAH! Phishers want your FACEBOOK password

Peter Gathercole
Silver badge
Alert

Re: Hey phishers!!!!.... THINK AGAIN!

Be careful with your Facebook account. There are many, many other sites that will use the Facebook login process to control access to their site (I think LinkedIn will, and I was looking at the On-TV app on Android that allows it - I tend to ignore it as I don't want all my accounts linked together). I think these processes work by logging into Facebook themselves, and seeing whether the ID that you've given is currently logged in.

There seems to be a group of information providers that would like to become single sign-on candidates. I've seen Google, Yahoo and PayPal as well as Facebook offered as quick ways of registering and authenticating for other sites on the Web.
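For what it's worth, these 'log in with X' buttons usually rely on an OAuth-style redirect rather than the third-party site holding your password; a minimal server-side sketch, with entirely hypothetical endpoint URLs and credentials, looks something like this:

    # Hypothetical OAuth-style 'log in with provider' flow. The URLs, client id
    # and secret are placeholders for illustration, not real endpoints.
    import requests

    AUTH_URL = "https://provider.example/oauth/authorize"
    TOKEN_URL = "https://provider.example/oauth/token"
    CLIENT_ID, CLIENT_SECRET = "my-app-id", "my-app-secret"
    REDIRECT_URI = "https://mysite.example/callback"

    # Step 1: send the user's browser to the provider to log in and approve access.
    login_link = (AUTH_URL + "?client_id=" + CLIENT_ID +
                  "&redirect_uri=" + REDIRECT_URI + "&response_type=code")

    # Step 2: the provider redirects back with a one-time code, which the site
    # exchanges (server to server) for a token identifying the user.
    def exchange_code(code):
        resp = requests.post(TOKEN_URL, data={
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "redirect_uri": REDIRECT_URI,
            "grant_type": "authorization_code",
            "code": code,
        })
        return resp.json().get("access_token")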

1
0

Galaxy S4 way faster than iPhone 5: Which?

Peter Gathercole
Silver badge

Re: Battery

Sounds like my Palm Treo. It still gives me a week's battery life on its original battery, and was very smart in its time.

0
0

That enough, folks? Starbucks tosses £5m into UK taxman's coffers

Peter Gathercole
Silver badge

And therein lies the problem.

'Nuff said.

1
0

SCO vs. IBM battle resumes over ownership of Unix

Peter Gathercole
Silver badge

Re: I think Apple owns Unix now anyway @Vic

I agree that the header files are not necessarily authoritative, but unless you know of somewhere else that is generally available, the header files may still be the best source, even if they are not very good.

Most people (and me, now) do not have access to any current UNIX source code. Generally speaking, although the temptation was there, I resisted taking snapshots of the various code when I left companies with source. I try to abide by the rules, even though in hindsight, I have often regretted being so 'moral'.

The only UNIX source code I have available to me now is the V6 Lyons commentary, and the V7 code that was freed up by Caldera.

When I wrote my previous comment, I had a bit of a dig around in the IBM AIX V7.1 include directories. I was very surprised to see almost no copyright notices to Bell Labs or AT&T (understandable), USL (I suppose that is understandable as well), or Novell, Caldera or SCO, and precious few to the Regents of the University of California at Berkeley.

It looks like IBM have been cleaning up the copyright notices over the years.

I am currently not working on any other platform to check.

0
0
Peter Gathercole
Silver badge

Re: I think Apple owns Unix now anyway @lars

Oh. Yes. I forgot about 32v. That was in the same announcement.

BSD/Lite was, as far as I understand, BSD 4.4 with AT&T code removed/re-written. I think, although I am prepared to be corrected, that is the reason why it was called Lite.

UNIX does indeed contain code written at Berkeley. The obvious example is vi, although it would not surprise me if the paging code had something to do with BSD. As I understand it, there were relatively good relations between the Bell Labs people and the computing lab people at Berkeley.

The networking code probably did not, because AT&T took the Wollongong TCP/IP code, and re-wrote most of it to use STREAMS/TLI.

But it does not matter how much code came back from BSD, because the BSD license is a very permissive one that does little to restrict what the code is used for, provided it is acknowledged.

It is other contributors (which will mostly be companies working with AT&T) that may be more problematic, but I guess it depends on the contractual relationship between them and AT&T. The best place to look is probably the copyright notices in the header files for each release.

0
0
Peter Gathercole
Silver badge
Boffin

Re: I think Apple owns Unix now anyway @lars

I would actually dispute that UNIX(tm) has ever been Open, as we would think Linux or other GPL code is.

Yes, UNIX source code has been available, but only under license. Versions (editions) 1-6 were available to academic users under a very permissive license, but one that prevented commercial use. At the time, Bell Labs/AT&T were prevented by a US anti-monopoly judgement from supplying commercial computers, and this included Operating Systems. At this time, there was a thriving pre-Open Source group of academic users who dabbled in the code, and shared their work with others. This was a really exciting time (I was there), and you often found 400' 1/2" tape reels being sent around the various Universities (it was pre-networks).

Version 7 tightened this up to prevent the source from being used as a teaching example. Version 7 and earlier code has, since 2002, been published under GPLv2, granted by Caldera (Hooray!). This is now "Open", but I don't know of anybody who is shipping a commercial V7 implementation (although an x86 port is freely available from a South African company called Nordier Associates).

Commercial use of UNIX post Version 7, from PWB to UnixWare was under a commercial license that did not contain any right to the source code. The same was true for all other-vendor UNIX systems. Source licenses were available, but under their own strict licensing conditions, and at a high cost (and often required the licensee to have an AT&T source licence as well!).

BSD code prior to BSD/Lite required the user to have an AT&T version 7 (or later) license. BSD/Lite or later does not contain any AT&T code (or at least nothing that AT&T were prepared to contest), so is available under the BSD license, but as I have stated before, cannot legally call itself UNIX.

Having got that out of the way, why was UNIX used as the basis for Open Systems?

Well, UNIX was always easy to port. This meant that there were several vendors (piggy-backing on various academic ports, like SUN and DEC) who could sell UNIX systems, meaning that application writers had something approaching a common base to target their code at, although differences had to be worked around. This was unique. There was no other large-system operating system around at the time that had this.

It became apparent that if there could be a standardised subset of UNIX (commands, APIs, libraries) that all vendors would support, then this could mean that application writers could possibly entertain a "write-once, compile once per vendor UNIX, and sell" strategy. This was first championed by AT&T (who by this time were allowed to sell computers and operating systems) with the System V Interface Definition (SVID), which was adopted by IEEE, with minor changes, as the various POSIX 1003 standards.

These standards are what gave UNIX the "Open" label. Anybody could write an OS that met these standards, whether based on genetic UNIX code or not. This has resulted in numerous interesting products and projects, including GNU/Linux (POSIX compliant, but not certified to any later UNIX standard), QNX, BeOS and z/OS, all of which can be regarded as UNIX or UNIX-like, and some of which are truly open. Not all of these can be called UNIX, however.

I agree about the Linux kernel. The reason why this has remained as a single kernel is because Linus keeps an iron hand on the kernel source tree and official release numbers. It is perfectly possible for someone to take this tree, and modify it (and it has been done by several people including IBM and Google) under the GPL, but they can't get their modifications back in to the main tree without Linus' agreement. They could maintain their own version, however, as long as they abide by the GPL. AFAIK, they can even still call it Linux.

0
0
Peter Gathercole
Silver badge
Boffin

Re: This will only end when the case is ruled on @Vic

I think you're wrong. This is what I understand.

UNIX System Laboratories (USL) was set up as the home for UNIX as part of the SVR4 Unified UNIX program, and was joint-owned by a consortium of companies including AT&T. Part of the set-up was that all UNIX IP and code was not just licensed to USL, but the ownership was transferred from AT&T to USL. (I was offered a job by USL in the UK, and nearly took it, so I have an interest in this part of the history)

When USL was wound up, it was bought by Novell, and the ownership of all of the UNIX IP went to Novell. This included all branding, code, copyright and patent information.

In 1993 or 1994, Novell transferred the UNIX brand and verification suites to X/Open (now The Open Group), and licensed the use of the code and IP to SCO, although through a contractual quirk (SCO not having enough money at the time), the copyright (and I believe that this includes the right to use and license the code) remained with Novell.

SCO then sold itself to Caldera, which then renamed itself the SCO Group.

The SCO Group then tried to assert ownership of the code and failed. This was one of the SCO Group vs. Novell (or vice versa) cases that was ruled on in Novell's favour. In parallel, SCO had engaged in campaigns of FUD and lawsuits against RedHat, IBM and their customers. These cases have never been concluded and are the ones that will not die, particularly the IBM one.

Novell was then mostly bought by Attachmate, although, and I quote from the Wikipedia article on Novell, "As part of the deal, 882 patents owned by Novell are planned to be sold to CPTN Holdings LLC, a consortium of companies led by Microsoft and including Apple, EMC, and Oracle."

I was never clear about whether this IP included any of UNIX, or if that remained with Novell. This is the bit I am uncertain about. If it went to CPTN Holdings, this is how it could be used, although looking at the agreement, CPTN's ownership of the IP is subject to GPL2 and the OIN licenses, which may offer some protection.

Confused? You will be after this year's episode of SCO*

(* with apologies to the creators of Soap for the shameless paraphrase of their catch line)

Please, please! Whoever owns the UNIX copyright, publish the non-ancient code under an open license. There's no commercial reason not to any more.

1
0
Peter Gathercole
Silver badge
Boffin

Re: I think Apple owns Unix now anyway @lars

You are so wrong in your suggestion that there is no AT&T code in AIX. Also, you are wrong about people wanting to pay for UNIX branding. Look at the Open Group website, and see which UNIX variants have been put through the various UNIX test suites (which costs quite a lot of money). IBM, Sun (as was), HP and Apple have all paid the money, and achieved the certification.

IBM has a SVR2 source license and AIX was very clearly derived from AT&T SVR2 code. It was not written from the ground up. I've worked in IBM and had access to the source code, and I have seen parts of the code that are clearly related to AT&T UNIX, complete with the required AT&T copyright notices. This was a long time ago (early '90s), but they were there.

For Power systems the current AIX can be traced back to AIX 3.1, released on the RISC System/6000 in 1990. AIX 3.1 itself was derived from the code that IBM had for the 6150 RT PC, and this was a direct port of SVR2, mainly by IBM but aided by the INTERACTIVE System Corporation, who had also worked on PC/IX for IBM. Reports of the Kernel (in places like Wikipedia) being written in PL/I or PL/8 refer to the VRM, not to the AIX kernel.

I admit that there has been a huge amount of code added in AIX over the years, but it is still a genetic UNIX. How much code is related? Maybe you should ask SCO. They've seen the AIX source.

The same is true for SunOS/Solaris. I was working for AT&T when SVR4 was released, and I can say with absolute certainty that SunOS 4.0.1 was the same source code base (again, I had access to the source code) as AT&T's SVR4.

Sun were one of the principal members of UNIX International and the Unified UNIX programme that attempted to standardize UNIX in the late 1980s with AT&T, ICL, Amdahl and various other vendors long gone. I still have the notes from the developer conference. Prior to this release, SunOS 3 and earlier were based on BSD 4.2, with enhancements added from 4.3.

I am not so clear about HP-UX, but I know that HP had a direct UNIX V7 port on a system I'm sure was an HP 500 in the early 1980s, although I can't find any references (it was pre-Internet). Wikipedia says HP-UX was derived from System III. HP (and in fact IBM and DEC) were in the Open Software Foundation that was set up in opposition to the Unified UNIX. They had their own UNIX called OSF/1, which had a common code base that was taken from DEC and IBM versions of UNIX. The tension between UI and OSF was known as "The UNIX wars".

Time moves on, and of course there is no feedback from the vendors back into the main tree, so of course the different versions diverge, but I am sure it is safe to say that all three of these are genetic UNIXes, and they all have achieved UNIX branding at various levels. They can all be called UNIX as per the branding rules, but in this day and age this is not really important. UNIX as a unified OS (much to my regret) is largely a has-been.

My biggest fear is that without some form of standardization (like the Linux Standards Base which is mostly ignored) Linux will go the same way.

0
0
Peter Gathercole
Silver badge
Boffin

Re: I think Apple owns Unix now anyway @peredur

There are nuances to this. Note that I said "UNIX(tm)" not UNIX-like.

Want to know the difference?

There is a set of verification tests, owned by The Open Group (http://www.unix.org/), which tests a system for UNIX compliance. There have been several UNIX standards over the years, starting with SVID, through POSIX 1003.x, UNIX 93, UNIX 95, UNIX 98 and most recently UNIX 03.

UNIX(tm) is a registered trade mark. Use of this mark to describe an operating system is restricted to those that have passed one or more of the test suites maintained by The Open Group.

OSX Mountain Lion has passed the UNIX 03 test suite. As has Solaris 10 and 11, HPUX 11i, and AIX 5.3 and 6.1. All of these operating systems can call themselves UNIX.

There are absolutely no Linux distributions that have passed any of the UNIX test suites, so legally, no Linux system can be called UNIX.

Two other quirks. There are no BSD systems that have been tested, so strictly, BSD is not UNIX (although there may be historical justification for BSD 4.4 and earlier). But z/OS V2R1 (and some earlier versions) have been tested and passed against UNIX 95, so bizarrely, z/OS 2.1, an operating system that has little or no UNIX code in it, can be called UNIX!

Now I don't know how many OSX systems have shipped in total compared to Solaris, HPUX and AIX systems, but in terms of new systems installed, I would hazard a guess that Apple are now shipping more OSX boxes than the other vendors are of their own brand of UNIX. And you can't count Linux.

This is why I said what I did.

1
0
Peter Gathercole
Silver badge

Re: The code allegedly ported was written by IBM in the first place

@__________

If you are talking about JFS, then the original implementation was on AIX 3.1, but it was re-implemented (not ported) for OS/2, and it was this OS/2 code that was then ported to Linux. So you are probably right, but not in the way you think.

0
0
Peter Gathercole
Silver badge

Re: I think Apple owns Unix now anyway @AC

Would love you to justify this. Apple may now ship more UNIX(tm) systems than anybody else, but they own nothing of the UNIX IP.

OSX is a UNIX derived system, having taken the MACH kernel, married with bits of BSD (which is not branded), and then got UNIX 03 branding. This means that it passes the UNIX test suite, not that it has any UNIX IP in it.

3
0
Peter Gathercole
Silver badge

This will only end when the case is ruled on

I said a couple of years ago that this may come back. Until it is finally ruled on and closed, beyond all hope of an appeal, it will keep coming back. This is both because the claim is big enough to keep creditors and lawyers interested, and because it is a vector to attack Linux as a platform.

Mind you, the landscape has changed. I never fully understood where Novell's IP went to when SuSE got bought. If it is the case that it ended up with a shell company that is controlled by parties who have an interest in derailing Android, Chrome, Tizen and all of the other Linux-related platforms, then consolidating SCO's claim with the ex-Novell IP could prove more than an annoyance.

It all hinges around UNIX code that was allegedly incorporated into the Linux source tree by IBM as part of AIX code that was ported to Linux (I know that JFS was one thing quoted), but IIRC the case was never proved, as SCO could or would not point out the code in question. There were also arguments about derivative works. But they were never closed either.

Like the MS patent list, I feel that it would be in the best interests of all of the interested parties of Linux to make sure that any code that could be cited was rewritten and expunged from the Linux code tree. At least this would protect future Linux products, and turn this into a chase for money, rather than a FUD attack on Linux.

In one bizarre slant on this, it may actually prolong the life of Genetic UNIX (directly descended from the Bell Labs code), as it keeps it in view. I would love to see the SVR4/UnixWare source opened up as a result of any real settlement of this case, but I think that this is unlikely.

6
0

Home Office launches £4m cyber security awareness scheme

Peter Gathercole
Silver badge

Re: @Martijn Otto @Khaptain

There is a way to make users like the ones you indicate safe, but it means locking down their computers so that they can't install software, and are completely removed from any decisions about installing patches.

Whilst it would appear that Microsoft and Apple may be moving to that mindset, it is gathering some opposition from computer users, especially those who understand how things work.

I'm sure that there are other organisations that would like there to be this level of control, especially if they can recruit the vendors into installing other software components as part of the patching process.

The problem is one of balance between on-line liberty and security (and I'm not specifying whose!)

2
0

PC makers REALLY need Windows 8.1 to walk on water - but guess what?

Peter Gathercole
Silver badge

Re: My solution @John H Woods

If such a high pixel density is required, why have I never had migraines up until now?

I completely dispute that it is necessary to have such high resolutions.

In my view, as long as there are enough pixels, it's screen size that is important. And don't go on about 'colour saturation', 'jagged fonts', 'graphics intensive work', and 'multiple windows'. They're just excuses to justify the cost of such displays.

The only reason for higher definitions is to get more on the screen, and once the character height drops to below 2mm, it becomes unusable without a magnifying glass, regardless of how many pixels are used to display it.

5
0

Nuke plants to rely on PDP-11 code UNTIL 2050!

Peter Gathercole
Silver badge

Re: there are alternatives

It depends on the model, but many of the UNIBUS PDP-11s were built out of TTL (even the CPU and FPU). This means that it should still be possible to source and fit almost any of the silicon parts, although I suspect that the most difficult parts to source would be the memory chips.

If they were F11 or J11 systems, you would have to rely on existing parts.

But I suspect that with the state of current chip baking technology and the simplicity of the chips back then, it may be possible to create a pin compatible memory chip using an FPGA relatively easily if it were really necessary.

Hmmm. What a project. Keep PDP-11 alive using FPGAs!

2
0
Peter Gathercole
Silver badge

Re: Pah

I was going to say exactly the same.

And you've missed out UNIX, which was definitely multiuser on the PDP-11.

I think the hack was confusing PDP with RSX-11M Plus, which really is the ancestor of VMS.

I was the primary technical support for RSX-11M on a SYSTIME 5000E (actually a PDP11/34e with 22-bit addressing and 2MB of memory - a strange beast) between 1982 and 1986. We had 12 terminals working relatively well on a system that on paper was little more powerful than a PC/AT.

I think I can still do PDP-11 assembler. At one time, I used to be able to decode PDP-11 machine code in my head, although this was mainly because the instruction set was extremely regular. I still would recommend people looking at the instruction set to see how to design one. It's a classic.
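To show just how regular the encoding is, here is a minimal sketch that splits a double-operand instruction word into its octal fields (the example word is the classic MOV #immediate,R0 form):

    # Double-operand PDP-11 instructions pack into one 16-bit word:
    #   bits 15-12: opcode, bits 11-6: source mode/register, bits 5-0: destination.
    def decode_double_operand(word):
        opcode = (word >> 12) & 0o17
        src_mode, src_reg = (word >> 9) & 0o7, (word >> 6) & 0o7
        dst_mode, dst_reg = (word >> 3) & 0o7, word & 0o7
        names = {0o1: "MOV", 0o2: "CMP", 0o3: "BIT", 0o4: "BIC", 0o5: "BIS", 0o6: "ADD"}
        return "%s src=(mode %o, R%o) dst=(mode %o, R%o)" % (
            names.get(opcode, "?"), src_mode, src_reg, dst_mode, dst_reg)

    print(decode_double_operand(0o012700))   # MOV (PC)+,R0 -- i.e. MOV #immediate,R0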

11
0

First look: iOS 7 for iPad

Peter Gathercole
Silver badge
Joke

It's a style thing

Like all things that look like fashion (and I count iThings in this category), changes do not have to make sense.

0
0

Samsung Galaxy Note 8: Proof the pen is mightier?

Peter Gathercole
Silver badge

Having been a Palm user

I'm seriously thinking about taking a Note 2 as an upgrade on my phone next month. The downside is the size.

There are any number of situations when using a finger is just not accurate enough (such things as free-form document mark-up and notes, sketching and handwriting recognition). I still find Graffiti easier to use than swipe, which I just can't seem to use accurately on my current phone, and doing something that feels like writing is easier with a stylus than a finger.

1
0

Top500: Supercomputing sea change incomplete, unpredictable

Peter Gathercole
Silver badge

Did you see the number of cores on Tianhe-2

It says over 3 million, and draws 17MW of power.

I guess what this says is that if you throw in enough hardware, even with the law of diminishing returns, you can have the #1 supercomputer.

0
0

Girls, beer and C++: How to choose the right Comp-Sci degree for you

Peter Gathercole
Silver badge

Re: "This weird new software was Unix" @Sandra

As you might expect, AT&T used UNIX a lot.

I actually worked for an outreach of AT&T that was doing work on the 5ESS telephone exchange, and not only was UNIX used in various parts of the exchange (the AM ran UNIX/RT on a duplexed 3B20D when I was working with it), but UNIX was also the development environment for all the code.

In my time, they were also using Amdahl mainframes running R&D UNIX from AT&T Indian Hill as an emulation environment (EE) for the exchange, as, believe it or not, the cost of emulating the exchange on a mainframe was less than that of having a full exchange as a test-bed.

After I left, they switched to Sun 3 (because the SM used 680x0 processors) and Sun 4 kit for the main working environment. Just before I left, I was playing around with gluing all the systems together with AT&T RFS, which allowed you to do some really neat tricks.

On the subject of Indian Hill (Chicago): pre-TCP/IP and SMTP, the UUCP hub IHLPA, which used to be a go-to for routing mail to systems that you did not have a direct path to, was run from this site by AT&T. I don't know when it was decommissioned, but not that long ago (a couple of years back) I came across a reference to it in a sendmail configuration, which took me by surprise.

0
0