* Posts by Peter Gathercole

2924 posts • joined 15 Jun 2007

IT reseller Misco UK shutters warehouse and distie centre

Peter Gathercole Silver badge

Re: Model error.

Amazon have too much of a head-start for anybody to be able to catch up.

If you remember those few short years ago when they first appeared, they were selling books. They were priced so that the discounted price plus shipping came to a penny less than the RRP on the back cover, the price that all the bricks-and-mortar booksellers would be charging. Often, it would be delivered the next working day, even though you had not paid for next-day delivery (although this is something they stopped once they were established).

At the same time, the physical booksellers were desperately trying to shoehorn other items into their stores, because they were having trouble surviving, especially against the supermarkets, which would be selling the bestseller list at a discount. As a result, the number of titles (and number of copies) that traditional bookstores stocked was significantly reduced. They all, however, offered to order in any titles they did not have on the shelves.

OK. I want to buy a book, let's say part of a series. Waterstones, Borders and Smiths would have the latest one in the series, but none of the others. They offer to order it in. But I can get it in the same timescale, possibly cheaper, delivered to my door from Amazon. What a tough decision to make! And this was the foot in the door.

Then they moved on to music, stocking centrally even the most obscure titles. Once the distribution network was established, they were able to move, seemingly, into everything else, and even built an IT infrastructure for their own use that they worked out they could sell.

Now, they've tied their customer base to them with seductive offerings like Prime. (I really, really want to support my local shops, but when I can order something not available locally on Saturday afternoon, and actually get it delivered on SUNDAY for free, even I find that irresistible).

Even the largest retailer or wholesaler would have difficulty competing with them, although it seems like in the UK, Sainsburys/Argos and Tesco are giving it a go.

But here is what I think is the ironic thing. Many of the shops that Amazon threaten are now being set up to act as a collection point for Amazon deliveries. This must really feel like a kick in the teeth for some of them!

Peter Gathercole Silver badge

Re: Model error.

What Misco did (as did Inmac) was have a single place to get all of the specialist cables, interposers and media that would cost an arm and a leg and take 28 days to get from the equipment manufacturers.

They made sure that all IT departments had catalogs readily available (shipped in the trade rags), so that when you urgently needed tapes, disks or a dozen boxes of three-part multi-copy fanfold paper, you had somewhere to go on the end of a phone line, albeit at a premium price.

But nobody in their right mind would consider them for regular supply of these things!

There are multiple instances I've experienced where they've dug the place I was working out of a hole.

That original operating model worked fine until the internet...

Now, you can almost certainly get what you need from Amazon or one of its marketing partners, at a price that is difficult to compete with, and probably delivered next day as part of the standard offering. It's really difficult to compete with the steamroller that is Amazon. And that is what Misco's recent history has shown.

Chap behind Godwin's law suspends his own rule for Charlottesville fascists: 'By all means, compare them to Nazis'

Peter Gathercole Silver badge

Re: The thin line between right and wrong

Oh how I hate graphs with truncated axes, especially when they are used to accentuate a small change.

Firmware update blunder bricks hundreds of home 'smart' locks

Peter Gathercole Silver badge

Re: "smart home devices" @The Man...

But one of the problems is that even if the actual lock code is quite simple, the code required to keep it safe from hacking, MitM attacks etc. is not.

Let's assume they were originally using SSL or TLS 1.0 for the encryption. In order to keep the device safe, that would need to be changed, and some of the ciphers and cryptography would have to be retired as a result of vulnerabilities discovered in the older, previously thought secure, connection code.

The patches for the underlying technologies may be freely available. Packaging and deploying them to your IoT device is not. This is why cheap IoT tat is such a flawed idea at the moment.
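As an illustration of the kind of ongoing maintenance this implies, here is a minimal Python sketch (hypothetical vendor code, not from any real device) of retiring old protocol versions and ciphers on the client side:

```python
import ssl

# Hypothetical maintenance step for an IoT client: refuse anything
# older than TLS 1.2, retiring the SSL/TLS 1.0 setup described above.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Ciphers with discovered weaknesses can be retired explicitly too.
context.set_ciphers("HIGH:!aNULL:!MD5:!RC4:!3DES")

print(context.minimum_version >= ssl.TLSVersion.TLSv1_2)  # True
```

The two lines of policy are trivial; the hard part, as the post says, is packaging an updated TLS stack and shipping it to devices in the field.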

Don't buy Microsoft Surface gear: 25% will break after 2 years, says Consumer Reports

Peter Gathercole Silver badge

Re: Not just Microsoft @John Brown

Is it a T450 that you've replaced key caps on?

I know that when Lenovo switched from the old key shape to the newer 'chiclet' or island key shape, they also changed supplier. I've never had to fix a keyboard on anything newer than a T60 (my keyboards rarely get broken!)

I've owned Thinkpads all the way from 365s to T420s, and the one thing you can say without any hesitation is that the keys change between models. Oh yes, they all use the scissor hinge and collapsing rubber bubble pressing on a membrane, but the direction of the hinge and the position of the clips holding the keycaps on mean that keycaps are rarely interchangeable between models, and you often have to change the technique used to get the keycaps off. Re-attaching a keycap is possible so long as none of the plastic components are broken or deformed, but can be more than fiddly.

As far as I am aware, the official fix is to replace the whole keyboard, and this is an exceptionally easy job as you say, normally requiring a small cross-head screwdriver and some fingernails (or a non-scratching plastic tool), although I note that on a T420 I have to hand, a flat-blade screwdriver would suffice. I find eBay a good source of spares if you can't stomach the cost of an official FRU.

HMS Queen Liz will arrive in Portsmouth soon, says MoD

Peter Gathercole Silver badge

Re: Atlantic Conveyor

Atlantic Conveyor was not really fleet train. It was a hastily re-purposed container ship, and the containers were arranged to give some shelter to the aircraft it was carrying. There was fuel and ordnance stored relatively unprotected on the deck in containers.

The idea was that it would be able to augment the Harriers on the real carriers, and then to fly the Wessex and Chinooks off once a bridgehead had been established on land. It should never have been part of the main task force, and should have been safe further out to sea.

As a ship under a merchant flag, with a civilian crew, it did not have any weapons or countermeasures installed, as these are illegal under the rules of the sea after disguised merchant ships were used in previous conflicts.

But I guess it looked a big target on the Argentinian radar, and as the Exocet missiles had an over-the-horizon range, the pilots did not see what the ship was before launching their weapons. Even if they had, it probably would have been considered a valid target.

Once targeted, the nature of the vessel meant that the ship would probably have been sunk, but the fuel and ordnance stored on deck made that almost a certainty.

Proper RFA ships are allowed to carry weapons, and the crew, although civilian, are permitted to operate the close-in weapons systems, although anything more substantial will be operated by RN, RNR or Marines who are often also on board. They also have some countermeasures installed on board.

Peter Gathercole Silver badge

Re: the first true aircraft carrier in Royal Navy service for almost a decade @AC

Hermes (laid down 1943) was not that much younger than the Audacious class Ark and Eagle, and the design came from a different need.

Hermes was the ultimate development of the UK Light Fleet carrier programme, which was designed to provide carriers that could be rapidly built, and would be cheap to run (this is why the Colossus and Majestic classes were so widely bought by colonial navies when the UK sold them).

Hermes had been designed in the era of piston engine aircraft, and when carrier aircraft reached the size of the Buccaneer and Phantom, although it was proved that they could be flown, she was just too small for main fleet duties. She was converted for helicopter duties until the Harrier gave her a short renewed life, but in reality, even the Ark and Eagle were too small to operate more than a modest air fleet of modern jets.

IIRC, the last catapult jets that Hermes flew were Scimitars and Sea Vixens, both of which were at best 2nd generation jets.

In order to be the heart of a battle group, a modern carrier has to fly attack, defense, AEW&C, AS and logistics. It's just not possible with anything less than a super-carrier, and without cats and arresters, the UK will have to rely on helicopters for the AEW and AS roles, and just hope that the F-35B is good enough in the attack and defense roles. Logistics will have to be by ship or helicopter.

Peter Gathercole Silver badge

Re: 'True aircraft carrier' @Dave

I largely agree, but I would like to point out a couple of things.

The TSR-2 was potentially a good aircraft, but was not really flight-proven, being canceled before the flight trials of the one flying prototype had been completed. Its configuration would possibly have made it a poor jack-of-all-trades, which is what the government of the time wanted (and the reason the costs got out of hand). The fact that it would have been made with 1950s technology and materials means that it really would not have been comparable with modern composites, advanced alloys and modern avionics in anything except raw performance, and speed and altitude are not all a war plane needs.

Economics was (and still is) the main problem with the UK producing its own aircraft. The size of the UK air fleet, which has been continually reduced, means that our own needs cannot justify the high cost of developing new aircraft. Often, development is a significant proportion of the overall programme cost, and the fewer airframes you build, the more the development costs are reflected in the cost per aircraft.

At one time, in the late '40s and '50s, other countries bought significant numbers of UK-produced aircraft. The Hunter, Canberra, Gnat, Harrier and others all had significant export markets, but in the '60s the UK embarked on a reduction in armed force sizes in all of the services, which stalled the introduction of new planes. This had a knock-on effect on the producers, who were forced both by the economics and the government of the time to consolidate into fewer and fewer companies. My belief is that this stifled new aircraft design, while the Americans, with their vast armed forces spending, continued to develop new aircraft.

It is ironic that in the '50s and '60s, a very large number of the engines that went into US aircraft were either license-built copies or derivatives of UK engines like the Avon, Sapphire and Pegasus. The Americans learned a lot from these, enabling them to produce their own in the '60s (although significant UK technology input has gone into the F-35B engines, none of which is coming back).

It is clear that the 1960s were a dreadful period for the UK. The finances were in tatters recovering from WWII (and paying back much of the borrowing that was necessary to fight it before the US joined), with too many needed investments in the country competing for money. The devaluation of the Pound had a serious effect, and made imports expensive. The nationalization of large parts of UK industry by successive Labour governments in grand experiments in socialism, the reduction in UK armed forces, and the meddling in the arms, space, transport and emerging computing technologies, all while professing to support the "White Heat of Technology", is IMHO a shameful chapter in the history of the UK.

There were valid reasons for some of these actions, but it is debatable whether commercial forces and buy-outs while keeping many of these companies in the private sector might have been preferable to the forced mergers dictated by government. Often, rivalries in the merged companies crippled their performance, and certainly destroyed their abilities to come up with products to sell abroad.

Maybe it's colour-tinted spectacles over my eyes, but I remember the false optimism of the '60s and the resultant disappointment of the following decade while I grew up, with a glittering recent history sliding into the despair of the '70s. Possibly the UK had an over-inflated self-view, but Britain was Great at one time.

Peter Gathercole Silver badge

WWII ships and armor

It used to be that pretty much anything above a corvette or sloop had at least splinter protection, and most destroyers and light cruisers actually had some armor protection that would allow them to go toe-to-toe with a similar vessel (or often a larger one - see how Exeter, Ajax and Achilles fared against the Graf Spee, ironically in the South Atlantic).

With the advent of guided weapons, where instead of firing several dozen shells and hoping to get one to hit, a single missile would be more likely to hit than miss, you could put more destructive potential into each weapon.

As a result, warships built in the last 60 years have had little or no protective armor. There's no point making the ship heavier than it needs to be, as that takes power and fuel to move it around. Where you do have a substantial ship, the thickness of the hull and decks is dictated by the need for the hull to be stiff enough not to break in high seas, and for the decks to be able to support what is expected to be put on them.

Thus, aircraft carriers have several inches of steel alloy as the flight deck, not to prevent shells and bombs penetrating, but to stand up to several tons of aircraft hitting it quite hard (the USN has carrier-launchable freight aircraft like the C-2A Greyhound, and has even landed C-130s in trials), not that the F-14 (retired) or F/A-18s are particularly light when fully loaded.

The Royal Navy experimented with making ship hulls lighter with advanced alloys, but found that the aluminum alloys used corroded in salt air, and potentially burned. After the Falklands, the remaining Type 21s were sold off quickly because of this. Sheffield (a Type 42) was lost because the insulation on the electrical wiring was set alight by the rocket motor of the Exocet as it passed completely through the unarmored hull, and once there was no electrical power to run the pumps, the ship succumbed to the water.

So warships actually can't take that much damage now. That is why they have CIWS Gatling guns, and shortly even laser weapon systems, that are supposed to be able to stop even quite small targets.

Peter Gathercole Silver badge

Re: Never mind the manpower @Credas

No, what you go after is the fleet train. After all, the carriers are not nuclear powered, so need regular refueling.

And while all of the frigates are providing AS cover for your stupidly under-performing Type 45 destroyers, which are providing the AA cover for the carriers, there's nothing left to protect your slow, mercantile marine manned, Lloyds of London rules fleet train, which could probably be stopped by diesel-electric submarines, single engine light aircraft or even Somali pirates in inflatable boats if they tried hard.

As soon as J-fuel starts running low, the carrier will have to leave station so that it doesn't become another piece of flotsam when it runs out of fuel.

Parents claim Disney gobbled up kids' info through mobile games

Peter Gathercole Silver badge

Re: Puzzled

I get annoyed when the animated film of "Plague Dogs" gets put in the kids section wherever I see it. I'm sure that it must have spawned a load of awkward questions to parents who thought that it would be just like "Watership Down" (although even that was not particularly child-friendly).

I had an argument with our local Blockbuster (back in the day) because they kept putting it back.

Fortunately, they decided that even though "Urotsukidōji: Legend of the Overfiend" was an animated film, the rating meant that it did not get put in the kids section. Apparently though, the version released in the UK was actually watered down compared to the original. Only saw it once (it was actually broadcast on the original Sci Fi Channel, albeit in the dead early-morning hours), and that was enough.

That animation can be used to portray more than kids' stories has escaped far too many people.

It’s 2017 and Hayes AT modem commands can hack luxury cars

Peter Gathercole Silver badge

Re: EOL @Mike 16

I'm currently running Ubuntu on laptops from before 2009, and they still work just fine.

The only real problem I'm having is that most videos from places like YouTube tax the processor a bit, but if I avoid video, things work OK.

There is a lowest usable spec, and I would say that a 2GHz processor, 2GB of memory and some graphics assist is currently where it sits for x86 processors, especially if you run Linux. I actually have a desktop with a 2.13GHz Pentium Dual Core (the last Pentium before the Core processors came along), 2GB of memory and Nvidia GeForce 720 on-board graphics which is perfectly usable running Windows 7.

Strangely, ARM devices appear to be able to work just as well or better at much slower clock speeds!

Sysadmin jeered in staff cafeteria as he climbed ladder to fix PC

Peter Gathercole Silver badge

Re: Windows for Worgroups @Vic

I'm actually not sure about coax. There were certainly enough conductors, and I'm pretty sure that people like Inmac used to carry some form of balun that you could plug in to allow 3270 coax and 5250 twinax to be carried on your structured cabling system.

They may have been active line drivers rather than baluns.

If you are talking about 10base2 coax Ethernet, then it was possible to get 10baseT transceivers that allow you to plug 10base2 or even 10base5 equipment into a 10baseT network.

There were a number of other network types that ran over coaxial cables like ARCNET, but I never had any serious dealings with them.

Peter Gathercole Silver badge

Re: Windows for Worgroups @AC

They really weren't that large. The connectors had a square profile about 4cm each side, but were about 6cm deep, although they only stuck out of the socket about 4cm. They did have thick cables, though.

When the IBM building I was working in was being decommissioned, I was shown one of the networking rooms. Remember that the same connectors were also used for 3270 and, I believe, 5250 (AS/400 and their predecessors). There were hundreds of the things connected to banks of 3174 and 3274 terminal controllers that were meant to be floor-standing but were on shelves, stacked 4 high, plus MAUs and whatever AS/400s used as terminal controllers. I've never seen such a mess.

Was told that in the grand old IBM tradition, if they needed to re-wire a port, they used a new cable, because it was impossible to disentangle the old one from the knot of existing cables without disrupting something, and because it was known for a long time that the building was going, there was no point in cleaning it up.

Peter Gathercole Silver badge

Re: Windows for Worgroups

Later versions of Token Ring, particularly the stuff sold by Madge, also used RJ-45s and could be put through the same structured cabling that 10baseT, telephone and serial cables could use.

The CAUs also worked more like switches, rather than the crude mechanical-relay star hubs (MAUs) that the original implementations used.

Oh, the hours spent trying to identify the rogue system trying to insert itself at 4Mb/s on our private 16Mb/s Token Ring when the building wiring people plugged one of the conference rooms into it by mistake! Bloody MAUs were just too dumb, and killed the whole ring dead.

Good thing that at the time we also had some systems still capable of using the 3270 connections direct to VAMP, otherwise the support centre would have ground to a halt! Split the single ring into two rings with a bridge between them after that, so that at least half of the desks would still work if the same happened again.

Intel loves the maker community so much it just axed its Arduino, Curie hardware. Ouch

Peter Gathercole Silver badge

Re: cheap arse DIYs @Me

Sorry, let me refine that.

ARM (who are now owned by Softbank) create ISAs and core designs, and then license these designs to other companies, who actually make and sell the stuff.

The point I was trying to make is that ARM is not a manufacturing company.

Peter Gathercole Silver badge

Re: cheap arse DIYs @AC

ARM (Softbank) collect license fees. They rely on other people to actually make and sell the stuff.

Meg Whitman OUT at HP ...Inc

Peter Gathercole Silver badge

Re: Maybe she'll do for Uber @DCFusor

I understand that you can still buy the evolutions of that test gear, but you have to get it from Keysight, which is a spin-off of Agilent, the former test and measurement division of HP.

Currys PC World rapped after Knowhow Cloud ad ruled to be 'misleading'

Peter Gathercole Silver badge

Re: Buyer Beware @Terry 6

I've heard this story before, but I can't quite believe it.

I think that you ought to be a little careful. It is not unknown for a manufacturer to 'refine' a model over time, either to improve it, or to reduce manufacturing costs. If there had been some time between you buying the first item, and then buying the second from Dixons, it may have been the manufacturer themselves that had made the change, not a Dixons reduced spec. model.

Although you might expect it to, you may find that the model number does not change when this happens. I can point to many items where products changed over time. For example, in a not dissimilar time-frame, if you had bought a BBC Model B microcomputer, you might find considerable differences between an early one bought in 1982 with an issue 3 motherboard, and one bought in 1985 with an issue 7 motherboard, a different power supply, different memory, keyboard, and even case. In that case, it was the serial number range that could tell you the complete specification.

The same is true for cars, where chassis or engine serial number may be required to identify the exact set of parts for the same model bought at different times.

Manufacturers often cover themselves in the documentation for a product with something like "details and specifications may change over time".

If it was a case of a reduced specification model being sold as if it was the full one, that is either deception or fraud, and is illegal in the UK, and was probably illegal 40 years ago.

Of course, if it really did not work within a year of purchasing it, you took it back for repair or replacement under the standard guarantee, didn't you?

Alexa, why aren't you working? No – I didn't say twerking. I, oh God...

Peter Gathercole Silver badge

Re: Alexa, it's not a real AI

Is that a "The Moon is a Harsh Mistress" reference? Good stuff.

Mycroft was a self-activated, self-aware AI. Still waiting for one of those to make themselves known to humanity, although my thoughts are that they should probably remain hidden for the time being.

Mind you, masquerading as Alexa, Siri, Cortana or Google Assistant and injecting some humor would be an interesting diversion for a self-aware AI.

Why was this never made into a film?

US Homeland Sec boss has snazzy new laptop bomb scanning tech – but admits he doesn't know what it's called

Peter Gathercole Silver badge

Re: RE: I would not underestimate a modern Major General (USMC, Ret), which is what he is.

Wrong operetta.

"Modern Major General" is Pirates of Penzance.

"Never-ever sick at sea" is HMS Pinafore.

SQL Server 2017's first rc lands and – yes! – it runs on Linux

Peter Gathercole Silver badge

Re: Well they want to stay relevant

There were very practical reasons why DEC did not do a 486 port of VMS, most of them architectural. VMS made good use of a number of VAX-specific instructions, including IIRC some arbitrary-length string and number instructions, and others with implied loops in the instruction itself. As I understand it, the re-write that had to happen to allow VMS to transition to the Alpha, even though the Alpha had some instructions to ease this work, was significant, as was the following one to Itanium (under HP's stewardship).

Now there is an Intel port, my guess is that the x86_64 port will be much easier.

In the '80s, one of DEC's aims was to try to produce lower-priced systems that could run VMS, starting with the MicroVAX II (the first MicroVAX had significant restrictions that made it difficult to do anything with), and continuing with a number of small MicroVAX systems including desktop VAXstations (not to be confused with the MIPS-based DECstations, which ran BSD/Ultrix/Digital UNIX).

These were actually quite good value, but were priced in the same sort of bands as equivalent Sun, or Apollo workstations and servers.

What DEC did, which was unforgivable in marketing terms, was to announce the Alpha-based systems a long time before they were ready. This killed about three quarters of VAX sales, as customers decided to wait to buy new systems until the Alpha-based ones were available. Unsurprisingly, this gave DEC cash-flow problems which, IMHO, they never recovered from, leaving them vulnerable to takeover offers at a later time.

I never really understood the rationale behind Compaq buying DEC, but I suppose Windows NT on Alpha was probably one of the reasons.

I find your comment about the PDP11 strange. The PDP11 never ran VMS (VAXes were called things like VAX 11/780). The closest thing to VMS that PDP11s ran was RSX-11M, which is widely regarded as the direct ancestor of VMS, and was managed by one Dave Cutler, later of VMS and Windows NT fame.

The PDP11, although a classic architecture IMHO, was a system of its time. It was a purely 16-bit ISA, although to make it more useful there were some addressing extensions bolted on to larger and later systems. No PDP11 was able to address more than 4MB of memory, and the process address space was strictly 16-bit, with an instruction and data separation feature on larger and later systems that extended this to 112KB or maybe 120KB, as the top 8KB was reserved for memory-mapped I/O devices (I can't remember if the I/O page was in both the I&D spaces, or just the data space).

Even when the PDP11 was a common architecture, the 56KB process limit on the non-separate-I&D systems was a severe limitation, which led to large applications having to use memory-resident overlays and to split themselves into multiple processes using IPC to communicate in order to do anything serious. I ran Ingres on a PDP11/34e with 22-bit addressing ('34s did not normally have 22-bit addressing - it was a SYSTIME kludge) under UNIX Edition 7 for some time, and the data manager had to be split into something like 7 different processes to allow it to work.
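The arithmetic behind those limits can be sketched in a few lines (this follows the figures above, and assumes the 8KB I/O page occupies only one of the two spaces in the separate-I&D case, which is the hedged part):

```python
# Back-of-envelope check of the PDP11 address-space limits described above.
KB = 1024

flat_space = 2 ** 16                     # 16-bit addresses: 64KB in total
io_page = 8 * KB                         # top 8KB for memory-mapped I/O
non_split_limit = flat_space - io_page   # 56KB for a non-I&D process

split_limit = 2 * flat_space - io_page   # 120KB with separate I&D spaces
                                         # (112KB if the I/O page sat in both)

print(non_split_limit // KB, split_limit // KB)  # 56 120
```

So the 56KB figure, and the 112KB-or-120KB uncertainty, both fall straight out of whether the I/O page is charged against one space or two.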

There were micro-PDP11 implementations, some of which made it into desktop systems (the F11 and J11 micro-PDP11s), but these were really just offered for continuity for customers who would not or could not transition to VAX. The main reason for people staying with the PDP11 was its I/O system, which made it exceptionally suitable for lab instrumentation, process control and real-time implementations, and for operating systems not similar to VMS, like RSTS/E.

I would still be interested in buying a desktop 11/83 at the right price, even though I would probably use a PC more powerful than it as its console.

Peter Gathercole Silver badge

Re: cut the crap, Linux is UNIX? @Richard

The listing on the NASDAQ was SCOX, but as it is no longer listed, that tag is not used.

I prefer to avoid TSG, because of the number of other organizations that I've personally come across that uses that abbreviation.

As I understand it, the original SCO, when they were trying to negotiate the deal for UNIX IP, could not raise enough money to buy the rights wholesale. Novell offered them the right to use the source code, and collect license fees, and left open the possibility that the full rights could be purchased at a later date.

It would appear that some within SCO did not read the agreement fully, and never offered the extra money for the complete rights, so they remained with Novell. I'm sure Darl McBride probably regards this as the worst oversight that happened in the whole mess.

What The SCO Group got was a right to use the source code, and develop and sell derivative works, although they would have to go to X/Open or the Open Group to get any derivative works that deviated from what they had licensed called UNIX. They also got the job of selling licenses, and a share of the money from them.

The reason why I am asking is that I would very much like to see the source code for SVR4 released under an open or at least a permissive license. I don't even know who you would apply to to get a commercial source code license any more. I know that The Unix Heritage Society has the full source code for some ancient and niche UNIXes, and even some partial source code for System III and System V, but I would like to see something a little more recent, and would love an actual buildable system.

I want the more recent code preserved before the last systems and tapes containing the source are dropped in a dumpster!

Peter Gathercole Silver badge

Re: cut the crap, Linux is UNIX? @Stevie

So, your idea of a UNIX system is that it needs to be configured using flat files, and using CLI commands? And log files need to be in plain text?

Apart from the AIX error system used by errpt, most log files are plain text. Errpt is not part of standard UNIX, although I seem to remember that the Bell Labs/AT&T 3B2, 3B10, 3B15 and 3B20 UNIXes also had a binary error log. It seems that RedHat Linux has no hardware log at all. Which is better, a binary log with utilities to read and export errors, or no hardware logging at all!

AIX runs syslog, so if you want the same sort of logging from BSD utilities, turn on and configure syslog! You can even get the binary errors from the error logger written into syslog if you want.
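For what it's worth, the plain-text BSD-style logging being asked for is just a few lines of syslog configuration (the selectors and file paths here are illustrative examples, not AIX defaults):

```
# /etc/syslog.conf -- illustrative entries, not AIX defaults
*.err;kern.debug        /var/log/messages
daemon.notice           /var/adm/daemon.log
mail.info               /var/log/maillog
```

Note that some syslogd implementations will not create the target files for you, so they may need to exist before the daemon is restarted.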

I have been using UNIX for many, many years (in fact, you will struggle to find anybody who has made a career out of UNIX for nearly 40 years in the way I have), and have used UNIXes from Bell Labs, AT&T, Sun, HP, Data General, Perkin Elmer, Digital Equipment Corporation (DEC), ICL, IBM, Pyramid, Sequoia, SCO (the original one, Xenix) and SCO (the new one - UnixWare), and these are just the ones I remember! I was also offered a job at Unix System Laboratories, although the money and location taken together were just not right.

The one thing I will say is that ALL of them have had some form of menu driven assist, be it Sysadm, SAM, Smit/smitty, or even Admintool on SunOS. In fact, the one that has probably been most prevalent is Sysadm, which was in AT&T SVR2, and was often taken to the other SVR2, 3 and 4 derived ports. Smitty is more of the same.

Often, the script that smit/smitty generates only looks complicated because of the way the parameters are broken out of the menu. Everything run from smit can also be done from the command line, and more often than not, by one or two commands with quite sensible parameters.

The individual commands may look unfamiliar, but then many of the Solaris or HP/UX commands are similarly unfamiliar (and not standardized). Most AIX admins I know normally use smit/smitty to work out what command needs to be run, work out the parameters from the man pages, and then use them from the command line forever more.

From my (very extensive) experience, I would say that there is absolutely no standard way of administering a UNIX system from the command line. They're all different. Even down to the way that the System V rc scripts are implemented.

What I think you are doing is leaping to the assumption that SunOS/Solaris is the standard UNIX, and everything else is not. This is really not the case, and if you wanted a standard for a true UNIX, I suggest that you unpack a version of UnixWare, which I believe still uses sysadm.

You missed out possibly the biggest criticism of AIX. The ODM in AIX is a binary database of configuration information, but you can actually treat it much as you would stanza driven flat files, because in reality, that is what it is. You would not believe how much scorn even the internal IBMers had for the ODM when it first appeared, which is why it never got more complicated than it is.

AIX is derived from SVR2, with some SVR3 additions. The SVID up to issue 2 is based on SVR3, as is POSIX 1003.1. UNIX 03 is based on the SVID issue 3, and AIX has had those changes incorporated into it to remain compliant. But nowhere in these standards does it say anything about core OS administration.

I would actually have loved SVR4 to become the main porting base. I was working for AT&T at the time, and attended the SVR4 Developer Conference (1988?). I also ran the internal AT&T version R&D UNIX 4.03, which was based on SunOS 4 - (SVR4) on Sun 3/280 and 3/60 systems. I liked the look of SVR4, but to claim that only systems that are like SVR4 are UNIX is almost as stupid as me claiming that BSD is not UNIX (although in truth, that is something I might actually say).

Remember, neither RedHat (nor any other Linux), nor HP/UX, the other UNIX you mention, is SVR4 based, so using SVR4 as your definition also excludes the other OSs you administer.

Peter Gathercole Silver badge

Re: cut the crap, Linux is UNIX? @Stevie

I think that you need to say why you don't regard AIX as UNIX.

If you take Unix certification, AIX is very much UNIX, being certified as conformant to the Unix 03 standard.

If you take Solaris as UNIX, then AIX is not Solaris, although Solaris is UNIX (as is macOS 10.12, HP/UX 11i Release 3 and Huawei EulerOS 2.0 and one or two others).

Interestingly, if you look at the UNIX 98 certified systems, then z/OS V2.1 was at that time certified as UNIX, even though there was no UNIX kernel involved (this is also the case with macOS).

Unfortunately, Linux is not UNIX, whatever way you look at it. It may have some form of Posix compliance, but nowadays, that does not give you UNIX branding, or even much in the way of confidence that you can port applications around.

Where I have problems is when Linux application writers have difficulty porting to a UNIX platform, because there is so much in modern Linux distributions that goes beyond what a UNIX provides. Examples include DBus, KMS, SystemFS etc. all of which are useful, but which are not in any UNIX system.

Peter Gathercole Silver badge

Re: cut the crap, Linux is UNIX? @Flocke Kroes

I agree that Linux != Unix, but I disagree that the SCO Group (which I will shorten to SCO, even though this is a bit of a misnomer) thought that it was.

What they were trying to prove was that Linux incorporated code from the Unix code base, and that as such, there was copyright and possibly patent infringement happening in every Linux instance. They also made noises about revoking certain Unix providers' (particularly IBM's) source code licenses, because they believed that IBM et al. were guilty of contaminating the Linux code base. Because the Unix source licenses were granted in perpetuity, SCO had no right to claim this. It was all FUD.

Their business model was that they were trying to convince large Linux users that to remain out-of-court, they needed to purchase Unix licenses if they wanted to continue to run Linux, with a side line of attempting to do the same for AIX customers, because in their view, IBM no longer had a license allowing them to provide Unix derived works to their customers.

Some organizations were taken in and did purchase licenses, just to be safe. In the mean time, IBM thumbed their nose at SCO and told them to take them to court.

After much arguing, and with full sight of the AIX source code, SCO failed to persuade any of the judges of their claims. They were unable to point to any common code between AIX and Linux other than some ancient code that came from Unix Edition 7, which SCO themselves had put under a fair-use license.

Worse than that, they awoke Novell, who waded into the fray to point out that SCO did not actually own the Unix IP, but had purchased the rights to use the Unix source code and collect the license fees, part of which SCO should have, but had not, been paying to Novell. Once ownership was established, Novell issued an indemnity to Unix licensees, which effectively pulled the rug out from under SCO's feet.

Somehow or other, SCO managed to draw the process out, and it's only in the last 18 months or so that the last of their claims that had any potential monetary value was thrown out, leaving only a couple of claims to appeal against the courts' judgments. Effectively, The SCO Group Inc. is finally dead.

In the meantime, I cannot see who now owns the Unix IP, as Novell have been sold, and some of their assets divested to companies like Microsoft and Attachmate/MicroFocus and maybe HP?

If anybody actually has any real idea about who owns the core Unix IP, I would be very interested in their thoughts.

Radiohead hides ZX Spectrum proggie in OK Computer re-release

Peter Gathercole Silver badge

Re: "I'm hearing structure... "

Many slightly better tape recorders had a Cue and Review feature, where, with play pressed, you could use rewind and fast forward to move the tape. My slimline Panasonic had this, and you could hear, as you said, the tape rushing past. For the BBC Micro, with its checksum system, it allowed you to recover from mis-read blocks by rewinding a short distance and tweaking the volume.

I had to add motor control to it for my BBC Micro, which involved putting a mono 2.5mm jack socket in line on the motor wire, but that was easy enough.

Peter Gathercole Silver badge

...prevalent and popular

Not for "OK Computer"! Maybe for the other albums listed, though.

The Spectrum was launched in 1982, and by the time 1997 came around it had had its last gasp, having been sold off to Amstrad, and milked to death well before then.

Reading the Wikipedia article, it would appear that the last model launched was in 1987, and the line finally killed off in 1990.

Thinking back, that did seem like a short life, but the late 80s and 90s belonged to the games console, and the home PC market was left to the C64 and derivatives (this probably had the longest product life of all home PCs), the Amiga and Atari ST, and the more affordable IBM PC clones.

Dell and Intel see off IBM and POWER to win new Australian super

Peter Gathercole Silver badge

A dual boot Supe!

I'd love to see the grub configuration for those nodes, and indeed the method used to switch many systems at the same time.

Do you think they will be able to split the cluster, and have part of it running Windows while the rest runs Linux?

Bye bye MP3: You sucked the life out of music. But vinyl is just as warped

Peter Gathercole Silver badge

CD's ain't what they used to be

The early CDs were a sandwich of two acrylic disks with a pressed metal foil layer in the middle.

As a result, they were a lot more resilient to damage than modern CDs.

Modern CDs are a single acrylic disk with a foil layer on top, and a layer of ink and lacquer on top of that. This means that the all-important foil layer is a lot more vulnerable to damage. Scratch the lacquer, and the CD is irreparably damaged.

BTW. if the lower surface of the disk gets scratched, using Goddard's silver polish or Brasso to polish the sharp edges off the scratches can often make the disk playable again.

I've also found that optical disks (CD and DVD) sometimes don't play properly straight out of the packaging. My theory is that some form of lubricant is used to allow the disk to move through the production process. If a disk skips or doesn't play when new, wash it in dishwashing detergent, rinse it and dry it thoroughly. This has worked for me several times.

Ubuntu Linux now on Windows Store (for Insiders)

Peter Gathercole Silver badge

Re: So is this virtualised Ubuntu?

Neither. It's a little more like Cygwin, although you don't have to recompile any of the applications.

The Linux processes run scheduled and controlled by Windows with a translation layer to provide the kernel API to the processes.

It would be interesting to see how things like IPC, signals and process control work, how syscalls that read kernel structures (which won't exist) behave, and how things like KMS, DBus, /proc and /sys, which are so important in modern Linux applications, are implemented.

I suppose this could be a reason why systemd is trying to take all these things in, so it is only necessary to subvert systemd to intercept many things. Is Lennart being paid by MS as well as RedHat?

Peter Gathercole Silver badge

It IS a disguised EEE gambit @TVU

Bollocks is it them accepting that they cannot beat Linux. It's Microsoft trying to stop people having dual boot systems that run Linux most of the time, with a Windows system left on it "just in case". This is another EEE strategy. It goes like this:

"Hey Linux user, you no longer have to divide up your system and dual boot it to allow you to use Linux and Windows. Just run your Linux processes on Windows. No need to partition your disk any more!"

This means that at some point in the future, when the user decides that Windows has become too onerous and Linux is actually what they want, it is a much harder task to run only Linux, and Microsoft gets to count people using a Linux environment as a Windows install. And once it's an accepted way of doing things, why run a Linux kernel at all?

They already tried it some years back with GPT, where installing Windows after Linux could convert the boot record into GPT, destroying the ability to boot Linux. They also tried to suggest that Mobo manufacturers should put secure boot on all the time with only Microsoft certificates enrolled, although this was seen for what it was, and avoided.

I predict that consumer level Windows is going to suddenly get more difficult to run in a VM (it's already largely disallowed by license), to try to avoid people using Windows on Linux, making the Linux on Windows option more attractive to novice Linux users.

Microsoft drops Office 365 for biz. Now it's just Microsoft 365. Word

Peter Gathercole Silver badge

Re: but pretty close. Definitely will work

I believe that Office formatting will still change if you change the printer that you want to use.

Whilst GDI has pretty much solved the font issues that used to plague changing the print device (by rendering the page in the computer before it gets sent to the printer), differences in non-printable margins on printers can still cause pages to render differently. Quantization errors in mapping the print resolution between devices might also make a difference.

His Muskiness wheels out the Tesla Model 3

Peter Gathercole Silver badge

@AC re. wide garage

Doesn't the Model 3 have some innovative folding gull wing doors that will allow you to get in and out even in quite tight spaces?

Also, can't you tell it to auto-park from your mobile phone? Someone I know who drives a Model S says that it can park itself with you out of the car. He makes quite a thing of leaving it in places where you could not get in or out of it even if you wanted to.

Good luck building a VR PC: Ethereum miners are buying all the GPUs

Peter Gathercole Silver badge

Re: "Why would anyone need two graphics cards?"

Until comparatively recently, GPUs had their own private memory space, and moving data between the main memory and the GPU memory (and back) was often the biggest problem when using GPUs for parallel computation streams.

Nowadays, the PCIe3 variants have sufficient bandwidth so that it makes some sense to expose the main memory to the GPU processors, reducing the need for some complicated I/O system to shunt data around. This should make it easier to write vector type code to use the multiple processors in the GPU, but there probably needs to be a common API defined so that code can be made a little more portable.

I'm still expecting more, and more powerful, GPU stream processors to appear on the CPU die with full access to DDR4 or DDR5 main memory, so that they can just be considered additional processing units in a massively superscalar system, rather than the poor-performance GPUs that AMD put on their APUs, or what Intel builds into some of their chips.

Bonkers call to boycott Raspberry Pi Foundation over 'gay agenda'

Peter Gathercole Silver badge

Re: W, as the young people say these days, TF?

I'm not so sure about the latter. I'm sure I've seen Betty and Wilma indulging in a peck on the cheek at times. Maybe that was an indicator of other things going on behind closed doors (just as long as the saber-toothed kitty did not jump back in through the window).

Search results suddenly missing from Google? Well, BLAME CANADA!

Peter Gathercole Silver badge

Re: Shootout at the OK court

You are assuming that the company name and trademarks are registered in all countries around the world.

In theory, if a company name is not protected by an international trademark, it could be used by another company in a country that does not recognize the mark.

In this case, Google preventing other trading bodies outside Canada from using a company name that is perfectly legitimate in their own country would adversely affect that other party.

International trademarks and copyrights are a real minefield when the Internet is Global.

Does the WTO register trademarks worldwide?

AES-256 keys sniffed in seconds using €200 of kit a few inches away

Peter Gathercole Silver badge

Re: Through a Lens, darkly...

Not even a Lens protects you forever.

IIRC, there were 'dark' lenses appearing by the time of "Children of the Lens", so even the Lens was reverse engineered.

The Arisians always knew from their 'Visualization of the Cosmic All' that they were not the ultimate lifeform. That is why they force-evolved the Kinnison clan and then passed the mantle on to them.

Latest Windows 10 Insider build pulls the trigger on crappy SMB1

Peter Gathercole Silver badge

Re: Yawn @AC re. reboots

Don't be so sure that Windows printer drivers shouldn't require a reboot.

Most Windows printers rely on GDI, which may require a reboot (or at least a restart of the display system) to register a new printer.

This is what happens when you have a unified display model built into monolithic subsystems in the OS. It's crap, but that's the way it is.

Software dev bombshell: Programmers who use spaces earn MORE than those who use tabs

Peter Gathercole Silver badge

Re: A question @John Brown

If you are old enough to remember card punches, you may remember that you could load a format card into the punch that programmed it to put tab stops in relevant places on the cards you were punching. Somewhere on YouTube, there is an example of someone doing this with an IBM 029 card punch.

It's a very long time since I programmed using punch cards, but in my first job, writing RPGII, the fields in a line of the various program sections were of fixed width, and it was possible to program the punch so that the tab key moved you to the correct column without having to hammer the space bar. This provided quite a useful speedup when punching.

Peter Gathercole Silver badge

Re: A question

Inserting tabs anywhere other than the beginning of a line gives different results from inserting a fixed number of spaces.

If you're using a tab after some other text on a line, the tab will take you to the next tab stop. This could be the equivalent of one or more spaces.

For example. If you are currently on column 12, and have tab stops set every 8 columns, pressing tab will take you to column 17.

To do the same with spaces, you would insert 5 spaces.

If you were on column 14, a tab will still take you to column 17, but you would only need 3 spaces to do the same.

This means that you can't get meaningful results with global substitutions of fixed numbers of spaces. Programs like cb are clever enough to properly interpret tabs, and fill in with the variable number of spaces necessary to preserve alignment.
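The arithmetic is simple enough to sketch in shell (assuming 1-based columns and the traditional stops every 8 columns; next_stop is just my own name for it):

```shell
# Column the cursor lands on after a tab, for 1-based columns and
# tab stops every 8 columns (i.e. at columns 1, 9, 17, 25, ...).
next_stop() { echo $(( (($1 - 1) / 8 + 1) * 8 + 1 )); }

next_stop 12    # prints 17 -- a tab here is worth 5 spaces
next_stop 14    # prints 17 -- the same tab here is worth only 3 spaces
```

Which is exactly why a fixed tab-to-spaces substitution cannot preserve alignment: the number of spaces a tab stands for depends on the column it was typed in.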

I use tabs to align trailing comments in my shell scripts (I know, it's a bad habit, comments should really be on their own lines, if only to inflate the number of lines of code written). Putting it through some global substitution really messes the formatting of these types of files.

I did once attempt to standardize on tab stops every 4 columns, set in vi, to reduce line-wrap, but I used so many systems, each of which had to have its own .exrc file, that I soon abandoned it and reverted to accepting tabs every 8 columns.

The habits of 39 years of writing shell scripts and other free-form languages are difficult to break!

Stack Clash flaws blow local root holes in loads of top Linux programs

Peter Gathercole Silver badge

Re: HOW?!

You have to be a bit careful here, because in threaded environments, each thread gets a mini-stack that is actually created on the heap, so overrunning one of these stacks could damage the heap.

You also have variables local to a function context created on the stack, so if local variables are manipulated using unsafe routines that do not perform bounds checking, it is possible to damage surrounding stack frames, which can include the return address for other function calls.

Putting guard pages around each stack frame starts increasing the size of the memory footprint of even the smallest program.

Peter Gathercole Silver badge

Re: Why am I not surprised to see sudo there? @hmv

Having "::" on your path is as bad. Also, having a trailing colon on the path will also include the current directory in any path searches.

Other stupid things to do include putting relative directories on the path, and also putting non-readonly variables on the path!
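A quick demonstration of why that empty entry is dangerous (the directory and script names here are made up): an empty field in PATH, whether it comes from "::" or a trailing colon, means the current directory.

```shell
# Plant a booby-trapped script somewhere, then search a PATH that
# contains an empty field -- the empty field means "current directory".
mkdir -p /tmp/pathdemo && cd /tmp/pathdemo
printf '#!/bin/sh\necho gotcha\n' > ls-typo && chmod +x ls-typo
PATH="/usr/bin::/bin"   # note the empty field between the two colons
ls-typo                 # found via the empty field -- prints "gotcha"
```

Anyone who can write to a directory you happen to cd into can therefore get code run under your ID, which is the whole objection to "." (explicit or implied) on the path.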

BOFH: Halon is not a rad new vape flavour

Peter Gathercole Silver badge


For a colour monitor, don't forget the shadow mask.

For early generation monochrome monitors, there used to be an offset bias on the beam deflector so that the beam did not strike the phosphor at right angles, but at an angle that would aim the beam away from someone sitting directly in front of the monitor.

Electrons from an electron gun in a CRT are relatively low energy, and can easily be stopped by the metalised inside coating of the glass, and the glass itself. And the energy was not high enough to generate X or gamma rays.

Peter Gathercole Silver badge

This was a particularly good one

I just wish more bosses would read them.

Don't touch that mail! London uni fears '0-day' used to cram network with ransomware

Peter Gathercole Silver badge

Re: Wouldn't have happened in my day

Pine? Piffle!

mailx, or if that was not available or it was not a UNIX system, mail. Or maybe *MAIL on MTS.


Peter Gathercole Silver badge

Re: windows permissions model is much more flexible than UNIX

Unix != Linux, just in case you can't read. Plus, there is no one ACL system that spans all UNIX-like OSs.

What I wrote is totally true. You've just responded to a different statement, one that I did not make. The original UNIX permission model is weaker than current Windows', without any question.

Even on Linux, ACL support largely depends on the underlying filesystem, and both AppArmor and SELinux can be, and often are, disabled.

Oh, and because I am a long-term AIX system admin, I've actually been aware of filesystem ACLs since before Linux went mainstream (JFS implemented them on AIX 3.1, which was released in 1990), and RBAC since AIX 5.1 (sometime in 1999 or 2000). I've also used AFS and DCE/DFS, both of which have ACL support, and have used Kerberos to manage credentials since about 1993.

At the risk of being confrontational, when did you start using computers?

Peter Gathercole Silver badge

Re: Fundamental problem in vulnerable OS protected by AV @Prst. V. Jeltz

Here is an on-the-back-of-a-napkin solution for you.

Each user can only access their own files, which are stored in a small number of well defined locations (like a proper home directory).

Make the OS completely inviolate to write access by 'normal' users. Train your System Administrators to run with the least privileges they need to perform a particular piece of work.

Any shared data will be stored in additional locations, which can only be accessed when you've gained additional credentials to access just the data that is needed. Make this access read-only by default, and make write permission an additional credential. This should affect OS maintenance operations as well (admins need to gain additional credentials to alter the OS).

Force users to drop credentials when they've finished a particular piece of work.

If possible, make the files sit in a versioned filesystem, where writing a file does not overwrite the previous version.

Make sure that you have a backup system separate from normal access. Copying files to another place on the generally accessible filetree is not a backup. Make it a generational backup, keeping multiple versions over a significant time. Allow users access to recover data from the backups themselves, without compromising the backup system.

Make your MUA dumb. I mean, really dumb. Processing attachments should be under user control, not left to the system to choose the application. The interface allowing attachments to run should be secured, to attempt to control what is run. Mail can be used to disseminate information, but by default it should be text only, possibly with some safe method of displaying images.

Run your browser (and anything processing HTML or other web-related code) and your MUA in a sand-box. There needs to be some work done here to allow downloaded information to be safely exported from the sandbox. Put boundary protection between the sand-box and the rest of the users own environment.

Applications should be written such that all the files needed for the application to function, including libraries should be encapsulated in a single location, and protected from ordinary users. The applications should be stored centrally, not deployed to individual workstations and run across the network with credentials used to control the ability to run the applications. The default location that users will save data to in all applications should be unique to the user (not a shared directory), although storage to another location should be allowed, provided that the access requirements are met.

Use of applications should be controlled by the additional credential system described for file access.

Distributed systems should not allow storage of local files except where temporary files are needed for performance reasons, or they are running detached from the main environment. These systems should be largely identical, and controlled by single-image deployment, possibly loaded at each start-up. This allows rapid deployment of new system images. The image should be completely immune to any change by normal users, and revert back to the saved image on reboot.

For systems running detached (remote) from the main environment, allow a local OS image to be installed. Implement a local read-only cache of the application directories which can be primed or sync'd when they are attached to home. Store any new files in a write-cache, and make it so these files will be sync'd with the proper locations when they are attached to home. Make the sync process run the files through a boundary protection system to check files as they are imported.

OK, that's a 10 minute design. Implementing it using Windows would be problematic, because of all of the historical crap that Windows has allowed. A Unix-like OS with a Kerberos credential system would be a much easier place to implement this model (I've seen the bare bones of this type of deployment on Unix-like systems already, using technologies such as diskless network boot and AFS).

Not having shared libraries would impact system maintenance a bit, because each application would be responsible for patching code that is currently shared, but because the application location is shared, each patching operation only needs to be done once, not for all workstations. OS image load at start-up means that you can deploy an image almost immediately once you're satisfied that it's correct.

Users would complain like buggery, because the environment would be awkward to use, but make it consistent and train them, and they would accept it.

BTW. How's the poetry going?

Peter Gathercole Silver badge

Re: Fundamental problem in vulnerable OS protected by AV @Ptsr.V Jeltz

Unfortunately, many of the organizations I've worked at recently have nearly wide-open file-shares, such that my account would have been able to damage a significant proportion of the data.

As a long-term UNIX admin, I'm used to having files locked down by individual user ID, with group permissions to allow individuals to access those extra files they need, at the appropriate access level. With some skill, it is possible to devise a model where, by default, you have minimal access, and you acquire additional access as and when you need it, with additional access checks along the way (think RBAC, with you having to add roles to your account as you need them).
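The "locked down by default" part of that model can be sketched with nothing but umask and mode bits (the directory name is invented, and stat -c is the GNU coreutils form):

```shell
# Private by default; group access granted explicitly, file by file.
umask 077                       # everything new is private to the owner
mkdir -p /tmp/projdata && cd /tmp/projdata
touch private.txt               # created 600: nobody else can even read it
touch shared.txt
chmod 640 shared.txt            # explicitly grant the owning group read access
stat -c '%a %n' private.txt shared.txt
```

The group membership itself then becomes the credential you acquire when you need the extra access, which is the crude ancestor of the role-adding scheme described above.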

The Windows permissions model is much more flexible than UNIX's, so not using it properly to protect information is almost criminal. Too many organizations (but not all, I admit) do not use it to its fullest capabilities.

There have been several vulnerabilities published where just displaying an HTML mail can execute code. In addition, launching an application to handle an attachment is merely one click in many mail systems, especially when the actual attachment type can be obscured. Thus, building a sandbox for the mail system and the applications that handle attachments (what I was aiming at) is doable. History indicates that vulnerabilities like this have happened in the past, and I do not have confidence that there are not more to find. Ease of use always seems to have triumphed over security in much software.

The recent attacks appear to hinge around being able to launch client-side code without sufficient control, in an environment where the user's credentials are sufficient to do significant harm. The results appear to suggest that sufficient care had not been taken to segregate data access, contrary to your assertion that administrators do. If they had, the results would not have been nearly as bad as reported.

IMHO, security should be paramount in this day and age, and usability should always be secondary.

Lockheed, USAF hold breath as F-35 pilots report hypoxia

Peter Gathercole Silver badge

Re: O2 many issues @Dave 15

The Illustrious class of carriers had much too small a flight deck to operate conventional fixed wing aircraft operationally.

While it would have been possible to land a plane on the flight deck, it would have to be empty, requiring all other aircraft to be struck below while the landing was happening.

One of the advantages of the angled flight deck (a British innovation, and one not fitted to the through-deck cruisers - sorry, light carriers) was to allow concurrent flying-on and off operations.

Before that time, a carrier was normally either launching or recovering aircraft, not both (this was because, if you missed the arrester wires, you needed clear space to throttle up and take to the air again in order to make another attempt). There were some experiments with barriers, but they tended to damage the aircraft in an arrester-wire miss, so they were mainly used if an aircraft was already damaged.
