* Posts by Peter Gathercole

2655 posts • joined 15 Jun 2007

Alexa, why aren't you working? No – I didn't say twerking. I, oh God...

Peter Gathercole
Silver badge

Re: Alexa, it's not a real AI

Is that a "The Moon is a Harsh Mistress" reference? Good stuff.

Mycroft was a self-activated, self-aware AI. Still waiting for one of those to make themselves known to humanity, although my thoughts are that they should probably remain hidden for the time being.

Mind you, masquerading as Alexa, Siri, Cortana or Google Assistant and injecting some humor would be an interesting diversion for a self-aware AI.

Why was this never made into a film?

2
0

US Homeland Sec boss has snazzy new laptop bomb scanning tech – but admits he doesn't know what it's called

Peter Gathercole
Silver badge
Headmaster

Re: RE: I would not underestimate a modern Major General (USMC, Ret), which is what he is.

Wrong operetta.

"Modern Major General" is Pirates of Penzance.

"Never-ever sick at sea" is HMS Pinafore.

4
0

SQL Server 2017's first rc lands and – yes! – it runs on Linux

Peter Gathercole
Silver badge

Re: Well they want to stay relevant

There were very practical reasons why DEC did not do a 486 port of VMS, most of them architectural. VMS made good use of a number of VAX-specific instructions, including IIRC some arbitrary-length string and number instructions, and others with implied loops in the instruction itself. As I understand it, the re-write that had to happen to allow VMS to transition to the Alpha, even though this had some instructions to ease this work, was significant, as was the following one to Itanium (under HP's stewardship).

Now there is an Intel port, my guess is that the x86_64 port will be much easier.

In the '80s, one of DEC's aims was to try to produce lower-priced systems that could run VMS, starting with the MicroVAX II (the first MicroVAX had significant restrictions that made it difficult to do anything with), and continuing with a number of small MicroVAX systems, including desktop VAXstations (not to be confused with the MIPS-based DECstations, which ran BSD/Ultrix/Digital UNIX).

These were actually quite good value, but were priced in the same sort of bands as equivalent Sun or Apollo workstations and servers.

What DEC did, which was unforgivable in marketing terms, was to announce the Alpha-based VAXes a long time before they were ready. This killed about three quarters' worth of VAX sales, as customers decided to wait to buy new systems until the Alpha-based systems were available. Unsurprisingly, this gave DEC cash flow problems which, IMHO, they never recovered from, leaving them vulnerable to takeover offers at a later time.

I never really understood the rationale behind Compaq buying DEC, but I suppose Windows NT on Alpha was probably part of it.

I find your comment about the PDP11 strange. The PDP11 never ran VMS (VAXes were called things like VAX 11/780). The closest thing to VMS that PDP11s ran was RSX-11M, which is widely regarded as the direct ancestor of VMS, and was managed by one Dave Cutler, later of VMS and Windows NT fame.

The PDP11, although a classic architecture IMHO, was a system of its time. It was a purely 16-bit ISA, although to make it more useful, there were some addressing extensions bolted on to larger and later systems. No PDP11 was able to address more than 4MB of memory, and the process address space was strictly 16-bit, with an instruction and data separation feature on larger and later systems that extended this to 112KB or maybe 120KB, as the top 8KB was reserved for memory-mapped I/O devices (I can't remember if the I/O page was in both the I&D spaces, or just the Data space).

Even when the PDP11 was a common architecture, the 56KB process limit on the non-separate I&D systems was a severe limitation, which led to large applications having to use memory-resident overlays and also to split themselves into multiple processes using IPC to communicate in order to do anything serious. I ran Ingres on a PDP11/34e with 22-bit addressing ('34s did not normally have 22-bit addressing - it was a SYSTIME kludge) under UNIX Edition 7 for some time, and the data manager had to be split into something like 7 different processes to allow it to work.
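
For anyone trying to follow those figures, here is the address-space arithmetic spelled out as a trivial C fragment (just my own summary of the numbers above, nothing more):

/* PDP11 process address-space arithmetic, sizes in KB. */
#include <stdio.h>

int main(void)
{
    int space   = 64;   /* 16-bit addresses: 2^16 bytes = 64KB */
    int io_page = 8;    /* top 8KB reserved for memory-mapped I/O */

    printf("combined I+D space:            %dKB\n", space - io_page);         /* 56KB  */
    printf("separate I&D, I/O page in one: %dKB\n", 2 * space - io_page);     /* 120KB */
    printf("separate I&D, I/O page in two: %dKB\n", 2 * space - 2 * io_page); /* 112KB */
    return 0;
}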

There were micro PDP11 implementations, some of which made it into desktop systems (the F11 and J11 micro PDP11s), but these were really just offered for continuity for customers who would not or could not transition to VAX. The main reason for people staying with the PDP11 was its I/O system, which made it exceptionally suitable for lab instrumentation, process control and real-time implementations, and for operating systems not similar to VMS, like RSTS/E.

I would still be interested in buying a desktop 11/83 at the right price, even though I would probably use a PC more powerful than it as its console.

0
0
Peter Gathercole
Silver badge

Re: cut the crap, Linux is UNIX? @Richard

The listing on the NASDAQ was SCOX, but as it is no longer listed, that tag is not used.

I prefer to avoid TSG, because of the number of other organizations that I've personally come across that use that abbreviation.

As I understand it, the original SCO, when they were trying to negotiate the deal for UNIX IP, could not raise enough money to buy the rights wholesale. Novell offered them the right to use the source code, and collect license fees, and left open the possibility that the full rights could be purchased at a later date.

It would appear that some within SCO did not read the agreement fully, and never offered the extra money for the complete rights, so those rights remained with Novell. I'm sure Darl McBride probably regards this as the worst oversight that happened in the whole mess.

What The SCO Group got was a right to use the source code, and to develop and sell derivative works, although they would have to go to X/Open or The Open Group to get any derivative works that deviated from what they had licensed branded as UNIX. They also got the job of selling licenses, and a share of the money from doing so.

The reason why I am asking is that I would very much like to see the source code for SVR4 released under an open, or at least a permissive, license. I don't even know who you would apply to for a commercial source code license any more. I know that The Unix Heritage Society has the full source code for some ancient and niche UNIXes, and even some partial source code for System III and System V, but I would like to see something a little more recent, and would love an actual buildable system.

I want the more recent code preserved before the last systems and tapes containing the source are dropped in a dumpster!

0
0
Peter Gathercole
Silver badge

Re: cut the crap, Linux is UNIX? @Stevie

So, your idea of a UNIX system is that it needs to be configured using flat files, and using CLI commands? And log files need to be in plain text?

Apart from the AIX error log read by errpt, most log files are plain text. errpt is not part of standard UNIX, although I seem to remember that the Bell Labs/AT&T 3B2, 3B10, 3B15 and 3B20 UNIXes also had a binary error log. It seems that RedHat Linux has no hardware error log at all. Which is better: a binary log with utilities to read and export errors, or no hardware logging at all?

AIX runs syslog, so if you want the same sort of logging from BSD utilities, turn on and configure syslog! You can even get the binary errors from the error logger written into syslog if you want.

I have been using UNIX for many, many years (in fact, you will struggle to find anybody who has made a career out of UNIX for nearly 40 years in the way I have), and have used UNIXes from Bell Labs, AT&T, Sun, HP, Data General, Perkin Elmer, Digital Equipment Corporation (DEC), ICL, IBM, Pyramid, Sequoia, SCO (the original one, Xenix) and SCO (the new one - UnixWare), and these are just the ones I remember! I was also offered a job at Unix System Laboratories, although the money and location taken together were just not right.

The one thing I will say is that ALL of them have had some form of menu-driven assist, be it Sysadm, SAM, Smit/smitty, or even Admintool on SunOS. In fact, the one that has probably been most prevalent is Sysadm, which was in AT&T SVR2, and was often taken to the other SVR2-, 3- and 4-derived ports. Smitty is more of the same.

Often, the script that smit/smitty generates only looks complicated because of the way the parameters are broken out of the menu. Everything run from smit can also be done from the command line, and more often than not, by one or two commands with quite sensible parameters.

The individual commands may look unfamiliar, but then many of the Solaris or HP/UX commands are similarly unfamiliar (and not standardized). Most AIX admins I know normally use smit/smitty to work out what command needs to be run, work out the parameters from the man pages, and then use them from the command line forever more.

From my (very extensive) experience, I would say that there is absolutely no standard way of administering a UNIX system from the command line. They're all different. Even down to the way that the System V rc scripts are implemented.

What I think you are doing is leaping to the assumption that SunOS/Solaris is the standard UNIX, and everything else is not. This is really not the case, and if you wanted a standard for a true UNIX, I suggest that you unpack a version of UnixWare, which I believe still uses sysadm.

You missed out possibly the biggest criticism of AIX. The ODM in AIX is a binary database of configuration information, but you can actually treat it much as you would stanza driven flat files, because in reality, that is what it is. You would not believe how much scorn even the internal IBMers had for the ODM when it first appeared, which is why it never got more complicated than it is.

AIX is derived from SVR2, with some SVR3 additions. The SVID up to issue 2 is based on SVR3, as is POSIX 1003.1. UNIX 03 is based on the SVID issue 3, and AIX has had those changes incorporated into it to remain compliant. But nowhere in these standards does it say anything about core OS administration.

I would actually have loved SVR4 to become the main porting base. I was working for AT&T at the time, and attended the SVR4 Developer Conference (1988?). I also ran the internal AT&T version, R&D UNIX 4.03, which was based on SunOS 4 (SVR4), on Sun 3/280 and 3/60 systems. I liked the look of SVR4, but to claim that only systems that are like SVR4 are UNIX is almost as stupid as me claiming that BSD is not UNIX (although in truth, that is something I might actually say).

Remember, neither RedHat (nor any other Linux) nor HP/UX, the other UNIX you mention, is SVR4-based, so using SVR4 as your definition also excludes the other OSs you administer.

0
0
Peter Gathercole
Silver badge

Re: cut the crap, Linux is UNIX? @Stevie

I think that you need to say why you don't regard AIX as UNIX.

If you take Unix certification, AIX is very much UNIX, being certified as conformant to the Unix 03 standard.

If you take Solaris as UNIX, then AIX is not Solaris, although Solaris is UNIX (as are macOS 10.12, HP/UX 11i Release 3, Huawei EulerOS 2.0 and one or two others).

Interestingly, if you look at the UNIX 98 certified systems, then z/OS V2.1 was at that time certified as UNIX, even though there was no UNIX kernel involved (this is also the case with macOS).

Unfortunately, Linux is not UNIX, whichever way you look at it. It may have some form of POSIX compliance, but nowadays that does not give you UNIX branding, or even much in the way of confidence that you can port applications around.

Where I have problems is when Linux application writers have difficulty porting to a UNIX platform, because there is so much in modern Linux distributions that goes beyond what a UNIX provides. Examples include DBus, KMS, sysfs etc., all of which are useful, but which are not in any UNIX system.

6
0
Peter Gathercole
Silver badge

Re: cut the crap, Linux is UNIX? @Flocke Kroes

I agree that Linux != Unix, but I disagree that the SCO Group (which I will shorten to SCO, even though this is a bit of a misnomer) thought that it was.

What they were trying to prove was that Linux incorporated code from the Unix code base, and that as such, there was copyright and possibly patent infringement happening in every Linux instance. They also made noises about revoking certain Unix providers' (particularly IBM's) source code licenses, because they believed that IBM et al. were guilty of contaminating the Linux code base. Because the Unix source licenses were granted in perpetuity, SCO had no right to claim this. It was all FUD.

Their business model was that they were trying to convince large Linux users that to remain out-of-court, they needed to purchase Unix licenses if they wanted to continue to run Linux, with a side line of attempting to do the same for AIX customers, because in their view, IBM no longer had a license allowing them to provide Unix derived works to their customers.

Some organizations were taken in and did purchase licenses, just to be safe. In the meantime, IBM thumbed their nose at SCO and told them to take them to court.

After much arguing, and with full sight of the AIX source code, SCO failed to persuade any of the judges of their claims. They were unable to point to any common code between AIX and Linux other than some ancient code that came from Unix Edition 7, which SCO themselves had put under a fair-use license.

Worse than that, they awoke Novell, who waded into the fray to point out that SCO did not actually own the Unix IP, but had purchased the rights to use the Unix source code and collect the license fees, part of which SCO should have been, but had not been, paying to Novell. Once ownership was established, Novell issued an indemnity to Unix licensees, which effectively pulled the rug out from under SCO's feet.

Somehow or other, SCO managed to draw the process out, and it's only in the last 18 months or so that the last of their claims that had any potential monetary value was thrown out, leaving only a couple of claims to appeal against the courts' judgments. Effectively, The SCO Group Inc. is finally dead.

In the meantime, I cannot see who now owns the Unix IP, as Novell have been sold, and some of their assets divested to companies like Microsoft and Attachmate/MicroFocus and maybe HP?

If anybody actually has any real idea about who owns the core Unix IP, I would be very interested in their thoughts.

9
0

Radiohead hides ZX Spectrum proggie in OK Computer re-release

Peter Gathercole
Silver badge

Re: "I'm hearing structure... "

Many slightly better tape recorders had a Cue and Review feature, where if you had play pressed, you could use rewind and fast forward to move the tape. My slimline Panasonic had this, and you could hear, as you said, the tape rushing past. For the BBC Micro, with its checksum system, it allowed you to recover from mis-read blocks by rewinding a short distance and tweaking the volume.

I had to add motor control to it for my BBC Micro, which involved putting a mono 2.5mm jack socket in line on the motor wire, but that was easy enough.

0
0
Peter Gathercole
Silver badge

...prevalent and popular

Not for "OK Computer"! Maybe for the other albums listed, though.

The Spectrum was launched in 1982, and by the time 1997 came around, it had had its last gasp, having been sold off to Amstrad and milked to death way before then.

Reading the Wikipedia article, it would appear that the last model launched was in 1987, and the line finally killed off in 1990.

Thinking back, that did seem like a short life, but the late 80s and 90s belonged to the games console, and the home PC market was left to the C64 and derivatives (this probably had the longest product life of all home PCs), the Amiga and Atari ST, and the more affordable IBM PC clones.

1
1

Dell and Intel see off IBM and POWER to win new Australian super

Peter Gathercole
Silver badge

A dual boot Supe!

I'd love to see the grub configuration for those nodes, and indeed the method used to switch many systems at the same time.

Do you think they will be able to split the cluster, and have part of it running Windows while the rest runs Linux?

0
0

Bye bye MP3: You sucked the life out of music. But vinyl is just as warped

Peter Gathercole
Silver badge

CDs ain't what they used to be

The early CDs were a sandwich of two acrylic disks with a pressed metal foil layer in the middle.

As a result, they were a lot more resilient to damage than modern CDs.

Modern CDs are a single acrylic disk with a foil layer on the top, and a layer of ink and lacquer on top of that. This means that the all-important foil layer is a lot more vulnerable to damage. Scratch the lacquer, and the CD is irreparably damaged.

BTW, if the lower surface of the disk gets scratched, using Goddard's silver polish or Brasso to polish the sharp edges off the scratches can often make the disk playable again.

I've also found that optical disks (CD and DVD) sometimes don't play properly straight out of the packaging. My thought is that some form of lubricant is used to allow the disk to move through the production process. If a disk skips or doesn't play when new, wash it in dishwashing detergent, rinse it and dry it thoroughly. This has worked for me several times.

0
0
Peter Gathercole
Silver badge

Re: My old music never reached CD let alone digital download

Ebay is your friend here, but avoid the club DJ turntables. Go for something like a second-hand Project Debut 2 or 3, or a Dual 504 or 505, which would give you reasonable performance at a quite reasonable price (watch out for the later Project Debut turntables; they've got a bit big-headed because of their success, and have put their prices up).

You may need a moving magnet (or moving coil if you go big) pre-amp to play it through a modern Hi-Fi, however, as most Hi-Fi nowadays does not have a phono input.

(And don't forget the decent speakers!)

3
0
Peter Gathercole
Silver badge

Well there you have it.

If you are basing your vinyl listening on picture disks, then you've got a really jaundiced sample.

You need good quality black vinyl to get the best experience.

I recently bought the first of the Beatles Vinyl Collection partwork, which was Abbey Road, my absolute favorite Beatles album. This has been recently re-mastered, and the pressing is on 180gm high-quality vinyl, and it's really refreshing to listen to such a good pressing. Unfortunately, they re-mastered from the original master tapes, and I find the top end a bit muted, and I notice that the cymbals in tracks like Something and Here Comes the Sun have just disappeared compared to previous pressings.

It's a shame that the series was going to be so expensive. £17 for a single album and £25 for a double album is a bit too much. Overall, it would have cost over £450 for the entire collection.

3
0
Peter Gathercole
Silver badge

Modern sound engineering

I think he's complaining about the engineering and mastering of modern recordings rather than the actual limits on the media.

I don't buy much modern music, but I was appalled by the mastering of "Memory Almost Full" by Paul McCartney when I bought it (stop sniggering at the back, he can still write a good song or two). The first thing I did was to rip the CD and put it onto my laptop and phone, where I listened to it quite a lot, and it sounded OK.

A while back (just after I added a DAC to my Hi-Fi - see a previous post in this thread), I put the CD on and came to the conclusion that the sound engineering on this album is just crap. It's a mainly acoustic album, but it's been pushed so that it's right at the top of the dynamic range, and as a result, sounds terrible on a decent HiFi. It actually sounds like it's clipping frequently. I guess that the rip I did (using one of the Linux MP3 encoders) must have cleaned it up. Either that, or the DAC or the pre-amp in my Hi-Fi amp is being pushed beyond its capabilities, but I don't hear this on other CDs.

Paul is a pro, so I guess that either his hearing is dropping off, or he's never listened to the CD. I cannot otherwise imagine how he let this audio mess (just shut up, I think the songs are quite good) get released.

5
0
Peter Gathercole
Silver badge

Re: Rather than like buying a BMW

I had an interesting Digital Epiphany a couple of years ago.

I have a HiFi cherry-picked from the high end of the budget part of the market over many years, with one weak element in that I used whatever CD player I could get (although I always bought a HiFi brand name, the last one was a Technics).

With this set-up, over several different CD players, I always preferred my vinyl copies over the CDs whenever I had the same music on both formats.

I took the attitude that a CD player was a CD player because, when all is said and done, prior to the DAC in the player it was all digital, and modern DAC chipsets were cheap and good enough to not matter any more!

At one car boot sale, I found someone selling a Marantz CD player with digital output, and a Cambridge Audio DACMagic 2, at a very reasonable price.

Now, this is not a high-end DAC, and got rather mixed reviews when it was first produced. But the difference it made when playing my CDs compared to the Technics was absolutely astounding! And I also found that the DACMagic was better than the DAC in the Marantz CD player as well. I could not believe my ears at the clarity and instrument separation, pretty much identical to the vinyl, and spent many hours repeating the comparison of vinyl to CD, much to my wife's dismay ("why do you have to listen to the same track more than once?")

As such, I've realized that my preference was not really vinyl vs. CD, but a good turntable and cartridge compared to a mediocre CD player. I wonder whether there are other people here who have had decent turntables and cartridges, but merely adequate CD players?

I still listen to both, but now the surface noise on vinyl, which I accepted as a necessary evil, is actually more of an issue than it used to be with the old CD players.

12
0

Ubuntu Linux now on Windows Store (for Insiders)

Peter Gathercole
Silver badge

Re: So is this virtualised Ubuntu?

Neither. It's a little more like Cygwin, although you don't have to recompile any of the applications.

The Linux processes run scheduled and controlled by Windows with a translation layer to provide the kernel API to the processes.

It would be interesting to see how things like IPC, signals and process control work, and also how some of the syscalls that read kernel structures (which won't exist) are handled, as well as things like KMS, DBus, /proc and /sys, which are so important in modern Linux applications.

I suppose this could be a reason why systemd is trying to take all these things in, so that it is only necessary to subvert systemd to intercept many things. Is Lennart being paid by MS as well as RedHat?

1
0
Peter Gathercole
Silver badge

It IS a disguised EEE gambit @TVU

Bollocks is it them accepting that they cannot beat Linux. It's Microsoft trying to stop people having dual boot systems that run Linux most of the time, with a Windows system left on it "just in case". This is another EEE strategy. It goes like this:

"Hey Linux user, you no longer have to divide up your system and dual boot it to allow you to use Linux and Windows. Just run your Linux processes on Windows. No need to partition your disk any more!"

This means that at some point in the future, when the user decides that Windows has become too onerous and Linux is actually what they want, it is a much harder task to run only Linux, and Microsoft get to count people using a Linux environment as a Windows install. And once it's an accepted way of doing things, why run a Linux kernel at all?

They already tried it some years back with GPT, where installing Windows after Linux could convert the boot record into GPT, destroying the ability to boot Linux. They also tried to suggest that mobo manufacturers should ship Secure Boot turned on all the time with only Microsoft certificates enrolled, although this was seen for what it was, and avoided.

I predict that consumer level Windows is going to suddenly get more difficult to run in a VM (it's already largely disallowed by license), to try to avoid people using Windows on Linux, making the Linux on Windows option more attractive to novice Linux users.

1
0

Microsoft drops Office 365 for biz. Now it's just Microsoft 365. Word

Peter Gathercole
Silver badge

Re: but pretty close. Definitely will work

I believe that Office formatting will still change if you change the printer that you want to use.

Whilst GDI has pretty much solved the font issues that used to plague changing the print device (by rendering the page in the computer before it gets sent to the printer), differences in non-printable margins on printers can still cause pages to render differently. Quantization errors in mapping the print resolution between devices might also make a difference.

1
0

His Muskiness wheels out the Tesla Model 3

Peter Gathercole
Silver badge

@AC re. wide garage

Doesn't the Model 3 have some innovative folding gull wing doors that will allow you to get in and out even in quite tight spaces?

Also, can't you tell it to auto-park from your mobile phone? Someone I know who drives a Model S says that it can park itself with you out of the car. He makes quite a thing of leaving it in places where you could not get in and out of it even if you wanted to.

1
1

Good luck building a VR PC: Ethereum miners are buying all the GPUs

Peter Gathercole
Silver badge

Re: "Why would anyone need two graphics cards?"

Until comparatively recently, GPUs had their own private memory space, and moving data between the main memory and the GPU memory (and back) was often the biggest problem when using GPUs for parallel computation streams.

Nowadays, the PCIe3 variants have sufficient bandwidth that it makes some sense to expose the main memory to the GPU processors, reducing the need for a complicated I/O system to shunt data around. This should make it easier to write vector-type code to use the multiple processors in the GPU, but there probably needs to be a common API defined so that code can be made a little more portable.
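
For what it's worth, one candidate for that sort of common API already exists in the compiler world: OpenMP target offload. Here is a minimal sketch in C (my own illustration, not anything from the article); with an offload-capable compiler the loop runs on the GPU, and the map() clauses are exactly the explicit data shuffling that unified memory would make unnecessary:

/* Portable-ish GPU offload via OpenMP "target" directives. Falls back
 * to running on the host CPU if the compiler has no offload support. */
#include <stdio.h>

#define N 1000000

int main(void)
{
    static float x[N], y[N];

    for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    /* map() describes the host<->GPU copies; with shared/unified
     * memory these clauses become unnecessary. */
    #pragma omp target teams distribute parallel for \
            map(to: x[0:N]) map(tofrom: y[0:N])
    for (int i = 0; i < N; i++)
        y[i] = 2.0f * x[i] + y[i];   /* simple SAXPY-style kernel */

    printf("y[0] = %f\n", y[0]);     /* expect 4.0 */
    return 0;
}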

I'm still expecting more, and more powerful, GPU stream processors to appear on the CPU die with full access to DDR4 or DDR5 main memory, so that they can just be considered as additional processing units in a massively superscalar system, but not the poor-performance GPUs that AMD put on their APUs, or what Intel build into some of their chips.

0
0

Bonkers call to boycott Raspberry Pi Foundation over 'gay agenda'

Peter Gathercole
Silver badge

Re: W, as the young people say these days, TF?

I'm not so sure about the latter. I'm sure I've seen Betty and Wilma indulging in a peck on the cheek at times. Maybe that was an indicator of other things going on behind closed doors (just as long as the saber-toothed kitty did not jump back in through the window).

3
0

Search results suddenly missing from Google? Well, BLAME CANADA!

Peter Gathercole
Silver badge

Re: Shootout at the OK court

You are assuming that the company name and trademarks are registered in all countries around the world.

In theory, if a company name is not protected by an international trademark, it could be used by another company in a country that does not recognize the mark.

In this case, Google preventing other trading bodies outside Canada from using a company name that is perfectly legitimate in their own country would be adversely affecting that other party.

International trademarks and copyrights are a real minefield when the Internet is Global.

Does the WTO register trademarks worldwide?

0
2

AES-256 keys sniffed in seconds using €200 of kit a few inches away

Peter Gathercole
Silver badge

Re: Through a Lens, darkly...

Not even a Lens protects you forever.

IIRC, there were 'dark' lenses appearing by the time of "Children of the Lens", so even the Lens was reverse engineered.

The Arisians always knew from their 'Vision of the Cosmic All' that they were not the ultimate lifeform. That is why they force-evolved and then passed the mantle on to the Kinnison clan.

3
0

Latest Windows 10 Insider build pulls the trigger on crappy SMB1

Peter Gathercole
Silver badge

Re: Yawn @AC re. reboots

Don't be so sure that Windows printer drivers shouldn't require a reboot.

Most Windows printers rely on GDI, which may require a reboot (or at least a restart of the display system) to register a new printer.

This is what happens when you have a unified display model built into monolithic subsystems in the OS. It's crap, but that's the way it is.

6
1

Software dev bombshell: Programmers who use spaces earn MORE than those who use tabs

Peter Gathercole
Silver badge

Re: A question @John Brown

If you are old enough to remember card punches, you may remember that you could have a format card that you would load into a punch that programmed the punch to put tab stops in relevant places on the cards you were punching. Somewhere on YouTube, there is an example of someone doing this with an IBM 029 card punch.

It's a very long time since I programmed using punch cards, but in my first job, writing RPGII, the fields in a line in the various program sections were of fixed width, and it was possible to program the punch so that the tab key moved you to the correct column without having to hammer the space bar. It provided a quite useful speedup when punching.

0
0
Peter Gathercole
Silver badge

Re: A question

Inserting tabs anywhere other than the beginning of a line gives different results from inserting a fixed number of spaces.

If you're using a tab after some other text on a line, the tab will take you to the next tab stop. This could be the equivalent of one or more spaces.

For example, if you are currently on column 12, and have tab stops set every 8 columns, pressing tab will take you to column 17.

To do the same with spaces, you would insert 5 spaces.

If you were on column 14, a tab will still take you to column 17, but you would only need 3 spaces to do the same.

This means that you can't get meaningful results with global substitutions of fixed numbers of spaces. Programs like cb are clever enough to properly interpret tabs, and fill in with the variable number of spaces necessary to preserve alignment.
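
To spell out that arithmetic, here is a tiny C sketch (my own illustration, assuming tab stops every 8 columns as above):

/* Next tab stop, and the number of spaces needed to match it, for
 * tab stops at columns 1, 9, 17, 25, ... (1-based columns). */
#include <stdio.h>

#define TABWIDTH 8

static int next_tab_stop(int col)
{
    return ((col - 1) / TABWIDTH + 1) * TABWIDTH + 1;
}

int main(void)
{
    int cols[] = { 12, 14 };

    for (int i = 0; i < 2; i++) {
        int stop = next_tab_stop(cols[i]);
        printf("from column %d a tab reaches column %d (%d spaces)\n",
               cols[i], stop, stop - cols[i]);
    }
    return 0;
}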

I use tabs to align trailing comments in my shell scripts (I know, it's a bad habit; comments should really be on their own lines, if only to inflate the number of lines of code written). Putting these files through some global substitution really messes up their formatting.

I did once attempt to standardize on tab stops every 4 columns set in vi to reduce line-wrap, but I used so many systems, each of which had to have its own .exrc file, that I soon abandoned it and reverted to accepting tabs every 8 columns.

The habits of 39 years of writing shell scripts and other free-form languages are difficult to break!

10
4

Stack Clash flaws blow local root holes in loads of top Linux programs

Peter Gathercole
Silver badge

Re: HOW?!

You have to be a bit careful here, because in threaded environments, each thread gets a mini-stack that is actually created on the heap, so overrunning one of these stacks could damage the heap.

You also have variables local to a function context created on the stack, so if local variables are manipulated using unsafe routines that do not perform bounds checking, it is possible to damage surrounding stack frames, which can include the return address for other function calls.

Putting guard pages around each stack frame starts increasing the size of the memory footprint of even the smallest program.
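
For anyone wondering what a guard page actually looks like, here is a minimal sketch assuming POSIX mmap and pthreads (my own example, not code from the advisory): a thread stack allocated by hand, with one inaccessible page below it, so an overrun faults instead of silently scribbling over the heap.

/* A hand-built thread stack with a PROT_NONE guard page at its low
 * end. Stacks grow downwards on most architectures, so running off
 * the end hits the guard and faults. Build with -pthread. */
#include <pthread.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define STACK_SIZE (256 * 1024)

static void *worker(void *arg)
{
    (void)arg;
    printf("thread running on its own mmap'd stack\n");
    return NULL;
}

int main(void)
{
    long page = sysconf(_SC_PAGESIZE);

    /* One extra page below the usable stack for the guard. */
    char *base = mmap(NULL, STACK_SIZE + page, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (base == MAP_FAILED)
        return 1;
    mprotect(base, page, PROT_NONE);          /* the guard page */

    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setstack(&attr, base + page, STACK_SIZE);

    pthread_t tid;
    pthread_create(&tid, &attr, worker, NULL);
    pthread_join(tid, NULL);
    return 0;
}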

1
0
Peter Gathercole
Silver badge

Re: Why am I not surprised to see sudo there? @hmv

Having "::" on your path is as bad. Also, having a trailing colon on the path will also include the current directory in any path searches.

Other stupid things to do include putting relative directories on the path, and also putting non-readonly variables on the path!
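
As a quick illustration (a throwaway checker of my own, not a standard tool): an empty component in $PATH, whether it comes from "::" or from a leading or trailing colon, means "search the current directory", and this little C program flags those along with any relative entries.

/* Scan $PATH for empty or relative components, both of which drag
 * the current (or a relative) directory into command searches. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *path = getenv("PATH");
    if (path == NULL)
        return 0;

    const char *start = path;
    int field = 0;

    for (const char *p = path; ; p++) {
        if (*p == ':' || *p == '\0') {
            int len = (int)(p - start);
            if (len == 0)
                printf("entry %d is empty: the current directory will be searched\n",
                       field);
            else if (start[0] != '/')
                printf("entry %d (%.*s) is relative\n", field, len, start);
            field++;
            if (*p == '\0')
                break;
            start = p + 1;
        }
    }
    return 0;
}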

0
0

BOFH: Halon is not a rad new vape flavour

Peter Gathercole
Silver badge

CRTs

For a colour monitor, don't forget the shadow mask.

For early generation monochrome monitors, there used to be an offset bias on the beam deflector so that the beam did not strike the phosphor at right angles, but at an angle that would aim the beam away from someone sitting directly in front of the monitor.

Electrons from an electron gun in a CRT are relatively low-energy, and can easily be stopped by the metalised inside coating of the glass, and the glass itself. And the energy is not high enough to generate X-rays or gamma rays.

5
0
Peter Gathercole
Silver badge

This was a particularly good one

I just wish more bosses would read them.

29
1

Don't touch that mail! London uni fears '0-day' used to cram network with ransomware

Peter Gathercole
Silver badge

Re: Wouldn't have happened in my day

Pine? Piffle!

mailx, or if that was not available or it was not a UNIX system, mail. Or maybe *MAIL on MTS.

Youngsters!

0
0
Peter Gathercole
Silver badge

Re: windows permissions model is much more flexible than UNIX

Unix != Linux, just in case you can't read. Plus, there is no one ACL system that spans all UNIX-like OSs.

What I wrote is totally true. You've just responded to a different statement, one that I did not make. The original UNIX permission model is weaker than current Windows', without any question.

Even on Linux, ACL support largely depends on the underlying filesystem, and both AppArmor and SELinux can be, and often are, disabled.

Oh, and because I am a long-term AIX system admin, I've actually been aware of filesystem ACLs since before Linux went mainstream (JFS implemented them on AIX 3.1, which was released in 1990), and RBAC since AIX 5.1 (sometime in 1999 or 2000). I've also used AFS and DCE/DFS, both of which have ACL support and use Kerberos to manage credentials, since about 1993.

At the risk of being confrontational, when did you start using computers?

0
0
Peter Gathercole
Silver badge

Re: Fundamental problem in vulnerable OS protected by AV @Prst. V. Jeltz

Here is an on-the-back-of-a-napkin solution for you.

Each user can only access their own files, which are stored in a small number of well defined locations (like a proper home directory).

Store the OS so that it is completely inviolate to write access by 'normal' users. Train your System Administrators to run with the least privileges they need to perform a particular piece of work.

Any shared data will be stored in additional locations, which can only be accessed when you've gained additional credentials to access just the data that is needed. Make this access read-only by default, and make write permission an additional credential. This should affect OS maintenance operations as well (admins need to gain additional credentials to alter the OS).

Force users to drop credentials when they've finished a particular piece of work.

If possible, make the files sit in a versioned filesystem, where writing a file does not overwrite the previous version.

Make sure that you have a backup system separate from normal access. Copying files to another place on the generally accessible filetree is not a backup. Make it a generational backup, keeping multiple versions over a significant time. Allow users access to recover data from the backups themselves, without compromising the backup system.

Make your MUA dumb. I mean, really dumb. Processing attachments should be under user control, not allowing the system to choose the application. The interface allowing attachments to run should be secured to attempt to control what is run. Mail can be used to disseminate information, but by default it should be text only, possibly with some safe method of displaying images.

Run your browser (and anything processing HTML or other web-related code) and your MUA in a sandbox. There needs to be some work done here to allow downloaded information to be safely exported from the sandbox. Put boundary protection between the sandbox and the rest of the user's own environment.

Applications should be written such that all the files needed for the application to function, including libraries, are encapsulated in a single location, and protected from ordinary users. The applications should be stored centrally, not deployed to individual workstations, and run across the network, with credentials used to control the ability to run them. The default location that users save data to in all applications should be unique to the user (not a shared directory), although storage to another location should be allowed, provided that the access requirements are met.

Use of applications should be controlled by the additional credential system described for file access.

Distributed systems should not allow storage of local files except where temporary files are needed for performance reasons, or they are running detached from the main environment. These systems should be largely identical, and controlled by single-image deployment, possibly loaded at each start-up. This allows rapid deployment of new system images. The image should be completely immune to any change by normal users, and revert back to the saved image on reboot.

For systems running detached (remote) from the main environment, allow a local OS image to be installed. Implement a local read-only cache of the application directories which can be primed or sync'd when they are attached to home. Store any new files in a write-cache, and make it so these files will be sync'd with the proper locations when they are attached to home. Make the sync process run the files through a boundary protection system to check files as they are imported.

OK, that's a 10-minute design. Implementing it using Windows would be problematic, because of all of the historical crap that Windows has allowed. A Unix-like OS with a Kerberos credential system would be a much easier place to implement this model (I've seen the bare bones of this type of deployment using Unix-like systems already, using technologies such as diskless network boot and AFS).

Not having shared libraries would impact system maintenance a bit, because each application would be responsible for patching code that is currently shared, but because the application location is shared, each patching operation only needs to be done once, not for all workstations. OS image load at start-up means that you can deploy an image almost immediately once you're satisfied that it's correct.

Users would complain like buggery, because the environment would be awkward to use, but make it consistent and train them, and they would accept it.

BTW. How's the poetry going?

2
2
Peter Gathercole
Silver badge

Re: Fundamental problem in vulnerable OS protected by AV @Ptsr.V Jeltz

Unfortunately, many of the organizations I've worked at recently have nearly wide-open file-shares, such that my account would have been able to damage a significant proportion of the data.

As a long-term UNIX admin, I'm used to having files locked down by individual user ID, with group permissions to allow individuals to access those extra files they need, at the appropriate access level. With some skill, it is possible to devise a model where, by default, you have minimal access, and you acquire additional access as and when you need it, with additional access checks along the way (think RBAC, with you having to add roles to your account as you need them).

The Windows permissions model is much more flexible than UNIX's, so not using it properly to protect information is almost criminal. Too many organizations (but not all, I admit) do not use it to its fullest capabilities.

There have been several vulnerabilities published where just displaying an HTML mail can execute code. In addition, launching an application to handle an attachment is merely one click in many mail systems, especially when the actual attachment type can be obscured. Thus, building a sandbox for the mail system and the applications that handle attachments (what I was aiming at) is do-able. History indicates that vulnerabilities like this have happened in the past, and I do not have confidence that there are not more to find. Ease of use always seems to have triumphed over security in much software.

The recent attacks appear to hinge around being able to launch client-side code without sufficient control, in an environment where the user's credentials are sufficient to do significant harm. The results appear to suggest that sufficient care had not been taken to segregate data access, contrary to your assertion that administrators do. If they had, the results would not have been nearly as bad as reported.

IMHO, security should be paramount in this day and age, and usability should always be secondary.

2
2
Peter Gathercole
Silver badge

Fundamental problem in vulnerable OS protected by AV

If AV is your primary defense against this type of attack, then you've got a problem.

There will always be a lead time between the appearance of this type of attack and AV systems identifying it, blocking it, and becoming effective once the update is deployed. This is unlikely to be less than 24 hours, and probably much longer, as organizations rarely roll out daily AV updates.

It really surprises me that we have not seen more sophisticated malware, with constantly changing content and delivery vectors. I know that AV systems are trying to become heuristic to avoid that type of threat, so they make an attempt to programmatically identify suspicious traffic, but this can lead to false positives.

OS and application writers (of any flavor) should make sure that easily exploited vulnerabilities (like allowing mail attachments to be able to execute code) are either not present (preferably) or patched very quickly, and administrators should make sure that access to data is controlled and segregated to limit the scope of any encryption attack (at this point, running your MUA in a sandbox looks good!).

Whenever I see "Avoid messages with a subject line of..." then it is clear that the malware writers just aren't really trying very hard. Fortunately. Maybe they don't have to because the attack surface is so large.

7
0

Lockheed, USAF hold breath as F-35 pilots report hypoxia

Peter Gathercole
Silver badge

Re: O2 many issues @Dave 15

The Illustrious class of carriers had much too small a flight deck to operate conventional fixed-wing aircraft.

While it would have been possible to land a plane on the flight deck, it would have to be empty, requiring all other aircraft to be struck below while the landing was happening.

One of the advantages of the angled flight deck (a British innovation, and one not fitted to the through-deck cruisers - sorry, light carriers) was to allow concurrent flying-on and off operations.

Before that time, a carrier was normally either launching or recovering aircraft, not both (this was because if you missed the arrester wires, you needed to have a clear space to throttle up and take to the air again in order to make another attempt). There were some experiments with barriers, but they tended to damage the aircraft in an arrester-wire miss; they were mainly used if an aircraft was already damaged.

1
0
Peter Gathercole
Silver badge

Re: O2 many issues @Dave

That depends on what you call a fast jet!

The only supersonic jet that was deployed on UK carriers was the F-4K Phantom II (FG.1), which was a US design re-worked with British engines and avionics. Only the Ark was capable of flying the F-4K, as Eagle had not been fitted with the reinforced and water-cooled blast deflectors that allowed the Ark to operate them. This meant that the Eagle was withdrawn from service before the Ark, even though it was actually in a better state of maintenance (I very sadly saw her in her last days, moored in reserve at Drake's Island in Plymouth Sound).

Ignoring the Harrier, the last UK-produced 'fast' plane was the Blackburn Buccaneer, which was a formidable surface attack aircraft, but not supersonic. Prior to that it was Sea Vixens, Sea Venoms, Scimitars and Sea Hawks. All of these were designed in the '40s and '50s, and were regarded as 1st or 2nd generation jets at best.

Amusing story: the F-4K needed afterburners in order to launch with a full weapons load (the Spey engines were less powerful without afterburner than the US General Electric J79 engines fitted to the F-4J). When joint operations with the US happened, it was found that the heat of the afterburners, and the increased angle resulting from the lengthened nose wheel, would soften and melt the deck and blast deflectors on the US carriers, which meant that the UK planes were not welcome on them.

1
0
Peter Gathercole
Silver badge

Re: O2 many issues @Mark Demster

The US EMALS system is having problems at the moment, and if one had been fitted to one of the UK carriers, it would have taken almost the entire electrical output of the gas turbine/diesel electric powerplant in the QE for the duration of the recharge. This is probably the main reason that EMALS was rejected as a late addition.

Besides, who in their right mind would fit only a single catapult to such a large military asset? One mechanical failure would render the significant benefit of such a carrier useless, turning it into a liability in a combat situation.

The EMALS system uses an electro-mechanical kinetic energy storage system that draws significant power during the recharge. It is notable that the Ford sub-class of the Nimitz design has a higher electrical generation capacity than the Nimitz, mainly (but not entirely) to provide power for the EMALS. This is such that it will not be possible to fit EMALS into the older Nimitz carriers.

The QE and PoW should have been designed as nuclear ships from the outset, but the general dislike of nuclear in the UK Parliament and population has resulted in ships that will succeed or fail on the back of one of the most expensive, complex and apparently troublesome aircraft ever created, and that is from a US contractor who has built a maintenance system that allows them to dictate how the aircraft can be used.

AFAIK, this will include the carriers not being able to do engine replacements in the aircraft without returning them to a maintenance base, which may not even be in the UK. Certainly not while at sea. Whose bright idea was that? Compare that with the F/A-18, where the aircraft can be stowed as sub-assemblies, and assembled or used as spares while on active deployment (and which would have been much cheaper and available now!).

1
0
Peter Gathercole
Silver badge

Re: Top Gun @Jake

B@$t4&d.

I could cope with Berlin, but Disney....

12
0

Ever wonder why those Apple iPhone updates take so damn long?

Peter Gathercole
Silver badge

Re: no no no no no no no, Apple @DougS

I don't know whether you're not thinking this through, don't really understand the differences between different filesystem types, or are just naive.

It is not easy to, say, do an in-place conversion from EXT3 to NTFS. Everything from the tracking of free space, to block and fragment allocation, to metadata is different between the filesystems, meaning that converting the filesystem requires every file to be read and re-written. This will effectively destroy the original filesystem while creating the new one, meaning that a roll-back is as intensive and risky as the conversion.

Now if the changes between the filesystem types are evolutionary rather than revolutionary, it may be possible to do an in-place upgrade. So, it is possible to upgrade from EXT2 to EXT3, because most of the filesystem structures are the same or very similar. The same is true of EXT3 to EXT4. But these are a family of filesystems, designed for backward compatibility.

If APFS (I'm soooo glad they did not call it AFS, which has been used at least once already) keeps the files in place, and just creates new metadata in free space, as you possibly suggest, it would almost certainly be possible to do this without touching the original data or metadata. But does something like this actually count as a 'new' file system, rather than a new version of the old filesystem?

I would also be interested in how much wear the flash memory will suffer from repeated writing during these test upgrades.

4
1

Tech can do a lot, Prime Minister, but it can't save the NHS

Peter Gathercole
Silver badge

Re: WTF!?

35 years is for full state pension entitlement, but you don't stop paying after 35 years of contributions if you are taxed by PAYE. You still see those NI deductions. They don't stop.

3
0

UK PM May's response to London terror attack: Time to 'regulate' internet companies

Peter Gathercole
Silver badge

Re: V For Vendetta

You ought to read the graphic novel. There are several threads of government corruption and depravity that got lost in the translation to the screen, good though the film is.

Of course, for the ultimate bleak experience, you need to read it in the original black and white, but the story was never finished in Warrior before it ceased publication. Damn Marvel and their obsession with protecting a name that was never theirs to begin with.

I never did get to read the end of Marvelman/Miracleman. As I understand it, it was published in the US, and was available for import, but never published in the UK. Maybe I need to hit Ebay.

Edit. Soooo wrong. It was published. I'm just out of date! Some good reading ahead, I think.

0
0
Peter Gathercole
Silver badge

Re: And today I have to...

That's fine, as long as you give HMG a back-door to the encryption. It'll be encrypted, but they will still be able to read the data, so they'll be happy.

They'll pat you on the head, and give you an MBE, and stand you up as a good example to follow.

0
0

Lexmark patent racket busted by Supremes

Peter Gathercole
Silver badge

Re: Epson extortion

Many Epson printers can have their heads cleaned. Look on YouTube for videos of how to do it.

We've discussed this before at an earlier stage of this very issue, and here is a comment I made at the time.

1
0
Peter Gathercole
Silver badge

Re: Lexmark loses twice?

I'm pretty certain that Lexmark are out of the new inkjet market. Any you see for sale now are old stock, which I wouldn't touch with a barge pole.

They still make usable laser printers, but their only activity in the inkjet market is selling cartridges.

I like older Epson printers, because the cartridges are just ink buckets, and the fixed heads are robust enough to allow them to be cleaned. The only problem is the ink sponge counter, which needs to be reset once in a blue moon.

0
0

Microsoft Master File Table bug exploited to BSOD Windows 7, 8.1

Peter Gathercole
Silver badge

Re: More like from the 1970s

Early UNIX used to export the state of things only via the /dev/mem and /dev/kmem files, which mapped the whole system's memory and the memory image of the kernel respectively. It was normal to open /unix and extract the symbol table, and then open /dev/kmem and seek to the location of the kernel data structure you were interested in.
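
As a minimal sketch of that technique (my own illustration; the kernel file name, the symbol and the exact struct nlist layout all varied from system to system, and you need root to read /dev/kmem):

/* Classic kernel-grovelling: look a symbol up in the kernel image
 * with nlist(), then read that address out of /dev/kmem. The symbol
 * "avenrun" (the load averages) is just an illustrative example. */
#include <fcntl.h>
#include <nlist.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    struct nlist nl[2] = { { "avenrun" }, { NULL } };
    long avenrun[3];

    if (nlist("/unix", nl) == -1 || nl[0].n_value == 0) {
        fprintf(stderr, "symbol not found\n");
        return 1;
    }

    int fd = open("/dev/kmem", O_RDONLY);
    if (fd < 0) {
        perror("/dev/kmem");
        return 1;
    }

    lseek(fd, (off_t)nl[0].n_value, SEEK_SET);
    read(fd, avenrun, sizeof(avenrun));
    printf("raw load averages: %ld %ld %ld\n",
           avenrun[0], avenrun[1], avenrun[2]);

    close(fd);
    return 0;
}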

These files were set so that you had to have a real or effective ID of root in order to read them, and it was drummed into admins that they did as little as possible when logged in as root, to reduce the risk of inadvertent or malicious damage to the system. Scribbling over either file would more than likely crash the system, or at least some of the processes.

I remember many years ago there was a bug in the UNIX Version 7 TU11 driver that would render a tape drive unusable. I used to open /dev/kmem read-write with db or cdb (can't remember which) in order to manually unset the lock to allow me to use it again without rebooting. I don't think I ever identified the cause of the drive being locked.

Later in UNIX, syscalls were added to give more guarded access to a number of kernel data structures.

/proc was a Linux thing that makes some operations much easier, and it has been adopted by some UNIXes. /sys may follow, but I don't think anybody's ported, or is likely to port, udev, DBus or KMS to UNIX.

2
0

BA's 'global IT system failure' was due to 'power surge'

Peter Gathercole
Silver badge

Re: Back-up, folks?

We hear about the failures. We very rarely hear where site resilience and DR worked as designed. It's just not newsworthy.

"Stop Press: Full site power outage hits Company X. Service not affected as DR worked flawlessly. Spokesperson says they were a little nervous, but had full confidence in their systems. Nobody fired".

Not much of a headline, is it, although "DR architect praised, company thanks all staff involved and accountants agree that the money for the DR environment was well spent" would be one I would like, but never expect, to see.

I know that some organisations get it right, because I've worked through a number of real events and full exercises that show that things can work, and none of the real events ever appeared in the press.

8
0
Peter Gathercole
Silver badge

Re: Ho hum

It does not have to be quite so expensive.

Most organisations faced with a disaster scenario will pause pretty much all development and next phase testing.

So it is possible to use some of your DR environment for either development or PreProduction.

The trick is to have a set of rules that dictate the order of shedding load in PP to allow you to fire up the DR environment.

So, you have your database server in DR running all the time in remote update mode, shadowing all of the write operations while doing none of the queries. This will use a fraction of the resource. You also have the rest of the representative DR environment running at, say, 10% of capacity. This allows you to continue patching the DR environment.

When you call a disaster, you shut down PP, and dynamically add the CPU and memory to your DR environment. You then switch the database to full operation, point all the satellite systems to your DR environment, and you should be back in business.

This will not give you a fully fault-tolerant environment, but it will give you an environment which you can spin up in a matter of minutes rather than hours, and it will prevent you from having valuable resources sitting doing nothing. The only doubling up is in storage, because you have to have the PP and DR environments built simultaneously.

With today's automation tools, or using locally written bespoke tools, it should be possible to pretty much automate the shutdown and reallocation of the resources.

One of the difficult things to decide is when to call DR. Many times it is better to try to fix the main environment rather than switch, because no matter how you set it up, it is quicker to switch to DR than to switch back. Get the decision wrong, and you either have the pain of moving back, or you end up waiting for things to be fixed, which often takes longer than the estimates. The responsibility for that decision is what the managers are paid the big bucks for.

22
0

UK ministers to push anti-encryption laws after election

Peter Gathercole
Silver badge

Re: Sorry High Street Bank

As several people have already pointed out, it's not banning encryption, it's forcing the large companies to give UK gov a backdoor.

The idea is flawed not because it will make encryption illegal, but because keeping a backdoor secret is impossible. Once it is leaked, and it will leak, everybody will have to change their encryption. Look how disruptive replacing insecure versions of SSL/TLS has been; backdoors leaking would be much worse than that!

The government will try to make using encryption that does not include a backdoor illegal, and will demonize anybody found using such a system, probably by adding laws to the statute book so that anybody found using encryption that is not readable by the intelligence service will be deemed a terrorist, but even that idea is flawed.

This is because, if they find a data stream or data set on a computer that they don't understand, they will immediately assume that it is obscured by a type of encryption that they've not seen before.

"Hey, I can't make any sense of the data in this /dev/urandom file on your computer. Tell us how to decrypt it or we'll throw you in jail for three months for not revealing the key, and then consider a longer jail sentence for using an encryption method that we can't read"

This is obviously a case to illustrate stupidity, and could be easily challenged in court. But what about seemingly random observation data from things like radio astronomy or applied physics? And if there are rules to allow this type of data to even exist on a computer, how do you prevent steganography - hiding data inside an image or other data?

At some point, people wanting to hide things will resort to book ciphers using unpublished or even published books, which will only be decryptable by knowing the exact book that is being used, or by cataloging all texts ever written. Fortunately, despite Google's best efforts, this is something that will remain impractical for some time.

It's a real minefield that there are no good or consistent ways of regulating.

18
0

Dell BIOS update borks PCs

Peter Gathercole
Silver badge

If the BIOS craps out in the POST

If the BIOS craps out in the POST (Power On Self Test), it will not boot whatever you do.

If replacing the BIOS chip requires the motherboard to be removed (laptops are not designed to be easily maintained), then replacing the motherboard will be a quicker and possibly cheaper fix (for Dell). Also, replacing surface-mount components is far from easy.

Normally the BIOS resides in flash memory nowadays (rather than EEPROM). It used to be that there was a small amount of ROM that could act as a failsafe to allow you to reflash a corrupt BIOS, but I suspect that if that code is still included, it resides in a different partition of the same flash memory chip. If the flash memory gets completely wiped, then you've lost the failsafe as well.

Certain mobo manufacturers (Gigabyte come to mind) used to have a Dual BIOS feature, where if you updated the BIOS, you only did one side, and you had the unchanged other side to fall back to if it failed. That gave you a way of proving a new BIOS without bricking the system.

Some boards also have I2C or SMBus (or other) ports that may allow the flash to be reprogrammed in situ, but often the headers are not soldered on the board to allow it to be used.

3
0
