* Posts by Peter Gathercole

2924 posts • joined 15 Jun 2007

Lockheed, USAF hold breath as F-35 pilots report hypoxia

Peter Gathercole Silver badge

Re: O2 many issues @Dave

That depends on what you call a fast jet!

The only supersonic jet deployed on UK carriers was the F-4K Phantom II (FG.1), a US design re-worked with British engines and avionics. Only the Ark Royal was capable of operating the F-4K, as the Eagle had not been fitted with the reinforced, water-cooled blast deflectors that allowed the Ark to launch them. This meant that the Eagle was withdrawn from service before the Ark, even though she was actually in a better state of maintenance (I very sadly saw her in her last days, moored in reserve off Drake's Island in Plymouth Sound).

Ignoring the Harrier, the last UK-produced 'fast' plane was the Blackburn Buccaneer, which was a formidable surface-attack aircraft, but not supersonic. Prior to that it was Sea Vixens, Sea Venoms, Scimitars and Sea Hawks. All of these were designed in the '40s and '50s, and were regarded as 1st or 2nd generation jets at best.

Amusing story. The F-4K needed afterburners in order to launch with a full weapons load (the Spey engines were less powerful without afterburner than the US General Electric J79 engines fitted to the F-4J). When joint operations with the US Navy took place, it was found that the heat of the afterburners, combined with the increased angle resulting from the lengthened nose-wheel leg, would soften and melt the deck and blast deflectors on the US carriers, which meant that the UK planes were not welcome on the US carriers.

Peter Gathercole Silver badge

Re: O2 many issues @Mark Demster

The US EMALS system is having problems at the moment, and if one had been fitted to one of the UK carriers, it would have taken almost the entire electrical output of the gas turbine/diesel electric powerplant in the QE for the duration of the recharge. This is probably the main reason that EMALS was rejected as a late addition.

Besides, who in their right mind would fit only a single catapult to such a large military asset? One mechanical failure would negate the significant benefit of such a carrier, turning it into a liability in a combat situation.

The EMALS system uses an electro-mechanical kinetic energy storage system that draws significant power during the recharge. It is notable that the Ford sub-class of the Nimitz design has a higher electrical output than the Nimitz itself, mainly (but not entirely) to provide power for EMALS; so much so that it will not be possible to retro-fit EMALS into the older Nimitz carriers.

The QE and PoW should have been designed as nuclear ships from the outset, but the general dislike of nuclear power in the UK Parliament and population has resulted in ships that will succeed or fail on the back of one of the most expensive, complex and apparently troublesome aircraft ever created, and that from a US contractor who has built a maintenance system that allows them to dictate how the aircraft can be used.

AFAIK, this will include the carriers not being able to do engine replacements in the aircraft without returning them to a maintenance base, which may not even be in the UK. Certainly not while at sea. Whose bright idea was that! Compare that with the F/A-18, where the aircraft can be stowed as sub-assemblies, and assembled or used as spares while on active deployment (and which would have been much cheaper and available now!).

Peter Gathercole Silver badge

Re: Top Gun @Jake

B@$t4&d.

I could cope with Berlin, but Disney....

Don't touch that mail! London uni fears '0-day' used to cram network with ransomware

Peter Gathercole Silver badge

Fundamental problem in vulnerable OS protected by AV

If AV is your primary defense against this type of attack, then you've got a problem.

There will always be a lead time between the appearance of this type of attack and AV systems identifying and blocking it, and a further delay before the updates are deployed and become effective. This is unlikely to be less than 24 hours, and probably much longer, as organizations rarely apply AV updates daily.

It really surprises me that we have not seen more sophisticated malware, with constantly changing content and delivery vectors. I know that AV systems are adding heuristics to counter that type of threat, attempting to programmatically identify suspicious traffic, but this can lead to false positives.

OS and application writers (of any flavor) should make sure that easily exploited vulnerabilities (like allowing mail attachments to be able to execute code) are either not present (preferably) or patched very quickly, and administrators should make sure that access to data is controlled and segregated to limit the scope of any encryption attack (at this point, running your MUA in a sandbox looks good!).

Whenever I see "Avoid messages with a subject line of..." then it is clear that the malware writers just aren't really trying very hard. Fortunately. Maybe they don't have to because the attack surface is so large.

Ever wonder why those Apple iPhone updates take so damn long?

Peter Gathercole Silver badge

Re: no no no no no no no, Apple @DougS

I don't know whether you're not thinking this through, don't really understand the differences between different filesystem types, or are just naive.

It is not easy to, say, do an in-place conversion from EXT3 to NTFS. Everything from the tracking of free space to block and fragment allocation and metadata differs between the filesystems, meaning that the conversion requires every file to be read and re-written. This effectively destroys the original filesystem while creating the new one, meaning that a roll-back is as intensive and risky as the conversion.

Now if the changes between the filesystem types are evolutionary rather than revolutionary, it may be possible to do an in-place upgrade. So, it is possible to upgrade from EXT2 to EXT3, because most of the filesystem structures are the same or very similar (adding a journal with tune2fs is essentially all it takes). The same is true of EXT3 to EXT4. But these are a family of filesystems, designed for backward compatibility.

If APFS (I'm soooo glad they did not call it AFS, which has been used at least once already) keeps the files in place, and just creates new metadata in free space, as you possibly suggest, it would almost certainly be possible to do this without touching the original data or metadata. But does something like this actually count as a 'new' file system, rather than a new version of the old filesystem?

I would also be interested in how much wear the flash memory will suffer from repeated writing during these test upgrades.

Tech can do a lot, Prime Minister, but it can't save the NHS

Peter Gathercole Silver badge

Re: WTF!?

35 years is for full state pension entitlement, but you don't stop paying after 35 years of contributions if you are taxed by PAYE. You still see those NI deductions. They don't stop.

UK PM May's response to London terror attack: Time to 'regulate' internet companies

Peter Gathercole Silver badge

Re: V For Vendetta

You ought to read the graphic novel. There are several threads of government corruption and depravity that got lost in the translation to the screen, good though the film is.

Of course, for the ultimate bleak experience, you need to read it in the original black and white, but the story was never finished in Warrior before it ceased publication. Damn Marvel and their obsession with protecting a name that was never theirs to begin with.

I never did get to read the end of Marvel/Miracleman. As I understand it, it was published in the US, and was available for import, but never published in the UK. Maybe I need to hit Ebay.

Edit. Soooo wrong. It was published. I'm just out of date! Some good reading ahead, I think.

Peter Gathercole Silver badge

Re: And today I have to...

That's fine, as long as you give HMG a back-door to the encryption. It'll be encrypted, but they will still be able to read the data, so they'll be happy.

They'll pat you on the head, and give you an MBE, and stand you up as a good example to follow.

Lexmark patent racket busted by Supremes

Peter Gathercole Silver badge

Re: Epson extortion

Many Epson printers can have their heads cleaned. Look on YouTube for videos of how to do it.

We've discussed this before at an earlier stage of this very issue, and here is a comment I made at the time.

Peter Gathercole Silver badge

Re: Lexmark loses twice?

I'm pretty certain that Lexmark are out of the new inkjet market. Any you see for sale now are old stock, which I wouldn't touch with a barge pole.

They still make usable laser printers, but their only activity in the inkjet market is selling cartridges.

I like older Epson printers, because the cartridges are just ink buckets, and the fixed heads are robust enough to allow them to be cleaned. Only problem is the ink sponge counter that needs to be reset once in a blue moon.

Microsoft Master File Table bug exploited to BSOD Windows 7, 8.1

Peter Gathercole Silver badge

Re: More like from the 1970s

Early UNIX used to export the state of things only via the /dev/mem and /dev/kmem files, which mapped the whole of physical memory and the memory image of the kernel respectively. It was normal to open /unix to extract the symbol table, and then open /dev/kmem and seek to the location of the kernel data structure you were interested in.
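Something like this (a minimal sketch from memory, not any real vendor's code; nlist() was the traditional way to pull symbol addresses out of the kernel image, and the "_proc" symbol name here is just illustrative):

    /* look up a kernel symbol in the booted image, then seek to it in kmem */
    #include <sys/types.h>
    #include <fcntl.h>
    #include <nlist.h>
    #include <unistd.h>

    int main(void)
    {
        struct nlist nl[2] = { { "_proc" }, { 0 } };
        int fd;

        if (nlist("/unix", nl) == -1 || nl[0].n_value == 0)
            return 1;                       /* symbol not in the image */
        if ((fd = open("/dev/kmem", O_RDONLY)) == -1)
            return 1;                       /* needs real or effective root */
        lseek(fd, (off_t)nl[0].n_value, SEEK_SET);
        /* ... read() the process table entries from here ... */
        close(fd);
        return 0;
    }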

These files were set so that you had to have a real or effective UID of root in order to read them, and it was drummed into admins that they did as little as possible when logged in as root, to reduce the risk of inadvertent or malicious damage to the system. Scribbling over either file would more than likely crash the system, or at least some of the processes.

I remember many years ago there was a bug in the UNIX Version 7 TU11 driver that would render a tape drive unusable. I used to open /dev/kmem read-write with db or cdb (can't remember which) in order to manually unset the lock to allow me to use it again without rebooting. I don't think I ever identified the cause of the drive being locked.

Later in UNIX, syscalls were added to give more guarded access to a number of kernel data structures.

/proc (which actually pre-dates Linux, having appeared in Edition 8 UNIX, but which Linux expanded greatly) makes some operations much easier, and has been adopted by some UNIXs. /sys may follow, but I don't think anybody's ported, or is likely to port, udev, D-Bus or KMS to UNIX.

BA's 'global IT system failure' was due to 'power surge'

Peter Gathercole Silver badge

Re: Back-up, folks?

We hear about the failures. We very rarely hear where site resilience and DR worked as designed. It's just not newsworthy.

"Stop Press: Full site power outage hits Company X. Service not affected as DR worked flawlessly. Spokesperson says they were a little nervous, but had full confidence in their systems. Nobody fired".

Not much of a headline, is it, although "DR architect praised, company thanks all staff involved and Accountants agree that the money to have DR environment was well spent" would be one I would like, but never expect to see.

I know that some organisations get it right, because I've worked through a number of real events and full exercises that show that things can work, and none of the real events ever appeared in the press.

Peter Gathercole Silver badge

Re: Ho hum

It does not have to be quite so expensive.

Most organisations faced with a disaster scenario will pause pretty much all development and next phase testing.

So it is possible to use some of your DR environment for either development or PreProduction.

The trick is to have a set of rules that dictate the order of shedding load in PP to allow you to fire up the DR environment.

So, you have your database server in DR running all the time in remote update mode, shadowing all of the write operations while doing none of the query load. This will use a fraction of the resource. You also have the rest of the representative DR environment running at, say, 10% of capacity. This allows you to continue patching the DR environment.

When you call a disaster, you shut down PP, and dynamically add the CPU and memory to your DR environment. You then switch the database to full operation, point all the satellite systems to your DR environment, and you should be back in business.

This will not give you a fully fault-tolerant environment, but it will give you an environment which you can spin up in a matter of minutes rather than hours, and it will prevent valuable resources from sitting doing nothing. The only doubling up is in storage, because you have to have the PP and DR environments built simultaneously.

With today's automation tools, or using locally written bespoke tools, it should be possible to pretty much automate the shutdown and reallocation of the resources.

One of the difficult things to decide is when to call DR. Many times it is better to try to fix the main environment rather than switch, because no matter how you set it up, it is quicker to switch to DR than to switch back. Get the decision wrong, and you either have the pain of moving back, or you end up waiting for things to be fixed, which often takes longer than the estimates. The responsibility for that decision is what the managers are paid the big bucks for.

UK ministers to push anti-encryption laws after election

Peter Gathercole Silver badge

Re: Sorry High Street Bank

As several people have already pointed out, it's not banning encryption, it's forcing the large companies to give UK gov a backdoor.

The idea is flawed not because it will make encryption illegal, but because keeping a backdoor secret is impossible. Once it is leaked, and it will leak, everybody will have to change their encryption. Look how disruptive replacing insecure versions of SSL/TLS has been. A leaked backdoor would be much worse than that!

The government will try to make using encryption that does not include a backdoor illegal, and will demonize anybody found using such a system, probably by adding laws to the statute book so that anybody found using encryption that is not readable by the intelligence services will be deemed a terrorist. But even that idea is flawed.

This is because, if they find a data stream or data set on a computer that they don't understand, they will immediately assume that it is obscured by a type of encryption that they've not seen before.

"Hey, I can't make any sense of the data in this /dev/urandom file on your computer. Tell us how to decrypt it or we'll throw you in jail for three months for not revealing the key, and then consider a longer jail sentence for using an encryption method that we can't read"

This is obviously a case to illustrate stupidity, and could be easily challenged in court. But what about seemingly random observation data from things like radio astronomy or applied physics? And if there are rules to allow this type of data to even exist on a computer, how do you prevent steganography, hiding data inside images or other data?

At some point, people wanting to hide things will resort to book ciphers using unpublished or even published books, which will only be decryptable by knowing the exact book that is being used, or by cataloging all texts ever written. Fortunately, despite Google's best efforts, this is something that will remain impractical for some time.
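As a toy illustration of the idea (my own sketch, not a scheme anyone actually deploys): encode each byte of the message as the offset of an occurrence of that byte in the shared text. Without the exact text, the ciphertext is just a list of numbers:

    /* toy book cipher: each plaintext byte becomes an offset into the book */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *book = "It was a bright cold day in April, and the "
                           "clocks were striking thirteen.";
        const char *msg = "attack at dawn";
        const char *p, *hit;

        for (p = msg; *p; p++) {
            /* first occurrence used here; a careful user would vary the
               choice so repeated letters don't map to repeated numbers */
            hit = memchr(book, *p, strlen(book));
            if (hit == NULL)
                return 1;               /* character not in the book */
            printf("%ld ", (long)(hit - book));
        }
        putchar('\n');
        return 0;
    }

Decryption is just indexing back into the same text, which is why cataloging every text ever written is the only general attack.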

It's a real minefield that there are no good or consistent ways of regulating.

Bye bye MP3: You sucked the life out of music. But vinyl is just as warped

Peter Gathercole Silver badge

Re: My old music never reached CD let alone digital download

eBay is your friend here, but avoid the club DJ turntables. Go for something like a 2nd-hand Pro-Ject Debut 2 or 3, or a Dual 504 or 505, which would give you reasonable performance at a quite reasonable price (watch out for the later Pro-Ject Debut turntables; they've got a bit big-headed because of their success, and have put their prices up).

You may need a moving magnet (or moving coil if you go big) pre-amp to play it through a modern Hi-Fi, however, as most Hi-Fi nowadays does not have a phono input.

(And don't forget the decent speakers!)

Peter Gathercole Silver badge

Well there you have it.

If you are basing your vinyl listening on picture discs, then you've got a really jaundiced sample.

You need good quality black vinyl to get the best experience.

I recently bought the first of the Beatles Vinyl Collection partwork, which was Abbey Road, my absolute favorite Beatles album. This has been recently re-mastered, and the pressing is on 180g high-quality vinyl, and it's really refreshing to listen to such a good pressing. Unfortunately, they re-mastered from the original master tapes, and I find the top end a bit muted, and I notice that the cymbals in tracks like Something and Here Comes the Sun have just disappeared compared to previous pressings.

It's a shame that the series was going to be so expensive. £17 for a single album and £25 for a double album is a bit too much. Overall, it would have cost over £450 for the entire collection.

Peter Gathercole Silver badge

Modern sound engineering

I think he's complaining about the engineering and mastering of modern recordings rather than the actual limits on the media.

I don't buy much modern music, but I was appalled by the mastering of "Memory Almost Full" by Paul McCartney when I bought it (stop sniggering at the back, he can still write a good song or two). The first thing I did was to rip the CD and put it onto my laptop and phone, where I listened to it quite a lot, and it sounded OK.

A while back (just after I added a DAC to my Hi-Fi - see a previous post in this thread), I put the CD on and came to the conclusion that the sound engineering on this album is just crap. It's a mainly acoustic album, but it's been pushed so that it's right at the top of the dynamic range, and as a result it sounds terrible on a decent HiFi. It actually sounds like it's clipping frequently. I guess that the rip I did (using one of the Linux MP3 encoders) must have cleaned it up. Either that, or the DAC or the pre-amp in my Hi-Fi amp is being pushed beyond its capabilities, but I don't hear this on other CDs.

Paul is a pro, so I guess that either his hearing is dropping off, or he's never listened to the CD. I cannot otherwise imagine how he let this audio mess (just shut up, I think the songs are quite good) get released.

Peter Gathercole Silver badge

Re: Rather than like buying a BMW

I had an interesting Digital Epiphany a couple of years ago.

I have a HiFi cherry-picked from the high end of the budget part of the market over many years, with one weak element in that I used whatever CD player I could get (although I always bought a HiFi brand name, the last one was a Technics).

With this set-up, over several different CD players, I always preferred my vinyl copies over the CDs whenever I had the same music on both formats.

I took the attitude that a CD player was a CD player because, when all is said and done, prior to the DAC in the player it was all digital, and modern DAC chipsets were cheap and good enough to not matter any more!

At one car boot sale, I found someone selling a Marantz CD player with digital output, and a Cambridge Audio DACMagic 2 at a very reasonable price.

Now, this is not a high-end DAC, and got rather mixed reviews when it was first produced. But the difference it made when playing my CDs compared to the Technics was absolutely astounding! And I also found that the DACMagic was better than the DAC in the Marantz CD player as well. I could not believe my ears at the clarity and instrument separation, pretty much identical to the vinyl, and spent many hours repeating the comparison of vinyl to CD, much to my wife's dismay ("why do you have to listen to the same track more than once?")

As such, I've realized that my preference was not really vinyl vs. CD, but a good turntable/cartridge compared to a mediocre CD player. I wonder whether there are other people here who have had decent turntables and cartridges, but merely adequate CD players?

I still listen to both, but now the surface noise on vinyl, which I accepted as a necessary evil, is actually more of an issue than it used to be with the old CD players.

Dell BIOS update borks PCs

Peter Gathercole Silver badge

If the BIOS craps out in the POST

If the BIOS craps out in the POST (Power On Self Test), it will not boot whatever you do.

If replacing the BIOS chip requires the motherboard being removed (laptops are not designed to be easily maintained), then replacing the motherboard will be a quicker and possibly cheaper fix (for Dell). Also replacing surface-mount components is far from easy.

Normally the BIOS resides in flash memory nowadays (rather than EEPROM). It used to be that there was a small amount of ROM that could act as a failsafe to allow you to reflash a corrupt BIOS, but I suspect that if that code is still included, it resides in a different partition of the same flash memory chip. If the flash memory gets completely wiped, then you've lost the failsafe as well.

Certain mobo manufacturers (Gigabyte come to mind) used to have a Dual BIOS feature, where if you updated the BIOS, you only did one side, and you had the unchanged other side to fall back to if it failed. That gave you a way of proving a new BIOS without bricking the system.

Some boards also have I2C or SMBus (or other) ports that may allow the flash to be reprogrammed in situ, but often the headers are not soldered on the board to allow it to be used.

Bloke charged under UK terror law for refusing to cough up passwords

Peter Gathercole Silver badge

Re: And soon.... The clock will strike thirteen @M7S

I happened across the restart of The Prisoner on Monday myself.

Even though I've seen it before, and I have the complete series on DVD (actually, a largely unwatched impulse purchase from a car boot sale), it had not sunk in before that the dialogue in the opening credits, ending with "I am not a number... etc.", was re-recorded for whoever was Number 2 in that episode.

One of the benefits in watching the episodes close together, I suppose.

We just don't make series like that in the UK any more, I guess because we don't have characters like Lew Grade in our media companies.

Peter Gathercole Silver badge

Re: And soon.... The clock will strike thirteen

As a total aside, the clock striking 13 was an interesting plot point in the Captain Scarlet episode "Big Ben Strikes Again".

Totally irrelevant to this discussion, I know.

Why Microsoft's Windows game plan makes us WannaCry

Peter Gathercole Silver badge

Re: Hang on another minute...

I do not believe March to May is an adequate time. What if you've got 200 items of software to regression test. Ignoring the time to actually patch the extensive estate, that's over 2 software packages to regression test every day (if you can use the whole three months), including weekends and public holidays. And all on a heterogeneous hardware estate with attached specialist equipment!

What if it was 2000 software items? How many of the IT support people know the applications they support well enough to be able to perform the regression test? Or do the users have time to actually test the full functionality of their packages? (Hint: a day spent testing a package is a day that the user can't be doing their normal job.)

For a large organization, a proper regression test of their software portfolio will take months.

It would not be so bad if the patches were just that - patches that do not change any other function. But Microsoft do like to include functional changes in their patch bundles.

Regression testing Windows 10 in a business environment is going to be an absolute nightmare, and I'm glad I'm not in that game.

DeX Station: Samsung's Windows-killer is ready for prime time

Peter Gathercole Silver badge

@AC

I'm sorry!

The Osborne 1 did not have batteries as standard. It needed mains power to operate. And quite a lot of that.

Any battery pack was a 3rd party add-on.

Peter Gathercole Silver badge
Happy

That's one hell of a pocket!

Do you also have car batteries and a mains inverter in there as well?

MP3 'died' and nobody noticed: Key patents expire on golden oldie tech

Peter Gathercole Silver badge

I don't understand how it 'died'

Once the patents expire, the technology moves into the public domain, which could mean that we see more use of it, not less.

Whether we do or not is another matter, but I would guess that there are still a lot of optical players and media devices which are happier with MP3 files than some of the later (patent encumbered) audio formats.

It's been two and a half years of decline – tablets aren't coming back

Peter Gathercole Silver badge

I found an innovative use

I found an excellent use for my 10" Android tablet.

When I last sang with a choir, I used it as a musical rehearsal device for learning the music. If you can get the music in MIDI form, there are applications that will convert it back into sheet music and, provided it is broken into appropriate tracks, play the individual parts as per your selection while displaying the sheet music at the same time.

Couple it with a pair of headphones, and you can then follow the music and hear the parts all on one easily carried device. Mind you, bursting into song in the middle of a train or plane does not go down too well with the other passengers.

You can also have an audio or video recording of a performance as well, and if you want to go that far, record your own rehearsals with the rest of the choir/orchestra to allow you to review the session.

I have seen musicians use them in place of paper music on their music stands, with the music auto-scrolling so they don't have to turn pages.

I also use it while I'm out for reading comics and books, and watching shows I rip to SD card. It's so old that I can't remember when I got it (it got Android 4.0.4 soon after I bought it), and it has an 8000mAh battery that puts smartphones to shame. I still get 4-6 hours of continuous use from one charge, although it can get really slow until the firmware is re-flashed (no TRIM support for the flash filesystems).

Huawei picks SUSE for assault on UNIX big iron

Peter Gathercole Silver badge

Re: hot swapping - old news

Tandem is old-news.

Unfortunately, the RAS features that used to be around in Tandem and Stratus (bloody hell, Stratus still exists!) are apparently features that vendors do not consider useful any more.

At the same time, customers have been encouraged, for power and supposedly manageability reasons, to consolidate all their systems onto ever-larger single systems divided up by virtualisation.

And what happens when components fail and need swapping? Well, I/O cards can be swapped, as can drives, power supply components and fans. But once you get to core components like CPU and memory, the only way is to take part or all of a system out of service.

Even in the modular IBM Power Systems models (770, 780, 870 and 880), where supposedly you can power down individual drawers, I've never come across a situation where a CPU or memory repair action has suggested just powering down the affected processor drawer, rather than wanting the whole system powered down.

The solution to this? Well, on-the-fly workload migration is normally the current suggestion, but that means that you have to keep the same capacity as your largest system spare, and there will be performance and time constraints while migrations are carried out. Otherwise, you de-construct your workloads, and place them onto smaller systems that you can afford to have down for service actions without affecting the service.

Of course, hardware will continue to run in a degraded state now (if a CPU core or memory DIMM fails, the rest of the system may well continue to run), meaning that you can plan your outages rather better than you used to be able to do, but to restore full performance, some outage will probably be required.

If Huawei can produce servers at a reasonable cost where CPUs and memory can be replaced without shutting a system down, I can see current buyers of Power and SPARC systems looking at them very carefully, but it will need some OS modifications to allow hardware to be disabled and not considered for work. It's possible, but will need work in the scheduler and the memory allocation code. Power, IBM i and AIX can do some of this already, but I'm not sure that Linux on Power can, and I think on Intel it's still in its infancy.

But with the integration of memory and PCIe controllers in modern processor dies, system builders will have to know a whole lot more about the internal architecture of the systems to provide resilient configurations that will allow processor cores with all their associated on-die controllers to be removed without affecting the service.

I personally still favour a larger number of smaller systems, rather than relying on increased complexity in the design, and I think that, whether knowingly or not, customers embracing cloud are making the same decisions.

iPhone lawyers literally compare Apples with Pears in trademark war

Peter Gathercole Silver badge

Re: Apple Records predates Apple Computers

There were three separate lawsuits, with Apple Corp. taking Apple Computer Inc. to task over the use of 'Apple' and an Apple logo in conjunction with music.

Apple Corp. won the first two, and received modest damages and a more explicit license deal, but the third one in 2003-2006 centered around the use of Apple and the Apple logo on the iTunes store, which was clearly about music.

In this case, the judge ruled in favor of Apple Computer, taking a very (IMHO) loose interpretation of the maybe poorly worded section on content delivery and physical media (to me, it looks like the judge did not think that the electronic delivery of digital music conflicted with the previous agreement, which he interpreted as covering the delivery of music on physical media - clearly a case of digital music delivery being a disruptive technology).

Although Apple Corp. said they would appeal, it was likely they didn't have the financial resources, and eventually Apple Computer offered a settlement, part of which transferred the ownership of the Apple logo to Apple Computer, with a perpetual license to allow Apple Corp. to continue using their logo. I think it also included an agreement to allow Beatles music to be delivered through the iTunes store, something that Apple Corp. had explicitly blocked previously, presumably because of the ongoing disagreement.

So there is now nothing Apple Corp. can do to anybody w.r.t the logo, as Apple Computer Inc. own it outright.

It's amazing how much can be achieved by the application of money.

Peter Gathercole Silver badge

Re: Does anyone remember ...

I was going to bring up Peach, but then I remembered that it was an Apple ][ clone, so any infringement case would have been interesting, to say the least!

IT error at Great Western Railway charging £10k for 63-mile journey ticket

Peter Gathercole Silver badge

Re: small city @Spudley

Yes. I do know. Typo.

I had two kids go to Bridgwater College and one of them then went to SCAT. One of the first things I learned was that the 'e' was missing, but when quickly typing a post, it's easy to forget. If you look back at my posts, it's hard to find one that does not have a spelling, typographic, punctuation or capitalization error, no matter how hard I try to get them right.

I think the real reason for the tone of my comment is that the conversion of first Polytechnics, and now Further Education colleges to 'University' status has, in my opinion, devalued degrees, and damaged the vocational education system in the UK. The current system still churns out graduates in 'soft' disciplines, who then struggle to work in their chosen field, and end up not using their education in the jobs that they end up in. And the flip side is that 'real' universities are starved of resources and funds for the required 'hard' disciplines, leading to shortages of STEM graduates in industry and education, and very valuable intermediate level qualifications in these subjects (BTEC HNC and HND for example) have pretty much disappeared.

Peter Gathercole Silver badge

Re: small city

Yes, it has neither a Cathedral nor a University, although the Somerset College of Art and Technology (SCAT - what a bad acronym to use) rebranded itself first as Somerset College, and recently merged with Bridgwater College to form University Centre Somerset.

According to the list at https://www.gov.uk/check-a-university-is-officially-recognised/recognised-bodies, it cannot award degrees itself, and its tag line is "In partnership with Plymouth University, Oxford Brookes University, UWE Bristol & The Open University", so I suspect that it relies on Plymouth, Oxford Brookes, UWE and the Open University for the award of the degree.

This does not make it a University, in my opinion, so Taunton does not qualify as a city.

systemd-free Devuan Linux hits version 1.0.0

Peter Gathercole Silver badge

Re: Honest inquiry @myself

Hey!

I've just looked at TUHS, and if you're interested in UNIX source code, lots of interesting stuff has appeared there recently.

Not just source for Edition 8, but Editions 9 and 10 as well.

The biggest revelation I had was when I found the source for something called pdp11v, which is also called PDP-11 3+2.

Have a look, and work out what it is yourself! Remember, even large PDP-11s were really rather small (maximum 4MB memory, small 16KB memory segments, maximum of 128KB text and data size for single processes without some fancy overlaying), so someone having got this running was a real feat.

Peter Gathercole Silver badge

Re: Honest inquiry

Back on my own machine. V7x86 partition fired up.

/etc/init is a binary that is run from inside main.c, and it is crafted as process 1 (the source refers to process 0 as being the scheduler, which is just a loop that sleeps on a timer interrupt and presumably inspects the process table to schedule the other processes).

The source for the Edition 7 init is very simple. It handles single and multi-user modes, runs /etc/rc, and also respawns the getty processes (controlled by the entries in /etc/ttys) as users log on and off. It's written as an infinite loop with a wait in it. The wait returns every time a process terminates; init then puts a record in utmp, and if the process was a getty, or whatever the getty exec'd, it respawns the getty.
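The shape of it is roughly this (a from-memory sketch, heavily simplified, with the utmp handling and single-user mode left out):

    /* the Edition 7 style respawn loop, much simplified */
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static const char *ttys[] = { "tty0", "tty1" }; /* stand-in for /etc/ttys */
    static pid_t pids[2];

    static pid_t spawn_getty(const char *tty)
    {
        pid_t pid = fork();
        if (pid == 0) {
            execl("/etc/getty", "getty", tty, (char *)0);
            _exit(1);                       /* exec failed */
        }
        return pid;
    }

    int main(void)
    {
        int i, status;
        pid_t dead;

        for (i = 0; i < 2; i++)
            pids[i] = spawn_getty(ttys[i]);

        for (;;) {                          /* the infinite loop */
            dead = wait(&status);           /* returns when any child dies */
            for (i = 0; i < 2; i++)
                if (pids[i] == dead)        /* a login session ended, so */
                    pids[i] = spawn_getty(ttys[i]);   /* respawn its getty */
        }
    }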

Other than that, it does very little. The processes that run at boot are actually started by the /etc/rc script, and that is a simple top-to-bottom script that mounts the other filesystems and starts cron and the update process that periodically flushes the buffer cache to disk.

So much simpler than the SysVinit that implements inittab. I don't have access to any Bell Labs or AT&T source later than Edition 7, although I guess I could look at BSD, but that may not give any insight into when the full-blown SysVinit appeared.

I believe that the Edition 8 source may now be at TUHS (at www.tuhs.org). I must check it out, although this is only related to SysV through the common ancestor of Edition 7.

BTW, Correction to my previous post. Lions is spelt Lions, not Lyons.

Peter Gathercole Silver badge

Re: Honest inquiry

Um. Monitoring processes is exactly what SysVinit does, but it requires you to actually have processes directly created by init that stick around.

Look at the entries in /etc/inittab. See field 3 in each line, the one that says wait, once or respawn. Respawn allows you to have a service that will be re-started if the process dies.
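A typical respawn entry looks like this (illustrative; device names and runlevels vary between SysV ports):

    2:2345:respawn:/sbin/getty 38400 tty2

If the getty (or the login shell it eventually exec'd on that tty) exits, init starts a fresh one; 'once' and 'wait' entries get no such treatment.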

What you are referring to as SysVinit is actually the /etc/rc script that is called from init on runlevel change, that runs the scripts from /etc/rc.d (although different SysV UNIX ports actually implement it slightly differently). While this is part of the init suite, it is not init itself.

The concept of init in UNIX goes back to before SysV. I have a copy of the Lyons Edition 6 commentary, and that references an init process, although I think that the /etc/inittab file did not exist at that time. I will fire up my Nordier Intel port of Edition 7 VM at some point to refresh my memory about how Edition 7 started the initial processes.

The rc.d directory hierarchy of scripts appeared at some point between SVR2 and SVR3 IIRC. The first UNIX I remember seeing it in was R&D UNIX 5.2.6 (which was an internal AT&T release).

Farewell Unity, you challenged desktop Linux. Oh well, here's Ubuntu 17.04

Peter Gathercole Silver badge

Re: Won't install properly @Peter R. 1

I hope your comment was not aimed at me!

If it was, I think you've missed out the gist of what I was saying. If you install or buy some bleeding edge or niche hardware for Windows, something that is not in the normal Windows driver repository, the vendor provides this thing, normally a shiny silver disk or a link to a web site, that adds the support for that device to Windows.

Without it, you would have as much trouble running that hardware on Windows as many people experience on Linux. As an exercise, try installing Windows on one of these problem systems just from Microsoft media, and see how much stuff doesn't work without the mobo and other driver disks from people other than Microsoft. It's an education.

The vendors of this problem hardware do not provide their own drivers for Linux, and this is the biggest problem for niche hardware. You cannot expect anybody else in the Linux community to reverse-engineer hardware drivers for this type of device. If it's important, do it yourself, and contribute it back to the community!

Do not expect someone like RedHat or Canonical to provide drivers for Linux when Microsoft do not do it for Windows (remember, even drivers in the Windows repository are often provided by the vendor, not Microsoft themselves). It really is the vendor's responsibility to ensure that their hardware is supported, not the OS community's.

It is a wonder that as much works as it does with just the base Linux install media. A testament to all the hard work that has been done, often by volunteers or philanthropic companies.

What I find more cynical is those vendors who provide Mac OS drivers which would differ comparatively little from the Linux ones, but don't actually bother with that last step of packaging and testing for Linux.

Peter Gathercole Silver badge

Re: My thoughts on this ... @Julian

Before the turn of the century, I liked the version of twm that added a virtual desktop. The version I used was called vtwm.

I actually found the source for it a bit back, and compiled it up. It still does the main part of the job I need a window manager to do quite well (and in an absolutely tiny footprint), but the lack of integration with things like the network manager for wireless keys, no applets and a number of other niggles prevented me from going back to it full time.

I suppose I could have spent more time investigating getting it working better, but I just lost interest. We get too used to the extra luxuries of modern desktops, unfortunately.

Peter Gathercole Silver badge

Re: Good riddance, but..

GNOME flashback (or failback, whatever they want to call it) works for me. GNOME 2 look and feel delivered on top of GNOME 3. It's not identical (plugins have to be re-written, for example), but it's close enough.

I chose that on Ubuntu rather than switching to Mint.

Peter Gathercole Silver badge

Re: Won't install properly

Unless the nVidia drivers in the repository are back-level compared to other distributions, blame nVidia themselves for the poor quality.

As I understand it, both nVidia and AMD (ATI) provide a binary blob that is wrapped to allow it to be plugged into X.org, Mir or Wayland for each distro. As long as that blob is wrapped correctly, any instability will be caused by the blob. Also, are you sure it crashes the system, and not just the GUI? X11 or Mir drivers should be running in user mode, so should be incapable of taking the whole system out. Have you tried Ctrl-Alt-F1 to get to a console so that you can kill the X server?

If the repository is out-of-date, then pick up the new blob from the nVidia or AMD website, and compile it into the wrapper yourself.

Personally, I find the open-source drivers sufficient for my needs, and much less prone to have the code to drive my older graphic cards removed with no notice (which has happened more than once). But then, I'm not a hard-core gamer.

I suspect that the code that Realtek provide for their WiFi dongles (presumably you mean USB devices) hasn't been updated by Realtek recently, and may not compile because the Kernel version and library stack has moved on from when their code was written. Try engaging Realtek to ask them to provide a copy that will compile on what is, after all, a mainstream Linux distro.

But the basic point is, get the chipset vendors to support their hardware better on Linux rather than griping at the distro maintainers. Or buy hardware that is more Linux friendly.

Peter Gathercole Silver badge

Re: My thoughts on this ... @badger31

I never liked Unity on the desktop, but having used it on a 'phone for some time, it works surprisingly well.

My view is that it works well for people and devices that only really do one thing at a time, thus it works on 'phones quite well (who tries to multitask several applications on a phone screen?). Scopes are really interesting, and switching between different concurrently opened programs by swiping from the left does work. I would have loved to use a WebOS device to see whether the Cards feature from that and the task switcher in Ubuntu Touch worked in the same way.

On a desktop or laptop, people who fill the whole screen with what they are doing probably like Unity (and probably the Mac interface and Metro as well). But the original behavior, where applications opened full screen by default and the launcher brought an already open window to the front rather than opening a new instance, alienated me and a whole lot of other users.

Will the MOAB (Mother Of all AdBlockers) finally kill advertising?

Peter Gathercole Silver badge

Re: I havent got the bandwidth yet @Kiwi

There are some. I have a PCI card based on a Broadcom chipset, inherited when I was given a Shuttle compact PC, for which I could not find any support, either pre-compiled or in source, that would get the card to function in Linux (specifically Ubuntu 12.04 - it was a few years ago).

But then again, the card was so obscure that it took an absolute age to find some drivers that worked in Windows XP, as well.

I also had some problems with the Atheros wireless chip in the original EEE PC 701 with Ubuntu, because it took some time for the particular chipset to be supported in the repository.

Peter Gathercole Silver badge

Re: I havent got the bandwidth yet @Charles

<pedant>If you're wanting any graphics drivers in the kernel beyond the console mode drivers, you're going to be disappointed</pedant>.

What is in the Linux kernel is a series of stub syscalls that allow the user-mode graphics drivers to access the hardware, and many of these stubs are actually wrapped into KMS. The drivers (which I admit may lag the availability of new hardware) are not in the kernel.

This is the X.org way of doing things. I do not know for certain that Wayland does things the same way, but I think that it does.

I've pointed out many times that the reason the type of examples you've quoted are difficult to find is that the hardware manufacturers can't be arsed, or deliberately refuse, to provide Linux support for their hardware (although the GPL does raise some barriers if they want to keep their code secret).

It's unfair to blame the Linux community for the lack of support for these hardware devices. The open-source graphics modules are getting better, but they effectively rely on some clever bods, sometimes working on their own time, to reverse-engineer the support code for new hardware, and this does not happen instantly.

It's often only niche or bleeding-edge hardware which is difficult (even the mainstream Atheros chipsets are quite well supported now). I've not really had problems with WiFi on laptops from the mainstream suppliers for some time now.

Aim your scorn at the hardware manufacturers.

Canonical sharpens post-Unity axe for 80-plus Ubuntu spinners

Peter Gathercole Silver badge

Re: Reboot

The example I used was for commercial UNIXes, where the on-disk image of the kernel is actually overwritten with a kernel update. This is mainly because the initial boot loader is designed to load something like /unix.

For quite some time, Linux has had the ability to have multiple kernels installed on a system. In this respect, you are correct in saying that not rebooting will not cause symbol table mis-matches of the type I described, although I would not like to say there would be no issues (especially if there were any kernel API changes, not unheard of in the Linux kernel).

But I'm pretty certain that the early Linux systems, using Lilo rather than Grub, still relied on there being a link of some kind to a fixed named file in the top level root directory.

My first experience of Linux was with Red Hat 4.1 (original numbering system, not RHEL) around 20 years ago, and I'm sure that is how it worked in those earlier releases. I'm pretty certain that in-place online kernel updates were almost unheard of back then, and nobody would even think of not rebooting after updating the system from a CD. In fact, if I remember correctly, updating a system back then normally required you to boot from the CD containing the updates, so rebooting was mandatory.

My Unix experience at source level goes back to 1978 (goodness, 40-year anniversary of first logging on to a Unix system next year!), so I'm pretty certain of the behaviour of traditional UNIX systems.

Prior to the /proc pseudo-filesystem, the normal way for a process like ps, for example, to read the process table was for the process to be set-uid to root, and then open /dev/kmem and seek to the process table using the symbol table obtained from the /unix file. This behaviour was copied from traditional Unix systems in early Linux command sets, and you would be surprised about how many processes actually needed access to kernel data structures.

Peter Gathercole Silver badge

Re: Reboot

Reboots are suggested every time you update the kernel. If you don't reboot after updating the kernel, some things, particularly anything that looks at the symbol table for the running system by reading the image on disk, could cause problems.

This should be less of an issue than on traditional UNIX systems, because there a kernel update used to change the default kernel image on disk, which contained the addresses of most kernel data structures, so the symbol table in /unix (or whatever it may have been called) no longer matched the actual addresses in /dev/kmem.

Since /proc, /sys et al. are now used to access most kernel data structures in Linux without having to look in /dev/kmem, there should be fewer problems, as the kernel symbol table should not be used as much.

If kernel updates really bug you, then black-list one or more of the kernel packages (on Ubuntu, apt-mark hold does this), and allow all of the package updates that do not affect the kernel. At your convenience, remove the black-list entry, allow the kernel to update, and then reboot the system.

LTS does not mean fewer updates. It just means that you are guaranteed support for a longer period of time. Just because it is an LTS release does not mean that there are fewer bugs that need patching, or that the rate of patch delivery is any slower.

'Tech troll' sues EFF to silence 'Stupid Patent of the Month' blog. Now the EFF sues back

Peter Gathercole Silver badge

Re: Personal opinion

While I agree with your qualification, licensing the patent for someone else to use, provided it results in a real product, would be a perfectly acceptable demonstration of the practicality of implementing a patent. The time scale of 6 months may be a bit short for a full product to be produced, but should be enough for a demonstration.

As we all know, the problem with what is happening is that there is no attempt to turn patents into a product, but the patent is used to extort money from other people, especially for patents that are so obvious they should not have been granted in the first place.

Although later ARM designs may look like designs on paper licensed to other people, the background of ARM is based on solid product development. Acorn produced both ARM-1 and ARM-2 processors, and although they outsourced the fabrication, these were branded as Acorn products.

Mark Shuttleworth says some free software folk are 'deeply anti-social' and 'love to hate'

Peter Gathercole Silver badge

Re: True to some extent but in this case?

The Edge phone looked like it was going to be an interesting thing, but you could get much of the experience for much less than £200.

I picked up a second user Nexus 4 (one of the reference platforms for the Ubuntu phone distro) for £50, and spent about an hour putting Ubuntu Touch on it.

It's my backup phone, and I actually quite like it. I don't like Unity on a laptop, but it really works on a single-task-at-a-time touch-screen device. My one gripe is that there are no real apps for it, although I did nothing myself to add anything to the ecosystem, so I guess I can't really complain. If it had gained enough momentum, I reckon it could have been a contender, but the chances of that were always slim.

I guess that I'll have to look for another quirky backup phone at some point (my previous backup was a Palm Treo, which I kept running long past its useful life because I liked it so much). Anybody any suggestions?

Peter Gathercole Silver badge

Re: Weird

It's not really X-Window vs. Mir. X-Windows, although it will live on for a long time as a compatibility layer, is on the way out.

The war was really Wayland vs. Mir, with a rearguard action trying to defend X-Windows. Several campaigns have still to be fought, but it's less complicated with Mir out of the way.

Although it has a long and illustrious-but-tarnished history, X-Windows is not suitable for all graphics devices. Even with the extensions for direct rendering, it can be slow compared to less abstracted systems, and there have always been security concerns with it, which is a bit strange considering that its major strength was that clients could exist on different systems than the server, as long as there was a network path between them.

It is about time that X was retired, but it will be difficult to get something to the level of ubiquity that X-Windows achieved in the Open Systems era (remember, it was embraced by some of the non-UNIX workstation vendors like Digital, and all mainstream Linux and BSD distributions, but not Android, come with it built in). With Mir disappearing, Wayland will hopefully achieve this, but it is not certain.

Put down your coffee and admire the sheer amount of data Windows 10 Creators Update will slurp from your PC

Peter Gathercole Silver badge

Re: I thought @Adair

Whilst it is quite true that there is representative software that runs on Open Source operating systems, it is not one-for-one compatible.

Don't get me wrong, I'm an Open Source advocate, and have been for a long time, but Open Source application software is often only as good as the time and effort its writers put into it, and this is often not enough to make it completely functionally equivalent to commercial software. This leads to interoperability problems.

Now, for ordinary individuals or SMBs, that is probably OK, but just wait until you engage with another organization that is still wedded to commercial software. You can suddenly find, for some application types, that a document does not render quite right, or that the macros that are used either error, don't work at all, or produce the wrong result, and it becomes a serious issue, possibly risking the viability of the business. This is why most organizations toe the line and use the dominant offerings.

Big businesses like the control that is available via things like Active Directory, and often Open Source alternatives do not have anything like group policies that make marshaling large estates of desktop PCs easier, and that's ignoring cloud-based modern applications.

And then you have the bespoke applications that are specific to certain technologies. If they are only available on Windows, you have no choice (and please don't talk about emulation - it's unlikely to be supported by the vendor and is fraught with problems, and VMs are a sop that still encourages locked-in application/OS links).

What we actually need, and I've said this over and over again, is for application writers to realize that an Open Source OS does not necessarily mean Open Source applications. Commercial software can be delivered on Linux without having to open up the application source (as long as you abide by the LGPL). But we need either a standardized or dominant Linux environment, so that the Linux support requirements are affordable to software companies. That's just not happening, and the landscape is getting poorer (see the Canonical news about reducing ambitions over the last few days).

The Linux community is, unfortunately, letting the very opportunity offered by unpalatable licensing conditions on other application platforms slip through their fingers. The best we can hope for at this time is something like the Chromebook model to provide an alternative, but in a toss-up between the New Microsoft and Google, I'll take the third option, almost without regard to what it is.

Ubuntu UNITY is GNOME-MORE: 'One Linux' dream of phone, slab, desktop UI axed

Peter Gathercole Silver badge

Re: When prototypes go too far

I could do that with my old Sony Xperia SP and an MHL adapter.

MHL adapter plugged into the phone USB port, powered USB hub plugged into the USB socket on the MHL adapter, HDMI cable to a TV plugged into the HDMI socket on the MHL adapter, keyboard and mouse plugged into the USB hub. You could leave it all on a desk, and just plug a single cable into the phone (I believe that Sony actually made a cradle for exactly this). And the phone charged at the same time!

The single-app nature of Android was a bit of a problem, but with ConnectBot I could use it as a terminal to access remote systems, move files between other systems and the phone, and use local apps to process files on the phone.

Manchester pulls £750 public crucifixion offer

Peter Gathercole Silver badge

Re: I see an opportunity

That's in poor taste (maybe even blasphemous!).

The SPB pretty much was Lester, and he is (unfortunately) no longer with us!

Still a great loss.
