* Posts by Peter Gathercole

2689 posts • joined 15 Jun 2007

Bonkers call to boycott Raspberry Pi Foundation over 'gay agenda'

Peter Gathercole
Silver badge

Re: W, as the young people say these days, TF?

I'm not so sure about the latter. I'm sure I've seen Betty and Wilma indulging in a peck on the cheek at times. Maybe that was an indicator of other things going on behind closed doors (just as long as the saber-toothed kitty did not jump back in through the window).

3
0

Search results suddenly missing from Google? Well, BLAME CANADA!

Peter Gathercole
Silver badge

Re: Shootout at the OK court

You are assuming that the company name and trademarks are registered in all countries around the world.

In theory, if a company name is not protected by an international trademark, it could be used by another company in a country that does not recognize the mark.

In this case, Google preventing other trading bodies outside Canada from using a company name that is perfectly legitimate in their own country would adversely affect that other party.

International trademarks and copyrights are a real minefield when the Internet is Global.

Does the WTO register trademarks worldwide?

0
2

AES-256 keys sniffed in seconds using €200 of kit a few inches away

Peter Gathercole
Silver badge

Re: Through a Lens, darkly...

Not even a Lens protects you forever.

IIRC, there were 'dark' lenses appearing by the time of "Children of the Lens", so even the Lens was reverse engineered.

The Arisians always knew from their 'Vision of the Cosmic All' that they were not the ultimate lifeform. That is why they force-evolved the Kinnison clan and then passed the mantle on to them.

3
0

Latest Windows 10 Insider build pulls the trigger on crappy SMB1

Peter Gathercole
Silver badge

Re: Yawn @AC re. reboots

Don't be so sure that Windows printer drivers shouldn't require a reboot.

Most Windows printers rely on GDI, which may require a reboot (or at least a restart of the display system) to register a new printer.

This is what happens when you have a unified display model built into monolithic subsystems in the OS. It's crap, but that's the way it is.

6
1

Software dev bombshell: Programmers who use spaces earn MORE than those who use tabs

Peter Gathercole
Silver badge

Re: A question @John Brown

If you are old enough to remember card punches, you may remember that you could load a format card into the punch that programmed it to put tab stops in the relevant places on the cards you were punching. Somewhere on YouTube, there is an example of someone doing this with an IBM 029 card punch.

It's a very long time since I programmed using punch cards, but in my first job, writing RPGII, the fields in a line in the various program sections were of fixed width, and it was possible to program the punch so that the tab key moved you to the correct column without having to hammer the space bar. It provided quite a useful speedup when punching.

0
0
Peter Gathercole
Silver badge

Re: A question

Inserting tabs anywhere other than the beginning of a line gives different results from inserting a fixed number of spaces.

If you're using a tab after some other text on a line, the tab will take you to the next tab stop. This could be the equivalent of one or more spaces.

For example, if you are currently on column 12, and have tab stops set every 8 columns, pressing tab will take you to column 17.

To do the same with spaces, you would insert 5 spaces.

If you were on column 14, a tab will still take you to column 17, but you would only need 3 spaces to do the same.

This means that you can't get meaningful results with global substitutions of fixed numbers of spaces. Programs like cb are clever enough to properly interpret tabs, and fill in with the variable number of spaces necessary to preserve alignment.
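
To make that concrete, here's a quick shell sketch (GNU expand and sed assumed; the demo file and its contents are made up):

  # next tab stop from a given (1-based) column, with stops every 8 columns
  col=12; echo $(( ((col - 1) / 8 + 1) * 8 + 1 ))    # prints 17
  col=14; echo $(( ((col - 1) / 8 + 1) * 8 + 1 ))    # still 17

  # 'expand' does this arithmetic for you, so alignment survives;
  # a blind fixed-width substitution does not
  printf 'x = 1;\t# one\nyy = 2;\t# two\n' > demo.txt
  expand demo.txt                 # both comments land on column 9
  sed 's/\t/    /g' demo.txt      # GNU sed: the comments now end up on different columns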

I use tabs to align trailing comments in my shell scripts (I know, it's a bad habit, comments should really be on their own lines, if only to inflate the number of lines of code written). Putting one of these files through a global substitution really messes up the formatting.

I did once attempt to standardize on tab stops every 4 columns set in vi to reduce line-wrap, but I used so many systems, each of which had to have its own .exrc file, that I soon abandoned it and reverted to accepting tabs every 8 columns.

The habits of 39 years of writing shell scripts and other free-form languages are difficult to break!

10
4

Stack Clash flaws blow local root holes in loads of top Linux programs

Peter Gathercole
Silver badge

Re: HOW?!

You have to be a bit careful here, because in threaded environments, each thread gets a mini-stack that is actually created on the heap, so overrunning one of these stacks could damage the heap.

You also have variables local to a function context created on the stack, so if local variables are manipulated using unsafe routines that do not perform bounds checking, it is possible to damage surrounding stack frames, which can include the return address for other function calls.

Putting guard pages around each stack frame starts increasing the size of the memory footprint of even the smallest program.

1
0
Peter Gathercole
Silver badge

Re: Why am I not surprised to see sudo there? @hmv

Having "::" on your path is as bad. Also, having a trailing colon on the path will also include the current directory in any path searches.

Other stupid things to do include putting relative directories on the path, and including non-readonly variables in it!
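
A minimal sketch of the check I run for this sort of thing (plain POSIX shell, nothing clever):

  # an empty field in PATH ("::", or a leading/trailing colon) means
  # "the current directory", which is exactly the problem
  case ":$PATH:" in
    *::*) echo "WARNING: PATH contains an empty entry (current directory)" ;;
  esac

  # and the common relative-directory cases ("." or "./something")
  case ":$PATH:" in
    *:.:*|*:./*) echo "WARNING: PATH contains a relative directory" ;;
  esac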

0
0

BOFH: Halon is not a rad new vape flavour

Peter Gathercole
Silver badge

CRTs

For a colour monitor, don't forget the shadow mask.

For early generation monochrome monitors, there used to be an offset bias on the beam deflector so that the beam did not strike the phosphor at right angles, but at an angle that would aim the beam away from someone sitting directly in front of the monitor.

Electrons from the electron gun in a CRT are relatively low energy, and are easily stopped by the metallised inside coating of the glass, and by the glass itself. The energy is not high enough to generate X-rays or gamma rays.

5
0
Peter Gathercole
Silver badge

This was a particularly good one

I just wish more bosses would read them.

29
1

Don't touch that mail! London uni fears '0-day' used to cram network with ransomware

Peter Gathercole
Silver badge

Re: Wouldn't have happened in my day

Pine? Piffle!

mailx, or if that was not available or it was not a UNIX system, mail. Or maybe *MAIL on MTS.

Youngsters!

0
0
Peter Gathercole
Silver badge

Re: windows permissions model is much more flexible than UNIX

UNIX != Linux, just in case you can't read. Plus, there is no one ACL system that spans all UNIX-like OSs.

What I wrote is totally true. You've just responded to a different statement, one that I did not make. The original UNIX permission model is weaker than current Windows, without any question.

Even on Linux, ACL support largely depends on the underlying filesystem, and both AppArmor and SELinux can be, and often are, disabled.

Oh, and because I am a long-term AIX system admin, I've actually been aware of filesystem ACLs since before Linux went mainstream (JFS implemented them on AIX 3.1, which was released in 1990), and RBAC since AIX 5.1 (sometime in 1999 or 2000). I've also used AFS and DCE/DFS, both of which have ACL support and use Kerberos to manage credentials, since about 1993.
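
For anyone who hasn't seen them, the AIX ACL commands have looked like this for a very long time (the file name is just an example, and acledit needs EDITOR set to a full path):

  aclget /data/report.txt                  # show the ACL on a file
  aclget /data/report.txt > /tmp/acl       # dump it, edit the copy...
  aclput -i /tmp/acl /data/report.txt      # ...and apply it back
  acledit /data/report.txt                 # or do the whole thing in an editor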

At the risk of being confrontational, when did you start using computers?

0
0
Peter Gathercole
Silver badge

Re: Fundamental problem in vulnerable OS protected by AV @Prst. V. Jeltz

Here is an on-the-back-of-a-napkin solution for you.

Each user can only access their own files, which are stored in a small number of well defined locations (like a proper home directory).

Make the OS completely inviolate to write access by 'normal' users. Train your System Administrators to run with the least privileges they need to perform a particular piece of work.

Any shared data will be stored in additional locations, which can only be accessed when you've gained additional credentials to access just the data that is needed. Make this access read-only by default, and make write permission an additional credential. This should affect OS maintenance operations as well (admins need to gain additional credentials to alter the OS).

Force users to drop credentials when they've finished a particular piece of work.

If possible, make the files sit in a versioned filesystem, where writing a file does not overwrite the previous version.

Make sure that you have a backup system separate from normal access. Copying files to another place on the generally accessible filetree is not a backup. Make it a generational backup, keeping multiple versions over a significant time. Allow users access to recover data from the backups themselves, without compromising the backup system.
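
As a trivial sketch of the 'generational' part (directory names made up, and the target should of course be a separate, access-controlled backup server, not the general filetree):

  today=$(date +%Y-%m-%d)
  rsync -a --link-dest=/backup/latest /home/ "/backup/$today/"
  ln -sfn "/backup/$today" /backup/latest
  # each dated tree is a complete view of the data, but unchanged files are
  # hard links, so keeping many generations costs very little extra space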

Make your MUA dumb. I mean, really dumb. Processing attachments should be under user control, not left to the system to choose the application. The interface allowing attachments to run should be secured to attempt to control what is run. Mail can be used to disseminate information, but by default it should be text only, possibly with some safe method of displaying images.

Run your browser (and anything processing HTML or other web-related code) and your MUA in a sand-box. There needs to be some work done here to allow downloaded information to be safely exported from the sandbox. Put boundary protection between the sand-box and the rest of the user's own environment.

Applications should be written such that all the files needed for the application to function, including libraries, are encapsulated in a single location, and protected from ordinary users. The applications should be stored centrally rather than deployed to individual workstations, and run across the network, with credentials used to control the ability to run them. The default location that users save data to in all applications should be unique to the user (not a shared directory), although storage to another location should be allowed, provided that the access requirements are met.

Use of applications should be controlled by the additional credential system described for file access.

Distributed systems should not allow storage of local files except where temporary files are needed for performance reasons, or they are running detached from the main environment. These systems should be largely identical, and controlled by single-image deployment, possibly loaded at each start-up. This allows rapid deployment of new system images. The image should be completely immune to any change by normal users, and revert back to the saved image on reboot.

For systems running detached (remote) from the main environment, allow a local OS image to be installed. Implement a local read-only cache of the application directories which can be primed or sync'd when they are attached to home. Store any new files in a write-cache, and make it so these files will be sync'd with the proper locations when they are attached to home. Make the sync process run the files through a boundary protection system to check files as they are imported.

OK, that's a 10 minute design. Implementing it using Windows would be problematic, because of all of the historical crap that Windows has allowed. A Unix-like OS with a Kerberos credential system would make this model much easier to implement (I've seen the bare bones of this type of deployment on Unix-like systems already, using technologies such as diskless network boot and AFS).
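
The 'acquire, use, drop' credential cycle maps quite naturally onto Kerberos tickets and AFS tokens, something like this (the principal name is made up):

  kinit peterg/project-data    # gain the additional credential
  aklog                        # turn the ticket into an AFS token for the cell
  klist                        # confirm what you are currently holding
  # ...do the specific piece of work that needed the extra access...
  kdestroy                     # drop the lot when you're finished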

Not having shared libraries would impact system maintenance a bit, because each application would be responsible for patching code that is currently shared, but because the application location is shared, each patching operation only needs to be done once, not for all workstations. OS image load at start-up means that you can deploy an image almost immediately once you're satisfied that it's correct.

Users would complain like buggery, because the environment would be awkward to use, but make it consistent and train them, and they would accept it.

BTW. How's the poetry going?

2
2
Peter Gathercole
Silver badge

Re: Fundamental problem in vulnerable OS protected by AV @Ptsr.V Jeltz

Unfortunately, many of the organizations I've worked at recently have nearly wide-open file-shares, such that my account would have been able to damage a significant proportion of the data.

As a long-term UNIX admin, I'm used to having files locked down by individual user ID, with group permissions to allow individuals to access those extra files they need, at the appropriate access level. With some skill, it is possible to devise a model where by default you have minimal access, and you acquire additional access as and when you need it, with additional access checks along the way (think RBAC, with you having to add roles to your account as you need them).
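
On AIX, with enhanced RBAC, that 'add roles as you need them' pattern looks roughly like this (the role name is made up):

  rolelist            # roles this account is allowed to assume
  swrole FS_Admin     # start a sub-session with that role active
  rolelist -e         # confirm the roles currently in effect
  # ...do the privileged work...
  exit                # leave the sub-session and the role goes with it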

The Windows permissions model is much more flexible than UNIX's, so not using it properly to protect information is almost criminal. Too many organizations (but not all, I admit) do not use it to its fullest capabilities.

There have been several vulnerabilities published where just displaying an HTML mail can execute code. In addition, launching an application to handle an attachment is merely one click in many mail systems, especially when the actual attachment type can be obscured. Thus, building a sandbox for the mail system and the applications that handle attachments (what I was aiming at) is do-able. History indicates that vulnerabilities like this have happened in the past, and I do not have confidence that there are not more to find. Ease of use always seems to have triumphed over security in much software.

The recent attacks appear to hinge on being able to launch client-side code without sufficient control, in an environment where the user's credentials are sufficient to do significant harm. The results suggest that sufficient care had not been taken to segregate data access, contrary to your assertion that administrators do. If it had, the results would not have been nearly as bad as reported.

IMHO, security should be paramount in this day and age, and usability should always be secondary.

2
2
Peter Gathercole
Silver badge

Fundamental problem in vulnerable OS protected by AV

If AV is your primary defense against this type of attack, then you've got a problem.

There will always be a lead time between the appearance of this type of attack and AV systems identifying it, blocking it, and the update being deployed. This is unlikely to be less than 24 hours, and probably much longer, as organizations rarely roll out AV updates daily.

It really surprises me that we have not seen more sophisticated malware, with constantly changing content and delivery vectors. I know that AV systems are trying to become heuristic to avoid that type of threat, so they make an attempt to programmatically identify suspicious traffic, but this can lead to false positives.

OS and application writers (of any flavor) should make sure that easily exploited vulnerabilities (like allowing mail attachments to be able to execute code) are either not present (preferably) or patched very quickly, and administrators should make sure that access to data is controlled and segregated to limit the scope of any encryption attack (at this point, running your MUA in a sandbox looks good!).

Whenever I see "Avoid messages with a subject line of..." then it is clear that the malware writers just aren't really trying very hard. Fortunately. Maybe they don't have to because the attack surface is so large.

7
0

Lockheed, USAF hold breath as F-35 pilots report hypoxia

Peter Gathercole
Silver badge

Re: O2 many issues @Dave 15

The Illustrious class of carriers had much too small a flight deck to operate conventional fixed wing aircraft operationally.

While it would have been possible to land a plane on the flight deck, it would have to be empty, requiring all other aircraft to be struck below while the landing was happening.

One of the advantages of the angled flight deck (a British innovation, and one not fitted to the through-deck cruisers - sorry, light carriers) was to allow concurrent flying-on and off operations.

Before that time, a carrier was normally either launching or recovering aircraft, not both (this was because if you missed the arrester wires, you needed a clear space to throttle up and get back into the air in order to make another attempt). There were some experiments with barriers, but they tended to damage the aircraft in an arrester-wire miss; they were mainly used if an aircraft was already damaged.

1
0
Peter Gathercole
Silver badge

Re: O2 many issues @Dave

That depends on what you call a fast jet!

The only supersonic jet that was deployed on UK carriers was the F-4K Phantom II (FG.1), which was a US design re-worked with British engines and avionics. Only the Ark was capable of flying the F-4K, as Eagle had not been fitted with the reinforced and water-cooled blast deflectors that allowed the Ark to operate them. This meant that the Eagle was withdrawn from service before the Ark, even though it was actually in a better state of maintenance (I very sadly saw her in her last days, moored in reserve off Drake's Island in Plymouth Sound).

Ignoring the Harrier, the last UK-produced 'fast' carrier plane was the Blackburn Buccaneer, which was a formidable surface attack aircraft, but not supersonic. Prior to that it was Sea Vixens, Sea Venoms, Scimitars, and Sea Hawks. All of these were designed in the '40s and '50s, and were regarded as 1st or 2nd generation jets at best.

Amusing story. The F-4K needed afterburners in order to launch with a full weapons load (the Spey engines were less powerful without afterburner than the US General Electric J79 engines fitted to the F-4J). When joint operations with the US happened, it was found that the heat of the afterburners, and the increased angle resulting from the lengthened nose wheel, would soften and melt the deck and blast deflectors on the US carriers, which meant that the UK planes were not welcome on them.

1
0
Peter Gathercole
Silver badge

Re: O2 many issues @Mark Demster

The US EMALS system is having problems at the moment, and if one had been fitted to one of the UK carriers, it would have taken almost the entire electrical output of the gas turbine/diesel electric powerplant in the QE for the duration of the recharge. This is probably the main reason that EMALS was rejected as a late addition.

Besides, who in their right mind would fit only a single catapult to such a large military asset? One mechanical failure would render the significant benefit of such a carrier useless, turning it into a liability in a combat situation.

The EMALS system uses an electro-mechanical kinetic energy storage system that draws significant power during the recharge. It is notable that the Ford sub-class of the Nimitz design has a higher electrical output than the Nimitz, mainly (but not entirely) to provide power for the EMALS, and it will not be possible to retrofit EMALS into the older Nimitz carriers.

The QE and PoW should have been designed as nuclear ships from the outset, but the general dislike of nuclear in the UK Parliament and population has resulted in ships that will succeed or fail on the back of one of the most expensive, complex and apparently troublesome aircraft ever created, and that is from a US contractor who has built a maintenance system that allows them to dictate how the aircraft can be used.

AFAIK, this will include the carriers not being able to do engine replacements in the aircraft without returning them to a maintenance base, which may not even be in the UK. Certainly not while at sea. Whose bright idea was that? Compare that with the F and F/A-18, where the aircraft can be stowed as sub-assemblies, and assembled or used as spares while on active deployment (and which would have been much cheaper and available now!).

1
0
Peter Gathercole
Silver badge

Re: Top Gun @Jake

B@$t4&d.

I could cope with Berlin, but Disney....

12
0

Ever wonder why those Apple iPhone updates take so damn long?

Peter Gathercole
Silver badge

Re: no no no no no no no, Apple @DougS

I don't know whether you're not thinking this through, don't really understand the differences between different filesystem types, or are just naive.

It is not easy to, say, do an in-place conversion from EXT3 to NTFS. Everything from the tracking of free space to block and fragment allocation and metadata is different between the filesystems, meaning that converting the filesystem requires every file to be read and re-written. This will effectively destroy the original filesystem while creating the new one, meaning that a roll-back is as intensive and risky as the conversion.

Now if the changes between the filesystem types are evolutionary rather than revolutionary, it may be possible to do an in-place upgrade. So, it is possible to upgrade from EXT2 to EXT3, because most of the filesystem structures are the same or very similar. The same is true of EXT3 to EXT4. But these are a family of filesystems, designed for backward compatibility.
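
That evolutionary, in-place path is literally a couple of commands from e2fsprogs (the device name is made up, and the filesystem should be unmounted and fsck'd first):

  tune2fs -j /dev/sdb1                                # ext2 -> ext3: just add a journal
  tune2fs -O extents,uninit_bg,dir_index /dev/sdb1    # ext3 -> ext4 feature flags
  e2fsck -fD /dev/sdb1                                # mandatory check afterwards
  # note that existing files stay in the old block-mapped format; only new
  # files get extents, which rather proves the "same family" point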

If APFS (I'm soooo glad they did not call it AFS, which has been used at least once already) keeps the files in place, and just creates new metadata in free space, as you possibly suggest, it would almost certainly be possible to do this without touching the original data or metadata. But does something like this actually count as a 'new' file system, rather than a new version of the old filesystem?

I would also be interested in how much wear the flash memory will suffer from repeated writing during these test upgrades.

4
1

Tech can do a lot, Prime Minister, but it can't save the NHS

Peter Gathercole
Silver badge

Re: WTF!?

35 years is for full state pension entitlement, but you don't stop paying after 35 years of contributions if you are taxed by PAYE. You still see those NI deductions. They don't stop.

3
0

UK PM May's response to London terror attack: Time to 'regulate' internet companies

Peter Gathercole
Silver badge

Re: V For Vendetta

You ought to read the graphic novel. There are several threads of government corruption and depravity that got lost in the translation to the screen, good though the film is.

Of course, for the ultimate bleak experience, you need to read it in the original black and white, but the story was never finished in Warrior before it ceased publication. Damn Marvel and their obsession with protecting a name that was never theirs to begin with.

I never did get to read the end of Marvel/Miracleman. As I understand it, it was published in the US, and was available for import, but never published in the UK. Maybe I need to hit Ebay.

Edit. Soooo wrong. It was published. I'm just out of date! Some good reading ahead, I think.

0
0
Peter Gathercole
Silver badge

Re: And today I have to...

That's fine, as long as you give HMG a back-door to the encryption. It'll be encrypted, but they will still be able to read the data, so they'll be happy.

They'll pat you on the head, and give you an MBE, and stand you up as a good example to follow.

0
0

Lexmark patent racket busted by Supremes

Peter Gathercole
Silver badge

Re: Epson extortion

Many Epson printers can have their heads cleaned. Look on YouTube for videos of how to do it.

We've discussed this before at an earlier stage of this very issue, and here is a comment I made at the time.

1
0
Peter Gathercole
Silver badge

Re: Lexmark loses twice?

I'm pretty certain that Lexmark are out of the new inkjet market. Any you see for sale now are old stock, which I wouldn't touch with a barge pole.

They still make usable laser printers, but their only activity in the inkjet market is selling cartridges.

I like older Epson printers, because the cartridges are just ink buckets, and the fixed heads are robust enough to allow them to be cleaned. Only problem is the ink sponge counter that needs to be reset once in a blue moon.

0
0

Microsoft Master File Table bug exploited to BSOD Windows 7, 8.1

Peter Gathercole
Silver badge

Re: More like from the 1970s

Early UNIX used to only export the state of things via the /dev/mem and /dev/kmem files, which mapped the whole system's memory, and the memory image of the kernel respectively. It was normal to open /unix and extract the symbol table and then open /dev/kmem and seek to the location of the kernel data structure you were interested in.
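
In shell terms, the read-only version of that trick looked roughly like this (the symbol name is just an example, it needs root, and it only makes sense on systems that still have /dev/kmem):

  # the symbol's address, from the kernel image's symbol table
  # (field positions vary between nm flavours; this assumes BSD-style output)
  addr=0x$(nm /unix | awk '$3 == "lbolt" {print $1}')
  # seek to that address in kernel memory and pull out the value
  dd if=/dev/kmem bs=1 skip=$(( addr )) count=4 2>/dev/null | od -An -tu4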

These files were set so that you had to have a real or effective ID of root in order to read them, and it was drummed into admins that they did as little as possible when logged in as root, to reduce the risk of inadvertent or malicious damage to the system. Scribbling over either file would more than likely crash the system, or at least some of the processes.

I remember many years ago there was a bug in the UNIX Version 7 TU11 driver that would render a tape drive unusable. I used to open /dev/kmem read-write with db or cdb (can't remember which) in order to manually unset the lock to allow me to use it again without rebooting. I don't think I ever identified the cause of the drive being locked.

Later in UNIX, syscalls were added to give more guarded access to a number of kernel data structures.

/proc was a Linux thing that makes some operations much easier, and has been adopted by some UNIXs. /sys may follow, but I don't think anybody's ported, or likely to port, /udev, dbus or kms to UNIX.

2
0

BA's 'global IT system failure' was due to 'power surge'

Peter Gathercole
Silver badge

Re: Back-up, folks?

We hear about the failures. We very rarely hear where site resilience and DR worked as designed. It's just not newsworthy.

"Stop Press: Full site power outage hits Company X. Service not affected as DR worked flawlessly. Spokesperson says they were a little nervous, but had full confidence in their systems. Nobody fired".

Not much of a headline, is it, although "DR architect praised, company thanks all staff involved and Accountants agree that the money for the DR environment was well spent" would be one I would like, but never expect, to see.

I know that some organisations get it right, because I've worked through a number of real events and full exercises that show that things can work, and none of the real events ever appeared in the press.

8
0
Peter Gathercole
Silver badge

Re: Ho hum

It does not have to be quite so expensive.

Most organisations faced with a disaster scenario will pause pretty much all development and next phase testing.

So it is possible to use some of your DR environment for either development or PreProduction.

The trick is to have a set of rules that dictate the order of shedding load in PP to allow you to fire up the DR environment.

So, you have your database server in DR running all the time in remote update mode, shadowing all of the write operations while doing none of the queries. This will use a fraction of the resource. You also have the rest of the representative DR environment running at, say, 10% of capacity. This allows you to continue patching the DR environment.

When you call a disaster, you shut down PP, and dynamically add the CPU and memory to your DR environment. You then switch the database to full operation, point all the satellite systems at your DR environment, and you should be back in business.

This will not give you a fully fault-tolerant environment, but it will give you an environment which you can spin up in a matter of minutes rather than hours, and it will prevent you from having valuable resources sitting doing nothing. The only doubling up is in storage, because you have to have the PP and DR environments built simultaneously.

With today's automation tools, or using locally written bespoke tools, it should be possible to pretty much automate the shutdown and reallocation of the resources.
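
On IBM Power kit, for instance, the scripted reallocation is just HMC commands of this general shape (system and partition names made up):

  # shed load: take memory and processors away from PreProduction...
  chhwres -m p780-A -r mem  -o r -p preprod01 -q 65536
  chhwres -m p780-A -r proc -o r -p preprod01 --procs 12
  # ...and give them to the DR partition
  chhwres -m p780-A -r mem  -o a -p dr01 -q 65536
  chhwres -m p780-A -r proc -o a -p dr01 --procs 12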

One of the difficult things to decide is when to call DR. Many times it is better to try to fix the main environment rather than switch, because no matter how you set it up, it is quicker to switch to DR than to switch back. Get the decision wrong, and you either have the pain of moving back, or you end up waiting for things to be fixed, which often takes longer than the estimates. The responsibility for that decision is what the managers are paid the big bucks for.

22
0

UK ministers to push anti-encryption laws after election

Peter Gathercole
Silver badge

Re: Sorry High Street Bank

As several people have already pointed out, it's not banning encryption, it's forcing the large companies to give UK gov a backdoor.

The idea is flawed not because it will make encryption illegal, but because keeping a backdoor secret is impossible. Once it is leaked, and it will leak, everybody will have to change their encryption. Think how disruptive replacing insecure versions of SSL/TLS has been. Backdoors leaking would be much worse than that!

The government will try to make using encryption that does not include a backdoor illegal, and will demonize anybody found using such a system, probably by adding laws to the statute book so that anybody found using encryption that is not readable by the intelligence services will be deemed a terrorist. But even that idea is flawed.

This is because, if they find a data stream or data set on a computer that they don't understand, they will immediately assume that it is obscured by a type of encryption that they've not seen before.

"Hey, I can't make any sense of the data in this /dev/urandom file on your computer. Tell us how to decrypt it or we'll throw you in jail for three months for not revealing the key, and then consider a longer jail sentence for using an encryption method that we can't read"

This is obviously a case constructed to illustrate the stupidity, and could easily be challenged in court. But what about seemingly random observation data from things like radio astronomy or applied physics? And if there are rules allowing this type of data to even exist on a computer, how do you prevent steganography - hiding data inside images or other data?

At some point, people wanting to hide things will resort to book ciphers using unpublished or even published books, which will only be decryptable by knowing the exact book that is being used, or by cataloging all texts ever written. Fortunately, despite Google's best efforts, this is something that will remain impractical for some time.

It's a real minefield that there are no good or consistent ways of regulating.

18
0

Bye bye MP3: You sucked the life out of music. But vinyl is just as warped

Peter Gathercole
Silver badge

Re: My old music never reached CD let alone digital download

eBay is your friend here, but avoid the club DJ turntables. Go for something like a second-hand Pro-Ject Debut 2 or 3, or a Dual 504 or 505, which would give you reasonable performance at a quite reasonable price (watch out for the later Pro-Ject Debut turntables; they've got a bit big-headed because of their success, and have put their prices up).

You may need a moving magnet (or moving coil if you go big) pre-amp to play it through a modern Hi-Fi, however, as most Hi-Fi nowadays does not have a phono input.

(And don't forget the decent speakers!)

3
0
Peter Gathercole
Silver badge

Well there you have it.

If you are basing your vinyl listening on picture discs, then you've got a really jaundiced sample.

You need good quality black vinyl to get the best experience.

I recently bought the first of the Beatles Vinyl Collection partwork, which was Abbey Road, my absolute favorite Beatles album. This has been recently re-mastered, and the pressing is on 180g high-quality vinyl, and it's really refreshing to listen to such a good pressing. Unfortunately, they re-mastered from the original master tapes, and I find the top end a bit muted, and I notice that the cymbals in tracks like Something and Here Comes the Sun have just disappeared compared to previous pressings.

It's a shame that the series was going to be so expensive. £17 for a single album and £25 for a double album is a bit too much. Overall, it would have cost over £450 for the entire collection.

3
0
Peter Gathercole
Silver badge

Modern sound engineering

I think he's complaining about the engineering and mastering of modern recordings rather than the actual limits on the media.

I don't buy much modern music, but I was appalled by the mastering of "Memory Almost Full" by Paul McCartney when I bought it (stop sniggering at the back, he can still write a good song or two). The first thing I did was to rip the CD and put it onto my laptop and phone, where I listened to it quite a lot, and it sounded OK.

A while back (just after I added a DAC to my Hi-Fi - see a previous post in this thread), I put the CD on and came to the conclusion that the sound engineering on this album is just crap. It's a mainly acoustic album, but it's been pushed so that it's right at the top of the dynamic range, and as a result, it sounds terrible on a decent HiFi. It actually sounds like it's clipping frequently. I guess that the rip I did (using one of the Linux MP3 encoders) must have cleaned it up. Either that, or the DAC or the pre-amp in my Hi-Fi amp is being pushed beyond its capabilities, but I don't hear this on other CDs.
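
If you want to put a number on it rather than trust your ears, SoX will show how hard a rip has been pushed (assuming sox is installed; the file name is made up):

  sox rip_track01.wav -n stats
  # compare "Pk lev dB" (how close to full scale) with "RMS lev dB";
  # when the gap between them is tiny, the mastering has been slammed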

Paul is a pro, so I guess that either his hearing is dropping off, or he's never listened to the CD. I cannot otherwise imagine how he let this audio mess (just shut up, I think the songs are quite good) get released.

5
0
Peter Gathercole
Silver badge

Re: Rather than like buying a BMW

I had an interesting Digital Epiphany a couple of years ago.

I have a HiFi cherry-picked from the high end of the budget part of the market over many years, with one weak element in that I used whatever CD player I could get (although I always bought a HiFi brand name, the last one was a Technics).

With this set-up, over several different CD players, I always preferred my vinyl copies over the CDs whenever I had the same music on both formats.

I took the attitude that a CD player was a CD player because, when all is said and done, prior to the DAC in the player it was all digital, and modern DAC chipsets were cheap and good enough to not matter any more!

At one car boot sale, I found someone selling a Marantz CD player with digital output, and a Cambridge Audio DACMagic 2, at a very reasonable price.

Now, this is not a high-end DAC, and got rather mixed reviews when it was first produced. But the difference it made when playing my CDs compared to the Technics was absolutely astounding! And I also found that the DACMagic was better than the DAC in the Marantz CD player as well. I could not believe my ears at the clarity and instrument separation, pretty much identical to the vinyl, and spent many hours repeating the comparison of vinyl to CD, much to my wife's dismay ("why do you have to listen to the same track more than once?")

As such, I've realized that my preference was not really vinyl vs. CD, but a good turntable/cartridge vs. a mediocre CD player. I wonder whether there are other people here who have decent turntables and cartridges, but merely adequate CD players?

I still listen to both, but now the surface noise on vinyl, which I accepted as a necessary evil, is actually more of an issue than it used to be with the old CD players.

12
0

Dell BIOS update borks PCs

Peter Gathercole
Silver badge

If the BIOS craps out in the POST

If the BIOS craps out in the POST (Power On Self Test), it will not boot whatever you do.

If replacing the BIOS chip requires the motherboard to be removed (laptops are not designed to be easily maintained), then replacing the motherboard will be a quicker and possibly cheaper fix (for Dell). Also, replacing surface-mount components is far from easy.

Normally the BIOS resides in flash memory nowadays (rather than EEPROM). It used to be that there was a small amount of ROM that could act as a failsafe to allow you to reflash a corrupt BIOS, but I suspect that if that code is still included, it resides in a different partition in the same flash memory chip. If the flash memory gets completely wiped, then you've lost the failsafe as well.

Certain mobo manufacturers (Gigabyte come to mind) used to have a Dual BIOS feature, where if you updated the BIOS, you only did one side, and you had the unchanged other side to fall back to if it failed. That gave you a way of proving a new BIOS without bricking the system.

Some boards also have I2C or SMBus (or other) ports that may allow the flash to be reprogrammed in situ, but often the headers are not soldered on the board to allow it to be used.
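
Where you can get at the flash from a running OS, or with an external programmer clipped onto the chip, flashrom is the usual tool (the programmer names are just examples, and whether 'internal' works at all depends on the chipset and any vendor lockdown):

  flashrom -p internal -r backup.bin      # always read the current image first
  flashrom -p internal -w new_bios.bin    # reflash from a working OS
  flashrom -p ch341a_spi -w backup.bin    # recovery route with a cheap external SPI programmer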

3
0

Bloke charged under UK terror law for refusing to cough up passwords

Peter Gathercole
Silver badge

Re: And soon.... The clock will strike thirteen @M7S

I happened across the restart of The Prisoner on Monday myself.

Even though I've seen it before, and I have the complete series on DVD (actually, a largely unwatched impulse purchase from a car boot sale), it had not sunk in before that the dialogue in the opening credits, ending with "I am not a number..." etc., was re-recorded for whoever was Number 2 in that episode.

One of the benefits in watching the episodes close together, I suppose.

We just don't make series like that in the UK any more, I guess because we don't have characters like Lew Grade in our media companies.

4
0
Peter Gathercole
Silver badge

Re: And soon.... The clock will strike thirteen

As a total aside, the clock striking 13 was an interesting plot point in the Captain Scarlet episode "Big Ben Strikes Again"

Totally irrelevant to this discussion, I know.

4
0

Why Microsoft's Windows game plan makes us WannaCry

Peter Gathercole
Silver badge

Re: Hang on another minute...

I do not believe March to May is an adequate time. What if you've got 200 items of software to regression test. Ignoring the time to actually patch the extensive estate, that's over 2 software packages to regression test every day (if you can use the whole three months), including weekends and public holidays. And all on a heterogeneous hardware estate with attached specialist equipment!

What if it was 2000 software items? How many of the IT support people know the applications they support well enough to be able to perform the regression test? Or do the users have time to actually test the full functionality of their packages? (Hint: a day testing a package is a day that the user can't be doing their normal job.)

For a large organization, a proper regression test of their software portfolio will take months.

It would not be so bad if the patches were just that - patches that do not change any other function. But Microsoft do like to include functional changes in their patch bundles.

Regression testing Windows 10 in a business environment is going to be an absolute nightmare, and I'm glad I'm not in that game.

3
0

DeX Station: Samsung's Windows-killer is ready for prime time

Peter Gathercole
Silver badge

@AC

I'm sorry!

The Osborne 1 did not have batteries as standard. It needed mains power to operate. And quite a lot of that.

Any battery pack was a 3rd party add-on.

0
0
Peter Gathercole
Silver badge
Happy

That's one hell of a pocket!

Do you also have car batteries and a mains inverter in there as well?

10
0

MP3 'died' and nobody noticed: Key patents expire on golden oldie tech

Peter Gathercole
Silver badge

I don't understand how it 'died'

If the patents are not renewed, then the technology moves into the public domain, which could mean that we see more use of it, not less.

Whether we do or not is another matter, but I would guess that there are still a lot of optical players and media devices which are happier with MP3 files than some of the later (patent encumbered) audio formats.

59
0

It's been two and a half years of decline – tablets aren't coming back

Peter Gathercole
Silver badge

I found an innovative use

I found an excellent use for my 10" Android tablet.

When I last sang with a choir, I used it as a musical rehearsal device for learning the music. If you can get the music in MIDI form, then there are applications that will convert it back into sheet music and, provided it is broken into appropriate tracks, play the individual parts as per your selection and display the sheet music at the same time.

Couple it with a pair of headphones, and you can then follow the music and hear the parts all on one easily carried device. Mind you, bursting into song in the middle of a train or plane does not go down too well with the other passengers.

You can also have an audio or video recording of a performance as well, and if you want to go that far, record your own rehearsals with the rest of the choir/orchestra to allow you to review the session.

I have seen musicians use them in place of paper music on their music stands, with the music auto-scrolling so they don't have to turn pages.

I also use it while I'm out for reading comics and books, and watching shows I rip to SD card. It's so old that I can't remember when I got it (it got Android 4.0.4 soon after I got it), and it has an 8000mAh battery that puts smartphones to shame. I still get 4-6 hours of continuous use from one charge, although it can get really slow until the firmware is re-flashed (no TRIM support for the flash filesystems).

0
0

Huawei picks SUSE for assault on UNIX big iron

Peter Gathercole
Silver badge

Re: hot swapping - old news

Tandem is old news.

Unfortunately, the RAS features that used to be around in Tandem and Stratus (bloody hell, Stratus still exists!) are apparently features that vendors do not consider useful any more.

At the same time, customers have been encouraged, for power and supposedly manageability reasons, to consolidate all their systems onto ever-larger single systems divided up by virtualisation.

And what happens when parts fail and need swapping? Well, I/O cards can be swapped, as can drives, power supply components and fans. But once you get to core components like CPU and memory, the only way is to take part or all of a system out of service.

Even in the modular IBM Power Systems (770, 780, 870 and 880) systems, where supposedly you can power down individual drawers, I've never come across a situation where a CPU or memory repair action has suggested just powering down the affected processor drawer; it always wants the whole system powered down.

The solution to this? Well, on-the-fly workload migration is normally the current suggestion, but that means that you have to keep the same capacity as your largest system spare, and there will be performance and time constraints while migrations are carried out. Otherwise, you de-construct your workloads, and place them onto smaller systems that you can afford to have down for service actions without affecting the service.

Of course, hardware will often continue to run in a degraded state now (if a CPU core or memory DIMM fails, the rest of the system may well continue to run), meaning that you can plan your outages rather better than you used to be able to, but to restore full performance, some outage will probably be required.

If Huawei can produce servers at a reasonable cost where CPUs and memory can be replaced without shutting a system down, I can see current buyers of Power and SPARC systems looking at them very carefully, but it will need some OS modifications to allow hardware to be disabled and not considered for work. It's possible, but will need work in the scheduler and the memory allocation code. Power, IBM i and AIX can do some of this already, but I'm not sure that Linux on Power can, and I think on Intel it's still in its infancy.
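
For what it's worth, the Linux side of 'disable hardware and stop considering it for work' already exists as a sysfs interface, although whether an offline request actually succeeds is another matter (CPU and memory block numbers made up; root required):

  echo 0 > /sys/devices/system/cpu/cpu3/online              # stop scheduling work on CPU 3
  echo 1 > /sys/devices/system/cpu/cpu3/online              # bring it back
  cat /sys/devices/system/memory/memory42/state             # 'online' or 'offline'
  echo offline > /sys/devices/system/memory/memory42/state  # try to evacuate and offline the block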

But with the integration of memory and PCIe controllers in modern processor dies, system builders will have to know a whole lot more about the internal architecture of the systems to provide resilient configurations that will allow processor cores with all their associated on-die controllers to be removed without affecting the service.

I personally still favour a larger number of smaller systems, rather than relying on increased complexity in the design, and I think that, whether knowingly or not, customers embracing cloud are making the same decisions.

0
0

iPhone lawyers literally compare Apples with Pears in trademark war

Peter Gathercole
Silver badge

Re: Apple Records predates Apple Computers

There were three separate lawsuits, with Apple Corp. taking Apple Computer Inc. to task over the use of 'Apple' and an Apple logo in conjunction with music.

Apple Corp. won the first two, and received modest damages and a more explicit license deal, but the third one in 2003-2006 centered around the use of Apple and the Apple logo on the iTunes store, which was clearly about music.

In this case, the judge ruled in favor of Apple Computer, taking a very (IMHO) loose interpretation of the maybe poorly worded section on content delivery and physical media (to me, it looks like the judge did not think that the electronic delivery of digital music conflicted with the previous agreement, which he interpreted as covering the delivery of music on physical media - clearly digital music delivery was a disruptive technology).

Although Apple Corp. said they would appeal, it was likely they didn't have the financial resources, and eventually Apple Computer offered a settlement, part of which transferred the ownership of the Apple logo to Apple Computer, with a perpetual license allowing Apple Corp. to continue using their logo. I think it also included an agreement to allow Beatles music to be delivered through the iTunes store, something that Apple Corp. had explicitly blocked previously, presumably because of the ongoing disagreement.

So there is now nothing Apple Corp. can do to anybody w.r.t the logo, as Apple Computer Inc. own it outright.

It's amazing how much can be achieved by the application of money.

6
0
Peter Gathercole
Silver badge

Re: Does anyone remember ...

I was going to bring up Peach, but then I remembered that it was an Apple ][ clone, so any infringement case would have been interesting, to say the least!

2
0

IT error at Great Western Railway charging £10k for 63-mile journey ticket

Peter Gathercole
Silver badge

Re: small city @Spudley

Yes. I do know. Typo.

I had two kids go to Bridgwater College and one of them then went to SCAT. One of the first things I learned was that the 'e' was missing, but when quickly typing a post, it's easy to forget. If you look back at my posts, it's hard to find one that does not have a spelling, typographic, punctuation or capitalization error, no matter how hard I try to get them right.

I think the real reason for the tone of my comment is that the conversion of first Polytechnics, and now Further Education colleges to 'University' status has, in my opinion, devalued degrees, and damaged the vocational education system in the UK. The current system still churns out graduates in 'soft' disciplines, who then struggle to work in their chosen field, and end up not using their education in the jobs that they end up in. And the flip side is that 'real' universities are starved of resources and funds for the required 'hard' disciplines, leading to shortages of STEM graduates in industry and education, and very valuable intermediate level qualifications in these subjects (BTEC HNC and HND for example) have pretty much disappeared.

2
0
Peter Gathercole
Silver badge

Re: small city

Yes, it has neither a Cathedral nor a University, although the Somerset College of Art and Technology (SCAT - what a bad acronym to use), rebranded itself first as Somerset College, and recently merged with Bridgewater College to form University Centre Somerset.

According to the list at https://www.gov.uk/check-a-university-is-officially-recognised/recognised-bodies, it cannot award degrees itself, and its tag line is "In partnership with Plymouth University, Oxford Brookes University, UWE Bristol & The Open University", so I suspect that it relies on Plymouth, Oxford Brookes, UWE and the Open University for the award of the degree.

This does not make it a University, in my opinion, so Taunton does not qualify as a city.

5
2

systemd-free Devuan Linux hits version 1.0.0

Peter Gathercole
Silver badge

Re: Honest inquiry @myself

Hey!

I've just looked at TUHS, and if you're interested in UNIX source code, lots of interesting stuff has appeared there recently.

Not just source for Edition 8, but Editions 9 and 10 as well.

The biggest revelation I had was when I found the source for something called pdp11v, which is also called PDP-11 3+2.

Have a look, and work out what it is yourself! Remember, even large PDP-11s were really rather small (maximum 4MB memory, small 16KB memory segments, maximum of 128KB text and data size for single processes without some fancy overlaying), so someone having got this running was a real feat.

0
0

Farewell Unity, you challenged desktop Linux. Oh well, here's Ubuntu 17.04

Peter Gathercole
Silver badge

Re: Won't install properly @Peter R. 1

I hope your comment was not aimed at me!

If it was, I think you've missed out the gist of what I was saying. If you install or buy some bleeding edge or niche hardware for Windows, something that is not in the normal Windows driver repository, the vendor provides this thing, normally a shiny silver disk or a link to a web site, that adds the support for that device to Windows.

Without it, you would have as much trouble running that hardware on Windows as many people experience on Linux. As an exercise, try installing Windows on one of these problem systems just from Microsoft media, and see how much stuff doesn't work without the mobo and other driver disks from people other than Microsoft. It's an education.

The problem-hardware vendors do not provide their own drivers for Linux, and this is the biggest problem for niche hardware. You cannot expect anybody else in the Linux community to reverse-engineer hardware drivers for this type of device. If it's important, do it yourself, and contribute it back to the community!

Do not expect someone like RedHat or Canonical to provide drivers for Linux when Microsoft do not do it for Windows (remember, even drivers in the Windows repository are often provided by the vendor, not Microsoft themselves). It really is the vendor's responsibility to ensure that their hardware is supported, not the OS community's.

It is a wonder that as much works as it does with just the base Linux install media. A testament to all the hard work that has been done, often by volunteers or philanthropic companies.

What I find more cynical is those vendors who provide Mac OS drivers which would differ comparatively little from the Linux ones, but don't actually bother with that last step of packaging and testing for Linux.

8
0
Peter Gathercole
Silver badge

Re: My thoughts on this ... @Julian

Before the turn of the century, I liked the version of twm that added a virtual desktop. The version I used was called vtwm.

I actually found the source for it a while back, and compiled it up. It still does the main part of the job I need a window manager to do quite well (and in an absolutely tiny footprint), but the lack of integration with things like the network manager for wireless keys, the absence of applets, and a number of other niggles prevented me from going back to it full time.

I suppose I could have spent more time investigating getting it working better, but I just lost interest. We get too used to the extra luxuries of modern desktops, unfortunately.

0
0
