
* Posts by Peter Gathercole

1732 posts • joined 15 Jun 2007

Hunt: We'll slightly inconvenience pirate sites

Peter Gathercole
Silver badge
Unhappy

@Nader re: physical borders

That's exactly it. The content producers demand different distribution rights on their content depending on the physical location of the consumer.

If you look at the American TV-on-demand sites you will find that they have negotiated the rights to the content *IN THE US ONLY*. This is normally because other companies have bought the rights for the same content in other countries.

As an example, let's assume that Universal Media Studios make another series of Heroes. They license commercial broadcast in the US to NBC and in the UK to Sky.

If someone in the UK can watch or purchase it from the NBC on-demand service, they might not take out a Sky subscription, causing lost revenue to Sky.

So a condition of the licence that Sky enters into with UMS is that US distributors must restrict online access to people in the US only, and if they don't, you end up with severe lawsuits between all of the companies involved.

The only way that will change is if production and distribution companies take a whole-world view, which is likely to harm choice by making large regional minorities too small to be considered in a whole-world market. There is no perfect solution.

We as consumers must realise that production and distribution companies are commercial enterprises, whose very existence depends on getting as much money out of their customers as possible.

I look back with some fondness on the days before the growth-is-essential mantra, when it was enough for a company to ensure its existence, make reasonable but not excessive profits, and provide good employment to its workers and good service to its customers. Maybe my glasses have just taken on a rose tint.

1
0

Linux.com pwned in fresh round of cyber break-ins

Peter Gathercole
Silver badge
Alert

@ShelLuser

Firstly, my word! What a provocative tag you have.

Now, regarding "an eye-opener for *nix people"

The problem here is that even quite technical users can be short-sighted when it comes to security. I know any number of very technically able people who regard security as a barrier to work, and quite often do very dangerous things to "work around the imposition of anti-productive security measures".

As long as this mindset persists among people who should know better, we will have the potential for this type of problem.

As a widely used example, ssh is a wonderful tool in the right hands, but put it in the hands of people who can't be bothered to read the manual, who use passphrase-less keys and/or distribute a single private key across their entire estate of systems, and you have a disaster waiting to happen. And if some of these people have escalated privileges, or use the same key for their own ID as they do for root, then it is just a case of lighting the blue touchpaper and waiting for the inevitable explosion.

Also, ssh can be used to circumvent many other security systems in ways that range from the constructive to the malicious. This makes it a multi-edged sword that can make magic happen, or can rip carefully thought out security measures to shreds at precisely the same time. How do I know? Because I have used it extensively to do just that (I think constructively, but sysadmins of other systems where I am a mere ordinary user may think differently).

Ssh can be abused on many OSs, including pretty much all UNIX and UNIX-like systems (and this includes BSD, for those of you who have been suggesting that as a more secure OS), and there is at least one port of an SSH server for Windows systems as well.

In reality, where you have a mechanism for one system to trust another using whatever means, there is scope for an intrusion on the trusted system to spread to the trusting system. And in the modern environment, where you need to manage hundreds or even thousands of systems from a central location, these trusts are essential. I believe that this is an axiom, and applicable to all OSs.

User training, partitioning of management domains, and insistence on adherence to properly thought-out security policies, especially amongst the sysadmins and power users, are the only ways to limit the damage of such a compromise.

Even if it is a barrier to productivity.

4
0

Galaxy Tab remains illegal in Germany

Peter Gathercole
Silver badge
Meh

Gordon Bennett

There aren't half some numpties in the legal systems. If they apply this rule to 'phones, then most smartphone designs will be blocked in Germany.

8
0

Hey Commentards! [This title is optional]

Peter Gathercole
Silver badge
Meh

Anonymous comments (I don't mind titles!)

Can we have two names registered against a mail address?

I am open enough to post many of my comments under my real name (unlike many of you), but I frequently use the anonymous option, normally if I am posting things that may upset my employer, wife, children, the police etc (OK maybe not the wife, she is a technophobe, and does not read the Register, and the police could get a court order if what I have said was against the law).

But I appreciate being able to use an icon with my anonymous posts.

What I may have to do is register a second account with a name unrelated to my real one. If I were allowed an "alias" for my account, and could select it as an alternative just as I do Anonymous Coward, I think that would be really useful.

Now, what has not been used yet, but would be suitably humorous?

4
0

Patent wars: Apple attacks Samsung in Japan

Peter Gathercole
Silver badge

Cheap delaying tactics

I suspect that it is much cheaper to file a lawsuit and get a temporary injunction than it is to get one lifted. And if filing it delays a competing product from being launched, it means that you have a longer time to attempt to dominate a market, and reap as much profit as possible.

I can see a scenario where Apple hire newly graduated lawyers on the cheap, and say to them: "Here are the arguments; take them and stall in court. It doesn't matter if you don't win, just drag it out as long as possible".

Mind you, I suspect that the Japanese might be prepared to back an oriental company over an American one, especially for technology products where Japan excels, so Apple could have their nose bloodied in court over this one.

5
0

Why modern music sounds rubbish

Peter Gathercole
Silver badge

I do this often

mainly because finding CDs (and perish the thought, MP3s) of some of my older vinyl is almost impossible.

I leave all of the compression and tone-altering filters out, and only turn the digital scratch filters on if the amount of noise is very bad.

The CDs I produce like this sound very good (to my ears), even using the commodity A-D converters on generic mobos. Even though these cannot do the highest dynamic range, I suspect that my turntable and cartridge combination (good budget equipment - Pro-Ject Debut II with Ortofon OM-5e) is probably more of a limit on the dynamic range than the sound chip in the computer.

0
0

Google feeds patents to HTC for assault on Apple

Peter Gathercole
Silver badge
Flame

@jake

I applaud your sentiments and appreciate your actions, but unless and until governments in all countries actually employ people who understand technology and their own patent system, all politicians from any administration will be taking advice from interested parties.

These interested parties are often the people most likely to gain from a strong and all-encompassing patent system, who have deep pockets so can 'voluntarily contribute' to the process, and who will not give unbiased advice. This is especially true in the US, where, to me as an outsider, it often looks as though the government (of all parties) is actually run by big business.

Some of the statements made by the current US administration and echoed by the Europeans sound good, with words like 'reduce administrative costs' and 'reuse patent searches during applications in multiple jurisdictions', but when you look into them there is no suggestion that pre-grant verification will be any stronger or more rigorous; the aim is merely to make the application process easier, leading to still more stupid, unenforceable patents going on the books.

The patent system was designed to protect small inventors. The way it has been corrupted means that it now does exactly the opposite.

4
0

E-cars: unaffordable until 2030 (or later)

Peter Gathercole
Silver badge
Boffin

@takuhill - from a comment on a previous article

This is a comment I made on a previous article a year ago about General Motors and Tesla, so some of the content may be out-of-context, but it shows some problems with replaceable battery packs.

<quote>

Someone has to pick up the cost of the loss of capacity after a pack has been recharged a hundred or so times. Leasing makes more sense than owning, as nobody will complain about swapping one that is new for one that is near its end-of-life if they lease it.

You would still have some uncertainty about range, and you would probably have to have some rules about when a battery pack would be retired or reconditioned. Would you make it 90% of original charge capacity, 80%, 50%?

I'm all for this technology, but there are serious wrinkles that need sorting out, not the least of which is the cleanness of the electricity. Also, could the power grid cope with thousands of battery packs drawing tens of amps at the same time? For example, if a battery charging station has 50 packs charging at any time, each drawing 30A while charging, we're talking 1,500 amps, or at 230V, 345kW per station. That's a lot of power. A typical UK house draws about 0.4kW averaged out across the year (according to EDF), so the charging station would put the same load on the grid as 800+ houses.

These figures are rough, based on the Tesla's battery pack, which apparently takes 3.5 hours to charge at 70A at 240V (thanks Wikipedia), mapped onto something that is more likely to be found in the UK urban environment.

How many petrol stations serve as few as 150 customers in a day (assuming packs take 8 hours at 30A to charge)? And you would have to be pretty certain that the packs could not be nicked for their scrap value. And how large would the station have to be?

So, interesting ideas, but currently, fossil fuels still rule, as indicated by the icon.

</quote>
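As a back-of-the-envelope check, a few lines of Python reproduce the figures quoted above; every input (50 packs, 30A per charger, 230V supply, 0.4kW per house) is an assumption taken from the comment rather than a measured value.

# Rough check of the charging-station arithmetic quoted above.
packs_charging = 50          # packs on charge at any one time
charge_current_a = 30.0      # amps drawn per pack while charging
supply_voltage_v = 230.0     # UK mains voltage
house_average_kw = 0.4       # typical UK house, averaged over the year (EDF figure)

total_current_a = packs_charging * charge_current_a            # 1,500 A
station_load_kw = total_current_a * supply_voltage_v / 1000.0  # 345 kW
equivalent_houses = station_load_kw / house_average_kw         # ~860 houses

print(f"{total_current_a:.0f} A, {station_load_kw:.0f} kW, roughly {equivalent_houses:.0f} houses")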

0
0

No pain, some gain: Ubuntu Oneiric Ocelot examined

Peter Gathercole
Silver badge
Unhappy

I'm sad

I really thought that Ubuntu was the distro that might finally cross over into the mainstream.

I've now completely changed my mind, and I will be looking for a new distro.

What's changed my mind? Not the radical change in user experience, not the continual churn of new applications for commonly used things like listening to music or watching video, and not Canonical ignoring their loyal user-base but going for the 'new' (although all of these things are annoyances).

It's actually the way Canonical has split the established user-base into "I don't like it" and "I think it's the bee's knees" camps over Unity. What they've done is effectively alienate a considerable part of the people who (like myself) were strong advocates for Ubuntu and encouraged its use by users of other OS's. Unfortunately, the most valuable advocates are probably the people with the most experience of Linux and Ubuntu, and they are also the most likely to be the ones upset.

I don't actually mind there being another UI. I don't mind them switching default apps. What I do mind is the "do it our way or not at all" approach of removing the old way of doing things. I feel it's almost as if they are deliberately making a statement of disinterest in some of their most loyal users.

I have recently been unpleasantly reminded about how unresponsive Canonical can be. I know that they have limited resources, and also rely on knowledgeable community members, but I don't like how fast things change in the normal release process, and how quickly problems are swept under the carpet. I keep to LTS releases, because making significant changes on a regular basis to my daily-use machine is not of interest to me. I have been using Hardy since about 6 months after its release, and I was suddenly informed that Google were stopping builds of Chromium for 8.04, because it had moved out of support.

They were right. As a desktop release, Hardy dropped off of support (on desktop systems) in about May this year.

Why was I still using Hardy? Well, in Lucid (10.04), Canonical imposed KMS (although to be fair, it was part of the kernel), and completely broke suspend and resume support for ATI Mobility graphics adapters even though it worked flawlessly in 8.04, broke Composite Rendering support (for Compiz), and also crippled Xv performance for video playback. Despite several defects raised by users of Thinkpads and Dell laptops, the calls languished unresolved, and the last suggestions were to upgrade to 10.10, which is *NOT* an LTS release. I spent tens of hours trying to work out why all of these things were broken, before deciding that I could not afford the time to understand enough about KMS to do anything useful, and went back to Hardy.

I've now (mostly) switched to Lucid, but have had to disable KMS (which is a blunt fix) to allow suspend and resume to work, turn off Advanced Desktop Effects (which I used to catch people's attention), and switch mplayer and Xine to use a raw X11 frame buffer for rendering video (I've not worked out how to do the same for GStreamer/Totem). If I can't get Composite Rendering working, there is basically no chance that I will be able to use Unity on my Thinkpad, even if I wanted to.

So, I will keep the Hardy partition until I've checked that there are no other gotchas from Lucid, and will then look around at my options. Maybe I will use Xfce on Ubuntu, but it was nice, for a while, to be able to use a Linux distribution that just worked without too much fiddling.

Ho hum.

8
0

RAM prices set to 'free fall'

Peter Gathercole
Silver badge
WTF?

Wow. That would be expensive

because the DRS6000 was launched in 1990! That must have been one hell of a deal getting a prototype system 10 years before the product launch!

In 1980, system memories were being measured in single figure MB count.

When I went to University in 1978, the IBM 370/168, which was at the time supposed to be the most powerful computer in the UK education system (I'm not sure how accurate that boast was), had a total of 6MB of memory. I can't exactly remember what was a lot in 1980, but in about 1984 we paid about £2000 for 1MB of second-hand memory for a PDP-11/34 (before anybody starts, it was in Systime covers, and had the 22-bit addressing feature added by them [not normally an option on a /34], allowing up to 4MB, although we could only afford 2).

0
0

A Farewell to Oates: Adios, El Reg

Peter Gathercole
Silver badge
Pint

Oh well, might as well join in

Thanks, and hope you enjoy whatever it is you are going to.

0
0

World telly shipments stall

Peter Gathercole
Silver badge
Meh

Ah well

we have no compelling new technology at the moment. 3-D TV has not caught on, and most people either already have HD, don't really care about it, or don't know about it.

What we have here is a down-swing compared to a previous up-swing caused by LCD TVs. People could see that an LCD TV occupies less space for a larger screen, can be wall-mounted, and uses less electricity than a CRT, but do many of them care that LED is better than CCFL for the backlight? And the current ultra-slim tellies are not that much slimmer than the 2-3 inches of the last generation, on the scale of a living room. People are realising that, as long as it works, their two-year-old telly is still adequate for watching Coronation Street, the Simpsons, or Mythbusters.

Are we, at last, seeing a return to a domestic consumer electronics market that is not dominated by hype and the need for the latest shiny things? I sincerely hope so.

2
0

Cambridge Audio Sonata NP30 hi-fi streamer

Peter Gathercole
Silver badge
Alert

How music sounds

Considering how few people actually bother to sit down and listen to music in a quiet room arranged around the audio system, does it really matter that the audio quality wasn't listened to?

I would love to have some of the iPod generation(s) listen to a decent audio setup playing uncompressed audio sources, and also think that it would be a revelation to many of them to actually listen to some unadulterated live music (not what you get in a rave or night club!)

I'm sure that the majority of people believe that the multi-track recorded, compressed, bass- and treble-heavy mush that is turned out by today's music publishers, and then mashed to death by the distribution method (particularly FM radio stations), is actually how music should sound.

My audio setup is comparatively poor, consisting of best-of-breed budget audio equipment, most of which is over 20 years old, and I still get a Wow from some of my children's friends when they come and actually hear what vinyl on a reasonable turntable through real speakers sounds like.

Back to the article. As soon as you distribute a good audio source across a network to an iPad or similar mobile audio system listened to in a noisy environment, you might as well be using Compact Cassette in an 80's Walkman as far as accurate reproduction of the original material is concerned. Whilst I appreciate the fact that quality audio equipment manufacturers are making an effort in producing this type of kit, much of it is really just for convenience, not audio quality.

0
0

UK could have flooded world with iPods - Sir Humphrey

Peter Gathercole
Silver badge

BBC permissions

This is negotiated into the contract when the work is commissioned.

Modern productions have clauses in the contract between the BBC (and ITV and Channel 4) and the producing company (which is almost certainly not the broadcaster), and the actors, which specifically allow the content to be made available for a limited amount of time on a view-on-demand service such as iPlayer, as well as having repeat rights. This has been the case for most UK-produced programs for many years now, but often does not include foreign-produced material (for instance Torchwood: Miracle Day, which was NOT on iPlayer when I last checked).

This is also why some programs are available as unlimited podcasts (very liberal contracts, and probably only on things that have little ongoing commercial value, like news coverage and topical documentary programs), and some are only available for a limited amount of time, where there may be money to be made on pay-for-view or DVD sales.

But archive material is a bit different. You quite often find old programs being repeated on the BBC, both radio and television, which do not find their way onto iPlayer. This is because the original production contracts, and the contracts with the actors, had clauses for repeat broadcasts, but not for distribution using other means (and this includes DVD, CD and tape for very old series). As these were not things considered when the contracts were drawn up (why should they be? nobody thought such things would be possible), the lawyers tread very carefully to avoid the possibility of future lawsuits over lost royalties.

In order to make such material available through things like iPlayer (at least before the copyright expires), it is necessary to get agreement from the production company and all of the actors, or in the case of a dead actor, representatives of their estate, to allow the material to appear on formats not considered when the original contracts were drawn up.

This can prove very difficult for the older material, which is very unfortunate for us the viewer, preventing some programmes from being available on DVD or on video-on-demand sites.

As an aside, as different countries have different copyright and royalty rules, this won't necessarily be the case for all countries.

Oh well. Thank goodness for YouTube, which appears to have a very liberal attitude towards copyright, at least until challenged.

3
0

ARM vet: The CPU's future is threatened

Peter Gathercole
Silver badge
Meh

DJNZ e

Z80 DJNZ e - 13 T states if branch taken, or 3.25 microseconds at 4MHz

6502 DEY ; BNE e - 5 clock cycles if branch taken, or 2.5 microseconds at 2MHz

OK, it's one more byte (3 rather than 2), but your assertion that code density == speed is completely wrong when considering 8-bit microprocessors, because there was no overlap in instruction fetching, decoding and execution. The time of any instruction on either a Z80 or a 6502 is exactly what it says, from fetching the instruction and arguments to completion. From the end of the last instruction to the end of the next is an absolute time, and is easy to determine.

Many Z80 instructions run to 15-20 T states, meaning that there are some situations where it is quicker to run several simple instructions than one complex one, even in Z80 machine code.
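Since neither chip overlaps fetch, decode and execute, the quoted timings fall straight out of the cycle counts and clock rates. A minimal sketch of that arithmetic, using the figures from the comment:

def instruction_time_us(cycles, clock_mhz):
    # With no pipelining, elapsed time is simply the published cycle count
    # divided by the clock rate in MHz, giving microseconds.
    return cycles / clock_mhz

print(instruction_time_us(13, 4.0))     # Z80 DJNZ e, branch taken: 3.25 us
print(instruction_time_us(2 + 3, 2.0))  # 6502 DEY (2) + BNE taken (3): 2.5 us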

0
0
Peter Gathercole
Silver badge
Meh

@nyelvmark - "20 year head start"

I'm interested in what you are comparing with what.

ARM silicon started appearing at about the same time (give or take a year) as the 80386, and IIRC, ARM systems actually fared relatively well in benchmarks compared to the i386, even though they were clocked at much lower speeds.

So although Intel had all of the years of 8086 development under their belt (which, incidentally, was less than 10 years), as 32-bit architectures you can consider ARM and the first 32-bit x86 processors as being of the same generation, which actually makes the ARM a more 'mature' processor than the 'great leap forward' of the i486.

2
0

NetApp misses revenue goal

Peter Gathercole
Silver badge
Unhappy

It never ceases to amaze me

how a company generating quite respectable profits gets blasted for not reaching other people's projected figures for unknown future business.

I'm just waiting for the shares to slide. It just indicates to me how badly broken the Corporate Capitalism economic model actually is.

1
0

UCAS website collapses - on results day

Peter Gathercole
Silver badge
FAIL

You obviously don't know what UCAS is for.

See title.

0
0

Sony prices up PlayStation telly for UK

Peter Gathercole
Silver badge

Interesting

although when you think about it, with active-shutter technology it's a simple matter to make each set of glasses see a separate 2D image, by opening both eyes of one pair at the same time and alternating with the other pair. It's just a variation on the method of displaying a 3-D image.

If they are going to alternate two 3-D images, that's a bit more difficult, as they would have to display 4 images, and each person's eye would only be seeing an image for 25% of the time. I suspect that this would be detectable as serious flicker by almost anybody, even if the frame rate was adjusted.
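The duty-cycle arithmetic is simple enough to sketch; the 240Hz panel rate below is an assumed figure, purely for illustration rather than a quoted specification.

def per_eye_rate_hz(panel_hz, sub_images):
    # The panel shows one sub-image per time slot, and a shutter only opens
    # during its own slot, so each eye sees panel_hz / sub_images at a duty
    # cycle of 1 / sub_images.
    return panel_hz / sub_images

panel_hz = 240.0  # assumed panel refresh rate, for illustration only

print(per_eye_rate_hz(panel_hz, 2))  # one viewer in 3-D, or two viewers in 2D: 120 Hz per eye
print(per_eye_rate_hz(panel_hz, 4))  # two viewers each in 3-D: 60 Hz per eye, 25% duty cycle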

0
0

A-level results accidentally put on interwebs a week early

Peter Gathercole
Silver badge
Meh

Re: AC -Yeah... - I'm curious

A-Levels are supposed to be the first step towards understanding complex subjects in preparation for Higher education. How are you supposed to demonstrate a good understanding of a complex subject with simple questions?

A large number of simple questions may suit subjects at GCSE, but A-Levels are supposed to be Advanced (remember, O-Levels were Ordinary, and A-Levels were Advanced).

I admit that it was over 30 years ago that I took my A-Levels in science subjects, but I remember that at least one paper in every subject required you to analyse a problem and recognise a particular technique to solve it, and then be able to work through that technique to achieve a solution. You could get some marks if you identified the correct technique, but worked it through incorrectly, or even the wrong technique, but applied it competently.

It demonstrated that you had a knowledge of the subject and how to apply that knowledge to a question. It did mean that there was a large element of luck in which questions would come up, but it was expected that you would have a broad enough understanding to field questions from the whole subject.

I have a 17 year old child who is studying vocational subjects, so I won't be able to see what current A-Level papers are like next year, but I shall be interested to see the test papers that my 15 year old is given in a couple of years time.

On the content of each subject, the chances are that my 30+ year old knowledge of the subjects I studied at A-Level almost certainly does not equip me to sit a modern exam, even if I could remember it all. Physics, Chemistry and even Maths have changed significantly in that time.

Give me 6 months of appropriate time to study a modern syllabus, and I would be happy to see how well I would do compared with a modern student.

0
1

The IBM PC is 30

Peter Gathercole
Silver badge

@Pete 2

Yes, but just how long does it take to copy the two files that made up the OS from a floppy, and then prompt for the date and time! Remember that the first IBM PC did not have a real-time clock, or ANYTHING other than a keyboard adapter. Everything else was on a card, including, as far as I can remember, the floppy controller, the display adapter, and the serial and parallel adapters, and they all cost an arm and a leg from IBM. So enterprising third parties produced 'multi-function' adapters that would include a parallel port on the display card and so on. And there was no plug-and-play, so there was all of the hassle of conflicting base addresses and IRQ settings. I'm sooo glad that those days are gone.

Anybody else remember the ROM BASIC that the system would drop into if there was no bootable floppy in the drive? If I remember correctly, this persisted in IBM PCs on into the PS/2 line that replaced the PC, although you had to disable booting the OS from the hard disk to get there.

2
0
Peter Gathercole
Silver badge

I remember

The polytechnic I worked at took a decision in 1982 to install several computing labs full of 5150s. Over the summer, we were inundated with the things, with boxes filling all the foyers, waiting to be unpacked. Horrible, horrible long-persistence phosphor in the monochrome monitors, and the Poly decided to ditch the one good feature (the keyboard) for a soft-touch silent Cherry keyboard as standard. Ugh.

I never liked them even then. Because they were floppy-disk-only systems, the students had to book the software disks out from a librarian before they could use them, which meant that we had fragile 5-1/4in floppies moving around like crazy. We got an agreement through the distributor to allow us to keep the originals safe and issue copies. It was not long before most of the students twigged that they could further copy the disks, and then not bother with the booking system.

I was glad when the first PC-ATs were installed, because then we only had to worry about keeping the hard disk clean, and repairing the applications when the students trashed them. Introducing a virus on one of the ATs became one of the most serious offences, and we had to have disinfection sessions to clean the students' own floppies to protect our systems and their work. Mind you, the 1.2MB floppy drives on the ATs caused no end of problems when students tried to write to 360KB floppies in them.

This was waaaaaay before disk cloning was thought about, and everything was done according to the installation process, although one of the labs (not one I worked with) was set up with a low cost (hmmm, relatively low cost, it was still bloody expensive) co-ax CSMA/CD Ethernet alternative called Omninet running at 1Mb/s for file and print sharing.

Interestingly, we had Pick installed on one of the ATs, and Xenix-286 on another.

I still regarded the PCs as poorer teaching tools than the lab of BBC Micros I also ran, and of course 'my' UNIX V7 (and RSX-11M) PDP-11/34e (in Systime covers, with 22-bit addressing and 2MB of memory, and CDC SMD disks to speed it up) was the bee's knees as far as I was concerned, running Ingres to teach relational databases. Knocked Ashton-Tate dBase II (remember that!) into a cocked hat! And it was, of course, far less maintenance work.

The software line-up on the PCs was PC-DOS 1.1 (on the 5150s; the 5157s had PC-DOS 2.1 for the hard disk support) with Word 2, Multiplan (the MS spreadsheet before Excel), and dBase II. I couldn't work with Word then, and still find it a traumatic experience now.

We definitely need either a rose-tinted spectacles or an old-fart icon here. I guess I'll just have to use the coat icon. It's the one with the big stretched pockets to hold the 5-1/4 disk box.

6
0

SuperVisor: One hypervisor to virtualize them all

Peter Gathercole
Silver badge
Alien

@Kirbini - It's amanfromMars or one of his clones

This is what he does. You're not really meant to understand it, although there is a message in there somewhere.

There is a school of thought that suggests he writes a comment, translates it to some other language and back using something like Babelfish.

He is a Register treasure!

0
0

‘Pitstops’ can inhibit viruses

Peter Gathercole
Silver badge
Flame

@King Edward 1

I'm not talking about the extinction of the human species because of applied technology, just trying to put some perspective on what we are doing with regard to relying on ever more complex technological interventions to keep an unreasonably large proportion of the population alive.

But more interventions require more resources. I'm sure I heard a discussion on the radio recently which suggested that many countries will be spending significant proportions of their GDP on healthcare within 20 years at current rates of change, and the Economist has commissioned a report that presents this as a possibility.

I was actually going to say something about diverse genetic information, particularly the apparently unused parts of the genome, but I was going to put that into the context of pathogens keeping recessive attack vectors in their genomes. You are right, though, it runs both ways (but what is the survival advantage of cystic fibrosis, Down's syndrome, Duchenne muscular dystrophy or even short-sightedness?)

My belief is that we will probably never be able to match the natural forces of evolution, although that does not mean that we should stand still. We need to discover replacements for antibiotics, otherwise we could have a new Black Death. MRSA and C. difficile already provide pointers to this possibility, and TB is already on the way back.

BTW, and this is a bit of a diversion. Removing fire from our tool-chest cannot happen as long as there is organic material in our environment. But motorised transport? Or the technologies that sustain the Internet? We could lose all of those.

Remember that it is still within the span of a single human lifetime that *ALMOST ALL* of what we regard as modern life has come about (OK, the steam engine and the simple internal combustion engine are more like twice that, but even 70 years ago, horses were still the primary power on the land). The rate of technical change has been staggering, and accelerating. There is a chance that we could be knocked back into a pre-industrial society. It would not take that much, and if there were suddenly a critical shortage of energy (say, a cascade failure of the electricity grids caused by a serious EMP overload from sunspot activity [I am not normally a doom-and-gloom monger, but the chance is there; NASA says so]), we might lose the capability to rebuild the infrastructure, including the power grids themselves. It takes a lot of serious resource, and a long time, to build the number of large high-voltage transformers that might be needed.

We've used all of the easy-access energy and other resources, and if we were pushed too far down, it would be incredibly difficult to climb back up to where we are without opencast coal, iron, or copper ore mining or easy to extract oil.

And don't start talking about solar, wind or wave power. Without an existing technical and transport infrastructure, this cannot be deployed, maintained, or utilized. I challenge you to build a working wind turbine generator (with a reasonable capacity) with just the raw materials you can find within a 10 mile radius of where you are. You are not allowed to cheat by using existing motors or alternators because that is part of the wind-down, not the rebuild of technology.

The result of a breakdown would be chaos, and conflict over resource, and could lead to a new dark age where the remaining resources were controlled by force. It would be impossible to do anything at a national level. In such a world, there would be NO internet, NO national transport system, NO national electricity grid, and the road and rail systems would degenerate remarkably rapidly.

Just think what panic there was in the UK 10 years ago because supplies of petrol and diesel were disrupted. And that happened within a space of just days!

Do you actually remember, a mere twenty years ago, how useful (or not) personal computers were before the Internet? Answer: not very. Good for simple games and small data projects. There was a society, however. Computers are vital for our current way of life, not our survival. They just make it easier.

But none of this would mean an automatic extinction of the human species. The genetic sieve would probably cut back in, and maybe, just maybe, inherited intelligence could prevent a fall back to the stone age. But people would start dying of what we now regard as curable diseases, merely because the technical interventions were no longer available to keep them alive.

1
1
Peter Gathercole
Silver badge
Alert

Genetic battlefield

Although many of these apparent breakthroughs are interesting, it is worth noting two things.

Firstly, apply the rule of unintended consequences to the breakthrough. It may take some time to find out what else these substances do, and some of these may be undesirable, meaning that the technique may never come to anything.

Secondly, the real world is rather akin to a battlefield at the genetic level, with an organism's immune system on one side, and the survival mechanisms of an untold number of pathogens on the other. On both sides, the genetic sieve operates.

Even before humans interfere, what we have are the forces of evolution working against each other. If you think about the operation of the genetic sieve on survival, it is necessary to remove the genetically susceptible members of a population to allow the non-susceptible members to survive and procreate. But the same is true on the other side of the battle, and the most obvious example of this is antibiotic-resistant bacteria, where the very few surviving members of a pathogen population after the application of antibiotics become the basis of the following generations. This is exacerbated by the over-use of antibiotics, and by courses not being completed, but it will operate eventually anyway.

If we interfere, by allowing susceptible members to survive and pass their susceptibility on to their offspring, we are weakening the population as a whole, and building into the species a reliance on those techniques and technologies for survival. Just think what would happen if modern medicines became unavailable. I don't think we would quite go back to the dark ages (after all, we do now know how infections spread, and can take physical precautions), but it would not be pleasant.

But even as we are making the sieve less effective on the survival side, we are adding to it on the other. Evolution will eventually allow the pathogens to work around any barriers we put up, by making successful members of the pathogen population pass on their success and killing the unsuccessful ones. Life, as has been noted elsewhere, is incredibly persistent, especially at the bacterial level.

We can and will never reach a utopia where diseases are eliminated. Evolution will see to that. And the human species really has no guaranteed right to survive over any other!

3
3

Boffins shine 800Mbps wireless network from flashlight

Peter Gathercole
Silver badge

Memory 'flash'!

I think I've still got that copy somewhere. On the cover, it has something that looked like car headlights to give focussed transmission and reception. Strange I should have kept it, because I did not buy PW regularly.

Boy, does my memory work in weird ways!

0
0

Lost 1967 spacecraft FOUND CRASHED ON MOON

Peter Gathercole
Silver badge

grid markings?

I think that the vertical lines are actually artefacts of the film processing if that is what was done. This seems entirely reasonable and consistent.

Still, if I got pictures back from the developers with defects like this, I would ask for a set of reprints!

0
0

Marketer taps browser flaw to see if you're pregnant

Peter Gathercole
Silver badge

The same browser?

I wouldn't even use the same computer!

0
0

Oracle revs VirtualBox, mushrooms memory

Peter Gathercole
Silver badge
Flame

@AC. I take exception to the HPC comment

I am involved in running a top 500 supercomputer site, and it is reliable. So reliable, in fact, that the customer is saying that they want to manufacture outages on a certain service so that their users don't come to automatically expect 100% availability.

The main secret as far as I am concerned is the old adage 'if it ain't broke, don't fix it'. Really annoys me when IBM say we *have* to upgrade the software stack to remain in a supported state!

So in answer to the comment, don't tar all services with the same brush.

1
0

Before the PC: IBM invents virtualisation

Peter Gathercole
Silver badge

Oopsie

The R&D version of UNIX was 5.2.5, not 3.2.5. This equated to SVR2 with some AT&T internal developments, including demand paging, enhanced networking (STREAMS [which could have Wollongong TCP/IP modules loaded], RFS), an enhanced multiplexed filesystem (not that I remember exactly what that gave us) and many more I can't remember.

0
0
Peter Gathercole
Silver badge

@david 12

It is quite clear that the security model for UNIX is one of the weakest remnants of the original UNIX development.

In a lot of cases it is actually much *weaker* than that provided by Windows NT and beyond.

But the difference is that it is actually used properly, and has been almost everywhere UNIX has been deployed. It was fundamental to the original multi-user model, and you always had the concept of ordinary users and a super-user.

Multics, VAX/VMS, and possibly several other contemporary OS's had better security models, but the UNIX model was adequate for what it had to do, and was well understood. In fact, the group model on UNIX, with non-root group administrators, has fallen so far out of use that it is practically absent in modern UNIXes (ever wondered why the /etc/group file has space for a password? Well, this was it).

When it comes to virtual address spaces (programs running in their own private address space, mapped onto real memory by address translation hardware), UNIX has had this from the time it was ported to the PDP-11. Virtualised memory (i.e. the ability to use more memory than the box physically has) first appeared on UNIX on the Interdata 8/32, with the 3BSD additions to UNIX/32V, and then in BSD releases on the VAX.

The first AT&T release that supported demand paging was SVR3.2, although there were internal versions of R&D UNIX 3.2.5 which supported this.

1
0
Peter Gathercole
Silver badge
Happy

When considering multiprogramming on S/370

You just cannot ignore the Michigan Terminal System (MTS).

When IBM was adamant that it would not produce a time-sharing OS for the 360, the University of Michigan decided to write their own OS, maintaining the OS/360 API, allowing stock IBM programs to work with no change, but allowing them to be multi-tasked.

IBM actually co-operated, and the S/360-65M was a (supposedly) one-off special that IBM made just for Michigan, providing dynamic address translation, which allowed virtual address spaces for programs. This resulted in the S/360-67, which was one of the most popular 360 models and influenced the S/370 design.

I used MTS between 1978 and 1986, at university at Durham and when I worked at Newcastle Polytechnic, on an S/370-168 and an Amdahl 5870 (I think), and I found it a much more enjoyable environment than VM/CMS, which was the then IBM multitasking offering.

Look it up, you might be surprised what it could offer. There are many people with fond memories of the OS.

On the subject of Amdahl, they produced the first hardware VM system with their Multiple Domain Facility (MDF), which I later used when running UTS and R&D UNIX on an Amdahl 5890E. During an oh-so-secret-under-non-disclosure-agreement briefing in about 1989, we were told by IBM about a project called Prism, which was supposed to be a hardware VM solution that would allow multiple processor types (370, System 36 and 38, and a then unannounced RISC architecture, probably the RS/6000) in the same system, sharing peripherals and (IIRC) memory. Sounds a lot like PR/SM on the zSeries! Took them long enough to get it working.

2
0

Unix still data center darling, says survey

Peter Gathercole
Silver badge

@Kebabbert

Don't want to have a flame match, but many of Sun's more recent innovations happened between 2000 and 2005, with the exception of LDOMs, which look as much like a copy of IBM's LPARs as WPARs were a copy of Containers.

IBM keep adding new features in the virtualization area, as well as RAS, parallelization (which if you don't work with MPI programs, will be completely invisible to you) and large system integration and clustering. See the AIX 6.1 and AIX 7.1 release notes, which summarise the new features quite well.

I was not commenting on Power vs. SPARC vs. x86_64, as that is a discussion for a completely different news story. You definitely made some good points, although what makes customers continue to buy a platform is the combination of hardware, OS and applications, not just the best of one. We'll see what happens over the next few years, I guess.

0
0
Peter Gathercole
Silver badge
Thumb Up

@Jim 59

I'm not intending to start an OS war, nor to criticise Solaris (although I must admit that some statements I made could have been considered contentious). The original intention of my comments was to indicate where Linux lacks the Enterprise features other UNIXes have, and I was using AIX as the example, possibly in a rather blunt manner.

Doing a bit of digging on Solaris features, I find that Solaris and AIX both have an extensive set, and many of them are comparable on a like-for-like basis. I do not intend to do a comparison, nor do I wish to compare when things were introduced, because there were novel innovations that were copied by the other in both OS's.

I think that if we were actually to compare notes, we may find that the capabilities of both OS's are comparable, with Solaris having an edge on things like NFS implementation, ZFS and DTrace, and AIX with GPFS, some of the partitioning capabilities and possibly compiler technology.

So it is probably not possible to objectively crown a 'Most Advanced UNIX', and any distinction is likely to be subjective and open to debate. Let's agree that proprietary UNIXes continue to have a place in the datacentre, and encourage our Linux developer colleagues to continue to aspire to produce features that really will make Linux a suitable alternative platform for Enterprise workloads.

In terms of becoming a Linux admin guru, I suspect that it is easier to go from either AIX or Solaris to Linux, rather than the other way round.

0
0
Peter Gathercole
Silver badge
Happy

I think we can agree on this

I like 'Spiritual UNIX".

On the subject of commercial UNIXes using BSD code, if you publish under a permissive license, people will use it. But that's the plan, isn't it? :-)

Thanks for the interesting dialogue.

1
0
Peter Gathercole
Silver badge
Boffin

Re. Jake

The problem with regarding BSD as a Genetic UNIX is that there is no AT&T code in it, after the huge brouhaha over removing any code that was covered by the UNIX V7 educational licence that BSD relied on in the 1980s!

A UNIX educational licence specifically prohibits the use of Bell Labs/AT&T UNIX code in a commercial OS offering, or even for teaching purposes (I actually was a Bell Labs V6 and AT&T V7 UNIX licence holder for a number of years), and UNIX System Laboratories took the Regents of the University of California, Berkeley to court to enforce this when they (UCB) started commercialising BSD. BSD did not take out a System III or System V licence to cover any code; they just replaced it, leading to BSD/Lite and FreeBSD.

My view is at odds with what Wikipedia says about BSD in the main article. I regard actual code, not just design ideas, as a requirement for a UNIX to be considered 'Genetic'.

Also, in order to use the UNIX trademark, it is necessary for a UNIX-like OS to be subjected to, and pass, the Single UNIX Specification (SUS) verification suite. AIX does, as do Solaris, HP-UX, Tru64 UNIX and SCO UnixWare. Linux and BSD do not, so cannot legally be called UNIX.

Darwin/Mac OSX falls into the same "not Genetic UNIX", even though it qualifies for the UNIX 03 branding (a point I did not realise until I researched it just now).

And Slackware is definitely not derived from any Bell Labs/AT&T code (it's Linux, with GNU's Not UNIX code running on top, like any other Linux).

See http://www.levenez.com/unix, and try to find any feed from an AT&T UNIX into Linux. There are a couple from IRIX, and a few feeds from Plan 9, but I think that these were filesystems, GL and utilities rather than principal parts of the OS.

Don't get me wrong. I have nothing against BSD as it is a family of fine OS's. But it really is UNIX-like rather than UNIX or a Genetic UNIX.

0
0
Peter Gathercole
Silver badge
Flame

My 'alternative' universe. What's yours like?

I said up front that I make a living supporting AIX. As it happens, I am currently contracting for IBM on a customer site, and have in the past been an IBM employee for a number of years.

But with my 20+ years of AIX (mostly outside of IBM) and over 30 years of other UNIX experience, including 10 years of Linux, in fields such as banking, utilities, engineering, education and government, on systems ranging from microprocessors through departmental minis to Amdahl mainframes, AIX really has been this easy, at least if sensible design (i.e. what the manuals say plus a bit of common sense) has been followed. And it is still improving! (No, this is not a sales pitch, merely my observations.)

I will stand my UNIX experience up against anybody else's. When I started working with UNIX in 1978, there were about half-a-dozen UNIX systems in the UK, and the total number of people with any experience in the UK probably did not exceed 100. And I have worked almost continuously with UNIX ever since.

Back to AIX: no platform is without warts, and as good as I perceive it to be, sometimes you have problems. But where I am currently, we have in my area of responsibility 300+ AIX systems, being thrashed (literally) 24 hours a day, with tens of TB of data changing on a daily basis, managed by a team of 5 people, some of whom have other responsibilities. On the same site, we have large Linux and Windows deployments, and there is also a mainframe doing critical work.

Our current uptime on the AIX systems is low, at around 60 days (having had some global power work done in the last two months), but normally runs into the hundreds of days. In those 60 days, we have had about 8 disk failures out of an estate of about 4000, all of which were handled without any outage (including system disks). In the past, we have had memory failures, with the systems continuing to run until a convenient time to move the workload, and CPUs taken out of service in the same manner. We've also replaced complete RAID adapters (in an HA RAID environment), power supplies and cooling components without losing service. This is, BTW, a clustered environment.

We are just about to embark in replacing 100s of RAID adapter cache batteries, and we do not expect to take *any* service impact at all during the work.

I would suggest that if the systems you 'have been forced' to use have been a bad experience, either you are not giving the whole picture (for instance, if you think that you need the latest and greatest Open Source products - which would really be an application problem, not a deficiency of AIX or the POWER platform), or there has not been due diligence in setting them up. Get someone who knows what they are doing in on the installation!

I have often found that sites tend to be partisan. Solaris or HP-UX sites often do not embrace AIX enough to understand how to run it properly, and vice versa. But I do try to keep an open mind, and I do appreciate that I am not as knowledgeable about more recent Solaris or HP-UX systems as I am about AIX. But in recent years, I have perceived them to be less innovative than the IBM offering, and when I last had serious work to do on them, they just felt as if they had been left in the last century when it comes to RAS and sysadmin tasks. But that's my opinion. I'm sure there are other opinions out there.

But I would say that AIX looks destined to be the last Genetic UNIX standing, given HP's and Oracle's current attitudes towards their products, and Linux still has a way to go in enterprise environments to replace it. I hope so, anyway, as I would like to get to retirement age without losing my career!

4
0
Peter Gathercole
Silver badge
Meh

The problem is....

that even though Linux provides a UNIX-like programming and application environment, when it comes to enterprise features, even the best Linux distro is not as easy to keep running as the best of the UNIX platforms.

I'm biased, I admit; I earn my living supporting AIX. But if there is a problem on one of 'my' AIX systems, it reports it to me, gathers the debug information, and on the ones so configured will even call the problem in to IBM. Often, if it is a duplexed part like a power supply, fan or disk, the part can be replaced without taking the service down, and even PCI cards can be hot-swapped on many models. CPU and memory failures can even occur with the system continuing to run. It's not quite NonStop, but...

If mission criticality is an issue, it is possible to configure a system such that a partition can be migrated on the fly to another suitable system. AIX has been able to do live partition migration for a few years now.

It is just easier using AIX than trying to patch together something similar with ESX or other virtualisation technology. This may change over time, but it has not yet, and I cannot see any real evidence that any of the large distro providers are doing anything about it.

The standard complaint I hear is that some people regard UNIX as 'backward' compared to Linux, but that is the price of stability, and I'm sure that BSD users will say the same. I would say that Linux runs the risk of stumbling while it is running forward.

I do also support SuSE systems, and run Ubuntu on my own systems, and there is no doubt in my mind that if asked (and there were no real financial hurdle), I would recommend an AIX system over a Linux one (but, of course, Linux over Windows).

When I talk to people who have grown up with Linux without having used UNIX, it is clear that without that perspective they just cannot appreciate the difference, and simply regard Linux as UNIX on the cheap.

1
0

Boffins build nanowire lasers from nappy-rash cream

Peter Gathercole
Silver badge

Moving parts misnomer

I think that what was meant was "discrete components" rather than moving parts.

If you go back to the '60s, a laser was made up of several components, including an exciter, a lasing element, and a collimator. They tended to be about the same size as a brick, were very power-inefficient, and cost thousands of pounds.

They also had quite short operational lifetimes.

You can still buy lasers like this, but they are mainly used for high power applications.

Solid-state lasers changed all of this. We would not have CD/DVD/Blu-ray, optical communications, laser pointers, or a whole raft of gadgets and toys if they had not been invented.

Not bad for a "solution looking for a problem to solve".

0
0

Deep inside AMD's master plan to topple Intel

Peter Gathercole
Silver badge
Facepalm

Round and round we go, where we stop, nobody knows!

Aren't we at the Itanium/x86_64 point again?

Surely the problem with all of these APU or GPGPUs is that suddenly we will have processors that are no longer fully compatible, and may run code destined for the other badly, or possibly not at all!

The only thing that x86 related architectures have really had going for them was the compatibility and commodity status of the architecture. For a long time, things like Power, PA, Alpha, MIPS, Motorola and even ARM processors were better and more capable than their Intel/AMD/Cyrix counterparts of the same generation, but could not run the same software as each other and thus never hit the big time.

Are we really going to see x86+ diverging until either AMD or Intel blink again?

1
1

Pacific rare-earth discovery: Actually just gigatonnes of dirt

Peter Gathercole
Silver badge

Zippy the Pinhead Re: methane

Bearing in mind how much of a greenhouse gas methane actually is, it would be better to put the organics into a digester, extract the methane, and burn it as a fuel. It would then become the less damaging CO2 and water, we would have got some useful energy from it, and what eventually goes into the landfill would be less of a hazard.

0
0
Peter Gathercole
Silver badge

@BristolBachelor

I heard Peter Mills of New Earth Solutions on Radio 4 who suggested that we should mine the plastics from landfill sites, if only to use them as a fuel, although he actually suggested re-using them, and only burning them when they could no longer be recycled.

I think that we need to examine how disadvantaged people in developing countries pick over their landfill sites to get every bit of useful material, down to the tins, bottles and plastic bags. It's not nice, but it gives these people a way of generating some money out of nothing, while reducing what is in the landfill to just the worthless waste.

I'm not suggesting that we should force people into a scavenger class (although bog knows, making the long term unemployed do this once in a while might teach them something valuable about their benefits), but it is clear that there are lessons that we 'superior western' countries could learn from our less fortunate cousins.

2
0

The Register comment guidelines 2010

Peter Gathercole
Silver badge
Unhappy

@Peter Simpson 1

Unfortunately, as matters have panned out, Sarah could and did quit the game!

I shall miss her.

0
0

Solar panel selling scam shown up by sting

Peter Gathercole
Silver badge

Post moderation?

Certainly not!

Just try posting something that breaks the rules, and see whether it actually appears.

Sometimes things get through, and you see a "Rejected by moderator" on the thread, but normally they just don't get through.

It just shows that the Register has dedicated moderators.

My bugbear is that sometimes, when I post something that I don't think breaks the rules, I still get the post rejected, and I cannot find out which of the rules the moderator thinks I broke. I know it is down to the moderator and their decision is final, but just a single "Rejected because of rule X" would be useful. I had a public exchange with Sarah about this on the comments thread of the news item announcing the rules.

And I have one recent post (which was critical of the Reg using an inappropriate stock picture on the revolving marquee headline) that did not appear and was eventually rejected, but it took two weeks to be rejected. Strangely, for that two-week period, its status was neither accepted nor rejected, nor was it in 'limbo' (no status); it actually said "Updated on....". This was a new status to me!

0
0

Microsoft bags two more Android patent deals

Peter Gathercole
Silver badge
Meh

Apple use HFS+ already

but only on devices that attach to a Mac.

It used to be that the first time you attached an iPod to a computer with iTunes installed, it would check what the computer was and, if it was a Mac, format the iPod with HFS+, or, if a Windows system, use FAT32.

I found this out when I inherited a nearly-but-not-quite-broken iPod from my daughter after the dog chewed it, and had to install HFS+ support onto my Linux laptop to use it.

I soon worked out how to swap to FAT32 (what's the choice when considering two equally patent-encumbered filesystems?), even keeping the music loaded (ain't tar wonderful)!
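The tar trick is nothing more exotic than: archive the mounted contents, reformat the partition, put the contents back. A minimal Python sketch of the same idea; the mount point and archive path are made-up examples, and the reformat step itself is left out because it depends on the device.

import tarfile

MOUNT_POINT = "/media/ipod"          # hypothetical mount point for the iPod
ARCHIVE = "/tmp/ipod-contents.tar"   # hypothetical scratch file on the laptop

def save_contents():
    # Archive everything on the mounted iPod, preserving paths and metadata.
    with tarfile.open(ARCHIVE, "w") as tar:
        tar.add(MOUNT_POINT, arcname=".")

def restore_contents():
    # Unpack the archive back onto the reformatted and remounted iPod.
    with tarfile.open(ARCHIVE, "r") as tar:
        tar.extractall(MOUNT_POINT)

# Usage: save_contents(); unmount and reformat the partition as FAT32;
# remount it, then restore_contents().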

0
0
Peter Gathercole
Silver badge
Meh

Hmmmm. Forgot about the driver signing process.

I just don't use Windows enough for that to have been immediately apparent.

However ext2 IFS (http://www.fs-driver.org/) appears to be signed already, at least for Windows Vista. I know that Microsoft could withdraw the signing certificate, but...

0
0
Peter Gathercole
Silver badge
Linux

We desperately need

someone to leak exactly which patents Microsoft are using as the tip of the wedge.

Whilst I believe they should be challenged, the likely ones are the often-quoted FAT32 patents, #5,579,517 and #5,758,352. Unfortunately, these look like they still have 5 and 7 years respectively to run.

Maybe Microsoft are trying to make sure they get maximum value from these by building up a long list of licensees before the patents become useless for trolling.

Now, to reformat the microSD card used in my 'Phone to ext2 or journal-less ext4. I don't need no steenkin' Windows compatibility to attach to my Linux systems!

Actually, interesting point: why don't companies making Android devices ship an ext2 driver for Windows as part of the application suite for their devices, and remove FAT support? After all, most users are used to putting buckets of crap on their Windows systems as soon as they get a new device - why not a new filesystem? I know that there will be problems using cards from other devices, but how often do most people do that? Most people use the microSD card as fixed memory, and I'm sure that many would have to think hard about where the microSD card actually is.

10
0

Moderatrix kisses the Reg goodbye

Peter Gathercole
Silver badge
Unhappy

I was going to say

exactly the same.

0
0

Lenovo Thinkpad X220T 12.5in tablet PC

Peter Gathercole
Silver badge

"obtained through their employer"

Thinkpads have a longevity in line with their robustness, and are very popular second-user systems. If you spot someone with a T30, or a T40 through T43 (and the odd T60 as well), chances are it's an ex-corporate machine doing sterling service for value- and quality-conscious individuals. Just look on eBay to gauge this popularity. A T43 will still do everything most people want to do on the move, especially if loaded with Linux.

I'm glad I agree with Andrew on something, even if it is something as mundane as a choice of laptop!

2
0