* Posts by Peter Gathercole

1810 posts • joined 15 Jun 2007

The IBM PC is 30

Peter Gathercole
Silver badge

I remember

The polytechnic I worked at took a decision in 1982 to install several computing labs full of 5150s. Over the summer we were inundated with the things, with boxes filling all the foyers, waiting to be unpacked. Horrible, horrible long-persistence phosphor in the monochrome monitors, and the Poly decided to ditch the one good feature (the keyboard) for a soft-touch silent Cherry keyboard as standard. Ugh.

I never liked them even then. Because they were floppy-disk-only systems, the students had to book out the software disks from a librarian before they could use them, which meant that we had fragile 5-1/4 floppies moving around like crazy. We got an agreement through the distributor to allow us to keep the originals safe and issue copies. It was not long before most of the students twigged that they could copy the disks further themselves, and then not bother with the booking system.

I was glad when the first PC-ATs were installed, because then we at least only had to worry about keeping the hard disk clean, and repairing the applications when the students trashed them. Introducing a virus on one of the ATs became one of the most serious offences, and we had to run disinfection sessions to clean the students' own floppies to protect our systems and their work. Mind you, the 1.2MB floppy drives on the ATs caused no end of problems when students tried to write to 360KB floppies in them.

This was waaaaaay before disk cloning was thought about, and everything was done according to the installation process, although one of the labs (not one I worked with) was set up with a low-cost (hmmm, relatively low-cost; it was still bloody expensive) co-ax CSMA/CD Ethernet alternative called Omninet, running at 1Mb/s, for file and print sharing.

Interestingly, we had Pick installed on one of the ATs, and Xenix-286 on another.

I still regarded the PCs as poorer teaching tools than the lab of BBC Micros I also ran, and of course 'my' UNIX V7 (and RSX-11M) PDP-11/34e (in Systime covers, with 22-bit addressing, 2MB of memory, and CDC SMD disks to speed it up) was the bee's knees as far as I was concerned, running Ingres to teach relational databases. Knocked Ashton-Tate dBase II (remember that!) into a cocked hat! And it was, of course, far less maintenance work.

The software line-up on the PCs was PC-DOS 1.1 on the 5150s (the 5157s had PC-DOS 2.1 for the hard disk support), with Word 2, Multiplan (the MS spreadsheet before Excel) and dBase II. I couldn't work with Word then, and still find it a traumatic experience now.

We definitely need either a rose-tinted-spectacles icon or an old-fart icon here. I guess I'll just have to use the coat icon. It's the one with the big stretched pockets to hold the 5-1/4 disk box.

6
0

SuperVisor: One hypervisor to virtualize them all

Peter Gathercole
Silver badge
Alien

@Kirbini - It's amanfromMars or one of his clones

This is what he does. You're not really meant to understand it, although there is a message in there somewhere.

There is a school of thought that suggests he writes a comment, translates it to some other language and back using something like Babelfish.

He is a Register treasure!

0
0

‘Pitstops’ can inhibit viruses

Peter Gathercole
Silver badge
Flame

@King Edward 1

I'm not talking about the extinction of the human species because of applied technology, just trying to put some perspective on what we are doing in relying on ever more complex technological interventions to keep an unreasonably large proportion of the population alive.

But more interventions require more resources. I'm sure I heard a discussion on the radio recently suggesting that, at current rates of change, many countries will be spending significant proportions of their GDP on healthcare within 20 years, and the Economist has commissioned a report that presents this as a possibility.

I was actually going to say something about diverse genetic information, particularly the apparently unused parts of the genome, but I was going to put that into the context of pathogens keeping recessive attack vectors in their genomes. You are right, though: it runs both ways (but what is the survival advantage of cystic fibrosis, Down's syndrome, Duchenne muscular dystrophy or even short-sightedness?)

My belief is that we will probably never be able to match the natural forces of evolution, although that does not mean we should stand still. We need to discover replacements for antibiotics; otherwise we could face a new Black Death. MRSA and C. difficile already provide pointers to this possibility, and TB is already on the way back.

BTW, and this is a bit of a diversion: removing fire from our tool-chest cannot happen as long as there is organic material in our environment. But motorised transport? Or the technologies that sustain the Internet? We could lose all of those.

Remember that it is still within the span of a single human lifetime that *ALMOST ALL* of what we regard as modern life has come about (OK, the steam engine and the simple internal combustion engine are more like twice that, but even 70 years ago horses were still the primary power on the land). The rate of technical change has been staggering, and accelerating. There is a chance that we could be knocked back into a pre-industrial society, and it would not take that much. If there were suddenly a critical shortage of energy (say, a cascade failure of the electricity grids caused by a serious EMP overload from sunspot activity; I am not normally a doom-and-gloom monger, but the chance is there, NASA says so), we might lose the capability to rebuild the infrastructure, including the power grids themselves. It takes a lot of serious resource, and a long time, to build the number of large high-voltage transformers that might be needed.

We've used all of the easy-access energy and other resources, and if we were pushed too far down, it would be incredibly difficult to climb back up to where we are without opencast coal, iron or copper ore mining, or easy-to-extract oil.

And don't start talking about solar, wind or wave power. Without an existing technical and transport infrastructure, these cannot be deployed, maintained or utilised. I challenge you to build a working wind turbine generator (with a reasonable capacity) with just the raw materials you can find within a 10-mile radius of where you are. You are not allowed to cheat by using existing motors or alternators, because those are part of the wind-down, not the rebuild, of technology.

The result of a breakdown would be chaos and conflict over resources, and could lead to a new dark age where the remaining resources were controlled by force. It would be impossible to do anything at a national level. In such a world there would be NO Internet, NO national transport system, NO national electricity grid, and the road and rail systems would degenerate remarkably rapidly.

Just think what panic there was in the UK 10 years ago when supplies of petrol and diesel were disrupted. And that happened within a space of just days!

Do you actually remember how useful (or not) personal computers were a mere twenty years ago, before the Internet? Answer: not very. Good for simple games and small data projects. There was still a society, however. Computers are vital for our current way of life, not for our survival. They just make it easier.

But none of this would mean an automatic extinction of the human species. The genetic sieve would probably cut back in, and maybe, just maybe, inherited intelligence could prevent a fall back to the stone age. But people would start dying of what we now regard as curable diseases, merely because the technical interventions were no longer available to keep them alive.

1
1
Peter Gathercole
Silver badge
Alert

Genetic battlefield

Although many of these apparent breakthroughs are interesting, it is worth noting two things.

Firstly, apply the rule of unintended consequences to the breakthrough. It may take some time to find out what else these substances do, and some of those effects may be undesirable, meaning that the technique may never come to anything.

Secondly, the real world is rather akin to a battlefield at the genetic level, with an organism's immune system on one side and the survival mechanisms of an untold number of pathogens on the other. On both sides, the genetic sieve operates.

Even before humans interfere, what we have are the forces of evolution working against each other. If you think about the operation of the genetic sieve on survival, it is necessary to remove the genetically susceptible members of a population to allow the non-susceptible members to survive and procreate. But the same is true on the other side of the battle. The most obvious example is antibiotic-resistant bacteria, where the very few members of a pathogen population that survive the application of antibiotics become the basis of the following generations. This is exacerbated by over-use of antibiotics, and by courses not being completed, but it will operate eventually anyway.

If we interfere, by allowing susceptible members to survive and pass their susceptibility on to their offspring, we are weakening the population as a whole, and building into the species a reliance on techniques and technology for survival. Just think what would happen if modern medicines became unavailable. I don't think we would quite go back to the dark ages (after all, we now know how infections spread, and can take physical precautions), but it would not be pleasant.

But even as we are making the sieve less effective on the survival side, we are adding to it on the other. Evolution will eventually allow the pathogens to work around any barriers we put up, by letting successful members of the pathogen population pass on their success and killing off the unsuccessful ones. Life, as has been noted elsewhere, is incredibly persistent, especially at the bacterial level.

We cannot and will never reach a utopia where diseases are eliminated. Evolution will see to that. And the human species really has no guaranteed right to survive over any other!

3
3

Boffins shine 800Mbps wireless network from flashlight

Peter Gathercole
Silver badge

Memory 'flash'!

I think I've still got that copy somewhere. On the cover it has something that looks like car headlights, used to give focussed transmission and reception. Strange that I should have kept it, because I did not buy PW regularly.

Boy, does my memory work in weird ways!

0
0

Lost 1967 spacecraft FOUND CRASHED ON MOON

Peter Gathercole
Silver badge

grid markings?

I think that the vertical lines are actually artefacts of the film processing, if that is indeed what was done. This seems entirely reasonable and consistent.

Still, if I got pictures back from the developers with defects like this, I would ask for a set of reprints!

0
0

Marketer taps browser flaw to see if you're pregnant

Peter Gathercole
Silver badge

The same browser?

I wouldn't even use the same computer!

0
0

Oracle revs VirtualBox, mushrooms memory

Peter Gathercole
Silver badge
Flame

@AC. I take exception to the HPC comment

I am involved in running a top 500 supercomputer site, and it is reliable. So reliable, in fact, that the customer is saying they want to manufacture outages on a certain service so that their users don't come to automatically expect 100% availability.

The main secret, as far as I am concerned, is the old adage 'if it ain't broke, don't fix it'. It really annoys me when IBM say we *have* to upgrade the software stack to remain in a supported state!

So in answer to the comment, don't tar all services with the same brush.

1
0

Before the PC: IBM invents virtualisation

Peter Gathercole
Silver badge

Oopsie

The R&D version of UNIX was 5.2.5, not 3.2.5. This equated to SVR2 with some AT&T internal developments, including demand paging, enhanced networking (STREAMS [which could have Wollongong TCP/IP modules loaded], RFS), an enhanced multiplexed filesystem (not that I remember exactly what that gave us) and many more I can't remember.

0
0
Peter Gathercole
Silver badge

@david 12

It is quite clear that the security model for UNIX is one of the weakest remnants of the original UNIX development.

In a lot of cases it is actually much *weaker* than that provided by Windows NT and beyond.

But the difference is that it is actually used properly, and has been almost everywhere UNIX has been deployed. It was fundamental to the original multi-user model, and you always had the concept of ordinary users and a super-user.

Multics, VAX/VMS, and possibly several other contemporary OS's had better security models, but the UNIX model was adequate for what it had to do, and was well understood. In fact, the group model on UNIX, with non-root group administrators, has so fallen out of use that it is practically absent in modern UNIXes (ever wondered why the /etc/group file has space for a password? Well, this was it).

When it comes to virtual address spaces (programs running in their own private address space, mapped onto real memory by address translation hardware), UNIX has had this from the time it was ported to the PDP-11. Virtualised memory (i.e. the ability to use more memory than the box physically has) first appeared on UNIX on the Interdata 8/32, with the 3BSD additions to UNIX/32V, and then in the BSD releases on the VAX.

The first AT&T release that supported demand paging was SVR3.2, although there were internal versions of R&D UNIX 3.2.5 which supported this.

1
0
Peter Gathercole
Silver badge
Happy

When considering multiprogramming on S/370

You just cannot ignore the Michigan Terminal System (MTS).

When IBM was adamant that it would not produce a time-sharing OS for the 360, the University of Michigan decided to write their own OS, maintaining the OS/360 API so that stock IBM programs would work with no change, but allowing them to be multi-tasked.

IBM actually co-operated: the S/360-65M was a (supposedly) one-off special that IBM made just for Michigan, providing dynamic address translation that allowed virtual address spaces for programs. This led to the S/360-67, one of the most popular 360 models, and influenced the S/370 design.

I used MTS between 1978 and 1986, at university at Durham and when I worked at Newcastle Polytechnic, on a S/370-168 and an Amdahl 5870 (I think), and I found it a much more enjoyable environment than VM/CMS, which was the IBM multitasking offering of the time.

Look it up, you might be surprised what it could offer. There are many people with fond memories of the OS.

On the subject of Amdahl, they produced the first hardware VM system with their Multiple Domain Facility (MDF), which I later used when running UTS and R&D UNIX on an Amdahl 5890E. During an oh-so-secret, under-non-disclosure-agreement briefing in about 1989, IBM told us about a project called Prism, which was supposed to be a hardware VM solution that would allow multiple processor types (370, System/36 and /38, and a then-unannounced RISC architecture, probably the RS/6000) in the same system, sharing peripherals and (IIRC) memory. Sounds a lot like PR/SM on the zSeries! Took them long enough to get it working.

2
0

Unix still data center darling, says survey

Peter Gathercole
Silver badge

@Kebabbert

I don't want to have a flame match, but much of Sun's more recent innovation happened between 2000 and 2005, with the exception of LDOMs, which look as much like a copy of IBM's LPARs as WPARs were a copy of Containers.

IBM keep adding new features in the virtualisation area, as well as in RAS, parallelisation (which, if you don't work with MPI programs, will be completely invisible to you) and large-system integration and clustering. See the AIX 6.1 and AIX 7.1 release notes, which summarise the new features quite well.

I was not commenting on Power vs. SPARC vs. x86_64, as that is a discussion for a completely different news story. You definitely made some good points, although what makes customers continue to buy a platform is the combination of hardware, OS and applications, not just the best of one. We'll see what happens over the next few years, I guess.

0
0
Peter Gathercole
Silver badge
Thumb Up

@Jim 59

I'm not intending to start an OS war, nor to criticise Solaris (although I must admit that some statements I made could be considered contentious). The original intention of my comments was to indicate where Linux lacks the Enterprise features other UNIXes have, and I was using AIX as the example, possibly in a rather blunt manner.

Doing a bit of digging on Solaris features, I find that Solaris and AIX both have an extensive set, and many of them are comparable on a like-for-like basis. I do not intend to do a comparison, nor do I wish to compare when things were introduced, because there were novel innovations in both OS's that were copied by the other.

I think that if we were actually to compare notes, we might find that the capabilities of both OS's are comparable, with Solaris having an edge on things like the NFS implementation, ZFS and DTrace, and AIX on GPFS, some of the partitioning capabilities and possibly compiler technology.

So it is probably not possible to objectively crown a 'Most Advanced UNIX', and any distinction is likely to be subjective and open to debate. Let's agree that proprietary UNIXes continue to have a place in the datacentre, and encourage our Linux developer colleagues to continue to aspire to produce features that really will make Linux a suitable alternative platform for Enterprise workloads.

In terms of becoming a Linux admin guru, I suspect that it is easier to go from either AIX or Solaris to Linux, rather than the other way round.

0
0
Peter Gathercole
Silver badge
Happy

I think we can agree on this

I like 'Spiritual UNIX'.

On the subject of commercial UNIXes using BSD code, if you publish under a permissive license, people will use it. But that's the plan, isn't it? :-)

Thanks for the interesting dialogue.

1
0
Peter Gathercole
Silver badge
Boffin

Re. Jake

The problem regarding BSD as a Genetic UNIX is that there is no AT&T code in it, after the huge brouhaha over removing any code that was covered by the UNIX V7 educational licence that BSD relied on in the 1980s!

A UNIX educational licence specifically prohibits the use of Bell Labs/AT&T UNIX code in a commercial OS offering, or even for teaching purposes (I was actually a Bell Labs V6 and AT&T V7 UNIX licence holder for a number of years), and UNIX System Laboratories took the Regents of the University of California, Berkeley to court to enforce this when they (UCB) started commercialising BSD. Berkeley did not take out a System III or System V licence to cover any code; they just replaced it, leading to BSD/Lite and FreeBSD.

My view is at odds with what Wikipedia says about BSD in the main article. I regard there to be a requirement for actual code, not just design ideas, in a UNIX for it to be considered a 'Genetic' UNIX.

Also, in order to use the UNIX trademark, it is necessary for a UNIX-like OS to be subjected to, and pass, the Single UNIX Specification (SUS) verification suite. AIX does, as do Solaris, HP/UX, Tru64 UNIX and SCO UnixWare. Linux and BSD do not, so they cannot legally be called UNIX.

Darwin/Mac OS X falls into the same 'not Genetic UNIX' category, even though it qualifies for the UNIX 03 branding (a point I did not realise until I researched it just now).

And Slackware is definitely not derived from any Bell Labs/AT&T code (it's Linux, with GNU's Not UNIX code running on top, like any other Linux).

See http://www.levenez.com/unix, and try to find any feed from an AT&T UNIX into Linux. There are a couple from IRIX, and a few feeds from Plan 9, but I think that these were filesystems, GL and utilities rather than principal parts of the OS.

Don't get me wrong: I have nothing against BSD, as it is a family of fine OS's. But it really is UNIX-like, rather than UNIX or a Genetic UNIX.

0
0
Peter Gathercole
Silver badge
Flame

My 'alternative' universe. What's yours like?

I said up front that I make a living supporting AIX. As it happens, I am currently contracting for IBM on a customer site, and have in the past been an IBM employee for a number of years.

But with my 20+ years of AIX (mostly outside IBM) and over 30 years of other UNIX experience, including 10 years of Linux, in fields such as banking, utilities, engineering, education and government, on systems ranging from microcomputers through departmental minis to Amdahl mainframes, AIX really has been this easy, at least where sensible design (i.e. what the manuals say, plus a bit of common sense) has been followed. And it is still improving! (No, this is not a sales pitch, merely my observations.)

I will stand my UNIX experience up against anybody else's. When I started working with UNIX in 1978, there were about half a dozen UNIX systems in the UK, and the total number of people with any experience of it probably did not exceed 100. And I have worked almost continuously with UNIX ever since.

Back to AIX. No platform is without warts, and as good as I perceive it to be, sometimes you have problems. But where I am currently, we have in my area of responsibility 300+ AIX systems, being thrashed (literally) 24 hours a day, with tens of TB of data changing on a daily basis, managed by a team of 5 people, some of whom have other responsibilities. On the same site we have large Linux and Windows deployments, and there is also a mainframe doing critical work.

Our current uptime on the AIX systems is low, at around 60 days (we had some global power work done in the last two months), but normally runs into the hundreds of days. In those 60 days we have had about 8 disk failures out of an estate of about 4000, all of which were handled without any outage (including system disks). In the past we have had memory failures, with the systems continuing to run until a convenient time to move the workload, and CPUs taken out of service in the same manner. We've also replaced complete RAID adapters (in an HA RAID environment), power supplies and cooling components without losing service. This is, BTW, a clustered environment.

We are just about to embark on replacing hundreds of RAID adapter cache batteries, and we do not expect to take *any* service impact at all during the work.

I would suggest that if the systems you 'have been forced' to use have been a bad experience, either you are not giving the whole picture (for example, if you think you need the latest and greatest Open Source products, that would really be an application problem, not a deficiency of AIX or the POWER platform), or there has not been due diligence in setting them up. Get someone who knows what they are doing in on the installation!

I have often found that sites tend to be partisan. Solaris or HP/UX sites often do not embrace AIX enough to understand how to run it properly, and vice versa. But I do try to keep an open mind, and I do appreciate that I am not as knowledgeable about recent Solaris or HP/UX systems as I am about AIX. But in recent years I have perceived them to be less innovative than the IBM offering, and when I last had serious work to do on them, they just felt like they had been left in the last century when it comes to RAS and sysadmin tasks. But that's my opinion. I'm sure there are other opinions out there.

But I would say that AIX looks destined to be the last Genetic UNIX standing, given HP's and Oracle's current attitudes towards their products, and Linux still has a way to go in enterprise environments to replace it. I hope so, anyway, as I would like to get to retirement age without losing my career!

4
0
Peter Gathercole
Silver badge
Meh

The problem is....

that even though Linux provides a UNIX-like programming and application environment, when it comes to enterprise features, even the best Linux distro is not as easy to keep running as the best of the UNIX platforms.

I'm biased, I admit. I earn my living supporting AIX. But if there is a problem on one of 'my' AIX systems, it reports it to me, gathers the debug information, and on the ones so configured will even call the problem in to IBM. Often, if it is a duplexed part like a power supply, fan or disk, the part can be replaced without taking the service down, and even PCI cards can be hot-swapped on many models. CPU and memory failures can even occur with the system continuing to run. It's not quite NonStop, but...

If mission criticality is an issue, it is possible to configure a system such that the partition can be migrated on the fly to another suitable system. AIX has been able to do live partition migration for a few years now.

It is just easier using AIX than trying to patch together something similar with ESX or other virtualisation technology. This may change over time, but it has not yet, and I cannot see any real evidence that any of the large distro providers are doing anything to address it.

The standard complaint I hear is that some people regard UNIX as 'backward' compared to Linux, but that is the price of stability, and I'm sure that BSD users will say the same. I would say that Linux runs the risk of stumbling while it is running forward.

I do also support SuSE systems, and run Ubuntu on my own systems, and there is no doubt in my mind that if asked (and there were no real financial hurdle), I would recommend an AIX system over a Linux one (but, of course, Linux over Windows).

When I talk to people who have grown up with Linux without having used UNIX, it is clear that without that perspective they just cannot see the difference, and regard Linux as UNIX on the cheap.

1
0

Boffins build nanowire lasers from nappy-rash cream

Peter Gathercole
Silver badge

Moving parts misnomer

I think that what was meant was "discrete components" rather than moving parts.

If you go back to the '60s, a laser was made up of several components, including an exciter, a lasing element and a collimator. They tended to be about the same size as a brick, very power-inefficient, and cost thousands of pounds.

They also had quite short operational lifetimes.

You can still buy lasers like this, but they are mainly used for high power applications.

Solid-state lasers changed all of this. We would not have CD/DVD/Blu-ray, optical communications, laser pointers, or a whole raft of gadgets and toys if they had not been invented.

Not bad for a "solution looking for a problem to solve".

0
0

Deep inside AMD's master plan to topple Intel

Peter Gathercole
Silver badge
Facepalm

Round and round we go, where we stop, nobody knows!

Aren't we at the Itanium/x86_64 point again?

Surely the problem with all of these APUs or GPGPUs is that suddenly we will have processors that are no longer fully compatible, and may run code destined for the other badly, or possibly not at all!

The only thing that x86-related architectures have really had going for them is the compatibility and commodity status of the architecture. For a long time, things like Power, PA, Alpha, MIPS, Motorola and even ARM processors were better and more capable than their Intel/AMD/Cyrix counterparts of the same generation, but could not run the same software as each other, and thus never hit the big time.

Are we really going to see x86+ diverging until either AMD or Intel blink again?

1
1

Pacific rare-earth discovery: Actually just gigatonnes of dirt

Peter Gathercole
Silver badge

Zippy the Pinhead Re: methane

Bearing in mind how potent a greenhouse gas methane actually is, it would be better to put the organics into a digester, extract the methane, and burn it as a fuel. It would then end up as the less damaging CO2 and water, we would have got some useful energy from it, and what eventually goes into the landfill would be less of a hazard.

0
0
Peter Gathercole
Silver badge

@BristolBachelor

I heard Peter Mills of New Earth Solutions on Radio 4 suggest that we should mine the plastics from landfill sites, if only to use them as a fuel, although he actually suggested re-using them, and only burning them when they could no longer be recycled.

I think that we need to examine how disadvantaged people in developing countries pick over their landfill sites to get every bit of useful material, down to the tins, bottles and plastic bags. It's not nice, but it gives these people a way of generating some money out of nothing, while reducing what is in the landfill to just the worthless waste.

I'm not suggesting that we should force people into a scavenger class (although bog knows, making the long-term unemployed do this once in a while might teach them something valuable about their benefits), but it is clear that there are lessons that we 'superior' Western countries could learn from our less fortunate cousins.

2
0

The Register comment guidelines 2010

Peter Gathercole
Silver badge
Unhappy

@Peter Simpson 1

Unfortunately, as matters have panned out, Sarah could and did quit the game!

I shall miss her.

0
0

Solar panel selling scam shown up by sting

Peter Gathercole
Silver badge

Post moderation?

Certainly not!

Just try posting something that breaks the rules, and see whether it actually appears.

Sometimes one slips through and you see a "Rejected by moderator" on the thread, but normally they just never appear.

It just shows that the Register has dedicated moderators.

My bugbear is that sometimes, when I post something that I don't think breaks the rules, I still get the post rejected, and I cannot find out which of the rules the moderator thinks I broke. I know it is down to the moderator, and that their decision is final, but just a single "Rejected because of rule X" would be useful. I had a public exchange with Sarah about this in the comments thread of the news item announcing the rules.

And I have one recent post (critical of the Reg using an inappropriate stock picture on the revolving marquee headline) that did not appear, and was eventually rejected, but it took two weeks to be rejected. Strangely, for that two-week period its status was neither accepted nor rejected, nor was it in 'limbo' (no status). It actually said "Updated on...." This was a new status to me!

0
0

Microsoft bags two more Android patent deals

Peter Gathercole
Silver badge
Meh

Apple use HFS+ already

but only on devices that attach to a Mac.

It used to be that the first time you attached an iPod to a computer with iTunes installed, it would check what the computer was and, if it was a Mac, format the iPod with HFS+; if a Windows system, use FAT32.

I found this out when I inherited a nearly-but-not-quite-broken iPod from my daughter after the dog chewed it, and had to install HFS+ support onto my Linux laptop to use it.

I soon worked out how to swap it to FAT32 (what's the choice when considering two equally patent-encumbered filesystems?), even keeping the music loaded (ain't tar wonderful!).

0
0
Peter Gathercole
Silver badge
Meh

Hmmmm. Forgot about the driver signing process.

I just don't use Windows enough for that to have been immediately apparent.

However, ext2 IFS (http://www.fs-driver.org/) appears to be signed already, at least for Windows Vista. I know that Microsoft could withdraw the signing certificate, but...

0
0
Peter Gathercole
Silver badge
Linux

We desperately need

someone to leak exactly which patents Microsoft are using as the tip of the wedge.

Whilst I believe they should be challenged, the likely ones are the oft-quoted FAT32 patents, #5,579,517 and #5,758,352. Unfortunately, these look like they still have 5 and 7 years respectively to run.

Maybe Microsoft are trying to make sure they get maximum value from these by building up a long list of licensees before the patents become useless for trolling.

Now, to reformat the microSD card used in my 'Phone to ext2 or journal-less ext4. I don't need no steenkin' Windows compatibility to attach to my Linux systems!

Actually, an interesting point: why don't companies making Android devices ship an ext2 driver for Windows as part of the application suite for their devices, and remove FAT support? After all, most users are used to putting buckets of crap on their Windows systems as soon as they get a new device; why not a new filesystem? I know that there would be problems using cards from other devices, but how often do most people do that? Most people use the microSD card as fixed memory, and I'm sure that many would have to think hard about where the microSD card actually is.

10
0

Moderatrix kisses the Reg goodbye

Peter Gathercole
Silver badge
Unhappy

I was going to say

exactly the same.

0
0

Lenovo Thinkpad X220T 12.5in tablet PC

Peter Gathercole
Silver badge

"obtained through their employer"

Thinkpads have a longevity in line with their robustness, and are very popular second-user systems. If you spot someone with a T30, or a T40 through T43 (and the odd T60 as well), chances are it's an ex-corporate machine doing sterling service for a value- and quality-conscious individual. Just look on eBay to gauge this popularity. A T43 will still do everything most people want to do on the move, especially if loaded with Linux.

I'm glad I agree with Andrew on something, even if it is something as mundane as a choice of laptop!

2
0

MS advises drastic measures to fight hellish Trojan

Peter Gathercole
Silver badge

@Old Handle

"(Perfectly legal if the last computer it was used on has been retired.)"

This really depends on the type of Windows licence provided with the old computer. If it's a full retail version, you are completely correct. If it's an OEM version, then the licence restricts you to the system it was purchased with, and some OEM licence keys cannot be used on hardware from a different manufacturer (the installation process can check the BIOS identification string to verify that the machine was made by the manufacturer who bought the OEM licence).

MS will sometimes grant an activation string if you have to replace the motherboard as a result of a system failure, but I've found that recovery CDs in this scenario do not always work with different motherboards, at least for systems from large suppliers who use custom BIOSes. The simple answer is: if you can get a copy of a retail disk, guard it like gold.

I recently found this out when trying to license XP for a VirtualBox VM on my laptop, which runs Ubuntu (VirtualBox loads a specific BIOS in the VM which is completely unrelated to the actual system BIOS). I could not get it to accept the IBM OEM WinXP Pro key printed on the COA on the bottom of the machine until I cloned the BIOS identification strings in VirtualBox.
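For the curious, here is a minimal sketch of the sort of cloning I mean (not my exact procedure), using VirtualBox's documented DmiSystem* extradata keys. The VM name and the DMI values are made-up placeholders; you would read the real strings off your own machine, e.g. with dmidecode:

    # Sketch: copy a host's DMI identification strings into a VirtualBox VM
    # so that an OEM installer's BIOS check sees the expected manufacturer.
    # Assumes VBoxManage is on the PATH; VM name and values are placeholders.
    import subprocess

    VM = "WinXP"  # hypothetical VM name
    PREFIX = "VBoxInternal/Devices/pcbios/0/Config/"

    # Illustrative values only; take the real ones from the host machine.
    dmi = {
        "DmiSystemVendor": "IBM",
        "DmiSystemProduct": "string:2373XXX",  # "string:" prefix is needed
        "DmiSystemVersion": "ThinkPad T41",    # when a value starts with a
        "DmiSystemSerial": "string:99XXXXX",   # digit, per the VBox manual
    }

    for key, value in dmi.items():
        subprocess.run(
            ["VBoxManage", "setextradata", VM, PREFIX + key, value],
            check=True)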

Of course, to a system integrator, providing a full retail licence will cost either them or their customer a lot more money than the heavily discounted OEM licence that Microsoft will sell them. This would put the supplier at a significant competitive disadvantage (I believe in the UK it is in the order of £50 per system) compared to competitors who just use OEM licences, and, as a side effect, it ties them almost irrevocably to Microsoft, who will threaten to withdraw the OEM licence if they do anything Microsoft doesn't like (such as pre-installing Netscape Navigator or Lotus Notes/Symphony [old Symphony, not current], or shipping systems without an OS, or even with Linux pre-installed).

And of course, this also means that MS have a continual revenue stream as people replace their PCs, and MS counts another Windows sale, even if it is an OEM one.

1
0

Brewer bashes Beeb over anti-beer bias

Peter Gathercole
Silver badge

"You drink it, you piss it out, they collect it and serve it to someone else"

I think you're confusing proper beer with that fizzy cold stuff that appears to have almost displaced ale in too many pubs.

Funny, the taste of lager when warm and flat, together with its colour, does remind me of something along the lines of your comment!

2
0

Has UK gov lost the census to Lulzsec?

Peter Gathercole
Silver badge

re: bork bork bork

The data capture system was on the Internet, but it does not follow that the main DB server is. They could have written each census record to tape (although they probably didn't) and then bulk-loaded it into a completely standalone database system.

Most internet-facing systems are a combination of an internet-attached web server of some form, with only enough storage to hold transient data, together with a significant number of security layers, some of which may take part in the transaction, and one or more database servers.

Thus, the database system is only indirectly attached to the Internet, and cannot be directly attacked. One bank I worked at had more than 10 different security zones between the front-end web servers and the systems holding the databases.

The internet facing web server gathers your data, then commits it through secure protocols and intermediate systems to the backend, and then deletes the transient copy.

Normally, the gathering system has no way of bulk-loading data back from the database machine. It may be able to get individual forms back (in order to allow you to edit them), but this has to be done on an individual basis, and often the security checking is done off the internet-facing box.

This means that even if the web-facing system is hacked, without some authentication information for each address it will not be able to load data from the database.

This is large web application design 101.

It is normal for there to be multiple security zones, such that at each boundary it is not possible to use any protocol other than the allowed one to get further into the network (implicit deny, explicit allow).

Much more likely is that, if there really was a breach, it came through one of the routes used for remote system administration, and once in, a path to export the data was constructed, although even this has problems.

As far as I can tell, there are around 25,000,000 residential addresses in the UK. If the census form could be encoded in 8KB, this would give a raw data size of around 200GB. That is not a huge amount of data as things stand today, but I would not want to squirt it through an SSH tunnel over the Internet!
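The back-of-the-envelope sum, for anyone who wants to check my arithmetic (the 25 million addresses and the 8KB per form are the assumptions stated above):

    # Rough size of the raw census data under the assumptions above.
    addresses = 25_000_000        # approximate UK residential addresses
    bytes_per_form = 8 * 1024     # assumed size of one encoded census form
    total_bytes = addresses * bytes_per_form
    print(f"{total_bytes / 1e9:.0f} GB")   # ~205 GB, i.e. "around 200GB"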

0
0
Peter Gathercole
Silver badge
Stop

"security illiterate"

I think that all of the posters who take this statement at face value ought to read some of the UK government security standards. These definitely exist, and they were not written by people who are security illiterate. See http://www.cesg.gov.uk

The problem is that they are difficult to interpret, and are couched in terms that many IT people don't understand (they talk a lot about data crossing security zones rather than about how data is securely stored), and sometimes it seems like there is no real-world help in ensuring that a particular application or solution meets the requirements (government security auditors will often tell you that something is not compliant, but will not offer any advice on how to make it so, nor suggest security mechanisms during system design). Thus implementing a security solution often becomes an iterative process of attrition with the security people.

When I was last involved, it was even the case that some of the Infosec documentation describing what has to be done was classified RESTRICTED, which does not help when you are trying to implement what it says.

Generally, it is not a lack of standards that causes this type of data breach; it is the implementation (often by companies contracted to supply services), or ignorance of the standards by individuals working on such data. Although there should be safeguards, it often takes only one person making a mistake to put complete datasets at risk, especially if there is any external route into the systems implementing the solution.

0
0
Peter Gathercole
Silver badge
FAIL

forced - by law

In case you had not noticed, it is a criminal offence not to fill in a census form when requested, backed up by fines and a criminal record. Is that forced enough for you?

1
0

They shoot mainframes, don't they?

Peter Gathercole
Silver badge

Oops.

I was questioning the claim that the mainframe was never hacked, not the comment. I should have made myself clearer!

The problem is that the term 'mainframe' does not actually describe either a computer or an operating system.

The IBM 9370 running AIX/370 that sat under a desk at one of my previous jobs was a (baby) 'mainframe'. The 3090s running VM/CMS and RETAIN (an OS in itself) that I used when in IBM were 'mainframes'. The Amdahl 5890E running UTS and AT&T R&D UNIX was a 'mainframe'. The Honeywell 6180 running MULTICS was a 'mainframe'. LEO was a 'mainframe'. The IBM 370/168 running MTS that I used at university was a 'mainframe'. The ICL 1904 and 2904 running George that many universities had were 'mainframes'. The DEC Systems 10 and 20 running TOPS were 'mainframes'. I could dig around and find a lot more 'mainframe' systems.

Now, were none of these hacked? I can tell you for a fact that I hacked an Amdahl running R&D UNIX as part of my job more than once, and I must admit to breaking into accounts on MTS on the 370/168 while at university, to get more computing budget to play the original Adventure (come on, it was 30 years ago; there must be a statute of limitations on this, surely!).

This article probably means an IBM mainframe running z/OS or its ancestors, probably using RACF. Even this platform, I'm sure, cannot claim never to have been hacked! I have just found http://www.os390-mvs.freesurf.fr/tenflaws.htm, in which item 9 clearly states that the author gained key 0 protection from a non-supervisor account on MVS. Sounds like hacking to me.

I will freely admit that current mainframes running z/OS are incredibly secure, but I ask again: where are the references stating that a mainframe has never been hacked?

0
0
Peter Gathercole
Silver badge

I would like to know

where the references to back this claim up are!

0
1

Miracle Aliens-style indoor comms built for firefighters

Peter Gathercole
Silver badge

What you need is inertial navigation.

Submarines have used it for years when underwater, and surface ships and missiles used to use it before GPS satellites existed.

In fact, I seem to remember that German V1 and V2 missiles used a very primitive form of this for navigation. A documented way of crashing a V1 was to tip its gyros by flipping it over wing-to-wing using a late-mark Spitfire, Mosquito, Tempest or Mustang, all of which were fast enough to catch a V1.

0
0
Peter Gathercole
Silver badge

scientifiction

It goes back beyond the golden age, and pre-dates the term Science Fiction. I seem to remember Isaac Asimov commenting, in the foreword to one of his short stories, on the argument over the use of the two terms when Astounding Stories was being published (the term is even older than Isaac (RIP), but he was representing the view of Hugo Gernsback, the founding editor).

1
0
Peter Gathercole
Silver badge
Happy

Scientifiction

Haven't heard that term in a long time!

0
0

BMW intros revamped Mini as sporty MG-alike

Peter Gathercole
Silver badge

MG alike?

It's not even mid-engined!

0
0

Intel code guru: Many-core world requires radical rethink

Peter Gathercole
Silver badge
Happy

I was going to mention transputers in my last post

but I decided that it was long enough already!

1
0
Peter Gathercole
Silver badge
Facepalm

This is completely wasted on ~100% of commercial software

In that part of the software market it's all about rapid application development, and sod the efficiency. They rely on Moore's Law to ensure that by the time their software hits customer systems, the computers are powerful enough to cope.

So MIC processors will be completely wasted on commercial boxes, which is where the majority of the systems will be sold.

Even if someone (extremely cleverly) produces an IDE that can generate parallel code to make good use of many cores, much of the workload being run is not suited to parallel execution anyway.

Apologies in advance to those that do, but most new programmers nowadays are never taught about registers, how caches work, or the actual instruction set that machines use, and I'm sure there are a lot of people reading even on this site who do not really understand what a coherent cache actually is.

I work with people who are trying to make certain large computer models more parallel, and they are very aware that communication and memory bandwidth are the key. Code that is already parallel tops out at a much smaller number of cores than their current systems can provide. And the next-generation system, which will have still more cores, may not actually run their code much faster than the current one.

But even these people, many of whom have dedicated their working lives to making large computational models work on top 500 supercomputers, don't really want to have to worry about this level. They rely on the compilers and runtimes to make sensible decisions about how variables are stored, arguments are passed, and inter-thread communication is handled.

And when these decisions are wrong, things get complex. We found recently that a particular vendor-optimised matrix multiplication stomped all over carefully written code by generating threads for all cores in the system, ignoring the fact that all the cores were already occupied running separate threads of the code. We ended up with each lock-stepped thread generating many times more threads during the matmul than there were cores, completely trashing the cache and causing multiple thread context switches. It actually slowed the code down compared with running the non-threaded version of the same routine.
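If you want to see the effect for yourself, here is a minimal sketch of the same trap (not our code; it assumes NumPy linked against a threaded BLAS such as OpenBLAS or MKL, plus the threadpoolctl package). Each outer thread is meant to own a core, but unless the BLAS is pinned to one thread per caller, every matmul spawns its own gang of threads on top:

    # Nested-parallelism oversubscription: N outer threads each run a matmul.
    # If the BLAS underneath also spawns a thread per core, you end up with
    # roughly N * cores runnable threads, cache thrashing and context switches.
    import time
    from concurrent.futures import ThreadPoolExecutor

    import numpy as np
    from threadpoolctl import threadpool_limits

    N_OUTER = 8                          # threads that already "own" the cores
    a = np.random.rand(1500, 1500)
    b = np.random.rand(1500, 1500)

    def timed(blas_threads):
        # limits=None leaves the BLAS free to spawn a thread per core
        with threadpool_limits(limits=blas_threads):
            t0 = time.perf_counter()
            with ThreadPoolExecutor(N_OUTER) as pool:
                list(pool.map(lambda _: a @ b, range(N_OUTER)))
            return time.perf_counter() - t0

    print("BLAS unconstrained:", timed(None))  # oversubscribed
    print("BLAS pinned to 1:  ", timed(1))     # one BLAS thread per caller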

It will be a whole new ball game, even for these people who do understand it, if they have to start thinking still more about localisation of memory. And if they will have difficulty, the average commercial programmer writing in Java or C# won't have a clue!

4
0

MIPS chip slips through Android compliance

Peter Gathercole
Silver badge
Meh

One wonders

what the advantage of a MIPS processor over ARM is.

ARMs are already cheap-as-chips, low power, and easy to license. Several Chinese companies are already making SoC implementations with graphics assists on the silicon, including Rockchip, who seem to produce millions of the things to go into chipod- and apad-type devices.

0
0

SpaceX goes to court as US rocket wars begin

Peter Gathercole
Silver badge

Court cases

They need to at least recover their complete court costs in a timely manner. Otherwise, Lockheed et al. and their proxies will just tie SpaceX up in court until their budget is exhausted.

This is the problem with the US (and increasingly European) legal systems.

Anyway, I'm hoping that they successfully defend their reputation.

9
0

Adobe offloads unwanted Linux AIR onto OEMs

Peter Gathercole
Silver badge
Unhappy

BBC iPlayer - another brick in the wall.

I'm cross not because AIR is going, but because it confirms the trend that is making Linux a less suitable OS for ordinary users.

BBC iPlayer was one of the few platforms for content delivery with content expiry that actually worked reasonably well.

The reason why this is important revolves around the perfectly understandable attitude of the content owners wanting to protect their content, and thus their existence.

Like it or not, free content is not the way the world is going, and the large production companies investing millions in current TV series and films will not license their content to delivery channels unless those channels at least make it difficult to capture and redistribute it. And strictly speaking, get_iplayer accesses the content in a manner against the terms and conditions of iPlayer.

This means some form of DRM. Without a trusted DRM mechanism, you won't get _legal_ streams or downloads of new content playable on Linux. Without big-name current media, those enlightened ordinary users who try Linux will give up. So goodbye to Linux as a credible Windows alternative.

One of the fears the content owners have of Open Source platforms (and this includes open DRM and content delivery platforms, not just the OS) is that someone can take the source and hack it to allow data capture. They will never trust it, so unless AIR remains closed-source (which is perfectly allowable under the GPL/LGPL, provided it is written correctly), it will remain untrustworthy, at least to the content owners.

Whether a closed solution is actually any more secure is an interesting question, but that is a matter of perception and contract law (if you provide some software for a fee, and it fails to do what it is meant to, leading to a financial loss, then no matter what the Licence Agreement says, there may well be legal redress against the provider).

Open source makes no promises, has no contract, and thus has no legal redress.

Sadly, despite efforts from people like Red Hat and Canonical, I think Desktop Linux has now missed the boat. It is clear that the world is moving on to tablet- and mobile-based devices which include some form of content delivery and control system built in from the very beginning. These may be Linux/UNIX-based, but they aren't what I call a general-purpose Linux device, which is what I want.

Sigh!

0
0

BOFH: CSI Haxploitation Cube Farm Apocalypse

Peter Gathercole
Silver badge
Mushroom

Ahh

but you forgot it is not an infinite resolution camera! They use "image enhancement" to sharpen the image. That's the magic!

I keep asking why, when matching fingerprints, the computer shows each record on the screen. Just think how much faster it would be if it didn't have to do that and, say, just did a relational database search on a hash of the loci!

Only topped by the real-time IR satellite images, down to a resolution of about 5cm, that appear in Behind Enemy Lines. I'll also swear that the first missile fired at the F/A-18 is in the air for nearly two minutes, whilst following highly evasive manoeuvres.

3
0

Can Big Blue survive another century?

Peter Gathercole
Silver badge
Facepalm

Duh!

Maybe I'm showing my age, but I used card-punch time clocks (which are normally referred to simply as "time clocks") in one of my early jobs.

Might I suggest that you watch the Warner Brothers cartoons of Ralph E. Wolf and Sam Sheepdog. They always clock in at the beginning of the cartoon, and out at the end. That's a time clock.

2
0

Microsoft warns on support scams

Peter Gathercole
Silver badge
Holmes

Possible answers

1. They might have access to leaked phone number lists, or they may have a copy of a Directory Enquiries CD set from BT, or they might just make them up!

2. They probably don't. It's just a line dangled to make them appear more plausible. Alternatively, they may have some leaked information from BT or your ISP, because it is certain that, at a known time, those organisations know which IP address is allocated to the equipment on which phone line.

3. Windows is ubiquitous. For home systems, chances are that at least 90% of homes with a computer have a Windows variant rather than a Mac, Linux or other system. And even those with Linux probably have Windows installed somewhere as a dual boot. The Reg readership is not typical. My house has all three (Windows in Win2000, WinXP and Win7 flavours, OS X, and Linux), as well as an AIX box.

I suppose that there will be an increasing number of houses that have broadband for just their TV, gaming console, iPad or Android pad. I wonder how the ISPs will cope with supporting such customers? At the moment they all appear to be geared to the assumption that there is a Windows box around.

0
0

Creationists are infiltrating US geology circles

Peter Gathercole
Silver badge

@hammarbtyp

An understanding of evolution was not essential to the creation of the smallpox vaccine. It was developed by observation, hypothesis, prediction, experimentation and conclusion, exactly as the Scientific Method dictates.

Your example of a flu vaccine is not a good one, either. Most flu outbreaks are of known strains, of which there are many. Each vaccine developed is a mix (normally of three strains), and is only effective against a small number of strains (sometimes more than the three targets), and it is the job of the vaccine producers to make an informed guess about which will be the main threats each year. They then prime the process (the vaccines are cultured in chicken eggs) to produce the vaccine for that year. This takes weeks to months to yield enough doses for a large population. If they select the wrong strains, the vaccine could fail to protect at all.

What gets the medical profession worried is new mutated strains of flu, for which they don't yet have a vaccine. It is necessary to isolate the virus in order to culture it to produce the vaccine. By the time a vaccine for a new variant is produced, a sizeable part of the world population may already have been exposed, reducing the value of the vaccine.

1
0
Peter Gathercole
Silver badge
Thumb Up

@Danny 14

And why do you think that you can trust what a half-life means? And how do you know what radioactive decay is? And how do you know how much of the original sample remains? And how do you know you can trust the mass spectrometer? And... and... and ad nauseam.

Until you think about it, most people regard experimentally confirmed hypotheses as truths. Unfortunately, science does not really deal in truths, but in not-yet-disproved hypotheses. This is a fair point if you accept the scientific method, but becomes hard to justify to someone who won't acknowledge it.

You just have to try arguing this with one of these people who are good at it to understand what it is like. They effectively argue that you have to justify the entirety of known science in order to trust any of it, and most people get too cross after a while to argue effectively. I just refused to continue once I realised what their tactic was.

4
0
Peter Gathercole
Silver badge
Pint

Dinosaurs

Creationists do not dispute extinctions. They just don't believe the time scales over which they happened.

I've whiled away many hours arguing about ID and creationism with some otherwise completely rational people, and the most skilled of them have convincing-sounding answers to almost every question you could ask!

Firstly, they argue that the dating techniques are not accurate: since nobody understands all of the hypotheses they are based on, you have to take it on 'faith' that the whole chain of scientific proof is true, and thus their single faith belief (in the Bible) is more trustworthy than many beliefs that previous hypotheses were correct.

Then they will argue that if dating cannot be relied upon, how do we know that the Earth is older than 6,000 years? (I don't know where 10,000 years came from; my friends were certain it was only 6,000.)

Then they will argue the Flood.

Then they will argue 'test of faith' of the believers.

The most recent discussions I had with one of them even allowed for micro-evolution (change of colour, eating habits etc) as a result of environment.

It's all highly amusing, and I still count several of them as friends. But that does not stop me thinking that, at least in their beliefs, they are a bit crazy. But it livens up a beer or five!

Ahhh beery crazy discussions!

4
0