
* Posts by Peter Gathercole

1898 posts • joined 15 Jun 2007

'Young people don't want to become like us', say IT pros

Peter Gathercole
Silver badge

Re: IBM

Check your history. No computers, but maybe card sorters. Computers in WW2 were people who did computations, not machines.

0
0
Peter Gathercole
Silver badge

Re: I don't get it?

If the OP was an exception, then I must be really rare. Not only do I enjoy my job, it actually partly conditions my life as well.

I will often come home from fixing the work computers (with the associated buzz of a job well done) and open the laptop (or my new Android tablet), and spend time using computers to do other things, including reading about tech.

And before you ask, I am married, and have children. They might get annoyed by the amount of time I spend with computers, but as I was doing this before the family came along, they accept it.

But I know that things are changing. I just hope that my skillset remains sufficiently in demand that I can reach retirement before I struggle to find work. Only 14 years to go, unless they raise the retirement age again.

(spot the person whose pension provision is suffering)

0
0

Microsoft: Don't overclock Windows 8 unless you like our new BSOD

Peter Gathercole
Silver badge
Boffin

Re: overclocked CPUs are more likely to make a Windows PC crash

Well, not strictly true.

Most CPUs are designed to run at a certain speed. When a particular member of a chip family is first spun, chances are only a small percentage of the silicon will run reliably at the full design speed, but many more will run at a fraction of that speed. So those are marked with the slower speed, and sold as slower chips. But they were still designed to run at the higher speed.

Manufacturers put pretty much every CPU through some testing, starting at lower speeds and increasing it until the chip fails to execute something correctly. They then stamp the chip with the last speed that worked successfully, and move on.

What overclockers do is reason that when a chip runs above its tested and rated speed, the cause of failure is probably heat. So they put a better heatsink on the chip, ramp the speed up above the rated value until it fails, and run it at the highest speed at which it functioned correctly. The better the cooling, the higher the clock speed you can run it at (that is why some HPCs have direct water cooling of the CPUs, and why people like Amari [I believe] used to sell an actively refrigerated PC at one time).
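
In outline, the manufacturer's binning pass and the overclocker's tuning are the same search: keep raising the clock until a stress test fails, then settle on the last speed that passed. A minimal Python sketch of that loop, with a hypothetical is_stable() standing in for a real stress test (running a known workload and checking the results):

    def find_max_stable_clock(base_mhz, step_mhz, is_stable):
        # Ramp the clock until the chip fails the stress test,
        # then settle on the last speed that worked.
        clock = base_mhz  # assumed stable to start with
        while is_stable(clock + step_mhz):
            clock += step_mhz
        return clock  # the highest speed that executed everything correctly

The only difference between the two is the starting point and the cooling: better cooling moves the failure point, and thus the result, upward.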

Unfortunately, another aspect of heat damage is that it can be cumulative. This, I believe, is what Microsoft are trying to say. This aspect has a name: 'cooking' the CPU. Once you've cooked it, the chances of it running reliably at the same clock speed (or even at its rated speed) are seriously reduced.

The most obvious case of this I saw was Thoroughbred AMD Athlon XP2600s (those were the highest-speed Thoroughbred cores, with a 133MHz FSB; faster Athlon XPs were Barton cores with an FSB of 166MHz). These were actually clocked with a multiplier at something like 2.06GHz, but over time, even if you did not overclock them, they stopped performing at their rated speed. You had to gradually step down the speed to keep the PC stable. Replace the CPU, and you were back up to full speed, at least for a few months. I went through three or four before I realised what was going on, and this happened even with overspec'd heatsinks and fans.

0
0

Leap second bug cripples Linux servers at airlines, Reddit, LinkedIn

Peter Gathercole
Silver badge

Re: Phew...

According to various write-ups, you needed a multi-CPU system for the problem to show itself.

0
0

Dimming the lights on smart(arse) TV

Peter Gathercole
Silver badge
Boffin

Damn

you beat me to it!

0
0

'Inexperienced' RBS tech operative's blunder led to banking meltdown

Peter Gathercole
Silver badge
Meh

Re: Problematic updates are normal?

Don't know about RBS, but I've worked in other places in Banking, Government agencies and the Utility Sector.

Most large organizations will not authorize a change unless there is a fully specified back-out plan, together with evidence that the change to the live system has been tested somewhere safe first.

In some places I've been, the risk managers have wanted a "how to recover the service should the back-out plan fail" plan.

The RBS example is evidence of exactly why you have this level of paranoia, and why you spend more time writing up the change than the change itself takes, and why you sit in Change Boards convincing everybody that the change is safe.

Unfortunately, I'm sure that many of us here have complained about how much the process costs, how much time is wasted, and how quickly you could work if you didn't have this level of change control. I learned my lesson the hard way many years ago, and now follow whatever the processes are without complaining.

Maybe the higher management will learn some lessons from this as well. But I somehow doubt it.

9
1

Natwest, RBS: When will bank glitch be fixed? Probably not today

Peter Gathercole
Silver badge
Coat

@sugerbear

It has not always been like this. I've been working in Data Processing (remember that!) for over 30 years, and there was a time when best practice, BS5750 and its follow-ups like ISO9001, was actually valued. But this was back in the days when computers were expensive, and it was seen as worthwhile to invest in people and process to get the maximum value from your high outlay.

Of course, everybody bitched about having to write the documentation, but at least the management bought in to the overall need for it, and factored time into the project plans, because these standards said it had to be done. Sometimes the docs were junk, but often they contained useful information. And the more documentation you wrote, the better at writing it you became.

Nowadays it's all about trimming the fat, over and over again, and if the managers complain, they get trimmed themselves and replaced by others who are happy to comply. This means that the barest minimum is done to get a service kicked over the wall to support, the support teams have no way of pushing back against a poor service, and then this happens...

I'm now seen as a boring old fart, locked in the past, so I'll just go and get my Snorkel Parka and go.

12
0

LCD TV shipments slip for FIRST TIME EVER

Peter Gathercole
Silver badge

It is possible to have TV that is 'good enough'

Once people have replaced all their CRTs and small LCDs, they will stick with what they have until it breaks, and the market will reach saturation.

Once a technology matures (and this is any technology) so that further improvements no longer enhance the perceived customer experience, it becomes driven only by replacing broken instances of the technology. I think we can see this from the dip in computer sales, which will be echoed in laptops and tablets over the coming years. TVs have just had a longer journey, although if you look at LCD TVs, that chapter has been quite short.

I personally can't wait for this time to happen, because we just can't continue making new things with short lifetimes. Will break Capitalism, though!

7
1

That new 'Microsoft GCSE': We reveal what's in it

Peter Gathercole
Silver badge

Re: What's in a name?

Most Polytechnics had a requirement to be more business and industry focused than Universities. Normally, where there was a University and a Polytechnic in the same city, there was a requirement from the syllabus authority that the two establishments put different emphasis on what looked like similar courses. This is why a lot of Polys had courses like 'Business Computing' rather than a pure Computer Science course.

When a lot of these courses were originally designed in the 1980s, business computing was based around Cobol, the most commonly used business language at the time (RPG was also common, but I would not inflict learning that as a primary language on any student!), although there were several BASIC-orientated business systems (DEC RSTS and Pick spring to mind).

For schools, BBC BASIC was a brilliant choice, because it was structured enough to satisfy most programming purists at a fundamental level (OK, while loops were missing, and complex data structures were a bit difficult), it was fast enough even on modest hardware to do quite impressive looking things to encourage staff and students to try ever more complex tasks, and it was accessible to people with very little previous knowledge.

It also encouraged teachers to learn some programming themselves to help teach their non-computing subjects (because it was relatively easy), rather than as just a support for computing related courses. Currently, teachers have no incentive to learn any programming at all because the initial learning curve is too steep.

I believe that there is absolutely nothing that I have seen, then or since, that was better as an introduction to computer fundamentals than the BBC microcomputer and BBC BASIC. Updated for a modern windowing OS, with hooks into the GUI and OS (as it had in RISC OS), it could still be the best thing around.

1
0

How I went from Unix engineering to flogging Google apps

Peter Gathercole
Silver badge

Funny

I don't know how many of the readers remember much about the Digital Equipment Corporation (DEC), but they were involved very early on in the definition of many of the fundamentals that cloud computing is based on.

They were one of the companies involved in creating Ethernet and the Internet (although they eschewed TCP/IP in favour of their own DECnet as the preferred network for many years).

They were early adopters of the concept of mobile workloads spread across several machines (DEC-Cluster and VAX-Cluster).

They had network shared storage before almost anybody else (HSC devices) and things like LAVC (Local Area VAX Cluster).

They were one of the early pioneers of clustered desktop machines (DEC ALL-IN-1 and Pathworks), including network-booted diskless PCs.

And, lest I forget it, UNIX was INVENTED on DEC machines (first the PDP-7, then PDP-11 systems).

I'm puzzled by the statement Ken Olsen made about UNIX, because DEC had commercialized UNIX in its software portfolio for years. UNIX V7/11M was a port of Version 7 available through DEC in the early '80s on PDP-11s; they did a System V port onto the VAX for AT&T (and I believe it was available to other companies as well); Ultrix was available from DEC in the early '80s on VAXen; you had DECstation MIPS-based UNIX workstations in the '90s; and OSF/1 was available as a supported OS, which later morphed into Tru64 UNIX on Alpha-based systems in the same decade. I can't think of a company that had as long a history of UNIX at the time DEC was subsumed into Compaq.

Maybe Ken thought VMS was the only OS needed, but fortunately other people in DEC did not agree. And others thought they had been daft to drop TOPS-20!

1
0

Strong ARM: The Acorn Archimedes is 25

Peter Gathercole
Silver badge
Stop

@STB

Just because there are textbooks does not mean that the way of working is correct or to everybody's liking. Take sociology, for example...

I'm sure that there are many things that are completely insane that I can make rational-sounding arguments to support. Try reading Douglas Adams' books for rational absurd reasoning (although, yes, I know he was a Mac user, but I'll forgive him that because of his genius).

The Mac way of working is fine if you use one or a small number of applications. Not for the many applications on a screen that I use all the time.

And the argument about power users using key combinations is crazy. In my world, where I use Windows, CDE on UNIX, KDE, GNOME, and (god forbid) Unity, you just cannot learn every one of the myriad of key sequences. And in case you ask, I am an Emacs user, so am used to quite complex sets of key strokes.

0
0

Barclays online banking falls over in outage riddle

Peter Gathercole
Silver badge

Re: Sir

Barclays used to be very diligent about their failover tests. I was involved in several tests over the years I worked there. But that was when they actually had people in the UK, and did not rely on it being run from Pune or Singapore.

It used to be a big issue if one of the tests failed, and they generally had a second test pencilled in when the initial one was being planned just for this situation.

Still, it's been over 5 years since I last worked there, so who knows what has happened in that time.

0
0

The Register is rocking on Windows Phone 7

Peter Gathercole
Silver badge

Re: At last

What I want to know is what happened between June last year and early April this year that caused his caps lock key to stick down. There's a gap in his posting history (which actually includes two moderator-deleted posts).

He's been registered since 2010, but up until April this year had only entered a handful of comments. Since then, something has woken him up, and caused him to SHOUT about everything he's commented on.

4
0

Pre-Pet Commodore micro up for grabs on eBay

Peter Gathercole
Silver badge

I actually used one for a University assignment

This was in 1979 or 1980 (I can't remember exactly - getting old).

Was my first experience of 6502 machine code, which became very useful when I got my BEEB a few years later. Had to code a sine-wave generator using an attached D-A converter. Real pain putting the opcodes directly into the keyboard, with no means of storing the program.

I think that this one must be a later one, because the ones that Durham had not only used a calculator keyboard, but also a tiny calculator display as well, mounted in what I remember to be the top half of a Commodore 8 digit calculator. My memory may be playing up though...

0
0

US Judge says IP addresses don't identify pirates

Peter Gathercole
Silver badge

Re: MAC Address -AC@11:46 8th May

"..you probably aren't gaining a lot over what a typical "current" ADSL router will provide.."

You think not?

Double NAT; not having to rely on an ISP's router firmware not to leak information; capture of packet headers and GBs of log files; multiple DMZs; an intrusion detection log; control of inbound connections using SSH to give access to printing and file storage in my home (you can really do a huge amount through SSH tunnels, including CIFS and lpd); configurable DDNS (I've tried the DDNS support in routers, and given up); not needing a syslog server to capture the logs that are too large to be held in the device; traffic logging from individual systems within the home environment (useful for determining who is the traffic hog); a proper user interface (a shell) to diagnose network problems; tcpdump available; and a serial line attached to my RS/6000 to allow me to remotely power it on and off from the Internet. Want me to go on? I don't think this list is complete.
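
(A sketch of the SSH tunnel point, in Python with hypothetical hostnames: it just shells out to the stock OpenSSH client to forward a local port to lpd, port 515, on a print server at home.)

    import subprocess

    # Equivalent to: ssh -N -L 1515:printserver.home:515 me@gateway.example.org
    tunnel = subprocess.Popen([
        "ssh", "-N",                        # no remote command, forwarding only
        "-L", "1515:printserver.home:515",  # local port 1515 -> lpd at home
        "me@gateway.example.org",           # the SSH entry point into the LAN
    ])
    # Point a local print queue at localhost:1515; tunnel.terminate() when done.

The same -L forwarding works for CIFS and most other TCP services.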

I don't use SmoothGuardian, SmoothWarrior or any of the other paid plugins, because I do not run my own SMTP server or multisite VPNs. I find Smoothwall Express quite capable enough for my needs, and have been using Smoothwall to protect my network for over 10 years, long before ADSL routers were as sophisticated as they are now.

Expense? A 10-year-old 700MHz Pentium 3 laptop, an extra USB Ethernet adapter left lying around from god knows when, and a couple of Ethernet cables. Total outlay: nothing, zero, zilch. It burns about 20 watts of power with the screen off, so is not very expensive in energy either.

And, of course, my time.

Why do you assume I download Torrents? There is enough content in iPlayer, Sky Anytime, 4OD, Demand 5, YouTube, as well as the rest of the Internet, and my kids use Steam, Wii and Xbox games a lot. There's plenty of legal content lying around on the Internet. I'm not blocking the RIAA or MPAA scanning my systems specifically, I'm trying to keep my home network safe from anybody who might want to do damage to it. I do not want ANYBODY snooping my network, out of principle.

If I download torrents, it's only fan-subbed anime for series that are not available in the UK. There's a lot of non-H series that have never been available in the US or the UK, so is very difficult to get to see without some form of copyright infringement. If it were available, I would probably buy it rather than download it.

My library of purchased downloads, DVDs, CDs and videos is quite extensive, and I do buy almost all of the content that I have, although some of it is second hand. I take exception to the implication that I am any more of a copyright infringer (even with my admission about anime) than anybody else, and ask: do you live in a glass house? At least I post under my own name!

0
0
Peter Gathercole
Silver badge

@me - incomplete sentence.

"But interestingly, it is possible for the MAC addresses of machines connected to a single router device performing both border routing to the ADSL or cable network, and also DHCP and/or Wireless routing"

is not complete. It should also have "to leak" appended.

0
0
Peter Gathercole
Silver badge

@Chemist

Ah. Silly me. So they did (and yes, I do know what MAC stood for!)

0
0
Peter Gathercole
Silver badge

Re: MAC Address

But interestingly, it is possible for the MAC addresses of machines connected to a single router device performing both border routing to the ADSL or cable network, and also DHCP and/or Wireless routing.

What runs on the router is only as good as the firmware, and as we have seen with BT and their powerline Ethernet devices for BT Vision, it would appear that some ISPs modify the firmware to allow some remote discovery. And I'm not sure I fully trust uPnP not to leak service information externally. So internal MAC addresses (which the router has to know in order to function), internal IP addresses (from DHCP), and possibly system types and functions could all be available to whatever runs in the router's firmware.

Maybe I'm paranoid, but I have an ADSL router which was not supplied by my ISP (and runs NAT), with a Linux-based firewall behind it (Smoothwall, which also runs NAT), and then a wireless hub inside the firewall. DHCP is run by the firewall, not by any of the appliances. This way, I believe that it is almost impossible for anything on the broadband side to get information from inside my network. Now that I'm not relying on wireless as much (I'm using a mixture of direct Cat 5 and, I'm afraid, powerline Ethernet for most network access now - and yes, I generate my own keys), I'm toying with the idea of putting the wireless on a separate DMZ just to give most of my network protection from wireless crackers. I just need to get another Ethernet port in the firewall.

My wife thinks I'm mad, having so much kit 'just to provide the internet', but then I believe (and I check!) that we've been completely clear of intrusion type attacks since I set this up.

0
0
Peter Gathercole
Silver badge

Re: MAC Address

I fail to find any reference to MAC addresses in the article. Has it been edited?

0
0

Microsoft ejects DVD playback from Windows 8

Peter Gathercole
Silver badge

@Oliver

I get good DVD playback on my EeePC 701 using a USB DVD drive running Ubuntu 10.10, and that is really an underpowered PC, being a Celeron clocked at less than 700MHz.

Methinks you need to look at the graphics options. Sounds like you've either not installed the Nvidia restricted drivers (which would be strange, as if that adapter was in the system when Ubuntu was installed, it should pick it up automatically), or something has disabled hardware rendering and the system is using software rendering. Try installing and using the Nvidia driver settings tool from the Ubuntu repository (no, that's no more difficult than installing drivers from the CD that came with your graphics card).

4
0

ARM creators Sophie Wilson and Steve Furber

Peter Gathercole
Silver badge

Re: IBM ROMP vs. ARM

@starsilk. Thanks for the correction. I certainly knew about the multiply-add being missing, but I deliberately avoided talking about the multiply instruction being missing, because I just could not remember.

0
0
Peter Gathercole
Silver badge
Boffin

IBM ROMP vs. ARM

The IBM ROMP chip (aka the 801) was never intended to be a general-purpose RISC processor. It was intended to power an office automation product (think of a hardware word processor like Wang used to sell).

As a result, although it could function as a general-purpose CPU, it was not really that suited to it. It was never a success because, at the time, IBM could not see justification for entering the pre-Open Systems UNIX world. The RT 6150 and 6151 were intended as niche systems mainly for education, although they did surface as channel-attached display front ends for CADAM and CATIA run on mainframes (and could actually run at least CATIA themselves). This changed completely with the RIOS RISC System/6000 architecture, where IBM was determined to have a credible product, and invested heavily.

In comparison, the ARM was designed from the ground up as a general-purpose CPU. Roger Wilson (as he was then) greatly admired the simplicity and orthogonality of the 6502 instruction set (it is rather elegant IMHO), and designed the instruction set for the ARM in a similar manner. Because the instruction set was orthogonal (like the 6502, the PDP-11, and the NS320XX family), it makes the instruction decoding almost trivial. It also made modelling the ARM on an Econet of BBC Micros (in BBC BASIC, no less) much easier, which allowed them to debug the instruction set before committing anything to silicon.

They had to make some concessions on what they wanted. There was no multiply-add instruction, which appeared to be a hot item in RISC design at the time; to keep things simple and within the transistor budget, all they could do was a shift-add (the barrel shifter), which, although useful, was a barrier to ultimate performance, but great for multi-byte graphics operations.

It was also simple enough so that they could design the interface and the support chips (MEMC, VIDC and IOC) themselves, achieving early machines with low chip counts.

This is all from memory of articles in Acorn User, PC World, Byte and other publications. Feel free to correct me if my recollections are wrong.

7
0

Software functionality not subject to copyright: EU court

Peter Gathercole
Silver badge

Re: Whatever is left of Digital Research

And I think that the Digital Equipment Corporation might have something to say about Intergalactic Digital Research's products as well! (CP/M was essentially a functional copy of RT-11.)

0
0
Peter Gathercole
Silver badge

Re: Am I missing something?

Yes.

What they have said is that you can't copyright something that says (using the example of another recent story) "produce a process that takes sea water as an input, and produces fresh water and brine as outputs" (which is a functional specification).

You can patent the method for doing this (reverse osmosis, for example) but that does not prevent someone from using evaporation or distillation to have the same effect.

I know that this would be a patent rather than copyright in this example, but the concept is the same.

Thus the code you write for your product is protected, but the description of what it does isn't. This has been fundamental in the concept of black-box testing and modular design for many decades, and changing this would break almost all modern industrial processes.

Just imagine not being able to replace Oracle with DB2 because the function of JDBC/ODBC was subject to copyright, or even worse, not being able to port from UNIX to Linux because the interface to the C library was subject to copyright.
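
To make that concrete, here is a minimal Python sketch (names purely illustrative, not from the judgment): the abstract class is the functional specification from the desalination example above, and each subclass is a separate implementation that could be individually protected.

    from abc import ABC, abstractmethod

    class Desalinator(ABC):
        """The functional specification: sea water in, fresh water and brine out."""
        @abstractmethod
        def desalinate(self, sea_water_litres):
            ...

    class ReverseOsmosis(Desalinator):
        def desalinate(self, sea_water_litres):
            fresh = 0.4 * sea_water_litres           # illustrative recovery rate
            return fresh, sea_water_litres - fresh   # (fresh water, brine)

    class Distillation(Desalinator):
        def desalinate(self, sea_water_litres):
            fresh = 0.9 * sea_water_litres           # different method...
            return fresh, sea_water_litres - fresh   # ...same specification

Anything written against Desalinator works with either implementation, which is exactly why the interface itself, like JDBC/ODBC or the C library, must stay free to reimplement.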

2
0
Peter Gathercole
Silver badge
Happy

Hey Heyrick

Nice reference to the original Dungeon!

It's supposed to take you to Y2 though IIRC, not generate a CRC.

1
0

How politicians could end droughts FOREVER: But they don't want to

Peter Gathercole
Silver badge

@Roobarb

I always wonder how much of the water that leaks from the pipes actually makes its way back into the groundwater reserves (especially in London), and thus is available again.

Anybody any ideas?

3
0

GCSE, A-level science exams ARE dumbed down - watchdog

Peter Gathercole
Silver badge

AC@16:28

This is a very defeatist attitude. It assumes that all teachers and all students decide in the same year to do next to nothing.

If this does not happen, then all that those teachers and students do is ensure that they fall behind the ones that do try. As what I was envisioning was competition, this is unlikely to happen.

Human beings are competitive, especially kids. Watch them play. They race, they throw, they compete in games of skill (marbles, conkers, hopscotch, computer games). It's coded into our make-up. You just need to engage their competitive nature in school to ensure that the best can be achieved. You also need to make sure that lesser grades than 'A' still have merit.

On a side note, I heard a news item about a boat builder who was complaining at the number of kids who are now sucked into the academic stream who would previously have gone into some form of apprenticeship. He said that we needed bright kids to be the skilled artisans of the future, and all he was seeing, after the competent ones had gone to university, were the kids who were unable to master his skill. It was a very fair point, well made.

1
1
Peter Gathercole
Silver badge
Headmaster

Re: Why do we have a set pass mark for grades?

Marking to the curve is a double-edged sword, and I accept that it makes comparing marks year-on-year more difficult, but you have to ask what the point of the exams actually is.

When I was doing my 'A' levels in the late '70s, the primary reason was so that you could be selected for further education. As there were many fewer university places available, the marking was set so that you could tell who was 'the best' from that year's student population. If less than 10% of the students got an A, these people, who would be the most likely to excel in that subject, got streamed to the best universities. The next tier down could select from the remainder, and so on downward through the Polytechnic system, aiming at people who would excel at HND qualifications but might not be up to a full degree.

It did not matter whether there was grade comparison between years; it was accepted that the best people would always get better marks than the weaker candidates, so the streaming would still work, and the 'right' people would always get to the establishment that best suited them.

Quite often, it was not the grades that determined what type of work someone ended up in, it was how far they went in the education system. Students who had got to University and completed a degree course had demonstrated by that fact that they were worth employing.

It is only now that 'A' levels are intended to give an absolute measure of someone's worth that this problem occurs. Since schools have been measured by results, and the curve has been discarded, it has completely devalued them as a mechanism for selecting the best students. Governments and schools each have an interest in 'improving' the results.

Part of the problem is also political. Educationalists in the '70s and '80s became convinced that non-competitive grading was the only way to avoid stigmatization of kids (abolition of the 11+ and Grammar schools is an example). Schools were not allowed to say to kids "look, you are never going to succeed in becoming a theoretical Physicist, best do some vocational training". All children are given unrealistic expectations by being told that they can achieve anything, and in order to perpetuate this myth, the exams are set so that they think they are good at a subject when in fact they may be only mediocre.

This is just dumb. Life is competitive, and that is never going to change. When you go for a job, the best candidate wins (unless the recruitment process is also dumbed down, but that is another rant!) And people not suited or without an aptitude for a particular job will never get it, regardless of how much they want it.

Setting kids up with realistic expectations, and giving them some taste of reaching their ceiling by allowing some of them to experience disappointment, is a required life skill that they have to learn at some point, and my view is that it should be part of the school experience, instead of a post-University kick in the teeth.

1
0

Moore's Law has ten years to run, predicts physicist

Peter Gathercole
Silver badge

Re: Doubling CPU cores is also doubling transistors

One of the problems that chip designers have is how to use the vast number of transistors that can be fitted onto the large die-sizes at the smallest scale.

They got to the point where more registers, more cache and more instruction units in a single core were not making for faster processors, so they started using the still-increasing transistor budget to put multiple cores on a single die.

There is a lot to be said for a large number of cores on a single die, but this has its own problems with access to memory, cache coherency between cores, and I/O.

Another avenue is putting disparate processors (like GPUs) on the same die, or even System on a Chip (SoC), where all of the functional elements (I/O, Graphics, memory etc) of a complete system appear on a single piece of silicon (think what is going into 'phones and tablets).

In my view, to make use of the vast scale of integration, it's about time we had a fundamental rethink about how processors work. I don't have any new ideas, but I think that listening to some of the people with outlandish ideas might be worthwhile in coming up with a completely new direction to investigate.

0
0

IBM fires Power-powered Penguins at x86's weak spots

Peter Gathercole
Silver badge

@Kebabbert Re: Hmm...

I was not clear about entitlement in my earlier post. There were Linux-only Power 5 systems back in 2005 or so. What I was trying to say was that they were the same systems with the AIX and the IBM i entitlements turned off. They were also significantly cheaper, and also made it easier to use non-IBM branded disks.

My views about proprietary UNIX being on the downward curve have not changed. I have felt this way for most of the last decade. I still see Power having a place for many years to come.

Intel becoming predominant is much more about them having volume and critical mass in the processor market than speed or technology. PowerPC is still a relatively well architected processor, but for many companies developing products, it makes sense for them to use what is fast becoming a commodity product (Intel) rather than something that they have to put significant design effort into. A high-end PowerPC SoC would be interesting, but I don't think IBM would be interested in creating one of these for the server market.

0
0
Peter Gathercole
Silver badge
Unhappy

Re: Hmm...

This 'new' ability to only run Linux is not new. If you have access to a Power 6 or Power 7 system and look in ASMI or on the HMC (and I presume SDMC and IVM) at the entitlements section, there has been an entitlement for both AIX and IBM i for several years. Linux has been an officially endorsed OS by IBM on PowerPC for at least 7 years (they have had agreements with Slackware and SuSE), and there are official distributions of Red Hat and Ubuntu from those companies.

This makes this a re-announcement of an existing policy, probably to remind some existing PowerPC shops that they can stick with Power rather than moving to another processor, even if they are switching OS. I very much doubt that the product announced will significantly differ from other systems that will still run AIX and IBM i.

This does not give any new reinforcement of the policy that you bring up in every discussion about PowerPC or AIX. Both AIX and PowerPC will be here for some time still. That link looks older each time I look at it.

Now. I'm not going to argue with the fact that AIX (along with all proprietary UNIX systems) is on the downward side of the popularity curve, and I do not think that PowerPC development is in a good place at the moment. It's expensive to build new generations of any processor, and I think that IBM is really thinking hard about what to do with the PowerPC line, at least in high end servers. Sometimes I wonder whether IBM really wants to remain in the hardware business at all (products that have been sold include their printer division, their storage division, the desktop and laptop PC business, and most recently their ATM and PoS business).

This policy may extend to their server systems as well. Power7+ is late according to previous product roadmaps, and there is strangely very little pre-announcement information about Power8. IBM has also made statements that their previously loss-leading HPC work has to become more commercial (probably one of the reasons why IBM pulled out of Blue Waters), which means that future generations of IH HPC systems are at risk.

But one of the effects of there being a credible competitor to Intel processors is that it makes Intel aggressively pursue new processors. Once they are only competing with themselves (remember, AMD need chip fabs like IBM's to create their products, because they cannot fabricate processors themselves), the rate of product development will slow significantly, as Intel would want to get more return on their investment.

I am really not looking forward to a point where the only processor game in town is x86-derived, and that is looking like a possibility within a decade unless ARM moves upward.

4
0

Linux Left 4 Dead port fuels Steam for Ubuntu talk

Peter Gathercole
Silver badge

Re: @Z Eden - Games should stay on Windows

In theory, DRM is not against the Linux way of doing things. If you are careful to make sure that you only use LGPL (not GPL) code in your DRM system, then you do not 'pollute' Linux by adding a DRM API above the OS, and you don't have to publish the details of your DRM. The rest of Linux works just swell.

The main reason why this has not been done to date is that the content providers do not trust that the OS cannot be hacked below the DRM API to gain access to their content, whether it is a game, music or a film.

0
0

Ubuntu 12.04 LTS: Like it or not, this Linux grows on you

Peter Gathercole
Silver badge

Re: @DryBones yes, yes, that's all very well

Huh. DryBones deleted their post! Oh well, just as relevant to the OP of the "yes, yes, that's all very well" thread.

0
0
Peter Gathercole
Silver badge

@DryBones Re: yes, yes, that's all very well

The problem with mp3, the DVD formats and many, many other restricted formats is that they are, well, restricted.

The very nature of Free Software, whether you are talking about free-as-the-air or free beer, is that it is either free of restrictions or free of charge. This means that the distro suppliers won't (in the case of as-the-air) or can't (as in beer, because they can't afford it) include support for restricted formats by default.

Blame the people who foist the restricted formats onto us all for this problem, not the distro suppliers.

Of course, earlier releases of Ubuntu would often tell you exactly what you had to do, so that you could personally make the decision to break the licensing conditions or patents on the codecs, a decision that the distro supplier cannot make without opening themselves up to being dragged through the courts.

It was not that long ago that Canonical were being slated by the '-as-the-air' community for paying for licences for H.264 just so they could include it for people like you.

I know that this does not help you, but that is the nature of the world we live in. Would you pay for a version of Ubuntu or any other distro (so that the supplier could pay the license fees) that included all the codecs you need?

4
1
Peter Gathercole
Silver badge

I'll give it a go

I will put this on a partition of my laptop, and boot into it occasionally to see how it is doing. If I can cope with Unity (although from current experience, I won't), I will switch over.

But the problems I had with Lucid stopped me from switching permanently from Hardy until they pulled support from the desktop release of Hardy. Even now, there are significant things that don't work on Lucid, despite defects being open in the Ubuntu fault tracking system.

I have Unity as the presentation layer on a netbook running 10.10, and also on a desktop running 11.10. Later releases may work better than earlier ones, but that does not alter the fact that I believe that it is less suited than Gnome 2 for people who work with multiple overlapping windows on several desktops. I can see it working well for the Mac OS X generation (single application occupying the whole screen most of the time), but that's not me. That way of working is just alien to the way I have worked since twm on X.10 or SunView. I want drop-down menus attached to the window I am working on, not up at the top of the screen.

As for HUD, I've not played with it. It may be helpful, but it sounds to me like it will tie applications into the Window Manager in ways that will be detrimental to application portability, which can never be a good thing.

3
0

Six of the best ways to mess up IT change management

Peter Gathercole
Silver badge
Unhappy

Re: I got one for automation

And the sad fact is that the people who are made to leave are often those that understood the automation, so as soon as something changes, the automation breaks and nobody knows how to fix it, so it becomes a manual process again.

About 7 years ago, I was part of a project automating the build of servers (IBM Power 5 servers running AIX) in a server farm. We could deploy an OS image on a virtualised machine, with all patches, management and security software (and some frequently used applications as well if required) installed and registered, in about 40 minutes from bare metal to handover to the application installation team. It did all the work from base packages, no Golden Image in sight. Brilliant (and it also stunned IBM when they came to see what we were doing!)

I came back to the company a year and a bit later, to find that the people running the process were all low-skill process monkeys who had reverted to manual processes when new machine types came along, and they did not know how to tweak the process (even though it was fully documented!).

Broke my heart!

1
0

Ten... eight-bit classic games

Peter Gathercole
Silver badge
Thumb Up

You needed a BitStik!

It put thrust on the twist of the joystick and had three buttons, making complex manoeuvres less like shaking hands with an octopus!

And when you ran Elite on a 6502 second processor (if you bought a BitStik, then you probably had one of these as well to run the CAD software), you got Mode 1 graphics and none of the 'mode change' interrupt tear when it switched from Mode 4 to Mode 5 three-quarters of the way down the screen.

1
0
Peter Gathercole
Silver badge

Re: Whatever.

Jake,

Did you have a GT40 and Lunar Lander as well? First McDonalds on the Moon!

4
0

Happy 30th Birthday, Sinclair ZX Spectrum

Peter Gathercole
Silver badge

If it really does not have an analogue tuner....

you don't!

Or find an old video recorder (with an analogue tuner), and use that to map the tuner to a SCART connector on the TV.

0
0
Peter Gathercole
Silver badge

What! No mention of the Plus One

This plugged into the expansion slot, and provided the Microdrive interfaces, along with a joystick port, a serial port and some strange network which allowed you to link several similar systems together in a peer network, sharing the microdrives.

My father bought an early 48K system (I had bought my own BBC Model B), and it did indeed have light grey keys like the picture. In addition, it had the 32K add-on board, and also a heat sink that ran the entire width of the system under the keyboard, leading to a warm programming experience.

I never really liked the Spectrum, it was too slow, had poor sound, the screen attributes just felt clunky, and that keyboard!

My Beeb, although supposedly lacking in memory, was just a class machine, and ended up being used for things you just could not consider using a Spectrum for. OK, it was not suited to large dungeon-type games, but I would contend that Snapper, Planetoid, Meteors and Arcadians were great copies of arcade games that the Speccy could not hope to match, and Freefall, Starship Command and especially Elite showed what you could actually do even with a supposed lack of memory.

But the Spectrum was an influential machine, no doubt.

0
1

Look back in Ascii: Computing in the 1980s

Peter Gathercole
Silver badge

@Simon Round

A PDP 11/84 was a single-chip PDP 11 processor (the J11?) in a minicomputer rack (it had a UNIBUS rather than a QBUS, which made it a proper PDP 11 rather than a micro PDP 11 like the 11/83).

It was definitely *NOT* a mainframe, but a 16-bit minicomputer with address extension. IIRC, it was probably the most powerful of the whole PDP 11 family (I mean real PDP 11s rather than the VAX 11).

1
0

Student's Linux daemon 0-day triggers InfoSec Institute outcry

Peter Gathercole
Silver badge

Re: where the buck stops - AC

It depends. If you have decided to include a software product that needs escalated privilege (root or admin), then that is not down to Microsoft; you must take some responsibility yourself, and should also blame the vendor of that package.

If it is software that does not require escalated privilege, but can get it using the package, then that would of course implicate Microsoft as well.

But in your example, it would be better to ask if Microsoft should take any responsibility for something they include from a third party as part of a Windows installation (such as the CD and DVD burning code licensed from Roxio), as this is more like what Linux distributions do.

0
0

The Hardware Hacker's Guide to Home Automation

Peter Gathercole
Silver badge

Am I alone in remembering Red Box...

a company formed by Hermann Hauser in 1985. I know it's since disappeared, but it marketed a through-the-mains control module solution.

I can find almost no references to this in Google, Wikipedia etc. It's amazing how something used to be able to disappear almost without trace before the Internet.

1
0

Thin-client giant Wyse gobbled by Dell

Peter Gathercole
Silver badge

Dell and terminals

"never made a machine that needed a dumb terminal" - this is untrue.

Dell had a brief foray into the UNIX on Intel world in the late '80s and early '90s with systems running SVR3 and SVR4. These systems were shipped with multi-port serial cards, so would have used terminals of the type produced by Wyse.

I can't remember what they used to call them, but I attended an interview for their UK support team. I also can't remember what the outcome of the interview was, but bearing in mind that the team was wrapped up not that long afterwards, it was probably better that I did not work for them.

1
0

The Facebook job test: Now interviewers want your logins

Peter Gathercole
Silver badge

Re: Surely this is illegal

Illegal? No, certainly not under the Data Protection Act. The employers are asking their prospective employees to volunteer their Facebook account details. If they agree, then it is a private agreement between the individual and a company. This is exactly the same as a loan company asking for copies of your bank statement before offering a loan.

It may be counted as discrimination if it can be proved that the individual did not get the job because they refused to hand over details, but that would be a completely different issue.

I immediately thought that the employers were going to turn an applicant down if they actually DID give their login details over, because that would indicate a lack of understanding about on-line security! Ho hum.

7
1

Windows 8 tablet freezes in Microsoft keynote demo

Peter Gathercole
Silver badge

@Chemist Re: So what /should/ have been done?

Reminds me of the old IBM PC error "Keyboard error - Press F1 to continue"

The old ones are the best!

0
0

SUPERCOMPUTER vs your computer in bang-for-buck battle

Peter Gathercole
Silver badge

Re: Units

The 11/780 was the base; it is the machine usually taken as the original 'one MIPS' reference.

When the IBM PC was launched, remember, it was a 16-bit processor in an 8-bit system (the 8088 had an 8-bit multiplexed data bus, needing two cycles to store a 16-bit word), and was only clocked at 4.77 MHz. In the Personal Computer World BASIC benchmarks, the BBC micro could whip the ass off the IBM PC in performance terms, although this should not be taken as a suggestion that Linpack results would be the same.
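
(A back-of-the-envelope Python sketch of that bus penalty, assuming the standard four-clock 8086/8088 bus cycle and ignoring wait states and prefetch; rough figures for illustration only.)

    clock_hz = 4_770_000          # IBM PC clock: 4.77 MHz
    clocks_per_bus_cycle = 4      # minimum 8088 bus cycle
    bus_width_bytes = 1           # 8-bit data bus

    peak_mb_s = clock_hz / clocks_per_bus_cycle * bus_width_bytes / 1e6
    print(f"peak bus bandwidth: {peak_mb_s:.2f} MB/s")           # ~1.19 MB/s
    print(f"clocks per 16-bit word: {2 * clocks_per_bus_cycle}") # 8, twice a true 16-bit bus

So every 16-bit fetch cost the 8088 twice what it would have on a full 16-bit bus.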

I always regarded an original 6MHz PC/AT as about the same processing power as a PDP 11/34, although that was only on a subjective feeling, and a VAX 11/780 was much more powerful than my 11/34.

0
0
Peter Gathercole
Silver badge

Yawn

A real supercomputer is a lot more than just processing power.

The current systems I am working with (still on the top 500 - just) are split (very approximately) equally cost-wise between processing, networking and storage.

The interconnect is important for massively parallel jobs, and there is no point in crunching numbers if you can't store the results. Linpack can be a very misleading benchmark.

0
0

From server to end user: What's coming up for NFS?

Peter Gathercole
Silver badge

Re: Re: Server-Side Copy

Yes, if you are able to make one NFS server talk directly to another, without using a client computer. But I think that if you use SCP with two remote locations, the data still travels through your local machine, in and out (OpenSSH's scp certainly does this when given the -3 option, which routes remote-to-remote copies through the local host).

2
0