* Posts by Peter Gathercole

2698 posts • joined 15 Jun 2007

What should the Red Arrows' new aircraft be?

Peter Gathercole
Silver badge
Headmaster

Re: No F35 in the list? @Ralph

Um. We haven't had any battleships since 1960. The article quoted is about destroyers, although these are the largest combat ships in the RN until the Queen Elizabeth is commissioned. (Please note, HMS Ocean, Albion and Bulwark are not really combat ships, even though Ocean is the Fleet Flagship).

If you had said "warship" rather than "battleship", you might have been correct.

5
0

Arch Linux: In a world of polish, DIY never felt so good

Peter Gathercole
Silver badge

@AC

If you're going to post something like this, you really ought to post it under your name so that you can add the joke icon.

3
1
Peter Gathercole
Silver badge

Re: Nice distro, but.. @ Teiwaz

So use an LTS release.

Apply updates, yes, but you only have to do a dist-upgrade every four years or so, if you're prepared to skip a release.

2
0

Ghost of DEC Alpha is why Windows is rubbish at file compression

Peter Gathercole
Silver badge

Re: "chose not to serve" @Loud Speaker

That's a very interesting point, one I had not thought about, but the term CISC actually refers to a Complex Instruction Set Computer, and is defined by the number of instructions in the set, and the number of addressing modes that the instructions can use. I would say that the memory bandwidth savings were secondary, especially as most early computers' processor and memory were synchronous.

I'm not sure that I totally agree with the definition of a PDP11 as a CISC (although it was certainly several generations before RISC was adopted), but the instruction set was quite small, the number of addressing modes was not massive, and they were exceptionally orthogonal, so it does not really fit into the large-instruction-set, many-addressing-modes definition of a CISC processor.

What made the PDP11 instruction set so small was the fact that the specialist instructions for accessing such things as the stack pointer and the program counter were actually just instances of the general register instructions, so were really just aliases for the other instructions (you actually did not get to appreciate this unless you started to look at the generated machine code). In addition, a number of the instructions only used 8 bits of the 16 bit word, which allowed the other 8 bits to be used as a byte offset to some of the indexing instructions (contributing to your point about reducing memory bandwidth).

One other feature that was often quoted, but was not true of most early RISC processors, was that they executed the majority of their instructions in a single clock cycle. This is/was not actually part of the definition (unless you were from IBM, who tried to redefine RISC as Reduced Instruction-cycle Set Computer or some similar nonsense), although it was an aspiration for the early RISC chip designers. Of course, now that they are super-scalar and overlap instructions within a single clock cycle and execution unit, that is irrelevant.

Nowadays, it's ironic that IBM POWER, one of the few remaining RISC processors on the market, actually has a HUGE instruction set and more addressing modes than you can shake a stick at, and also that the Intel "CISC" processors have RISC cores that are heavily microcoded!

2
0
Peter Gathercole
Silver badge

Re: "chose not to serve" @Oh Homer

CISC processors predated the adoption of the terms CISC and RISC. While you could say that, for example, a 6502 microprocessor was an early RISC processor, it was not really the case. The first processor that was really called a RISC processor was probably the Berkeley RISC project (or maybe the Stanford MIPS project), which pretty much branded all previous processors as CISC, a term invented to allow differentiation.

As a result, you can't really claim any sort of design ethos for a CISC processor. Saving memory was a factor, but I don't really think that it was important, otherwise they would not have included 4 bit aligned BCD arithmetic instructions, because these wasted 3/8ths of the storage when storing decimal numbers.

You can say the converse. RISC processors, especially 64 bit processors often sacrificed memory efficiency to allow them to be clocked faster.

5
0
Peter Gathercole
Silver badge

byte, word and longword addressing

The earlier 'classic' Alpha processors (before EV56) did not support byte or word boundary aligned reads and writes from main memory. In order to read just a byte, it was necessary to read the entire long-word (32 bits), and then mask and shift the relevant bits from the long-word to get the individual byte. This can make the equivalent of a single load instruction from other architectures a sequence of a load, followed by a logical AND, followed by a shift operation, with some additional crap to determine the mask and the number of bits to shift.
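For illustration, here is a rough C sketch of that load/mask/shift sequence (my own example, not actual DEC or compiler output; little-endian byte numbering is assumed):

```c
#include <stddef.h>
#include <stdint.h>

/* A rough sketch of what a byte load cost on a classic, pre-EV56 Alpha:
   fetch the whole aligned 32-bit longword, then shift and mask the wanted
   byte out of it. */
static uint8_t load_byte(const uint32_t *mem, size_t byte_addr)
{
    uint32_t longword = mem[byte_addr / 4];            /* aligned longword load */
    unsigned shift    = (unsigned)(byte_addr % 4) * 8; /* position of the byte  */
    return (uint8_t)((longword >> shift) & 0xFFu);     /* shift, then mask      */
}
```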

But you have to remember that in the space of a single instruction on an x86 processor, an Alpha could probably be performing 4-6 instructions (just a guess, but most Alpha instructions executed in 1 or 2 clock cycles compared to 4 or more on x86, and they were clocked significantly faster than the Intel processors of the time - RISC vs. CISC).

Writing individual bytes was somewhat more complicated!

I was told that this also seriously hampered the way that X11 had to be ported, because many of the algorithms to manipulate pixmaps relied on reading and writing individual bytes on low colour depth pixmaps.

5
0

America has one month to stop the FBI getting its global license to hack

Peter Gathercole
Silver badge
Joke

...previously hacked...

"But, Your Honor, Rule 41 says that we're allowed to break in to a previously hacked computer."

"Well, Mr FBI attorney, how do you know it was previously hacked?"

"That's simple to prove, Your Honor. We did it."

7
0

Boffins coax non-superconductive stuff into dropping the 'non'

Peter Gathercole
Silver badge
Headmaster

Grand mixture of temperature scales!

On top of the normal Centigrade/Celsius/Fahrenheit issues, the author has interspersed Kelvin, with both Kelvin and degrees Celsius converted into Fahrenheit, but no conversion from Celsius to Kelvin (I know, add 273.15 to the temperature in Celsius).

Technically correct, but confusing, especially as it is too easy to read K as in Kilo if you're not paying attention!
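For reference, a minimal sketch of the conversions themselves (my own illustration):

```c
/* Reference temperature conversions (illustrative only). */
double celsius_to_kelvin(double c)     { return c + 273.15; }
double kelvin_to_celsius(double k)     { return k - 273.15; }
double celsius_to_fahrenheit(double c) { return c * 9.0 / 5.0 + 32.0; }
```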

3
0

Chinese electronics biz recalls webcams at heart of botnet DDoS woes

Peter Gathercole
Silver badge

Re: What else they do then... @Peter Gathercole speaker cable

Yeah. Double insulated, as I said. It will work, but I miss the figure-of-eight cross section cable that I've always used.

Stupid really, that I should want to continue to use what I've used in the past.

0
0
Peter Gathercole
Silver badge

Re: Router Rules @Velv

That scheme (allowing DHCP to allocate addresses and hope that devices get the same addresses even when the lease expires) works until it doesn't, and then the consumer who didn't need to know how things work will be completely stuck when their port forwarding rules stop working.

Most DHCP servers on consumer grade routers allow you to reserve persistent IP addresses for certain MAC addresses. I don't see what is so difficult about setting up persistent addresses that will be fixed. After all, in order to set up port forwarding rules, one has to know something about IP and port addressing.

0
0
Peter Gathercole
Silver badge

Re: UPnP is a red herring in this thread @fidodogbreath

You have a point, but to be hacked, you need a vector to get to one of these devices.

If they are snug and secure behind a firewall (even one in a consumer grade DSL router), it will not be possible to even get to the device to attack it, regardless of how easy it is to hack. The reason why UPnP is being mentioned so much is that it is commonly used to expose the services of this type of device to the internet through a firewall.

Unless you can show that the devices were either on an un-firewalled network or directly connected to the Internet, you're going to have to come up with a way that the attacker could initially get to the device to hack it other than UPnP. Until you do, that is still going to be the most likely culprit.

Whether you like it or not, UPnP is a way for undisciplined devices to expose themselves. It's just a flawed service, and many knowledgeable people agree.

2
0
Peter Gathercole
Silver badge

Re: What else they do then... @A. S. A. C.

Probably WiFi connected room speakers, like the ones SONOS sell, and using UPnP to allow the music appliance to find them. Not my cup of tea, but whatever.

My speakers are connected to their amp via some old-fashioned 5A multi-strand lighting cable. Funny, I tried to buy some cable recently, and got the distinct impression that it was no longer available (at least as mains cable), I suspect because in the UK mains cable now needs to be double-insulated.

All I can get now appears to be specific 'speaker' cable, at stupid prices!

Progress?

4
0
Peter Gathercole
Silver badge

Re: Router Rules @AC

Totally agree re. UPnP and WPS, but if you want to set up the port forwarding rules yourself, you probably have to fix the IP addresses of the servers you want to port-forward to, either with manual IP addresses or fixed DHCP MAC-to-IP mappings.

Changing the password is a no-brainer that people do immediately anyway, isn't it? I even generate my own WiFi keys so as not to use the default, just in case it can be derived from some other information on the router, and hide the routers behind a Linux firewall and separate DSL modem.

The thing is, people I know ask why I do all this, when all they do is plug it all in, and press that little button on the router to register a device. "It's so much easier", they say.

If only I could directly implicate their network as being part of the botnet, I could show them the error of their ways...

1
0

Bad news, Trump. NASty storage is pretty popular, too

Peter Gathercole
Silver badge

GPFS...

...well, more correctly IBM Spectrum Scale Storage, is a block-based protocol (unless you're using the built-in NFS bridge), putting the onus of working out where the storage for files is onto the client.

If you're talking about it working like a NAS, then you've probably come across it in its SONAS storage appliance persona, not in its GPFS client/server software-defined storage persona.

0
0

Smoking hole found on Mars where Schiaparelli lander, er, 'landed'

Peter Gathercole
Silver badge

Re: It's WAR @John Brown

"EARTHMEN, WE ARE PEACEFUL BEINGS AND YOU HAVE TRIED TO DESTROY US, BUT YOU CANNOT SUCCEED. YOU AND YOUR PEOPLE WILL PAY FOR THIS ACT OF AGGRESSION. THIS IS THE VOICE OF THE MYSTERONS. WE KNOW THAT YOU CAN HEAR US, EARTHMEN. OUR REVENGE WILL BE SLOW BUT NONETHELESS EFFECTIVE. IT WILL MEAN THE ULTIMATE DESTRUCTION OF LIFE ON EARTH. IT WILL BE USELESS FOR YOU TO RESIST, FOR WE HAVE DISCOVERED THE SECRET OF REVERSING MATTER, AS YOU HAVE JUST WITNESSED. ONE OF YOU WILL BE UNDER OUR CONTROL. YOU WILL BE INSTRUMENTAL IN AVENGING THE MYSTERONS. OUR FIRST ACT OF RETALIATION WILL BE TO ASSASSINATE YOUR WORLD PRESIDENT."

0
0

Today the web was broken by countless hacked devices – your 60-second summary

Peter Gathercole
Silver badge

Re: Home Router Traffic @Metrognome

UPnP.

Convenient, yes.

Secure, hell no.

One thing it allows is any internal device to knock inbound holes in your firewall, without your knowledge or approval.

I appreciate that without it, some consumers would have to learn something, but the downside is that all the IoT devices that sit inside home networks and use UPnP can potentially become participants in a DDoS attack like this.

Do consumers worry about this? Well, probably none of them understand what it was that caused the Dyn DNS outage, and fewer still know whether their house was part of the cause.

But should we? Definitely yes, if we want to maintain a functional and usable Internet!

I run my firewall with UPnP disabled, so it still works inside my network for device discovery, but the firewall itself can't be controlled by it, and there's not much that either I or the other members of my family have noticed that doesn't work.

2
0
Peter Gathercole
Silver badge

Re: Home Router Traffic

The problem with Shields Up! is that by default it only checks the reserved ports 0-1023.

You can use it to do custom scans, but the standard check will not tell you whether UPnP has opened up ephemeral ports through your firewall, and once these are set up, they could allow C&C channels to any device.

But most edge firewalls allow outbound connections to a co-ordination server anyway (it really would be a pain to have to configure individual ports on the firewall), and once a session is established, will allow return control requests (remember TCP/IP sessions are bidirectional) even without UPnP (never wondered how your network-attached, print-from-anywhere printer works? Well, this is it).

Of course, it is necessary to get a foothold in the network for UPnP or outbound requests to be made, but who knows what is baked into the firmware of these IoT devices from China? I tend to run a Linux firewall, and do a sweep of the ports currently in use at the firewall, but it's difficult.
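As a rough sketch of the sort of sweep meant here (illustrative only - the router address and port range are assumed, and a scan from inside the LAN only shows the LAN side; to see what is open from the Internet you need an external vantage point, which is what the Shields Up! custom scan gives you):

```c
/* Minimal TCP connect() sweep of a port range. A real scanner would use
   non-blocking sockets and timeouts; this is just to show the idea. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    const char *host = "192.168.1.1";              /* assumed router address         */
    for (int port = 1024; port <= 2048; port++) {  /* a slice of the ephemeral range */
        int s = socket(AF_INET, SOCK_STREAM, 0);
        if (s < 0)
            return 1;

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port   = htons((uint16_t)port);
        inet_pton(AF_INET, host, &addr.sin_addr);

        if (connect(s, (struct sockaddr *)&addr, sizeof(addr)) == 0)
            printf("port %d open\n", port);        /* something is listening there   */
        close(s);
    }
    return 0;
}
```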

It's all a bit of a mess. I favour using the vulnerabilities themselves to run destructive code on the IoT devices to break them, but that is illegal in pretty much all jurisdictions.

5
0

10x faster servers? Pop a CAPI in your dome

Peter Gathercole
Silver badge

Re: What an unfortunate naming

This is just a collision of current and former acronyms. It happens all the time.

It's getting increasingly difficult in a particular field to come up with an acronym that is meaningful and can be pronounced as a word, because they've already been used.

I play a game with my family that if they use an acronym in a conversation, I deliberately misconstrue what they've said by alternative expansion.

For example. ISA (these are all real)

Industry Standard Architecture

Internet Security and Acceleration (Microsoft ISA server)

Independent Schools Association

Individual Savings Account

International Standard Atmosphere

International Students Association

International Studies Association (not the same as above)

International Society of Automation

International Songwriters Association

International Society of Arboriculture

International Survey Agency

International Sign Association

International Sustainability Alliance

... and there are others, if only I could be bothered to go further down the hit list.

2
0

How do you make a qubit 10 times as stable? Dress it up for work

Peter Gathercole
Silver badge

Impenetrable descriptions of Quantum Computing

The problem I have with trying to understand this technology is relating it to real world problems.

I think I can understand that you can store information in a qubit, and extract that information again, but what I find difficulty with is manipulating that information in a meaningful way, and extracting the result, which is the essence of what computing is about.

Mind you, I also struggled with Fourier Analysis, which formed the basis of a now defunct branch of (analog) computing, and which IIRC (from my University maths course decades ago) represents an observable artifact (like a complex waveform) as the sum of a series of simpler mathematical functions, which you can then manipulate using either algebra or vector mathematics to model how the artifact will behave under certain circumstances (although FA has alternative uses, I understand).
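For anyone who wants it, the standard textbook form of that idea (nothing beyond the usual definition): a periodic waveform f(t) with angular frequency \omega can be written as

    f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left( a_n \cos(n\omega t) + b_n \sin(n\omega t) \right)

where the coefficients a_n and b_n say how much of each harmonic the waveform contains.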

But I've just not seen something that describes how the data in the qubit is manipulated.

I can see something of what Destroy All Monsters is trying to say, in that it is the interaction of multiple qubits that enables you to get meaningful results from a combination of more than one piece of information, but I just can't see how this interaction is controlled. And without control, the whole field appears useless. Maybe I just don't understand what the aim is; the only thing I can see is that it's not applicable to what we used to call 'general computing'. You're not going to be doing your word processing on a quantum computer!

On the subject of people needing to understand maths to be able to even approach the field, what a lot of people forget is that mathematical notation is just like any other jargon. If you don't even understand how the notation works, no amount of reiteration written in that notation will mean anything.

But then again, I realized a long time ago that there was a real ceiling on the amount of understanding I would ever achieve in maths once it got into apparently abstract areas.

2
0

British jobs for British people: UK tech rejects PM May’s nativist hiring agenda

Peter Gathercole
Silver badge

Re: Don't forget us oldsters! @YAAC

It's funny. I've pulled more all-nighters in the last 6 months than I have in the previous 15 years!

The reason why I do it is because it needs to be done, and my kids are now grown up so that I can afford the disruption to my life that some of my colleagues who still have younger kids cannot.

I must admit I am accompanied by a significant number of 20-somethings who have not yet acquired responsibilities outside of work.

Oh, while I'm waiting for the time for my work, I do origami, not table football!

1
0
Peter Gathercole
Silver badge

Re: @Tom Paine

That is a good point. But the way I rationalise it is by considering the on-going employability of people in the UK.

All the time that tax, benefits, health and other infrastructure services, education etc. are funded within an 'arbitrary regional border', I believe that pay and skills should also come mainly from within that arbitrary border.

If it were the case that full movement allowed people at the lower end of the demographic spectrum to get worthwhile jobs in other countries, then it would be great. But what is happening, and will continue to happen, is that people move from poorer countries to richer ones, displacing the lower skilled locals from the workforce because they are prepared to work for lower wages than the locals.

This occurs in two ways. One is the obvious one where locals just don't find work because it's being done by people who are prepared to work for less. The second, and much more subtle one, is that businesses in the UK don't bother training people from the UK. They just bring them in from abroad, saving themselves all of the costs of training.

What this leads to is a de-skilling of the local workforce, which perpetuates the situation where businesses can't recruit skills from the local workforce and so bring even more people in from abroad. It will become a self-perpetuating issue, whilst all the time money could well be leached from the UK economy.

But it is not just the UK that is harmed. If you look at countries like Poland, Hungary and even Ireland, such a large number of their young people who have got skills marketable in richer countries leave that they are starving their own countries of the skills they need!

I saw a documentary on Ireland that stated some villages effectively don't have any residents between the ages of 18 and 30, because they've all gone somewhere else to find work.

I would love to see a totally egalitarian world, where the resources of the world are equally shared, but we are so far away from that, with no possibility of ever getting there without some world-changing event, that we cannot afford to consider it.

It's absolutely pointless having a country with a 'healthy' economy for the shareholders and owners of the companies, if the rest of the population is un-employed, un-employable, or are effectively wage-slaves of the rich.

3
0
Peter Gathercole
Silver badge

Re: I would be curious...

The figures are there (just as a quick example). But it would make no sense, because the UK is run as a single economic region, with different areas generating different levels of product. If you split the country out, you will certainly find some areas actually running at a deficit, being propped up by London.

If you want to go down that route, maybe you should ask what Scotland would look like outside of the UK, now that oil revenues have fallen below their very optimistic budget calculations at the Scottish referendum.

1
3
Peter Gathercole
Silver badge

Re: @Me.

On second thoughts, add in the SNP to a Remain coalition party in a General Election, and you may get closer to an overall majority, but it would still require a lot of people with disparate ideas campaigning together, and the resultant government would be squabbling amongst themselves about issues other than Europe.

2
0
Peter Gathercole
Silver badge

@qwertyuiop

You clearly don't understand how a referendum works, do you?

While this particular referendum on leaving Europe was not actually legally binding, I cannot see any government not implementing it, because it would simply crush any notion of the UK being a democratic country.

There is no way back. We have to leave. The only way that it could be avoided is by this government calling a general election before invoking article 50, and the election being won by a party explicitly campaigning on not leaving Europe.

I could see a centrist Labour offshoot campaigning in coalition with the Lib Dems. and possibly the Greens on this agenda, but I don't see that they would win a majority, although they could probably gain the largest share of the vote of all groups. But they would not have the clout to actually form a government able to carry out the policy.

It is unlikely that the Conservatives campaigning on such an agenda would win (it would show severe hypocrisy) and would split the party, so it would be as much political suicide for the current incumbent as calling the referendum on such a blunt question in the first place was for the former one.

But Theresa May has said that she won't do this, so it's moot.

8
0
Peter Gathercole
Silver badge

Re: Don't forget us oldsters! @AC

Sorry, parts of that comment did concentrate too much on young people. I agree that cross training is required as well, but I think that it was covered to a degree by the "training their own people" statement at the beginning of the comment.

I am in my mid-50s and do face the same problems you do. I hope that my core skills will remain in demand long enough to see me through to retirement, but I am beginning to wonder.

4
0
Peter Gathercole
Silver badge

@Jamie

The problem that you point out is true, but probably not because of what you have stated.

Both sides in the referendum campaigned on negative agendas. As a result, the people in the exit camp were campaigning against Europe, rather than for a particular model of Britain's future. It thus became a broad church, covering a range of people from those who are out-and-out xenophobic, all the way to those who are prepared to live in a global society, just not one controlled by Europe. There really was no one issue other than Europe itself that the voters could agree on.

And for the campaign that wanted to stay, they really did overplay the fear card, claiming consequences once we had left that are unlikely to pan out to the extreme cases they envisaged, so much so that many people just didn't believe it.

As a result of these broad campaigns, we have a situation where even if you just look at the leavers, whatever deal is chosen eventually will upset significant numbers of people. It's unavoidable. There is no way there can be a solution that will satisfy a majority of the UK population.

The referendum was just not thought out properly, and was framed in a way that, because David Cameron thought he could not lose, did not actually ask what kind of Britain people wanted.

I would like to have seen a third option of "kick it down the road for a few years". I believe that most people would have voted for that rather than stay in or leave, which would have shown Europe that the UK was not happy with the way things were going, and put them on notice that we really could leave.

I myself have no problem with working with people from other races, creeds and colours. I have no problem with them living and working here as long as they are doing something that cannot be provided by the local workforce (but please see below about my views on training).

My view was that I no longer wanted to be shackled to a group of countries that were becoming inward looking and so bureaucratic that it was going to become impossible to achieve any change, while the rest of the world was moving in a different direction. So I voted to leave. I did not want to be part of a United States of Europe, which I believe is the direction it is going.

I understood that in the short term, there would be economic costs, but my belief is that within 10 years, as long as we don't end up with some membership-lite deal, the UK will pull ahead of Europe.

Where I do have a problem is with a predilection for businesses to employ people from abroad in preference to training our own young (or even older) people to do the jobs. This is not specifically a European issue, but it was complicated by free movement within Europe. The lack of training upsets me greatly, as I want to invest in future generations. As I look at it, it should be a no-brainer to make sure we have an adequately trained and experienced native workforce. We have to do this for the sake of our children and children's children.

15
6
Peter Gathercole
Silver badge

I'm aghast!

The thing that the article completely misses is that employers are not prepared to invest in training their own people or new workers!

You cannot expect the education system, even the further education system, to generate people with the knowledge and skills that all companies want. It takes 5+ years to get a course up, running, and generating trained graduates. By the time this has happened, the skills required have changed. Windows and .Net a few years ago, Linux and Python now, who-the-hell-knows-what in the future. It's education's job to provide a good fundamental grounding, not fully rounded workers. That comes later as they gain experience.

It has been the case in the past, and will remain the case in the future, that to get the skills needed, you can either go out and fight in the market for a limited resource, or you can invest in apprenticeships and cross-training of the people available. Find someone who has aptitude, fresh out of education, and mould them into what you need.

And you know what? Training UK people keeps the money in the country, enriches the available skills, and generates jobs for UK residents. It will also continue the UK's reputation for being somewhere people come to from abroad to learn, something that has been increasingly eroded in recent years.

Businesses have become lazy, inward facing, and too focused on profit. They need to take on some responsibility themselves, instead of wanting to steal doctors, nurses, IT skills and many other things from countries that need to keep them themselves.

33
1

SSDs in the enterprise: It's about more than just speed

Peter Gathercole
Silver badge

Re: Not all about performance @AC and spinning HDDs

Whilst it is true that there are storage systems that do spin down disks when they are not in use, this is not the norm. The problem with spinning disks is that, frequently, the most likely time for them to fail is when they are spun up after being spun down for a period of time.

I recently was in charge of an estate with ~5000 disks in it. It ran 24x7, and was only powered down if there was work being done on the site UPS. When we powered it back up, we inevitably had problems with some of the HDDs, leading to disk-recovery procedures needing to be invoked.

I know that in the archival cold-disk vaults, they end up with a complex RAID/mirroring environment such that data is stored on *several* hard disks, so that when the inevitable failures happen, the data is still safe. You also have to cope with bit-rot.

4
0

ZX Spectrum Vega+ will ship on time, developer claims amid doubts

Peter Gathercole
Silver badge

Re: Arer we surprised? @steamnut

It's a completely different manufacturing environment now. If you have the design and the board layout, and are not using anything particularly esoteric, then there are companies in China who are queuing up to build these things for you.

Just look at Alibaba, and there are adverts for companies that will build to your design, and at the volume that you want. It's a bit late for volume shipments before Christmas (shipping by sea can take 6 weeks), but it may still be possible to get them air-freighted in if you can accept the cost.

Chances are these things are in a container somewhere on the ocean. Let's hope it is not on a ship owned by Hanjin Shipping Company!

0
0
Peter Gathercole
Silver badge

If I remember my brief encounter with a ZX81, I bought an assembler called ZASM or something similar, where you loaded the assembler program into memory, then added the assembler source as REM statements, and then ran it. It assembled the code into more REMs somewhere else in the program. You then deleted the assembler and the source, and wrote any extra BASIC around the machine code.

I already knew 6502 assembler, but I learned Z80 assembler on this setup. It kept me busy until my BBC Micro arrived.

2
0

Sinclair fans rejoice: ZX Spectrum Vega+ to launch October 20

Peter Gathercole
Silver badge

Re: Hmmmm...

If you've never played Elite on a BEEB with a Bitstik, then you've really missed out!

Excellent control, and the throttle on the twist joystick.

Oh, and a 6502 second processor helped (full screen in mode 1, all galaxies in memory, and smoother as well).

1
0

If we can't fix this printer tonight, the bank's core app will stop working

Peter Gathercole
Silver badge

Re: Some time ago... @ShortLegs

I love all this retro stuff, so it's no bother to explain. What I am talking about here is the 80 column IBM or Hollerith card that was common in the '50s, '60s and '70s, but was mostly obsolete by the '80s. I learned to program PL/1 on them, and also used them in my first job.

A punched card was exactly as it sounds. It was a card measuring 187mm by 83mm. It had rounded corners, with one or other top corner cut off to allow quick card orientation checks.

They had rectangular holes punched in an 80 column by 12 row matrix, with each column representing one character. Not every manufacturer used the same encoding type on the cards.

Generally speaking each card represented one line of information or a line of source code. Most card punch machines would punch the holes representing the character, and would also print the character at the top of the card, so that you could read the card.

Most languages (but not all) reserved the first or last six columns to hold a card number, so that cards could be ordered. When punching the cards, it was normal to step the card numbers by 10, so that it was possible to insert cards into the deck without having to repunch the whole deck. Most compilers would allow you to miss the numbers out altogether, but if you ever dropped the card deck.....

Punching the cards was done on a desk-sized machine with a keyboard and a transport mechanism which fed blank cards from a hopper (normally on the right), punched them one column at a time, and then moved them into a hopper on the left (cleverly turning the cards over to keep the order). The better card punches also allowed you to copy a card one column at a time until you got to an error, and then punch the rest of the card.

Alternatively, you could get hand punches that allowed you to punch the correct holes for each character by hand, but you had to be desperate or very clever to use one. Some people also claim to have blocked cut holes by carefully placing a 'chad' (the cut-out rectangle of card) into a hole to 'edit' a card, but I'm a bit sceptical myself.

Column 7 in most languages was a card-type indicator. Typing a C in this column normally meant that the card was a comment. This may sound wasteful, but the comments would be included in the listing from the compilation. In some languages, like Cobol and RPG, the card-type indicator was used to identify the section that the line was in (Input, Calculation, Output, Exception).

Cards were read in a reader that pulled one card at a time (at a rate of 2-10 cards a second) across either a mechanical, air-jet or optical sensor, which worked out which holes were punched. You kept card decks together with elastic bands, and just for security, most people would use two bands, in case one broke.

Creasing a card, or allowing cards to get damp or worn at the edges, would often cause them to jam in the reader, normally meaning that the whole deck would be rejected. Having a deck rejected meant that you missed your slot and had to put your job to the back of the queue once the problem was rectified. Remember that systems were mostly single-tasking (at least for compilation streams), so they processed the queue in sequence, one at a time.

I'm not sad to see the end of those days, but having lived through them, I feel that it imposed a rigour that would benefit modern programmers if they had to experience it.

0
0
Peter Gathercole
Silver badge

Re: Some time ago... @earl grey

Thanks. I don't necessarily differentiate between chain and band printers. My bad.

It was probably a band printer, but the basic operating principles were largely similar.

0
0
Peter Gathercole
Silver badge

Some time ago...

I know some of us do, but I wonder exactly how many of the readers really know what a "chain printer" actually is.

Back in the day, these were capable of 600+ lines a minute. The speed at which the chain carrying the letters moved was such that if the chain broke, it could seriously damage the heavy metal acoustic cover. There was a reason why there were cover switches to prevent the printer operating when the cover was open, and it was not just the noise.

In my first job, the single chain printer that they had printed all of the council's rates demands and payment books, the car parking fines, the council house rent and backlog reports, the payment cheques, etc. - well, pretty much anything that the council sent out in the post, bar what the typing pool typed.

The printer was on a special I/O channel, and the particular model of Sperry Univac 90/30 could only have one of these printers attached. The printer itself was the size of a large wardrobe, and could not easily be swapped out.

They were a significant part of the cost of the computer system, and it really was a severity one call if it did not work for more than a day. Oh, and when it was down, none of us programmers could work either, because we needed it for the printed output from the batch (decks of punched card) compilation runs for the programs we were working on.

29
0

BSODs of the week: From GRUB to nagware

Peter Gathercole
Silver badge

Re: Linux Kernel Panic @fajensen

<sarcasm>Gosh, systemd is more magical than I knew! It can be loaded from a missing filesystem, where init can't and then mystically re-populate a newly formatted root filesystem to make the system work! I must quickly switch all my systems to systemd immediately</sarcasm>

I'm a SysV (and earlier Bell Labs UNIX) init diehard. I've lived with it for 30+ years. I can (reluctantly) cope with Upstart, because it still does the init directory thing, but I'm really thinking of trying to find a Linux that does not include systemd, because it's too complicated and non-deterministic.

Failing this, one of the *BSDs beckon.

0
0
Peter Gathercole
Silver badge

Re: Linux Kernel Panic @2 ACs after my previous post

The last real software BSOD I saw was caused by the driver for a graphics adapter in Windows XP, although I have seen the equivalent on Windows 7 caused by not having the correct drivers installed after swapping the motherboard on a system.

I admit that in neither case was it the primary OS's fault, but the device drivers. But compare the driver model from Windows NT 4 onward, where the graphics driver can bring down the whole OS, with Linux, where most of the graphics code runs in user mode: the screen crashes, but the rest of the OS keeps functioning, so you can either re-start the graphics subsystem or gracefully bring down the OS. The latter is, IMHO, preferable.

Ah, wait a minute. An update caused my middle son's Windows 7 system to fail to boot last weekend. It was fixed, I understand (he fixed it himself), by re-loading the Nvidia driver for his 980.

BTW, I'm old enough that I have worked at the source level on UNIX kernels, and seen (and caused!) real kernel panics caused by code faults.

I've also seen panics (well, 888-102 and -103 errors) on AIX (you see this sort of thing in a support centre).

There used to be a standard X11 screensaver that showed the crash screens of several types of system, including SunOS/Solaris, Macintosh OS (OS 9 or earlier) and Windows, amongst several others. It used to really surprise people when they saw it unexpectedly. It was also a challenge trying to identify them all as they cycled round.

IIRC, the Windows mock crash screen had NCC 1701 encoded as one of the fault codes!

0
0
Peter Gathercole
Silver badge

Re: Linux Kernel Panic

Whilst I understand that it would be interesting to see a real kernel panic as a result of a code fault, this isn't one.

It looks as if the ReiserFS filesystem on device md(9,1) - if I read that correctly - is corrupt and cannot be mounted, and then the RAM filesystem that was loaded during the bootstrap cannot be unmounted. This looks like it is the root filesystem, and as a result, when the kernel that Grub has already loaded tries to start init, it can't.

From this point there's not a lot the system can do, and it takes the very sensible decision to panic with an appropriate message.

There's no fault in Linux, so it's not what I would call a real panic.

14
2

Greybeards beware: Hair dye for blokes outfit Just For Men served trojan

Peter Gathercole
Silver badge

Fortunately, I don't need it

I am going slowly grey, but will welcome it, as it will finally make me look closer to my real age.

<smug>Still have a full head of hair that is naturally mostly its original colour, with the odd grey one mixed in, at closer to 60 than 50</smug>.

My daughter is getting married shortly, and I've been told I look younger than her fiancé, even though I'm 27 years older! The wedding photos are going to look strange.

5
0

Ubuntu tees up OpenStack on IBM's iron

Peter Gathercole
Silver badge

Re: Linux with IBM ?

Well, a couple of decades plus a bit.

AIX on POWER (original RIOS processors) was released in 1990 (26 years ago) and running before that in 1989.

Not that long ago, I found a binary compiled on AIX 3.2.5 (probably compiled on a 7013 model 530) which ran on AIX 5.3, and I don't see any reason why this particular binary would not work on AIX 7.2.

These systems are not members of the Linux only POWER family, according to the link to the brochure. They will run AIX and IBM i/OS as well as Linux.

And I think that you may find that some 360 mode programs no longer run on the latest zSeries, as they've slowly been retiring some of the older execution modes on the later mainframes. I think that everything 370XA mode or earlier will not run without being re-linked.

2
0

Pluto's emitting X-rays, and NASA doesn't quite know how

Peter Gathercole
Silver badge

Re: Star Surgeon @DNTP

This is the second James White Sector General reference in as many weeks.

One of my school friends claimed that James was his uncle. Never found out whether it was true or not.

1
0

HP Inc's rinky-dink ink stink: Unofficial cartridges, official refills spurned by printer DRM

Peter Gathercole
Silver badge

Re: My printer not HP's @wordmerchant

Well, if a NutriBullet wouldn't do it, I don't know what would. Vicious things, they are. Designed to smash seed kernels.

1
0

Emacs and Vim both release first new updates in years

Peter Gathercole
Silver badge

@prinox - 4-function calculator

Well, calculator. Emacs has one of those (ESC-X calc-keypad).

Has more than four functions, however, and is an RPN calculator by default.

0
0

Pass the 'Milk' to make code run four times faster, say MIT boffins

Peter Gathercole
Silver badge

Re: OpenMP ... does not have a compiler @Frumious

Mpicc is a wrapper around various C compilers, and is normally used with gcc.

It's really not a compiler in its own right, more like a pre-processor.

The MPI component will take a number of inline directives, and generate some C, unroll some loops into parallel threads, do map reduction and some other optimisations, and add some glue code and hooks to library routines to handle passing data between local and remote threads.

Having done this, it then passes the resultant intermediate source to the backend (real) compiler which generates the linkable code which is then passed to the linker to resolve all the library calls.

If you remember, the earliest C++ 'compilers' worked in exactly the same way as a pre-processor to a C compiler, but I would suggest that C++ is more of a complete language than OpenMP.

0
0
Peter Gathercole
Silver badge

Re: Software? Or maybe hardware. @Brewster

Fair enough. I missed the line on "common algorithms" in my own reference. No excuses there, but, again, I will wait for the final paper to see which algorithms these are.

I've worked with people doing serious work with HPC systems, and provided technical support for those systems. They put a lot of effort into trying to make sure that data is already in an appropriate place before it is needed. Generally this is done by localizing data in chunks, where much of the required data is close to other related data, in blocks aligned to a common block size. They try to make sure that data is accessed in regular ways, so that the pre-fetch and cache hardware will have it lined up for when it is needed. They make sure that the minimum amount of data is requested between cores and systems over the interconnect.
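As a toy illustration of that kind of access-pattern care (my own example, nothing to do with Milk or the paper): C stores arrays row-major, so loop order alone decides whether the pre-fetch hardware sees a nice sequential stream or a large stride on every access.

```c
/* Cache-friendly vs cache-hostile traversal of a row-major C array. */
#define N 1024
static double a[N][N];

double sum_row_major(void)      /* walks memory sequentially */
{
    double sum = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += a[i][j];
    return sum;
}

double sum_column_major(void)   /* jumps N * sizeof(double) bytes per access */
{
    double sum = 0.0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum += a[i][j];
    return sum;
}
```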

When you are iterating over a loop many millions of times, saving a few instructions per iteration, and avoiding cache misses and context switches, can save you a huge amount of resource.

Whilst I can see that there is the possibility that Milk could make savings for some particular types of problem, all it is really doing is eliminating efficiency problems from non-optimized code. It's no substitute for experienced programmers, but it is held out as a carrot for organizations wanting to reduce the skill set of their programmers. IMHO, having worked with some very skilled programmers, it will take a long time before this can realistically be achieved.

It will be very interesting to see whether adding Milk to the Unified Model for weather forecasting will result in faster code.

0
0
Peter Gathercole
Silver badge

Re: Software? Or maybe hardware. @Brewster

OK, I'll wait for the full paper to be published, but there's a number of things in the quotes in the article that make little to no sense (although it could be that the journo writing the original article has not fully understood it).

Let's start with "when a core discovers that it needs a piece of data, it doesn’t request it"

Um. A core. A physical processor. How is this conditioned by the Milk compiler without wrapping the load instruction with a whole load more code? Because that is all a compiler can do!

Then we've got "adds the data item’s address to a list of locally stored addresses".

And then what? Put the thread to sleep? Hello, expensive context switch. How's that going to improve latency and throughput?

and "and redistribute them to the cores"

The compiler can sort this out? It's going to have to know a huge amount about the shape of the system the code is going to run on before it generates the code. And most systems leave the placing of data in memory to the kernel and the hardware memory translation mechanisms. Milk's going to be able to control all of this, simply?

And anyway, the article talks first about a new language, and then talks about modifications to OpenMP. Which is it? OpenMP is not a language in its own right, and it does not have a compiler; it's more like a pre-processor that expands a number of in-line directives in the code into something from which the following compiler (Fortran or C) can generate linkable code.

I don't know whether you've ever used OpenMP, but it already does significant data reduction. It sounds like this "Milk" language is merely a modification to OpenMP, and not a language in its own right.
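For anyone who hasn't seen it, a minimal OpenMP directive looks like this (a generic textbook example, nothing specific to Milk): the pragma is expanded into threaded code, and the reduction clause tells the compiler how to combine the per-thread partial results.

```c
#include <stdio.h>

int main(void)
{
    double sum = 0.0;

    /* Ignored by a compiler without OpenMP support; with it (e.g. gcc
       -fopenmp), the loop is split across threads and the per-thread
       partial sums are combined by the reduction clause. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 1; i <= 1000000; i++)
        sum += 1.0 / i;

    printf("sum = %f\n", sum);
    return 0;
}
```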

Ah. I found the original MIT release. One of the things it says is "manage memory more efficiently in programs that deal with scattered data points in large data sets". It also says nothing about "common algorithms". It talks about a rather generalized, sparse-data problem that is not really the type of computing that HPC systems and OpenMP are suited to.

I'll look for the full presentation, but I doubt that it will claim the type of panacea that the Register article claims.

7
0
Peter Gathercole
Silver badge

Software? Or maybe hardware.

So instead of just fetching the data, you're going to catch the request (in software?), and defer the code waiting for the data until the data can be more efficiently fetched.

Just how many more very expensive context switches will this generate? And where are the other threads that can be dispatched once all of them are waiting for an 'efficient data fetch'? And how will that affect the latency of individual threads?

I'm sure that there are some highly threaded applications with unpredictable data flow where this could be a benefit, but on the brute-force codes that make up most HPC applications, which mostly process data in predictable ways - especially Fortran code, where the standard dictates how data is stored in arrays - this is likely to be completely unneeded extra code that can only slow the total throughput.

I think I'll let the cache pre-fetch hardware provide all the speedup most real 'Big Data' requires.

1
4

Using a thing made by Microsoft, Apple or Adobe? It probably needs a patch today

Peter Gathercole
Silver badge

Re: iPatch

Previous history. In the late 1970s and 1980s, AT&T Exptools (and probably other tool packages - the V7 Addendum tape springs to mind) had a utility to edit i-nodes on a UNIX file system that was called ipatch. I probably have a paper copy of the man page somewhere.

2
0

Delete Google Maps? Go ahead, says Google, we'll still track you

Peter Gathercole
Silver badge

Re: eh? @RAMChYLD

Unfortunately, cell phones have to advertise where they are and be tracked, so calls can be routed to the cell where the phone is. There's no way the 'phone system could probe the cell network of the whole world to locate a phone.

So it's axiomatic that a functioning mobile phone can be tracked without GPS, WiFi, NFC or Bluetooth.

The difference is that the cell location information is normally limited to the service providers running the cell network, and agencies with legal access to that information. The combination of WiFi/Mobile Data and GPS/AGPS makes this type of information available to all apps with some tracking function.

I run with all comms except the phone disabled, but mainly because of battery life.

7
0

'Oi! El Reg! Stop pretending Microsoft has a BSOD monopoly!'

Peter Gathercole
Silver badge

Re: Machine Operating System @davidp231

Thanks for the link to RetroClinic, and the detailed startup information.

My BEEB has an issue 3 board, ordered on the first day that they accepted orders; it was delivered with OS 0.1 and still has Basic 1 (the OS was upgraded when the disk upgrade was fitted). I never really used the B+ or B+128, but I did have access to one of the first Masters available (with a 3xAA holder for normal batteries, not a single battery), though no longer. I also ran an Econet Level 3 fileserver with a 10MB hard disk for a network of machines with many of the available peripherals.

I think the DataCentre may make it onto my Christmas list! I hope it doesn't clash with the ATPL Sideways ROM board, but I suppose I could take that out.

0
0
Peter Gathercole
Silver badge

Re: Machine Operating System @8271.

Yes. How memory plays tricks.

0
0
