Podules were for Acorn Archimedes machines, although I suppose that the A3000 was branded as a BBC micro. Not the classic 8-bit 6502A based BBC A and B though!
Bearing in mind how much noise is made by environmentalists about oil contamination if a ship founders, one wonders how much liability insurance would need to be carried by a company operating nuclear merchant ships.
If a nuclear warship is damaged/founders, you expect (and this has been the case so far) that the nation operating the ship will carry the burden of recovery and clean-up of the wreck. There would need to be some guarantee that sufficient resource would be available to prevent nuclear contamination from a merchantman.
If there were arguments and delays after such an accident involving a merchant ship, leading to nuclear contamination, then the outrage that would follow would make the Deepwater Horizon look like a mere storm in a teacup.
I'd also expect that these nucular wessels (sorry, couldn't resist) would also have to be operated far from Somali pirates!
BTW. The description for the warning exclamation icon reads "All hands man the pumps, run for the hills, batten down the hatches and so forth", so I thought it was appropriate.
While it worked, it felt unnatural, and the keys, particularly around the edges, were a bit unreliable.
It just didn't feel right with no physical keys, and the flatness meant that anybody used to typing got aching hands quite quickly. I never saw him use it much in the following months. It was a clever and impressive gadget though.
I found a better solution for sending texts quickly was to link my (then) Nokia phone to my laptop by IR, but that would rather defeat the purpose when using a smartphone. I got a Palm Treo, installed Graffiti, and used that instead. I wish I could use a stylus and Graffiti on my current Android phone (I know, both are possible, but Graffiti appears to have been pulled from the Android Market at the moment!)
Our Choices, which became a Blockbuster during the big change a few years ago, is fairly clean and tidy, staffed with enthusiastic people, and is never empty of customers.
I live in a rural town, with the nearest large retailer over 25 miles away. We've lost Woolworths and Currys (games), and our small WH Smith does not sell a large range of DVDs, or any games at all.
The Blockbuster is the only remaining local outlet with a reasonable range of DVDs and games to purchase, and has the added benefit of renting both. The only alternative is the restricted range of DVDs that our local Tesco sells, which, as this is a rural branch, only runs to the top 30 or so DVDs and even fewer games, or the £5 bargain bin titles.
If we lose our Blockbuster, with the really poor rural broadband provision and no cable TV, it will make the area even less attractive to the resident youth. We are already seeing a serious upward change in the age demographic as the young leave when they can.
Yes, we can buy from Amazon or Play. Yes we can download (slowly and encumbered by usage caps). Yes, we can get titles from Love Film, but the postal service is already going down the tube. We appear to only get every other day deliveries of mail as it is, and this will only get worse.
What am I supposed to do when I get one of my kids asking for a rental or a game for the weekend? Or for an extra game controller? Modern kids just don't seem to understand "it'll be here next week". They want it NOW.
Part of the agreement, if I remember correctly, is that you are only permitted to use the keys that you obtain through Technet while you maintain the subscription.
As soon as you stop paying the subscription, you need to buy new, full licences, or uninstall the software. So you can expect that any keys you used will fail the Windows Genuine Advantage test at some point in the future.
This was the main reason I never took advantage of the apparently favourable conditions offered. I did not want to tie myself into a long-term agreement with MS where they could repeatedly demand money from me on their own terms.
I would laugh at you all for dancing with the devil, if it were not so tragic.
to see if bone would gradually permeate the foam over time.
My thoughts are that if it is similar in strength to human bone, it may break, but if over time ordinary bone grows through, it may be able to heal with ordinary bone, without further surgical intervention. Now that would be revolutionary. It may completely change the lives of people who currently have to go through serious bone grafting after injury.
I am not in any way associated with a medical profession, and I am just idly speculating, so I'm sure someone will say that this can't happen. Still...
...that this works fine if all you want is what they offer.
As soon as you get tied in, and decide, say, that you want to use a product that they do not offer, such as a particular new network type, or a better HSM product, or a particular data visualization package to integrate with your MIS, you suddenly find that you either can't, or will compromise the gold-standard support they offer by changing the software stack.
This is a nirvana for corporate sales droids, especially if they can talk to the customer managers rather than their techies (it's amazing how often I have found that businesses will allow the managers to talk to salesmen without having techies present nowadays).
You end up getting steered down a path that ties you in to a vendor's products, and then, when you can't get what you need working, to the vendor's consulting arm, all of which will be chargeable.
I think that the problem is that the Typhoon is one of the new generation of inherently unstable aircraft that are only rendered flyable by the fly-by-computer avionics.
I'm sure that if the avionics were still operating, it would be possible to land, but if the avionics were out, there would be virtually no chance of any type of controlled landing. Hopefully, redundant systems and power supplies should be installed to keep the systems running in case the primary power systems fail.
I'm not sure whether what I'm saying here is a joke or not.
If you consider that civilizations are cyclic (which is not actually proven, but it's a good theory), then you need to leave some easily extracted mineral resources to allow a future civilization to progress through the equivalent of our early industrial age. Otherwise, once our reign at the top of the stack (last in, first out) comes to an end, future civilizations will get stuck in the pre-industrial age. An amusing fictional illustration of what might happen is described in "The Mote in God's Eye", by Larry Niven and Jerry Pournelle.
They're not going to be able to jump straight to a solar/green/nuclear lifestyle without going through some pretty low grade technology! Unless you are suggesting a gap of 100 million years to allow fossil fuels to accumulate again.
Mineral oil is actually a quite important lubricant, which may be more important in the future than using it as a fuel. Vegetable based oils are too light without significant processing (which takes energy).
Still, I'm just ranting as an office-chair speculator here.
You really put your Model M in the dishwasher? Wow. I religiously strip all the keycaps off and wash them, and then apply a stiff brush and wet-wipes to the rest. Your solution sounds much quicker. How long do you leave it to dry?
When it comes to large numbers of similar devices, you need to look at the MTBF figures. The more of a particular device you have, the more frequently you will see one fail. I would have to look up the exact maths, but I don't think it's a simple ratio. Where I am, we have over three thousand 300GB disks, and we lose a couple every month. This does not cause a problem, because they are in a large number of separate RAID arrays with two hot spares per 10-disk array (12 disks in total). We could still be operating with three disks down in an array.
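For what it's worth, the usual first approximation treats the failure rate as constant, so the expected monthly count scales with the number of drives. A quick Python sketch, assuming a quoted MTBF of about a million hours per drive (a ballpark figure, not our actual spec sheet), lands close to what we see:

```python
# Back-of-envelope: expected failures ~= drives * hours / MTBF, assuming a
# constant failure rate (this ignores the bathtub curve at both ends of life).
drives = 3000
mtbf_hours = 1_000_000        # assumed figure, typical order of magnitude for enterprise disks
hours_per_month = 730

expected_failures_per_month = drives * hours_per_month / mtbf_hours
print(round(expected_failures_per_month, 1))   # ~2.2, i.e. "a couple every month"
```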
Memory, on the whole, seems reasonably reliable, but we have multi-bit parity on the systems, together with bit-steering (the joys of Power6 systems). This means that it is not the built-to-a-budget memory that most people put in their Wintel servers. That price premium must really buy you something.
I used to be able to use my PalmPilot m100 to control TVs quite well using OmniRemote (a free, downloadable app a long time before Apple got in on the act), but when I switched to a Sony Clie and then a Palm Treo, the LEDs were only strong enough to control the TV from a distance of about a metre. Not too good for a remote. I would have been able to buy a hardware add-on, but that would have spoilt the lines on the PDA/Phone.
It was a good idea though, having a fully customizable remote able to control all of your media appliances. Shame it was not successful.
Interesting about consoles, though. My son uses his DS as a wireless controller on his Wii. Not sure if this is specific to particular games, but it allows multiplayer games to be played when you don't have enough controllers.
Most organizations (including the Reg in the past) would just post a correction, with a note about the correction in the edited article, and then silently moderate out the obvious comments before they become visible.
I think I'll bookmark the comments on this one, because it is the first time I've seen 5 of the first 7 posts deleted after the fact by the moderator. Unfortunately, like Gillian McKeith and possibly William Hague, you may find that the Internet is an unforgiving medium.
After being put firmly in my place about rejected comments by yourself, I know exactly how serious you are. But I do like to tease...
Someone has to pick up the cost of the loss of capacity after a pack has been recharged a hundred or so times. Leasing makes more sense than owning, as nobody will complain about swapping one that is new for one that is near its end-of-life if they lease it.
You would still have some uncertainty about range, and you would probably have to have some rules about when a battery pack would be retired or reconditioned. Would you make it 90% of original charge capacity, 80%, 50%?
I'm all for this technology, but there are serious wrinkles that need sorting out, not the least of which is the cleanness of the electricity. Also, could the power grid cope with thousands of battery packs drawing tens of amps at the same time? For example, if a battery charging station has 50 packs charging at any time, which draw 30A each while charging, we're talking 1,500 amps, or at 230V, 345 kW per station. That's a lot of power. A typical UK house draws about 0.4 kW averaged out across the year (according to EDF), so the charging station would put the same load on the grid as 800+ houses.
These figures are rough, based on the Tesla's battery pack, which apparently takes 3.5 hours to charge at 70A at 240V (thanks Wikipedia), mapped onto something that is more likely to be found in the UK urban environment.
How many petrol stations serve as few as 150 customers in a day (assuming packs take 8 hours at 30A to charge)? And you would have to be pretty certain that the packs could not be nicked for their scrap value. And how large would the station have to be?
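For anyone who wants to poke at those sums, here they are written out in Python, using the same figures quoted above (all of them rough assumptions rather than measurements):

```python
# Rough charging-station load, using the ballpark figures from the comment above.
packs_charging = 50       # packs on charge at any one time
amps_per_pack = 30
volts = 230
house_average_kw = 0.4    # EDF's averaged per-house figure quoted above
charge_hours = 8          # assumed time to charge one pack at 30A

station_kw = packs_charging * amps_per_pack * volts / 1000
print(station_kw)                            # 345.0 kW per station
print(round(station_kw / house_average_kw))  # ~862 houses' worth of average load
print(packs_charging * 24 / charge_hours)    # 150.0 packs (customers) per day
```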
So, interesting ideas, but currently, fossil fuels still rule, as indicated by the icon.
Either Apple has been stockpiling iPhone4s for a while, or half of China must be making them.
I cannot believe that they can sustain nearly A QUARTER OF A MILLION activations a day for any length of time. At that rate, the equivalent of the whole of the UK population could have an iPhone 4 within a year, even if they were only making them on week days!
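A quick sanity check on that, taking the UK population as roughly 62 million, which was about right at the time:

```python
# Activations at the reported rate, counting weekdays only.
activations_per_day = 250_000            # "nearly a quarter of a million"
per_year_weekdays = activations_per_day * 5 * 52
print(per_year_weekdays)                 # 65,000,000 - about one per UK resident
```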
Excuse me, UnixWare is the closest direct linear descendant of the Bell Labs Version 5/6/7 UNIX that first appeared on university PDP-11 systems around 1976/77. It is the closest thing to the main UNIX line that exists.
The line runs as follows
Bell Laboratories UNIX Timesharing System Version 5/6/7
System3 (Sometimes written System III)
SystemV (Sometimes written System 5)
Bell Laboratories itself, which could possibly make a claim to the main line, switched to Plan 9 after UNIX Edition 10, sometime around 1990.
Of course, there has been cross-pollination, especially from the BSD releases, but these were made almost completely AT&T code-free around 1993. BSD 4.4 could be regarded as an intellectual descendant, but you would have to question whether it still counts as a genetic UNIX.
SunOS/Solaris, AIX, and HP/UX are vendor-owned branches of the original code, and Linux is not even a distant cousin, although there may be some illegitimate blood from dalliances with UNIX family members in the past. The family resemblance is striking, however.
SCO picked this up via UNIX System Laboratories (USL), Novell, Original-SCO and Caldera-SCO.
The term is Genetic UNIX. A good diagram can be found here http://www.levenez.com/unix/. Enjoy.
So this will mean that the stewardship (term chosen very carefully) of the genetic UNIX code, originally controlled by Bell Labs, will definitely be changing hands, leaving SCO with just Linux interests?
I'm not sure how this works, bearing in mind that it is the use (or misuse) of the UNIX code itself that was the subject of the legal action. Surely, it is not possible to sell the very subject of the action, while keeping the action going. It makes no sense!
It is the case, of course, that the IP rights and license revenue will remain as-is, with Novell.
What I would be worried about is Darl being backed by someone, and buying back the UNIX business and attempting to start the whole thing over. Or even Microsoft (perish the thought).
Aaaagh, horrible thought. Oracle!!!!!
When I read the conditions on the upgrade edition, it suggested that the license key for XP would no longer be valid once you had put the upgrade version on the system.
I wanted to have a dual boot system, because one of my sons was not convinced that all his games would work on 7. I was worried that Microsoft would be able to cripple/disable the WinXP instance using Windows Genuine (dis)Advantage, so opted for the full retail version of 7, and a second hard disk.
Ironically, I believe that he found everything worked, even from the original installation, so he has never started XP since 7 was installed!
Meant to AC my previous comment, but what the heck. It's fairly innocuous.
Hang on, who's that beating down my door?
CDs and DVDs officer? Yes, I have several hundred scattered around. Where do you want to start? Oh! you want to take them all away! Can I have a receipt please? And please note the ones you can't read are not encrypted, they're almost certainly ones that have failed to burn, and I forgot to throw away. No. Really. They don't have encryption keys for them. No. NO. Not the cuffs!
Help. Call a lawyer!
Whilst I in no way condone this person possessing these pictures, this story indicates how difficult it is to live in the modern world.
Ignorance of the law is no protection, but if this is the case, surely there is an onus on the government to make sure that the most pertinent features of existing and new laws are publicised to give people a chance to comply?
A case in point is the Coroners and Justice Act 2009 (I think), often discussed in these forums, which is largely unknown to almost everyone I talk to in my social circles. This includes a significant number of family friends (mainly of my daughter's) who have an interest in mainstream manga and animé.
I'm sure that there are titles regularly stocked on the shelves of high street booksellers that contain pictures of seemingly young people (often, but not exclusively, girls) in compromising positions. And the fact that they are drawings makes no difference to the legislation. Even if the worst of the titles are removed from the shelves, there will be copies in people's private collections or in the second-hand market.
Why are there not warnings on the bookshelves, on Amazon, and everywhere else that people who may own such titles will see them, telling them to check their collections?
And I am still not clear about how the law can be operated. If you take general landscape pictures on a beach where young children are playing, and without any intent, capture an image of a child in a state of undress where the child is in an accidentally provocative pose (like falling on their back at the moment that the picture was taken), this picture may fall foul of the law.
Now don't get me wrong, I do not often take pictures on the beach, even though I live in a seaside town. But I could not guarantee that, in the thousands of pictures I have taken over the years, I do not have something like that either on paper, negative, or stored on CD-ROM.
And things that may have been regarded as totally innocent 60 or 70 years ago, even in quite prudish times, may also fall foul of this legislation. Acquiring such pictures before the legislation came into effect is no defence either.
So how many of us have checked our photo collections? I know I was shocked to find that one of my archive CDs has pictures taken using my first digital camera, which I let my two youngest sons play with when they were about 6 and 4. Some of these pictures are explicitly of their nether regions, taken I presume for a giggle, in the way that young kids do. I copied them wholesale to CD without checking, and backed up this CD several times as I added to the collection. This means I now have pictures of undressed young boys scattered around on numerous disks. I doubt I could find all of them even if I tried. Am I a criminal? How can I prove that I did not take the pictures, or even that they are of my own kids?
In this, and several other instances, the law is definitely an ass, and so open to interpretation that I pity the poor defendants who get dragged in to cases that are taken to court to try to set precedent.
The 22 bit addressing extension on PDP11s was completely transparent to application programs. The address mapping was handled by the OS (as long as you weren't running early versions of RT/11, IIRC), and every application ran in a private address space that was either a 16 bit 64K combined address space, or one 64K Instruction space and one 64K Data space on separate I&D systems like the PDP11/70. The OS would control the segmentation registers to do the virtual-page-to-physical-page mapping, so the programs knew nothing about it.
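From memory, the translation worked roughly like the sketch below (simplified, with made-up register contents, and ignoring the access and length checks in the descriptor registers): the top three bits of a 16-bit virtual address picked one of eight Active Page Registers, each holding a physical base in 64-byte units.

```python
# Simplified, from-memory sketch of PDP-11 style address translation.
PAGE_SHIFT = 13        # 8 KB pages: top 3 bits of the VA select the page register
OFFSET_MASK = 0x1FFF   # low 13 bits are the offset within the page

def translate(va, par):
    """Map a 16-bit virtual address through the eight page address registers."""
    page = va >> PAGE_SHIFT
    return (par[page] << 6) + (va & OFFSET_MASK)   # PAR values are in 64-byte units

# Made-up register contents: this process's page 0 sits at physical 256 KB,
# just past the 18-bit limit that an unmapped Unibus transfer could reach.
par = [0x1000, 0, 0, 0, 0, 0, 0, 0]
print(oct(translate(0o123, par)))   # virtual 0o123 -> physical 0o1000123
```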
There was an additional complication on UNIBUS (rather than MASSBUS or QBUS) systems, where the DMA disk adapters (like RL11s) could only write into an 18 bit real address space managed by the Unibus map, limiting where the OS put its disk buffers. This meant that if you wanted mapped DMA I/O from a disk directly into your program's address space, you had to be very careful about asking the OS to set up your address space correctly.
I had a real oddity, a system called a SYSTIME 5000E, which was, as far as I know, the only system based on a PDP11/34 which had 22 bit addressing (they normally only had 18 bit), but it did not have separate I&D. All other systems, mainly from DEC themselves, were either 16 bit non-I&D, 18 bit non-I&D, 18 bit I&D, or 22 bit I&D. SYSTIME could do this, because the processor was made using TTL logic chips on 5 or more boards plugged into a backplane, and it was possible to buy the basic processors from DEC, and build your own memory management unit, disk controllers and backplane.
This gave me real problems when I was trying to get BL Unix V7 to use the full 2MB (we could not afford the other 2MB, it would have cost about £40K) on this system! I eventually worked out that I needed to turn on 22 bit addressing and 18 bit Unibus addressing, and had to use the Calgary disk buffer modifications, and fix the start address of the buffers at 64K physical, otherwise I would get an address wrap-around during a DMA disk transfer, and wipe out the I/O vectors in the first 256 bytes of real memory about 5 seconds after booting the system. Panic!!
It's rather sobering to think that I ran a 12 concurrent terminal multi-user system on a 16 bit machine with 2MB of real memory, 64MB of disk space (and that was quite a lot for this class of machine in the early '80s), with each program being limited to 64KB of memory, supporting a community of over 100 users. And more than this, it ran Ingres as well (albeit slowly). My phone has much more resource than this now!
I was going to enter a long, essay type reply to this when I realised that it wasn't worth it.
I was caught by the Oracle 8i to 9i problem (but on Winterhawk SP/2 nodes, not p690s), but I reckoned that this revolved around the lock kernel extension (read device driver) that Oracle added to AIX. There was an IBM documented incompatibility here.
Being a cynic, I always thought that this was just a ploy to make customers pay an upgrade charge to Oracle, rather than a real issue with AIX. It would not surprise me if it were perfectly possible for Oracle to have issued a patch for 8i, but this would have had little financial advantage for them.
IBM issued guidance that said that the kernel extension interface was changing, but this sounds very similar to the Adaptec driver issue mentioned. So no real difference from Solaris there.
The rest of the code may well have worked, but would have been completely unsupportable. IBM claimed in the AIX 5.1 readme that the only reason to recompile 64 bit code was to take advantage of the new features of the new release, which is not unreasonable.
1. What were you doing in 1990? I doubt that it was working in the UNIX field.
2. Sun did not have a logical volume manager at that time, unless you count Veritas, which was not their product. Sun Disk Suite was a charged-for add-on that was released in 1991. Nor did HP until sometime after AIX 3.1 was launched (the HP/UX logical volume manager was derived from the code IBM contributed to OSF, if I remember correctly, and was added to HP/UX 9.0 in about 1992).
3. HP/UX had SAM. Hmm, not good, and not a patch on Smit.
4. I'm sorry, Solaris definitely did not have dynamically loadable AND unloadable device drivers (I'm not talking linking here, I'm talking device drivers) at this time. I was having to sysgen systems to tell them what devices were included in the kernel. And I would ask when you think that Solaris was launched, bearing in mind that Solaris 1 was a packaging option including SunOS 4, and was not the label for the entire OS until Solaris 2, which was SVR4-based, in around 1992.
5. SunOS was a BSD-derived OS until Solaris 2, so of course it implemented BSD commands. It had a veneer of SystemV on some commands until the SVR4 initiative, which was not Sun's idea (remember this). The way that you effectively chose which type of commands and environment you used was poor. You may not think this was important, but as you pointed out, this could just be an opinion.
6. I'll give you Starfire. I had forgotten about that system. As I pointed out, however, the single processor power negated some of the SMP benefits that other vendors had. But I believe that the cost per TPC was not in Sun's favor on these systems. IBM's closest system at the time was probably the S70, but they were a bit later than Starfire. S80 and p690 (regatta) closed the performance gaps.
7. OK, find them. I don't remember any, because for system management, Solaris was, and IMHO still is, in the stone ages. I don't believe that Sun ever had a decent hardware error handling system, and patch management appears more primitive than on AIX. NFS and automount are better, granted, but even for a hardened CLI user, smit is a great fall-back when you can't remember how to do something you touch once in a blue moon. And dynamically loaded and unloaded device drivers allow you to fix a multitude of problems without needing a reboot. And all of the hot-swap hardware makes management easy.
8. Starfire again. How much were they? All IBM Power4 systems were partitionable except for the very smallest.
9. Yes? IBM's SMT on Power4 implemented two separately scheduled hardware threads on a single CPU with multiple instruction units, so more like SMT than hyperthreading. The two threads were not completely symmetrical, which is why I said sort-of.
10. Yes. And dynamic LPARing does work. Very well, in fact, as do hot-pluggable disks and adapters. Zones is quite different, and I will admit that WPARs were a direct copy of this functionality. I'm not 100% sure how well Zones splits the allocation of CPU and I/O bandwidth between the systems. LPAR overheads, mainly memory (but not CPU), are quite high, granted.
11. Starfire again. And again, how much did it cost? And what was available on the smaller systems?
12. Backward compatibility. WTF. I would bet that a 32 bit executable built on AIX 3.1 on RIOS hardware in 1990 has a greater than 70% chance of still running on Power7 running AIX 7 twenty years later. How much more backward compatibility do you want? And once you get to AIX 4.3 and AIX 5.1, it will be 95% or more. And IBM make it quite clear what features are likely to prevent an application from working. Many software vendors are still compiling their code on AIX 5.1 or AIX 5.2, knowing that their software will work on all later versions of AIX (an example of this is the AIX Toolbox for Linux Compatibility from IBM, which still proclaims to be compiled on AIX 5.1).
I have VERY RARELY (in fact I can hardly remember the last time) had to recompile a program when switching AIX releases unless I wanted to take advantage of new features of a new processor or compiler.
We both obviously have our own perception of what happened and when, but I still believe that the original statement in the article was wrong.
"AIX was always the laggard when it came to commercial-grade Unixes"
You need to qualify this. When AIX 3.1 on the RS/6000 was first launched back in 1990, it was streets ahead of any other commercial UNIX. It had a logical volume manager, an integrated system management utility (remember, this was a time when sysadm ruled the roost for most UNIXes), dynamically loadable device drivers, and was one of the first UNIXes that did a good job of merging SystemV with BSD flavours of commands and libraries (Sun's way of doing this was less transparent).
With the SP/2 in the mid-'90s, IBM moved AIX into high-performance computing (Deep Blue et al.).
In the late 90's, they were up there with 64 bit systems, and had a nearly seamless 32/64 bit strategy that meant that the kernel you booted did not have to match the binary you were running.
For absolutely years, AIX was the leader in the Gartner manageability surveys.
Power4 systems, available in the early 2000's implemented hardware partitioning. I'm not sure whether HP had this on the Superdome (or whatever they were called at the time), but I remember this being a real marketing differentiator at the time. Power4 also had SMT of a kind.
Power5 systems, available 2004/2005 implemented I/O virtualization, sub-cpu partitioning, and dynamic hardware allocation and de-allocation (this might have been possible on Power4, I can't remember exactly).
IBM were slow on SMP, the initial work being done by Bull with the G/J30s, but when you have systems with single CPUs running as fast as your competitors' SMP boxes, what was the hurry?
The only thing that I believe that Sun had was the containers, and to tell you the truth, I never worked at a customer where this caused a problem.
So tell me, what else were IBM lagging behind their competitors on?
I'm really not sure that you can count nuclear as a 'fossil fuel'. Uranium is dug up from the ground, yes, but it was never part of a living organism, and that is generally what a fossil is.
Remember, coal, oil and natural gas were all plants and marine creatures before they were buried in the ground.
But nuclear cannot, by its nature, be regarded as a renewable fuel. The amount is finite in/on the Earth, and I believe that this is the point you were trying to make. And I agree about nuclear being about the only low-carbon energy source, even if you include cost to build the stations.
Back in the days of Virgin.net, and with a 24x7 flat-rate dial-up service via modem, I noticed something worrying. My Smoothwall firewall was reporting a huge number (hundreds per minute - remember, it was dial-up) of intrusion attempts on port 135.
I sent Virgin Support a mail, pointing out that many of the addresses probing me were from within their own network, and I got a reply saying that it was a problem affecting all ISPs (it was MSBlast in the wild at the time), and that they were taking the issue very seriously, suggesting that I install a software firewall (ignoring my statement that I was using a well-regarded dedicated firewall).
And that was it. Nothing else happened at all. Eventually, the frequency of the attack dropped to a more manageable level, but not due to any obvious action on their part.
So I actually welcome people being warned that their systems may be compromised, although I do agree that in this day and age, a paper letter is probably too little, too late.
The c*nt word was used on air by a female artist (can't remember the name, can't be bothered to look it up) on Front Row, which airs on Radio 4 between 19:15 and 19:45 (well before the watershed), I guess about 5-6 years ago. It was used in relation to an art installation of a particularly sexual nature, IIRC, involving models sitting in provocative postures with no underwear, a la Basic Instinct.
Mark Lawson (I think it was) made a hurried apology, together with a request not to use such language on air to the interviewee. Again, IIRC, the terms vagina and vulva were also used several times, but they were not censured. I think it was more shocking to Mark because it was a woman who said it.
Laugh, I could have crashed the car!
Being pedantic, are we talking mean, mode or median? These are all 'averages', but have significant differences. In particular, if you can find a job at 12K, and one at 62K, then the median would be 37K, regardless of the actual distribution. Consider the following set of numbers:
10 numbers, totalling 30.
Mode is 2, Mean is 3, Median is 5.5. I would point out that there is one hell of a difference between the Mode and the Median.
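To make the gap between the three 'averages' concrete, here is a made-up set of ten salaries (in £K, chosen purely for illustration, not the figures above) run through Python's statistics module:

```python
import statistics as st

salaries = [15, 15, 15, 18, 20, 22, 25, 30, 60, 150]   # hypothetical, in £K

print(st.mean(salaries))     # 37   - one well-paid outlier drags this up
print(st.median(salaries))   # 21.0 - what the person in the middle actually earns
print(st.mode(salaries))     # 15   - the most common salary
```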
I do not have the stats, but bearing in mind how many PC first line support jobs appear to be at the 15-20K level, and how prevalent windows systems are, I've often wondered about the source of these £37K average figures.
There is one classic '50's or '60's news clip of a Union representative who says into camera something like "We will not stop this action until all of our members are being paid at least the average wage", which of course, if applied to all workers (not just his union members, admittedly), would mean that everyone would be paid the same. Still, Maths education must be better than that nowadays, mustn't it?
It's a Maths teacher, obviously.
When trying to fix a problem with a particularly badly sh*gged filesystem late at night, I had the corporate obscenity filters block my mails to and from a vendor support centre because I included a phrase like "I have fsck'd the filesystems, and the problem persists" (btw. I was in phone contact with them as well, but it's difficult to dictate several K of diagnostic data over the phone!)
I had to wait until the following day for the mails to be released when a real person could check the content. Good thing we worked out why the mails were not getting through.
I had a moan at the people running the mail filter, who said that, because it was a commonly used euphemism, especially in spam emails, it had been added to their blocklist. I then checked over a gig of archived spam from my mailbox gathered over several years (don't ask me why I had kept it, I don't know, but I hadn't run out of disk space at that time) and found precisely two uses in many thousands of emails. Not such common use, then.
..to providing my bank details on my tax return, not even knowing whether there is a rebate or not. I know it's a trivial issue, because I'm sure that the banks would roll over and provide bank account details to a suitable request from HM Revenue and Customs, but it just feels wrong.
Let them send me a cheque if I am owed money. Don't know what will happen when cheques are withdrawn, but I'll face that one when I come across it.
You're talking chain or band line printers here. I very much doubt that a dot-matrix, even a heavy duty one like a Printronix, would be able to do more than three part, chemical transfer paper.
When I was working with mainframe band printers, we were using multi-part fanfold stationery with interleaved carbon paper (not chemical transfer paper). There was a machine called a splitter, which would split the copies out and wind the carbon paper up for disposal, while leaving the two split copies neatly folded (at least, if the operator threaded it correctly). For three and more part stationery, it had to be put through further times to split each copy off. Interestingly enough, each carbon sheet had a completely legible copy of what was on the page. We also had authorized cheques with a second carbon copy, but this was for audit purposes.
I was once told that the hood on these fast printers was more than just acoustic protection, because if the band or chain broke, it was moving so fast that it would damage the hood as it flew off. Not something I would like to hit me.
Where's the old fart icon?
There was the full quality setting that advanced a complete letter at a time, and there was the draft quality setting, that actually only moved a fraction of a character position. This meant that you might get gaps in the later letters, where there was an overlap, but your ribbons lasted many times longer. This would make it much more difficult (though not impossible) to read from the ribbon.
I think that they were different ribbons, but it may have been a lever setting in the printer. I don't think it was a software setting.
These were actually thermal transfer printers rather than impact printers. This is how they managed to be so quiet. Normal whirring from moving the print head and paper, but printing was silent.
Mine only advanced the ribbon for each letter printed (the ribbon was mounted on the print head), not on a per-line basis although mine was a Quietwriter III or IV and could have been different from Nuke's, so was not quite as wasteful as he suggested.
but legally, there is a difference between two copies printed at the same time using multi-part stationery, and two copies printed one-after-another. There is no guarantee that the two serially printed sheets are identical, because they could just be one print after another, with the second one slightly different. How would you know unless you minutely compared them?
And yes, I know that the lower copies in a multi-part *could* have been pre-printed, but that is why they come bound together with tear-off sprockets, so that you can tell whether the lower copy has been tampered with.
Plasmas will become increasingly irrelevant as LCD and OLED technologies mature, especially with the LED backlights that are becoming available.
I predict that plasma TVs will be banned because they are too energy inefficient once all filament lightbulbs have been eliminated. Especially if the manufacturers can lobby governments. Soon we will have TVs being replaced every year to meet government carbon emissions targets.
Joke. (I hope)
You're not using ham or SW radio, and I guess that you don't listen to the air control bands either. Have you checked MW or LW radio reception, which I always found prone to interference? (You may still find these on steam-powered radios, but a lot of radios don't even receive them nowadays.)
There's LOTS of the EM spectrum in the radio bands, and radio, TV, Bluetooth and WiFi only use a very small fraction. Google (sorry, the URL is too long) for "Fuk_frequency_allocations_chart.pdf" (warning: PDF), and you will find a very interesting wall-chart of the spectrum use in the UK. Try to find the bands you use, and compare them to the whole.
I must admit that most of this thread has followed the normal Windows/UNIX path, leading to name calling.
It is interesting to be reminded of another OS that in its own way has shaped what we have today.
VAX/VMS is an interesting OS, and in many ways my second favorite OS to UNIX. What you have said of VMS is quite true, but some of the assertions you have made about UNIX are wrong.
As someone who learned UNIX back in the late '70s and then took a spell of sysadmin'ing RSX/11M and eventually some VAX/VMS, I agree that the batch and spooling systems on VMS were much better, because DEC had Tops 10 and Tops 20 as a good model to work from. But RSX/11M's batch system was not as good as the UNIX at/batch commands, though that is because RSX/11 was not really a general purpose OS. In the UK, RSTS/E was the main commercial PDP/11 OS, and very little of that made it into VMS. If you remember that far back, you will find that VMS version 1 was really just a 32 bit port of RSX/11M, complete with non-hierarchical file system, and limited Files/11 support.
As for backup and restore, I'm not so sure that BRU and Backup/Restore were hugely better than Fbackup, Frestore, Finc and Frec on generic AT&T UNIX systems, but these fell by the wayside.
It is quite clear that Files/11 (which was a layered product on RSX/11) and the VAX filesystem (I know it had a name, but I can't remember it at the moment) suited commercial use for VAX, including file and record locking, but that does not mean that there was nothing similar in UNIX. UNIX version 7 included a thing called the "Multiplexed file system". This allowed you to add all sorts of functionality to the standard file system. But the standard byte-addressed file interface allowed you to implement pretty much any functionality anyway, including arbitrary sized record structures, and there were add-ons like C-ISAM, which was available as a library on most UNIX variants (OK, third-party software), and which was for a time a near industry standard for UNIX.
AT&T's UNIX from System 3 also had mandatory file and region locking for files in a filesystem. This was not carried into the BSD variants as far as I remember, until the SVR4 merged system that provided cross-fertilization between major UNIX variants.
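For anyone who has only met whole-file flock()-style locks, the region-locking interface that descends from those System III/V fcntl/lockf calls is still there in POSIX. A minimal sketch in Python (advisory locking only; the old SysV mandatory behaviour needed the file's setgid bit set with group-execute cleared, and support for that varies by system; the filename is just an example):

```python
import fcntl, os

fd = os.open("ledger.dat", os.O_RDWR | os.O_CREAT)   # hypothetical data file

# Take an exclusive lock on a 512-byte region starting at offset 1024,
# rather than locking the whole file.
fcntl.lockf(fd, fcntl.LOCK_EX, 512, 1024)

# ... read, update and rewrite just that record ...

fcntl.lockf(fd, fcntl.LOCK_UN, 512, 1024)             # release the region
os.close(fd)
```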
It is interesting that people also overlook the RFS filesystem that came as part of SVR3.2 and later. This was a highly stateful distributed filesystem that implemented 100% of UNIX filesystem semantics, including the mandatory locking protocols. I'm fairly certain that if you came across UNIX from a BSD/Sun route, you almost certainly never came across this advanced filesystem, which again fell by the wayside.
It is not directly comparable with VAX Cluster, which was a groundbreaking way of making your environment more than the sum of the machines in it, but this was an add-on to VMS, and if I remember correctly, quite expensive for commercial use.
VMS was good. Its DCL CLI was very helpful to novice users, utilities like EVE and TPU were very good for university work, and there was a wide variety of vendor-provided applications. It had the demand paging system that other vendors aspired to. But DCL had its own limits. If I remember correctly, in order to get the argument processing working for your application, you had to produce a prototype file so DCL could parse the arguments for you, whereas on UNIX the application simply manages its own arguments.
But I would contend that although it was very suitable for many types of work, ultimately it was not as flexible or as widely deployed as UNIX. Although you could say that the WorldBox MicroVax II was a microprocessor-based system (I was at the UK site of the world launch event in Harrogate), that there were personal VAX Stations, and that there were some very large VAX systems, UNIX appeared on everything from desktop PCs (like the AT&T 3B1, and even PC/ATs if you count Xenix/286 as a UNIX) through to the largest mainframes of the time from the likes of IBM and Amdahl. And I haven't seen an HPC cluster running VMS, as I have with UNICOS (Cray's UNIX) and AIX (IBM's UNIX variant).
Don't get me wrong. I'm not criticizing VMS. As I said, it is one of my favorite OS's. But it is like comparing apples with pears. They are similar, but there are significant differences which mean some people like apples, and some pears. I would say that there is probably still a place for VMS, but it has become niche, in the same way that genetic UNIX is going. But UNIX has a direct successor that will keep the line going in Linux.
Maybe you ought to be pressing HP to license VMS with an open license. I think that that is the only way to stop it dying a slow and lingering death.
One has to ask whether your son actually knows how to write grammatically correct and intelligible English, because unless he knows this, trying to teach him to use Word is pointless.
This would be the domain of the English classes, not ICT.
All those many years ago, I remember in English having to read, comprehend, and write relevant comments on a series of articles, which taught me how to use the language. Even though I was not very good at it, it laid the foundation for all of the wordy subjects (History, Geography) as well as a basis for reports on Science experiments.
I think that teaching basic computer use for everybody is a good thing, but there should be a differentiation between this, and teaching Computing as an engineering or technology discipline. This way you would be able to separate the mundane 'using a word processor, web browser and multimedia apps' from the interesting 'what is a CPU, how do programs run, how do you write them, and what is involved in networking'. If you did this, then I believe that the kids with a genuine interest could separate the boring and interesting stuff, hopefully keeping them engaged.
My youngest kids hate(d) the way that ICT is/was taught, but do actually have a genuine interest in how their computers connect together, and what the basic components are. Indeed, when I built a system from scratch last Christmas, I had a willing audience for almost all of the work I did. Virtually nothing involved with putting the system together and installing the OS was familiar to them, even though they both have studied or are studying ICT at GCSE level. And they are fascinated when I can write a quick program to do something specific, when they cannot see how a spreadsheet, almost the only data tool taught to them, could be applied to a problem.
I must agree that you should have properly trained teachers, at least for GCSE level and up, because having ICT as a second subject will never give the teacher enough background to do more than follow the pre-prepared courses from the syllabus.
I admit to being a little partisan about this, because 25 years ago I taught up to degree level at a UK Polytechnic for a while, and I could see the way that business computing was going at the time.
could well be better. According to another story, a solar plasma aurora storm is due to hit Earth, starting "early in the day on August 4th". I know this is Wednesday, but I believe that the storm will last a while.
If I were in the ISS, and not protected by the bulk of the Earth's magnetic field, I would want to find the most shielded part of the station and chill out for a few days, rather than going on a space walk. I guess that the boffins will have taken this into account.
Anybody got any idea whether the ISS is in a low enough orbit to be mostly protected, or is the space walk scheduled for when it is in the Earth's shadow? (I know some of the remains of the plasma will reach the far side, but it will be less.)