Re: Long-term deep storage
I've noticed this. My old EEEPC 701, which is not used much now, has needed to be reinstalled each time I've left it a few months without being powered on.
Split your WiFi into trusted and untrusted domains.
Strictly control what can connect to the trusted domain by key or strict access control.
Let the untrusted one be a free for all, with a disclaimer that using it is at the user's own risk.
If there is a requirement for the untrusted devices to connect to trusted services, treat all of the connections as if they were from the Internet proper, and put the correct firewall and barrier controls in place to protect your core services.
Use additional DMZs if that allows you to contain access.
There is absolutely no need to allow BYO devices to connect to your core networks for social media access. If you want them to use their devices for work, you may need to think a bit harder, but for just social media access, it's not that difficult.
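A minimal sketch of the "treat untrusted connections as if they were from the Internet" idea, assuming an nftables firewall where guest0, lan0 and wan0 are the guest, trusted and upstream interfaces, and the one published service is a proxy on 192.168.1.10 (all names and addresses are illustrative):

```
# Illustrative nftables ruleset: guests reach the Internet freely,
# but can only reach one explicitly published trusted service.
table inet guest_filter {
    chain forward {
        type filter hook forward priority 0; policy drop;
        ct state established,related accept
        # guest -> Internet: free for all
        iifname "guest0" oifname "wan0" accept
        # guest -> trusted: only the published proxy, nothing else
        iifname "guest0" oifname "lan0" ip daddr 192.168.1.10 tcp dport 3128 accept
        # trusted -> anywhere
        iifname "lan0" accept
    }
}
```

With policy drop on the forward chain, anything not explicitly listed (including guest-to-trusted traffic) is discarded, which is exactly the "firewall and barrier controls" posture described above.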
... I have often said that if someone is irreplaceable, you should fire them!
Too often people become irreplaceable by hoarding and not sharing knowledge, and such people are never good for an organisation.
By extension, everybody should be replaceable.
If you are not looking at developing the films yourself, you could use C-41 process black and white film. This can be processed by any film processor as it uses the same equipment as colour film.
I believe that both Ilford and Fuji still produce this type of film, and you may still be able to find some Kodak film within its use-by date.
I don't count myself as a photography enthusiast, but I have taken pictures over the years that have generated a wow reaction from people.
I taught myself film photography from books and experience while at university, using a tank of a second hand Praktica LTL3 completely manual SLR camera with an f2.8 Carl Zeiss Tessar lens (an optically good, if rather restrictive lens) and stop-down metering.
But my photos were always the ones people wanted to see at the breakfast table when they came back from the developers.
What this hair-shirt experience taught me was that preparation was important, and pre-focus for action shots, setting the aperture and exposure in advance, and, above all, choosing the correct shooting location is essential. All of which are skills that can and should be learned. Another thing was to leave the camera cocked at a medium aperture and mid-range focus (for reasonable depth of field) so as to make an attempt at those 'just happening' shots, and rely on the developing process to correct the exposure. And if you have time and spare film, bracket the exposure for those important shots you don't want to miss.
I stopped spending significant time taking pictures, and am now really just a casual photographer.
When I got my first digital bridge camera, I was appalled by just how difficult it was to actually control the process. Everything was automatic, and the overrides were so difficult to work using the few buttons on the camera that it was a joke. I now possess a slightly more serious Fuji bridge camera with a mid-zoom lens. But I chose this one because I could control the focus and zoom by hand (which does wonders for preserving the battery life), and while I don't fully understand how the synthetic aperture works, I can use it. But what I first learned using a feature-free camera is still useful, even if most of the time I now shoot on full automatic.
I pity people learning photography now, because they just don't get the opportunity to learn the necessary skills properly. One of my kids studied photography a few years back as part of her foundation degree, and I found it highly amusing that they were told to go and buy a cheap second hand film camera with full manual over-ride for use on the course, so at least the colleges still understand.
What on earth does Simon have against SSA disks? I found them easy to deploy, quick for its time, quite dense (it was the first disk subsystem I knew that used both the front and back of the drawer) and easy to maintain.
OK, it tied you in to IBM and their disks a bit, but I did not find them too bad at the time, and there was never a quibble replacing them while under maintenance.
I don't claim to be an expert in Intel x86 architecture, but I believe that some of the more specific features may have led to additional instructions being added to the ISA. That is certainly the case in other processor families I have used.
In order for code that uses these instructions to run on processors that do not implement the instructions, it is necessary to be able to trap the 'illegal instruction' interrupt, and do something appropriate.
If you did not trap the illegal instruction, the OS would at best kill the process, or at worst, crash the whole system.
In the case of the MicroVAX and early PowerPC processors, you would call code that emulated (slowly) the missing instruction, which had to be part of either the OS, or the runtime support for the application. I've not heard of that happening in the Intel/Windows world, although I'm not discounting that it may be there.
In the S/370 world, instead of emulation code, it was possible to trap such things in alterable microcode, this being the method that IBM used to 'add' additional instructions to the S/370 ISA for specific purposes to allow application speed-ups.
You make a very good point, but you ignore that compiling for a particular processor, using all of the features of that processor, breaks the "compile once, run anywhere" ubiquity of the Intel x86 and compatible processors.
If this class action lawsuit is providing relief for home users, these are people who will buy a system and install code that is compiled to a common subset of instructions for the processors it is expected to run on. They are certainly not going to re-compile the applications they buy, let alone the operating system and utilities (you have to admit that dominant players providing x86 operating systems do not make it easy for a user to recompile the code even if they wanted to).
Imagine if when buying a program, you had to check not only which versions of Windows it would run on, but which processor (I know, some games did, but they are a special case).
I also know that it is perfectly possible for an application or OS provider to provide smart installers that identify the processor at install time, and install the correctly compiled version for the processor. Or even put conditional code in that detects at run time which libraries to bind, or which path through the code to select.
Each of those last alternatives leads to significant bloat in either the install media, or even worse, the disk and memory footprint of the installed code. And that is not to mention the support nightmare of having several different code paths to do the same thing on different processors.
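The install-time processor check can be as simple as inspecting the feature flags the kernel advertises. A hypothetical Linux installer snippet (the package names are invented for illustration):

```shell
#!/bin/sh
# Pick which build of a package to install based on CPU features.
# On Linux, /proc/cpuinfo lists the flags the processor supports.
if grep -qw sse2 /proc/cpuinfo 2>/dev/null; then
    variant="app-sse2"      # build compiled with the newer instructions
else
    variant="app-i386"      # baseline build for older processors
fi
echo "selected package: $variant"
```

The runtime-dispatch alternative works the same way, except the decision happens every time the program starts rather than once at install.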
No, the shrink-wrap application providers will write their code for a common subset of features, and that is what the Pentium 4 was weak at. The same binaries often ran slower on Pentium 4 than on Pentium III processors at the same clock speed (and when launched, the Pentium 4s did not run at the high clock speeds they later achieved). And later processors such as the Pentium M and Core architecture processors, which used more of the Pentium III architecture with the 'good' bits of the Pentium 4 grafted on, show that Intel eventually got the message that Pentium 4 was a dead end. I'm surprised they contested this, although I guess that this case is all about benchmark deception rather than the ultimate speed.
I sat through the whole thing, thinking "Something has got to happen soon".
Can't do a hand, how about a thumb.
The follow on project to LOHAN has to be an amateur resupply rocket to the ISS.
I'm sure Lester and the other boffins will be up for it!
And RT has not developed a serious anti-US agenda since the situation in Crimea and the Ukraine started, has it!
When Russia Today started, I was surprised by how apparently neutral it was. I tuned in a few days ago and was (actually not) surprised how that has changed in the last few months, with them predicting the demise of the dollar as a world currency (suggesting Bitcoin as an alternative, of all things), and the rise of a fascist police state in the US. It almost seemed that they were listening to anybody spouting a conspiratorial line. Almost like "Controversial TV" used to be, although that did carry drivel by David Icke as well.
I wonder whether Mr Putin has been applying pressure on RT. It must be nice to have a personal mouthpiece broadcasting to the world.
Remind me. How many Windows systems are there on the Top 500 Supercomputer list?
I assume you are either joking or a troll. I cannot believe you are really serious.
I don't think Cray supply anything other than Linux on their hardware.
Most local radio stations do not use the Met Office forecast. I believe that they mostly use the "World Weather Information Service" through Sky News, which is a data aggregator rather than a weather bureau in its own right.
Microsoft 'bought' Insignia Solutions (or at least took out a pretty much exclusive license) for their SoftPC technology that allowed 'foreign' binaries to run on a particular architecture, a feature called Windows-on-Windows (WOW).
This meant that you could have had shrink-wrap Windows applications that should run on all Windows platforms. I doubt that the technology was maintained when Windows became x86 only.
There were systems you could have bought that ran Windows NT on Alpha.
But it is clear that the majority of support for them came direct from Digital, not MS.
I did see an IBM Power system (I think it was a prototype model 40P) running Windows NT 3.51.
This is not about sharing data for patient care. That should already be being done under a different initiative. Care.data is about sharing data with non-clinicians who perform fundamental, mainly statistical research to correlate and synthesize new conclusions from data that is already held. That should be a good thing.
At least in theory.
The problem here is that the organisations allowed to apply for access to the data go far beyond the NHS, and indeed beyond pure medical research; I believe that insurance companies (supposedly for actuarial reasons) and drug companies (probably to assess whether a condition was worth developing a drug for) were the sort of commercial organisations that were applying for access.
Besides thumbs up and down counts, this type of comment could do with a groan count!
...I run an additional hardware firewall separate from my ADSL router.
It's long been an axiom of any 'proper' security that you have multiple layers, each provided by a different vendor.
Even if each of them may have their own vulnerability, it seriously deters casual hackers if once they've breached one line of defence, there's a new and different one to knock down.
Some may see it as a challenge, but most will just give up.
Unfortunately, laptops in particular vary quite a lot in the chipsets that are included. There is a lot of tuning required to get Linux stable when suspending and resuming.
There is a whole subsystem called pm-utils (ironically modelled on sysv init) which allows you to tweak the suspend and resume system for the particular model of laptop. I tend to run IBM/Lenovo Thinkpads, for which there are a significant number of profiles which work quite well.
Where I've had problems is with the models with Radeon Mobility graphics adapters when KMS is enabled, and I've also had a problem with the sample rate of pulseaudio not getting restored properly.
But with KMS turned off (Ubuntu releases between 8.04 and 12.04), if you can ignore the audio issues, suspend works quite well. 14.04 appears to have fixed the sound sampling issue.
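For anyone who hasn't looked inside pm-utils, a tweak is just a hook script dropped into /etc/pm/sleep.d; each hook is run with the transition as its first argument. A hypothetical hook sketch (the file name and the audio fix-up are illustrative, not a known-good recipe):

```shell
#!/bin/sh
# Hypothetical /etc/pm/sleep.d/99fixaudio hook. pm-utils calls hooks
# with "suspend"/"hibernate" before sleeping, "resume"/"thaw" after.
handle_transition() {
    case "$1" in
        suspend|hibernate)
            echo "preparing to sleep" ;;   # e.g. unload a flaky module
        resume|thaw)
            echo "cleaning up after resume" ;;  # e.g. kick pulseaudio
    esac
}

handle_transition "${1:-resume}"   # demo: defaults to resume when run directly
```

Hooks run in lexical order, so the 99 prefix pushes this one to the end, after the stock profiles have done their work.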
Hibernate is more problematic, as on Thinkpads it is necessary to have a FAT primary partition on the hard disk to contain the hibernate file. Before I upgraded my Windows partition to Win2K, it used to work fine, but all those years ago, when I upgraded to NTFS, I found that the hibernate code in the Phoenix BIOS could not handle the newly formatted NTFS partition. As the 'old' boot record format cannot have more than 4 primary partitions (mine are taken by WinXP now, the current Ubuntu, the last/next Ubuntu and an extended partition containing the rest), I don't have a spare primary partition just for a FAT filesystem.
And there is your problem.
You really know that it's not the right approach when you find your first system that either does not complete the boot process, or even worse, sometimes does but sometimes does not.
You then have this impenetrable black hole to try and debug, which may "appear to be well-documented", but does not tell you what is happening.
Once you've seen it, the "huge pile of little shell scripts" is easy in comparison. The naming convention is only funny if you don't understand how the shell performs globbing.
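The naming convention exists purely to exploit the shell's lexical glob ordering, which a throwaway directory demonstrates in a few lines:

```shell
#!/bin/sh
# The rc runner simply globs S* in the runlevel directory; the shell
# expands the pattern in collating order, so the two-digit prefix
# dictates start order, and K* scripts are skipped entirely.
dir=$(mktemp -d)
touch "$dir/K20nfs" "$dir/S10network" "$dir/S20syslog" "$dir/S99local"
for script in "$dir"/S*; do
    echo "start: $(basename "$script")"
done
# prints: start: S10network, start: S20syslog, start: S99local
rm -rf "$dir"
```

Once you see that the "funny names" are just sort keys for a glob, the whole scheme stops looking arbitrary.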
Bad Wolf was introduced in a very subtle way.
It was not rammed down our throats, as in "Here's the arc you're looking for". It was more "Hang on a second, didn't we see something like that a few weeks back?" And it sort of made sense, with Rose, while she controlled the power of the Tardis, touching all of her timeline with the Doctor to leave some clues as to what had to happen.
I wonder why she didn't see any evidence of Clara though. Oh, of course, no multi-series arc (Babylon 5, why could you not have had more influence on other series?).
Yes. Probably a Scientific but could have been a Programmable. Need to check the stills. And it still worked! The display was clearly visible at one point.
Hope they didn't ruin it.
Hmm. The BARB figures are interesting, and it horrifies me to see just how skewed TV viewing in the UK actually is towards a few high-profile programmes like The Great British Bake Off, The X Factor, Downton Abbey etc.
But it does raise the question of why something like 40% of households (based on 10 million Sky subscribers and 25 million households in the UK, admittedly a calculation with very broad statistical flaws) decide to spend money with Sky. And that does not include Virgin Media customers.
There must be something pretty compelling in the 2% of viewing time for Pay channels to justify this expense. Obviously, some of that is going to be sport, and maybe the relatively easy to access catch-up and on-demand services, together with the bundled boxes could be helping maintain their customer base. Of course, even Sky customers will watch free-to-air services some of the time. Like phones, possibly Sky customers don't like the up-front cost of buying the box.
I have both freeview hard disk recorders and streaming services available to me on TVs, as well as Sky, and also have been through two generations of USB freeview stick and played around with other on-line TV services, and I still find that the go-to service in our household is Sky. Maybe we're trying to justify spending the money, but as I said although it is quite expensive, I still regard it as reasonable value for money just for the content I can't (legally) get anywhere else.
Interestingly enough, whenever my wife and I have 'spirited conversations' about what we spend money on, she always brings up the Sky subscription as an unnecessary expense (which is significantly less than she spends on cigarettes in a month), and I have to remind her that she is the one to be found most frequently watching the pay channels! In fact, I would almost not miss it, because I get so little time to watch the slightly less mainstream pay TV channels that I find interesting (documentaries, arts, Syfy, but also the movie channels).
How are you defining "free content"?
If it's content that is available on other free-to-view services (Freeview or Freesat), then I would dispute your figure of 90%. I have well over 200 TV channels available on Sky, and only about 30 available on Freeview and approx 160 on Freesat. All have at least some +1 channels, so not all of those channels contain unique content.
If you are saying that it is available through the Sky infrastructure without having a Sky subscription, then I may be in slightly closer agreement with you, but try removing your Sky subscription card and seeing how many channels you can no longer get.
For my ~£60 a month for a Sky HD package, in addition to the Freeview channels, I get Sky 1, Sky Atlantic, Sky Living, all of which contain content not available anywhere else in the UK, and I also get SyFy, Sky Arts, a host of documentary channels, access to 'golden' channels like Watch, a moderate selection of movie channels (although not as good as they were) and also a whole host of on-demand content which I would not pay any extra for. On top of that, they gave me the box(es) for free (they replaced my original SkyHD box without cost when they rolled out the on-demand services).
I don't agree with the way that they spread the desirable content across as many packages as they can to maximise the number of packages you need to buy, and I certainly don't agree with the gouging of their customers with regard to sports channels, but I don't think it is such bad value.
If they still existed (and this is mostly the reason why they don't), I certainly would no longer rent any DVDs from places like Blockbuster, and I've noticed that the number of DVDs I buy has dropped significantly since Sky installed their on-demand service. So in recent years, the amount of money I've spent on content has actually declined as Sky have brought on their services. This seems good to me!
I am reluctant to become a triple-pay customer, because I don't actually like Sky's business model much, but I don't really object to getting TV from them.
My recollection is that xdm actually could switch UID when it ran on a system. I believe that it was a configurable option, and you could specify an X server restart (partly to change the UID, but also to set the server to a known state with no client programs left over from the last user) during the login process on a device that allowed it. Obviously not on an X terminal, though.
It's later graphical login processes like gdm and lightdm that changed this.
Unfortunately I no longer have anything old enough running to confirm this.
Whilst shellshock is/was a really worrying problem, I don't think that any serious web site will actually run any bash CGI scripts.
Yes, I know that the problem will persist across other binaries as long as they preserve the environment variables, whenever a bash is started as a child, and that the system() call will almost certainly start a shell, so there is still danger there, but I would be startled if Google, Amazon et al. were ever vulnerable. The patching they did was mainly to be absolutely sure.
SOHO or SMB web sites may be vulnerable, of course, so I am not downgrading the risk, but I think that your implied assertion that all Linux web servers will by default be vulnerable is overstating the problem.
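For anyone who wants to assess their own exposure, the widely circulated one-liner exercises the bug directly; a patched bash executes only the trailing echo:

```shell
# On a vulnerable bash, the code smuggled after the exported function
# definition runs when the child shell starts, printing "vulnerable"
# before "completed". A patched bash prints only "completed".
env x='() { :;}; echo vulnerable' bash -c 'echo completed'
```

This checks the interactive risk only; whether any of it is remotely reachable still depends on something (CGI, system(), etc.) passing attacker-controlled environment variables to a bash child.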
Actually, although a small part of the video driver system is in the kernel, the majority of the driver runs as plug-in modules to the X server process (not kernel modules); the X server is a user-land process, not part of the kernel. This makes graphics drivers different from, say, a driver for a disk adapter.
The bits in the kernel are to do with allowing the X server process to access the video hardware at a register/DMA level, and are pretty generic glue code. All of the smarts are in the X server, and that is the code that is most likely to have a problem. This means that it is unlikely that you can crash a Linux box with a graphics driver, although you may make it difficult to use on the directly attached monitor (other access methods are available!)
In fact, if you try hard, you don't even have to run the X server as root. Generally speaking, modern distributions do run the X server as root because it is started up before the graphical login starts, and that needs X, but if you disable the graphical login, log in as an ordinary user using a text-based authentication method, and then run up an X server (using something like startx), it works just fine.
I would actually like the graphical login methods to switch away from root during the login process. It can be done, but is likely to introduce a visible glitch as the X server restarts during the login process. But as we will end up with Wayland or Mir in the near future, changing the way that X11 is used seems a bit pointless.
There were serial terminals that provided two or more serial ports allowing them to be connected to two different systems (or the same system twice!).
Ones I came across included the HP2392, Falco 5220, and I believe that Wyse and Esprit also had models that did the same.
But none of the normal terminals that I came across allowed direct cut-and-paste between different sessions, although I could not say that there were none that did.
I should note that the AT&T BLIT, running on UNIX with layers backing it up allowed virtual terminals on the same machine using a RS232 or Starlan serial connection (there's a video copyrighted 1982 on YouTube), and did come with a mouse! AT&T also had a session manager called screen that allowed a process on a UNIX system to masquerade as several terminals, maintaining screen state, and allowed you to switch between them. This worked on any terminal with sufficient curses support.
"the command prompt came from a time when mouse-control was not really there"
No. It came from a time before mice were an option. I know that the mouse was first demonstrated to the world in 1968, but they did not appear on general purpose computers until the Xerox Star, AT&T Blit, Sun 1, and Apple Lisa, all in the early 1980s. The first PC mouse appeared around 1983.
'ordinary' terminals with CLI interfaces go back much further than that!
It pre-dates CP/M as well.
I first came across Control-C as interrupt in the default settings for the UNIX Version 7 TTY driver (although it was 'soft', and could be redefined with the stty command), which would have been around 1979. Prior to that, the standard interrupt was communications Break. But I don't want to talk about the V6 TTY driver. That ugliness is best consigned to history.
I'm fairly certain that Digital Equipment Corporation (DEC) used Control-C for interrupt in RSX and RSTS as well.
Control-Z to suspend was a feature that came from BSD UNIX that introduced TTY job control sessions that allowed you to have backgrounded programs that you could switch between with the fg and bg commands.
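Both behaviours survive on any modern Linux or BSD box; an illustrative session (output abbreviated, exact wording varies between systems):

```
$ stty -a | head -2
speed 38400 baud; rows 24; columns 80; line = 0;
intr = ^C; quit = ^\; erase = ^?; kill = ^U; eof = ^D; eol = <undef>;
$ stty intr '^X'          # the V7-style 'soft' binding can still be changed
$ sleep 100
^Z
[1]+  Stopped                 sleep 100
$ bg                      # BSD job control: carry on in the background...
[1]+ sleep 100 &
$ fg                      # ...or bring it back to the foreground
sleep 100
```

The interrupt key is still just a TTY driver setting, exactly as it was in 1979, and Ctrl-Z, fg and bg are the BSD job control layer on top of it.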
Apart from playing the numbers game, just why is 64bit a good idea in a tablet? It's not like it's got many gigabytes of RAM to manage (3GB can be managed in a 32 bit address space), or that it needs to handle large integer or high-precision floating-point numbers.
Just because Apple thinks that a 64 bit processor is a good idea in a hand-held device does not mean that it is a good idea at the moment.
The whole concept of wide area networking security was a moot point when it came to early email systems. UUCP was the best that there was (UUCP is the UNIX to UNIX Copy Program, not just a mail system, although one of the most common uses was mail, and another was remote printing).
Everybody knew it was not secure, because it was a store-and-forward scheme, such that any of the intermediate systems had access to the content. That was just the way it worked, and everybody knew and accepted it.
If you look at basic UUCP, it ran over serial communication lines, often over analogue telephone lines using modems. The concept of it being secure was never even thought about. It was easy to tap a telephone line and feed the data captured through a modem, so it was obvious that there was no security. If you wanted to send something securely, you encrypted it and then uuencoded the result.
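The encrypt-then-armour step has a direct modern analogue. Since crypt(1) and uuencode are rarely installed these days, here is the same idea sketched with openssl and base64 standing in for them (the passphrase is obviously illustrative, and -pbkdf2 assumes OpenSSL 1.1.1 or later):

```shell
#!/bin/sh
# Encrypt, then ASCII-armour, so the result can cross a text-only
# link (as uuencoded, crypt-ed mail once did over UUCP)...
msg='confidential report'
armoured=$(printf '%s\n' "$msg" |
    openssl enc -aes-256-cbc -pbkdf2 -pass pass:examplekey | base64)

# ...and the receiving end reverses both steps.
printf '%s\n' "$armoured" | base64 -d |
    openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:examplekey
# prints: confidential report
```

As with the original scheme, this protects only the content in transit; intermediate hosts still see that a message passed through, and a captured passphrase unlocks everything.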
There was an encrypted UUCP system which used the UNIX crypt technology. I cannot remember the exact details or what it was called, but it was in the AT&T BNU, but it effectively meant that the data was not transmitted in the clear. But it was still vulnerable on the intermediate host systems.
Saying that it should have been secure is like saying early cars should have been built with roofs, windows and locks. But they weren't.
Anyway. The TELEX system was about as insecure as you can imagine, so that is not a particularly good example.
BTW, all of the people saying that X.400 should have been the default mail system should remember that SMTP was defined in RFC 821 several years before the initial recommendations for X.400, and that the X.400 recommendations expected to be running over X.25 transport systems, so were a bit weak on the security side as well.
Look again at what was being celebrated. It is the best historical artefact, so unfortunately, it is limited to what actually still exists.
I agree that HMS Dreadnought was clearly a revolutionary ship, and rendered the rest of the world's battleships obsolete almost overnight, but Dreadnought herself was rapidly overtaken by subsequent ships that introduced the 13.5" and then the 15" main gun, fuel oil in place of coal, superheated steam boilers and improved protection. Notable British Dreadnought follow-on ships included the Iron Duke class and then the Queen Elizabeth class, which was IMHO probably the peak of the British Super Dreadnoughts. Subsequent ships moved away from the classic Dreadnought layout, and culminated in the fast battleship that was built by various navies to counter ships like KMS Bismarck and IJN Yamato.
HMS Dreadnought herself only had a life of around 13 years, which is a very short time for a capital ship, and managed to miss Jutland, but does have the distinction of being the only battleship to have ever sunk a submarine!
I was going to ask the same question. Belfast was one of a subclass of the Town, or Southampton class of large light cruisers. The primary difference was that during the building of the ship, an extra 22 foot section was added between the forward superstructure and the forward funnel.
The original intention was to allow the ships to carry more (16 vs. 12) six inch guns, but as the quadruple turrets were never built, they ended up with the same main armament as the original ships. They could cover a target with continuous fire, but were not really any better than the rest of the class.
This left the two ships (Edinburgh and Belfast) longer than the so called heavy cruisers, and as long as the smaller battleships (like HMS Royal Oak), without significant armour or heavy guns.
I also think that the extra section spoilt the very handsome lines of the 'Towns', giving them an awkward, lop-sided silhouette, certainly nothing worthy of accolade.
But I suppose that as there is little preserved of the glory days of the British Navy, that we should be glad we still have Belfast.
I would have liked to see either Vanguard, the last British battleship, or the Audacious class Ark Royal (not the Harrier carrier) preserved, but alas, they are gone.
I've been thinking about this a bit more. What we are seeing are the first signs of battle-lines being drawn up between two different factions. The divide is whether Linux should stay as mainly a UNIX clone, or whether it should become a new operating system based on UNIX but no longer adhering strictly to the UNIX ethos.
I'm getting old. I've been working with UNIX for 36 years. I'm definitely in the "UNIX clone" camp. I really don't relish learning what would rapidly become a new operating system. I fear that complexities would effectively produce a technocracy who are the only people who understand the inner workings of the new OS, to the exclusion of people on the 'outside'.
I think that the systemd people will be in the "New Operating System" camp. I don't know which camp Linus would sit in. If he is in the UNIX clone camp (and this was really how he started Linux in the first place), I think that people who want to move away from the UNIX roots should fork the kernel, and really make it a new OS. According to the rules as I understand them, they would no longer be allowed to call it Linux, however.
If they do not want to take on the responsibility of maintaining their own kernel, they really should listen to the influential people who do control the existing one, and that means paying some heed to what Linus says rather than trying to browbeat the development team or slip poorly coded patches into the kernel source, because it does not work the way they think it should.
With the direction Canonical want to take Ubuntu, and the friction between the kernel developers and some other projects in the community about the future direction of what a core GNU/Linux system should look like, I can really see there being a schism on the horizon.
You're looking at this the wrong way. The problem with your argument is that you think that systemd is better than what preceded it. Many of us who have long UNIX and Linux experience do not believe that the advantages of systemd (mainly faster boot times) outweigh the horrible, horrible complexities that it introduces.
Just because someone has come up with an interesting alternative to init and the traditional rc scripts does not mean that it is automatically better.
I blame the fact that a lot of people have grown up with Windows as their learning platform. In that model, complexity, opaqueness and proprietary lock in is a way of life, and too many young (and not so young) programmers producing Open Source software accept that it is the way to produce a system.
One of UNIX's real advantages was that there were serious efforts to keep it simple. Systemd does not fit in that model, nor (as others have pointed out) does most of the sound system in Linux (not just pulseaudio, but the other things that came before it) or several other additions.
Where systemd has crossed Linus's path is that, although there is a kernel/utility separation in Linux (systemd is not really part of the kernel, but part of the utility toolset), the systemd developers were demanding changes in the kernel, and abusing some of the management and logging facilities of the kernel in ways that were never envisaged. That caused what looked like kernel problems, like the system hanging on boot.
Linus did not agree that the kernel needed changing, and certainly did not agree with the way that the logging facility was being used, and pushed back in his own inimitable style.
As he is the custodian of the kernel, not of the utility tools, that is his prerogative.
I only used the term ASCII because I believe that it was more immediately understandable than "serial" or "asynchronous". I am well aware that there were many terminals that were normally used as asynchronous serial terminals that had form-filling capability. But I would suggest that outside of some proprietary applications that mandated particular terminal types, almost all ASCII terminals were used as asynchronous serial devices, so much so that the terms are almost synonymous. These devices rarely used the form-filling functions, even if they had them.
By the early '80s, which is when Lotus123 came to the fore, terminals were normally IBM 3270 or 5250 compatible, and did indeed use EBCDIC, or serial terminals that nearly all used ASCII, such as Lear Siegler ADM3A, Wyse50/60, DEC VT100, Beehive etc. There were dozens of manufacturers, all of whom gave up as cheap PCs could also be terminals with the correct software.
I knew I should have qualified that. Curses was an API abstraction layer allowing people to write software without having to know what terminal type was going to be used. It was written by Ken Arnold at UC Berkeley, and was shipped with BSD, before being re-implemented in System III Unix by AT&T.
Interestingly, the Wikipedia article asserts that strictly speaking vi predated curses, and curses heavily borrowed code from vi. After all this time, you learn something new.
400 baud was an odd speed. The standard speeds were 75, 150, 300, 600, 1200, 2400, 4800 and 9600. Some terminals would do 19200, but that was generally frowned upon because of the interrupt load on the server. Faster speeds came about when people started running multiplexors or PPP for internet access.
But yes, that was one of the reasons why vi commands were so terse, and why curses needed to optimise screen updates. Vi was written to be able to work over the slowest of lines with the most basic of terminals. All you needed was full-duplex communication, the alphanumeric keys and some punctuation. The terminal had to have some form of direct cursor addressing and at least home and clear-screen commands that could be encoded in termcap. But even then, there were some terminals that were just too brain-dead to be used for vi. I seem to remember some comments in ancient termcaps about a super-beehive terminal, and maybe one of the Ann Arbor terminals.
What was most concerning was terminals that would not flow-control properly, so termcap had a mechanism for encoding timing delays (padding) into the capability strings so that curses would not overwhelm a terminal and corrupt the screen.
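For anyone curious, terminfo (termcap's successor) still works the same way on a modern box, and you can watch the terminal-independent cursor addressing being generated. This is just a sketch, and assumes `tput` from ncurses and a vt100 terminfo entry are installed:

```shell
# Ask the terminfo database for the VT100 "cup" (cursor position)
# sequence for row 5, column 10. VT100 coordinates are 1-based,
# so the emitted sequence addresses row 6, column 11.
tput -T vt100 cup 5 10 | cat -v
```

`cat -v` makes the escape byte printable, so you should see something like `^[[6;11H`: exactly the sort of string curses used to build from the `cm` capability in termcap, without the application ever knowing which terminal it was talking to.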
Chances are that the visible part of the sheet was sent as a 3270 form, and you would have been able to move between the cells/fields with tab and/or arrow keys, filling in multiple cells, and once all of the fields were how you wanted, you could hit enter and transmit all of the cells up at once, and have the sheet recalculate. This would have been quite familiar to a mainframe user, but completely foreign to anybody used to instant update.
I know that having grown up on full-duplex ASCII terminals on UNIX, DEC and other systems, moving into a 3270 world when I joined IBM frustrated the hell out of me until I worked out the best way to do it. But once the concepts were understood, it worked pretty well, only differently.
The reason for it working the way it did was because 3270 terminals had quite a lot of function built in, and would allow local editing of data on the screen without any involvement from the mainframe or terminal controller. This meant that you could attach a lot of terminals to a mainframe without it melting down, and that interacting with a remote terminal down low speed telecommunication lines was bearable, with only the download and upload screen refresh being slow.
For full-duplex ASCII terminals, the computer was involved in the most basic of functions, and ended up having to echo every key typed back to the terminal. Interrupt handling per keystroke sapped the life out of a lot of mini-computers unless they were good at it (like the PDP11 was).
PCs, where the computer had the keyboard and screen locally attached were a different proposition, and naturally lent themselves to update per keypress type applications.
As an aside, I have been told, and I think I believe a lot of it, that when you look at the lifetime claims of compact fluorescent lightbulbs (CFLs), the lifetime quoted is actually the expected lifetime of the tube.
Within the bulb, you also have an inverter to generate the voltages necessary to drive the tube (it's in the large white plastic blob between the screw/bayonet cap and the tube, and makes the bulb difficult to fit in some light fittings). These invariably contain similar capacitors, such that when the CFL fails, the tube is often OK, but the inverter has stopped working. This is, I believe, why they do not appear to last as long as the claimed lifetimes.
Unfortunately for LED bulbs, until we get low voltage lighting supplies in houses, they will have to have similar electronics to produce a low voltage DC source in the bulb, and will also suffer premature failures.
Whilst we have moved away from the failure rate of valve TVs, it is well known that a very significant number of modern TV failures are caused by capacitor break-down in the power supply. It's normally within the ability of anybody who can learn to wield a soldering iron and screwdriver to unplug the TV from the mains, ignore the "No user serviceable parts inside" label, take the back cover and shielding off, spot the bulging capacitor(s), and replace them (fortunately, the capacitors are unlikely to give a serious shock in a modern TV).
Alternatively, there is a scrap industry that works like the car breakers. Companies break TVs up into their working component boards, and sell them at a fraction of the price of a new TV. eBay and the Amazon Marketplace are great places to find such businesses.
My 7 year old 32" cheap (for the time!) no-brand HD TV has now been repaired at least twice like this, and I have a 26" Acer that I bought over 10 years ago that is still going strong after several bouts of maintenance.
There is still a place for someone who can fix TVs. Whether it is workable as a means of earning a living, I'm not so sure.
"look at Linux, same mistake".
That statement makes it sound like there is one person or organisation in control of Linux who could fill that gap.
I'm sure that you realise that it's just not like that. Linus was interested in creating a UNIX clone, originally for his own use. He did not really have any ambitions for the desktop. It's true that someone like RedHat or Canonical could attempt to fill that gap, but most of the Open Source projects just don't have the resources to produce something on the scale of a full-blown office productivity suite.
The one realistic candidate, StarOffice, came from a proprietary, commercial package that was offered free for non-commercial use on various platforms after being re-written in C++. When Sun purchased the company, they forked StarOffice to create OpenOffice, which had some of the copyright-encumbered components removed (particularly the database component, which was IIRC a cut-down ADABAS implementation). Sun kept StarOffice in their product catalogue as a commercial product, but as time went on, they had difficulty committing serious resource to its development.
And Oracle's purchase of Sun was the death knell for StarOffice, and a serious setback for the development of OpenOffice. Whether the fork to produce LibreOffice will be enough to kick-start attempts to make it a serious contender for deployment at Enterprise level (it's already perfectly capable for SOHO or most SME uses) remains to be seen.
If you have the odd few tens-of-millions of dollars (or more) to develop a new, compatible competitor for MS Office, I'm sure that the whole world would wish you well! I'm sure that there really is a niche for a cross-platform, commercial suite, but trying to play catch-up with Microsoft will always be a difficult task. Maybe you should invest in Corel, and try to get WordPerfect and Quattro ported, but I suspect that even this would be quite a Herculean task!
...but I object to the categories, particularly "Cheapskates".
This does not take into account the low end of the income demographic, where just obtaining a PC was a major challenge in the first place. These people may be faced with a decision like "Do I replace the (working) PC, or do I pay all of the electricity bill, the rent and do the shopping?".
These are not cheapskates. They may not fully understand the issues but are mainly not ignoramuses, and they are certainly not doing it to prove a point (the "brave"). These are people who effectively have no choice other than to keep a machine with XP, or give up on the Internet completely.
I can see the tail-off of XP systems being very slow.
In the UK, there was no exemption for media conversion (backup), but it was commonly accepted that there was no point in trying to prosecute someone for copying their LPs to cassette for use in the car.
Nothing in the digital age had changed that until this recent change, so technically it was still against copyright law, and this included ripping CDs for use in an MP3 player or computer. There is no fair-use provision in UK copyright legislation.
There had been various suggestions about formalising exceptions, but none had made it into an amendment to the copyright legislation until now.
I never got to see the AIX V1 source code on any platform, or the V2 source on the RT (actually, I think I did have a login on one of the machines that used to hold it, but I never looked). But the preceding port (IX), which was done by Locus, was pretty much a pure SVR2 port, first onto the S/370. That was used as the base for AIX on the RT, even if they did re-write parts.
I have had access to various Bell and AT&T distributions from Edition 6 through to R&D UNIX 5.3, and whatever was layered over SunOS 4.0.3 for the R&D additions to that OS.
I would never have said that AIX for the RS/6000 was ever SVR4. AIX 3.1 was definitely only SVID version 1 compliant, which meant that it was really only an SVR2 implementation.
The more modern features were mainly added through the OSF side of things, because IBM was on that camp, not the SVR4 camp.
The convergence really came with the UNIX 98/SUS2 accreditation of AIX 4.3.1, but as this is an interface specification, the underlying code could be written any way you wanted provided that it complied with the interface definitions.
Indeed, if you go through the include files for a current version of AIX, you will find almost no copyright statements left for Bell Labs, AT&T, USL, Novell, XOPEN or The Open Group. This does not prove how little AT&T code is left in there, but it does give some indication.
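The same survey is easy to reproduce on any system with the headers installed. A rough sketch (the pattern and include path are just what one might grep for, not what I actually ran on AIX):

```shell
# Count header files that still mention an AT&T-era copyright holder.
# On a current AIX box this should come out close to zero; on a system
# with genuine SVRx heritage you would expect rather more.
grep -rlEi 'AT&T|Bell Lab|Unix System Lab|Novell' /usr/include 2>/dev/null | wc -l
```

As I said, a low count doesn't prove how little AT&T code remains, since copyright headers can be removed without removing the code, but it's a quick indicator.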
I'm thinking in terms of shellshock here. No OS is totally secure, and I have acknowledged that often in other posts.
The business of reading another process's environment variables is not totally true anyway. You could read the environment that was passed into a process when it started, but any variables defined after the process started were invisible.
That behaviour was not just AIX, but several other UNIX-like OSs (I've just checked, and the same behaviour is in RHEL 6.5), and it has definitely been fixed now on AIX (in 2008 - I can get you the APAR numbers if you want), so that you can only get to see the initial environment of processes you own. That is unless you're thinking of something other than the "ps ewww" output that pretty much every other UNIX-like OS also suffers from.
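On Linux you can see this behaviour for yourself: the kernel records only the environment a process was exec'd with, and that recorded copy is what /proc/<pid>/environ (and hence ps) reports. A minimal sketch, with made-up variable names:

```shell
export BEFORE_EXEC=1                  # in the environment at exec time
bash -c '
  export AFTER_EXEC=1                 # added after the process started
  # Only BEFORE_EXEC appears in the kernel-recorded initial environment
  tr "\0" "\n" < /proc/$$/environ | grep -c "_EXEC="
'
```

This prints 1: AFTER_EXEC exists in the child shell's own working copy of the environment, but not in the initial environment the kernel recorded at exec time.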
I think that you should look at some of the AT&T, or even better the Bell Labs, UNIX source. It's not perfect, but compared to some of the bloatware and spaghetti contributed to open-source projects, including Linux, it's a model of concise, well-documented code.
As soon as the tax runs out, then it becomes an offence to store the car on the road, obviously. The car is no longer taxed so you fail the "a taxed and insured vehicle" test!
That does not alter the fact that it's an anomaly. I don't understand why of the three things you need to legally drive a car on the road, they've not made it a requirement to have an MOT in order to keep it on the public highway. It's just inconsistent.
The same ANPR systems that the Police use to detect untaxed vehicles on the road are also used to detect uninsured vehicles on the road.
It is now illegal (and has been for a couple of years) to have an uninsured vehicle on the road, even if it is parked and not being driven.
So we have the strange situation where an untaxed or uninsured vehicle must be stored off the road, but at the moment, a taxed and insured vehicle without MOT can be parked on the road, but must not be driven.
I'm sure they will fix this deficiency at some point.
You can still queue up at the Post Office. They will take your money however you want to pay it, and inform the DVLA (they've had a direct route to the DVLA for many years). The only difference is that you won't get a round piece of paper to put in your car!
I too don't understand. The old site (which I did some work on the backend servers for some years ago) coped very well. The rate of transactions is quite predictable. Whilst there is normally a surge at the end/beginning of the month, it should not be that different with the new system.
Sounds like there is some misinformation flying around here.