1924 posts • joined 15 Jun 2007
Re: "Nokia said China would receive it first" @disembodied John Brown
That was never the joke. The North East used to have many coal mines, and used to export the coal to other parts of the country and abroad out of Newcastle. So the ironic joke was that there was no point in shipping coal to Newcastle because they had enough of their own.
Now the North East has no coal mines, and also does not export much of anything at all out of Newcastle.
Re: A suitably designed, multi-layer protection model implemented @AC
I don't follow, unless you are alluding to there being a much simpler vector for the breach, like an insider or a social engineering attack.
I was actually not making a judgement about this particular issue, but following up on the comment by Wzrd1 about intruders getting in. I think we are actually talking about the same thing: limiting the damage that can be done while the IDS and intrusion incident protocols are triggered.
Re: Yeah but, this is a RE-hacking @Wzrd1
That they will get in is a wise statement to make.
But it does not have to be totally true. A suitably designed, multi-layer protection model implemented using multiple vendors' kit will probably defeat almost all attacks, especially if the design is kept secret. The trick is to be utterly ruthless about what is allowed between each of your security zones.
By using multiple vendors' kit, each boundary between the security zones presents a new problem to be 'cracked'. If things are designed properly, by the time the attacker gets to the third or fourth boundary, your intrusion detectors should have been tripped, so that you can take action to protect the service being attacked and the other systems that lie further into the network.
You layer the servers themselves to form parts of the security infrastructure, so in the case of web-based services, your edge web servers only keep session and transient data, intermediate servers keep application logic and only enough data for the transactions in flight, and you keep the core databases separate still. In all cases, the servers have an external side and an internal side, and the networks on either side are never bridged by network infrastructure (obviously you have to have something to allow the servers to be administered, but the same rules apply to the management infrastructure).
In order to get access to the places where data is really present for bulk-download, the only practical way in is to have knowledge of everything in advance.
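As a purely illustrative config fragment, one zone boundary of the design above might look like the following Linux iptables rules. The interface names and port numbers are my own invention (and a real build of this design would of course use different vendors' kit at each boundary, not iptables everywhere):

```
# default deny between zones: nothing crosses a boundary unless listed below
iptables -P FORWARD DROP
# edge web zone may reach the application zone on exactly one port
iptables -A FORWARD -i eth-edge -o eth-app -p tcp --dport 8443 -j ACCEPT
# application zone may reach the database zone, again on one port only
iptables -A FORWARD -i eth-app -o eth-db -p tcp --dport 5432 -j ACCEPT
# replies flow back; nothing may originate outwards from the inner zones
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
```

The point is the default-deny policy: each inner zone only ever sees the single port the design permits, so a compromised edge server cannot go browsing the database network.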
I'm not saying that even this design is intrusion-free, but the idea is to ensure that a peripheral intrusion does not expose data wholesale, so as to limit the damage. It also does not protect against DoS-type attacks, or against holes in the infrastructure you provide for your employees' internet access, but that's another story.
But the problem with a model like this is that it gets expensive. And too often, the risk vs. cost balance is set wrong because the managers are dominated by accountants. Too many organisations assume that a single or dual layer of security devices is sufficient to protect their internal networks, and once on a system on an internal network, the world is the cracker's oyster.
I know one bank that used a design like this, which had many zone boundaries, where the architect declared at the end of the first project that it would have been cheaper to give all the customers of the service access to a personal banker for a year than to build the infrastructure! But they did use the infrastructure again for other services, so the cost of later projects was reduced.
I never gave vinyl up
Although some of the shortcomings of my Project Debut 2 were beginning to take the edge off my enjoyment. So I found that Henley Designs offers a noise reduction kit that is supposed to eliminate the rumble that was just audible enough to annoy, and Hey Presto, so little rumble that I had to check that I'd actually put the needle on the silent track!
OK, I said to myself. Time to replace the stock OM-5e cartridge that was 'just about good enough' with my hoarded Ortofon VMS20e MKII and set it up. Oh, and dig out the Osawa OM-10 mat and the HiFi News test disk. I've been meaning to do this for a while, but the rumble and time pressures just prevented me from carrying it through.
Well, I always liked the sound of the Project, but now it's sublime. So much so that the Wife does not see me many evenings as I revisit disks that I've not played for years.
My biggest problem is that the glue on the sleeves of my LPs is degrading. Every time I get a disk down, the sleeve comes apart. Also, the paper inner sleeves are starting to shed wood fibres, so a deep clean is needed. Somehow, it appears that my collection has got slightly damp, but I can't work out how. It was in storage for some months during a house move, which is the most likely time.
I am not an extreme audiophile. My setup has always been only one step above budget, but each piece was bought as a best-buy in its class. Besides the Project, it's a NAD 7020 receiver, a JVC KD720 tape deck and Keesonic Kub speakers, but the combination is really quite good. There's also a Technics CD player as well, but I don't know the model off the top of my head.
Nothing new here, move along
Newcastle University used the heat from their water-cooled IBM 360/67 and later the 370/168 to help heat Claremont Tower back in the 1970s.
One of my kids uses his gaming rig to keep his bedroom warm without having the radiator turned on.
Both different in scale, but similar in concept.
Devices with more capacity are available. I've got one that does 2A from one socket and 1A from the other. Both will charge my phone.
But I have a problem with the stability of the voltage. Just charging the phone is fine, but if I plug the 3.5" jack into the radio to play music from the phone at the same time as I'm charging, electrical noise from the car's electrical system gets through to the phone and renders any quiet audio unlistenable.
I'm just wondering whether I should fork out for a branded adapter, although the one I'm using was not a pound shop special. Anybody any idea whether Belkin et al. actually make their adapters with better components, or whether they just slap their name on the same old tat and charge a higher price?
Flash memory degrades over time as charge gradually leaks from the cells. At the 2013 Flash Memory Summit, it was suggested by a Facebook representative that the "JEDEC JESD218A endurance specification states that if flash power off temperature is at 25 degrees C then retention is 101 weeks". Flash memory retains data best if the controller is powered up once in a while to scan for and correct any bit errors that creep in.
I've always been dubious of flash memory retaining the data for any extended time, and I would be incredibly sceptical about any claim that says that current flash memory technologies could be used to reliably keep data for decades, even if "Flash drive controllers, currently mostly optimised for performance, can be optimised for endurance instead".
Re: But... Why? @Simon Harris
You do know that the original song "Neunundneunzig Luftballons" is an anti-war protest song (and says nothing about the balloons being red - which originally confused me when the German video was shown with the English song).
Heute zieh ich meine Runden,
Seh' die Welt in Truemmern liegen,
Hab' 'nen Luftballon gefunden,
Denk' an Dich und lass' ihn fliegen...
- literal translation (but not mine), definitely not the English version
Today I'm doing my rounds,
Seeing the world lying in ruins,
Found a balloon,
Think of you and let it fly....
Re: Long-term deep storage
I've noticed this. My old EEEPC 701, which is not used much now, has needed to be reinstalled each time I've left it a few months without being powered on.
Re: Well duh
Split your WiFi into trusted and untrusted domains.
Strictly control what can connect to the trusted domain, using keys or tight access control.
Let the untrusted one be a free for all, with a disclaimer that using it is at the user's own risk.
If there is a requirement for the untrusted devices to connect to trusted services, treat all of the connections as if they were from the Internet proper, and put the correct firewall and barrier controls in place to protect your core services.
Use additional DMZs if that allows you to contain access.
There is absolutely no need to allow BYO devices to connect to your core networks for social media access. If you want them to use their devices for work, you may need to think a bit harder, but for just social media access, it's not that difficult.
As a business policy ...
... I have often said that if someone is irreplaceable, you should fire them!
Too often people become irreplaceable by hoarding and not sharing knowledge, and such people are never good for an organisation.
By extension, everybody should be replaceable.
Re: Enjoyed this
If you are not looking at developing the films yourself, you could use C-41 process black and white film. This can be processed by any film processor as it uses the same equipment as colour film.
I believe that both Ilford and Fuji still produce this type of film, and you may still be able to find some Kodak film within its use-by date.
Re: nice commentary
I don't count myself as a photography enthusiast, but I have taken pictures over the years that have generated a wow reaction from people.
I taught myself film photography from books and experience while at university, using a tank of a second hand Praktica LTL3 completely manual SLR camera with an f2.8 Carl Zeiss Tessar lens (an optically good, if rather restrictive lens) and stop-down metering.
But my photos were always the ones people wanted to see at the breakfast table when they came back from the developers.
What this hair-shirt experience taught me was that preparation is important: pre-focusing for action shots, setting the aperture and exposure in advance and, above all, choosing the correct shooting location are essential. All of these are skills that can and should be learned. Another trick was to leave the camera cocked at a medium aperture and mid-range focus (for a reasonable depth of field), so as to make an attempt at those 'just happening' shots, and rely on the developing process to correct the exposure. And if you have time and spare film, bracket the exposure for those important shots you don't want to miss.
I stopped spending significant time taking pictures, and am now really just a casual photographer.
When I got my first digital bridge camera, I was appalled by just how difficult it was to actually control the process. Everything was automatic, and the overrides were so difficult to work using the few buttons on the camera that it was a joke. I now possess a slightly more serious Fuji bridge camera with a mid-zoom lens. But I chose this one because I could control the focus and zoom by hand (which does wonders for preserving the battery life), and while I don't fully understand how the synthetic aperture works, I can use it. But what I first learned using a feature-free camera is still useful, even if most of the time I now shoot on full automatic.
I pity people learning photography now, because they just don't get the opportunity to learn the necessary skills properly. One of my kids studied photography a few years back as part of her foundation degree, and I found it highly amusing that they were told to go and buy a cheap second hand film camera with full manual over-ride for use on the course, so at least the colleges still understand.
What on earth does Simon have against SSA disks? I found them easy to deploy, quick for their time, quite dense (it was the first disk subsystem I knew that used both the front and back of the drawer) and easy to maintain.
OK, it tied you in to IBM and their disks a bit, but I did not find them too bad at the time, and there was never a quibble replacing them while under maintenance.
Re: Pentium 4 didn't suck. @Nigel 11
I don't claim to be an expert in Intel x86 architecture, but I believe that some of the more specific features may have led to additional instructions being added to the ISA. That is certainly the case in other processor families I have used.
In order for code that uses these instructions to run on processors that do not implement the instructions, it is necessary to be able to trap the 'illegal instruction' interrupt, and do something appropriate.
If you did not trap the illegal instruction, the OS would at best kill the process, or at worst, crash the whole system.
In the case of the MicroVAX and early PowerPC processors, you would call code that emulated (slowly) the missing instruction, which had to be part of either the OS, or the runtime support for the application. I've not heard of that happening in the Intel/Windows world, although I'm not discounting that it may be there.
In the s370 world, instead of emulation code, it was possible to trap such things in alterable microcode, this being the method that IBM used to 'add' additional instructions to the s370 ISA for specific purposes to allow application speed-ups.
Re: Pentium 4 didn't suck. @Gordan
You make a very good point, but you ignore that compiling for a particular processor, using all of that processor's features, breaks the "compile once, run anywhere" ubiquity of the Intel x86 and compatible processors.
If this class action lawsuit is providing relief for home users, these are people who will buy a system and install code that is compiled to a common subset of instructions for the processors it is expected to run on. They are certainly not going to re-compile the applications they buy, let alone the operating system and utilities (you have to admit that dominant players providing x86 operating systems do not make it easy for a user to recompile the code even if they wanted to).
Imagine if when buying a program, you had to check not only which versions of Windows it would run on, but which processor (I know, some games did, but they are a special case).
I also know that it is perfectly possible for an application or OS provider to provide smart installers that identify the processor at install time, and install the correctly compiled version for the processor. Or even put conditional code in that detects at run time which libraries to bind, or which path through the code to select.
Each of those last alternatives leads to significant bloat in either the install media or, even worse, the disk and memory footprint of the installed code. And that is not to mention the support nightmare of having several different code paths to do the same thing on different processors.
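The "smart installer" idea described above can be sketched in a few lines of shell. The feature flag and the messages are illustrative only (a real installer would check several flags), and /proc/cpuinfo is Linux-specific:

```shell
#!/bin/sh
# probe the CPU's advertised feature flags at install time
# and pick the appropriate build (sse2 is just an example flag)
if grep -qw sse2 /proc/cpuinfo 2>/dev/null; then
    echo "installing the sse2-optimised build"
else
    echo "installing the baseline build"
fi
```

This is exactly the bloat trade-off mentioned above: every optimised build the installer can choose from has to be shipped on the install media.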
No, the shrink-wrap application providers will write their code for a common subset of features, and that is what the Pentium 4 was weak at. The same binaries often ran slower on Pentium 4 than on Pentium III processors at the same clock speed (and when launched, the Pentium 4s did not run at the high clock speeds they later achieved). And later processors such as the Pentium M and Core architecture processors, which used more of the Pentium III architecture with the 'good' bits of the Pentium 4 grafted on, show that Intel eventually got the message that the Pentium 4 was a dead end. I'm surprised they contested this, although I guess that this case is all about benchmark deception rather than ultimate speed.
Re: OK, quick survey -
I sat through the whole thing, thinking "Something has got to happen soon".
Can't do a hand, how about a thumb.
The follow on project to LOHAN has to be an amateur resupply rocket to the ISS.
I'm sure Lester and the other boffins will be up for it!
Re: Lazy lazy lazy
And RT has not developed a serious anti-US agenda since the situation in Crimea and the Ukraine started, has it!
When Russia Today started, I was surprised by how apparently neutral it was. I tuned in a few days ago and was (actually not) surprised by how that has changed in the last few months, with them predicting the demise of the dollar as a world currency (suggesting Bitcoin as an alternative, of all things), and the rise of a fascist police state in the US. It almost seemed that they were listening to anybody spouting a conspiratorial line. Almost like "Controversial TV" used to be, although that carried drivel by David Icke as well.
I wonder whether Mr Putin has been applying pressure on RT. It must be nice to have a personal mouthpiece broadcasting to the world.
Re: Good hardware but why not a real operating systems? @AC
Remind me. How many Windows systems are there on the Top 500 Supercomputer list?
I assume you are either joking or a troll. I cannot really think you are really serious.
I don't think Cray supply anything other than Linux on their hardware.
Re: The best weather forecasting...
Most local radio stations do not use the Met Office forecast. I believe that they mostly use the "World Weather Information Service" through Sky News, which is, I believe, a data aggregator, not a weather bureau in its own right.
Microsoft 'bought' Insignia Solutions (or at least took out a pretty much exclusive licence) for their SoftPC technology that allowed 'foreign' binaries to run on a particular architecture, a feature called Windows-on-Windows (WOW).
This meant that you could have had shrink-wrap Windows applications that should run on all Windows platforms. I doubt that the technology was maintained when Windows became x86 only.
Re: Linux ahead(as per usual)
There were systems you could have bought that ran Windows NT on Alpha.
But it is clear that the majority of support for them came direct from Digital, not MS.
I did see an IBM Power system (I think it was a prototype model 40P) running Windows NT 3.51.
Re: If it was respected @AC
This is not about sharing data for patient care. That should already be being done under a different initiative. Care.data is about sharing data with non-clinicians who perform fundamental, mainly statistical research to correlate and synthesize new conclusions from data that is already held. That should be a good thing.
At least in theory.
The problem here is that the organisations allowed to apply for access to the data go far beyond the NHS, and indeed beyond pure medical research. I believe that insurance companies (supposedly for actuarial reasons) and drug companies (probably to assess whether a condition was worth developing a drug for) were the sort of commercial organisations that were applying for access.
Re: Easily explained
Besides thumbs up and down counts, this type of comment could do with a groan count!
And this is why...
...I run an additional hardware firewall separate from my ADSL router.
It's long been an axiom of any 'proper' security that you have multiple layers, each provided by a different vendor.
Even if each of them may have their own vulnerability, it seriously deters casual hackers if once they've breached one line of defence, there's a new and different one to knock down.
Some may see it as a challenge, but most will just give up.
Re: the "fun" part about systemd
Unfortunately, laptops in particular vary quite a lot in the chipsets they include, and there is a lot of tuning required to get Linux stable when suspending and resuming.
There is a whole subsystem called pm-utils (ironically modelled on sysv init) which allows you to tweak the suspend and resume system for a particular model of laptop. I tend to run IBM/Lenovo Thinkpads, for which there are a significant number of profiles that work quite well.
Where I've had problems is with the models with Radeon Mobility graphics adapters when KMS is enabled, and I've also had a problem with the pulseaudio sample rate not being restored properly.
But with KMS turned off (Ubuntu releases between 8.04 and 12.04), if you can ignore the audio issues, suspend works quite well. 14.04 appears to have fixed the sound sampling issue.
Hibernate is more problematic, as on Thinkpads it is necessary to have a FAT primary partition on the hard disk to contain the hibernate file. Before I upgraded my Windows partition to Win2K, it used to work fine, but all those years ago, when I upgraded to NTFS, I found that the hibernate code in the Phoenix BIOS could not handle the newly formatted NTFS partition. And since the 'old' boot record format cannot have more than 4 primary partitions (WinXP now, current Ubuntu, last/next Ubuntu and an extended partition containing the rest), I don't have a spare primary partition just for a FAT filesystem.
"Haven't used it much yet"
And there is your problem.
You really know that it's not the right approach when you find your first system that either does not complete the boot process, or even worse, sometimes does but sometimes does not.
You then have this impenetrable black hole to try and debug, which may "appear to be well-documented", but does not tell you what is happening.
Once you've seen it, the "huge pile of little shell scripts" is easy in comparison. The naming convention is only funny if you don't understand how the shell performs globbing.
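For anyone who hasn't seen it: the naming convention works because init expands a glob like /etc/rc2.d/S* and the shell returns the matches in sorted order, so the two-digit prefix is the start order. A quick demonstration (the script names and temporary directory are made up for the example):

```shell
#!/bin/sh
# fake an rc directory and show the order the shell hands scripts to init
d=$(mktemp -d)
touch "$d/S10network" "$d/S20syslog" "$d/S99local" "$d/K08nfs"
for f in "$d"/S*; do        # K* scripts are skipped at startup; they run at shutdown
    echo "start: ${f##*/}"
done
rm -rf "$d"
```

The glob expands in lexical order, so S10network starts before S20syslog, which starts before S99local: the run order is in the name.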
Bad Wolf was introduced in a very subtle way.
It was not rammed down our throats, as in "Here's the arc you're looking for". It was more "Hang on a second, didn't we see something like that a few weeks back?". And it sort of made sense, with Rose, while she controlled the power of the Tardis, touching all of her timeline with the Doctor to leave some clues as to what had to happen.
I wonder why she didn't see any evidence of Clara, though. Oh, of course, no multi-series arc (Babylon 5, why could you not have had more influence on other series?).
Re: Did anybody notice...
Yes. Probably a Scientific but could have been a Programmable. Need to check the stills. And it still worked! The display was clearly visible at one point.
Hope they didn't ruin it.
Re: Defining Free
Hmm. The BARB figures are interesting, and it horrifies me to see just how skewed TV viewing in the UK actually is towards a few high-profile programmes like The Great British Bake Off, The X Factor, Downton Abbey etc.
But it does raise the question of why something like 40% of households (based on 10 million Sky subscribers and 25 million households in the UK, although there are very broad statistical flaws here) decide to spend money with Sky. And that does not include Virgin Media customers.
There must be something pretty compelling in the 2% of viewing time for Pay channels to justify this expense. Obviously, some of that is going to be sport, and maybe the relatively easy to access catch-up and on-demand services, together with the bundled boxes could be helping maintain their customer base. Of course, even Sky customers will watch free-to-air services some of the time. Like phones, possibly Sky customers don't like the up-front cost of buying the box.
I have both Freeview hard disk recorders and streaming services available to me on TVs, as well as Sky, and have also been through two generations of USB Freeview stick and played around with other online TV services, and I still find that the go-to service in our household is Sky. Maybe we're trying to justify spending the money, but, as I said, although it is quite expensive, I still regard it as reasonable value for money just for the content I can't (legally) get anywhere else.
Interestingly enough, whenever my wife and I have 'spirited conversations' about what we spend money on, she always brings up the Sky subscription as an unnecessary expense (it is significantly less than she spends on cigarettes in a month), and I have to remind her that she is the one to be found most frequently watching the pay channels! In fact, I would almost not miss it, because I get so little time to watch the slightly less mainstream pay TV channels that I find interesting (documentaries, arts, Syfy, but also the movie channels).
How are you defining "free content"?
If it's content that is available on other free-to-air services (Freeview or Freesat), then I would dispute your figure of 90%. I have well over 200 TV channels available on Sky, but only about 30 available on Freeview and approximately 160 on Freesat. All have at least some +1 channels, so not all of those channels contain unique content.
If you are saying that it is available through the Sky infrastructure without having a Sky subscription, then I may be in slightly closer agreement with you, but try removing your Sky subscription card and see how many channels you can no longer get.
For my ~£60 a month for a Sky HD package, in addition to the Freeview channels, I get Sky 1, Sky Atlantic, Sky Living, all of which contain content not available anywhere else in the UK, and I also get SyFy, Sky Arts, a host of documentary channels, access to 'golden' channels like Watch, a moderate selection of movie channels (although not as good as they were) and also a whole host of on-demand content which I would not pay any extra for. On top of that, they gave me the box(es) for free (they replaced my original SkyHD box without cost when they rolled out the on-demand services).
I don't agree with the way that they spread the desirable content across as many packages as they can to maximise the number of packages you need to buy, and I certainly don't agree with the gouging of their customers with regard to sports channels, but I don't think it is such bad value.
If they still existed (and this is mostly the reason why they don't), I certainly would no longer rent any DVDs from places like Blockbuster, and I've noticed that the number of DVDs I buy has dropped significantly since Sky installed their on-demand service. So in recent years, the amount of money I've spent on content has actually declined as Sky have brought on their services. This seems good to me!
I am reluctant to become a triple-play customer, because I don't actually like Sky's business model much, but I don't really object to getting TV from them.
Re: Why would you PARSE FONTS in the kernel? @AC - Linux drivers
My recollection is that xdm actually could switch UID when it ran on a system. I believe that it was a configurable option, and you could specify an X server restart (partly to change the UID, but also to set the server to a known state with no client programs left over from the last user) during the login process on a device that allowed it. Obviously not on an X terminal, though.
It's later graphical login processes like gdm and lightdm that changed this.
Unfortunately I no longer have anything old enough running to confirm this.
Re: Windows 10, for those interested @AC
Whilst shellshock is/was a really worrying problem, I don't think that any serious web site will actually run any CGI bash scripts.
Yes, I know that the problem will persist across other binaries, as long as they preserve the environment variables, whenever a bash is started as a child, and that the system() call will almost certainly start a shell, so there is still danger there, but I would be startled if Google, Amazon et al. were ever vulnerable. The patching they did was mainly to be absolutely sure.
SOHO or SMB web sites may be vulnerable, of course, so I am not downgrading the risk, but I think that your implied assertion that all Linux web servers will by default be vulnerable is overstating the problem.
Re: Why would you PARSE FONTS in the kernel? @AC - Linux drivers
Actually, although a small part of the video driver system is in the kernel, the majority of the driver runs as plug-in modules to the X server process (not kernel modules), which is a user-land process, not in the kernel. This makes graphics drivers different from, say, a driver for a disk adapter.
The bits in the kernel are to do with allowing the X server process to access the video hardware at a register/DMA level, and are pretty generic glue code. All of the smarts are in the X server, and that is the code that is most likely to have a problem. This means that it is unlikely that you can crash a Linux box with a graphics driver, although you may make it difficult to use from the directly attached monitor (other access methods are available!).
In fact, if you try hard, you don't even have to run the X server as root. Generally speaking, modern distributions do run the X server as root because it is started up before the graphical login starts, and that needs X, but if you disable the graphical login, log in as an ordinary user using a text-based authentication method, and then run up an X server (using something like startx), it works just fine.
I would actually like the graphical login methods to switch away from root during the login process. It can be done, but is likely to introduce a visible glitch as the X server restarts during the login process. But as we will end up with Wayland or Mir in the near future, changing the way that X11 is used seems a bit pointless.
Re: wouldn't be multi-tasking the same way
There were serial terminals that provided two or more serial ports allowing them to be connected to two different systems (or the same system twice!).
Ones I came across included the HP2392, Falco 5220, and I believe that Wyse and Esprit also had models that did the same.
But none of the normal terminals that I came across allowed direct cut-and-paste between different sessions, although I could not say that there were none that did.
I should note that the AT&T BLIT, running on UNIX with layers backing it up, allowed virtual terminals on the same machine over an RS232 or Starlan serial connection (there's a video copyrighted 1982 on YouTube), and it did come with a mouse! There was also a session manager called screen that allowed a process on a UNIX system to masquerade as several terminals, maintaining screen state, and allowed you to switch between them. This worked on any terminal with sufficient curses support.
Re: CTRL-C @dan1980
"the command prompt came from a time when mouse-control was not really there"
No. It came from a time before mice were an option. I know that the mouse was first demonstrated to the world in 1968, but they did not appear on general purpose computers until the Xerox Star, AT&T Blit, Sun 1 and Apple Lisa, all in the early 1980s. The first PC mouse appeared around 1983.
'ordinary' terminals with CLI interfaces go back much further than that!
Re: CP/M applications @Mage
It pre-dates CP/M as well.
I first came across Control-C as interrupt in the default settings for the UNIX Version 7 TTY driver (although it was 'soft', and could be redefined with the stty command), which would have been around 1979. Prior to that, the standard interrupt was communications Break. But I don't want to talk about the V6 TTY driver. That ugliness is best consigned to history.
I'm fairly certain that Digital Equipment Corporation (DEC) used Control-C for interrupt in RSX and RSTS as well.
Control-Z to suspend came from BSD UNIX, which introduced TTY job control: you could background programs and switch between them with the fg and bg commands.
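That BSD job control machinery is still there in any modern shell. A rough demonstration (set -m is needed here because job control is only on by default in interactive shells; the sleep is a stand-in for any long-running program):

```shell
#!/bin/sh
set -m               # enable job control, as an interactive shell would
sleep 30 &           # start a job in the background (Ctrl-Z then bg gets you here too)
jobs                 # list it; an interactive user could resume it in front with: fg %1
kill %1              # %1 is the job-control name for job number 1
wait 2>/dev/null || true
echo "job reaped"
```

In an interactive session the same %1 notation works with fg and bg, which is exactly the switching described above.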
Re: CPU to be an Snapdragon 805, really? @ Jedibeeftrix
Apart from playing the numbers game, just why is 64-bit a good idea in a tablet? It's not as if it has many gigabytes of RAM to manage (3GB can be managed in a 32-bit address space), or needs to handle large integers or high-precision floating-point numbers.
Just because Apple thinks that a 64-bit processor is a good idea in a hand-held device does not mean that it is a good idea at the moment.
Re: HTTP or HTML? @Mage
The whole concept of wide area networking security was a moot point when it came to early email systems. UUCP was the best that there was (UUCP is the UNIX to UNIX Copy Program, not just a mail system, although one of the most common uses was mail, and another was remote printing).
Everybody knew it was not secure, because it was a store-and-forward scheme, such that any of the intermediate systems had access to the content. That was just the way it worked, and everybody knew and accepted it.
If you look at basic UUCP, it ran over serial communication lines, often over analogue telephone lines using modems. The concept of it being secure was never even thought about. It was easy to tap a telephone line and feed the captured data through a modem, so it was obvious that there was no security. If you wanted to send something securely, you encrypted it and then uuencoded the result.
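That encrypt-then-uuencode pipeline can be sketched in modern terms. The old crypt(1) cipher is long obsolete, so a toy XOR cipher stands in for the "encrypt" step here (my invention, purely illustrative); the uuencoding is the genuine article:

```python
import binascii

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    # Stand-in for the obsolete crypt(1); XOR is symmetric, so the
    # same function also decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret = toy_encrypt(b"meet at dawn", b"k3y")

# uuencode emits lines of up to 45 input bytes each
encoded = b"".join(binascii.b2a_uu(secret[i:i + 45])
                   for i in range(0, len(secret), 45))

# The receiver reverses the pipeline: uudecode, then decrypt
decoded = b"".join(binascii.a2b_uu(line) for line in encoded.splitlines())
print(toy_encrypt(decoded, b"k3y"))  # b'meet at dawn'
```

A real uuencoded file also carried `begin`/`end` header lines around the data, which this sketch omits.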
There was an encrypted UUCP system which used the UNIX crypt technology. I cannot remember the exact details or what it was called, but it was in the AT&T BNU, but it effectively meant that the data was not transmitted in the clear. But it was still vulnerable on the intermediate host systems.
Saying that it should have been secure is like saying early cars should have been built with roofs, windows and locks. But they weren't.
Anyway. The TELEX system was about as insecure as you can imagine, so that is not a particularly good example.
BTW. All of the people saying that X.400 should have been the default mail system should remember that SMTP was defined in RFC 821 several years before the initial recommendations for X.400, which were expected to run over X.25 transport systems, and so were a bit weak on the security side as well.
Look again at what was being celebrated. It is the best historical artefact, so unfortunately, it is limited to what actually still exists.
I agree that HMS Dreadnought was clearly a revolutionary ship, and rendered the rest of the world's battleships obsolete almost overnight, but Dreadnought herself was rapidly overtaken by subsequent ships that introduced the 13.5" and then the 15" main gun, fuel oil in place of coal, superheated steam boilers and improved protection. Notable British Dreadnought follow-on ships included the Iron Duke class and then the Queen Elizabeth class, which was IMHO probably the peak of the British Super Dreadnoughts. Subsequent ships moved away from the classic Dreadnought layout, culminating in the fast battleships built by various navies to counter ships like Bismarck and IJN Yamato.
HMS Dreadnought herself only had a life of around 13 years, which is a very short time for a capital ship, and managed to miss Jutland, but does have the distinction of being the only battleship to have ever sunk a submarine!
Re: HMS Belfast
I was going to ask the same question. Belfast was one of a subclass of the Town, or Southampton class of large light cruisers. The primary difference was that during the building of the ship, an extra 22 foot section was added between the forward superstructure and the forward funnel.
The original intention was to allow the ships to carry more (16 vs. 12) six inch guns, but as the quadruple turrets were never built, they ended up with the same main armament as the original ships. They could cover a target with continuous fire, but were not really any better than the rest of the class.
This left the two ships (Edinburgh and Belfast) longer than the so-called heavy cruisers, and as long as the smaller battleships (like HMS Royal Oak), without significant armour or heavy guns.
I also think that the extra section spoilt the very handsome lines of the 'Towns', giving them an awkward, lop-sided silhouette, certainly nothing worthy of accolade.
But I suppose that as there is little preserved of the glory days of the British Navy, that we should be glad we still have Belfast.
I would have liked to see either Vanguard, the last British battleship, or the Audacious-class Ark Royal (not the Harrier carrier) preserved, but alas, they are gone.
I've been thinking about this a bit more. What we are seeing are the first signs of battle-lines being drawn up between two different factions. The divide is whether Linux should stay as mainly a UNIX clone, or whether it should become a new operating system based on UNIX but no longer adhering strictly to the UNIX ethos.
I'm getting old. I've been working with UNIX for 36 years. I'm definitely in the "UNIX clone" camp. I really don't relish learning what would rapidly become a new operating system. I fear that complexities would effectively produce a technocracy who are the only people who understand the inner workings of the new OS, to the exclusion of people on the 'outside'.
I think that the systemd people will be in the "New Operating System" camp. I don't know which camp Linus would sit in. If he is in the UNIX clone camp (and this was really how he started Linux in the first place), I think that people who want to move away from the UNIX roots should fork the kernel, and really make it a new OS. According to the rules as I understand them, they would no longer be allowed to call it Linux, however.
If they do not want to take on the responsibility of maintaining their own kernel, they really should listen to the influential people who do control the existing one, and that means paying some heed to what Linus says rather than trying to browbeat the development team or slip poorly coded patches into the kernel source, because it does not work the way they think it should.
With the direction Canonical want to take Ubuntu, and the friction between the kernel developers and some other projects in the community about the future direction of what a core GNU/Linux system should look like, I can really see there being a schism on the horizon.
Re: Unpaid volunteers in a lot of cases @AC
You're looking at this the wrong way. The problem with your argument is that you assume systemd is better than what preceded it. Many of us with long UNIX and Linux experience do not believe that the advantages of systemd, mainly faster boot times, outweigh the horrible, horrible complexities that it introduces.
Just because someone has come up with an interesting alternative to init and the traditional rc scripts does not mean that it is automatically better.
I blame the fact that a lot of people have grown up with Windows as their learning platform. In that model, complexity, opaqueness and proprietary lock-in are a way of life, and too many young (and not so young) programmers producing Open Source software accept that as the way to produce a system.
One of UNIX's real advantages was that there were serious efforts to keep it simple. Systemd does not fit that model, nor (as others have pointed out) does most of the sound system in Linux (not just PulseAudio, but the other things that came before it) or several other additions.
Where systemd crossed Linus's path is that, although there is a kernel/utility separation in Linux and systemd is really part of the utility toolset rather than the kernel, the systemd developers were demanding changes in the kernel, and abusing some of its management and logging facilities in a way that was never envisaged. That caused what looked like kernel problems, such as the system hanging on boot.
Linus did not agree that the kernel needed changing, and certainly did not agree with the way that the logging facility was being used, and pushed back in his own inimitable style.
As he is the custodian of the kernel, not of the utility tools, that is his prerogative.
Re: Talking of 1-2-3 @Deryk
I only used the term ASCII because I believe it was more immediately understandable than "serial" or "asynchronous". I am well aware that there were many terminals, normally used as asynchronous serial terminals, that had form-filling capability. But I would suggest that outside of some proprietary applications that mandated particular terminal types, almost all ASCII terminals were used as asynchronous serial devices, so much so that the terms are almost synonymous. These devices rarely used the form-filling functions, even if they had them.
By the early '80s, which is when Lotus 1-2-3 came to the fore, terminals were normally IBM 3270 or 5250 compatibles, which did indeed use EBCDIC, or serial terminals that nearly all used ASCII, such as the Lear Siegler ADM-3A, Wyse 50/60, DEC VT100, Beehive etc. There were dozens of manufacturers, all of whom gave up as cheap PCs with the correct software could also act as terminals.
Re: Text editting
I knew I should have qualified that. Curses was an API abstraction layer allowing people to write software without having to know what terminal type was going to be used. It was written by Ken Arnold at UC Berkeley, and was shipped with BSD, before being re-implemented in System III Unix by AT&T.
Interestingly, the Wikipedia article asserts that, strictly speaking, vi predated curses, and that curses heavily borrowed code from vi. After all this time, you learn something new.
Re: Text editting
400 baud was an odd speed. The standard speeds were 75, 150, 300, 600, 1200, 2400, 4800 and 9600. Some terminals would do 19200, but that was generally frowned upon because of the interrupt load on the server. Faster speeds came about when people started running multiplexors or PPP for internet access.
But yes, that was one of the reasons why vi commands were so terse, and why curses needed to optimise screen updates. Vi was written to work over the slowest of lines with the most basic of terminals. All you needed was full-duplex communication, the alphanumeric keys and some punctuation. The terminal had to have some form of direct cursor addressing and at least home and clear-screen commands that could be encoded in termcap. But even then, there were some terminals that were just too brain-dead to be used for vi. I seem to remember some comments in ancient termcaps about a Beehive SuperBee terminal and maybe one of the Ann Arbor terminals.
More concerning were terminals that would not flow-control properly, so there was a mechanism for encoding timing delays into the termcap capabilities so that curses would not overwhelm a terminal and corrupt the screen.
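Some rough arithmetic (mine, assuming the usual asynchronous framing of 10 bits on the wire per character: start bit, 8 data bits, stop bit) shows why every saved character mattered:

```python
def repaint_seconds(baud: int, cols: int = 80, rows: int = 24) -> float:
    # Naive full-screen repaint: one character per cell, 10 bits each
    return cols * rows * 10 / baud

print(repaint_seconds(1200))  # 16.0 -- a full redraw takes 16 seconds
print(repaint_seconds(9600))  # 2.0  -- still 2 seconds even at 9600 baud
```

Hence curses redrawing only the cells that changed, and vi getting by on single-character commands.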
Re: Talking of 1-2-3
Chances are that the visible part of the sheet was sent as a 3270 form, and you would have been able to move between the cells/fields with tab and/or arrow keys, filling in multiple cells, and once all of the fields were how you wanted, you could hit enter and transmit all of the cells up at once, and have the sheet recalculate. This would have been quite familiar to a mainframe user, but completely foreign to anybody used to instant update.
I know that having grown up on full-duplex ASCII terminals on UNIX, DEC and other systems, moving into a 3270 world when I joined IBM frustrated the hell out of me until I worked out the best way to do it. But once the concepts were understood, it worked pretty well, only differently.
The reason for it working the way it did was because 3270 terminals had quite a lot of function built in, and would allow local editing of data on the screen without any involvement from the mainframe or terminal controller. This meant that you could attach a lot of terminals to a mainframe without it melting down, and that interacting with a remote terminal down low speed telecommunication lines was bearable, with only the download and upload screen refresh being slow.
For full-duplex ASCII terminals, the computer was involved in the most basic of functions, and ended up having to echo every key typed back to the terminal. Handling an interrupt per keystroke sapped the life out of a lot of mini-computers unless they were good at it (like the PDP-11 was).
PCs, where the computer had the keyboard and screen locally attached were a different proposition, and naturally lent themselves to update per keypress type applications.
Re: "TVs these days are a lot harder to repair than TVs of old"
As an aside, I have been told, and I think I believe a lot of it, that when you look at the lifetime claims of compact fluorescent lightbulbs (CFLs), the lifetime quoted is actually the expected lifetime of the tube.
Within the bulb, you also have an inverter to generate the voltages necessary to drive the tube (it's in the large white plastic blob between the screw/bayonet fitting and the tube, and makes the bulb difficult to fit in some light fittings). These invariably contain similar electrolytic capacitors, such that when the CFL fails, the tube is often OK, but the inverter has stopped working. This is, I believe, why they do not appear to last as long as the claimed lifetimes.
Unfortunately for LED bulbs, until we get low voltage lighting supplies in houses, they will have to have similar electronics to produce a low voltage DC source in the bulb, and will also suffer premature failures.
"TVs these days are a lot harder to repair than TVs of old"
Whilst we have moved away from the failure rate of valve TVs, it is well known that a very significant number of modern TV failures are caused by capacitor breakdown in the power supply. It's normally within the ability of anybody who can learn to wield a soldering iron and screwdriver to unplug the TV from the mains, ignore the "No user serviceable parts inside" label, take the back cover and shielding off, spot the bulging capacitor(s), and replace them (fortunately, the capacitors are unlikely to give a serious shock in a modern TV).
Alternatively, there is a scrap industry that works like the car breakers. Companies break TVs up into their working component boards, and sell them at a fraction of the price of a new TV. eBay and the Amazon Marketplace are great places to find such businesses.
My 7-year-old 32" cheap (for the time!) no-brand HD TV has now been repaired at least twice like this, and I have a 26" Acer that I bought over 10 years ago that is still going strong after several bouts of maintenance.
There is still a place for someone who can fix TVs. Whether it is workable as a means of earning a living, I'm not so sure.