Re: Bah! @Stevie
Were you reading my music catalogue!
This is exactly why they have strict operating procedures that dictate that if they can't get the system back up within a set period of time, they invoke their contingency plans to keep passengers, aircraft and aircrew safe.
I understand from an interview I heard on BBC Radio 4 on Friday or maybe Monday that this threshold is 7 minutes. The interviewee said that they had the system running again after 15 minutes, but that was 8 minutes too late.
Once they'd initiated the contingency plan, which basically involves preventing any more aircraft from entering the controlled airspace and getting as many of those already there on the ground as quickly and safely as possible, the damage was done. It was inevitable that there would be issues running on into the following days (aircraft and aircrew being in the wrong place, aircraft missing their scheduled maintenance because they were not at their maintenance location, etc.)
I was going to say something very similar.
I was worried about my Father until I read this. He does not have Word, and although I cannot be complacent about this (other vectors are still possible), the fact that the major one appears to be Word actually makes me breathe a little more easily. Must check his AV status though.
If it had been using the CORBA vulnerability that was publicised a few weeks back, I might have been more concerned.
The difficulties lie in accessing the data structures, and also in efficient memory utilisation.
If you have a single stream of code execution, the processor itself causes an implied serialisation of access to data structures. Once you get more than one processor running, you then have to worry about making sure that two or more threads running simultaneously do not try to write to the same data structure.
You end up having to deal with spinlocks and other mechanisms that are completely unnecessary with single processor code execution. It's been this way ever since multiprocessor machines first became available, and I worked on my first multiprocessor machine back in 1987. This challenge gets exponentially worse as the number of cores goes up. There are ways of managing this by separating the data into per-thread memory pools, but again it's something you just don't have to think about with single core machines.
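To make the contention point concrete, here's a minimal sketch (assuming POSIX threads on Linux; the file and function names are mine). Two threads bump one shared counter; take the mutex calls out and the total comes up short, because the increments interleave.

/* race.c - two threads updating one shared counter.
 * The mutex gives the serialisation that a single processor gave for free.
 * Build: cc race.c -o race -lpthread
 */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);    /* serialise access to the shared data */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expect 2000000)\n", counter);
    return 0;
}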
The classic way of doing this has been to put an implied separation into the work that the system is doing. Things like not having multi-threaded processes so that there is no data contention at a process level, or, like the example you quote, having a state machine serialising access to common data structures. But when you start talking about true parallelisation, with multiple threads working on the same data set, these approaches don't work. HPC code writers have struggled with this problem for many years.
You also have the problem that modern multiprocessor machines are normally NUMA, which means that in order to get the best out of the machine, you have to have some idea of how to align memory to the CPUs executing the threads using the bulk of the data.
Both of these problems get much worse if you don't have any idea of the shape of the machine at the time you are writing the code.
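To show what that memory alignment looks like in practice, here's a hedged sketch using libnuma (assuming Linux with libnuma installed; build with -lnuma, and the file name is mine). The idea is simply that the buffer is allocated on the same node the thread is told to run on, so the bulk of the data stays local.

/* numa_local.c - keep a worker and its data on the same NUMA node.
 * Build: cc numa_local.c -o numa_local -lnuma
 */
#include <numa.h>
#include <stdio.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "no NUMA support on this machine\n");
        return 1;
    }
    printf("%d NUMA node(s)\n", numa_max_node() + 1);

    size_t sz = 64u << 20;                 /* 64 MiB working set */
    void *buf = numa_alloc_onnode(sz, 0);  /* memory placed on node 0 */
    if (buf == NULL) {
        perror("numa_alloc_onnode");
        return 1;
    }
    numa_run_on_node(0);                   /* run this thread on node 0 too */
    /* ... do the memory-heavy work on buf here ... */
    numa_free(buf, sz);
    return 0;
}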
What I read this approach as doing is abstracting the machine topology away from the code, and putting the complex parallelisation into the abstraction layer. If done correctly, this would allow the code writers to write for a single virtual machine shape without having to worry about the underlying hardware, much in the same way that a JVM allows writers to write code that appears processor neutral.
I often wonder whether it would be possible to resurrect Unixware. This is the closest thing to a mainstream UNIX, and I would love to see a real genetic UNIX available again. But I think Linux has filled the gap where Unixware could exist.
I'm not sure about the Tarantella part though.
I thought (and Wikipedia appears to confirm this) that Tarantella was the remains of the original Santa Cruz Operation after they sold the UNIX server and services division to Caldera, and it was Caldera which renamed itself The SCO Group.
Tarantella ended up being bought by Sun Microsystems, and is now a division of Oracle.
Darl McBride came into the picture, because he was the CEO of Caldera at the time Caldera bought the Santa Cruz UNIX assets. Before this, Caldera had been one of the early companies specialising in Linux distribution, which is why it was so ironic that they later started threatening to sue other Linux companies. Darl became the CEO of SCO Group when Caldera renamed itself.
Another piece of the picture is that I am sure that HP were originally involved with the original SCO in the transfer of assets from UNIX System Laboratories, when USL was wound up (I dodged a bullet there; I was offered a job at USL as a Support/Consultant/Trainer in the UK in the early '90s). This is from memory, although I really ought to see whether there is a UniGram archive somewhere.
I want proper time stamps so that I can reply specifically to one comment in a comment trail, and have everybody know exactly what I was referring to. It's not enough to use the poster's name or the title, because we get into conversations with each other on the threads, keeping the title the same.
Relative timestamps are no bloody use at all!
You would have to make it £100 per person in the household, including the children, if you wanted to make a flat-rate minimum guarantee.
And that would have its own problems, as people realised that the more kids they had, the more money they would receive.
If you were to take the benefits system back to one where need was not taken into account, then you would have to be prepared to return to the days of the flophouse, workhouses and foundling orphanages, people living on the streets and escalating crime and prostitution as people did whatever was necessary to survive.
The whole point of benefits where they are needed is to provide a way of living (note I do not say that it should be particularly comfortable) for whole family units, not just individuals. And benefits where needed requires systems to assess the need.
It was a real eye opener when the historical TV programme "Turn Back Time" reminded me that in the first half of the 20th Century in England, people were forced to sell any possessions of value before they would be given any state support if they could not support themselves.
I see your point, but this is just how publishing rights have always worked, be they for films, books, magazines or music. It's been that way ever since works were reproduced and distributed locally. A publisher or distributor buys the rights from the copyright holder. Just look at how books and CDs are published by different publishers on each side of the Atlantic. The only difference is that with DVDs and BluRays there is a technical way to enforce it. It may look like a cartel, but unfortunately it's enshrined in well established law.
Until we get a completely global market with no trade barriers, common taxation and the same price, adjusted by whatever exchange rate is current, there will always have to be differentiation of the market in different regions. There may also be classification issues as well for some controversial material.
If you do not give regional rights to distribute these things, then it means that you could only use a distribution company that was global, otherwise some regions would have no distribution at all. Even in that case, you would still find completely different prices depending on how much it cost to import the work from the producing country.
To change it would require huge amendments to copyright legislation and world trade agreements in general. You might wish it changed, but that does not mean that it is going to happen.
The reason is not just price differentiation. It's also that different countries/regions have distribution deals with different companies. So the region lock supports a company which has bought distribution rights for a film in their region to protect their investment from imports from outside the region by a different company that has no rights there.
If a film has different release dates in different regions/countries, it also protects the rights of the distributor in the country that releases the film later or last.
I admit it's all still pretty arbitrary and possibly petty from a consumer's position, but not from the distribution company's point of view.
What amuses me is Disney FastPlay on DVDs.
They spend 15 seconds explaining that you will be advertised to, and then play about 4-5 minutes of adverts before the movie automatically starts. This is supposed to be playing faster?
The alternative is jumping to the top menu and selecting "Play Feature". OK, it's more button presses, but a whole lot faster than the default.
That was the original name of what became BBC3. It used to show time-shifted repeats of the best of the other BBC channels, except when Glastonbury was on, when it was pretty much full-time coverage.
So, having BBC1+1 is almost like going back to its roots.
And remember, significant numbers of people who were early smart TV and BluRay player adopters have recently been deprived of iPlayer when the BEEB decided to re-work the UI to make it incompatible with older devices.
Yes, you need to attend paranoia classes.
The type that teaches you to be more paranoid.
Yes, you have to be careful about putting all your eggs into one basket.
I was recently very annoyed to see iPlayer disappear from 2 Sony 'Smart' Bluray players that I have. Apparently, the BBC changed the way iPlayer worked (they removed what was termed the "Big Screen" format) in a way that was incompatible with some devices made before 2012, and Sony are not intending to supply an upgrade to these devices. Now 2012 is no more than three years ago, however you look at it, so that's not a very long life for a consumer device.
As it turns out, I bought these Bluray players mainly for their iPlayer function (it was before NowTV or Roku devices were around at a low price), and I've never played a Bluray disk (although they are used to play DVDs), so I am none too pleased with both the BBC and Sony.
But at least I can replace these players relatively inexpensively, especially if I get a £10 Now TV box. If I had lost the function from the telly itself, I might have been even more annoyed.
It's the earliest example of ridiculous warnings. It actually says "May contain fish"
There's a fix to the batteries running down in the remote. You put a mains powered 'controller' with a simple on/off toggle setting to override the remote in a fixed position in the room, somewhere like on the wall at shoulder height just inside the door.
Hey presto, problem fixed.
.... I feel I'm missing something here.
That was never the joke. The North East used to have many coal mines, and used to export the coal to other parts of the country and abroad out of Newcastle. So the ironic joke was that there was no point in shipping coal to Newcastle because they had enough of their own.
Now the North East has no coal mines, and also does not export much of anything at all out of Newcastle.
I don't follow, unless you are alluding to there being a much simpler vector for the breach, like an insider or a social engineering attack.
I was actually not making a judgement about this particular issue, but following up on the comment by Wzrd1 about intruders getting in. I think that we are actually saying the same thing about limiting the damage that can be done while the IDS and intrusion incident protocols are triggered.
That they will get in is a wise statement to make.
But it does not have to be totally true. A suitably designed, multi-layer protection model implemented using multiple vendors' kit will probably defeat almost all attacks, especially if the design is kept secret. The trick is to be utterly ruthless about what is allowed between each of your security zones.
By using multiple vendors' kit, each boundary between the security zones presents a new problem to be 'cracked'. If things are designed properly, by the time the attacker gets to the third or fourth boundary, your intrusion detectors should have been tripped so that you can take action to protect the service being attacked, and other systems that lie further into the network.
You layer the servers themselves to form parts of the security infrastructure, so in the case of web-based services, your edge web servers only keep session and transient data, intermediate servers keep application logic and only enough data for the transactions in flight, and you keep the core databases separate still. In all cases, the servers have an external side and an internal side, and the networks on either side are never bridged by network infrastructure (obviously you have to have something to allow the servers to be administered, but the same rules apply to the management infrastructure).
In order to get access to the places where data is really present for bulk-download, the only practical way in is to have knowledge of everything in advance.
I'm not saying that even this design is intrusion free, but the idea is to make it so that peripheral intrusion does not expose data wholesale, so as to limit the damage. It also does not protect from DoS-type attacks, or protect you from holes in the infrastructure you provide for your employees' internet access, but that's another story.
But the problem with a model like this is that it gets expensive. And too often, the risk vs. cost balance is set wrong because the managers are dominated by accountants. Too many organisations assume that a single or dual layer of security devices is sufficient to protect their internal networks, and once on a system on an internal network, the world is the cracker's oyster.
I know one bank that used a design like this, with many zone boundaries, where the architect declared at the end of the first project that it would have been cheaper to give all the customers of the service access to a personal banker for a year than to build the infrastructure! But they did use the infrastructure again for other services, so the cost of later projects was reduced.
Although some of the shortcomings of my Pro-Ject Debut 2 were beginning to take the edge off my enjoyment. So I found that Henley Designs offers a noise-reduction kit that is supposed to eliminate the rumble that was just audible enough to annoy, and hey presto, so little rumble that I had to check that I'd actually put the needle on the silent track!
OK, I said to myself. Time to replace the stock OM-5e cartridge that was 'just about good enough' with my hoarded Ortofon VMS20e MKII and set it up. Oh, and dig out the Osawa OM-10 mat and the HiFi News test disk. I've been meaning to do this for a while, but the rumble and time pressures just prevented me from carrying it through.
Well, I always liked the sound of the Pro-Ject, but now it's sublime. So much so that the Wife does not see me many evenings as I revisit disks that I've not played for years.
My biggest problem is that the glue on the sleeves of my LPs is degrading. Every time I get a disk down, the sleeve comes apart. Also, the paper inner sleeves are starting to shed wood fibres, so a deep clean is needed. Somehow, it appears that my collection has got slightly damp, but I can't work out how. It was in storage for some months during a house move, which is the most likely time.
I am not an extreme audiophile. My setup has always been only one step above budget, but each piece was bought as a best-buy in its class. Besides the Pro-Ject, it's a NAD 7020 receiver, JVC KD720 tape deck and Keesonic Kub speakers, but the combination is really quite good. There's also a Technics CD player as well, but I don't know the model off the top of my head.
Newcastle University used the heat from their water-cooled IBM 360/67 and later the 370/168 to help heat Claremont Tower back in the 1970s.
One of my kids uses his gaming rig to keep his bedroom warm without having the radiator turned on.
Both different in scale, but similar in concept.
Devices with more capacity are available. I've got one that does 2A from one socket and 1A from the other. Both will charge my phone.
But I have a problem with the stability of the voltage. Just charging the phone is great, but if I plug the 3.5" jack into the radio to play music from the phone at the same time as I'm charging, electrical noise from the car's electrical system gets through to the phone and renders any quiet audio un-listenable.
I'm just wondering whether I should fork out for a branded adapter, although the one I'm using was not a pound shop special. Anybody any idea whether Belkin et al. actually make their adapters using better components, or whether they just slap their name on the same old tat and charge a higher price?
Flash memory degrades over time due to the migration of electrons as a result of entropy. At the 2013 Flash Memory Summit, it was suggested by a Facebook representative that the "JEDEC JESD218A endurance specification states that if flash power off temperature is at 25 degrees C then retention is 101 weeks". Flash memory retains the data best if the controller is powered up once in a while to scan and correct any bit errors that creep in.
I've always been dubious of flash memory retaining the data for any extended time, and I would be incredibly sceptical about any claim that says that current flash memory technologies could be used to reliably keep data for decades, even if "Flash drive controllers, currently mostly optimised for performance, can be optimised for endurance instead".
You do know that the original song "Neunundneunzig Luftballons" is an anti-war protest song (and says nothing about the balloons being red, which originally confused me when the German video was shown with the English song)?
Heute zieh ich meine Runden,
Seh' die Welt in Trümmern liegen,
Hab' 'nen Luftballon gefunden,
Denk' an Dich und lass' ihn fliegen...
- literal translation (but not mine), definitely not the English version
Today I'm doing my rounds,
Seeing the world lying in ruins,
Found a balloon,
Think of you and let it fly....
I've noticed this. My old EEEPC 701, which is not used much now, has needed to be reinstalled each time I've left it a few months without being powered on.
Split your WiFi into trusted and untrusted domains.
Strictly control what can connect to the trusted domain by key or strict access control.
Let the untrusted one be a free for all, with a disclaimer that using it is at the user's own risk.
If there is a requirement for the untrusted devices to connect to trusted services, treat all of the connections as if they were from the Internet proper, and put the correct firewall and barrier controls in place to protect your core services.
Use additional DMZs if that allows you to contain access.
There is absolutely no need to allow BYO devices to connect to your core networks for social media access. If you want them to use their devices for work, you may need to think a bit harder, but for just social media access, it's not that difficult.
... I have often said that if someone is irreplaceable, you should fire them!
Too often people become irreplaceable by hoarding and not sharing knowledge, and such people are never good for an organisation.
By extension, everybody should be replaceable.
If you are not looking at developing the films yourself, you could use C-41 process black and white film. This can be processed by any film processor as it uses the same equipment as colour film.
I believe that both Ilford and Fuji still produce this type of film, and you may still be able to find some Kodak film within its use-by date.
I don't count myself as a photography enthusiast, but I have taken pictures over the years that have generated a wow reaction from people.
I taught myself film photography from books and experience while at university, using a tank of a second-hand Praktica LTL3, a completely manual SLR camera with an f2.8 Carl Zeiss Tessar lens (an optically good, if rather restrictive, lens) and stop-down metering.
But my photos were always the ones people wanted to see at the breakfast table when they came back from the developers.
What this hair-shirt experience taught me was that preparation is important: pre-focusing for action shots, setting the aperture and exposure in advance and, above all, choosing the correct shooting location are essential, and all of these are skills that can and should be learned. Another thing was to leave the camera cocked at a medium aperture and mid-range focus (for reasonable depth of field) so as to make an attempt at those 'just happening' shots, and rely on the developing process to correct the exposure. And if you have time and spare film, bracket the exposure for those important shots you don't want to miss.
I stopped spending significant time taking pictures, and am now really just a casual photographer.
When I got my first digital bridge camera, I was appalled by just how difficult it was to actually control the process. Everything was automatic, and the overrides were so difficult to work using the few buttons on the camera that it was a joke. I now possess a slightly more serious Fuji bridge camera with a mid-zoom lens. But I chose this one because I could control the focus and zoom by hand (which does wonders for preserving the battery life), and while I don't fully understand how the synthetic aperture works, I can use it. But what I first learned using a feature-free camera is still useful, even if most of the time I now shoot on full automatic.
I pity people learning photography now, because they just don't get the opportunity to learn the necessary skills properly. One of my kids studied photography a few years back as part of her foundation degree, and I found it highly amusing that they were told to go and buy a cheap second hand film camera with full manual over-ride for use on the course, so at least the colleges still understand.
What on earth does Simon have against SSA disks? I found them easy to deploy, quick for their time, quite dense (it was the first disk subsystem I knew of that used both the front and back of the drawer) and easy to maintain.
OK, it tied you in to IBM and their disks a bit, but I did not find them too bad at the time, and there was never a quibble replacing them while under maintenance.
I don't claim to be an expert in Intel x86 architecture, but I believe that some of the more specific features may have led to additional instructions being added to the ISA. That is certainly the case in other processor families I have used.
In order for code that uses these instructions to run on processors that do not implement the instructions, it is necessary to be able to trap the 'illegal instruction' interrupt, and do something appropriate.
If you did not trap the illegal instruction, the OS would at best kill the process, or at worst, crash the whole system.
In the case of the MicroVAX and early PowerPC processors, you would call code that emulated (slowly) the missing instruction, which had to be part of either the OS, or the runtime support for the application. I've not heard of that happening in the Intel/Windows world, although I'm not discounting that it may be there.
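Here's a minimal sketch of that trap-and-recover idea on Linux/x86 (the file and handler names are mine; a real emulator would decode the faulting opcode and fix up the saved context rather than bailing out). __builtin_trap() plants an undefined opcode, the kernel raises SIGILL, and the handler gets control instead of the process being killed.

/* sigill.c - catch the 'illegal instruction' trap instead of dying.
 * Build: cc sigill.c -o sigill
 */
#include <signal.h>
#include <setjmp.h>
#include <stdio.h>
#include <string.h>

static sigjmp_buf recover;

static void on_sigill(int sig)
{
    (void)sig;
    /* An OS or runtime emulator would emulate the missing instruction
     * here and resume; we just escape back to a safe point instead. */
    siglongjmp(recover, 1);
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_sigill;
    sigaction(SIGILL, &sa, NULL);

    if (sigsetjmp(recover, 1) == 0) {
        __builtin_trap();              /* undefined opcode -> SIGILL */
        puts("not reached");
    } else {
        puts("illegal instruction trapped, carrying on");
    }
    return 0;
}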
In the s370 world, instead of emulation code, it was possible to trap such things in alterable microcode, this being the method that IBM used to 'add' additional instructions to the s370 ISA for specific purposes to allow application speed-ups.
You make a very good point, but you ignore that compiling for a particular processor, using all of the features of that processor, breaks the "compile once run anywhere" ubiquity of the Intel x86 and compatible processors.
If this class action lawsuit is providing relief for home users, these are people who will buy a system and install code that is compiled to a common subset of instructions for the processors it is expected to run on. They are certainly not going to re-compile the applications they buy, let alone the operating system and utilities (you have to admit that dominant players providing x86 operating systems do not make it easy for a user to recompile the code even if they wanted to).
Imagine if when buying a program, you had to check not only which versions of Windows it would run on, but which processor (I know, some games did, but they are a special case).
I also know that it is perfectly possible for an application or OS provider to provide smart installers that identify the processor at install time, and install the correctly compiled version for the processor. Or even put conditional code in that detects at run time which libraries to bind, or which path through the code to select.
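A sketch of the run-time detection variant, for the curious (GCC/Clang builtins on x86 assumed; the transform names are purely illustrative). The same binary carries both code paths and picks one when it runs:

/* dispatch.c - choose a code path by probing the processor at run time.
 * Build: cc dispatch.c -o dispatch
 */
#include <stdio.h>

static void transform_sse2(void)    { puts("using the SSE2 path"); }
static void transform_generic(void) { puts("using the generic path"); }

int main(void)
{
    __builtin_cpu_init();              /* populate the CPU feature flags */
    if (__builtin_cpu_supports("sse2"))
        transform_sse2();
    else
        transform_generic();
    return 0;
}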
Each of those last alternatives leads to significant bloat in either the install media or, even worse, the disk and memory footprint of the installed code. And that is not to mention the support nightmare of having several different code paths to do the same thing on different processors.
No, the shrink-wrap application providers will write their code for a common subset of features, and that is what the Pentium 4 was weak at. The same binaries often ran slower on Pentium 4 than on Pentium III processors at the same clock speed (and when launched, the Pentium 4s did not run at the high clock speeds they later achieved). And later processors such as the Pentium M and the Core architecture processors, which used more of the Pentium III architecture with the 'good' bits of the Pentium 4 grafted on, show that Intel eventually got the message that the Pentium 4 was a dead end. I'm surprised they contested this, although I guess that this case is all about benchmark deception rather than the ultimate speed.
I sat through the whole thing, thinking "Something has got to happen soon".
Can't do a hand, how about a thumb.
The follow on project to LOHAN has to be an amateur resupply rocket to the ISS.
I'm sure Lester and the other boffins will be up for it!
And RT has not developed a serious anti-US agenda since the situation in Crimea and the Ukraine started, has it!
When Russia Today started, I was surprised by how apparently neutral it was. I tuned in a few days ago and was (actually not) surprised at how that has changed in the last few months, with them predicting the demise of the dollar as a world currency (suggesting Bitcoin as an alternative, of all things), and the rise of a fascist police state in the US. It almost seemed that they were listening to anybody spouting a conspiratorial line. Almost like "Controversial TV" used to be, although that did carry drivel by David Icke as well.
I wonder whether Mr Putin has been applying pressure on RT. It must be nice to have a personal mouthpiece broadcasting to the world.
Remind me. How many Windows systems are there on the Top 500 Supercomputer list?
I assume you are either joking or a troll. I cannot believe you are really serious.
I don't think Cray supply anything other than Linux on their hardware.
Most local radio stations do not use the Met Office forecast. I believe that they mostly use the "World Weather Information Service" via Sky News, which is a data aggregator, not a weather bureau in its own right.
Microsoft 'bought' Insignia Solutions (or at least took out a pretty much exclusive licence) for their SoftPC technology that allowed 'foreign' binaries to run on a particular architecture, a feature called Windows-on-Windows (WOW).
This meant that you could have had shrink-wrap Windows applications that should run on all Windows platforms. I doubt that the technology was maintained when Windows became x86 only.
There were systems you could have bought that ran Windows NT on Alpha.
But it is clear that the majority of support for them came direct from Digital, not MS.
I did see an IBM Power system (I think it was a prototype model 40P) running Windows NT 3.51.
This is not about sharing data for patient care. That should already be being done under a different initiative. Care.data is about sharing data with non-clinicians who perform fundamental, mainly statistical research to correlate and synthesize new conclusions from data that is already held. That should be a good thing.
At least in theory.
The problem here is that the list of organisations allowed to apply for access to the data goes far beyond the NHS, and indeed beyond pure medical research: I believe that insurance companies (supposedly for actuarial reasons) and drug companies (probably to assess whether a condition was worth developing a drug for) were the sort of commercial organisations that were applying for access.
Besides thumbs up and down counts, this type of comment could do with a groan count!
...I run an additional hardware firewall separate from my ADSL router.
It's long been an axiom of any 'proper' security that you have multiple layers, each provided by a different vendor.
Even if each of them may have their own vulnerability, it seriously deters casual hackers if once they've breached one line of defence, there's a new and different one to knock down.
Some may see it as a challenge, but most will just give up.
Unfortunately, laptops in particular vary quite a lot in the chipsets they include. There is a lot of tuning required to get Linux stable when suspending and resuming.
There is a whole subsystem called pm-utils (ironically modelled on sysv init) which allows you to tweak the suspend and resume system for the particular model of laptop. I tend to run IBM/Lenovo Thinkpads, for which there are a significant number of profiles which work quite well.
Where I've had problems is with the models with Radeon Mobility graphics adapters when KMS is enabled, and I've also had a problem with the sample rate of pulseaudio not getting restored properly.
But with KMS turned off (Ubuntu releases between 8.04 and 12.04), if you can ignore the audio issues, suspend works quite well. 14.04 appears to have fixed the sound sampling issue.
Hibernate is more problematic, as on Thinkpads it is necessary to have a FAT primary partition on the hard disk to contain the hibernate file. Before I upgraded my Windows partition to Win2K, it used to work fine, but all those years ago, when I upgraded to NTFS, I found that the hibernate code in the Phoenix BIOS could not handle the newly formatted NTFS partition. And since the 'old' boot record format cannot have more than 4 primary partitions, all of mine are taken (WinXP now, current Ubuntu, last/next Ubuntu and an extended partition containing the rest), so I don't have a spare primary partition just for a FAT filesystem.
And there is your problem.
You really know that it's not the right approach when you find your first system that either does not complete the boot process, or even worse, sometimes does but sometimes does not.
You then have this impenetrable black hole to try and debug, which may "appear to be well-documented", but does not tell you what is happening.
Once you've seen it, the "huge pile of little shell scripts" is easy in comparison. The naming convention is only funny if you don't understand how the shell performs globbing.
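For anyone who hasn't watched it happen: the init script expands a pattern like /etc/rc3.d/S* and gets the matches back sorted, which is exactly why the S01/S20 numbering works. A minimal sketch of the same expansion using glob(3) (the path is illustrative):

/* rcglob.c - the expansion an init script's S* loop relies on.
 * glob(3) returns matches in sorted order unless GLOB_NOSORT is given.
 * Build: cc rcglob.c -o rcglob
 */
#include <glob.h>
#include <stdio.h>

int main(void)
{
    glob_t g;
    if (glob("/etc/rc3.d/S*", 0, NULL, &g) == 0) {
        for (size_t i = 0; i < g.gl_pathc; i++)
            printf("would start: %s\n", g.gl_pathv[i]);  /* lexical order */
        globfree(&g);
    }
    return 0;
}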
Bad Wolf was introduced in a very subtle way.
It was not rammed down our throats, as in "here's the arc you're looking for". It was more "hang on a second, didn't we see something like that a few weeks back?". And it sort of made sense, with Rose, while she controlled the power of the Tardis, touching all of her timeline with the Doctor to leave some clues as to what had to happen.
I wonder why she didn't see any evidence of Clara though. Oh, of course, no multi-series arc (Babylon 5, why could you not have had more influence on other series?).
Yes. Probably a Scientific but could have been a Programmable. Need to check the stills. And it still worked! The display was clearly visible at one point.
Hope they didn't ruin it.
Hmm. The BARB figures are interesting, and it horrifies me to see just how skewed TV viewing in the UK actually is towards a few high-profile programmes like The Great British Bake Off, The X Factor and Downton Abbey.
But it does beg the question of why something like 40% of households (based on 10 million Sky subscribers and 25 million households in the UK, although there are very broad statistical flaws in that) decide to spend money with Sky. And that does not include Virgin Media customers.
There must be something pretty compelling in the 2% of viewing time for Pay channels to justify this expense. Obviously, some of that is going to be sport, and maybe the relatively easy to access catch-up and on-demand services, together with the bundled boxes could be helping maintain their customer base. Of course, even Sky customers will watch free-to-air services some of the time. Like phones, possibly Sky customers don't like the up-front cost of buying the box.
I have both Freeview hard disk recorders and streaming services available to me on TVs, as well as Sky, have been through two generations of USB Freeview stick, and have played around with other online TV services, and I still find that the go-to service in our household is Sky. Maybe we're trying to justify spending the money, but as I said, although it is quite expensive, I still regard it as reasonable value for money just for the content I can't (legally) get anywhere else.
Interestingly enough, whenever my wife and I have 'spirited conversations' about what we spend money on, she always brings up the Sky subscription as an unnecessary expense (it is significantly less than she spends on cigarettes in a month), and I have to remind her that she is the one to be found most frequently watching the pay channels! In fact, I would almost not miss it, because I get so little time to watch the slightly less mainstream pay TV channels that I find interesting (documentaries, arts, Syfy, but also the movie channels).
How are you defining "free content"?
If it's content that is available on other free-to-view services (Freeview or Freesat), then I would dispute your figure of 90%. I have well over 200 TV channels available on Sky, but only about 30 available on Freeview and approximately 160 on Freesat. All have at least some +1 channels, so not all of those channels contain unique content.
If you are saying that it is available through the Sky infrastructure without having a Sky subscription, then I may be in slightly closer agreement with you, but try removing your Sky subscription card and seeing how many channels you can no longer get.
For my ~£60 a month Sky HD package, in addition to the Freeview channels, I get Sky 1, Sky Atlantic and Sky Living, all of which contain content not available anywhere else in the UK, and I also get SyFy, Sky Arts, a host of documentary channels, access to 'golden' channels like Watch, a moderate selection of movie channels (although not as good as they were) and also a whole host of on-demand content which I would not pay any extra for. On top of that, they gave me the box(es) for free (they replaced my original SkyHD box without cost when they rolled out the on-demand services).
I don't agree with the way that they spread the desirable content across as many packages as they can to maximise the number of packages you need to buy, and I certainly don't agree with the gouging of their customers with regard to sports channels, but I don't think it is such bad value.
If they still existed (and this is mostly the reason why they don't), I certainly would no longer rent any DVDs from places like Blockbuster, and I've noticed that the number of DVDs I buy has dropped significantly since Sky installed their on-demand service. So in recent years, the amount of money I've spent on content has actually declined as Sky have brought on their services. This seems good to me!
I am reluctant to become a triple-play customer, because I don't actually like Sky's business model much, but I don't really object to getting TV from them.
My recollection is that xdm actually could switch UID when it ran on a system. I believe that it was a configurable option, and you could specify an X server restart (partly to change the UID, but also to set the server to a known state with no client programs left over from the last user) during the login process on a device that allowed it. Obviously not on an X terminal, though.
It's later graphical login processes like gdm and lightdm that changed this.
Unfortunately I no longer have anything old enough running to confirm this.
Whilst shellshock is/was a really worrying problem, I don't think that any serious web site will actually run any CGI-bin bash scripts.
Yes, I know that the problem will persist across other binaries as long as they preserve the environment variables, whenever a bash is started as a child, and that the system() call will almost certainly start a shell, so there is still danger there, but I would be startled if Google, Amazon et al. were ever vulnerable. The patching they did was mainly to be absolutely sure.
SOHO or SMB web sites may be vulnerable, of course, so I am not downgrading the risk, but I think that your implied assertion that all Linux web servers will by default be vulnerable is overstating the problem.