Re: Ah, another patent encumbered format @Frank Bough
That's probably true now, although as standards progress, it means that you keep having to update your adapter (or phone or tablet) every time a new codec becomes 'standard'.
But it is not just the software. In your comments, you're assuming that the people who write an alternative implementation have ripped off your code.
This is always about software patents, not the code itself. OK, you write a nice implementation, your code should be protected, I totally agree. But the algorithm used should be open, so that someone can provide an alternative implementation. If you can prove that they copied your code, then I would support you suing them through every court in the land. But if they wrote their own, through their own efforts.....
It's a serious dilemma, I admit, but as long as we have ambitions to produce a truly free operating system suitable for everybody, we will run into these problems.
I could (although with some reluctance) support a model where the OS is free, but the licensed codecs you need have a reasonable cost associated with them (the H.264 charges seem entirely reasonable). That is how it was in the early Windows days (remember when Windows could not play media out-of-the-box, and you had to buy software to play music and video), although too many people just copied MP3 and DVD packages on Windows.
If we go to this model, it should be made clear to users of all operating systems that this is the case, and maybe other OS vendors should be prevented from providing the software as part of their OS offerings. But that has not exactly been a totally successful strategy in the Browser rulings.
And as long as the alternative implementations abide by the rules on the use of LGPL toolchains, this should not fall foul of any open systems licensing, either.
I got it wrong. It's 100,000 units, not 10,000.
This is a quote from the MPEG-LA H.264 License terms summary.
For (a) (2) branded encoder and decoder products sold on an OEM basis for incorporation into personal computers as part of a personal computer operating system, a Legal Entity may pay for its customers as follows (beginning January 1, 2005): 0 - 100,000 units/year = no royalty (available to one Legal Entity in an affiliated group); US $0.20 per unit after first 100,000 units/year; above 5 million units/year, royalty = US $0.10 per unit. The maximum annual royalty (“cap”) for an Enterprise (commonly controlled Legal Entities) is $3.5 million per year in 2005-2006, $4.25 million per year in 2007-08 and $5 million per year in 2009-10, and $6.5 million per year in 2011-15
All rights to this text belong to MPEG-LA (just a disclaimer to avoid any copyright issues)
So, 20 cents for every shipped copy between 100,000 and 5,000,000, and 10 cents after that, up to the annual cap ($5 million in 2009-10). That's quite acceptable if you are incorporating it into a product costing $20, but not so good if you want to include it in a popular free Linux distribution. I wonder whether the fact that you are not 'selling' Linux is enough to get out of the "sold on an OEM basis" part of the clause?
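To make the tiers concrete, here is my own back-of-the-envelope sketch of the quoted terms (the function name is mine, and I've assumed the 2009-10 cap of $5 million; this is an illustration of the summary above, not MPEG-LA's actual billing logic):

```python
def h264_oem_royalty(units, cap=5_000_000):
    """Rough sketch of the quoted OEM tiers: first 100,000 units/year
    free, $0.20 each up to 5 million units, $0.10 each beyond that,
    subject to the annual per-enterprise cap (2009-10 figure assumed)."""
    TIER1, TIER2 = 100_000, 5_000_000
    royalty = 0.0
    if units > TIER1:
        royalty += 0.20 * (min(units, TIER2) - TIER1)
    if units > TIER2:
        royalty += 0.10 * (units - TIER2)
    return min(royalty, cap)

# e.g. a distribution shipping 20 million copies a year:
print(h264_oem_royalty(20_000_000))  # -> 2480000.0
```

So a popular free distribution could, on these numbers, be on the hook for millions of dollars a year with no revenue stream to pay it from.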
I'm perfectly happy with paid-for software running on Open Source platforms, but patenting the codecs such that you can't legally provide them as part of a free (as in free beer) OS puts huge leverage on projects that want to provide a free OS.
The problem is that if you stay within the law, and don't ship what may become the de-facto standard for video, then something like Linux will always be seen as not for general consumption.
Alternatively, if you ship the codecs as part of a distribution regardless, so that the end-user experience is good, then MPEG-LA can demand a payment from you. You have no revenue stream because you are providing the software for free, and cannot pay unless you are Mark Shuttleworth (who paid for an H.264 distribution licence for Ubuntu, and got heavily criticised for it).
The problem clauses in the H.264 licence are the volume clause, which says something like: the distributor has to pay a licence fee per copy deployed if they ship more than 10,000 copies; and the one that says you have to pay a fee if you use the encoder to produce commercial videos.
Bearing in mind how viral Linux distributions can be, how do you measure how many times one has been deployed? I download one install image, use it to install thousands of systems, and offer re-distribution from my web site. How is that measured? And who should pay?
And what qualifies as commercial? If one of my kids records the neighbour's cat doing something comical and uploads it to YouTube, and Google attaches adverts, is the video for commercial purposes? Should I pay for the encode? Should Google, even though they may not have encoded it?
Licensing like this has been a legal minefield for Open Software since the days of MPEG2 Layer 3 (aka MP3) and GIF. My point is that it would be so much better if the codecs (or even just the algorithms) were available under a permissive license.
Let's hope that MPEG-LA are more generous about the licenses, although I would be surprised if they were.
And to pre-empt people who say that H.264 was freely available, I suggest that you look at the commercial encoding and decoder volume distribution clauses in the license agreement.
I kept being phoned up by my Service Provider saying I could upgrade my phone.
At the time (pre-iPhone), my answer was another question "What can I upgrade to?"
When all they could offer was a Windows Mobile phone, I normally answered "And that is an upgrade?" Eventually, I went Android, although an odd quirk was that, because I hadn't taken an upgrade when I was entitled to one, the discount I was eventually offered was less than if I had upgraded promptly. Bizarre!
I was very happy with my Treo, and it is still my fall-back phone. And now, over 6 years later and still on its original battery, it still lasts longer on one charge than my Sony Xperia.
Was it not "The Eggman" in the Japanese version, rather than Dr Robotnik?
Sonic also spawned at least two cartoon series on TV. Not quite the 360 degree marketing that Nintendo had with Pokemon, but still quite high market penetration.
I can still remember the nasal "I'm waaaaiting" from the cartoon series, which was akin to Sonic crossing his arms and tapping a foot if you didn't move in the games.
This was a really strange blend of Sonic and pinball, and yet it worked quite well.
When I got my oldest son a Dreamcast (picked it up at 00:00 on launch day), together with a copy of Sonic Adventure, I remember watching in dizzy, sickened awe at the speed of the traversal of the 3-D ramps and loops with the background swooping and rolling around. I found it unbelievable that it was able to render the backgrounds as fast as it did.
Whilst Sonic was undoubtedly a triumph, Sega had a stable of other exceptional titles that over the years included Alex Kidd, Ecco the Dolphin, Panzer Dragoon (particularly Saga), NiGHTS into Dreams, and my personal favourite, Burning Rangers (the absolute heyday of the Saturn IMHO).
I'm sure I still have, in the 1976 "Electronics Tomorrow" special edition of the magazine Electronics Today International (the one that also had pre-release articles about Star Wars), a picture of a PET 2001 prototype that was curvy.
When the production PETs came out, I thought the steel case and chiclet keys were just plain ugly, although that did not stop a group of us on the college staff-student consultative committee from trying to get one bought for the college. Unfortunately (for us), the council voted for a mini-bus instead. In hindsight, that was probably the best choice, but it did not seem so at the time.
I could not believe how many attempts at passing the stats the psychology students were allowed before they were failed at the university I was at. IIRC, they had at least six chances, and I only had two at the maths supporting my computing/electronics course. And their stats were really simple; I had covered the same material at A-Level maths.
It's not that one is better than the other. I suspect that, from a purist's point of view, the most elegant, sophisticated and efficient code is written by old-school computer science graduates; you know, those who actually understood the reasons for doing things, not just the learn-by-rote of current teaching methods.
But I would also accept that the code that most resembles what is required for a particular problem may be produced by people who don't have a computer science degree.
The thing is that someone who has been taught computer science probably has a relatively poor understanding of problems that are not directly related to computers. So if you are programming a system for another discipline, someone with outside skills who has cross-trained to get relevant programming skills may not turn out the best code, but may have a better understanding of the requirements, especially if they have knowledge applicable to the problem. This is not a hard-and-fast rule, there will be exceptions, but computing is a terrifically introverted area of work.
A previous poster pointed out that the best technical writers are not computer scientists, and I agree. Writing good documentation is a totally different skill from writing good code. Someone with a basic technical understanding, access to the code writers to ask technical questions, and good writing skills will in almost all circumstances turn out better documentation than the code writers themselves. At least the spelling and grammar will be correct!
and if anybody is likely to have problems, it is me.
My closest English transmitter is Mendip (channels 49-58), but as that is about 40 miles away, I currently need a multi-tap amplifier to make sure that all the TVs in the house get an acceptable signal.
Within a few degrees of arc of the direct line to Mendip, and at a distance of no more than 600 metres, there are two cell basestations run by operators who won slots in the 4G auction. So there is a great chance that if these basestations start operating in the 800MHz band, my TV aerial and amplifier combination will pick up a great 4G signal, with little chance of using a directional antenna to alleviate the problem. And a filter would probably attenuate the TV signal enough that it was no longer viable.
I'm trying to get information from the operators (found using http://www.sitefinder.ofcom.org.uk/search), but so far they have not answered my queries. I have several TVs in the house, and want to find out the impact as soon as possible.
I just hope that I don't have to wait until too late to find out.
Even that's not really possible. They use an evaporative cooling system as much as they can (they really are into providing the perception that they use as little power as possible - which could be understood if only you knew who they are). The only time that this cannot be done is when the outside temperature is too high, and this is likely to be the time that heating anything is least required.
I got my information directly from the building power and cooling engineer/manager, and if he can't work out a way of getting something useful back, I'm certain it can't be easy.
The heat that comes out of these systems is regarded as low-grade. What this means is that the temperature differential is not high enough to make it particularly practical to use.
Roadrunner is air-cooled (see the picture: spot the hot-cold aisles and no water-cooled rear doors), so the heat will be picked up by the air handlers.
I've worked with two generations of IBM water-cooled supers, and the output temperature of the water is around 25 degrees centigrade (although slightly hotter for the newer system). This is colder than the ambient temperature of the halls (it's a power-conscious organisation that is experimenting with running the machine rooms hotter than you would normally expect, to save power). This makes it less than lukewarm, and certainly not hot enough even to heat the water to wash your hands. The cooling works by chilling the water before sending it to the super (input temperature around 13 degrees centigrade), so the cooling was actually at the wrong end.
Of course, you could use heat pumps to concentrate the heat, like ground-source water heaters, but there is a law of diminishing returns operating. If it takes more electrical power to concentrate the heat than would be necessary to directly generate the same amount of heat, there is no gain.
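That break-even condition can be put in a couple of lines. This is my own illustrative sketch (the temperatures are made-up examples, not figures from any real machine room): a heat pump only beats a plain resistive heater when its achieved coefficient of performance exceeds 1, and thermodynamics caps what any pump can achieve.

```python
def carnot_cop_heating(t_hot_c, t_source_c):
    """Theoretical ceiling on the heating COP when pumping heat from
    t_source_c up to t_hot_c (degrees C); real systems achieve only
    a fraction of this."""
    t_hot = t_hot_c + 273.15
    t_source = t_source_c + 273.15
    return t_hot / (t_hot - t_source)

def beats_direct_heating(achieved_cop):
    # Q_heatpump = COP * W versus Q_resistive = W:
    # there is a gain only when the achieved COP exceeds 1
    return achieved_cop > 1.0

# e.g. lifting 25 C return water to 55 C for hot taps:
print(round(carnot_cop_heating(55, 25), 1))  # -> 10.9
```

The catch is that the achieved COP, after pumps, exchangers and part-load losses, can sink towards that break-even point, which is the diminishing-returns argument above.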
is not that appropriate as a title for a picture showing fibre cables! Those are fibre Infiniband cables, with electro-optical transceivers (like SFPs) at both ends. The colour is a dead give-away.
Undoubtedly there will be quite a lot of copper in the system, but not in the picture.
...availability vs. cost.
There have been highly available and even un-interruptible services around, but you have to pay for them, and they don't come cheap. Nor do the staff to set them up and run them.
Microsoft probably have an HA offering now, but I expect that even they will charge a premium.
Segregating the function, so that you can put your information distribution systems on simple, small, cheap, and redundant servers in front of your actual service machines can help with the appearance of a service being available (as well as increasing the security), but if you truly want high availability, it's going to cost.
He's obviously not seen birdshot fired from a shotgun in action. If you can hit a clay pigeon or a bird at 20 metres, which is within the range of most shotguns, you should be able to do enough damage to bring one of these 'copters down.
And if they tried to retrieve it, that would be trespass.
I would dispute that seti@home is an HPC workload. It is a distributed workload, partitioned into units that can be worked on in isolation from each other.
There is a huge difference between a distributed workload and a proper HPC workload, and people like weather agencies, atomic research institutions etc. would be only too happy to explain.
Proper HPC needs a huge interconnect, so that a single model can be broken down into multiple threads spanning many systems, all passing data between each other.
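The distinction above can be shown with a toy example (entirely my own illustration; the names and numbers are invented): independent work units can go anywhere in any order, whereas a coupled model needs its neighbours' values at every step, which is what drives the interconnect requirement.

```python
# Independent work units (the seti@home model): each unit can be
# computed in isolation, in any order, on any machine.
independent = [unit * unit for unit in range(8)]

# A coupled HPC-style step (a toy 1-D stencil, periodic boundaries):
# every cell needs its neighbours' values at every iteration, so
# splitting the cells across machines forces constant communication
# over the interconnect.
def stencil_step(cells):
    n = len(cells)
    return [(cells[i - 1] + cells[i] + cells[(i + 1) % n]) / 3
            for i in range(n)]

print(stencil_step([0, 3, 0, 0]))  # -> [1.0, 1.0, 1.0, 0.0]
```

Scale the second case up to billions of cells over thousands of nodes and the neighbour exchanges, not the arithmetic, become the limiting factor.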
BTW, IBM P7 795s are quite cool, but P7 775s are even cooler (literally, water cooled cooler!). I know, because I work with a couple of clusters worth.
It's not impossible. I can claim both in my experience (it's even on my CV)!
(Historically, UTS and AT&T R&D UNIX on Amdahl mainframes, AIX/370 on IBM mainframes, and two generations of IBM HPC where I currently work!)
Currently, I don't think there is any mainframe proper sold with UNIX, although Linux would not be a problem. I don't count HP Integrity Superdome or the IBM 795s as mainframes.
But having said that, I still get asked about PCs. My stock answer is "PCs, horrible little systems. I can't stand them!"
Each direction of my daily commute takes me 75-90 minutes to travel 42 miles.
It's not traffic that slows me down; it's the fact that there is not a single stretch of dual carriageway or better, and there are things like towns and villages to drive through with speed restrictions, tractors and other farm machinery, caravans, sheep, cyclists, and even the odd tourist who thinks that doing 25 on a national-speed-limit country road because it's pretty does not worry the people behind, who cannot pass because they cannot see far enough ahead to do so safely.
Just because you may be able to jump on a motorway and burn along at 80 does not mean that everybody can!
My lifestyle is my choice, I admit, and I put up with the drive because it's actually a nice place to live with many other benefits, but sometimes it does get too much.
Why? Because some of them may actually think of working in the field, and they cannot make a decision about whether they would be able to until they have relevant knowledge. It's truly shocking how little almost all kids know about how computers work when they leave school.
I'm not saying that there is no value to iPads, but that there are better ways to obtain the skills. In their day, BBC micros could do representative actions for almost the whole spectrum of contemporary computing skills (I know, I built and ran a lab of them in the early '80s at Newcastle Polytechnic that was used to teach computer appreciation), as well as learning to program. I used it to teach structured programming in BBC Basic and Pascal, assembler programming, word processing, spreadsheets, graphics (including basic design using a digitiser, WIMP and touch screens), robotics, basic networking (putting an oscilloscope on the Econet cable was a real eye opener for the students), and many more things than I can remember.
Tablets can do some of these, but I think that as a representative computing device, they are poor. It really does depend on whether they are the ONLY devices available in the schools.
Photo, music and video editing can be done, but would be better on a machine with more memory and disk, for anything except the smallest project.
For an art and design tool, something that had the accuracy of a Wacom digitiser is essential.
And about cooking a steak: you don't need to know how to farm, just as you don't need to know how to fabricate a CPU or memory chip, but you do need to know how to use the cooker, pans and utensils in order to perform the creative culinary part of cooking. Using an iPad is like knowing what seasoning to use.
The amount of 'learning' necessary to get an iPad working is minimal. The one good thing about what Apple have done is to make it so any fool can use one with little to no training. And even if they did not have one provided by the school, a large majority of kids will learn transferable skills from their own devices. If there is any benefit, it will be having a standardised device for distributing learning material, but there are probably other far more practical and better-value devices for this.
Owning a tablet myself, I can understand that a media consumption device may be useful, but I actually don't find it very useful for my work, because of the difficulty of getting information on and off it under the (very sensible) security policies of where I work. Schools would be no different, especially with a device as controlled as an iPad, without some sort of relaxation of the restrictions. So unless you categorise learning as another type of consuming media (maybe it is), I find that the overall value of providing iPads is poor compared to other uses for the money.
Over the years, I've seen technological aids used in teaching, using slide and film projectors, television, video-recorders, audio tape based language, micro-film and fiche based interactive courses and finally various generations of computer based training. But do they work better than a good teacher and appropriate books? I'm not sure, and I think back to the most memorable years of school when a sometimes boring subject was brought to life by a capable and enthusiastic teacher with nothing more than a blackboard and text books.
Give the kids an appropriate understanding of how they (computing devices) work, together with the correct amount of other real life skills (reading, writing, basic maths, contemporary history, nutrition), and that will be a much better use of time and resource.
I am still waiting for the delivery of the promise of natural language recognition combined with Artificial Intelligence (always 5 years away for the last 30 or so) that will make interacting with your information system like interacting with another person. When we get this, all this crap about learning how to use a computing device will become obsolete, and we can go back to learning useful knowledge rather than teaching the current in-vogue fad!
My youngest is doing A levels at what is described as a "Specialist Technology College".
I got a text last Thursday asking him to attend a special session in the Media centre.
When he came home, he had been told that because of a "computer failure", all of this year's assessed media work would have to be redone, because it had all been lost. His timetable for this week was suspended, and he was expected to spend the whole week just catching up.
Whatever "computer" they were talking about was not the only failure.
Icon speaks for itself.
Back in the Mid '90s (around the time of the Pentium 90), I looked after some older IBM PS/2 Model 80s, which had 25MHz 386DX processors (they really were cutting edge at one time). I also had access to OS/2 running on various systems.
IIRC, there was a dancing animals (birds and monkeys) video shipped with OS/2 Warp that I managed to run on the PS/2s (AIX PS/2 with xanim ported, again IIRC). It was pretty low resolution, and the extension was .avi, although I don't know what the codec was (do I still have an OS/2 Warp install CD to find out - I must check), and these systems did not have sound cards, but they were running video. If they could do compressed video, I'm sure they would have been able to do MP3 audio.
I say go back to troff, tbl, eqn, pic and ms or mm macros, edited using emacs.
No, seriously. I mean it. Take that straight-jacket off of me! I'm a retro-technologist, not mad!
Actually, the couple of times I've seen someone do a recursive delete of / on a UNIX box (rm being the UNIX/Linux case), it has not caused the system to reboot. What happens (at least on AIX) is that as soon as you hit /usr/lib and /lib, and wipe out some of the shared libraries, the system becomes largely unresponsive, but does not reboot. You end up not being able to log in or issue remote commands, and anything that spawns a new process that is not already running fails with exec errors. This largely happens unnoticed, unless you happen to have an open session. The system just seems to die, but still responds to icmp echo requests because of IP offload to the network adaptors.
Mind you, the next step of physically rebooting the system fails, with the system stopping before it's able to even start init. Again, on AIX, IIRC, it stops with something like 553 on the LCD display (which shows how long it has been since I saw this, as LCD codes on modern Power systems normally display 4 digits now), which normally means that it can't mount the /usr filesystem, but in this case means that it can't even run the mount command. I expect something similar but specific to the flavour of *NIX on other systems.
Time to reach for that system backup that you took. Or a pint to help you consider your options!
I started reading your comment, and had to check that I was not the author. I've taken exactly the same route, right up to accepting Unity on Precise (12.04).
My laptop is old. It's not that powerful, but it is mine and it works, and I would prefer not to have to replace it at the moment. It worked fine with Gnome 2 on Lucid, although Compiz worked better with Hardy (KMS implemented in the kernel in Lucid and other distros broke certain ATI drivers).
It runs Unity 2D, but the experience is awful, both because it is not the full-fat version, and because the performance is crap. Same KMS issue as Lucid preventing composite rendering, probably.
So, I've put Gnome 3 with Cinnamon on (just install the package, and select at logon). I would prefer for it to be offered as a choice during install, but I can stay with Ubuntu and remain current enough without having to change the way I work. So far, I've managed to keep it sufficiently like I want it.
I'm not sure that Unity will ever let me work the way I want to, as it handles windows in a way completely foreign to what I'm used to, but I guess that time will tell.
Bell Labs PDP11 UNIX V7 and earlier did not have any support for overlays. I worked with them on RSX-11M, so I understand what you are talking about.
As far as I am aware, there was some prototype overlay code in the later BSD PDP11 releases, but it would only work on machines with 22-bit addressing and separate Instruction and Data space (I&D) (11/70, 11/44 and later systems). Before this, the standard trick used for large software programs (and I saw this done for the BSD release of Ingres) was to split large programs into several executables, and use some proto-IPC interface to spread the function around. IIRC, Ingres from a BSD 2.3 tape used named pipes. Shared memory and message queues were all in the future, but I believe that there was a primitive semaphore implementation in UNIX V7. I'd have to look to find out.
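That split-into-executables trick still works on any POSIX system. Here is a minimal modern sketch of it (Python standing in for the original C, a thread standing in for the second executable, and the file name "ingres.fifo" is just an illustrative nod to the story above):

```python
import os
import tempfile
import threading

# Two halves of a 'split' program talking over a named pipe (FIFO),
# the same proto-IPC trick described above. POSIX only.
fifo = os.path.join(tempfile.mkdtemp(), "ingres.fifo")
os.mkfifo(fifo)

def backend():
    # the 'second executable': open-for-write blocks until a reader appears
    with open(fifo, "w") as f:
        f.write("42 rows\n")

t = threading.Thread(target=backend)
t.start()

# the 'front end': open-for-read blocks until the writer side appears
with open(fifo) as f:
    reply = f.read().strip()
t.join()
os.unlink(fifo)
print(reply)  # -> 42 rows
```

The blocking open() on both sides is what made FIFOs a workable rendezvous mechanism long before sockets and shared memory were common.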
There was some work done by Keele University in the UK to produce an overlay loader for UNIX V7 on PDP11s, which I managed to get working on my Systime 5000E (a strange beast, being a PDP11/34E [normally 18-bit addressing, no I&D], but with 22-bit addressing added by Systime). I used it with some success, but I never managed to get VI working on my small machine. It was all good fun, as was debugging the Calgary device buffer modifications to maximise the number of device drivers you could compile into the kernel on this 22-bit non-I&D machine. Out of the box, the mods assumed that if you had 22-bit addressing, you had to have separate I&D spaces, because no DEC PDP11 had one without the other.
Fun times, long gone.
I would have thought that the Berkeley code would be re-distributable. The Berkeley software license was pretty permissive from the word go.
I'd love to know about the UnixTSS myself. Not because I have any (I obeyed the rules and always left it behind when I changed jobs), but I would love to see some of it again, especially the STREAMS and RFS code.
I just wish I had taken copies of the Bell Labs V6 PDP11 distribution, and the BSD 2.1 and 2.3 tapes I worked with in the very early '80s. I know that V7 and later PDP-11 BSD tape images are available, but by that time, they were already getting difficult to work with on non-separate-I&D PDP11 systems.
And it does not work on Linux, and did not on Android devices last time I looked.
I used to be pretty good at Missile Command. Was certainly on the High-Score table most times anywhere I played, and often at the top.
One day I came to my favourite machine (the one with the smoothest track ball), and there was a stranger playing. I watched him clock the machine (twice, IIRC), have cities stacked up across the screen, and then get bored after about 45 minutes and walk away before he was wiped out (in fact, before he even started losing significant numbers of cities). You would not believe how erratic the intelligent mines became, and yet he could hit them. I think he must have maxed out the difficulty levels, and the machine started using more and more lurid colour combinations to put him off.
I never saw him again, and I lost all interest in playing, knowing that I could NEVER be that good. In fact, that was pretty much the end of me spending time in Arcades.
People are talking about Mechanical and Model M in the same comments. As a complete fan of the IBM Model M, I was actually disappointed to find when I tried to clean some Tizer or Irn Bru from one of mine (that child of mine will never be fully forgiven) that once you get through the deep hex head screws and plastic welded lugs, what sits under the buckling spring mechanism is still a membrane keyboard, just with the aforementioned spring and plastic rocker sitting on top.
So no microswitches (in fact, I'm not really sure any keyboard used microswitches), although keyboards from the late 1970s and 1980s had discrete push-to-make release-to-break key switches soldered directly onto PCBs. My Issue 3 BBC Micro ended up with more solder on the back of the keyboard PCB than metal track, because the repeated strain on the soldered pins would lift the PCB track from the board and break it.
I remember Newbury Data RS232 terminals from my time at University having the same problem. You would often find one with the 'return' key not working, which everybody avoided, but it could still be used with Ctrl-M instead!
How does the Windows market share get measured? Is there a chance that it counts the number of systems sold with Windows, rather than the number that actually run Windows?
I know that most large businesses can buy Intel systems without a Windows license, but does anybody have any idea how many actually do, rather than just junking any pre-installed Windows installation and reformatting the disk?
Blimey. On my monitor, it's difficult to see the difference between the bright red and bright pink on the market-share charts. Never mind, neither looks particularly important.
A cable virus! Load it into the cable and it back-hacks the iDevice.
I believe that there was some concern about FireWire some time back, and some speculation that Lightning may be vulnerable in the same way through DMA. Anybody remember whether those fears were proved groundless?
Mind you, as the software would have to be loaded from the iDevice in the first place, you would need to get it past Apple's App police.
Not sure whether you were commenting to me, but I have known users who worked entirely from inside Emacs.
Before the advent of WIMP, the multi-window, multi-buffer and electric modes for Emacs allowed users to run a shell (using Emacs as the command editor), read their mail and news groups (there were mail and news clients written in Lisp), compile and debug their code, and even run NetHack from inside an Emacs window on a serial terminal that had a termcap definition (you know, something like a VT100/220 compatible, I won't call it dumb because termcap defined a dumb terminal as one that could not do cursor addressing).
The extensibility of Emacs was legendary, something that has surely been forgotten over the years.
You young whipper-snappers just don't know how easy you have it grumble grumble....
Made me laugh out loud! Embarrassing when at work.
My goodness. At this rate with the keyboard shortcuts, we'll be able to pitch a comeback for Emacs!
This used to be a 'lost' Hitch-Hikers Guide to the Galaxy episode linking the first to the second radio series. It was only broadcast twice originally, and then disappeared from the airwaves, as it was neither in series 1 nor series 2. I recorded it.
It's since made it into the CD collections fortunately.
What I like is when Marvin is left to delay a Frogstar D. Can you guess with what weapons? Something pretty devastating, surely? No, nothing.
IIRC, his last comment as the Hitch-Hikers offices collapse around him after being destroyed by a neutron-ram is "What a depressingly stupid robot"
The C4 signal from Wenvoe is quite bad, and C4 appears at a different position (FV channel 8 between BBC3 and BBC4?), and not all FV boxes allow you to manually re-number channels.
I'd spotted the rescan myself after I'd posted. I must admit that rescanning is getting depressingly frequent, especially on my older FV boxes that don't do it automatically.
The splitter boxes are intended for multi-occupancy buildings (which I suppose my house is at the moment) and are quite expensive. If it were really that simple, you would not be able to buy 8 port LNBs.
And even if I did become eligible for AT800, and they agreed to fund an intelligent splitter and/or install the wiring, the disruption of laying cables over the whole of the house would be horrible.
I'm just hoping that I will not be affected. Only time will tell.
And will someone pay to replace my multi-outlet TV amplifier in the loft that feeds all of the rooms in the house?
We need this because in West Somerset (in England) the closest transmitter is Wenvoe (in Wales), and I prefer not to get programmes in a language I don't understand (I suppose that I could learn Welsh...). But that would still mean that I got S4C, not Channel 4, and also that I would get Welsh news, weather etc. I had enough of that when I was working in Swansea.
So, I point my aerials at Mendip, and the signal strength, even after switchover, is marginal without an amplifier.
I do get fed up when people assume that you've only one TV in the house, and suggest a single solution like "buy Freesat or Sky" that will only serve that one TV. If I were to provide separate satellite boxes on every TV in the house (my kids are all grown up but living at home [unfortunately], and have their own TVs in their bedrooms), it would cost a fortune, and I would need at least an 8 port LNB, plus lots of point-to-point wiring.
I need Freeview to work, and as all of our channels are at the top end (we're still getting the multiplex with BBC1 on channel 61 at the moment, so will have to retune again at some point I guess), it is very likely we will be affected. And there is no cable installation.
You've already given them that right. It's in the Windows EULA.
For Windows, MS calculate hashes of information about a number of different components in a system (processor, memory, network card, display adapter, disks and controllers, BIOS signature and many others), and actually store this on their systems as well as in hidden, protected files that not even an Admin user can change. When you change components, the checking process tries to work out how much of the system has changed, and either allows the change or deems it a different computer and asks for re-authentication. It's been like this since XP. It allows you to change processors, disks and display adapters with relative impunity.
Unfortunately, now that PCs have heavily integrated motherboards, most of the components Windows check are actually on the mobo. This means that changing that is almost certainly going to trip the 'it's a new computer' check, and has done for much of the last decade. The Microsoft Licensing Centre have been fairly good about this in the past if you've cared to explain, and issued the new authentication strings if requested, but I suspect that is likely to change.
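The general idea described above can be sketched in a few lines. To be clear, this is purely illustrative: the component list, the per-component hashing and the "how many changed" threshold here are my own inventions, not Microsoft's actual activation algorithm.

```python
import hashlib

# Hypothetical set of hardware components contributing to the fingerprint.
COMPONENTS = ["cpu", "ram", "nic", "gpu", "disk", "controller", "bios"]

def fingerprint(hardware: dict) -> dict:
    """Hash each component's identifier separately so changes can be counted."""
    return {c: hashlib.sha256(hardware.get(c, "").encode()).hexdigest()
            for c in COMPONENTS}

def needs_reactivation(stored: dict, current: dict, threshold: int = 3) -> bool:
    """Deem it a 'new computer' if too many component hashes differ."""
    changed = sum(1 for c in COMPONENTS if stored[c] != current[c])
    return changed >= threshold

old_hw = {"cpu": "i5-3570", "ram": "8GB", "nic": "rtl8168", "gpu": "gtx660",
          "disk": "wd-blue", "controller": "ich9", "bios": "A07"}

# Swapping just the graphics card stays under the threshold...
gpu_swap = dict(old_hw, gpu="gtx970")
print(needs_reactivation(fingerprint(old_hw), fingerprint(gpu_swap)))  # False

# ...but a motherboard change takes several integrated components with it.
mobo_swap = dict(old_hw, cpu="i7-4770", nic="intel-i217", bios="B02")
print(needs_reactivation(fingerprint(old_hw), fingerprint(mobo_swap)))  # True
```

This shows why an integrated motherboard swap trips the check: one physical part replaces several fingerprinted components at once.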
I suspect that Office will plug into that process, bearing in mind Windows already does in the Genuine Windows (dis-)Advantage tool, and non-365 Office only runs on Windows.
I'm nit-picking here, but if all you're required to use is an MS file format, that does not prohibit you from using OO or LO.
Of course, if they also want the file to be formatted the same, then you should really avoid MS file formats completely. (Did you notice? Changing the target printer often upsets the careful formatting even in the same version of Word!)
If you definitely want to make sure a document looks correct, you really need to use a proper page description language.
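To illustrate the point about page description languages (a toy sketch, hand-building a one-page PostScript file; real documents would come out of a proper toolchain): the position of everything on the page is given in absolute points, so the layout does not shift when the target printer changes.

```python
# Build a minimal one-page PostScript document as a plain string.
# 72 points = 1 inch, so the text below sits exactly 1" from the left
# and 10" from the bottom of the page on any conforming device.
ps = "\n".join([
    "%!PS-Adobe-3.0",
    "/Times-Roman findfont 12 scalefont setfont",
    "72 720 moveto",            # absolute position in points
    "(Hello, world) show",
    "showpage",
])

with open("hello.ps", "w") as f:
    f.write(ps)
```

Contrast this with a word-processor format, where layout is recomputed against the current printer's fonts and margins every time the document is opened.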
Ah, but you probably can't use the Windows license that came with the computer in the VM. Microsoft were very careful, at the time they changed the EULA for Vista, to allow only the higher tier of Windows licences (did they call it ultimate or elite or something like that?) in a VM.
If you bought any system pre-built and pre-installed, it is exceptionally unlikely that you had this type of Windows license with it.
As you can probably tell, I'm not a Windows user myself, although I did peruse the EULA for Win7 when I built my youngest kids' PCs. There's a lot in the various MS EULAs that I don't like (particularly about gathering and sharing information about you), but the kids wanted to be able to run mainstream games, so what choice did I have?
My favourite clause of ridicule in an MS EULA was in the XP Home one, which tried to limit the number of systems (unspecified what constituted a system) on a home network to 5 or fewer by prohibiting you from connecting to more than 4 other systems from a computer running XP Home, while XP itself would trawl the network for whatever it could see and try to stick its fingers into anything it found. A completely unenforceable clause. The one for the old Microsoft Intellitype keyboard (seriously, an EULA for a keyboard! - well, strictly for the driver software, although it was stuck to the back of the keyboard) was a hoot as well.
BTX - What a waste of time!
My brother brought me a Dell something-or-other that had stopped working and he wanted fixing. No problem says I, and open the case.
Hmm, something wrong here, everything's arse about face. Ahhh. BTX mobo.
Could I find anything, either retail, on eBay or in any other tat bazaar? No.
Could not even reuse the case. Stripped the reusable bits and scrapped what remained.
Don't get me wrong. I'm System V through-and-through, but you have to regrettably admit that it's pretty dead now.
OK, Solaris and AIX are still mainly System V versions of UNIX, but I can't see IBM doing an R-Pi port of AIX any time soon, and I think that the license for OpenSolaris would prohibit a port.
In case you hadn't noticed, UnixWare (the last linear descendant of the Bell/AT&T code) pretty much died with SCO, and any chance of a reversion to Novell died when Novell was subsumed into Attachmate. That pretty much killed any chance of a new System V variant.
And there's an interesting point. I wonder who you would approach if you wanted to become a new System V source licensee? The OpenGroup?
Agree on Amarok. When I did a dist-upgrade of my Ubuntu desktop box from Hardy to Lucid, it did the 1.4-to-2.something upgrade for me, and I was lost for weeks trying to restore all of my music.
The bugs are mostly fixed now (at least the ones that were affecting me), but I still find it bloody annoying to maintain music on an external device. It was sooooooo much easier with 1.4.
I overlooked that one (and that is strange, because I had the exact same problem with TomTom myself). Ironically, I also have problems updating my Android Phone and Tablet because the installers both need Windows (although I think the tablet could be done using an update stored on the micro-sd card if I tried hard).
But I would also wonder whether the myTomTom (or whatever it is called) would suffer the same problem as S-OED that you mention.
So, how do we pressure these shortsighted vendors to provide native Linux apps? They'll have to do something to cater for tablet-filled, PC-free households at some point.
I have a working knowledge of an internal combustion engine as well, but greying hair is one thing I don't have in common. This is despite my being only about two months younger than Jeremy, and is neither because I am bald nor because I use dye.
I sympathise with my follicly or pigmentally challenged compatriots.