Re: MS Board Meeting
Haven't they reached their kneecaps by now?
Possibly, but that would require them to stick to a steady and consistent plan with no major deviations along the way.
Ow! That was my foot....
Fixed it for you. The hole is already too big from previous such incidents for the bullet to touch the sides nowadays...
"Nope. At least HERE there's a pushback from the aviation industry and aviation regulators"
Exactly. My point is that the very fact that the aviation industry and regulators are having to push back at all is just plain ridiculous.
One feels that at least in some countries political lobbying, public opinion and telco financial clout have more sway with politicians than do common sense and the local regulatory bodies. For example, in a country where the GPS industry managed to steal a whole adjacent comms band simply by producing rubbish equipment and selling it to everyone without the local regulator noticing or being able to do anything about it afterwards, can one have any faith in the politicians and regulators stopping intrusions into bands that are far more important?
3.5GHz? Doesn't sound like it's going to propagate well through the walls of buildings...
The best spectrum for mobile comms is around about 900MHz. It leads to sensible antenna sizes in the mobile, it propagates well through buildings, it's high enough in frequency that reasonable bandwidths are possible, and so on. Australia did exceptionally well to put their 3G services there instead of 2G, which is what a lot of Europe is stuck with in that band.
And the higher up the frequency table you go, the worse it performs. 3.5 GHz might be appealing because it's available, but building a large network with widespread coverage and good performance indoors and out sounds expensive.
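To put rough numbers on it, here's a quick back-of-the-envelope free-space path loss comparison at the two frequencies. The figures are purely illustrative assumptions of mine (1 km range, isotropic antennas, no wall losses included, and walls only make 3.5 GHz look worse):

import math

def fspl_db(freq_hz, dist_m):
    # free-space path loss in dB: 20*log10(4*pi*d*f/c)
    return 20 * math.log10(4 * math.pi * dist_m * freq_hz / 3.0e8)

for f_hz in (900e6, 3.5e9):
    print(f"{f_hz / 1e9:.1f} GHz over 1 km: {fspl_db(f_hz, 1000):.1f} dB")

# ~91.5 dB at 0.9 GHz vs ~103.3 dB at 3.5 GHz, i.e. nearly 12 dB worse
# before a single wall gets in the way. A quarter-wave antenna is also a
# handy ~8 cm at 900 MHz.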
Currently throughout the world there is a jolly nice bit of spectrum from 960MHz to 1215MHz allocated to radio navigation aids for aviation. I wonder how long it is before governments come under unbearable pressure from wealthy powerful telcos to give that up ("blah blah everything is GPS these days blah blah").
For the fabless chip company they always have the option of decapping a chip and comparing what they see to the mask designs they'd originally sent off to the fab.
It's a lot of work and needs some specialised kit, but it's a certain way of being sure. Anything like this that gets you a similar result but more automatically sounds like a good thing.
You owe me a new keyboard!
"Decode/microcode: Decode doesn't mean what you think it is; it's an essential part of any CPU design, RISC or CISC, as decode controls things like "what functional unit does this op go to?" and "what operands does this op use?" Microcode was mentioned nowhere. I suspect you're confusing use of micro-ops - ie, internal basic operations in a long fixed-length format - with microcode, ie lookup of certain complex operations in a microcode ROM at decode time. "
Ha! Yes, you're quite right of course. I'd read the article in haste. Ta!
Though I'd like to note that I wasn't dissing the value of out-of-order execution, pipelines, etc.
@ Destroy All Monsters,
"I don't understand this at all. VR and we are talking sub-microsecond realtime arrival times ... ON THE FSCKING CPU (yes, the CPU, not the graphgics pipeline)
Not going slightly overboard here? And I mean hanging on a 15 meter outrigger slightly overboard?"
Going overboard? Quite possibly.
Having spent many a year developing many hard real time systems I yearn for dependable execution times, something that seems to be going out of fashion fast. I hate having to deal with CPUs that don't run code in predictable times. To do large (e.g. 50+ CPUs) real time systems these days is a real pain in the arse - all that variation in latency starts to accumulate and we can't quite max out a collection of CPUs like we used to be able to. Intel chips are truly horrible in this regard, but there's not a lot out there to touch them when it comes to average performance, so it's hard not to use them.
@Destroy All Monsters,
"Whether CISC or RISC (a useless distinction nowadays, how about simply "ISC"),"
It kinda does matter these days. A consequence of Intel's CISC-to-RISC translation of x86 into micro-ops is that an awful lot of transistors are needed to do that (and everything else that goes with it). Transistors need power, and this was one of the contributors to Intel's best effort at a mobile x86 processor falling short of ARM designs on power consumption.
This is an M1, i.e. from ARM's M (for Microcontroller) range.
It most decidedly is not a microcontroller. Since when did a microcontroller have cache, TLBs, an MMU, and all the other gubbins needed by an application CPU?
This M of Samsung's is nothing to do with ARM's use of M to denote a core intended for use in a microcontroller.
If predictable execution times matter to you,
They matter a lot if you're doing VR. Sloppy latency on scene calculations is a good way of inducing motion sickness. Given that everyone is getting into VR these days this might end up being of concern.
Hang on a minute, what's going on here?
Instruction decode? Branch prediction? It's as if someone has decided that ARM is a CISC instruction set all of a sudden and needs to be re-implemented. But ARM is already RISC (very RISCy in fact), and even the 64bit version needs only 48,000ish transistors to implement.
How can it be better to add all that rename, decode and microcode nonsense on top? That's surely going to be a good demonstration of the law of diminishing returns. Wouldn't it be better simply to use all those extra transistors as extra cache (which is always useful), or a whole extra core, instead?
3W at 2+ GHz and not quicker than a competing design at single core performance? Well I think that about answers it. I don't know what Apple have done, but I'd not heard that they (or anyone else) had gone down the same microcode route.
Neural nets for branch prediction? Well, why not I suppose, but from a pure CPU design point of view isn't it a kind of surrender? It's a bit like saying "we don't know how to do this properly" and deciding to build something that cannot be mathematically analysed instead and hoping it's better. That's fine if the result is good...
It does mean that this is useless for hard real-time applications. Branch execution time is now impossible to predict.
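For anyone wondering what a "neural net" branch predictor actually amounts to, here's a toy sketch of the classic perceptron-predictor idea. Purely illustrative and certainly not Samsung's actual design - just enough to show why the timing becomes history-dependent:

# Toy perceptron-style branch predictor (the textbook Jimenez/Lin idea),
# NOT the M1's real implementation.

HIST_LEN = 8
THRESHOLD = int(1.93 * HIST_LEN + 14)   # commonly cited training threshold
weights = {}                            # one small weight vector per branch PC
history = [1] * HIST_LEN                # global history: +1 taken, -1 not taken

def predict(pc):
    w = weights.setdefault(pc, [0] * (HIST_LEN + 1))
    y = w[0] + sum(wi * hi for wi, hi in zip(w[1:], history))
    return y, y >= 0                    # dot product, and the taken/not-taken call

def update(pc, y, taken):
    w = weights[pc]
    t = 1 if taken else -1
    if (y >= 0) != taken or abs(y) <= THRESHOLD:
        w[0] += t                       # train only when wrong or not confident
        for i, hi in enumerate(history):
            w[i + 1] += t * hi
    history.pop(0)
    history.append(t)

# Because the weights adapt to whatever ran before, whether a given branch is
# predicted correctly (and so how long it takes) depends on the entire prior
# execution history - hence my hard real-time gripe above.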
"Not necessary. Lennart needs to stop fixing things that aren't broken, and really stop implying that everyone in the community he serves is a relic. He's starting to prove people right about calling him arrogant and single minded. In that Reddit thread, he sounds like he's always sounded toward Linux: every developer is stuck in the past. Which he then uses as an excuse to push for changes that really just let systemd take over that little bit more. "
Ah well, that's the problem in these days when so much has already been done. Pottering has a salary to earn, and he can earn it so long as he's fixing "problems". If he were to say something like "nope, nothing needed", then RedHat would be wondering what else to do with him.
Other software houses are similar. Look at MS - they have a team of people whose job it is to scientifically measure "Usability", and design things that are more "Usable". They did pretty well with Windows 7, but should have been sacked immediately afterwards. They weren't sacked, and we ended up with Windows 8, 8.1 and 10 as a result. The Office ribbon came out of the same bunch of people.
The hardest thing ever for a software developer is to admit that, in some respects, software can be "finished", or at least gets to a point where maintenance is needed, not revolution. Fortunately there are bunches out there who are much more cautious with their approach - FreeBSD, Solaris, etc. Even the Linux kernel devs are somewhat cautious - "don't break user land".
The same is true with senior management. Getting a new director in the company is a guarantee that there's going to be a lot of mucking about, regardless of whether their predecessor had set things up properly or not. Arrrggghhhh! Weirdly this kind of behaviour has generated a whole sub-profession for those who go around cleaning up the mess caused by others who cannot resist making changes for change's sake.
"If I remember from long ago & far away, Oracle were only prepared to license ME rather than the full desktop Java. Maybe that's why they didn't conclude a deal."
Hmmm, interesting. Since then Google don't seem to have been afraid of inventing languages, which would have solved that problem.
OpenJDK first came into being on 8 May 2007, more or less, and Android first hit the streets a few months later. I don't know when Google would have been speaking to Oracle, but Sun announced the open sourcing of HotSpot on 25 October 2006, and it was released on 13 November 2006. So the direction in which Java was headed was certainly very clear, and it was always inevitable (with the benefit of hindsight) that Android would have to join in, as they are doing now. Switching to OpenJDK would have delayed Android a little bit, but would have saved an awful lot of grief now.
I remember back then there was also a lot of discussion about native vs managed runtimes. Apple went with native, and made that work very well indeed. BlackBerry with BB10 also went with native and, despite being late to the party, did eventually succeed in making the OS environment itself work well. Nowadays with languages like Rust from Mozilla available as well, one would have to conclude that there's no real debate, native can be as easy and as "safe" as a managed language like Java, and is probably the way to go on battery powered devices.
Google have won? Um, well we'll see about that. With so many legal difficulties around the world not yet settled it's too early to say they've "won".
Microsoft's arena is desktop, office and enterprise servers. Last I looked the bulk of the world was still running Windows and Office, with a healthy number of domain controllers still out there. They're also doing well with Azure and Office365, which arguably should have been Google's arena. The Surface line of tablet/laptops is also selling pretty well, not bad for the supposedly "dead" concept of a non web-app WIMP user interface. I haven't looked recently but Xbox is still a thing. Sure, MS screwed up in mobile, they're doing their level best to irritate their desktop users with Windows 10 and its horribleness, and they're in severe danger of losing out in server land as ARM-based servers begin to materialise. But MS are far from having lost, and are making money largely by making things they can persuade people to directly part with money for.
Google on the other hand still make all their money through advertising and have grown very fat on the back of that. However in doing so they've engaged in some questionable practices, and the problem they have right now is that various governmental bodies all over the world are now asking the questions, even in the USA.
Plus because they're very aggressive in minimising their tax bills they can't fall back on the old "go easy on us, look at how much tax we pay" plea. Consequently there aren't many governments out there eager to give them a break.
The best way to a politician's heart is through healthy tax contributions, but instead Google have relied on lobbying in countries where that works, and have forgotten that in some places it doesn't work at all. For instance, it's hard to influence a stuffy EU official when that official hasn't been elected in the first place. They're basically paid to be bloody minded, not popular.
And whilst the (entirely unjustified) stereotypical view of a French official may be one of a person who'll do anything in return for an excellent lunch, the inquisitorial judicial system there is actually pretty good. Google are on the wrong end of some charges related to tax avoidance in France, and they won't be able to talk their way out of that one if the court decides that the evidence weighs against them. Being found guilty there would also set a precedent around the whole of Europe.
"Though the cars will be driving themselves, Uber says a human driver will be behind the wheel to "supervise" the operation of the vehicle and help train their artificial brains."
Frankly I have never heard anything so bonkers as that idea. Come and be a passenger in our beta-test cars whilst some underpaid and bored individual daydreams in the driver's seat? Yeah right.
It's a strategy that stands a very small chance of success, and risks exposing them to some huge liabilities. The end goal can be achieved only if legislation is passed to allow it, and that will surely push the liability for failures onto Uber. If there's even the merest hint that their self driving cars aren't safe, they're toast.
Meanwhile their bored and underpaid "driver" will most definitely be classed as an employee of Uber, and will be a permanent feature if legislation allowing unsupervised cars is not passed (as seems likely), or if Uber fail to demonstrate an adequate level of reliability (as seems equally likely, I doubt they've achieved results any better than Google's cars in California). In which case, why not just have a driver?
There's also the issue that, human nature being what it is, an unmanned Uber-cab will become the transport of choice for the seriously inebriated who can't get any other ride, will be vulnerable to petty vandalism or careless damage, and will therefore not be the most enjoyable of rides. People behave pretty badly even when there's someone else in the car; without that social restraint it'd only be worse.
As for Uber, I believe they are covered because they aren't offering a taxi/PH service, their drivers are (individually). They just provide the infrastructure to connect them to clients. So Uber do not need a base in Plymouth, their driver does.
Passengers pay their fare to Uber, so the contract is with them, not the driver. Also if Uber have a set of terms and conditions on their website for either drivers or passengers then that reinforces their status as a service provider, not a service broker.
Applause to the TFL - provided that it is _NOT_ discriminatory. All must sit the test (even British passport holders).
Applause indeed, but there is precedent in the immigration rules about exemption from language tests. People from English speaking countries don't have to do the exam as part of the immigration process.
So no need to make British born drivers prove they can speak English, that's already settled elsewhere in laws and rules.
"I see this in the same category as Microsoft embracing Linux and the Bash shell, and Microsoft embracing iOS and Android, too."
Hmmm, i.e. we've lost, and if you can't beat them, join them.
They needed a calibre upgrade, because the bullets they were firing from their old gun were passing through the existing holes in their foot without touching the sides...
This is what they've got for having skipped Windows 9 as an official version number. Going even, again, was bound to invite trouble...
I was wondering whether these were accidental malware signature matches. Google scan email for malware (it's even a service you can buy!), and one could imagine them simply deleting false-alarm emails pretty sharpish and keeping schtum about it...
Er, weren't the weak and electromagnetic forces unified into the electroweak force upon the discovery of the W & Z vector bosons at CERN back in the 1980s? Which would mean that there are now thought to be 4 forces, namely gravity, strong, electroweak, and this new one?
Define "BSD-inspired code" ... very loose term.
Use of someone else's source code to tell one how a device should be driven, rather than ploughing through some boring, overlong data sheet or standards document to learn the same information. Of course I'm not suggesting that Hellwig or VMWARE have done that here.
And regardless of that possibility, if both Hellwig and VMWARE based their code on reading the SCSI standards then it's not at all impossible for the two code bases to end up looking similar, guided as they are by the SCSI standards themselves. That's the thing about standards: they are the result of someone else already having done a lot of the thinking on behalf of the software implementer.
I think there'd be something to learn from the SCF repeating their analysis, but this time for VMWARE's code vs, say, FreeBSD's SCSI code (they did whole kernel vs whole kernel, not just the SCSI code), to see what similarity scores that generates. If they're wildly different then it's possible Hellwig has a point (though as I say above there's still plenty of scope for common solutions arising from separate readings of the SCSI standard). If they're not so different to the scores for VMWARE vs Linux, then Hellwig definitely does not have a point.
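For what it's worth, the sort of "ratio of similarity" being quoted usually boils down to something like token n-gram overlap. Here's a toy illustration of that kind of metric - my own sketch, definitely not the tool used in the actual analysis:

# Toy "ratio of similarity" between two source files: Jaccard overlap of
# token 5-grams. Illustrative only.
import re

def shingles(src, n=5):
    tokens = re.findall(r"[A-Za-z_]\w*|\S", src)
    return {tuple(tokens[i:i + n]) for i in range(max(len(tokens) - n + 1, 0))}

def similarity(a, b):
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if (sa or sb) else 0.0

# Two drivers written independently against the same SCSI spec will still
# share opcodes, constants and call ordering, so a non-trivial score proves
# less than it first appears - which is why a FreeBSD comparison would make
# a useful control.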
We are talking code-lifting here, as in, I copy 1000 lines of code, change a few variable names, change indentation, and claim the lot is my brain-child.
By the SCF's own analysis that's not what's happened here. There's some similarity between some of the functions related to SCSI, whilst others are very different, judging from their stated "ratio of similarity". Not that they're saying what 99% or 14% means: I notice that the SCF haven't put up the two pieces of code side by side for all to see.
At the time, there were copyright disputes for the BSD flavor of UNIX, so they could not lift the code.
By the time FreeBSD came along it was already almost entirely free from AT&T code, most of the work having been done in 1989/1990.
1. BSD was, at the time the Linux kernel was written, still in legal disputes on the matter of the copyright of its code.
FreeBSD came into being in 1993, about 2 years after Linux first hit the servers. Linux was massively incomplete (compared to today) at that stage, and both have grown up more or less in parallel. FreeBSD itself has earlier origins, 386BSD, etc, which go all the way back to 1976; Linus was just 6 years old at the time and, gifted though he is, I doubt he was writing Linux back then.
2. Linux was written from scratch, mostly ... the kernel, I mean. The userland tools as well ... now, you will certainly find this or that, such as the zfs implementation ...
What, there's no BSD-inspired code in there at all? Not one single line? I don't really care, but I bet you cannot prove that.
Now, how is this any worse than what GPL types do ? I mean, GPL'd code is freely available, provided you stick to the license - it is not. GPL is there so that proprietary competitors do not use the work of a gazillion devs in their proprietary BS.
Are you crazy? GPL is cool with usage in proprietary systems so long as the terms of the license are adhered to. There's no "For non-commercial uses only" clause like there is in some proprietary licenses (eg VMWARE Player, a great proprietary gift to the world). Linus even resisted transfer of Linux from GPL2 to GPL3 to ensure that use of Linux didn't drop off as a result.
Hmm, I wonder how they're going to keep the photon they carry aloft alive without it being absorbed, scattered, etc. etc. It takes quite a long time (relatively speaking) to put a bit of kit in a rocket, launch it and get it stable in orbit.
Agreed, but unfortunately the shareholders kinda expect it, especially if they're not getting a nice fat dividend to make a return on their investment. It is logical that any given market will eventually saturate, but try telling that to the guys who hold the shares.
Practically the only duty of a company board is to increase shareholder value by whatever means possible, and one way to offset falling sales is to pay a bigger dividend out of previously earned profits. Though ultimately it's not possible to sustain a share price higher than the true worth of the company. One massive problem for Apple is that it may eventually have to pay a large tax bill if it does repatriate all its profits stashed abroad, unless they can take the borrow-to-pay-dividends approach to extremes.
But doing that means that Uncle Sam (who is broke and could really do with the $80 billion in tax revenue they'd normally collect on profits like Apple's) gets increasingly grumpy about tax avoidance, and becomes more likely to pass a harsh tax law to cover off that trick. The shareholders may expect a return, but the government can pass a law to ensure they get their share first.
Peak Apple? Peak smartphone I suspect. There are simply no well-developed, cash-rich places left where smartphones haven't already been sold in abundance. Places like India and Africa may not have many smartphones in use, but then there's not that much surplus cash to be milked in iTunes and phone sales either. Apple's next $100 billion is not going to come from either continent - they simply don't have that much spare money lying around to be spent on fripperies. And if they did they'd have got it through economic trade with other places, meaning those places themselves then wouldn't have that money to spend on iPhones, etc.
Any CMOS based camera sensor is already an infrared camera, especially so if you remove the IR filter that they place between it and the lens to stop it being an IR camera.
There was an article a few weeks back here on The Register exploring the rise of Android in China. Not Google-Android, but Android spun up by the locals to use local services, mapping, payment, etc. Regardless of what one thinks of the governmental and company set-up over there, it seems that they've done a pretty comprehensive job of it, and everyone uses it, right down to the QR codes.
So if iPhones aren't glued into that local services infrastructure in China, a lot of people will be thinking "pretty, but useless. Pass me an Android". Which would make it very difficult to get stellar growth in iPhones going in a sustained way over there.
In that respect, both Google and Apple are now shut out of China. Missed the boat, or at least refused to agree to the conditions of passage on that boat.
This case certainly does make the FBI's case vs Microsoft look totally ridiculous. Though I think MS have won that one (thank heavens), at least for the time being.
And BlackBerry. But whilst they have either Android as an OS or an Android-ish runtime, they don't have Google Play Services. And without that it is impossible to run quite a lot of apps. This significantly limits their appeal.
It's interesting to see how the app developers have approached use of Google Play Services.
Lots simply expect to see it and won't run without it.
Some, e.g. Skype, will use them if they're there but don't seem to mind if they're missing. Makes one wonder what it gets out of them. I found this out on a BlackBerry: Skype from Amazon's app store runs just fine without Play Services. Installing a version of Play Services on the phone made Skype start trying to use them, but then it wouldn't run (apps have to be lightly reprocessed to ignore the lack of a signature on the Play Services). Make those changes and Skype springs back into life.
Some are clearly written with multi-platform support in mind (eg Nest's app). Yes, Nest from Google doesn't use Google Play Services. Or at least it didn't. I suspect that's because they also want to support iOS from the same code base. Instead Nest and Google can be joined at the server level, where your Nest account can talk to your Google account (which knows everything anyway).
So that's how Google play the Android market. Want to make a usable Android device? You have to install play services, and that comes with a lot of strings attached. Android may be free, but marketable Android most definitely is not. Result: Google monopoly.
The various levels of government, and the courts, are (thank goodness) constitutionally constrained in their powers
Well isn't that part of the problem? If the legal / political machine as a whole is playing silly buggers there's no one who can oblige them to stop it and sort themselves out?
A few years ago I read an article by a retiring constitutional lawyer. The tone of it was along the lines of how America needs something like a monarch or a president in the European style. Basically every other country on the planet has someone whose only real power is to dismiss the government and all the politicians and tell / let the electorate choose some new ones.
Mixing that up with a federation of states is messy, though many other countries don't have this sovereign-state-within-a-federation thing that the US has...
In my lifetime the Queen (well, her governors general) has intervened twice, once in Australia and more recently in Canada. In Australia a general election was forcibly called, and in Canada an election was denied. In both cases this successfully resolved a budget deadlock, to the benefit of both. The USA, as things stand, can't do that and this has been to the significant detriment of the country in recent times.
It's never come to that here in the UK, where the Parliament Act basically means we have 5-year dictatorships (so no problems getting things done), but nobody feels that that is undemocratic. The thing that reins it in is the knowledge that they can pretty well get sacked at any time if the Queen thinks it necessary.
I think RobTub's missing the point. Apple and Google between them have a monopoly on the mobile phone space, and are using that to gain complete control of mobile payments as well as everything else they already control the flow of money for. Want to buy some music, or an app, or a book, or a movie, or a coffee with your phone? Guess who is creaming off the top and uses some flimsy security excuses to say only they should have that power? And every time there's someone creaming off the top, it's the consumer who ends up paying for it.
Competition in processing charges on credit card transactions is the way in which the cost to retailers is minimised. There's not enough competition as it is. Google and Apple between them are seeking to establish a global duopoly of the means by which we buy goods and reduce the competition, and ultimately that can only be a bad thing for the consumer. If the retailers end up with no choice but to go along with Apple and Google and traditional credit cards disappear, there will be no competition left. And the retailers will have to pass the costs on to the consumer. That's why it's a bad thing.
I suppose there's also the question of who is responsible for the content. Content checking, management, hosting, click logging, etc. all takes time and money.
And if you end up unwittingly serving up a nasty trojan-bearing ad, the reputational damage is all yours! Using an ad broker at least spreads the blame around a bit.
Oh jezus god, is this the state of 'computer' security in late 2016?
I'm afraid so, and these kinds of events will never ever quite go away. Not whilst there is no reliable way for a computer to establish the identity of a human user. We have user names and passwords, biometrics, swipe cards, etc, but all of these have flaws that can and will be exploited.
It's not helped either by too many systems being connected to the public Internet when there is no true need for it. A till in a shop does not absolutely need to be connected to the Internet, and neither does the company network behind it. Connecting it to the Internet seems cheaper than having a private WAN right up until your entire business is hacked to smithereens.
Maybe not. People thought that last time, but now there's this new transgression. You'd be taking a bet that this is the last bit of dodgy design lurking in VW's portfolio...
My PC reverted to Windows 7. Went without a hitch, thankfully.
Looks like people at BlackBerry have had a busy weekend - they're now rolling out August 5th patches to their Priv Android phone, or at least the factory-unlocked, SIM-free ones.
Fairly smart work. Apart from Nexus, and with BlackBerry hot on Google's heels, who else is keeping their products that up to date?
Not all the flora. I have a certain regard for their local Vitis vinifera.
Oh most certainly, but surely strictly speaking it's an import? And I'm certain it's given me headaches and flu like symptoms the day afterwards!
It's yet another reason to be envious of Australia. Nice place, nice weather, etc. etc.
Though you can keep your animals and plants. Literally everything else is seemingly out to get you, especially the gympie-gympie (Wikipedia) bush and everything with a leg count != 2. And everything that lives in the sea round there. Maybe some of the sheep are safe-ish, perhaps.
Personally speaking I think the inquisitorial system they have on the Continent is far better for handling complicated scientific and technical evidence. It allows all parties to a discussion to be consulted. Our adversarial system doesn't leave room for that.
The inquisitorial system is also far cheaper. All the arguments we have about the legal aid budget are caused entirely by the expense of having two arguing sides to a court case.
As an ex-forensic scientist I've spent many hours in court being challenged*. The closest your statement resembles reality is that no counsel I encountered on either side displayed a knowledge of statistics.
Very few people (least of all me) do understand statistics, which is why the courts are so reluctant for them to be discussed in open court.
My statement about not being allowed to challenge "expert" opinion is based on several cases.
There was a murder case in Scotland where fingerprint evidence jailed a man, and there wasn't much else. The fingerprint analysts' conclusion that the dabs matched was presented as fact. However, after the case the defence took a look at the analysis themselves, and realised that it was a load of old bollocks; it pointed to matches between mere smudges in the scene-of-crime dabs. That should have been that - retrial, acquittal, whatever - but it took a desperately long time to persuade the court system that there was anything wrong with the evidence. I think that resulted in a wholesale reorganisation of the fingerprint service in Scotland.
A friend's father-in-law is a senior paediatrician who was asked to act for the defence in a child abuse case. Apparently a junior doctor, after many hours on shift, had made a rash and almost certainly inaccurate allegation based on a late-night examination of a child brought into casualty. That kicked off the whole chain of events. However, despite many more senior doctors (not just said father-in-law) protesting that a mistake must have been made, because they weren't there at the time of the original examination they were not allowed to be heard in court, leaving the defence with nothing, no way even of saying that there was "reasonable doubt". AFAIK the prosecution succeeded, and it was almost certainly a miscarriage of justice. It seems that in our courts, late-night observations made by an overworked and tired junior doctor carry more weight than the entire body of peer-reviewed paediatric medicine. Not good, especially given the diabolical involvement of people like Roy Meadow.
The DNA contamination thing now is a scary problem I think. Going on the London Underground these days probably means a little of everyone's DNA ends up at every crime scene in London... It's reassuring to hear that they're aware of the risk of contamination, but it's still a there-but-for-the-grace-of-god-go-I thing.
Having been a juror and seen what goes on in a jury room, I can assure you that you should never put yourself in a position of having to trust a jury to accurately determine guilt or innocence. Prejudice and illogical thinking can be rife... A colleague who was once a juror caused a rape trial to be stopped by privately reporting some of the goings-on in the jury room to the clerk of the court. The judge, on reading his note, stopped the trial dead in its tracks, made no reference to the note and gave no reason. Judges are terrified that the reliability of the jury system should ever be objectively questioned, yet to those of us who have seen it, it has the potential to be very dodgy indeed.
Your mention of having to be rescued by the defence was interesting, and speaks volumes about the problems with how science is handled by the courts. You, the expert, were powerless to intervene when what you'd said was being re-interpreted by the prosecution. Can't have been comfortable.
You say you left before DNA came into use. I don't suppose you're much of a fan of how forensic examinations are now commissioned. Forensics used to be a way by which suspects could be eliminated as well as identifying perpetrators. Now that it's directed by the cops themselves and bought from the cheapest provider, one imagines that they're now primarily looking for something to convict someone they already have in mind...
Signal != Person
It is, but not every network emanating from a house belongs to the householder. I made specific mention of BT WiFi, because literally everyone who has a BT hub is giving that out to all and sundry and you have no control over who connects to it. So a neighbour can use it to watch BBC, but it'll be your front door that the BBC will knock on. Personally speaking I wouldn't want to be relying on getting an opportunity to explain that to a judge and jury.
You might want to read the BBC's charter and look up the legal definitions of things like "conditional access".
Except that access is already conditional - you have to have a license. There's nothing really fundamentally wrong with the idea of demonstrating license ownership when you connect online. If the BBC's charter somehow prevents them instituting such a check-on-connection, then I say the charter is a pile of old bollocks.
As far as I'm concerned this new trick of the BBC's is a terrible way of solving the problem. Its inevitable inaccuracy will result in inappropriate prosecutions and risks miscarriages of justice, which is the most appalling thing that can ever happen to someone. And if that ever happens, I can't see the BBC paying out appropriately for broken families, ruined lives, destroyed careers.
Given that a license check on connection would be completely accurate and very, very cheap to administer (plus getting all that lovely viewership data for free), that's got to be a much better way of solving the problem of making sure people pay up.
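To show how little machinery a check-on-connection would actually need, here's a sketch. The token format, field names and figures are entirely hypothetical, my own illustration rather than anything the BBC has proposed:

# Sketch of the "prove you hold a licence when you sign in" idea.
import hashlib
import time

SECRET = b"server-side-secret"   # known only to the licensing back end

def issue_token(licence_number):
    # Server side: after the licence number is checked against the licensing
    # records (not shown), hand the player a signed, short-lived token.
    payload = f"{licence_number}:{int(time.time()) + 3600}"
    sig = hashlib.sha256(SECRET + payload.encode()).hexdigest()
    return f"{payload}:{sig}"

def token_valid(token):
    # Streaming side: no packet sniffing or van required, just check the token.
    payload, sig = token.rsplit(":", 1)
    expiry = int(payload.rsplit(":", 1)[1])
    good_sig = hashlib.sha256(SECRET + payload.encode()).hexdigest() == sig
    return good_sig and time.time() < expiry

print(token_valid(issue_token("ABC1234567")))   # True
# And crucially the check follows the person, not the WiFi signal.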
"The prosecution would be a civil case, so the test used would be 'on balance of probability' not 'beyond reasonable doubt'. In other words, greater than 50%."
Wrong - TV license evasion is a criminal offence, not a civil offence. You can go to jail for it. You may not get the opportunity to pay a fine, you're simply carted off to the clink for a few months without the option.
That's why there's so much riding on how well the courts handle future cases built on evidence collected in this new way. According to many scientists who have been involved with the courts, it's disturbing how they approach fact and 'maybe'.
It was recently reported that it's the leading cause of women being put in jail in the UK, which is a ridiculous situation for the country to be in. It costs an absolute fortune to keep someone in jail, and the BBC doesn't contribute to the cost of their incarceration.
Pedant Mode (Sorry)
The vans detected the leaked local oscillator (not the IF) from the first stage of the radio receivers that picked up the TV signal. Colour TVs had more receivers to pick up the colour signals, and so could be distinguished from black'n'white sets.
The local oscillators themselves can be quite powerful (as these things go), around about 1mW, so they're easily detected in the street having leaked back through cheap mixers and up the aerial cable. The same thing still applies today for Freeview digital sets.
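Some rough numbers on why that's an easy detection. The assumptions are my own for illustration: 40 dB of loss leaking back out through the mixer and aerial, 20 m to the van, roughly 800 MHz, a 1 kHz receiver bandwidth and a 6 dB noise figure:

import math

def fspl_db(freq_hz, dist_m):
    # free-space path loss in dB
    return 20 * math.log10(4 * math.pi * dist_m * freq_hz / 3.0e8)

rx_dbm = 0.0 - 40.0 - fspl_db(800e6, 20)              # 1 mW LO, ~-97 dBm at the van
noise_floor_dbm = -174 + 10 * math.log10(1000) + 6    # kTB + noise figure, ~-138 dBm
print(f"received ~{rx_dbm:.0f} dBm vs noise floor ~{noise_floor_dbm:.0f} dBm")

# Roughly 40 dB of margin, which is why a leaked LO is such an easy catch.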
I ran a B&W set for ages acting as detector van bait, and always ignored the nastygrams accusing me of probably having a colour set (which I didn't). Saw the van a couple of times. Being an RF engineer and having access to some reasonably powerful kit, I was tempted to give them a nasty blast of a high power signal, see how they like that up their spectrum analyser.
Signal != Person
One of the problems I think they'll have with this new technology is that they cannot identify the people using devices.
I'll explain with the following scenario. I have a TV license, I'm entitled to watch BBC anywhere in the UK, including when I use a public WiFi network such as BT WiFi. I go to a friend's house, who has a BT hub. I use the BT WiFi that their BT hub has switched on by default. That friend has not got a TV license, and I'm watching BBC at their place but not on their private WiFi that comes from the same BT Hub. However the BBC cannot tell the difference; they're not allowed to examine the network packet contents, encrypted or not.
Another problem - two adjoined houses have their living rooms next to each other. The WiFi routers are in the same corner of the rooms, separated by only a couple of feet and the partition wall. One of the houses has a TV license, the other one doesn't. I bet they can't DF the emissions to the accuracy required to tell which of those WiFi routers is in which house.
The Courts' Dismal Approach to Science and Technology
My fear is that the BBC will be too gung-ho with prosecutions, and the courts will take an unreasonably optimistic view of the reliability of the technology. The UK courts haven't exactly been that clever at sorting scientific fact from pseudo-fact, and there are too many holes in this technique for it to be relied upon as the sole evidence required to jail someone.
The Courts have been appallingly willing to accept scientific evidence with low probabilities of correctness as being evidential fact. If an 'expert' states in court that something is fact then the court accepts that, and no amount of dissenting scientific opinion will change their mind. As the defence you're not even allowed to challenge the "expert" evidence in court or even discuss probabilities.
This caused a number of people to be jailed on DNA evidence alone, until someone irrefutably showed that the number of base pairs being accepted at the time as "good enough" wasn't. A man accused of and jailed for rape commissioned his own, more thorough DNA analysis using his own money. This showed that it was a close match but definitely not an exact match - definitely not him then. Quite a few cases got quietly squashed as a result.
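A toy calculation shows why "the DNA matched" and "it's definitely him" are not the same statement. The per-locus figure below is made up purely for illustration:

loci = 6
p_locus = 0.1                                  # assumed 1-in-10 coincidental match per locus
population = 60_000_000

p_random_match = p_locus ** loci               # 1 in a million per person
expected_matches = p_random_match * population # ~60 innocent matches

print(f"random match: 1 in {1 / p_random_match:,.0f}")
print(f"expected coincidental matches in the population: {expected_matches:.0f}")

# "One in a million" sounds damning, yet dozens of people would match by
# chance - which is why the number of loci tested, and how the statistic is
# presented to a jury, matters so much.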
Misguided and Easily Circumvented
Effectively they are doing a primitive traffic flow analysis attack on encrypted communications. Well, that's easy enough to defeat in software. As the article suggests changing the network MTU would be one thing. But it wouldn't be hard to develop an app, or even a website, generating network traffic that'll bugger up the analysis too.
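As a flavour of how simple the countermeasure could be, here's a toy padding scheme that makes every chunk on the wire the same size. The bucket size and framing are arbitrary choices of mine, not any real protocol:

# Toy padding: every transmitted chunk is exactly the same length, so
# packet-size fingerprinting tells the observer nothing useful.
import os

BUCKET = 1024                      # every transmitted chunk is this long
PAYLOAD = BUCKET - 2               # 2-byte length header per chunk

def pad(message: bytes) -> bytes:
    chunks = []
    for i in range(0, len(message) or 1, PAYLOAD):
        part = message[i:i + PAYLOAD]
        filler = os.urandom(PAYLOAD - len(part))   # random fill so all chunks look alike
        chunks.append(len(part).to_bytes(2, "big") + part + filler)
    return b"".join(chunks)

def unpad(data: bytes) -> bytes:
    out = b""
    for i in range(0, len(data), BUCKET):
        n = int.from_bytes(data[i:i + 2], "big")
        out += data[i + 2:i + 2 + n]
    return out

msg = b"which programme you actually watched"
assert unpad(pad(msg)) == msg      # the wire only ever sees 1024-byte chunks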
What came first? Definitely lawsuits. BlackBerry were practically the only smartphone of their day (or so it seemed at the time), which meant a healthy market share (though weirdly still quite a small one - they were expensive business tools). Then BBM became very popular with youngsters on a budget - BBM was a way of texting that was cheaper than SMS if you sent a looooot of messages (as teenagers do). So far so good: some clever ideas, executed pretty well, making good money.
And then Apple came along and made a shiny, glitzy phone that mostly didn't work very well as a phone and had unbelievably terrible battery life (back then a feature phone would easily last more than a week on a charge). Suddenly BlackBerry were nowhere (as was everyone else, eg Nokia). Android only just succeeded because it was given away, allowing non-Apple manufacturers to get a piece of the shiny-glitzy pie that Apple had unexpectedly baked.
BlackBerry, the fools, tried to make their own thing, but were way, way too late. Had they done BB10 a year or two before Apple did the iPhone, the iPhone would not have had anything like an easy ride. BB10 is actually pretty good from a design and usability point of view (which is why I use it, but then I'm a bit odd...), but it was way too late and missed the boat. For example BlackBerry Balance (not just BES) is the dream answer to the BYOD problem (it's really great for both people and company, it's way ahead of all the other MDM solutions), but no one in the market cares. It's not all bad news: BB10 (i.e. QNX plus a graphical front end) is doing quite well in the automotive sector, probably precisely because it isn't iOS or Android. The traditional car manufacturers don't want to be dominated by Apple or Google.
We're kinda at the same point now with Apple and Android. Both well established in the market, but there's now a sense of stagnation brewing. iPhone sales are going down, and really what has been happening in the past few years is refinement of the idea of a smartphone. The improvements are now too trivial to reliably persuade people to upgrade in bulk. Nothing fundamentally new is coming out of Apple or Google. Android manufacturers rely on not putting out software updates to force people to upgrade.
Apple in particular will also deploy the lawyers if they see fit to do so as a means of maintaining revenue, and have already done so over stupid things like rounded corners (far more trivial than the patents BlackBerry are waving at Avaya). Google don't have to sue quite as much - they get money from people using phones, not from phone sales. Google don't give a damn if people stop buying phones.
Apple in particular know they've stagnated, and both Apple and Google are keenly aware that someone could do to them what they did to BlackBerry and Nokia if they don't invent the next great thing themselves. Consequently they'll jump on almost any idea going. Hence the absurd iWatch and the whole wearables rumpus, and the Internet of Things. Both have to put effort into these things just in case one of them works. So far no success, meaning they're not having the right ideas. The iWatch is a dead duck, Google have cocked up Nest, and no one is really using smart wearables. Both are pushing into self-driving cars, though this is probably going to be too big even for them. No one is likely to make a car truly self-driving - we will always have to be sober and paying attention - so it's not fundamentally useful at all, really, and there's no compelling reason to buy one. Google weren't too happy that the State of California published their trials data, showing that human intervention was needed about once every 1500 miles.
Well, a true patent troll has never made anything, and exploits the crazy patent laws they have in the USA to repeatedly modify an application until it closely resembles something that someone else is making.
BlackBerry clearly aren't in that category.
So it's alright if I come and pinch all your best ideas then?
It's easy to state where BlackBerry went wrong. They rested on their laurels in the first half of the 2000s, and forgot that other people can innovate too.
And BlackBerry aren't out of business; there's a range of keyboarded phones from them available, see the Priv (Android), Passport and Classic (BB10).
Er, Apple sue other companies too, and no one's saying that that indicates the imminent demise of Apple, Inc. If anything BlackBerry have been far more patient with other companies concerning real technologies than Apple were over rounded corners, and that patience has arguably cost them real revenue they could now sorely do with collecting.
SCO's claim was related to copyright, not patents.
Regardless of what one thinks of the patent system and its use by companies, we have to acknowledge that BlackBerry really did invent an awful lot of stuff that is very good. Given their status as being a long established manufacturer, it would be churlish for anyone to say that they are acting like a patent troll (and I gladly acknowledge that you're not suggesting they are). I mean, if BlackBerry can't defend their IPR in court, who the hell can?!
I am a bit puzzled as to why someone hasn't snapped up BlackBerry. They have a lot of very good technologies, and these add up to nifty little things that are great in handsets. Apple could acquire BlackBerry with the loose change in its back pocket, and would instantly acquire one of the more sophisticated patent portfolios available and a range of technologies that Apple simply don't have at the moment.
Never mind the military, there's plenty of electronics that we all rely on. How many fake components in the airliner you last flew on, or in your car, or in medical equipment, etc.
It's not just small components either - I've seen fake FPGAs that worked well enough to pass a cursory examination...