"Skoda, the low-end Czech car manufacturer"
Oh boy, are you out of date or what?
This has conflicting requirements.
1) Detect that the right person is carrying the phone, with 100% reliability.
2) Detect when the wrong person is carrying the phone, with 100% reliability.
It's no good if it's, say, 99% reliable. That would mean it would occasionally self-destruct on a legitimate user, and sometimes fail to destroy itself in the hands of an unauthorised user.
With any kind of feature, gait or biometric sensor there is a degree of uncertainty as to what it has actually measured. It can never be 100%. Even we humans sometimes get it wrong (ever been convinced you've seen a friend out and about who turned out not to be? Embarrassing when you walk up to them and say "Hi!").
The maths involved in optimising weights for combining unreliable sensors like this are clear. You can bias the system one way or the other, but not in both directions at once.
In short, it won't work well enough to actually be useful. Biased one way it will be too unreliable for legitimate users. Biased the other and it will not be secure enough for the intended purpose.
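The one-way-only bias can be put in code. A toy sketch (all the scores are made up for illustration): genuine and impostor match scores overlap, so any accept threshold trades false rejects against false accepts, and no threshold eliminates both.

```python
# Toy model of a biometric matcher: genuine and impostor match scores
# overlap (all numbers here are invented for illustration).
genuine_scores = [0.62, 0.70, 0.75, 0.81, 0.88, 0.93]
impostor_scores = [0.35, 0.48, 0.55, 0.66, 0.72, 0.79]

def rates(threshold):
    """(false reject rate, false accept rate) at a given accept threshold."""
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return frr, far

print(rates(0.60))  # biased for convenience: (0.0, 0.5) -- impostors get in
print(rates(0.80))  # biased for security: (0.5, 0.0) -- owners get locked out
# No threshold gives (0.0, 0.0): the overlap makes that impossible.
```

Moving the threshold just slides error from one column to the other, which is the whole point: you can bias one way or the other, not both.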
Incidentally, the maths underpinning these sorts of systems and requirements is what killed off the biometric identity card scheme here in the UK. They (finally, and very late) realised that it would be useless for the intended purpose, which was letting UK citizens through passport control at the airport and keeping non-UK citizens out (or queuing up at passport control). It was either going to let people impersonate UK citizens too easily, or deny entry to genuine citizens too regularly.
BlackBerry are doing what Nokia didn't.
Nokia vanished. Maybe BlackBerry won't...
However it's clear from the market figures that that is not a selling point.
Loads of people moan about Android permissions all the time, but it's rarely enough to make them buy a BlackBerry...
The crazy thing is that if, hypothetically, someone did a properly good app for Android that gave you the same level of control as you get in BB10 it would be a knock-out best seller. Ad blockers are very popular; it would be similar in essence to those. There seem to be a lot of mediocre / crap ones on the Google store... There is clearly a level of demand for it out there.
Personally I loathe Android's Freemium funding model. It's a distasteful race to the bottom that gives zero choice for those who are happy to pay a little bit of cash for stuff that works but doesn't snoop.
British army fighting vehicles of old, such as the Chieftain tank, could be converted with a spanner fairly quickly.
It wasn't wholly successful. They didn't run very well on either fuel, and if run on petrol for an extended period of time the piston rings would wear quite quickly. The result was that they could start dieseling on their own lubrication oil, and run out of control and explode. Apparently running away was the recommended course of action.
If KBA withdraws approval for the affected cars, they can neither be sold nor driven in Germany.
Not even driven? Blimey! That will sting...
Seemingly VW are fairly unusual in the diesel world in not using urea in the exhaust system to reduce NOx emissions. From what I hear, urea injection definitely works, so if other manufacturers (which I gather is all of them) have used it then they should be in the clear.
The real problem they have is that these systems need to be properly looked after. They wear out, run out of urea, get fouled up. A diesel car when it's brand new is pretty clean, but after a 100,000 miles who knows? Especially if you run it on cheap diesel.
And of course, things like diesel particulate filters, catalytic converters, exhaust gas recirculation valves, etc. all cost serious money to replace, especially if you take your old car back to BMW, Mercedes, Renault or whoever. I get the distinct impression that the manufacturers push the renewal costs upwards as a ploy to get you to buy a new car. Who would spend £1000 on a car worth £1500?
So I think the real scandal is that the major atmosphere-protecting components are not built to last or be economically replaced throughout the true lifetime of the car. There must be many a diesel that gets to 100,000 miles only to then start spitting out soot and fumes, yet no owner can reasonably justify the expense to get it fixed. So they don't, and they just drive it around anyway.
"I'm so happy I'm done with using INTEGRITY, ran against more bugs than we wanted to handle, and the standard response was that there was a newer version just out, please try that."
Sorry to hear that. I have to say my experiences with INTEGRITY were all good. This has all been in the past 4 years, version 10. The support I got here in the UK was very good indeed. We didn't try to run on anything out of the ordinary, stuck to carefully chosen standard x86 hardware, and didn't need a magic cable either. Maybe that was a good choice. In that particular system it was very good value for money.
The drivers may be derived from BSD, but they're not really part of the OS proper. The clever bit, the kernel, is all theirs AFAIK.
It's so hard to put an off-the-peg OS onto bespoke hardware; Linux, VxWorks, etc., they're all hard. Someone somewhere has to put a lot of effort into software support. Linux has put a lot of effort into SoC support, which is a tremendous help.
"Intel is rarely the first choice for CPUs these days in embedded markets..."
It depends on which particular bit of the market. I can assure you that in the high performance embedded market (radar, etc) it's Intel all the way for newish projects. That's simply because Freescale (now NXP) failed to deliver a decent PowerPC with decent math performance. Not surprisingly I suppose, the market is perhaps too small.
Once upon a time I can remember it being the other way round. I can remember 400 MHz 7410 PowerPCs being way quicker than 4 GHz Pentium 4s (mostly down to Altivec, the PowerPC equivalent of SSE). In my opinion Altivec is still better than SSE, but the first Nehalem Xeons were just monstrous enough to overwhelm anything the PowerPC world was selling, especially as IBM canned the Cell processor.
So in short, if you want a lot of embedded CPU maths the only choice left in the market is Intel.
Which also implies Linux. The Linux kernel works better on Intel's big chips (Xeons, etc) than VxWorks does, apparently.
Embedding Linux (with the preempt-rt patches) on Xeons and making the whole thing real time is really, really difficult. It can be done however, and these days it is about the only option left if you want to crunch a lot of numbers.
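For what it's worth, the user-space half of that recipe is fairly standard: put the process on the SCHED_FIFO real-time policy and lock its pages into RAM. A Linux-only Python sketch (the mlockall call goes through ctypes, and both steps need CAP_SYS_NICE / CAP_IPC_LOCK, so it reports failure rather than crashing when run unprivileged):

```python
import ctypes
import os

def go_realtime(priority=50):
    """Best-effort: switch to SCHED_FIFO and lock all pages in RAM.

    This is the usual user-space side of a preempt-rt deployment; it
    needs elevated privileges, so unprivileged runs report failure
    instead of crashing. Linux only."""
    ok = True
    try:
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))
    except PermissionError:
        ok = False
    libc = ctypes.CDLL(None, use_errno=True)  # libc from the running process
    MCL_CURRENT, MCL_FUTURE = 1, 2  # values from <sys/mman.h> on Linux
    if libc.mlockall(MCL_CURRENT | MCL_FUTURE) != 0:
        ok = False  # typically EPERM or ENOMEM without privileges
    return ok

print("realtime:", go_realtime(), "policy:", os.sched_getscheduler(0))
```

The hard part the comment alludes to is everything else: IRQ affinity, isolating cores, and keeping the rest of the kernel out of your latency budget.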
Freescale did very well out of the telecommunications market with their PowerQICC range without Altivec.
And if Intel do screw the vxworks side of the business that will seriously hamper a number of really quite important projects for Uncle Sam's DoD.
When Apple took over PaSemi, they canned the PowerPC chip straight away. Unfortunately for Apple Uncle Sam made them keep it going, because it had already been built into some military kit and Uncle Sam wasn't about to redevelop it simply because St. Jobs didn't like it. (And of course Apple really wanted the PaSemi staff, but they all buggered off to form Agnilux, leaving Apple with nothing. Ha!)
Same thing could happen here. If VxWorks support starts becoming ineffective then Uncle Sam might start getting very cross, and Intel might be forced to go cap in hand to the people they've just sacked. If you take on a product with very long-term support promised to customers with big sticks, don't be surprised if they wave those sticks at you.
You owe me a keyboard.
Maybe idiotic, but companies that have done it the other way round have ended up in trouble too when all their remaining staff retire...
If they're trying to save money then that's probably a symptom of the management trying and failing to grow a business. Things aren't easy out there for proprietary expensive real time operating systems.
This is probably some MBA trying to be a clever arsehole when really they should accept that the business has filled the market, especially now that Linux has pinched so much of it, including their own Linux distro. Also selling Linux ain't natural, especially if some of the "magic add ons" are actually someone else's unacknowledged open source efforts... If they want to stay in the market properly then that means taking special care to retain expertise, especially given the amount they charge. If they lose too much expertise through these redundancies and the inevitable follow-on "fuck this for a lark" departures, they may end up with no business at all.
These days I use Green Hills' INTEGRITY if I need something like that. It's even more expensive, but is very cool, and the company is privately owned by a single individual. If he's happy with his niche then there's no one else to tell him he's wrong, and that's a good thing in my opinion from a long term support point of view. If I were in the market for a properly good RTOS then I would look at these redundancies today and wonder about Intel's long term commitment to my project's support if I chose their products...
Linux has certainly had a big impact on their business. I heard that VxWorks never adapted well to multi core processors. Sure they bolted multi core support on top, but didn't do it as thoroughly as they might. Linux has left VxWorks behind performance-wise on multi core processors.
When I first started doing big embedded systems it was all VxWorks, VxWorks, VxWorks. Now it's Linux all the way, with the preempt-rt patch set being good enough for all but the most demanding of applications.
Now that Linux has grown tools like kernelshark there's not a whole lot of technical advantage in the proprietary IDE WindRiver had. In fact, Tornado was at its zenith on Sparc/Solaris, primarily for debugging. You could have a separate debugger running for each thread (well, task), and that was amazingly useful. You cannot do that even today in Visual Studio or gdb as far as I'm aware. The Windows version was rubbish in comparison.
What does he mean by ahead anyway? Google get to read all our data anyway, it's no surprise that they're ahead of the NSA!
Also, it's not exactly a great advert for Android for work if Google's own security bod doesn't rely on it for his company email on his mobile. If it's not good enough for him, how is it meant to be good enough for us?
"He noted the irony, however, in the fact that the "five countries most opposed to a national ID – the UK, US, Canada, Australia, and New Zealand"
Not quite right. In at least two of those countries the driving license is a de facto ID card. However there isn't a means for it to be widely used by the holders as an electronic ID card. I'm not sure what sort of political point he's making; most of the objections in the UK centred on the right of not having to carry it, not the existence or utility of the card itself.
Sure, for a lot of people "security" seems not to matter too much.
However, now that there's tons of adverts for NFC-pay-by-phone, there's money at stake. If there's one thing that everyone really, really cares about it is money. If people start realising that poor security = money stolen from their bank account, they will care a lot about security. Android's update anarchy is a serious liability in this regard. Apple, Microsoft and BlackBerry can say, "we've got updates properly sorted". Google cannot.
Britain was buying ball bearings from Sweden too. Used Mosquitoes to fly them out. In fact for a short while Britain was buying Swiss Oerlikon anti-aircraft guns built from German steel...
Careful, your case might get replaced by a cardboard box with "Luggage" written on it...
In some ways seeking to impose one's own laws on another country is tantamount to a declaration of war. Imposition of law is control of that country.
If MS lose this case it will be a pretty bad advert for the USA. Want to run an internationally significant company? Don't base yourself in the USA, lest your international business evaporates.
"Your comment is just more uninformed anti-American bullshit. The 14th amendment applies to the USA, not foreign countries."
Well, kindly go and tell the US judicial system that. Apparently they think otherwise.
Are you sure you aren't confusing copyright with trademarks?
Sorry, yes I was. Too early in the morning. I've had a cup of tea now.
However, not defending one's copyright for a sustained period of time is certainly bad for business. Try asking a judge for large damages when one has ignored many other instances of copying.
In this particular case it will be interesting to see who does end up owning the picture. Unless there's some sort of deal between Getty and DeviantArt it seems hard for Getty to sustain this claim.
It's not quite that simple. Copyright owners are pretty much obliged in law to actively defend their copyright. If they don't do that then their lack of action can be taken in a court case as meaning that they are happy for the work to be copied. Thus lack of action means risking losing the rights to the work.
So it leads to mad situations like this. It may well be that Getty in this specific case do not actually care at all, but the wider ramifications for their business if they do not act are in general bad for their commercial future.
As usual it's the lawyers who will win, and of course it's lawyers who create the legal problem in the first place. Thanks guys.
It is reminiscent of the boneheads who insist that an IP address represents a unique user or geographical location.
An IP address at any one moment in time does point to a specific connection point, and therefore a fixed geographic location. That's kind of the whole point of an IP address. If they didn't do that then the Internet wouldn't work...
It's solely a matter of record keeping by everyone involved (the ISPs, telcos, etc) for DHCP allocations, base station connections, etc. to be able to say where in the world an IP was.
I say was, because AFAIK there's no infrastructure for that data to be reliably queried in real time. And that's probably a good thing; criminals can be pinpointed eventually, but no one can be pinpointed all the time live. Unless they choose to leave location services on the mobile switched on...
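In essence that record keeping is just a lease-log lookup: given an IP address and a moment in time, which subscriber line held it. A toy Python sketch (the records are hypothetical; the IPs are from the TEST-NET-3 documentation range):

```python
from datetime import datetime

# Hypothetical ISP lease log: (ip, lease start, lease end, subscriber line).
lease_log = [
    ("203.0.113.7", datetime(2015, 9, 1, 8), datetime(2015, 9, 1, 20), "line-1041"),
    ("203.0.113.7", datetime(2015, 9, 1, 20), datetime(2015, 9, 2, 6), "line-2300"),
    ("203.0.113.9", datetime(2015, 9, 1), datetime(2015, 9, 3), "line-1041"),
]

def who_had(ip, when):
    """Return the subscriber line that held `ip` at time `when`, or None."""
    for lease_ip, start, end, line in lease_log:
        if lease_ip == ip and start <= when < end:
            return line
    return None

# The same IP points at different subscribers depending on the moment asked:
print(who_had("203.0.113.7", datetime(2015, 9, 1, 12)))  # line-1041
print(who_had("203.0.113.7", datetime(2015, 9, 1, 23)))  # line-2300
```

Which is exactly why the timestamp matters as much as the address: drop it and the "unique user" claim falls apart.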
Part of what SpaceX is doing is finding out where the balance point between testing and cost is.
Well, they seem to be doing that the expensive way. I don't know how much that failed launch has cost them, but it would surely have paid for a hell of a lot more testing...
It's really hard to get commercial officers to properly acknowledge risk in all companies. Generally you have to have some sort of corporate disaster before they learn the lesson properly, after which the company might not be around anyway, or they've been sacked, jailed, or whatever. Ask BP, TEPCO,
I don't think Google's search results are that solid. Have you noticed how even if you've put a phrase in quotes Google will show results ignoring the quotes? That makes you look through lots of pages of results (and hence ads) before deciding that no, they have no useful result. Not useful to me, but remunerative for Google.
Also Maps is worse than ever, full of bugs. Tried getting rid of a way point recently on a route?
Still won't work. If you say, "There's an A10 around" it'll frighten a lot of people and they'll run away, just in case.
That does not apply to the F35...
The English Electric Canberra was pretty good; it was the jet equivalent of the Mosquito. Of course, it's nowhere near quick enough by today's standards, but back in its day it was pretty awesome.
Its first showing at the Farnborough airshow stunned a lot of people. Roland Beamont made it look like a fighter for agility, speed, etc., but it was clearly a hell of a lot bigger than a fighter.
EE were pretty good at iconic aircraft; they went on to do the frightening Lightning!
Na, it'd have been the size and shape of a washing machine...
It's not really a laughing matter, tempting though a Python reference is. Britain doesn't exactly have a good reputation in mainland China. The Opium Wars were a shameful part of the UK's history.
No, we are not stuck with FAT. The only reason FAT is used is because too many people lack the imagination to see that there is Another Way.
If there were a free, well known, acknowledged and widely accepted ext file system driver for Windows then no one would have to use FAT ever again. If such a driver were available it wouldn't matter a damn what MS did or did not ship, because everyone would be using a driver beyond MS's control for cameras, etc. Whatever concerns there used to be about the inefficiency of squeezing an HDD filesystem onto small SD cards are now irrelevant given the huge capacity of even the cheapest ones.
There are ext drivers available, but they either cost money or are free but incomplete. The cost of assembling a team to do this properly is surely far less than the money all the manufacturers pay to MS to use FAT.
The CDDL license used by ZFS was carefully crafted to make it incompatible with the license used by Linux.
Isn't the one defining point of the GPL that any other licence, no matter what it says, is essentially incompatible with it?
It's a pity that this ever came about in the first place, and was entirely avoidable. All it needed was for someone to do a decent ext2/3/4 (or whatever) file system driver for Windows and then there would have been no need for FAT in cameras, mobiles, etc.
Of course, there isn't a good finished free one. If the vendors pursued by MS clubbed together to make one it would make things cheaper for them all. It's typical of the short term cost conscious thinking that a lot of companies exhibit, instead of the more ambitious we'll-win-in-the-end-we-can-beat-the-incumbent long term advice that engineers routinely give only to be routinely rejected by company boards.
I can remember having to type 'purge' (or something like that) a lot to keep within my space quota on the VAX we had at university...
I just wish Oracle would change the licensing of ZFS so it can be included with distros by default, instead of being cast out into a legal wilderness as it is now.
Well, it's up to them I suppose. It's their code, and I think everyone is grateful that they chose to share it at all. They obviously had specific goals on control and re-use that they felt GPL wouldn't achieve, so wrote a license to suit. It's not Sun/Oracle's fault that Linux is under GPL2. We can make do and mend with building our own kernel modules or getting some pre built ones.
FreeBSD has had no trouble at all adopting ZFS. There's OSX implementations, and reportedly MS briefly considered putting it into Windows. Rigid and unwavering adherence to the current GPL2 guarantees that Linux is always going to be hampered this way, which ultimately is not beneficial for the Linux community.
integrated filesystem checksumming, for example, should really be everywhere.
Well in effect it has been for a long time, though not necessarily at the level of the file system. Physical storage has had error detection / correction for a long time now.
It was only with the advent of very large storage devices that their on board ECC became inadequate for "ensuring" (there's no such thing as a guarantee) data accuracy. That's led to file systems like ZFS putting in an extra layer of ECC of their own to compensate.
Incidentally I think the characteristics of the ECC in ZFS were carefully chosen to accommodate the typical bit error rate achieved by HDDs. Getting that right in a file system design is important; just slapping in a CRC something-or-other makes no sense unless one matches its parameters to the BER of the underlying physical devices. Too much in the file system and you're wasting space and throughput, too little and the BER might be higher than desired. Of course, choosing the BER that's right for the business is another matter...
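A back-of-envelope sketch of both points: what a commonly quoted HDD unrecoverable bit error rate of around 1e-14 implies over a big read, and how a block checksum stored apart from the data catches a flipped bit (ZFS actually uses fletcher4 or SHA-256 depending on configuration; plain SHA-256 serves as a stand-in here):

```python
import hashlib

# An often-quoted HDD unrecoverable bit error rate is ~1e-14 per bit read,
# so a full read of a 4 TB drive expects a fraction of an error:
bits_read = 4e12 * 8
expected_errors = bits_read * 1e-14
print(f"expected bit errors over a 4 TB read: {expected_errors:.2f}")

# ...which is why an end-to-end block checksum, kept away from the data
# (in the parent block pointer, in ZFS's case), earns its keep:
block = bytearray(b"important payload" * 16)
stored_sum = hashlib.sha256(block).digest()

block[5] ^= 0x01  # a single bit rots on the media
assert hashlib.sha256(block).digest() != stored_sum
print("corruption detected")
```

Scale that expected-error figure across a whole pool scrubbed regularly and the odd flipped bit stops being hypothetical, which is the case for the extra layer.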
From what I can tell one of the problems in porting BB10 to other manufacturers' hardware is that the bootloader and some hardware design features are a key security component in the BB10 ecosystem.
Makes sense - no open debug ports, signed boot, who holds those keys, etc, all the things that have to be done correctly to allow BB10 to be secure too. So without those things being exactly as needed on, for example, a S5, porting BB10 to the Samsung would be a big job. At least, this is my speculation as to why we've not seen BB10 on other hardware.
However, if BlackBerry make their own Android hardware they can be in charge of all of those features for themselves, so dual boot or whatever becomes a real option without screwing up any of their security accreditations for the BB10 variant.
Being a BB10 user I won't be rushing out to buy this Android phone from them. But if you are an Androidista, it could be very good. BlackBerry are undeniably good at hardware and their keyboard is also very good (screen or hardware). They're also one of the few manufacturers out there who aren't shy of making their handset a couple of mm fatter and putting in a decent battery. This Z30 of mine lasts the best part of two days. And it's built like a brick ****house.
Various experts have been saying they'll be bust imminently for years. Hasn't happened yet.
There are some fairly influential niche users who would find it very difficult to move off the BlackBerry platform. For them it might well be cheaper to buy BlackBerry and run it as is.
"As a space geek and a car nerd, that was the best Top Gear ever. I was gutted at the end result, but kudos for the attempt - so close ..."
It truly was one of the great moments in all of Television History. Especially the bit when they put the Top Gear space stickers on the wings upside down...
"- it is abundantly clear that the current structure of Android makes it stupendously complex to create patches that reach back a few generations because that also involves 3rd parties such as phone providers for the modem code etc. My hope is thus that their patch will include a move towards a more layered model where there are not so many dependencies to address between the various parties."
It was abundantly clear from the moment Android launched in 2008. Literally every other major operating system back then already had online automatic updating available and well established. Even Google's Chrome web browser had an update feature all the way back then.
It suggests that back then Google treated Android as some sort of toy, not really taking it seriously. They created an enormous security problem for themselves and their users. Not very bright, these Google engineers and businessmen; any ecosystem, including Android, is always one major security incident away from being dropped by its users like a hot potato. Where would Google's mobile search revenue be then?
Commercially speaking they handled Android pretty badly too. By making it possible for the Chinese manufacturers to take Android, de-Googlise it and make it their own, there's a billion-strong market that Google are missing out on. And they run the same risk in India. If their intention was to make a platform to attract users to Google's ad-laden services, making that platform hijackable by other manufacturers / service providers seems like a stupid idea...
Sure, as far as Google's shareholders are concerned Android has been terrific. However, it's nothing like as terrific as it might have been had they found a way to have full control over Android. Fortunately for Google, shareholders mostly care about relative performance, and there MS have obliged by being woeful... That's very fortunate for Google for the following reason.
MS's basic model is a standardised hardware architecture that any manufacturer can make, allowing MS to push out standard binary blobs to all users for updates, etc. And that works, generally speaking. All Windows mobile phones get updates, just like Apple, BlackBerry, etc.
Had MS done a better job of making WinPhone appealing, and done so a lot earlier, MS may well have very quickly turned it into a big and enduring success.
But they didn't. Google easily slotted into a good second place (profits-wise) behind Apple, meaning they could satisfy their shareholders. Being a poor third to Apple and MS would have led to grumpy shareholders.
What could possibly go wrong indeed.
Quite a lot.
Having Airdrop wide open like that is equivalent to running an unsecured WiFi network. You're held responsible for the traffic that passes through it. So if someone is using your WiFi for downloading kiddie porn it's your problem to prove it wasn't you when the police come knocking. Difficult.
So if some horrible person sent kiddie porn to an open Airdrop iPhone, that phone now has illegal content on it. The owner would then either have to
1) destroy the phone immediately,
2) hand it over to the police immediately with the image intact (the right thing to do, hopefully the cops know what Airdrop is...)
3) or take a risk that their phone at some point later in time is not forensically examined and the deleted image discovered lurking in the file system somewhere.
If 3) did happen it would be a bit late to claim the image wasn't yours and had arrived unwanted through Airdrop. You'd then have that charge added to whatever else was on the rap sheet to have caused your phone to be in the hands of the cops in the first place.
OK, so that might be a low risk, but it would have a high impact on your life.
Maybe this can be tweaked into another IT sponsored pub meeting...
Alas there was quite a lot of 16 bit code lurking inside OS/2 :-( A lot of thunking was going on inside.
I too am old enough to remember the 80186. Research Machines (RM) in the UK used them in their schools-focused PCs in around about 1986? I remember that they ran a slightly wonky version of DOS...
Not sure about being a standard question. After all, you can always have bacon with anything, or at least fry it on a server if needs be.
Microsoft aren't the only outfit forcing unwanted things on their customers.
Whilst not engaging in the practice to quite the same degree, RedHat are busily driving the Linux world in a direction that is not wholly acceptable to a very large proportion of the community. Systemd is probably going to become unavoidable at some point. At least RedHat's GNOME 3 looks like it might descend into irrelevancy (it is busily disappearing up its own pretentious arse), with Cinnamon being a prime candidate to replace it.
And as for Apple, well if you want to stay secure you have to take the new versions of OS X and iOS. Which doesn't always work out so well for the hardware and applications you already have... Though I suppose if an organisation is swanky enough to have gone the Apple route for its corporate IT then it probably wouldn't blink at all when the IT manager comes in and says that they have to replace all the hardware or applications to avoid being wide open to a hack.
"Me, I am a Linux/Android maven. If I can't see the source code, I don't trust it! "
You've got access to Google's proprietary source code for their proprietary blobs like Google Play Services that they add to Linux to make Android? Care to share that with us?
"Simple as that."
So it would seem.
...is all it takes.
A lot of this goes back to the US patent office. If they had been more rigorous in checking the patents they actually awarded for novelty, prior art, etc. then none of this would have ever happened.
Companies are more or less obliged by their shareholders to seek patents, and to then defend them when they are awarded. A company's board that doesn't do that can get into serious personal legal difficulties with their shareholders as they would not be doing their utmost to maximise shareholder value. From a board's point of view, it's probably safer to sue for patent royalties and lose than not sue.
With the patent office handing out patents like free candy all of the same flavour, this situation was bound to arise. I wonder if they can be sued?
It is indeed the interconnect that matters. The interconnect on Fujitsu's K computer is superb, and is wholly responsible for that machine achieving its very high sustained performance.
The problem is that some important compute loads are not infinitely divisible. And even for those problems that are highly parallel there is a law of diminishing returns. Using smaller and smaller CPUs means that the interconnect has to do that much more to compensate; you've got to get data in and out somehow. Ultimately the computer becomes all interconnect and comparatively little compute. Losing access to cheap high performance CPUs would make the interconnect problem for Cray, etc. a whole lot harder to engineer.
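That law of diminishing returns is just Amdahl's law: if even a small fraction of the job is serial work or communication overhead, piling on CPUs hits a hard ceiling. A quick sketch (the 5% figure is purely illustrative):

```python
def amdahl_speedup(n_cpus, serial_fraction):
    """Amdahl's law: speedup on n CPUs when a fraction of the work
    (serial code plus interconnect overhead) cannot be parallelised."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cpus)

# With 5% of the job stuck in serial/communication, more CPUs buy less
# and less -- the ceiling is 1/0.05 = 20x no matter how many you add:
for n in (4, 64, 1024, 65536):
    print(n, round(amdahl_speedup(n, 0.05), 1))
```

Hence the attraction of fat single threads: every bit of per-CPU grunt shrinks what the interconnect has to make up for.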
AFAIK the K machine achieved a pretty good balance between interconnect and single thread performance. Its mean/peak performance ratio is pretty close to 1, something you can achieve only if there is adequate interconnect performance for the CPUs used. Fujitsu built the interconnect right into the CPUs, which is expensive but a good way to go. I don't know what Cray do. Looking at the mean/peak ratio for some of the higher-up entries in the Top500 list, you'd have to conclude that they have sub-optimal interconnects.
Interconnects themselves are not cheap to develop, and it will all become Ethernet one day anyway. The SerialRapidIO standard can't go anywhere because the market for it is too small to fund the development. Same for Fibre Channel. Speaking purely hypothetically, how long before PCIe becomes too expensive to make it compete with Ethernet? Several lanes of 400Gb/s Ethernet sounds a lot faster than several lanes of PCIe... A 400Gb/s Ethernet switch chip is going to cost a huge sum to bring to market, and so would a comparable PCIe crossbar; can the world afford to do both?
Nice to see Cray making money.
My worry is that Cray and the other HPC and high performance embedded outfits all depend on companies like IBM, Intel and Fujitsu making stupendously powerful CPUs. If IBM, Intel or Fujitsu decide to stop developing and making them, then where do companies like Cray get their CPUs from? They cannot afford to develop their own, the buyers of these supercomputers cannot afford to pay the development costs either.
I know there's GPUs out there, but they just don't fit every computational problem out there. We will always need a fast CPU.
Given that we all kinda need HPC to carry on (climate modelling, protein folding, etc), what can we do to safeguard that other than to keep buying Power/X64/Sparc based servers with large CPUs in large numbers? I like ARM, etc, but if they came to dominate the server market too (and they're trying, and may succeed), where does that leave the niche guys like Cray and their customers who really need fast single thread performance?
Biting the hand that feeds IT © 1998–2018