Re: Is there any way...
Trouble is any kind of reasonable unlocking scheme isn't going to be very compatible with taking snapshots in those fleeting moments of inspired opportunity.
"The answer to bad speech is not censorship, it's more speech calling out fake news," he said. "We need to spread the idea that critical thinking is important."
Er, am I alone in thinking that that's an ambitious aspiration?
Also I'm not entirely sure what fake news is, or rather more specifically, what real news is. People wring their hands about the traditional press and its 'real news' being eroded to irrelevance by online fake junk, etc, but then again the traditional press includes the Daily Mail, the Murdoch-owned newspapers, the Telegraph, Express, Guardian, etc. We 'know' that a lot of their output is designed to suit their owners' views and is far from being 'true'.
Even broadcast news is questionable - in the UK it's pretty much the same stories in the same order with the same-ish spin no matter which channel one tunes into. TV journalists are told by their editors what sort of story to get, and they choose what interviews with "members of the public" are actually used. And even if one thinks that the news channel is playing it straight, one often has to wonder why so many placards are written in English and displayed in so many protests in so many countries where English is not widely spoken.
I've long ago concluded that the only reason something is broadcast or printed or Web published is that someone somewhere wants you to absorb their point of view to further their own agenda. Editors mostly, press officers too. The difference these days is that almost anyone can be an editor online, and social media 'trending' popular content brings it to the fore. Fake news? 'Real' news outlets are just as capable of deliberately misconstruing or misrepresenting or imagining 'facts' to suit the agenda of the owner of the newspaper, TV channel, etc. For example, look at the BBC's abysmal role in spreading the demonstrably false scare stories about the MMR jab. Balance? Balance my arse - sensational stories bring high viewing figures, and the viewing figures are the only real motivation. On this particular topic, Brian Cox pointed out that the BBC's 'balance' weighed the entirety of peer-reviewed scientific wisdom against the opinion of a man not accepted as an authority in the field, with no peer-reviewed papers to back him up, and judged them to be equal. Oh dear.
Anyway: almost no one involved in the creation and dissemination of 'real' news is truly an honest broker. So why is 'fake' news significantly worse?
About the only exception is Private Eye, which is pretty good at putting out mostly facts (rare mistakes) and mostly letting them speak for themselves. And the tech press like El Reg does a good job of sticking to tech news.
...when and on what planet did anyone think that information was going to treated well and respected?
Well, all across Europe there are quite strong data protection laws. Breaches like the ones reported in the US would result in some stringent fines. In the UK this can be up to a £5000 fine per data item lost / breached / exposed. Consequently megabreaches are potentially veeeery expensive (millions x £5000), and that rather focuses senior management attention on data protection within a business. Basically you don't collect data unless there's a genuine business need, and you don't make it available willy-nilly within a company, etc.
But Uber is USAian, so like many other companies they're rather uncaring about their customers' data, what they collect, etc.
I had no idea they were designing new space suits. I'm somewhat mystified as to why existing designs aren't acceptable. Do you suppose it's simply a matter of IPR ownership?
Oops. For 'billion', read 'million'. Hadn't had an early morning cuppa.
Anyway, to return to the point, for any half decent payload (big TV satellite, big comms sat) the owner would have paid anything up to $1billion building it. ENVISAT cost a reputed $2.9billion. Uncle Sam will quite happily spend $1billion on a spy sat. The one SpaceX blew up a few weeks back was a comparative tiddler money-wise, only $200million. There's a good book here, page 99.
In comparison, even an "expensive" launcher like Ariane will be ~$100million ($150million if one's satellite takes up the whole launch payload), Wikipedia's page is quite informative on the matter. And SpaceX are seeking to chop costs like that into tinier pieces.
It's been a long time since launches were more expensive than building big satellites.
Manpower. SpaceX seem to have no ambition to reuse a booster more than 1 or 2 times. That means that for the foreseeable future they will have to retain the skill base, currency, and capacity for something close to full rate production (assuming that a few don't make it to landing as planned). These are not talentless people who can be picked up off the streets at short notice, these are hard-to-find guys and girls who once you've got them you have to retain them, pay them, keep them busy, etc. It's not like they're necessarily jack of all trades either - they're specialist welders, machinists, etc, all highly specialised in their individual fields and not easily transferable to different roles. A lot of that skill base may very well be vested in suppliers' workforces, but it's the same problem all over again and the costs of dealing with it will be passed on to SpaceX and anyone else buying from the same supplier. Same for the factory - whilst there's a single final assembly building, there's a myriad of smaller plants all over the place that have to be kept operational even if they're to build just one booster per year.
And on top of that SpaceX would need to have the refurb staff too to recycle the ones that did make it to landing. If, and only if, SpaceX can re-use their first stage many many times (e.g. 50 times) is it worth building a fleet and then standing down the production line.
This problem afflicts every large engineering production project. Fighter jets - the unit cost goes up as the government's order shrinks for exactly the same reason. To get an empty cardboard box from Lockheed or SpaceX would cost almost as much as putting an F22 or Falcon 9 inside it. The F16 production line is about to close, and once gone it'll be veeery expensive to bring it back should anyone want a new build F16. The overhead of having the means to produce things this complicated yet not making them is almost as expensive as making them regardless.
And since you mention cars, it costs Ford / GM / Merc / etc. around about $1-6billion to develop a new car. Which is why there's so much platform sharing going on these days. Once they have developed it and set up the line they can churn out millions of cars for a couple of thousand dollars each (if that), but that first one is $1-6billion. Take a look at this.
Anyway, I'm only reinforcing SpaceX's own pronouncements on the matter, covered previously here on El Reg and elsewhere in the press. Take a look at this: SpaceX were talking about a 30% discount, tops. Even that seems ambitious to me.
So let's see... Say SpaceX charge $50million for a fresh launcher, 30% discount for a second hander = $15million saving. $15million/$200million = 7.5% of the price of the satellite that got blown up. That's pretty small beer. In the grand scheme of things, if one could ensure the success of an investment in a $200million satellite by spending an extra $15million on a fresh launcher as opposed to "taking a chance" (no-one really knows yet what reliability SpaceX can achieve for re-used first stages), one (or more likely one's insurer) probably would spend that extra. If it's Uncle Sam who's just spent $1billion making a new military satellite, $15million is really small beer indeed.
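Sticking those numbers into a quick sketch (they're the illustrative figures above, not anyone's actual price list):

```python
# Back-of-envelope sums using the illustrative figures above - not
# anyone's actual price list.
new_launch_price = 50e6   # a fresh Falcon 9 launch
reuse_discount = 0.30     # the touted discount, at best
satellite_cost = 200e6    # the payload being put at risk

saving = new_launch_price * reuse_discount
saving_vs_satellite = saving / satellite_cost

print(f"Saving: ${saving / 1e6:.0f}m, "
      f"i.e. {saving_vs_satellite:.1%} of the satellite's cost")
```

Small beer indeed next to a $1billion military bird.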
They will get to re-use one of their first stages one of these days, but the money saving isn't as attractive as all that to launch customers (or their insurers). If SpaceX can show that it works "as good as new", which I'm sure they'll manage to do one way or other, then it would become a no-brainer. But just at the moment it'd be a brave customer to bet their own enterprise on a small saving.
I'm guessing that they're having trouble convincing a customer to go with it. For various reasons re-using one isn't as cheap as all that, and so the financial incentives for a customer aren't that great compared to taking a brand new one. If they've spent a few $100million on their satellite, paying a few million extra for a brand new launcher rather than risking it all is almost a no-brainer.
SpaceX may have to launch one or two on test flights to show that it can be done.
The reason re-use isn't as cheap as all that is because SpaceX still have to pay the fixed costs (manpower, supply and support contracts, facilities, heating, cooling, lighting, land, etc) of maintaining the ability to manufacture new ones. That fixed cost is far more than the price of the materials that get thrown away every time they destroy one. Only if they become massively re-usable can SpaceX economise on those fixed costs.
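A toy model of that argument, with every number made up purely for illustration - the point being that modest re-use barely dents the per-launch cost while the production line has to stay open:

```python
# Hypothetical annual cost model for a partly-reusable booster fleet.
# All figures are invented to illustrate the fixed-cost argument.
annual_fixed_costs = 300e6    # factory, staff, suppliers - paid regardless
materials_per_booster = 30e6  # what literally gets thrown away per new build
refurb_cost = 5e6             # inspecting and refurbishing a landed stage

launches_per_year = 20

def cost_per_launch(flights_per_booster: int) -> float:
    """Average first-stage cost per launch at a given re-use level."""
    new_boosters = launches_per_year / flights_per_booster
    production = materials_per_booster * new_boosters
    refurbs = refurb_cost * (launches_per_year - new_boosters)
    return (annual_fixed_costs + production + refurbs) / launches_per_year

for flights in (1, 2, 50):
    print(f"{flights:2d} flights per booster: "
          f"${cost_per_launch(flights) / 1e6:.1f}m per launch")
```

Even at fifty flights per booster, the fixed overhead is still sitting there; it only goes away if the line can be stood down.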
That's probably what is going on, but that should be the kind of thing the devs assess in debug back in the shop, not live on everyone's phone where its purpose may be misconstrued (as I did at the top!).
After all, discovering that one's app's power consumption is terrible from live telemetry back from users is bad for business and makes it look like one couldn't be bothered to check it oneself because shipping.
Of course they're going to charge more money to someone late at night whose mobile is close to switching itself off!
In fact, the more pointless computation the Uber app does, the more battery it uses up and the more likely the "customer" is to be desperate to get home.
Sounds very similar to VW's emissions controls software!
I know a pub in deepest darkest Devon that had no cash register, just an ancient wooden box. The VAT man was unhappy with this, certain that they weren't paying enough VAT.
So in came a modern electronic till, and a proper sales record was presented to the VAT man the following year. Turned out that they'd been massively overpaying their VAT for years, and got a healthy rebate. Drinks all round!
Ask yourself this: how would the average Chinese citizen react to hearing their government actually benefited a foreign company over one of their own? Outrage! (and as calculated by the party).
It's more complex than that. Huawei, a Chinese company that does actually come up with good ideas of its own and makes good money selling them all over the world, has problems with their products being faked and flogged by other Chinese companies. There's not much patriotism in play, it's all about making as much money as quickly as possible no matter what the risks or consequences. Hence the various food contamination scandals, etc.
The communist government is quite capable of controlling what people think - it controls the media. A few years ago they stirred up some anti-Japanese sentiment for some crazy reason or other, riots, Japanese cars being burnt in the street, etc. Whatever it was all really about escapes me (there was some WW2 angle used as an excuse) but it went too far; the Japanese auto manufacturers started closing their factories in China, put a lot of people out of work. Funnily enough the riots stopped just as quickly as they'd started but it was too late. A lot of Japanese manufacturers decided that operating in China was more hassle than it was worth.
What makes this all very frustrating is that there are already perfectly good solutions to the leap second problem.
If OS developers wrote their OSes to use International Atomic Time instead of UTC as their base timescale, the OS would never need to deal with a leap second.
And there's perfectly good libraries for converting TAI to, e.g. UTC that already handle leap seconds, can do accurate time calculations, etc. One such example is the SOFA library from the IAU.
Like everything else it cannot predict leap seconds, but an OS is already well placed to receive library updates as part of its regular maintenance. Why not this one too? And if every developer used TAI instead of UTC to represent time values then all their calculations would always be correct, with conversion to UTC for display being the only thing that'd be wrong in the absence of updates.
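For illustration, here's what that table-driven conversion looks like in miniature. The offsets are the genuine TAI-UTC values for those dates, but the table is heavily truncated; a maintained library such as SOFA carries the complete list and gets updated like any other package:

```python
# Sketch of TAI -> UTC conversion via a leap-second table.
from datetime import datetime, timedelta

# (moment the new offset takes effect, expressed in TAI; TAI-UTC seconds)
# Truncated for illustration - a real library ships the full list.
LEAP_TABLE = [
    (datetime(2012, 7, 1, 0, 0, 35), 35),
    (datetime(2015, 7, 1, 0, 0, 36), 36),
    (datetime(2017, 1, 1, 0, 0, 37), 37),
]

def tai_to_utc(tai: datetime) -> datetime:
    """Convert a TAI timestamp to UTC using the (truncated) table."""
    offset = LEAP_TABLE[0][1]
    for effective, tai_minus_utc in LEAP_TABLE:
        if tai >= effective:
            offset = tai_minus_utc
    return tai - timedelta(seconds=offset)

# Arithmetic done in TAI is always correct; only this display conversion
# can go stale if the table is out of date.
print(tai_to_utc(datetime(2017, 1, 1, 0, 0, 37)))  # -> 2017-01-01 00:00:00
```

Note that `datetime` can't represent the leap second itself (23:59:60); a full implementation has to deal with that too.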
The problem as always with something like this is clinical responsibility. If you make something that can tell whether or not you have some condition or other, it is saying either "yes, you have it", or "no you don't".
The problem faced is that if your box is even slightly subjective (such as a neural network output), you cannot really afford to have it say anything substantive. So the answers it has to give have to be toned down to "maybe, go see a doctor", or "maybe not, go see a doctor". In which case, what's the bloody point of not just going to see a real, qualified human doctor in the first place?
Not toning the answer down and giving a straight yes/no answer means you're accepting clinical responsibility for the accuracy of the answer, and the resultant liability for those occasions where your box's answer turns out to have been wrong. A false positive upsets patients, and may lead to damaging and inappropriate medical intervention. A false negative may kill them. If you've made claims of complete reliability, you take the blame for that.
So whilst their system might have a strong performance from a statistical point of view, it doesn't amount to anything practically useful at all unless Google are actually willing to accept the liability for the system's performance. I can't see them doing that.
The same's true for a human doctor, but they can get insurance cover.
Perhaps not all of this code is safety critical. Perhaps most of it could be banged out over a weekend, in the usual manner.
That's basically what the self driving car types are doing. Their approach is to get enough cars running incident free for long enough that they can get approval through mere statistical argument.
It's deeply worrying that this approach may actually be accepted by regulators. Unreviewed, unprovable safety critical code with no triplicate redundancy? No thanks. A statistics-based approval ignores the possibility of a systemic date-sensitive bug lurking unnoticed in the code base that will do something very damaging at some point in the future.
Fortunately it seems that their best efforts so far are a long, long way from being statistically acceptable.
In fact, the Japanese built an entire airport on a man-made island that is sinking at many centimetres per year... and all of the structures there are built on jacks. It's actually quite remarkable. So if something sinks more than expected, that doesn't mean the safety margin has been exceeded.
Kansai Airport has sunk far more than expected, and there is some nervousness that it won't stop before it sinks beneath the waves. The building jacks keep them level, but won't keep them dry.
The loss of Kansai airport would be hugely problematic; they're planning on closing Itami airport in nearby Osaka, leaving just Kobe's small airport as backup. Basically the whole region needs Kansai airport to stop sinking.
Data#3 look pretty dumb here. No employer anywhere is entitled to know everything about one's personal life and history. Sacking them subsequently when they've already asked whether the company needs to know of specific aspects about their personal life is pretty crazy.
They should ask themselves, would they sack someone if they found out they had liked the band Genesis? Many today would consider such a past to be fairly appalling, but it's not a sacking offence. Obviously if they're still a fan, well that's different...
I know, but the first flight of the first A350 was just as uneventful. And the A330. And pretty much everything else that has been flown in recent decades.
Back in the old days it was definitely more exciting. Bill Park, Lockheed's legendary test pilot, demanded and got extra (extra! Ordinary danger money was already part of the package) danger money for taking Have Blue (the F117 prototype) up for the first time, it was such a ramshackle assemblage of second hand junk. Hugely successful though.
There rarely seem to be any challenges these days.
In the old days the test pilots would strap themselves in, light the fires and take the aircraft up for a quick circuit and get it back down ASAP before one of the many items that are clearly wrong / broken / rattling / wobbling / leaking / overheating brings it down in a smoking ruin of bent, shattered and charred aluminium. A lifetime of excitement compressed into 5 minutes of sheer exhilaration, possibly ending in some parachute time.
Nowadays it's normal to be able to take the plane up for hours on the first flight and fully explore the flight envelope. One wonders what the remainder of the flight test campaign is really for these days.
Of course, this is a good thing.
Anyway, congrats to Airbus and the wider aviation community for getting it right so often. It's become so normal that we moan about delays.
They kinda did sort their shit out back in 2008/2009. They showed off Win 7 + Office recompiled for ARM at a trade show, and everyone thought "MS is really getting to grips with ARM". And then instead they did Win RT, Windows 8, and it all turned into a debacle...
Windows has traditionally been reasonably re-targetable; they just failed to spot where things might go with ARM. They could do it right now, but with UWP targeting the non-existent mobile platform (Win 10 Mobile) they're still missing the point.
Apple are dropping their WiFi product line, Google are busily trumpeting their own that's not on sale quite yet?
One's gotta wonder what Google see in it that Apple don't. Slurping opportunities?
America's heading towards recession. That'll do no one any good.
Calling for regulated security on IoT devices is, well, likely to have consequences more far reaching than anticipated.
For a start, when is a CPU + memory + NIC + software an IoT device, and when is it just a computer or smartphone? They're all potentially involved in home automation, especially if you consider the app as being part of the IoT system.
To illustrate the difficulties of trying to make a legal differentiation between IoT and non-IoT, consider the Raspberry Pi. IoT device? Yes. Computer? Yes. Router? Yes. Server? Yes.
So you cannot reasonably apply a bunch of regulations to an IoT device that then don't also apply to smartphones, computers, home routers, smart TVs, back end services - the entire Internet. Thus if the law required IoT devices to meet minimum security requirements, receive regular updates, etc, it'd have to apply to everything else too, otherwise there'd be no point.
That would be a problem for Android in particular.
Perhaps they're getting nervous of the direction in which Google are taking Android. The version on the new Pixels seems to be hinting at Google heading in a Google-only direction. If that's true Samsung will need a plan B.
Tizen doesn't seem to me to be a very good plan B.
It's quite interesting to see what the Chinese have done with Android. They've successfully taken it forward and, with a degree of government backing, supplanted Google's own services with their own domestic providers (Baidu). It's a flourishing ecosystem, and not being involved in that makes it very difficult now for both Google and Apple to get market traction there. China succeeded because it's big enough that their own local monopoly could do its own thing and ignore Google.
Doing another "China" would be a better plan B. Could Samsung do the same thing, provide their own replacement services? Probably not. Google's walled garden already encompasses most of the Western World. There needs to be government enforced separation of OS and services and open competition between providers (aka Break Up Google!) before another mobile OS can survive. BlackBerry have tried hard (excellent services, excellent OS, market failure), and it didn't stop the juggernaut. Microsoft too. Even Apple are losing out. BlackBerry may survive because, in their recent adoption of Android, they've decided to climb on board rather than be squashed by the tyres again.
Musk made (to my astonishment) an interesting point. He noted that if all manufacturing were automated, we'd have to pay a basic wage to everyone just for living. That'd be especially true if all agriculture were automated too.
That'd be either extremely unhealthy, or very far sighted, or both.
A trade war is no different to a military war; you can't win, you just hope to lose less badly than the other guys.
The move of manufacturing from the USA to China stems from the fact that company chairmen and directors have a fiduciary obligation to maximise shareholder value. This is not a trivial thing, it's deadly serious, especially in the USA. In the USA you can end up in deep personal trouble (e.g. jail, ultimately) if, as a director, you fail in that duty through gross negligence.
One big way to improve value is to make manufacturing cheaper. China was cheaper than the USA.
Now if, 20 years ago, a company chairman, CEO or director had refused to relocate manufacturing to China they'd have been laughed at, probably sacked, or (depending on how they'd 'lied' about the reasons not to) sued / prosecuted / jailed. Saying back then that concerns about long term geopolitical consequences were a reason not to off-shore was simply not a plausible argument, and probably detrimental to one's own future.
Result? China became the world's factory.
Many shareholders all over the world are terrible at taking a long term view - get rich quick or get outta here. They're to blame, they're a scourge on our society.
OK, who are they? Ah, it's our pension schemes. That means practically all of us are ultimately to blame.
The problem with Trump's strategy (let's call it that for the moment) is that imposing trade barriers effectively forces there to be a great reckoning. Just how much money does America have? How well has America invested in the education and training of its own people? How willing are Americans (remember that he's kicking out the illegal immigrants too) to do boring, low paid manufacturing work? Are they prepared to accept that a made-in-America microwave oven of any sort really should cost $400, not $50? None of these questions have good answers...
...That a lie can run round the world before the truth has got its boots on.
The difference today is that the lie gets a helpful push from the likes of Google and Facebook who then glue the truth's boots to the starting blocks.
CERN's data shows how wobbly the planet is, really. They've previously reported correlation between beam position and waves crashing ashore in the Bay of Biscay. So if nothing else they can give worthwhile surf reports for the Atlantic coast of Europe!
Speaking as an RF engineer, it sounds perfectly plausible to me. Though I suspect they need the phone to be relatively still for it to work.
Another way to defeat it is to not use public WiFi.
And it doesn't force them to install any of their apps either. As proven by Amazon, hardware makers are free to take default Android and avoid installing any of the Google apps.
You're forgetting the role of Google Play Services. Android app writers are forced to write their apps to use these services if they want access to things like mapping, messaging, etc on mainstream Android. Google do not open source Google Play Services. So they're not on Amazon's version of Android.
App developers are therefore limited in what their app can achieve if they want to target both Google's version of Android and Amazon's without having two separate code bases. So guess what: the app developers target their software for Google's version of Android with Play Services, because that has the biggest market share.
And because Apps are reliant on Google Play Services, the phone manufacturers have to succumb to Google's terms and conditions if they want to make a phone that is marketable.
This strategy has worked very well for Google, but it backfired in China. Baidu was big enough and quick enough to be able to displace Google's Play Services with their own equivalent. There's 1billion+ Android users in China, none of whom are paying into Google's coffers by using Google's services. It's backfiring for Apple too - incompatibility with Baidu's services is becoming a market killer in China, no matter how shiny your phone is. In many ways Baidu have outstripped Google's services, for example offering payment systems that work widely long before Google Pay.
Some OSes like VxWorks have long had nice facilities for post-mortem debugging. With that you can write your software to log various things in a circular buffer, as well as a ton of OS / task runtime scheduling data. If it ever goes bang you simply connect the debug environment (or extract the buffer content somehow) and see what had happened in the time leading up to the crash. Very nice. And being a nice, tidy, tiny OS makes it well suited to an IoT device.
And you can achieve similar results with Linux these days; stuff like FTRACE and kernelshark spring to mind. Mind you, Linux needs more resources to run than VxWorks, and that matters a lot for price-conscious consumer products.
These are all nice things to have, but in microcontroller land we don't really get an OS. We know we want that kind of post mortem capability, but it's a pain to have to develop it all oneself.
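The core of such a facility is nothing exotic - a circular buffer of timestamped events, read out oldest-first after the crash. A sketch of the idea in Python for brevity (on a real microcontroller this would be a fixed array placed in a noinit RAM section so it survives a soft reset; all the names here are made up):

```python
# Toy post-mortem trace buffer: keep only the last N events, so after a
# crash you can dump the run-up to it in order.
class TraceBuffer:
    def __init__(self, slots: int = 64):
        self.slots = slots
        self.buf = [None] * slots
        self.head = 0  # total events ever logged

    def log(self, timestamp: int, event: str, arg: int = 0) -> None:
        """Record one event, overwriting the oldest when full."""
        self.buf[self.head % self.slots] = (timestamp, event, arg)
        self.head += 1

    def dump(self):
        """Return the surviving events, oldest first."""
        n = min(self.head, self.slots)
        start = self.head - n
        return [self.buf[i % self.slots] for i in range(start, self.head)]

trace = TraceBuffer(slots=4)
for t in range(6):  # log 6 events into 4 slots: the first 2 are lost
    trace.log(timestamp=t, event="tick")
print(trace.dump())  # the last four events, timestamps 2..5
```

The RTOS versions add the valuable extras: logging from interrupt context safely, and recording the scheduler's own task-switch events alongside yours.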
Given that a lot of iOS development seems to be done by people on hackintoshes, it would be bad news for iOS popularity too. Less app development = a reduced iPhone ecosystem.
"He's right, up to a point (and I've known him personally since 1993). But he's wrong too: you can't stop people inventing new ideas;"
Well, we kinda do stop people inventing pointless new protocols and stuff, but only by ensuring they don't profit from them. For example, if some wannabe IoT startup creates a whole load of proprietary stuff, it's highly unlikely to develop into the next Apple or Google. In fact the problem we have is that there already is an Apple and a Google, they have created their walled gardens, and no one else can get a piece of the action.
From the article:
"While diversity in approaches is inevitable and valuable, too many options damages interoperability," Callon observed according to a write-up of the talk in the IETF's most recent newsletter. "We have to be a little concerned about creating too many options because some vendors implement some, while some vendors implement others, and suddenly we don't have interoperability."
He's underdone it there; we should be gravely concerned about the decline of inter-operability. We have a lot of low level standards that work (http, ip, etc), but compatibility with those is not enough to ensure level competition between clouds and online service providers. A lot of new stuff has been built in recent years with the deliberate intent of walling in customers. Part of the problem is that no service of a brand new type can be inter-operable; it is inevitably in a class of 1. However as soon as someone else starts up a similar service there is zero value to the original provider in being inter-operable. And government is too lacking in technical understanding to see that these walled gardens are going to cost the consumer a lot of money. It's practically a licence to gouge the market. Look at the "Apple Tax" you have to keep paying if you want to retain access to all that music you've bought in iTunes...
The industry has long since learned that bringing a new service type to market first is key; quality doesn't really matter up to a point, but being first does. Lock those users in.
And of course 'BlackBerry' means two separate things. There's BB10/QNX, and then there's the Android based ones.
BlackBerry must be continually annoyed to be dismissed as irrelevant when they have what is probably the best spin of Android out there.
Say, just for one minute, that the new Macs were fantastic in every way apart from this touch panel, then maybe we'd put up with it and Apple would deem it a commercial success (even though we all hate it).
The risk is that Apple identify it as the sole reason why no one is liking the new Macs, drop it, and then make everything else even worse than it is at the moment. That might be terminal for Mac in general; if Apple cannot identify the reason they're not selling they may falsely conclude that a Mac (of any design) won't sell in the future, and give up.
It's kinda the same in the PC world. MS think Win10 is ace, it isn't, so we're not buying PCs. Our old Win7 boxes are fine, so we buy nothing. Cue lots of talk about the terminal trajectory of the PC market...
Someone somewhere is going to have to make a usable, affordable PC class computer. We're not going to develop and write everything from this point onwards on mobile phones.
...Qualcomm can put the price up on theirs!
Round our way the installers are taking the opportunity to do a quick survey of your electrics and gas appliances.
1990s houses here, so of course they're finding "lots to be done", and their reports are designed to scare people into getting lots of unnecessary work done. Some people have fallen for it...
"Why would you need C/C++ to make a website safe?"
"So all the 3rd party malvertising sites can run their malware more quickly in the browser? Yeah, progress at last. Sounds safe enough to me."
"You'll have Norton running natively in the browser next. Oh well, at least that'll speed things up a bit."
There's already been proof of concept HTML5 viruses that reside solely in the web browser. Web browsers are simply yet another execution environment, and will / are going through the same phases of bug discovery and fixes as, say, an OS. The more features that get added to that execution environment the worse it'll get. The more such features are used, the less relevant the sandboxing becomes; the sandbox merely prevents web code from interacting with the host OS in certain ways, but that's less relevant if the place malware wants to run is actually in the web browser's own execution environment itself, inside the sandbox.
There's plenty of opportunity for the web browser to use different tricks to ensure one website cannot interact with another's data (e.g. encryption of persistent data, which is in fact what they do), and this will make it considerably harder for malware to succeed. However, if it gets out of hand then there may have to be things like Norton inside browsers. Eeek! On the irony scales, that'd be a full set of tens.
Disagree. Google patched it (side question - how does Google always seem to know more about Windows than Microsoft?). They let Adobe and Microsoft know there was a major issue. Adobe said "Shit, thanks for the heads up" and fixed it days before any public announcement. Microsoft sat on their hands and did nothing for 10 days, three days past the Google security standard for their Chrome devs, then released the info to their Chrome developers...
Get real. All Adobe and Google have done is block use of that system call in their sand boxes. They've not fixed anything, they're simply ensuring that it can't be exploited through Flash or Chrome.
Once you have a sandbox, that's a far easier job than actually fixing the bug in the OS itself. For comparison look how long it took Apple to fix their latest (stupid, self inflicted) OS kernel flaw - months. There's probably good reasons why MS cannot fix the bug quickly.
Personally speaking I don't see that Google or Adobe had any real choice. If the bug is being exploited then we have a real problem and they're in a strong position to mitigate against it, fast. But in doing so they're inevitably advertising the existence of the bug. So they may as well just come out with it and give the rest of us a heads up.
In the round it's probably better to mitigate for this flaw in browsers ASAP because that'd always be the primary exploitation route. Gives the rest of us a problem though.
As others have pointed out, if things like VAT become applicable then it'll kill Uber's business model.
Here's a few other things they'd need to pay for: HR, recruitment, third-party liability insurance, offices to house those functions, staff background checks, car maintenance, car purchase, MoT checks, VAT and all the other taxes, and minicab operator licensing, to name but a few. And they'd then have to operate like a minicab service, not like a black cab picking up randomly in the street.
After all, if they employ the staff then they're an employer, and are therefore liable for the full business costs of employing them and running the business.
In short, I can't see Uber surviving this in the UK. I can't see their appeal succeeding, given the damning and unequivocal nature of the initial judgement. And with their costs rocketing skywards there's no profit to be made.
So whilst we're in the mood for overturning distasteful USAian business and employment practices that have been found to be riding roughshod over our own customs and laws, how about someone finally getting round to challenging Apple about their refusal to honour the statutory 2-year warranty on consumer electronics?
Qualcomm may want to close down some of NXP's product lines, but they'll struggle to do that completely. NXP have Freescale, and hence chips like the PowerPC 8641D, etc. These have been widely used in many really quite important military systems in the US, and Uncle Sam won't take kindly to the supply being cut off. Some parts have found a use in the F35 program, and there's no way on earth anyone will consider doing a silicon swap anytime soon.
That happened to Apple when they bought PA Semi all those years ago. Apple bought the company for the staff, not the product line, and promptly announced discontinuation. The US gov told Apple that they had to keep PA Semi's PowerPC SoC chip going, because it had already been incorporated into some fairly significant military systems. To cap it off, the staff (there were a lot of ex-DEC silicon engineers involved who had started up PA Semi in the first place) didn't like being Apple drones and quit, setting themselves up as yet another startup called Agnilux, which then got bought by Google.
At the time that SoC was unmatched by anything else on the market, and it would have been impossible to migrate the designs onto alternative silicon. By the standards of the day it was phenomenal: dual core, 2 GHz, 64-bit, dual Ethernet NICs, fast memory interface, well suited to real-time applications, lots of GFLOPS, all in less than 13 watts.
The point is that no one will want a self-driving car in town, nor one with an autonomous anti-collision self-braking system that won't let the car run over a pedestrian. If every vehicle has these things then pedestrians can safely walk out in front of any vehicle and not get run over.
And the problem is that pedestrians will do that, and the car driver won't get anywhere at all. Result - driving a car in town becomes a very slow way to travel.
Then there's kids. They'll be jumping out in front of cars just for the laughs. It will be really annoying for car drivers, but if these systems are mandated by governments that's what will happen.
That would also lead to some unfortunate accidents. During the transition from driven to self-driving / self-stopping cars there will come a point where kids are used to most cars being automatic. And that means at some point they're going to prank an older car that doesn't have the automation, and will get run over...
And it would have put MS at the forefront of ARM servers too. Instead it looks dead certain that they'll miss that too if ARM servers take off in a big way. Linux is already there, of course.
To be moderately fair on MS, when they started all this ARMs were pretty feeble compared to x86 and to where they are today. MS's mistake was to think that ARM would always be too small for a full desktop. Oh how wrong they were.
There's also OpenPOWER, which is looking very promising. They should be thinking about that too, in my opinion. Superfat binaries, anyone?!?!
Well, the leverage isn't going to work if the Intel chips are rubbish. Qualcomm may put their prices up!
Intel are not going to support CDMA (actually you mean CDMA2000; CDMA was the original 2G digital cellular standard in the US that lost out globally to GSM). Or rather, they'd be mad to do so. It's a yesteryear standard, it's only 3-ish G, and it's not used anywhere other than the USA these days. So at best it's a limited market, and a declining one at that. It's hard to justify the investment.
The poor sensitivity may be a firmware inadequacy, but it's far more likely to be a poor-quality RF front end. To make a modem, all Intel have to do is implement the DSP as outlined in the standard and bolt on a front end. There's not much room to tamper with the DSP, so the inadequacy is more likely to be in low-spec analogue components in the RF front end.
It does raise the question as to whether Apple ever bothered to do any qualification testing on their prototypes. This kind of under-performance would stick out like a sore thumb in even the most trivial of bench tests, and you'd like to think that they'd reject a part that failed them. Seems like they've just stuck down the Intel part, done a quick functional test and shipped it. Sloppy. Or just arrogant-don't-care.
However, LPDDR4 supports an optional capability called target row refresh (TRR) that effectively eliminates the ability to exploit rowhammer. So no need to add ECC; just use LPDDR4, which newer phones have been doing anyway, and make sure it supports TRR.
Interesting. Earlier I speculated that the memory industry hadn't done much to mitigate against rowhammer. Seems I wasn't entirely correct.
This 'optional' feature, I wonder if it's an optional part of the LPDDR4 specification, or a compulsory part of the specification that CPUs can optionally exploit if they want to? Either way, 'optional' sounds like someone somewhere wants to make a fast buck and who cares what the consequences for customers end up being. Booo.
Not for the first time I find myself wishing that the tech industry would take a leaf out of other industries' books. For example, Rolls-Royce, Pratt & Whitney and General Electric are deadly serious competitors, yet they will (and do) drop everything to help out a competitor that runs into a serious safety issue. Reason? Everyone benefits from safer engines, and safer engines mean a bigger market for everyone. The aviation industry is consequently very safe.
[Apart from the mathematically very dubious decision to allow the EC225 Super Puma helicopter to continue flying with a suspect gearbox so long as it was thoroughly inspected after every flight. I say dubious, because whatever calculation was performed to arrive at 1 flight per inspection cannot reasonably have had zero error bars... It took another fatal crash to get it grounded]
In contrast, too often in the tech sector one company's security fails are seen as another's marketing opportunity.
TRR is optional? Great, thanks guys, thanks for not helping out. Whatever caused that to happen should have been resolved in the standard long before it was published, even if that meant company A giving company B money and assistance to bring that about.
I think there was a recent report about defeating ASLR by looking at the addresses in the branch target buffer and matching those with the known structure of the OS, thus figuring out the current OS layout in memory.
On Intel Haswell CPUs, not on ARMs.
"Conversely, it doesn't help that this is The Linux Kernel Maintainer's attitude towards kernel level security:"
Whoa, hang on a moment. I'm not a fan of Linus, but he's got a point. If you go and misuse the Linux kernel (as a very large number of people do) that's your problem, not his. It's not his job to decide whether Linux is appropriate for your Web server, router, nuclear power station, etc. Linux is free of cost; you have no contract with him.
If you want something 100% secure, whatever that means, look elsewhere. And good luck.
I did wonder about trying it on my BlackBerry Z30, see if the Linux system call shim present in BB10 is good enough to run it, and have it succeed.
Then I decided that a pint was a far more interesting prospect...
"Parity should be for everyone, not just farmers IMO. :)"
Biting the hand that feeds IT © 1998–2018