Re: World -> Pot
2071 posts • joined 23 Apr 2008
"The choice is between smart and stupid government involvement..."
Well, anytime the government is involved we know which way that goes.
That's not completely fair. The State of California has been very good in its involvement with self-driving car experimenters like Google. They've been allowed to drive their cars on the roads, but the State gets the performance data and, crucially, publishes it.
The State's message is clear; they're not going to let Google or anyone else foist half-finished, unproven and potentially dangerous self-driving cars onto the general public. And that's exactly how it should be.
The problem, I think, with regulating things like IoT devices is that effective regulation would amount to a ban. An effective regulation would be something like "it must be hack proof".
But we just don't have the infrastructure or technology to make small embedded Internet connected devices that get updated, implement best security practices, etc. We can't even make a PC or Mac style computer that, when put into a home, won't become littered with malware within moments of someone browsing some dodgy website. What hope is there for some IoT device that's got to cost less than $50?
Any sane politician knows that when something predictable and bad goes wrong, they get it in the neck for not having intervened beforehand. And because they're elected, generally they lose their jobs as a result. So they regulate, and transgressors pay a fine or go to jail. It's a healthy set-up. So if Internet-connected air-conditioning systems start being seen as a threat to the electricity grid, they'll likely act before some script kiddie comes along and trashes the grid by getting every air conditioner to switch off at the same moment.
What makes the current situation appalling is that "dangerous" things now include automated trend-sensitive "news" selection algorithms on Facebook, Google, etc. These permitted fake news to play a significant role in the US election. The dangerous part is that the current crop of elected politicians owe their employment to the result of that election. So they don't see a problem with the situation, and aren't necessarily strongly motivated to do anything about it. Especially as it would mean imposing editorial controls on social media, the operators of which are amongst the most active lobbyists.
That's a huge threat to democracy in general, and makes it more likely that one ends up with a weak government that is more favoured by someone like Putin.
One aspect I'm not sure Bruce Schneier covered is just what a government can do about dodgy software, IoT devices, etc.
Suppose some software or IoT device was identified as being a major problem, and had to be stopped, disabled, etc. How effective would a product recall be? Not very - people are very lazy when a device's bad behaviour doesn't actually impact themselves. Suppose that some foreign-hosted Web service was spouting fake news and wasn't conforming to appropriate editorial rules during an election?
What would be required is something like a government off switch, or the ability for the misbehaving device's or website's network traffic to be blocked.
The latter sounds like it would need something not unlike the Great Firewall of China. I think that that's what we're going to see being discussed in the coming years. It's going to be a heated debate.
But we may have to accept that if we want government to actually be able to intervene quickly and effectively when some Internet thing or some foreign website is misbehaving, it's going to need something with teeth, not just the power to issue a recall notice or a cease-and-desist letter (which won't work abroad anyway).
Car systems already use "data diodes" to separate critical systems from non-critical stuff like the radio, etc. They're generally not optical, as one normally pictures a data diode, but they aim to accomplish the same end result.
Mistakes in implementing this separation are what cost Fiat Chrysler a $500 million fine.
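To make the idea concrete, here's a minimal conceptual sketch of that one-way separation in software. All the names (the class, methods and message fields) are illustrative assumptions, not any real automotive API: the point is simply that telemetry can flow from the critical bus out to the infotainment side, while the interface deliberately provides no path back.

```python
# Conceptual sketch of a software "data diode": messages may flow
# from the critical vehicle bus to the non-critical side, never back.
# Class and field names here are illustrative, not a real car API.

class DataDiode:
    """One-way message gateway: critical source -> non-critical sink."""

    def __init__(self):
        self._outbox = []  # telemetry released to the non-critical side

    def publish(self, message):
        # Critical side pushes read-only telemetry (e.g. speed for display).
        self._outbox.append(message)

    def read(self):
        # Non-critical side may only read; there is deliberately
        # no method that writes back toward the critical bus.
        return list(self._outbox)


diode = DataDiode()
diode.publish({"speed_kph": 88})
print(diode.read())
```

The Fiat Chrysler case was, in effect, a failure of exactly this property: a path existed from the infotainment side back into the critical systems.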
That is certainly becoming increasingly necessary.
Massive Strategic Cock-up
When the world decided that it was just fine to go down the route of client side execution, it rashly assumed that this could be made secure. Well, it cannot.
The proper answer is server side execution, with standardised remote display protocols being the only thing that the browser has. Things like X come to mind.
The need for client side execution has come to an end given that we all have broadband Internet.
You'd also not be giving away your application source code to each and every client.
Indeed. I can't think of a decent one for Windows either, since Office 2016 borked Outlook.
I am still using 2010...
Don't forget that 'cost' includes training. That may be the part of the equation that they're worrying about.
Also I suspect that Linux is falling behind in some respects. I cannot honestly think of a decent mail client for Linux these days; there's almost no momentum behind Thunderbird and Evolution.
Just wanted to check - do you mean that its decent style is unusual, or that its purpose is unusual, or both?!
"This is not a fairness issues"
I was actually making a mild observation that Oracle were seeking to get a different set of evidence heard, rather than simply making a dumb appeal with nothing more to bring to the bench, and thus not quite fitting in Einstein's observation of which FrankAlphaXII so pleasantly reminded us. But you seem to have taken it rather to heart. Bad luck.
"Oracle has lost case after case after case on this."
Actually, on cases that have reached a final settlement Oracle (of which I'm no especial fan) have done quite well. Google have been found to have breached Oracle's copyright, and that's settled. AFAIK that's about the only part of this whole sorry saga that has actually been finally, conclusively decided.
"They keep adjusting and changing the premise, just hoping that some of their crap sticks to the wall."
Well? They're allowed to do that you know. If you don't like the way the legal system in the USA works I suggest you take that up with your local congressmen (assuming you're resident in the USA).
"Google developed Android and based their API on Java well within the legal and moral framework of the time."
We don't know that yet. The court cases have not been fully exhausted yet. As for "moral", if you think a legal judgement is somehow not moral then I suggest you take that up with local congressmen to see if they can have the law brought more into line with your own view.
"The owners of Java had also open sourced it"
"there was a clean room environment of it and the owner endorsed and supported it, and still does!"
OpenJDK was endorsed, not Dalvik. Dalvik is not Java bytecode compatible, which AFAIK goes against the terms under which Java is licensed. Breaking "Java" is something that has previously cost companies money, e.g. Microsoft.
"Replicating someones API has always been seen as okay in computer programming and is done by many major and small companies. "
Not okay anymore in the USA. Google lost that one. It's OK in the EU. Dunno about the Far East - Japan has some pretty restrictive copyright laws.
"As has already been stated this is a complete myth often peddled to try to get a corporation a reason for doing some wrongdoing (sorry we didn't pay our taxes, it was our legal duty)."
I've never once heard of a shareholder who doesn't want their company to exploit every opportunity to pay minimum tax whilst sticking to the letter of the law, or to make a large profit by any reasonable means. Lawyers-at-dawn is easy-peasy for a large company like Oracle.
"Just look around, plenty of corporations pay full taxes and don't use tax havens so why aren't they being sued by shareholders or being hauled into jail."
I look around and I see a lot of companies who aren't massive multinationals and therefore cannot exploit all the tricks available to Google, Apple, etc. If such shenanigans were an option for every Mom'n'Pop outfit, their accountant would be advising them to make use of them.
" Well, you should be. If you are in IT then increasing the avenues for lawsuits in the software environment can only ever be a bad thing."
I know very well that a weakening of the copyright laws is going to be a very bad thing for everyone. If one's copyrighted material can be abused because someone else has deemed it "fair use" that sounds bad for, well, the smaller outfits I guess. We don't all have the legal resources to take on a behemoth like Google.
"You may not like Google but jump in to bed with Oracle and you haven't solved anything!"
I never said anything about favouring Oracle, that's something you've dreamt up all on your own.
"I don't think the GPL code is at risk - it has already gone through courts."
It has, kinda, here and there. Where it has touched the courts it has been treated as a copyright issue.
Google's so-far-winning argument amounts to saying that it is OK to breach someone's copyright: they've already been found to have breached Oracle's copyright (an odd decision, but it's beyond all argument now in the USA), and they're claiming fair use.
If that sticks, it chips away at the enforceability of the GPL. GPL basically says "do what you like, but publish the source code, and no mixing with other licenses". There's a risk that some lawyer in the future might contest that such generosity cannot fairly be paired with such restrictions.
"Put simplistically, the directors (and by extension, the company) are required to serve in the "best interests" of the company and again, by extension, the shareholders. The shareholders may deem that their best interests are served by the company maximising its profits but that's down to them and to vote and take action if they feel their interests are not being served. Furthermore, how do you define "maximise profit" - one shareholder may be looking to maximise profit over a single year, another shareholder may be looking for maximum profit over a much longer term (and which could involve losses in any one year)."
I think it's easier to couch it in terms of the opposite: not going after a potentially lucrative and attainable opportunity might lead the shareholders to conclude that the company / board isn't trying hard enough. Sure, there are differences over the period in which a company might seek to make its profit, but turning down the opportunity altogether wouldn't look good in the eyes of the shareholders.
Given the size of the Android "market", and the effort for Oracle being nothing more than paying a bunch of lawyers (3rd least respected profession?!), it's not entirely inexplicable why Oracle are doing this. If they eventually win (either quickly or slowly) and Google have to cough up a few $billion, well done Larry, have another yacht. Oracle's shareholders would probably be quite pleased (though less so if they also hold Google shares, a not unlikely scenario).
Einstein did indeed say something like that, but I don't think he had court cases in mind. And to be fair to Oracle (let's suspend opinion for one moment) they're saying that the appeal is not a straight re-run.
Oracle are not being insane, like any other company they have a fiduciary obligation to maximise profit by any realistic means possible. They own Java, and monetising Java is potentially a very lucrative way of cashing in on the success of Android. If they're not seen to be pursuing every opportunity properly their own employment is under threat from aggrieved and out of pocket shareholders.
I am somewhat concerned about the outcome. If Google ultimately win with their "fair-use" argument (they have already conceded in court that they have broken Oracles copyright) then what other copyrights are vulnerable to the same treatment?
All of GPL licensed source code comes to mind.
Personally speaking I'm not too fussed if Oracle beats Google. I'm no fan of Oracle, but Google is a company that is increasingly unpleasant. They're making a ton of cash parasitically, they have a virtual monopoly which they're not shy of exploiting, their services are actually pretty rubbish these days (e.g. search returns adverts, not results, route editing in maps is broken), they're pushing inefficient technologies on to everyone (Web browsers should be about content display, not a VM), they waste their shareholders' money on dubious glory projects (a self-driving car that isn't and won't ever be) and on fines related to their business practices in Europe.
And in pursuit of cash they naively participated in the most damaging disinformation campaign ever mounted against a country's electoral process (so did Facebook, probably Twitter). What on earth did they think they were doing when they turned on their automatic "News" algorithms?
After all we don't like Java plug-ins because they allow websites to do things like this.
I remember reading somewhere that Firefox logs the WiFi networks that it can see and sends that all off to Mozilla...
But then again an awful lot has changed since the 60s.
It's now perfectly viable to have computer-pointed optics at a reasonable price, the lasers themselves are dirt cheap, and the required data rates now are a serious challenge.
Back in the 60s the lasers would cost a bomb, computer controlled optics would be unbelievably expensive. Thus they may not have been trying to use those things, and so would have run into the compromises inherent in using omni-directional light emissions or passive optics. And the data rate they needed would anyway have been fine down a cheap RS232 line.
So I say it's worth revisiting, even if it still doesn't come to anything substantive.
No matter which outfit puts up a working space probe, they all succeed in sending back some stunning and often very unexpected stuff. I'm continually impressed, and most grateful. This latest batch is more of the same!
Well done folks! BZ
Here in the UK quite a lot of ISPs won't let you send email using their servers that doesn't have your correct address in the 'from' field.
At least that stops some of the flow of malicious email out of infected PCs, etc. If every email server on the planet did the same thing, we'd be better off.
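The policy such an ISP might apply at its SMTP submission server can be sketched very simply: reject any message whose envelope sender doesn't match the address registered to the authenticated customer. This is a hypothetical illustration - the function, the user names and the domain are all made up, not any real ISP's implementation:

```python
# Hypothetical sketch of an ISP's submission-time policy: only accept
# outgoing mail whose 'from' address matches the address registered
# to the authenticated customer. All names/domains are illustrative.

def accept_submission(authenticated_user, mail_from):
    """Allow sending only from the address the customer registered."""
    registered = {
        "alice": "alice@example-isp.co.uk",
        "bob": "bob@example-isp.co.uk",
    }
    return registered.get(authenticated_user) == mail_from.lower()

print(accept_submission("alice", "alice@example-isp.co.uk"))  # True
print(accept_submission("alice", "spoofed@victim.com"))       # False
```

A compromised PC on such a network can still send spam, but only under its owner's real address, which makes the abuse much easier to trace and block.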
Though I may settle for Solaris...
I think that the excellent endeavour that is Wine illustrates the problems over in Windows land itself.
I mean, there's .NET, WPF, and a lot of encouragement from MS to come that way. And yet, MS's major app suite isn't written that way, nor is Visual Studio, and no one is quite sure how keen MS are on WPF, and then there was the whole Metro debacle. I may be a touch out of date on the topic, but it certainly wasn't clear what the hell devs were supposed to be using to be in MS's mainstream.
And so here we are with MS these days officially not seeking massive profits from selling OSes, it's all about services, so they should be OS neutral. They support (ish) endeavours like Mono, but not Wine (afaik), and now we have the Linux runtime on Win10. All very confusing.
As for Wine 2.0, if it works for the few select apps that keep me on Windows then it may be the year of the Linux desktop for me. Possibly. Worth a go. Or maybe a Mac.
They've made a good effort in doing just that with this little beauty.
I honestly don't know how anyone could ever write the code for such a thing at any point in the past 20 years and not stop what they're doing. It must take a special kind of blindness (I'm being generous in not using words like lunatic, idiot, numpty, raving moron) to be able to do it. Presumably someone else somewhere in Cisco reviewed the code and also failed to spot it?
If that's what they do with things like a browser plug in, what's their router source code like?!
Does this mean that Google now don't think that everything can be implemented as a Web app?
But even if that were to happen, we'd come straight to your point about everything beyond the OS.
Linux has the same problem; every year has been given year of the Linux desktop, but somehow it's never actually happened. OS X and Windows simply offer far more than a mere desktop environment.
Fully agree with all that.
Looking back at the history of the PC, we have all benefited from the huge success of a proprietary OS running on an open hardware standard. DOS and Windows have always cost money, but they run on top of a hardware platform that is, even today, completely open to other OSes [secure boot can almost always be turned off]. The benefit is that there is a massive installed base of hardware which can run binaries for Linux, FreeBSD, etc easily, no complex recompiling needed, etc. Consequently these other OSes are accessible to ordinary users.
There is nothing quite like that in the mobile world. I think before we can talk about a free mobile OS we need a popular free and nearly universal hardware standard for mobiles.
The closest we've got to that is Microsoft's mobile phone hardware spec. Close, but no cigar. But if MS did offer a signing service and opened the spec, it would be possible for third parties to develop and deploy their own OS across a range of quite nice mobiles that all conform to more or less the same binary environment. That would make it easier for a free OS to come to the fore.
Maybe it's something MS do just before the platform dies completely, at the point where they have nothing to lose by doing so.
Sounds like it was,
"If you don't like this price, wait until you see what it's like if we don't get exclusivity...".
Which would be a pretty good example of gouging.
Qualcomm do seem to have been playing it pretty dirty, or at least dirty enough to attract a big fine in Korea and the attention of the FTC and the Chinese government. Qualcomm have a lot of patents, but a lot of them relate to CDMA2000 (and bits of UMTS), which is a dying and increasingly pointless technology.
It's no surprise that Apple have gone and tried the CDMA2000-less Intel chips, just a pity that Intel don't seem to have got them quite right.
Android is in a mess because Google didn't plan it out at all. Applauding Google for making sure that there's at least a couple of clean Android phones out there merely illustrates the problem. In a proper ecosystem all phones running the same underlying OS should be 'clean'. Instead they shoved it out there without a thought in the world and, worse, made the most important part proprietary (Play Services), and basically left everyone else to do whatever they thought best. Fragmentation was inevitable. Rubbish.
Microsoft did it properly. They set a hardware standard on which their binaries would run. Manufacturer diversity, OS sanity across the board. Neat, but clearly not of itself a market draw. Apple and BB10 BlackBerry merely defined their own ecosystems and did the whole thing alone. Neat, but no choices.
"Can someone remind us again why the Internet Engineering Task Force decided not to make this next-gen networking protocol backward-compatible?"
Who knows. IPv4 was, kinda, a bit of a kludge even for its time, with limitations built in that are akin to "nobody will ever need more than 640k"...
Perpetuating that is undesirable. However, I'm not convinced that the replacement is kludge free either.
SOAP, RTSP and HTTP don't need Linux. How about QNX? VxWorks?
The real problem is money. The market for small things like this is ultra-competitive, and if you can save a few cents per unit in manufacturing then that's what you do.
Linux has no license costs. In large numbers, an alternative OS might be licensed down in the cents/unit area, but that's still a large amount of money out the door.
And as for the idea of keeping a dev team stood up purely to provide support for products in the field, no way is that cheap or profitable (at least not in the short-sighted eyes of the company accountants...), and certainly not if one has gone and spun up a customised stripped down distro that one now has to maintain oneself.
[An exception is, apparently, Belkin, who do at least respond to vulnerability reports and issue updates. Shame their Android app, through which the update process is managed, is awful at doing the update itself. The iOS one is much better].
And clearly the market simply does not give a damn about bugs, vulnerabilities or malware in these things. Some botnet running out of one's own house probably does not impact on the owners of the house in any noticeable way. The only exception I heard of was Nests in the early days - too many bitcoin miners infesting one's Nest meant that it actually stopped working as a thermostat.
Faced with a situation where the customers for IoT devices don't know, don't really care, and couldn't do much about it even if they did know that their thing was up to no good, how does the behaviour of the market get changed to prevent all this becoming a truly dangerous thing to the Internet (and thence the economy) as a whole?
The only option left is regulation imposed by government. They're the only outfit that can forcibly change the market's dynamic when the market itself shows absolutely no sign whatsoever of sorting itself out. Though I've no idea exactly how that's done without mandating an OS + distro, encryption standards, etc., which would make the devices massively expensive (killing the market) and a massive target for blackhats looking for the thrill of getting one over on the government. Perhaps forcing adoption of HomeKit and maybe a couple of others isn't such a bad idea.
Canada is a member of ESA, and Canada is notable for not even being in Europe, let alone the EU.
With Trump getting inaugurated tomorrow, perhaps Canada would like to be able to cast off, sail across the Atlantic and anchor somewhere off the French coast, perhaps form a land bridge between France and the UK.
Might have to turn it sideways to fit it in without also forming a land bridge to the eastern coast of the USA...
I can't help but agree. It's a pity if Solaris becomes moribund. I still occasionally use it, but I used to use it lots (weirdly, on embedded systems as well as workstations).
I think you're right - if they start pushing things out for Linux ahead of Solaris, the writing is on the wall.
It certainly does look vague.
I'm wondering though, is there any really big deal? I'm not entirely sure how one would define where an OS ends and applications start, and whether a lack of development in the OS is a major deal.
The Linux kernel gets updated a lot, and that's pretty cool if you're wanting the latest and greatest kernel features for x64 chips and the eradication of vulnerabilities, etc. However, they're very careful to not break user-land (I can imagine that doing so is good fun - Linus baiting as a recreational activity). But Solaris's kernel, on SPARC, just how much updating is actually required to keep it viable and useful? I can't see Oracle not supporting it on newer SPARC silicon, or failing to fix identified vulnerabilities.
I can see it becoming a problem though if some major user-land package becomes desirable, and the authors of it have gone and made something hideous like systemd a dependency. Solaris doesn't have systemd. Oracle would have to do quite a lot of porting then to accommodate such a package. There's a big trend these days for containers, things like that, but Solaris already does pretty well in that direction, and Sun were amongst the pioneers of quite a lot of that stuff anyway.
But without that kind of issue, most user land stuff should run just fine on Linux or Solaris with no real compilation problems. Solaris can even run Linux binaries without recompilation if required.
Is there perhaps some parallel between Solaris and Windows 7? Windows 7 was, arguably, perfect and pretty much complete, perhaps could have benefited from some under the hood tweaks, etc. Instead MS took it upon themselves to dispose of the entire existing software base (well, demote it to a 2nd class status with a not-shown-by-default desktop) when they did Windows 8. Then 8.1 drew back from that a little bit, 10 more so. The proper, boring, and almost no effort thing for them to have done was simply to tweak 7's innards (for example, 8, 8.1 and 10 have some good improvements to certain aspects of the kernel) but otherwise left well alone.
So perhaps Solaris is in that same place - there's literally not much to be done to keep it rolling along just nicely.
"Sounds like something from the Hillary/Neocon prayer book."
Hmm, you don't pay attention to the goings on in North Korea and the Western Pacific much, do you. Are you some kinda commie China stooge?
"Japan has to deal with its regional problems on its own terms. The US forces (and the US for that matter) won't be around forever."
Except that Japan pays a vast fortune to host US forces. Unlike NATO, where there's no membership fee as such, the Japan US mutual security treaty involves a very large payment. That's something Trump was unaware of, and when made aware declared the sum inadequate. The treaty does allow one party to terminate it with 1 year's notice.
However there's been some recent adjustments related to moving Kadena, for which the Japanese have also agreed to pay, and this may have included an alteration to the termination clauses.
Cancelling all that now would be very poor form indeed so far as diplomatic codes of conduct are concerned. It also doesn't fit with Trump's apparent pro-Taiwan stance; he cannot support them without military bases being close at hand, and they're all in Japan.
Trump is generally making noises about withdrawing from all sorts of treaties. That's not going to do the USA any good. It'll make doing business with the US more hassle than it's worth. And it's not like the USA has any money these days.
"You seem to forget that, having had nuclear weapons used on them, the Japanese have an absolute horror of them."
And they're pretty determined that no one will ever use them again on Japan. They've not had to do anything about it since WWII on account of the treaty arrangements between Japan and the USA, which Trump has threatened to break.
"Without the declared threat (which the Japanese people wouldn't stand for) atom bombs aren't any use defensively, so why would they bother, especially given that the rocket isn't any use in lofting anything but the smallest battlefield weapons?"
Er, have you seen the kind of guff that North Korea puts out daily? Besides, it's no good whinging about undeclared threats if the mini-ICBM is already on its way over from the peninsula.
That kind of thinking led to the policy of appeasement leading up to WWII, which nearly lost us (the UK) the war. The exact same arguments we had then are going in Japanese society even now.
With the two countries so close together you don't need massive range. North Korea has been launching quite small rockets over the top of Japan for years, and it gives them the collywobbles every time.
As for why bother, that's a question that can be equally applied to the USA, UK and France. At least two of those countries have fielded nuclear weapons for defensive purposes only. Answer: mutually assured destruction is no comfort (not really), but you'd rather have the option of bringing it about than not. Especially when the other country already has a bomb, nearly has a warhead, has a missile, has stated an intent to inflict harm on its neighbours and the USA if it can reach it, and a political insanity that does not encourage belief in their self-restraint. Faced with that, it would be immensely surprising if Japan and South Korea (and even China!) didn't take substantive steps to ensure that such a threat (theoretical or not) was neutralised, at least to some extent. Japan can develop a nuclear warhead of its own if it wants to; it has the underlying nuclear industry required to produce the plutonium.
Historically countries faced with a nuclear threat have gone on to do so (India & Pakistan, Russia & West, China & West, Israel & Syria / Iraq / Iran / Libya). I don't see why Japan would necessarily be any different if the USA walks out on its treaty obligations.
Besides, as I've already written, a small nuke is not the only strategically useful payload that can be lofted on something of this size. It's far more likely that they'd concentrate on an interception capability first (something that the USA already fields on their behalf, but seemingly now might not be counted on). An interception capability is less "aggressive", and stands a good chance of succeeding. North Korea almost certainly doesn't have the industrial capability to produce large numbers of missiles and warheads. Shooting down one or two missiles is far easier than shooting down several hundred. Japan hasn't had a vehicle of this size available to them previously. Arguably, they do now (failed launch or not).
If the USA walks out of the Pacific, the countries that are traditional allies of the USA (Japan, South Korea, Taiwan, Philippines [though their current president seems to be taking leave of his senses], Malaysia, Thailand, even Vietnam these days) will feel immense pressure to tool up. Japan already is, it's built some aircraft carriers in recent years, lots of pretty good submarines too. Everyone knows that China is tooling up fast, has been for years and already has nuclear weapons.
Yes! It's certainly more interesting than the boring "utility pole".
Besides, I believe that the one SR71 pilot who actually saw a Russian SAM come up to his flight level (plenty were launched but the crews rarely saw them with their own eyes) described that as looking like a telegraph pole. So I think small rockets have been similarly dubbed ever since.
The idea of a small cheap orbital launcher is very attractive. And this nearly worked. I do wonder though, a few million to launch this sounds expensive in comparison to hitching a cheaper ride on the back of a bigger satellite's launch (typically that costs a few tens of thousands of pounds). That is how cube sats have been launched to date.
Is Everything As It Seems?
This was intended as a launcher to get something of about 4kg into an orbit about 2000km up. So it could put something a bit heavier into low earth orbit. Or something even heavier into a sub-orbital hop. Or something heavier again over a range of a few hundred miles.
If that something were a small nuke, they'd have a pretty handy little intercontinental nuclear armed missile, or (bigger again) a good short range tactical weapon. It could also probably serve as the booster for an interception weapon to take out other missiles.
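That payload/range trade-off falls straight out of the Tsiolkovsky rocket equation: the same stage delivers less delta-v the heavier the payload, which is why "4kg to 2000km" implies more mass to a lower orbit or to a short ballistic trajectory. Here's a back-of-envelope sketch; the dry mass, propellant mass and Isp figures are made-up illustrative numbers, not the real vehicle's specs:

```python
import math

# Back-of-envelope payload/energy trade via the Tsiolkovsky rocket
# equation: dv = Isp * g0 * ln(m0 / mf). All masses and Isp below
# are illustrative assumptions, not actual launcher figures.

def delta_v(payload_kg, dry_kg=300.0, propellant_kg=2300.0, isp_s=265.0):
    """Achievable delta-v (m/s) for a given payload on a fixed stage."""
    g0 = 9.80665                               # standard gravity, m/s^2
    m0 = dry_kg + propellant_kg + payload_kg   # wet mass at ignition
    mf = dry_kg + payload_kg                   # mass at burnout
    return isp_s * g0 * math.log(m0 / mf)

# A heavier payload always means less delta-v from the same stage:
print(delta_v(4.0) > delta_v(40.0) > delta_v(400.0))  # True
```

Less delta-v means a lower orbit or a suborbital arc, so a launcher sized for a tiny satellite to a high orbit has obvious headroom for a heavier payload on a shorter trajectory.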
There's a lot of concern in Japan about North Korea's military capabilities and sanity. There's also some worry that a Trump administration in the US won't stick to its treaty obligations to guarantee Japanese security in return for billions of hard-earned dollars. In his campaigning Trump essentially threatened to withdraw US forces from the western Pacific because he was fed up of Japan getting its security for free. On being told that, actually, Japan pays a huge sum of cash annually for hosting US forces, he said that it wasn't enough.
Understandably this has caused some consternation in Japan, with plenty of people pointing out that without US forces (especially a few choice anti-missile systems) Japan is adjacent to and undefended from the world's craziest nuclear armed regime, and a power vacuum would also allow the Chinese to move into the Western Pacific in a bigger way.
So the unpredictable Trump (aren't they always?) actually gets elected, and the Japanese launch this thing, but they have a good excuse for not lighting up the second stage and completing the flight (which would betray its true capability). It may not be entirely coincidental. I think that we may be seeing something of this again, possibly painted green, on a mobile launch platform. And with the way things are going, that'd probably be a good thing.
They should add, "But don't over do it...".
There's a lot of companies out there that could do with remembering that part.
Yes, they passed the tests. But completely disabling some of the clean-up measures during normal driving is VERY likely still illegal.
And that's reasonable: the car isn't being driven as tested. It's not so very different from removing the catalytic converter and putting it back only for the annual government inspection.
Note they were always allowed to disable cleanup in high-load situations.
That's interesting. Could Fiat Chrysler's problem simply be one of poor parameter choices rather than code deliberately designed to deceive? Or is it just a more subtle deception? No doubt time will tell.
The more data an online service provider manages to snaffle from its freetard / paying users, the more it likes to boast about its security measures.
So, what do Google know about us all?!
Of course, these days Google, Facebook, etc. are snaffling data about people who are not their users (Android's use of caller ID, Facebook's face recognition and tracking, etc).
"It isn't as though Apple had the 'premium laptop' segment all to itself"
For a long time all PC laptops, even top-of-the-line ones, were clunky, plastic and horrible even if the internals were pretty good. Meanwhile MacBooks were aluminium, sleek and lovely. It's only comparatively recently that the PC market has worked out that nicely made, good looking premium stuff sells, and sells well. It took them only 10 years of watching MacBook's sky-high profits to work that out.
"Surface isn't stealing any sales from Apple, it is stealing them from Wintel partners like Dell."
Well, MacWorld reported that Apple's Q4 results for iPhone, iPad and Mac sales were all down. Yet everywhere I look I read news of the boom in sales of MS's Surface line. Apple sell fewer computers, MS sell more computers. Call me old fashioned, but if that isn't stealing sales I don't know what is.
There's been plenty of people, even here on El Reg's forums, announcing that they've jumped off Apple's ship too. Seems that the delta between Windows 10 and OSX is small enough that the availability of things like standard USB, ethernet, HDMI and SD slots is more valuable than slimness, adapters, and an increasingly poorly performing OS-X.
The fact that MS (yes, boring old aesthetically challenged MS) can make money at all from selling a premium notebook / tablet thingy running Windows 10 is, by conventional thinking, unbelievable. Aesthetics is supposed to be what Apple are best at, yet it's clearly not enough to preserve their sales figures.
"These two statements are contradictory. The reason it is possible to lock down an iOS device is because Apple does support business usage via its Device Enrolment Program."
No they're not. The mainstream MDM solutions are all about making a device less the user's and more the company's. They're designed so that the user is allowed to do less and the company restricts more. That's not BYOD, that's "it's a company phone that does company things, not a phone you the user can do anything you like on". It's CACD (Carry Around a Company Device).
The only way this really results in anything like a BYOD environment, one the user has normal rights on, is if the company places few restrictions on what the user can do at all. Which is dumb, because there'd be zero point to the MDM and "Device Enrolment" at all (apart from slicker roll-outs).
I've yet to see anyone at all use an MDM-encumbered iOS or Android device as their own personal phone too, or for their own personal accounts / social media / etc.
"The degree of lockdown is completely up to the company. If yours is unusable then that's because your employer chose to make it so."
I refer back to my previous point. MDM + minimal / zero lockdown is not really MDM'ed at all. It's just a personal phone that maybe the company can remote wipe if they want to. Bit of a stretch to call that "managed". To get any real assurances concerning protecting company data the company has to lock it down.
For example, time and again there's been highly dodgy data-stealing malware scattered all over Google's app store, and Android itself is generally wide open to vulnerabilities for very long periods of time. It's the last OS on earth that any company with any concerns at all about their data security would allow an employee to install arbitrary apps on a company mobile. The only real answer is to stop the employee installing apps at all, and then it's useless as a personal device.
In comparison to BlackBerry's Balance, with its multi-level security, cryptographic separation of company and personal data, and slick unified view of both work and personal email and calendars, the MDM approach on iOS and Android is pretty lame-brained. They're essentially doing little more than turning stuff off and maybe installing clunky alternative email clients / calendars / etc.
That's not adding any supporting functionality at all, it's taking it away. But it is right up there, bang in line with what company admins already do (lock down PCs to the n'th degree, more of the same please). It's all distressingly unimaginative, admins don't have to think, and Apple and Google don't really have to do much to support this.
"It appears that in between appealing to executives and being more secure than Android, Apple got a good chunk of the corporate market without trying too hard."
I disagree to some extent. Apple haven't done a single thing to specifically get the corporate market, and I don't think they give a damn about it either. They never bothered doing anything in their computers to help businesses either, not really.
They aim their products at the consumer market, which is where the big money is. Corporate users are welcome to buy of course...
What I mean is that there's nothing in Apple products that helps corporate admins fit them nicely into a (often necessarily) controlled corporate IT environment. OS-X can do domain authentication and browse file shares, but that's it. The MDM solutions that exist for iOS are terrible kludges really, sticking plasters applied on top of an unhelpful OS. It doesn't even do VPNs properly.
With little real support for business use in both iOS and Android, BYOD now seems to have become:
"Your work mobile is an iPhone we've lent you that's locked down to the point where it's not worth stealing it".
This suits Apple very well. Apple get to sell 2 phones, not one, and they still don't have to do anything technological to support that. But us drones have to carry 2 phones.
Android is the same but worse, by the way - it doesn't do a very good job of talking to Exchange servers, and so the MDM solutions for it are quite often even kludgier. Yes I know, Exchange servers, domains, it's all so much old bollocks, but not every company (especially outside the USA) is prepared to jump into the Google, Apple or MS clouds.
The only mobile company that did understand the needs of business and BYOD was BlackBerry, which built a mobile OS specifically to accommodate that: BB10, and particularly its Balance feature and the two-phone-numbers-in-one-device party trick.
This is still the most elegant solution to the BYOD problem. It's far superior to sticking a MDM plaster on top of a consumer focused mobile OS. It's the only thing out there that truly makes it plausible for an employee to have one single phone for both work and personal use, with the right level of control for both parties and strong separation of the business and personal domains, and still be easy and pleasurable to use. It definitely doesn't feel like something "added on", it's right at the core of the whole OS, mail client, calendar, apps store, the lot.
Unfortunately, BlackBerry, like a lot of other companies, didn't understand the art of doing business.
Balance is really good, but it also takes a lot of getting your head round. By the time it came out it was already too late, and very few people had the motivation to go and look at clever ways of solving the BYOD problem.
If Apple cared about chasing the corporate market they'd have bought up BlackBerry just to get their hands on Balance and incorporated the idea into iOS. They haven't.
Forget the Business Market, it's Irrelevant
Apple, and to some extent Google with Android, taught the world that chasing fat corporate sales was a pointless myth. The corporate market is too small to bother with. The consumer market, that's the thing.
Having learned that lesson, Microsoft went off and did Windows 8. Whoopsie!
The PC is really a business tool. It became cheap enough for consumers to buy it. Turns out consumers didn't really want PCs, they wanted mobiles and tablets. Nowadays one looks at the PC market and wonders whether one will still be able to buy an affordable workstation for one's business needs in 5, 10 years time.
I'm no fan of Apple or Jobs, but Jobs did get one thing right. It's the software that matters. As soon as electronics had advanced to the point where a battery powered handheld could just about support an advanced graphical user interface it was inevitable that someone somewhere would do one. Jobs saw that, jumped early (the first iPhone models were absolute bollocks really), cleaned up.
For the rest of the industry the warning signs were there. The Apple Newton may have been poor, but the seeds of something were there.
Nokia are perhaps the worst offender. They acquired Psion's OS and software stack - the Psion 5mx was a true masterpiece that's yet to be equalled - and threw the opportunity away. Nokia had more than the seeds of an idea, they had something that actually worked. Nokia in effect had a 10 year head start on Apple, and cocked it up. But then again, Nokia was run by hardware guys.
It's looking dodgy. Their designers are now so far up their own arses that one seriously wonders whether they can recover. They need to stop chasing "form" and get back to designing things that people will actually lust after. Can't plug an iPhone 7 into a MacBook, even for charging? Jobs would have had a fit at the idea.
Weirdly, Microsoft are making a ton of cash on Apple's turf: premium laptops. Surface notebooks sell very well indeed. Apple need to move fast to make sure that there's no possibility of a resurgence in the mobile space for Microsoft. Apple have a lot of laurels these days; difficult not to rest on them.
Apple's product line-up is, well, boring and disjointed, and in some ways very, very annoying. Whereas MS may be about to launch a good mobile to go along with good laptops. If only a few key mobile apps make it to the Surface phone, people might just start wondering whether they need an iPhone and its frailties at all.
It seems that Amazon Echos are becoming quite popular with the elderly - it's an easy way for them to "use the web", and it's seen as a way they can call for help if they fall, etc. So Echo is beginning to find roles which could be seen to have a significant element of safety-criticality in them.
This is unexpected, to say the least. And this has taken off in just the very few weeks it's been on sale here in the UK. Amazing!
I bet neither Amazon nor anyone else anticipated this...
So it means that Producers of Sound (radio, TV, the lot) are going to have to be careful to not do as you suggest!
Arguably it's a cock up for Amazon - broadcasters might become very reluctant to ever use the word "Alexa", for fear of triggering some chain of events somewhere. We may have escaped the dollshousalypse, and no one wants to be blamed for another. And if it's never mentioned, where's the publicity coming from?
...would cause problems for partially sighted / blind people using web-to-spoken-voice-translation aids, if they also have an Amazon Echo in the house. There's a real risk that somewhere out there a blind person is now in receipt of a pointless dolls house.
I'm wondering how long it'll be before some wag on a radio station (perhaps a call in) says, "Hello TomTom. Go Home". Anyone with a modern TomTom satnav who is driving and listening to that station may find their travel plans altered for the better...
Similarly a radio station could, on Mother's day, broadcast "OK Google, call Mum".
Anyway, we're lucky that the world's economy has not been fully configured overnight to supply nothing but dolls houses... Or, perhaps it has?
That said, self-driving cars are the future - but it might come as a surprise to Uber et al when Ford, Honda, et cetera, start running automated taxi services with their own fleets, rather than letting Uber do it.
Personally I doubt we'll ever get true fully autonomous full authority self driving cars, not with the roads as they are and the shared usage of them (bikes, horses, etc). Google are running away from the idea, Apple too; I can't see anyone else pulling off a significant development.
I think the car manufacturers are indeed covering moves by Google, Uber etc. The difference is that if they don't quite make it they'll still probably end up with something useful. Google and Uber won't - they've not got a foothold in making cars.
"Go context switches are cheaper than OS context switches. If you have more threads, it's more efficient for them to be virtual."
Have you ever programmed for OSes like VxWorks? Some of these hard-RT OSes have ultra-low context switch times (kinda the point, I guess). Anyway, one tends to get thread-happy on such OSes because the context switch penalty just isn't that big a deal. I always found it quite liberating!
But I do like the idea of there being only as many native threads as cores with "green threads" (I think that's their technical term) being used within the language, especially on top of Linux / Windows / etc. I wonder if Go can cope with a thread blocking on network I/O?
This is the problem that Python completely failed to address, so they went multi-process as a "terrible kludge". Perhaps that's too harsh; the Python guys have never put it forward as a hard-core, high-performance systems language (though I reckon there's plenty of people trying to use it that way...).
And I looked at Python, and came away not believing how anything so naff could actually have any traction whatsoever.
I mean who came up with that dire idea to use whitespace for code logic flow??? What was wrong with curly braces???
I have to agree. Something that you cannot see or print or write down on paper matters? Rubbish.
That's because CSP is a formal language. But that doesn't help you when you have to map it onto a real language.
There are plenty of real language CSP implementations. I've written some myself. You design your system, express it algebraically in CSP, do the algebra, prove that the system is correct. You then write source code that implements the same architecture that you've expressed in CSP, compile & run. Go and Rust are but the latest in a long line of CSP implementations, but the advantage is that there's real merit (excellent memory handling, etc) to the languages themselves beyond just their CSPness.
It doesn't really make a lot of difference what architecture the board & CPU have.
No, but Intel and everyone else is really struggling to get improved SMP performance, especially when it comes to large machines. The amount of silicon overhead required to make SMP work well on top of what is fundamentally a NUMA architecture is really quite large, which costs power, money, etc.
The only reason they still do it is the large amount of pre-existing software that has been written around SMP, including all mainstream operating systems. We're not going to throw those out any time soon.
You have to go via the OS API unless you plan on having your code talk direct to the hardware...
It doesn't work like that. SMP makes all your process memory equally accessible no matter where the memory is and where the process thread(s) are running. You use the OS to handle threads, semaphores, locks. Memory access across QPI or L3 cache does not require OS intervention.
However, for one thread to access memory shared with another in an SMP system, it has to take a semaphore (assuming one is locking shared memory), access the memory (which involves a whole load of data transfers up to L3 cache / across QPI), give the semaphore back. And all the while the separate L1/L2 caches have to be kept coherent, because two (or more) cores are all accessing the same memory address.
CSP, although it copies data rather than sharing it, involves essentially the same amount of work; it's just dressed up as some sort of IPC transaction instead. For example, an OS pipe still involves taking semaphores, accessing memory (in order to copy it) and giving semaphores back. The difference is that because CSP copies the data instead of sharing it, there ends up being less cache-coherence traffic running across the QPI or up to the L3 cache, because the source and the copy are each accessed by only one thread.
The fact that CSP implementations are doing this at all stems from the fact that they're sitting on top of a faked SMP machine that is itself sitting on top of and completely obscuring a NUMA architecture. If the SMPness were to be omitted altogether, the CSP would be far more efficient (helped further by being able to omit the silicon that implements the SMP environment). Given a genuine NUMA hardware environment (such as a network of Transputers) CSP is a tremendously good fit.
Distributed processing is another topic entirely.
It is, but it really, really shouldn't be.
Things like ZeroMQ do a wonderful job of completely abstracting away the means by which data is moved in Actor model applications (it nearly does CSP). Intra-process, inter-process, across a network: nothing that matters changes in your source code. The performance changes, yes, but that's no big deal; you already know it'll be slower across a network, and can plan your distribution accordingly. The key thing is that changing the distribution is very little work, with almost no re-coding to be done.
In contrast, if you've used, say, Rust or Go's CSP mechanisms in-process, and then decide you want to distribute processes / threads across a number of machines, you've got a major re-write on your hands. The CSP channels in those languages don't propagate across network connections AFAIK. Bad karma.
Um, yah. Rob Pike and Ken Thompson are wee ankle-biters.
Tony Hoare is 20 years older, and is entitled to call a 52 year old Rob Pike a youngster!
Is using pthreads REALLY so hard? There seems to be a lot of noise about the latest flavour of the month concurrent languages but in reality all they do is prettify (and arguably simplify) threading syntax and control then make the same underlying calls to the OS threading system. They don't actually give you any more power.
Pthreads are harder to get right, from the point of view of being sure there's no potential for deadlock, lock failures and so on. Shared memory, semaphores and the like can be fearsomely difficult to debug.
CSP in particular makes it easier to get it right, or at least if you don't you're guaranteed to find out as soon as you run your system. If you've written it such that it can deadlock, it will deadlock.
Also you can do some process calculi maths and prove the correctness of a CSP design theoretically. That's not something you can do with pthreads, shared memory and semaphores in anything but the most trivial of cases.
As for more power, one has to be a bit careful. Pthreads, shared memory and semaphores all assume they're running on top of an SMP computer architecture, whereas CSP is quite content to exist on NUMA architectures. What Intel actually give us is, effectively, a NUMA machine with SMP emulated on top. Thus software that's more NUMAry in operation can be kinder to the QuickPath interconnect between CPUs and the shared L3 caches.
And because CSP is NUMA-friendly, it's quite straightforward to scatter CSP processes around a network and scale up (though Go and Rust don't, AFAIK, do this for you). That's a complete no-no with shared memory, semaphores and pthreads.
If you like Go's concurrency you may also like Rust. That too gives you channels, which can act as rendezvous mechanisms.
Rust is beginning to look good because it is suitable as a systems language. There's a whole lot of youngsters at work getting excited about it.
To us old timers the re-emergence of CSP channels is a delight! It's definitely the easiest way to do parallel execution. Rust'll save me having to implement CSP in C++ for myself every time...
Parliament produces an annual report on surveillance activities, and the court case results and transcripts are openly available. Go read!
But yes, Trump's Muslim registry is an abhorrent idea.