Which OS is stuck on x86?
Er, OS X?
Windows has grown an Arm variant quite recently, Linux is on everything, Solaris comes in Sparc flavours, and all the embedded OSes work on everything else too.
I suspect it started off as a platform on which all the power management code could be run. The idea was that with the CPU looking after itself (power settings, cooling, voltages, clock frequencies, etc), you then wouldn't have to put all that code into the main operating system.
This was sensible, given that getting all that hardware management wrong could fry the silicon to a crisp. Offloading it to a separate microcontroller with a fixed binary blob meant that Microsoft, the Linux community, Apple, and every OS developer didn't have to do it themselves and get it right.
Then the feature creep started.
I'm sure that Intel's intentions were perfectly harmless. Being able to manage a server remotely (mount ISO images, see the console; all sorts of useful admin things can be done from afar) is incredibly useful. Just a shame they made a complete mess of it.
To be honest I can't see a way of implementing remote management of that sort without having an ME CPU bolted on the side with quite a lot of low level access. Though I don't see why that should need the ability to access all physical RAM, all Ethernet traffic, etc.
Whitepines beat me to it.
Yes, OpenPower seems to me to be a very viable way to go. The CPU is genuinely the Central Processing Unit.
Who knows. Perhaps NSA saw what Intel was up to and simply decided to let them get on with it, knowing that they'd fsck it up badly to NSA's advantage.
Why bother coercing / cajoling Intel into slipping in a hidden backdoor when you know they'll build in aircraft hangar sized doors through sheer incompetence... So long as Intel stick to this idea of an ME, there's code there that will likely have flaws.
Raptor Engineering are up to something interesting with OpenPower. Basically, with the Power9 CPU from IBM being "open source", they're part of a movement to do a completely open source computer (all the way down to the silicon design, board schematics, firmware, and of course the OS + software stack on top). It's all there for one's inspection.
No magic closed source firmware / ME there.
This is likely part of an eventual Europe-wide reconsideration of OTT services as telcos. That includes Apple (FaceTime), WhatsApp (everything), Facebook (Messenger), Google (surely they have a message service somewhere, but I don't bother learning the names because they keep throwing them away), Instagram, Line, Snapchat, BBM, etc.
If that happens then the newly anointed Telcos will have to strike a balance between complying with LI laws and their current marketing / public positions vis a vis "privacy". Making a big fuss about privacy now may suit the public mood, but may put them out of business later when their privacy conscious users flee once they introduce LI systems.
Withdrawing from lil ol Belgium is one thing, from the whole of Europe dents the bottom line quite a bit. Belgium has simply set a precedent...
Skype's original peer to peer architecture is of course highly resistant to this kind of thing. An open source equivalent with no corporate backer would be very difficult to intercept. But there's no money in it for anyone, so no one organisation with sufficient marketing clout will ever promote such a thing.
Stock-ish. They run the grsecurity Linux kernel. That's the one which is causing a certain amount of friction between Linux/GPL purists and grsecurity (who serve customers like BlackBerry that simply want an OS without a long list of known vulnerabilities).
Couple that with a secured boot loader, and it's still pretty solid, and they may have been able to carry over some of their cryptographic accreditations too (or at least be using the same libraries). AFAIK no one has managed to root BlackBerry's version of Android yet.
Their soft keyboard is very good (in my humble opinion). That's on their Android.
Hub is pretty darned good - though I think the deeper integration that was achieved on BB10 will be something that I'll miss.
Otherwise it seems to be a pretty stock Android experience. That is a good thing - it's easier to keep up with the deltas between Android versions.
You may already know of the following - this'll serve as a record of what we once had as much as anything else...
BlackBerry Travel is a thing not carried over; in fact it's gone from BB10 too. It was a BB-branded front end for Worldmate, and for the seriously busy traveller it was very good; sorted out your flights, hire car and hotels for you, and coordinated with your colleagues too. Many a busy traveller swore by BB Travel for, well, a decade or more.
Worldmate has shut up shop rather than try and compete against Google (who, as is typical, have waded into the travel market with an inferior but ubiquitous effort. Try booking hotels, flights, and a hire car all in one smooth action with Google...).
Another thing that has gone, probably for good alas, is BlackBerry Balance. On BB10 this was effectively a multi-level security system that was the neatest solution to the BYOD problem I've ever seen. It was significantly better than Samsung KNOX, and it was perfect for keeping both the user and the company happy. It suffered from being a concept that was pretty hard to grasp, and being BB10-only meant that there wasn't an Android/iOS equivalent to educate the world. There's very little possibility of doing something as rigorously developed as Balance on top of Linux or iOS.
B, b, bu, bu, but it's so beautiful, it must be perfect, mustn't it?????
I got fed up of Google's poor search long ago. It's woeful. There's little point looking for specialised information these days. Quite often I want an exact string match; can't be done any more. Alta Vista, Alta Vista, my kingdom for an Alta Vista!
So I use Bing instead. No, it's no better, but at least I'm not feeding Google with fake confidence.
There's also a lot of aerodynamic subtlety to the shape of Concorde's wing that was less obvious back then than today. And the wing was also, I think, quite difficult to manufacture.
So if the Soviets did have copies of the drawings, it might not have been readily apparent why the shape was how it was. So perhaps the Tu-144 came out a bit more flat-plated than Concorde owing to a lack of understanding of Concorde's shape and not wanting to simply copy it verbatim (if they had the drawings, that is). Even so, getting the Tu-144 into the sky was quite an achievement, even if it wasn't totally successful.
Delta wings are disliked by some aviation design communities. They can suffer from a lack of controllability at slow speed (the pitched-up wing masks the control surfaces at the rear from the air flow, so no control). The fix is canards. The Americans went off deltas some time ago; the SR-71/A-12, B-58, F-106, etc. are all quite old. In contrast the British (Javelin, Vulcan, Concorde, Typhoon) and French (endless Mirages, Rafale, Concorde) liked delta wings. Concorde also had weird drag/speed characteristics; as the speed bled off and the aircraft was pitched up to maintain altitude, the thrust would have to be increased; tricky stuff.
For all things techno-geeky about Concorde, it's well worth ploughing through this extensive, multi-year thread on PPRuNe.
Good luck to Boom. It'd be nice to see a supersonic champagne tasting session once more (i.e. very civilised). The problem is engines; hopefully they'll be allowed some military power units, as engines are very expensive to develop from scratch...
Unless the code is released under GPL2, it cannot be integrated into the Linux source code. There's been enough fuss already about using it as a separately provided kernel module.
The problem I see with that is that once it's released under GPL2, will it continue to be released under the more permissive license that helps out, for example, FreeBSD? It would be a real pity if ZFS went to GPL2, and GPL2 alone, because it would seriously screw things up for the people who are already using it elsewhere.
It could be multi-licensed of course, but license fragmentation can easily lead to source code fragmentation too, unless absolutely every contributor is committed to releasing their efforts under multiple licenses.
I too have never found X to be a problem. GTK is a miserable pile of ordure.
I think if one is doing a lot of work that involves a load of texture maps in 3D, the pipe does become a problem. Hence moving away from that architecture. However...
Of course, the only reason a pipe is a problem in that circumstance is that it inherently involves memory copying, a lot of context switches back and forth (especially for large amounts of data flowing), and so forth. However, modern Intel CPUs have features that would completely eliminate that problem: internal DMA engines. A pipe-like facility could be implemented around internal DMAs, which would be lightning fast and would take far less CPU time to shift data from a client application (e.g. texture maps) into a display server's internals. If mailbox semaphores were possible (I don't know if they are on an Intel platform), the DMA could even ping off a semaphore post to wake up the display server once new data had been provided by the client. Et voila: a client-server architecture with far more bandwidth (by definition, the best possible bandwidth) and zero context switches in/out of the kernel.
A thin layer on top would allow all this to optionally pump client data down a real pipe or socket (for remote servers).
What do you think? A good idea?
It seems that no one involved in replacing X is stopping to think whether or not the client-server model could be retained. I think they've blundered straight on into replacing it with a fairly crude driver architecture without casting an eye around to see what hardware facilities now exist to improve the existing client / server architecture. It'd be a great pity if all that nice DMA silicon that Intel now put in their CPUs ended up not being used by an updated X server for *nix.
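The core of the argument above is copy versus hand-over. A minimal sketch in Go can show the difference; this is purely illustrative (the names `copyTransfer` and `handoverTransfer` are invented here, and a real DMA engine or display server API would look nothing like this), but the semantics are the same: a pipe must duplicate every byte, while a hand-over just passes a reference to the same backing buffer.

```go
package main

import "fmt"

// A texture "upload" two ways: copying the bytes (as a pipe or
// socket must) versus handing over a reference to the same backing
// buffer (the zero-copy idea; a DMA engine does the hand-over in
// hardware). All names are illustrative, not any real X/Wayland API.

type frame struct{ pixels []byte }

// copyTransfer models a pipe: every byte is duplicated.
func copyTransfer(src []byte) frame {
	dst := make([]byte, len(src))
	copy(dst, src)
	return frame{pixels: dst}
}

// handoverTransfer models the zero-copy path: the server receives
// the client's buffer by reference; no bytes move.
func handoverTransfer(src []byte) frame {
	return frame{pixels: src}
}

func main() {
	tex := make([]byte, 1<<20) // a 1 MiB "texture"
	tex[0] = 42

	a := copyTransfer(tex)
	b := handoverTransfer(tex)

	// Mutating the original shows the difference: the copy is
	// isolated, while the hand-over aliases the client's memory.
	tex[0] = 7
	fmt.Println(a.pixels[0], b.pixels[0]) // 42 7
}
```

The aliasing in the second case is exactly why such a scheme needs a wake-up mechanism (the semaphore post mentioned above): the server must know when the client has finished writing into the shared buffer.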
From the article:
A lot of companies would never have admitted that the vision of convergence wasn't what people wanted. That's the sort of move that takes guts and an honest appraisal of what you're doing, what's working and what's not. The GNOME project has never displayed that kind of thinking. And as far as I can tell, it operates on nearly the opposite premise. It's too early to say, but I predict conflict down the road. Keep a bag of popcorn handy, I believe there will be plenty of fireworks to watch.
GNOME is particularly objectionable. Seems typical of the projects with heavy duty RedHat involvement these days. It's like they thought, let's take everything that's good about a desktop and minimise it or, preferably, throw it away altogether. God knows what all that bloaty code is for, but it's not giving me the desktop environment I want.
Erm, aren't they called firmware viruses?
I seem to recall Lenovo put something into some of their device driver firmware that would reinstall bloatware. Or something like that. Ok so that's not a Mac, but then Macs and PCs aren't so very different.
Neat idea, but this kind of concurrency problem was sidestepped altogether decades ago. Concurrent formulations such as the Actor Model and, more importantly, Communicating Sequential Processes are 1970s ideas. The latter in particular is highly relevant to anyone wanting code that executes concurrently and is provably free of deadlock and livelock problems, as well as having zero memory sharing errors. There's even a process calculus for it.
CSP was briefly fashionable back in the 1980s, early 1990s (Inmos's Transputers, Occam), but is now alive and well in languages such as Erlang, Rust, Go, Scala. Of those, Rust in particular looks really good (no runtime needed, ideal for all sorts of software and not just desktop applications like web browsers). I'm perverse, choosing to do CSP architectures in C/C++ (I have to have a library...).
This is clearly the way to go for future developments. Sticking with the old "let's share memory and guard it with a semaphore" approach is no faster to run, takes a lot longer to debug (even with tools like this from Facebook), and is prone to stinging you in the arse years down the line when some unexpected sharing issue finally occurs for the first time.
It is also inherently limiting when one wants to scale up a piece of software across a whole network of computers; Actor Model or CSP channels can be network connections; shared memory / semaphores cannot. Shared memory architectures fundamentally require an SMP computer; that's increasingly becoming a bottleneck in future CPU speed improvements; massive chunks of modern Intel CPUs, and especially AMD CPUs, are dedicated to synthesising an SMP environment from an underlying NUMA architecture.
Whereas CSP / Actor Model architectures are entirely happy with NUMA. A computer that is a pure NUMA machine would be a lot more power efficient (or faster, depending on how you want to adapt).
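The CSP style is easiest to see in Go, one of the languages mentioned above. A minimal sketch: the worker owns its data outright and communicates only over channels, so there is no shared mutable state to guard and nothing for a semaphore to protect.

```go
package main

import "fmt"

// CSP in miniature: no shared mutable state, no semaphores.
// squarer owns whatever flows through it; other goroutines talk
// to it only over channels.
func squarer(in <-chan int, out chan<- int) {
	for n := range in {
		out <- n * n
	}
	close(out) // signal completion to the consumer
}

func main() {
	in := make(chan int)
	out := make(chan int)
	go squarer(in, out)

	// The producer is another sequential process.
	go func() {
		for i := 1; i <= 4; i++ {
			in <- i
		}
		close(in)
	}()

	for sq := range out {
		fmt.Println(sq) // 1, 4, 9, 16 in order
	}
}
```

Because the channel is the only point of contact, the same design scales out naturally: replace the channel with a network connection and the producer and squarer can live on different machines, which is exactly the NUMA-friendliness argued for above.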
...I'm buying one. I was seriously tempted by a KeyOne, but this one does it for me.
You missed one important point on this - the plan is to assemble the C-Series for the American market at Airbus's factories in the US in order to try and avoid the American tariffs on the C-Series. It will be very interesting to see how that plays out.
Whilst the tariff is now a consideration, apparently Airbus and Bombardier were talking about a deal before the tariff was announced. Now, that's either excellent judgement on their part as to how the trade dispute would pan out, or there's more at stake than that.
The C-Series is such an excellent fit against what Airbus is already manufacturing and selling that the case for a deal between the two companies was pretty strong. Airbus had an effective gap in their catalogue (the planes they were offering in the class simply weren't selling). Bombardier had the right aircraft, with certification and excellent in-service reports, but lacked the ability to take it to the world and swamp the market. Put the two together, et voila! A very strong line up, and the manufacturing capacity and financial muscle to make it a world beater.
A consequence of the deal might be that they can sidestep the US tariffs. However, I think that what is more important is that the C-Series is now a serious contender in the world market. And the world market is far larger than the US market. Doing well in the USA would be nice of course, but the real prize (one now within their reach) is the global market. Win that, and losing out in America won't really matter at all. Win that, and the existing Bombardier and Shorts Brothers factories will be kept pretty busy (Airbus haven't got lines lying idle in Europe to soak away the work).
Also everyone is forgetting that the tariffs are yet to be imposed. That issue is in itself not settled until the new year, when the US government determines whether or not Boeing was "damaged" by the under-pricing the US government says they found.
All the nice things we can say about the neatness of an Airbus-Bombardier tie up can also be said about a hypothetical deal between Boeing and Bombardier, if not more so. Bombardier's design is clearly excellent, and Boeing are in dire need of an excellent design to compete in the single aisle market. Why oh why oh why were Boeing more focused on grinding Bombardier into the dust than on recognising the opportunity represented by a financially stressed but technically competent Bombardier? Pride? Over-confidence? These are dangerous traits.
Airbus have clearly sweet-talked Bombardier (and importantly the family shareholders who still have a lot of influence in the company) in a way that Boeing never even began to consider. Boeing's aggressive trade stance was probably the final straw that forced Bombardier (and the family, and other shareholders) into realising that the future lay in a deal, not in independence.
Now that the deal is announced, one has to conclude that the future of the design, and by extension the Bombardier company, employees, etc., could very well be far larger than they ever dared hope for. It's a case of a 50% slice of a 2000 airframe program being more valuable than 100% of a 500 airframe program. And given the quality of the design, there's no reason to suppose that it won't get to be that big over the coming decades.
Airbus's 60 Year Free Ride
Since February 1987 Airbus have not really had to touch the design of the A320 to keep it competitive. Only recently have they NEOised it. And now they've picked up a better design with lots of growth potential for $1.00. This will see them through for another 30 years, probably. This has got to count as the cheapest ever R&D budget spent in maintaining market share.
Boeing has had 30 years to come up with a 737 replacement design that would actually make Airbus sweat, but hasn't done so. This is a ridiculous, decades long failed strategy by Boeing. And now look what's happened. Airbus has taken another leap ahead for the price of a coffee.
Develop and Compete, or Die. Perhaps Boeing don't believe in Evolution?
The timing is significant. We're about 9 months from the Farnborough airshow; that is an ideal period of time in which to go to potential customers, show them the plans, and get a few sales lined up for announcement at the show. The deal between Airbus and Bombardier is itself not scheduled to close until H2 2018, but I don't think that'll matter.
Reportedly there's already been some hurried analysis by various fleet planners. There are probably a lot of operators out there who were tempted by the C-Series but nervous about Bombardier's ability to fulfil an order. Now that concern has all but gone away, and with Airbardier likely willing to let some early orders go through at knock-down prices, the C-Series is suddenly back on their radar scopes. There's real financial advantage for the early buyers, so I expect the phone lines will be a bit busy in the next 9 months.
WEP is broken, but I fear that it might now be better than WPA2! So far as I know it takes a little bit of effort to break WEP.
This flaw in WPA2 seems to be trivial (at least from the point of view of computational complexity) to exploit.
Oh the irony if the short term fix is to turn on WEP...
Noooo... Most manufacturers will use this as an excuse to push a new model out within the month!
The cynic in me points out that if that's what they do, they'd be having to repeat it after the standard finally gets fixed (for that is where the problem sits). And if I know anything, it's that standards don't get changed very quickly at all.
If you're referring to the transition from analogue to Freeview, I think that was done quite well.
Freeview was around for a long time before they finally switched off the analogue signal. And a basic Freeview box was pretty cheap (I think there were even some help to buy schemes for the disadvantaged). Plus undeniably it was a big improvement.
There's some people using 1930's TVs with a Freeview box. Not bad for backward compatibility (ok, they're using a scan converter too...).
The trouble with having done that is that the reasons to upgrade beyond that become significantly less compelling to the end users. Freeview is still Freeview, which is excellent, plus they've managed to sneak in a couple of HD channels. That's all been handled reasonably well.
And of course what they're doing in America is the equivalent of turning off Freeview altogether and starting from scratch. Doing that here would result in the Daily Mail exploding in indignation...
And Freescale had a dominant position in telephone exchange equipment with PowerQUICC. And there's still some niche users of their PowerPC range of CPUs who will want guaranteed supply (i.e. Uncle Sam, who has a way of insisting on these matters...).
NXP do a whole load of stuff that I can't see Qualcomm being interested in at all. Worrying times.
At least BB, imperfect though they are, have been reasonably good at getting Android patches out to their customers. Some other manufacturers just don't seem to bother.
So far as USPs are concerned, that alone is about the only thing that would ever convince me to give Android a go.
BlackBerry's Hub is excellent, by far the best messaging client out there. If it's a bodge, it's only because Android is too lame to allow Hub to be integrated into the UI as deeply as it was on BB10 (where it is truly excellent).
Well, what is a BlackBerry then? A Z30?
I have one of those, it's excellent as a phone and a messaging device, though even that is being eroded by the lack of a lot of social media apps. For the years I've used BB10, the thing that really makes a BlackBerry for me is their Message Hub. Which you can get for any Android phone.
Trouble is a lot of Androids I've played with are, well, yeeeuurrrkk! Especially Samsungs. Android's approach to app permissions is a real turn off; Nougat fixes that, which means a Keyone or newer (they've not put Nougat onto a DTEK60 yet).
I wish BlackBerry would do an iOS version of Hub, because iOS's own messaging (which I'm currently using) is shit.
The Keyone is pretty good, based on people I know who use them. But I don't want a keyboard. This new phone looks ideal, though obviously not as good as an up to date BB10 phone with a rich and rewarding app ecosystem.
If you want an interesting phone, one of the guys who was involved in the fantastic Psion 5 is now involved in some effort to do a modernised version of that. Running Android (boo), but even so it could be interesting.
Unfortunately, it seems that the reason the hardware is "vulnerable" in the first place is because the operating margins of SDRAM are pared so far back to give us what we also want: high speed, low power memory. AFAIK there's no real hardware fix for this; high speed, higher power memory doesn't work (the speed is achieved in part thanks to the lower operating voltage).
So yes, we can have memory resilient to rowhammer attacks, but it's likely that this would also be slower; and that's a tough marketing proposition at the moment. ECC memory helps somewhat - it becomes harder to exploit the physical effect undetected - but it is still vulnerable to a denial-of-service style attack (the memory can still be changed, but now you have memory faults cropping up and a crashed computer).
Stop Executing Everyone Else's Code
Yes, that changes the web a lot - it means server side execution is all that is "safe" - but ultimately it's the only way to guarantee that exploitative software does not get run on our vulnerable hardware.
New York to London by rocket? Ok it's a short flight time, but the journey time will be terrible. First get to the boat. Then motor out to the rocket. Then put on a spacesuit. Then get in the rocket, shut the door. Then complete all the pre launch checks. Then whooosh bang up into the sky and back down again. And then the reverse process. I reckon the whole thing could be slower than flying.
Concorde was very fast of course, but one of the lesser known aspects of Concorde travel was the ground arrangements. They had a dedicated 10 minute check in (none of this 3 hours early nonsense. Though of course they had a lovely lounge if one wished to arrive early). They had dedicated baggage, customs and immigration queues on arrival. That saved about 3 hours of airport time off the journey too. So whilst Concorde itself saved about 3 hours, the ground service lopped another 3-ish hours off the time as well, making the trip about 6 hours quicker overall.
BA were (still are) running a similar service from London City. Ok it was subsonic, but overall still 3, 4 hours quicker than an ordinary flight from London Heathrow (City airport is very handily placed). The new C-Series from Bombardier is very interesting because it can manage London City to New York without having to refuel at Shannon in Ireland on the way, saving another hour or so.
I reckon Musk's half hour rocket would take a ton of time...
Blackberries have, like everything else, been made in China for a long time. Their trick is for the OS to cryptographically verify parts of the hardware and the boot loader sequence. The OS (certainly their classic OS, BB10) would refuse to run if it didn't like what it saw.
That's what set Blackberry apart in the eyes of government users; they'd thought about checking the integrity of the hardware and firmware. A bit like a PC's Secure Boot today, but for a phone. It makes tampering very difficult.
Harmonisation of tax rules is effectively handing over full sovereignty. You can no longer spend money on what you want because you cannot choose how much to collect.
Whilst the EU technocrats might favour it, there's not been one single national government in Europe that's expressed a view in favour of such a move (AFAIK). It is as much a political union as anything else, and no national prime minister or president seems keen on giving up their control entirely...
The tax would be levied on the customers of Google, Facebook, etc, not on Google or Facebook themselves.
So if you're a French car dealer looking to advertise to French customers on Google, some money has to flow from your bank account to pay for it. That flow can be taxed. The actual destination for the cash is almost irrelevant.
Evading such a tax would be hard; you can't stop the tax authorities looking at the ads you've bought and reaching conclusions about your company tax returns.
The usability or lack thereof of a car's infotainment software is largely unrelated to the kernel underneath. Any modern kernel running on a decent SoC should be capable of delivering an excellent experience. It's a matter of supreme unimportance if there is a Linux, QNX, BSD or NT kernel under it all.
Someone over at Ars Technica did a review of ICE systems, and if my memory is correct the QNX ones generally had smoother animations and smoother transitions. Having a hard RTOS certainly helps prevent any jittery movements.
I think one of the big reasons why QNX is popular in that space is that there are good support teams that can help get BSPs right, and it's what a lot of other automotive places use too. It means less time setting up BSPs and hardware, more time writing ICE software, and a commonality across the industry which helps with staffing.
There are companies that do that (BSPs, support) with Linux too, but they don't own the OS themselves and so are not in control of its technical development. Linus could wake up one day with a great idea, and there could be a big fork.
If I've read this right, there's a camera on the doorbell that is operating all the time, performing facial recognition and reporting visitors back to you.
That's a CCTV system. And you'd have to put up a sign saying so to be able to legally use it in the UK.
I currently have a Nest thermostat. I ditched its schedule learning as soon as the weather started turning cold and the heating started coming on at weird times of the day. Useless. Now it's just a glorified way of switching on the heating before I set off from the airport to come home.
None of this Stuff Is Going to Work Whilst we Build Houses this Way
The need for a low voltage wire was interesting. It betrays a wider problem in the IoT space; power. Almost everything I've tried has been severely hampered by being battery powered. Radiator valves, burglar alarm sensors, you name it. They would all work a lot better if they could be supplied with 12VDC from a mains powered source.
None of this Stuff is Going to Work Whilst we Write Software this Way
Of the stuff I have that is mains powered, it's then let down by the software. Frankly, it's all a bit shit. Burglar alarms, power switches; it's all pretty rubbish in some way.
The one thing about Nest is that their software is reasonably OK, apart from the auto-scheduling. But on the whole the IoT world just doesn't understand software, and what it's got to do.
If you take the phrase:
“If an autonomous system acts to avoid a group of school children but then kills a single adult, did the system fail or perform well?”
and replace "autonomous system" with "human driver", what you almost certainly have is a case of Driving Without Due Care and Attention, or Causing Death by Reckless Driving. It's an open and shut case of driving too fast for the conditions. You're supposed to be able to anticipate that the group of school kids might move so as to be in one's way. If they already were in the way (say, crossing the road on a blind corner), the driver has no defence whatsoever.
So if an autonomous system does it, the manufacturer of that system has failed to build in enough anticipation into the machine's abilities. If it happens just once, all our self driving cars will then start crawling through town at a snail's pace, just in case.
Er yes, what Pete 2 said.
The difference is that I think humans are far better at anticipating what other humans will do than any machine will ever be. A group of well behaved school kids walking tidily down the pavement emits a completely different set of warning signs to a bunch of kids mucking about. You're wary of the former; you're downright paranoid about the latter.
...and what does he think Java, .Net, Linux, WindowsNT kernel, web servers, etc are all written in?!
No, I won't accept this. Slurs on Word or Excel are unfair, unwarranted, and ridiculous. Such excellent pieces of code should be loved by all, marvels of usability and usefulness.
At least, that's how they are compared to Visio.
Previous efforts have been, well, how can I put it, sluggish?
It kinda cuts both ways.
Intel build in a ME, don't properly tell anyone about it or what it can do, cock it up badly, and we're all left with machines we're wondering whether we can trust or not. And unbeknownst to us (until now), to placate some TLA there's a way of turning it off.
On the other hand there's a bunch of TLAs somewhere who have presumably set this mysterious bit in some config file, and who are perhaps more vulnerable than they anticipated. It turns out that a simple config change can turn the whole damned thing back on again. So they're asking themselves: did our techs really get the config right, and is the config still right?
I don't think Intel have done anyone at all any favours whatsoever.
For some applications, that's not entirely true any more. Multi-level cells in flash are not just 1 or 0; there are several levels in between.
In asynchronous (i.e. clock-less) electronics there's been some different approaches too: 1, 0, or not sure yet...
Binary logic dominates in CPUs because with volts/no volts representing 1 and 0, there's practically no calibration to incorporate into a manufacturing process. With multi-level logic, e.g. 1, 2/3, 1/3, 0 (2 bits) suddenly it all becomes a lot harder to build.
However, in communications multi-level representation of data is pretty common. QAM (quadrature amplitude modulation, see Wikipedia) involves multiple signal amplitudes, a far remove from simple binary on/off keying. In communications such schemes are used to get more data through a given signal bandwidth. It's the equivalent of storing more than 1 bit in a single memory cell (which is what MLC Flash is doing).
Of course, the reason such signalling isn't used on, say, the memory bus between RAM and CPU is because it takes a lot of electronics to generate and receive such a signal; not good for speed / power. RAM buses these days are complicated enough, what with their propagation de-skew delay lines, more than 1 bit on the PCB trace at a time, etc. But complex modulations are used on links like Thunderbolt, USB3, Ethernet. There's way more than one bit on the wire at any single point in time.
Really these days most high speed buses inside and outside of computers are RF data links, not just a single voltage level on a PCB trace.
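To put rough numbers on the "more bits per symbol" idea above, here's a minimal sketch of the arithmetic (the function name is made up for illustration): a scheme that can distinguish 2^n separate levels or constellation points carries n bits per symbol, whether that's an MLC flash cell or a QAM constellation.

```rust
// Bits carried per symbol (or per flash cell) when a scheme can
// distinguish `levels` separate states: log2(levels).
// `bits_per_symbol` is a hypothetical name for illustration.
fn bits_per_symbol(levels: u32) -> u32 {
    assert!(levels.is_power_of_two(), "practical schemes use power-of-two level counts");
    levels.ilog2()
}

fn main() {
    println!("{}", bits_per_symbol(2));  // on/off keying, or SLC flash: 1 bit
    println!("{}", bits_per_symbol(4));  // 4-level MLC flash cell: 2 bits
    println!("{}", bits_per_symbol(16)); // 16-QAM: 4 bits per symbol
    println!("{}", bits_per_symbol(64)); // 64-QAM: 6 bits per symbol
}
```

So a 64-QAM link moves six times the data of on/off keying at the same symbol rate, which is the same trade an MLC cell makes against a single-level cell.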
Ah yes, the passing of Properly Built Kit, something to mourn indeed.
Though with aircraft, the limiting factor is primarily fatigue life. Commercial airliners have to be built strong otherwise they'd be useless in service. There's A320s and 737s with way more cycles on them than this 747; they're strong. They're often stronger than military aircraft, which tend to do far less flying.
With the 787 and A350, carbon fibre is the primary construction material. Provided it's not abused, this looks like it will last forever. Quite literally. No fatigue. Airlines buying these today will likely never replace them (at least not until something radically better comes along). Upgrades and refurbs, certainly, but the airframes themselves should last forever. Boeing and Airbus are building aircraft that will mean they're not building so many in 20 years' time.
Today's 787s and A350s will become our grandkids' older-than-us engineering marvels.
A good ROI, but also a harbinger of enormous problems for GE, RR, NASA, USAF, and a few other niche operators of the 747. For some jobs, you really do need 4 engines.
GE and Rolls-Royce absolutely need a 4-engine aircraft for engine testing. Everyone has been using the 747 because it is ideal. Yet with today's trend for large twin jets, one day there will be no large 4-jet airframes left flying. Even the A380 will one day stop operating. So how then would GE and RR flight test an engine?
NASA uses an old 747 as a flying telescope, SOFIA. This is a remarkable piece of kit, extremely useful for a lot of astronomers across the world. The higher it flies, the better it works. A 747 can fly surprisingly high, thanks in part to having 4 engines (lots of surplus power). I don't think that any modern 2 jet airliner gets anywhere near as high, so SOFIA will one day be diminished.
Air Force One is supposed to have 4 jets for all sorts of reasons, mostly related to the USA's nuclear chain of command.
Anyway, there's a few operators for whom 4 engines is an imperative, who have been able to pick up 747s (and maybe A340 and A380s) cheaply and easily, and who have gone on to have a truly beneficial impact on our lives (please feel free to reserve judgement about the merits of AF1). When we stop flying 4 engine aircraft commercially, those niche operators are going to be in difficulty; where's their next one coming from?
Anyone got a plan for that?
And it may be that they get their way. If all bespoke device drivers get ported into Project Treble, then Google can update literally everything else on the phone, including the kernel, without any input from the manufacturer. Or at least, that's the idea.
I think people have gotten so used to Android being unupgradable that most people have little idea that it could be different. It will be interesting to see how it goes.
It will place a lot of pressure on those manufacturers that doctor the user interface on their phones. They will still have to do a ton of work to move up OS versions. Those who just stick to the stock Android experience and who put all their drivers into Project Treble will be making phones that are more upgradable by Google. Word will get around.
Of course, they might all just decide to ignore Project Treble and keep things as they are. Cartel...
That's all very well, but you'd better turn off caller ID forwarding on the phone calls you make. Call someone else's Android and Google are using caller ID to track who you're calling. And use cash in the shops. And if you email or call someone else who has an Android phone and you are in their contacts list, Google know who you are and where you live.
It's basically impossible to avoid being profiled by Google.
No, I don't like it either.
I reckon that over-broadly worded legislation, straying well beyond centuries-old legal limits, parading ministerial authority as due process of law, and under-scrutinised by Parliament, is a dangerous thing.
I also reckon that the notion of a warrant served on an individual and binding on a telecoms provider may possibly have been intended to allow someone to collar a bloke driving an Openreach van and tell him, as a representative of a telecoms provider, to put a tap on a given line without going through too much paperwork. I further reckon that even if that's the case, it's open to misuse far beyond that.
Hang on a minute. A warrant is not and never has been an order, instruction, something compelling. It is permission to act in a way that would otherwise be illegal, those actions being judged a necessity to advance a specific investigation by a minister and/or judge and/or someone else duly empowered to make such decisions. It is not served on anyone, it is given to someone.
The dictionary definition of a warrant is "a document issued by a legal or government official authorizing the police or another body to make an arrest, search premises, or carry out some other action relating to the administration of justice". The key word is authorise; there's no compulsion.
An ordinary policeman who wants to enter a property needs a warrant, but once it has been obtained they're not actually obliged to bust down the relevant door. AFAIK the only way someone can be compelled to do anything is by direct order from a high court judge (or greater), but that's a very different beast to a warrant.
If a warrant were an irresistible order, there'd be no need to specifically mention telecoms companies in the acts. They are mentioned because a warrant is simply permission, not an order, so the act has to specifically state that telecoms companies have to help out with the warrant.
I don't see why anyone would collar a wireman out on the streets - I'm pretty sure that the networks are more nationally reachable than that. And, strictly speaking, a warrant is basically the only paperwork that someone needs to take an action that would otherwise be illegal. And it's not going to be carte blanche to do anything; it's going to be very specific.
I think that this whole thing has come about because whats-his-name has misunderstood what a warrant actually is. If one considers the act in the context of a warrant being permission (specifically permission for someone who can and wants to assist to actually provide that assistance), and not an order, the wording makes a lot more sense and is far less 1984 than whats-his-name has been making out.
Hmm, well, given that the act makes specific provisions for people to get a hearing, I think European courts would be the last port of call.
I think another way of looking at this is that being invited / asked is an opportunity to help get the job done properly.
Someone asked to help who then refuses to help can't very well then go on to criticise them for how they've gone about doing the job without that help.
Warrant's True Purpose
I think that one thing everyone has forgotten is that for anyone to do anything investigatory, they need the legal top cover offered by the warrant.
Without the warrant an expert cannot legally assist the authorities in their investigatory actions, even if they wanted to. Without the warrant the expert would likely be breaking the law. I think that the act is saying that if an expert is asked, and volunteers, they are protected from prosecution for things they do within the terms of the warrant. Important to have...
A telecoms company cannot just tap a phone conversation that's running over their wires; that's illegal. They themselves need the warrant to carry out what has been requested.
It's the same for the authorities; they themselves cannot act without a warrant saying they can act (which has always been the case).
That's my interpretation of what the act is really getting at. It's likely just an unfortunate turn of phrase with an unforeseen interpretation.
The fact that the act says only a telecoms company is obliged to respond to a warrant gives weight, I think, to this line of thought.
What do you reckon?
As far as licensing goes, CDDL was deliberately designed to be incompatible with GPL.
As far as the wider picture is concerned, that was a good thing. More OSes have been able to make use of it than if it had been GPL.
Canonical has had their lawyers go over the CDDL terms and they think they can get around the problems by distributing the ZFS module as a binary and distributing the source code separately. Before you jump in and say that "but the GPL says that etc., etc.", the problem isn't with the GPL, the problem is with CDDL. Their lawyers think that this still leaves some technical violations, but that Oracle would have difficulty bringing a case to court based on them.
Hang on, I thought the legal issue being considered by Canonical was whether they could distribute Linux with a ZFS kernel module. AFAIK (IANAL) there's no problem for Canonical from Oracle's direction in distributing OpenZFS under the CDDL license - the license is pretty clear, and FreeBSD / NetBSD seem to have no problems.
I suppose the issue is whether or not including ZFS as a kernel module in Linux counts as distributing ZFS under GPL, not CDDL (due to GPL's claim to extend to all linked code). However, so long as Canonical aren't actually changing the CDDL license statement in the source code, they clearly wouldn't be breaking CDDL. They will be upsetting Richard Stallman though...
PS: that's an outcome that I'm not so keen on, to put it mildly...
I wouldn't dismiss the idea of RedHat trying to steal Linux. It is always good when you can lock in customers and get rid of competition.
Quite. If enough kernel devs end up being RedHat employees, they own Linux. Linus is good at driving people away from the project, leaving it vulnerable to a group with a plan and the money and the motivation to dominate the project.
If that happened, and they then forked Linux and went their own way, everyone else has to follow. "You wanna desktop, use our fork. Use Linus's original if you want but there's no desktop for you". It's worked with systemd, it would work with the kernel too. There's simply not enough people who care enough about the specifics of the lower layer stuff to resist it. Most people just want a working system and don't give a damn if the code itself was personally blessed by Linus.
The longer Linus keeps ranting at hardworking kernel devs, the more likely we'll end up with Pottering being in charge of the only fork of Linux one can actually use.
I think you're missing the point. Crashes are good; they're Rust's way of pointing out bugs in your code. C is nasty because unless you do something spectacular enough to cause a segfault or similar, your code will run quite happily, causing mayhem that you might learn about during development, or you might not.
Rust itself is still evolving, but is a very good systems language. It's pretty hard to code for, because it doesn't let you get away with mistakes. That's its strength. You write junk code, you're going to be told all about it straight away.
Unlike C, where bugs lie dormant for decades undiscovered.
If ISO standardises Rust, it will become the language of choice for low level stuff like kernels.
I'm a long time C programmer, I love it to bits, but Rust is the writing on the wall. The speed with which Redox has gone from nothing to a running desktop is hugely impressive. The fact that they could bash out a whole new kernel very quickly, and apparently it's pretty bomb proof already, shows that it's a language where you can concentrate on ideas instead of worrying about memory all the time.
Large C projects like Linux will be seen as just too demanding of resources. There's a lot of people spending a lot of time chasing down problems in Linux that simply don't exist if Rust was used instead.
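As a minimal sketch of the "crashes are good" point (the helper name here is made up for illustration): in Rust an out-of-bounds read is either a `None` you're forced to handle, or an immediate, well-defined panic, rather than the silent read of adjacent memory you'd get from C.

```rust
// Checked access: slice::get does the bounds check for you and returns
// None past the end, so the caller must handle the failure explicitly.
// `read_byte` is a hypothetical helper name for illustration.
fn read_byte(buf: &[u8], i: usize) -> Option<u8> {
    buf.get(i).copied()
}

fn main() {
    let buf = vec![10u8, 20, 30];
    assert_eq!(read_byte(&buf, 1), Some(20));
    assert_eq!(read_byte(&buf, 7), None); // C's buf[7] would happily return garbage

    // Direct indexing past the end doesn't corrupt memory either: it
    // panics on the spot - the "crash" that points straight at the bug.
    let panicked = std::panic::catch_unwind(|| buf[7]).is_err();
    assert!(panicked);
}
```

The same bug in C is an out-of-bounds read that may go unnoticed for years; here it can't even get past the first test run.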
...if GPS does go tits up and the world suddenly finds itself unable to find out where or when anything is, it'll be the: