Re: Gamers largely unaffected by KPTI?
Pure guess here - it may be just one single kernel transition to update the world per frame. Which wouldn't be so bad (50 per second?).
At the moment it's not too bad. The built-in SatNav is essentially the Android version of Garmin's, so it's actually quite good. It's a bit cheap in that, even if it gets an Internet connection through your phone, it won't fetch traffic updates.
Having found that the ICE is basically an Android tablet I sniffed around inside. There's a browser. So I tried downloading and installing the Amazon app store. Which nearly worked but the version of Android underneath is so ancient it refused to install.
In theory you could get an .apk file on a USB stick and install that.
Currently the absolute best SatNav is a modern Internet connected TomTom. They're fantastic driving tools. They do a few things that Google, Waze etc. just don't do, and it makes a big difference to the driving task. They have a decent Web back end too, so you can do pre-drive route planning very effectively. You can even get your route changed over the Internet whilst you're driving. So one's partner could update where you're headed too, or the route you'll take, whilst you're driving it.
I have one of these with built in cellular, world maps (well, to the extent TomTom have maps anyway, not Japan which is annoying). As a driving aid it's unsurpassed.
BMW's built-in stuff is poor by comparison. It works, kinda, but you're always left thinking that a TomTom is better. BMW share this SatNav with a bunch of other European car manufacturers. They'd all be better off building in a TomTom.
It depends on what's going on in a system. If it's IO heavy, this could be quite bad (lots of interaction with the kernel). If it's compute heavy, possibly this isn't too bad. And it also depends on whose code is running. It's only a problem if you run someone else's code arbitrarily on one's computer.
For Google this isn't too bad. The bulk of their machines run Google's own code, dishing up search results to Internet clients. For the search, maps and Gmail servers Google could take the risk and skip the patches, because they're not running arbitrary code. That's a good thing for them, because the bulk of Google's costs is energy.
For outfits running other people's code (Amazon?) this could be bad, because they're all about running other people's code for them. So they need the patches, and it will slow them down. And a lot of their cost is energy, so their costs are going to rise.
For the rest of us mere users our computers are going to be slower and therefore use more energy for the same tasks.
Who knows. With Linux of course we can tell. With Windows, benchmarks are possibly our friend if MS are keeping schtum about the matter.
From a security point of view it would be better to leave things as they are if the hardware is not affected; better to be running mature code than what looks like a major update put together in a big hurry.
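To make the IO-heavy versus compute-heavy point concrete, here's a rough Python sketch of my own (a toy microbenchmark, not a proper measurement): the syscall-heavy loop pays the kernel-transition tax on every iteration, which is exactly where KPTI adds cost; the compute-heavy loop stays in user space.

```python
import os
import time

def time_it(fn, n=100_000):
    """Time n calls of fn and return elapsed seconds."""
    t0 = time.perf_counter()
    for _ in range(n):
        fn()
    return time.perf_counter() - t0

# Syscall-heavy: each call crosses the user/kernel boundary, so
# KPTI's extra page-table switch is paid on every iteration.
syscall_heavy = time_it(os.getpid)

# Compute-heavy: stays entirely in user space, so KPTI barely matters.
x = 0
def compute():
    global x
    x = (x * 31 + 7) % 1_000_003

compute_heavy = time_it(compute)

print(f"syscall-heavy: {syscall_heavy:.3f}s  compute-heavy: {compute_heavy:.3f}s")
```

Run it on a patched and an unpatched kernel (or boot with `pti=off`) and only the first number should move much - assuming, of course, that os.getpid actually issues a syscall on your libc rather than returning a cached value.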
Perens is not just a.n.other person. He’s been a fairly high profile expert witness in some court cases related to open source licenses.
His utterances in such matters can therefore be considered to have been said with deliberate intent, rather than the spurious ill judged mumblings of a commentard like me. Thus the consequences, should GR ultimately win, would be more severe. I don’t know why he’s bothering to take the risk. If as many people argue GR is an irrelevance, why stick one's neck out?
Indeed, why raise the whole spectre of the corner cases of GPL2 (which is what GR are relying on) when really we'd all rather pretend that GPL2 is fit for the intent of projects like the Linux kernel, when actually it comes up a bit short? If GR did win a GPL2 enforcement case, what's to stop a company like RedHat doing the same thing?
Well if history is anything to go by, the South Koreans will have a nation wide 5G network up and running within the week. Those folk don't hang around on these things.
Alec Guinness would be rolling his eyes...
So the question is, if the battery is replaced does the OS take away the deliberate slow down?
Or is that an unanswered question, the answer to which might be a bit rubbish and Apple don’t really want to answer it?
...for the boffins who constantly remind us that, whilst we do know quite a lot, really the universe is never going to run out of surprises.
Plus top marks for some first rate science, engineering, flying and teamwork.
I mention flying because it can’t be trivial to intercept the shadow of a rock that’s a few billion miles away. It’s not like there is a dark shadow racing across the ground to aim for. Plus it’s a really valuable trick that no other telescope in the world can do, so keeping it funded seems quite important!
Altogether now, hip hip?
And it seems that Echo has been a bit of a success, whilst whatever it is that Google are offering gets a "Meh" from the market.
That’s based on the fact that I know several people with the Echo, and no one at all with Google’s (ie a very scientific and objective measurement...).
So Amazon are winning it and Google aren't. Kinda makes sense; you can't really buy stuff from Google like you can from Amazon. Amazon has stuff to sell; Google doesn't.
Google are like “yes you can buy from us, well really it’s a targeted search system not a buying system, we’ll cream off the top but any purchase is between you and the vendor leave us out of it don’t come to us when they’ve pinched your money and posted you a picture of a cabbage instead of the 1TB SSD you ordered no we don’t have our own distribution centres your items will all be delivered separately”.
Google are trying to get to the top of the market the lazy way, i.e. not having a physical presence anywhere other than data centres. It shows. A mate has a Pixel phone and cracked the glass, but there is no (or was no, it may have changed now) spares distribution system or repair network he can go to to get it fixed (it'll probably be a market stall job). If it were Apple or Samsung there'd be no problem. Both are in the business of selling hardware and have the supporting infrastructure to suit.
AMD's encryption for VM guests is very interesting. You can trust that neither the host nor other VMs can ever see inside your VM (if you accept the paperwork). That is quite a distinguishing feature, or so I'd have thought. Can't get that from an Intel host.
Yes, but what Alan Turing broke was the newer Enigma with the plug board, which had stumped the Poles. That was a very clever piece of thinking on his part.
The history has been firmly established for many decades now, and the Poles' hugely important role in the endeavour has been widely acknowledged for a very long time.
On the shoulders of giants and all that.
Going part way as you suggest - paper but machine countable - is a plausible option.
However the benefit of manually counted paper votes is that the result is harder to argue about; it gives stronger attestation of the result. If a machine count were contested you'd then have to count it manually anyway; that takes a lot of organisation and time to do if unprepared, which is likely unacceptable in such circumstances. May as well be prepared, so why not do a manual count in the first instance...
At the end of the day it's all about perceptions. It may be acceptable to a population simply to know that there is a permanent paper record, that a manual count could be done if required, and that individuals can verify that the vote they cast is recorded on their piece of paper. Personally speaking I'd be very interested in the design of the counting machine, because that's the place where something nefarious would be attempted.
Seems like we're heading back towards lugging laptops around...
"It requires physical access so it's not a vulnerability." It doesn't matter how often I hear this one, it makes me laugh. Somebody with physical access can access all your data and that's not a vulnerability? What exactly do you consider a vulnerability then?
The article has been updated; the trick works from the command line too. So any application that an attacker can get run on the computer can get itself root privileges. So whilst there is no remote vulnerability, it's only one successful social engineering attack away from that.
Pretty dangerous I think, and that alone justifies the early and global dissemination of the news. Leaving this one to fester in private would have left all users everywhere very vulnerable to malicious software.
Is it exploitable over a remote desktop connection? That would be worse.
According to the update to the article it can be done on the command line too. So it's not vulnerable to a remote attack unless the perpetrator can get something run on the computer first (a malicious but otherwise innocuous-looking app, etc). A phishing attack might open the door to that.
I have to say that between Apple and Intel we're seeing some stinking cock-ups in recent times. It's almost funny. All we need now is for Windows or Linux to join in and we may as well throw every single computer on the planet into the bin. Apart from the ones running Solaris.
Yes, but will it run Crysis?
Or rather, the routes used by the gas carriers from Qatar to the UK aren't being watched over.
If that traffic were stopped we're in for a chilly winter here in Blighty.
I honestly think I need to stockpile a few iPhone SEs for when my 5S finally dies (3 years old and no reason to replace it)
I see a lot of SEs around here. Lots of pluses - cheap, they work, they're not burdened with pointless frippery, small, battery life is ok. I have one at the moment. The OS / UI sucks, but I don't really care any more.
Probably getting a BB Motion - monster battery life. It's Android of course so that's another horrid UI...
You say Blackberry, but I hear that my Priv will soon stop getting patches.
Maybe, but it is 2 years old now. To be talking about a cessation of patches on a 2 year old Android is nigh on unprecedented.
Most other Android phones seemingly drop off the manufacturer's radar after 6 months...
IPhone is different of course. BlackBerry is still sporadically updating BB10.
Hopefully the situation with Android will improve, with Project Treble in Oreo. For those manufacturers who don't put a thick skin on top of Stock Android, staying patched through Google's channels should become easier...
With this approach to IT security, and everything else, what do we think their attitude to bugs in their self driving car software is going to be? Reassuringly trustworthy? I think not...
So I won't be getting inside one.
Which OS is stuck on x86?
Er, OS X?
Windows has grown an Arm variant quite recently, Linux is on everything, Solaris comes in Sparc flavours, and all the embedded OSes work on everything else too.
I suspect it started off as a platform on which all the power management code could be run. The idea was that with the CPU looking after itself (power settings, cooling, voltages, clock frequencies, etc), you then wouldn't have to put all that code into the main operating system.
This was sensible, given that getting all that hardware management wrong could fry the silicon to a crisp. Offloading it to a separate microcontroller with a fixed binary blob meant that Microsoft, the Linux community, Apple, and every OS developer didn't have to do it themselves and get it right.
Then the feature creep started.
I'm sure that Intel's intentions were perfectly harmless. Being able to manage a server remotely like that (mount ISO images, see the console; all sorts of useful admin things can be done from afar) is incredibly useful. Just a shame they made a complete mess of it.
To be honest I can't see a way of implementing remote management of that sort without having an ME CPU bolted on the side with quite a lot of low level access. Though I don't see why that should need the ability to access all physical RAM, all Ethernet traffic, etc.
Whitepines beat me to it.
Yes, OpenPower seems to me to be a very viable way to go. The CPU is genuinely the Central Processing Unit.
Who knows. Perhaps NSA saw what Intel was up to and simply decided to let them get on with it, knowing that they'd fsck it up badly to NSA's advantage.
Why bother coercing / cajoling Intel into slipping in a hidden backdoor when you know they'll build in aircraft hangar sized doors through sheer incompetence... So long as Intel stick to this idea of an ME, there's code there that will likely have flaws.
Raptor Engineering are up to something interesting with OpenPower. Basically, with the Power9 CPU from IBM being "open source", they're mounting an effort to build a completely open source computer (all the way down to the silicon design, board schematics, firmware, and of course the OS + software stack on top). It's all there for one's inspection.
No magic closed source firmware / ME there.
This is likely part of an eventual Europe-wide reconsideration of OTT services as telcos. That includes Apple (FaceTime), WhatsApp (everything), Facebook (Messenger), Google (surely they have a message service somewhere, but I don't bother learning the names because they keep throwing them away), Instagram, Line, Snapchat, BBM, etc.
If that happens then the newly anointed Telcos will have to strike a balance between complying with LI laws and their current marketing / public positions vis a vis "privacy". Making a big fuss about privacy now may suit the public mood, but may put them out of business later when their privacy conscious users flee once they introduce LI systems.
Withdrawing from lil ol Belgium is one thing, from the whole of Europe dents the bottom line quite a bit. Belgium has simply set a precedent...
Skype's original peer to peer architecture is of course highly resistant to this kind of thing. An open source equivalent with no corporate backer would be very difficult to intercept. But there's no money in it for anyone, so no one organisation with sufficient marketing clout will ever promote such a thing.
Stock-ish. They run the GR-Security Linux kernel. That's the one which is causing a certain amount of friction between Linux/GPL purists and GR (who serve people like BlackBerry that simply want an OS without a long list of known vulnerabilities).
Couple that with a secured boot loader, and it's still pretty solid, and they may have been able to carry over some of their cryptographic accreditations too (or at least be using the same libraries). AFAIK no one has managed to root BlackBerry's version of Android yet.
Their soft keyboard is very good (in my humble opinion). That's on their Android.
Hub is pretty darned good - though I think the deeper integration that was achieved on BB10 will be something that I'll miss.
Otherwise it seems to be a pretty stock Android experience. That is a good thing - it's easier to keep up with the deltas between Android versions.
You may already know of the following - this'll serve as a record of what we once had as much as anything else...
BlackBerry Travel is a thing not carried over; in fact it's gone from BB10 too. It was a BB-branded front end for Worldmate, and for the seriously busy traveller it was very good; sorted out your flights, hire car and hotels for you, and coordinated with your colleagues too. Many a busy traveller swore by BB Travel for, well, a decade or more.
Worldmate has shut up shop rather than try to compete against Google (who, as is typical, have waded into the travel market with an inferior but ubiquitous effort; try booking hotels, flights and a hire car all in one smooth action with Google...).
Another thing that has gone, probably for good alas, is BlackBerry Balance. On BB10 this was effectively a multi-level security system, and the neatest solution to the BYOD problem I've ever seen - significantly better than Samsung KNOX. It was perfect for keeping both the user and the company happy. It suffered from being a concept that was pretty hard to grasp, and being BB10-only meant that there wasn't an Android / iOS equivalent to educate the world. There's very little possibility of doing something as rigorously developed as Balance on top of Linux, or iOS.
B, b, bu, bu, but it's so beautiful, it must be perfect, mustn't it?????
I got fed up of Google's poor search long ago. It's woeful. There's little point looking for specialised information these days. Quite often I want an exact string match; can't be done any more. Alta Vista, Alta Vista, my kingdom for an Alta Vista!
So I use Bing instead. No, it's no better, but at least I'm not feeding Google with fake confidence.
There's also a lot of aerodynamic subtlety to the shape of Concorde's wing that was less obvious back then than today. And the wing was also, I think, quite difficult to manufacture.
So if the Soviets did have copies of the drawings, it might not have been readily apparent why the shape was how it was. So perhaps the Tu-144 came out a bit more flat-plated than Concorde owing to a lack of understanding of Concorde's shape, and a reluctance to simply copy it verbatim (if they had the drawings, that is). Still, to get the Tu-144 into the sky was quite an achievement, even if it wasn't totally successful.
Delta wings are disliked by some aviation design communities. They can suffer from a lack of controllability at slow speed (the pitched-up wing masks the control surfaces at the rear from the airflow, so no control). The fix is canards. The Americans went off deltas some time ago; the SR-71/A-12, B-58, F-106, etc. are all quite old. In contrast the British (Javelin, Vulcan, Concorde, Typhoon) and French (endless Mirages, Rafale, Concorde) liked delta wings. Concorde also had weird drag/speed characteristics; as the speed bled off and the aircraft was pitched up to maintain altitude, the thrust would have to be increased; tricky stuff.
For all things techno-geeky about Concorde, it's well worth ploughing through this extensive, multi-year thread on PPRuNe.
Good luck to Boom. It'd be nice to see a supersonic champagne tasting session once more (i.e. very civilised). The problem is engines, hopefully they'll be allowed some military power units; very expensive to develop from scratch...
Unless the code is released under GPL2, it cannot be integrated into the Linux source code. There's been enough fuss already about using it as a separately provided kernel module.
The problem I see with that is that once it's released under GPL2, will it continue to be released under the more permissive license that helps out, for example, FreeBSD? It would be a real pity if ZFS went to GPL2, and GPL2 alone, because it would seriously screw things up for the people who are already using it elsewhere.
It could be multi-licensed of course, but license fragmentation can easily lead to source code fragmentation too, unless absolutely every contributor is committed to releasing their efforts under multiple licenses.
I too have never found X to be a problem. GTK is a miserable pile of ordure.
I think if one is doing a lot of work that involves a load of texture maps in 3D, the pipe does become a problem. Hence moving away from that architecture. However...
Of course the only reason a pipe is a problem in that circumstance is that it inherently involves memory copying, a lot of context switches back and forth (especially with large amounts of data flowing), and so forth. However modern Intel CPUs have features that would completely eliminate that problem: internal DMA engines. A pipe-like facility could be implemented around internal DMA, which would be lightning fast and would take far less CPU time to shift data from a client application (e.g. texture maps) into a display server's internals. If mailbox semaphores are possible (I don't know whether they are on an Intel platform) the DMA could even ping off a semaphore post to wake the display server once new data had been delivered by the client. Et voila: a client-server architecture with far more bandwidth (by definition, the best possible bandwidth) and zero context switches in and out of the kernel.
A thin layer on top would allow all this to optionally pump the client's data down a real pipe or socket (for remote servers).
What do you think? A good idea?
It seems that no one involved in replacing X is stopping to think whether or not the client-server model could be retained. I think they've blundered straight on into replacing it with a fairly crude driver architecture without casting an eye around to see what hardware facilities now exist to improve the existing client / server architecture. It'd be a great pity if all that nice DMA silicon that Intel now put in their CPUs ended up not being used by an updated X server for *nix.
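The copy-versus-share distinction is easy to demonstrate even without fancy hardware. Here's a toy Python sketch of my own: a pipe transfer copies the data into the kernel and back out, whereas a shared-memory hand-off just makes the same pages visible to both sides. (Shared memory here is only an analogy for the zero-copy idea; a real DMA engine would move the data without the CPU touching it at all.)

```python
import os
from multiprocessing import shared_memory

payload = b"texture!" * 1024  # stand-in for a texture map (8 KiB)

# Pipe: the data is copied into the kernel and copied back out,
# with a kernel transition on each side.
r, w = os.pipe()
os.write(w, payload)
received = b""
while len(received) < len(payload):
    received += os.read(r, len(payload) - len(received))
os.close(r)
os.close(w)

# Shared memory: writer and reader see the same physical pages, so
# the "transfer" is just making the data visible -- no copying of
# the payload between address spaces.
shm = shared_memory.SharedMemory(create=True, size=len(payload))
shm.buf[:len(payload)] = payload
view = bytes(shm.buf[:len(payload)])  # the "display server's" view
shm.close()
shm.unlink()

assert received == payload
assert view == payload
```

For an 8 KiB message the difference is academic; for a stream of multi-megabyte texture maps per frame, eliminating the two copies and the kernel round trips is precisely the win being argued for above.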
From the article:
A lot of companies would never have admitted that the vision of convergence wasn't what people wanted. That's the sort of move that takes guts and an honest appraisal of what you're doing, what's working and what's not. The GNOME project has never displayed that kind of thinking. And as far as I can tell, it operates on nearly the opposite premise. It's too early to say, but I predict conflict down the road. Keep a bag of popcorn handy, I believe there will be plenty of fireworks to watch.
GNOME is particularly objectionable. Seems typical of the projects with heavy duty RedHat involvement these days. It's like they thought, let's take everything that's good about a desktop and minimise it or, preferably, throw it away altogether. God knows what all that bloaty code is for, but it's not giving me the desktop environment I want.
Erm, aren't they called firmware viruses?
I seem to recall Lenovo put something into some of their device driver firmware that would reinstall bloatware. Or something like that. Ok so that's not a Mac, but then Macs and PCs aren't so very different.
Neat idea, but this kind of concurrency problem was sidestepped altogether decades ago. Concurrent formulations such as the Actor Model and, more importantly, Communicating Sequential Processes are 1970s ideas. The latter in particular is highly relevant to anyone wanting code that executes concurrently and is provably free of deadlock, livelock and spinlock problems, as well as having zero memory-sharing errors. There's even a process calculus for it.
CSP was briefly fashionable back in the 1980s, early 1990s (Inmos's Transputers, Occam), but is now alive and well in languages such as Erlang, Rust, Go, Scala. Of those, Rust in particular looks really good (no runtime needed, ideal for all sorts of software and not just desktop applications like web browsers). I'm perverse, choosing to do CSP architectures in C/C++ (I have to have a library...).
This is clearly the way to go for future developments. Sticking with the old "let's share memory and guard it with a semaphore" is no faster to run, takes a lot longer to debug (even with tools like this one from Facebook), and is prone to stinging you in the arse years down the line when some unexpected sharing issue finally occurs for the first time.
It is also inherently limiting when one wants to scale up a piece of software across a whole network of computers; Actor Model or CSP channels can be network connections; shared memory / semaphores cannot. Shared memory architectures fundamentally require an SMP computer; that's increasingly becoming a bottleneck in future CPU speed improvements; massive chunks of modern Intel CPUs, and especially AMD CPUs, are dedicated to synthesising an SMP environment from an underlying NUMA architecture.
Whereas CSP / Actor Model architectures are entirely happy with NUMA. A computer that is a pure NUMA machine would be a lot more power efficient (or faster, depending on how you want to adapt).
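To illustrate the CSP style (my own toy sketch, in Python rather than Go or Rust): two threads communicate only by passing messages over a channel, so there is no shared mutable state to guard with a semaphore, and swapping the in-process queue for a network socket wouldn't change the architecture at all.

```python
import threading
import queue

def producer(ch: queue.Queue):
    # Messages are handed over the channel; nothing is shared,
    # so there is nothing to guard.
    for i in range(5):
        ch.put(i * i)
    ch.put(None)  # end-of-stream sentinel

def consumer(ch: queue.Queue, out: list):
    while True:
        msg = ch.get()
        if msg is None:
            break
        out.append(msg)

channel = queue.Queue(maxsize=1)  # near-rendezvous, CSP-channel style
results = []
t1 = threading.Thread(target=producer, args=(channel,))
t2 = threading.Thread(target=consumer, args=(channel, results))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # [0, 1, 4, 9, 16]
```

Go's channels, Erlang's mailboxes and Rust's mpsc channels are the same shape, just with language-level support and (in Rust's case) compile-time enforcement that the data really isn't shared.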
...I'm buying one. I was seriously tempted by a KeyOne, but this one does it for me.
You missed one important point on this - the plan is to assemble the C Series for the American market at Airbus's factories in the US, in order to try to avoid the American tariffs on the C Series. It will be very interesting to see how that plays out.
Whilst the tariff is now a consideration, apparently Airbus and Bombardier were talking about a deal before the tariff was announced. Now, that's either excellent judgement on their part as to how the trade dispute would pan out, or there's more at stake than that.
The C-Series is such an excellent fit against what Airbus is already manufacturing and selling that the case for a deal between the two companies was pretty strong. Airbus had an effective gap in their catalogue (the planes they were offering in the class simply weren't selling). Bombardier had the right aircraft, with certification and excellent in-service reports, but lacked the ability to take it to the world and swamp the market. Put the two together, et voila! A very strong line-up, and the manufacturing capacity and financial muscle to make it a world beater.
A consequence of the deal might be that they can sidestep the US tariffs. However, I think that what is more important is that the C-Series is now a serious contender in the world market. And the world market is far larger than the US market. Doing well in the USA would be nice of course, but the real prize (one now within their reach) is the global market. Win that, and losing out in America won't really matter at all. Win that, and the existing Bombardier and Shorts Brothers factories will be kept pretty busy (Airbus haven't got lines lying idle in Europe to soak away the work).
Also everyone is forgetting that the tariffs are yet to be imposed. That issue is in itself not settled until the new year, when the US government determines whether or not Boeing was "damaged" by the under-pricing the US government says they found.
All the nice things we can say about the neatness of an Airbus-Bombardier tie-up could also be said about a hypothetical deal between Boeing and Bombardier, if not more so. Bombardier's design is clearly excellent, and Boeing are in dire need of an excellent design to compete in the single-aisle market. Why oh why oh why were Boeing more focused on grinding Bombardier into the dust than on recognising the opportunity represented by a financially stressed but technically competent Bombardier? Pride? Overconfidence? These are dangerous traits.
Airbus have clearly sweet-talked Bombardier (and, importantly, the family shareholders who still have a lot of influence in the company) in a way that Boeing never even began to consider. Boeing's aggressive trade stance was probably the final straw that forced Bombardier (and the family, and other shareholders) into realising that the future lay in a deal, not in independence.
Now that the deal is announced, one has to conclude that the future of the design, and by extension the Bombardier company, employees, etc., could very well be far larger than they ever dared hope for. It's a case of a 50% slice of a 2,000-airframe programme being more valuable than 100% of a 500-airframe programme. And given the quality of the design there's no reason to suppose that it won't get to be that big over the coming decades.
Airbus's 60 Year Free Ride
Since February 1987 Airbus have not really had to touch the design of the A320 to keep it competitive. Only recently have they NEOised it. And now they've picked up a better design with lots of growth potential for $1.00. This will see them through for another 30 years, probably. This has got to count as the cheapest ever R&D budget spent in maintaining market share.
Boeing has had 30 years to come up with a 737 replacement design that would actually make Airbus sweat, but hasn't done so. This is a ridiculous, decades long failed strategy by Boeing. And now look what's happened. Airbus has taken another leap ahead for the price of a coffee.
Develop and Compete, or Die. Perhaps Boeing don't believe in Evolution?
The timing is significant. We're about 9 months from the Farnborough airshow; that is an ideal period of time in which to go to potential customers, show them the plans, and get a few sales lined up for announcement at the show. The deal between Airbus and Bombardier is itself not scheduled to close until H2 2018, but I don't think that'll matter.
Reportedly there's already been some hurried analysis by various fleet planners. There are probably a lot of operators out there tempted by the C Series but nervous about Bombardier's ability to fulfil an order. Now that concern has all but gone away, and with Airbardier likely willing to let some early orders go through at knock-down prices, the C Series is suddenly back on their radar scopes. There's real financial advantage for the early buyers, so I expect the phone lines will be a bit busy in the next 9 months.
WEP is broken, but I fear that it might now be better than WPA2! So far as I know it takes a little bit of effort to break WEP.
This flaw in WPA2 seems to be trivial (at least from the point of view of computational complexity) to exploit.
Oh the irony if the short term fix is to turn on WEP...
Noooo... Most manufacturers will use this as an excuse to push a new model out within the month!
The cynic in me points out that if that's what they do, they'd be having to repeat it after the standard finally gets fixed (for that is where the problem sits). And if I know anything, it's that standards don't get changed very quickly at all.
If you're referring to the transition from analogue to Freeview, I think that was done quite well.
Freeview was around for a long time before they finally switched off the analogue signal. And a basic Freeview box was pretty cheap (I think there were even some help to buy schemes for the disadvantaged). Plus undeniably it was a big improvement.
There are some people using 1930s TVs with a Freeview box. Not bad for backward compatibility (OK, they're using a scan converter too...).
The trouble with having done that is that the reasons to upgrade beyond that become significantly less compelling to the end users. Freeview is still Freeview, which is excellent, plus they've managed to sneak in a couple of HD channels. That's all been handled reasonably well.
And of course what they're doing in America is the equivalent of turning off Freeview altogether and starting from scratch. Doing that here would result in the Daily Mail exploding in indignation...
And Freescale had a dominant position in telephone exchange equipment with PowerQUICC. And there are still some niche users of their PowerPC range of CPUs who will want guaranteed supply (i.e. Uncle Sam, who has a way of insisting on these matters...).
NXP do a whole load of stuff that I can't see Qualcomm being interested in at all. Worrying times.
At least BB, imperfect though they are, have been reasonably good at getting Android patches out to their customers. Some other manufacturers just don't seem to bother.
So far as USPs are concerned, that alone is about the only thing that would ever convince me to give Android a go.
BlackBerry's Hub is excellent, by far the best messaging client out there. If it's a bodge, it's only because Android is too lame to allow Hub to be integrated into the UI as deeply as it was on BB10 (where it is truly excellent).
Well, what is a BlackBerry then? A Z30?
I have one of those, it's excellent as a phone and a messaging device, though even that is being eroded by the lack of a lot of social media apps. For the years I've used BB10, the thing that really makes a BlackBerry for me is their Message Hub. Which you can get for any Android phone.
Trouble is a lot of Androids I've played with are, well, yeeeuurrrkk! Especially Samsungs. Android's approach to app permissions is a real turn off; Nougat fixes that, which means a Keyone or newer (they've not put Nougat onto a DTEK60 yet).
I wish BlackBerry would do an iOS version of Hub, because iOS's own messaging (which I'm currently using) is shit.
The Keyone is pretty good, based on people I know who use them. But I don't want a keyboard. This new phone looks ideal, though obviously not as good as an up-to-date BB10 phone with a rich and rewarding app ecosystem.
If you want an interesting phone, one of the guys who was involved in the fantastic Psion 5 is now involved in some effort to do a modernised version of that. Running Android (boo), but even so it could be interesting.
Unfortunately, it seems that the hardware is "vulnerable" in the first place because the operating margins of SDRAM are pared so far back to give us what we also want: high-speed, low-power memory. AFAIK there's no real hardware fix for this; high-speed, higher-power memory doesn't work out (the speed is achieved in part because of the lower operating voltage).
So yes, we can have memory resilient to rowhammer attacks, but it's likely that this would also be slower; and that's a tough marketing proposition at the moment. ECC memory helps somewhat - it becomes harder to exploit the physical effect undetected - but it is still vulnerable to a denial-of-service-style attack (the memory can still be changed, but now you have memory faults cropping up and a crashed computer).
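For anyone curious what "ECC helps somewhat" means mechanically, here's a toy Hamming(7,4) code in Python (my own illustration, much simpler than the SECDED schemes real ECC DIMMs use): a single flipped bit is detected and corrected, but a double flip defeats it, which is why determined rowhammering can still crash or corrupt an ECC machine.

```python
# Hamming(7,4): 4 data bits protected by 3 parity bits. Any single
# flipped bit (a rowhammer-style disturbance) can be located and
# corrected from the parity syndrome.
def encode(d):
    """d: list of 4 bits -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    p1 = d[0] ^ d[1] ^ d[3]  # covers positions 1, 3, 5, 7
    p2 = d[0] ^ d[2] ^ d[3]  # covers positions 2, 3, 6, 7
    p3 = d[1] ^ d[2] ^ d[3]  # covers positions 4, 5, 6, 7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def correct(c):
    """Recompute the syndrome, fix at most one flipped bit, return data bits."""
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 * 1 + s2 * 2 + s3 * 4  # 1-based position of the bad bit, 0 if none
    if pos:
        c[pos - 1] ^= 1  # flip the offending bit back
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
word = encode(data)
word[4] ^= 1                  # simulate a rowhammer bit flip
assert correct(word) == data  # single flip corrected
```

Real DRAM ECC adds an extra parity bit so double flips are at least *detected* (and typically halt the machine) rather than silently mis-corrected; hence the denial-of-service point above.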
Stop Executing Everyone Else's Code
Yes, that changes the web a lot - it means server side execution is all that is "safe" - but ultimately it's the only way to guarantee that exploitative software does not get run on our vulnerable hardware.
New York to London by rocket? Ok it's a short flight time, but the journey time will be terrible. First get to the boat. Then motor out to the rocket. Then put on a spacesuit. Then get in the rocket, shut the door. Then complete all the pre launch checks. Then whooosh bang up into the sky and back down again. And then the reverse process. I reckon the whole thing could be slower than flying.
Concorde was very fast of course, but one of the lesser known aspects of Concorde travel was the ground arrangements. They had a dedicated 10-minute check-in (none of this 3-hours-early nonsense, though of course they had a lovely lounge if one wished to arrive early). They had dedicated baggage, customs and immigration queues on arrival, saving about 3 hours of airport time off the journey too. So whilst Concorde itself saved about 3 hours, the overall service lopped another 3-ish hours off the time as well, making it about 6 hours quicker.
BA were (and still are) running a similar service from London City. OK it was subsonic, but overall still 3 or 4 hours quicker than an ordinary flight from London Heathrow (City airport is very handily placed). The new C Series from Bombardier is very interesting because it can manage London City to New York without having to refuel at Shannon in Ireland on the way, saving another hour or so.
I reckon Musk's half hour rocket would take a ton of time...
BlackBerrys have, like everything else, been made in China for a long time. Their trick is for the OS to cryptographically verify parts of the hardware and the boot loader sequence. The OS (certainly their classic BB10 OS) would refuse to run if it didn't like what it saw.
That's what set Blackberry apart in the eyes of government users; they'd thought about checking the integrity of the hardware and firmware. A bit like a PC's Secure Boot today, but for a phone. It makes tampering very difficult.
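As a toy illustration of the general shape of that kind of verified boot (emphatically not BlackBerry's actual scheme; the stage names, blobs and hashes here are all made up): each stage carries an expected hash of the next, and boot refuses to proceed if anything in the chain has been tampered with.

```python
import hashlib

# Toy boot chain. In a real device the expected hashes would be baked
# into immutable storage or covered by a signature at manufacture time.
stages = {
    "bootrom":    b"immutable first-stage code",
    "bootloader": b"second-stage loader",
    "os":         b"kernel image",
}

def digest(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()

# Reference measurements recorded for the pristine device.
expected = {name: digest(blob) for name, blob in stages.items()}

def verify_chain(images: dict, expected: dict) -> bool:
    """Refuse to 'boot' unless every stage matches its expected hash."""
    return all(digest(images[name]) == expected[name] for name in expected)

assert verify_chain(stages, expected)            # pristine device boots
tampered = dict(stages, bootloader=b"evil code")
assert not verify_chain(tampered, expected)      # tampering is caught
```

The hard part in practice is the anchor: the first stage and the expected hashes have to live somewhere an attacker with a soldering iron can't rewrite, which is exactly the engineering that set those devices apart.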
Biting the hand that feeds IT © 1998–2018