Re: Also, unicorns
"F'king hell, we can't win."
Nah, you won my heart, thanks for the heads up. :)
"publicly announced partnership where both companies stated as fact that Red Hat would carry out all support of their products on Azure personally using Red Hat staff in shared call centre facilities? "
I am guessing that you understand that Support != Operate != Developing a Product. :)
"There have been few if any vulnerabilities found in the other Azure services which were set up by MS."
By the same token that doesn't necessarily mean that there aren't plenty to be found.
"It's therefore likely this was a self inflicted wound by Red Hat rather than MS not setting it up correctly."
That's pretty unlikely IMO given my experience of Red Hat support. However given my experience of Microsoft's approach to support, and their proven track record of prioritising "time to market" over all engineering concerns, I think it is far more likely that Microsoft's engineering staff have been ordered to roll out a RHEL on Azure Proof-of-Concept to production.
Ultimately it's a Microsoft self-inflicted wound we're seeing here. As a potential customer I would be questioning the processes and financing of Azure at this point, because their processes are clearly inadequate, their staff are clearly incompetent, and judging by the miserly bounty they are clearly failing to budget enough for mitigation as well. Mistakes happen, but this is a production system folks - this kind of schoolboy config screw-up should have been caught at the PoC/dev stage.
"It's a long time - over 35 years - since I worked on a multiprocessor shared-memory architecture. Presumably HPE reckon that they can implement monitors or semaphores - to control access to shared memory - in such a way as not to adversely affect the Machine's performance."
The state of the art has moved forward a little in that time period (lockless algorithms, LL/SC etc) - the fundamental problem of data sitting on the end of a high latency pipe hasn't changed though. :(
Seems like HPE have tripped over some Denelcor HEP brochures while clearing out some greybeard dens.
Love to play with something like this, but I don't think it does anything to fundamentally change the way systems are built or perform at the base level. The system software and software architecture that sits on it may well deliver a difference - but I'm somewhat sceptical about the chances of that being an advantage that would uniquely apply for this particular assemblage of components.
It really doesn't matter if you are logically moving processes to data or data to processes, or keeping both in situ, data still has to travel down those long distance high-bandwidth links... Those long distance high-bandwidth links will still need plenty of Watts in silicon to drive them regardless of whether the data travels down fibre or wire. :(
Have an upvote DougS:
"Modern CPUs have so much going on that the more complicated decoder required for x86 is simply lost in the noise in a multi billion transistor chip."
I'd still rather that chips did not have complex decoders simply because it makes design verification difficult, and that is important because I want bug-free chips... I want bug-free chips because it's hard to replace a chip soldered onto a board - or buried inside a rack. The x86 errata sheets & documentation around the various permissions mechanisms tend to be byzantine - and Intel have a rather ugly track record of pretending security vulnerabilities in their chips don't matter. :(
The thing that ARM chips typically lack in comparison to their Intel competition is memory bandwidth - and that requires you to drive a lot of wires at high frequencies - which burns a lot of juice. My suspicion is that ARM chips with equivalent STREAM bench figures may actually burn a comparable amount of power to an Intel chip...
"It won't. Emulation is slow. It's not gonna happen. She fell for this once before, when reporting on the release of the original ARM based Surface."
Emulation isn't necessarily "slow" these days - and hasn't been for decades: FX!32 ran PKZIP 1.5-2x quicker despite the host processor having a 30% clock speed deficit. Typical desktop apps make a lot of API calls - those can be run native - and of course code tends to spend a lot of time executing loops, so caching translated sequences of instructions yields big benefits. Folks running code on JVMs benefit from those same tricks every day - it's not rocket science any more.
Of course it's better not to require emulation in the first place - but old x86 binaries developed for 300MHz P3s should run just dandy on a 2GHz ARM these days.
"Portland for a protest meetup"
Dunno about protesting, but the beer's great there (or at least it was the last time I visited). :)
"I'd love to see a whitelist-based approach to antivirus. It's good enough for firewalling, and that already works way better than any antivirus package I've seen."
AFAICT SELinux delivers that - and more, without the disk thrashing. Labelled IPSEC + SELinux goes a bit further - giving you a way to identify remote processes and decide if you trust them or not too. I am surprised no one else has mentioned it yet.
"I hope you weren't put off by that! I was just speaking to you in the language of your hero, Linus Torvalds."
Nah, that was Steve Ballmer, surely.
"That or while I wait for the grammar police to act on that apostrophe atrocity…"
I can live with that. :)
"The Note 7's ability to double as a cooking surface for my morning bacon and eggs whilst waiting for the bus was an invaluable feature!"
Apple have already shipped phones that have this feature, maybe you can pop some popcorn with it while Apple warms up its lawyers for another round. :)
"There's no need for ECC"
Parity should be for everyone, not just farmers IMO. :)
"Everything, sooner or later, becomes a boat anchor."
MS Surface widgets will make lousy boat anchors.
"Also APIs change over time - consequently the application also needs to change."
Wrong way round.
"APIs might grow but should not change or remove existing functionality, therefore the application should not need to change except to take advantage of such extended functionality"
You say "should", in practice that just can't be relied upon. There are APIs out there that are insecure by design - what do you do with those ? If apps are tied to those APIs they need to be killed or changed - in my experience the latter works better over the long run (YMMV).
"Nobody who wrote a piece of code 10 years ago is going to bother to keep the libraries up to date."
I guess shared objects ld.so and the weird symbolic linking thing in /usr/lib passed you by then. Also APIs change over time - consequently the application also needs to change. Also, realistically, once you've changed the run-time configuration you should do some regression testing.
"The authentication bug was for an information service & the info that can be gained isn't particularly useful, certainly not a critical prior and not classified by SAP as such."
1) The flaw was *re-introduced* which tells you that SAP are failing to use regression tests to verify that vulns stay fixed. This is a basic process problem that is *likely* to afflict every release of every product they produce.
2) Authentication bypasses give an attacker a platform to launch further attacks within a "trusted" domain, this is not a good thing.
I distinctly recall being told that Open Source (that worked vs vendor stuff that didn't) was not an option because there was no vendor to sue or coerce into fixing the code. Presumably we'll see SAP litigated to death and the source code distributed to the customers in lieu of working product... Naaaaaaaaaaaaah.
"And actually, OS X (now MacOS) predates Linux since it goes back to NeXT. So you can't even get things correct."
... Which in turn is based on the MACH uKernel with some more modern (Free?)BSD userland ...
Personally I'm glad corporations can take decent mature software and incorporate it into their products; the alternative didn't look too clever in the guise of NT 3.x... Sure Windows got better - but years of Not-Invented-Here seem to have bitten hard in the form of 10.
I'm all for corps reaping their rewards, but equally I would like to see said corporations cut a bit of slack to imitation of good ideas - just from the point of view of keeping the field open for innovators.
Auditing the innards of a -11 will be a few orders of magnitude simpler than auditing the innards of a modern SoC. With x86 you have zero hope of auditing it down to hardware level - there is a ton of hidden state in those things and a smorgasbord of security models which all interlock intricately and are documented in prose format. I know which I would prefer to audit. :)
"The message is that everything - applications, network, OS - is cloud and is managed from the cloud."
Sure it is, and virtual Oompa-Loompas do all the scripting & monitoring too, because employing human people to admin a mass-surveillance system is just too risky. :)
It wasn't quick, so there was no love for QBASIC. :)
"yes, both people who allow automatic updates and people who manually scrutinize and approve updates are vulnerable to zero day exploits."
Right, so the update-sceptics are no worse off then.
"your logic is like saying a knife-proof vest or a bullet-proof vest for a policeman is "useless" because it doesn't protect him from a V-2 rocket falling on him from out of the sky."
You may be terrified of zero day exploits but I can assure you that they are nothing "like" ballistic missiles, they are simply vulnerabilities that the vendors have not patched yet. If you are very afraid you could simply disconnect all your machines from the internet and make sure you scan all your imported files for nasties in a test environment first.
"ironically dear old auntie or granny with her computer set to accept patches automatically is LESS of a disease vector"
I doubt there's much difference in practice, just look at how old some of the "zero days" are. Case in point: font rendering vulns that allow an attacker to run code in ring 0 existed in NT and its derivatives for over 20 years - despite thousands of updates (and drive-by attacks). There have also been updates that introduce new vulnerabilities; the OpenSSL Heartbleed vulnerability is an example of new functionality bringing new vulns. I'm using Heartbleed as an example because it's not all MS's fault, and in that particular instance I dodged the Heartbleed vuln simply because I felt the risk posed by the update was not worth the reward (functionality that I didn't want).
If you care about vulnerabilities there really is no alternative to research and paying attention to what the updates are doing - because history shows that trusting a single vendor to fix every single hole in a two decade old piece of bloat-ware just isn't enough (and vendors make mistakes)...
". and then your computer gets owned by a 'botnet. and then you're yet another dumb motherfucker who is part of the problem."
Strangely that's exactly what happens with zero day exploits, except in your utopia *everyone* is a dumb mf who is part of the problem.
"Is there a way other than NUMA?"
NUMA is a consequence of real world physics. The only boxes that dodge that issue are ones that don't share memory and ones that slow memory access down to the slowest (old/cheap SMP boxes). FWIW accessing DRAM on a single core box hasn't taken uniform time since fast page mode became the norm (early 90s) either.
In terms of other ways: Re-architect your software to stop requiring threads & big address spaces (Java apps tend to operate in this mode) - and stock up on single socket low-core count boxes with huge caches.
"Cyber attack in 2012. Arrested in March 2015. Sentenced in Sept 2016."
Coincidently that fits in with the timeframe of the massive breach Yahoo! attributed to a "State" actor. Funny that. :P
"Damage that is deserved if they didn't take basic security measures to ensure the security they expect and deserve online."
I don't think anyone deserves that kind of misappropriation of data - mainly because it hurts the customers / chattel as well. I look at it as being inevitable, and the chain of command should be hung out to dry for failing to oversee proper security measures as appropriate.
FWIW I didn't downvote you because your point of view has merit in abstract terms. Have a beer & relax, it's Friday. :)
"The reasoning for this secrecy seems to have been that the NSA wanted to see who was going to use them."
Or to put it another way: the NSA decided that it would prefer carrying on using the exploits (knowing that malicious third parties likely had access to them) to protecting US citizens.
".. it's about actually being unable to fix it as this is not an error in code, it's a weakness in the protocol itself."
I disagree about being unable to fix it. They have dropped bad protocols in the past, sure this is a big one but they should fix the protocol and work on fixing the clients and informing other client developers. If they did that there would be some hope that they work a bit harder to minimise attack surfaces in the future, everyone gets burnt by a protocol eventually - it's how you react to being burnt that counts in the long run.
The reported stance of MS indicates that they are quite happy to be burnt along with their platform and their users.
"It just illustrates how poor software programming is these days."
I suspect that particular vuln only works due to fundamental design flaws introduced with NT 4.0 over 20 years ago. MS were told at the time that rolling more stuff into ring 0 was a dumb idea, but rather than take advice and fix it, their PR & dev teams chose to tell customers it was a good idea because it made their pinball game run faster.
Then they will come for our IP addresses, because websites will continue to exist and operate just fine without working DNS entries, and people will continue to be pwned by drive-bys. It would be nice if they worked on helping folks use the internet securely rather than playing www-whack-a-mole at the taxpayer's expense. I'd love GCHQ to do stuff like outing products that do stupid and insecure things - rendering the contents of random files in ring 0, or adding leaky-as-a-sieve virtualization features to silicon, come to mind.
"Since most businesses, of a size suitable to have someone in IT working for them, has moved to a virtualized infrastructure then their cores should not be sitting 99% idle or they haven't sized their system very well."
If you run VTune on the bare metal you will see lots of expensive cache misses where the CPU is sat twiddling its thumbs waiting for memory to catch up. In old school super-computing this wasn't an issue because the core clock speeds were similar to the *random access* latency of the memory subsystem (and the applications + OS were tuned to maximize page/cache locality).
"Virtualized Infrastructure" workloads are actually particularly rough on caches and TLBs, their memory access patterns are much more random - they are nowhere near as kind to the memory subsystems as a well tuned HPC workload. Consequently the cores spend a lot of time idle waiting for memory to catch up, and in this scenario the OS will report the CPU being 100% busy - if you want to find out how much of that 100% busy is spent waiting for memory you'll need to run VTune or something like that. I wish these kind of stats were more readily available to end-users and sys-admins - the CPU occupancy figures are pretty much meaningless these days - they just tell you when the run queue is exhausted and nothing useful about how busy the system really is. :)
"If an HPC application spends much time waiting on IO then someone needs to call in a real HPC expert to give the setup a once-over, because that's a total waste of time (as you rightly point out)."
Define "much" ! :)
How does a "real HPC expert" magic up no mem-waits on a 16 core Xeon running sparse matrix code with a 16 way set-associative shared L3 cache ? The killer micros have taken over, they are a lot faster than the beasts that came before them - but equally it's also much harder to extract peak performance from them with apps that feature large memory footprints. I'm not having a dig, just pointing out that some problems are inherently awkward. :)
"Supercomputer applications are designed to scale across thousand of cores - so unlike PCs those cores are not idle!"
They still wait on I/O like any other CPU, the speed of light still has an impact on how code is written and networks are built. ;)
"People are compiling Fortran (and to a lesser extent C) to run on supercomputers, they're not using assembly. I guess he must have been talking about support from the Linux kernel community?"
Not just kernels, compilers, profilers and debuggers too. The CPUs shipped argument is pretty one-sided - and it is very unlikely to get better for SPARC because the players with mindshare (ie: Oracle) view SPARC as a cash cow, and they have a track record of actively fighting and sabotaging open source. None of that makes SPARC inherently bad but it does make SPARC harder to use.
"You were there, weren't you."
Briefly, they were interesting (and frustrating) times. Thanks for the PLX info - I must have seen the fixed product. The Intel OEM PPro box running Linux had the speed record - with NT the same box became an I/O bound dog - no amount of tweaking could hide how much x86 NT sucked at talking to disks.
"But this robustness came at the price of performance. Run the same app on a Win16 box and the same box running NT and the Win16 box would be a performance winner."
I found that *most* Win32 binaries ran a lot quicker on a 166MHz Alpha with FX!32 than on Windows 95 or NT on a 200MHz PPro (stuff like PKZIP, Monotype RIP, even the 3D pipe screensaver). DEC did Wintel better than Microsoft & Intel on a tiny fraction of the budget, go figure.
"The authors are working on solving the problems that arise using shared memory for core-to-core communications – cache misses, and loss of coherence."
While it's very *sophisticated* and it might even work, I can't help but feel it would be a lot easier for everyone if they just implemented Transputer style channels instead of trying to reverse engineer the same effect by short circuiting memory traffic.
"Not sure the issue will be with the process side of things. Supposedly, Apple already has 10nm products from TSMC. We might even see them announced later today..."
In which case it's possible that Apple are Bogarting the 10nm capacity, and Apple could well have bought out Fujitsu's slice of the pie to cover their own yield shortfall... ;)
"Computer architecture has not change significantly since Von-Neumann’s day, i.e. not much change since 1945: “...a memory to store both data and instructions, external mass storage, and input and output mechanisms.” O.K. they have become smaller and faster but they are still separate mass storage and memory systems."
This has been done - repeatedly. Virtual memory was conceived as a way of faking it; this just moves the goalpost towards the "Memory is like an orgasm. It's a lot better if you don't have to fake it" end of the spectrum. Pin-bandwidth will remain a serious bottleneck, as will addressing, which also adds latency & burns power. That said, NRAM sounds brilliant - let's hope it lives up to the hype. :)
"My favorite "Dark" by Microsoft: letting the type "long" be 32 bit on 64 bit systems.
Yes, the C-standard allows it. It's still insane for a general purpose computer."
I can understand the antipathy - the 32bit long/64bit pointer model was actually employed on 64bit RISC boxes before MS had got around to using 32 bits properly. The aim was to reduce the memory footprint of apps - and thereby get fewer cache misses and increase performance. Believe it or not it did actually work in some cases. Personally I found the existence of long long more irritating, and refused to play the game by using things like int64_t instead. :)
"1) How often did the shell move from /usr/bin/bash and /bin/bash lately ?"
I'll grant you that one, and even if the shell does move it's not a show stopper - easily fixed/worked around/bodged etc.
"2) Code "conservatively" is actually an alias "not using features that make shell scripting less horribad"
I think you're being a bit hard on folks here. I use ksh & bash on a daily basis, so I tend to restrict myself to using features common to both simply because I c.b.a with writing a script twice. Besides if I need the stuff bash brings to the table (as handy as they may be) the chances are I should be working in Python instead. :)
"but when scripting shouldn't one test and declare exactly which executable you want to be running as opposed to relying on a user shell environment to be set up correctly?"
In most cases I would say "no" because the users may well have their shell env setup with the intention of using non-standard executables (eg: if they are cross compiling) and that kind of environment testing code renders scripts pretty much unreadable. If you really want that kind of thing I think it should be put into a dedicated environment setup+validation script.
"This seems like just an anti-Microsoft gripe from the Linux fundamentalists."
We see the same complaints from MS Office lovers every time folks suggest LibreOffice can be used in place of MS Office. Plus in this case MS are intentionally ripping off brand names with the intention of fooling people into thinking they are using the real deal. I'm pretty sure the MS community at large wouldn't react any better to LibreOffice renaming their products Excel, Word, Access and Powerpoint.
"If you were asked to deploy a Linux desktop across your enterprise, would you run for the hills? I would."
Linux desktops have already happened in a few big corps by stealth, in the form of Linux powered thin clients replacing desktops, connecting to massive Linux servers hosting Windows on VMs.
"Of course there is. It's working via one's own limited company and being treated by HMRC as a real business."
No problem with that, however some interpretations* of IR35 require you to buy tools from your own pocket rather than the company account, adding a ~40% premium to the cost of doing business...
* = Depends on who answers the call at HMRC + phase of the moon.
"Well, having the OS do the decoding of the video stream on behalf of the multiple applications likely using it to me sounds like a good idea…"
I sincerely hope MS isn't doing the decoding in the kernel. They are still shipping fixes for kernel rendering code vulns they introduced with NT 4.0 (20 years ago). :(
"As I wrote, our emphasis has been and should be on the RoW regardless of Brexit."
Fair point, I think most people can also agree that it would have been better for the trade figures to be skewed towards the RoW before lighting the fire under the pan. :)
"FTSE 100 up
FTSE 250 up"
My guess is that folks are moving money into shares because the pound is taking a beating on the currency markets, the prospect of negative interest rates will tend to do that.
"We've been getting a steady stream of complaints that "the new server is no faster (or slightly slower) than the old one" - and invariably the culprit is badly written, singlethreaded code that simply doesn't know how to run in a multicore system."
We have a similar problem, but the root cause is PHBs thinking that more cores on the same memory + cache config = more speed. They are finding out that more cores are f.all use when memory is the bottleneck. With respect to threads, they tend to make the cache/memory contention problem *worse*; the ideal is a bunch of loosely coupled processes that share as little memory as possible. :)