Re: $1000+ paper weight
"Everything, sooner or later, becomes a boat anchor."
MS Surface widgets will make lousy boat anchors.
"Also APIs change over time - consequently the application also needs to change."
Wrong way round.
APIs might grow, but they should not change or remove existing functionality; therefore the application should not need to change except to take advantage of such extended functionality.
You say "should"; in practice that just can't be relied upon. There are APIs out there that are insecure by design - what do you do with those? If apps are tied to those APIs they need to be killed or changed - in my experience the latter works better over the long run (YMMV).
"Nobody who wrote a piece of code 10 years ago is going to bother to keep the libraries up to date."
I guess shared objects, ld.so and the versioned symlink scheme in /usr/lib passed you by, then. Also, APIs change over time - consequently the application also needs to change. And realistically, once you've changed the run-time configuration you should do some regression testing.
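For anyone who hasn't met it, the /usr/lib symlink scheme can be sketched in a scratch directory - "libfoo" and the version numbers below are made up purely for illustration:

```shell
# Sketch of the soname symlink chain that ld.so resolves at run time.
# "libfoo" and the version numbers are illustrative, not a real library.
tmp=$(mktemp -d)
touch "$tmp/libfoo.so.1.2.3"               # the real versioned object
ln -s libfoo.so.1.2.3 "$tmp/libfoo.so.1"   # soname link, used at run time
ln -s libfoo.so.1 "$tmp/libfoo.so"         # dev link, used at link time
# A bug-fix release just repoints the soname link - no app rebuild needed:
touch "$tmp/libfoo.so.1.2.4"
ln -sf libfoo.so.1.2.4 "$tmp/libfoo.so.1"
target=$(readlink "$tmp/libfoo.so.1")
echo "libfoo.so.1 -> $target"
rm -rf "$tmp"
```

That repointing step is exactly why apps built against the soname keep working across bug-fix library updates without a recompile.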
"The authentication bug was for an information service & the info that can be gained isn't particularly useful, certainly not a critical prior and not classified by SAP as such."
1) The flaw was *re-introduced* which tells you that SAP are failing to use regression tests to verify that vulns stay fixed. This is a basic process problem that is *likely* to afflict every release of every product they produce.
2) Authentication bypasses give an attacker a platform to launch further attacks within a "trusted" domain, this is not a good thing.
I distinctly recall being told that Open Source (that worked vs vendor stuff that didn't) was not an option because there was no vendor to sue or coerce into fixing the code. Presumably we'll see SAP litigated to death and the source code distributed to the customers in lieu of working product... Naaaaaaaaaaaaah.
"And actually, OS X (now MacOS) predates Linux since it goes back to NeXT. So you can't even get things correct."
... Which in turn is based on the MACH uKernel with some more modern (Free?)BSD userland ...
Personally I'm glad corporations can take decent mature software and incorporate it into their products, the alternative didn't look too clever in the guise of NT 3.x... Sure, Windows got better - but years of Not-Invented-Here seems to have bitten hard in the form of 10.
I'm all for corps reaping their rewards, but equally I would like to see said corporations cut imitators of good ideas a bit of slack - just from the point of view of keeping the field open for innovators.
Auditing the innards of a -11 will be a few orders of magnitude simpler than auditing the innards of a modern SoC. With x86 you have zero hope of auditing it down to the hardware level - there is a ton of hidden state in those things and a smorgasbord of security models which all interlock intricately and are documented in prose format. I know which I would prefer to audit. :)
"The message is that everything - applications, network, OS - is cloud and is managed from the cloud."
Sure it is, and virtual Oompa-Loompas do all the scripting & monitoring too, because employing humans to admin a mass-surveillance system is just too risky. :)
It wasn't quick, so there was no love for QBASIC. :)
"yes, both people who allow automatic updates and people who manually scrutinize and approve updates are vulnerable to zero day exploits."
Right, so the update-sceptics are no worse off, then.
"your logic is like saying a knife-proof vest or a bullet-proof vest for a policeman is "useless" because it doesn't protect him from a V-2 rocket falling on him from out of the sky."
You may be terrified of zero day exploits but I can assure you that they are nothing "like" ballistic missiles, they are simply vulnerabilities that the vendors have not patched yet. If you are very afraid you could simply disconnect all your machines from the internet and make sure you scan all your imported files for nasties in a test environment first.
"ironically dear old auntie or granny with her computer set to accept patches automatically is LESS of a disease vector"
I doubt there's much difference in practice, just look at how old some of the "zero days" are. Case in point: font rendering vulns that allow an attacker to run code in ring 0 existed in NT and its derivatives for over 20 years - despite thousands of updates (and drive-by attacks). There have also been updates that introduce new vulnerabilities; the OpenSSL Heartbleed vulnerability is an example of new functionality bringing new vulns. I'm using Heartbleed as an example because it's not all MS's fault, and in that particular instance I dodged the Heartbleed vuln simply because I felt the risk posed by the update was not worth the reward (functionality that I didn't want).
If you care about vulnerabilities there really is no alternative to research and paying attention to what the updates are doing - because history shows that trusting a single vendor to fix every single hole in a two decade old piece of bloat-ware just isn't enough (and vendors make mistakes)...
". and then your computer gets owned by a 'botnet. and then you're yet another dumb motherfucker who is part of the problem."
Strangely that's exactly what happens with zero day exploits, except in your utopia *everyone* is a dumb mf who is part of the problem.
"Is there a way other than NUMA?"
NUMA is a consequence of real world physics. The only boxes that dodge that issue are ones that don't share memory and ones that slow memory access down to the slowest (old/cheap SMP boxes). FWIW accessing DRAM on a single core box hasn't taken uniform time since fast page mode became the norm (early 90s) either.
In terms of other ways: re-architect your software to stop requiring threads & big address spaces (Java apps tend to demand both) - and stock up on single-socket, low-core-count boxes with huge caches.
"Cyber attack in 2012. Arrested in March 2015. Sentenced in Sept 2016."
Coincidentally that fits in with the timeframe of the massive breach Yahoo! attributed to a "State" actor. Funny that. :P
"Damage that is deserved if they didn't take basic security measures to ensure the security they expect and deserve online."
I don't think anyone deserves that kind of misappropriation of data - mainly because it hurts the customers / chattel as well. I look at it as being inevitable, and the chain of command should be hung out to dry for failing to oversee proper security measures as appropriate.
FWIW I didn't downvote you because your point of view has merit in abstract terms. Have a beer & relax, it's Friday. :)
"The reasoning for this secrecy seems to have been that the NSA wanted to see who was going to use them."
Or to put it another way: the NSA decided that it would prefer to carry on using the exploits (knowing that likely malicious third parties had access to them) rather than protect US citizens.
".. it's about actually being unable to fix it as this is not an error in code, it's a weakness in the protocol itself."
I disagree about being unable to fix it. They have dropped bad protocols in the past; sure, this is a big one, but they should fix the protocol, work on fixing their clients, and inform other client developers. If they did that there would be some hope that they'd work a bit harder to minimise attack surfaces in the future. Everyone gets burnt by a protocol eventually - it's how you react to being burnt that counts in the long run.
The reported stance of MS indicates that they are quite happy to be burnt along with their platform and their users.
"It just illustrates how poor software programming is these days."
I suspect that particular vuln only works due to fundamental design flaws introduced with NT 4.0 over 20 years ago. MS were told at the time that rolling more stuff into ring 0 was a dumb idea, but rather than take advice and fix it, their PR & dev teams chose to tell customers it was a good idea because it made their pinball game run faster.
Then they will come for our IP addresses, because websites will continue to exist and operate just fine without working DNS entries, and people will continue to be pwned by drive-bys. It would be nice if they worked on helping folks use the internet securely rather than playing www-whack-a-mole at the taxpayer's expense. I'd love to see GCHQ do stuff like outing products that do stupid and insecure things - rendering the contents of random files in ring 0, or adding leaky-as-a-sieve virtualization features to silicon come to mind.
"Since most businesses, of a size suitable to have someone in IT working for them, has moved to a virtualized infrastructure then their cores should not be sitting 99% idle or they haven't sized their system very well."
If you run VTune on the bare metal you will see lots of expensive cache misses where the CPU is sat twiddling its thumbs waiting for memory to catch up. In old-school super-computing this wasn't an issue because the core clock speeds were similar to the *random access* latency of the memory subsystem (and the applications + OS were tuned to maximize page/cache locality).
"Virtualized Infrastructure" workloads are actually particularly rough on caches and TLBs; their memory access patterns are much more random - they are nowhere near as kind to the memory subsystems as a well-tuned HPC workload. Consequently the cores spend a lot of time idle waiting for memory to catch up, and in this scenario the OS will report the CPU being 100% busy - if you want to find out how much of that 100% busy is spent waiting for memory you'll need to run VTune or something like that. I wish these kinds of stats were more readily available to end-users and sys-admins - the CPU occupancy figures are pretty much meaningless these days - they just tell you when the run queue is exhausted and nothing useful about how busy the system really is. :)
"If an HPC application spends much time waiting on IO then someone needs to call in a real HPC expert to give the setup a once-over, because that's a total waste of time (as you rightly point out)."
Define "much" ! :)
How does a "real HPC expert" magic up no mem-waits on a 16 core Xeon running sparse matrix code with a 16 way set-associative shared L3 cache ? The killer micros have taken over, they are a lot faster than the beasts that came before them - but equally it's also much harder to extract peak performance from them with apps that feature large memory footprints. I'm not having a dig, just pointing out that some problems are inherently awkward. :)
"Supercomputer applications are designed to scale across thousand of cores - so unlike PCs those cores are not idle!"
They still wait on I/O like any other CPU, the speed of light still has an impact on how code is written and networks are built. ;)
"People are compiling Fortran (and to a lesser extent C) to run on supercomputers, they're not using assembly. I guess he must have been talking about support from the Linux kernel community?"
Not just kernels, compilers, profilers and debuggers too. The CPUs shipped argument is pretty one-sided - and it is very unlikely to get better for SPARC because the players with mindshare (ie: Oracle) view SPARC as a cash cow, and they have a track record of actively fighting and sabotaging open source. None of that makes SPARC inherently bad but it does make SPARC harder to use.
"You were there, weren't you."
Briefly, they were interesting (and frustrating) times. Thanks for the PLX info - I must have seen the fixed product. The Intel OEM PPro box running Linux had the speed record - with NT the same box became an I/O bound dog - no amount of tweaking could hide how much x86 NT sucked at talking to disks.
"But this robustness came at the price of performance. Run the same app on a Win16 box and the same box running NT and the Win16 box would be a performance winner."
I found that *most* Win32 binaries ran a lot quicker on a 166MHz Alpha with FX!32 than on Windows 95 or NT on a 200MHz PPro (stuff like PKZIP, Monotype RIP, even the 3D pipe screensaver). DEC did Wintel better than Microsoft & Intel on a tiny fraction of the budget, go figure.
"The authors are working on solving the problems that arise using shared memory for core-to-core communications – cache misses, and loss of coherence."
While it's very *sophisticated* and it might even work, I can't help but feel it would be a lot easier for everyone if they just implemented Transputer-style channels instead of trying to reverse-engineer the same effect by short-circuiting memory traffic.
"Not sure the issue will be with the process side of things. Supposedly, Apple already has 10nm products from TSMC. We might even see them announced later today..."
In which case it's possible that Apple are Bogarting the 10nm capacity, and Apple could well have bought out Fujitsu's slice of the pie to cover their own yield shortfall... ;)
"Computer architecture has not change significantly since Von-Neumann’s day, i.e. not much change since 1945: “...a memory to store both data and instructions, external mass storage, and input and output mechanisms.” O.K. they have become smaller and faster but they are still separate mass storage and memory systems."
This has been done - repeatedly. Virtual memory was conceived as a way of faking it; this just moves the goalpost towards the "Memory is like an orgasm. It's a lot better if you don't have to fake it" end of the spectrum. Pin bandwidth will remain a serious bottleneck, as will addressing, which also adds latency & burns power. That said, NRAM sounds brilliant - let's hope it lives up to the hype. :)
"My favorite "Dark" by Microsoft: letting the type "long" be 32 bit on 64 bit systems.
Yes, the C-standard allows it. It's still insane for a general purpose computer."
I can understand the antipathy - the 32-bit long / 64-bit pointer model was actually employed on 64-bit RISC boxes before MS had got around to using 32 bits properly. The aim was to reduce the memory footprint of apps - and thereby get fewer cache misses and increase performance. Believe it or not, it did actually work in some cases. Personally I found the existence of long long more irritating, and refused to play the game by using things like int64_t instead. :)
"1) How often did the shell move from /usr/bin/bash and /bin/bash lately ?"
I'll grant you that one, and even if the shell does move it's not a show stopper - easily fixed/worked around/bodged etc.
"2) Code "conservatively" is actually an alias "not using features that make shell scripting less horribad"
I think you're being a bit hard on folks here. I use ksh & bash on a daily basis, so I tend to restrict myself to using features common to both simply because I c.b.a with writing a script twice. Besides if I need the stuff bash brings to the table (as handy as they may be) the chances are I should be working in Python instead. :)
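A trivial sketch of what that common subset means in practice - nothing below is bash-specific, so the same file runs unchanged under ksh, bash or plain /bin/sh:

```shell
# Portable (POSIX) shell only: no arrays, no [[ ]], no ${var^^} -
# just constructs that ksh and bash have both supported for decades.
to_upper() {
    # tr works everywhere; bash's ${1^^} would not run under ksh88
    printf '%s' "$1" | tr '[:lower:]' '[:upper:]'
}
result=$(to_upper "hello")
echo "$result"   # HELLO
```

Once a script needs more than this sort of thing, that's usually the signal to reach for Python instead.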
"but when scripting shouldn't one test and declare exactly which executable you want to be running as opposed to relying on a user shell environment to be set up correctly?"
In most cases I would say "no" because the users may well have their shell env setup with the intention of using non-standard executables (eg: if they are cross compiling) and that kind of environment testing code renders scripts pretty much unreadable. If you really want that kind of thing I think it should be put into a dedicated environment setup+validation script.
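Something along these lines, as a sketch of that dedicated setup+validation script - the tool list is invented for illustration; a cross-compiling user would check their own toolchain here instead:

```shell
# Sketch of a standalone environment validation script that worker
# scripts can source, keeping the env-testing noise out of them.
# The required_tools list is illustrative only.
required_tools="awk sed tr"
missing=""
for t in $required_tools; do
    command -v "$t" >/dev/null 2>&1 || missing="$missing $t"
done
if [ -n "$missing" ]; then
    echo "environment check failed, missing:$missing" >&2
    status=1
else
    echo "environment check passed"
    status=0
fi
```

Keeping this in one place means the worker scripts stay readable and the user's own PATH choices (cross compilers and all) are respected by default.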
"This seems like just an anti-Microsoft gripe from the Linux fundamentalists."
We see the same complaints from MS Office lovers every time folks suggest LibreOffice can be used in place of MS Office. Plus in this case MS are intentionally ripping off brand names with the intention of fooling people into thinking they are using the real deal. I'm pretty sure the MS community at large wouldn't react any better to LibreOffice renaming their products Excel, Word, Access and Powerpoint.
"If you were asked to deploy a Linux desktop across your enterprise, would you run for the hills? I would."
Linux desktops have already happened in a few big corps by stealth, in the form of Linux-powered thin clients replacing desktops, connecting to massive Linux servers hosting Windows on VMs.
"Of course there is. It's working via one's own limited company and being treated by HMRC as a real business."
No problem with that, however some interpretations* of IR35 require you to buy tools from your own pocket rather than the company account, adding a ~40% premium to the cost of doing business...
* = Depends on who answers the call at HMRC + phase of the moon.
"Well, having the OS do the decoding of the video stream on behalf of the multiple applications likely using it to me sounds like a good idea…"
I sincerely hope MS isn't doing the decoding in the kernel. They are still shipping fixes for kernel rendering code vulns they introduced with NT 4.0 (20 years ago). :(
"As I wrote, our emphasis has been and should be on the RoW regardless of Brexit."
Fair point, I think most people can also agree that it would have been better for the trade figures to be skewed towards the RoW before lighting the fire under the pan. :)
"FTSE 100 up
FTSE 250 up"
My guess is that folks are moving money into shares because the pound is taking a beating on the currency markets, the prospect of negative interest rates will tend to do that.
"You lost. It's happening."
Everyone lost, including folks who got the result they wanted. Savings and assets are all worth a lot less, rent will go up to compensate, tax receipts have already gone down so all that "extra" money will be used to fill the widening hole in the balance sheet. The only folks "getting over it" are leaving the country and taking their money with them.
"We've been getting a steady stream of complaints that "the new server is no faster (or slightly slower) than the old one" - and invariably the culprit is badly written, singlethreaded code that simply doesn't know how to run in a multicore system."
We have a similar problem, but the root cause is PHBs thinking that more cores on the same memory + cache config = more speed. They are finding out that more cores is f.all use when memory is the bottleneck. With respect to threads, they tend to make the cache/memory contention problem *worse*; the ideal is a bunch of loosely coupled processes that share as little memory as possible. :)
"Can I ask an honest question? How many of the Bash people who are on here bashing Powershell have actually used it?"
I am not a "Bash" person, but I use it daily... The shell + 'standard' UNIX utilities have ~40 years' worth of effort & usage invested in them, across all kinds of OSes and hardware ranging from -11's all the way up to top-10 HPC clusters. They have proven themselves over and over again; Powershell has to appear to be a lot better than the incumbent to win folks over - Microsoft's entire business is built on this concept.
From my point of view (which doesn't count for a great deal in the scheme of things), Powershell just isn't better. I found it was actually *harder* to use - more verbose, a bit jarring on the eyeball and obviously a lot less familiar than my comfy awk slippers and sed pipe. I'm not saying Powershell is all wrong or fundamentally broken; it's just ugly, ungainly, weird and unattractive to my eyes. By the same token, countless "MS people" asserted the UNIX "standard" utilities are also ugly, ungainly, weird and unattractive to their eyes. It's the vi/emacs war all over again. :)
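For the record, the sort of thing those comfy slippers get used for - a made-up colon-separated file reduced with a classic awk/sort/sed pipe:

```shell
# A typical one-liner pipe: pull the first field, dedupe, decorate.
# The input data is invented for the example.
tmpfile=$(mktemp)
printf 'alice:100\nbob:200\nalice:300\n' > "$tmpfile"
users=$(awk -F: '{print $1}' "$tmpfile" | sort -u | sed 's/^/user: /')
echo "$users"
rm -f "$tmpfile"
```

Terse, yes - but after a couple of decades of muscle memory, terse beats verbose.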
In the long run I think a bit of cross-pollination of ecosystems is usually a good thing and this is no exception. I won't be unhappy if Powershell unseats Bourne shell *if* it really is a better option, I just want to get the job done without having to make a drama out of it.
I'll let you have that point, but my first impression of Powershell was that it looked like someone had decided to marry the readability of COBOL with the simplicity, elegance, portability and flexibility of DCL. ;)
Personally I found PS pretty awkward to use - but I've been mucking about with tcsh, ksh and bash for a couple of decades - so I am probably incapable of giving it a fair shake. I can accept that some folks like PS - fair play to them, but I don't understand *why* they like it !
I'll give you a hint: Try searching for "Israel acknowledges it is helping Syrian rebel fighters", it features "Defense Minister Moshe Ya’alon". One of the results should be to an article hosted by the "Times of Israel", the article was published on the 29th of June, 2015.
It has a link to an earlier report about Druze lynching the occupants of an ambulance, which states the IDF "has insisted it does not offer medical treatment to Islamist rebels." It could well be a case of the right hand not knowing what the left was up to.
With respect to Hezbollah not shooting up people treated by the IDF in 2015, there is plenty of material out there - easy to find, plenty of grist for the mill. This particular dimension to the Syrian conflict doesn't get as much attention as it deserves, in my view. YMMV.
1. Quite correct w.r.t. the refugees; however, your points are totally irrelevant to the point at hand. So the answer to my question is: No, you don't think about it.
2. Again, you don't think about it.
3. The world is well aware of this irrelevant point. Again, you don't think about it.
4. Unsupportable and irrelevant supposition. Again, you don't think about it.
I've got my answer loud and clear Matt. The Fail is for you.
"It is certainly PORTRAYED as committing atrocities. But "atrocities" implies intent"
Firing 155mm HE shells onto a crowded beach on a summer's day was done with intent to kill and maim; that is the expected result of 155mm HE shells fired into crowded areas.
Most folks living within the borders of Israel would bite someone's arm off if they were offered the chance to live in peace and prosperity. Some folks achieve that, but by and large the Palestinians continue to have their land, livelihoods and homes taken away from them, and while that process continues they really have no option but to lie down and die, or fight. Reverse that process and they have the option to live in peace.
"BTW, you may want to read up on the Beirut Marines barracks bombing to get an idea of why Hezbollah is designated a terrorist organisation by the US."
I am genuinely curious Matt, how do you feel about the IDF supplying material, intelligence and medical assistance to ISIS folks who are shot up by Hezbollah ?
""Quite Good is nothing to sneeze at, when most things are, ipso facto, Fairly Average. "
The question is "Will you pay ten times more for it?" and the answer in 90% of cases is "NO""
Agreeing violently !
They've done pretty well already with Gen.1, and they are at the start of XPoint's development curve so there is likely a lot of room for improvement on price and performance, I think there is reason to be optimistic - particularly if other big vendors license it. On the other hand Flash has had a couple of decades of competitive development invested in it, there is much less margin for improvement with Flash and much thinner margins.
Neat hack. Slightly relieved that HTTPS & SSH still work. :)
It appears that Oracle have chosen to retain the services of BuSab. :)
Nah, it's just having a long nap in the tarpit.
Yet more ways to exploit rendering code running in ring 0. It's getting dull watching MS punch themselves in the balls, it would be nice to see them admit defeat & take on an idea that originated outside of Redmond. Sadly I suspect that may be a step too far for them.
Of course it is technically *possible* that they may have already taken the lesson on board, but it would be impossible to tell that from the security bulletin or patch release notes - both seem to have been redacted to uselessness. They are more of a hindrance than a help. :(
Joking apart, it is clear that Redmond is just going through the motions and their heart is no longer in it. It would be best for everyone to simply disconnect the life-support machine from Redmond and use the talent, time & cash freed up to do something more productive for everyone.
"You know the drill: Clean install!
You say you're afraid you have a virus? Clean install!
You have a txt file that won't open? Clean bloody install!"
This has been the norm since day one - including DOS. The question I have is: why do MS still store user data on the same partition as the system guff, given that users are expected to rebuild their OS as a matter of routine?