...why did you do the math on that? I mean, just...you have too much time on your hands.
No, my position is slightly more nuanced.
What Microsoft have built is good, provided it is very carefully marketed and positioned, and what it can and can't do is spelled out clearly at multiple points so that nobody is given false hope, intentionally or not. I'd go so far as to say that the word "prevention" probably shouldn't be used here. Maybe "data leak/loss resistance". Think "fireproof" versus "fire resistant".
This will mean lost sales as people who might have been bamboozled don't buy. It will also mean some others will buy anyways, and combine solutions.
None of that takes away from the solid technical work done. The problem to hand is a hard problem to solve. And, quite frankly, I honestly believe that Microsoft have done the hardest part of this in the creation of their existing technologies.
What remains is more political than technological. Zero-knowledge encryption of Azure + Office 365 can be implemented without too much fuss. They choose not to, nor even to discuss why. And if their government did turn around and tell them "you can't do that", then yes, I would say they should seriously consider packing up and leaving.
I don't have a problem with the technology Microsoft is offering. I think it is good technology. But I absolutely have a problem with how it is generally marketed, how the PRs present it to journos, and how every bit of training information focuses on "look at all these features, wow!" but spends little (if any) time being clear about what it can't and won't do.
The issue here is highly political. The tech is a good start, but it is resistance-class, not prevention-class. Overselling is likely to do way more harm than good.
And ultimately, it is half-assed. The endgame solution required involves making some tough choices to stick up for the customer in the face of massive political pressure to do otherwise. Microsoft is out there putting hundreds of millions into trying to convince us that they're "the good guy", and that technologies like DLP "demonstrate their commitment to privacy and security".
If people start to believe that tripe - and judging by the commenters in this forum, more than a few do - then that is dangerous. And that's where the half-assedness of this whole thing absolutely becomes a real concern.
The code can be respect-worthy while the corporate positioning of the product - and for that matter, the company's overall stance across a line of products - is dangerous. And thus we have a very typical Microsoft situation in which the technology is praiseworthy but the solution (notably, how it is ultimately presented in virtually all official content on the subject) is half-assed.
I don't see any of the above as "steam rising". If anything, it fills me with a sense of...I don't know...defeat. Tired acceptance. Depression. A loss of faith in humanity, even. The sort of feeling of abject impotence and helplessness one feels when they learn that another umpteen billion dollars was squandered by politicians.
I don't have the zeal to be passionate about anything any more, sir. I have the outrage fatigue, and it's largely why I confine my thoughts on the matter to forums nobody cares about and nobody reads.
Donning the armour, saddling Rocinante and making another pass at the damned windmill just isn't in me anymore. I report what I see. I vent my thoughts into the feckless void of El Reg's forums where no one of consequence will see. That's all there is.
Crusades and causes are a game for the young.
Whereas I think that trying to do this piecemeal is worse than not doing it at all. Address the whole of the issue or don't bother. Make people go elsewhere to other vendors that will address the whole of the issue.
But Microsoft's half-assed approach gives a false sense of security, especially when combined with aggressive marketing that makes their DLP seem like far more than it is. If you have a half-assed solution, be up front about it. But they aren't, really. Not unless you're an uber-nerd and prepared to pore over every last bloody stitch of information on the topic.
So I accuse Microsoft - and others - of shoddy half measures, whilst trying to market them as adequate. They're not. Not by a long shot. And, like shitty antivirus vendors (oh, wait, Microsoft again!) that do things half-assedly, they do far more harm than good by giving a false sense of security.
Shit or get off the pot. But I'm sick of gigacorps half-assing this. It's too important to let them get away with it.
No, I'm talking about DLP as it is being pitched to me. DLP in conjunction with tagging at the OS level, mobile security, endpoint security etc. Whatever the term may once have meant, it is being expanded by Microsoft's own marketing droids to cover a more generic "who can access your data, and how".
This seems to include new proposed offerings like "only allowing certain forms of content to be viewed inside DRMed, tracked online applications" etc. The thing is, if you are going to go from "tagging and alerting things as they leave Exchange" to "access monitoring and control across the entire data life cycle" (which is absolutely what I am being told this term is supposed to now encompass) then I don't think that you can simply "wish away" the threat of malicious actors MITMing (or PATRIOT Acting) your data whilst on its way to, or stored in, the cloud.
So: DLP is either very narrowly "transport rules in Exchange" or it is "data lifecycle management" in its totality. You don't get to pick and choose which aspects of data security and access control you cover just because some of them make you uncomfortable, or you find them inconvenient.
Since Microsoft seem to be pushing "DLP" as "more than just Exchange transport rules", I say they've failed until they've addressed all aspects of data lifecycle management.
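To make the narrow sense concrete: the "transport rules in Exchange" flavour of DLP boils down to pattern-matching content at one choke point. A toy sketch - the patterns, policy names and functions here are mine and purely illustrative, nothing to do with Microsoft's actual rule engine:

```python
import re

# Illustrative DLP-as-transport-rule: scan outbound content for
# sensitive patterns and tag or block it at a single choke point.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify_outbound(message_body: str) -> list[str]:
    """Return the list of policy tags an outbound message trips."""
    return [name for name, pat in SENSITIVE_PATTERNS.items()
            if pat.search(message_body)]

def transport_rule(message_body: str) -> str:
    """Block on any hit; otherwise let the message through."""
    hits = classify_outbound(message_body)
    return "BLOCK: " + ", ".join(hits) if hits else "ALLOW"
```

Note what this can and can't do: it catches data leaving one gateway, and nothing else in the data's life cycle - which is exactly why it is resistance, not prevention.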
Yes and no. DLP is technologically an expansion of Exchange transport rules...but ties into stuff baked into Windows Server 2012, Intune and EMM as well.
More to the point, the purpose is to allow enterprises to control who views their data, and under what circumstances. It is aimed at becoming far more than just "Exchange transport rules". As such, we must look to solutions beyond Exchange transport rules to solve the goal of DLP:
allowing companies (and ultimately, individuals) to control who can see their data, and under what circumstances. And that goes back to needing a "security first, privacy first" approach to things, from the start. No band-aids.
As for "US.gov will make it illegal not to have back doors"...oh well. If they want to footbullet themselves, go right ahead. Microsoft has the choice to keep their HQ in the US. As do all these other companies. They aren't standing up for our rights by rolling over and complying. Why should I trust my business to them, or hand them my money?
Oh, because "America, fuck yeah?" America: fuck off.
DLP stuff is nice, but I still don't trust my data residing in the American cloud. When will Microsoft offer complete and total zero knowledge encryption such that they - but especially the NSA - cannot get at any of the data stored in Azure, Office 365, etc? And when will this be enabled as a standard option, available to everyone?
Will they encrypt my user data that's being streamed to them from Windows 8/8.1/10 as part of their integration into the OS? What about OneDrive? How do we lock down all that search data sent to Bing such that nobody I haven't authorised can see it? When will that be the default option?
DLP is a great tool, and kudos to Microsoft for doing shedloads of excellent and very difficult work to advance the state of the art in this area. But what's needed is a true "security first, privacy first" approach that goes far - far - beyond what DLP can ever offer.
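For the avoidance of doubt about what I mean by "zero knowledge": the key never leaves the client, so the provider only ever holds ciphertext, and neither they nor anyone leaning on them can read it. A toy model of that trust boundary - a one-time pad keeps it stdlib-only, and the class names are mine; a real deployment would use a vetted AEAD cipher, not this:

```python
import secrets

# Toy illustration of the "zero knowledge" trust model: the client
# encrypts before upload, so the provider only ever stores ciphertext.
# One-time pad for illustration only; real systems use vetted ciphers.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

class Client:
    def upload(self, provider: dict, name: str, plaintext: bytes) -> bytes:
        key = secrets.token_bytes(len(plaintext))  # never leaves the client
        provider[name] = xor_bytes(plaintext, key)
        return key                                 # held client-side only

    def download(self, provider: dict, name: str, key: bytes) -> bytes:
        return xor_bytes(provider[name], key)
```

A subpoena served on the provider yields only `provider[name]` - noise without the client-held key. That is the property none of the current offerings actually give you.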
Re: Raise your hand....
The ends justify the means, eh?
They're just bitter we have a sane and rational made in Canada copyright solution and won't bend over and take the US's TPP copyright demands with a smile and a thank you.
Re: Time for the downvotes, I guess
Would my military or police suppress the citizens and back an autocratic state? No. But I'm Canadian. Would yours?
America is rotten to the core. And I honestly believe your military, national guard and federal policing units would stand with the state, not the people. They've already been trained for decades in "us versus them". Your local Sheriff is just a Sheriff. His revolver and his shotgun mean nothing against the awesome power of an Apache helicopter.
Sorry man, you just live in the wrong country for "the people" to have a say. Probably for generations to come.
Re: Time for the downvotes, I guess
That tired old saw. In the days when the power of an armed citizen roughly equaled that of an armed soldier, I'd agree with you. Even when it took two or three regular citizens to overcome the training of each soldier.
Today, you can "pacify" 30,000 people with a HumVee and a microwave cannon, or simply wipe them out by the tens of thousands with helicopters, daisy cutters or machine guns and grenade launchers.
I don't care how many M16s you have on your insurgency shelf at home, if the state wants you dead you will be made dead. Especially if said state is a fully modern Western nation. Hell, we have freaking robots for that now. Flying ones!
Voting means nothing. Nothing at all. What matters - especially in "money is speech, corporations are people" America - is who writes the cheques. Given how much wealth is controlled by so few people, "the people" don't stand a chance of imposing their will - or their oversight - no matter who they vote for.
Democracy, or even the concept of a republic, is a lie in a world where the gap of money, power and sheer force of arms between the haves and the have nots has moved from 3:1 to 300,000:1.
Re: Time for the downvotes, I guess
"...were real limits, real transparency, and fucking real consequences over all this."
Re: Time for the downvotes, I guess
Quis custodiet ipsos custodes?
"Battery backed" versus supercap is fairly irrelevant. It's an external power source.
And no, I didn't forget that flash exists on the module, but flash isn't the primary storage interface. It's the backup data retention space. There's just enough juice in the thing to dump the contents of RAM to flash. Well...you hope, anyways. There have actually been some issues with certain NVDIMM setups of this style losing their supercaps over time (damned TiMn supercaps!) and thus not actually having the juice to fully write out the contents of the RAM before the clock strikes 12 and it turns back into a pumpkin.
NVDIMMs have a way wider use case set than MCS. NVDIMMs are used inside SSDs (well, sort of), RAID cards, modern high-RAM hard disks...anywhere you might have RAM in use for high-speed storage, but require non-volatile storage if the lights go out.
MCS isn't that. MCS is a means of hijacking the DRAM bus to provide a jumped-up version of PCI-E storage. It isn't main system memory, or even main memory for a subcomponent (like a RAID card). It's secondary (or permanent) memory. Like a PCI-E or SATA SSD.
To be more concrete:
NVDIMMs are the sort of thing you put in your RAID card so that you can have 1GB or 2GB of fast DRAM cache on your RAID card that accelerates your array. When the power goes out, the DRAM would dump its contents to a flash backup. When the power comes back on, it would load that data from flash, then flush it back out to the disks.
MCS is more akin to the disks that would hang off that RAID card and serve as permanent storage.
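That cycle - writes land in DRAM, a power-loss event dumps DRAM to flash, power-on restores and flushes to the array - reduces to a toy model. All names here are mine and purely illustrative:

```python
# Toy model of the NVDIMM-as-RAID-cache cycle: DRAM is the primary
# path, flash is only the "oh shit" backup touched at power loss.

class NvdimmCache:
    def __init__(self):
        self.dram: dict[int, bytes] = {}   # volatile working copy
        self.flash: dict[int, bytes] = {}  # emergency backup area

    def write(self, lba: int, data: bytes) -> None:
        self.dram[lba] = data              # primary path is DRAM only

    def power_loss(self) -> None:
        # Supercap/battery provides just enough juice for this dump.
        self.flash = dict(self.dram)
        self.dram.clear()                  # DRAM contents are gone

    def power_on(self, disks: dict[int, bytes]) -> None:
        self.dram = dict(self.flash)       # restore the cache
        disks.update(self.dram)            # then flush it to the array
```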
If I were designing the ultimate in "bitching systems of the future", I would use NVDIMMs as my computer's main memory and MCS as permanent storage. I could run my databases in-memory without fear, and store my operating system, application, and long-term storage in the MCS modules.
...and now I want to go build a system like that. Hot damn that sounds sexy.
Re: Dec 2 injunction
Sorry, but you are incorrect. Memory Channel Storage is presented to the system as storage. It is not presented as main system memory. Memory Channel Storage is used in a similar fashion to PCI-E storage, SATA storage or other forms of permanent storage.
NVDIMMs are presented to the system as memory. They are used by the system in the same fashion it would use volatile memory; however, they don't go *pffft* when the lights go out. NVDIMMs don't write to flash as the primary storage medium. They write to RAM, then dump that RAM into flash using an external power source when they detect a power-out event.
They serve different functions and operate in a different manner. The only similarities between the two are
A) Form factor; they both use DIMMs
B) When the power goes out, their contents end up stored in flash
Under the hood, however, the differences far outweigh the similarities. One example: MCS can be huge compared to NVDIMMs. 400GB, 800GB or more per stick. NVDIMMs use flash as an "oh shit"-class backup medium for RAM, and thus are no bigger than the RAM they back up.
In truth, in many ways, I like the NVDIMM concept better. If only because it means you can use the things bloody forever without worrying about the write life of the flash. You will obsolete the system before the flash chips in an NVDIMM need to be worried about.
MCS? Not so much. As stated in the interview, Diablo uses a bunch of consumer-level cells and "black magic" maths to wear level them. How long do they last, really? Given the high price and target (ultra-low-latency databases, etc) I have all sorts of questions about their applicability, survivability, etc. I have a list of technical testing questions a mile long before I'd ever put them into any of my systems.
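For those unfamiliar, the core idea behind wear levelling is simple enough to sketch, even if Diablo's "black magic" version assuredly isn't: spread writes across physical blocks so no single block exhausts its erase budget early. This is my own illustrative structure, not their controller logic:

```python
# Minimal wear-levelling sketch: always write to the least-worn
# block, keeping erase counts nearly uniform across the device.

class WearLeveler:
    def __init__(self, n_blocks: int):
        self.erase_counts = [0] * n_blocks

    def pick_block(self) -> int:
        # Choose the block with the fewest erases so far.
        return self.erase_counts.index(min(self.erase_counts))

    def write(self) -> int:
        block = self.pick_block()
        self.erase_counts[block] += 1
        return block
```

After N writes over B blocks, the counts stay within one of each other - which is precisely what stretches cheap cells far enough to be usable. The open question is how far "far enough" really is.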
Not so an NVDIMM. NVDIMMs are simple and straightforward. But they are so because the operating principles are completely different, as is their ultimate applicability.
Putting RAM with a supercap on a PCB and giving it a SATA 3 interface is closer to being "exactly like an SSD" than MCS is to being "exactly like an NVDIMM". At least the "RAM + Supercap + SATA 3 interface" and the SSD can both only be used as permanent storage.
NVDIMMs are treated by a system more like regular RAM than they are like PCI-E or SATA storage. That's the advantage of NVDIMMs. It's why they're worth buying!
Now I could be wrong - $deity knows that happens often enough - but if that is so, please explain how. For my own erudition. How, other than in the most superficial fashion, is an NVDIMM anything like MCS at all?
Thanks in advance!
Re: Let's call a 'spade' a 'spade' - MCS is NVDIMM is NVvault
"MCS is NVDIMM is NVvault "
Explain how. Because from a technical standpoint I don't understand how at all.
NVDIMM is RAM that writes to flash only after it detects a power out event. It relies on a supplementary power source to do so.
MCS writes everything to flash immediately. It's not sitting in RAM waiting for a power off event. It goes straight to flash.
NVDIMM uses flash as emergency storage. MCS uses flash as primary storage and doesn't have an emergency storage component. They are - to my understanding at least - two completely different products with two completely different goals.
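The contrast between the two write paths, boiled down to a toy sketch - the structure is mine and purely illustrative of the distinction, not either product's internals:

```python
# NVDIMM: write-back to DRAM; flash is touched only on power loss.
# MCS: write-through; flash is the primary medium for every write.

class Nvdimm:
    def __init__(self):
        self.dram, self.flash = {}, {}

    def write(self, addr, data):
        self.dram[addr] = data        # DRAM speed; flash untouched

    def on_power_fail(self):
        self.flash.update(self.dram)  # emergency dump only

class Mcs:
    def __init__(self):
        self.flash = {}

    def write(self, addr, data):
        self.flash[addr] = data       # straight to flash, every write
```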
NVDIMMs are RAM modules. They behave as RAM modules. They are presented to the system as RAM modules. MCS is storage. It is presented to the system as storage...and even requires a BIOS patch to do so.
NVDIMMs are amazing and fantastic for in-memory databases, because they allow you to work at DRAM speeds, something MCS cannot do. MCS acts in a fashion very similar to PCI-E flash storage, but without the latency spikes that affect PCI-E storage at high utilization thresholds.
From practical application of the technologies right down to the nitty gritty of electrical signalling they are, to my understanding, two completely distinct products. If you claim otherwise, please do share how I am wrong about that fact. I like knowledge.
Next: please explain how "who opened their doors when" matters?
Also: regarding this statement: "Badalone attempts to justify Diablo's position with what would be described as grandstanding - whereas we see the Netlist CEO is more comfortable detailing the issue in court and in the Netlist's financial filings. Take a look at the SEC website for more information on Netlist / Diablo."
I both have to agree and disagree. Is Badalone grandstanding? Absolutely. But Netlist is also not answering important questions all while shedding board members. If Netlist's take on this is "please don't ask tough questions and just wait for the courts to deal with this" then that is a take I cannot sanction. The purpose of journalism is to ask the tough questions. Especially when someone doesn't want those questions asked.
Which brings me to: " Saying Netlist responded with a "canned" sentence doesn't alleviate the need to report information that counters Badalone's claims."
When I uncover any information that makes me believe for a second Netlist has a valid claim then I will gladly report on it, dissect it in detail and explain how this is likely to be a real threat to Diablo's position. I owe Diablo nothing, and care nothing for either company involved beyond gaining a deeper understanding of the technical issues and history that drives the conflict.
I am absolutely willing to do a counter interview with Netlist and dive into the technical nitty gritty of their claims with them. I'd love to, in fact. For example, to understand why someone might claim that NVDIMM and MCS are "basically the same". I don't see it that way at all, and would love to be shown how I am wrong. If you know people at Netlist that can do so, please, have them contact me.
"Let's also be fair and recognize Netlist has been winning at the USPTO and in court. Big name companies have already settled (ex. TI)."
If I win a court case against someone's dog biting me, it doesn't mean that I'll win when their cat craps on my lawn. Each claim in each case is to be taken on its own merits, no?
Re: Dec 2 injunction
All good questions. I'll be sure to track down the CEO of Diablo and ask.
No bias, just trying to understand. I have enough knowledge of the topic to have a lot of very serious technical questions about Netlist's claims. How/why do claims around what amounts to an LRDIMM count against Memory Channel Storage, which is - at least at first glance - completely different?
The only bit that would seem to be the same is that somewhere on those chips there is a widget that allows the CPU to "talk to" more address space than it was designed to. Address conversion, if you will. Electrically and logically you need to address flash completely differently from RAM. But at the end of the day there is still some widget that is allowing you to address more memory on that bus than you should by all rights be able to.
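A crude illustration of what such a widget does: bank-switching a small CPU-visible window across a much larger backing store, so the bus can reach more space than it natively addresses. This is my own toy model, not either company's actual scheme:

```python
# Toy address-translation widget: a small CPU-visible window is
# remapped (bank-switched) over a larger backing store, letting the
# bus reach more space than it can natively address.

PAGE = 4096

class AddressWindow:
    def __init__(self, window_pages: int, backing_pages: int):
        self.backing = bytearray(backing_pages * PAGE)
        self.base_page = 0            # backing page the window maps to
        self.window_pages = window_pages

    def set_window(self, base_page: int) -> None:
        self.base_page = base_page    # bank-switch the window

    def read(self, offset: int) -> int:
        # CPU-visible offset in the small window -> big backing store.
        assert offset < self.window_pages * PAGE
        return self.backing[self.base_page * PAGE + offset]

    def write(self, offset: int, value: int) -> None:
        assert offset < self.window_pages * PAGE
        self.backing[self.base_page * PAGE + offset] = value
```

Electrically, addressing flash and addressing RAM are of course nothing alike; the only overlap is this translation idea itself.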
Now, Diablo claims that they have the rights to that particular piece of tech because they, in fact, invented it. More to the point, they claim the contract lets them use that tech. Fair enough; if that's true - and we'll see soon enough, I guess - then what is Netlist on about?
So that leads into the second round of claims: IP around battery-backed DIMMs. Unless you have a patent that basically says "we patent non-volatile memory in all forms" there's nothing similar between a battery-backed DIMM and a flash DIMM. Initial research didn't show Netlist having anything like such an overly broad patent.
Netlist borders on impossible to get hold of, but the Diablo CEO was entirely willing to have a grand old chat. Talking with him helped me understand the technical side of things a lot more, and the details around that cleared up at least some of my misunderstanding around the legal mess.
That said: there's a lot of posturing here, from both companies. From a technical standpoint, I still can't see how Netlist has much in the way of a claim, but I openly admit that the patents involved may somehow be interpreted to be more broad than my non-legal mind is capable of understanding.
The takeaway is that the dispute here centers around the fact that Diablo once did contract work for Netlist, and then moved on to do their own thing. Netlist feels that Diablo's "new thing" is sufficiently similar to that contract work that Diablo must clearly have used IP Netlist owns, be that inadvertently or purposefully.
Honestly, I have no idea if any of those claims will hold up, because intellectual property law isn't connected to technical realities in any way that I have yet been able to grok. But from a technical standpoint, the technologies involved are pretty far apart...with the exception of the widget that allows the CPU to address a larger address space.
Diablo claims they own the rights to it, and Netlist seems to have dropped all claims to it. So...why are they still fighting? On Netlist's side, I honestly have no idea. They will provide you canned statements about the whole thing, but not sit down and explain their reasoning. On the Diablo side, the reason is - quite clearly - pride.
The Diablo CEO is prideful. What's more, he quite clearly believes he is in the right. He will see this through because he feels strongly that Netlist is morally wrong in having wasted so much of his time and Diablo's money on this whole affair. Having talked to him, I believe that he honestly believes this.
So, I don't know about any of you, but this just keeps dragging me back to the technology side of it. The whole thing really bothers me because I just don't understand it. Is there something about my understanding of how the electrical signalling of the DRAM bus works that is inaccurate? Is my understanding of basic computer components really that flawed?
Diablo's CEO would have me believe my understanding of the gubbins of a computer is more or less correct. Netlist won't provide more than a canned explanation. For now, at least, that's the closest to "understanding" this situation as it looks like I'm going to get.
I welcome any alternative hypothesis - especially technical ones - that explain where or how Netlist has a case here. At the end of the day, all the technologies involved: LRDIMMs, NVDIMMs, Memory Channel Storage...it's all just so cool to me. The nerd in me just has to make sure he really understands how it all works.
There's your mistake: you think of systemd as just a replacement for init. It's not. It is attempting to be - piece by piece - a replacement for every single core element of the OS that isn't a kernel. Including all the fundamental userland tools (and the freaking shell) that we think of as being core to the "GNU/Linux" package.
In very much the same way that Android runs a Linux kernel but is thought of as "Android", not as "Linux", so too is systemd evolving into its own thing. Mark my words, the GNU toolchain will be next. He's already gone after everything else, and he won't stop until he, personally, controls the whole goddamned thing.
The three key strains of Linux today are:
Anyone, it seems, can build a userland stack. But at the center of it all, there is still Torvalds. He's ornery. He's blunt. He's to the point. And he's usually correct.
Go ahead and try to make it Systemd/RedHatnix or whatever the hell ego-driven digital phallic madness drives the gravy train next...it won't hold a candle to the semi-benevolent dictatorship of an Angry Finn obsessed with quality control.
systemd/Linux? Well, that's SLES off the list. GNU/Linux or GTFO, thanks. Slackware uber alles?
Re: Bring back Tungsten filaments...
Then you buy shitty LED lamps. I live in Canada, eh? My city has had all LED lamps for a decade or more.
I guess it's just too much trouble for all y'all to invest in a $5 piece of plastic to solve the problem. Can't say as I've any sympathy. If ya need to figure out how to cope with snow, maybe you could ask them as have already solved the problem.
Microsoft's war on ease of use continues unabated.
Re: How on earth can the share price continue to rise?
Microsoft has always done dividends. Though, if IBM is any indication, you can do the share buyback scheme for 15 some-odd years and see stock prices rise.
Re: Building for a giant fall.
"Their traditional markets are dead or dying"
Quite the opposite. The numbers show - if anything - there is strong demand around the world for people to retain control over their own data by running their own IT. Shocking, but then, only madmen would ever have questioned "put all your data into the American cloud", eh?
Re: IS (Europe: firewall your data)
"Why wouldn't these people band together and exact vengeance?"
There is no reason to expect them to do anything other than what they are doing. They are leading a crusade. Once upon a time, we did this too. But the reasons for this are not rooted in the past 20 years, but the past 100. This is Britain's mess. The rest of the world is still cleaning it up. Britain must never again be allowed to draw national borders. Ever.
"What would you do?"
Well, me, personally, I'd not be worshiping a god that doesn't exist and killing in its name. But that's me. If there were a bunch of foreigners bombing my home every bloody day, I'd probably pick up a stick, sharpen it, and go put the pointy end into one of the people making my home go boom.
"What will we do? Fuck."
Wipe them out. All of them. There is only one way this ends. History has taught us this, and we've been dancing around it for the past 40 years.
This is a religious holy war. There is no reasoning with these people. The only answer is complete and utter subjugation. Wipe out their ability to make war. Destroy their ability to organize the radical aspects of their religion. Begin a massive, centuries-long campaign to assimilate their culture.
It's horrible. It's awful. It's brutal and it's obscene. It is also the only possible solution that is rational, because every other alternative has them leading an ever-increasingly-well-financed and organized holy war of vengeance against a massively dehumanized enemy (everyone who is not them). It will be the sort of war where outrageous violence and war crimes are considered points of honour and pride, not something you get brought up on charges for.
History has taught us all about this stuff. This is where you control the populace by burning people alive. This is where you ban education except for the select few. This is where you keep those with morals working for you by bringing in a 14 year old girl and slowly murdering her over days in front of the "moral" person and then informing them that for each day of non-compliance another will be killed just like that right there.
This sort of war is where things happen that would blacken your soul to even think about. It is the sort of war where people volunteer to be suicide bombers by the tens of thousands. It is the sort of war that is remembered for thousands of fucking years.
If we do not prevent the formation of an ultra-religious extremist state bent on wiping out the entire population of the earth that disagrees with them then we are looking at the motherfucking sack of Troy, but with SCUDs, tanks and - eventually - ICBMs.
So what do we do? We end these people. As quickly and as efficiently as is possible, and we pray to our descendants for forgiveness for the sins we are about to commit.
The honest answer to that? Big Data. There are dozens of companies right now offering various cloud-based analytics software offerings that place an "observer" or "agent" in your datacenter. They then hoover up fucking everything. Every scrap of performance data. What's installed where. Peaks and valleys in response times for various infrastructure components, you name it. (See: Cloudphysics, amongst many, many others.)
Then you get into companies like VMTurbo that are now using this data to predict required changes and configurations...and they're getting quite good at it, even when they don't have access to Cloudphysics-like datasets.
Now, as a large company, you start buying these guys up. Not for the software they offer, but because they employ the best Big Data PhDs in the world, and they have amassed petabytes of data that is supremely useful for building out this level of automation. Your first generation robot handlers rely on statically collected information from volunteer canaries and non-automated deployments still using the cloudy analytics stuff. Not perfect, but that's okay, you're not automating the whole world yet; it's early days.
Meanwhile, the boffins are in the back room correlating application design and hardware design with various statistics and building models of how changes in applications will affect the results...then testing them. They are learning to build highly accurate predictive mechanisms that will make VMTurbo look like a child's toy.
And on and on it goes, getting ever more accurate. Instead of needing the "laying of hands" from the High Priests, this sort of stuff is dealt with by using empirical data, advanced prediction algorithms and high-reactivity monitoring that will catch any deviations from the predicted algorithms, adapt, feed that information back into the Big Data systems and refine the algorithms some more.
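A toy version of that loop - predict from history, flag deviations, fold the observation back in to refine the model. A real system would use far richer models than a moving average with a standard-deviation band; this is purely illustrative, and all names are mine:

```python
import statistics

# Predict-observe-refine loop: a rolling baseline flags values that
# deviate beyond n standard deviations, then absorbs every
# observation (anomalous or not) to refine the baseline.

class Predictor:
    def __init__(self, window: int = 20, n_sigma: float = 3.0):
        self.history: list[float] = []
        self.window, self.n_sigma = window, n_sigma

    def observe(self, value: float) -> bool:
        """Return True if the value deviates from the prediction."""
        recent = self.history[-self.window:]
        anomaly = False
        if len(recent) >= 2:
            mean = statistics.fmean(recent)
            sd = statistics.stdev(recent) or 1e-9
            anomaly = abs(value - mean) > self.n_sigma * sd
        self.history.append(value)   # refine the model either way
        return anomaly
```

The real products replace the moving average with learned models of how application and hardware changes shift the statistics, but the feedback structure is the same.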
I should also point out that I've seen prototypes of this stuff actually working, and working on software and configurations never before seen by the prediction algorithms. I've seen them working on dynamic workloads. When you're a tech journalist, you get to see some of these stealth-mode startups. And then you start putting what they offer together with what these other guys offer, and you see that this company is making these acquisitions over here...
So..."how will all this black box magic voodoo work?" The same way a B2 Spirit Bomber stays in the air. Damned fine engineering. Modelling, modelling, modelling, and a fly-by-wire system that makes changes faster than any human could ever dream of doing.
You are about to become obsolete, sirrah. I know you won't believe that until it's upon you and you are staring at your own pink slip, but it's time to upskill.
Resizing LUNs does not add value to the business.
Amazon has an American legal attack surface greater than zero. They thus cannot be trusted to protect your data. End of line.
Re: "Worst one-night stand EVER"
A one night stand that founded a people that went to the motherfucking moon. I'd say it was pretty successful.
Re: "EU officials say the hold up was with the entire shortlist."
"Hmm. If we had a "hotbed of nepotism and graft" icon, what would it look like?"
And not in an "enterprise who needs to build entire datacenters" kind of way. For the 99.9995% of businesses out there who don't build datacenters. For whom renting 1-4 racks at a colo is just fine.
Prove azure's cheaper. For real world workloads, not ones designed from scratch for the cloud. Prove it, prove it, prove it.
Stop your assertions, drop the anonymous coward and man the fuck up with some actual evidence.
There are only 17,000 "enterprises" in this world. There are over one billion businesses. Prove your assertion in the context of the majority.
Proof. Not assertions. Proof.
Ah, but then, you're the sniveling coward who can't think outside of Microsoft's marketing messaging and isn't man enough to post under their own name. I'm sure we'll soon get a completely unverifiable assertion about your self importance in order to back up how much you "know it all to be true", followed by a comparison of Azure to what amounts to VCE for an on-premises deployment, and a bunch of waffle about the manpower cost when you have to run a team of 50 just to light up one rack, oh woe is the enterprise space and all those millions of VMs you support.
Yup. Move along, little doe-eyed brainwashed marketing puppet. The rest of us actually run the numbers.
Now, next, you'll tell me that Microsoft's Cost Estimation Tool for Windows Azure was perfectly justified in telling me that I should expect to pay $2,379,343.52 per month to support the IT of a small business in the cloud, before bandwidth is factored in.
This would be a small business that has an annual income of $5,000,000. Oh, and that I've managed to run successfully on less than $200k for hardware, software, bandwidth and staffing for the past eleven fucking years.
And yet, apparently, $2M a month is cheaper. Of course it is. Because Microsoft says so. Because the cloud. The cloud wants one hundred and twenty times (120x!!!!) the amount of money to run, doesn't include backups, disaster recovery or bandwidth for that price, and has the added "benefit" of putting all my customer data in the hands of the NSA and placing me in violation of various local privacy laws for doing so.
But it's unquestionably cheaper, and Trevor Pott is just a stupid Microsoft-hating moron who can't understand this simple fact.
Well, I'm glad we cleared that right the fuck up. Cheers for beers, Microsoft marketing chap. In fact, here's one for you now -->
Priced, of course, so that nobody using it will ever be able to compete with Azure, which in turn is still more expensive than rolling your own. And there's no word on a reduction of licensing complexities a-la SPLA (indeed, MS just jacked the price up another 15-50%). And let's not even touch VDI licensing, or how a virtualised server instance isn't the same as a virtualised endpoint instance, especially for apps specifically coded to detect server instances and refuse to run on them...
It's a cute first try, but for Microsoft to truly compete in the software-defined infrastructure wars, they'll need to admit the past 15 years of licensing shenanigans were wrongheaded, gut the entire thing, and move to something that's actually partner and end-customer friendly.
Not fucking likely.
And that's before we get into talking about efficiency: the number of VMs per rack, the configuration of those VMs, or the IOPS.
That said, if you're married to Microsoft, it's a great offering. Some people are, and this will help them kind-of/sort-of keep up with the Joneses. Everyone else will be able to do more while spending way less...but at least it will be sort of close.
Maybe, if everyone's really lucky, they'll figure out that they actually have to compete and they'll get on doing that at some point. Then the prices can come down to competitive levels, density can go up, system center can finally, mercifully be forever expunged, and everyone can win.
Maybe. I live in hope.
Two controllers are required for uptime, not data integrity. Remember: server SANs use object storage, not RAID. So when they do a double local and N remote they aren't going through a RAID controller presenting two LUNs, they're writing to two separate entities.
Oh, and, just by the by, the two PCI-E flash cards (typically where the initial double local write goes) do count as two controllers.
Of course, if what you want is to have your double local and N remote all confirmed committed before reporting that write back to the guest OS, then you'll have to send that across the network first...but all you need back is a confirmation that it's written on the remote node, not a full copy of the data. Even then, the advanced stuff is doing RDMA writes to things like memory channel flash, which is going to provide you lower latency than a tier 1 storage array.
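To make the distinction concrete, here's a minimal sketch of the fully synchronous "double local + N remote, confirm everything before acking the guest" policy described above. All names (`Node`, `write_double_local_n_remote`) are hypothetical illustrations, not any vendor's actual API; the point is simply that only a confirmation, not the data, comes back from the remote nodes before the guest is acked.

```python
class Node:
    """A storage node holding object replicas in a plain dict."""
    def __init__(self, name):
        self.name = name
        self.store = {}

    def write(self, key, data):
        self.store[key] = data
        return True  # a confirmation only; the data itself isn't echoed back


def write_double_local_n_remote(local_a, local_b, remotes, key, data):
    """Fully synchronous policy: ack the guest only after both local
    copies AND every remote replica have confirmed the commit."""
    ok = local_a.write(key, data) and local_b.write(key, data)
    ok = ok and all(r.write(key, data) for r in remotes)
    return ok  # only now is the write reported back to the guest OS


# Two local targets (e.g. two PCI-E flash cards) plus two remote nodes.
flash_a, flash_b = Node("flash-a"), Node("flash-b")
remotes = [Node("remote-1"), Node("remote-2")]
assert write_double_local_n_remote(flash_a, flash_b, remotes, "blk42", b"\x00" * 4096)
```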
The thing is, with server SANs, you just have more options than you do with traditional SANs. I can have a highly latency sensitive application running on a node and choose to run it in a "double local + N remote" setup where "N remote" writes are write coalesced and lag behind the double local by a few milliseconds. But I would probably not run that in HA, because I know there's the chance the remote copy isn't crash consistent.
Being a server SAN, however, I have lots of choices. I can pull the disks/cards from the crashed server and bung them into another one, let it pick them up and light the VM up from the crash consistent state. Or, if the original server is a total loss, I can pick up from the copy that's a few milliseconds behind.
Or I could accept the latency of RDMA-to-PCI-E-or-MCS-flash and just run my N remote crash consistent with my 2 local. I've got lots of options. Including ones that allow me to get way better latency than your typical tier 1 array, and ones that let me get way better redundancy. Or, if I build it right (PCI-E interconnects with RDMA-to-MCS), both.
It all depends on what that particular workload's data is worth. And holy shit, would you believe it, I can even set about defining this as a policy for different workload classes, treating different workloads differently without having to set up different storage arrays, or fuck with LUNs ever again.
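A policy-per-workload-class setup like the one described above might look something like this sketch. The class names, field names, and numbers are all made up for illustration; real server SAN products expose this differently, but the shape of the idea is the same: one policy object per workload class, no per-LUN fiddling.

```python
from dataclasses import dataclass


@dataclass
class ReplicaPolicy:
    local_copies: int    # synchronous copies committed on the local node
    remote_copies: int   # the "N remote" replicas
    remote_sync: bool    # True = remotes confirmed before the guest ack
    coalesce_ms: int = 0 # write-coalescing window for lagging remotes


# Hypothetical workload classes; the tradeoffs mirror the ones above:
# latency-critical apps lag their remote copy, tier-1 waits for it.
POLICIES = {
    "latency-critical": ReplicaPolicy(2, 1, remote_sync=False, coalesce_ms=5),
    "tier-1":           ReplicaPolicy(2, 2, remote_sync=True),
    "bulk":             ReplicaPolicy(1, 1, remote_sync=False, coalesce_ms=50),
}


def policy_for(workload_class):
    """Look up the replication policy for a workload class."""
    return POLICIES[workload_class]


assert policy_for("tier-1").remote_sync
assert not policy_for("latency-critical").remote_sync
```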
It's goddamned magical.
As for your "what I'm saying is based on considerably more than a couple of years experience", that's cute. I have "considerably more than a couple of years experience" in storage as well, but server SANs haven't been worth consideration for more than a couple of years, and thus experience with them specifically really can only date back that far. From the sounds of it, however, you don't actually have any experience with server SANs. Maybe that's what's got your Irish up.
But hey, cheers. If you want to feel like you're the top dog, your penis is longer, and you've won the argument as you sail off into the future, I'll let you have 'er. Here's a beer icon conceding my defeat, and I'll not reply to whatever you post after. I've said my piece. You can sit tall astride the internet mountain.
Hi Lusty, I'm sorry, but you're wrong. While ethernet is a possibility for server SAN interconnect, it is by no means the required interconnect. Infiniband is quite popular for latency-sensitive deployments, and direct PCI-E interconnect (see: A3Cube) is also available, and works quite well, thank you.
You might also consider things like "write double local, confirm back to the application while sending data to the second node, mark the second local write as erasable once the second node confirms." Throw in the fact that this allows for write coalescing in high-transaction environments, or vendors like SimpliVity that do inline deduplication and compression - thus only sending change blocks between nodes, because everything is deduped and compressed before being committed - and you realize that there are a half dozen schemes to drop data volume between servers while preserving write integrity.
Also: the costs on server SANs are dropping dramatically. Look at Scale Computing or Maxta. The downwards pressure has begun in earnest. What's more, as they manage to drive down their CPU/memory usage requirements the toll on your virtual infrastructure is far less. To the point that I seriously doubt you'll get the same amount of storage and the same IOPS with the same latencies from centralized storage vendors. And I can pretty much guarantee you won't 5 years from now, as server SANs commoditize storage for good.
Also also: server SANs are starting to address the issue of CPU usage for storage. A great example of this is SimpliVity's FPGA for inline deduplication and compression. It works, it works well.
Additionally, this statement: "Anything the server SAN guys say to the contrary is from their "testing" which ignores data consistency issues completely in favour of better stats. EMC, NetApp, HP, HDS never ignore data consistency for their tier 1 systems even in testing, hence the apparent difference to the layman." is pure FUD. Not only is it FUD, it's insulting FUD. I absolutely agree that one of the server SAN vendors - and a prominent one - has this problem. The rest emphatically do not.
More to the point, having devoted two years of my life to learning every facet of these systems, I do not appreciate being called "a layman". I promise you, I know more about server SANs than you do...and based on your level of interest and usage of FUD, probably more than you will in the next five years.
The thing about server SANs is that they are not "one size fits all". They can be configured differently for different requirements. Different balances can be struck with them, and tradeoffs consciously made.
Also: "As for using volatile memory for storage, the same is true - yes it's quicker, but only in the same way as strapping solid fuel rockets to your car. Survival rates are considerably lower in exchange for a faster ride."
This is a rare configuration, at least for writes. (Though there is one vendor in particular I know advocates this and insists on calling themselves a "server SAN" when they're nothing of the sort...)
I do see it in server SAN configurations tweaked for VDI. Ones where the node in question will not be storing the golden master or differencing disks, and they are obsessed with cramming every last VM in there. I don't agree with it, but I do know the vendors that do it and they are very up front about the risks.
Long story short: you're working on a whole lot of FUD. If there is one valid concern in the whole lot it is that no single server SAN vendor has yet addressed all of these issues in a single product offering "off the shelf". (The major stumbling block being that most of them choose to stick to Ethernet for simplicity reasons...but that's changing, and I've seen deployments using infiniband from most vendors...and several are looking into PCI-E interconnects for 2015.)
That said, I happen to know of at least four different models that are in development from different vendors that will address everything you raised (and a few other issues) in 2015.
Centralized storage - especially centralized storage costing $virgins from the majors - is simply non-requisite. There are far cheaper alternatives available today, and they are selling like hotcakes. I highly recommend you put down the vendor "war cards" and take some of the high end server SAN offerings for a spin. You'll be pleasantly surprised.
Each person has a different risk envelope. I have lived and breathed server SANs for the past two years and thus they don't seem at all complex to me. Certainly no more so than fibre channel and LUNs!
Do I think that Fortune 500 need to wait for some of these folks to prove out before putting tier 1 apps on? Absofuckinglutely. But not because of the tech; the problem is ensuring that the companies in question have the support networks and experience required to provide true tier 1 class support.
But the tech? The tech is solid...so long as you buy from the right company. At least two of them are pretty buggy still.
But it's ready for tier 2 apps in the fortune 500. It's probably ready for tier 1 in the commercial midmarket. Server SANs are just...really not that hard anymore. They're not special. They're not new.
What is new are the companies providing the tech. They all have growing up to do.
All flash network arrays are going to beat server SANs? Wha?
1) Server SANs can use things like "memory channel storage" that provide latency traditional SANs can only dream of.
2) The "interconnect problem" with server SANs is the exact same problem that traditional SANs have...with the difference being that server SANs can largely get around scaling by switching to multicast. Traditional SANs and scaling are...more unique.
Arrays may never go away entirely. There will probably always be room for them as a means of bulk storage. But in the long run, server SANs are going to be hard to beat. Centralized storage was a bandaid. The best solution is always to have the data as close to the processing as is feasible.
"It all depends what 5G turns out to be."
...you're trying to debate this without having researched the proposals that are on the table? Let's be clear: 5G is about delivering up to 10Gbit per cell to far more devices than 4G could dream, largely by using a much higher number of smaller cells scattered all over hell and back. That's why they want higher frequencies; so that you don't end up with cells overlapping in urban areas. Read this, then we'll have a talk.
"It's possible that currently being in Bangalore where 3G services are both ultra-cheap and extremely patchy due to network overloading at peak times has coloured my viewpoint on this, but I do think that enterprise high bandwidth use cases such as VDI just won't see wireless as reliable enough for the foreseeable future and so won't be an investment driver for the technology."
What can I say except "you're painfully, overwhelmingly wrong". 3G services run on long-penetration waves. They are slower than sin, and the technologies themselves are utter shit at dealing with an overload of devices. That's like comparing a mid-urban 17th century cobblestone road to a fully modern 18-lane freeway. The traffic volumes are radically different, and the freeway is so differently designed - off ramps, no vendors in the streets, no donkeys and carts - that it just moves traffic more efficiently.
5G is supposed to be the equivalent of taking a modern 4G network and cutting the cell sizes down to 1/10th, while boosting the theoretical maximum cell capacity by 10x. All with a layer of additional technologies to help prevent interference, signal degradation and ensure better handoff.
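The back-of-envelope arithmetic for that claim: cutting cell area to 1/10th while boosting per-cell capacity 10x multiplies capacity per unit area by roughly 100x. A trivial sketch with made-up units (these are illustrative numbers, not spec figures):

```python
# Arbitrary units: one 4G cell covers 10 area units at 1 capacity unit.
cell_area_4g = 10.0
capacity_4g = 1.0

cell_area_5g = cell_area_4g / 10  # cells a tenth the size
capacity_5g = capacity_4g * 10    # ten times the per-cell capacity

# Capacity density = capacity available per unit of covered area.
density_4g = capacity_4g / cell_area_4g
density_5g = capacity_5g / cell_area_5g

# 10x smaller cells * 10x per-cell capacity ~= 100x capacity density.
assert abs(density_5g / density_4g - 100.0) < 1e-9
```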
So yeah, you're just wrong. "High bandwidth" "enterprise" workloads like VDI - which, by the way, is rather low bandwidth and increasingly used by consumers, at least here in North America - actually work pretty well over 4G. They'll work far better over 5G.
I'm sorry you live in a technological armpit. I really am. But much of the rest of the world simply doesn't suffer those issues. Canada, for example, has LTE that works like a sonofabitch, and 5G will be absolutely transformative for us.
So yes, people are going to use mobiles for more than checking mail and browsing webpages, slowly. Maybe, one day, they'll install mobile that's not shite where you are too.
But "mobile that's not shite" technology already does exist. You need carriers that aren't shite (ones that split and size cells appropriately), and you need a frequency layout that allows for "not shite" (high frequencies in urban areas to allow for smaller cells, etc.).
Some of us have these. We thusly don't have the sorts of problems you describe, even on modern technology. The issue isn't the tech. It's the implementation. Which means that when 5G comes around, and splits the cells even more, whilst offering far higher bandwidth per cell...
...well, that'll change the world for a lot of us.
I should point out that I have a list of uses for the kind of bandwidth - and small cell isolation - that 5G will bring. Starting with telepresence/telemedicine, but also moving into various elements of physical infrastructure automation and losing the dependence on fixed-wire infrastructure, especially as portable computers and mobiles get to the point of being able to run multiple virtual machines and carry around terabytes of storage.
Hybrid VDI instances are the next gen stuff. Where the VM is run locally but change blocks are synced back, and patches/centralized updates move down in the same way. Application, content and data delivery is increasingly occurring from "the cloud", and there's a hell of a lot more to mobile network device usage than the average smartphone.
4G, even if fully realized, isn't nearly enough to meet today's demands. We're still hamstrung by these shared networks with low bandwidth. 5G just might get us to the point that we can do on mobile what we can do today on wired, but technology moves ahead, and at a rapid pace.
It's insane to think that we'll just stand still because you can only conceive of data networks as being a transmission vector for entertainment. Our whole society is changing. We're moving away from cities designed for long commutes, and even from having to be in the office at all. That will change everything about how we use data. As will the coming "instrumentation" of our everyday lives and the ongoing boom in robotics.
The age of information is not done. Not by a long shot.
So in other words "here is a vague assertion that bandwidth requirements won't grow to such a point that 5G is required, but I'm completely unwilling to back it up in any way save to assert my own lack of imagination". Wow. That's awesome.
I love how you slipped in the "5G will inevitably cost more" there too. Brilliant. Can your next brilliant riposte please be in ALL CAPS or at least ranDomly Mixed casE? You can even throw in some really bad spelling too. Then I can nominate you for CoTW. And it's only Monday!
Two companies with an American legal attack surface. PATRIOT act. *shrug* It's nice that it's lower latency for locals, and all...but it really doesn't change the "foreign government can seize your data" equation at all, does it?
...well, except that now there are two governments able to do it....
Re: Note to self
I'll believe Bing's worth a bent tuppence the day it can search Microsoft's own web properties for things like error messages more accurately than Google can. It's outright embarrassing how hard it is to find useful information about Microsoft's own products and services from their own search engine.
Re: Why can't they just honestly say...
I honestly don't believe you'll "save money" by moving your support to Vancouver. It's Canada's most expensive city, and getting to be on a par with San Francisco.
"Perhaps if they *STOPPED* lying to us"
No chance of that happening in our lifetimes...
Re: "Failed to install"
Track the KB numbers. 94% of the time the reason a patch fails to install is because some other patch in the group either superseded it or, this update round, stepped on files that the patch-that-won't-install needed to step on.
The other 5% of the time it's because something buggered your ACLs and you need to use subinacl to reset everything.
The remaining 1% is a goddamned mystery.
Re: Borked my PC
"Remember - Microsoft have NEVER released any product that works properly!"
Neither has anyone else. *shrug*
Everything requires patches. Microsoft make good - even great - software. They also make real stinkers. Windows itself is more the former than the latter.
Wake me when Wayland/Weston are baked, we have a FreeRDP server baked into the distro and someone has taken systemd, gnome 3 and unity out back and done the needful. Then we can really move beyond Windows.
Re: Make a week of it!
I bow in humility. That was masterful. I regret that I have but one upvote to give.