Hi, Office files can generally be recovered after a crash in a manner similar to that discussed here. Hope that helps.
Would my military or police suppress the citizens and back an autocratic state? No. But I'm Canadian. Would yours?
America is rotten to the core. And I honestly believe your military, national guard and federal policing units would stand with the state, not the people. They've already been trained for decades in "us versus them". Your local Sheriff is just a Sheriff. His revolver and his shotgun mean nothing against the awesome power of an Apache helicopter.
Sorry man, you just live in the wrong country for "the people" to have a say. Probably for generations to come.
That tired old saw. In the days when the power of an armed citizen roughly equaled that of an armed soldier, I'd agree with you. Even when it took two or three regular citizens to overcome the training of each soldier.
Today, you can "pacify" 30,000 people with a HumVee and a microwave cannon, or simply wipe them out by the tens of thousands with helicopters, daisy cutters or machine gun grenade launchers.
I don't care how many M16s you have on your insurgency shelf at home, if the state wants you dead you will be made dead. Especially if said state is a fully modern Western nation. Hell, we have freaking robots for that now. Flying ones!
Voting means nothing. Nothing at all. What matters - especially in "money is speech, corporations are people" America - is who writes the cheques. Given how much wealth is controlled by so few people, "the people" don't stand a chance to impose their will - or their oversight - no matter who they vote for.
Democracy, or even the concept of a republic, is a lie in a world where the gap of money, power and sheer force of arms between the haves and the have nots has moved from 3:1 to 300,000:1.
...if there were real limits, real transparency, and fucking real consequences over all this.
Quis custodiet ipsos custodes?
"Battery backed" versus supercap is fairly irrelevant. It's an external power source.
And no, I didn't forget that flash exists on the module, but flash isn't the primary storage interface. It's the backup data retention space. There's just enough juice in the thing to dump the contents of RAM to flash. Well...you hope, anyways. There have actually been some issues with certain NVDIMM setups of this style losing their supercaps over time (damned TiMn supercaps!) and thus not actually having the juice to fully write out the contents of the RAM before the clock strikes 12 and it turns back into a pumpkin.
NVDIMMs have a way wider use case set than MCS. NVDIMMs are used inside SSDs (well, sort of), RAID cards, modern high-RAM Hard Disks...anywhere where you might have RAM in use for high-speed storage, but require non-volatile storage if the lights go out.
MCS isn't that. MCS is a means of hijacking the DRAM bus to provide a jumped-up version of PCI-E storage. It isn't main system memory, or even main memory for a subcomponent (like a RAID card). It's secondary (or permanent) memory. Like a PCI-E or SATA SSD.
To be more concrete:
NVDIMMs are the sort of thing you put in your RAID card so that you can have 1GB or 2GB of fast DRAM cache on your RAID card that accelerates your array. When the power goes out, the DRAM dumps its contents to a flash backup. When the power comes back on, it loads that data from flash, then flushes it back out to the disks.
MCS is more akin to the disks that would hang off that RAID card and serve as permanent storage.
If I were designing the ultimate in "bitching systems of the future", I would use NVDIMMs as my computer's main memory and MCS as permanent storage. I could run my databases in-memory without fear, and store my operating system, application, and long-term storage in the MCS modules.
...and now I want to go build a system like that. Hot damn that sounds sexy.
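To make the RAID-card flow above concrete, here's a toy Python sketch of the lifecycle: writes land in DRAM during normal operation, flash is only touched when the power dies, and on power-up the saved cache is restored and flushed to the disks. All class and method names here are mine for illustration, not any vendor's API.

```python
# Toy model of the NVDIMM lifecycle described above: fast DRAM cache,
# emergency dump to flash on power loss, restore-and-flush on power-up.

class NVDIMMCache:
    def __init__(self):
        self.dram = {}    # volatile: block -> data, lost without backup power
        self.flash = {}   # non-volatile: only written on a power event
        self.disks = {}   # the array behind the RAID card

    def write(self, block, data):
        # Normal operation: writes land in DRAM at memory speed.
        self.dram[block] = data

    def power_loss(self):
        # The supercap/battery keeps the module alive just long enough
        # to dump DRAM contents into flash.
        self.flash = dict(self.dram)
        self.dram = {}

    def power_restored(self):
        # Reload the saved cache, then flush it out to the disks.
        self.dram = dict(self.flash)
        for block, data in self.dram.items():
            self.disks[block] = data
        self.flash = {}

cache = NVDIMMCache()
cache.write("lba42", b"payload")
cache.power_loss()
cache.power_restored()
assert cache.disks["lba42"] == b"payload"  # nothing lost across the outage
```

The point of the sketch: flash never sits in the data path. It only exists so the outage is survivable.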
Sorry, but you are incorrect. Memory Channel Storage is presented to the system as storage. It is not presented as main system memory. Memory Channel Storage is used in a similar fashion to PCI-E storage, SATA storage or other forms of permanent storage.
NVDIMMs are presented to the system as memory. They are used by the system in the same fashion it would use volatile memory; however, they don't go *pffft* when the lights go out. NVDIMMs don't write to flash as the primary storage medium. They write to RAM, then dump that RAM into flash using an external power source when they detect a power-out event.
They serve different functions and operate in a different manner. The only similarities between the two are
A) Form factor; they both use DIMMs
B) When the power goes out, their contents end up stored in flash
Under the hood, however, the differences far outweigh the similarities. One example: MCS can be huge compared to NVDIMMs. 400GB, 800GB or more per stick. NVDIMMs use flash as an "oh shit"-class backup medium for RAM, and thus are no bigger than the RAM they back up.
In truth, in many ways, I like the NVDIMM concept better. If only because it means you can use the things bloody forever without worrying about the write life of the flash. You will obsolete the system before the flash chips in an NVDIMM need to be worried about.
MCS? Not so much. As stated in the interview, Diablo uses a bunch of consumer-level cells and "black magic" maths to wear level them. How long do they last, really? Given the high price and target (ultra-low-latency databases, etc) I have all sorts of questions about their applicability, survivability, etc. I have a list of technical testing questions a mile long before I'd ever put them into any of my systems.
Not so an NVDIMM. NVDIMMs are simple and straightforward. But they are so because the operating principles are completely different, as is their ultimate applicability.
Putting RAM with a supercap on a PCB and giving it a SATA 3 interface is closer to being "exactly like an SSD" than MCS is to being "exactly like an NVDIMM". At least the "RAM + Supercap + SATA 3 interface" and the SSD can both only be used as permanent storage.
NVDIMMs are treated by a system more like regular RAM than they are like PCI-E or SATA storage. That's the advantage of NVDIMMs. It's why they're worth buying!
Now I could be wrong - $deity knows that happens often enough - but if that is so, please explain how. For my own erudition. How, other than in the most superficial fashion, is an NVDIMM anything like MCS at all?
Thanks in advance!
"MCS is NVDIMM is NVvault "
Explain how. Because from a technical standpoint I don't understand how at all.
NVDIMM is RAM that writes to flash only after it detects a power out event. It relies on a supplementary power source to do so.
MCS writes everything to flash immediately. It's not sitting in RAM waiting for a power off event. It goes straight to flash.
NVDIMM uses flash as emergency storage. MCS uses flash as primary storage and doesn't have an emergency storage component. They are - to my understanding at least - two completely different products with two completely different goals.
NVDIMMs are RAM modules. They behave as RAM modules. They are presented to the system as RAM modules. MCS is storage. It is presented to the system as storage...and even requires a BIOS patch to do so.
NVDIMMs are amazing and fantastic for in-memory databases, because they allow you to work at DRAM speeds, something MCS cannot do. MCS acts in a fashion very similar to PCI-E flash storage, but without the latency spikes that affect PCI-E storage at high utilization thresholds.
From practical application of the technologies right down to the nitty gritty of electrical signalling they are, to my understanding, two completely distinct products. If you claim otherwise, please, do share how I am wrong about that. I like knowledge.
Next: please explain how "who opened their doors when" matters?
Also: regarding this statement: "Badalone attempts to justify Diablo's position with what would be described as grandstanding - whereas we see the Netlist CEO is more comfortable detailing the issue in court and in the Netlist's financial filings. Take a look at the SEC website for more information on Netlist / Diablo."
I both have to agree and disagree. Is Badalone grandstanding? Absolutely. But Netlist is also not answering important questions all while shedding board members. If Netlist's take on this is "please don't ask tough questions and just wait for the courts to deal with this" then that is a take I cannot sanction. The purpose of journalism is to ask the tough questions. Especially when someone doesn't want those questions asked.
Which brings me to: " Saying Netlist responded with a "canned" sentence doesn't alleviate the need to report information that counters Badalone's claims."
When I uncover any information that makes me believe for a second Netlist has a valid claim then I will gladly report on it, dissect it in detail and explain how this is likely to be a real threat to Diablo's position. I owe Diablo nothing, and care nothing for either company involved beyond gaining a deeper understanding of the technical issues and history that drives the conflict.
I am absolutely willing to do a counter-interview with Netlist and dive into the technical nitty gritty of their claims with them. I'd love to, in fact. For example, to understand why someone might claim that NVDIMM and MCS are "basically the same". I don't see it that way at all, and would love to be shown how I am wrong. If you know people at Netlist who can do so, please, have them contact me.
"Let's also be fair and recognize Netlist has been winning at the USPTO and in court. Big name companies have already settled (ex. TI)."
If I win a court case against someone's dog biting me, it doesn't mean that I'll win when their cat craps on my lawn. Each claim in each case is to be taken on its own merits, no?
All good questions. I'll be sure to track down the CEO of Diablo and ask.
No bias, just trying to understand. I have enough knowledge of the topic to have a lot of very serious technical questions about Netlist's claims. How/why do claims around what amounts to an LRDIMM count against memory channel storage, which is - at least at first glance - completely different?
The only bit that would seem to be the same is that somewhere on those chips there is a widget that allows the CPU to "talk to" more address space than it was designed to. Address conversion, if you will. Electrically and logically you need to address flash completely differently from RAM. But at the end of the day there is still some widget that is allowing you to address more memory on that bus than you should by all rights be able to.
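The "widget" in question amounts to address translation: presenting more space than the bus can natively express by remapping a small window onto a larger backing store, bank-switching style. Here's a toy sketch of the general idea (purely illustrative; this is not either company's actual design):

```python
# Toy address translation "widget": expose a large backing store through
# a small address window by remapping which slice of it is visible.

WINDOW_SIZE = 16  # addresses the bus can actually express (tiny, for the toy)

class AddressTranslator:
    def __init__(self, backing_size):
        self.backing = [0] * backing_size
        self.bank = 0  # which window of the backing store is mapped in

    def select_bank(self, bank):
        self.bank = bank

    def read(self, bus_addr):
        assert bus_addr < WINDOW_SIZE
        return self.backing[self.bank * WINDOW_SIZE + bus_addr]

    def write(self, bus_addr, value):
        assert bus_addr < WINDOW_SIZE
        self.backing[self.bank * WINDOW_SIZE + bus_addr] = value

t = AddressTranslator(backing_size=64)  # 4x more space than the bus can see
t.select_bank(3)
t.write(5, 0xAB)        # lands at backing address 3*16 + 5 = 53
t.select_bank(0)
assert t.read(5) == 0   # bank 0's address 5 is a different cell entirely
t.select_bank(3)
assert t.read(5) == 0xAB
```

Whether owning a patent on one flavour of that trick covers every other flavour is precisely the legal question I can't answer.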
Now, Diablo claims that they have the rights to that particular piece of tech because they, in fact, invented it. More to the point, they claim the contract lets them use that tech. Fair enough; if that's true - and we'll see soon enough, I guess - then what is Netlist on about?
So that leads into the second round of claims: IP around battery-backed DIMMs. Unless you have a patent that basically says "we patent non-volatile memory in all forms" there's nothing similar between a battery-backed DIMM and a flash DIMM. Initial research didn't show Netlist having anything like such an overly broad patent.
Netlist borders on impossible to get hold of, but the Diablo CEO was entirely willing to have a grand old chat. Talking with him helped me understand that technical side of things a lot more, and the details around that cleared up at least some of my misunderstanding around the legal mess.
That said: there's a lot of posturing here, from both companies. From a technical standpoint, I still can't see how Netlist has much in the way of a claim, but I openly admit that the patents involved may somehow be interpreted to be more broad than my non-legal mind is capable of understanding.
The take away is that the dispute here centers around the fact that Diablo once did contract work for Netlist, and then moved on to do their own thing. Netlist feels that Diablo's "new thing" is sufficiently similar to the contract work that they once did that Diablo must clearly have used IP they own, be that inadvertently or purposefully.
Honestly, I have no idea if any of those claims will hold up, because intellectual property law isn't connected to technical realities in any way that I have yet been able to grok. But from a technical standpoint, the technologies involved are pretty far apart...with the exception of the widget that allows the CPU to address a larger address space.
Diablo claims they own the rights to it, and Netlist seems to have dropped all claims to it. So...why are they still fighting? On Netlist's side, I honestly have no idea. They will provide you canned statements about the whole thing, but not sit down and explain their reasoning. On the Diablo side, the reason is - quite clearly - pride.
The Diablo CEO is prideful. What's more, he quite clearly believes he is in the right. He will see this through because he feels strongly that Netlist is morally wrong in having wasted so much of his time and Diablo's money on this whole affair. Having talked to him, I believe that he honestly believes this.
So, I don't know about any of you, but this just keeps dragging me back to the technology side of it. The whole thing really bothers me because I just don't understand it. Is there something about my understanding of how the electrical signalling of the DRAM bus works that is inaccurate? Is my understanding of basic computer components really that flawed?
Diablo's CEO would have me believe my understanding of the gubbins of a computer is more or less correct. Netlist won't provide more than a canned explanation. For now, at least, that's as close to "understanding" this situation as it looks like I'm going to get.
I welcome any alternative hypotheses - especially technical ones - that explain where or how Netlist has a case here. At the end of the day, all the technologies involved: LRDIMMs, NVDIMMs, Memory Channel Storage...it's all just so cool to me. The nerd in me just has to make sure he really understands how it all works.
There's your mistake: you think of systemd as just a replacement for init. It's not. It is attempting to be - piece by piece - a replacement for every single core element of the OS that isn't a kernel. Including all the fundamental userland tools (and the freaking shell) that we think of as being core to the "GNU/Linux" package.
In very much the same way that Android runs a Linux kernel but is thought of as "Android", not as "Linux", so too is systemd evolving into its own thing. Mark my words, the GNU toolchain will be systemd's next target. He's already gone after everything else, and he won't stop until he, personally, controls the whole goddamned thing.
The three key strains of Linux today are:
Anyone, it seems, can build a userland stack. But at the center of it all, there is still Torvalds. He's ornery. He's blunt. He's to the point. And he's usually correct.
Go ahead and try to make it Systemd/RedHatnix or whatever the hell ego-driven digital phallic madness drives the gravy train next...it won't hold a candle to the semi-benevolent dictatorship of an Angry Finn obsessed with quality control.
systemd/Linux? Well, that's SLES off the list. GNU/Linux or GTFO, thanks. Slackware uber alles?
Then you buy shitty LED lamps. I live in Canada, eh? My city has had all LED lamps for a decade or more.
I guess it's just too much trouble for all y'all to invest in a $5 piece of plastic to solve the problem. Can't say as I've any sympathy. If ya need to figure out how to cope with snow, maybe you could ask them as have already solved the problem.
Microsoft's war on ease of use continues unabated.
Microsoft has always done dividends. Though, if IBM is any indication, you can do the share buyback scheme for 15 some-odd years and see stock prices rise.
"Their traditional markets are dead or dying"
Quite the opposite. The numbers show - if anything - there is strong demand around the world for people to retain control over their own data by running their own IT. Shocking, but then, only madmen would ever have questioned "put all your data into the American cloud", eh?
"Why wouldn't these people band together and exact vengeance?"
There is no reason to expect them to do anything other than what they are doing. They are leading a crusade. Once upon a time, we did this too. But the reasons for this are not rooted in the past 20 years, but the past 100. This is Britain's mess. The rest of the world is still cleaning it up. Britain must never again be allowed to draw national borders. Ever.
"What would you do?"
Well, me, personally, I'd not be worshiping a god that doesn't exist and killing in its name. But that's me. If there were a bunch of foreigners bombing my home every bloody day, I'd probably pick up a stick, sharpen it, and go put the pointy end into one of the people making my home go boom.
"What will we do? Fuck."
Wipe them out. All of them. There is only one way this ends. History has taught us this, and we've been dancing around it for the past 40 years.
This is a religious holy war. There is no reasoning with these people. The only answer is complete and utter subjugation. Wipe out their ability to make war. Destroy their ability to organize the radical aspects of their religion. Begin a massive, centuries-long campaign to assimilate their culture.
It's horrible. It's awful. It's brutal and it's obscene. It is also the only possible solution that is rational, because every other alternative has them leading an ever-increasingly-well-financed and organized holy war of vengeance against a massively dehumanized enemy (everyone who is not them). It will be the sort of war where outrageous violence and war crimes are considered points of honour and pride, not something you get brought up on charges for.
History has taught us all about this stuff. This is where you control the populace by burning people alive. This is where you ban education except for the select few. This is where you keep those with morals working for you by bringing in a 14 year old girl and slowly murdering her over days in front of the "moral" person and then informing them that for each day of non-compliance another will be killed just like that right there.
This sort of war is where things happen that would blacken your soul to even think about. It is the sort of war where people volunteer to be suicide bombers by the tens of thousands. It is the sort of war that is remembered for thousands of fucking years.
If we do not prevent the formation of an ultra-religious extremist state bent on wiping out the entire population of the earth that disagrees with them, then we are looking at the motherfucking sack of Troy, but with SCUDs, tanks and - eventually - ICBMs.
So what do we do? We end these people. As quickly and as efficiently as is possible, and we pray to our descendants for forgiveness for the sins we are about to commit.
The honest answer to that? Big Data. There are dozens of companies right now offering various cloud-based analytics software offerings that place an "observer" or "agent" in your datacenter. They then hoover up fucking everything. Every scrap of performance data. What's installed where. Peaks and valleys in response times for various infrastructure components, you name it. (See: Cloudphysics, amongst many, many others.)
Then you get into companies like VMTurbo that are now using this data to predict required changes and configurations...and they're getting quite good at it, even when they don't have access to Cloudphysics-like datasets.
Now, as a large company, you start buying these guys up. Not for the software they offer, but because they employ the best Big Data PhDs in the world, and they have amassed petabytes of data that is supremely useful for building out this level of automation. Your first generation robot handlers rely on statically collected information from volunteer canaries and non-automated deployments still using the cloudy analytics stuff. Not perfect, but that's okay, you're not automating the whole world yet; it's early days.
Meanwhile, the boffins are in the back room correlating application design and hardware design with various statistics and building models of how changes in applications will affect the results...then testing them. They are learning to build highly accurate predictive mechanisms that will make VMTurbo look like a child's toy.
And on and on it goes, getting ever more accurate. Instead of needing the "laying of hands" from the High Priests, this sort of stuff is dealt with by using empirical data, advanced prediction algorithms and high-reactivity monitoring that will catch any deviations from the predicted algorithms, adapt, feed that information back into the Big Data systems and refine the algorithms some more.
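The predict-monitor-refine loop described above can be boiled down to a toy: a simple model predicts the next reading, large deviations get flagged, and every observation feeds back into the model. I'm using a plain exponentially weighted moving average here as an illustrative stand-in; the real systems use far richer models, but the feedback mechanism is the point.

```python
# Toy predict-monitor-refine loop: EWMA prediction, deviation flagging,
# continuous feedback of observations into the model.

def monitor(readings, alpha=0.3, threshold=2.0):
    prediction, anomalies = readings[0], []
    for i, actual in enumerate(readings[1:], start=1):
        if abs(actual - prediction) > threshold * max(prediction, 1.0):
            anomalies.append(i)             # deviation from prediction: flag it...
        # ...then feed the observation back to refine the prediction.
        prediction = alpha * actual + (1 - alpha) * prediction
    return anomalies

# Steady latency with one spike: only the spike gets flagged.
latencies = [10, 11, 10, 12, 95, 11, 10]
assert monitor(latencies) == [4]
```

Swap the EWMA for models trained on petabytes of hoovered-up telemetry and wire the output to automated remediation, and you have the fly-by-wire infrastructure I'm describing.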
I should also point out that I've seen prototypes of this stuff actually working, and working on software and configurations never before seen by the prediction algorithms. I've seen them working on dynamic workloads. When you're a tech journalist, you get to see some of these stealth-mode startups. And then you start putting what they offer together with what these other guys offer, and you see that this company is making these acquisitions over here...
So..."how will all this black box magic voodoo work?" The same way a B2 Spirit Bomber stays in the air. Damned fine engineering. Modelling, modelling, modelling, and a fly-by-wire system that makes changes faster than any human could ever dream of doing.
You are about to become obsolete, sirrah. I know you won't believe that until it's upon you and you are staring at your own pink slip, but it's time to upskill.
Resizing LUNs does not add value to the business.
Amazon has an American legal attack surface greater than zero. They thus cannot be trusted to protect your data. End of line.
A one night stand that founded a people that went to the motherfucking moon. I'd say it was pretty successful.
"Hmm. If we had a "hotbed of nepotism and graft" icon, what would it look like?"
And not in an "enterprise who needs to build entire datacenters" kind of way. For the 99.9995% of businesses out there who don't build datacenters. For whom renting 1-4 racks at a colo is just fine.
Prove azure's cheaper. For real world workloads, not ones designed from scratch for the cloud. Prove it, prove it, prove it.
Stop your assertions, drop the anonymous coward and man the fuck up with some actual evidence.
There are only 17,000 "enterprises" in this world. There are over one billion businesses. Prove your assertion in the context of the majority.
Proof. Not assertions. Proof.
Ah, but then, you're the sniveling coward who can't think outside of Microsoft's marketing messaging and isn't man enough to post under their own name. I'm sure we'll soon get a completely unverifiable assertion about your self importance in order to back up how much you "know it all to be true", followed by a comparison of Azure to what amounts to VCE for an on-premises deployment, and a bunch of waffle about the manpower cost when you have to run a team of 50 just to light up one rack, oh woe is the enterprise space and all those millions of VMs you support.
Yup. Move along, little doe-eyed brainwashed marketing puppet. The rest of us actually run the numbers.
Now, next, you'll tell me that Microsoft's Cost Estimation Tool for Windows Azure was perfectly justified in telling me that I should expect to pay $2,379,343.52 per month to support the IT of a small business in the cloud, before bandwidth is factored in.
This would be a small business that has an annual income of $5,000,000. Oh, and that I've managed to run successfully on less than $200k for hardware, software, bandwidth and staffing for the past eleven fucking years.
And yet, apparently, $2M a month is cheaper. Of course it is. Because Microsoft says so. Because the cloud. The cloud wants one hundred and twenty times (120x!!!!) the amount of money to run, doesn't include backups, disaster recovery or bandwidth for that price, and has the added "benefit" of putting all my customer data in the hands of the NSA and placing me in violation of various local privacy laws for doing so.
But it's unquestionably cheaper, and Trevor Pott is just a stupid Microsoft-hating moron who can't understand this simple fact.
Well, I'm glad we cleared that right the fuck up. Cheers for beers, Microsoft marketing chap. In fact, here's one for you now -->
Priced, of course, so that nobody using it will ever be able to compete with Azure, which in turn is still more expensive than rolling your own. And there's no word on a reduction of licensing complexities a la SPLA (indeed, MS just jacked up the price another 15-50%). And let's not even touch VDI licensing, or how a virtualised server instance isn't the same as a virtualised endpoint instance, especially for apps specifically coded to detect server instances and refuse to run on them...
It's a cute first try, but for Microsoft to truly compete in the software-defined infrastructure wars will require that they admit the past 15 years of licensing shenanigans were wrongheaded, gut the entire thing, and move to something that's actually partner and end customer friendly.
Not fucking likely.
And that's before we get into talking about efficiency relating to the number of VMs per rack, or the configuration of those VMs, or the IOPS.
That said, if you're married to Microsoft, it's a great offering. Some people are, and this will help them kind-of/sort-of keep up with the Joneses. Everyone else will be able to do more while spending way less...but at least it will be sort of close.
Maybe, if everyone's really lucky, they'll figure out that they actually have to compete and they'll get on doing that at some point. Then the prices can come down to competitive levels, density can go up, system center can finally, mercifully be forever expunged, and everyone can win.
Maybe. I live in hope.
Two controllers are required for uptime, not data integrity. Remember: server SANs use object storage, not RAID. So when they do a double local and N remote they aren't going through a RAID controller presenting two LUNs, they're writing to two separate entities.
Oh, and, just by the by, two PCI-E flash cards - which is typically where the initial double local goes - do count as two controllers.
Of course, if what you want is to have your double local and N remote all confirmed committed before reporting that write back to the guest OS, then you'll have to send that across the network first...but all you need back is a confirmation that it's written on the remote node, not a full copy of the data. Even then, the advanced stuff is doing RDMA writes to things like memory channel flash, which is going to provide you lower latency than a tier 1 storage array.
The thing is, with server SANs, you just have more options than you do with traditional SANs. I can have a highly latency sensitive application running on a node and choose to run it in a "double local + N remote" setup where "N remote" writes are write coalesced and lag behind the double local by a few milliseconds. But I would probably not run that in HA, because I know there's the chance the remote copy isn't crash consistent.
Being a server SAN, however, I have lots of choices. I can pull the disks/cards from the crashed server and bung them into another one, let it pick them up and light the VM up from the crash consistent state. Or, if the original server is a total loss, I can pick up from the copy that's a few milliseconds behind.
Or I could accept the latency of RDMA-to-PCI-E-or-MCS-flash and just run my N remote crash consistent with my 2 local. I've got lots of options. Including ones that allow me to get way better latency than your typical tier 1 array, and ones that let me get way better redundancy. Or, if I build it right (PCI-E interconnects with RDMA-to-MCS), both.
It all depends on what that particular workload's data is worth. And holy shit, would you believe it, I can even set about defining this as a policy for different workload classes, treating different workloads differently without having to set up different storage arrays, or fuck with LUNs ever again.
It's goddamned magical.
As for your "what I'm saying is based on considerably more than a couple of years experience", that's cute. I have "considerably more than a couple of years experience" in storage as well, but server SANs haven't been worth consideration for more than a couple of years, and thus experience with them specifically really can only date back that far. From the sounds of it, however, you don't actually have any experience with server SANs. Maybe that's what's got your Irish up.
But hey, cheers. If you want to feel like you're the top dog, your penis is longer, and you've won the argument as you sail off into the future, I'll let you have 'er. Here's a beer icon conceding my defeat, and I'll not reply to whatever you post after. I've said my piece. You can sit tall astride the internet mountain.
Hi Lusty, I'm sorry, but you're wrong. While ethernet is a possibility for server SAN interconnect, it is by no means the required interconnect. Infiniband is quite popular for latency-sensitive deployments, and direct PCI-E interconnect (see: A3Cube) is also available, and works quite well, thank you.
You might also consider things like "write double local, confirm back to application all while sending data to second node, mark second local write as erasable once second node confirms." Throw in the fact that this allows for write coalescing in high transaction environments, or vendors like SimpliVity that do inline deduplication and compression - thus are only sending change blocks between nodes, because everything is deduped and compressed before being committed - and you realize that there are a half dozen schemes to drop data volume between servers while preserving write integrity.
Also: the costs on server SANs are dropping dramatically. Look at Scale Computing or Maxta. The downwards pressure has begun in earnest. What's more, as they manage to drive down their CPU/memory usage requirements the toll on your virtual infrastructure is far less. To the point that I seriously doubt you'll get the same amount of storage and the same IOPS with the same latencies from centralized storage vendors. And I can pretty much guarantee you won't 5 years from now, as server SANs commoditize storage for good.
Also also: server SANs are starting to address the issue of CPU usage for storage. A great example of this is SimpliVity's FPGA for inline deduplication and compression. It works, it works well.
Additionally, this statement: "Anything the server SAN guys say to the contrary is from their "testing" which ignores data consistency issues completely in favour of better stats. EMC, NetApp, HP, HDS never ignore data consistency for their tier 1 systems even in testing, hence the apparent difference to the layman." is pure FUD. Not only is it FUD, it's insulting FUD. I absolutely agree that one of the server SAN vendors - and a prominent one - has this problem. The rest emphatically do not.
More to the point, having devoted two years of my life to learning every facet of these systems, I do not appreciate being called "a layman". I promise you, I know more about server SANs than you do...and based on your level of interest and usage of FUD, probably more than you will in the next five years.
The thing about server SANs is that they are not "one size fits all". They can be configured differently for different requirements. Different balances can be struck with them, and tradeoffs consciously made.
Also: "As for using volatile memory for storage, the same is true - yes it's quicker, but only in the same way as strapping solid fuel rockets to your car. Survival rates are considerably lower in exchange for a faster ride."
This is a rare configuration, at least for writes. (Though there is one vendor in particular I know advocates this and insists on calling themselves a "server SAN" when they're nothing of the sort...)
I do see it in server SAN configurations tweaked for VDI. Ones where the node in question will not be storing the golden master or differencing disks, and they are obsessed with cramming every last VM in there. I don't agree with it, but I do know the vendors that do it and they are very up front about the risks.
Long story short: you're working on a whole lot of FUD. If there is one valid concern in the whole lot it is that no single server SAN vendor has yet addressed all of these issues in a single product offering "off the shelf". (The major stumbling block being that most of them choose to stick to Ethernet for simplicity reasons...but that's changing, and I've seen deployments using infiniband from most vendors...and several are looking into PCI-E interconnects for 2015.)
That said, I happen to know of at least four different models that are in development from different vendors that will address everything you raised (and a few other issues) in 2015.
Centralized storage - especially centralized storage costing $virgins from the majors - is simply non-requisite. There are far cheaper alternatives available today, and they are selling like hotcakes. I highly recommend you put down the vendor "war cards" and take some of the high end server SAN offerings for a spin. You'll be pleasantly surprised.
Each person has a different risk envelope. I have lived and breathed server SANs for the past two years and thus they don't seem at all complex to me. Certainly no more so than fibre channel and LUNs!
Do I think that Fortune 500 need to wait for some of these folks to prove out before putting tier 1 apps on? Absofuckinglutely. But not because of the tech; the problem is ensuring that the companies in question have the support networks and experience required to provide true tier 1 class support.
But the tech? The tech is solid...so long as you buy from the right company. At least two of them are pretty buggy still.
But it's ready for tier 2 apps in the fortune 500. It's probably ready for tier 1 in the commercial midmarket. Server SANs are just...really not that hard anymore. They're not special. They're not new.
What is new are the companies providing the tech. They all have growing up to do.
All flash network arrays are going to beat server SANs? Wha?
1) Server SANs can use things like "memory channel storage" to provide latency traditional SANs can only dream of.
2) The "interconnect problem" with server SANs is the exact same problem that traditional SANs have...with the difference being that server SANs can largely get around scaling by switching to multicast. Scaling traditional SANs is...a thornier problem.
Arrays may never go away entirely. There will probably always be room for them as a means of bulk storage. But in the long run, server SANs are going to be hard to beat. Centralized storage was a bandaid. The best solution is always to have the data as close to the processing as is feasible.
"It all depends what 5G turns out to be."
...you're trying to debate this without having researched the proposals that are on the table? Let's be clear: 5G is about delivering up to 10Gbit per cell to far more devices than 4G could dream of, largely by using a much higher number of smaller cells scattered all over hell and back. That's why they want higher frequencies; so that you don't end up with cells overlapping in urban areas. Read up on the proposals, then we'll have a talk.
"It's possible that currently being in Bangalore where 3G services are both ultra-cheap and extremely patchy due to network overloading at peak times has coloured my viewpoint on this, but I do think that enterprise high bandwidth use cases such as VDI just won't see wireless as reliable enough for the foreseeable future and so won't be an investment driver for the technology."
What can I say except "you're painfully, overwhelmingly wrong". 3G services run on long penetration waves. They are slower than sin, and the technologies themselves are utter shit for dealing with an overload of devices. That's like comparing a mid-urban 17th century cobblestone road to a fully modern 18 lane freeway. Even if the traffic volumes are radically different, the freeway is designed so differently - off ramps, no vendors in the streets, a ban on donkeys and carts, etc - that it just moves traffic more efficiently.
5G is supposed to be the equivalent of taking a modern 4G network and cutting the cell sizes down to 1/10th, while boosting the theoretical maximum cell capacity by 10x. All with a layer of additional technologies to help prevent interference, signal degradation and ensure better handoff.
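Back-of-envelope arithmetic with purely illustrative numbers (the radii and capacities below are assumptions, not figures from any spec) shows why shrinking cells matters so much: a tenth of the radius means roughly a hundred times as many cells per unit area, and at 10x the capacity per cell that's on the order of 1000x aggregate capacity.

```python
import math

def aggregate_capacity(area_km2, cell_radius_km, per_cell_gbps):
    """Total capacity over an area: number of cells x per-cell capacity.
    Cells are approximated as circles; numbers are illustrative only."""
    cells = area_km2 / (math.pi * cell_radius_km ** 2)
    return cells * per_cell_gbps

# Hypothetical 4G macro cells: 5 km radius, 1 Gbit per cell.
lte = aggregate_capacity(100, 5.0, 1)
# Cut the radius to 1/10th, boost per-cell capacity 10x.
nextgen = aggregate_capacity(100, 0.5, 10)
ratio = nextgen / lte   # roughly 1000x over the same area
```

Same coverage area, same spectrum philosophy; the win comes almost entirely from cell density, which is exactly why higher frequencies (shorter range, less overlap) are attractive in urban deployments.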
So yeah, you're just wrong. "high bandwidth" "enterprise" workloads like VDI - which by the way, is rather low bandwidth and increasingly used by consumers, at least here in North America - actually work pretty well over 4G. They'll work far better over 5G.
I'm sorry you live in a technological armpit. I really am. But much of the rest of the world simply doesn't suffer those issues. Canada, for example, has LTE that works like a sonofabitch, and 5G will be absolutely transformative for us.
So yes, people are going to use mobiles for more than checking mail and browsing webpages, slowly. Maybe, one day, they'll install mobile that's not shite where you are too.
But "mobile that's not shit" technology already exists. You need carriers that aren't shite (ones that split cells and size them appropriately), and you need a frequency layout that allows for "not shite" (high frequencies in urban areas to allow for smaller cells, etc.)
Some of us have these. We thus don't have the sorts of problems you describe, even on modern technology. The issue isn't the tech. It's the implementation. Which means that when 5G comes around, and splits the cells even more, whilst offering far higher bandwidth per cell....
...well, that'll change the world for a lot of us.
I should point out that I have a list of uses for the kind of bandwidth - and small cell isolation - that 5G will bring. Starting with telepresence/telemedicine, but also moving into various elements of physical infrastructure automation and losing the dependence on fixed wire infrastructure, especially as portable computers and mobiles get to the point of being able to run multiple virtual machines and carry around terabytes of storage.
Hybrid VDI instances are the next gen stuff. Where the VM is run locally but change blocks are synced back, and patches/centralized updates move down in the same way. Application, content and data delivery is increasingly occurring from "the cloud", and there's a hell of a lot more to mobile network device usage than the average smartphone.
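The change-block sync idea can be sketched in a few lines (a toy model with an assumed 4 KiB block size, not any vendor's actual sync protocol): hash each block of the local VM image against the last synced version and ship only the blocks that differ.

```python
import hashlib

BLOCK = 4096  # assumed block size for this sketch

def changed_blocks(old: bytes, new: bytes):
    """Compare per-block hashes of two image versions and return only
    the blocks that changed, as (block_index, data) pairs."""
    deltas = []
    for i in range(0, max(len(old), len(new)), BLOCK):
        a, b = old[i:i + BLOCK], new[i:i + BLOCK]
        if hashlib.sha256(a).digest() != hashlib.sha256(b).digest():
            deltas.append((i // BLOCK, b))
    return deltas

disk_v1 = b"A" * BLOCK + b"B" * BLOCK
disk_v2 = b"A" * BLOCK + b"C" * BLOCK   # only the second block changed
delta = changed_blocks(disk_v1, disk_v2)  # one block to sync back
```

This is why the bandwidth ask is modest relative to full image transfer: the upstream cost scales with how much of the disk actually changed, not with how big the VM is.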
4G, even if fully realized, isn't nearly enough to meet today's demands. We're still hamstrung by these shared networks with low bandwidth. 5G just might get us to the point that we can do on mobile what we can do today on wired, but technology moves ahead, and at a rapid pace.
It's insane to think that we'll just stand still because you can only conceive of data networks as being a transmission vector for entertainment. Our whole society is changing. We're moving away from cities designed for long commutes, and even from having to be in the office at all. That will change everything about how we use data. As will the coming "instrumentation" of our everyday lives and the ongoing boom in robotics.
The age of information is not done. Not by a long shot.
So in other words "here is a vague assertion that bandwidth requirements won't grow to such a point that 5G is required, but I'm completely unwilling to back it up in any way save to assert my own lack of imagination". Wow. That's awesome.
I love how you slipped in the "5G will inevitably cost more" there too. Brilliant. Can your next brilliant riposte please be in ALL CAPS or at least ranDomly Mixed casE? You can even throw in some really bad spelling too. Then I can nominate you for CoTW. And it's only Monday!
Your failure of imagination doesn't define reality.
I can come up with quite a few services that are highly bandwidth constrained in today's world, especially via mobile. But then, I'm not limited to conceptualizing mobile as an entertainment salve for the masses. We've a ways to go before we've reached the end of history, or even of innovation in computer science.
But if you've a yen to stand up and say to us all that there is nothing left to invent to drive demand for 5G, please drop the Anonymous Coward and give us your real name. That way we can hold you to it. Until then, you're just one more voice afraid of the future, squeaking from behind the veil of anonymity, hoping to hold it back.
640K should be enough for anybody.
Two companies with an American legal attack surface. PATRIOT act. *shrug* It's nice that it's lower latency for locals, and all...but it really doesn't change the "foreign government can seize your data" equation at all, does it?
...well, except that now there are two governments able to do it....
I'll believe Bing's worth a bent tuppence the day it can search Microsoft's own web properties for things like error messages more accurately than Google can. It's outright embarrassing how hard it is to find useful information about Microsoft's own products and services from their own search engine.
I honestly don't believe you'll "save money" by moving your support to Vancouver. It's Canada's most expensive city, and getting to be on a par with San Francisco.
"Perhaps if they *STOPPED* lying to us"
No chance of that happening in our lifetimes...
Track the KB numbers. 94% of the time the reason a patch fails to install is because some other patch in the group either superseded it or stepped on files this update round that the patch-that-won't-install needed to step on.
Another 5% of the time it's because something buggered your ACLs and you need to use subinacl to reset everything.
1% of the time is a goddamned mystery.
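The triage above can be sketched as a toy lookup (the KB numbers here are hypothetical, purely for illustration): check whether the stubborn patch was superseded by something already installed before reaching for subinacl.

```python
def why_wont_install(kb, installed, superseded_by):
    """Toy triage for a patch that refuses to install: was this KB
    superseded by an update that's already on the box?"""
    for newer in superseded_by.get(kb, []):
        if newer in installed:
            return f"{kb} superseded by already-installed {newer}"
    return f"{kb}: no supersedence found; check ACLs next (subinacl)"

# Hypothetical KB numbers for the sketch:
superseded = {"KB2900000": ["KB2910000"]}
installed = {"KB2910000"}
msg = why_wont_install("KB2900000", installed, superseded)
```

In practice the supersedence map is what you build by hand when you "track the KB numbers"; the fallback branch is the ACL case.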
"Remember - Microsoft have NEVER released any product that works properly!"
Neither has anyone else. *shrug*
Everything requires patches. Microsoft make good - even great - software. They also make real stinkers. Windows itself is more the former than the latter.
Wake me when Wayland/Weston are baked, we have a FreeRDP server baked into the distro and someone has taken systemd, gnome 3 and unity out back and done the needful. Then we can really move beyond Windows.
I bow in humility. That was masterful. I regret that I have but one upvote to give.
Oh, I've read The Innovator's Dilemma. Several times. While it contains some worthwhile insights, it is absolutely not gospel, nor universally applicable. I've also read that case study, by the way, amongst a few others. I have also done a very thorough look at EMC's acquisition history, as well as looked into the individuals in positions of power and done my homework on current politics, tensions, partnerships, and power games.
EMC is not the rock of ages. They are far more brittle than most are willing to accept. And yes, I absolutely do believe in the applicability of the Jobsian quote "if you don't cannibalize yourself, someone else will."
Arrays are the past. They will be around for the next 25 years, but the peak is here. They are no longer going to drive growth. This is the result of several other industry trends that, as directly applicable to EMC, boil down to this: "resizing LUNs does not add value to the business."
The future lies in software defined infrastructure. It lies in using the computers we buy, not in configuring, managing or maintaining them. To be perfectly blunt, those in charge of EMC haven't figured this out, and they absolutely do not want to hear it.
They are invested – financially and emotionally – in the old way of doing things, and bitter holy wars are fought within that company over any suggestion that the world is changing around them. To say nothing of the intercompany firefights within the federation, or the alienation of partners.
Whatever EMC's past, they are not presently prepared to evolve. Despite this, the biggest change in the computer industry since the introduction of the personal computer is occurring around us. Reality doesn't care what you – or EMC executives – think. It will occur.
Change is always "coming". But there's a hell of a difference between "change is coming" circa 2001, where the only interesting developments of substance in the storage world for the next 10 years were flash (it goes faster than disks. OoooooooooooOOOooo.) and deduplication (EMC lost a fraction of some market share to NetApp. OooooooooOOOooo.)
It's completely different today. The changes coming are not vague, nebulous and in some distant future. They are occurring all around us right now, and there is a veritable explosion of new storage companies out there actively changing the industry. The storage industry hasn't seen this kind of innovation since centralized storage was first introduced.
That is what makes this different.
I could wave my hands and say "change is coming to the operating system market" and try to sound all doom and gloom too. Lots of people do it. The truth is, however, there's fuck all happening there beyond slow, incremental evolution. Even Docker's popularization of containers is not a paradigmatic change. It will be dealt with using existing bureaucratic processes and no major powers will be harmed by that event.
What's happening now in the storage world is totally different. Everything is in flux and even the mightiest can fall. This is the point where empires are won...or lost. It is not a vague element of "change is coming" but, instead, "real change is upon us now...adapt or die."
Screw MS; OpenStack is coming along. For the SMB space, at least, it's really worth looking at. Scale Computing does good work. PistonCloud is okay, and Metacloud is amazing.
VMware may be the best of the best...but the reality is that most businesses can do just fine using any of the rest.
The pure form of anything is typically deadly. Libertarianism is no exception. Critical thought needs to be applied to all situations. And even - dare I say it - a little bit of faith. Faith that if we structure our laws such that critical thought needs to be applied on a case-by-case basis, and that if we train courts and judges and citizens in critical thinking, that when the time comes that they are called upon to exercise it, they shall be able to do so.
Do not attempt to feed the masses by building a massive machine to strain the oceans and hand fish out to people. Teach them to fish and regulate the fishing such that no individual may take more than is reasonable for survival. Monitor fish populations to ensure that overfishing doesn't occur. Develop alternate food sources to fishing to deal with times of plight.
Do not attempt to solve douchebaggery by building a massive machine of censorship handing out prescribed thought to the masses. Teach them to think critically and regulate the circumstances under which people may be chastised or censored for exercising free speech. Ensure that education programs exist to teach critical thinking and that your legal system is able to provide trained critical thinkers during times when mob rule overwhelms critical thought.
In all things we must seek a balance, but crafting that balance starts with the ability to think for ourselves, and to make judgements free of the moral and ethical constraints others would impose upon us.
"Do you think that would be OK, or is your position that anyone should be able to say anything to anyone online and they should just suck it up because freedom of speech?"
Nope, I'm not an absolutist; but there are extant laws to deal with threats. Even restrictions on their implementation, such as "a rational person would have to believe the threat was genuine, and not some jerk talking out their arse."
Legitimately threaten to rape/kill someone and you should get slapped. The other side of that is that hyperbole such as "I am going to deorbit a series of tungsten rods on top of Microsoft licensing" is not a credible threat and should be ignored.
No matter how "grotesque" the "abuse", nobody has a right not to be offended. I am all down for finding the people who make legitimate - or concerning enough that a reasonable person would believe was legitimate - threats and punishing them.
By the same token, let's say you are Dude A from Westboro Baptist Church and spend your life saying "gays should go to hell". If I respond to every tweet you send with "your god doesn't exist" or "you're going to hell, not gays", and otherwise express a counter opinion, that isn't legally an offense.
Where it gets murky is persistence and resourcing behind the message. At some point non-threatening speech becomes coercion. But we live in a fucked up society where certain forms of coercion - namely religious indoctrination - are largely protected by law. If I used the same resources, tactics and techniques to attempt to counter the message of a given religious evangelist, I would be thrown in jail for it.
The laws and their application around freedom of speech and/or "trolling" are not applied consistently, or even rationally.
Yes, there must be limits to speech. Freedom of speech is about your right not to be suppressed by the government. It doesn't mean you get to walk into a gay nightclub and scream "fags are all going to go to hell" over and over at the top of your lungs.
But there's the key: Freedom of speech is about your right not to be suppressed by the government.
Twitter, as a private entity, can implement any censorship on its service that it likes...but the government should not. It should not step in unless there is an actual threat made.
Then the free market will decide which platforms are best: those that suppress "unwanted" speech of various types, those that are completely open, or those that deal with the issues on a case-by-case basis.
Personally, I believe that if you are going to make censorship of any kind mandatory - at the level of a private entity censoring their physical or virtual establishment, or at the level of a government making laws - then you absolutely must make that censorship apply to all individuals and topics equally.
If it is horrible to troll someone in the name of chauvinism, it should be equally horrible to troll someone in the name of feminism. "I am going to rape you" should be treated the same as threats to cut off someone's penis. Telling a woman that she should get back into the kitchen should be treated the same as telling a man he shouldn't be allowed to have the job he has, or attend the school he attends.
If you are going to censor people for championing critical thought and telling people not to believe in religion, you should also censor people who evangelize religion and attempt to convert others.
But the law is not applied equally. Nor is it written in a manner that allows equal application. Therein lies the problem, and therein lies the curse of our generation: an apathy that enables those with an agenda to shape the behavior, culture and legally acceptable thought processes of the generations to come.
So I don't find this debate so simple. And I think it is wrapped up in what we want our culture to be. One of morality predefined by tradition, conservatism and a shaping of the very thoughts that will keep established groups in power? Or are we to make a nation of critical thinkers who will learn, and judge and explore for themselves?
Political correctness has a dark side. Be wary of it.
"I’m more saying that if you say racist things, people will conclude you’re a racist and thus also at best an irksome buffoon, and the fact that you yourself didn’t see anything wrong with what you said isn’t really much of a defence."
And that's fine, so long as "you say something stupid and then people ostracize you" is where it ends. But here we're talking about "you say something stupid and then you go to jail." That's what's being proposed by UK.gov.
Laws against trolling amount to a combination of an attempt at a "right to not be offended" and "carte blanche to suppress dissent".
The concept that you could say anything you want and not have people think you're a dick is the other extreme of "freedom of speech", frequently referred to as "the right not to be criticized". Neither the right not to be offended nor the right not to be criticized exists, and both are equally idiotic concepts. Any attempt to seal either in law needs to be resisted.
"You can be racist without meaning to; you can give offence when you think you're only having a laugh."
And people who think that there should exist a "right to not be offended" will be the downfall of our civilization.
People say things you don't like. For $deity's sake, man, sack up.