* Posts by Trevor_Pott

5212 posts • joined 31 May 2010

Yes, you heard me – the storage infrastructure WARS are over

Trevor_Pott
Gold badge

Re: Compass points

By "array" I typically mean SAN and NAS. DAS stuff is usually not called an array. It's just called DAS. (Or JBOD). It's a separate thing. It's usually many more disks than you'll find in a server SAN, but it's not shared across multiple systems...or at least not enough systems to make more than a two or three node cluster.

DAS is a very Microsoft thing, at least where virtualization is involved. I know it's still a thing for those few running workloads on metal, but only Microsoft really thinks it's remotely viable for hosts with hypervisors on them. But, hey, it's Microsoft. Being trapped 15 years in the past is quite an advancement for them.

1
0
Trevor_Pott
Gold badge

Re: Compass points

Hey, sorry about that. Maybe this story will help some?

1
0
Trevor_Pott
Gold badge

I...I don't think you understand how these work? Especially how they're installed in enterprises.

As a general rule, enterprises used to buy hardware dedicated to a specific project or workload. Each DB had its own SAN, its own servers, etc. But then we found that this was spectacularly inefficient and led to massive underutilisation of resources. Private clouds - or at least virtualisation setups that were close-ish - began to become the order of the day.

Resources began to be purchased and pooled based on cumulative predicted need, not based on the individual project or workload. Now the question has become "how best to maintain these sorts of environments."

Something like a Nutanix or VSAN cluster rarely goes beyond 16 nodes, sometimes to 32. You get multiple clusters in a virtual datacenter. You are highly unlikely to have nodes in the cluster that are different speeds/capabilities because clusters tend to live and die as a group. We've seen that even in non-VSAN clusters thus far. Clusters are born, they live and they die as one.

But in the rare instance where clusters are mixed - I run a mixed cluster myself - sysadmins can simply tell workloads to keep the copies on "like" nodes. If you have PCI-E storage on nodes A-D and only SAS storage on nodes E-H, then you can "segregate" the cluster into two.

In theory, you could end up with a workload split along the storage plane, but only if you'd lost enough of one type of node that rebuilding would cause it to put the second copy on the other class of node. As soon as you've repaired the servers in question, policies would take over and make sure your workloads go where they are supposed to.
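If it helps to picture it, the placement rule amounts to something like this - a minimal sketch in Python, with node names and storage classes invented for illustration. The real products (VSAN, Nutanix) express this through storage policies and host groups, not code:

    # Hypothetical sketch of class-based replica placement. Nodes A-D
    # carry PCI-E flash, E-H carry SAS, per the example above.
    NODES = {
        "A": "pcie", "B": "pcie", "C": "pcie", "D": "pcie",
        "E": "sas", "F": "sas", "G": "sas", "H": "sas",
    }

    def replica_candidates(primary: str) -> list[str]:
        """Copies may only land on nodes of the same storage class."""
        storage_class = NODES[primary]
        return [n for n, c in NODES.items() if c == storage_class and n != primary]

    print(replica_candidates("A"))  # ['B', 'C', 'D'] - PCI-E nodes only

Only when enough same-class nodes are lost does a rebuild spill a copy onto the other class, and policy pulls it back once the nodes are repaired.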

If your assertion is somehow that server SANs are unable to support SQL or OLTP workloads, well...you're just wrong. You're wronger than wrong.

Believe it or not, server SANs have been around long enough to evolve to handle the concept of diversity in workloads...and to handle workloads that are as demanding as anything you could throw at a traditional SAN. Indeed, I'd challenge a traditional SAN to keep up with the all-flash server SANs. The MCS setups in particular are utterly spectacular.

0
0
Trevor_Pott
Gold badge

Re: Storage or data governance

Exactly why data governance is the new hotness, and new ways to get disks into the datacenter are not. :)

0
0
Trevor_Pott
Gold badge

Did you know that the closest land-based relative to cetaceans (the group that includes whales) is the hippopotamus?

1
0
Trevor_Pott
Gold badge

"Why claim victory for "server SAN" instead of the broader category, except as a marketing move?"

A) "Server SAN" was coined by Stuart Miniman. An analyst, not a marketing bloke, because we needed something to call "lashing together storage from multiple servers and present it to the cluster" that was shorter than "lashing together storage from multiple servers and present it to the cluster"

B) Because not all "clusters of storage lashed together" are the same. Object storage, for example, is going its own way, despite being something we could reasonably call a "server SAN" as a technicality. The big money isn't in object storage. You can't charge the big margins for it. It's things like VSAN, Nutanix, etc that are winning out and going to form the "default" for enterprise workloads.

C) I calls it like I sees it. Arrays, clusters of arrays and "lashed together clusters of storage that don't run workloads on the same nodes as the storage" are simply not winning out over more modern "hyperconverged" (and boy, do I loathe that term) setups.

And it's enterprise workloads that matter, mate. They're where the money is. They're where you get the margin.

1
0
Trevor_Pott
Gold badge

Re: Half hour boot times

5-10 minutes is a long way from a half hour. Also, for the record: you don't tend to see such long boot times in server SAN boxes, because you don't need RAID cards so complicated that they have to load an OS from the future when they init.

Of course, you could also be seeing extended boot times because you're using 1.5TB of RAM and doing extended memory tests on each boot. Dell in particular seems to like really indulgent mem tests.

0
0
Trevor_Pott
Gold badge

Compute workload om? On you miserable, non-tactile touchscreen craptasm! Why isn't there an edit button in the mobile UI?

:(

0
0
Trevor_Pott
Gold badge

Re: Mainframe zombies?

The mainframe is ultimately what everyone wants, but IBM refuses to price it reasonably.

2
0
Trevor_Pott
Gold badge

Server SANs are hyperconverged. Lash together storage from multiple servers, present it to the cluster as shared storage. Or, more to the point, hyperconvergence is one of the possible means by which a server SAN can manifest.

Server SANs can be done without running a compute workload on the same node. Then it's not hyperconverged. Run a compute workload om the node and it is hyperconverged.

Marketing terminology. It are the dumbs.

0
0

Microsoft shareholders approve of CEO Satya Nadella's MASSIVE PACKAGE

Trevor_Pott
Gold badge

Re: Future with Nadella

"expanded on his cloud strategy to get Microsoft apps on every internet-equipped device on the planet."

"Talk about making pigs fly. Does anyone in their right mind even remotely think that this is doable?"

I do.

"Does you leccy meter that sends your reading to the mothership need a Word Interface? Does it even need to run Windows of any shape or form? Does it need an SQLServer Client or even client access to CRM, BizTalk or any other MS product?"

No, but it probably could report its data back to Azure, or run a little micro OS developed by Microsoft Research (they have a couple that are efficient, small, and designed for embedded devices). I think one of them even runs in less than 2MB of RAM.

"My current smart meter does not. I know that it runs Linux."

Congrats?

"Does he think that every Router on the planet is going to have some MS software in it?"

Why couldn't it?

"The list of area where MS will fail is very long."

I can say this about (almost) every OS or software company, if we're being honest.

"Perhaps he should become a Politician. They always promise stuff that is clearly impossible to deliver (if you have half a working brain that is) but they make great sound-bites don't they?"

He's the CEO of one of the most important corporations on the planet. He is a politician, now.

0
0

Docker part 4: Microsoft CAN'T ignore it. Aux armes, citoyens!

Trevor_Pott
Gold badge

Re: What about illuminos Zones

Don't forget OpenIndiana. Or BSD jails. Or Virtuozzo. Or OpenVZ. Or Rocket. Or...

Just because you have a containerization tech doesn't mean you have momentum, hype, a community, backing, industry support, an "app store", community input to that "app store", cloud provider adoption, etc. etc. etc.

Docker does.

That makes Docker quantitatively as well as qualitatively different from any of the other containerization techs that have gone before. Technology doesn't matter here nearly so much as politics, damned politics and "moolah, moolah, moolah, moo-la-haaaaaaaaaaa..."

Parallels has virty tech; they aren't a threat to VMware. VirtualBox is groovy; nobody runs a large datacenter on it. Solaris/OpenIndiana/illumos zones are awesome and even have enterprise support...but there isn't a heck of a lot of cloud provider adoption, hype or community support.

That isn't to say it couldn't happen. It's just that these are projects by engineers, for engineers. And that means they likely won't succeed where the marketdroids and moneymen walk.

5
0

Australian Government funds effort to secure wearable data pulses

Trevor_Pott
Gold badge

So...anyone with a genetic predisposition towards any of a number of different diseases will pay more to live? Anyone who is poorer (and thus can only afford bulk processed foods that aren't good for us) will pay more to live? Anyone who has had an accident (I can give you a list of people who have had spectacularly costly - read: millions of dollars and counting - medical issues due to being hit by a drunk driver) will pay more to live?

You can't control your genetics. Poverty is a vicious circle that very, very few escape from. Lots of things can happen to you where you have absolutely zero control or ability to prevent them.

And we're going to place people at a disadvantage for this? There is a thin line here between medical insurance tracking bracelets and eugenics. It's maybe a slower process to use the bracelets, but the end result is the same: pushing those who are "impure" into a position of significant disadvantage such that they will eventually just die off.

1
0

EU cyber-cop: Dark-net crooks think they're beyond reach (until now)

Trevor_Pott
Gold badge

Re: This is someone

Where did I say that marijuana was safe for everyone? Hmm? No chemical is. And yet, that doesn't mean you lock it up and away from everyone. It means you put your time and effort into education. Into making sure you can tailor drugs to the individual, etc.

People with peanut allergies know to stay away from peanuts. People with certain genetics should know to stay away from marijuana. People on the schizophrenia spectrum need to know to stay away from amphetamines. People on the autism spectrum need to stay away from antipsychotics.

We are all different, and it is up to the individual to know their own selves; what they can tolerate, and what they can't. We can test for this stuff now. It's not that hard. I mean, hell, you need to get a blood pressure test before getting birth control pills, so why the hell can't we mandate a genetics test before someone is cleared for marijuana/amphetamines/etc.?

There are a huge number of drugs withheld from the market that could do real good in the world. These are drugs that could change the quality of life for tens of millions of people, but are held back because less than 1% of people would experience serious negative side effects.

This is stupid, wasteful and harmful - obviously so to anyone with an IQ bigger than their shoe size. Those drugs can not only improve life for people, in many cases they can save lives.

But no, they're locked away from everyone because of shortsighted fearmongers who are terrified that they will get into the hands of the less than 1% of people they would truly harm. Any attempts to work out a middle ground - for example, make those drugs prescription only, put money into developing commoditized tests to ensure that taking a given drug is okay, etc - are stamped out in furious anger by the fearmongers.

Troglodytes. Troglodytes that care nothing for the suffering of others so long as it allows them to impose their narrow, limited worldview on everyone else. I hope each and every one of them suffers greatly from a perfectly preventable illness before dying a miserable, lingering, horribly painful death. One that a drug withheld because of fear of how it would affect less than 1% of people could have mitigated or cured.

0
0

Japanese monster manifests new PETAFLOP POWER

Trevor_Pott
Gold badge

There is a corrections link at the bottom of the article, in the left column. Thank you.

0
0

The magic storage formula for successful VDI? Just add SSDs

Trevor_Pott
Gold badge

Re: God, where did the 1990s come from in here?

You mean the end user experience has been awful because the install you're working with, specifically, cut a shitload of corners and/or you don't know what you're doing.

That's a hell of a different thing from generalizing to "this is how it is for everyone."

0
0

Hacker dodges FOUR HUNDRED YEARS in cooler for SCANNING sites

Trevor_Pott
Gold badge

Re: Too subtle for me.

"It can be, and has been, construed that the fact that someone is breaking into your house constitutes a de facto threat to life and limb. I thoroughly agree with you in that possessions are not worth lives, though I suspect we disagree on the point at which one reasonably might be expected or allowed to use lethal force."

And this is the difference between the American - read: batshit fucking bananas - view of the world, and the Canadian - read: proportional response to events - approach to life.

Americans are xenophobic by nature. Terrified of everything and everyone. They believe that everything and everyone are out to get them, all the time. They believe they are special, important, what they own is important, that everyone wants to harm them, specifically, or just wants to do harm to everyone indiscriminately.

It is very rare to find a Canadian with that twisted worldview. Oh, yes, crime does happen, but there's usually a damned good reason for it. We're taught statistics in school. Repeatedly, over the course of about 8 years.

We understand that while, for example, there are people out there who will kill indiscriminately because they're unhinged, the chances of that are roughly equal to getting hit by lightning twice and then walking out into the street and getting shit on by a bird. Especially because we have the beginnings of a competent mental health care system. (With, admittedly, some lamentable gaps.)

Yes, some dude might be breaking into your house, but the chances that he is going to harm you or wants to harm you are virtually nonexistent. And they are not legally grounds for you to attack him.

0
0

Cryptocurrency cruncher cranks prime number constellation

Trevor_Pott
Gold badge

Re: seti @ home

http://www.p3international.com/products/p4400.html

Explain to me exactly how the NSA is changing the readout on this in real time to match their usage?

0
0

Docker: Sorry, you're just going to have to learn about it. Today we begin

Trevor_Pott
Gold badge

Re: MainFrame

Capitalism doesn't work so much as "devolves". But that's a discussion for another time...

2
0

systemd row ends with Debian getting forked

Trevor_Pott
Gold badge

Re: If systemd is so bad...

Who the metric fuck has the resources to fork an app that's a half million lines of code? What about once it's a million? Two? Ten?

Past a certain point, the fact that it's GPL means nothing.

You are basically making the argument that the American people have power over their government because the second amendment gives them the ability to carry around AK-47s. Ignoring completely the part where the government has everything from tanks to helicopters to drones, and there isn't a hope in hell of "the people" rising up against the government without being beaten back down as domestic terrorists.

The same basic principles apply to open source. Past a certain point, the complexity, integration and sheer size of a project make it functionally impossible to fork, unless you can convince the overwhelming majority of the original developers to move to the fork. (LibreOffice, MariaDB, etc.)

SystemD is a fucking cancer. The goal behind its ongoing metastasization is nothing more than control of the Linux ecosystem in its entirety. You can lay down any excuses you want, attempt to dismiss any criticism with a half truth or a technicality, but you can't change the fact that the thing has been rammed up our asses against the express wishes of a huge chunk of the community and with the developers openly and proudly refusing to engage with the community about any aspect of either design or implementation.

It's the same arrogant egofuckery that the Gnome team displayed, and it has no fucking place as a hydra-like unkillable monster at the core of open source upon which everything is expected to hang.

There is room for exactly one Linus Torvalds in the Linux ecosystem, and he's right where he belongs: keeping the kernel sane.

We don't need a second goddamned kernel running in userland.

2
1
Trevor_Pott
Gold badge

Re: What is systemd

" It is still possible for systemd to be a welcome part of Linux, if the project can listen to the users, respond and change accordingly. Are they capable of that ?"

Emphatically and overwhelmingly, no. Which is - above and beyond all other reasons - why so many don't want this shitpile of a viral fuckup on their systems.

It might have been different, had the people in charge of systemd/gnome not been in charge of these projects.

7
1
Trevor_Pott
Gold badge

Re: Debian R.I.P. Best alternative?

"If this is the beginning of the end for Debian and Devuan doesn't get enough traction to be a viable alternative what is the consensus on the best alternative?"

Prayer.

Slackware and Gentoo are both probably going to succumb in the next year or two. The BSDs are moving to launchd, which shares many similar issues with systemd.

If Devuan fails, we're pretty fucked.

4
1
Trevor_Pott
Gold badge

Re: If systemd is so bad...

"If systemd is so bad... ...then why are so many distros using it?"

It's called blackmail.

RedHat are behind the whole thing. They spend the money that makes a lot of critical pieces of your average Linux distribution work. Now those things won't work without systemd and/or getting them to work without systemd is a right bitch/there are roadmaps to make them not work without systemd in short order.

The short version of this whole thing is that Poettering - and with him, RedHat - are trying to take the kernel away from Linus Torvalds. They are doing so by creating another kernel in userland that everything depends on. Once they have enough stuff jacked into Poettering's matrix, they'll use it to leverage Torvalds out of the picture and finally take the whole cake for themselves.

Systemd is nothing more than a cynical play for domination and control of the entire Linux ecosystem. To "own the stack" of a modern distro. And since RedHat has managed to co-opt so many core projects, there is precious little to stop them.

"Linux" as we think of it today is on life support. Android/Linux and systemd/Linux are now looking to be the two dominant entities. Traditional Linux - one that adheres to the Unix philosophy - is all but dead. Hopefully, Devuan can save it.

15
2
Trevor_Pott
Gold badge

Re: What is systemd

"IF the leaders of GNOME & systemd were more community minded, this would have never happened. Unfortunately, it's hard to see how this will resolve itself, it would require adult behavior & conciliation, both of which are sadly lacking."

If the leaders of Gnome and systemd were community minded we would have ended up with a rational compromise: something that was a proper (and badly needed) replacement for SysV init, but didn't have binary logs, a registry or tentacle dependencies, and wasn't trying to become a userland kernel.

It would have been a Good Thing, and we could all move forward happy, and in harmony. Unfortunately, Poettering is an ass and he leads a small nation of other asses on their grand quest to "pacify" the fuzzy wuzzies.

18
1
Trevor_Pott
Gold badge

Re: What is systemd

"The Devuan developers are going to have to address the resource issues as noted above. Good luck to them though if they succeed and give some people what they want. "

I've donated. Money where my mouth is, etc. I can only hope the rest of the community follows. Those of us who can't code still have our part to play in keeping Linux free/libre.

10
0

Docker, Part 2: Whoa! Spontaneous industry standard! How did they do THAT?

Trevor_Pott
Gold badge

Re: Change of heart?

...

What?

Where/how did you pull from anything above that the application or guest OS should be the one moving things around? Whaaaaa?

0
0
Trevor_Pott
Gold badge

Re: Under-played, or future reading?

"Watch this space". ;)

If you've questions after the fourth article is out, I'll be glad to fill in the gaps. Cheers!

1
1
Trevor_Pott
Gold badge

Re: Change of heart?

There are two more installments in this series.

But, if I may, please understand that I am aware that there is more to our industry than "Trevor Pott and people who think like Trevor Pott." I absolutely won't be using Docker until it has things like FT, HA and vMotion...but I am largely a keeper of "legacy" workloads. Traditional applications; not the sort that are optimized for, for example, cloud computing.

Those with money - startups with VC funding, governments and enterprises - absolutely are rewriting extant applications to take advantage of "cloud" technologies. These recodes translate almost directly to being "good for Docker". Also, a huge chunk of all new application development follows that model.

In the old world of the sorts of "legacy" applications that I herd - back when applications were applications, not "apps" - you would have a few components to worry about: file storage, database, the application itself and the client. Eventually "the application itself" and "the client" became more or less one thing as things went to web-based applications. But we still had these three things that absolutely had to be up 100% of the time. Both scale up and scale out were (and remain, for us legacy herders) A Great Big Bitch Of A Problem.

"Redundancy" comes from VMs, or from NonStop servers. The database isn't allowed to go down. Clusters only work if your database app supports it, and you usually have to convince the vendor (who may not even exist anymore) to recode some chunk of the app/database. If the devs that are left even know how to do that!

Modern "apps" are totally different. They're built from the start to be able to scale out and up. It can collapse down to one core copy of the DB/Files/App or scale out to thousands.

In a modern app, you only need to keep the master copy safe. Everything else can spawn some unlimited number of copies as needed.

In Theory. Truth is, doing so in practice is still A Bitch, but it's not quite A Great Big Bitch.

Then along comes Docker. Docker makes the "scale out" part of the modern apps thing Creepy Meerkat Simples. That's grand. So if you want to run Netflix-class infrastructure, you can basically put your core stuff on a NonStop server, then spawn unlimited Docker instances out on a bunch of cheap metal. AMD SeaMicro, HP Moonshot or Supermicro MicroCloud, anyone?
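For the curious, "spawn unlimited Docker instances" really is about that simple. A minimal sketch using the docker-py library (pip install docker); the image name and replica count are placeholders, and a real shop would put an orchestrator in front of this loop:

    # Spawn N identical stateless workers from one image.
    import docker

    client = docker.from_env()

    def scale_out(image: str, replicas: int):
        return [
            client.containers.run(image, detach=True, name=f"worker-{i}")
            for i in range(replicas)
        ]

    workers = scale_out("nginx:latest", replicas=10)
    print(f"running {len(workers)} containers")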

Voila: a use case for the next generation, albeit not one that I will myself be using any time soon. And - just by the by - it's a use case worth hundreds of billions of dollars. Mind you, so is keeping those legacy workloads running.

Our industry is diverse. And Docker has added to the choices before us. It is only one tool in the toolbox. It is kinda odd and non-standard to us old fogies...

...but I promise you, it will be the #2 Phillips screwdriver of the generation that's just cutting their teeth today.

Edit: let me add that the above should read that containerization will be a multi-billion-dollar industry. Whether or not Docker, specifically, wins this war is as yet undetermined. That said, containerization's time has come.

10
1

Beyond the genome: YOU'VE BEEN DECODED, again

Trevor_Pott
Gold badge

Re: @Trevor

@J3, wipe them out. All of them. Viral warfare. Fire. Destruction of the planet. I don't care what it takes!

0
0

Part 3: Docker vs hypervisor in tech tussle SMACKDOWN

Trevor_Pott
Gold badge

Re: Not disruptive?

"Most of the SMBs I know aren't running VMs even now, they run their applications on a miscellany of elderly hardware held together by good fortune and the occasional visit of the part-time finance director's second cousin."

Then you don't know SMBs, period. "Small to medium business" covers 1 to 1000 seats, generally. With enterprise being above 1000 seats. (Depending on which government is doing the counting.) The bulk of those companies are in the 50-250 seat range, and as an SMB sysadmin by trade, I promise you they've been virtualised for some time now.

"And as soon as containerisation makes it properly to Windows, those people will be taking it up in their droves, because there's no point running multiple OSs if you don't have to - even just from a licensing point of view."

Wrong again. Ignoring the rest of your prejudiced (and false) remarks, you don't understand at all why most companies use VMs. It is to obtain the benefits of redundancy, reliability and manageability (including snapshots, backups, replication, live workload migration during maintenance, etc) that hypervisors provide. Containers, at the moment, don't provide that.

SMBs want far more than just the ability to run the maximum number of workloads on a given piece of hardware. They want those workloads to be bulletproof. They need them to be something that can be moved around while still in use because there aren't any "maintenance windows" anymore. There's always someone remotely accessing something. That's just life today. Hell, that was life 5 years ago. It's like you have a picture of SMBs stuck in a time warp from 1999 and you imagine that they've never evolved.

"Containers aren't just a packaging technology - they depend on the provision of resource management and scheduling in the OS that are equivalent to those provided by a current hypervisor."

Everything depends on "the provision of resource management and scheduling in the OS that are equivalent to those provided by a current hypervisor", whether running on metal in its own OS, in a container, or in a hypervisor. I don't understand how this precludes containers from being "just a packaging technology".

"And while Docker may have a little way to go (but I think 30 months rather than 30 years will see some big changes), I think you'd have a hard time persuading the people on non-x86 hardware that their WPARs and Zones are manifestly harder to work with than VM solutions."

No, I wouldn't. Because you are completely ignoring the desired outcome portion of the equation. Containers provide what companies desire when the hardware underneath the container provides the required elements of high availability, workload migration and continuous uptime during maintenance. Run containers on an HP NonStop server or an IBM mainframe and you get all the bits you want while getting the extra efficiency of containers.

But, shockingly enough, most businesses don't have the money to spend $virgins on mainframes or NonStop servers. So they use hypervisors to lash together commodity hardware into what amounts to a virtual, distributed mainframe. They then package up their applications in their own OSes and move them about.

Are containers relatively easy to deploy and somewhat easy to manage? Sure. I'll even go so far as to say they're way easier to deploy than VMs are, but I will remain adamant that VMs are currently easier to manage. What you're missing, however, is that hypervisors democratize all the other things - portability, heterogeneity, high availability and so forth - that are requirements of modern IT. Containers don't provide mechanisms for that, unless you burn down your existing code bases and completely redesign.

"Even IBM praises the benefits of WPARs (containers) over LPARs (hypervisor) in the majority of use cases, even though it supports both and the latter has rather more hardware support than the typical x86 VM. I can't really improve on their reasoning:"

Of course IBM is touting WPARs over LPARs. They sell the pantsing mainframes that make containers a viable technology. And they only ask the firstborn of your entire ethnic group in order to afford it!

"Better resource utilisation (one system image)"

Nobody is debating this one. Containers are more efficient.

"Easier to maintain (one OS to patch)"

One OS reboot takes down 1000s of containers. Also, you get the lovely issue of having to deal with workloads that may react badly to a given patch being mixed in with workloads that might need a given patch, all running on the same OS instance. Funny, container evangelists never talk about that one...

"Easier to archive (one system image)"

Oh, please. We're not using Norton Ghost here. Ever since Veeam came along nobody in their right mind has had trouble doing backups, DR or archives of VMs.

"Better granularity of resource management (CPU, RAM, I/O)"

That depends entirely on how shit your hypervisor is. Funnily enough, VMware seems to be quite good at providing granularity of resource management.

So, of IBM's four-point path to victory, the only thing that really shines as rationale is "efficiency". And it carefully sidesteps some pretty significant issues ranging from price (we can't all afford mainframes or NonStop servers) to manageability (1000 workloads sharing a single OS can actually be less desirable than, say, 10 OSes, each with 100 workloads.)

1
0
Trevor_Pott
Gold badge

Re: Fewer OS instances

@dan1980 except you leave out the part of the Office 2007/2010 licensing that pertains to remote/VDI usage whilst attempting to sing the praises of Microsoft's licensing of that product. Though you at least acknowledge the media-based madness of the time.

Microsoft doesn't do "rational".

1
0
Trevor_Pott
Gold badge

Re: Fewer OS instances

When, in the history of our industry, has Microsoft licensing been connected to rational thought?

5
0
Trevor_Pott
Gold badge

Re: Fewer OS instances

Let's have that conversation after they've finished integrating Docker into the OS and decided on licensing, eh? I've heard so many different things out of Redmond that I can't put credence to any of it, tbh.

2
0
Trevor_Pott
Gold badge

And yet, in practice, for the workloads I run, I see only 10% improvement. I wholeheartedly believe that for certain workloads, you can get a 10x improvement. Hell, why not. Do you know what I can do with benchmarking tools, when motivated?

But the question isn't about the headline improvements. It's about the average improvements for everyday workloads, for average companies.

Also, along those lines, I absolutely do not buy the latency claims you state. I run a fairly significant lab full of stuff and I have been testing virtualisation and storage configurations for about 4 months solid, 8 hours a day. Every configuration I can get my hands on. From various arrays to hyperconverged solutions to local setups. I've run the same workloads on metal. I've used SATA, SAS, NVMe, PCI-E and am getting MCS stuff in here sometime in the next couple of weeks.

The long story short? Take the ****ing workload off the network and you get your latency back. And there are plenty of ways to go about doing that today. I suspect you'd be shocked at the kinds of performance I can eke out of my server SANs, to say nothing of the kinds of performance I get when using technologies like Atlantis USX, Proximal Data or PernixData!
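A crude way to check the latency claim for yourself; a minimal sketch assuming /mnt/local is a local SSD and /mnt/nas is a network share (both paths are hypothetical). Synchronous, fsync'd 4K writes make the network's contribution to latency hard to hide:

    import os
    import time

    def write_latency_ms(path: str, iterations: int = 100) -> float:
        """Average latency of a synchronous, fsync'd 4K write to the given file."""
        buf = os.urandom(4096)
        fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
        start = time.perf_counter()
        for _ in range(iterations):
            os.write(fd, buf)
            os.fsync(fd)
        os.close(fd)
        return (time.perf_counter() - start) / iterations * 1000

    for target in ("/mnt/local/probe.bin", "/mnt/nas/probe.bin"):
        print(target, f"{write_latency_ms(target):.2f} ms per write")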

I respect that you have found a way to use containers to great effect, sir. Yet I find I must humbly submit that your use case of them may well be abnormal when we consider the diversity and scope of workloads run by companies today.

I think it is fairer to say that under the right circumstances, containers can deliver manyfold increases in density and perhaps even performance; however, they are not likely to deliver this for all - or even most - workloads today. Containers are something you have to build for, just like the public cloud. With the advantage that many public cloud workloads can be migrated to containers with relative ease.

2
0
Trevor_Pott
Gold badge

Re: Fewer OS instances

We still don't know how licensing will shake out. I suspect every Docker container will require an OS license from Microsoft. Otherwise, I agree with you 100% here, sir.

1
0
Trevor_Pott
Gold badge

Re: Not disruptive?

Some problems with your viewpoint:

1) Not everyone values efficiency over ease of use or capability.

2) You give up a lot of ease of use and capability in order to get the efficiency gains of containers.

3) Legacy workloads will take decades to go away.

4) Many/most legacy workloads, as well as a significant portion of new workloads (for at least the next decade of coding) will not have application-based redundancy or high-availability. They'll rely on a hypervisor to provide it.

What you say only makes sense if you assume everyone is going to move to public-cloud style scale-out workloads. We're not all Facebook/Google/Netflix. Industry-specific applications don't make that jump well. Ancient point-of-sale apps, CRMs, ERPs, LOBs, OLTPs, etc won't make that jump...and migration is crazy expensive.

That's if you can convince a company that is absolutely dependent on a 30 year old POS application whose every quirk they know by heart that they should ditch it and migrate to a new one. Because...Docker?

There are 17,000 enterprises in the world. Maybe a few hundred thousand government agencies that could be considered as large as one. There are over a billion SMBs in the world, and they're not going "web scale" with their applications any time soon.

Hypervisors displaced metal because they offered immediate benefit without being too disruptive: you didn't have to recode applications for them, you didn't have to really make huge changes of any kind. As infrastructure got denser, datacenter designs changed, but that was dealt with as part of the regular refresh cycle.

Docker, like public cloud PaaS scale-out apps, requires burning down what you have and restarting. Maybe one day, 30 years from now, containers will have displaced hypervisors. If so, I will bet my bottom dollar that the containers of 30 years from now look a hell of a lot more like a hybrid between the containers of today and the hypervisors of today than a straight-up continuation of the current container design.

3
0
Trevor_Pott
Gold badge

Good questions. I will do my best to answer.

"Why should containers do this, surely that is the job of the hypervisor?"

Containers shouldn't do this. They will likely try anyways, as a great many of those who are invested in containers and adjacent technologies want to see them replace hypervisors.

The argument goes that containers are so much more efficient than hypervisors that we should do away with hypervisors altogether and use containers for everything. Based on that, we'll either have to throw all our workloads away and recode everything (not bloody likely, especially since there are any number of workloads that can't be "coded for redundancy" in the container/cloud sense) or containers will have to evolve to add these technologies.

As it stands now, I know a number of startups, still in stealth, working to bring hypervisor-like technologies (HA, vMotion, etc) to containers. We are at the beginning of mass-market container adoption, not the end. Thus the technologies the mass market enjoys for its current workloads will have to be recreated on the new platform, just as they have been for every technology iteration before this.

Does this mean it's rational or reasonable to do so? I argue no. If I run 4 VMs on a node and each node contains 100 Docker workloads, I am giving up a small amount of overhead in order to virtualise those 4 Docker OSes, but gaining redundancy for them in exchange. Meanwhile, I can load up that server to the gills and keep all its resources pinned without "wasting" much, because I'm using containers.

To me, it makes perfect sense to have a few "fat" VMs full of containers. Resilient infrastructure, high utilization. But this is considered outright heresy by many of the faithful, as - to them at least - containers are about grinding out every last fraction of a percentage point of efficiency from their hardware.
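To put rough numbers on the "fat VM" trade-off, a back-of-envelope sketch; every figure in it is an assumption for illustration, not a measurement:

    # Assumed figures: a 512GB node, ~3% hypervisor overhead, 2GB of
    # RAM per guest OS, 4 VMs each packed with 25 containers.
    node_ram_gb = 512
    hypervisor_tax = 0.03
    vms_per_node = 4
    vm_os_tax_gb = 2
    containers_per_vm = 25

    usable_gb = node_ram_gb * (1 - hypervisor_tax) - vms_per_node * vm_os_tax_gb
    workloads = vms_per_node * containers_per_vm
    print(f"{workloads} containers share {usable_gb:.0f} GB,"
          f" and each VM still gets HA/vMotion from the hypervisor")

On those assumptions you hand roughly 23GB of the node to virtualisation and keep hypervisor-grade redundancy for all 100 workloads; the purists' objection is to exactly that 23GB.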

"Is it the cost of requiring both that keeps you from using containers 'till the above condition is met?"

No, it is the pointlessness of using both that keeps me from using containers. Right now, today, most of my workloads are legacy workloads. They don't convert into these new-age "apps" that "scale out", as per Docker/public cloud style setups. If I want redundancy, I need a hypervisor underneath.

So, I could do what I talked about above and put my workloads in containers and then put the containers in a hypervisor. That would increase the efficiency of about 50% of my workloads, ultimately dropping the need for an average of two physical servers per rack where a rack contains about 20 servers.

That's a not insignificant savings, so why not jump all over this?

1) Even the workloads that can be mooshed into containers will take retooling to get there. It is rational and logical to move them into containers, but that is a migration process akin to going XP --> Windows 10. It takes time, and is best done along the knife's edge of required updates or major changes, rather than as a make-work project on its own.

2) If I start using containers I need to teach my virtualisation team how to use these containers. That's more than just class time, it takes some hands on time in the lab and the chance to screw it up some. That is scheduled, but I'm not going to adopt anything in production until I know that I can be hit by a bus and the rest of the team can carry on without me.

3) Politics. Part 4 of this series will talk about the politics of Docker. Not to give anything away, but...the politics of containerization is far from settled. I don't want to be the guy who builds an entire infrastructure on the equivalent of "Microsoft Virtual Server 2005" only to have all that effort made worthless a year or two later. Been there, done that.

4) 2 servers out of 20 isn't world-changing savings for me. Oh, that's Big Money if you're Rackspace, but at the scale of an SMB that only has a few racks worth of gear, there's an argument to be made for just eating the extra hardware cost in order to defer additional complexity for a year or two.

Really, in my eyes, it's Docker versus the cloud...sometimes in the cloud.

If I was building a brand new website today, I would have a really long think about Docker. Do I use Docker, Azure, AWS, Google or someone like Apprenda?

The choices are likely to be informed not by the technical differentiators between these platforms, but by business realities ranging from Freedom of Information and Privacy regulations and marketing success around Data Sovereignty to the cost and availability of managed workloads.

Do I run my workload in one of the NSA's playground clouds, pick something regional, or light it up on my own infrastructure? Is the particular set of applications I am looking at deploying into my Docker containers available from a source I trust, and likely to be updated regularly, so that I can just task developers to that application and not have to attach an ops guy?

New applications make good sense to deploy into Docker containers. And Docker containers in the hands of a good cloud provider will have a nice little "app store" of applications to choose from.

But if I am lighting them up in the public cloud, do I really care if it's in a Docker container? Those cloud providers have stores of stuff for me to pick from in VM form as well. And I don't care if what I am running is "more efficient" when running on someone else's hardware; that's their problem, not mine.

I'm not against using Docker in the public cloud, but I see no incentive to choose it over more "traditional" Platform as a Service offerings either. If for whatever reason we decide the public cloud is the way to go, I'll probably just leave the decision "Docker/no Docker" up to the developers. The ops guys won't really have to be involved, so it's kinda their preference. I really don't care overmuch.

So from a pragmatic standpoint I really only care about Docker if it's going to run on my own hardware, either as part of my own private cloud, or as part of a hybrid cloud strategy. As we've seen, there are layers of decision-making to go through before we even arrive at the conclusion that a given new workload is going to live on my in-house infrastructure. But let's assume for a moment we've made those choices, and the new workload is running at home.

This is where we loop back to the top and start talking about inertia.

All my workloads are on my own private cloud already. They're doing their thing. If I don't poke them, they'll do their thing for the next five years without giving me much grief. My existing infrastructure is hypervisor-based. My ops guys are hypervisor-based.

If I simply accept that - if I give in to the laziness and inertia of "using what I know and what I have to hand" - then my new applications require no special sauce whatsoever. I can let the hypervisor do all the work and just write an app, plop it in its own operating system, and let the ops guys handle it. What's one more app wrapped in one more OS?

Change for change's sake offers poor return on investment. So for me to move to Docker there has to be incentive. Right now, today, at the scale I operate, the ability to power down 8 servers isn't a huge motivation. I could pay for the electricity required to light those servers up by writing 4 additional articles a year.

Two years from now, I may have a dozen applications in the cloud, all coded for this "scale out" thing. I may have gotten rid of one or two legacy applications in my own datacenter and replaced them with cloudy apps. Five years from now...who knows?

It would be really convenient for those new applications to be written to be Docker compatible, scale-out affairs that provided their HA via the design of the application rather than the infrastructure. But I don't know for sure that Docker will be the container company that wins.

For that matter, the hypervisor/cloud companies could see Docker as a threat in the next two years, declare amnesty and agree to a common virtual disk format.

Docker offers a means to make my apps more-or-less portable. Ish. As long as there isn't too big a difference between the underlying systems, they'll move from this server to that one, from private cloud to public. If I kept the OSes at the same patching levels on both sides, I could move things back and forth...though not in an HA or vmotion fashion. That has some appeal.
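What that "ish" portability looks like in practice, sketched with the docker-py library: the image moves, the running state does not, so this is nothing like HA or vMotion. The hostnames and image name below are hypothetical:

    # Export an image from this host and load it on another.
    import docker

    src = docker.from_env()                                     # this server
    dst = docker.DockerClient(base_url="tcp://otherhost:2375")  # hypothetical peer

    tarball = b"".join(src.images.get("myapp:latest").save())
    dst.images.load(tarball)
    dst.containers.run("myapp:latest", detach=True)  # a fresh instance, not a migration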

But a common virtual disk format would allow me to move VMs between hypervisors and from any hypervisor to any cloud. Were this to happen, I'd really lose most of my incentive to use Docker. At least at the scale I operate.

TL;DR

All of the above is a really roundabout way of saying this:

Docker is cool beans for big companies looking to make lots of workloads that require identical (or at least similar) operating environments. (See: scale-out web farms like Netflix.)

Hypervisors are just more useful to smaller businesses.

I'm way more likely to care about a technology that lets me easily move my workloads from server to server and from private cloud to public cloud (and back) than I am one that will let me get a few extra % efficiency out of a server. Docker could do this one day. So could hypervisors. Neither really do it today.

Hope that helps some.

7
0
Trevor_Pott
Gold badge

"You're not "giving up" high-availability - that's provided by the design of your production infrastructure and software architecture."

Which is exactly what I said.

Containers let you get more workloads per node, but they don't of themselves give you a means to provide high availability for those workloads. You either use production infrastructure (hypervisors or NonStop servers) which provide you redundancy, or you redesign your applications for it.

Thus containers are not competition for a hypervisor. They are not an "or" product. They are an "and" product.

Considering the hype and marketing of the Docker religious, that absolutely does have to be said to the world, spelled out explicitly and repeatedly.

11
0

Euro Parliament VOTES to BREAK UP GOOGLE. Er, OK then

Trevor_Pott
Gold badge

Re: "suggest they break up the European union first"

Seems to me they're the only ones restraining the excesses of your xenophobic and increasingly authoritarian police-state-worshiping rulers, yes.

1
0

How to get ahead in IT: Swap the geek speak for the spreadsheet

Trevor_Pott
Gold badge

Re: Wow

"The art of what I need to do is make it look like something magical is being achieved (the perception bit) "

This is fucking exhausting. It's a full time job in and of itself. If you're an introvert by nature, well...you'll end up burnt out. Sad, but true.

1
0
Trevor_Pott
Gold badge

Re: Wow

"the techies will always be needed regardless of this business/social skills bullshit"

Why? What do you do that can't be automated?

0
0
Trevor_Pott
Gold badge

Re: "You might have to ditch the laptop and brush up on the dreaded 'soft skills'"

To be fair, "a large busted red head able to operate a laptop" is welcome anywhere and everywhere.

2
0
Trevor_Pott
Gold badge

Re: Its odd

More'n just that. Companies want you to devote your life to them. Bleed their colours, think always with the company's interests in mind. They promise you everything from stock options to raises, from a seat at the table of decisions to little things, like being allowed to put a fish tank in the office.

But everything they say is a fucking lie. What they want is the most possible work out of you, with absolute devotion for the least amount of money. And when you're burnt out, and you can give no more, they discard you.

The problem is that businesses have an entitlement issue. They expect absolute devotion from their employees whilst offering nothing in return. The only way they retain staff is to keep them so busy they can't realistically go forth and look for a better job.

That's not "us" versus "them", because they never even consider "us". We simply don't occur to them. Not on that level. Not even on a level enough to keep promises.

And then you leave, do something else, build your own office and get the goddamned fish tank.

3
0

Randall Munroe: The root nerd talks to The Register

Trevor_Pott
Gold badge

Re: Ring

The man is literally - not figuratively - the prototypical gentleman and scholar of our era.

4
0
Trevor_Pott
Gold badge

Re: Good interview

"On the contrary, if I were to meet him (and recognise him), I would stop him, shake his hand and thank him very much for all that he has produced and to please keep on doing what he does so well."

Very much my reaction when I got to meet the fellow behind Ninite.

I don't know as I'll ever get the honour to repeat that experience (meeting one of me heroes), but should I ever meet Mr. Munroe, I will try to be less of a gibbering idiot than I was with Mr. Kuzins.

As with Mr. Kuzins, I expect that meeting Mr. Munroe would be an experience wherein the ancient axiom "never meet your heroes, you discover they have feet of clay" would not apply. I expect, from all that I have read, that Mr. Munroe would be humble and awkward, rapaciously curious and stunningly intelligent as he is reputed to be.

And that, if nothing else, he'd accept my heartfelt gratitude. Not for the wit, or the feeling of belonging, or the unity across the globe with other nerds his works have provided. Not for the humour, the intelligence, or even for taking the time to let me thank him.

I expect he'd understand when I say that it is knowing that someone else in this world deals best with confusion, grief, fear, loneliness, anger and even despair through art, thought and seeking to help others. Knowing that Mr. Munroe - who may well be one of the brightest minds of our generation - has the same reaction as I do when confronted with these emotions makes me feel less alone.

He's a private guy. I respect that. He has, however, shared with us glimpses into the difficulties with which he struggles. That he can continue to be witty, humorous, helpful and kind through all of that is of itself amazing.

That he occasionally lets us see the humanity of the artist makes me feel less alone.

My apologies for an inability to express myself appropriately. Better to be mushy here where he's unlikely to read it than to gibber at the man in person.

2
0

Why did it take antivirus giants YEARS to drill into super-scary Regin? Symantec responds...

Trevor_Pott
Gold badge

Re: Nation states?

"Are you seriously trying to tell us that the coders behind the NHS patient records debacle or the MOD procurment process are somehow better then a couple of gifted and motivated amateurs? In fact, can you name a single piece of not-shit software that can be credited to a nation-state? Even BT's fucked up Phorm was written by a private entity."

As a general rule, the "not shit" coders working for a state end up working for the spooks, or the banks. Unimportant things like health care get the mediocre of what's left. They don't really pay all that well to code for health care.

0
0
Trevor_Pott
Gold badge

Re: Nothing on C&C

"Hundreds" of infected machines. Why tap the undersea fibers? Get some men in bright vests to dig a hole in the ground outside the company in question and just put the taps in there. Then the packets aren't leaking across the internet for everyone to see.

0
0

Dropbox sees rival file-piles merely as dots in rearview mirror

Trevor_Pott
Gold badge

"The 451 organisation surveyed 1,000 users"

Were they all American?

0
0

Thought tab sales were in the toilet this year? Hah! Wait for next

Trevor_Pott
Gold badge

Re: Tabs will do fine

You're both wrong and you're both right.

Tablets will sell in large volume. Tablets will sell for less money. The number of tablets sold is flat-to-growing. The sale price of tablets is plummeting.

1
0

Google Chrome on Windows 'completely unusable', gripe users

Trevor_Pott
Gold badge

Re: How Widespread?

I can say with about 75% certainty it's a problem with Chrome's Flash player. Right-click on the "new tab" button and select "Task manager" in Chrome. You'll see which tabs are eating the CPU. In every case where Chrome turned to crap for me, it's been tabs with Flash. Even though I have FlashBlock on, the fact that there's a Flash element somewhere in the source makes those tabs go nuts.

But it's not consistent. It can go days without a problem. Then one day, I take the notebook out of sleep mode, and *bam*, the problem's there. Exit Chrome, restart, let all the tabs reload, and 50/50 the problem will recur. Reboot the system, restart Chrome, and the problem's gone again for a few days.

But always, it is the Flash tabs that start this chain.

1
0
