probably mostly spinning disk
with some SSD up front as a tier. Since most HPC workloads are still throughput based, last I heard SSDs don't give much of a competitive edge over disk on sequential reads.
My DNS provider patched 2 days ago, yesterday wasn't good enough for them!
I got a pretty nice Toshiba Android tablet last year: quad core, 2GB or maybe 3GB of RAM (I forget), and something like a 2540xsomething 9.7" display (good price at the time). I thought I would use it more (got it mainly to replace my old HP Touchpads), but I quickly found that I didn't use the tablet much at all.
My Galaxy Note 3 handles 99.9% of my mobile work; I have probably picked up and used my tablet seriously 5 or 6 times this year. It can go weeks or a month or more without even being touched. I don't even take it when I travel. I'm happy to watch movies on the 5.7" phone (which has 96GB of storage).
I have another Samsung tablet, a 7" model, which I have spent maybe a grand total of 30 minutes using outside of initial setup (got it cheap). I have another 7" tablet, HP I think, that was given to me by HP, but I haven't taken it out of the box yet (probably will re-gift it) since I know I have no use for it either (that one runs Windows).
Ironically enough, my HP Touchpads (used ONLY as digital picture frames; I don't even bother keeping them on wifi anymore) get more usage than the other tablets.
Feels strange to admit this, but I think I agree with Gartner (which is pretty rare for me). At the end of the day, container or VM, it really doesn't matter; it is a minor technical difference (speaking at the "business" level Gartner talks to).
Techie folks like us care more about the details, but from a higher level perspective the concept is very similar: multiple "instances" on shared hardware. Even if containers aren't as isolated as VMs, the end result is pretty similar.
At the end I think that's all that really matters.
I know you get it, but hopefully management types don't get confused and think they need containers to deploy micro services, when micro services deploy just fine in VMs.
My first round of micro services 8 years ago ran on physical hardware (in production); each micro service ran its own instance of Apache on a different port (Ruby on Rails at the time, ugh, hello hype bandwagon again) and talked through the load balancers to the other services.
I'll elaborate a bit on my usage of containers since it may be non standard and perhaps the information could help someone. A long time ago in a galaxy not too far away, our software deployment model for production called for two "farms" of servers: we would deploy to one "farm" and switch over to it. In the early days we were in a public cloud, so the concept was to "build" a farm, deploy to it, switch to it, and destroy the other farm. Reality didn't happen that way and both farms just stayed up all the time. When we need more capacity we activate both farms.
After six months or so we migrated to our own hosted infrastructure, where the cost of running two farms wasn't a big deal; since the inactive farm isn't consuming resources, it really doesn't cost much of anything to maintain (in the grand scheme of things).
Our main e-commerce platform is a commercial product we license, and it is licensed based on the number of servers you have (regardless of VM, physical, or container, or the number of CPUs or sockets etc). One server = 1 license. This application is CPU hungry, and the licenses are not cheap ($15k/server/year). For a while we ran this application in production on VMware; this worked fine, though it wanted ever increasing amounts of CPU.
In order to scale more cost effectively I decided early last year to switch to physical servers, but I wanted to keep the same ability to switch back and forth between two "farms". Originally I thought of just having one OS and two sets of directories, but configuration was much more complicated and would be different from every other environment we have. Another option was a hypervisor (with only two VMs on the host), which seemed kind of wasteful. Then the idea of containers hit me, and it turned out to be a great solution.
The containers by themselves have complete access to the underlying hardware: all the CPU and memory (though I do have LXC memory limits in place; CPU was more of a concern). Only one container is active at any given time and has full access to the underlying CPU. If a host goes down that is OK; there are two other hosts (and only one host of the 3 is required for operation). We saved a lot by not licensing vSphere (little point with basically 1 container or VM active at any given time), saved complexity by not using a hypervisor nobody in the company has experience with, and it's pretty simple. I calculated the new hosts had 400% more CPU horsepower than our existing VM configuration (with both "farms" active). Today these physical servers typically run at under 5% CPU (the highest I have seen is 25% outside of application malfunction, where I saw 100% on a couple of occasions). I don't mind "wasting" CPU resources on them because the architecture paid for itself pretty quickly in saved licensing costs, and allows enormous capacity to burst into if required.
I don't care about mobility for this specific app because it is just a web server. I wouldn't put a database server, or memcache server etc on these container hosts.
Headaches I find with LXC on Ubuntu 12.04 (not sure if other implementations are better) include:
- Not being able to see accurate CPU usage for a container (all I can see is host CPU usage)
- Not getting accurate memory info in the container (container shows host memory regardless of container limits)
- Process list is really complicated on the host (e.g. multiple postfix processes, lots of apache processes, default tools don't say what is a container or what is local on the "main" host OS)
- autofs for NFS does not work in a container (kernel issue) - this one is really annoying
- unable to have multiple routing tables on the container host without perhaps incredibly complex kernel routing rules (e.g. container1 lives in VLAN 1, container 2 lives in VLAN 2, different IP space, different gateway; when I looked into this last year it did not seem feasible)
I believe all of the above are kernel-level issues, but I could be wrong.
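To illustrate the memory-reporting mismatch: tools inside the container read the host's /proc/meminfo, while the kernel enforces the cgroup limit. Here's a rough sketch; the meminfo text and the 8GB limit below are made-up sample values, not from a real system.

```python
# Sketch of the memory-reporting mismatch described above. Sample data only.
def meminfo_total_kb(meminfo_text):
    # /proc/meminfo lines look like: "MemTotal:       67108864 kB"
    for line in meminfo_text.splitlines():
        if line.startswith("MemTotal:"):
            return int(line.split()[1])
    raise ValueError("MemTotal not found")

# What free/top see inside the container: the HOST's meminfo (64GB here)
host_meminfo = "MemTotal:       67108864 kB\nMemFree:        1048576 kB\n"
# What the kernel actually enforces via the container's cgroup (8GB here)
cgroup_limit_kb = (8 * 1024**3) // 1024

visible_gb = meminfo_total_kb(host_meminfo) // 1024**2
enforced_gb = cgroup_limit_kb // 1024**2
print(f"container sees {visible_gb}GB, cgroup enforces {enforced_gb}GB")
```

So a process inside the container happily believes it has 64GB to play with right up until the cgroup OOM-kills it at 8GB.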
All of those are deal breakers for me for any larger scale container deployment. I can target very specific applications, but in general the container hosts are too limiting in functionality to make them suitable for replacing the VMware systems.
Obviously things like vMotion are a requirement for larger scale usage as well. While most of our production applications are fully redundant, I also have about 300 VMs for pre-production environments, most of which are single points of failure (not many people need redundancy in DEV and QA; our immediate pre-production environment is fully redundant, at least to the extent that production is), and it would be difficult to coordinate downtime for simple things like host maintenance across 30-50+ systems on a given host.
Is quite a stretch there, stretch.
Containers have their use cases, and "services oriented architecture" has been around for a very, VERY long time (my first exposure to it was 2007, but I'm sure it was around much earlier than that). Containers have been around for a long time too (12-15+ years on some platforms, anyway).
When (or if) containers can provide the same level of mobility that VMs have, they will be pretty well set to take on VMs. Until then, their deployment will probably be limited to larger scale setups (in the same sense that software defined networking is limited to those setups too).
I do use containers myself; currently I have 9 containers deployed (LXC on Ubuntu 12.04 LTS on physical hardware), alongside roughly 600 VMs. The containers are there for a very specific use case and they serve that purpose well. Six of the containers have been running continuously for over a year at this point (i.e. we don't do "rapid deploy and delete"); the other 3 containers are only a month old and haven't seen production use yet.
Containers should be on that hype bandwagon that El Reg covered in another Gartner article today, because they are mostly hype. They aren't magical. They aren't revolutionary. They aren't even NEW.
SQLite version 3.8.2 2013-12-06 14:53:30
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> select count(*) from moz_hosts where permission = "2";
10,359 hosts that try to set cookies will not have their cookies stored by Firefox. Firefox prompts me for every site, and has done so for what feels like 7 or 8 years now. Sometimes I block too much and a site stops working, and I want to go back and allow it to store cookies; in some cases I just temporarily use another browser for that site.
At this point it seems like that happens when the hardware isn't supported anymore (usually coinciding with buying a new computer), whether it is Windows or Linux. Linux on the desktop at least started being "good enough" for me probably in 2007 (I've been using it as my main desktop since about 1997); since then I haven't come across any improvements that made me really excited to get to the next version. My last "upgrade" was to Mint, explicitly to retain the same GNOME 2 UI that I had with Ubuntu 10.04 LTS (which for desktops I used for a solid year past end of support; I only changed to Mint when I upgraded from a hybrid drive to an SSD).
Windows got to that point for me with XP (it probably got that way for most people with XP); nothing in the newer versions has gotten me excited. I know that 64-bit support under XP was poor relative to Windows 7, so that is one reason to upgrade, but that's about it. I'd take the XP UI over the Windows 7 UI/UX in a minute though.
I guess I can say the same goes for a lot of technology products and platforms that I use. The only reason I upgraded from ESX(yes no "i") 4.1 to 5.5 is I could no longer get support on 4.x. The version of Splunk I am using has been unsupported for probably two years now(but it WORKS, no support tickets needed in two years - I do plan to upgrade just not a high priority). The list goes on and on and on..
I haven't been a "serious" Windows customer since the NT 4 days. I've certainly used it a bunch over the years since (including being the only one on my company's team that supports what few Windows servers we have in house), but very casually.
I don't know why I care about XIO, but for some reason I have been loosely following them for a long time. I specifically remember they had a "head" unit like this many years ago.
I had to look it up, the product I recall was called the Emprise 7000 (2009) which scaled to between 10 and 64 of their ISE enclosures at the time.
I would think it is technically possible to inspect and filter at the carrier level for this kind of thing, since this is processed through their systems(and not some random web page or email or something).
Maybe they don't have this capability; if not, it wouldn't be a bad one to have.
But ironically enough that last QTS data center I can't live without since my own company's equipment is co-located there, it's a nice place, probably a half mile walk from the parking lot to our little corner of it (128 square feet - driving over $220M/year in revenue). Right next to a prison of all things. Can't wait to go back, I love Twin Peaks in Buckhead.
The main bug seems to specifically refer to users with special rights being able to compromise the site, not just any random anonymous user.
I would wager most of the WordPress blogs out there have just a single account for the one person there (like mine), or have only trusted users (like the WordPress blog for my company, where I think all of the users that contribute content have admin access already).
1PB of data will take a long time to import. Even if I had 1PB of data to import to their cloud, even at gigabit speeds (on a tier 1 ISP) I believe the latency alone would ensure it takes longer than 3 months to copy that data (at least without massive parallelization).
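Back-of-the-envelope arithmetic (my own, nothing official): even ignoring latency and protocol overhead entirely, the raw bandwidth bound alone puts 1PB over a saturated gigabit link at roughly three months.

```python
# Time to move 1PB over a fully saturated 1Gbps link -- best case, assuming
# no latency, no protocol overhead, no retransmits. Reality would be slower.
petabyte_bits = 10**15 * 8   # 1PB (decimal) in bits
link_bps = 10**9             # 1Gbps

seconds = petabyte_bits / link_bps
days = seconds / 86400
print(f"{days:.0f} days")    # ~93 days, i.e. right around 3 months
```

And that's before TCP window/latency effects throttle any single stream, which is why you'd need massive parallelization to even hit that bound.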
how much fun I could have with $415M
don't start using Windows 10 until 2020; when it goes to extended support, at least they won't be messing with the UI anymore and it will be pretty "stable".
Which works out pretty well, since it seems Windows 7 goes EOL entirely in 2020 too. For my small use cases of Windows I see absolutely no reason to move from Windows 7 (most of my work is in Linux Mint with MATE on desktop/laptop anyway).
(and yes in case you are curious I don't care if windows 10 makes my apps run twice as fast and doubles the battery life in my laptop, I don't want it any more than I want Ubuntu Unity, or GNOME 3).
that they were backtracking on their aggressive Firefox browser release schedule; then I realized it was for Firefox OS and thought, who cares about that..
(happily running android 4.4.something, android 5.x stay away from my phone please, thanks)
I hope you are right, the linux version works pretty well for me (use it every day for work (for the past 5 years), 98% text chat, 2% voice chat), and no ads.
twice, I'd be willing to pay 10 times what they charge to get everything. I cancelled when they raised prices the first time after looking at my usage and realized I really hadn't streamed more than 2-3 hours in the previous year, and I went through DVDs about once every 3 months.
I keep going back to this Qwest commercial
that is what I want, I'm happy to pay $100-150 maybe even $200/mo for that.
forgot to clarify: when I say data center stuff I specifically refer to servers/storage/networking. I do use co-location providers for actual data center floor space, cooling etc (generally going for the high quality ones with N+1 everything). I use tier 1 ISPs for my internet uplinks.
that I can operate data center stuff much better than amazon, microsoft, google etc as I have demonstrated for the past decade.
They operate stuff pretty well *only if* you're willing and able to change your operating model significantly to fit their "built to fail" model. Most apps and most orgs do not operate in that model, and sadly many people making the decisions don't realize this when they make them. Every development team I have worked with and every company I have worked for has been this way, and the same goes for many other companies that people I know work at. Most people think cloud is just magic and it will "just work". This is probably closer to reality for SaaS (since everything is abstracted), but it couldn't be farther from the truth for IaaS (and really, who uses PaaS these days anyway).
(Both my current company and my previous company launched their ground-up designed apps in a big public cloud with pretty disastrous results, first and foremost from a cost standpoint, setting everything else aside. The first company collapsed after I left; they were spending easily $400k/mo on cloud hosting (I could do it all in house for around $1M of first year costs and about $150k/year after, using tier 1 hardware and software). The current company moved out in a few months and I still operate their stuff today; it runs smooth as butter. I've had two, count 'em, two server failures in 3.5 years (both recovered automatically), 100% storage uptime in 3.5 years, and everyone sleeps well at night. I haven't had to rebuild a virtual machine since we moved to the data center 3.5 years ago.)
The models are different. The model I work with provides higher levels of performance and availability, and generally significantly lower costs (though I haven't priced clouds out in the past couple of years), because we know how to oversubscribe and share resources (doing this right takes experience). It's not as flashy; there are no APIs to dynamically scale up and down, that is a manual process (but realistically we haven't had that need, ever), and the lifetime of servers is measured in years. We have tons more functionality with our enterprise equipment than is possible in a public cloud (I'm not going to bother explaining the details if you don't understand this, not worth my time).
Their model is you have to build your apps to handle that. On paper it sounds smart, but in reality that is a lot of work, most companies opt to build features for customers rather than high availability.
OCP to me is kind of dangerous; I know there are a lot of people out there (I used to work for one) who just look for any excuse to cut corners on cost, not taking into account the risks involved in going with lower quality stuff ("it's all the same"). If you have the staff, expertise, and time to handle it, great, go for it. Most companies don't (none that I have worked for anyway; I work for small(er) companies).
One company I was at tried adopting this model, saying "oh, we'll just hire an intern to swap hard disks" for a big Hadoop cluster they were going to build. They ignored my suggestions and I left before they bought anything. The first round of cluster build-out had a 30-40% failure rate on systems for the first year or so (literally halving their Hadoop capacity, which impacted the business because Hadoop apparently operates on a quorum model, as the lead developer explained to me a year or two later); they never did hire interns to "swap hard disks". The leader of the group left not too long after. The company is still around, but I heard all investors have pulled out and they are riding on their own (not a position I would want to be in given they went through probably 8 rounds of funding).
I think cloud is a future, but that future is SaaS. IaaS is still a piece of shit when it comes to cloud, I don't really see it getting any better (at least in the biggest clouds). SaaS makes a lot of sense though.
lastly, I still recommend this plugin it makes reading about cloud more enjoyable
(there is one for chrome too I don't use chrome though)
(maybe the plugin altered my comments to my butt I am not sure)
you didn't mention any quotes from here
"This is despite Acer having produced the top-selling Chromebook of last year, effectively tripling its Chromebook business. But Chen was adamant: if necessary, he said, “we will be the last man standing in the PC industry"."
I think a lot of it is compliance... MS going to companies that are not in licensing compliance and saying: as part of getting back into compliance, sign up for Office 365 (at some discount). Oracle seems to be doing similar things with their cloud offering (but perhaps being more aggressive about it).
my 2011 Juke gets around 235 miles to the tank on pure highway runs, half that in city driving the way I drive it. I plan to get a NISMO at some point, but probably not for a couple more years.
My Samsung 850 PRO ~500GB drive has written about 9TB of data since about last August (on my primary computer), at that rate the new stuff would easily work for me.
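For context, a rough endurance estimate from that write rate. The 150TBW rating below is an assumed, illustrative figure for a drive in this class, not a spec I've verified.

```python
# My write rate: ~9TB in ~10 months. The 150TBW endurance rating is an
# assumed/illustrative figure, not a verified spec for any particular drive.
written_tb, months = 9, 10
rated_tbw = 150

tb_per_year = written_tb / months * 12        # ~10.8 TB/year
years_to_wear_out = rated_tbw / tb_per_year
print(f"~{years_to_wear_out:.0f} years at this write rate")
```

At a light desktop write rate like mine, even a modest endurance rating outlasts the rest of the machine by a wide margin.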
What are you on, Trevor? OpenStack is most certainly NOT ready. For it to work right you still need significant in-house expertise (far more than for most any other platform). Large enterprises that can dedicate tens to hundreds of people can probably be pretty successful (I heard eBay/PayPal had something like 200+ people working on it, for example), or you can outsource OpenStack to someone like HP and let them manage it (they have services that will do everything from provisioning to ongoing management; still, even with them I wouldn't trust it with my mission critical stuff, as the apps I manage aren't able to operate in something that is "built to fail" like a public cloud).
Everything I have seen and heard from people using it screams it is not ready, and won't be for some time (where time is easily 1-2 years, probably more). Just because you can plug it in and have it "work" doesn't mean it will WORK. Note that *my* standard for WORK is quite high.
Lots of organizations want people to think it's ready, but unless you're prepared to really get your hands dirty (and take on pretty significant risk) and/or devote a good amount of resources to keeping it running, you had best steer clear.
Last year I heard a great quote from one of the openstack experts at HP, he said something along the lines of "getting it up and going is easy, keeping it RUNNING is hard".
My former boss works with openstack on a daily basis at a large(r) enterprise (he knows my level for quality) and every time I see him he tells me it's not ready.
as a customer... I hope it goes pretty smoothly. I remember the early Compaq/HP days, when the company I was at at the time had to have basically anything from HP shipped UPS Red because it took so long to process; later we were told it was mostly due to integration issues between HP/Compaq. This went on for what seemed like at least a year.
Last I heard cloudflare is a CDN of sorts, with some extra features on top, designed to be used for inbound traffic into a service.
This verisign thing seems DNS only, and is designed for outbound DNS requests from clients.
Unless cloudflare offers something similar for outbound requests the article makes no connection that I can see.
(not a customer of either company myself)
(Most) public cloud infrastructure, to me at least, doesn't remotely resemble traditional infrastructure. While we are 99% virtualized, our VM life spans are measured in years. We've never had a VM failure (i.e. had to rebuild a VM). If a host fails, VMs are automatically restarted on another host within a minute or so (the last time it happened it was so fast our monitoring didn't even get a chance to send alerts that more than a dozen production VMs went down). Naming schemes are sane, IP addressing is static, things are reliable. Storage uptime has been 100% since moving out of public cloud 3+ years ago, with 2 VM host (hardware) failures in that time. I can sleep well at night not having to worry about something failing, like we constantly had to when hosted at a big public cloud provider. To answer the question: no, in the past 15 years I have never worked with a development team that built things to operate smoothly in a public cloud, including the present and previous companies, who launched their applications in public cloud from day 1 (the first one crashed and burned; the current one of course moved out in short order).
There are cloud providers that can provide this, but the "big ones" don't come close (and aren't even trying, it's not their model).
just give firefox ESR a 5 year support cycle as a compromise.
(firefox user since phoenix 0.something I forget, currently ESR 31 though I miss ESR 24 to some extent, 31 is passable)
I thought he was just a blogger. I used to interact with him semi-regularly on his EMC blog; he never came across to me as someone who should be in charge of anything too important. I can see him probably having a good relationship with customers, but making decisions at that level, I don't know.
They should have named it something like "Storage Magic v1.0"
because that's what it sounds like they are trying to make.
I think you are quoting single-system numbers; their website claims 103PB with a 24-node cluster as the top end for NAS, or 34PB for SAN (max 8 controllers).
I was thinking something similar before I switched to Mint from Ubuntu... then I realized I hadn't upgraded Ubuntu in several years (I stayed on long after 10.04 LTS desktop support ended because I didn't want Unity, among other things). So I figure I won't be upgrading to another major version of Mint unless I get new hardware, which means a new install for me anyway. My personal laptop is 5 years old now; maybe it has another year or two left in it. But maybe I will go with Mint 17 again when I replace it, like I kept going with Ubuntu 10.04 LTS long after 12.x came out.
looks like I am on 17.1 at the moment.
for me anyway, what impact HP's latest 3PAR has on this stuff, since this is as of June of this year and the new 20k systems I don't think ship till August. They are what I'd consider the first *really* good 3PAR flash systems (not that the 7450 isn't good, it is, but these 20ks are just so much faster and very cost effective relative to their competition anyway).
if ask has only 0.26% share, what does yahoo hope to gain? obviously distributing with java wasn't an effective strategy for ask.
Get a 7440 then. Same hardware as 7450 but allows spinning rust.
I wish I got a 7440 but was not available when I got a 7450.
Now, the two 20k models are NOT the same hardware; the SSD version has more CPUs and double the memory. Hoping HP pulls a 7440 with the 20k.
They are functionally the same, but the all-flash version has 32 more CPU cores and 1.7TB more memory/cache (assuming an 8 controller configuration in both).
I would have liked HP to offer a "hybrid" version with the faster controllers (it just gives more flexibility); maybe that will come in the future (like the 7440 is the same as the 7450 but allows for spinning rust).
Just security patches. Since AT&T started sending Android 5 to Note 3 users like myself, I have had to keep wifi disabled (going on 2 months now) to prevent it from installing. There is NOTHING in Android 5 I want. I don't want new features. The phone works fine as it is. I don't care if you double the battery life and make it run twice as fast; the phone works fine as it is. I'm already very careful about what apps I install, and I don't do any purchases or online banking on my phone.
I just wish I could use wifi again without risking getting Android 5. I'm happy to try to live with Android 5 if I buy a new device, but not on my existing device.
The article implies you can make your storage faster by making the pipe (bandwidth) bigger. In my experience at least the pipe is almost never taxed (even 4Gb FC). I know there are cases that it is but I suspect they are in the minority.
Of course experienced tech readers know this already.
I'm more concerned with queue depths at lower speeds (mainly because older gear has smaller queues) than with throughput.
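The queue depth point can be sketched with Little's law: outstanding IOs divided by per-IO latency bounds IOPS, and IOPS times IO size bounds throughput. The queue depths, latency, and IO size below are illustrative numbers, not from any particular HBA or array.

```python
# Little's law sketch: a shallow queue caps throughput long before the wire does.
def max_throughput_mb_s(queue_depth, latency_ms, io_size_kb):
    iops = queue_depth / (latency_ms / 1000.0)  # outstanding IOs / service time
    return iops * io_size_kb / 1024.0           # MB/s

# Same 5ms latency and 8KB IOs; only the queue depth differs:
print(max_throughput_mb_s(32, 5, 8))    # shallow queue on older gear
print(max_throughput_mb_s(256, 5, 8))   # deeper queue, same link speed
```

With these made-up numbers the shallow queue tops out around 50MB/s while the deep one can push 400MB/s, on the exact same pipe, which is why the queue matters more than the link speed for most workloads.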
My servers are overkill but the cost isn't high: 2x10G links for VM traffic, 2x10G links for vMotion and fault tolerance, 2x1G links for host management, and 2x8G FC links for primary storage (boot from SAN). With the exception of FC, everything else is active/passive for simplicity.
11 cables out of the back of each DL38x system, including power and iLO. Good thing I have big racks with lots of cable management. With 4 labels per cable it takes a while to wire a new box, but we have added boxes at most twice a year over the past 3 years.
Maybe someday I'll have blades
Discover doesn't start till tomorrow
Pure could have picked a better day to announce, so as not to be dwarfed by the new 3PAR stuff..
Planet is fine. Humans are fucked. Over population. Planet will heal itself over time.
I plan to have some fun in the meantime because i know there's nothing i can do about it (realistically).
over the roar of my stereo (I think the cars next to me can't hear their own car over my stereo either). Also, I'm not interested in driving more economically. I bought my car to have fun, which means I get shitty gas mileage (even if I drive economically the car itself doesn't get good mileage anyway; cruising at 75MPH on open highway I hover around 19MPG for a pretty small car, though in city driving I'm sure I could boost my mileage a lot if I didn't drive the way I do), and I burn my tires out probably in less than half the mileage they are rated for. I love to accelerate (and corner with torque vectoring all wheel drive, planting my passengers' faces into the windows); I don't like to speed (it doesn't get me there much faster). I've never gotten a ticket outside of a broken tail light (the last one was 11 years ago). One of my best friends is similar, though he drives a Porsche and I drive a Nissan.
The GPS in my car for some reason says I have driven a top speed of 225MPH. I've been fast, not quite that fast though(almost half).
I live 1 mile from my office so I don't have much of a commute.
Going on another road trip starting Friday Bay area to Vegas for a week then down to Arizona before coming back maybe swing by LA or something on the way back.
Twin Peaks Vegas bikini contest here I come, then HP discover next week. So excited.
I think cloud is a good future for the SME, but that cloud is SaaS, not IaaS which is what this article is referring to. IaaS cloud (even as a customer) still requires pretty significant knowledge to operate, of course SaaS does not.
Another thing driving cloud adoption, I'm sure, is just a lack of supply of (good) IT folk. If you can't find someone(s) to run infrastructure right, then you are probably better off not trying. Unfortunately (most) companies seem to think it's not worth paying decent money ($200k+) for people to do these kinds of things, when in the end it could (depending on the person etc) save them million(s) per year. It'd be different if there were more supply (of people) on the market, but it seems clear to me that, at least on the west coast of the U.S., there is not. I keep seeing and hearing stories about companies paying high six figures per month to cloud providers and either not caring that they could save a lot, or thinking that is "normal".
There was one good article here from someone that said something along the lines of when your cloud spend crosses $20k/mo then you need to start looking at alternatives. For my org cloud spend was about to explode above $100k/mo when we moved out 3.5 years ago, and my previous company was one of the ones spending $300-450k/mo on cloud (which I showed could be done in house with a 3 month ROI, but ultimately the company wasn't interested, I left, and it collapsed not long after).
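The break-even arithmetic behind that is simple; the figures below are the rough ones from my own experience, rounded for illustration.

```python
# Rough break-even for moving out of the cloud, using approximate figures
# from memory: ~$400k/mo cloud spend vs ~$1M of first-year in-house cost.
cloud_monthly = 400_000
inhouse_first_year = 1_000_000
inhouse_yearly_after = 150_000

# Months until cumulative cloud spend exceeds the in-house first-year cost
breakeven_months = inhouse_first_year / cloud_monthly
year2_savings = cloud_monthly * 12 - inhouse_yearly_after
print(f"break even in {breakeven_months:.1f} months")
print(f"year-2+ savings: ${year2_savings:,}/year")
```

At those rough numbers the hardware pays for itself in well under a quarter, which is where the "3 month ROI" came from.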
These are containers not VMs right?
For once I think this Gartner report seems good. I'm sure there are a ton of management types out there that think if they install OpenStack they have an instant cloud.
I keep getting updates from a friend that uses it and the word remains steer clear. IaaS is not important to me we get along fine without it. Too much hype behind openstack. Like SDN too much hype there too.
Happy with vmware and "utility computing". Cloud can go to hell.
Reminds me of a suggestion my VP of technology had at a company I was at several years ago, he wanted to use HDFS as a general purpose file storage system. "Oh, let's run our VMs off of it etc.." I just didn't know how to respond to it. I use the comparison as someone asking for motor oil as the dressing for their salad. I left the company not too long after and he was more or less forced out about a year later.
There are distributed file systems which can do this kind of thing but HDFS really isn't built for that. Someone could probably make something work but I'm sure it'd be ugly, slow, and just an example of what not to even try to do.
If you want to cut corners on storage, there are smarter ways to do it and still have a fast system, one might be just ditch shared storage altogether (I'm not about to do that but my storage runs a $250M/year business with high availability requirements and a small team). I believe both vSphere and Hyper-V have the ability to vmotion w/o shared storage for example (maybe others do too).
Or you can go buy some low cost SAN storage, not everyone needs VMAX or HDS VSP type connectivity. Whether it's 3PAR 7200, or low end Compellent, Nimble or whatever.. lots of lower cost options available. I would stay away from the real budget shit e.g. HP P2000 (OEM'd Dot hill I believe), just shit technology.
"why spend time, energy and money virtualising more than you have to?"
Because in many cases using containers would require much more time, and energy(as in human energy, and thus money) to manage than virtualization.
I have 6 containers deployed at my company for a *very* specific purpose(3 hosts w/2 containers per host each container runs identical workload). They work well. They were built about a year ago, and haven't been touched much since. I have thought about broadening that deployment a bit more this year, not sure yet though. I use basic LXC on top of Ubuntu 12.04, no Docker around these parts. Adapted my existing provisioning system that we use for VMs (which can work with physical hosts too) to support containers.
Containers are nice but have a lot of limitations(lack of flexibility). They make a lot of sense if you are deploying stuff out wide, lots of similar or the same systems, in similar or same configuration (especially network wise). Also most useful if you are working in a 'built to fail' environment, since you are probably not running your containers on a SAN, and unless things have changed containers don't offer live migration of any sort. So if I need to upgrade the firmware or kernel on the host the containers on the host all require downtime.
So for me, 6 containers, and around 540 VMs in VMware.
I've had one VMware ESX host fail in the past 9 years (my history of using ESX; I've used other VMware products too, of course. In this case it was a failing voltage regulator). Nagios went nuts after a couple of minutes; I walked to my desk and by that time VMware had moved the VMs to other hosts in the cluster and restarted them (I had to do some manual recovery for a few of the apps). I don't think that happens with Docker, does it? (You don't have to answer that question.)
they probably mean that a lot of the VMs in Azure for internal MS stuff that run on Hyper-V use the Nano stuff.