* Posts by Nate Amsden

1138 posts • joined 19 Jun 2007

IaaS is OVER, ladies. Time for OpenStack to jump clear

Nate Amsden
Bronze badge

Re: Finally an article that realises the reality..

I think cloud is a good future for the SME, but that cloud is SaaS, not IaaS, which is what this article is referring to. IaaS cloud (even as a customer) still requires pretty significant knowledge to operate; SaaS, of course, does not.

Another thing that is driving cloud adoption, I'm sure, is just the lack of supply of (good) IT folk. If you can't find someone(s) to run infrastructure right, then you are probably better off not trying. Unfortunately (most) companies seem to think it's not worth paying decent money ($200k+) for people to do these kinds of things when in the end it could (depending on the person etc) save them million(s) per year. It'd be different if there were more supply (of people) on the market, but it seems clear to me, at least on the west coast of the U.S., that there is not. I keep seeing and hearing stories about companies paying high six figures per month to cloud providers and either not caring that they could save a lot, or just thinking that is "normal".

There was one good article here from someone who said something along the lines of: when your cloud spend crosses $20k/mo, you need to start looking at alternatives. For my org, cloud spend was about to explode above $100k/mo when we moved out 3.5 years ago, and my previous company was one of the ones spending $300-450k/mo on cloud (which I showed could be done in house with a 3 month ROI, but ultimately the company wasn't interested, I left, and it collapsed not long after).

0
0

Psst. Want a cheap cloud, VM? Google has one. But there's a catch

Nate Amsden
Bronze badge

not VMs

These are containers, not VMs, right?

0
0

OpenStack private clouds are SCIENCE PROJECTS says Gartner

Nate Amsden
Bronze badge

Re: More of the Blindingly Obvious from Gartner

For once I think this Gartner report seems good. I'm sure there are a ton of management types out there who think that if they install OpenStack they have an instant cloud.

I keep getting updates from a friend who uses it and the word remains: steer clear. IaaS is not important to me; we get along fine without it. Too much hype behind OpenStack. Same for SDN, too much hype there too.

Happy with VMware and "utility computing". Cloud can go to hell.

0
0

Reader suggestion: Using HDFS as generic iSCSI storage

Nate Amsden
Bronze badge

sounds terrible

Reminds me of a suggestion my VP of technology had at a company I was at several years ago: he wanted to use HDFS as a general purpose file storage system. "Oh, let's run our VMs off of it etc.." I just didn't know how to respond to it. The comparison I use is someone asking for motor oil as the dressing for their salad. I left the company not too long after, and he was more or less forced out about a year later.

There are distributed file systems which can do this kind of thing but HDFS really isn't built for that. Someone could probably make something work but I'm sure it'd be ugly, slow, and just an example of what not to even try to do.

If you want to cut corners on storage, there are smarter ways to do it and still have a fast system. One might be to just ditch shared storage altogether (I'm not about to do that, but my storage runs a $250M/year business with high availability requirements and a small team). I believe both vSphere and Hyper-V have the ability to vMotion/live migrate without shared storage, for example (maybe others do too).

Or you can go buy some low cost SAN storage; not everyone needs VMAX or HDS VSP type connectivity. Whether it's a 3PAR 7200, or low end Compellent, Nimble or whatever... lots of lower cost options are available. I would stay away from the real budget shit, e.g. HP P2000 (OEM'd Dot Hill I believe), just shit technology.

1
0

Hypervisor indecisive? Today's contenders from yesterday's Hipsters

Nate Amsden
Bronze badge

why spend time

"why spend time, energy and money virtualising more than you have to?"

Because in many cases using containers would require much more time and energy (as in human energy, and thus money) to manage than virtualization.

I have 6 containers deployed at my company for a *very* specific purpose (3 hosts w/2 containers per host; each container runs an identical workload). They work well. They were built about a year ago, and haven't been touched much since. I have thought about broadening that deployment a bit more this year, not sure yet though. I use basic LXC on top of Ubuntu 12.04, no Docker around these parts. I adapted my existing provisioning system that we use for VMs (which can work with physical hosts too) to support containers.
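
For a sense of what "basic LXC" looks like without Docker, here is a minimal sketch of the kind of provisioning step I mean, using the python3-lxc bindings (the container name and template are made up for illustration; the real provisioning system does a lot more around networking and config, and the exact API may differ a bit by version):

    import lxc

    # "web-1a" is a hypothetical container name
    c = lxc.Container("web-1a")

    if not c.defined:
        # Build the rootfs from the stock Ubuntu template
        c.create("ubuntu")

    c.start()
    c.wait("RUNNING", 30)          # block until the container reports RUNNING
    print(c.name, c.state, c.get_ips())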

Containers are nice but have a lot of limitations (lack of flexibility). They make a lot of sense if you are deploying stuff out wide: lots of similar or identical systems, in similar or identical configuration (especially network wise). They are also most useful if you are working in a 'built to fail' environment, since you are probably not running your containers on a SAN, and unless things have changed containers don't offer live migration of any sort. So if I need to upgrade the firmware or kernel on a host, the containers on that host all require downtime.

So for me, 6 containers, and around 540 VMs in VMware.

I've had one VMware ESX host fail in the past 9 years (my history of using ESX; I've used other VMware products too of course - in this case it was a failing voltage regulator). Nagios went nuts after a couple of minutes, I walked to my desk, and by that time VMware had moved the VMs to other hosts in the cluster and restarted them (I had to do some manual recovery for a few of the apps). I don't think that happens with Docker, does it? (you don't have to answer that question)

3
1

Microsoft's run Azure on Nano server since late 2013

Nate Amsden
Bronze badge

Re: Azure

they probably mean that a lot of the VMs in Azure for internal MS stuff, which run on Hyper-V, use the Nano stuff.

2
0

Dell: Good servers and PCs, but for storage ... oh dear

Nate Amsden
Bronze badge

dell sonicwall good

one area Dell is good at in networking is SonicWall. I've been a customer since before they were acquired. Dell hasn't screwed up SonicWall as far as I can tell. Completely forgot Dell bought 'em until 30 seconds ago.

1
0
Nate Amsden
Bronze badge

Dell does pretty good in storage

I think so, at least. I don't use their stuff, but I'd wager storage is a strong #2 behind servers at Dell, with networking being a very distant 3rd. Maybe I'm wrong though. Compellent and Exanet are both pretty good sets of technology. I *wish* HP had something equivalent to an Exanet in their NAS lineup (StoreAll and StoreEasy don't cut it).

For Dell shops at least, I think what they have is good enough for a lot of folks. They've come a long way from being an EMC reseller and just having EqualLogic (which I'd never touch).

You say they need a better storage culture; they have built a better storage culture through their acquisitions. It's miles ahead of where it was 5 years ago.

I'm not a Dell customer right now but I certainly respect Dell a hell of a lot more these days than I did 5-6 years ago in large part because of the progress they've made in storage.

They tried investing in networking, but I think their acquisition of Force10 just fell apart (or so it seems anyway, maybe I'm wrong); some of the Force10 stuff hasn't been updated in what seems like 8 years now.

1
0

Flash banishes the spectre of the unrecoverable data error

Nate Amsden
Bronze badge

rebuild 600G 15k in 45min

This particular 3PAR array was not heavily loaded at the time (80x 15k RPM disks); much of my workload has been shifted to the all-flash system since, well, it's faster and has 4 controllers (I also like the 99.9999% availability guarantee on the new system, though the previous 3PAR has had 100% uptime since its deployment 3.5 years ago). But I just wanted to show that RAID, even with big drives, is still quite doable with a good RAID system (like 3PAR; I think XIV is similar, though not as powerful/flexible as 3PAR).

Rebuilding from the failure of a 600G 15k SAS drive took ~40 minutes:

http://elreg.nateamsden.com/3par-rebuild.png

Most/all of the drives in the system participate in the rebuild process, and latency remains low throughout. This technology obviously isn't new, easily 12-14 years old, quite possibly older.
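
Rough math on why the distributed rebuild is so painless (back-of-the-envelope only, and it overstates the work since only allocated chunklets actually get rebuilt):

    # Back-of-the-envelope rebuild math (illustrative numbers)
    capacity_gb = 600            # failed drive size
    rebuild_minutes = 40
    drives_participating = 79    # roughly the rest of an 80-disk system

    aggregate_mb_s = capacity_gb * 1000 / (rebuild_minutes * 60)
    per_drive_mb_s = aggregate_mb_s / drives_participating

    print(f"aggregate rebuild rate ~{aggregate_mb_s:.0f} MB/s")   # ~250 MB/s
    print(f"per participating drive ~{per_drive_mb_s:.1f} MB/s")  # ~3.2 MB/s

A few MB/s per spindle is background noise for a 15k drive, which is why latency stays low; a traditional rebuild onto a single dedicated hot spare has to funnel all of that through one disk instead.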

I wrote a blog post (sorry for the link) almost 5 years ago on the topic, and even quoted someone who was using RAID 6 with ZFS and suffered total data loss due to multiple failures (in part because it took more than 1 week to rebuild the RAID):

http://www.techopsguys.com/2010/08/13/do-you-really-need-raid-6/

Now I believe ZFS offers at least triple-parity RAID (optional); maybe they go beyond that too.

I don't have NL drives in my current arrays, just 10k, 15k and SSD. My last 3PAR with NL drives was 5 years ago (~320 drives); it rebuilt pretty quickly too even with the 1TB disks, since again the rebuild was distributed.

*very* few of my servers have internal storage of any kind.

2
0

Rip up your AMD obits: Gaming, VR, embedded chips to lift biz out of the red by 2016, allegedly

Nate Amsden
Bronze badge

enterprise-class CPU experience

"We're the only guys in the ARM server business that have experience delivering and supporting enterprise-class CPUs"

How's that enterprise-class CPU business doing these days? 1.5% market share?

I was hoping AMD would continue their high end CPU biz, but obviously the brains of that unit were gutted years ago and their road map ran dry (the bulk of my servers are still Opteron 62xx)

For me I have no interest in ARM, but I don't run hyperscale stuff. I think ARM will (sadly?) lose out to Intel just like AMD did when it comes to data center stuff.

2
0

OpenStack Daddy Chris C Kemp says it's like Linux in 1996

Nate Amsden
Bronze badge

Linux is about the same as 1996

To me anyway.. desktop Linux is much more user friendly, but really Linux on the desktop never went anywhere (I say this as a Linux-on-the-desktop user since about 1997, still using it right now). I think back to my early days with Linux in 1996, I think it was Slackware 3.x at the time.. I suppose one big leap was things like apt-get, which I want to say I first saw in 98 or 99.. but otherwise I don't see a whole lot of big usability enhancements for Linux's main use case as a server OS.

So I hope, usability wise, OpenStack doesn't evolve the way Linux did. I won't touch it with a 10-foot pole; it's still too unstable (I get semi-regular updates from a guy who uses it quite a bit).

0
0

Atlantis kicks its flashy upstart brethren right in the price tag

Nate Amsden
Bronze badge

you've got to be nuts

To trust 256GB+ to a Supermicro system. Just flat out nuts. I won't deploy anything with remotely that much memory in it without something like HP Advanced ECC. That, and can Supermicro even identify which DIMM has gone bad if one does go bad these days? My last bout with them was more than a few years ago, for a reason, but at the time the strategy was to keep running tests and swapping DIMMs until you find the one that is bad. No easy little light that says "oh hey, DIMM 9 is bad".

Not to mention that more often than not, the way you'd find out memory was bad was a system failure, either a lock-up or a crash. I'm sure it happens on occasion on HP gear too, but I have never seen that behavior in the past 12 years of (on and off) usage of HP w/Advanced ECC (I won't touch HP DL100 series systems, for reasons similar to why I won't touch Supermicro for anything business related at least). Going back to the DL3xx G2 anyway.

My last round with Dell gear was the R610 series a few years ago, their "version" of Advanced ECC at the time anyway disabled a third to half of the DIMM sockets on the system! Maybe it is fixed now.

0
0

Low price, big power: Virtual Private Server picks for power nerds

Nate Amsden
Bronze badge

power nerd here

I've had a server at a co-location facility for about 10 years now. The past four years have been in a facility here in the Bay Area: $200/mo for ~200W of power and 1/4th of a rack. I wouldn't be caught dead with my company's assets in this facility, but for my own personal use it works fine. 100mbit unlimited bandwidth with 10s of gigs of backbone connectivity.

The server runs ESXi and a half dozen VMs. Fortunately it's been fairly reliable; the last time I had to go on site was a couple of years ago for a bad disk, and I think to reinstall ESXi on a new USB flash drive.

I love my site-to-site VPN, and I proxy all my home HTTP traffic over the VPN through the colo, which really accelerates things. Home is about 23ms from the colo at the moment.

I tried the cloud route for about a year (Terremark vCloud Express) a few years ago, and well, the cost/benefit of doing what I do now is quite a bit better than the cloud was at the time (not going back either). Not touching Azure or Amazon with a 5 mile pole.

1
0

EMC to open-source ViPR - and lots of other stuff apparently

Nate Amsden
Bronze badge

openstack tools?

Since when are "Chef, MongoDB, Docker, Cassandra and others" OpenStack tools? These are all standalone applications/platforms. Maybe OpenStack can use them, but people have been using them without OpenStack for years, and in some cases probably before OpenStack even existed.

I wonder if EMC was just not getting much/any traction with ViPR so they decided to open source it instead. The concept of the product sounded pretty neat at first anyway.

1
0

HP wag has last laugh at US prez wannabe with carlyfiorina.org snatch

Nate Amsden
Bronze badge

why

does anyone care about Carly? It makes no sense to me. She was (maybe still is) a contributor on CNBC too; I just fast-forwarded through anything she had to say. I could understand if she were still CEO of some big company (well, maybe she is, I haven't checked).

Maybe if everyone ignores her she will just go away.

7
0

Inside the guts of Nano Server, Microsoft's tiny new Cloud OS

Nate Amsden
Bronze badge

took long enough

I don't know why I remember shit like this

(in reference to MS hotmail FreeBSD->Windows migration)

http://www.theregister.co.uk/2002/11/21/ms_paper_touts_unix/

"We find also that the Windows image size can be a real inconvenience on a big farm: "The team was unable to reduce the size of the image below 900MB; Windows contains many complex relationships between pieces, and the team was not able to determine with safety how much could be left out of the image. Although disk space on each server was not an issue, the time taken to image thousands of servers across the internal network was significant. By comparison, the equivalent FreeBSD image size is a few tens of MB.""

4
1

NetApp's all-flash FAS array is easily in the top, er, six SPC-1 machines

Nate Amsden
Bronze badge

pretty nice results

is this the first NetApp FAS result with reasonable capacity utilization? Usually they're pretty high up in the unused-storage arena.

In any case, pretty fast. Good job, NetApp.

2
0

If hypervisor is commodity, why is VMware still on top?

Nate Amsden
Bronze badge

Maybe not a commodity

The article seems to try to claim it is a commodity, then goes on to say why it apparently isn't yet (I would agree it is not yet, too).

Citrix has been interesting to me for a while: they won't (in my experience) bat an eye and try to sell you Xen if you say you're interested in VMware. They know VMware is better; Xen is an option if you are *really* cost conscious, but I've admired them to some extent for knowing it's not a competitive product and not trying to sell it to the wrong customers. I haven't used XenServer myself but am a NetScaler and XenApp (tiny installation) customer.

It's too bad the state of tools for KVM hasn't gotten better yet. There's quite an opportunity out there for someone to step up.. (I haven't used KVM either). But of course, historically the open source crew can't stay focused for more than 30 seconds on usability; they get bored and start bolting on new features (I say this as a Linux server+desktop user for almost 20 years now).

Maybe KVM's future is just being a background thing in OpenStack, and the management layers will be built for OpenStack (again, I haven't used OpenStack, but people I trust who use it tell me it's still not stable, so I have no interest in touching it in the meantime).

For me, VMware has been probably the most solid piece of software I've ever used in my career. I have had a single PSOD in almost 10 years of using ESX/ESXi (the PSOD was triggered by a failing voltage regulator). Other than that no crashes, very few bugs (practically a rounding error, but keep in mind I leverage only a fraction of the platform's abilities, probably sticking to the things that are the most mature). I've certainly had FAR fewer support tickets with VMware than I have for any other software or even hardware product I have used. Citrix just called me yesterday saying that my opening 10 tickets so far this year on NetScaler set off an alarm on their end, thinking maybe I'm having a lot of problems and maybe I need more dedicated support resources. I told them no, that is pretty typical. Lots of issues but I am still a happy customer. I've had one memory leak they have been unable to trace for the past 18 months.

VMware just runs and runs and runs... the track record with me is impeccable. The only thing that got me even THINKING about changing was the vRAM tax a few years ago. I have seen absolutely nothing from Hyper-V, Xen, or KVM that makes me want to even look at them still (I do keep very loose track of them).

The cost of vSphere for me is still quite reasonable (Enterprise+). I do not subscribe to the most sophisticated management tools; our needs are pretty basic. Especially as systems become more powerful: my newest DL380 Gen9s have the newer 18 core/36 thread Intel chips in them. I want to say my early ESX 3.5 systems were DL380 G5s with dual socket quad core, and I think the entry level ESX at the time (no vMotion etc) was $3500 for 2 sockets, which came to $437/core (not sure if that included support or not). The latest systems are about $10k for 2 sockets w/3 year support and Enterprise+, which is about $277/core. So overall cost is down quite a bit (while features are way up), and on top of that per-core performance is of course much better than 9 years ago. I'm happy.
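
A quick sanity check of that per-core math (using the approximate license prices from memory above):

    # Rough per-core license cost comparison (approximate prices from memory)
    old_price, old_cores = 3500, 2 * 4     # ESX 3.5 era: 2 sockets x quad core
    new_price, new_cores = 10000, 2 * 18   # current: 2 sockets x 18 cores

    print(f"old: ${old_price / old_cores:.2f}/core")   # 437.50
    print(f"new: ${new_price / new_cores:.2f}/core")   # 277.78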

I've been a VMware customer since 1999; I still have my 'vmware for linux 1.0.2' CD around here somewhere. Their stuff works really well, and for my organization and myself the cost is still very much worth it. Now, that vRAM tax with my 384G servers would have been too much, but fortunately it never materialized as a problem for us (we never used the versions that had that tax imposed).

3
1

D-Link: sorry we're SOHOpeless

Nate Amsden
Bronze badge

does it matter?

at the end of the day, in a year, what, maybe 10% of the affected routers will be patched, if that?

0
0

Want to super scale-out? You'll be hungry for flash

Nate Amsden
Bronze badge

reading this

makes me glad I bought a 4-controller 3PAR 7450 that goes up to 460TB of raw flash (and in 5 years I'm sure that number will probably be in the 1PB range, with the larger capacity SSDs that will most certainly come out in that time). Not for everyone I'm sure, but it does the job for me just fine.

Realistically though, I don't see my org going past 200TB of raw flash (meaning I don't have to expand the footprint of the original installation, which is 8U) in the next 3-5 years (even that is a BIG stretch).

0
0

Thinking of following Facebook and going DIY? Think again

Nate Amsden
Bronze badge

I feel sorry

for anyone who is even facing this question that is not at a big org.

I had clueless management a few years ago at another job; they wanted to build a 140+ node Hadoop cluster and wanted to do it 'on the cheap' by going with some random whitebox Supermicro shop. When that decision was made I left the company within 3 weeks (the proposal I had was higher quality gear with NBD on-site support; I knew there would be a good amount of failures).

I was expecting bad things to happen, but the shit really hit the fan, and it was great to hear the stories. Before they even received the first round of equipment, the crap vendor they decided to go with said they had to be paid up front; they didn't have the capital to front the customer to buy the equipment. Not only that, the experience was so bad the new director of ops (hired a couple months after I left, I never knew him) said they would never do business with this company again (this was before the first system landed in their data center).

So they got the gear and installed it. As I suspected, they didn't burn anything in up front, so for the following six months at least they had roughly 30-35% of their Hadoop cluster offline due to hardware failures. Because of the quorum Hadoop operates in, the head Hadoop engineer told me basically half the cluster was down for six months while they tried to resolve stuff. It was HILARIOUS. How much business was lost as a result of trying to save about 20-25% on the project budget?

While the project was underway, the CTO of the company said they would just hire a college intern to "go swap failed hard disks", as a response to my wanting on-site support. As recently as a year ago they still had never hired any interns to swap hard disks. I knew the quality was going to be crap and we weren't (at the time) staffed to support that. What actually happened proved to me I was right on every level all along; it made me feel damn good.

The director in question quit within a year, and the VP quit not long after (I believe he was on the verge of being pushed out). The technology side of the company basically fell apart after I left; they hired 9 people to try to do my job but still failed. Two of my former co-workers BEGGED to get laid off (so they could get severance packages) when the company had a round of layoffs a few years ago; both got their wish.

Then they went to the exact opposite, from super cheap to super expensive, deploying a lot of Cisco UCS for a low margin advertising business. Not sure how long that lasted; last I heard they refreshed everything onto Dell again.

I do miss that company for one reason though: the commute was literally about 800 feet from my front door. I had co-workers who parked further away than I lived in order to save on parking fees.

Now I live about 1 mile from my job, not too bad still.

9
0

Remember SeaMicro? Red-ink-soaked AMD dumps it overboard

Nate Amsden
Bronze badge

wonder what verizon is doing

They made a big splash about their next-gen SeaMicro public cloud maybe a year ago? SeaMicro did a bunch of custom work for them if I remember right.

2
0

Got iOS 8.3 installed? Pssh, you are SO last week… version 8.4 is out

Nate Amsden
Bronze badge

here i am

With Wi-Fi perhaps permanently disabled now so my Note 3 doesn't upgrade to Android 5. Sigh. Good thing I don't use much mobile data anyway.

0
0

Tintri gets all touchy-feely with latest OS update

Nate Amsden
Bronze badge

yes that's why you have dedicated servers/VMs to run such tools.

Obviously if you have something like a Tintri array you're going to have a fair number of servers running off of it. Nobody should bat an eye at one/two/three more for management purposes.

For me, I run a XenApp Fundamentals server (real basic) for the bulk of our GUI management tools; it works really well, good price, and very effective. Larger teams may not be able to get by with Fundamentals though (5 named users max), depending on use case.

1
0

Do your switches run Cumulus Linux? Puppet will pull your strings

Nate Amsden
Bronze badge

auto provisioning is old

The article kind of makes it seem like provisioning a switch automatically is a new thing; the technology is probably 10-15+ years old. New for Puppet, maybe.

Myself, I don't need auto provisioning; the network isn't big enough to warrant it. Even my biggest network, which was about 70 switches, didn't warrant auto provisioning (nowadays that 70-switch network could be collapsed into a couple of blade enclosures and a very small number of switches).

Provisioning core switches like edge switches sounds scary too.

3
0

What type of storage does your application really need?

Nate Amsden
Bronze badge

not SAN

at least not in the traditional sense. This article reads like you are building your own storage system; I see no mention of something simple like high availability, or reliability, or replication, online upgrades, caching, storage system architecture, or the multitude of other software/hardware capabilities that modern storage systems offer.

For some folks building their own is the way to go, though for most it is probably not the best idea. Something as simple as firmware (and firmware upgrades) on the storage media can pose quite a problem.

The last time I had to deal with firmware issues on HDDs was about 6 years ago, on Dell servers, and the only way to upgrade the firmware was for someone to go to the server and boot with a DOS boot disk (the firmware didn't even have a changelog; we upgraded as a last resort to try to fix a performance issue, and it turned out it worked).

My systems since then have been boot-from-SAN (with only a handful of exceptions), so I haven't had to worry about drive firmware; the storage system upgrades it for me in the background.

3
0

Cloudy McCloud Cloud HP just said public cloud 'makes no sense for us'

Nate Amsden
Bronze badge

Re: one thing they may lose

I'll forfeit my pay for the week..

0
0
Nate Amsden
Bronze badge

one thing they may lose

The only value I saw HP might get out of their public cloud is experience at some scale with OpenStack (they said as much themselves last year). HP's public cloud (as far as I was told) was mostly built by ex-Rackspace people (since Rackspace shifted focus away from OpenStack a few years ago to more managed services etc) who were upset with Rackspace's change in direction.

When HP announced it, I ripped into them myself, since they adopted many of the same broken designs that Amazon and others had been offering; the biggest one was provisioning.

I never used HP's cloud, but I did use Amazon's for a couple of years: the worst experience of my career. Never again. I don't have any higher expectations for Google's or Microsoft's cloud either.

IaaS in public clouds is simply broken by design. Maybe PaaS is better in that respect because it can mask some of the deficiencies of the broken IaaS. SaaS public cloud seems the most mature, masking even the failures of the PaaS and IaaS. But of course as you move from IaaS to PaaS to SaaS you lose ever more flexibility and control; for some that is a good thing, for many it is probably not.

To date the only model I have seen give actually good results is SaaS, but of course the scope of products in that space is relatively limited. The company I work for has moved off of multiple SaaS platforms to in-house solutions because the SaaS wasn't flexible enough.

0
0

Chef Software cooks up new Chef Delivery DevOps product

Nate Amsden
Bronze badge

as a chef customer

For the past four years I have stood by my tagline for Chef: "makes the easy things hard, and the hard things possible". You can Google that phrase for more background from me on the topic if anyone is interested.

Using Chef was never my idea, and it's *far* too much work (to be worth it) to switch to something else at this point, so we just try to live with it.

Before that I had roughly 7 years invested in CFEngine (version 2) and was a much happier person with that product.

Chef has some interesting concepts in it, but it's such overkill for probably 90% of the organizations out there.

If you are not ready to dive deep into Ruby, steer clear of Chef. I was never prepared to dive into Ruby after having many bad experiences supporting it in the past, and using Chef has just continued to rub salt in those old wounds. Fortunately I have a teammate who is very good at Chef, so he handles most of it.

I gave my feedback to some of the founders of Chef when I met them several years ago; their response was "oh, you know an Apache config file right? Postfix? Sendmail? BIND? Chef is the same, you'll pick it up in no time.." (here I am 4 years later and my original comments stand).

Oh, and don't get me started on hosted Chef taking scheduled outages in the middle of the day (for paying customers). Just brain dead. What service provider in the world takes scheduled outages in the middle of a business day? I've spoken to the VP in charge of that stuff (a friend of mine from 10 years ago) and their excuse either made me want to laugh or cry, I'm not sure which.

1
0

Oracle gets ZFS filer array spun up to near-AFA speeds

Nate Amsden
Bronze badge

Where do i get

20k rpm disks?

Haven't heard of them

2
0

Kaminario playing 3D flash chippery doo-dah with its arrays

Nate Amsden
Bronze badge

Re: Same architecture as others... what's different?

The containers don't require much management. We don't use Docker, just LXC. It is for a very specific use case: basically, the way we deploy code for this application is that we have two "farms" of servers and flip between the two. Using LXC allows a pair of servers (server 1 would host web 1A and web 1B, for example) to utilize the entire underlying hardware (mainly concerned about CPU; memory is not a constraint in this case), because the cost of the software is $15,000/installation/year (so if you have two VMs on one host running the software, that is $30k/year if they are both taking production traffic, regardless of CPU cores/sockets). We used to run these servers in VMware but decided to move to containers as more cost effective -- the containers were deployed probably 8 months ago and haven't been touched since. Though I am about to touch them with some upgrades soon.

I think containers make good sense for certain use cases; limitations in the technology prevent them from taking over roles that a fully virtualized stack otherwise provides (I am annoyed by the fact that autofs with NFS does not work in our containers - last I checked it was a kernel issue). I don't subscribe to the notion that you need to be constantly destroying and re-creating containers (or VMs) though. I'm sure that works for some folks - for us, we have had VMs running continuously since the infrastructure was installed more than 3 years ago (short of reboots for patches etc). I have never, ever had to rebuild a VM due to a failure (which was a common issue when we were hosted in a public cloud at the time).

0
0
Nate Amsden
Bronze badge

Re: Same architecture as others... what's different?

For me at least, managing my 3PAR systems is a breeze. I was reminded how easy it was when I had to set up an HP P2000 for a small 3-host VMware cluster a couple of years ago (replaced it last year with a 3PAR 7200). Exporting a LUN to the cluster was at least 6 operations (1 operation per host path per host).

Anyway, my time spent managing my FC network is minimal. Granted my network is small, but it doesn't need to be big to drive our $220M+ business. To date I have stuck with QLogic switches since they are easy to use, but I will have to go to Brocade, I guess, since QLogic is out of the switching business.

My systems look to be just over 98% virtualized (the rest are in containers on physical hosts).

I won't go with iSCSI or NFS myself; I prefer the maturity and reliability of FC (along with boot from SAN). I'm sure iSCSI and NFS work fine for most people; I'm happy to pay a bit more to get even more reliability out of the system. Maybe I wouldn't feel that way if the overhead of my FC stuff wasn't so trivial. They are literally about the least complex components in my network (I manage all storage, all networking, and all servers for my organization's production operations; I don't touch internal IT stuff though).

As for identifying VMs that are consumers of I/O, I use LogicMonitor to do that. I have graphs that show me globally (across vCenter instances and across storage arrays) which are the top VMs that drive IOPS, throughput, latency etc. Same goes for CPU usage, memory usage, whatever statistic I want - whatever metric is available to vCenter is available to LogicMonitor (I especially love seeing top VMs for CPU ready%). I also use LogicMonitor to watch our 3PARs (more than 12,000 data points a minute collected through custom scripts I have integrated into LogicMonitor for our 3 arrays), along with our FC switches, load balancers, ethernet switches, and bunches of other stuff. It's pretty neat.
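
The custom 3PAR scripts are specific to the CLI output, but the general shape is simple: ssh in, run a stat command, print datapoints. A very rough sketch (hostname, credentials and the datapoints are placeholders; statcpu is a real 3PAR CLI command, but a real script parses its actual column layout, and I believe LogicMonitor script datasources can read simple key=value output - check their docs):

    import paramiko

    # Hypothetical host/credentials; a real script pulls these from device properties
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect("3par01.example.com", username="monitor", password="secret")

    # "-iter 1" takes a single sample and exits
    _, stdout, _ = client.exec_command("statcpu -iter 1")
    output = stdout.read().decode()
    status = stdout.channel.recv_exit_status()   # 0 if the CLI command succeeded
    client.close()

    # Emit key=value datapoints for the collector to pick up
    print(f"cli_exit_status={status}")
    print(f"output_lines={len(output.splitlines())}")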

Tintri sounds cool, though for me it's still waaaaaaaaaaaayy too new to risk any of my stuff on. If there's one thing I have learned since I started getting deeper into storage in the past 9 years, it is to be more conservative. If that means paying a bit more here or there, or maybe having to work a bit more for a more solid solution, then I'm willing to do it. Of course 3PAR is not a flawless platform, and I have had my issues with it over the years; if anything that has just reinforced my feelings about being conservative when it comes to storage. I could maybe be less conservative on network gear, or even servers perhaps (I am not, for either), but of course storage is the most stateful of anything. And yes, I have heard (from reliable sources, not witnessed/experienced myself) of multiple arrays going down simultaneously from the same bug (or data corruption being replicated to a backup array), so replication to a 2nd system isn't a complete cure.

(The same goes for many other things: I won't touch vSphere 6 for at least a year, and I *just* completed upgrading from 4.1 to 5.5. My load balancer software is about to go end of support. I only upgraded my switching software last year because it was past end of support. My Splunk installations haven't had official support in probably 18 months now, but they work; the last Splunk bug took a year to track down and I have no outstanding issues with Splunk, so I'm in no hurry to upgrade. The list goes on and on.)

Hell, I used to be pretty excited about VVols (WRT Tintri), but now that they are out, I just don't care. I'm sure I'll use 'em at some point, but there's no compelling need to even give them a second thought at the moment, for me anyway.

0
0

In-depth: Supermicro's youngest Twin is a real silent ice maiden

Nate Amsden
Bronze badge

A negative

A negative is that it's Supermicro.

They have their use cases, but not in my datacenters.

I'll take iLO 4 over their IPMI in less than one heartbeat. My personal Supermicro server's KVM management card has been down since the last FW upgrade a year ago. I have to go on site and reconfigure the IP. Fortunately I haven't had an urgent need to.

Looking forward to my new DL380Gen9 systems with 18 core cpus and 10GbaseT.

0
1

VMware tells partners, punters, to pay higher prices (probably)

Nate Amsden
Bronze badge

Seems kind of suspicious

Being April 1st

0
0

Microsoft gets data centres powered up for big UPS turn-off

Nate Amsden
Bronze badge

not enough runtime

for MS and Google I'm sure it's fine (for subsets of their workloads anyway), but for most folks these in-server batteries don't provide enough run time to flip to generator. I want at least 10 minutes of run time at full load (with N+1 power), in the event that automatic transfer to generator fails and someone has to flip the switch manually (same reason I won't put my equipment in facilities protected by flywheel UPSes).

I heard a presentation on this kind of tech a few years ago and one of the key takeaways was that 90% of electrical disruptions last less than two seconds. Not willing to risk the other 10% myself.

2
0

Silk Road coder turned dealer turned informant gets five years

Nate Amsden
Bronze badge

I bet bellevue police were excited

To have a criminal to go after. I lived there for 10 years, great place. The running joke was the cops had nothing to do. Police response time at one point, I read, was under 2 minutes. I had some very amateur drug dealers living next to me in the luxury apartments I was at while there. I didn't know until their supplier busted down their door to get after them for ripping him off. The police came and were stuck outside; I think my sister let them into the building.

A major international prostitution ring (it covered many states) ran from Bellevue for years too; it was busted 2 years ago (mostly by the feds).

I miss Bellevue, though from a job standpoint there's too much Amazon and Microsoft influence. I moved to the Bay Area almost 4 years ago.

3
0

Mr FlashRay's QUIT: Brian Pawlowski joins flashy upstart Pure Storage

Nate Amsden
Bronze badge

don't need him to show flashray failed

Just have to look at the product. NetApp should be incredibly embarrassed by that product. Seems like they announced it a good year or two too early. (And no, marking it as "controlled deployment" isn't an excuse; control it all you want, keep that kind of shit under total NDA and don't tell the public until it's ready.)

0
0

F5 hammers out a virtual load balancer

Nate Amsden
Bronze badge

I believe

F5 has had a virtual BIG-IP for years; just use that. Maybe license-limit the features for the price point. But this product just seems like a waste of time.

0
0

The storage is alive? Flash lives longer than expected – report

Nate Amsden
Bronze badge

HP posted this info

HP's latest 3PAR SSDs all come with an unconditional 5 year warranty.

Oct 9, 2014

http://h30507.www3.hp.com/t5/Around-the-Storage-Block-Blog/Worried-about-flash-media-wear-out-It-s-never-a-problem-with-HP/ba-p/172690#.VQrfvUSzvns

"The functional impact of the Adaptive Sparing feature is that it increases the flash media Drive Writes per Day metric (DWPD) substantially. For our largest and most cost-effective 1.92TB SSD media, it is increased such that an individual drive can sustain more than 8PB of writes within a 5-year timeframe before it wears out. To achieve 8PB of total writes in five years requires a sustained write rate over 50MB/sec for every second for five years."

("Adaptive Sparing" is a 3PAR feature)
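
That "over 50MB/sec" figure checks out with quick math (decimal units assumed):

    # Sustained write rate needed to hit 8PB of writes in 5 years
    total_bytes = 8e15                   # 8 PB (decimal)
    seconds = 5 * 365 * 24 * 3600        # 5 years
    print(f"~{total_bytes / seconds / 1e6:.0f} MB/s")   # ~51 MB/s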

another post about cMLC in 3PAR:

Nov 10, 2014

http://h30507.www3.hp.com/t5/Around-the-Storage-Block-Blog/cMLC-SSDs-in-HP-3PAR-StoreServ-Embrace-with-confidence/ba-p/176624#.VQrfqESzvns

0
0

IBM's OpenPower gang touts first proper non-Big Blue-badged server

Nate Amsden
Bronze badge

might this

OpenPower thing, with Tyan and the like making parts for it, end up like Itanium? For a while a few "white box" companies were making Itanium boxes, but the market just wasn't there for them (I think HP is the last shop making Itanium systems). I expect the same to happen to Power.

I'm sure it will continue to do fine in its IBM niche on IBM hardware..

1
0

MacBooks slimming down with Sammy's new 3D NAND diet pills

Nate Amsden
Bronze badge

Re: 5 year warranty for the EVO, not 10 years.

I don't consider myself a "heavy" user of my laptop (compared to some folks I know anyway), though it is my daily driver.. Samsung's app says about 5.9TB of data written since my 850 Pro was installed, in late August 2014 I want to say. I know there's a way to get this in Linux (where I spend 98% of my time booted to), but I forget what tool offhand. Good to know that even at this level, wear wise, I have a long way left to go.
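
(The tool I was blanking on is most likely smartmontools; a rough sketch of the idea, assuming the drive reports SMART attribute 241 "Total_LBAs_Written" in 512-byte units like Samsung consumer SSDs generally do - the device path is just an example and this needs root:)

    import subprocess

    # Read SMART attributes; /dev/sda is just an example device
    out = subprocess.run(["smartctl", "-A", "/dev/sda"],
                         capture_output=True, text=True).stdout

    for line in out.splitlines():
        if "Total_LBAs_Written" in line:
            lbas = int(line.split()[-1])   # RAW_VALUE is the last column
            print(f"~{lbas * 512 / 1e12:.1f} TB written")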

1
0

MOVE IT! 10 top tips for shifting your data centre

Nate Amsden
Bronze badge

Re: 4 labels per cable

query the port...that doesn't work so well if the cable is not connected though.

The system needs to be usable in an offline manner; finding what is plugged into what online isn't too difficult, but when adding/moving/changing stuff, once things are unplugged it's helpful to know what cable goes where.

4
0
Nate Amsden
Bronze badge

4 labels per cable

My cables get 4 labels per cable: the outermost on each end indicates what that end plugs into, the innermost on each end indicates what the other end of the cable plugs into. The last systems my co-worker installed took him, he said, an average of about 1 hour per server for the cabling/labeling (3x 1G cables, 4x 10G cables, 2x 8G FC cables, 2x power = 44 labels; maybe someday I'll have blades). Fortunately he LOVED the idea of having 4 labels per cable as well and was happy to do the work.
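
To make the scheme concrete, here's what the four labels on one hypothetical cable would read (device/port names are made up):

    # 4-label scheme for a single cable: each end gets an "outer" label (what this
    # end plugs into) and an "inner" label (what the far end plugs into).
    end_a = "web01 eth2"       # hypothetical server port
    end_b = "sw03 port 14"     # hypothetical switch port

    labels = {
        "end A outer": end_a,
        "end A inner": end_b,
        "end B outer": end_b,
        "end B inner": end_a,
    }
    for position, text in labels.items():
        print(f"{position}: {text}")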

I also use color coded cables where possible, at least on the network side. I'm happy my future 10G switches will be 10GbaseT which will give me a very broad selection of colors and lengths that I didn't have with the passive twinax stuff.

Use good labels too; it took me a few years before I came across a good label maker + labels. Earlier ones didn't withstand the heat (one of my big build-outs in 2005 had us replacing a couple thousand labels as they all fell off after a month or two, then fell off again). I have been using the Brady BMP21 for about the past 8 years with vinyl labels (looks/feels like a regular label, and I've NEVER had one come off).

Another labeling tip I came across after seeing how on-site support handled things. Even though my 10G cables were labeled properly, it was basically impossible to "label" the 10G side on the servers themselves, with 4x 10G ports going to each server (two sets of two, so it still matters which cable goes to which port). I did have a drawing on site that indicated the ports, but the support engineer ended up doing something even simpler that I had not thought of (at one point we had to have all of our 10G NICs replaced due to a faulty design), which was to label them "top left", "top right", "bottom left", "bottom right" for connecting to the servers (these NICs were stacked on each other, so it was a "square" of four ports across two NICs). Wish I would have thought of that! I've adjusted my labeling accordingly now.

Also, I skip the cable management arms on servers; they restrict airflow. I just use cables cut close to length so there is not a lot of slack. Cable management arms are good if you intend to do work on a running system (hot swap a fan or something), but I've never really had that need. I'd rather have better airflow.

Wherever possible I use extra-wide racks too (standard 19" rails but 31" wide overall) for better cable management. In every facility I have been in, power has always been the constraint, so sitting two 47U racks in a 4 or 5 rack cage still allowed us to max out the power (I use Rittal racks) and usually have rack space left over.

Also temperature sensors, my ServerTech PDUs each come with slots for two temp+humidity probes, so each rack has four sensors (two in front, two in back), hooked up to monitoring.

I also middle mount all "top of rack" network gear for better cable flow.

Me personally, I have never worked for an organization that came to me and said "hey, we're moving data centers". I've ALWAYS been the key technical contact for any such discussions and have had very deep input into any decisions that were made (so nothing would be a surprise). Maybe it's common in big companies; I've never worked for a big company (probably never will, who knows).

2
0

Musk: 'Tesla's electric Model S cars will be less crap soon. I PROMISE'

Nate Amsden
Bronze badge

locations

Of course the problem is more the location/quantity of charging stations. It's rare on my road trips (western U.S.) that I'm more than 30 miles away from a gas station at any given time. When I drive at night I have more range anxiety though, even with gas, knowing that some gas stations aren't open during really late hours. I was very close to running out of gas one night several years ago because I couldn't find an open gas station (~1-2 AM); I eventually found one (it wasn't the brand of gas I wanted to use, I don't recall the brand, but I didn't have any choice if I wanted gas at that moment). I aim not to get under 60 (bare minimum) miles of range when driving late at night before refueling (on road trips anyway).

When around home though, I often drive my car to the bone (the gas gauge stops telling me how many miles are left). I've been told this isn't a great idea, but I do it anyway; I don't plan to keep this car much past the warranty (75k miles). I got it because it's fun, not because I want it to last forever or give me wonderful miles per gallon. I can't imagine my next car not having torque-vectoring all wheel drive (or a turbo w/direct injection, though those two are pretty common now).

0
2

Patch Flash now: Google Project Zero, Intel and pals school Adobe on security 101

Nate Amsden
Bronze badge

good thing

adobe doesn't have to worry about paying cash bounties for security issues

1
0

Devs don't care about cloud-specific coding, right? Er, not so

Nate Amsden
Bronze badge

Just FYI, there have been cloud operators offering virtual data centers for many years now; I remember talking to Terremark about one such offering just over 5 years ago. The cost was fairly high: my cost for building a new set of gear was around $800k (all tier 1 hardware/software with 4-hour on-site support etc), while their cost to us was either $300k/month with no up front installation charges, or ~$140k/mo with a $3M installation charge (yes, you read that right). But it was possible; in their case it was VMware, and the $3M install fee option was for Cisco UCS-based equipment.

I'm sure it's a bit cheaper now ..

0
0
Nate Amsden
Bronze badge

I've been saying for nearly 7 years

Greater than 90% of the devs I have worked with (all of whom were working on pretty leading edge new application code bases, not talking about legacy code here!) don't understand cloud, and don't write to it.

I have worked at two orgs that launched their apps from day one in a major public cloud; both had the same issues because the code wasn't built to handle that environment, and chaos ensued (no surprise to me of course). The first one is defunct; the second one moved out of the public cloud within a few months, and I manage their infrastructure today with very high levels of availability, performance and predictability, and the cost is a fraction of what we would be paying a public cloud provider.

Seeing public cloud bills in the half-a-million-a-month range (or more) is NOT uncommon (as absurd as that may sound to many).

I know it's possible to write for this kind of thing, but I maintain that at every org I've worked for over the past 12 years, the business decided to prioritize customer features far above and beyond anything resembling the high levels of fault tolerance which are required for true cloud environments. That continues right now. Again, this decision process makes sense to me (cost/benefit etc); at some point some orgs will get to a scale where that level of design is important, but most (I'd wager in excess of 85%) will never get there. You can't (or shouldn't) try to design the world's most scalable application from day one, because, well, you're VERY likely to get it wrong, and it will take longer to build (cost more etc). Just like I freely admit the way I manage technology wouldn't work at Google/Amazon scale (and their stuff doesn't work for us, or any company I've worked for).
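
By "write for this kind of thing" I mean the unglamorous plumbing: assume any instance or dependency can vanish mid-request and handle it. A minimal sketch of one such pattern, retry with exponential backoff and jitter (fetch_from_service is a hypothetical flaky call):

    import random
    import time

    def call_with_backoff(fn, attempts=5, base_delay=0.5):
        """Retry a flaky call with exponential backoff plus jitter."""
        for attempt in range(attempts):
            try:
                return fn()
            except Exception:
                if attempt == attempts - 1:
                    raise   # out of retries, surface the failure
                # Sleep 0.5s, 1s, 2s, ... plus jitter to avoid thundering herds
                time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.25))

    # Usage (hypothetical): call_with_backoff(lambda: fetch_from_service("inventory"))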

You can fit a square peg in a round hole if you whack it hard enough, but I don't like the stress or headaches associated with that.

6
0

Google chips at Amazon's Glacier with Cloud Storage Nearline

Nate Amsden
Bronze badge

Re: Where's my

Haha, you're funny.

0
0
Nate Amsden
Bronze badge

Where's my

400 megabyte per second internet connection.. yeah, that's right, I don't have one.

Local storage it is then.

0
2

Doh! iTunes store goes down AFTER Apple Watch launch

Nate Amsden
Bronze badge

Next time use cloud

Because everyone that uses cloud knows there are never outages when you are using cloud. Hurry up, Apple, and deploy cloud!

I installed cloud.bin / cloud.exe (depending on platform) on all of my servers and have had very high reliability since. Other setups may be more complicated.

0
0
