* Posts by Nate Amsden

1152 posts • joined 19 Jun 2007

Nutanix vs VMware blog war descends into 'he said, she said' farce

Nate Amsden
Bronze badge

chuck is in charge of strategy at vmware?

I thought he was just a blogger. I used to interact with him semi-regularly on his EMC blog; he never came across to me as someone who should be in charge of anything too important. I can see him probably having a good relationship with customers, but making decisions at that level, I don't know.

2
0

A looksee into storage upstart Hedvig's garage

Nate Amsden
Bronze badge

they picked a terrible name

They should have named it something like "Storage Magic v1.0"

because that's what it sounds like they are trying to make.

0
0

ETERNUS embiggens at peak of Mt Fujitsu (it's a drive array, not Tolkien)

Nate Amsden
Bronze badge

netapp goes higher than that

I think you are quoting single-system numbers; their website claims 103PB with a 24-node cluster as the top end for NAS, or 34PB for SAN (max 8 controllers).

0
0

Linux Mint 17.2: If only all penguinista desktops were done this way

Nate Amsden
Bronze badge

Re: Lack of upgrades is a killer for me

I was thinking similar before I switched to Mint from Ubuntu... then I realized I hadn't upgraded Ubuntu in several years (I stayed on well past the end of 10.04 LTS desktop support because I didn't want Unity, among other things). So I figure I won't be upgrading to another major version of Mint unless I get new hardware, which means a new install for me anyway. My personal laptop is 5 years old now; maybe it has another year or two left in it. But maybe I will go with Mint 17 again when I replace it, like I kept going with Ubuntu 10.04 LTS long after 12.x came out.

looks like I am on 17.1 at the moment.

1
1

HP one of the fairest, claims Gartner's magic quadrant on the wall

Nate Amsden
Bronze badge

will be interesting to see

for me anyway, what impact HP's latest 3PAR has on this stuff, since this is as of June of this year and I don't think the new 20k systems ship till August. They are what I'd consider the first *really* good 3PAR flash systems (not that the 7450 isn't good, it is, but these 20ks are just so much faster and very cost-effective relative to their competition anyway).

1
0

Yahoo! displaces Ask in Oracle's Java update crapware parade

Nate Amsden
Bronze badge

sounds like a waste

if ask has only 0.26% share, what does yahoo hope to gain? obviously distributing with java wasn't an effective strategy for ask.

13
0

All-flash array reports aren't all about all-flash arrays, rages Gartner

Nate Amsden
Bronze badge

Re: Call them what you want

Get a 7440 then. Same hardware as 7450 but allows spinning rust.

I wish I had gotten a 7440, but it was not available when I got my 7450.

Now, the two 20k models are NOT the same hardware; the SSD version has more CPUs and double the memory. Hoping HP pulls a 7440 with the 20k.

0
0

Corrective lenses needed for Gartner's flashy array vision

Nate Amsden
Bronze badge

3PAR controllers not the same

They are functionally the same, but the all-flash version has 32 more CPU cores and 1.7TB more memory/cache (assuming an 8-controller configuration in both).

I would have liked HP to offer a "hybrid" version with the faster controllers (it just gives more flexibility); maybe that will come in the future (like the 7440, which is the same as the 7450 but allows for spinning rust).

0
0

Bank: Without software mojo, Android OEMs are doomed to 'implode'

Nate Amsden
Bronze badge

don't want upgrades

Just security patches. Since ATT started sending android 5 to note 3 users like myself, I have had to keep wifi disabled (going on 2 months now) to prevent it from installing. There is NOTHING in android 5 I want. I don't want new features. The phone works fine as it is. I don't care if you double the battery life and make it run twice as fast, the phone works fine as it is. I'm already very careful what apps I install, and I don't do any purchases or online banking on my phone.

I just wish I could use wifi again without the risk of getting android 5. I'm happy to try to live with android 5 if I buy a new device. But not on my existing device.

0
1

Where’s the best place for your infrastructure bottleneck?

Nate Amsden
Bronze badge

i/o capacity

The article implies you can make your storage faster by making the pipe (bandwidth) bigger. In my experience at least, the pipe is almost never taxed (even 4Gb FC). I know there are cases where it is, but I suspect they are in the minority.

Of course experienced tech readers know this already.

I'm more concerned with queue depths at lower speeds (mainly because older gear has smaller queues) than throughput.

My servers are overkill but the cost isn't high: 2x10G links for VM traffic, 2x10G links for vmotion and fault tolerance, 2x1G links for host mgmt, and 2x8G FC links for primary storage (boot from SAN). With the exception of FC, everything is active/passive for simplicity.

11 cables out of the back of each DL38x system, including power and iLO. Good thing I have big racks with lots of cable management. With 4 labels per cable it takes a while to wire a new box, but we've added boxes at most twice a year over the past 3 years.

Maybe someday I'll have blades

0
0

Pure Storage pushes all-flash array purification

Nate Amsden
Bronze badge

Re: maybe

Discover doesn't start till tomorrow

0
0
Nate Amsden
Bronze badge

maybe

Pure could have picked a better day to announce, so as not to be dwarfed by the new 3PAR stuff...

0
0

There's data in your dashboard, so liberate it from Big Auto's grasp

Nate Amsden
Bronze badge

Re: can't listen to it

Planet is fine. Humans are fucked. Overpopulation. Planet will heal itself over time.

I plan to have some fun in the meantime because I know there's nothing I can do about it (realistically).

3
4
Nate Amsden
Bronze badge

can't listen to it

over the roar of my stereo (I think cars next to me can't hear their own over my stereo either). Also not interested in driving more economically. I bought my car to have fun, which means I get shitty gas mileage (even if I drive economically the car itself doesn't get good mileage anyway; cruising at 75MPH on open highway I hover around 19MPG for a pretty small car, though I'm sure I could boost my city mileage a lot if I didn't drive the way I do), and I burn my tires out probably in less than half the mileage they are rated for. I love to accelerate (and corner with torque-vectoring all wheel drive to plant my passengers' faces into the windows); I don't like to speed (doesn't get me there much faster). I've never gotten a ticket outside of a broken tail light (last one was 11 years ago). One of my best friends is similar, though he drives a Porsche and I drive a Nissan.

The GPS in my car for some reason says I have driven a top speed of 225MPH. I've been fast, though not quite that fast (almost half).

I live 1 mile from my office so I don't have much of a commute.

Going on another road trip starting Friday: Bay Area to Vegas for a week, then down to Arizona, and maybe swinging by LA or something on the way back.

Twin Peaks Vegas bikini contest here I come, then HP discover next week. So excited.

4
15

IaaS is OVER, ladies. Time for OpenStack to jump clear

Nate Amsden
Bronze badge

Re: Finally an article that realises the reality..

I think cloud is a good future for the SME, but that cloud is SaaS, not IaaS, which is what this article is referring to. IaaS cloud (even as a customer) still requires pretty significant knowledge to operate; of course SaaS does not.

Another thing that is driving cloud adoption, I'm sure, is just a lack of supply of (good) IT folk. If you can't find someone(s) to run infrastructure right, then you are probably better off not trying. Unfortunately (most) companies seem to think it's not worth paying decent money ($200k+) for people to do these kinds of things when in the end it could (depending on the person etc) save them million(s) per year. It'd be different if there were more supply (of people) on the market, but it seems clear to me, at least on the west coast of the U.S., there is not. I keep seeing and hearing stories about companies that pay high six figures per month to cloud providers and either don't care that they could save a lot, or just think that is "normal".

There was one good article here from someone who said something along the lines of: when your cloud spend crosses $20k/mo, you need to start looking at alternatives. For my org, cloud spend was about to explode above $100k/mo when we moved out 3.5 years ago, and my previous company was one of the ones spending $300-450k/mo on cloud (which I showed could be done in-house with a 3-month ROI, but ultimately the company wasn't interested, I left, and it collapsed not long after).

0
0

Psst. Want a cheap cloud, VM? Google has one. But there's a catch

Nate Amsden
Bronze badge

not VMs

These are containers, not VMs, right?

0
0

OpenStack private clouds are SCIENCE PROJECTS says Gartner

Nate Amsden
Bronze badge

Re: More of the Blindingly Obvious from Gartner

For once I think this Gartner report seems good. I'm sure there are a ton of management types out there who think if they install openstack they have an instant cloud.

I keep getting updates from a friend who uses it, and the word remains: steer clear. IaaS is not important to me; we get along fine without it. Too much hype behind openstack. Like SDN, too much hype there too.

Happy with vmware and "utility computing". Cloud can go to hell.

1
0

Reader suggestion: Using HDFS as generic iSCSI storage

Nate Amsden
Bronze badge

sounds terrible

Reminds me of a suggestion my VP of technology had at a company I was at several years ago; he wanted to use HDFS as a general purpose file storage system. "Oh, let's run our VMs off of it etc..." I just didn't know how to respond to it. I use the comparison of someone asking for motor oil as the dressing for their salad. I left the company not too long after, and he was more or less forced out about a year later.

There are distributed file systems which can do this kind of thing but HDFS really isn't built for that. Someone could probably make something work but I'm sure it'd be ugly, slow, and just an example of what not to even try to do.

If you want to cut corners on storage, there are smarter ways to do it and still have a fast system; one might be to just ditch shared storage altogether (I'm not about to do that, but my storage runs a $250M/year business with high availability requirements and a small team). I believe both vSphere and Hyper-V have the ability to vmotion w/o shared storage, for example (maybe others do too).

Or you can go buy some low-cost SAN storage; not everyone needs VMAX or HDS VSP-type connectivity. Whether it's a 3PAR 7200, or low-end Compellent, Nimble or whatever... lots of lower-cost options are available. I would stay away from the real budget shit, e.g. the HP P2000 (OEM'd Dot Hill I believe), just shit technology.

1
0

Hypervisor indecisive? Today's contenders from yesterday's Hipsters

Nate Amsden
Bronze badge

why spend time

"why spend time, energy and money virtualising more than you have to?"

Because in many cases using containers would require much more time and energy (as in human energy, and thus money) to manage than virtualization.

I have 6 containers deployed at my company for a *very* specific purpose (3 hosts w/2 containers per host; each container runs an identical workload). They work well. They were built about a year ago and haven't been touched much since. I have thought about broadening that deployment a bit more this year, not sure yet though. I use basic LXC on top of Ubuntu 12.04, no Docker around these parts. I adapted my existing provisioning system that we use for VMs (which can work with physical hosts too) to support containers.
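
(For illustration only, here's a minimal sketch of the kind of wrapper a provisioning system might grow to handle plain LXC containers. This is not my actual tooling; the container names and template are made up, and it assumes the stock lxc userspace tools are installed on the Ubuntu 12.04 host.)

#!/usr/bin/env python
# Minimal sketch: provision a couple of basic LXC containers on an Ubuntu host.
# Illustration only - not the real provisioning system; names/template are hypothetical.
import subprocess

def run(cmd):
    # Echo the command, then run it, failing loudly on any error.
    print("+ " + " ".join(cmd))
    subprocess.check_call(cmd)

def provision_container(name, template="ubuntu"):
    # Build the container rootfs from the named template (slow on first use,
    # since it bootstraps packages), then start it detached and show its state.
    run(["lxc-create", "-t", template, "-n", name])
    run(["lxc-start", "-n", name, "-d"])
    run(["lxc-info", "-n", name])

if __name__ == "__main__":
    # e.g. two containers per host, as described above (names are made up)
    for container in ("web-1a", "web-1b"):
        provision_container(container)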

Containers are nice but have a lot of limitations (lack of flexibility). They make a lot of sense if you are deploying stuff out wide: lots of similar or identical systems, in similar or identical configurations (especially network-wise). They are also most useful if you are working in a 'built to fail' environment, since you are probably not running your containers on a SAN, and unless things have changed containers don't offer live migration of any sort. So if I need to upgrade the firmware or kernel on a host, the containers on that host all require downtime.

So for me, 6 containers, and around 540 VMs in VMware.

I've had one vmware ESX host fail in the past 9 years (my history of using ESX; I've used other vmware products too of course - in this case it was a failing voltage regulator). Nagios went nuts after a couple of minutes; I walked to my desk, and by that time vmware had moved the VMs to other hosts in the cluster and restarted them (I had to do some manual recovery for a few of the apps). I don't think that happens with Docker, does it? (You don't have to answer that question.)

3
1

Microsoft's run Azure on Nano server since late 2013

Nate Amsden
Bronze badge

Re: Azure

they probably mean a lot of the VMs in azure for internal MS stuff that run on hyper-v use the nano stuff.

2
0

Dell: Good servers and PCs, but for storage ... oh dear

Nate Amsden
Bronze badge

dell sonicwall good

one area dell is good at in networking is Sonicwall. Been a customer since before they were acquired. Dell hasn't screwed up Sonicwall as far as I can tell. Completely forgot Dell bought em until 30 seconds ago.

1
0
Nate Amsden
Bronze badge

Dell does pretty good in storage

I think at least. I don't use their stuff, but I'd wager storage is a strong #2 behind servers with Dell, with networking being a very distant 3rd. Maybe I'm wrong though. Compellent and Exanet are both pretty good sets of technology. I *wish* HP had something equivalent to an Exanet in their NAS lineup (StoreAll and StoreEasy don't cut it).

For Dell shops at least, I think what they have is good enough for a lot of folks. They've come a long way from being an EMC reseller and just having EqualLogic (which I'd never touch).

You say they need a better storage culture; they have a better storage culture through their acquisitions. It's miles ahead of where it was 5 years ago.

I'm not a Dell customer right now but I certainly respect Dell a hell of a lot more these days than I did 5-6 years ago in large part because of the progress they've made in storage.

They tried investing in networking, but I think their acquisition of Force10 just fell apart (or so it seems anyway, maybe I'm wrong); some of the Force10 stuff hasn't been updated in what seems like 8 years now.

1
0

Flash banishes the spectre of the unrecoverable data error

Nate Amsden
Bronze badge

rebuild 600G 15k in 45min

This particular 3PAR array was not heavily loaded at the time (80x 15k RPM disks); much of my workload has been shifted to the all-flash system since, well, it's faster and has 4 controllers (I also like the 99.9999% availability guarantee on the new system, though the previous 3PAR has had 100% uptime since its deployment 3.5 years ago). But I just wanted to show that RAID, even with big drives, is still quite doable with a good RAID system (like 3PAR; I think XIV is similar though not as powerful/flexible as 3PAR).

rebuilding from a failure of a 600G 15k SAS drive in ~40 minutes.

http://elreg.nateamsden.com/3par-rebuild.png

Most/all of the drives in the system participate in the rebuild process, and latency remains low throughout. This technology obviously isn't new, easily 12-14 years old, quite possibly older.
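
(As a rough back-of-the-envelope illustration of why distributing the rebuild helps - the figures below are assumptions for the sake of the example, not measurements from this array, and real arrays also throttle rebuild I/O to protect latency:)

#!/usr/bin/env python
# Back-of-the-envelope: dedicated hot spare vs distributed rebuild.
# All numbers are illustrative assumptions, not measurements.
failed_drive_gb = 600        # capacity of the failed 600G drive
surviving_drives = 79        # drives left to share the rebuild work
per_drive_mb_s = 40          # assumed per-drive rate available for rebuild I/O

# Dedicated hot spare: all ~600GB funnels into a single drive.
spare_minutes = failed_drive_gb * 1024 / per_drive_mb_s / 60

# Distributed sparing: each surviving drive only rebuilds its small share,
# so the per-drive work (and thus the floor on rebuild time) is tiny; the
# observed ~40 minutes mostly reflects reads, parity math and throttling.
per_drive_gb = failed_drive_gb / float(surviving_drives)
distributed_minutes = per_drive_gb * 1024 / per_drive_mb_s / 60

print("dedicated spare : ~%.0f minutes of pure writes" % spare_minutes)
print("distributed     : ~%.0f minutes of pure writes per drive" % distributed_minutes)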

I wrote a blog post (sorry for the link) almost 5 years ago on the topic, and even quoted someone who was using RAID 6 with ZFS and suffered total data loss due to multiple failures (in part because it took more than 1 week to rebuild the RAID):

http://www.techopsguys.com/2010/08/13/do-you-really-need-raid-6/

Now I believe ZFS offers at least triple-parity RAID (optional); maybe they go beyond that too.

I don't have NL drives in my current arrays, just 10k, 15k and SSD. My last 3PAR with NL drives was 5 years ago (~320 drives); it rebuilt pretty quickly too, even with the 1TB disks, since again the rebuild was distributed.

*very* few of my servers have internal storage of any kind.

2
0

Rip up your AMD obits: Gaming, VR, embedded chips to lift biz out of the red by 2016, allegedly

Nate Amsden
Bronze badge

enterprise-class CPU experience

"We're the only guys in the ARM server business that have experience delivering and supporting enterprise-class CPUs"

How's that enterprise-class CPU business doing these days? 1.5% market share?

I was hoping AMD would continue their high-end CPU biz, but obviously the brains of that unit were gutted years ago and their roadmap ran dry (the bulk of my servers are still Opteron 62xx).

For me I have no interest in ARM, but I don't run hyperscale stuff. I think ARM will (sadly?) lose out to Intel just like AMD did when it comes to data center stuff.

2
0

OpenStack Daddy Chris C Kemp says it's like Linux in 1996

Nate Amsden
Bronze badge

Linux is about the same as 1996

To me anyway... desktop linux is much more user friendly, but really linux on the desktop never went anywhere (I say this as a Linux-on-the-desktop user since about 1997, still using it right now). I think back to my early days in 1996 with Linux - I think it was Slackware 3.x at the time. I suppose one big leap was things like apt-get, which I want to say I first saw in 98 or 99, but otherwise I don't see a whole lot of big usability enhancements for Linux's main use case as a server OS.

So I hope, usability-wise, Openstack doesn't evolve the way Linux did. I won't touch it with a 10-foot pole; it's still too unstable (I get semi-regular updates from a guy who uses it quite a bit).

0
0

Atlantis kicks its flashy upstart brethren right in the price tag

Nate Amsden
Bronze badge

you've got to be nuts

To trust 256GB+ to a Supermicro system. Just flat-out nuts. I won't deploy anything with remotely that much memory on it without something like HP Advanced ECC. That, and can Supermicro even identify which DIMM has gone bad if one does go bad these days? My last bout with them was more than a few years ago, for a reason, but at the time the strategy was to keep running tests and swapping DIMMs until you found the one that was bad. No easy little light that says oh hey, DIMM 9 is bad.

Not to mention, more often than not the way you'd find out memory was bad was by a system failure, either a lock-up or a crash. I'm sure it happens on occasion on HP gear too, but I have never seen that behavior in my past 12 years of (on and off) usage of HP w/Advanced ECC, going back to the DL3xx G2 anyway (I won't touch HP DL100-series systems for reasons similar to why I won't touch Supermicro, for anything business related at least).

My last round with Dell gear was the R610 series a few years ago; their "version" of Advanced ECC at the time disabled a third to half of the DIMM sockets on the system! Maybe it is fixed now.

0
0

Low price, big power: Virtual Private Server picks for power nerds

Nate Amsden
Bronze badge

power nerd here

I've had a server at a co-location facility for about 10 years now. The past four years have been in a facility here in the Bay Area: $200/mo for ~200W of power and 1/4 of a rack. I wouldn't be caught dead with my company's assets in this facility, but for my own personal use it works fine. 100mbit unlimited bandwidth with 10s of gigs of backbone connectivity.

The server runs ESXi and a half dozen VMs. Fortunately it's been fairly reliable; the last time I had to go on site was a couple of years ago for a bad disk and, I think, to reinstall ESXi on a new USB flash drive.

I love my site-to-site vpn, and I proxy all my home http traffic over the vpn through the colo; it really accelerates things. Home is about 23ms from the colo at the moment.

I tried the cloud route for about a year (Terremark vCloud Express) a few years ago and, well, the cost/benefit of doing what I do now is quite a bit better than cloud was at the time (not going back either). Not touching Azure or amazon with a 5-mile pole.

1
0

EMC to open-source ViPR - and lots of other stuff apparently

Nate Amsden
Bronze badge

openstack tools?

Since when are " Chef, MongoDB, Docker, Cassandra and others" open stack tools? these are all stand alone applications/platforms. Maybe Openstack can use them but people have been using them without Openstack for years, and in some cases before Openstack probably even existed.

I wonder if EMC was just not getting much/any traction with ViPR so they decided to open source it instead. The concept of the product sounded pretty neat at first anyway.

1
0

HP wag has last laugh at US prez wannabe with carlyfiorina.org snatch

Nate Amsden
Bronze badge

why

does anyone care about carly? Makes no sense to me. She was (maybe still is) a contributor on CNBC too; I just fast-forwarded through anything she had to say. I could understand if she was still CEO of some big company (well, maybe she is, I haven't checked).

Maybe if everyone ignores her she will just go away.

7
0

Inside the guts of Nano Server, Microsoft's tiny new Cloud OS

Nate Amsden
Bronze badge

took long enough

I don't know why I remember shit like this

(in reference to MS hotmail FreeBSD->Windows migration)

http://www.theregister.co.uk/2002/11/21/ms_paper_touts_unix/

"We find also that the Windows image size can be a real inconvenience on a big farm: "The team was unable to reduce the size of the image below 900MB; Windows contains many complex relationships between pieces, and the team was not able to determine with safety how much could be left out of the image. Although disk space on each server was not an issue, the time taken to image thousands of servers across the internal network was significant. By comparison, the equivalent FreeBSD image size is a few tens of MB.""

5
1

NetApp's all-flash FAS array is easily in the top, er, six SPC-1 machines

Nate Amsden
Bronze badge

pretty nice results

is this the first NetApp FAS result with reasonable capacity utilization? Usually they are pretty high up in the unused storage arena.

in any case, pretty fast. good job netapp

2
0

If hypervisor is commodity, why is VMware still on top?

Nate Amsden
Bronze badge

Maybe not a commodity

The article seems to try to claim it is a commodity, then goes and says why it's apparently not yet (I would agree it is not yet, too).

Citrix has been interesting to me for a while; they won't (in my experience) bat an eye and try to sell you Xen if you say you're interested in VMware. They know VMware is better; Xen is an option if you are *really* cost conscious, but I've admired them to some extent for knowing it's not a competitive product and not trying to sell it to the wrong customers. I haven't used XenServer myself but am a Netscaler and XenApp (tiny installation) customer.

It's too bad the state of tools for KVM hasn't gotten better yet. There's quite an opportunity out there for someone to step up (I haven't used KVM either). But of course, historically the open source crowd can't stay focused on usability for more than 30 seconds; they get bored and start bolting on new features (I say this as a Linux server+desktop user for almost 20 years now).

Maybe KVM's future is just being a background thing in Openstack, and the management layers will be built for Openstack (again, I haven't used Openstack, but people I trust who use it tell me it's still not stable, so I have no interest in touching it in the meantime).

For me, VMware has been probably the most solid piece of software I've ever used in my career. I have had a single PSOD in almost 10 years of using ESX/ESXi (the PSOD was triggered by a failing voltage regulator). Other than that, no crashes and very few bugs (practically a rounding error, but keep in mind I leverage only a fraction of the platform's abilities, probably sticking to the things that are the most mature). I've certainly had FAR fewer support tickets with vmware than I have for any other software or even hardware product I have used. Citrix just called me yesterday saying my opening 10 tickets so far this year on Netscaler set off an alarm on their end, thinking maybe I'm having a lot of problems and maybe I need more dedicated support resources. I told them no, that is pretty typical; lots of issues but I am still a happy customer. I've had one memory leak they have been unable to trace for the past 18 months.

VMware just runs and runs and runs... the track record with me is impeccable. The only thing that got me even THINKING about changing was the vRAM tax a few years ago. I have seen absolutely nothing from hyper-v, Xen, or KVM which makes me even want to look at them still (I do keep very loose track of them).

The cost for vSphere for me is still quite reasonable (enterprise+). I do not subscribe to the most sophisticated management tools; our needs are pretty basic. Especially as systems become more powerful - my newest DL380 Gen9s have the newer 18-core/36-thread Intel chips in them. I want to say my early ESX 3.5 systems were DL380 G5s with dual socket quad core; I think the entry-level ESX at the time (no vmotion etc) was $3500 for 2 sockets, which came to $437/core (not sure if that included support or not). The latest systems are about $10k for 2 sockets w/3-year support for enterprise+, which is about $277/core. So overall cost is down quite a bit (while features are way up), and not only that, per-core performance is of course much better than 9 years ago. I'm happy.

I've been a vmware customer since 1999; I still have my 'vmware for linux 1.0.2' CD around here somewhere. Their stuff works really well, and for my organization and myself the cost is still very worth it. Now, that vRAM tax with my 384G servers would have been too much, but fortunately that never materialized as a problem for us (we never used the versions that had that tax imposed).

3
1

D-Link: sorry we're SOHOpeless

Nate Amsden
Bronze badge

does it matter?

at the end of the day, in a year maybe 10% of the affected routers will be patched, if that?

0
0

Want to super scale-out? You'll be hungry for flash

Nate Amsden
Bronze badge

reading this

makes me glad I bought a 4-controller 3PAR 7450 - up to 460TB of raw flash (and in 5 years I'm sure that number will be in the 1PB range with the larger-capacity SSDs that will most certainly come out in that time). Not for everyone I'm sure, but it does the job for me just fine.

Realistically though, I don't see my org going past 200TB of raw flash (meaning I don't have to expand the footprint of the original installation, which is 8U) in the next 3-5 years (even that is a BIG stretch).

0
0

Thinking of following Facebook and going DIY? Think again

Nate Amsden
Bronze badge

I feel sorry

for anyone who is even facing this question that is not at a big org.

I had clueless management a few years ago at another job; they wanted to build a 140+ node hadoop cluster and wanted to do it 'on the cheap' by going with some random whitebox supermicro shop. When that decision was made I left the company within 3 weeks (the proposal I had was higher-quality gear with NBD on-site support; I knew there would be a good number of failures).

I was expecting bad things to happen, but the shit really hit the fan; it was great to hear the stories. Before they even received the first round of equipment, the crap vendor they decided to go with said they had to be paid up front - they didn't have the capital to front the customer to buy the equipment. Not only that, the experience was so bad the new director of ops (hired a couple of months after I left; I never knew him) said they would never do business with this company again (this was before the first system landed in their data center).

So they got the gear and installed it. As I suspected, they didn't burn anything in up front, so for the following six months, at least, they had roughly 30-35% of their hadoop cluster offline due to hardware failures. Because of the quorum hadoop operates with, the head hadoop engineer told me basically half the cluster was down for six months while they tried to resolve stuff. It was HILARIOUS. How much business was lost as a result of trying to save about 20-25% on the project budget?

While the project was underway, the CTO of the company said they would just hire a college intern to "go swap failed hard disks", as a response to my wanting on-site support. As recently as a year ago they had not ever hired any interns to swap hard disks. I knew the quality was going to be crap and we weren't (at the time) staffed to support that. What actually happened proved to me I was right on every level all along; it made me feel damn good.

The director in question quit within a year and the VP quit not long after (I believe he was on the verge of being pushed out). The technology side of the company basically fell apart after I left; they hired 9 people to try to do my job but still failed. Two of my former co-workers BEGGED to get laid off (so they could get severance packages) when the company had a round of layoffs a few years ago; both got their wish.

Then they went to the exact opposite, from super cheap to super expensive, deploying a lot of Cisco UCS for a low-margin advertising business. Not sure how long that lasted; last I heard they refreshed everything onto Dell again.

I do miss that company for one reason though: the commute. The office was literally about 800 feet from my front door; I had co-workers who parked further away than I lived in order to save on parking fees.

Now I live about 1 mile from my job, not too bad still.

9
0

Remember SeaMicro? Red-ink-soaked AMD dumps it overboard

Nate Amsden
Bronze badge

wonder what verizon is doing

They made a big splash about their next-gen seamicro public cloud maybe a year ago? Seamicro did a bunch of custom work for them if I remember right.

2
0

Got iOS 8.3 installed? Pssh, you are SO last week… version 8.4 is out

Nate Amsden
Bronze badge

here i am

With wifi perhaps permanently disabled now so my note 3 doesn't upgrade to android 5. Sigh. Good thing I don't use much mobile data anyway.

0
0

Tintri gets all touchy-feely with latest OS update

Nate Amsden
Bronze badge

yes that's why you have dedicated servers/VMs to run such tools.

Obviously if you have something like a tintri array you're going to have a fair number of servers running off of it. Nobody should bat an eye at one/two/three more for management purposes.

For me, I run a XenApp Fundamentals server (real basic) for the bulk of our GUI management tools; it works really well, good price, and very effective. Larger teams may not be able to get by with Fundamentals though (5 named users max), depending on use case.

1
0

Do your switches run Cumulus Linux? Puppet will pull your strings

Nate Amsden
Bronze badge

auto provisioning is old

The article kind of makes it seem like provisioning a switch automatically is a new thing; the technology is probably 10-15+ years old. New for Puppet, maybe.

Myself, I don't need auto provisioning; my network isn't big enough to warrant it. Even my biggest network, which was about 70 switches, didn't warrant auto provisioning (nowadays that 70-switch network could be collapsed into a couple of blade enclosures and a very small number of switches).

Provisioning core switches like edge switches sounds scary too.

3
0

What type of storage does your application really need?

Nate Amsden
Bronze badge

not SAN

at least not in the traditional sense. This article reads like you are building your own storage system; I see no mention of something simple like high availability, reliability, replication, online upgrades, caching, storage system architecture, or the multitude of other software/hardware capabilities that modern storage systems offer.

For some folks building their own is the way to go, though for most it is probably not the best idea. Something as simple as firmware (and firmware upgrades) on the storage media can pose quite a problem.

The last time I had to deal with firmware issues on HDDs was about 6 years ago, on Dell servers, and the only way to upgrade the firmware was for someone to go to the server and boot with a DOS boot disk (the firmware didn't even have a changelog; we upgraded as a last resort to try to fix a performance issue, and it turned out it worked).

My systems since have been boot-from-SAN (with only a handful of exceptions), so I haven't had to worry about firmware; the storage system upgrades it for me in the background.

3
0

Cloudy McCloud Cloud HP just said public cloud 'makes no sense for us'

Nate Amsden
Bronze badge

Re: one thing they may lose

I'll forfeit my pay for the week..

0
0
Nate Amsden
Bronze badge

one thing they may lose

The only value I saw HP getting out of their public cloud was experience at some scale with Openstack (they said as much themselves last year). HP's public cloud (as far as I was told) was mostly built by ex-Rackspace people who were upset with Rackspace's change in direction (since Rackspace shifted focus away from openstack a few years ago to more managed services etc).

When HP announced it, I ripped into them myself since they adopted many of the same broken designs that Amazon and others had been offering; the biggest one was provisioning.

I never used HP's cloud, but I did use amazon's for a couple of years - the worst experience in my career. Never again. I don't have any higher expectations for google's or microsoft's cloud either.

IaaS in public clouds is simply broken by design. Maybe PaaS is better in that respect because it can mask some of the deficiencies of the broken IaaS. SaaS public cloud seems the most mature, masking even the failures of the PaaS and IaaS. But of course, as you move from IaaS to PaaS to SaaS you lose ever more flexibility and control; for some that is a good thing, for many it is probably not.

To date, the only model I have seen give actually good results is SaaS, but of course the scope of products in that space is relatively limited. The company I work for has moved off of multiple SaaS platforms to in-house solutions because the SaaS wasn't flexible enough.

0
0

Chef Software cooks up new Chef Delivery DevOps product

Nate Amsden
Bronze badge

as a chef customer

For the past four years I have stood by my tagline for Chef: "makes the easy things hard, and the hard things possible". You can google that term for more background from me on the topic if anyone is interested.

Using Chef was never my idea, and it's *far* too much work (to be worth it) to switch to something else at this point, so we just try to live with it.

Before that I had roughly 7 years invested in CFEngine (version 2) and was a much happier person with that product.

Chef has some interesting concepts in it, but it's such overkill for probably 90% of the organizations out there.

If you are not ready to dive deep into Ruby, steer clear of Chef. I was never prepared to dive into Ruby after having many bad experiences supporting it in the past, and using Chef has just continued to rub salt in those old wounds. Fortunately I have a teammate who is very good at Chef, so he handles most of it.

I gave my feedback to some of the founders of Chef when I met them several years ago; their response was "oh, you know an apache config file right? postfix? sendmail? bind? Chef is the same, you'll pick it up in no time..." (here I am 4 years later and my original comments stand).

Oh, and don't get me started on hosted Chef taking scheduled outages in the middle of the day (for paying customers). Just brain-dead. What service provider in the world takes scheduled outages in the middle of a business day? I've spoken to the VP in charge of that stuff (a friend of mine from 10 years ago) and their excuse made me want to either laugh or cry, I'm not sure which.

1
0

Oracle gets ZFS filer array spun up to near-AFA speeds

Nate Amsden
Bronze badge

Where do I get

20k rpm disks?

Haven't heard of them

2
0

Kaminario playing 3D flash chippery doo-dah with its arrays

Nate Amsden
Bronze badge

Re: Same architecture as others... what's different?

The containers don't require much management. We don't use docker, just LXC. It is for a very specific use case: basically, the way we deploy code on this application is we have two "farms" of servers and flip between the two. Using LXC allows a pair of servers (server 1 would host web 1A and web 1B, for example) to utilize the entire underlying hardware (mainly concerned about CPU; memory is not a constraint in this case) because the cost of the software is $15,000/installation/year (so if you have two VMs on one host running the software, that is $30k/year if they are both taking production traffic, regardless of CPU cores/sockets). We used to run these servers in VMware but decided to move to containers as more cost-effective -- the containers were deployed probably 8 months ago and haven't been touched since, though I am about to touch them with some upgrades soon.

I think containers make good sense for certain use cases, but limitations in the technology prevent them from taking over roles that a fully virtualized stack otherwise provides (I am annoyed by the fact that autofs with NFS does not work in our containers - last I checked it was a kernel issue). I don't subscribe to the notion that you need to be constantly destroying and re-creating containers (or VMs) though. I'm sure that works for some folks - for us, we have had VMs running continuously since the infrastructure was installed more than 3 years ago (short of reboots for patches etc). We have never, ever had to rebuild a VM due to a failure (which was a common issue when we were hosted in a public cloud).

0
0
Nate Amsden
Bronze badge

Re: Same architecture as others... what's different?

For me at least, managing my 3PAR systems is a breeze; I was reminded how easy it is when I had to set up an HP P2000 for a small 3-host vmware cluster a couple of years ago (replaced it last year with a 3PAR 7200). Exporting a LUN to the cluster was at least 6 operations (1 operation per host path per host).

Anyway, my time spent managing my FC network is minimal; granted my network is small, but it doesn't need to be big to drive our $220M+ business. To date I have stuck to Qlogic switches since they are easy to use, but I will have to go to Brocade I guess since Qlogic is out of the switching business.

My systems look to be just over 98% virtualized (the rest are in containers on physical hosts).

I won't go with iSCSI or NFS myself; I prefer the maturity and reliability of FC (along with boot from SAN). I'm sure iSCSI and NFS work fine for most people; I'm happy to pay a bit more to get even more reliability out of the system. Maybe I wouldn't feel that way if the overhead of my FC stuff weren't so trivial. It is literally among the least complex components in my network (I manage all storage, all networking, and all servers for my organization's production operations; I don't touch internal IT stuff though).

As for identifying VMs that are consumers of I/O, I use LogicMonitor to do that; I have graphs that show me globally (across vCenter instances and across storage arrays) which are the top VMs driving IOPS, throughput, latency, etc. Same goes for CPU usage, memory usage, whatever statistic I want - whatever metric is available to vCenter is available to LogicMonitor (I especially love seeing top VMs for CPU ready%). I also use LogicMonitor to watch our 3PARs (more than 12,000 data points a minute collected through custom scripts I have integrated into LogicMonitor for our 3 arrays), along with our FC switches, load balancers, ethernet switches, and bunches of other stuff. It's pretty neat.
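
(For flavour, here's a minimal sketch of the sort of custom collector script I'm talking about. This is not one of my actual scripts; the hostname, user, CLI command and output parsing are all assumptions you would need to adapt to the real CLI output and to however your monitoring tool ingests script output.)

#!/usr/bin/env python
# Minimal sketch of a custom array collector: run a CLI stats command over SSH,
# parse a number out of it, and print it as key=value for the monitoring tool.
# Hostname, user, command and parsing are hypothetical placeholders.
import subprocess

ARRAY = "array1.example.com"    # hypothetical array hostname
USER = "monitor"                # hypothetical read-only CLI user
COMMAND = "statcpu -iter 1"     # assumed stats command: one sample, then exit

def collect(array, user, command):
    # SSH to the array with key-based auth and capture the raw CLI output.
    return subprocess.check_output(
        ["ssh", "-o", "BatchMode=yes", "%s@%s" % (user, array), command],
        universal_newlines=True)

def average_first_column(raw):
    # Very rough parse: average the first numeric column of the data lines.
    values = []
    for line in raw.splitlines():
        fields = line.split()
        if fields and fields[0].replace(".", "", 1).isdigit():
            values.append(float(fields[0]))
    return sum(values) / len(values) if values else 0.0

if __name__ == "__main__":
    print("cpu_busy_avg=%.2f" % average_first_column(collect(ARRAY, USER, COMMAND)))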

Tintri sounds cool, though for me it's still waaaaaaaaaaaayy too new to risk any of my stuff with. If there's one thing I have learned since I started getting deeper into storage in the past 9 years, it's to be more conservative. If that means paying a bit more here or there, or maybe having to work a bit harder for a more solid solution, then I'm willing to do it. Of course 3PAR is not a flawless platform - I have had my issues with it over the years, and if anything that has just reinforced my feelings about being conservative when it comes to storage. One could be less conservative on network gear, or even servers perhaps (I am not, for either), but of course storage is the most stateful of anything. And yes, I have heard (from reliable sources, not witnessed/experienced myself) of multiple arrays going down simultaneously from the same bug (or data corruption being replicated to a backup array), so replication to a 2nd system isn't a complete cure.

(Same goes for many other things, e.g. I won't touch vSphere 6 for at least a year, and I *just* completed upgrading from 4.1 to 5.5. My load balancer software is about to go end of support, I only upgraded my switching software last year because it was past end of support, and my Splunk installations haven't had official support in probably 18 months now - it works, the last Splunk bug took a year to track down, and I have no outstanding issues with Splunk, so I'm in no hurry to upgrade. The list goes on and on.)

Hell, I used to be pretty excited about vvols (WRT Tintri), but now that they are out, I just don't care. I'm sure I'll use them at some point, but there's no compelling need to even give them a second thought at the moment, for me anyway.

0
0

In-depth: Supermicro's youngest Twin is a real silent ice maiden

Nate Amsden
Bronze badge

A negative

A negative is it's supermicro.

They have their use cases, but not in my datacenters.

I'll take iLO4 over IPMI in less than 1 heartbeat. My personal supermicro server's KVM management card has been down since the last FW upgrade a year ago; I have to go on site and reconfigure the IP. Fortunately I haven't had an urgent need to.

Looking forward to my new DL380Gen9 systems with 18 core cpus and 10GbaseT.

0
1

VMware tells partners, punters, to pay higher prices (probably)

Nate Amsden
Bronze badge

Seems kind of suspicious

Being April 1st

0
0

Microsoft gets data centres powered up for big UPS turn-off

Nate Amsden
Bronze badge

not enough runtime

for MS and google I'm sure it's fine (for subsets of their workloads anyway), but for most folks these in-server batteries don't provide enough run time to flip to generator. I want at least 10 minutes of run time at full load (with N+1 power), in case the automatic transfer to generator fails and someone has to flip the switch manually (same reason I won't put my equipment in facilities protected by flywheel UPS).

I heard a presentation on this kind of tech a few years ago and one of the key takeaways was that 90% of electrical disruptions last less than two seconds. Not willing to risk the other 10% myself.

2
0

Silk Road coder turned dealer turned informant gets five years

Nate Amsden
Bronze badge

I bet bellevue police were excited

To have a criminal to go after. I lived there for 10 years, great place. The running joke was the cops had nothing to do; police response time at one point, I read, was under 2 minutes. I had some very amateur drug dealers living next to me in the luxury apts I was at while there. I didn't know until their supplier busted down their door to get after them for ripping him off. The police came and were stuck outside; I think my sister let them into the bldg.

A major international prostitution ring (covering many states) ran from Bellevue for years too; it was busted 2 years ago (mostly by the feds).

I miss bellevue, though from a job standpoint there's too much amazon and microsoft influence. Moved to the bay area almost 4 years ago.

3
0
