* Posts by Nate Amsden

1109 posts • joined 19 Jun 2007

Oracle gets ZFS filer array spun up to near-AFA speeds

Nate Amsden
Bronze badge

Where do I get

20k rpm disks?

Haven't heard of them

2
0

Kaminario playing 3D flash chippery doo-dah with its arrays

Nate Amsden
Bronze badge

Re: Same architecture as others... what's different?

For the containers, they don't require much management. We don't use Docker, just LXC. It is for a very specific use case. Basically, the way we deploy code on this application is we have two "farms" of servers and flip between the two. Using LXC allows a pair of servers (server 1 would host web 1A and web 1B, for example) to utilize the entire underlying hardware (mainly concerned about CPU; memory is not a constraint in this case), because the cost of the software is $15,000/installation/year (so if you have two VMs on one host running the software that is $30k/year if they are both taking production traffic, regardless of CPU cores/sockets). We used to run these servers in VMware but decided to move to containers as it's more cost-effective -- the containers were deployed probably 8 months ago and haven't been touched since. Though I am about to touch them with some upgrades soon.

I think containers make good sense for certain use cases; limitations in the technology prevent them from taking over roles that a fully virtualized stack otherwise provides (I am annoyed by the fact that autofs with NFS does not work in our containers - last I checked it was a kernel issue). I don't subscribe to the notion that you need to be constantly destroying and re-creating containers (or VMs) though. I'm sure that works for some folks - for us, we have had VMs running continuously since the infrastructure was installed more than 3 years ago (short of reboots for patches etc). I have never, ever had to rebuild a VM due to a failure (which was a common issue when we were hosted in a public cloud at the time).

0
0
Nate Amsden
Bronze badge

Re: Same architecture as others... what's different?

For me at least managing my 3PAR systems is a breeze. I was reminded how easy it was when I had to set up an HP P2000 for a small 3-host VMware cluster a couple of years ago (replaced it last year with a 3PAR 7200). Exporting a LUN to the cluster was at least 6 operations (1 operation per host path per host).

Anyway, my time spent managing my FC network is minimal. Granted my network is small, but it doesn't need to be big to drive our $220M+ business. To date I have stuck to QLogic switches since they are easy to use, but I will have to go to Brocade I guess since QLogic is out of the switching business.

My systems look to be just over 98% virtualized (the rest are in containers on physical hosts).

I won't go with iSCSI or NFS myself; I prefer the maturity and reliability of FC (along with boot from SAN). I'm sure iSCSI and NFS work fine for most people; I'm happy to pay a bit more to get even more reliability out of the system. Maybe I wouldn't feel that way if the overhead of my FC stuff wasn't so trivial. They are literally about the least complex components in my network (I manage all storage, all networking, and all servers for my organization's production operations. I don't touch internal IT stuff though).

As for identifying VMs that are consumers of I/O, I use LogicMonitor to do that. I have graphs that show me globally (across vCenter instances and across storage arrays) which are the top VMs that drive IOPS, throughput, latency etc. Same goes for CPU usage, memory usage, whatever statistic I want - whatever metric is available to vCenter is available to LogicMonitor (I especially love seeing top VMs for CPU ready %). I also use LogicMonitor to watch our 3PARs (more than 12,000 data points a minute collected through custom scripts I have integrated into LogicMonitor for our 3 arrays), along with our FC switches, load balancers, ethernet switches, and bunches of other stuff. It's pretty neat.
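
For anyone curious what that kind of integration roughly looks like, here is a minimal, hypothetical sketch of a script collector in the same spirit -- the hostname, CLI user, 3PAR CLI field positions and the key=value output format are all assumptions for illustration, not my actual scripts:

#!/usr/bin/env python3
# Hypothetical sketch of a LogicMonitor-style script collector for a 3PAR array.
# Command output parsing and the key=value print format are assumptions.
import subprocess

ARRAY = "3par-array-1.example.com"   # placeholder hostname
USER = "monitor"                     # placeholder read-only CLI user

def run_cli(command):
    """Run a remote 3PAR CLI command over SSH and return its output lines."""
    out = subprocess.run(["ssh", f"{USER}@{ARRAY}", command],
                         capture_output=True, text=True, check=True)
    return out.stdout.splitlines()

def collect_port_stats():
    """Parse whitespace-separated stats rows into datapoint key/value pairs."""
    datapoints = {}
    for line in run_cli("statport -iter 1"):   # field positions below are assumed
        fields = line.split()
        if len(fields) < 4 or fields[0].count(":") != 2:
            continue  # keep only N:S:P port rows, skip headers/footers
        port, read_iops, write_iops = fields[0], fields[1], fields[2]
        datapoints[f"{port}.read_iops"] = read_iops
        datapoints[f"{port}.write_iops"] = write_iops
    return datapoints

if __name__ == "__main__":
    # Emit simple key=value lines for the monitoring system to pick up.
    for key, value in collect_port_stats().items():
        print(f"{key}={value}")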

Tintri sounds cool, though for me it's still waaaaaaaaaaaayy too new to risk any of my stuff with. If there's one thing I have learned since I started getting deeper into storage in the past 9 years, it is to be more conservative. If that means paying a bit more here or there, or maybe having to work a bit more for a more solid solution, then I'm willing to do it. Of course 3PAR is not a flawless platform - I have had my issues with it over the years - and if anything that has just reinforced my feelings of being conservative when it comes to storage. Being less conservative on network gear, or even servers perhaps, is another matter (I am not, for either), but of course storage is the most stateful of anything. And yes, I have heard (from reliable sources, not witnessed/experienced myself) of multiple arrays going down simultaneously from the same bug (or data corruption being replicated to a backup array), so replication to a 2nd system isn't a complete cure.

(Or many other things, e.g. I won't touch vSphere 6 for at least a year; I *just* completed upgrading from 4.1 to 5.5. My load balancer software is about to go end of support, I only upgraded my switching software last year because it was past end of support, and my Splunk installations haven't had official support in probably 18 months now. It works, the last Splunk bug took a year to track down, and I have no outstanding issues with Splunk, so I'm in no hurry to upgrade. The list goes on and on.)

Hell I used to be pretty excited about vvols (WRT tintri) but now that they are out, I just don't care. I'm sure I'll use em at some point, but there's no compelling need to even give them a second thought at the moment for me anyway.

0
0

In-depth: Supermicro's youngest Twin is a real silent ice maiden

Nate Amsden
Bronze badge

A negative

A negative is that it's Supermicro.

They have their use cases, but not in my datacenters.

I'll take iLO 4 over IPMI in less than 1 heartbeat. My personal Supermicro server's KVM management card has been down since its last FW upgrade a year ago. I have to go on site and reconfigure the IP. Fortunately I haven't had an urgent need to.

Looking forward to my new DL380 Gen9 systems with 18-core CPUs and 10GbaseT.

0
1

VMware tells partners, punters, to pay higher prices (probably)

Nate Amsden
Bronze badge

Seems kind of suspicious

Being April 1st

0
0

Microsoft gets data centres powered up for big UPS turn-off

Nate Amsden
Bronze badge

not enough runtime

For MS and Google I'm sure it's fine (for subsets of their workloads anyway), but for most folks these in-server batteries don't provide enough run time to flip to generator. I want at least 10 minutes of run time at full load (with N+1 power), in the event automatic transfer to generator fails and someone has to flip the switch manually (same reason I won't put my equipment in facilities protected by flywheel UPSes).

I heard a presentation on this kind of tech a few years ago and one of the key takeaways was that 90% of electrical disruptions last less than two seconds. Not willing to risk the other 10% myself.

2
0

Silk Road coder turned dealer turned informant gets five years

Nate Amsden
Bronze badge

I bet bellevue police were excited

To have a criminal to go after. I lived there for 10 years, great place. The running joke was the cops had nothing to do. Police response time at one point, I read, was under 2 minutes. I had some very amateur drug dealers living next to me in the luxury apts I was at while there. I didn't know until their supplier busted down their door to get after them for ripping him off. Police came and were stuck outside. I think my sister let them in the bldg.

A major international prostitution ring (covering many states) was run from Bellevue for years too; it was busted 2 years ago (mostly by the feds).

I miss Bellevue. Though from a job standpoint there's too much Amazon and Microsoft influence. Moved to the bay area almost 4 years ago.

3
0

Mr FlashRay's QUIT: Brian Pawlowski joins flashy upstart Pure Storage

Nate Amsden
Bronze badge

don't need him to show flashray failed

Just have to look at the product. NetApp should be incredibly embarrassed by that product. Seems like they announced it a good year or two too early. (And no, marking it as "controlled deployment" isn't an excuse; control it all you want, keep that kind of shit under total NDA and don't tell the public until it's ready.)

0
0

F5 hammers out a virtual load balancer

Nate Amsden
Bronze badge

I believe

F5 has had a virtual BIG-IP for years, just use that. Maybe license-limit the features for the price point. But this product just seems like a waste of time.

0
0

The storage is alive? Flash lives longer than expected – report

Nate Amsden
Bronze badge

HP posted this info

HP's latest 3PAR SSDs all come with an unconditional 5 year warranty.

Oct 9, 2014

http://h30507.www3.hp.com/t5/Around-the-Storage-Block-Blog/Worried-about-flash-media-wear-out-It-s-never-a-problem-with-HP/ba-p/172690#.VQrfvUSzvns

"The functional impact of the Adaptive Sparing feature is that it increases the flash media Drive Writes per Day metric (DWPD) substantially. For our largest and most cost-effective 1.92TB SSD media, it is increased such that an individual drive can sustain more than 8PB of writes within a 5-year timeframe before it wears out. To achieve 8PB of total writes in five years requires a sustained write rate over 50MB/sec for every second for five years."

("Adaptive Sparing" is a 3PAR feature)

another post about cMLC in 3PAR:

Nov 10, 2014

http://h30507.www3.hp.com/t5/Around-the-Storage-Block-Blog/cMLC-SSDs-in-HP-3PAR-StoreServ-Embrace-with-confidence/ba-p/176624#.VQrfqESzvns

0
0

IBM's OpenPower gang touts first proper non-Big Blue-badged server

Nate Amsden
Bronze badge

might this

OpenPower thing, with Tyan and the like making parts for it, end up like Itanium? For a while a few "white box" companies were making Itanium systems; the market just wasn't there for them (I think HP is the last shop making Itanium systems). I expect the same to happen to Power.

I'm sure it will continue to do fine in its IBM niche on IBM hardware.

1
0

MacBooks slimming down with Sammy's new 3D NAND diet pills

Nate Amsden
Bronze badge

Re: 5 year warranty for the EVO, not 10 years.

I don't consider myself a "heavy" user of my laptop (compared to some folks I know anyway), though it is my daily driver. Samsung's app says about 5.9TB of data written since my 850 Pro was installed, in late August 2014 I want to say. I know there's a way to get this in Linux (where I spend 98% of the time booted), but I forget what tool offhand. Good to know that even at this level of wear I have a long way left to go.
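
(For the curious: on Linux the usual route is smartmontools. A rough sketch, assuming the drive reports SMART attribute 241 / Total_LBAs_Written in 512-byte units, as Samsung 850-series drives do as far as I know -- verify for your drive, and it typically needs root:)

# Rough sketch: read an SSD's lifetime writes on Linux via smartmontools.
import subprocess

def total_bytes_written(device="/dev/sda"):
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        if "Total_LBAs_Written" in line:
            lbas = int(line.split()[-1])   # RAW_VALUE is the last column
            return lbas * 512              # assumes 512-byte LBA units
    return None

if __name__ == "__main__":
    written = total_bytes_written()
    if written is not None:
        print(f"{written / 1e12:.2f} TB written")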

1
0

MOVE IT! 10 top tips for shifting your data centre

Nate Amsden
Bronze badge

Re: 4 labels per cable

Query the port... that doesn't work so well if the cable is not connected though.

The system needs to be able to be used in an offline manner. Finding what is plugged into what online isn't too difficult, but when adding/moving/changing stuff, once things are unplugged it's helpful to know what cable goes where.

4
0
Nate Amsden
Bronze badge

4 labels per cable

My cables get 4 labels per cable: the outermost on each end indicates what it plugs into, the innermost on each end indicates what it plugs into on the other end of the cable. The last systems my co-worker installed took him, he said, about an average of 1 hour per server for the cabling/labeling (3x1G cables, 4x10G cables, 2x8G FC cables, 2x power - 44 labels; maybe someday I'll have blades). Fortunately he LOVED the idea of having 4 labels per cable as well and was happy to do the work.
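
To make the convention concrete, a tiny illustrative sketch (names made up) that spits out the four labels for a single cable:

# Illustrative sketch of the 4-labels-per-cable convention described above:
# on each end, the outermost label names what that end plugs into and the
# innermost label names what the other end plugs into.
def cable_labels(end_a, end_b):
    """Return the outer/inner label text for each end of one cable."""
    return {
        "end A": {"outer (this end plugs into)": end_a,
                  "inner (other end plugs into)": end_b},
        "end B": {"outer (this end plugs into)": end_b,
                  "inner (other end plugs into)": end_a},
    }

# Example: a 10G cable between a server NIC and a switch port (made-up names).
for end, labels in cable_labels("web01 NIC1-p1", "sw01 port 12").items():
    print(end, labels)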

I also use color coded cables where possible, at least on the network side. I'm happy my future 10G switches will be 10GbaseT which will give me a very broad selection of colors and lengths that I didn't have with the passive twinax stuff.

Use good labels too; it took me a few years before I came across a good label maker + labels. Earlier ones didn't withstand the heat (one of my big build-outs in 2005 had us replacing a couple thousand labels as they all fell off after a month or two, then fell off again). I have been using the Brady BMP21 for about the past 8 years with vinyl labels (looks/feels like regular labels, I've NEVER had one come off).

Another labeling tip that I came across after seeing how on-site support handled things. Even though my 10G cables were labeled properly, it was basically impossible to "label" the 10G side on the servers themselves, with 4x10G ports going to each server (two sets of two, so it's still important which goes to which port). I did have a drawing on site that indicated the ports, but the support engineer ended up doing something even simpler that I had not thought of (at one point we had to have all of our 10G NICs replaced due to a faulty design), which was to label them "top left", "top right", "bottom left", "bottom right" for connecting to the servers (these NICs were stacked on each other so it was a "square" of four ports across two NICs). Wish I would have thought of that! I've adjusted my labeling accordingly now.

Also I skip the cable management arms on servers - they restrict airflow - and I just use semi to-length cables so that there is not a lot of slack. Cable management arms are good if you intend to do work on a running system (hot swap a fan or something), but I've never really had that need. I'd rather have better airflow.

Wherever possible I use extra-wide racks too (standard 19" rails but 31" wide total) for better cable management. In every facility I have been in, power has always been the constraint, so putting two 47U racks in a 4 or 5 rack cage still allowed us to max out the power (I use Rittal racks) and usually have rack space available.

Also temperature sensors, my ServerTech PDUs each come with slots for two temp+humidity probes, so each rack has four sensors (two in front, two in back), hooked up to monitoring.

I also middle mount all "top of rack" network gear for better cable flow.

Me personally, I have never worked for an organization that came to me and said "hey, we're moving data centers". I've ALWAYS been the key technical contact for any such discussions and would have very deep input into any decisions that were made (so nothing would be a surprise). Maybe it's common in big companies; I've never worked for a big company (probably never will, who knows).

2
0

Musk: 'Tesla's electric Model S cars will be less crap soon. I PROMISE'

Nate Amsden
Bronze badge

locations

Of course the problem is more the location/quantity of charging stations. It's rare on my road trips (western U.S.) that I'm more than 30 miles away from a gas station at any given time. When I drive at night I have more range anxiety though, even with gas, knowing that some gas stations aren't open during really late hours. I was very close to running out of gas one night several years ago because I couldn't find an open gas station (~1-2 AM); I eventually found one (it wasn't the brand of gas I wanted to use, I don't recall the brand, but I didn't have any choice if I wanted gas at that moment). I aim not to get under 60 (bare minimum) miles of range when driving late at night before refueling (on road trips anyway).

When around home though I often drive my car to the bones (the gas gauge stops telling me how many miles are left). I've been told this isn't a great idea but I do it anyway; I don't plan to have this car much past the warranty (75k miles). I got it because it's fun, not because I want it to last forever or give me wonderful miles per gallon. I can't imagine having a car after this one that doesn't have torque-vectoring all wheel drive (or a turbo w/direct injection, though these two are pretty common now).

0
2

Patch Flash now: Google Project Zero, Intel and pals school Adobe on security 101

Nate Amsden
Bronze badge

good thing

adobe doesn't have to worry about paying cash bounties for security issues

1
0

Devs don't care about cloud-specific coding, right? Er, not so

Nate Amsden
Bronze badge

Just FYI, there have been cloud operators offering virtual data centers for many years now; I remember talking to Terremark about one such offering just over 5 years ago. The cost was fairly high: my cost for building a new set of gear was around $800k (all tier 1 hardware/software with 4-hour on-site support etc), while their cost to us was either $300k/month with no up-front installation charges, or ~$140k/mo with a $3M installation charge (yes, you read that right). But it was possible; in their case it was VMware, and the $3M install fee was for Cisco UCS-based equipment.

I'm sure it's a bit cheaper now ..

0
0
Nate Amsden
Bronze badge

I've been saying for nearly 7 years

Greater than 90% of the devs I have worked with (all of whom were working on pretty leading-edge new application code bases, not talking about legacy code here!) don't understand cloud, and don't write to it.

I have worked at two orgs that launched their apps from day one in a major public cloud, and both had the same issues because the code wasn't built to handle that environment, so chaos ensued (no surprise to me of course). The first one is defunct; the second one moved out of the public cloud within a few months and I manage their infrastructure today with very high levels of availability, performance and predictability, and the cost is a fraction of what we would be paying a public cloud provider.

Seeing public cloud bills in the half a million/month (or more) range is NOT uncommon (as absurd as that may sound to many).

I know it's possible to write for this kind of thing, but I maintain that at every org I've worked for over the past 12 years, the business has decided to prioritize customer features far above and beyond anything resembling the high levels of fault tolerance required for true cloud environments. That continues right now. Again, this decision process makes sense to me (cost/benefit etc); at some point some orgs will get to a scale where that level of design is important, but most (I'd wager in excess of 85%) will never get there. You can't (or shouldn't) try to design the world's most scalable application from day one because, well, you're VERY likely to get it wrong, and it will take longer to build (cost more etc). Just like I freely admit the way I manage technology wouldn't work at a Google/Amazon scale (and their stuff doesn't work for us, or any company I've worked for).

You can fit a square peg in a round hole if you whack it hard enough, but I don't like the stress or headaches associated with that.

4
0

Google chips at Amazon's Glacier with Cloud Storage Nearline

Nate Amsden
Bronze badge

Re: Where's my

Haha, you're funny.

0
0
Nate Amsden
Bronze badge

Where's my

400 megabyte per second internet connection... yeah, that's right, I don't have one.

Local storage it is then.

0
2

Doh! iTunes store goes down AFTER Apple Watch launch

Nate Amsden
Bronze badge

Next time use cloud

Because everyone that uses cloud knows there are never outages when you are using cloud. Hurry up Apple and deploy cloud!

I installed cloud.bin / cloud.exe (depending on platform) on all of my servers and have had very high reliability since. Other setups may be more complicated.

0
0

CloudFlare launches nameserver DDoS shield

Nate Amsden
Bronze badge

if you are serious about DNS

Then you use a serious DNS provider, someone like UltraDNS or Dynect. I've been a Dynect customer for many years. Neustar (which owns UltraDNS) keeps trying to talk me into UltraDNS (I am a Neustar customer in other areas) but Dynect does the job; I've never seen an outage in almost 7 years (or *any* service degradation for that matter), and they claim 100% uptime over 10+ years I think, with a 15-second SLA. Dynect gets DDoS'd a ton as well (they run an RSS feed with service updates); I've never seen an impact. UltraDNS has had some high-profile outages due to DDoS.

Maybe CloudFlare's stuff is OK (still too new of a service for me to consider), though it wouldn't drag me away from Dynect. Cost is very reasonable, service is good, and uptime for me has never been anything but 100% (I remember reading about CloudFlare's Juniper issue a while back...).

Also if you're *really* serious about DNS then you'd use more than one provider.

Not affiliated with Dynect in any way, just a happy customer, and surprised to see some folks out there not take internet-facing authoritative DNS too seriously (like you can throw a couple of BIND systems out there and be done with it, or rely on something like GoDaddy). Now for hobbyist stuff that is fine (I host my own DNS for my ~2 dozen domains), but for the companies I work at (that make real $$), I want something *good* (if not the best).
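
As a quick sanity check on the "more than one provider" point, a minimal sketch using the third-party dnspython package (the domain and the provider-grouping heuristic are placeholders, not anything authoritative):

# Check whether a zone's NS delegation appears to span more than one provider.
import dns.resolver   # pip install dnspython (2.x API)

def nameserver_providers(zone):
    """Group a zone's NS records by the domain that hosts each nameserver."""
    providers = {}
    for record in dns.resolver.resolve(zone, "NS"):
        ns = str(record.target).rstrip(".")
        provider = ".".join(ns.split(".")[-2:])   # crude: last two labels
        providers.setdefault(provider, []).append(ns)
    return providers

if __name__ == "__main__":
    groups = nameserver_providers("example.com")   # placeholder domain
    print(groups)
    if len(groups) < 2:
        print("All nameservers appear to come from a single provider.")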

1
0

Intel gives Facebook the D – Xeons thrust web pages at the masses

Nate Amsden
Bronze badge

Re: Errr - cooling?

Looking at a news report on MS's server designs released April 25, 2011: it talks about 57U racks with up to 96 servers consuming 16kW of power in their shipping containers (IT PAC as they were called, not sure if they are still called that), able to operate at 95-degree ambient (inlet) temperature - they could use water to cool outside air as high as 105 degrees down to 95 degrees, at the time anyway; the servers themselves are air cooled though. Of course that was five years ago, and cooling techniques have only improved since.

So yeah hot/cold containment is what it's all about still, assuming your gear can operate safely at those temperatures..

SuperNAP near Vegas, looking at an article about them again from 2011

"The over 31,000 cabinets inside the SuperNAP range anywhere from a few kW and can go as high as 30kW."

They have some pretty crazy patented containment stuff though. Inlet temps at the SuperNAP, according to that article (July 2011), range from 68-72 degrees.

While such high density sounds cool I suspect in most cases it doesn't make much sense outside of highly specialized facilities. I have a picture in my mind of a dense rack I saw about 10 years ago, at the time probably 8kW, in the middle of a cage, with about 200 square feet of dead space around it because the facility couldn't support the true density of that system.

0
0

Whoa, bro – it's all go for cool crow X-IO's AFA show

Nate Amsden
Bronze badge

I wish

HP would publish the latest 7450 SPC-1 numbers (I've been told they haven't had the time to test it); the latest numbers available (for the 7400) are really old, and the claim at the time was that the original 7450 was upwards of 55% faster than the 7400.

And in 12/2013 HP released an OS enhancement leveraging PCI Express Message Signaled Interrupts (MSI) which gave the 7450's read I/O numbers a 40% boost over identical hardware on the previous OS version. Add that to the 55% number above and, well, the 7400 isn't the best comparison point.

At this point the 7450 is old enough that they probably won't spend the time to test it; Gen4 systems are coming up on four years old (August 2011 - even though the 7450 is newer it uses the same ASIC).

0
0

HP battles back against white-box with Foxconn-built Cloudline servers

Nate Amsden
Bronze badge

wonder if there are order minimums

Not that I am the target for these kinds of systems, but I remember reading at one point that Dell DCS had a policy of not engaging unless it was at least 1,000 systems or something (not sure if that is accurate or not).

(Oh sorry, I re-read the article and it says they are only available in volume, so I guess the question is what volume.)

Wonder what sort of quality corners have been cut... myself, I won't touch Supermicro or the like with a 50-foot pole for anything even remotely serious for business use. Been burned too many times. Happy to pay more to get the HP quality and features in ProLiant.

I don't buy the statement on tier 1 quality for sure. I won't touch ProLiant 100-series servers either. So unless these are magically DL300-series or better quality, they won't be tier 1 (for x86 anyway) in my mind.

0
4

Yes our NAS boxen have a 0day, says Seagate: we'll fix it in May

Nate Amsden
Bronze badge

get what you pay for

Want better security? Buy something with real support. Nobody in their right mind should buy a NAS from a company like Seagate and expect anything great out of it.

Want better protection? Buy an enterprise product. They are not perfect of course, but at least organizationally they are much better geared to deal with this kind of thing; I have no expectation that Seagate (or any of the tens to maybe 100 small NAS vendors out there) has that level of structure. I say that as someone who has worked closely with software development teams for the past 15 years now (not related to storage, more related to SaaS/online transaction type systems).

0
2

PernixData chap: We are to storage as Alfred Nobel was to dynamite

Nate Amsden
Bronze badge

Power

How often are Pernix systems deployed with dedicated UPSs for each one? Even distributing writes across multiple hosts doesn't protect you if the power goes out. Of course most data centers have redundant power, but in rare cases that is not sufficient, which is why RAID controllers often have batteries on them, and larger storage systems (such as HP 3PAR) have larger batteries in them to de-stage the contents of data cache to a local SSD/disk in the controller (not part of the attached storage, which has lost power). Since there are two copies of the data to be written in cache, two copies are written to local disk (in the event one of those disks fails - I've had two local disks in my oldest 3PAR fail in the past 3 years, at different times), so the system can remain without power indefinitely without risk of data loss.

I remember one data center outage in Seattle (fortunately I had not been a customer of that facility in 3+ years at that point) where a fire in the power room knocked the facility offline for roughly 40 hours. They had the facility running on generator trucks for several months while they repaired it. Obviously people with storage systems that had batteries keeping their memory from shutting down were probably worried, not knowing when power might be restored. And no, many of them did not have any kind of disaster plan, including Microsoft's own "Bing Travel", which was down the whole time too. I remember being told some NetApp systems took upwards of 12+ hours to restart doing file system checks or something.

So assume you lose power to all of your racks at the same time: what sort of setup does Pernix have to protect against this? Many data centers don't allow the use of a regular UPS (fire code), or if they do they may require integration with EPO. Some IBM blogger told me an interesting bit: in most cases fire code will allow a UPS as long as it doesn't run for more than a few minutes or something (there is a hard limit on runtime).

From what I recall Pernix operates on "bog standard" hardware which means they'd need enough power for the entire server to run long enough to dump the contents of (unwritten) memory to persistent storage.

I am kind of surprised the Pernix people didn't call out specifically their response to power issues in this article. Or maybe their use of memory is limited to read operations only, and operates as a write through cache to SSD, in which case no need to preserve it. For me that wouldn't help much as my workload is 90%+ write.

1
0

CoreOS goes native on vSphere and vCloud Air

Nate Amsden
Bronze badge

I bet

You expended more effort writing the article than VMware did certifying CoreOS !! Probably took them less than 5 minutes of actual work (since they are a big company I'm sure it came with a dozen hours of meetings though).

0
0

Ouch! Google crocks capacitors and deviates DRAM to root Linux

Nate Amsden
Bronze badge

I wonder

How well would something like HP's Advanced ECC or IBM's Chipkill, which go well beyond basic ECC, hold up to this sort of attack? Myself, I don't deploy any serious systems without this technology, as the systems tend to have dozens to hundreds of gigs of RAM and ECC alone just doesn't cut it, in my past experience anyway.

Last I looked I could not find good info on IBM's Chipkill, but HP has good info here on Advanced ECC:

ftp://ftp.hp.com/pub/c-products/servers/options/c00256943.pdf

Some text from the PDF:

"To improve memory protection beyond standard ECC, HP introduced Advanced ECC technology in 1996. HP and most other server manufacturers continue to use this solution in industry-standard products. Advanced ECC can correct a multi-bit error that occurs within one DRAM chip; thus, it can correct a complete DRAM chip failure. In Advanced ECC with 4-bit (x4) memory devices, each chip contributes four bits of data to the data word. The four bits from each chip are distributed across four ECC devices (one bit per ECC device), so that an error in one chip could produce up to four separate single-bit errors.

Since each ECC device can correct single-bit errors, Advanced ECC can actually correct a multi-bit error that occurs within one DRAM chip. As a result, Advanced ECC provides device failure protection

Although Advanced ECC provides failure protection, it can reliably correct multi-bit errors only when they occur within a single DRAM chip."

2
0

Whoops! AVG data centre KO'd by 'unplanned' outage

Nate Amsden
Bronze badge

Re: Doesn't sound datacenter related

Data centers most certainly do not need computer storage. They need power, and they usually need cooling. They usually need walls and a roof. A data center outage to me implies a power outage, natural disaster, physical structural damage, etc.

Quite likely this facility is shared (AVG doesn't sound like a big company; the facility my company's equipment is in is more than 500,000 square feet and we have our 16x8 little corner of it) and probably has dozens to hundreds or more clients in the data center.

0
0
Nate Amsden
Bronze badge

Doesn't sound datacenter related

They said it's a storage issue, as you quoted in the article.

0
0

Sleepy eNom bombs websites in HUGE DNS OUTAGE – remains silent despite gripes

Nate Amsden
Bronze badge

Maybe

They don't use Twitter and didn't see the complaints. Perhaps they were working on the issue the whole time. (Or not, I don't know.)

I know I don't use Twitter, so I would understand if they didn't either.

0
0

MWC 2015 roundup: Strap on a wearable and cover your face in sickly VR goo

Nate Amsden
Bronze badge

Can't wait

For this tech bubble to explode

0
0

Another day, another load of benchmarketing, this time from HDS

Nate Amsden
Bronze badge

So what's the alternative?

Having a level playing field is a good thing, unless someone can come up with a better test than SPC-1. It sure as hell beats the 100% read tests so many vendors like to tout.

It's not realistic to expect people to bring in a dozen platforms to test with their own apps (even if they can - a big reason I am a 3PAR customer today is NetApp outright refused me an evaluation in 2006, so I went with the smaller vendor and, well, I'm happy with the results).

When my (current) company moved out of a public cloud provider 3 years ago, we were looking at stuff (of course I have a 3PAR background) and were considering 3PAR and NetApp at the time. We had *NO WAY* to test ANYTHING. We had no data centers, no servers, nothing (everything was being built new). Fortunately we made a good choice; we didn't realize our workload was 90%+ write until after we transferred over (something I'm very confident the NetApp that was spec'd wouldn't have been able to handle).

I spoke to NetApp (as an example - I don't talk to EMC out of principle, same for Cisco) as recently as a bit over three years ago and again they reiterated their policy of not giving out any eval systems (the guy said it was technically possible but it was *really* hard for them to do).

Last time I met with HDS was in late 2008 and they were touting IOPS numbers for their (at the time) new AMS 2000-series systems. They were touting nearly 1M IOPS... then they admitted that was cache I/O only (after I called them on it - based on the people I have worked for/with over the years, most of them would not have realized this and called them on it).

So unless someone can come up with a better test, SPC-1 is the best thing I see all around, from a disclosure and level-playing-field standpoint, by a wide margin (beats the pants off SPEC SFS for NFS anyway).

I welcome someone coming up with a better test than SPC-1, if there is one (and there are results for it) please share.

4
0

HDS blogger names HDS flash array as latency winner

Nate Amsden
Bronze badge

They aren't completely in charge of it. They could pick an artificially lower level if they wanted, but there is some upper limit (I forget what exactly, it's been a couple of years since I looked at it) that says if ANY of the response times are above something like 30ms, the results at that level are not accepted.

0
0

SOLD: Emulex – for 34% less than shareholders were offered 6 years ago

Nate Amsden
Bronze badge

I'm just one

but I have no plans to shift away from FC as my primary storage protocol for new and existing deployments. The cost of FC is minimal (in the grand scheme of things, for me anyway) and it provides a high level of availability and maturity that others still can't touch (that includes FCoE).

My environment is small though, at this point about two dozen physical hosts powering $220M/year in e-com transactions. Maybe we get to a billion/year with four dozen hosts, who knows (FC will still be cheap then).

0
0

Give in to data centre automation and change your life

Nate Amsden
Bronze badge

Re: Puppet & Chef

I have cron jobs on all ~500 of my systems that use Chef to auto-restart the Chef client if it takes more than 80MB of memory; they run every 4 hrs. 408 restarts in the past 24 hrs. Seems to be pretty reliable - I set the cron up over two years ago and have never had an issue that I can recall. I have several other crons that restart Chef under various failure scenarios (getting stuck etc).
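
The actual crons are simple shell one-liners; here is a rough Python sketch of the same idea, with the process and service names assumed for a typical Linux Chef client install:

#!/usr/bin/env python3
# Rough sketch of the memory watchdog described above: restart the Chef
# client if its resident memory exceeds ~80MB. Run from cron every 4 hours.
import subprocess

LIMIT_KB = 80 * 1024   # ~80MB RSS threshold

def chef_client_rss_kb():
    """Return the largest chef-client RSS in KB, or None if not running."""
    ps = subprocess.run(["ps", "-C", "chef-client", "-o", "rss="],
                        capture_output=True, text=True)
    values = [int(v) for v in ps.stdout.split()]
    return max(values) if values else None

if __name__ == "__main__":
    rss = chef_client_rss_kb()
    if rss is not None and rss > LIMIT_KB:
        # service name is an assumption; adjust for your init system
        subprocess.run(["service", "chef-client", "restart"], check=False)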

The topic came up of possibly migrating off of Chef because it is too complex. As much as I hate Chef, migrating off is more work than I'm willing to invest. I remember simply replacing a broken CFEngine implementation at a company a few years ago with a good implementation, not even changing the version by much; in that four-nines environment, doing it safely took well over a year. Chef sucks for most things I want it to do (it wasn't my choice and I wouldn't use it today - not sure what I'd use, CFEngine v2 worked great for me for ~8 years), but it's not bad enough to switch to something else.

I hate Ruby too, and using Chef just rubs salt in that old wound. Fortunately there are other people on the team who do much of the work with Chef, so I can focus more on stuff I care about (one of the driving reasons why I didn't fight the fight to replace it two years ago).

But automation... there are of course levels of automation. The author of the article basically lost me at "web scale". Obviously 99% of orgs will never see anything remotely resembling web scale. We pumped more than $200 million in revenue through a dozen HP physical servers and two small HP 3PAR storage arrays. We have since added more gear, though we're still sitting at less than 3 full cabinets of equipment. Getting to $400-700M in revenue maybe we add another cabinet (we have one sitting empty at the moment already, and with our new all-SSD 3PAR, I/O is really not much of a concern - I can get 180TB of raw flash in the 4-controller system that is installed now without taking more space/power).

We have quite a bit of automation, but to get significantly further, to me the return just isn't there. Spend 6 months to automate the hell out of things that may otherwise take you two weeks to do manually during that time? Seems stupid. I got better things to do with my time.

2
0

May the fourth be with you: Torvalds names next Linux v 4.0

Nate Amsden
Bronze badge

don't break compatibility since forever

So binary drivers built against 2.6 will work fine against 4.0 huh? Yeah, that's what I thought. Breaking compatibility seems to be by design.

Of course I gave up on hopes of Linux ever getting a stable ABI for drivers probably 10 years ago.

I do miss the even/odd releases of the - what was it, 2.2.x and 2.3.x? - days, where one was the feature branch and one was stable. Of course, since they abandoned that concept I abandoned the thought of compiling kernels ever again.

(Linux user for 19 years - desktop linux for past 17 years including now)

17
7

Superfish: Lenovo? More like Lolnono – until they get real on privacy

Nate Amsden
Bronze badge

I used to love thinkpad

Back when it was IBM... when Lenovo bought it I switched to Toshiba. Currently my daily driver is an i7 Tecra A11 from 2010 with Nvidia graphics and a Samsung 850 Pro SSD (primary OS is Linux). Works great... though I miss my on-site support contract, which expired last year. It's not ultra-portable by any stretch, but it spends 97% of its life plugged in sitting on a table or desk anyway.

Last Thinkpad I used I think was 2006.

0
0

HP flicks white box switch: NOT a Facebook wannabe? Stuff our open kit in your cloud

Nate Amsden
Bronze badge

How is HP not a "big boy" in network switching? Last I recall they were a clear 2nd to Cisco, *way* ahead of any of the other players by double digit % market share.

Not that I plan to use this, I am happy with my switching platform (not Cisco, and also not HP).

4
0

Hitachi smashes SPC-1 benchmark, boasts: We HAF ways of crushing 2 million IOPs

Nate Amsden
Bronze badge

Re: Who would use

Yeah, discounted to the tune of 58% off hardware and 39% off software. List price is just over $4.4M. Also, they are less than 1% away from being disqualified due to too much unused storage (44.31% vs the 45% max). Still very impressive results in any case, even if it does take up two cabinets :)

0
0

HTTP/2 spec gets green light: Faster web or needless complexity?

Nate Amsden
Bronze badge

i can see myself

using http/1.1 for the next decade. The bottleneck in my experience is in the apps, not in the network or protocols.

I'm sure for super-optimized people like Google etc. that is not the case, but it seems to be for most of the folks out there.

2
8

Traditional enterprise workloads on an all-flash array? WHY WOULD I BOTHER?

Nate Amsden
Bronze badge

for me

The cost of the AFA (3PAR 7450) was so good that it made sense to get it with the 2TB SSDs, because we are more I/O bound than space bound (though not I/O bound enough to *really* need an AFA). The data set is small enough that it fits easily in the system (about 12TB of logical data). We're moving from a 3PAR F200, which is 80x15k disks, to the 7450, which is obviously much faster, consumes much less rack space (I can get about 180TB of flash in less space than the F200 uses for about 64TB of disk), less power etc. The F200 is end of life anyway, and end of support in November of next year.

Before the 2TB SSDs I was planning on perhaps a 7400 hybrid... but the big SSDs made it an easier decision to just go AFA. Though I would prefer to have a 7440, which allows both disk and SSD (purely a marketing limitation, not a technical one).

Note that most of the AFA offerings out there seem to be stuck using small SSDs (well south of 1TB from what I've seen) for whatever reason. I'm expecting to see at least 3 or 4TB SSDs on my 7450 easily within 2-3 years, which means way north of 200TB of raw flash in my initial 8U footprint. I don't need millions of IOPS (average now well south of 10k), but knowing everything is on flash and will get consistent performance is a nice feeling -- and the cost is not bad either... and the data services are there in the event I need 'em (I do leverage snapshots heavily, though not replication etc). Also I get a true 4-controller system, which is important to me for my 90% write workload. Add HP's unconditional 5-year warranty on all my 3PAR SSDs and I don't have to care about wearing them out (obviously they have proactive failure handling etc and I have 6-hour call-to-repair support).

YMMV.

3
0

ONTAP isn't putting NetApp ONTOP

Nate Amsden
Bronze badge

i wonder

Given the state of FlashRay, and given it is a whole new platform that is not related to ONTAP, I wonder whether, if NetApp can't make progress fast enough on it, they will throw in the towel and acquire some startup to provide the technology (while quietly smothering FlashRay with a pillow in the night). I don't know who they should acquire, if anyone; I haven't been paying too close attention to the storage startups in recent years.

0
0

Brocade reels in app delivery controller biz from Riverbed

Nate Amsden
Bronze badge

Re: What is...

Nothing

0
0
Nate Amsden
Bronze badge

new name

Others may remember it as the Zeus load balancer... I used it for a bit before Riverbed bought 'em; it seemed like a pretty good piece of software.

0
0

HP’s Mr 3PAR, David Scott, is retiring

Nate Amsden
Bronze badge

I met david

For the first time, in Vegas last year at Discover. Really cool guy. I had no idea what to expect. He said he reads all my comments on El Reg, so if you're reading this David, you rock!!

Nate

2
0

Storage BLOG-OFF: HP's Johnson squares up to EMC's Chad Sakac

Nate Amsden
Bronze badge

I have been interested in trying some of the newer things, I just don't have an environment where I can take that kind of risk. Past companies had a lot more gear to play with... everything is so efficient and important here though that I try to be careful. We do have a Nimble array at another site (sourced by IT, I wasn't involved in that).

But I suppose the advanced data services aren't all that critical - I mean I don't use a whole lot on 3PAR. I don't use replication, and I don't use many of the more advanced things (same goes for most tech I use; for some reason I tend to stick to the core stuff, which tends to be the most solid, whether it be storage, networking, VMware etc). But the maturity aspect was important obviously. I've had my share of issues on 3PAR over the years... I didn't think they would make it into the all-flash world; I've been amazed at what they have accomplished though.

I'm sure XtremIO can work for a lot of folks..same for Pure storage and others..

2
0
Nate Amsden
Bronze badge

my 7450

My 4-node 7450 (while far from heavily loaded) is averaging around 0.4ms. The official HP tools say this configuration is rated for 100,000 IOPS @ 90% write with sub-1ms latency w/RAID 1 (my workload is 90% write). With dedupe on I don't know what the number is (I'm sure it's less), but my actual workload these days is in the range of about 6,000 IOPS (migrating from a 3PAR F200 with 80x15k disks, 100% uptime since it was installed just over 3 years ago), so it's good enough for me for a long time to come.

I chose RAID 1 because I have 4 disk shelves (including two which house the controllers), and RAID 1 gives shelf-level availability so I can lose a shelf of disks and stay online (not that it has ever happened to me). Also, 90% write is hard on the back end with RAID 5. And given the large SSDs, capacity was not an issue.

The cost was good enough to easily justify this route vs a hybrid or all-disk setup -- though I wish I could have had a 7440 instead of a 7450 (identical hardware) so I would have the option of running disks for bulk storage if I wanted. Perhaps HP will unlock this self-imposed marketing limitation in the 7450 in the future, I don't know.

The main reasons I went with the 7450 vs others, outside of 9 years of 3PAR experience, are that I wanted a true 4-controller system (mirrored cache writes in the event of a controller issue, which is important with my 90% write workload), and I wanted a mature platform for running this $220M+/year of transactions.

EMC cold-called me earlier in 2014 and I talked to them for a couple hrs, but I was never about to consider XtremIO - it is too new (and also too inefficient power/rack/etc-wise; at the time this environment had 2 cabinets, now it has 4, which will last 2-3 more years - the 7450 takes up 8U and I can grow to nearly 200TB raw flash without more space). I do like the prospect of being able to have upwards of 500TB of raw flash in my system if it came to that (currently ~30TB, maybe going to ~45TB some time this year - I don't really see going beyond 75TB in the next 3-4 years, but who knows).

I've learned a lot about storage over the years - I by no means claim or even try to be an expert - but one thing I've learned is to be more conservative. So for my small environment I opt for a mature architecture.

5
0

vSphere 6.0 is BADASS. Not that I've played with it or anything. Ahem

Nate Amsden
Bronze badge

Re: plugin hell

I run a Citrix XenApp Fundamentals server (5-user license - which is damn cheap, though no support) for my team. It has vSphere on it, 3PAR mgmt tools, and Firefox and IE browsers (mainly for managing NetScalers), and in combination with our VPN it allows me to use all of them from my phone (Note 3 w/stylus, which helps a lot when running the vSphere client on it) remotely if required. XenApp has Linux, Mac, Windows etc clients too. Myself, I run a local Windows VM for other things, and run the XenApp client in that (even though my host computer is Linux). My team mates are all on Macs though; they use the Mac-native XenApp client. The time it saves managing the NetScalers alone with their thick Java client (vs running the client over a WAN connection) pays for XenApp by itself, let alone vSphere etc.

I've barely touched the vSphere web console. I used to think the .NET client was bad; now I prefer it (as a Linux user). I have a couple of Windows 2012 VMs which need the latest hardware version, which means I have to use the web console if I want to change their configs (fortunately that is a rare occasion).

0
0
