
* Posts by Nate Amsden

827 posts • joined 19 Jun 2007


Fusion-io: Ah, Microsoft. I see there's in-memory in SQL Server 2014... **GERONIMO!**

Nate Amsden
Bronze badge

Re: PCIe or RAM

sounds like they can, but it can be pretty costly to install several hundred gigs or TBs of DRAM vs using Fusion-io...

0
0

VMware reveals 27-patch Heartbleed fix plan

Nate Amsden
Bronze badge

ha ha

another set of bugs that don't affect me. ESX (yes, I don't like the thin hypervisor) 4.1 and vCenter 5.0, baby (the KB says both are not affected).

on top of that, a Netscaler fronts all of my linux boxes, so no issues there either.

this heartbleed thing is much ado about nothin for me.

0
0

Revoke, reissue, invalidate: Stat! Security bods scramble to plug up Heartbleed

Nate Amsden
Bronze badge

thanks citrix

Not vulnerable on Netscaler.

Also checked one of my former employers, which I am almost certain still runs F5 gear, and they are not vulnerable either.

just another day..

0
0

AMD unveils Godzilla's graphics card – 'the world's fastest, period'

Nate Amsden
Bronze badge

what more can a gamer ask for?

How 'bout good stable drivers? I keep seeing reports of people having issues with AMD drivers... Maybe they have fixed them for the most part by now. I have been an Nvidia user since 1999 or so (for the most part Linux exclusive), with very few issues on nv over the years. I hear NV has issues too on linux, at least on some bleeding edge systems (my desktop+laptop still run Ubuntu 10.04 with a 2.6 kernel). I do have one Radeon at home (onboard graphics), though all it does is a picture slide show (it is stable at doing that - I did install the proprietary drivers on Linux in order to get HDMI audio working through the card).

2
1

Rackspace refuses to enlist in cloud's latest price cutting war

Nate Amsden
Bronze badge

Re: Rackspace is probably screwed

you forget that most IT teams are not competent. Nor are the developers who think they can run stuff in some amazon or azure cloud because "they have an API".

4
0

The cloud awaits... but is your enterprise ready for the jump?

Nate Amsden
Bronze badge

Keep it simple

Just don't do it. Save yourself some money and headache and tell the cloud provider(s) to go away.

3
0

WD 'restoring connections' after WEEK of MyCloud outages

Nate Amsden
Bronze badge

they were probably

having issues with the taps the NSA gave them to install on their systems to make traffic capture easier.

0
0

Greenpeace reveals WORLD'S FILTHIEST CLOUDS – and the cleanest may shock you

Nate Amsden
Bronze badge

more efficient my ass

amazon is incredibly inefficient; they are built for inefficiency. The amount of wasted EVERYTHING there, due to the inability to pool resources amongst services, is obscene.

heads up their asses, per usual.

1
9

Puking! protester! forces! Yahoo! 'techie! scum!' to! ride! vile! bile! barf! bus! to! work!

Nate Amsden
Bronze badge

Re: Blame Mountain View

I suppose I'm in the minority but I avoid SFO/Oakland/Berkeley. I've been to Berkeley once in the last 3 years. I've never stopped in Oakland (driven past it on I-80 while headed north). As for SFO, well, I like to tell people I've spent more time in Seattle than SFO since I moved (back) to California almost 3 years ago now (the amount of time spent in SFO in the past 3 years for me is probably 6 hours). I avoid most areas of Seattle as well; there's only one area I go to, for my favorite bar, Cowgirls Inc. I tried for two years to find a replacement for that in the bay area (I'd even go to SFO for it) without any luck.

Food I am not picky about (most of it tastes the same to me, from a $5 steak at Sizzler to a $50 steak at a fancy joint; not that I am a steak person, just using that as a reference). Drinks, I drink jack+coke. Not complicated.

2
2
Nate Amsden
Bronze badge

i don't know about the protesters

but I sort of feel sorry for the employees. I mean it must feel like they are cattle that are being herded (that's how I'd feel at least). I walk by Youtube every day on my walk to work and I see the google buses.

To be totally honest it literally took me over a year before I realized what they were. The buses say stuff like "GBUS TO MTV"; when I saw MTV I thought of the tv station and was just totally confused. It was only recently that I realized it meant mountain view (doh). I'm sure a lot of them are paid well (I'm paid alright, underpaid to be sure, but the quality of life currently is the best I've ever had at a company, so I'd trade $$ for that without blinking an eye), but I've always hated the prospect of being a number at a company. So I've worked mostly for smaller companies where I'm much less likely to be a member of the herd (assuming I ever have been).

But I see the cattle outside youtube herding onto the buses (I tend to wander in to the office around noon so I don't always see a lot of cattle). Sometimes I see one of those google mapping vehicles too; those look pretty funky with the spinning shit on top.

some of my co-workers have 45-60+ minute commutes in the bay area here. I just can't imagine that myself; mine is about 20 minutes walking door to door (used to be 10 mins until they moved to a new office almost 2 years ago), and I don't have to put up with high density overcrowded crap like SFO (ugh, I can't stand SFO, or any big crowded city really).

Still not as good as one job though, where I was literally across the street, with a window view of my apartment building from my desk at work (unfortunately my apartment was on the other side of the building, so attempts to connect to my personal wifi from the office failed). Some of my co-workers actually parked farther away than I lived because it was cheaper than paying for on site parking. If the management at that company hadn't gone to absolute shit I'd probably still be there.

7
3

Ubuntu N-ONE: 'Storage war' with Dropbox et al annihilates cloud service

Nate Amsden
Bronze badge

seems like an opportunity

for Ubuntu to partner with another provider (dropbox, box, or whatever) to provide this service for customers... They seem to like to do that sort of thing anyway (I recall the search box/amazon stuff they tried to pull?). I'm still on Ubuntu 10.04 LTS on my desktops/laptops; yes, I know it is EOL.

4
0

Tintri: We have ZERO interest in adding compute to storage

Nate Amsden
Bronze badge

way late to the party

Other vendors have supported OpenStack, KVM etc for years; Tintri is the latecomer here, not the other folks. I still firmly believe once VVOLs come out Tintri will lose their only edge and they'll end up being someone like Pillar or XIO.

I have a friend or two at Tintri and have had a few long discussions with them on the tech. Their platform has its use cases but it's pretty limited. Application aware they are not; they are VMware hypervisor aware. Wake me when they integrate with actual applications like Oracle, MSSQL, MySQL (haven't seen anyone integrate with that myself), Exchange, SAP etc etc (looking at their website now I see no indication of that; they are very VM-centric). Wake me when they allow you to take a snapshot of a data set and send that snapshot to another VM (AFAIK you can't do this with VMFS, hence my own heavy use of raw device maps for databases - see http://elreg.nateamsden.com/MySQL%20Snapshot%20diagram%20for%20Staging.png)
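
Roughly what that staging refresh looks like, as a sketch only - the 3PAR verbs (createsv, createvlun) are from memory, and the array address, volume names, host names, and paths here are all made up for illustration:

```python
# Sketch: refresh a staging DB from an array snapshot of a raw device map
# instead of copying data through the hypervisor's storage layer. Every
# name here (array, volumes, hosts, device paths) is hypothetical.
import subprocess, time

def run(*cmd):
    subprocess.run(cmd, check=True)

PROD_VV = "mysql-data"           # VV backing the prod DB's raw device map
SNAP_VV = "mysql-data-staging"   # snapshot to hand to the staging VM
STAGING = "staging-db01"

# 1. Quiesce MySQL and hold the lock while the snapshot runs (the lock is
#    released when the client session ends, so keep one session open).
mysql = subprocess.Popen(["mysql", "-h", "prod-db01"],
                         stdin=subprocess.PIPE, text=True)
mysql.stdin.write("FLUSH TABLES WITH READ LOCK;\n")
mysql.stdin.flush()
time.sleep(2)  # crude; a real script would confirm the lock took effect

# 2. Array-level snapshot of the LUN: copy-on-write, takes seconds.
run("ssh", "3par-array", "createsv", SNAP_VV, PROD_VV)

# 3. Release the lock by ending the session.
mysql.stdin.write("UNLOCK TABLES;\n")
mysql.stdin.close()
mysql.wait()

# 4. Export the snapshot to the staging VM and bring MySQL up there.
run("ssh", "3par-array", "createvlun", SNAP_VV, "auto", STAGING)
run("ssh", STAGING,
    "mount /dev/mapper/mysql-staging /var/lib/mysql && service mysql start")
```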

They have interesting tech but nothing that interests me personally; I want something much more flexible and agnostic. Last I spoke to my friend over there he said Tintri still did not allow NFS exports directly to guest operating systems; they were VMware-only. So now they are adding KVM... I say make the storage platform flexible enough to do more things. At the time (about a year ago) he said there wasn't much interest in being more flexible, more agnostic. I have to assume that is because they didn't have enough resources to do it, so they stuck to what they did best (probably a good idea).

But adding compute to storage has always been a stupid idea; whoever came up with the concept needs to be taken out back and shot. It's one of those things where when I hear it I really don't have a response, my brain just can't think of how someone could come up with such an incredibly stupid concept to begin with. Now if the system is built from the ground up ala Simplivity and those Nutanix people, I think that is different, though I still think both of their approaches are too limited, and I feel they will at some point offer a storage-only version of their platform for better scalability (& margins - I've said this before on el reg). The combined platform will be fine for real small SMBs; there's no point in combining such a system at larger scale, you're too limited with fixed units for scaling. Sometimes you need to scale compute, other times storage, sometimes both; forcing the customer to do both every time is not an efficient way of operating.

I'll be happy with 3PAR for a while yet myself(mission critical storage anyway). They are killing it in the mid range these days and their stuff is only getting better. The core technology folks from 3PAR are still very much leading the march and are heavily shielded by the head of HP storage (ex-3PAR CEO) to do what they do best.

3
3

Aw, SNAP. It's too late, you've already PAID for your storage array

Nate Amsden
Bronze badge

Re: can't afford to test?

[ended up being much longer than I expected]

Some vendors don't even provide units for testing. I might not be a 3PAR customer today if NetApp hadn't outright refused to lend me an eval unit back in 2006. I talked to NetApp again in 2011 and the rep (different rep, different territory) said even in 2011 he would be *really* hard pressed to justify an evaluation unit, something to do with their internal processes or something. I don't know what HP's stance is on 3PAR evals these days; my last eval unit from them was in 2008 (pre HP acquisition) and it was two racks of equipment. Basically we gave them a set of requirements and we agreed that we'd buy the product if it met those requirements. I suspect at least for HP, given that 3PARs are much cheaper now, that they would be open to evals. I know HP has *given* 3PAR arrays to some big customers absolutely free, no strings attached, I believe to provide an incentive to test them out and, I suspect, to try to convince folks to migrate off of P9000/9500 for their next purchase.

I don't know what other vendors' policies are, though usually the smaller startups are happy to give out eval gear. Testing is difficult though; I've never worked at a place that has had more than minimal resources to properly test something. I've never been at an org that had a test lab for infrastructure, period. One of the companies I was at bought millions and millions of $ of storage, others typically $500k-1M.

My current company moved out of a public cloud provider to our own hosted stuff - when we were evaluating what to get, we had absolutely nothing to test with. No data center, no servers, nothing. Fortunately I already had experience with most of the products and we didn't need much testing. The only thing that caught us very much off guard from a storage perspective is we were assuming a typical 60-70% read ratio on the storage, because we could get no reliable metrics from our cloud provider. Turns out we were over 90% write (almost all reads were coming from cache layers above storage). Fortunately the 3PAR system we bought was able to hold up with the initial configuration for about a year (good architecture) before we needed to add more resources - that % of writes is quite expensive!
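
To put rough numbers on why that write ratio hurts (a made-up front-end load; the write penalties are the standard rules of thumb, nothing 3PAR specific):

```python
# Each front-end write costs multiple back-end disk IOs (rule of thumb:
# RAID 1 = 2, RAID 5 = 4, RAID 6 = 6) while a read costs one, so a
# write-heavy mix needs far more spindles for the same front-end IOPS.
def backend_iops(front_end_iops, read_fraction, write_penalty=4):
    reads = front_end_iops * read_fraction
    writes = front_end_iops * (1 - read_fraction)
    return reads + writes * write_penalty

FRONT_END = 20_000  # hypothetical front-end load
print(backend_iops(FRONT_END, 0.70))  # assumed 70% read -> 38,000 disk IOPS
print(backend_iops(FRONT_END, 0.10))  # actual ~10% read -> 74,000 disk IOPS
```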

Another storage related story, going back to 2008 again: we bought a 2-node 150TB 3PAR T400 with a two-node Exanet cluster to replace four racks of BlueArc storage whose NAS controllers were going EOL/EOS. When testing the evaluation 3PAR/Exanet system we were on a tight timeline and were doing stuff on the fly. I asked the on site Exanet engineer if their system was thin provisioning friendly. He said yes, he had worked with Exanet on 3PAR before and they did thin provisioning fine.

So I exported a bunch of storage to the Exanet cluster, somewhere around 90TB usable. And we started testing; everything went awesome, performance far exceeded that of the previous system. We bought the stuff and migrated more production stuff onto it. The workload was very write-delete-write heavy. As time went on I saw the disk space on the 3PAR going up and up and up, but Exanet holding fairly steady. Obviously NOT thin provisioning friendly (Exanet was not re-using deleted blocks before allocating new ones). I ran some numbers and determined, oh shit, if this continues to its logical conclusion I will exceed the capacity of my 3PAR system. The 3PAR system was 4-controller capable, and I was at the maximum physical capacity (150TB at the time) of a two node system. So if I had to add ANY more disks I needed two more controllers (big big expense). A year or two later a software update came out that allowed that generation of 2-node controllers to scale to I believe 200TB; part of the issue was they were on a 32-bit operating system until something like 2011.

So I ran some numbers with my 3PAR SE and determined that if I converted the system from RAID 50 (3+1) to RAID 50 (5+1), we would have sufficient disk space for Exanet (and others - vmware, MSSQL, MySQL) to grow into and not run out. We had six drive shelves, so 5+1 made more sense anyway from an efficiency standpoint.
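
The capacity side of that conversion is simple parity arithmetic (150TB raw from above; this ignores spares and other overheads):

```python
RAW_TB = 150  # max raw capacity of the 2-node T400 at the time
for data, parity in [(3, 1), (5, 1)]:
    usable = RAW_TB * data / (data + parity)
    print(f"RAID 50 ({data}+{parity}): ~{usable:.1f}TB usable "
          f"({data / (data + parity):.0%} efficiency)")
# RAID 50 (3+1): ~112.5TB usable (75% efficiency)
# RAID 50 (5+1): ~125.0TB usable (83% efficiency)
```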

I started the process. Unlike a lot (all? I don't know) of other systems, this could be done on the fly without any impact to the applications. I remember talking to HDS at about the time we were going to evaluate 3PAR (HDS was partnered with BlueArc and has since acquired them) and asked them this very same question - can you change your RAID levels etc on the fly? They said yes - but you need blank disks to migrate the data to (at the time referencing their AMS2500 system, which they were proposing; this was back in Nov 2008 and that platform was brand new at the time - it didn't even have thin provisioning yet! And HDS refused to estimate pricing for TP, which was slated to be released in 6-8 months). Big caveat right there! I'm not sure how others do it.

Anyway, with 3PAR of course no blank disks are required, you just need space to copy the data to (since the disks are virtualized, finding available space typically isn't too hard). Pretty simple command line, basically one command per volume to migrate; you can migrate a half dozen or so at a time and the system self-throttles based on other activity on the system. I'd fire off a set of migrations and they'd take a few days to complete. As the system filled up over time there was more data to move with each set of volume changes, so the time required grew; towards the end it was something like 2 weeks to move 7-8 volumes. So it's not like I was having to babysit the thing - I'd submit some commands, check on it when I was bored, and after a few days or a week submit the next set of commands.
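
The pattern, as pseudocode - the CLI verbs here are from memory (tunevv is 3PAR's Dynamic Optimization command, showtask lists tasks; double-check against the real CLI reference), and the volume/CPG names are invented:

```python
# Batch-and-poll re-layout: fire off a handful of volume conversions, let
# the array self-throttle, check back occasionally, then do the next batch.
import subprocess, time

def cli(*cmd):
    out = subprocess.run(["ssh", "3par-array", *cmd],
                         capture_output=True, text=True, check=True)
    return out.stdout

volumes = [f"vv-nas-{i:02d}" for i in range(40)]  # everything to convert
BATCH = 6  # roughly the half dozen at a time mentioned above

for start in range(0, len(volumes), BATCH):
    for vv in volumes[start:start + BATCH]:
        cli("tunevv", "usr_cpg", "FC_R50_5+1", "-f", vv)
    # crude progress check; each batch ran for days to weeks on our array
    while "tunevv" in cli("showtask", "-active"):
        time.sleep(6 * 3600)
```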

On a very heavily loaded system it took me roughly 5 months of 24/7 conversions to complete the process, on probably 110-120TB of raw data. Nobody ever knew what was going on other than me giving periodic updates as to the progress of the process. I was pretty happy; it's one of the core reasons I like 3PAR and am a rabid fan/customer. If you make a mistake up front (or down the line) you can correct it pretty easily without application impact or complicated data migrations. I don't live and breathe storage by any stretch (though to some I give that impression); it is only a very small part of what I do from an ops standpoint. In this case it was a decent amount of data that had to be re-ordered, which took a while, but we could do it.

So yeah, we couldn't afford to do a really good test there; things fell short once we moved to real production, but we were able to correct the issue without any additional purchases, or downtime, or complex data migrations.

After Exanet went bust I advocated for a NetApp V-series to replace it (2010), in part because its SPECsfs numbers were double those of Exanet at the time, so I figured it's got to be at least as fast as a two node Exanet cluster. Again not a lot of time to test, and I was in my final weeks at the company. I left and they later deployed it, and their workload just wasn't compatible with NetApp; even though the V-series was more powerful, it fell down hard (cpus pegged) and the NetApp reps were unwilling to help (they even threatened the customer that NetApp was pulling support for 3PAR systems - which of course did not happen).

Another advantage to the 3PAR architecture is the customer was able to migrate from Exanet to NetApp on the same back end storage without any complex stuff going on. The back end was entirely virtualized of course, so it's just a matter of allocating storage to one and unallocating it from the other (no concept of disk-based RAID, so both systems had access to every spindle on the then 4-node T400). I'm not sure specifically what process they went through, as the bulk of that was done after I left, but had I been there I know the way I would have used, and it wouldn't have had much impact at all (you still have to move the data to the other filer since they are on different file systems of course).

I didn't know the mgmt at the customer at the time; there was heavy turnover after I left. But basically HP went to them and said "we'll own the problem" and the customer went with HP instead (they bought I think a 4 node X9000 cluster to connect to their 3PAR; in that case they did no testing either). I don't know what happened after that, maybe it worked, maybe it failed. The NetApp rep(s) in that territory responsible for some of that shit left or were fired a couple years ago. I was told that even people inside NetApp did not like them, but for years felt they couldn't do much about them since they were pulling in good numbers.

I spoke with one company who also went to 3PAR many years ago; they did real testing. I got a copy of their ~60 page test guide and all the things they went through - honestly quite overkill, but they are a big org. They were consolidating several mid range NetApp arrays onto a big 3PAR with NetApp V-series on the front end.

4
0

Amazon Workspaces: A dish best served later

Nate Amsden
Bronze badge

Re: How much for roll your own on amazon

Using amazon for anything is like getting repeatedly kicked in the nuts.

2
0

VDI a 'delightful' experience... Really?

Nate Amsden
Bronze badge

another option for VDI

is HP Moonshot. Each user gets a quad core cpu, a 128-core GPU, 8GB ram, and a 32GB SSD, with 180 users (180 servers) in a 4.3U chassis (no hypervisor, no shared storage, so maybe not "VDI", but it gives a very similar result, likely with a far better user experience). It's a neat approach at least.
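
Back-of-envelope density (the 42U rack is my assumption; the rest is from the specs above):

```python
users_per_chassis = 180   # per 4.3U Moonshot chassis, per the spec above
chassis_u, rack_u = 4.3, 42
chassis_per_rack = int(rack_u // chassis_u)  # 9 chassis fit in 42U
print(chassis_per_rack * users_per_chassis, "desktops per rack")  # 1620
```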

HP calls it the HP ConvergedSystem 100 for Hosted Desktops.

0
0

Intel just installed itself in Hadoop guru Cloudera – and why their rivals should be worried

Nate Amsden
Bronze badge

just waiting

for facebook to buy cloudera for $30 billion in stock, probably.

3
2

Forkin' 'L! Facebook, Google and friends create WebScaleSQL from MySQL 5.6

Nate Amsden
Bronze badge

well I guess

that means we don't need mongo anymore, right? web scale mysql combined with the optimized /dev/null storage engine and off we go...

1
0

Amazon HALVES cloud storage prices after Google's shock slash

Nate Amsden
Bronze badge

cheap my ass

"Cheap" isn't what comes to mind when I see company after company spending well into six digits a month hosting at amazon. Often times five fold or more than doing things in house. But again often times they don't know any better and think that is just "normal".

3
0

Why it's time to wrap brains around software-defined networking

Nate Amsden
Bronze badge

SDN is horse shit

for anyone other than those ultra large service providers and massive enterprises.

It's all hype. You can google "techopsguys sdn" and the first link will go to my 4,300-word, massively detailed talk on why SDN is overrated (I confirmed my suspicions with the person who created SDN, which was what inspired the article). In a nutshell, networking companies are trying to make L2/L3 networking exciting (it's not). They failed with DCB/DCE/FCoE 5-6 years ago and they are failing again with SDN today.

The network is not the bottleneck, I cover that in depth, obviously far more than I could include in a comment on el reg (especially because I can't have pictures and diagrams etc etc).

Maybe some day it will be for mere mortals, but by then it will be so automated you won't even know it's there. You won't have to "prepare" for it, it will just be there. No special training, no special products; it'll basically be transparent and make the network easier to manage.

If you are not an ultra large enterprise or service provider and you think you need SDN, more likely you did an absolutely terrible job building your network.

I think that is still at least say 5 if not 10 years out.

7
2

Google slashes cloud storage to $0.026 per GB. Your move, Amazon

Nate Amsden
Bronze badge

Re: Mobility

amazon tries to make their services as much of a roach motel as possible. Really depends on the customer and what level they build to. At the two companies I have been at which used amazon services, neither integrated very closely with the APIs, so it wasn't difficult to move to our own hosted solution (though in one case they are still using SQS even 2 years after we moved out; they've wanted to replace it with something better hosted in house, but they haven't had the time to complete a project to that extent yet - the way they use SQS is quite horrible as well, it's a severe bottleneck at times).

Some folks really like those APIs though and they'll probably stay and hope amazon matches the prices in the future.

2
0

Analyse this: Gartner eyes up EVERYONE'S mid-range arrays

Nate Amsden
Bronze badge

no EQL?

Wonder if that means Dell really doesn't want to put that platform forward anymore... though their storage website has a big award picture claiming EQL was #1 in iSCSI for 2013.

I wonder what the point was of all that effort if Gartner comes out and basically says buy whatever you want, it doesn't matter; seems like wasted effort. They say agility and cost are important but they don't seem to measure either (maybe "manageability" could include agility, but they weight it very low for something they seem to think is important). Maybe they used some random number generators to make the results and created the report in a couple of hours for some easy money.

To me there is certainly a significant rift between a number of the different systems on the list.

1
0

AMD: Why we had to evacuate 276TB from Oracle DB to Hadoop

Nate Amsden
Bronze badge

sounds like

they didn't know what they were doing to begin with. Or perhaps they weren't able to do the right thing and had an organically growing system that nobody seriously managed; they just let it grow until it hit a major breaking point.

Not that I'd use Hadoop to replace Oracle, rather go with something like Vertica instead especially for a small data set like 276TB.

2
12

Reality check: Java 8 finally catches a multi-core break

Nate Amsden
Bronze badge

seems Oracle's developers need it too..

Installing Oracle 11.2.0.4 on Linux right now and the installer is using Java 1.5.0_51

For something as trivial as a software installer I would have thought it would be more up to date... but I am not an Oracle expert, I'm sure it's par for the course.

Looks like Java 5 saw its last public update in October 2009, at least.

3
0

Gartner: Array makers. Think performance counts? WRONG

Nate Amsden
Bronze badge

Pillar

HAH.

One of my friends told me a few months ago about someone he knew that got hired (back) into Pillar as a sales rep, and his account? His account was to sell Pillar to ORACLE (internally of course). How hard can that be, selling a product you make to your internal groups? Hard enough apparently, because he struggled to even do that! Not even Oracle wants to use Pillar! Sad. Just put it out of its misery already.

Pillar certainly should not be on any high end storage list, nor should IBM XIV (its single system scalability has always been a joke - I mean come on, a max of 180 disk drives in 2014?). Nor should HDS HUS or Huawei, and I'd argue no NetApp is high end either, not with their architecture. Sure you can build a "big box" with lots of disk drives, but the architecture falls apart compared to things like VSP, or VMAX or 3PAR (and yes, that means the NetApp clustering doesn't make the cut either).

HDS VSP and P9500 certainly do seem to be good high end kit, though obviously very complicated systems with an ever-shrinking market size. HP has all but stopped marketing the P9500 from what I've seen post 3PAR acquisition, and is aggressively going after P9500 customers with 3PAR systems, with good success last I heard. 3PAR can't do everything VSP can for sure (and it may never - for some of those things there just isn't enough demand), though I think the capabilities gap is greater in the reverse (VSP can't do everything 3PAR can do, or at the very least does those things very poorly relative to 3PAR).

1
0

Seattle pops a cap in Uber and Lyft: Rideshare bizs get 150-driver limit

Nate Amsden
Bronze badge

seems reasonable

I have read some things that indicate the folks that run Uber are pretty shady.....

Having lived in the Seattle area for 10 years (moved back to California almost 3 years ago), I don't recall noticing that many taxis outside of the airport at least. I did use taxi services probably 4-6 times in the time I was there (every time to/from the airport).

So I was curious how many there are, and looked it up; found a report on Komo news (a local news station there) that says there are 700 taxis in Seattle and the surrounding county (which includes the city where I used to live), with plans to increase that to 900 over the next two years.

So 150 per company doesn't seem bad at all; with a quick search I see at least a half dozen what seem like taxi or taxi-like services. Granting these new players 150 cars a piece seems very reasonable.

Uber's tactics just seem poor, so I for one won't be using them (never have used them).

I suppose it shouldn't surprise me that Uber is based out of SFO, a city I personally hate and avoid at pretty much all costs even though I live only 20 mins away from it. I'd rather drive 40 minutes to SJC to do something if it means avoiding anything SFO related (I haven't had to resort to doing that yet).

Seattle is almost as bad, as the hipsters there try to copy SFO, so I avoid a good 90% of Seattle as well. I like the "east side" (as it's called) of Lake Washington though.

1
5

'Amazon has destroyed the unicorn factory' ... How clouds are making sysadmins extinct

Nate Amsden
Bronze badge

Re: as one of those unicorns

A bit more to my post: there is also a massive sink of fail when it comes to many orgs and their own hosted stuff too, so obviously running your own isn't always the best thing unless you have some good folks running it. It's almost equally astonishing to me how much leverage some IT executives have to massively overspend on solutions that are just obscenely over-spec'd, because they don't know any better. But somehow they are able to get their boards of directors etc to approve their massive budgets...

I work at smaller companies so I don't see that angle of things very often, at least not first hand. Someone I used to work with just joined up with a much bigger company; they have quite a bit of gear, and somehow they have measured that their utilization of their own equipment is roughly 7% (and they have done quite a bit of virtualization apparently). At the same time they seem to be constantly "out of capacity". Just poorly operated... sad to see situations like that repeated again and again.

--

The caveat to my posts, I suppose, is my own experience: all of the companies I have worked for over the past 14 years have been companies that wrote their own software. In cases like that it just makes sense (pretty much always) to run your own infrastructure. Though it is still cost effective to do co-location at even moderate scale (I think even folks like Twitter use colocation extensively; they are a big tenant in the QTS facility I am in). Building data centers from the ground up is really for those at truly massive scale. When I say data center I'm talking a minimum of a megawatt or two of power, not server closets or small server rooms.

15
1
Nate Amsden
Bronze badge

as one of those unicorns

As you call them..

It is pretty depressing to see how much companies have wasted on cloud services, at any scale. It's shocking to see so many clueless morons who think it's normal to be spending half a million a month or more on cloud services, when it's quite common to have an ROI of well under 12 months (sometimes 6 months or less) to build stuff out yourself. Of course you may have to pay people more to attract the talent. But hey, if paying someone(s) more means you're able to save an extra few million a year, it's probably worth it (biggest "doh" face I can make).
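
The break-even math is simple; these numbers are invented but in the ballpark of what's being described:

```python
cloud_monthly = 500_000  # the "half a million a month" cloud bill
build_capex = 3_000_000  # hypothetical: gear, install, setup costs
colo_monthly = 100_000   # hypothetical ongoing colo + support costs
months = build_capex / (cloud_monthly - colo_monthly)
print(f"break-even after {months:.1f} months")  # 7.5 months
```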

I remember one cloud discussion where the installation fee alone was 4x my own costs for building things outright(with tier 1 equipment and 4 hour on site support).

But word is getting around, slowly but surely, about what a scam most cloud services are (I have yet to see one that isn't) from either a cost, features, or availability standpoint (or all of them combined). I talk to more and more folks who want out, or have moved out, but those don't make the news.

I heard of one rarity: a startup in Seattle that was spending 25% of their REVENUE on cloud and has since moved out (or is in the process of moving out, I forget) with a 6 month ROI, going to, of all things, high end Cisco UCS gear - not exactly bottom of the barrel. The rarity bit is the CEO is apparently more than happy to talk with anyone about how terrible their experience with Amazon was. Usually companies just keep quiet and try to forget about it or something.

Fortunately I have not had to bang my head against incompetent management over cloud in a couple years or so (having spent my own time managing crap in Amazon for about two and a half years - by far the worst experience of my professional career - and the management at Amazon cloud whom I met with personally were equally frustrating to deal with; one of them tried to get me fired once for a blog post), and I'm not anticipating having to do that again anytime soon.

I sort of wish I could do more. I mean, I see so much fail going on but I just can't come close to even starting to address all of it. I wish I could. I wish I could stop more companies from making stupid f#@$@ mistakes like going to providers such as amazon.

this image has been making its way around recently amongst folks I know and it makes me feel good to see it.

http://thenubbyadmin.com/wp-content/uploads/2014/01/SAY_CLOUD_AGAIN.png

37
2

Object Storage suppliers: Bikers? Dentists? Or Biker Dentists?

Nate Amsden
Bronze badge

not much demand

outside of very large service providers, most of which I think do it themselves.

A file system may not scale to a trillion objects, so instead have 50 or 100 file systems (on many arrays if required). File or block interfaces are much easier to integrate and manage as well; no need to modify apps or use special tools to use them.
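
A sketch of what that looks like in practice: hash each object name onto one of N plain file systems, so no single namespace has to hold everything (mount points here are invented):

```python
import hashlib, os

MOUNTS = [f"/mnt/objfs{i:02d}" for i in range(50)]  # 50 ordinary file systems

def path_for(object_name: str) -> str:
    # stable hash: the same object always lands on the same file system
    digest = hashlib.md5(object_name.encode()).hexdigest()
    return os.path.join(MOUNTS[int(digest, 16) % len(MOUNTS)], object_name)

print(path_for("invoices/2014/03/12345.pdf"))  # e.g. /mnt/objfs27/invoices/...
```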

if you are massive scale with custom apps then object storage makes sense, but the # of shops at that level is tiny (though the amount of storage that tiny # has is of course huge)

0
0

X-IO to heat up ISE storage bricks with iSCSI access

Nate Amsden
Bronze badge

Re: "two triple disk RAID 6 failures"

last I spoke to them, 5 yrs ago, they did it by recertifying the drives in the array, since most failures are not real failures; for example, most vendors forcefully fail a drive after some number of read errors. X-IO had/has Seagate's special RMA recertification tech built right in (nobody else does).

if a drive has a lot of failed bits then X-IO can fail individual platters and keep the rest of the drive going. There is enough spare capacity in the system that overall usable capacity is not affected. el reg had a good article on X-IO's tech a few years ago (maybe 6).

at the time they claimed their field data showed they could offer 7 year warranties, but they decided to go for 5 instead.

I have never used their stuff but it seems they have always been hampered by a lack of features and poor software relative to, say, the traditional tier 1 folks.

the tech behind their disks is pretty cool, but not enough to get someone to acquire them or to significantly grow. They missed the boat.

I have been very very loosely following them for many years. Still a die hard 3par customer though.

(blame errors on my phone kb)

1
0

Twittapocalypse! Twitter implodes, locked out tweeps around the world

Nate Amsden
Bronze badge

Re: Pity it's not permanent

here's hopin next time it is

0
0

Toshiba: Our 2.5-incher does the same job as a 3.5-incher

Nate Amsden
Bronze badge

Re: Title: Self-encrypting?

Software encryption often has a measurable overhead in performance, complexity, and risk (is it configured right, are the encryption methods supported by the software vendors, etc). Letting the hardware do the work automatically is simpler, faster, and safer.

It allows those in industries that require such things (financials and the like, perhaps) to just get the drives, enable encryption, and forget about it. They aren't concerned about the NSA cracking their stuff; it's more CYA for regulatory reasons.

2
0

High-end SAN peddler Dot Hill delivers profit – market says 'meh'

Nate Amsden
Bronze badge

high end?

that's a joke, right?

3
0

Steve Ballmer: Thanks to me, Microsoft screwed up a decade in phones

Nate Amsden
Bronze badge

sounds like

they're about to get blown out of the water on car audio/navigation systems too... whether it is Ford likely dropping Windows, or the new Android/iOS stuff that is coming out...

16
0

MtGox MELTDOWN: Quits Bitcoin Foundation board, deletes Twitter

Nate Amsden
Bronze badge

bring it!

burn baby burn

2
1

Sandisk breaks 128GB barrier with new $199 MICROSD card

Nate Amsden
Bronze badge

compatibility

What's the compatibility look like for existing phones? Or will this only work in newer (yet to be released) phones? The Samsung Note 3 for example uses SDXC but specifically mentions a limit of 64GB. Not sure if that is just because that was (assuming it was) the largest available at the time or what.

It looks like the SDXC spec supports up to 2TB so hopefully it'll work with the Note 3 at least..

2
0

Mozilla takes wraps off 25 DOLLAR Firefox OS smartphone

Nate Amsden
Bronze badge

and a screen resolution that takes you allllllllll the way back to what, 1992?

3
12

Microsoft may slash price of Windows 8.1 on cheap 'slabs

Nate Amsden
Bronze badge

didn't they do this for netbooks

a while back?

Looks like it was called Windows 7 Starter Edition, though I don't see mention of what the cost was.

7
0

Just how much bang does a FAS8040 box give you for 500,000 bucks

Nate Amsden
Bronze badge

interesting comparisons

To the five-year-old 3PAR F400:

| | NetApp FAS8040 | 3PAR F400 |
| --- | --- | --- |
| SPC-1 performance | 86k IOPS | 93k IOPS |
| Usable capacity | 32TB (450GB disks) | 27TB (147GB disks) |
| Unused storage ratio | 42% (fairly typical for NetApp systems on SPC-1 from what I've seen) | 0.03% (numbers available in the big full disclosure document; the full disclosure document is not available for the NetApp system yet) |
| Price | $495k (this is list) | $548k (I assume this is discounted; see below) |

There is no specific reference to list or discounted pricing in the 3PAR disclosure that I can see readily available - obviously the pricing is 5 years old and the F400 is an end-of-life product, no longer available to purchase as of November 2013.

Last I saw, the NetApp clustering was not much more than what I'd consider workgroup clustering, sort of like how a vmware cluster is; that is, a volume doesn't span more than a single node (or perhaps a node pair, but in NetApp world I think it's still a single node). I believe if you're using NFS then you could perhaps use a global namespace across cluster nodes and span that, but that's more of a hack than tightly integrated clustering.

I admit I do not keep up to date on the latest and greatest out of NetApp, but about 18 months ago I was able to ask a lot of good questions of a NetApp architect (I think he was one at the time, at least), specifically around their clustering, and got good responses -

http://datacenterdude.com/netapp/netapp-dataontap-81-reponse/

Of course that is Ontap 8.1; according to this article on el reg the latest is 8.2, so I'd wager there can't be anything too revolutionary in a version increment of 0.1, from an architecture perspective at least.

http://www.theregister.co.uk/2014/02/19/netapp_fas8000_midrange_box/

I don't mean to start a flame war or anything, but I found the comparison interesting myself. Having dug a bit into SPC-1 results over the past few years, the disclosures are quite informative, which is why I find it a useful test that goes beyond the headline numbers.

1
1

HP rides data center growth out of sludgy IT market

Nate Amsden
Bronze badge

3par still growin strong

converged storage revenue was up 42% year over year; last I saw, HP considered that to be mostly 3PAR.

Traditional storage revenue down 17%.

1
0

Huawei's TOP MUTANT smashes aside puny SPC-1 contenders

Nate Amsden
Bronze badge

Re: Those are great numbers

they did demonstrate AFA performance with respectable numbers last year -

http://www.storageperformance.org/benchmark_results_files/SPC-1/Huawei/A00133_Huawei_OceanStor-Dorado2100-G2/a00133_Huawei_OceanStor_Dorado2100-G2_SPC-1_executive-summary.pdf

Not that I'd use their stuff in any case (any more than I'd use something like LSI or Infortrend; I group Huawei in that tier of service).

The numbers are a far cry from those of the first hybrid system to post SPC-1 results, which was IBM a few years back.

0
0

NetApp, tuck yourself in – your mid-range is showing: New FAS8000 on sale, ONTAP updated

Nate Amsden
Bronze badge

how different is the design?

I mean, aren't most NetApp arrays basically just x86-64 servers with ram, nvram, pci express etc? How does this system differ in design so that it is more "built for the clustering"? Is the software that runs on top somehow different than on other NetApp arrays running the clustering software? I wouldn't expect it to be, since that is one of NetApp's claims to fame.

Just seems like it is the same with just more powerful hardware...

2
0

Better late than never: Monster 15-core Xeon chips let loose by Intel

Nate Amsden
Bronze badge

devil in the details

for the windows comparison.

They say UNPLANNED downtime...

Take planned and unplanned into account please.

Or just don't bother we all know the answer.

I have no doubt that modern windows server is quite stable, but the frequent reboots for updates are still quite a problem, at least on win2k8 (as well as win7). I'll be trying win2k12 in the next month or so.

4
0

NetApp shows off tech specs of FAS array BIZ BEAST

Nate Amsden
Bronze badge

Re: cache?

my spindles are already not being interrupted with reads, given my 94-97% write ratio (average) on my 3PAR arrays.

We were close to getting NetApp a couple years ago; really glad we didn't, after seeing those ratio numbers. We didn't have the numbers at the time since we moved from a public cloud (the company had no infrastructure of their own at the time), which has shit for performance metrics (and those it did have you couldn't rely upon).

read caching is handled upstream from the array in my case (application layer).

Been hammering on 3PAR myself for nearly 5 years to get SSD write caching. Don't care about fancy SSD read caching, that won't do shit for me.

2
3

I want SDN and I want it now!

Nate Amsden
Bronze badge

SDN is horse shit

Really too much to put in a comment box, so google "techopsguys sdn" for my analysis of SDN. I was able to ask the creator of SDN a couple key questions, which I used to confirm my own beliefs of what SDN is, and I rip the SDN concept to shreds in a 2,000-word blog post complete with pictures and diagrams.

SDN is to networking as FCoE was to storage. It's all marketing.

Same goes for cloud but that is another rant.

Really the only people that can truly benefit from SDN are service providers that have massive scale (e.g. 50-100k systems and up) and have very, very frequent changes. If you're operating at, say, a few thousand systems or less, SDN is just stupid. It fails to address the core problems of networking complexity.

If SDN helps you at smaller scale then you've probably done something horribly wrong with your network design before deploying SDN. Or you picked the wrong gear. I cover that in the post as well.

Perhaps the Juniper stuff looks cool because otherwise their gear is just too complicated to manage (there have been solutions on the market to handle that aspect for 15 years).

2
1

SME storage challengers emerge one feature at a time

Nate Amsden
Bronze badge

Re: HP's answer

I'm confused by what you say, that LeftHand is for really small clients - do you mean Synology or Qnap are for really small clients? It looks like StoreVirtual VSA scales to 50TB per instance, which is quite a lot of storage for an entry level appliance. Performance I think varies depending on what kind of hardware you're using. I have been told what might be called horror stories about performance on LeftHand with network RAID 5 (at least relative to 3PAR RAID 5 performance). Though having a shared nothing design does have a couple advantages over 3PAR (or any other SAN that is not shared nothing) from a simplicity/availability standpoint.

I mean, as far as I know HP's own openstack cloud uses LeftHand storage (in part because it was the first HP storage platform to support OpenStack; 3PAR support came about a year or two later).

I think StoreVirtual is an interesting product at least (I only used it once, for a few minutes, a couple years ago). I still believe HP should/will kill off the LeftHand hardware (which is just HP servers) in favor of entry level hardware arrays being based on 3PAR instead, and make LeftHand exclusively a software-only solution. They won't come out and say that though (much like they didn't come out and say they were killing off EVA for quite some time after they bought 3PAR). I think they will kill off the P2000 line as well for the same reason.

I agree their marketing needs some work. I have talked directly with the marketing folks in HP storage on this topic on several occasions and they finally got my point last August, I think, though I am not sure if they've done anything about it yet (I haven't seen marketing info for their stuff since, in the event they updated anything). I wrote specifics on the conflicting messages HP is sending out about storage here last year:

http://www.techopsguys.com/2013/07/31/hp-storage-tech-day-other-store-products/

(jump to the Storevirtual section)

2
0
Nate Amsden
Bronze badge

HP's answer

to those little boxes is the HP StoreVirtual VSA. Every Gen8 ProLiant comes with a free 1TB license (of course more capacity is available; licensing is based on capacity, I think). Install it on your server hardware on top of VMware or Hyper-V and have full fault tolerance between servers with network RAID (real "HA"). Not only block replication but thin provisioning, online expansion, sub-lun auto tiering, snapshots, live movement of data between storage systems, blah blah, you know the rest. All the software features come enabled (I believe) out of the box. Don't need a full fledged SAN? Then don't buy one. I don't think the HP LeftHand stuff is capable of online data migrations to upgrade to a 3PAR transparently (yet anyway) like you can with HP EVA...

Or if StoreVirtual is too complicated for some reason, HP has StoreEasy as well... again no SAN required. I'm certain Dell has equivalents to StoreEasy on their end; I believe StoreEasy uses Windows Storage Server (and I think Dell uses that too on their entry level gear).

Not sure if IBM/Dell have something equivalent. I know NetApp has a VSA, but last I checked it had no high availability (it did do replication, I'm sure).

2
0

Europe shrugs off largest DDoS attack yet, traffic tops 400Gbps

Nate Amsden
Bronze badge

i was part of a DDOS last week

I have a 1U server at a colo and my ISP contacted me last week saying they got reports that I was part of an NTP DDoS and that I needed to fix my shit...

Which had me confused, because the IP they claimed participated in the attack was the IPMI interface of my server... (since I'm fairly limited in what I can put at the DC, it's hard to put the IPMI device behind a firewall)

Upon further investigation it seems that the NTP client on the IPMI interface was less of a client and more of a client with a server attached.

After I disabled the NTP client the vulnerability was closed. I'm not expecting the vendor (Supermicro) to ever release a fix (the server is a few years old); fortunately not having NTP on IPMI is not a big deal. The IPMI interface has a built-in poor man's firewall, though I'm not sure if it would impact inbound NTP requests, and I'm too worried to enable it in the event I need to connect to it from a network it is not configured to recognize.

The support team at my ISP gave me a handy command to verify whether or not you could be impacted (not sure if this means you are vulnerable or if it means there is just a possibility):

ntpdc -n -c monlist <IP>

And sure enough, with the NTP *client* enabled on the IPMI interface (well, web-based IPMI) the system responded, like a server would respond (I guess; I haven't spent any time researching this).
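
If you don't have ntpdc handy, the same probe can be hand-rolled; this sketch sends the commonly published mode-7 payload (implementation 3, request code 42) and just reports whether anything answers. Only point it at your own gear:

```python
import socket

def answers_monlist(ip, port=123, timeout=3):
    # mode-7 "monlist" probe: version 2, mode 7, implementation 3 (XNTPD),
    # request code 42 (MON_GETLIST_1), padded out with zeros
    probe = b"\x17\x00\x03\x2a" + b"\x00" * 4
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    try:
        s.sendto(probe, (ip, port))
        data, _ = s.recvfrom(4096)
        return len(data) > 0  # any reply at all means monlist is talking
    except socket.timeout:
        return False
    finally:
        s.close()

print(answers_monlist("192.0.2.10"))  # documentation IP; use your own host
```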

Anyway found it strange/stupid that something that claimed to be a client only would be vulnerable enough to participate in an attack.

At my company we were indirect victims of an NTP-based DDoS on 1/2/14, when our upstream ISP got hit by a 100Gbps attack aimed at another customer (I am assuming it was a gaming company). They handled it pretty well; not a big impact to us (spotty VPN connections, occasional site connectivity errors), but our bandwidth usage is pretty small.

2
0

Server tech is BORING these days. Where's all the shiny new goodies?

Nate Amsden
Bronze badge

building blocks

servers are little more than building blocks. Last I heard you can go out and get Pernix software and put it on most any server, so what is the issue here? Servers are about hardware. There's not a lot of software with them outside of management functions, and there should not be.

Same goes for Fusion-io's acceleration software: go out and buy it and slap it on an HP or Dell or whatever server, or have your VAR do it for you if you don't want to do it yourself.

Next thing you'll be seeing this reg author wondering why the server vendors don't make their own hypervisors too - since the hypervisor has had orders of magnitude more impact on servers than any fancy storage caching scheme.

Now what I'd like to see is better integration between the various guest operating systems (Linux I suppose is the one I care about most) and the hypervisor. For example: automatically shutting off vCPUs if they are not required (as in making it impossible to schedule anything on a vCPU from within the guest until the other vCPU(s) are heavily loaded, instead of trying to load balance); freeing up buffer cache automatically when it is not being actively used; perhaps even some sort of control plane communication between guests (co-ordinated by the hypervisor) so they can tell each other what they are doing and perhaps make more intelligent decisions on resource utilization. I'm talking kernel level stuff here - I don't think this sort of thing can be done with the stuff vmware tools has, for example.

0
0

Is FCoE faster than Fibre Channel? Who knows? Just run your own tests

Nate Amsden
Bronze badge

don't rely on vendor studies for FCoE

Just make it simpler.

Don't use FCoE. It's been a market failure since day one. I remember sitting through various presentations, what was it, 5-6 years ago now, from the NetApp folks (and one or two from Brocade) talking about how great FCoE was. I never bought into it and I still don't. The added cost of a real FC network IMO is quite trivial in the grand scheme of things for the benefits that you get (greater stability, more maturity, an isolated network, etc). It's pretty crazy even now the sheer number of firmware updates and driver fixes going out for various converged network adapters (and I'm sure the aggregators, ala UCS as well as HP FlexFabric and any others).

Applications often do fine if there are network issues (the last round of issues I had was with a manufacturing flaw in a line of 10GbE NICs about two years ago - fortunately, since I had two cards in each server, it never caused an outage on any system when they failed); the network goes down, no big deal, things recover when it comes back.

Storage of course is unforgiving; any little glitch and shit goes crazy. File systems get mounted in read only mode, applications crash hard, operating systems crash hard, etc. The last major storage issue was with a shitty HP P2000 storage system (since replaced with 3PAR) which on a couple of occasions decided to stop accepting writes on both of its controllers until I manually rebooted them. Each time there was at least an hour of downtime to recover the various systems that relied on it. Fortunately it is a very small site.

Keep it simple. If you really, really want to use storage over ethernet, I suppose you could go the iSCSI route, and/or NFS, though that'd certainly be a lower tier of service in my book. I have a friend who has done nothing but QA for a major NIC manufacturer on iSCSI offload for the past decade, and he has just a ton of horror stories that he's told me over the years. That, combined with the wide range of quality of various iSCSI implementations, has kept me from committing to it for anything critical. I still do use it, though mainly for non production purposes, to leverage SAN snapshots to bypass VMware's storage layer and export storage directly to the guests, working around the bullshit UUID storage mappings in vSphere since 4.0.

Now if you're using UCS, I'm sorry; from what I've seen/read/heard those blade systems have very limited connectivity options, so you may be stuck with ethernet-only storage. At least HP (and others I assume) give you options to use whatever you want.

When a good new VM server can cost well over $30k a pop with vSphere Enterprise+ licensing (and a few hundred gigs of ram), the cost associated with FC is totally worth it. I'm sad that Qlogic is getting out of the FC switch business... though they seem to continue to sell their 8Gbps stuff, which I will use for as long as I can. I always found the Brocade stuff more complicated than it needed to be.

6
0

HP execs Bradley and Donatelli ready to walk: reports

Nate Amsden
Bronze badge

leaving at end of the month

vs staying until March

What's the diff?

March of next year maybe?

0
0
