* Posts by Nate Amsden

1679 posts • joined 19 Jun 2007

HPE's Australian tax failures may have been user error

Nate Amsden
Silver badge

Re: Let me guess...

Kind of related here, but on my first all-flash 3PAR system the installer wanted to mount it in the middle of what was left of the rack, for expansion purposes. I have been a 3PAR customer for a long time and knew what I wanted: I needed it right where I told him to put it. He racked it wrong and I had him unrack it and fix it. He said that was a bad idea for upgrades; I told him that with the size of SSDs and the number of available slots in the system it is extremely unlikely we will ever add another shelf to that array in its lifetime (almost two and a half years in, the system is 30% full today; it started at 16% populated, and maybe it gets to 50% in the next 18 months).

1
0
Nate Amsden
Silver badge

Re: Certainly was user error

How can you compare the 1st generation of a blade system to a 5th-generation system (ASIC-wise; system-wise it could be 6th or 7th or more) that has been maturing for at least 14 years now? It would be like saying don't deploy a current HP blade system because the original ones many years ago were bad.

0
0
Nate Amsden
Silver badge

by default

3PAR systems will protect against an entire shelf failing (they call it "cage level availability"), but it does restrict what types of RAID you can use. E.g. if you have only 2 shelves of disks you can only use RAID 10, not RAID 5 or RAID 6. If you have 8 shelves you could use a 6+2 layout if you really wanted RAID 6. If you wanted RAID 5, the minimum would be 3 shelves, for RAID 5 2+1. There are also minimum numbers of disks required on each shelf, as well as minimums when adding to shelves (e.g. if you have a 4-shelf 3PAR system and you want to add more disks/SSDs, the minimum number of SSDs you can add is 8).
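To make the shelf math concrete: with cage level availability every member of a RAID set has to land on a different cage, so the minimum shelf count is simply the RAID set width. A toy sketch of that rule (my own illustration, not a 3PAR tool):

```python
# Toy illustration of the cage-level-availability shelf math described
# above -- my own sketch, not a 3PAR utility. With cage level availability
# every member of a RAID set must sit on a different shelf (cage), so the
# minimum shelf count equals the RAID set width.
def min_shelves(data_disks: int, parity_or_mirror_disks: int) -> int:
    return data_disks + parity_or_mirror_disks

print(min_shelves(1, 1))  # RAID 10 (1+1 mirror) -> 2 shelves
print(min_shelves(2, 1))  # RAID 5 (2+1)         -> 3 shelves
print(min_shelves(6, 2))  # RAID 6 (6+2)         -> 8 shelves
```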

Otherwise the admin of the array can change the default behavior to not protect against a shelf failing (3PAR calls it 'magazine level availability'; although the concept of magazines is no longer in the Gen5 8k/20k hardware, they have kept the term for now). Changing this behavior has no effect on the minimum number of drives per shelf or the minimums on upgrades per shelf, though.

You can also give some volumes cage level availability and others magazine level, just like you can have some volumes on RAID 5, some on RAID 6, some on RAID 10, all while sharing the same spindles/SSDs (or you can isolate workloads on different spindles/SSDs if you prefer, and you can of course move data around between any tier or media type without app impact).

Back in the early days of 3PAR, on eval systems they would encourage the customer to unplug a shelf or yank a live controller to demonstrate the resiliency (provided the system was configured for the default/best practice of cage level availability).

In the 11 years I have had 3PAR I have yet to have a shelf fail, though I do stick to cage level availability for all data wherever possible. The only time I have moved a 3PAR array was at a data center that had weight restrictions per cabinet, so we had installed steel plates to distribute the weight, and needed to get the array up onto the plates. My 3PAR SE plus 1 or 2 other professional services people came on site; we shut the system down, removed all of the drive magazines, moved the cabinets up onto the steel-plated platform, re-inserted everything, re-cabled everything and turned it back on. Those 3PAR systems could get up to 2,000 pounds per rack fully loaded, and I think the data center had a limit of 800 pounds per cabinet or something (in a highrise, 10+ floors up).

7
0

Brocade SAN sales butchered by hyper-converged upstarts

Nate Amsden
Silver badge

Brocade networking down 30%+ due to Broadcom?

HP networking was also down 30%+, so this is possibly a larger issue than uncertainty over Broadcom.

0
0

Australian Tax Office's HPE SAN failed twice in slightly different ways

Nate Amsden
Silver badge

About 7 years ago (before HP acquired 3PAR) I had a big outage on one of my 3PAR arrays. It took about a week to recover everything (actual array downtime was about 5 hrs), as the bulk of the data was on a NAS platform from a vendor that had gone bust earlier in the year, and we had not had time to migrate off of it. In short, the incident report said:

"Root cause has been identified as a single disk drive (PD94) having a very rare read inconsistency issue. As the first node read the invalid data, it caused the node to panic and invoke the powerfail process. During the node down recovery process another node panicked as it encountered the same invalid data causing a multi-node failure scenario that lead to the InServ invoking the powerfail process.

[..]

After PD94 was returned, 3PAR’s drive failure analysis team re-read the data in the special area where ‘pd diag’ wrote specific data, and again verified that what was written to the media is what 3PAR expected (was written by 3PAR tool) confirming the failure analysis that the data inconsistency developed during READ operations. In addition, 3PAR extracted the ‘internal HDD’ log from this drive and had Seagate review it for anomalies. Seagate could not find any issues with this drive based on log analysis. "

Since then the Gen4 and Gen5 platforms have added a lot of internal integrity checking (Gen5 extends that to host communications as well). The platform that had the issue above was Gen3 (the last of which went totally end-of-support in November 2016; I have one such system currently on 3rd-party support).

The outage above did not affect the company's end-user transactions, just back-end reporting (which was the bulk of the business, so people weren't getting updated data, but consumer requests were fine since they were isolated).

I was on a support call with 3PAR for about 5 hours that night until the array was declared fully operational again (I gave them plenty of time for diagnostics). It was the best support experience I have ever had (even to today).

I learned that day that while striping your data across every resource in an array can give great performance and scalability, it also has its downsides when data goes bad.

At another company back in 2004 we had an EMC Clariion CX600 suffer a double controller failure, which resulted in 36 hrs of downtime for our Oracle systems. I wasn't in charge of storage back then and don't know the cause of the failure, though the guy who was in charge of storage later told me he believes it was his fault for misconfiguring something that allowed the 2nd controller to go down after the first had failed. I don't know how that can happen, as I have never configured such a system myself.

3PAR by default will distribute data across shelves so you can lose an entire disk shelf and not have any loss of data availability (unless that shelf takes out enough I/O capacity that it hurts you).

That was by far the biggest issue I have had on 3PAR arrays as a customer for the past 11 years now, but they handled it well and have done things to address it going forward. I am still a (loyal) customer today. I have had other issues over the years, though nothing remotely resembling that.

I realized over the past decade that storage is really complicated, and have come to understand (years ago, of course) why people invest so much in it.

Certainly I don't like knowing there are still issues out there, but at the same time, if such issues exist in such a widely deployed and tested platform, it makes me even more wary of considering a system with less deployment or testing (which you would naturally expect from smaller-scale vendors).

At that same company we had another outage, on our earlier storage system from BlueArc (long before HDS bought them). Fortunately that was a scheduled outage and we took all of our systems offline so they could do the offline upgrade. However, where BlueArc failed is that they hit a problem which blocked the upgrade (and they could not roll back), and they had no escalation policy at their company. So we sat for about 6 hrs while the on-site support guy could not get anybody to help him back at BlueArc. My co-worker who was responsible for it finally got tired of waiting (I think he wasn't aware on-site support couldn't get help) and started raising hell at BlueArc. They fixed the issue. A couple of months later the CEO sent us a letter apologizing and said they had since implemented an escalation policy.

2
0
Nate Amsden
Silver badge

quite possible software bugs

HPE sent me this this morning, about an urgent patch required on one of my 3PAR arrays (Gen5) that addresses problems involving controller restarts and downtime:

http://h20564.www2.hpe.com/hpsc/doc/public/display?docId=c05366405

"This array is vulnerable to unexpected restarts and data unavailability without this critical patch installed. This critical patch includes quality improvements to 3.2.2 MU2 and 3.2.2 EMU2 that prevent unexpected array or controller node restarts during upgrade, service operations and normal operation."

Looks like the release notes were written in December; not sure if the patch is that old and only recently got escalated to urgent, or if the patch is new and just completed testing.

My other arrays (Gen4) run older software, so I guess they are not affected by the issue, though I have been planning on upgrading them so everything runs the same code across the board where possible.

1
0

XPoint: Leaked Intel specs reveal 'launch-ready' SSD – report

Nate Amsden
Silver badge

Re: Not as good as some?

Write endurance hasn't been a factor for most workloads on enterprise systems for years now.

Many (most?) vendors already offer unconditional 3-5 year warranties. Looking at the first AFA my company bought, about 26 months ago (while it doesn't do a lot of traffic relative to what the system is capable of, it is running at about a 90% write workload: databases and close to 1,000 VMs), the oldest SSDs are down to 97% of their expected endurance lifespan. All of these SSDs, if bought on the open market, would be sold as "read intensive". At this rate I suppose I may be lucky if the SSDs drop below 90% of lifetime before the unconditional warranty expires (5 years).

The 2nd round of SSDs was added almost a year ago, and the third round in November I think; those two sets of SSDs are still at 100% of life remaining.
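For what it's worth, the linear extrapolation on those wear numbers looks like this (my own back-of-the-envelope arithmetic; real wear varies with workload):

```python
# Back-of-the-envelope wear extrapolation for the oldest SSDs above
# (illustrative arithmetic only; real wear depends on workload).
wear_pct = 3.0            # endurance consumed so far (100% - 97%)
months_in_service = 26
warranty_months = 60      # 5-year unconditional warranty

projected = wear_pct / months_in_service * warranty_months
print(f"~{projected:.1f}% of rated endurance used after {warranty_months} months")
# -> ~6.9% used, i.e. ~93% of life still remaining when the warranty ends
```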

0
1

Why software engineers should ditch Silicon Valley for Austin

Nate Amsden
Silver badge

work remote

Spent 10 years in the Seattle area. Got sick of it. Spent 5 years in the Bay Area. Got sick of it. For the last 18 months in the Bay Area I worked from home even though the office was 1 mile away.

Now I live in the Central Valley in CA. Cost of housing is less than half. Still close to the Bay Area if I need to go (seems like a few days per month now; the company pays for hotel etc).

I have one teammate who works from home in Australia. Another guy in Spain. My senior director works from home in NY state. Another guy is hoping to move to NH and work from home there. Another guy semi on my team works from home in TX. Another in Kentucky. My director likes to endure his 2+ hours of commute each day to and from the office in the Bay Area, but pretty much everyone else is remote.

I suppose if you're a hippie and like the SF scene, go for it. For everyone else, there's nothing in the Bay Area that justifies the cost of living (unless you are fortunate enough to make at least $200k/year).

Maybe I will get sick of where I am at some point. But I do not see myself returning to live in the Bay Area or Seattle (technically the east side, in Bellevue; I absolutely hated Seattle itself) except as an absolute last resort.

5
0

Dell's XtremIO has a new reseller in Japan – its all-flash rival Fujitsu

Nate Amsden
Silver badge

Re: Bluray vs. HD-DVD?

Blu-ray is certainly big out there, whether in standalone players, PS4s and Xbox Ones, or in the larger-scale archive space (El Reg has had a couple of articles on massive-scale Blu-ray archiving).

The things you speak of are nice and fancy, but the reality is that it will be a long time before traditional storage goes away, and pretty much all of the major vendors (I can't think of any exceptions) have products or technologies in the newer spaces, and have had them for years.

People have been saying for as long as I can remember that tape is dead, yet capacity shipped for tape continues to grow.

Storage is a tough thing to get right, and really complex; distributed storage even more so.

4
0

Intel's Atom C2000 chips are bricking products – and it's not just Cisco hit

Nate Amsden
Silver badge

Re: Uh oh

Looks like those have a 3-year warranty, so I would be surprised if they didn't fix it... but maybe you have to wait for them to fail.

1
0

Pure unsheathes the FlashBlade, cuts out NetApp legacy system

Nate Amsden
Silver badge

reminds me of an old 3par slide

Long before the days of flash; I think this was probably the 2005 or 2006 time frame:

http://elreg.nateamsden.com/3par-netapp.jpg

The comparison was for 208TB usable with 86,000 IOPS at 20ms latency. The claim was 13 clustered NetApps in 21 cabinets vs 1 3PAR in 4 cabinets.

0
0

GitLab.com melts down after wrong directory deleted, backups fail

Nate Amsden
Silver badge

my money would be on bad management

It seems like their setup was rather fragile. I'd put my money on not having enough geek horsepower to do everything they wanted to do; I have been in that situation many times. Even after a near disaster with lots of data loss (and close to a week of downtime on back-end systems), the company at the time approved the DR budget only to have management take the budget away and divert it to another underfunded project (I left the company weeks later).

One place I was at had a DR plan and paid the vendor $30k a month. They knew even before the plan was signed that it would NEVER EVER WORK. It depended on using tractor trailers filled with servers, and having a place to park them and hook up to the interwebs. We had no place to send them (the place the company wanted to send them flat out said NO WAY would they allow us to do that). We had a major outage there with data loss (maybe 18 months before that DR project); they were cutting costs by invalidating their Oracle backups every night in order to use them for reporting/BI. So when the one and only DB server went out (storage outage) and lost data, they had a hell of a time restoring the bits of data that were corrupted, because the only copy of the DB had been invalidated by opening it read-write for reporting every night (they knew this in advance, it wasn't a surprise). ~36 hrs of hard downtime there, and they still had to take random outages to recover from data loss every now and then for at least a year or two afterwards. They never once tested the backups (and the only thing that was backed up was the Oracle DB, not the other DBs, or web servers etc). Ops staff were so overworked and understaffed, with major outages constantly because of bad application design.

Years later, after I left, I sent a message to one of my former teammates and asked him how things were going; they had moved to a new set of data centers. His response was something like "we're 4 hours into downtime on our 4 nines cluster/datacenter/production environment" (or was it 5 nines, I forget).

I've never been at a place where even, say, annual tests of backups were done. Never time or resources to do it. I have high confidence that the backups I have today are good, but less confidence that everything that needs to be backed up is being backed up, because in the past 5 years I am the only one who looks into that stuff (and I am not a team of 1); nobody else seems to care enough to do anything about it. Lack of staffing, too few people doing too many things... typical I suppose, but it means there are gaps. Management has been aware, as I have been yelling about the topic for almost 2 years, yet little has been done. Though progress is now being made, ever so slowly.

At the place that had a week of downtime, we did have a formal backup project to make sure everything that was important was backed up (there was far too much data to back up everything, and not enough hardware to handle it; much of it was not critical). So when we had the big outage, sure enough people came to me asking to restore things. In most cases I could do it. In some cases the data wasn't there -- because -- you guessed it -- they never said it should be backed up in the first place.

Been close to leaving my current position probably a half dozen times in the past year over things like that (backups are just a small part of the issue, and not what has kept me up at night on occasion).

I had one manager 16 years ago who said he used to delete shit randomly and ask me to restore it, just to test the backups (they always worked). That was a really small shop with a very simple setup. He didn't tell me he had been deleting shit randomly until years later.

It could be the geeks' fault though. As a senior geek myself, I have to put more faith in the geeks and less in the management.

2
0

Between you and NVMe: NetApp dishes on drives and fabric access

Nate Amsden
Silver badge

netapp flash cache

Does it cache writes now? I just tried poking around for some docs but didn't find an immediate answer. Last I read/heard (4-5 years ago), their flash cache was for reads only (for the org I am in, where we have roughly 90% writes, caching reads in flash doesn't excite me).

3PAR (I am a customer) does flash caching as well, but their architecture likewise limits the flash cache to reads (unless things have changed recently). EMC's flash cache could/can do both reads and writes; I never used it so don't know how well it works, but it sounded good.

I think removal of SCSI overhead from a typical enterprise array will probably not have a noticeable impact on overall performance (as in freeing up enough CPU cycles to do other things; of course in the 3PAR world those operations are performed in ASICs).

But if/when NVMe gets to the same price (maybe +/- 10% even) as typical SCSI/SAS, then there will be little reason not to do it, just because... well, why not. While the overhead of SCSI does introduce latency, I also expect it to be more robust. Talking with one NVMe startup CEO and his team about a year ago, I was kind of scared by the lengths you need to go to in order to get high performance (direct memory access etc); it just seems... very fragile.

0
0

What might HPE do with SimpliVity?

Nate Amsden
Silver badge

competitors should be scared

The smaller ones anyway, probably not DEMCVMWARE or Nutanix. All of the smaller-fry HCI folks should be scared though (I have little doubt they won't publicly admit it); maybe one or two more will get lucky and get bought by Lenovo, or maybe HDS or NetApp or Cisco. Their niche just got a lot smaller.

(Disclaimer: I have never used HCI anything from anybody; my shit remains entirely unconverged at this time.)

After reading the article I wanted to toss out there that 3PAR has deduplication of course, but it lacks compression. That feature is coming... it has been coming since I first heard about it 5 or 6 years ago... it's just around the corner... or so I was told 18 months ago. Hopefully the wait will be worth it (3PAR customer for 11 years now).

1
0

Happy Friday: Busted Barracuda update borks corporate firewalls

Nate Amsden
Silver badge

Re: Why even use hardware firewalls in the first place?

Yeah, as in most things these days, much of the value is in the software. And for sure at least many SonicWalls do not run x86 CPUs (I want to say none do, but that may not be accurate).

With something like a SonicWall able to scale to 96 CPU cores (high end), OpenBSD being stuck at 1 is obviously not a good sign of progress on the software front.

I don't know if Linux is any better. I use Linux on 99% of my systems, though my (personal) firewalls run OpenBSD, and my work firewalls have been SonicWall for the past 5 years (no complaints).

Last I checked, F5 used Linux underneath (and Citrix uses BSD), but both run pretty custom networking stacks to get high performance. F5 was limited to a single CPU up until about 2008 or 2009 I think; they had SMP boxes before that, but network traffic couldn't scale beyond 1 CPU (the 2nd CPU could be used for 3DNS or something).

1
0
Nate Amsden
Silver badge

Re: Why even use hardware firewalls in the first place?

Do you recall the story of the FBI putting a backdoor in the BSD IPSec stack? Someone recently told me about that. I'm sure I heard about it at the time, but had forgotten until recently.

http://www.theregister.co.uk/2010/12/15/openbsd_backdoor_claim/

To me it's impossible to tell for any given vulnerability whether it was deliberate or not; it would be difficult to prove either way.

1
0
Nate Amsden
Silver badge

Re: Why even use hardware firewalls in the first place?

OpenBSD firewalls are nice (I have been using OpenBSD with pf since about 2004, and FreeBSD with ipfw before that I think), but hardware appliances are typically capable of a lot more, mainly in layer 7. If all you need is basic layer 3/4, then OpenBSD can be fine depending on support requirements (installing it is still a pain for me, but I don't do it very often). If you want deep packet inspection with rules able to handle that, the commercial boxes tend to have those features in a more user-friendly form.

Looks like OpenBSD is still limited to 1 CPU for PF (https://www.openbsd.org/faq/pf/perf.html), which is too bad; I would have thought that had been addressed by now. With such powerful multi-core systems on the market it would be pretty cool to see. My first "big" OpenBSD firewalls were in 2005, a pair of dual-socket single-core boxes I think, with pfsync running between them and about 8x 1Gbps interfaces. Though actual throughput was limited to around 500Mbit (I think because of interrupt overhead? The CPU never got close to being pegged).

500Mbit is of course plenty for most internet connections. That particular use case was a bridging OpenBSD firewall between internal gig network segments.

4
0
Nate Amsden
Silver badge

not understanding

Is this literally a firmware update that was pushed out (as in an OS upgrade or something)? I didn't know any vendor did that for any kind of product. Or was it just a config update/change?

0
1

Apple eats itself as iPhone fatigue spreads

Nate Amsden
Silver badge

my last upgrade

was HP Pre3 to Galaxy Note 3 (still use it). I bought a Note 4 recently too, but it's sitting on a shelf; I'm fine with my Note 3. The Note 4 has more pixels, but for casual use I can't see a difference (I'm sure I could under certain conditions). I haven't personally owned any other Android phones, and no iOS either.

The specs were pretty stark:

Telco network speeds 7Mbit -> 42Mbit (6X)

Screen 384,000 pixels -> 2,073,600 pixels (5.4X)

CPU Single core 1.4Ghz -> quad core 2.3Ghz (guesstimate at least 5X)

Memory: 512MB -> 3GB (6X)

Storage: 8GB(internal, no SD slot) -> 32GB(internal), 128GB SD (20X)

Camera: 5MP -> 13MP (though I use it in 8MP mode) (1.6X)

Battery: 1230mAh -> 3200 mAh (2.6X)

Weight: 156g -> 168g (though it weighs quite a bit more now with the wireless charging back and glass screen protector; I don't have a scale to know exactly how much more, maybe 200g)

The note 3 has replaced any use I had for tablets as well.

Sometimes I miss the keyboard of the Pre3 though my hands are very big and it was hard to type on anyway. The stylus on the note is handy for precision stuff for sure, though I don't do much hand writing or drawing.

The Note 3 works fine for what I do. It would have been nice to get more security updates, but I am very careful about what I use the phone for anyway.

0
0

IBM old guard dropping like flies in POWER and cloud restructure

Nate Amsden
Silver badge

"We are aggressively reinventing our systems portfolio for cloud, data and AI,"

Maybe those jobs are going to WATSON

1
0

Windows 10 networking bug derails Microsoft's own IPv6 rollout

Nate Amsden
Silver badge

Re: IPv6...

doesn't that whole concept of route organization go out the window if people start taking their subnets with them to other ISPs (perhaps in other parts of the world?)

or does that not happen at the low end (is that what you mean by "private address space"?)

1
0
Nate Amsden
Silver badge

DNS generally is quite stable, you are correct, but I would have to believe that many people who deal with networking stuff have several of their key IP addresses memorized, if nothing else out of habit. Of course I am referring to internal IPs; I don't memorize the IP addresses of websites. But for things like key network devices, key servers etc, often the IPs just stick (not that I practice or try to memorize them).

I do remember one time, a looong time ago, someone edited a zone file, put an invalid character in it, and reloaded BIND. BIND gave a syntax error but kept using the old zone file. Fast forward 6 months: the server is rebooted, or BIND has to be restarted for some reason, and the zone (the important zone) drops out. Big problems... fortunately I wasn't at the company anymore when that happened :) At about that time I started writing my own custom DNS validation script, which at the present time does a dozen different checks as part of the zone update process, to make sure the update goes through smoothly or aborts before anything bad can happen. I recall one time more recently a co-worker put an invalid character in a host name and ran the script, which runs named-checkzone as part of the integrity checks. The zone checker passed, but then my script failed the update when it noticed the DNS server did not reload the zone properly (the serial number being served didn't match what was in the zone file). It took a while to track down; I thought it was a bug in my script, but it turned out it did exactly what it was supposed to do. I was happy.
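That serial check is easy to reproduce. Here is a minimal sketch of that style of validation (my own illustration, not the actual script, which does a dozen checks); it assumes BIND's named-checkzone and dig are on the PATH, and the zone name, file path and server are hypothetical placeholders:

```python
# Minimal sketch of a zone-update sanity check: validate the zone file,
# then confirm the running server actually serves the serial from the file.
import re
import subprocess
import sys

zone = "example.com"                    # hypothetical zone
zonefile = "/etc/bind/db.example.com"   # hypothetical path
server = "127.0.0.1"

# 1. Syntax check; named-checkzone prints "loaded serial NNN" on success.
check = subprocess.run(["named-checkzone", zone, zonefile],
                       capture_output=True, text=True)
if check.returncode != 0:
    sys.exit(f"named-checkzone failed:\n{check.stdout}{check.stderr}")
m = re.search(r"loaded serial (\d+)", check.stdout)
if not m:
    sys.exit("could not parse serial from named-checkzone output")
file_serial = m.group(1)

# 2. (The zone reload itself would happen here, e.g. via rndc reload.)

# 3. Ask the server which serial it is actually serving, and compare.
#    dig +short SOA returns: mname rname serial refresh retry expire minimum
soa = subprocess.run(["dig", "+short", "SOA", zone, f"@{server}"],
                     capture_output=True, text=True).stdout.split()
served_serial = soa[2] if len(soa) > 2 else None
if served_serial != file_serial:
    sys.exit(f"serial mismatch: file={file_serial}, served={served_serial}")
print(f"zone {zone} OK at serial {file_serial}")
```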

I can think of dozens if not hundreds of times over the years I have used raw IP addresses to try to connect and test stuff when things went foobar, or when I just wasn't sure what was going on. And I can't remember the last time I had a problem with DNS availability at the server end (e.g. my production network runs 8 load-balanced name servers internally, and yes, we use Dynect DNS for external); normally the issues are client-based.

Now MAC addresses, those I never remember; hell, I can't even remember or recognize the MAC prefix, let alone the whole thing.

12
0
Nate Amsden
Silver badge

I hadn't yet heard of IPv6's larger MTU sizes (or if I did, I forgot), but I would be curious to see how much impact they have on performance vs IPv4 at 1500 or 9k jumbo. Most folks tend to think that 9k jumbo doesn't really give any real benefit over 1500, even at 10G speeds. Myself, I do run jumbo frames on a dedicated cluster network segment for my VMware hosts (mainly for vMotion / fault tolerance traffic on dedicated NICs). Evacuating a VM host I can pretty much peg the 10G link (the only time I ever come close to saturating a 10G link), though I haven't done any comparisons with standard frame sizes or measured CPU usage etc.
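For a sense of why jumbo frames can matter for CPU/interrupt load at 10G even when raw bandwidth barely changes, here is the packet-rate arithmetic (standard Ethernet per-frame overheads; my calculation, not a benchmark):

```python
# Packets per second needed to saturate a 10G link at different MTUs
# (illustrative arithmetic only, not a measurement).
LINK_BPS = 10e9
OVERHEAD = 18 + 20   # 18B Ethernet header+FCS, 20B preamble + inter-frame gap

for mtu in (1500, 9000):
    pps = LINK_BPS / ((mtu + OVERHEAD) * 8)
    print(f"MTU {mtu}: ~{pps:,.0f} packets/sec to fill 10G")
# MTU 1500: ~812,744 pps; MTU 9000: ~138,305 pps -- about 6x fewer
# packets (and interrupts/DMA completions) for the same throughput.
```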

I also allocate 9k frame sizes for my iSCSI stuff, though the clients to date are all standard frame, so with MTU auto-negotiation jumbo never gets used on iSCSI (I had plans to put clients in dedicated jumbo-frame VLANs but never really needed to).

2
0
Nate Amsden
Silver badge

Re: "...what chance is there for any other company to?"

I read that "what chance is there" as for more typical SMB type shops. T-mobile is obviously a very large service provider which has tons of internal network expertise on staff. When I deployed my first layer 3 network back in 2004 the network engineer I was working with was struggling with the concept of routing(maybe took him a good 18 months or so to really start to think at layer 3, that concept was new to me too at the time though I picked it up quick).

Another network engineer I worked with years later struggled with why it's a bad idea to set TCP/UDP session timeouts on a firewall to "1 week", and why, when the state table filled up and the firewall stopped passing new traffic, failing over to the backup didn't fix the problem (state replication, duh). His solution until I came along was to power cycle both firewalls simultaneously (and he said Cisco support did not have any ideas either as to the cause of the outages; they didn't realize it was state-table related until I showed up).
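The state-table arithmetic behind that failure mode is worth spelling out (all numbers below are made up for illustration):

```python
# Rough state-table arithmetic: entries linger for the full timeout, so
# steady-state usage is roughly new_sessions_per_second * timeout_seconds.
# Capacity and rates here are hypothetical.
TABLE_SIZE = 1_000_000          # hypothetical state-table capacity
TIMEOUT_S = 7 * 24 * 3600       # the "1 week" timeout

for sessions_per_sec in (1, 5, 50):
    steady_state = sessions_per_sec * TIMEOUT_S
    verdict = "overflows" if steady_state > TABLE_SIZE else "fits"
    print(f"{sessions_per_sec}/s new sessions -> ~{steady_state:,} "
          f"entries ({verdict} a {TABLE_SIZE:,}-entry table)")
# Even ~2 new sessions/sec exceeds a million entries within the week, and
# failing over doesn't help because state replication copies it all over.
```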

So the point is there are a lot of network engineers out there who have a hard enough time in the IPv4 world; don't try to put IPv6 on them. Let alone people who aren't "network engineers" and don't even understand the concept of, say, a default gateway.

3
1
Nate Amsden
Silver badge

Re: Not that awful

I, for one, like NAT. Having independence in IP addressing from an external entity is nice. At my first real job back in the 90s we had a /24 for the office; all computers had a real internet IP (statically assigned if I recall right). I believe we had a firewall, but I don't remember. I do remember at one point the ISP said they were forcing an IP range change on us, and it was quite a lot of effort at the time to change everything over (I wasn't in the IT group but did help with some things on the side). With DHCP on the client side that would be easier these days, for sure.

Obviously (or not; who knows what people expect these days) my nearly 1,000 servers/VMs use static IPs (assigned during provisioning), and it's very handy to know I can hook up any ISP externally, on any IP space, and not have to worry about changing things internally, since NAT takes care of that for me.

IPv6 folks like to pimp the idea that you can just get a routed subnet that is portable across ISPs etc, which I'm sure in some cases (or maybe most, I don't know) is possible, but it's still extra steps that don't bring me any benefit to make up for the headache that is IPv6.

I also like the ability to define my own internal networks; obviously, with IPv4 having been in short supply for a long time, doing that with real IPs is impossible for most orgs. So NAT to the rescue. At the end of the day NAT works for the vast majority of use cases out there, and as the old saying goes, if it ain't broke don't fix it.

I even use NAT within my laptop: my VMs running in VMware Workstation have a "host only adapter" configured with a private subnet that connects to my laptop, so no matter what subnet my laptop happens to be on I can always connect to the VMs (hosts file entries), since the internal IP is always there.

22
1
Nate Amsden
Silver badge

One article? I've been loosely following IPv6 for what seems like close to a dozen years now and I still have absolutely no interest in it (and yes, I do run networks as part of my regular job, and have been for nearly 17 years now, though network engineer is not my "primary" role). The only real place these days where it MIGHT be needed is client networks, whether broadband, mobile etc. Many of those are on carrier-grade NAT (including my phone on AT&T's network; just checked, again a 10.x IP address, and no it's not on wifi). I have never had an issue tethering through my phone to AT&T's CGNAT for any reason (other than shit signal strength).

My org needs only a small amount (3 /27s) of IPv4 space in our data centers; lots of name-based virtual hosting via Citrix NetScalers, plus SNI for SSL, allows a large amount of reuse of external IPs. I have a server at a Hurricane Electric colo with about a half dozen IPs (they even asked me if I wanted IPv6 and I said no because, well, I don't care about it).
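The SNI mechanism that makes that IP reuse possible is visible from any client: the requested hostname travels in the TLS ClientHello, so one listener on one IP can hand back a different certificate (and site) per name. A quick standard-library illustration (the hostnames are placeholders, not my actual sites):

```python
# Demonstrate SNI: the client names the site it wants inside the TLS
# handshake, so many HTTPS sites can share a single external IP.
import socket
import ssl

ctx = ssl.create_default_context()
for host in ("example.com", "example.org"):   # hypothetical shared-IP sites
    with socket.create_connection((host, 443), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            subject = dict(x[0] for x in tls.getpeercert()["subject"])
            print(f"{host}: certificate CN = {subject.get('commonName')}")
```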

I too dislike the hex-like addressing scheme. But at the end of the day IPv6 gives me nothing. Now, if it is implemented upstream from me in a transparent way, then I don't give a shit.

The people I have seen who complain the most about NAT and IPv6 seem to be heavy proponents of peer-to-peer stuff, or VoIP, or other use cases that NAT has historically had some issues with. Though VoIP through NAT doesn't seem to have been a problem for most for maybe a decade now, and I really have no interest in peer-to-peer anything.

Oh, and those who bitch about overlapping IP spaces: I admit that can be an issue for larger companies that may be connecting to many others, or acquiring companies. Personally, though, I have not witnessed an IP overlapping issue that either I or a network engineer at a company I worked at has had to deal with since, I want to say, roughly 2004 (maybe that's a hint that the companies I have been at often have no VPNs etc to outside organizations).

Feel free to keep playing around with your IPv6 tunnels though if it makes you happy. I understand the "big guys" in the service providers need to get serious about IPv6 in many cases, but 98.424324325% of the rest of the world needn't care.

23
5

Google loses Android friends with Pixel exclusivity

Nate Amsden
Silver badge

Re: what does android updates have to do with ads

Citation needed? Are you kidding me? Just look at the sales of Android devices that have a track record of not getting updates. All I read is complaints about how it seems every major Android vendor and carrier doesn't send patches.

That hasn't stopped sales, so it's obvious it's not a problem for consumers. If updates were that critical then iOS would have a larger market share; what is it down to, 15% now or something?

5
2
Nate Amsden
Silver badge

what does android updates have to do with ads

People get the ads in the apps regardless of what version of Android they have.

And the global market has spoken loudly that people don't care about getting OS updates (there is a vocal minority that does care of course).

For me, of course, I'd like to get security patches (I'd even pay for a subscription service), but I don't want UI changes.

It would also be nice to have the ability to roll back any update, whether OS or app, in the event it causes problems. I think I read that people can do this on iOS for OS updates, but not on Android (maybe it's possible after rooting with an unlocked bootloader, I don't know).

From my Galaxy Note 3 on Android 4.4.4. I use another Note 3 on a regular basis with Android 5 and it's quite frustrating to use. I also bought a Note 4 recently, hoping it would have 4.4.x too, but it had 5.0.1 I think, which was annoying as well. Google was releasing patches for 4.4.4 as recently as September 2016.

Some day I won't have a choice anymore; in the meantime I do my best to keep wifi off so AT&T can't upgrade my phone to 5.

7
0

UK.gov departments are each clinging on to 100 terabytes of legacy data

Nate Amsden
Silver badge

funny

How some people see a number like 100TB and try to extrapolate that to some small number of hard drives you can buy from any number of places.

More likely such data is stored in a dozen or more racks of storage arrays (maybe averaging 300 to 450GB enterprise FC or SAS disk sizes) for online data processing.

0
1

Red Hat's OpenShift Container Platform openly shifts storage into the hands of devs

Nate Amsden
Silver badge

no mention of chargeback?

How can anyone allow self-provisioning without constraints, or at least chargeback for budgeting?

4
0

HPE gobbles SimpliVity for US$650m – well below recent valuations

Nate Amsden
Silver badge

Re: Not the right price

Or risk losing out to someone like Lenovo or maybe HDS by doing so. Who knows, maybe someone else will make a bigger bid. Back when 3PAR was bought, Dell thought it had the deal; my rep came to tell me they were going to Dell, and bam, a day or so later HP came out with hostile bids.

3
0
Nate Amsden
Silver badge

Re: Buy Simplivity now? Really?

Haven't used any HCI myself, but the CPU overhead for storage just seems insane; I have heard or read that some systems want 4 cores per node to do that shit. Having an ASIC sounds great on paper, though as another poster said perhaps it is slow; I don't know, of course, having never used it. My latest boxes already use 2x 22-core Xeons; I wouldn't want to give up 4 cores even from those.

Beefing up the Xeons is difficult because, well, of course they have to wait for Intel to do that.

0
0

Snapchap snaps back: Snapchat Snapbrats' Snapstats are Snapcrap

Nate Amsden
Silver badge

Re: Who to believe?

Easy for me, having worked for 2 different social media companies that imploded. Bullshit numbers are the name of the game. At the last social media company I was at, the web traffic numbers were going down at a 45-degree angle on the graphs, month after month.

In my exit interview they had the balls to say I was wrong and their traffic wasn't going down the tubes. I only ran the servers that ran the site; what could I possibly know about website traffic that HR didn't?

At the first social media startup we delivered tons of email. Companies like Yahoo and Google started banning us because of too many bounces. We asked to remove those users from the system so there would be fewer bounces and more email would go through. The answer was "no, that would hurt our user numbers".

That company was basically in the same market as LinkedIn. They would have contests to see how many users employees could get to sign up (same at the 2nd social media company too). I never participated in that shit. My LinkedIn network (1st to 3rd degree?) alone literally exceeded the number of users at my employer's social media company, which at the time was around 50k I think. Clients would pay big bucks to use the site to recruit people. The software was clunky and buggy. We had to fly people on site to hold their hands to use the software. That was expensive. They stopped that practice and usage dropped like a rock overnight.

Oh oh!! I forgot this bit too. That company was at one point going to be THE partner for social hiring shit on Facebook. They had an agreement and everything. Then literally at the last minute (a few days from launch) Facebook opened up a bunch of APIs and killed the agreement. They gave us some free advertising credits or something to compensate. I laughed so much. Such clueless folk. The CEO has started several companies, and last I heard every single one has gone down in flames. Why the hell do people keep giving that guy money? One of his more recent companies lost several hundred million or something. He seems to be always looking for the next SHINY.

The only social media I use is LinkedIn. Very light usage of that, even. But I do find it useful for career shit. I am terrible at keeping in touch with folks.

3
0

Cancel! that! yacht! order! Marissa! – Verizon's! still! cold! on! Yahoo! gobble!

Nate Amsden
Silver badge

Re: Oh they fucking know

I don't think they know. Verizon wants one thing (get out, or get a much cheaper price), Yahoo wants another (stick to the deal). Sounds like the contract to buy didn't have enough legalese to settle the situation.

I wouldn't be surprised if it ended up in court. Kind of surprised it hasn't already. I assume that's because both companies fear losing control over the situation by handing the decision to a 3rd party.

3
0

Regular or premium? Intel pumps out Optane memory at CES

Nate Amsden
Silver badge

Re: Useful for?

My Lenovo P50 has dual 512GB Samsung 950 Pros (PCIe), and a SATA Samsung 850 Pro 1TB.

The sizes of these are so small that Intel is really having to scrape the bottom of the ocean to find a use case.

Most orgs will want hot-swap, high-availability storage for their databases.

3
0

This 'cloud storage' thing is going to get seriously big in 2017

Nate Amsden
Silver badge

Re: Stick with RAID and LTO instead.

Amazon? The company that built themselves on the back of Oracle databases and Hitachi storage?

0
0

Splunk slam dunk as FC SAN sunk by NVMe hulk

Nate Amsden
Silver badge

Re: Nice, if you don't know how Splunk works

Won't touch Supermicro with a 10-foot pole for anything that has a scent that makes me think of a production workload.

So there's one reason.

Been using Splunk for 11 years; I think I know how it works.

1
0
Nate Amsden
Silver badge

70 billion is nothing for splunk

Unless you need to search over all 70 billion at once.

The company I am at has 30 billion events in Splunk without a special setup.

No takeaway here; everyone knows NVMe is fast.

0
0

Uber's self-driving cars get kicked out of SF, seek refuge in Arizona

Nate Amsden
Silver badge

maybe uber should move to AZ too

If AZ is so friendly, maybe Uber should take their employees there too.

Glad that they were held to the rules in one case though. What I'll perhaps never understand is why they didn't just register. I mean, with the amount of cash they are burning they don't need the revenue from those self-driving cars picking up people. Just drive them around the city and simulate such activity; they could even launch wads of (1? 5? 10? more?) dollar bills out the window every now and then to attract attention.

Maybe Uber will finally implode some day; that would be a good day for me.

I don't ride taxis often, a few times a year generally (in the U.S. I either drive to where I am going or rent a car at the destination). To date I do not recall a negative taxi experience (typically I book them, by phone, at least 6 hours in advance, so maybe that is part of it).

5
0

Bad news: Exim hole was going to be patched on Xmas Day. Good news: Keyword 'was'

Nate Amsden
Silver badge

Re: unattended-upgrades

Debian user for 18 years (for personal stuff; work stuff is mostly Ubuntu, and before that CentOS or RHEL), though I still do not trust unattended updates, even on stable (I haven't run testing since 2003 I think, and never ran unstable). I don't recall issues off the top of my head, but I still prefer the peace of mind of knowing that the change is going through.

To my knowledge none of my personal systems have ever been compromised (I have run internet-connected Debian systems since 1998, and Slackware before that; Debian powers my personal email server, DNS, etc), and on my employers' systems, the only ones I was involved with that were compromised were ones I was not responsible for (on that note, the number is 3 or 4 compromises over the past 16 years).

Perhaps too paranoid, or not paranoid enough; not sure.

2
2

Gluster techie shows off 'MySQL of object storage' Minio projects

Nate Amsden
Silver badge

anyone using mysql for simple key value store

is probably using the wrong product.

2
0

HPE says 3PAR problem that broke Australia was a one-off. Probably

Nate Amsden
Silver badge

got to be careful with high automation

Trying to get the most efficient and automated system possible, there will always be gaps, and sometimes big ones that people don't expect.

If you want a "real" backup then the backup cannot be connected to the primary (e.g. real time replication or clustering). A tightly integrated backup protects against many failure scenarios but obviously cannot protect from all.

I endured a similar event on a 3PAR system close to 7 years ago now, and I learned a lot during the process. The support (at the time) was outstanding (since HP took over it has been closer to adequate than outstanding), and it made me a more loyal customer as a result. That case 3PAR determined was a one-off as well (at least at the time). The backups I had at the company were limited to small-scope tape backups due to limited budget. Fortunately I was able to pull some miracles out of my ass and bring everything back online in a few days (the storage array itself was back online in a few hours). After all of that, the company axed the disaster recovery budget I had worked on for a month, in order to give the funds to another project that they had massively under-budgeted. I left a couple of weeks after that.

I was part of another full-array-failure data loss event more than a decade ago, on an EMC system; that was an interesting experience as well. I wasn't responsible for that system at the time (I supported the front-end apps). Maybe 35 hours of downtime, and we were recovering from the occasional corrupted-data thing in Oracle for the next year or two that I was at the company.

The key is of course to realize no system is invincible. There are bugs, there are edge cases, and in highly complex environments those can be nasty. It's certainly very unfortunate that this customer got hit by one of those, but it wasn't the first, and it won't be the last.

The biggest outages I have been a part of have been application-stack related.

Some of the more recent management I work with freak out when shit is down for an hour or two; oh my, they have no idea how bad things can get.

This kind of thing has also kept me more in HP/3PAR's court (a customer now for almost 11 years), because if this kind of thing can happen to a storage platform that is roughly 10 years old, then I can only imagine the issues that can happen with the startups. These big 3PAR boxes get a lot more testing and more deployments etc.

But it's also probably an indication that HP won't ditch Hitachi for the ultra high end just yet (where they have 100% guarantees).

In general perhaps I am lucky, or maybe just lazy, that I don't encounter more issues, because I tend not to leverage much of the functionality of the systems I use. Take 3PAR, for example: some people are surprised that I haven't used the majority of the software available for the system (e.g. never used replication). Part of that is budget; part of that is I know there are more bugs in the more complex things (on any platform).

Same with VMware: I have filed on average 1 ticket with HP/VMware support per year over the past 4 years, currently running almost 1,000 VMs. It runs smooth as hell, with very few issues, and again much of the more advanced stuff goes unused even though we use Enterprise+ (we do use the distributed virtual switches and host profiles that are in Ent+). I have seen lots of complaints over the years about VMware bugs that I honestly have never seen, I guess because I just don't have a need for those features. The only crashes I have had have been because of hardware failures (maybe 6 in the past 5 years, and none in the 6 years before that, at least while I was at those companies). And no, no plans for vSphere 6 anytime soon.

Same goes for my Ethernet switches; the feature set I need on those hasn't changed in a decade. The list goes on...

At the end of the day you have to realize what you are protecting against. Right now I am trying to get a tape system approved (with LTFS over NFS) for offline backups. What I am protecting against there is someone breaking into our systems and deleting our data AND our backups. Having offline tape (stored off site) is a good, tried-and-true method of protecting data. I don't ever expect to use it; we use HP StoreOnce for backups and off-site backups, but still, someone could delete data from those just as they could delete data from an API-based cloud system.

Coordinating the return of all of our tapes and then deleting them is a far bigger task.

Dealing with tape directly isn't fun. I am hoping that LTFS over NFS will make it pretty easy, since all of our backups write to NFS as-is (on StoreOnce), so adapting them to LTFS should not be difficult. I am certainly aiming to avoid working directly with fancy tape backup software, at least.

It would be really cool if StoreOnce could automatically integrate with tape, so I could write over NFS to StoreOnce and then have it write to tape on the back end. It would remove some steps I will otherwise have to do myself. I know there is 3PAR-to-tape automation, but that is too low level and relies on use cases that don't cover what I do for the most part.

9
0

Samsung, the Angel of Death: Exploding Note 7 phones will be bricked

Nate Amsden
Silver badge

does this update require wifi?

(Note 3 person not Note 7)

Just curious. E.g. I have been blocking AT&T's attempts to install Android 5.0 on my Note 3 for probably 18 months now, just by keeping wifi off 99% of the time, since the update requires wifi to be enabled.

(I have another AT&T Note 3 with the latest supported Android 5 on it, and I still much prefer 4.4.4.)

So I am just curious: if people wanted to block this update, might it be as simple as doing what I am doing?

6
0

We grill another storage startup that's meshing about with NVMe

Nate Amsden
Silver badge

hey lior

Just saying hi to Lior (one of the founders; I know him and a couple of other Excelero folks from the Exanet days). I assume he is reading this article.

On the topic of host vs controller caching, one important aspect is whether you are caching reads or writes. Caching writes at the host layer is much more complicated, of course. QLogic wanted to try this with their Mt Rainier tech (and I was excited about it at the time), but from what I heard that aspect of the tech never got close to making it.

Now, if you are hyper-converged, then caching writes at the host is more feasible, I imagine.

For me, the bulk of my org's workloads are in excess of 90% write. Most of the read caching is handled in memory in the app layers (it wasn't designed with that purpose in mind; it just ended up being that way).

0
0

Standards body warned SMS 2FA is insecure and nobody listened

Nate Amsden
Silver badge

educate the users

but don't remove the option.

At my org, where we use Duo, approx 18% of the users (according to the monthly Duo report) use SMS or voice, and about 65% use the Duo Push app (most of the rest use a Duo-generated passcode). These numbers haven't changed over the past 6 months. At one point I noticed that, for whatever reason, people located in what might be considered non-1st-world countries (not knowing off the top of my head what constitutes 1st world) seemed to be more likely to use voice as the 2nd factor, at least in my org.

I don't expect Duo to remove the option, though I suspect their admin UI has the ability to turn off various forms of 2-factor if companies wish to (I haven't checked).

0
0

Congrats America, you can now safely slag off who you like online

Nate Amsden
Silver badge

now if

only every social media type thing that allowed you to "like" something would allow you to "dislike" as well, then maybe we could make some more progress.

29
8

Nutanix makes thundering great loss, stock market hardly blinks

Nate Amsden
Silver badge

Re: So

judging a company on one quarter's numbers? The graph is right there in the article. It reminds me a lot of Violin's graph; here is an article from 2013 about them:

http://www.theregister.co.uk/2013/11/25/mistuned_strings_on_violin_cause_financial_discord/

The heat is on to keep the growth going.

I think if anyone is buying into VDI right now, it is probably safe to assume Nutanix will be around in 3-4 years, even if they are in the toilet at that point (see how Violin continues to hang on by a thread).

0
0

Internet Archive preps Canadian safe haven to swerve Donald Trump

Nate Amsden
Silver badge

Re: Over reaction?

Yeah, for sure it's an overreaction; the Internet Archive is already subject to existing laws, whether for copyright, porn or other things. Whether anyone has ever gone after them for such things, I have no idea.

If they are going to move somewhere, there have got to be better places to go than Canada, which is what, the 52nd or 53rd state or something?

11
18

Storage newbie: You need COTS to really rock that NVMe baby

Nate Amsden
Silver badge

what does that mean?

"The idea behind E8 is that not 100 per cent of what an AFA does inside the array must be done in the AFA itself, and since NVMf requires a high-bandwidth low-latency network anyway, there will be no performance hit if those things are done outside the array"

What are those things? It sounds like this thing has absolutely no data services, and that is how they get high performance (which is how Violin started??). How can some of those things, like replication or snapshots, be done outside the array on a shared volume?

It will probably be a few years until controllers can catch up to NVMe, just like it took several years for them to catch up to regular old SSD. Maybe by 2020?

Until then, people will have to make do with compromises on features if they need raw performance.

Fortunately for most customers this is a non-issue since, as the article says, regular old SAS SSDs are plenty fast already, and will be fast enough for a long time to come.

2
0

I'm not having a VMware moment – there's just something in my eye

Nate Amsden
Silver badge

Re: Hooray for single points of failure!

Mission critical data (and "mission critical" is really in the eye of the beholder) already sits on centralized storage for probably 98% of organizations out there. You simply cannot get the same reliability and stability (and data services) with internal storage systems, on any platform. Even the biggest names in cloud and social media make very large-scale use (relative to your typical customer, anyway) of enterprise-class storage systems internally.

Certainly you can put "critical" data on internal drives, though it's highly unlikely that truly mission-critical stuff (typically databases and the like, which may be responsible for millions or more in revenue) would sit on anything other than an external storage array (likely fibre channel). Ten or so years ago VMware brought a whole new life to centralized storage, simply because of vMotion.

If you don't understand that then I don't have time to discuss it further.

Though the idea the person in the article is touting sounds neat, getting that kind of thing done right is far easier said than done, and I am not sure when it may happen (certainly none of the solutions on the market are even close). Some solutions do file well, others do block well, others do object well. Nobody comes close to being able to do it all well on a single platform. Maybe it will be another decade or so before we get to that point, if we ever do.

I think at this point the speed of flash is really not important anymore (outside of edge cases). What is far more important is simply cost. Cost is improving, but obviously has quite a ways to go still. Many data sets do not dedupe, and lots of data sets come compressed already (e.g. media files), so we need the cost of the raw bits to keep coming down.

SAS-based SSD systems will be plenty fast for a long time to come for most workloads.

I have some mission-critical systems that do not use our SANs, though they are generally stateless (web or app servers); there is no mission-critical data on them.

2
0
