Re: no dns security this is what happens
There need to be third and fourth options for upvoting based on terrible humor.
While your argument is sound, the ability to quickly and easily provision environments at aws/azure/google is often so that developers and project leaders can get around those pesky business processes that slow down "innovation".
We have lots of business processes in place, but that didn't stop developers from going out on their own and setting up business-critical systems at AWS without going through any of them. Then they throw it all over the fence to the ops team when there are operational problems they didn't think through and the business tells them to hand it over (and it's a steaming pile of poo with no documentation). That's if most of the team even sticks around (the smart ones left, because they did all this resume-driven architecture to advance their careers).
I have not worked at an org that didn't have good visibility into its own on-prem environments. You know what subnets the network and security engineers have provisioned, and from there it's easy to scan the network and find any rogue systems. Any of these systems that go unclaimed can either be shut off or have their switch ports disabled. Not so easy to do in the cloud.
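The "scan and compare" step above can be sketched in a few lines. This is only an illustration of the diff logic; the subnets and addresses are made up, and in practice the scanned set would come from a tool like nmap sweeping the provisioned subnets.

```python
# Sketch: flag rogue hosts by diffing what answered on the network
# against the inventory the network/security engineers provisioned.
# Addresses here are hypothetical examples.

def find_rogue_hosts(scanned, provisioned):
    """Return hosts that answered a scan but aren't in the inventory."""
    return sorted(set(scanned) - set(provisioned))

provisioned = {"10.0.1.10", "10.0.1.11", "10.0.1.12"}
scanned = {"10.0.1.10", "10.0.1.11", "10.0.1.12", "10.0.1.99"}

print(find_rogue_hosts(scanned, provisioned))  # ['10.0.1.99']
```

Anything that comes back unclaimed is a candidate for shutdown or port disablement, exactly as described above.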
Sadly, this is the same story everyone is dealing with in the cloudy world of developers.
Anyone who has let devs run free with a credit card or a cloud account winds up with stories like this. I am not suggesting that security and systems engineers don't make similar mistakes, but developers just don't think about, or have much experience with, locking down these environments. Meanwhile the bean counters and CIOs that want to be "cloud enabled" and "flexible" and want to "innovate" just keep allowing this to happen.
We had a bunch of open S3 buckets with data in them we didn't want in the wild. We got the email from AWS telling us about it, and it took days not only to find out who was responsible for them, but who had the creds to do anything about it. It was a single developer.
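The exposure check that would have caught this can be sketched as below. The ACL dictionary mirrors the shape boto3's `get_bucket_acl()` returns; the grants shown are sample data, and a real audit would loop over `list_buckets()` with actual credentials.

```python
# Sketch: detect world-readable S3 buckets from their ACL grants.
# The ACL dict below is sample data shaped like boto3's get_bucket_acl()
# response; the canonical user ID is hypothetical.

ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

def public_grants(acl):
    """Return permissions granted to the anonymous AllUsers group."""
    return [g["Permission"] for g in acl.get("Grants", [])
            if g["Grantee"].get("URI") == ALL_USERS]

acl = {"Grants": [
    {"Grantee": {"Type": "Group", "URI": ALL_USERS}, "Permission": "READ"},
    {"Grantee": {"Type": "CanonicalUser", "ID": "abc123"}, "Permission": "FULL_CONTROL"},
]}

print(public_grants(acl))  # ['READ'] -> this bucket is open to the world
```

An empty list back from every bucket is what you want; anything else is the email from AWS waiting to happen.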
EBS is a very secretive system that no one is allowed to understand. The only details AWS will provide are its general service levels and uptime/redundancy figures; you aren't allowed to know how it works or how it's made redundant.
I can't understand why anyone in technology wouldn't want to understand how it works, but all these developers and PHBs seem to be fine with not knowing.
I had a days-long discussion with a developer at one point, explaining that I have never had a storage failure on a RAID system in the more than 20 years I have been doing this at many different orgs, some of them Fortune 500. When it finally dawned on him that it's not normal for businesses to incur data loss due to disk failure, he was shocked (because he was a developer and just thought about things related to what he knew, like his desktop computer).
No offence to any devs out there...
Yes, I get this sentiment - it does make sense when you imagine it, but there is still maintenance and operational work to be done with a cloud environment, albeit different maintenance.
And while the meatbags are expensive, so are the meatbags one would need to hire for an effective cloud deployment. Letting developers with 20 years of coding experience and zero experience in operations or systems design these environments leads to badly designed cloud deployments.
Now the push is all about Hybrid deployments, so you still need the meatbags that run all the on premises gear as well as the meatbags that are dealing with the cloud deployments.
As to innovation - if a company has business needs that require innovation - problems that need solving, cloud is rarely the answer to the problem. Nothing that can be done by any of these cloud providers changes business needs all that drastically. The only case I can see it being a game changer is the ability to grow exponentially on demand to deal with slashdot effect as we used to call it.
Any other innovations can be done on premise just as well as in the cloud.
A hell of a lot cheaper, too. I had heard a good guideline is that once you hit $5K a month in cloud costs, doing it in colo is cheaper. I can attest to that.
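The arithmetic behind that guideline is just a break-even calculation. All the figures below are illustrative assumptions, not quotes from any vendor: some up-front capex for colo gear, a small colo monthly fee, versus the recurring cloud bill.

```python
# Back-of-the-envelope crossover math for the "$5K/month" colo guideline.
# All dollar figures are illustrative assumptions.

def months_to_break_even(colo_capex, colo_monthly, cloud_monthly):
    """Months until cumulative colo spend drops below cumulative cloud spend."""
    if cloud_monthly <= colo_monthly:
        return None  # colo never catches up on a monthly basis
    return colo_capex / (cloud_monthly - colo_monthly)

# e.g. $60K of servers/storage up front and $1.5K/month in colo fees,
# versus a $5K/month cloud bill:
print(months_to_break_even(60_000, 1_500, 5_000))  # ~17.1 months
```

On hardware with a 4-5 year life, breaking even inside a year and a half leaves years of pure savings, which matches the experience described above.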
We have colocation sites with 8 racks and using loads of power, and the service is less than $10K a month. None of these sites has had an outage in the 9 years I have been around.
Storage arrays are way cheaper than they used to be, and so is flash.
Servers are cheap, especially if you buy used ones.
Hypervisors are a commodity these days.
Internet bandwidth is cheap - certainly when you don't pay by the bit sent.
Lots of people are leaving the cloud as the notion of it being cheaper is just laughable.
Ha Ha! As if. We got our fourth pair of Sales Associates in eight years, each pair more worthless than the last. We complained extensively, again, about the uselessness of calling support - they hadn't solved any of our problems in the last four years, etc.
These two actually told us: yes, we are sorry it's so horrible, but one option was to pay them MORE money for BETTER support. We were agog. I am ready to switch to KVM, with all the pain that would produce.
Five years ago one could call VMware support and get someone in Atlanta or Ireland within 2 minutes, and that person would solve the problem 90% of the time. Now you call and you might get someone who tells you to send the logs and recommends reinstalling.
Once I had an engineer on the phone for an issue with the VMware service on a Windows server not starting under a domain account, and he told me I should switch to the local account - or just switch to the appliance because it was way better. That was his one and only answer. Needless to say, we figured it out ourselves, as usual.
"Are coders buying these instances on their own, for Q/A and test & dev, also without properly securing their instances?"
Yes. Every time we turn around, someone launched more instances before they quit and told no one about them, let alone the teams responsible for picking up the pieces. It's hard to secure something no one knows about, and no one wants to turn it off because they don't know what it does.
This is a governance issue, but I have yet to meet a developer or DevOps person that doesn't eschew any form of governance. Governance is a roadblock, slows down innovation, blah blah blah blah....
All-flash arrays, as stated in another response, are really successful and worthwhile because of the low latency. This is especially important for what I am sure you would deem legacy systems.
One can now switch a production environment from a hybrid flash/disk system to all flash. Any and all latency issues that previously existed essentially disappear. After the switch in our environments, people were pleasantly surprised at how much faster things were in general, almost to the point that things ran as fast as they always thought they should.
We were able to make this remarkable change overnight without changing a single line of code on any of our applications, for 50% less than we paid for the Hybrid system, and it takes up 1/10 of the rack space.
As to requiring three copies: that sounds appropriate if you are dealing with non-RAID technologies, but a production storage system with built-in redundancy (RAID with double parity and spares, failover/spare controllers and ports) plus a DR system in another location is more than adequate.
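The capacity math for that scheme is straightforward. This sketch assumes a double-parity (RAID 6) group plus a full copy at the DR site; the disk counts and sizes are illustrative, not from any real array.

```python
# Rough capacity math for double-parity RAID plus a DR copy.
# Disk counts and sizes are illustrative assumptions.

def raid6_usable_tb(disks, disk_tb, spares=1):
    """Usable capacity of a RAID 6 group: total minus 2 parity disks and spares."""
    return (disks - spares - 2) * disk_tb

prod_usable = raid6_usable_tb(disks=12, disk_tb=8, spares=1)  # 72 TB usable
total_raw = 12 * 8 * 2  # raw TB across prod plus an identical DR array
print(prod_usable, total_raw)  # 72 192
```

Two sites' worth of raw disk already buys you protection against double disk failure plus a full site loss, which is why a mandatory third copy looks like overkill for RAID-backed arrays.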
What you seem to be advocating is to build applications with a new infrastructure design, but most companies do not have the time or resources to rewrite everything from scratch and stick it at AWS, so finding other ways to increase performance and efficiency on the old stack is of paramount importance, and flash arrays are as impactful as hardware virtualization was a decade ago.
I love this sort of talk:
On hand with a canned quote was Eric Shepard, research veep at IDC: "Organisations around the world are increasing investments on data centre technologies that eliminate inefficient silos and support business-centric rather than infrastructure-centric decisions."
I always want to know which of these options is less expensive in the long run. Don't get me wrong, siloed groups ("I only work on storage", or even worse, "I only work on EMC Clariion at this company" - silos within silos) definitely need to go.
What I take issue with is when business-centric decisions in IT have a worse financial impact than proper infrastructure-centric decisions, i.e. using technology that isn't the flavor of the month but keeps costs down.
The great news about this absurdity is that it is good for the consumer, i.e. businesses. Competition is good.
We already had very good pricing on our previous NetApp purchases, which for some reason has a lot to do with the business line one's company is in.
When we switched to AFF a few years ago, our pricing was even better, mostly because Pure was nipping at their heels. Pure has forced NetApp to be far more competitive in terms of pricing, our VAR at the time said it would be NetApp's undoing (his exact words were that they were dropping their pants).
NetApp's more recent financial statements seem to show otherwise.
The EVA wasn't crap at first. It was a pretty well-thought-out array at a time when the few Fibre Channel storage options were a Clariion (talk about crap, and expensive), an EMC DMX (the most expensive and complicated storage one could buy) and an FC-connected NetApp.
The EVA was far more flexible and easier to use than the EMC options, and it actually performed very well.
HP bought Compaq, all the EVA engineers said screw this and quit, and HP didn't have anyone who really knew the product anymore. One of its biggest knocks was the lack of async replication. They eventually put that in place, but it WAS crap.
By that point HP was already drooling over 3par, most likely because they knew the EVA was doomed since they couldn't adequately develop it into a modern product.
Getting a little tired of the constant harping on hyperconverged. Everyone seems to want in on this game, but it's only really good for shops with a very steady storage-to-compute ratio across their systems, and on the whole that isn't a lot of businesses.
Some, if not most, database systems require TBs of storage all by themselves relative to the CPU and memory they need. Lots of tiny web and app boxes require almost no storage whatsoever (but demand lots of CPU and memory).
I have never worked anywhere in my decades-long career where the above was not the case.
Now you see companies like NetApp offering hyperconverged that isn't really hyperconverged. It's just a bunch of VMware hosts plus storage hosts that look sort-of-kind-of hyperconverged. It certainly sounds good to them.
"EMC VMAX AF uses latest/greatest procs. Will never dedupe because it overruns them." - I have no idea what you mean here, unless you are saying the VMAX is so fast, that the processors can't keep up with dedupe? EMC runs them too hot? I think the really answer they will never dedupe is that would mean fewer sales dollars for EMC.
"HDS uses multiple procs--more than most. Best Practices still require Cache Partitioning and other measures to minimize the CPU impact." Can you explain why this is bad?
"Pure claims to use latest/greatst procs. Hasn't added a single feature in 4 years." I think their filer offering is certainly a new feature. Since they are new product, they don't really have anything else to add, as would be the case with all the legacy systems.
"- IBM V series--their #1 seller--no dedupe same reasons as EMC" - I don't understand the point you are trying to make about EMC, ergo I don't get the point with IBM (unless its about IBM making more money since their customers have to buy more disks)
"NetApp has numerous limitations with Dedupe, Compression, Thin, and so on." Huh? ONTAP 9 has a lot of excellent new features, and dedupe, compression and thin provisioning have been options with them for a long time, and are more impressive with AFF (I can say from experience). I think they continue to make great strides in this business.
Don't get me wrong, I have always liked what I have read about 3PAR (except the data loss part, which frankly has happened to every storage vendor at some point), but I don't get any of your points against 3PAR's competition here; they all seem to be explanation-free.
3PAR also seemed like the perfect acquisition for HP in 2010. I was an EVA customer then and got burned when the EVA team picked up and left after HP acquired Compaq back in the day. HP was totally screwed by that. Not only could they not add new features, they couldn't fix bugs.
We had a DR event with one of our primary HP EVAs, and we quickly learned that without the Primary EVA being online, you couldn't make the destination EVA volumes writeable (or anything-able). We actually got HP people on site, and they couldn't figure out how to fix it, they just wanted to fix the primary array (which they did). It only made sense at that point to buy something to replace the EVA, and 3PAR was the perfect fit.
While I get that the Reg needs articles to keep eyeballs on ads, this seems a bit of a ridiculous story to bash a company with.
Since most businesses can only now afford AFAs, I doubt many of them will skip purchasing an array just because it only has single-digit-TB SSDs.
When we went from our old spinning-disk array 4 years ago to a new shiny array with all sorts of efficiency tools, not to mention incredible performance improvement, we went from 2 full racks to 20U of rack space. This four-year-old array has grown a bit and now has about 3 times the usable space it did initially, and it's still not a full rack.
When we make the move to AFA, if we get back down to 20U of space for the whole enchilada we would be thrilled, but that's not what's important. The bottom line is what anyone in the business cares about. Have you seen the prices of these 8 and 15TB SSDs? They are astronomical. I highly doubt using 1/4 of the rack space makes up for the enormous cost of going with these mega-dense SSDs.
VMware is on a downward trajectory, no doubt. Companies that are starting out will likely go to AWS or Azure since that is the model everyone is chasing these days, but there are still many companies that are deep in bed and under the covers with VMware, and they aren't going anywhere.
We have fairly large VMware environments - almost 3000 vms altogether. The bean counters and sales people all think we have to move to AWS, but they have yet to look at the costs. It would be a minimum of 3 times the price without even factoring in everything we would need to change to deal with AWS and their differences.
I agree with tabbu; VMware used to be excellent software and support was equally excellent. It has all turned to shit. We are deploying ESXi 6, and surprise, some VMs just don't want to migrate from the ESXi 5 hosts to 6. The only fix is to turn them off and back on. This was ESXi 6 Update 1, mind you - we waited until that came out.
Opened a ticket for said problem, since we now have about 100 of these VMs in production that can't migrate, so we can't finish the upgrade. Update 2 has a fix for the VMs that won't migrate. Fantastic.
Apply the fix - and now all of a sudden the VMs that wouldn't move before do move, but they run like complete shit on the ESXi 6 boxes until you shut them down or move them back to an ESXi 5 box. Support had no answers for me. I told them we'll just leave it on the old version without the patch and get downtime to restart these boxes. What a joke.
The next undocumented feature is that you can no longer delete datastores with the delete command - you have to unmount them first. I got two fellows on support calls who barely spoke my language, and they told me it's always been this way. When I mentioned that it has never been this way in the 10 years I have used the product, they had no answer. I told them to escalate - no answer.
Told my sales rep and closed the ticket, but I gave them an earful. I don't know how we can give them this much money every year in support and get what is now clearly EMC level support. grrrrr.....
Needless to say, I don't think we have a lot of options. Virtualization has allowed us to save a lot of money and get by with fewer people. I am trying to imagine even getting the time to evaluate another option, let alone migrate almost 3000 machines to it.
This sounds incredible. I want one, but...
CIFS isn't included in GA. Why? I would think this would block a bunch of sales, because who knows when it's going to arrive, and a lot of people depend on CIFS, rightly or wrongly, for their business.
And that raises the next question: what version?
I know I am Mr. NetApp Apologist, but comparing a D-Link NAS to a NetApp filer is like comparing a USB 1.1 2GB flash key to an SLC SSD. They are both solid-state storage, but one is far more capable than the other (and yes, more expensive).
The big selling point - or downside, depending on your perspective - with NetApp is the proprietary file system they use, called WAFL. It allows some nifty things that solve many problems - mostly around snapshots. NetApp was one of the first to offer deduplication, and you can run block devices (LUNs, iSCSI or FC) on top of this file system and garner the same benefits.
Being able to use NAS or block storage on the same device and the same pool of disks is pretty cool - we have moved from FC to NFS in one of our VMware environments, which has allowed us to greatly reduce the number of datastores we have (there are no limits on the number of VMs per datastore due to locking and the like) while providing performance comparable to FC. It's also much easier to manage than FC or iSCSI.
BTW - calling a storage array (network endpoint of disks) a SAN (storage area network) is like calling a Windows File Server a LAN (local area network). I could throttle the marketing person that decided to call a storage array a SAN in a PowerPoint to some PHB somewhere (probably at Dell).
Those don't seem like caveats. CDOT with flash drives? I am trying to see the downside of it.
If you don't like CDOT, go buy something from another vendor, but there are a lot of benefits to CDOT that are not available elsewhere (built-in NAS, replication, dedupe, interoperability with other FAS platforms, inter-pair clustering). Some of these might be available, but not all of them in one product.
I, for one, can't understand why NetApp would buy Solidfire.
No surprise - EMC takes in more money because they have the most expensive gear, especially when you start talking about storage for mainframes - they basically have a lock on all mainframe storage sales.
As a customer/end user, I am thrilled that all these new storage options are cropping up, as they are forcing the old players to start charging less for similar capacity, performance and features, so I don't think its any surprise that revenues will start to go down.
If I were a salesperson that worked for a VAR or one of these companies, I might be a little more concerned.
I have been ranting about the misuse of the acronym SAN to anyone who would listen, and obviously that has gone nowhere.
Storage Area Network - if everyone had started referring to Fibre/iSCSI switches as the SAN I probably wouldn't have blinked, but it just took a bunch of marketing morons somewhere to decide Storage Array = SAN, and that was that. Now it's f'ing used everywhere - SAN SAN SAN SAN. Makes me want to start fires....
ONTAP - much to many people's chagrin - is actually something that can help NetApp survive all the cloudiness.
PHBs always want to put us in AWS, and our first problem with that is that we can't get the data there in a timely fashion. We have large volumes of NAS data that would take weeks to migrate in any traditional fashion - even using some one-off replication software, it would be a risky move at best. And then there is always the question of what happens when we find out we don't want to be at AWS - maybe we want to do it ourselves again, or move our data to some other provider where NetApp also has filers built in. Maybe they do the same sort of virtual filer model with Google or Azure in a few years.
Both of those problems are solved by this. A NetApp customer can effectively migrate in and out of AWS without too much risk or difficulty - just snapmirror to and fro, block or NAS.
You can't say the same for just about any other storage vendor out there. We looked at this for a DR project, and it was still far less expensive to do it ourselves, but at some point I could see that changing.
Public cloud adoption makes sense if you are starting a company from scratch and don't want to invest in building your own cloud. It also makes sense if you don't have any privacy requirements for your clients/customers. If you do have lots of customer PII, public clouds are a non-starter. It doesn't matter how much encryption and control you can claim to have; it just takes one security officer from one large client to say no, and no public cloud for you.
My last two employers and my current employer are all heavy users of internal clouds (VMware). It works extremely well and saves a bundle versus traditional servers. It's relatively expensive versus Hyper-V or oVirt or any of the competing products, but for the most part the initial investment has been made. We can find engineers who are well versed in it. It works.
When we purchase hardware, compute specifically, we spend less than the virtualization software costs. We buy used servers and cheap memory, and all of our apps run very well. The latest Intel chips and 12Gb SAS sound great, but they make next to no improvement in our environments, so why spend the money?
Years ago I asked our server sales rep how the company was doing now that we don't purchase 100 servers a year, just a few heavy-duty blades. It can't be good for their pocketbook. He said it was bad, but you have to change with the times. Now we don't even purchase new servers from them. No surprise that revenues continue to go down.
He just needs more money, but it is impressive how screwed up EMC is as a company. They actually seem ripe to be fiddled with (or broken up).
Not too many companies have competitive products in their portfolio like EMC does, and it makes one wonder how they can keep it going.
Management would normally pick the winner among competing products and wind down the losers. Customers could be persuaded and incentivized to move to the surviving platform.
Keeping up teams of support, dev, QA and the like for all those products gets very expensive and very cumbersome.
Dedupe can be extremely valuable in some cases and not in others. Not offering it at all does seem like Nimble is missing out - AFA or hybrid.
We have hundreds of VMs that are the same, cloned over and over. They dedupe like crazy - we have datastores that are 90% deduped. Aside from the space savings, the deduped blocks get cached using less space, so performance is greatly enhanced on a hybrid flash array.
When we patch all the Windows boxes that have been pre-staged with the patches, the storage array barely blinks since it's all in the read cache.
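The "90% deduped" figure above is just the ratio of logical to physical space. A quick illustration, with made-up numbers:

```python
# Why cloned VMs dedupe so well: identical blocks are stored once.
# The TB figures are illustrative; "90% deduped" means the array
# physically stores ~10% of what the VMs think they consume.

def dedupe_ratio(logical_tb, physical_tb):
    """Logical-to-physical ratio; 10.0 means 90% space saved."""
    return logical_tb / physical_tb

logical = 40.0   # space the cloned VMs believe they use
physical = 4.0   # space the array actually stores after dedupe
print(dedupe_ratio(logical, physical))  # 10.0
```

The caching win follows directly: those 4 TB of unique blocks fit in read cache where 40 TB never would, which is why a mass Windows patch run barely registers.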
I think it's crazy that some arrays don't offer NFS. I have colleagues with a very nice VDI setup in a hospital - all non-persistent desktops, very sharp. They are using an AFA for storage, but it's FC only. Since they do an incredible number of VM creates and deletes every day, they needed a script that runs the FC unmap command to get their space back in a thin-provisioned LUN. If they had NFS this would be a non-issue. In their case, post-process dedupe wouldn't help a whole lot since they are constantly provisioning VMs; inline dedupe might help quite a bit, but it's not offered on this AFA.
They also had a former employee set up the unmap command to run on all the LUNs from all the VM hosts, and that was a pretty sore point they had to figure out once he left - but that's another matter.
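A minimal sketch of that kind of reclaim job is below. It only builds the `esxcli storage vmfs unmap` command lines; the datastore names are hypothetical, and actually executing them would go over SSH or PowerCLI against each host - which is exactly the sort of undocumented glue that becomes a sore point when its author leaves.

```python
# Sketch: build the per-datastore space-reclaim commands for
# thin-provisioned VMFS datastores. Datastore names are hypothetical;
# running the commands against hosts is left out of this sketch.

def build_unmap_cmds(datastores, block_count=200):
    """One esxcli unmap command per datastore; -n is blocks per iteration."""
    return [
        f"esxcli storage vmfs unmap -l {ds} -n {block_count}"
        for ds in datastores
    ]

for cmd in build_unmap_cmds(["VDI-DS01", "VDI-DS02"]):
    print(cmd)
```

With NFS-backed datastores none of this is needed, since the array sees file deletions directly - which is the point being made above.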
For anyone saying no company cares about how much things cost, I would love to work where you do, sounds great.
We have a very small VMware cluster in one location that isn't production or all that important in any sense. It's running on a decade-old FC array that takes up more room than a car, and we only need 5TB of storage.
We wanted to buy a filer, but management said no. We bought a commodity server and figured we would install FreeNAS, Nexenta, Rockstor - check them all out and see what worked best. For VMware I am a big fan of NFS, and not a fan at all of iSCSI; we will use it, but only if we don't have much of a choice. We also wanted CIFS so we could get off the virtualized Windows file server.
Rockstor - has tons of promise. I really like btrfs, but it doesn't do LACP, so it's out unless everything else sucks more.
FreeNAS - NFS performance is terrible for VMware; apparently this is a common problem. iSCSI is excellent, but I don't want to use iSCSI. Spent several hours trying to get it to join our AD and couldn't do it - also apparently a common issue. At this point I said screw it, on to Nexenta.
Nexenta - took a while to sort out the inanities of using LACP out of the box. The initial wizard forces you to use a standard access-connected link, which I didn't want. After swearing at it, dropping into the shell, and creating an LACP group by hand, I was able to get it working. It took a lot of rebooting after creating LACP connections - not a good thing in my book - but it worked.
AD join was a piece of cake, and CIFS performance is nothing short of incredible. I was very much looking forward to...
NFS - great as long as you run it over a primary interface. I tried setting it up over a private non-routable subnet between the VMware hosts and the Nexenta box to fence off traffic, and no matter what I do, it's not usable. I can mount the NFS shares over the private VLAN and VMware seems happy with them, but writes never complete. Run the same mount over the primary Nexenta interface and it runs great.
I started a discussion on the Nexenta forums, and the responses were pretty poor. Everyone kept asking about the ZFS config and all sorts of other things that are not the problem.
Haven't tried iscsi yet.
Nexenta seems promising, but it also has its quirks. I am sure with support and training it could be very usable, but I am not 100% sold on it being a one-for-one replacement for a NetApp filer.
Again - this really seems like an article written to start bashing NetApp.
To say NetApp isn't innovating right now suggests you aren't paying attention, but you don't seem like someone interested in having a conversation. It does make me understand why people have no interest in participating in comments sections.
I will check out Nutanix, but so far I haven't found anything that explains how file sharing works. If you have a link to their filer offering, I would like to see it.
I beg to differ. The NetApp monitoring software (DFM) is relatively easy to configure, very flexible, and one of the best pieces of monitoring software I have seen from just about any vendor, let alone a storage vendor.
Add this to autosupport, and I don't know how anyone could consider NetApp monitoring something that needs vast improvement to compete with other vendors.
I had a VP tell us out of the blue yesterday to come up with real numbers to predict growth for some small subset of our volumes, and DFM had the data without us having to configure anything. It provided easy-to-read, colorful charts - it took about 3 minutes to figure out where to find them and email them off.
Agreed, but I am not sure whether you are implying that EMC or NetApp is more complex. One has only one or two products (three now?); the other has competing products for the same use case (two backup solutions, multiple block arrays, multiple NAS boxes, etc.), and none of them is simple.
NetApp has caused a lot of its own pain by forcing people to adopt CDOT, since you have to go through a complex migration to get there (and buy new gear). Most small to medium-sized companies have no need for clusters of storage gear.
I would bet 90% of current NetApp customers would be happy to keep running 7-Mode filers if NetApp continued to develop it. Forcing cluster mode on them is inviting competitors to come in and sell them something new.
It's based on experience using RecoverPoint with Clariion/VNX/DMX products.
If the new version uses the array's own snapshots for local recovery, that's great, but the version of RecoverPoint for the products listed above requires a full LUN holding a local copy of the prod data if you want clones you can use for other purposes.
And yes, the 50% depends on how long you want to keep those point-in-time rollback bookmarks and what the change rate is. It's also highly recommended that you put the journal on fast RAID 10 disks so as not to adversely impact performance. Very costly - in fact more than 50% if the environment uses RAID 5 for the prod data.
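The journal sizing logic above boils down to change rate times rollback window. A rough sketch, with illustrative figures and an assumed headroom factor - not vendor sizing guidance:

```python
# Rough continuous-data-protection journal sizing per the rule of thumb
# above: journal capacity ~= daily change rate x rollback window, plus
# headroom. All figures are illustrative assumptions.

def journal_size_tb(daily_change_tb, rollback_days, headroom=1.2):
    """TB of journal needed to retain rollback_days of bookmarks."""
    return daily_change_tb * rollback_days * headroom

# e.g. 0.5 TB/day of writes, 3 days of rollback bookmarks:
print(journal_size_tb(0.5, 3))  # ~1.8 TB
```

Higher change rates or longer bookmark retention grow the journal linearly, which is how the extra capacity cost climbs past 50% on busy environments.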
I pointed out in my post that RecoverPoint is great technology; it's just not simple, and in its previous iterations it was neither inexpensive nor efficient. It sounds like EMC is making it a bit better with the X-Bricks. Huzzah.
RecoverPoint is a very cool technology, although it's far more than most people require for their environments. The best part about it for EMC is that you need to purchase more than twice your current capacity to have a local copy, plus more storage for all the writes you want to keep point-in-time backups of. If you only want RecoverPoint for remote copies you don't need twice as much, but other vendors' similar technologies don't have this requirement - you can make local clones without a full duplicate set of storage.
Add to that that you need enough IO to satisfy all these copies while maintaining good performance, and it's a solution that gets very expensive and complicated as it grows. Granted, with SSDs you don't have to worry about IO too much, but if you already have an environment that is very dependent on latency and high IO performance, this could matter a bit.
And RecoverPoint is crazy expensive even before all the extra disk required for local copies.
At least they finally have a flash product you can replicate on - albeit a very expensive and complicated one.
Biting the hand that feeds IT © 1998–2020