maybe it's because the IBM mainframe lock-in stuff is reeeeeeeeeally old news (on the mainframe front, anyway)?
I could see a possible advantage in going with an Oracle engineered system if you're running an Oracle application; I can't imagine anyone spending the effort to do so for non-Oracle applications though.
It seems often with those big software packages the cost of the hardware is dwarfed by the cost of the software anyway.
in mem vs ssd
HP really thinks there will be a lot of folks out there doing large-scale stuff in memory in single images? Seems strange... I think most people expect most of that to be done with fast SSDs, and that systems using really large amounts of memory would instead be some sort of distributed system.
Anyone know if SAP has built, or is building, an alternative to this in-memory system that runs on SSD instead? I'm not familiar with the platform myself, but it strikes me as an older product line which was developed long before SSD.
Also, the DL980 is an 8-way system, not 4-way. Though perhaps SAP only supports 4 sockets on the 980... I doubt it though - the DL980 tops out at 4TB, which, given the memory controllers are on the CPUs, would mean all 8 CPUs are required (4TB across 8 sockets works out to 512GB per socket).
Where do people get the idea that Netflix streams direct from Amazon? The vast, vast, VAST bulk of that streaming data comes from CDNs (whether Limelight or Netflix's own custom CDN stuff), not from the Amazon cloud.
YouTube is similar, though you could perhaps argue the CDN that Google manages is part of its cloud, since Google is the owner and operator of everything end to end. -- http://www.theregister.co.uk/2010/03/17/the_size_of_the_googlenet/
One good thing on this board is that at least they have the option of using 12 DIMMs per CPU. I always thought it strange that seemingly nobody other than HP provided the full 12 DIMMs on the Opteron 6xxx motherboards. Everyone else, as far as I could tell (Tyan included - just checked again, their latest & greatest is still 8 slots/CPU), topped out at 8. HP doesn't have 12 DIMMs per socket on the blades since there isn't enough space on the blade to do it. I believe the extended ATX form factor has similar constraints, though most server vendors use proprietary form factors anyway.
With VDI, I suspect the software licensing and storage costs etc will make it not worthwhile to skimp on lower-tier hardware for the server end of things.
Give me HP Advanced ECC or IBM Chipkill for anything with more than 16GB of memory. The risks involved with bad RAM on Tyan/Supermicro systems are too high. I believe the newer Xeons have some additional enhanced ECC stuff on them, which helps Dell and other folks that don't have their own advanced memory protection. Opterons lack this support on-chip though (as far as I know).
it's taken flash quite a long while to mature. I'm not surprised Seagate and others haven't jumped in - they haven't needed to. Most of the flash guys pitching raw hardware are not rolling in cash, and the rest of the flash folks make all their money off of software.
Enterprise storage arrays also could not properly leverage flash (I'd argue that most of them still can't, even now); the controllers are not powerful enough to push the millions of IOPS that flash is capable of.
There also seems to be a lack of flash standards and, more importantly, performance metrics. You can take a 15K RPM disk from vendor A and compare it to vendor Y and it will be similar. With flash there seem to be a thousand different configurations, and everyone is making grand claims that nobody is backing up.
Cost has been high as well; the only cheap flash is shit flash. Well, there is some middle ground, but it again really needs software layers above to smooth it out - something the HD vendors have never really done, and I wouldn't expect them to get into that business because they'd be competing with their customers. El Reg had a good article about Xiotech's technology a few years back and how many big OEMs threatened Seagate that they would stop buying disks if Seagate went to market with it directly.
Sounds like this Sun guy more than anything is tired of waiting around for dirt cheap high reliability flash and was hoping the big guys would jump in and scale things out to lower the cost.
And you'd have a fresh two-year AT&T contract to deal with? I don't think AT&T would sell these for 99c without a contract.
easy to change suppliers?
Suppliers of what? Energy? Everywhere I have lived in the U.S. there has been a single supplier of energy for me as a customer in a particular region (I have spent a few years overseas as well, but was too young to be involved in that sort of thing at the time).
This whole smart grid is just a security disaster waiting to happen. Sad to see.
Facebook should copy Netflix
and shut down their data centers and move everything to Amazon. They will save a lot of $$, I am sure!!
The Netflix strategy is really nothing new; Google was doing it years before them.
Google should move to Amazon too, now that I think about it.
Be sure to send me my commission, Mr Bezos.
Re: The right thing being
I agree, someone needs to hurry up and shoot that guy before he kills any more companies
where do they get these numbers
Up to 300 VMs in a rack?? 300 doing what? If I look at my own VM infrastructure and that of my previous companies, for example, we could probably get upwards of 1000-2000 VMs in a rack depending on server hardware config. The constraint is memory, not CPU, and for the most part not I/O (relatively speaking). Like most physical servers, the virtual servers spend *most* of their lives idle from a CPU and I/O perspective. Applications are always running and consuming memory, though, so memory requirements are high relative to CPU and I/O.
Currently in the process of doubling the memory in our servers to 384GB (dual-socket Opteron 6100), as the CPUs run at under 15% on average with peaks not being much higher. This workload isn't specific to my present position - I look back over the past decade at all the companies I have worked at and things are similar, with very few exceptions (those few exceptions would have more than enough aggregate CPU capacity to burst in utilization without requiring a special configuration).
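As a rough back-of-the-envelope (the per-server, per-rack and per-VM numbers here are my own illustrative assumptions, not anything from a vendor datasheet):

# Memory, not CPU, is what caps VM density - illustrative numbers only
ram_per_server_gb = 384    # e.g. a dual-socket box after a memory upgrade
servers_per_rack = 32      # assume ~1U nodes, leaving room for switches/PDUs
ram_per_vm_gb = 6          # assumed committed memory per VM, no overcommit

vms_per_rack = (ram_per_server_gb * servers_per_rack) // ram_per_vm_gb
print(vms_per_rack)        # 2048 - in the 1000-2000 ballpark, nowhere near 300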
Are these 300 VMs running VMmark or something - is that where they get the data from? If not, then how do these folks come up with the number (and others too; not picking on Fujitsu specifically, I see similar claims from other vendors constantly, especially with these integrated solutions)?
Re: lost almost all respect for consumer reports
I'm saying those issues mean it does not deserve a 99% rating.
I'm not saying the issues should prevent anyone from buying the car. My problem isn't with the car, it's with the rating.
Re: lost almost all respect for consumer reports
I have not -- I'm not in the market for a $90k car.
If they focus on driving, it should still lose a bunch of points for range and the lack of places to charge up. They even specifically mentioned the issue where, if you don't leave it charging overnight, it loses something like ~10-12 miles of range overnight (they said the company is working on fixing that).
Stuff like that is not worth a 99/100.
There have got to be luxury cars that are better/more fun to drive than the Tesla, with significantly more range and flexibility. Of course they are not as green..
I have no doubt it's probably the best electric car out there.. but they emphasised that too much over everything else.
lost almost all respect for consumer reports
I watched an interview with consumer reports on this topic on CNBC this morning.
I was pretty shocked... maybe I shouldn't have been, but basically they said this car has problems - range limits, issues with not leaving it plugged in overnight - "it's not a car without issues - if you want a problem-free car look elsewhere" is one approximate quote from the Consumer Reports guy.
At the same time the same guy says "this car is better than any other car we've EVER (EVER!) reviewed"
that makes absolutely no 3@$#@ sense WTF.
He went on to say stuff like - "if at some point you can charge the car in 4 minutes we may make it 100/100" or something like that. Currently, apparently, the fast-charging process takes 30 minutes for half a charge. At the moment those charging stations are obviously very, very few and far between.
I have no doubt that this car is a nice car; I'm sure it works well. But they are giving far too much credit towards its overall score for its electric nature and how "green" it is.
Totally ridiculous score, absolutely insane. It should probably be more in the 65-75 range at best, with other all-electric cars perhaps sub-50. Maybe rank them higher if gas in the U.S. were closer to $15/gal (fifteen).
Saving $$ on gas is obviously not a concern if you are paying upwards of $90k for a car. If being green is that important you can get significantly more impact by buying carbon offset credits. Though that doesn't make as much of a statement (in public) as driving an electric car.
such a waste
such a waste of $$, wish they would have pumped that $10B into other things like mobile (RIP WebOS), or even Vertica for big data.
Now HP is dumping even more cash into Autonomy to try to save face.
of course they believe that
Until Amazon offers features that rival those of a private cloud and eliminates the whole "built to fail" model. On top of that, provides usage-based billing (pay for what you use, not for what you provision -- S3 has this model already), as well as pooling of resources (give me a dozen CPU sockets and half a TB of memory and let me provision however I want -- e.g. no more fixed instance sizes).
Well, they can keep the built-to-fail model for customers that wish to use it.
Until they do those simple things (and a few others too) they have no leg to stand on in the public vs private cloud argument.
Fix the cost structure too. Most everyone who has used Amazon (and other cloud players) understands what a joke the pricing structure is across the board, largely due to extremely poor utilization of the system.
Conceptually it is quite simple - I think Amazon's archaic design prevents them from doing this, and it's easier for them to argue the exact opposite than fix their shit. Their model works fine in very limited use cases, but falls apart quickly for, I'd wager, 90%+ of workloads out there. That doesn't stop people from trying though. Throw enough money at it and you can make it hobble along. But then you're not saving $$ at that point.
As a debian user for the past 15 years
Sounds like another solid release. Like many Debian stable users I don't care much about having the latest and greatest. Ubuntu 10.04 LTS has been my desktop of choice for the past 3 years (mainly for the reason cited in the article: Debian didn't have the latest drivers). Though now that 10.04 is going EOL (on the desktop), maybe the next jump for me would be back to Debian 7. My laptop is almost 3 years old so hardware support should not be an issue. I have more than a year of on-site support left for the laptop and it still works quite well, so no reason to change that either.
My (personal) servers run Debian stable too. I'll give Debian stable a month or two to sort any other bugs out before upgrading any of my servers, and probably six months before I consider upgrading my laptop+desktop. I'm tired of the frequent Firefox updates on Ubuntu, though I suppose that will end now since it is EOL. I've had plugins that have been broken for weeks because of the latest Firefox 20. I held the packages on my other desktop so they don't get upgraded until the plugins are updated to be compatible.
The one good thing I hear is Debian 7 has a way to preserve the GNOME 2 look & feel, which is, more than anything, what I want. My Ubuntu setup works very well for me and I have no interest in changing that.
I still do wish Linux had better driver support -- take the e1000e driver for example. I can understand that the kernel in Ubuntu 10.04 LTS, when it was released, did not support the NIC on my desktop. So I compile it from source, no problem. But I would expect that as time goes on this driver would get updated. Sadly it has not. So every time I update the official kernel on the system I have to recompile the driver to get networking to work.
It is very unfortunate to have drivers tied to specific kernel versions. A user should not be "forced" into a newer version of a distribution only because of a driver(s) for their hardware. It's probably my biggest complaint with Linux. Fortunately most of my servers are VMs, so the driver situation there has been stable for many years now.
Oh how badly it sucked hacking Linux installer kernels together just so the installer would see the NIC, or the storage controller.
one of the rare times
Hell, I can't think of a single time where Slashdot posted the news before El Reg (not that I understand why anyone on Slashdot would care about BMC).
Oh the shame!
I haven't laid eyes directly on anything BMC as far as I know since 2004, when the company I was at was using BMC Patrol internally (well it was installed, I never saw anyone use it - it seemed to be one of those tools that required a full time person to operate).
wonder what the ratio is now
I haven't had a company-issued cell phone since early 2008. Companies since have offered to cover some of my cell phone bill (rarely all of it - my current plan with the mifi option is around $140/mo with AT&T - sort of ironically cheaper than when I had a dedicated mifi device on Sprint, the two combined being $175/mo, vs sharing with my phone now). Though I have never taken advantage of it. Just too lazy to fill out the form each month. Part of my job is being on call. My co-workers and boss are similar; I don't believe they expense that stuff either. Many years ago, before "unlimited" plans, I did expense my bills as they were frequently significantly more per month.
Not sure what my current company's policy is on cell phones. I also provide my own laptop too (bought it to replace company laptop at my last company - couldn't stand to use a Mac, kept same laptop for this job too). Also supply my own work chair (Aeron - again bought for last company their chairs made my legs feel numb after a long day - I don't want to keep the Aeron chair at home due to cats).
I could get a company laptop at my current place (I've been using a desktop at the office). I have opted not to, to date, because I walk to work (1 mile), and I don't like to carry anything with me that doesn't fit in my pockets. I'm sure laptops are a more tempting target to steal from an office after hours, so I opted for a boring desktop instead.
I could probably get part of my broadband expensed, if not all of it, since I work from home a lot - again, not sure what the policy is there, if we have one. Those sorts of things are fairly disorganized. The only stuff I have expensed has been travel related.
I don't nickel and dime my companies for everything, and they don't nickel and dime me for everything either (vacation time etc). In the end everyone's life is simpler and most likely happier. I realize I am probably in the small minority when it comes to this kind of thing..
I've always felt that A10 was fishy when they never quote layer 7 performance, only layer 4. What are we, still in the early 00s? Pretty much everyone should be on layer 7 these days, and in most cases should probably use load balancer configurations that re-use HTTP connections to the back-end servers (F5 calls it OneConnect; I forget what Citrix calls it - I think just connection multiplexing - and it's on by default on NetScaler; last I used F5 it wasn't on by default, but not hard to enable). From what I was told this particular technology was created by NetScaler, and then cross-licensed to F5 (in exchange for F5's cookie handling stuff).
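For anyone not familiar with the idea, here's a minimal sketch of what back-end connection re-use means conceptually - this is just my own illustration, not how OneConnect or NetScaler actually implement it, and the host name is made up:

# Many short client requests get funneled over a small pool of persistent
# keep-alive connections to the back-end server, instead of one TCP
# connection per client request.
import http.client

class BackendPool:
    def __init__(self, host, size=2):
        # a handful of long-lived connections to one back-end server
        self.conns = [http.client.HTTPConnection(host) for _ in range(size)]
        self.next = 0

    def forward(self, method, path, body=None, headers=None):
        conn = self.conns[self.next % len(self.conns)]
        self.next += 1
        conn.request(method, path, body, headers or {})
        resp = conn.getresponse()
        return resp.status, resp.read()   # connection stays open for the next request

# pool = BackendPool("app-server.internal")   # hypothetical back-end host
# status, data = pool.forward("GET", "/status")

The back-end sees a few warm connections instead of thousands of short-lived ones, which is where the benefit to the servers comes from.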
Layer 4 load balancing was more useful back in the day when the code couldn't leverage more than a single CPU core. I remember back with the BigIP 6400, I think it was - several years ago; beefy dual-socket systems, but only one CPU could be used for load balancing (we used the other CPU for global DNS). Nowadays, even though the code isn't *that* good, it's good enough to run on multiple CPUs, so you can crank up a few more features and deliver a better experience to the end users as well as to the back-end servers.
Layer 4 I suppose is fine for non-HTTP/HTTPS protocols.
A10 does do Layer 7, but for some reason I've never - ever seen them quote performance numbers.
I used F5 for a long time, though past couple years have been using Citrix to see how it works.
lack of demand
I think a lot of it is lack of demand. There is demand of course in large service providers -- but those places often have the resources to do a lot of things on their own and not rely heavily on enterprise products. The sheer number of customers that need object storage (those that feel a need to have it in house) is, I believe, quite low at this time. Hence there isn't much adoption of any of the enterprise object storage technologies.
My company for example uses some S3 for things - but total usage is overall quite small and doesn't justify the need to have object storage in house. Really, I think it's not until you're into some decent scale - I'd wager at least half a PB of data projected to be stored - that folks really get into looking at object storage vs regular NAS, which of course is far simpler to manage.
Also don't forget Red Hat storage - from a support and tech standpoint they seem to have a pretty decent and flexible offering. I would not use it as a file server or as a transactional storage system, as their marketing sometimes tries to convince you to do, but as an object store it seems OK.
Re: don't understand
Blocks of data are mirrored - that is RAID 1. Yes, I realize they do sub-disk RAID in a similar manner to 3PAR (though 3PAR is of course fully ASIC-accelerated).
The CPUs in the V7000 aren't very powerful, hence the big performance impact. I wasn't referring to software RAID on the server side; this is software RAID on the array side. IBM engineers say they did it for cost reasons.
If the CPUs were powerful then there wouldn't be a big performance hit.
I don't understand how XIV can be considered good. 180 disks, nearline only, and RAID 1 only. When I first saw it a few years ago it seemed like it had some promise, but IBM really doesn't seem to have done much with it other than add an SSD read cache. I mean, that's all they get out of a platform that has basically a dedicated CPU core for every 2 spindles on the system?
Earlier versions of XIV were of course crippled by the 1Gbps Ethernet back end.
The only folks I can see really buying XIV are already big IBM shops. I know it's easy to manage but it's crippled in so many other ways.
XIV has some good SPC-2 throughput numbers but is soundly trounced by HP P9500 / HDS VSP on both performance and cost.
V7000 uses software RAID too (for cost reasons), and their RAID 5/6 performance is terrible. I was surprised to hear that; I would have thought IBM would have done better for what appears to be the technology they are positioning for the future. Maybe some next-gen system will fix that failing. Imagine using the V7000 real-time compression to save a bunch of space! Only to lose that space because you're forced to use RAID 10 for performance.
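A quick back-of-the-envelope on that last point - the drive count, drive size and compression ratio below are made-up illustrative numbers, not IBM's figures:

# How a forced move to RAID 10 can eat the capacity that compression gained
drives, drive_tb = 24, 0.9                 # e.g. 24 x 900GB SAS (hypothetical)

raid6_usable = (drives - 4) * drive_tb     # say two 10+2 RAID 6 groups
raid10_usable = (drives / 2) * drive_tb    # mirroring: half the raw capacity

compression_ratio = 1.6                    # assumed real-time compression gain
print(raid6_usable)                        # 18.0 TB usable on RAID 6
print(raid10_usable * compression_ratio)   # ~17.3 TB effective on RAID 10 + compression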
Other than that V7000 seems like a very decent flexible platform.
What is the best file manager? As a Linux user I think my favorite file manager is the two-pane Explorer from the XP days; I'm not fond of the newer Windows 7/Win2k8+ one that tries to be more dynamic in the left pane, drives me nuts..
easy to show growth
when you're starting at near zero (relative to the non-appliance stuff)
$10k is not a bad number
If only some of the places I knew had as much sense as these folks.
One company I was at for a while was spending upwards of $400,000/month (at the peak - then came back down to mid-upper $200k/mo).
You see stuff like that and you think "Face wall"
I offered to get them out and save millions, but the board didn't want to (everyone else did, including execs).
I suppose, needless to say, that particular company is pretty much dead now, having axed ~85% of their staff late last year.
I know of a few other places in similar boats - spending six figures a month on cloud while simultaneously not wanting to get out - for stupid reasons too. They know the ROI but they don't care. They believe in the cloud. They bought it hook line and sinker. They experience the outages, the bad performance, the poor support, you name it. After all that they still don't care.
I'm hoping the crappy economy eventually hangs those people out to dry...but am not holding my breath.
Fortunately I haven't had to deal with such stuff in a long time now; stress levels are very low now that I don't have to worry about Amazon crap failing left and right.
I agree with the rest
that this looks like a pretty pointless attempt at trying to capitalize on the cloud buzz. terrible idea.
might their revenues crash even more
Now that they have opened up their things so customers can build their own flash cards using 3rd party flash on FusionIO controllers? Apple may not go that route since they don't seem to care about the latest custom thing. But Facebook I believe certainly will.
wasn't Storwize an acquisition?
though maybe that was another co..
SONAS - does that still primarily use DDN hardware with GPFS on top?
cisco 6500 with sup720 is today's?????????????
wasn't that completely and totally obsolete about 7 years ago? I think even Cisco has been trying to get people off of this platform for the past 4-5 years.
We have switches that can do a Tb of fabric PER SLOT now
We have switches that have more switching capacity in 1U (1Tbps+) than a decked-out Cat6500 could possibly hope to achieve in, what was it, 14U? 10U? 64x10GbE line rate in 1U, available from a dozen different vendors (Cisco likely included)?
I have looked at Tintri off and on, have had a few conversations with a friend over there (who is probably reading this now, HI!). I still feel that the upcoming VVOL stuff will level the playing field from a Tintri UI/insight perspective. It sounds like a nice VDI box but outside of that niche I don't see it being a threat to any of the big players.
Per VDI, it certainly would be nice to see some sort of standardization in measuring the cost of VDI: all costs, including servers, storage, hypervisors, OS licensing, support, and anything else I forgot. Sadly, every VDI cost analysis I have seen a vendor publish seems to have a unique perspective as to what they consider cost.
Latency. Unless that customer is actually hosting applications in the cloud, the latency to the cloud is still extremely high, and bandwidth costs are still much higher than hosting your own storage.
And if they are hosting applications in the cloud most likely they are in for a world of hurt (relatively speaking). Of course some won't realize it because they don't know any better.
One of my data centers is 7 hops away from Amazon's east coast firewalls for S3, 19ms away on a gigabit link (Tier 1 ISP). Maybe if I'm lucky I can get 4MB/sec on a single-stream connection to S3. I've talked to a couple of different cloud storage appliance vendors, the ones that tier (more or less) to cloud storage, and so far none of the ones I have spoken to do any sort of fancy splitting of data files into smaller objects to transfer in parallel to improve throughput. But even if they did, that only addresses throughput -- bandwidth is still quite expensive. 10GbE/8Gbps FC to your local storage is cheap.. 10GbE to the internet.. not so much.
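For what it's worth, that kind of parallel splitting is roughly what S3's own multipart upload supports - here's a minimal sketch using boto3's transfer manager (the bucket, file name, and threshold/concurrency numbers are just illustrative assumptions):

# Push one large file to S3 as many parts in parallel, instead of one
# 4MB/sec stream, using S3 multipart upload via boto3's transfer config.
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,   # split anything over 64MB
    multipart_chunksize=64 * 1024 * 1024,   # 64MB parts
    max_concurrency=16,                     # 16 parallel streams
)

# hypothetical bucket and object names
s3.upload_file("/data/backup/archive.tar", "example-bucket",
               "backups/archive.tar", Config=config)

It helps throughput on a fat enough pipe, but as I said it does nothing about the bandwidth bill.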
The volume of data growth continues to vastly outstrip the decreasing cost of bandwidth over time.
It can make sense for cold data, not hot though, and probably not even warm (other than acting as a backup of some kind). It's just too far away, and it takes too long to synchronize anything more than trivial amounts of data.
Don't get me started on the absolute shit that is Amazon's EBS. Hello 1990s. It works fine as long as you're not doing any reads or writes to it.
Storing music, photos, video etc makes a lot more sense for that kind of thing (S3 object storage).
The Amazon cloud (among others) is sort of like a roach motel as far as lock-in goes for storage. Sure you can get your data out -- but it may take you months to do it (vs a local array, which is obviously an order of magnitude(s) faster). I know of one company, for example, that has hundreds of TB in S3 and wants to move out; they just don't know how (with the least application impact). When they started they thought they were smart, but now they are paying the price and hurting badly (not only because of S3, but because they are "forced" to use other Amazon services in order to work on the data, since they can't effectively get it out). Each day that goes by, the data set gets larger too.
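Rough arithmetic on why "months" is not an exaggeration - the data size and rates below are assumptions for illustration, not that company's actual numbers:

# How long does it take to drag a few hundred TB back out of S3?
data_tb = 300
single_stream_mb_s = 4     # the kind of per-stream rate I see to S3
gigabit_mb_s = 110         # a saturated 1Gbps link, roughly

def days(tb, mb_per_s):
    return tb * 1e6 / mb_per_s / 86400

print(days(data_tb, single_stream_mb_s))   # ~868 days on a single stream
print(days(data_tb, gigabit_mb_s))         # ~32 days even flat out at 1Gbps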
But if the data is cold then it probably doesn't matter, won't get accessed much, so no big deal if it is slow. S3 makes good sense for that kind of stuff. And unlike most other amazon services they actually bill you based on what you use (rather than what you provision).
Re: "Forget volleyball, there are systems to crack"
Yes their guys were off world exploring the universe through the Stargate.
people don't value privacy
They may say they do, but ask the question another way and the answer often changes. There was a survey/study(?) a while back that placed the value people put on their privacy at something along the lines of the price of a candy bar. I'm not sure where I'd value my own data, but it would be really high.
Myself, I felt fairly alarmed when I configured my car loan to be accessible online (same company that I get car insurance/homeowner's insurance from) and they verified me by asking questions like how old my sister is and what month my mother was born in (multiple-choice answers) -- data I have obviously never submitted to them (nor is either of them a customer of that company). I suppose it should not surprise me that they can associate this kind of thing, but it still does.
If it is a Linux box (for example) and it is set up correctly then yes, a serial console will be fine to recover a system if it is stuck at a failed fsck. In most cases the full BIOS is accessible (the 3ware BIOS, for as long as I can remember, did not work with serial console), Linux boot loader, full Linux kernel messages, single-user mode works fine, multi-user.. whatever. You can even access the magic "sysrq" sequence over the serial port in most cases. Serial is good for logging too; the terminal servers can often send the data going to the consoles on to a syslog server.
For DRAC and HP iLO at least, you can normally stick to serial consoles (virtual serial ports) to access Linux systems without having to pay for the Enterprise/Advanced license, for those that don't need things like virtual media.
I agree with other posters that this article is quite weak -- perhaps it would have been good to cover solutions for the types of systems that do not have integrated management in them, like the Raritan gear someone mentioned. It's not cheap, but it works fine. In one deployment (many, many years ago) I deployed Raritan on top of remote serial consoles; I had one Raritan drop in each rack that on-site people could connect to in the event the serial console was not adequate.
I have a friend who runs a big lab at MS, all HP stuff but they too use Raritan KVMs (no integrated PDUs as far as I know) instead of the integrated iLO. It's just what they are used to.
Re: Managed PDUs and a serial console
Windows has had serial console support for a while now.
I have not personally tried it. But I do remember EMS support going all the way back to some Cyclades terminal servers I had in 2004. I'm sure it's gotten better in the last two generations of Windows Server.
Re: A real driver's car...
I can't hear or feel the engine in my car for the most part -- stereo is usually too loud and the engine runs smooth.
New car designs should take note of Nissan's ICON system (which my car has), which is basically a bunch of buttons that change function (and have an LCD in the back of them so they show different things when the mode changes), so you can better manage the system without having to look at it, since it is physical buttons you are touching. Almost the best of both worlds - physical buttons that have the ability to be dynamic. One of the knobs is dynamic as well.
Video on the function (16 seconds):
is this really changing anything?
Was there anything unique about the appliances or were they just PowerEdge boxes with the software pre-installed? I suspect the latter, in which case this is basically a non-issue. I bet Dell replaces it with a package offering that basically allows the customer to order one SKU to get both hardware and software, and the customer installs the software separately.
HP does have an object storage product on the market in their HP StoreAll (NAS) platform - while it is primarily an NFS system it has object storage abilities as well (complete with APIs and stuff). I have never used it, but it's there. They don't have a dedicated object-only platform (e.g. it still relies on RAID as far as I know); not sure if they need one, depending on how well the StoreAll stuff works.
Xen is dead
Xen was dead when SuSE and Red Hat abandoned it in favor of KVM. Amazon forked Xen years ago and are now stuck as their changes are not compatible with the newer Xens (not the first time Amazon made that mistake by modifying software too much and not maintaining compatibility).
The only folks left using Xen are those that have been stuck using it. I've talked to many people at Citrix; they won't hesitate to recommend VMware. I commend them for that -- they freely admit that Xen is there for those that don't have the budget for VMware, but they fully acknowledge it's not in the same league. It's refreshing to see a company own up to something like that (I say that as a Citrix customer - though not for Xen).
The Xen architecture is obsolete; it really always was. It's never been competitive, and pretty much the entire open source community that was on that bandwagon got off of it years ago. The only reason people hopped onto it is because they didn't want to fork over their $$ to VMware, not because it was ever a good product. Now that they have a promising product and architecture in KVM, the future of open source hypervisors and clouds and shit looks much brighter (I say this as a VMware customer for the past 14 years).
sad state of affairs
when stuff like this happens.
Society has gotten to be so over sensitive and paranoid. Some places better than others still. When the president of the U.S. feels he needs to apologize for complimenting another person it.. well I don't have words to describe this. Other than *big sigh*
"The advantage is that a single control facility using the control plane automates the setting up of network resources for virtual machines in servers, without needing a small army of qualified technicians using different network interfaces, which can take days and is error prone."
It wouldn't take an army, nor days, nor be so error-prone, if the DAMN NETWORKING SOFTWARE WAS SIMPLE TO USE. Major bonus points for designing the network in a way that can scale into the future and does not require constant reconfiguration (e.g. adding VLANs, etc).
The networks I have built (I'm not a network engineer by trade - so I've only built about a half dozen over the years) are built this way. Using equipment that allows me to manage them in a way that requires minimal resources, minimal training (if I were starting from scratch), and lots of bandwidth (way overkill for my needs, but it was cheap).
But nooooooooo -- so many folks try to clone the Cisco IOS look and feel, an interface that has been stuck in the 80s. Cisco has so many customers with heavy investments in this interface that they have not been able to simplify it without triggering an uproar - not only among the customers that are used to it, but internally over all the $$ they make on training and certifying people. Ugh, makes me sick.
Some hope this new SDN stuff will finally rid the world (for the most part) of these shitty IOS interfaces and replace them with something better. I'm not sure, but if we get anything out of SDN I suppose that is something I would be happy with. Otherwise I am sick of hearing about software defined anything.
I saw some information on a major outage at one of the service providers my company uses to process credit cards; they were out for 8 hours because of, among other things, an STP flood in their network. They decided it was not safe to fail over to a backup data center, so they worked to correct the problem in the main data center. STP?? Really???? What an obsolete protocol. Even my early networks built almost a decade ago did not use STP; there have been alternatives to STP for a good 15 years now (probably longer).
(And yes, to answer your question, I do not use networking equipment that uses a Cisco-style UI - though they have an optional software module which allows users to use some Cisco commands to make the transition easier.)
I like utility computing
Probably not the same sort of thing that was hyped a decade ago. But I have viewed utility computing as basically a set of servers with hypervisors on them (in my case that's always been VMware), some good "utility" storage (in my case that's always been 3PAR), networking, and load balancers. Configuration management software rides on top (currently Chef, though from my past experience I much prefer CFEngine). The ops team provisions systems into the environment as needed, after assessing resource requirements and whether or not a new system is needed vs using an older system (or vice versa). Systems run smoothly and pretty much everyone is happy.
I have also spent a significant amount of time building and maintaining EC2 infrastructure for the same purposes; the only value I got out of that is that at least I can tell people yes, I used it for two years, and yes, it is absolute shit. I won't go into the cost equation again since that one is obvious.
Making decisions on infrastructure based purely on "business needs" can quite often be a bad way to go. Specifically, the business may only "need" X, but we know that in the future they will quite likely need "Y" too (the business is not sure at this time). So the business invests to get "X", then a year or so later they end up ripping "X" out and replacing it with something that can do "Y". Maybe it doesn't get entirely ripped out; maybe it just sits as some legacy infrastructure that causes the company regular pain in the ongoing years. Obviously there are significantly more costs in doing both.
The biggest driver I have seen for "business needs" has been cost. They always pay for it in the end, whether they pay less by going with the right solution up front or pay more by having to replace what they originally bought with the right solution later (assuming they have the staff to determine what the right solution is - unfortunately often times this is not the case). Though I have seen situations where there has been greater than 100% turnover in staff over a relatively short period of time, which then results in significant change - but even then the ratio is probably still 50/50 as to whether this change ends up being positive.
Making decisions based purely on technology is equally bad of course too - there was a local vendor here that came out to visit not long ago, a storage vendor, and asked if we were happy with our storage. I said yes. He asked how much cache we had - I told him, and he said "oh, we can get you a TERABYTE of cache!" -- I honestly didn't know how to respond to that on the spot, the statement was so absurd -- I mean, did he really think we had the budget to make that sort of purchase? Obviously I know such systems exist, but it makes little sense to bring them up since we don't need them and upper management won't pay for them. Feel free to brief me on a technology architecture that we can leverage at our current level of scale/budget, and then grow into -- at some point getting to this terabyte of cache you speak of, if we need it. That's something I'd be interested in hearing. It was an interesting/stimulating conversation though; it's not often I get to talk storage to folks, so I enjoyed it.
I've seen that time and time again over the years. It's good to find out what the business needs of course, but you should still pursue the best technology choice (even if that means the best choice is not the latest and greatest).
won't lose sleep
"Oracle won't lose much sleep about failing Oracle won't lose much sleep about failing to sell servers and storage.to sell servers and storage."
Yeah - especially when it's likely Oracle will make more profit on the solution than whoever is selling the hardware, regardless.........!!
mostly read cache?
I assume the 256GB is mostly (if not entirely) a read cache? Or are there hefty batteries in there to maintain it in the event of a power failure? It seems more and more storage systems are taking the approach of only having enough battery power to flush the write cache to persistent media, rather than trying to keep the cache alive for the duration of the outage. Flushing 100GB+ of data to disk/flash probably takes quite a while!
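Quick sanity check on that last bit - the destage rates here are my own assumptions, just to get an order of magnitude:

# How long does destaging ~100GB of write cache on battery power take?
cache_gb = 100
to_disk_mb_s = 500       # assumed aggregate destage rate to spinning disk
to_flash_mb_s = 2000     # assumed rate destaging to internal flash instead

for rate_mb_s in (to_disk_mb_s, to_flash_mb_s):
    minutes = cache_gb * 1024 / rate_mb_s / 60
    print(round(minutes, 1), "minutes")   # ~3.4 to disk, ~0.9 to flash

Either way that's a meaningful amount of time to be holding the controllers up on batteries.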
EMC FAST Cache seems to be the most sophisticated and advanced SSD caching among the enterprise companies, as it provides both read and write caching (as a staunch 3PAR supporter it pains me to admit this every time, sigh - though these new upstarts aren't mature enough for my liking, so I'm happy to take the trade-off).
NetApp's cache, by contrast, is a read cache only (read caches wouldn't do squat for my 92-95% write workload).
not fox news
El Reg has it wrong here. Fox News is not what they are talking about; that channel is already a cable channel and not available OTA (well, maybe it is in NYC, though I doubt it). They are referring to the Fox affiliates. You know, The Simpsons, Family Guy, and other Fox programming on the local Fox TV channels, which are available over this new service.
Looking at my local Fox affiliate, I don't notice too much stuff that looks to be Fox-specific content; at most it seems like a couple of shows a day on average. The rest is syndicated from other networks or stuff that looks more generic, like daytime talk shows.
how can you tier to ram?
"Avere claims that, with new hybrid technology, it is the first vendor to automatically tier data across four media types: RAM, Flash/SSD, fast SAS disk, and nearline SATA/SAS disks"
RAM is of course volatile, so tiering to it makes no sense. Obviously you can use it as a cache, even a mirrored cache for writes, but using it as a tier???? What's the advantage (over using it as a cache), even if it did make sense?
"The big bad switch is the EX9214, which is a 16U chassis that has fourteen slots for line cards. The EX9200 switch fabric delivers up to 240Gb/sec per slot at full duplex and the midplane in the top-end EX9214 delivers up to 13.2Tb/sec of bandwidth and supports 1 million MAC addresses and up to 32,000 VLANs. The EX9214 doubles up the port count over the EX9208, delivering 480 Gigabit ports, 320 10GE ports, 48 40GE ports, and 20 100GE ports."
13.2Tbps of bandwidth, yet only 48x40GbE ports (~4Tbps?) or 320x10GbE (~6.4Tbps)? I guess they have room for expansion -- I am curious whether the limitation is in the switch fabric itself right now (e.g. is the 13.2Tbps number quoted purely the midplane and not actually what the switch can handle).
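Where those rough figures come from (counting both directions, i.e. full duplex, since that seems to be how the 13.2Tbps midplane number is quoted):

# Port-count arithmetic behind the ~4Tbps and ~6.4Tbps figures above
forty_gig = 48 * 40 * 2 / 1000.0    # 48 x 40GbE, full duplex
ten_gig = 320 * 10 * 2 / 1000.0     # 320 x 10GbE, full duplex
print(forty_gig, ten_gig)           # 3.84 Tbps and 6.4 Tbps - both well under 13.2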
I recall many years ago (mid-00s) my reps telling me about "Cisco math", where they add up the bandwidth of the slots as well as the aggregate bandwidth of all of the ports (for local switching that would not traverse the slot fabric), and that would get you your bandwidth number. They did that to one of their own chassis switches at the time to show a comparison, then later changed their minds. The numbers went from 1.2Tbps to 360Gbps - pretty big difference.
The large number of MACs and VLANs is quite impressive though.
I have always been partial/biased to Extreme Networks for my stuff (mainly for ease of use) - so for comparison, the BlackDiamond X8, which started shipping to customers early last year, is a 14.5U chassis with up to 20Tbps of capacity (15Tbps if you remove the N+1 failover, so really 15Tbps). Not sure why El Reg never seems to cover them while covering other smaller players like Arista etc.. maybe the vendors are paying to get the articles posted. Extreme has never been too big on advertising, at least as long as I've been a customer (a long time).
ANYWAY, it sports up to 192x40GbE line-rate connections (you can use breakout cables for high-density 10GbE as well); it also has 10G SFP as well as 10G copper (both of these are lower density due to the number of connectors required vs using 40GbE + breakout cables). Nothing slower than 10G though (would be a waste!). 2.5Tbps (full duplex) per slot (8 slots). Though it has significantly fewer MAC addresses (128k) and VLANs (4k) vs this new Juniper offering.
It has a unique design on the inside - not using a backplane or a midplane; they call it a "Direct Orthogonal Data Path Mating System". As far as I know it still uses Broadcom ASICs; Extreme hasn't developed their own ASICs in many years now. I've been waiting to see who might next come out with something that can compete with that level of bandwidth in a similarly sized package; so far I haven't noticed any.
The X8 powers the recently launched NCSA Blue Waters supercomputer (it's also used by LINX, among other customers, of course).
just because rackspace is using it
doesn't mean you should be. Rackspace obviously has a significant chunk of internal developer talent to help manage/maintain/fix the system. 99% of other organizations won't have anything resembling that.
When Red Hat releases their supported OpenStack (based on the OpenStack release from last fall), things should improve quite a bit for more typical prospective OpenStack users, though don't go jumping into the deep end right away. Red Hat was pretty up front in disclosing that the community around OpenStack does not have much interest in supporting anything but the bleeding edge (which is very common for open source projects that are evolving quickly). Fortunately most big, important open source projects are mature and don't suffer from that mindset.
Last I heard (end of last year), the RH OpenStack stuff should be out of tech preview any time now.
Larry only cares about engineered systems
They don't need a separate independent storage system. Larry himself said that if the regular Sun hardware business goes to $0 he doesn't care (http://video.cnbc.com/gallery/?video=3000119877&play=1). There's no margin in it. They will only develop stuff that can directly contribute to engineered systems, and I believe that means tight integration with the applications (e.g. Oracle ASM), which means less work needs to happen on the storage end of things, which means those storage solutions are not suitable for anything other than engineered systems.
The rest of the stuff will just wither and die on the vine.