Re: Operational Incompetence
ahh, GSA 18F I reckon.
41 posts • joined 14 Nov 2013
It really is shameless, especially since nobody gives a good damn what they do on their respective devices. Really, some twit has been clamoring to have application state follow them from phone to desktop to slate? What, one pathetic puke out of a million? How many thousands of man-hours were spent writing all this spyware when even Microsoft doesn't use it in any useful capacity (serious bugs in preview releases being ignored, anyone?) -- ignoring for the moment that they aren't remotely entitled to have it in the first place without my express say-so.
Whatever happened to shooting dead the morons, be they marketing, management, or engineering, who thought "using my computer" needed to be mashed into an activity feed, a la farcebook?
> (due to the need to actually get the tapes from a remote location, etc)
I hope you're not suggesting Glacier is tape-based. There isn't a tape drive to be found in any AWS data center. Tape drives are unreliable as hell, and the data cartridges are barely any better.
The 'nines' thing is ridiculous, I agree. I very much doubt they are calculating those availability numbers in the event a few of the regional datacenters get nuked or otherwise suffer EMP or other massively disruptive event. Granted who cares at that point if your data still exists...
Typical S3 requires half of the erasure-coded blocks to be available. Legacy Glacier used the same N:M ratio because they wouldn't have to reconstitute the data in order to store it. The new service may well change the N:M ratio but I rather doubt it. Instead of 3 datacenters they may have bumped it to 5 and/or transparently copied a minimum 'N' to an alternate region. Most likely the cost savings are achieved by using ever larger disks (eg. 12+TB vs 4TB) and ever larger disk enclosures (used to be 96 per tanker), which is probably closer to 240 now and maybe even higher.
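To put numbers on the N:M idea, here's a quick sketch; the parameter pairs below are illustrative guesses on my part, not anything Amazon has published:

```python
# Storage overhead and failure tolerance for a few hypothetical
# N-of-M erasure-coding schemes: any N shards out of M stored
# suffice to reconstruct the object. Parameters are illustrative.
def ec_stats(n, m):
    overhead = m / n      # raw bytes stored per user byte
    tolerates = m - n     # shard losses survivable
    return overhead, tolerates

for n, m in [(7, 15), (10, 14), (12, 16)]:
    overhead, tolerates = ec_stats(n, m)
    print(f"{n}-of-{m}: {overhead:.2f}x raw storage, "
          f"survives loss of any {tolerates} shards")
```

The fewer parity shards per data shard, the cheaper the raw storage but the fewer simultaneous disk/rack losses you can ride out.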
The tiers of service time are just a job-sorting/priority mechanism. If you buy 'fast' restore, your job gets put at the top of the heap to get scheduled. If you have medium and your window is closing, your job gets bumped up so it can complete within SLA.
> In the end somebody decided that money mattered more than safety -- probably not deliberately
> but this is the kind of sloppy "it'll probably be just fine" thinking which sooner or later kills people.
Indeed. Companies need to learn how to say NO. When someone came up with the idea of putting ever bigger engines on the thing and it was no longer stable, the idea should have been buried, not sustained with "hey, Johnny in Systems can write us up a software solution". NO, god-damn NO! And when, in its soulless pursuit of sales and profits, the aircraft co. decided to do something stupid, the regulator should have jumped all over them and stopped it dead in its tracks.
From https://www.pprune.org/tech-log/615709-737max-stab-trim-architecture-2.html I really like the bit about " And then we had STS, which trimmed the stabilizer without pilot input. Huh???? ... But rational was to tell the pilot ( I use pilot to assert whoever was in charge of moving controls), he needed to trim for the new speed/AoA."
This is meddling by software do-gooders. By interfering in the natural operation of the aircraft, the pilot has now mentally checked out. Any pilot paying attention would feel and recognize right quick that he needed to re-trim without being "helped", just like it had been for decades prior. If not, then by god he's not fit to sit in that seat! Then they layered on yet another nanny function because Boeing in this case made a DELIBERATE choice to say 'Yes' to somebody at the airline or in marketing. The customer, as a rule, does NOT know what they are talking about, and I'll bet airline execs don't have a clue or care about physics; they just want to cram more seats into the same space and have it fly farther and faster on negligible fuel. At some point an adult needs to stand up and say, "No, we're not doing that; this 50-year-old design can not be modified further." The bane of modern technology is that the software programmers always pipe up with "we can write some code to 'fix' that". And as we've found out, they did a typical CRAP job of it and didn't bother to follow the RULES that had long since been established.
Revised engine nacelle and blade design for better thrust and fuel efficiency is again, fine. Decreasing drag with those winglets - brilliant. Upsizing, rotating and shaping the engine so the plane is no longer stable - STOP right there and do not execute! Or go hire yourself out to Lockheed and work on fighter jets.
We do NOT need to fly at the ragged edge of performance. We do NOT need to carry ever more ridiculous numbers of helpless/hapless souls at one time. We do NOT need razor-edge efficiency in lift or engine performance that *requires* ever more complex software solutions to try to bash it back into some flyable shape. We do not need more fancy software to make up for ever less skilled and mentally disengaged pilots pretending they know what they're doing. Progress does NOT have an infinite endpoint. Every activity has a cost and human beings are LIMITED. Apparently modern man has decided that all costs can be papered over with ever-increasing amounts of software.
Same shit in motorcycles - not to jack the thread. First it was ECU and FI. Ok, reasonable and simple improvements that didn't overwhelm the meat or fundamentally change the relationship between rider and machine. Now we have cornering ABS, corner-by-corner brake and throttle maps, launch control, and gd fly-by-wire etc. All of it completely pointless and unnecessary to the task at hand - riding the damn thing from point A to B. You now have world-class racers, the best in the world who can literally get away with being as clumsy as a 2-bit street hack; pinning it and not getting their ass thrown over the moon. Worse, you have said street hacks with but 5% of the talent and skill riding machines that without electronic nannies would have found themselves quickly in the ER or morgue. "electronics this, electronics that" you hear incessantly in interviews. NO, god damn it! If you can't *directly* control the hydraulics of your brakes and regulate the engine with the throttle (again with no electronic, "here I'm detecting some slip, I'll take over") then the whole thing is a farce. We want to see skilled individuals doing their craft, not who has the best software developer and smartest algorithm and sensors all but riding the bike for him.
Back to planes - "here, hold my heading and altitude for a couple minutes while I root around in my flight bag for the PB&J and a cup of coffee, so long as sensors appear nominal, otherwise warn me and let go" is/was a proper and acceptable degree of improvement. Though properly this should be and has been solved for decades via "yo, co-pilot, you have the stick". When someone else is doing the flying (eg. the computer) the natural tendency is for the human brain to check out.
How many millions of hours were logged by mere teenagers in WW2 and wars since in transport planes, flying on partial panel, in lousy weather and getting shot at? Yeah, yeah, the big bomber losses were atrocious, but it wasn't because the pilot didn't know how to fly or because the damn computer was second-guessing them based on shot-out sensors.
Chasing unreasonable efficiency and lower costs is now taking lives, and the chorus for "more AI because it's better than people" is only going to make the failures bigger and costlier and, more importantly, the pilots increasingly helpless to diagnose and recover within the (physics-limited) window of opportunity. If you're going to have a pilot in command then the plane must fundamentally comport with human limitations, not spew thousands of messages and alerts at him to the point that his ability to cope is overwhelmed -- the damn programmers again (I don't mean just the guy writing the code, but the whole foodchain). The computer must by definition be no more than an advisor or really dumb help. Otherwise toss the pilot out on his ass and have the computer run the entire show.
If the introduction of computers is making a significant improvement in safety, then the conclusion is some combination of:
1) the damn things are too complicated for humans to fly, which by definition means the trajectory of design is WRONG.
2) the skill level of the pilots is highly uneven and probably insufficient.
The answer isn't more computer; it's smaller, properly designed planes, fewer, simpler planes, and more expensive seats. That or just go to drones and be done with it. If a PIC screwup kills only 150 people at a time, that's better than killing 800 because the gd computer was interfering and, worse, could NOT be removed or sufficiently sidelined because the airplane requires the computer to even fly at all, and some programmer decided the software (and its supposedly non-dodgy failure-detection logic) knew better than the supposedly trained people with hands on the yoke.
I meant to add in my little screed that the first rule of being a pilot is to fly the damn plane. There should NOT be any software systems to second-guess the pilot(s) beyond anything more than an advisory role. There should be NO automatic trim. If you can't be arsed to pay attention to trim and adjust it with deliberate control inputs as speed and elevation changes you are NOT flying. You're barely supervising and probably inattentively at that.
Heck I used to sit in the jump seat on Tokyo to Anchorage when I was a kid for hours and well remember the clattering of the trim wheels as they spun.
I have no beef with how the Airbus (Air France?) software behaved. It's supposed to yield (I would argue software shouldn't be in control at any time...) on demand and for any reason. That the pilots screwed up is unfortunate but is the cost of flying silly fast at silly altitudes where the coffin corner is much too easy to hit. Shaving safety margins so excessively to save money is where this whole modern society has gone off the rails.
If you can't reliably hand-fly the plane from end to end you're NOT doing your job, your fitness to task is not acceptable, and your route is too damn long and/or you're under-crewed. We don't tolerate autopilot for trucks or chartered buses, so why do we allow 200+ person vehicles to just let the fakakta computer run the show? Flying should be expensive; it should require the absolute best physical specimens with the sharpest minds and attention spans actively engaging the controls.
IMO pilots as a class are incredibly naive about how unreliable computers, sensors, and software are. Part of that I'm sure has to do with the manufacturers lying their asses off, because the millions of hours sunk into fancy computers and control software THEY decided to develop have to be paid back somehow. Like e_is_real_i_isnt said, there are 6 basic controls; none of them should ever be run by a computer. Frankly I'd add FADEC to that as well, although that is one "computer" that would be nearly impossible to remove. Then again, there have been crashes caused by those misbehaving as well.
When you have to have a computer run things because it's too complicated for a human to run things, we've gone way too far.
The pilots are either competent or they aren't. There is NO reason whatsoever for software written by blithering idiots to override and countermand the pilot. He is GOD. If he is an imperfect, distracted, negligent god, then sucks to be the passengers, but this was a clear case of software elevating itself above the meat.
Programmers, fuck off! Sensors are fallible. It's fine to chime and red-light, even buzz/alarm when something sure looks wrong, but do NOT interfere.
> I also think that things like erasure coding is just a terrible idea in general.
> File or record replication is the only sensible solution for modern storage.
Hardly. Triple replication for small objects and EC for large is specifically intended to guarantee integrity and availability of data when things get silently corrupted or nodes become unavailable. The obvious tradeoff for this is "wasted" space for duplicates and CPU time to compute EC both on write and read.
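To make the EC tradeoff concrete, here's a toy single-parity erasure code; real object stores use Reed-Solomon codes with several parity shards, but the recover-a-lost-shard mechanics are the same idea:

```python
# Toy erasure code: k data shards plus one XOR parity shard.
# Losing any single shard is survivable. Real object stores use
# Reed-Solomon codes tolerating several simultaneous losses.
from functools import reduce

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(shards):
    # Append a parity shard that is the XOR of all data shards.
    return shards + [reduce(xor, shards)]

def recover(shards, lost_index):
    # XOR of every surviving shard reproduces the lost one.
    survivors = [s for i, s in enumerate(shards)
                 if i != lost_index and s is not None]
    return reduce(xor, survivors)

data = [b"AAAA", b"BBBB", b"CCCC"]
stored = encode(data)
stored[1] = None                 # simulate a dead node
print(recover(stored, 1))        # → b'BBBB'
```

The "wasted" space here is 1/k of the data; replication costs a full copy per replica, which is the whole point of the tradeoff.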
Using Ceph to run live VMDK (as opposed to storing the initial bootstrap and subsequent snapshots) is nuts, I agree.
Netapp has fast "write ACK" times because the write is simply landed in battery/flash-backed RAM and de-staged at its leisure. Till of course the write load overruns the ability to checkpoint and flush said first-level cache, and even the second tier if so equipped.
Object/Ceph store for RDBMS workloads would be appalling. Operating System disks are pretty much write never so if there were a way to implement NFS-root on top of Ceph without a lot of work that might be interesting. Or replace EXTFS with native in-kernel CephFS, that might be something.
f'k me, all this heartache over desktop environments, icon sets, and changing window dressing? If you're running anything more than TWM and some xterms with screen, you're doing it wrong. File managers (Nautilus, Konqueror, etc.), really? You people can't use cp and mv? Good *deity*, what is wrong with *nix users these days.
IBM is a dead man walking. I've worked on several projects with them (GCS and not) and while there are good people here and there, the corporate body-politic is a bunch of morons looking for every opportunity to screw over the client with ridiculous "solutions" and vastly overpriced "services".
RedHat only had a value proposition by virtue of "commercial support", and for a while that support was pretty good. It's steadily gotten worse though, and per-node value for dollar likewise. I've personally ripped and replaced hundreds of RHEL boxes with CentOS and kept maybe a dozen or two "critical" systems on RHEL for the so-called support, just to keep the auditors or 3rd parties happy. In the cloud space, Amazon Linux or Debian derivatives are the default and there is basically no reason to consider any of RedHat's offerings.
RedHat's Gov't consulting division will fit right in with IBM GCS though - peas in a pod.
I'm guessing it's really just an optimization of what a traditional filesystem does. Instead of the OS requesting an arbitrary series of blocks based on its own housekeeping records of 'path/file', which maps to a series of inodes and from there a series of block IDs, the OS just asks for a 'key' and the SSD has its own list of extents/block IDs that map to the 'value'. So instead of all that record-keeping at the OS level, all it needs to do is hash(path/file) and send that result to the storage device.
This removes at least one look-up table being maintained by the SSD and probably gives it flexibility in moving things around. It's probably very efficient on linear read/write but probably sucks on partial writes, akin to the RAID write hole.
I could see AWS/S3 using these in the data tier since Amazon already implements it like that: the storage node manages its in-chassis storage as key (blockID) -> list of device::extents that only it knows internally.
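A crude sketch of the lookup shortening described above; the class and function names are mine for illustration, not any vendor's actual interface:

```python
# Hypothetical KV-SSD model: the host sends hash(path) as the key;
# the device keeps its own key -> extent-list mapping internally,
# replacing the OS-side inode/block bookkeeping.
import hashlib

class KVDevice:
    def __init__(self):
        self._extents = {}   # key -> list of (start_block, length)
        self._media = {}     # fake flat storage, block -> bytes

    def put(self, key, value):
        start = len(self._media)
        self._extents[key] = [(start, len(value))]
        self._media[start] = value

    def get(self, key):
        (start, _length), = self._extents[key]
        return self._media[start]

def key_for(path):
    # The only bookkeeping the OS keeps: hash the path to a key.
    return hashlib.sha256(path.encode()).hexdigest()

dev = KVDevice()
dev.put(key_for("/var/log/syslog"), b"log contents")
print(dev.get(key_for("/var/log/syslog")))   # → b'log contents'
```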
Corp hands me a Windows laptop. Therefore I install Cygwin/X and Windows is just a window manager. WSL is useful when I want to test a real *nix instead of Cygwin's "almost". I don't know, it seemed to me X11 was downright trivial.
In Cygwin (or any other Windows X server): 'startx -- -listen tcp'
From Windows or Cygwin: '<Xprogram> --display ...'
For WSL to access the X server, do either of:
'xhost +localhost' in the context of the X server, or
inside WSL: 'ln -s /mnt/c/Users/<username>/.Xauthority'
Then, inside WSL:
'<Xprogram> --display localhost:#'
absolutely NOTHING special needed. I'm not opposed to someone wanting to get "paid" for putting in some effort but frankly this barely merits postcard-ware or has everyone really forgotten how to do *basic* X11 commands?
It's not actually policy; it's millennial stupid-fks who can't conceive of doing anything without video. And it's a back-door way to engage in racial and age discrimination. I have no problem telling HR to get bent and not taking jobs offered by imbeciles.
Funny thing, they all back down and continue the interview process over the phone. There's almost always a face-to-face step in the interview process anyway, so VC is just bs. I've only had to report one org to the Equal Employment Opportunity Commission for illegal acts.
Nobody needs or wants to see you. That's what "disable Mic and camera" in BIOS settings are for. Otherwise some sharpie or a quick wrap with an awl takes care of the cursed thing.
"Interviews are via skype"
"that's nice, I don't do skype"
"well, that's our policy"
"you're talking to me on a perfectly good phone are you not?"
Fkn twits and their sorry-ass video-conference software that has atrocious audio quality, can't even do party-lines right, doesn't work worth a damn unless the Internet is perfect, and only works if you install their agent. Bite me. dial the god-da** phone. It's worked for 200 years and 10x better than your retarded VC setup.
no cameras allowed in some areas and some companies. Forget mass consumers - the great unwashed are morons chasing "shiny" without rhyme or reason and zero consideration of what risks they are thoughtlessly exposing themselves to.
You don't need a different case; the camera hole is a punch-out. You don't need a different software stack. You don't even need a different production line. You just tell the pick-and-place machine not to add a camera to the next 100 devices. The camera device simply doesn't show up in udev, and any software that tries to use a camera simply outputs "no camera device detected" and moves on. Or is it asking too much of imbecile programmers to do any actual error handling in their code?
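The error handling being asked for really is this trivial; the device path and message wording here are illustrative:

```python
# Minimal "no camera device detected" handling: probe the device
# node and degrade gracefully instead of crashing. /dev/video0 is
# the usual v4l2 node on Linux; the exact path is illustrative.
import os

def open_camera(path="/dev/video0"):
    if not os.path.exists(path):
        return None, "no camera device detected"
    return open(path, "rb"), "ok"

handle, status = open_camera("/definitely/not/a/camera")
print(status)   # → no camera device detected
```

An app that does this works identically on camera-less hardware, no second software stack required.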
There are phones, and there are entertainment gadgets that pretend to be a phone. BB has already lost the mass phone market; it's more than high time to go back to first principles.
* best damn RF performance
* No GD camera option, or a single 8Mpix for receipts and documentation
* REMOVABLE battery
* 3.5 audio jack
* power button on different side than volume/convenience
* both USB3 and USB2 power connectors
* fuk wireless charging - talk about pointless
* Z10 or iSE or Bold 96xx form factor max
* heavy-handed application security - configurable rights management against all apps but especially Android apps (most of which are utter shit in security) to eviscerate their access to local and network resources, contacts, and other datastores.
this was SOP 2 years ago and probably before that. We hosted all kinds of websites on AWS but had to write a compatibility module just so the Walmart websites could be deployed on Rackspace.
EC2 statistics aren't interesting. S3 and CloudFront stats are where the useful analytics are housed.
Amazon Retail is a separate business unit. As is the entirety of the Virginia datacenter footprint and staff - it doesn't even have 'Amazon' in the company name.
I worked for a company that contracted with the Coke distributor to supply free drinks to the staff as a corp benefit. There was a "secret" Pepsi fridge and there was hell to pay if folks grabbed a Pepsi from the other floor and left it in the Cola fridge.
> If one of these guys is using tape
no one with half a brain cell uses tape for cloud purposes.
> surely any reasonable sized user is encrypting everything they put there, aren't they?
baw haw ha ha ha! oh my. ha ha hahahahha
> Or in other words, you can buy two drives a year for the price of same amount of Galcier storage!
But you're forgetting that Glacier/Google are using erasure coding. So to do an equivalent you need at least RAID6, but more like a ZFS pool with triple parity, active scrubbing, and integrity checks. So let's say you have to build a ZFS server with enough disks, power it up every month, and run a full check of all files each time.
No, Glacier et al. are not cheap. But it's probably cheaper than doing it yourself, or if not that, at least better done than what your average joe could hope to do.
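Back-of-the-envelope on the DIY comparison; every number below (price per GB-month, drive cost, copy count) is my assumption for illustration, not a quoted price:

```python
# Rough DIY-vs-Glacier comparison for 10 TB of cold data.
# ALL figures are assumptions: Glacier-class pricing around
# $0.004/GB-month, a $200 8 TB drive, and three copies for a
# poor-man's approximation of cloud-grade redundancy.
tb = 10
glacier_per_gb_month = 0.004
glacier_monthly = tb * 1024 * glacier_per_gb_month

drive_cost, drive_tb, copies = 200, 8, 3
drives = -(-tb // drive_tb) * copies        # ceiling division
diy_capex = drives * drive_cost

print(f"Glacier-ish: ${glacier_monthly:.2f}/mo")
print(f"DIY drives:  ${diy_capex} up front, plus power, "
      f"scrubbing, and your time every month")
```

The crossover depends entirely on how much you value the scrubbing and multi-site part, which is exactly what the average joe skips.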
apparently doesn't know that every Netapp FAS, IBM DS, EMC SAN head, and DataDirect enclosure (and many others) comes from Xyratex? Seagate's drives face an uphill battle simply because their product has a seriously bad reliability history. Sure, the 10k/15k SAS line hasn't been bad, but NOBODY will touch a Seagate SATA drive, even one made in Taiwan instead of the utter, unmitigated shite that their Chinese facilities produce.
He should have said "the duopoly has bought the entire USG, lock, stock, and barrel and your so-called representatives HATE you and will do anything to deny you lower prices and actual competition." Verizon and ATT are only the way they are because they have (cheaply acquired no less) legal cover.
S3 is lazy delete. Just because you delete stuff in the console does NOT mean it's actually gone. You have ~3+ days to get it back. EBS snapshots are stored in S3 AFAIK. The first and gravest mistake (aside from using cloud services in a sloppy manner in the first place) was to not IMMEDIATELY call AWS support and get the account administratively locked. They handle loss of account control routinely.
A recent academic paper (FAST '13) cited a FAST '08 paper by Welch, Abbasi, Gibson, et al. explaining that Panasas used 12+4+2 (or was it 14+2+2?) erasure coding (pyramid codes). Likewise the 12/2013 issue of ;login: has a piece by James Plank (highly recommended reading). In any event, nobody who wants to stay sane uses RAID. It's all erasure coding, all the time. Each object is independently coded when saved. Each hard disk is just that: a simple hard disk.
while I also question the utility of such a heavy enclosure, any moron knows you don't design for SATA but rather SAS. personally I would have put a fan bank in the middle as opposed to relying exclusively on the ones at the back but the ODMs who build this stuff (DDN is just an OEM) have these nifty things called temperature probes and IR guns.
Horizontal drive placement is HORRIBLE for space usage and cabling but more importantly air-flow.
Nobody is going to be replacing drives in these very often. If you're not using 3x replication or erasure coding you're a world class idiot.
I'm sure every last-mile operator is salivating at having their cake and eating it too (charging both the provider AND the customer for the same bits). But the telco has to put in faster circuits and more expensive equipment in your neighborhood so those HD streams actually get to you. That costs a LOT more than the gear needed to service the "usual and customary" traffic pattern your neighborhood PoP exhibits the other 20 hours of the day...
Is there greed in abundance? You bet!
> By the way, the top tier, 100Mbit symmetric without caps costs under $30/mo where I live
And when everyone in your building decides to do even just 20Mb/sec all at the same time? The PoP equipment will choke. Your building probably only has 1Gb of backhaul, if that. They say you can have 100Mb, sure, but they have ample reason to believe there aren't 10 of you in the building using it at the same time.
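The contention arithmetic, with an assumed building size and backhaul capacity:

```python
# Oversubscription sketch: 40-unit building, 100 Mb/s sold per
# unit, 1 Gb/s backhaul. All three figures are assumptions.
units, sold_mbps, backhaul_mbps = 40, 100, 1000

oversubscription = units * sold_mbps / backhaul_mbps
fair_share = backhaul_mbps / units

print(f"oversubscription ratio: {oversubscription:.0f}:1")
print(f"if everyone pulls at once: {fair_share:.0f} Mb/s each")
```

At these numbers the carrier is betting 4:1 that usage stays uncorrelated; prime-time streaming is precisely the correlated load that breaks the bet.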
huh? Since when am I an executive/corporate shill? Your Netflix BW is massively subsidized by all the other idiots who pay thru the nose for their Comcast service but don't actually use it. Unicast video is the stupidest idea possible.
If a substantive percentage of Comcast customers could switch in a heartbeat to another last-mile operator because Comcast was a greedy SOB and was deliberately not upgrading the peering points (or paying the likes of Level3 the amount demanded) to handle the severely lop-sided traffic flows, then yes, Comcast would magically discover they could afford to do so. Admittedly they have maximum incentive to screw over their own customers and especially where it concerns services that compete with their own. I fully support regulatory prohibition between last-mile and content ownership.
Problem with upgrading the peering points for the 4 hours/day that they get slammed to kingdom come is that it makes no economic sense in the aggregate when the capacity sits effectively 'idle' the other 20 hours a day. No rational CIO would go along with such a proposal on their own corporate circuits (unless the economic activity during the time period covered the costs), so why on earth do people expect Comcast to do it? eg. your steady-state is 100Mb/s burstable to 1Gb and you are on 95th-percentile billing. If you exceed the imputed 'cap' you get hit with fees, as you should. Does that necessarily mean you can justify an upgrade to 5Gb:20Gb/s because you get a lot of traffic for a couple of hours?
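For anyone unfamiliar with 95th-percentile ("burstable") billing, the mechanics go roughly like this; the sample data is made up and the exact index convention varies by carrier:

```python
# 95th-percentile billing: sample utilization every 5 minutes,
# sort the month's samples, discard the top 5%, and bill at the
# highest remaining sample. Bursts totaling under ~36 hours a
# month therefore cost nothing extra. Sample data is made up.
def p95_billable(samples_mbps):
    ordered = sorted(samples_mbps)
    idx = int(len(ordered) * 0.95) - 1      # drop the top 5%
    return ordered[idx]

# 100 samples: steady 100 Mb/s with a handful of 1 Gb/s bursts
samples = [100] * 96 + [1000] * 4
print(p95_billable(samples))   # → 100
```

This is why short prime-time spikes don't automatically justify a capacity upgrade: the billing model itself writes them off.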
So let's work with the 1 cent/GB figure even though you don't cite any evidence in support thereof. Is that what it costs Comcast to deliver from one end of its network to the other? Is it capex, physical plant, power, and people? Or is it gross expenses divided by the number of GB moved in the same time period across all network points?
So at 2.2GB/hr that means I should be able to get 410 hrs of content/mo, which obviously nobody uses. I'd be very surprised if the average was more than 30-40GB/mo. For $9/mo Netflix has to pay their people, pay the filthy-greedy content owners, pay all their transit providers, pay to co-lo at strategic network hops, pay for a zillion hard drives and servers, pay legacy CDN services (all gone?), and pay for object storage, etc., all of whom have to make a profit in their own right.
So while maybe the average Netflix user costs Comcast 30-50 cents a month (from a simplistic viewpoint), and let's suppose that Netflix pays their network provider the same amount, what does it cost Comcast to run a 100Gb/s WDM interconnect and backhaul those same ~20,800 streams of 4.8Mbit/sec (2.2GB/hr) to the customer during prime time? I expect it'll be a whole lot more than 1cent/GB. Is it 10x?
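The aggregate-stream arithmetic behind those figures, as I understand it:

```python
# How ~20,800 prime-time streams fill a 100 Gb/s interconnect:
# 2.2 GB/hr per stream works out to just under 5 Mbit/s sustained
# (the 4.8 Mbit/s figure above rounds this).
gb_per_hour = 2.2
mbit_per_sec = gb_per_hour * 8 * 1000 / 3600   # GB/hr -> Mbit/s

link_gbps = 100
streams = link_gbps * 1000 / mbit_per_sec
print(f"{mbit_per_sec:.2f} Mbit/s per stream; "
      f"~{streams:,.0f} streams saturate a {link_gbps} Gb/s link")
```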
The US market is anything but. The subsidies are rife and the margins likely obscene. But someone has to pay the piper be it Netflix or the Comcast customer using Netflix. Comcast is under no obligation to eat the costs of "acceptable unicast-HD streaming performance" out of the goodness of their hearts. Did your residential contract come with QoS terms and maximum allowable bandwidth limits during peak times? Hell, did they even provide minimally guaranteed bandwidth figures? Or just "best effort" clauses? That doesn't mean Netflix shouldn't be spending their money with Comcast to get their distribution servers placed appropriately so that the customer experience is not unnecessarily dependent on the quality of interconnection points.
Netflix is a zero. They need to charge a hell of a lot more to stay in business once they are forced to pay last-mile and other intermediaries what providing their service truly costs. And when they do, their user base will evaporate like a morning mist. Or they could stop being dumb and drop streaming entirely, since nobody lacks the storage needed to spool it to disk a la a DVR, and so what if you can't watch something "right damn now"? Should have planned ahead. It might be interesting if they wrote their client to use BitTorrent-style cooperation.
> the ISP's eyeballs always pay and once they've paid they get to see whatever they want
Yes, but there are NO QoS guarantees whatsoever - QED your Netflix stream is crap because every other tom, dick, and harry (your neighbors) is trying to stream Netflix at the same damn time. In order for all of 'you' to enjoy HD streaming and quit yer bitchin' Comcast would have to significantly upgrade their interconnection points with the Tier-1(s) that are carrying the Netflix source. (Netflix doesn't run their own network to enable them to do 'peering'.) Admittedly Level3 has their own CDN business so Netflix could utilize that but that just moves the origin closer to the Inter-Exchange but not to the OTHER SIDE of it.
There is only one answer and it's Netflix content servers sitting in all key points in Comcast's network. Akamai pays to put their servers in Comcast facilities. Granted, their footprint is tiny compared to a content distribution point, and they don't get charged "screw you" pricing.
I do agree that last-mile carriers should be prohibited from owning Content assets of any kind. Otherwise, as someone else posted, there is HUGE temptation to screw with their competitors and erect barriers to entry; both real and imagined. The FCC et. al. should enforce complete separation.
The tier-1 carriers generally don't charge each other or the 2nd tiers as long as the traffic crossing the interconnect points is roughly equal. Netflix uses L3 as one of their long-haul carriers. For the privilege of using that backbone (aka transport) they pay L3 a pile of money. Now when the traffic hops off L3 and onto the various L3-to-Comcast peering points, all of a sudden there is a gross imbalance between traffic flows. L3 says "pay up", Comcast says "bugger off". Comcast's lawyers call up Netflix and say "WTF, you pony up the money L3 wants us to pay, since the gross imbalance is due to your damn service". Netflix says "but, but, but, if we have to pay you, our service fees would have to double or triple! Plus you (Comcast) have an unfair advantage since your own video service stays inside your network". Comcast says, "Bite me! For years you've been lying to the public and investors as to what the TRUE cost of offering your service really is. And we're not going to be bullied into subsidizing you! So don't even try playing the Neutrality card, ass****, because this has nothing to do with neutrality. Alternatively, why don't you put video distribution servers in our interconnection points (aka datacenters) and pay us for power/space and top-of-rack bandwidth, then. That will go a long way toward equalizing the traffic flows and L3 will stop sending us nasty emails."
Netflix says "Fine. here's your money"
All those stupid enough to assert "but Comcast's customers are paying for their service so Comcast should just deliver": like hell! The true cost of delivering HD streaming video from the likes of Netflix isn't REMOTELY included in your bill, even if you have the cross-subsidized triple-play packages. What Comcast should do is charge customers who want to use Netflix during 5-10pm a nice $20 extra a month. They can use the money to pay off L3 and upgrade the WDM equipment and circuits at peering points.
The whole telecom industry in the USA is full of fraud. The people who use "no" traffic heavily subsidize those that download all kinds of sh*t, including Netflix. You should pay for what you consume, but then you wouldn't be able to advertise "unlimited" Internet. The proper solution is to stop hiding Internet costs in TV packages and telephone. Priced business Internet service lately? It's 3x residential because there is no cross-subsidy AND they provision their network such that they EXPECT you to use your level of bandwidth a LOT more than a home user would.
Residential Internet should be charged by the Megabyte and congestion charging should also apply. If you use streaming, your MB charges should likewise be higher. It's common practice when resources are scarce to charge REAL costs, and it's the proper, fair, market response. Internet bandwidth isn't free nor is it unlimited, the incessant whining of GenY/Z notwithstanding.
<quote> Though ironically the FC SAN will do the hadoop job better than amazon cloud can do just about everything else. </quote>
You can save mucho money if you buy an EMC CX3-80 or otherwise-expired products for which you can still get next-business-day parts delivery from any number of outfits. Admittedly Hadoop is still better served by keeping disks local (10K rpm can be a real benefit), but I've done it.
I built infrastructure for a similar marketing analytics firm in Reston and we saved a BUNDLE compared to what AMZ cost us per month and we weren't hitting it that hard. 6mo worth of Amazon spend was our capital investment and we got an easy 2x improvement in speed and massive improvement in elapsed time consistency. For base-load, EC2 pricing will kill you. If you have a truly massive set of jobs that can run for a few hours/days and then get torn down, then AWS is well worthwhile. VERY few people have such workloads and furthermore have the scripts to build/destroy their environment to take advantage of the unique capability.
sheesh, nobody sane uses plain RAID at scale anymore. At >2TB per drive, RAID6 is a must, and none of FB/Amazon/Google use RAID. Erasure coding (e.g. a 7-of-15 scheme) pays a footprint penalty of roughly 2x the original data but has HUMONGOUSLY better resiliency against loss (you only need any 7 intact pieces out of the 15 to reconstruct the data). Yes, the maths costs CPU, but really, CPUs are stupid fast, so nobody cares about that cost.
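The resiliency claim is easy to check with basic combinatorics. Assuming each of the 15 pieces independently survives with some probability, the object is readable as long as at least 7 pieces do. The 7-of-15 figures mirror the post; the 0.95 survival probability is an illustrative assumption:

```python
from math import comb

def survival_probability(n_total=15, n_needed=7, p_piece=0.95):
    """Probability that at least n_needed of n_total pieces survive,
    given each piece independently survives with probability p_piece."""
    return sum(
        comb(n_total, k) * p_piece**k * (1 - p_piece)**(n_total - k)
        for k in range(n_needed, n_total + 1)
    )

# For contrast, plain 3-way replication: data is lost only if all 3 die.
def replication_survival(copies=3, p_piece=0.95):
    return 1 - (1 - p_piece)**copies
```

With those assumed numbers, the 7-of-15 code loses data only when 9+ of 15 pieces fail at once, which is orders of magnitude less likely than all three replicas dying.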
It ain't only FB trialing WD's drives in their cheaper and faster than tape cloud storage solution.
@Tom, you do realize the web UI is not for serious users, right? It was written so the riff-raff would quit their bitching and moaning about having to use a CLI or write *gasp* some code. If you have a ~dozen hosts, sure, go ahead. But if you use cloud, you are EXPECTED to drive the API programmatically.
Cloud is NOT designed to be user-friendly. It's designed for people who do massive-scale (and transient) computing. Yes, sure, 98% of the accounts are using AWS as a "cheap" (HA!) colocation alternative, but that's not the design target. EBS is a sop to the clueless who couldn't wrap their minds around the fact that cloud computing is DISPOSABLE computing. You were never supposed to have persistent data except in S3. You spin up a volatile instance, fetch what you need out of S3, do some work, save your results back to S3, and die in a sudden flash of light.
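That fetch-work-save lifecycle is just a pipeline. Here it is sketched with an in-memory dict standing in for S3, since a self-contained example can't assume AWS credentials; in real use the fetch/save steps would be S3 GET/PUT calls:

```python
# Disposable-instance pattern: pull input from durable storage, compute,
# push results back, then the instance dies. The "S3" here is a dict
# stand-in so the sketch runs anywhere.

def fetch(bucket, key):
    return bucket[key]

def save(bucket, key, value):
    bucket[key] = value

def run_job(s3, in_key, out_key, work):
    """Everything local to the instance is volatile; only `s3` persists."""
    data = fetch(s3, in_key)
    result = work(data)        # the actual compute step
    save(s3, out_key, result)
    # ...instance terminates here; nothing local survives.

s3 = {"input/words": ["cloud", "is", "disposable"]}
run_job(s3, "input/words", "output/count", lambda ws: len(ws))
```

The instance holds no state worth mourning: kill it mid-run and you rerun the job, which is why spot/transient pricing works at all.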
And yes, you can launch a zillion instances all at once. There are customers that do precisely that: spawn a few thousand, do a CPUID check, and terminate the ones they don't like.
> individual "system" controls an entire rack's worth of disks. I can't find anyone doing that
AWS/S3 uses 96-disk enclosures in 4U; Glacier, 3 enclosures per 'head'. I've seen other setups of 60, 70 (HP MDS600), or SGI's 84-drive enclosures connected 4-up to a single head. The redundancy is not computed within the unit; it is computed ACROSS units using software erasure coding. So even if a full rack of disks blinks out, you're just peachy. Now, if you lose too many racks holding the ~same set of objects (even across datacenters) such that you bust the N:K resiliency, then yes, LOTS of objects/files/customers can be affected.
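The rack-failure tolerance can be shown with a toy placement simulation. The piece counts and one-piece-per-rack layout below are made up for illustration; the point is only that a 7-of-15 object spread across 15 racks survives any 8 whole-rack failures:

```python
from itertools import combinations

N_NEEDED, N_TOTAL = 7, 15   # illustrative 7-of-15 scheme, as above

# One erasure-coded piece per rack: piece i lives on rack i.
placement = {piece: rack for piece, rack in enumerate(range(N_TOTAL))}

def survives(dead_racks):
    """Object is readable if at least N_NEEDED pieces sit on live racks."""
    live = sum(1 for rack in placement.values() if rack not in dead_racks)
    return live >= N_NEEDED

# Exhaustively check every possible 8-rack outage (6435 cases).
all_8rack_failures_ok = all(
    survives(set(dead)) for dead in combinations(range(N_TOTAL), 8)
)
```

Lose a ninth rack carrying pieces of the same object, though, and you've busted N:K, which is the scenario the post warns about.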
AWS is used by people who
1) don't have IT compute/network staff (or cheaped out and got spaffers)
2) don't want to enter into long term contracts for CoLo (which may not be available locally)
3) want to kick developers out of the on-prem equipment and stop being bothered by them
4) have meager capital budgets and/or don't know what they really need hardware-wise and want ~ZERO lead times when they reconfigure the mix, aka prototype and what-if
5) have absolutely massive jobs and BW needs that would be extremely expensive to sink capital into (or sign ISP/CoLo contracts for) when they don't need it steady-state
6) can't do math
Where AWS makes little sense is when you have a modicum of sufficiently skilled staff, and a known base load of compute and I/O with reasonable growth rates.
I've personally moved organizations OUT of AWS for big-data and other workloads because on-prem was a third of the cost and ran 50%+ faster and more *predictably*. But their big data wasn't at mind-blowing scale, and it was running continuously. Just four months' worth of AWS spend bought the entire on-prem stack.
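The break-even arithmetic behind that is trivial but worth making explicit. The dollar figures below are placeholders, not the actual numbers from that engagement:

```python
def breakeven_months(onprem_capex, monthly_cloud_spend, monthly_onprem_opex=0.0):
    """Months until on-prem capex is recouped by avoided cloud spend."""
    monthly_savings = monthly_cloud_spend - monthly_onprem_opex
    if monthly_savings <= 0:
        return float("inf")  # cloud is cheaper; capex never pays back
    return onprem_capex / monthly_savings

# e.g. $120k of hardware vs a $30k/month AWS bill, ignoring on-prem
# opex: payback in 4 months, matching the experience in the post.
months = breakeven_months(onprem_capex=120_000, monthly_cloud_spend=30_000)
```

Folding real on-prem opex (power, space, staff) into `monthly_onprem_opex` stretches the payback period, which is exactly the math the "can't do math" crowd skips.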
Especially with all this nonsense over RoHS (hello, tin whiskers), and with corners cut by manufacturing in China instead of Taiwan or the Philippines, even enterprise drives are starting to suck. A certain large cloud provider decided to run their JBOD chassis in such a way as to suck in HOT air and then wondered why drives were failing by the bushel. I always spend the extra $20 and buy the SAS models.
Anyone want to help me write an erasure coding module for Linux MD?
Biting the hand that feeds IT © 1998–2019