4317 posts • joined 31 May 2010
Re: Getting more people to adopt IPv6
To my understanding radvd still requires the systems ask for new IPs when a change occurs. In an IPv4 world, I never have to have my internal systems change anything, ask for a change, restart, hup or whatever. The external IP changes but the internal IP stays the same. Everything behind the firewall continues to work *exactly* as it was before, with zero administration. All that changes is the edge device (which picks up that the IP address changeover has occurred) and DNS (driven dynamically by the edge device.)
Now, I could try mucking about with prefix validity lifetimes, but then I'm still changing the IP address that the applications on that system see. There's all sorts of applications that need restarts to handle address changes and that's very, very bad.
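For what it's worth, the lifetime fiddling I'm talking about looks something like this in radvd.conf - a sketch only, with a documentation prefix and made-up lifetimes, showing how you'd wind a prefix down so hosts deprecate it for new connections:

```
interface eth0
{
    AdvSendAdvert on;
    prefix 2001:db8:1::/64
    {
        AdvOnLink on;
        AdvAutonomous on;
        # Keep lifetimes short so clients notice a changeover quickly;
        # set AdvPreferredLifetime to 0 to deprecate the prefix outright.
        AdvValidLifetime 7200;
        AdvPreferredLifetime 1800;
    };
};
```

Even with that in place, the applications on those systems still see their addresses change when the prefix does - which is the whole problem.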
The solution, of course is using ULAs with 1:1 NPTv6 or Map66 at the edge.
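On Linux, at least, 1:1 prefix translation is a one-liner per direction these days via the kernel's NPTv6 (RFC 6296) targets. A sketch, assuming a ULA of fd00:1234::/64 inside, a delegated 2001:db8:5::/64 outside, and wan0 as the edge interface (all invented for illustration):

```shell
# Outbound: rewrite the ULA source prefix to the global prefix.
ip6tables -t mangle -A POSTROUTING -o wan0 \
    -j SNPT --src-pfx fd00:1234::/64 --dst-pfx 2001:db8:5::/64

# Inbound: rewrite the global destination prefix back to the ULA.
ip6tables -t mangle -A PREROUTING -i wan0 \
    -j DNPT --src-pfx 2001:db8:5::/64 --dst-pfx fd00:1234::/64
```

When the ISP hands you a new prefix, only these two rules change. Everything behind the edge keeps its ULA, exactly like the IPv4 setup described above.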
Ivory tower types may not like that we all have 30+ years of legacy cruft to drag around, but fuck them in the face with a rototiller. I couldn't care less. We do have 30+ years of legacy cruft and that isn't going away.
Applications don't like having their IP addresses changed. That means that you either have to set up the application for all possible IP addresses (and defend all possible IP addresses) before the app starts. Frankly, this is often not possible in cases where you are trunking in a second ISP to handle load changes or ahead of a known outage/changing contracts/etc.
Alternately, you have to restart your apps every time a change occurs. That's just flat out unacceptable.
Radvd doesn't solve these problems. All it can let you do is assign new global IPs to your systems when a change occurs, assuming that the stars align right and the things actually handle multi-IP stacks properly, actually honour route expiration and so forth.
Load balancing, as you said, requires NAT. I don't think the future is overloading NAT as we have in the IPv4 world, unless you live in Canada where the ISPs are douchecanoes that don't hand out prefixes. (May they burn in the eternal fires of their own greed.)
At a minimum you are going to do 1:1 prefix translation NAT to get proper load balancing, which is exactly what I use and advocate, and something that makes the ivory tower nerds' heads explode in an ideological rage.
To them, end-to-end is a religious concept that takes precedence over ease of use, profitability, manageability and even common sense. They will attack your professionalism, question your parentage and I wouldn't be surprised if they'd just shank you in the street with a sharpened toothbrush for having the temerity to suggest that "horrible internet breaking kludges" like 1:1 prefix NATing are required in the real world.
I can't stand those fascist wastes of carbon. I would not shed a tear if each and every last one of them get cholera and shit themselves to death. We wouldn't be in this mess, requiring "kludges" like prefix NAT if they had removed head from sphincter at any point in the multi-decade development of the IPv6 protocol to acknowledge the actual functional reality of the world in which the protocol - and the applications that use it - must actually function.
The network will adapt to serve the needs of the applications that make the business money. The business will not adapt to serve the desires of the people designing the protocol. That's life, and the ivory tower types need to fucking deal with it.
Re: Getting more people to adopt IPv6
My ISP doesn't hand out a prefix. It only hands off individual IPv6 addresses to devices directly connected to the modem*. Nothing else.
No other ISPs offer IPv6 to end customers at all here.
Even if I could get prefixes, what if I want to dual-home, or to switch from one ISP to another? The end-customer and SMB ISPs don't offer BGP to us, and that's assuming we're even trained to handle such a thing. I can buy dual-port IPv4 routers that do load balancing and failover and don't require me to renumber my entire network to accomplish it every time there's a failover.
How does IPv6 handle these scenarios? Hmm?
I'm deeply interested, because all I get from the ivory tower types is "sit on it and rotate, those scenarios don't matter, prole."
My response to them is "your end-to-end doesn't matter, assclown" followed by IPv6 NAT. When the protocol is ready to meet my needs then I'll use it as designed. Until then, fucks given about ideological purity of the protocol = 0.
*I don't care if you want to "blame my ISP." Eat 10,000 sacks of wiggling phalluses if that's your response. There are no choices of ISP for the end customer. What the ivory tower douchepopsicles have failed to comprehend from day one is that the ISPs dictate terms to customers, not the other way around. And the ISPs give zero fucks about anything except how to extract the most possible money. I don't care about what the spec is, or how it's intended. Only how it is actually used and what's available to me. Both from ISPs and from device vendors. Everything else is masturbation of the most pointless and vapid variety.
"I suspect the resistance I often observe regarding IPv6 addressing, loathing of SLAAC, and devotion to DHCP, is fundamentally Calvinistic."
No, it has everything to do with wanting control over our own networks and endpoints. Even little things like "renumbering" in failover or dual-homing scenarios for SMBs that don't have BGP. We don't care what ivory tower intellectuals want. We want functional, cheap, secure and private. IPv4 does that now. IPv6 destroys it.
Re: "one that we have never heard of"?
"Yep - that seems most likely to me too. Windows Phone is already selling more than Apple iPhones in 16 territories."
Poverty tier territories. How is it against Android and Symbian in the same places? Or against "nothing at all?"
Windows Phone: the mobile OS where you have to give a really long think about using it, even when the alternative is "nothing at all."
Why not build your own VSAN?
Do check that AHCI is supported first, hmm? Oh, it's not? You need to pay for ...and pay for...and...oh! How interesting! I'll just...wait, a little bit of...wow. That's expensive.
Maybe, maybe not. It really depends on who is obtaining clue when.
"Dude, what the hell do you think Motorola is going to do; you think they're sitting under some huge mountain, cracking their knuckles as their evil plan to trick users into divulging information which is on the hardware of Motorola phones already bears fruit?"
Yes. There's no money in shipping hardware; or most software, for that matter. The real money is in advertising. Do I believe that Moto - and every other tech company out there - want to know where I am and what I am doing 24/7/365? Absofuckinglutely. That's big money, and they'd be doing a disservice to their shareholders if they didn't attempt to scrape and sell it.
"Again, you assert "worst case" that is, to be blunt...dated. I run huge databases virtualised all the time. Ones that pin the system with no ill effects and no noticeable difference to metal."
Whatever happened to your previous statement that it is only by pushing the system to the redline that we learn things? Which of the two assertions isn't true? :)
I don't see a contradiction. I pin my systems. In testing and in real-world workloads. I believe it is absolutely required. What I don't believe in - and let's be perfectly clear here - is finding edge case scenarios that don't work and brandishing them as a reason to avoid a technology in all instances. If there are edge case scenarios or configurations that don't work, let's find them and then either fix the issue or not use the technology for that application.
That said, I can - and do - push my systems to the redline with my workloads and I do not see the results you see. That says to me that your results are dependent on your config and your workload. Thus I cannot use your scenario as a generic "virtualisation imposes a huge penalty" catch-all, nor can I extrapolate from the fact that you encountered a high-overhead scenario to state categorically that this is why something like "virtualised server SANs" are a bad idea.
"losing 40% of performance would also crank up your costs by a similar amount. That's not an insignificant hit to the bottom line."
I agree, losing 40% of performance is a big hit. That said, my experience from both synthetic lab testing and real-world results do not show a 40% hit, or anywhere near. Closer to 6% for redline workloads with 4% being the average.
4-6% falls into the "perfectly acceptable tradeoff for convenience" category for me. Again: I cannot accept your "40%" assertion without testing, and thus I will use my own numbers for the time being (and the workloads that I am aware of) and say "virtualisation is a great technology with a more than acceptable tradeoff."
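The cost arithmetic behind that comparison is worth spelling out, because a 40% performance hit actually costs more than 40% in hardware: to win the lost throughput back you need 1/(1-overhead) times the kit. A quick sketch, assuming nothing beyond my 4-6% figures and the 40% claim:

```python
def extra_hardware_needed(overhead):
    """Fraction of additional capacity required to recover the
    throughput lost to a given virtualisation overhead."""
    return 1.0 / (1.0 - overhead) - 1.0

# A 40% overhead means roughly 67% more hardware to do the same work...
heavy = extra_hardware_needed(0.40)

# ...while a 6% overhead costs you only about 6.4% more kit.
light = extra_hardware_needed(0.06)
```

That gap - two-thirds more hardware versus a rounding error - is exactly why whose numbers are right matters so much here.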
"But the most important point I would like to make is this: "Don't listen to my numbers - produce your own based on your workload." By all means, use my methodology if you deem it appropriate, and/or point out the flaw in my methodology. But don't start with the assumption that the marketing brochure speaks the unquestionable truth. Start with a null hypothesis and go from there. Consensus != truth, and from what you said I think we both very much agree on this."
When have I ever accepted consensus on anything? Point me to an article where this has occurred. I test things all the time. It's my job. If I disagree with your take on virtualisation it's because your numbers not only aren't close to mine, they're in a different postal code.
I don't have a chip on my shoulder about virtualisation, or metal, or really any technology. Frankly, I don't give a damn one way or another. What I am saying is the following:
1) Your numbers have to be reproduced before they can be believed
2) We have to determine how relevant your workloads are to the real-world workloads run by, well, anyone.
3) If you can evidence reproducible workloads that show 40% virtualisation overhead then there are people at VMware that will want to see this, reproduce it themselves and solve the problem by making a better hypervisor. I know many of them. They're good people.
In my experience, virtualisation is between 4% and 6% overhead for every workload I've tried. If you've workloads outside that range, I consider them an exception. An interesting one, worthy of investigation, discussion and remediation, but until we get more widespread testing on various workloads to see where they fall between my experience and your own I simply don't have enough data to put the kibosh on hypervisors as a concept.
Again, you assert "worst case" that is, to be blunt...dated. I run huge databases virtualised all the time. Ones that pin the system with no ill effects and no noticeable difference to metal. I also strongly disagree with your assertion that you cannot give up an erg of performance in the name of convenience; that may be your personal choice, it certainly isn't mine.
As for "running virtualised causing a substantial overhead on memory I/O" I have maintained this particular item for some time. Specifically that "features" within most hypervisors to optimize RAM usage create a dramatic overhead on the system and they need to be weeded out. There is also the issue that many virtualised systems = many OSes caching to RAM. This changes the game versus each system having its own dedicated setup, more than CPU sharing, I believe.
Databases used to be a big problem on hypervisors. 3-4 years ago. We've come a long way since then, and it's only the true edge cases that still show issues. That said, isolating an edge case enough to reproduce it on modern equipment and hypervisors is always a fun exercise.
So if I seem skeptical, that's why. You write like someone who did a bunch of testing in the ESXi 4 era, went "pfaugh, virtualisation" and then put up a "get off my lawn" sign until the end of time. 2-4 years ago, I wouldn't have put a gigantic 100GB DB2 instance in a hypervisor. Today? Not a problem. Oracle still gives me shit...but that's Oracle. MSSQL doesn't bat an eye about being virtualised and Pervasive runs like a dog no matter what you stuff it into.
MySQL can be tuned to run in anything. I have virtualised instances that work fine, others don't. I haven't, however, seen a difference versus metal worth writing about in years.
Now, maybe my databases are "poorly optimized." They certainly are I/O bound in the extreme. That said, I test with real-world workloads, not theoretical constructs. As I said above, I'd love to assemble a lab with a real-world workload that can reproduce what you're saying. It sounds fun to explore.
That said, if I seem skeptical, please bear in mind that your discussion mirrors dozens of conversations I've had with some rather closed-minded anti-virtualisation folks that can't let go of stuff from the beforetime and look at what is on the table now.
Thus testing. IT should be about the numbers, not about religion. Not for you, me, or anyone. Ultimately, that was the point of the article I wrote: there's too much religion in IT. From marketing and sales to even the phoney baloney whitepapers many companies knock together.
Let's get down to the testing. Reproducible results from which we can determine applicability, market impact, use cases and so forth. That's the information needed to properly advise clients. :)
"On the subject of leaving resources dedicated to the host/hypervisor, that is all well and good, but if you are going to leave a core dedicated to the hypervisor, then that needs to be included in the overhead calculations, i.e. if you are running on a 6-core CPU, and leaving one core dedicated to the hypervisor, you need to add 17% to your overhead calculation."
I never said "leave a core dedicated to the hypervisor". I said reserve it some space. Typically 500MHz or so.
As for this:
"the I/O saturation was a non-issue because the write caching was enabled, the data set is smaller than the RAM used for testing, and the data set was primed into the page cache by pre-reading all the files (documented in the article). The iowait time was persistently at 0% all the time."
I would have to conduct my own testing. My lab results consistently show an ability to saturate RAM bandwidth on DDR2 systems. Your results smell like an issue with RAM bandwidth, especially considering that's where you're pulling your I/O. I will look to retry by placing the I/O on a Micron P420m PCIe SSD instead.
I also disagree with your assessment regarding near/far cores on NUMA setups. Just because the hypervisor can obfuscate this for guest OSes doesn't mean you should let it do so for all use cases. If and when you have one of those corner case workloads where it is going to hammer the CPUs in a highly parallel fashion with lots of shared memory between them, then you need to start thinking about how you are assigning cores to your VMs.
Hypervisors can dedicate cores. They can also assign affinity in non-dedicated circumstances. So when I test something that I know is going to be hitting the metal enough to suffer from the latency of going across to fetch memory from another NUMA node I start restricting where that workload can play. Just like I would in production.
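The same principle applies inside a guest, or to any process you're benchmarking on the host. On Linux you can apply the restriction yourself; a minimal sketch, Linux-only, with os.sched_setaffinity the only API assumed:

```python
import os

def pin_to_cores(pid, cores):
    """Restrict a process to a specific set of CPU cores so it stops
    migrating and losing its warm caches (Linux-only)."""
    os.sched_setaffinity(pid, cores)
    return os.sched_getaffinity(pid)

# Pin the current process (pid 0) to the lowest-numbered core it is
# currently allowed to run on - a crude stand-in for what a hypervisor
# does when you set vCPU affinity.
first_core = min(os.sched_getaffinity(0))
pinned = pin_to_cores(0, {first_core})
```

That's the benchmarking equivalent of what I do in production: decide where a cache-sensitive workload is allowed to play before measuring it, not after.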
Frankly, I'd also start asking pointed questions about why such workloads are running on a CPU at all, and can't I just feed the thing a GPU and be done with it?
I flatten my systems all the time, not just in testing, but in production. I run full-bore render engines in a virtualised environment and I just don't see the issues you describe. That makes me very curious where the tipping point between my workloads and your simulation is. What needs to change in order to experience this dramatic drop in capability? Do I need to be on the lookout for it in my future workloads, or is it an artifact of using an ancient CPU or a peculiar testing configuration?
I don't have answers to these, but I've added it to the list of things to find out.
@Gordan; yup, I had missed it. In my defense, I hadn't slept in 4 days due to datacenter migration.
Let's address a few issues in the testing methodology you state:
"1) Testing is done by fully saturating the machine."
Testing should always be done by pushing the machine to the red line, otherwise we learn nothing.
"2) Not leaving any cores "spare"."
Leaving cores "spare" doesn't present a real test. However, the host instances should have reserved RAM and CPU on any production virtualisation deployment. It's a fairly common mistake not to enable this, and typically results in Xen/KVM showing badly compared to properly deployed instances. The host instances need wiggle room to do their jobs, especially with "noisy" VMs.
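By way of illustration, this is the sort of thing I mean - a libvirt-style domain XML sketch (VM name and core numbers invented) that pins guest vCPUs away from core 0 and keeps the emulator threads, the host's wiggle room, on it:

```xml
<domain type='kvm'>
  <name>noisy-guest</name>
  <vcpu placement='static'>2</vcpu>
  <cputune>
    <!-- Guest vCPUs live on cores 1 and 2... -->
    <vcpupin vcpu='0' cpuset='1'/>
    <vcpupin vcpu='1' cpuset='2'/>
    <!-- ...while emulator/IO threads stay on core 0 with the host. -->
    <emulatorpin cpuset='0'/>
  </cputune>
</domain>
```

Skip this on a Xen/KVM deployment and the host ends up fighting its own guests for cycles, which is exactly the misconfiguration that makes those hypervisors look bad in tests.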
"3) Pinning cores helps, especially in cases like the Core2 which has 2x2 cores, which means every time the process migrates, the CPU caches are no longer primed."
I pin cores all the time and never run into the issues you describe here. I have flattened multiple generations of systems and still don't see the disparity you do. What I wonder is if it is related to the Core 2.
Back in the Core 2 days I used AMD stuff, and they were well ahead of Intel in terms of hardware virtualisation support. Today's processors have any number of improvements over that old design and the introduction of proper hardware support in these generations of processors may explain the discrepancy.
The only time I have ever seen results like you describe is when I am able to saturate the RAM bandwidth. This is entirely possible with DDR2 systems, especially when you are allowing memory deduplication on the systems, something that - at least in ESXi - is enabled by default.
I'd also have to look at your I/O subsystems as being suspect. It smells a lot like I/O thrashing. I will see if I can scrape together any equipment from that era and place it against both the AMD Shanghai systems I have as well as my modern Intel Xeons. I am very curious to see what will happen when I pin them.
Hey; took a brief look at this, and noticed a few problems straight away. First up:
"(VMware Player 4.0.4, Xen 4.1.2 (PV and HVM), KVM (RHEL6), VirtualBox 4.1.18)"
Only Xen and KVM are hypervisors, and they are the two that are the easiest to tune improperly. You don't have ESXi or Hyper-V here, and they are the real test of what virtualisation can do. VMware Player and VirtualBox are not hypervisors...or at least not Type 1 hypervisors. There is going to be a huge penalty for running those. Everyone in the industry knows that. That's why they aren't advocated for production anything.
I am really shocked you got such bad numbers for Xen and KVM, which leads me to wonder how they were configured...but being Xen and KVM, if you look at them funny they'll run like crap.
Very interested to see numbers with ESXi and/or Hyper-V!
Re: read cache does nothing
Yep, in my testing you have to have at least 30% read as part of your workload for it to have a tangible difference. As I have said many a time: storage isn't one-size fits all. Everyone's a little different and there's more than enough money in the space for everyone.
...even if I don't quite understand why anyone (excepting very select niches) would choose some of 'em
Do you happen to have information on specific workloads I can test in my lab that would prove your claim? I'd love to test that and write it up. Please get in contact if you have details!
Re: didn't you answer your own question ?
Not all CSAs are complex. Proximal is "fire and forget". vFlash will be there in August, I'm sure. There are others that are as simple, or close to. It's only when you start layering on the features that the CSAs get into "job security" territory...and I start wondering "why not just go whole-hog, use a server SAN and be done with it?"
Re: Pretty good reading
Twitter finds a way. That makes me think you just aren't working hard enough at your trolling. :)
Re: Excellent work!
It's a possibility. It will depend on how full my dance card gets.
Re: What's this thing suddenly coming toward me very fast?
Re: 17TB ??
No kidding. I have installed more than 17TB of flash in Q1. Me. With my broke-ass clients. 17 Terabytes whaaaaaaaaaaaa?
Re: re. mirror
"They have to be 100% reflective at all encountered laser frequencies, but they're not, so they would heat up, degrade and vapourise."
Aye, but properly designed they not only reflect some of the oncoming fire but function as ablative armour. Plasma shields could be useful for a starship looking to project a field that works somewhat like a proper navigational deflector. If we could find a way to regenerate ablative reflection armour we'd have a half-decent combat hull to boot.
Perhaps a substance that could be secreted onto the hull that would instantly harden/freeze such that it had the relevant reflective and ablative properties? The issue with both ideas (plasma shields and regenerative ablative armour) is that you're having to carry the stuff around everywhere. If you get too far from port and get into the shit you have to limp back home to refill your defensive capabilities.
Now, if you could collect the relevant elements using a Bussard collector (perhaps by parking next to a gas giant, star or other friendly source of volatiles) then you might be able to make all of this lovely stuff in situ. Which brings us back to the same problem as in the paper: power.
The magnetic confinement for plasma shielding and the Bussard collectors would require enormous amounts of power. Terawatts upon terawatts. Element separation, refinement and manufacturing of polymers for your armour would also take a stupendous amount of power.
Matter/antimatter is unlikely as a power source: even if we could figure out how to make antimatter without using a significant fraction of the output of a star, you piss away more than half the energy from the reaction as unrecoverable "energy" like neutrinos. That leaves fission and fusion: fission because uranium is bloody everywhere, and fusion because - while fission is cute and all - fission just can't deliver the power needed.
So, in order to play the space combat game with even the remotest chance of survivability, each starship will require at least two power plants: a fission "spark plug" and a set of truly enormous fusion reactors to output the kind of energy needed. Napkin maths say that you're probably looking at a ship so large that Kirk's Enterprise* would be considered a shuttlecraft beside it.
Which means, quite simply, "not in our lifetimes."
*Not the Jar Jar Trek version
Re: Gates Foundation is EVIL
"Gates philanthropy does not excuse the decades of lying, cheating, stealing, and ruining of other people's lives and businesses that Gates (and Ballmer and others) performed in order to acquire all that money."
Maybe, maybe not. But people change. Ever think that maybe Billy G did? He was evil. There is a distinct possibility that he currently is not.
@Ledswinger if you are advocating that laws be drafted and enforced* under which you could seek financial recompense for time spent reading an article that under no sane interpretation were you forced to read then please, for the good of our entire species die quickly and without issue. I fear such arrogantly entitled idiocy may not only contaminate the gene pool directly through propagation of your lineage, but by proximity, in a manner similar to DNA-destroying high-energy photons.
Espousing a belief in remuneration for time wasted voluntarily - even if done only semi-seriously - is, in my opinion, "high energy stupidity" of such overwhelming composition that it should be added to the Geneva Convention with all possible haste.
*Or that extant laws be mangled such that they be turned to such a vile perversion of social justice.
It's all pretty simple, really. Software defined storage players have commoditised everything from tiering to deduplication. Anyone of any size can now make better use of their storage. That reprieve won't last, and soon we'll be back to the disk vendors, hat in hand.
Only this time, we'll be running more workloads against the disks, and deduplication, compression and $deity knows what else as well. So we'll need faster disks. More SSDs. Hybrids and so forth.
It never ends.
No one size fits all. Everyone's workloads are a little different...and there's more than enough money in the storage market for everyone. Unless you're marketing. Then $company and $product solve everything. *sigh*
Re: ARM Good Stuff but what about the Patents
If you think for a second that Intel has the patent portfolio to take on the entire IT industry, you're mad. Intel versus ARM is Intel versus everyone. Do you honestly think their last act would be to SCO their own customers?
If they did, they could kiss becoming a high-end fab company goodbye.
Re: When you think about it...
"It's easier to not tweet than to tweet. I prefer the simplicity of silence."
Says the fellow who has posted enough on an even more obscure medium - The Register's comments section - to have attained Silver Badge status.
At least on Twitter I can keep up with my friends and other people across the industry. Things like vBeers are organized on Twitter. I get together with and socialise with Tweeple that are part of my local daily life and my professional life.
The El Reg forums, meanwhile, seem to get used for bitching and elitism.
Now, I'm no better - I use the forums for bitching and elitism too - but at least I'm not out of touch enough to keep trotting out the old trope that Twitter is all about "self interest." Twitter is an instant messenger for conversations that don't need - or may benefit from not being - kept private. It's a shitty, badly designed, limited and terrible replacement for IRC.
What it isn't is a "microblog," no matter how much that may have been the original design, or how much some people want to think that label still applies.
There's plenty wrong with Twitter, but $deity man, please get plugged in enough to bitch about the actual problems with the service. Like the fact that its predecessor (IRC) was far - far - better for the task than Twitter is today.
I see that rather than participate usefully and transparently in this conversation, offering the benefits of your claimed experience in a trustworthy manner you have chosen instead to resort to assertion and belittling. How very disappointing. You almost had me believing you might be more than a pseudonym with an axe to grind. Sadly, however, the standard of discourse on the internet doesn't appear to have been raised today.
How do I deduce there is emotion? You cannot separate a discussion of one item - in this case my assessment that "in general, VCs of tech companies aim for 10X" - from a discussion about Pure. They are two separate things entirely.
Personally, I share some of your concerns about Pure. I don't personally believe they're worth 10X. In fact, based on publicly available information, I'd say even their current ~$3B valuation is more than a little hopeful. They're a niche hardware solution in a world going SDS; their current offering isn't revolutionary, it isn't going to change the world, and it isn't enough to see them through to the end of the decade. If this is all they have, they're dead.
Now, that said, I don't believe for a second that Pure has all cards on the table. They have a lot of the top folks from the industry. EMC, Veritas, 3Par/HP and more have all lost minds to this lot. I am not so naive as to think that they don't have R&D ongoing and even - given their size - a skunkworks project internally working to get a "one more thing" ready for prime time.
Is it enough/will it be enough to make $3B when the bubble collapses in 18-24 months? No idea. I don't have enough visibility in there to know what the cards held close really are. I do know that a lot of really bright, really experienced people have gone over to Pure; the sort of people who do structured Due Diligence before accepting positions. That says to me that there is something more than meets the eye there, even if I, personally, do not know what it is.
I am, however, certain that the VCs involved would not be dumping this kind of cash into Pure unless they were convinced it wasn't going straight to hell in short order. Have you looked at who's investing?
As regards my claim that "tech VCs generally seek 10X," I am basing this statement off of guidance given to me by numerous VCs, CxOs and VPs throughout the valley. I have good reason to trust their guidance and advice. I also made a gross generalization about a field in which there is a certain amount of subtlety, something that any reader of this comment thread should have easily been able to pick up on.
You claim to have an "extensive background in venture capital." This then raises the following issue: I have on the one hand a pseudonymous commenter on an internet technology blog making an assertion about generalized guidance that runs counter to the claims made by individuals I know and trust.
I do not dismiss out of hand that you could be correct. Alternately, you could be a pedant, taking offense at a generalization.
Still further you could be someone emotionally invested in the fate of Pure, seeking to grasp at any available straw to discredit everything I said by focusing on a generalization for which any number of exceptions could easily be found. (Given your inability/unwillingness to separate Pure from the 10x statement in order to discuss this more granularly, I lean towards this interpretation.)
Ultimately, I don't have enough information about you to judge. You're a pseudonym: functionally anonymous and with no posting history. I do have information from my sources, and even from just watching the market. I even - shocker of shockers - have information and analyses that I can't reveal because it would compromise my sources. That's part of the job.
So, we're at a crossroads here. One where you and only you can set the direction. This seems to matter a great deal to you - and it matters not at all to me - so it seems fair that the ball is in your court.
I use my real name, and information about me is easy to find all about the internet. Send me an e-mail. Tell me who you are, what your credentials, work experience and so forth are. Whom do you represent? Whom do they represent? What do you feel I am wrong about, and why?
I'll gladly arrange to do a full-blown interview with you, then take that information and sit down with my other contacts and get their point of view on the matter. We'll see what they have to say and present the information in an article.
I don't have a problem being wrong. When that occurs, I want to know how and why, where I made mistakes or was misled. I want to know what I need to know to correct the error and then I usually write a blog about it so that I can share my new understanding with my readers.
So: who are you? An experienced hand attempting to correct the errant ways of a rookie, merely an anonymous voice on the internet, or someone with an axe to grind?
Learning and spreading what I learn is my goal. What's yours?
Have a great day! ---> Beer, because everyone needs to chill once in a while.
I am perfectly aware of the low rate of success - especially "grand slam" success - of venture capitalism. I don't know what gave you a different impression, but it wasn't anything I wrote.
Additionally, I never said VCs get 10x. I said they want 10x. It's the goal they try for, especially in tech. Thus what they push companies to structure themselves for, especially those heading towards an IPO, as opposed to acquisition.
Reading comprehension. Try it some time.
Edit: I find it exceptionally weird that you signed up an account today just to post that one comment based on what appears to be a singular lack of reading comprehension. It makes me wonder all sorts of things about your motivations...but also why the above commentary so deeply upset you.
Spinning rust is commoditising. Building the physical box that storage goes on is commoditising. The features that yesterday EMC and NetApp could charge squillions for are commoditising. Storage is not.
The demand for storage is unlimited. The challenges of storage are equally overwhelming. We continually need to store our stuff in new ways, with differing levels of redundancy, or long-term, or temporarily, or securely, or in a tiered fashion....the list goes on. There are storage challenges we haven't even thought of yet, because the technologies that will cause those challenges haven't yet emerged.
Compute is nothing more than a race to the bottom on the price of silicon. CPUs, GPUs, ASICs and more; who can make more numbers crunch faster. We hit the ceiling on single-threaded speed ages ago and it's been stagnant ever since.
Networking is - like storage - a potentially unlimited market. Unlike storage, networking has been dominated by a monopoly for so long that the single biggest innovation that can occur is breaking the monopoly and commoditising what we already have. That is occurring as we speak.
SDN in the form of Openflow and like things will occupy networking nerds for the next decade, if not more. There isn't room in that market for too much innovation, because the battle to defeat Goliath still hasn't been won. Cisco's icy claws need to be uncurled first, and that will stall networking for some time. Besides, networks aren't the bottleneck today: storage is.
There is plenty of room for storage to grow yet. Whether you personally like the startup scene or not.
Let me try to be clear about this: commoditisation is a good thing. Companies have only so many dollars to spend. When the money is no longer going into high margin proprietary hardware then it can go into actual innovation. Slash the margins and get the hardware for near-cost and then you can invest your time (as an industry) on doing amazing things in software. And amazing things are being done!
The value of startups doesn't come from creating lock-in and milking customers for as much margin as possible by making things as incompatible as possible or requiring that you buy every replacement part/service contract/etc from the original kit shifter at stupid prices. It comes from making something people actually want using the best, brightest and most leading-edge talent that's out there.
The value of startups comes from creating a culture and a working environment that lures away the best and brightest from the fossilized legacy vendors, starving them of talent so that they cannot possibly compete.
Legacy vendors strangle customers, squeezing them until every last dime is extracted, then discarding them without a second thought. Startups strangle legacy vendors, draining the lifeforce from them, the customers and eventually the mindshare until they are top dog and must engage in legacy practices in order to defend their territory.
This is the circle of digital life. You may not like it, but you do have to live with it.
Addendum: the first hit for "nobody ever got fired for buying IBM" is the Wikipedia page for Fear, Uncertainty, and Doubt. Which largely makes my point for me, but just because I feel the need to ram this particular one home...
Queensland bans IBM from future work. We live in a world where you absolutely can get fired for buying IBM.
Welcome to the future. Your preconceptions are no longer valid. Enjoy.
You're right, of course. "Who's on the box" has mattered a great deal in the past, and will continue to be a strong factor into the future.
That said, for all the reasons I argued above, I do believe that the power of name-brand inertia is less important. There is one other reason not mentioned there: scope. The Amdahl v IBM battle occurred mostly back when there were far fewer companies with computers, period...let alone companies with the kinds of complex infrastructure that we have today.
The impact of a few people who can be bribed or who are so conservative they can't conceive of alternatives is greatly diminished by the sheer scope of the marketplace.
Unlike oh, so many of my peers I don't believe that "One size fits all." The idea that one - and only one - company must emerge dominant in a given field and that a company is only "worth" anything if it is that dominant company is completely fucking outdated and overwhelmingly ludicrous.
Look at storage. Storage is huge. It's a truly enormous field with unlimited growth potential. There is more than enough room for multiple companies to do amazingly well and a great many people to get spectacularly, stupefyingly, mind-blowingly rich.
You are absolutely correct in that people assume that large companies will "catch up" to the startups. Sometimes this assumption is right (usually because a big company acquired a startup, rather than innate innovation). Many times it's wrong. Even when it is right, the large company's solution is increasingly of lower quality, promotes lock-in and is frequently proprietary. This last is important in an era where so many are moving towards rapid-iteration technology departments powered by "as-a-Service" this and "Software Defined" that.
This is more than just some buzzwords. It's a discussion about how IT should be delivered. The DevOps movement - amongst many others - is an acknowledgement that corporate - and especially enterprise - IT has failed both the business and the users. Consumer IT leaps ahead, corporate IT lags behind.
You can't close that ever-widening gap by doing everything exactly like you've always done it before, relying on the same companies with the same release and refresh cycles. You have to take some "risks", even where the "risk" you're taking is simply stepping out of your comfort zone and using a different vendor.
Will every single company on earth do this? No. Do they have to for everything I've said above to be true? No.
That's where failure of imagination comes into play. We live in a world where you no longer need "all" or even "most" of the world to follow, herd-like, in the same direction. The industry is diverse. It is complex. And companies increasingly seek the means to differentiate themselves from one another by doing their IT differently than their competitors.
That's right: many of today's companies are finding that doing IT differently from the "established industry best practices" and "safe" vendors is what is giving them their competitive edge.
The old ways are dying. The idea that every single company will do IT the exact same way using products and services from the exact same vendors is almost extinct. There are so many vendors offering so many products today that this was ultimately inevitable.
So yes, not everyone is going to take the "risk" of trying a startup. Then again, that no longer matters.
Re: How much for P2P traffic deal?
Isn't it obvious? P2P means you're a dirty pirate and thus you should be drowned in acid while having your eyes consumed by a thousand angry ants. You're a blight upon the earth and your genetic lineage is worthless. Only the unclean of heritage and impure of mental capacity would question the unquestionable and inalienable natural right of corporations to hold copyrights eternally. In the name of the actual content creators, of course.
Sending data from one end user to another would never mean that you were attempting to take advantage of the past 40 years of technological development to launch a new business where everyone involved operates from their home. It would never mean that you might want to run a fish cam to show off your 180 gallon fish tank in real time or enable collaborative distance learning for home-schooled s/children/workers/retraining adults.
That's nonsense. You are a consumer. Consume! Pay your subscription for internet, for mobile, for cable, for Azure, for e-mail, for web services, for rent and gas and power and everything else. Your paycheque comes in and it goes out to subscribe and to rent. You are not allowed to own a goddamned thing, you poxy whoreson. That is reserved for your betters, prole. You will pay your life subscription and you'll be grateful for the privilege!
Any attempt to better your social station, to innovate, or to change the power structures that exist in society today makes you not only a bad person, it makes you a criminal. By default. There is no inquisition. There is no trial. You are guilty until proven dead.
Now, where's my fucking money?
Re: "She was having unprotected sex for money, but with other men also, not me only!"
"If she wants to sell her body, that's her perfectly legitimate choice."
Aye, but if you're having unprotected sex with strangers and then have sex with me on the basis that we are in an ongoing relationship, I do believe there are both moral and ethical requirements to let me know about my potential exposure to incurable sexually transmitted diseases.
I've no issue with the lady sleeping with whomever she enjoys; for fun or profit, her body is hers. It's where the fluids intermingle that lives can be ruined, and I believe both parties in any even semi-committed relationship owe each other a duty of care.
"So from where I'm sitting, it really does look like there's a skills shortage. Otherwise I think that we'd be swamped with applications."
...or you're utter shite at writing job adverts. Here's a giggle: tried posting on the El Reg forums? Pretty sure that you'd fill the positions overnight.
"While pay is half the battle...there is also a genuine skills shortage in IT"
No. Pay is 100% of the battle. If there is a skills shortage it's because all those with the skills and experience left for jobs that pay more and offer more respect. Like plumbing. Or grave digging. Or hooking on the streets in a furry costume.
"94% of Brit tech bosses just can't get the staff these days"
Have they tried offering a living wage?
Re: This sort of thing doesn't happen
Engage rage before finishing reading?
Heartbleed allowed you to attack servers hanging on the net. Anything that presented a vulnerable OpenSSL-backed service, really. This one, by contrast, requires the user to go to the site.
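For the record, Heartbleed boiled down to the server trusting an attacker-supplied length field when echoing back a heartbeat payload. A toy sketch of the pattern, in Python rather than OpenSSL's actual C (the function names and the "secret" string are invented for illustration):

```python
def heartbeat_reply(payload: bytes, claimed_len: int) -> bytes:
    """Vulnerable pattern: echo back 'claimed_len' bytes, trusting the
    length the client claimed rather than what was actually sent."""
    # Stand-in for whatever happened to sit next to the payload in memory.
    buffer = payload + b"SECRET-IN-ADJACENT-MEMORY"
    # No check that claimed_len <= len(payload): over-read leaks the secret.
    return buffer[:claimed_len]


def heartbeat_reply_fixed(payload: bytes, claimed_len: int) -> bytes:
    """Patched pattern: discard any heartbeat whose claimed length
    exceeds the bytes actually received."""
    if claimed_len > len(payload):
        return b""
    return payload[:claimed_len]


leaked = heartbeat_reply(b"hat", 28)
print(leaked)                            # b'hatSECRET-IN-ADJACENT-MEMORY'
print(heartbeat_reply_fixed(b"hat", 28))  # b''
```

Same three bytes in, wildly different bytes out; the entire fix is one bounds check.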
Also: Linux is evil cancer that only nerds with no lives would ever use and Microsoft is unicorn farts that tastes like rainbows.
This sort of thing doesn't happen
if you use Microsoft. Microsoft is used on more servers than Linux, and it's more secure. And it doesn't have the heartbleed vulnerability. And it's perfect in every way.
Edit: crap, I forgot to push Anonymous Coward. Welp, that's egg on my face, then...
The middle class has been in decline in western nations for about 25 years now.
Also, sorry to hear about the lack of "being around" for long. We'll try to make it as memorable a time as we can, hmm?
And yet, I sit somewhere between left libertarian and social democrat. If we (representatives of radically different philosophies) can approach agreement on this issue...
Re: An open question to the anti-net-neutrality crowd:
Trust a nerd to believe you can solve social problems with technology. *sigh*
Look, I don't care what the technology can do. Just because TCP/IP has the ability to do QoS doesn't mean that QoS should be used on the public internet. I'm perfectly aware that this is a capability of the protocol, and I use it within the bounds of my own network so that I, and only I, can decide what priority different classes of traffic get on my network. In fact, my edge routers are even able to look at QoS settings on the network and determine which packets get priority for access to the internet. That is how I determine the quality of service of my network.
There's the critical bit there. I determine the quality of service of my network. Nobody dictates it to me, certainly not by discriminating based upon whether or not I am requesting packets from a company that competes with my ISP.
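For what it's worth, marking your own traffic inside your own network is a one-liner on most stacks. A minimal sketch, assuming a Linux host and the standard socket API; DSCP 46 ("Expedited Forwarding") is the value conventionally used for voice-grade traffic:

```python
import socket

# DSCP occupies the top six bits of the IP TOS byte, so the DSCP value
# is shifted left by two before being written to the socket option.
EF_DSCP = 46  # "Expedited Forwarding"

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_DSCP << 2)

# Every datagram sent on this socket now carries the EF marking.
# Routers *inside my network* can honour it, and my edge can use it
# to decide which flows get first crack at the uplink.
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # 184 on Linux
sock.close()
```

The point stands, though: those bits only mean what *my* routers are configured to make them mean. Nothing obliges anyone upstream to honour them, and nothing should.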
You can bang on about FRAND/RAND as a solution to the social issues of abuse of monopoly or pseudo-monopoly position, but I've yet to see many examples of that actually working in the real world. Unless I'm missing something, your anti-net-neutrality stance is lodged firmly in mistaken economic beliefs like "the free market actually works". It doesn't, certainly not when there is the option for a monopoly to exist. It's as big a myth as trickle down economics.
So really, that's what this boils down to. There are plenty of examples in our history in which companies - including many of the very same companies that are in question with this very issue - have abused monopoly power, influenced regulators and politicians to the detriment of customers and generally been gigantic assholes. There are far fewer examples of "the invisible hand of the market" simply clearing everything up and making abuses go away.
If you have a means of guaranteeing that investment gets plowed into ever better infrastructure perpetually, that service is universally available, that speeds and quality increase over time, that prices won't become gougingly predatory for end customers and that barriers to entry will remain low-to-non-existent for new entrants, I'm all ears.
So far, imposing net neutrality and a shitload of regulation seems like the only way to achieve the above. Simply letting those in power do whatever they want is absolutely, positively, without a shadow of the remotest doubt going to result in the exact fucking opposite. There is no reason whatsoever to believe otherwise.
Additionally, as for your parting missive:
"and above all else that no internet provider is allowed to prioritize packets from services they own above those of services from competing providers.
Not even the routing and control protocol traffic required to maintain your network's stability?"
Don't be asinine. You're attempting to pin an extremist viewpoint on me when under no circumstances have I evidenced such. Routing and control traffic is and should be considered to be part of the infrastructure itself. It is necessary overhead to make the system work.
As I stated plainly in my posts, I have zero problem with certain items having priority on the public internet, so long as the rationale behind their having priority is obvious, transparent and clearly grounded in the common good. (For example, 911 or telemedicine traffic.)
As a society we make "common good" exceptions for every traffic and communications network. In times of emergency our governments have all sorts of powers, ranging from your duty to pull over when an emergency vehicle has its lights and sirens on so it can pass, to priority use of comms equipment by government officials during a crisis.
Do not try to set up a straw man by pretending that I am some ideological purist trying to impose a radical and absolutist agenda. That's bullshit and you fucking know it.
What I am seeking is the best outcome for small business owners and end customers in a fashion that doesn't completely ruin the ability for ISPs, CDNs, content distributors and even the rightsholder mafia to make money. I seek to prevent any one group from gaining absolute control and I seek to prevent vertical market integration which would lead to monopoly positions, anti-competitive barriers to entry and egregious - I would go so far as to say economically dangerous - pricing.
Let me be even more clear here, just so that we can all speak the same language: western society is becoming one that is based on the production and distribution of intellectual capital. We cannot - we must not - allow the distribution system of that intellectual capital to become controlled by a small oligarchy.
To do so would place us at a spectacular disadvantage compared to other nations which see the value in ensuring fast, reliable, cheap and (mostly) equalized access to the economic "market" that will define the twenty-first century. Everyone - rich or poor - needs to be able to both buy and sell wares in that marketplace, and they need to be able to do so unfettered.
If you hand an oligarchy the vice and place our collective economic testicles in the middle, don't be so shocked and shaken when they start tightening the thing and demanding money.
No "technical capability of the TCP/IP protocol" is going to solve that. Even toothless FRAND/RAND rules (which don't solve the issue of barriers to entry in the first place; they only assure that the few who make it over the barrier get equal prices) just don't solve the problem.
People aren't rational actors. It's about time those who worship disproven economic theory got that through their heads. It's kind of important when you're trying to build a society based on rules and technologies that not only have never existed before, but up until a few generations ago we couldn't have even imagined ever would exist.
At least one of the "cash for clunkers" tax rebates here applies to the purchase of any vehicle which meets a certain litres per 100 kilometres efficiency rating, even if it's used. The program isn't to sell more cars, it's to get the existing stuff that's really inefficient off the roads.
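For the curious, litres per 100 km is the inverse of the miles-per-gallon figure North Americans are used to, so "bigger is worse". A quick conversion sketch (the constants assume US gallons; an imperial gallon is 4.546 L, and the 30 mpg example threshold is mine, not the rebate program's):

```python
US_GAL_LITRES = 3.785411784  # litres in one US gallon
KM_PER_MILE = 1.609344       # kilometres in one mile


def mpg_to_l_per_100km(mpg: float) -> float:
    """Convert US miles-per-gallon (distance per fuel) into
    litres per 100 km (fuel per distance)."""
    return 100 * US_GAL_LITRES / (mpg * KM_PER_MILE)


# A 30 mpg (US) car burns roughly 7.8 L/100 km.
print(round(mpg_to_l_per_100km(30), 1))  # 7.8
```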
Re: "Moz's C/C++ replacement Rust"
Honest question: what about C#? My understanding is that it has quite a following, one that is fairly steady and unlikely to be "faddish". Or is that considered part of C/C++?
Re: I want my slow, soft wearing flash...
You and facebook both. Problem is you both want it cheap, and that isn't of interest to anyone when fab capacity worldwide is at 100% all the time.
"Ooooh, new project for the weekend! Gonna build me a Dalek outta a parrot and a Roomba!"
I'm never sleeping again.
@ vlbaindoor Re: Oh goody.
Scurrilous vagabond! I find your deleterious insinuations both haughty and contemptuous. Your vapid and irrational comments lead me to believe that your lineage could be none other than a cockferret for a father and a cuntweasel for a mother! Perhaps it is worth the application of effort to steer your course away from the seemingly inevitable douchepocalypse and towards a glorious nerd-calming fuckfest of personality-altering proportions.
May you find peace and the chance to chill. ---> Beer, because it's Friday.