18 posts • joined Sunday 28th October 2007 07:02 GMT
"But scientific consensus is what validates discovery, you don't appear to understand this crucial part of the functioning of science."
Not 100% correct:
The ability to repeat the results using the same methods and materials is what validates discovery. You prove or disprove the theory based on experimentation and observation. This is how you separate the quacks from the real deal. You don't get that from a bullying, well funded, well connected Anglo-American clique that routinely fails to provide the data to properly analyze their "science".
If consensus were the primary driver of science, it would be little different from the consensus of astrologers, psychics, and the much derided "religious" types.
The actual applicability of climate models is dubious at best, which means they are pretty worthless at this point. They say La Niña, but we get El Niño... and all of a sudden ENSO is no longer climate, but weather... <rolls eyes>
" e: The Met office: You seem to fall into the trap of mixing up weather and climate. Nine years is weather, climate is longer term. To make it more simple - Climate is what you expect, weather is what you get."
Ohs noes, the weather vs. climate hair-splitting talking point.
So climate is weather sampled over time over an "area" (the boundaries of which aren't all that clear). The problem is, how much time? 5 years? 50? 1000? The AGW cheerleading crowd frequently cites short-term changes as evidence of global climate change (as little as one year of hurricane activity), but when other folks bring up the fact that there's been no net warming in 10 years, that's just mere weather and not climate.
So 1 year = Climate change if the pattern fits the narrative
10 years = Mere weather, because the pattern doesn't fit the narrative.
As you probably gather, the weather vs. climate thing is really an arbitrary, heuristic distinction used as a lazy argument to refute skeptics who point out the obvious fact that our ability to predict weather isn't so good, and that the same problem might also apply to far-future climate predictions.
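The window-length problem is easy to demonstrate with made-up numbers: fit a trend to a toy temperature record built from a small long-term rise plus a strong short-period oscillation, and the sign of the "trend" flips with the sampling window. Everything here (the 0.01 deg/yr rise, the 5-year oscillation) is an invented illustration, not real climate data.

```python
import math

def trend(series):
    """Ordinary least-squares slope of a time series (units per step)."""
    n = len(series)
    mx = (n - 1) / 2
    my = sum(series) / n
    num = sum((x - mx) * (y - my) for x, y in enumerate(series))
    den = sum((x - mx) ** 2 for x in range(n))
    return num / den

# Toy record: a slow 0.01 deg/yr rise plus a strong ~5-year
# oscillation standing in for ENSO-style variability.
temps = [0.01 * t + 0.3 * math.sin(2 * math.pi * t / 5) for t in range(100)]

print(trend(temps) > 0)        # True: the 100-yr window recovers the rise
print(trend(temps[-10:]) > 0)  # False: the 10-yr window sees only the wiggle
```

Same data, opposite conclusions, depending only on where you draw the weather/climate line.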
Keep moving those goal posts!
BTW AC, there is a "scientific" way to fix your problem that seems outside the scope of your political and non-scientific talking points.
Perhaps moving heavy industry off the surface of the planet would work? If you're concerned about the extinction of the species, there are a hell of a lot more obvious risks than the retarded debate over whether we've repeated the GIGO problem of the financial models with the climate models. After all, in order for the species to survive, we need to leave Earth eventually, seeing as the Sun will go poof someday and destroy the fracking planet completely.
"d surprisingly, you didn't lose me by the reference to "Survivor". No, it was by the sheer ignorance regarding networking. I'm far from a network guru, but even I know about multicasting. I may not know how to set it up or use it, but at least I know about it. You know, Class D using the old class system of addressing. I'll grant you that I've never seen it used in practice. Regardless, this "broadcast" scenario you analogize is EXACTLY what multicast was designed for. In fact, unless I'm mistaken, old versions of Norton Ghost used multicasting for restoring images over a network."
Umm... show me a *production* (as in open to the public) multicast delivery service that runs across different ASes on the "public" internet. HINT: There isn't one. And multicast wasn't invented to handle the type of video delivery that people want, which is *on demand* and not fixed schedule programming.
All HTTP delivery of video is currently Unicast IP, with a single stream per feed. In fact, all video on demand systems operate this way, as there is no point in delivering video via multicast if only two people want the feed at that time.
Richard is correct in this case, because Youtube doesn't do multicast. The rest of you can come back and make your point when they're ready to deliver via this method. ;-)
" Agreed - and the arse-kicking new Sun Storage boxes probably don't help them. Dave HItz on his blog blathering on about hybrid storage boxen, meanwhile Sun have actually delivered, and used SSD in a pretty far-out way to boot."
Quit smoking crack. NetApp was shipping hybrid Flash/SSD (FlexScale) as an add-on to their stuff before Sun's Fishy Storage was ever launched.
On top of that, they haven't released any SPEC numbers for their boxen, and if it's anything like regular ZFS + Slowaris it's going to need a lot of help to catch up with NTAP performance-wise.
"Distro" packages are for noobs...
"The whole point in the distribution centric model is that the QA and integration work can happen in a tested environment.'
The above statement is crap, especially in light of the Debian SSH fiasco and the fact that, still to this day, upgrades often mutilate large swaths of /etc customizations.
Most pros prefer to use their own source-compiled apps for production purposes; usually "distro" packages are for when you're in a hurry or don't care. OOo 3 definitely falls into the "don't care" category.
"I bet that when factory owners told their in house technicians that they were going to scrap their local generators and buy electricity from the national grid the techs told them they were mad. Cries of "But what if the power station fails, or the power cables are damaged, or the local substation blows up" must have been heard up and down the country."
This is true to a point, because it looks like the owners heeded those warnings. :)
Think about it: Most folks still buy UPS systems, generators, etc. Why is that?
Because the Grid fails, even redundant Grids.
Cloud computing is useful for many things, but it is not a panacea; it's a mainframe dressed up in new clothing. The power of cloud computing doesn't mean that you'll be outsourcing computing needs; on the contrary, it simply means that savvy shops can build the same mainframe-type environment on the cheap.
iSCSI performance is "good enough"
"10Gb iSCSI... now that's somewhat of a con job isn't it? Do you even get full gigabit potential out of today's iSCSI?"
Yes, I do. Actually, with link aggregation I can get 2Gbps+ throughput with MPxIO and *SOFTWARE* drivers using ghetto onboard NICs. With the amount of juice available in today's multicore procs, you've typically got the headroom to spare for most jobs. Sure, you need a kick-ass array, but that's applicable to any storage setup.
Frankly, I rarely see applications that require that much throughput or "performance"... If you actually look at the IO requirements of most mid-size shops, any of the related technologies are overkill.
"My current SAN units SATURATE a 4Gb line... that's a SINGLE host talking to the SAN. Good SAN arrays can easily take full advantage of 4Gb and 8Gb today... something that iSCSI simply doesn't allow even at the lowly 1Gb speeds. Is that worth something? I think so. You can knock fibre channel tody because of its cost... certainly true. You do pay for the ability to get full performance (with or without aggregation). If 100MB/sec floats your boat, and you don't need more than that, I say iSCSI is going to work fine for you. If you need 400MB/sec+, then I KNOW, you'll be better off with FC."
It's been my observation that the performance of a storage network is typically Array or Disk bound; the interconnect is rarely the problem.
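For anyone checking the arithmetic behind the numbers being thrown around (2Gbps aggregated GbE vs. "400MB/sec+" FC), a rough line-rate-to-payload conversion helps; the 0.95 efficiency factor here is a hand-waved assumption, since real framing and protocol overhead varies by transport:

```python
def line_rate_to_payload_mbytes(gbps, efficiency=0.95):
    """Convert a nominal link speed in Gbit/s to approximate usable
    payload in MByte/s. 'efficiency' is an assumed allowance for
    framing/protocol overhead, not a measured figure."""
    return gbps * 1000 / 8 * efficiency

print(line_rate_to_payload_mbytes(1))  # ~119 MB/s: a single GbE link
print(line_rate_to_payload_mbytes(2))  # ~238 MB/s: 2x GbE aggregated
print(line_rate_to_payload_mbytes(4))  # ~475 MB/s: nominal 4Gb FC
```

Which is the point: by the time the array and spindles are the bottleneck, the gap between these interconnects rarely matters.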
For VMware, NFS over 10GbE is the ideal solution for scalability.
Flat-rate fee structure is the problem...
The issue from a business model perspective is thus:
ISPs typically pay for transit on usage-based terms.
"Consumer" grade flat rate you're not charged per bit, but rather for a Hose that may be constricted somewhere up stream. This is certainly an issue for your ISP and is a major reason why George Ou has a more technically grounded point than his detractors... even though cable co's and telco's did this to themselves because they believe that people generally aren't willing to pay for data service; it's considered an add on to existing revenue streams like TV or Voice, so they never bothered building the billing infrastructure into their OSS systems.
Discussing the technical remediation issue without first addressing the core business problem is at best a band-aid, at worst a PR disaster.
I propose that consumer-grade ISPs switch to a usage-based billing model; this would provide the "transparency" everyone wants: if you leech, you'll pay more than granny who sends a couple of emails every week. Plus you'd get the added benefit of cutting down on abuse issues related to botnets; if you get infected and start spewing crap on the network, you'll end up paying for it.
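As a sketch of what such a tariff might look like (base fee, allowance, and per-GB rate are all invented for illustration, not real ISP pricing):

```python
def monthly_bill(gb_used, base_fee=20.0, included_gb=50, per_gb=0.25):
    """Hypothetical usage-based tariff: a flat base fee covers an
    included allowance, then a per-GB rate applies to everything
    above it. All figures are made-up illustrations."""
    overage = max(0, gb_used - included_gb)
    return base_fee + overage * per_gb

print(monthly_bill(2))    # 20.0  - granny's weekly emails
print(monthly_bill(500))  # 132.5 - the heavy leecher
```

The leecher pays for the capacity they consume, and a botted machine spewing spam shows up as a visible, billable spike.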
Customs has always been teh suck
Customs agents in just about every country have the right to inspect your baggage when you enter and leave the country; they don't require probable cause or a warrant of any sort.
In the past, the reason for doing so has been to prevent smuggling. I don't understand why people get their boobies in a twist about this; my worst customs experience was in Germany, of all places.
It'll be up to the courts or the legislature to decide if your computer is something other than a more modern suitcase and if your password is something other than a modern implementation of a luggage lock.
OH, and BTW, these rights that you hold so dear also come with responsibilities. Cooperating with Customs when you cross the border is one of them.
I know that's overlooked a lot these days by the masses of selfish ingrates who populate the western world, but for fuck's sake, you begin to sound like those morons who think seat belt laws are a bad thing. Sometimes reasonable precautions for public safety are not a reason to yell OMG! FACisM!
It doesn't matter who they work for; the fact remains that consensus is not proof. The realclimate.org folks have a vested interest in maintaining their position, as do many of the other bandwagon jumpers... You can wallow in the funding game, but that's a distraction and certainly doesn't buttress the scientific underpinnings of your argument.
Please go argue with these folks on a scientific basis, if you harp on their funding I'll know for a fact you don't know what you're talking about.
I disagree entirely with the notion of scientific consensus; honestly, it's crap. It's sort of like agreeing with the Wall Street consensus that Enron was actually making money, even though there was plenty of legitimate suspicion that they were a fraud.
Kool-Aid drinking and groupthink afflict scientific and academic types, I'd say more so than other endeavors, and what's a better way to get funding than predicting doom or the compromise of our precious bodily fluids by the fiendish florida^H^H^H^H "big oil"?
Also, if it's just a matter of physics, what kind of physics? Traditional Newtonian/Einsteinian or quantum? They don't necessarily agree...
Another question: why did the AGW crowd decide to fly to Bali, mostly in private aircraft? Seems that the biggest proponents of the theory don't actually *act* like this is a real issue, so why should anyone else?
Is this just another case of four legs good, two legs better but with spiffy lab coats?
Would you please back up your position with some facts? What we are focusing on here is the behaviour and limitations of specific access layer technology, not the "Internet."
Also, most "noted Internet founders and experts" haven't touched a real operational network in many, many years (Vint), and bozos like Craig Newmark are hardly "Internet" experts. They're bloody webmasters, and most likely wouldn't be able to explain the difference between ATM and Ethernet if you blocked their access to Wikipedia.
Please go read some books, I'd specifically recommend anything by Oliver Heckmann as a good start.
While TCP RSTs would not be a good choice for shaping web browsing or other interactive traffic (BE or EF), it's not a bad way to deal with BitTorrent and other p2p protocols that don't play well on the network. At least they don't outright block or filter it like they do with other obnoxious traffic (like NetBIOS over TCP, for instance).
Targeted TCP RSTs allow you to throttle the BitTorrent stuff while still providing ample headroom for web browsing and streaming... if you were strictly rate-limited via bandwidth queues, your link would suffer fairly awful congestion and you'd be calling Comcast support bitching about the problem. Compounding the issue, you can't just "rate limit" BitTorrent traffic, as it hops TCP ports and takes measures to obfuscate itself; dealing with this requires "discrimination" via application fluency or flow analysis.
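That classification step can be sketched simply: BitTorrent peers open every connection with a fixed handshake (the byte 0x13 followed by "BitTorrent protocol"), so a shaper can recognize flows by payload rather than port number. The reset policy below is a deliberately toy illustration, not any vendor's actual implementation:

```python
# BitTorrent peers announce themselves with a fixed handshake, so a
# shaper can classify flows by payload rather than by TCP port.
BT_HANDSHAKE = b"\x13BitTorrent protocol"

def should_reset(payload: bytes, congested: bool) -> bool:
    """Toy policy: only aim a TCP RST at a BitTorrent flow, and only
    while the link is congested. Real middleboxes track per-flow state
    and fairness; this just shows why port-based limits don't work."""
    return congested and payload.startswith(BT_HANDSHAKE)

print(should_reset(BT_HANDSHAKE + b"...", congested=True))   # True
print(should_reset(b"GET / HTTP/1.1\r\n", congested=True))   # False
print(should_reset(BT_HANDSHAKE + b"...", congested=False))  # False
```

Web traffic on the same port sails through untouched, which is the whole point of application-aware "discrimination".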
Even if the cable cos sent out new CPE capable of dealing with BitTorrent in a better fashion, there's no guarantee that the protocol won't get modified in a manner that bypasses the new restrictions; they aren't *that* stupid.
"A marketing blurp regarding an award, especially one as vague as that one, doesn't mean much. One of the founders also having donated time/code to FreeBSD also has NOTHING to do with this discussion nor my question."
It's not a marketing blurb from NetApp, it's an IEEE press release. If the IEEE thinks your work is "innovative", that speaks volumes even if you consider it "vague" (even though it mentions WAFL *explicitly*).
What's so innovative about NTAP patents? Ask the IEEE
"So, Anonymous Coward, what exactly is so innovative about the NetApp patents?"
I'd also suggest digging through some old NetBSD code... you'll find Mr. Hitz's name sprinkled about. He was doing Open Source before it was "cool".
For all the Sun trolls... this one's fer you.
"Ever heard of cognitive dissonance? Must be hard on you"
Care to elaborate on that? Please tell me how the run-up to this case was any different than Sun's trolling of Azul. I'm getting a little sick and tired of you Sun cheerleaders lobbing lame pejoratives without the facts to back them up. So please, at a minimum, use complete sentences when trying to flame, and tell us exactly why it's so utterly and completely different from the Azul takedown.
As for Sun's censorship of Azul comments on Schwartz's blog, I've made quite a few postings personally, and only the ones contrasting and comparing the Azul case didn't make it through. Hmmm... makes you wonder.
Proof enough? Perhaps you should go try yourself. You'll notice that postings to Dave's blog go through without delay, while postings to Sun's are "moderated" by marketing trolls.
Also, I've never seen a NetApp employee butcher their own name in writing, but it's certainly possible.
Do you have any evidence of them doing so ? ;-)
I personally don't think we'd be here today if Sun wasn't demanding $36 million in a supposed "cross-licensing" deal.
It's conjecture on my part that the blog censors in question are actually marketing "people." They could be lawyers; or the site might have a bug that black-holes anything with "Azul" in the message. Stranger things have happened. Perhaps Dave's site is faster due to the superior storage backend ;-)
Schwartz's blog is more heavily censored
Your metric for your public relations scoreboard is flawed.
It has been my observation that Schwartz's blog team censors their blog far more than Hitz's.
Many posts about the similarities with Azul on Schwartz's blog never made it to public view. If there's any case to compare this with, that's the one, and Sun's behavior in that case clearly demonstrates that they routinely engage in trollish behaviour with smaller competitors.
I'm also puzzled that you believe Schwartz is some rhetorical genius... he can't even spell "NetApp" without looking retarded.