Re: What is AMD up to?
don't read gamer reviews of intel vs amd power consumption and then draw conclusions about either HPC or webscale applications. these are throughput boxes, where the workload is embarrassingly parallel and (for webscale at least) not flops-heavy. such servers are simply never idle, for instance (or they're being used wrong).
Re: fuzzy math?
that's correct: "enterprise" disks only ever use quite narrow bands on the outer part of the platters, since short-stroking like that gives the lowest seek latency. these disks are sold on iops, not bandwidth. (which is why, more than ever, they sell into a shrinking niche market. think SSD...)
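to put rough numbers on the iops claim, here's a quick python sketch; everything in it is an illustrative assumption, not a vendor spec:

    # why a 15k "enterprise" spindle is an iops play, not a bandwidth play
    rpm = 15000
    avg_rotational_latency_ms = 0.5 * 60e3 / rpm   # half a revolution: ~2 ms
    short_stroke_seek_ms = 2.0                     # assumed seek within a narrow outer band
    service_time_ms = avg_rotational_latency_ms + short_stroke_seek_ms
    print(f"~{1000.0 / service_time_ms:.0f} iops per spindle")   # ~250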
uh, cloud is expensive
you know Amazon's profit margin is HUGE, right?
Re: Accuracy of results
whohasthefastestcomputer.com is just a flash plugin - very little relationship to the true speed of the computer it runs on, and totally unrelated to HPL.
Re: Let me overclock it plz :D
HPC doesn't generally overclock, for two main reasons. first, overclocking is, by definition, running the system outside of spec; unless the specs were stupid, that means less reliable and less robust - higher FIT rates, etc. second, overclocking dramatically increases power dissipation, and operating at scale means optimizing for performance/power, which means a strong preference for lower clocks.
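to see why the power point dominates, a crude sketch; the stock and overclocked figures below are assumptions picked only to show the shape of the curve:

    # dynamic power goes roughly as C * V^2 * f, and higher f usually needs higher V too
    def dynamic_power(f_ghz, v_core):
        return v_core ** 2 * f_ghz          # ignore the constant C; we only want ratios

    stock = dynamic_power(2.6, 1.10)        # assumed stock clock and core voltage
    oc    = dynamic_power(3.1, 1.25)        # assumed overclock with a voltage bump
    print(f"clock: +{(3.1 / 2.6 - 1) * 100:.0f}%  power: +{(oc / stock - 1) * 100:.0f}%")
    # roughly +19% clock for +54% power - exactly the wrong direction for perf/watt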
speculate on AWS margins?
I was looking at AWS prices recently, and even comparing to retail prices for servers, space, power, networking, I don't see how AWS could run at less than 20x markup. that's pretty amazing, even compared to, oh, say Apple. could it be that AWS gives incredibly steep discounts to large customers? or could they have some kind of exorbitant hidden costs?
AWS costs between $250 and $700 per year per ECU; purchasing your own servers, running them for 3 years, and throwing them away will cost you somewhere around $50/ECU-year. if you get hardware at wholesale and build/operate your own datacenters, the cost is probably close to half that.
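a quick sanity check of what those figures imply (the dollar numbers are just the rough estimates above, nothing more):

    aws_per_ecu_year = (250, 700)                    # observed range, $/ECU-year
    diy_per_ecu_year = 50                            # retail servers, run 3 years, then discard
    hyperscale_per_ecu_year = diy_per_ecu_year / 2   # wholesale gear + own datacenters (guess)

    for aws in aws_per_ecu_year:
        print(f"${aws}/ECU-year is {aws / diy_per_ecu_year:.0f}x retail DIY, "
              f"{aws / hyperscale_per_ecu_year:.0f}x hyperscale DIY")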
Hazra needs to work on his rhetoric. simply claiming pcie3 is "necessary" makes him look laughable - it's a bare appeal to authority. _why_ is it necessary? show us the numbers demonstrating realistic cases where it helps.
the best examples I can think of are high-end IB and some kinds of IO-intensive GP-GPU codes. without offering even one such example, he just looks like a marketing weasel.
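for what it's worth, these are roughly the per-direction numbers he could have shown (encoding overhead included; the dual-port FDR case is my assumption of where pcie3 actually starts to matter):

    pcie2_x16 = 16 * 5e9 * (8 / 10) / 8 / 1e9       # ~8.0 GB/s per direction
    pcie3_x16 = 16 * 8e9 * (128 / 130) / 8 / 1e9    # ~15.8 GB/s per direction
    fdr_ib_4x = 56e9 * (64 / 66) / 8 / 1e9          # ~6.8 GB/s per port

    print(f"pcie2 x16: {pcie2_x16:.1f} GB/s, pcie3 x16: {pcie3_x16:.1f} GB/s")
    print(f"dual-port FDR IB wants {2 * fdr_ib_4x:.1f} GB/s - too much for pcie2 x16")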
if an attacker so 0wns your network that they control DNS and can MITM all traffic, you're basically screwed. but this doesn't mean you need to cache everything - just the root certs. and those should be updated via your OS's standard update mechanism (after all, you have to trust them just as much as you have to trust your kernel, tcp stack, etc)
this is really the way it should always have been - separating ssl from domain mechanisms was just a historic oddity.
the big change here is that the current nasty, parasitic SSL-cert industry goes away. lots of them won't be happy. no customers will regret this though.
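for the curious, a rough sketch of the client-side lookup this implies, using dnspython 2.x and a TLSA-style record; the host name is hypothetical, and DNSSEC validation is assumed to happen in a local validating resolver rather than being shown here:

    import dns.resolver

    # fetch the certificate association published in DNS for a (hypothetical) host
    answers = dns.resolver.resolve("_443._tcp.www.example.com", "TLSA")
    for tlsa in answers:
        # usage / selector / matching-type / certificate association data
        print(tlsa.usage, tlsa.selector, tlsa.mtype, tlsa.cert.hex())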
possibly the stupidest cloud vapor yet
why haven't people realized that VMs don't improve security or reduce admin load? fussing with hardware is such an infrequent task.
but obviously diebold is worried about NFC and people having secure, easy ways to buy and/or get cash advances. ATMs are today's buggy whip...
you pay for whatever rank on the top500 you want, and it has almost nothing to do with the performance of real codes. but you're right: the interconnect does sound interesting, since it's the only novel part. it's a shame there's so little info available about it.
and this is news how?
it's a bit sad that's the best he could manage, and that he thinks it's worth talking about. compare to the current article about google's plan to manage 1e7 servers - probably very few of them in luggage.
2-3 kW limit is a lie
why do you let asshole vendors get away with claims like that one about 2-3 kW/rack? it's absurdly untrue, but if you pointed that out, it would also implode most of IBM's spin.
fact: it's not hard to build rooms at a bit over 10 kW/rack with normal raised floors and standard Liebert chillers. with rack-back radiators or more careful air engineering, much higher is achievable.
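back-of-the-envelope, with the node count and per-node draw as assumptions rather than measurements:

    nodes_per_rack = 36        # 1U dual-socket nodes, leaving room for switches
    watts_per_node = 330       # assumed draw under HPL-ish load
    print(f"{nodes_per_rack * watts_per_node / 1000.0:.1f} kW/rack")   # ~11.9 kW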
it's the call that's unsafe
it's not the clumsiness of holding up a cellphone that makes calling from the car unsafe. the problem is that the call itself steals enough of your attention that you are no longer a safe driver. please do not give people the mistaken impression that hands-free makes it safe to call while driving!
amdahl's law, for real
do you really think these guys don't intimately understand parallelism? but look up Gustafson's law instead - that's the relevant one here, since the point of this cluster is to scale up the problem, not to solve a small problem really fast...
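the difference in a few lines (p is the parallel fraction; 0.99 and 10000 nodes are just illustrative values):

    def amdahl(p, n):
        return 1.0 / ((1.0 - p) + p / n)   # fixed problem size
    def gustafson(p, n):
        return (1.0 - p) + p * n           # problem scaled with the machine

    p, n = 0.99, 10000
    print(f"amdahl: {amdahl(p, n):.0f}x   gustafson: {gustafson(p, n):.0f}x")
    # ~99x vs ~9900x - which is why scaling the problem up is the whole point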
why cardboard at all?
ultimately, any significant cluster winds up racked. so why not ship the racks fully installed? the cluster I care for daily certainly arrived like that: ~30 racks, a minimum of cardboard, no excess power cables, etc. there _were_ 30 skids, but the shipping company took them away. one rack arrived bashed in, but I'd guess that damage rate is comparable to what you'd see with the cardboard-intensive approach. naturally, preconfigured racks work well with putting leaf switches in each rack, for instance.
have these people looked at disk prices recently? raw storage costs about $250/TB, so everyone with half a brain is wondering what's worth the 10x markup. sure, you have to put the disks _in_ something, and yes, there is still some modest value in 15k rpm and in 24x7 vs 9-5 duty cycles. but disk is cheap, and particularly at the block level it may not make sense to try to centralize at all, especially once you consider performance.

there is significant value in providing multiprotocol, shared file-level access, especially with features like snapshots, replication, multisite caching, etc. but those are largely a small matter of programming, and therefore hard to charge an arm and a leg for...
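what that markup looks like in dollars (the raw price is the figure above; the array price per usable TB is an illustrative assumption):

    raw_per_tb = 250          # bare spindles, $/TB (figure from above)
    array_per_tb = 2500       # assumed quote for a midrange array, $/usable TB
    usable_tb = 100
    print(f"raw: ${raw_per_tb * usable_tb:,}  array: ${array_per_tb * usable_tb:,}  "
          f"({array_per_tb / raw_per_tb:.0f}x)")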