There may be a glut in the housing market in many of the Western economies, but when it comes to data centers, there's pent-up demand for more modern facilities. So business should be booming then, right? Wrong. As the cost of compute, storage, and networking capacity has dropped, allowing companies to – in theory – get a lot …
Whilst it all looks good and fun, I'm not so sure HP's comparisons hold up. Are they comparing to expensive, traditional office buildings converted to DCs? Here in the UK, you'll find many DCs, especially those of the big hosting companies, are built as hangar-like warehouses, similar to those cheap structures you see at US outlet malls, then fitted out with insulation, cages, etc. Many of them look from the inside like you're in a bigger Portacabin! The big problem in the UK is finding a "greenfield" site the local council will let you build on, and then getting enough power and WAN links into the DC in what is usually a non-urban location. Not building it cheap. Of course, it could just be that stereotypical British tightness means we are actually ahead of the US in one area of IT for a change!
Good idea but...
Try flogging that to someone needing to put a DC in vaguely close proximity to an urban area, or even in the city...
There are 2 flaws to this DC: first, that the DC wings can't be attached together to create a larger floor area for servers (having several buildings is very wasteful), and second, that they're not multi-storey. I know there are 2 problems with that, the air intakes and heat extraction through the roof, but while it's a worthy goal to reduce the cost of a DC, HP also need to minimise the area of ground the DC covers. Solve that, and I can bet you that they will sell several high-rise DCs almost instantly.
Sums it up quite well.
...In the UK (and Japan and many other densely populated areas)
Buildings are cheap, land is expensive. Unless you can build up, you are chucking millions (tens of millions) away.
This design must have been put forward to describe a set of design parameters, but as such it would have failed the first week of any architectural studies programme. It fails its first criterion of being a near-to-urban construction. I think it's only appropriate to Scottsdale, Arizona. What's wrong at HP? This level of stupid fits in the 1950s class. It might be viable underground.
No raised floors?
Since you don't need to bring AC to the server racks, nor do they need any cabling.
Network and storage connectivity are all wireless these days, aren't they?
"RFC 802.3zz, Power and FCoE over wireless Ethernet", most likely.
Somebody might tell the artist who created the drawing that 5 kW per rack is nothing these days:
a single HP c7000 blade chassis consumes more than those 5 kW already, and four of those c7000s fit into a single 2m rack.
So if each of the 20 tiny squares in each green row in the "M8" module represents 1 rack, these racks would only be used to a quarter of their capacity, or else the designers aren't aware of the actual power rating of 20 to 25 kW per rack.
BTW: dumb techies like me still use "kW" instead of "KW", and the term for a server is "SERVER" and not "SEVER".
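The rack-power arithmetic above can be sanity-checked with a quick sketch. The 6 kW per-chassis figure below is an assumption for illustration only; the comment says merely that a fully loaded c7000 draws "more than 5 kW".

```python
# Back-of-envelope check of the per-rack power claim.
KW_PER_C7000 = 6.0        # assumed draw of one loaded c7000 chassis (commenter says > 5 kW)
CHASSIS_PER_RACK = 4      # four c7000s fit into a single 2m rack
DESIGN_KW_PER_RACK = 5.0  # per-rack figure quoted in the drawing

# Four chassis per rack lands inside the 20-25 kW range the commenter cites.
full_rack_kw = KW_PER_C7000 * CHASSIS_PER_RACK
usable_fraction = DESIGN_KW_PER_RACK / full_rack_kw

print(f"Full rack draw: {full_rack_kw:.0f} kW")
print(f"Usable at the 5 kW budget: {usable_fraction:.0%}")
```

Under that assumption a rack comes out at 24 kW, so a 5 kW budget leaves it only about a fifth to a quarter used, matching the "used to a quarter" estimate.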
RE: No raised floors
".....four of those c7000s fit into a single 2m rack....." Whilst you can fit four c7000s into a 2m rack, you have to be very careful over the rack you choose. If you fully populate four c7000s then you will marginally exceed the weight limit of a standard HP rack (and then you have the fun of trying to lift the top one into the rack at head height!). You will also need to drag twice as much power into one rack as most datacenters already have available under the floor, and require serious AC to shift the heat generated. We usually don't rack more than two c7000s per rack as it is easier and cheaper to spread them across more floorspace than provide the additional power cables and AC. We do have one City DC where we had to put four in each rack due to lack of floorspace, but that was the exception.
One of the best solutions I've seen so far was a converted factory up in Leeds. They had one big shopfloor with a metal platform made of steel grillwork making up a second floor running the length of the building. The original factory had machinery on both floors and the platform was cleared for a serious tonnage. As the ground clearance on the lower floor was a shade under 2m, the hoster put all the system racks on the second floor and then ran cables and had the patch panels on the lower floor, effectively using the ground floor as a raised floorspace which you could walk through upright. Very nice for cable work. Considering the number of DCs I see with a shallow and cramped raised floor and then twenty-odd feet of wasted airspace above the racks, I often wonder why more people don't use a larger underfloor space (airflow maybe?).