We set our readers a challenge: join our expert panel and answer questions on dealing with server room problems. This week we have the first in a series of instalments on the topic, with contributions from our resident reader experts, Adam Salisbury and Trevor Pott. You can read their advice …
All good, common sense
Having gone through much of the process described, I can say that the comments are all good. OK, so we're definitely in the 'small' category with three racks and about 10kW of load, but the process is the same.
A year ago, we had an unmanaged and unmanageable setup - multiple small UPSs (all at their limit), no proper power system, no cable management, and what could best be described as a mess. We now have an expanded server room with three new 47U racks, a single large modular UPS, and proper power distribution and network cable management. We get load data from the UPS, so we have graphs of that, plus graphs of air temperatures (we rely on airflow alone for cooling).
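The sort of headroom arithmetic that UPS load data enables can be sketched briefly. The capacity and readings below are made-up illustrative figures (the commenter only mentions roughly 10kW of load), not actual numbers from their setup:

```python
# Hypothetical headroom check against the load data a managed UPS exposes.
# UPS_CAPACITY_W and readings_w are invented illustrative values.
UPS_CAPACITY_W = 12000            # assumed modular UPS rating in watts
readings_w = [9800, 10100, 9950]  # recent load samples in watts

peak = max(readings_w)
headroom_pct = 100 * (UPS_CAPACITY_W - peak) / UPS_CAPACITY_W
print(f"peak load {peak} W, headroom {headroom_pct:.1f}%")
if headroom_pct < 20:
    print("warning: under 20% headroom - plan capacity before adding kit")
```

Graphing the peak rather than the average matters here: a UPS that rides at its limit (as the old small ones did) fails on the spikes, not the mean.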
One thing not mentioned: servers do indeed increase their power consumption according to environmental conditions. Overall, our power consumption varies by around 5% with inlet air temperature once it goes above about 20°C. Since not all of the servers actually have any power management (e.g. variable fan speeds), I suspect that some of them vary their loads by more than 5%.
The big challenge now is persuading the boss that we do need to upgrade the ventilation/cooling system - we only just coped last June, and at times we had servers raising over-temperature alarms. I suspect that now it has cooled down, the boss will decide we don't need to do anything, and it will be too late by the time it turns hot again.