Re: re-using an existing field is very risky too
One point of failure versus eight, plus the network connections and their configurations, plus the extra firewall rules and other security. Every patch has to be applied in development, test and user acceptance (with the network connectivity tested too, one hopes) and, finally, on eight live servers, none of which, in all probability, is really fault tolerant: they are cheap Linux boxes on what would once have been PC hardware or even, as in my last job, virtual systems sharing the physical box with all sorts of other systems and applications. All of these compete for possibly over-committed resources, on the assumption that not all VMs and their applications will need their full configured allocation at once. Of course it happens, and of course your DB or your application dies, horribly; I have experienced it. And, of course, the VMs and the network are configured and managed by a specialist group to whom you can only state your requirements and hope that they fit the group's strategy for meeting its budget constraints.
Suddenly, one or two large servers on highly fault-tolerant hardware (power supplies, memory boards, discs and so on), with much simplified network connectivity and decent load capacity, look interesting. Of course, they cost a lot initially, and supplier support is probably not the cheapest. But now your critical service has fewer points of failure, and software maintenance is a lot easier to plan, quicker to do and quicker to back out if necessary. As someone else pointed out, the standard uptime and performance of mainframes is in a different league from that of cheap, distributed systems.
I remember, after a high-profile system failure at another firm, caused by poor procedures and widely reported in the press, raising it with my management: it happened in a foreign country, at a different bank, so it was not seen as relevant or as a chance to learn from the mistakes of others. Well, learning would have needed effort and thought at a management and budgetary level. It also involved outsourcing to India; as that was, and is, the mandatory policy, the lesson was very inconvenient, so best ignored.
Lesson? Note every weak point you discover. Report them and keep a copy, with recipients, dates and the reaction (or lack of one). It will not get action taken. But it may provide protection later. If you are the SA put in this invidious position: no matter how self-confident you are, make a written plan, get it reviewed and follow it in anal detail, down to the full command lines or screen captures for each host and the exact timings from the practice run you did on the test systems (you did, didn't you?). Do not assume you will be on top form on the day, or that you will remember everything when the upgrade is running late or a network outage interrupts. Try to insist on a "four eyes" principle; something like the sketch below helps enforce it.
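To make that concrete, here is a minimal sketch of a runbook runner (my own illustration in Python; nothing this fancy is required, a printed checklist does the same job). It assumes the reviewed plan is a plain-text file of exact command lines, demands a witness's initials before each step for the "four eyes" check, and logs timestamps and exit statuses so the run can be audited, or backed out, later. The file format and the witness prompt are assumptions for illustration, not anyone's standard tooling.

```python
#!/usr/bin/env python3
"""Minimal runbook runner: a sketch, not a product.

Assumptions: the plan is a plain-text file, one exact command per
line, written and reviewed in advance; a second person types their
initials at each step ("four eyes"); every command, timestamp and
exit status is logged so the run can be audited or backed out.
"""
import subprocess
import sys
from datetime import datetime, timezone


def log(fh, message):
    # Timestamp every event so the actual timings can be compared with
    # those recorded during the practice run on the test systems.
    line = f"{datetime.now(timezone.utc).isoformat()} {message}"
    print(line)
    fh.write(line + "\n")
    fh.flush()


def main(plan_path):
    with open(plan_path) as plan, open(plan_path + ".log", "a") as fh:
        for number, command in enumerate(plan, start=1):
            command = command.strip()
            if not command or command.startswith("#"):
                continue  # blank lines and comments in the plan are fine
            # Four-eyes gate: the second operator confirms each step.
            witness = input(f"Step {number}: {command}\n"
                            "Witness initials to proceed (or 'abort'): ")
            if witness.strip().lower() == "abort":
                log(fh, f"ABORTED at step {number} by operators")
                sys.exit(1)
            log(fh, f"step {number} witnessed by {witness}: {command}")
            result = subprocess.run(command, shell=True)
            log(fh, f"step {number} exit status {result.returncode}")
            if result.returncode != 0:
                # Stop dead: this is where the back-out plan takes over.
                log(fh, f"FAILED at step {number}; invoke back-out plan")
                sys.exit(result.returncode)
        log(fh, "plan completed")


if __name__ == "__main__":
    main(sys.argv[1])
```

Run it as python3 runbook.py upgrade.plan; the point is not the script but that every step, timing and decision ends up written down.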
Yes, it's boring; it's not "agile" and it is not "clever". But it decreases the chance of disaster. Oh, and do monitor really closely for a full business cycle, whether that is an hour or a month, to detect problems and handle them before they reach disaster proportions (difficult in high-volume transaction processing such as a trading environment, but try).
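The monitoring need not be clever either: measure an error-rate baseline before the change, compare the live rate against it for the whole cycle, and shout early. A toy sketch, again in Python and again my own illustration; the read_error_count callable is a hypothetical stand-in for whatever log or metrics source you actually have:

```python
import time
from typing import Callable


def watch(read_error_count: Callable[[], int],
          baseline_per_minute: float,
          cycle_minutes: int,
          factor: float = 2.0) -> None:
    """Poll a cumulative error counter once a minute for a full cycle.

    read_error_count: hypothetical hook returning a cumulative error
        count from your own logs or metrics system.
    baseline_per_minute: the rate measured before the change went in.
    factor: how far above baseline counts as trouble.
    """
    previous = read_error_count()
    for minute in range(1, cycle_minutes + 1):
        time.sleep(60)
        current = read_error_count()
        per_minute = current - previous
        previous = current
        if per_minute > factor * baseline_per_minute:
            # Shout while the problem is still small, not once it has
            # reached disaster proportions.
            print(f"minute {minute}: {per_minute} errors/min against a "
                  f"baseline of {baseline_per_minute}: investigate now")
```

In a high-volume trading environment you would do this in the monitoring stack at far finer granularity, but the principle, baseline before, compare after, alert early, is the same.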