Double fat tree
I've always just built things out with two big ol' core switches cross-connected, plus a pair of switches in each rack. Each server has one NIC plugged into each ToR switch, and each ToR switch is connected to both core switches. The servers have 2x 1-gig links, the ToR switches are 24x 1-gig + 4x 10-gig, and the cores are 512x 10-gig + 12x 40-gig. Anything can catch fire without the servers ever losing connectivity or even noticing a degradation in speed: with 20 servers in each rack, each server can push at most 2 Gb, so each ToR sees at most 20 Gb of demand against its 40 Gb of uplinks to core.
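A quick sanity check of those numbers, sketched in Python (all figures taken straight from the description above; this is just back-of-envelope arithmetic, not a modeling tool):

```python
# Per-rack bandwidth check for the dual-ToR layout described above.
# All link speeds are in Gb/s, taken from the post.
servers_per_rack = 20
nic_speed = 1          # each server has one 1-gig NIC into each ToR
uplinks_per_tor = 4    # 4x 10-gig uplinks from each ToR to core
uplink_speed = 10

# Worst-case demand one ToR can see: every server saturating its NIC.
downlink_demand = servers_per_rack * nic_speed          # 20 Gb/s
uplink_capacity = uplinks_per_tor * uplink_speed        # 40 Gb/s

print(f"ToR worst-case demand: {downlink_demand} Gb/s")
print(f"ToR uplink capacity:   {uplink_capacity} Gb/s")
print(f"Oversubscription ratio: {downlink_demand / uplink_capacity}:1")
```

The ratio comes out under 1:1, which is why nothing degrades when a link or switch dies: each ToR has headroom to carry its whole rack's traffic alone.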
The core switches are then cross-connected to each other with a 4x 40-gig trunk, and a similar trunk runs to a pair of F5 Viprions with 40-gig interfaces.
A ToR, a core switch, and one of the load balancers could all fail simultaneously and no one would notice a thing, except maybe a few failed connections for a second or two. It may be expensive, but downtime is even more so. It also means we save a crap-ton of money on salaries: we don't need 24/7 coverage to maintain five nines of uptime on the network, because failures can wait until morning to fix.