350,000 queries per second until...
...the query planner screws up. PostgreSQL relies heavily on statistics to decide the order in which conditions should be applied to a query. The goal is to apply the most selective conditions first and merge results in a manner appropriate for the number of rows involved. Most of the time it works well. But then there's the rare case where it goes horribly wrong. Anything could be the cause. Maybe your storage is slower because a RAID is rebuilding. Maybe a temporary bug in your app caused a load glitch. It can be as simple as a customer of your app accidentally searching for information dated in the year 212 rather than 2012.

PostgreSQL now decides that no indexes are valid for the query, or that temporary indexes should be used, or that the whole thing should be a filtered Cartesian product that won't finish before the death of the Universe. It's in those rare cases that you're screwed. You frantically search the Internet for everyone's superstitious tricks: increase the statistics buckets, VACUUM ANALYZE, more autovacuum processes, delete indexes, create indexes, disable plan types, tune the cost estimates, and so on forever. Can you manually specify a query plan to get up and running quickly? Absolutely not! That's forbidden in the PostgreSQL world.

It happens again, and again. Resources eaten by the malfunctioning queries are causing autovacuum to stall. Queries that used to work are failing one after another. You're not screwed anymore. You're dead. PostgreSQL's data is in a state where nothing short of a day of downtime will get the statistics and indexes working again. And after that day of downtime, you still have no guarantee that all the voodoo tuning has made anything better. You can only wait for it to happen again.
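To make the fat-fingered-date scenario concrete, here's a sketch of one plausible way it plays out. The reports table, its columns, and the data distribution are all hypothetical, invented for illustration; only the planner behavior described in the comments is standard PostgreSQL:

```sql
-- Hypothetical: a large reports table, indexed on created_at, where
-- all real data falls between 2010 and 2012.
EXPLAIN
SELECT *
FROM reports r
JOIN report_lines l ON l.report_id = r.id
WHERE r.created_at BETWEEN '0212-01-01' AND '0212-12-31';  -- year 212, not 2012

-- The range sits entirely outside the column's statistics histogram,
-- so the planner estimates roughly one matching row. A one-row
-- estimate licenses plans (nested loops, late filtering) that are
-- catastrophic whenever the estimate is wrong, which is how a single
-- mistyped query can end up with a plan that won't finish before the
-- death of the Universe.
```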
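The superstitious tricks themselves look roughly like this. Every statement below is a real PostgreSQL knob; whether any of them fixes a given planner meltdown is exactly the voodoo in question (the reports table is the same hypothetical one as above):

```sql
-- More statistics buckets for the column the planner misjudges
-- (the default statistics target is 100):
ALTER TABLE reports ALTER COLUMN created_at SET STATISTICS 1000;

-- Re-sample the table and rebuild planner statistics right now:
VACUUM ANALYZE reports;

-- Make autovacuum re-analyze this table far more aggressively:
ALTER TABLE reports SET (autovacuum_analyze_scale_factor = 0.01);

-- Tune cost estimates, e.g. tell the planner random I/O is cheap:
SET random_page_cost = 1.1;

-- The closest thing to "disable plans": turn off a whole plan node
-- type for the current session and hope the planner's second choice
-- is saner:
SET enable_nestloop = off;
```

Note that none of these is a plan hint. They only change the inputs to the same estimator that just betrayed you.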