Reply to post: Minimum specs PCs for testing

Ooh, my machine is SO much faster than yours... Oh, wait, that might be a bit of a problem...

Norman Nescio Silver badge

Minimum specs PCs for testing

Oh yes.

A long time ago, a large financial institution with hundreds of offices in many countries decided it wanted to upgrade an in-house bit of software fundamental to customer service.

The architecture of the network was very centralised - a mahoosive pair of datacentres in one country with expensive frame-relay (I told you it was a long time ago) links, and even more expensive leased lines (yes, very long ago) to those benighted places that didn't have frame-relay yet.

The replacement application was coded up on PCs by the application developers, who were all housed in a multi-storey office in the same city as the datacentres, with a testing server in the same building as the developers.

It had been decided that the first place to get the new application would be the country at the end of the longest, thinnest, most expensive leased lines, as the frame-relay solution would be significantly cheaper there. The application had passed all its functional tests, and a roll-out plan had been agreed. PCs had been loaded up with the new software, notice had been given to the local telco to cancel the leased lines, and the replacement frame-relay links had been ordered and installed. All systems go!

The complaints flooded in. The application was unusable. Customer queues were frighteningly long. Telco suppliers were hauled over the coals for providing connections that were manifestly not working properly.

Except.

The application so lovingly coded by the application developers was written as a 'client-server' style application, with all the data held centrally. The developers had been in the same building as the server - in fact, on the same (fast by then-current standards) LAN. This meant that some fairly standard network-efficiency practices had not been followed - entire database tables were being transmitted from server to client. This worked well with the small tables on the test server on the same LAN, but not with production-sized tables being squirted across thin, long network connections.
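To make that concrete, here's a minimal sketch of the difference - sqlite3 standing in for the real database, with invented table and column names, so treat it as illustration rather than what the institution actually ran:

```python
import sqlite3

# Tiny in-memory database with made-up names. sqlite3 runs in-process,
# so imagine every query result having to cross a thin WAN link instead.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE accounts (id INTEGER PRIMARY KEY, branch TEXT, balance REAL)"
)
conn.executemany(
    "INSERT INTO accounts (branch, balance) VALUES (?, ?)",
    [("LON", 100.0), ("SIN", 250.0), ("SIN", 75.0)],
)

# Anti-pattern: ship the entire table to the client, then filter locally.
# Harmless on the developers' LAN; lethal over a slow leased line.
all_rows = conn.execute("SELECT id, branch, balance FROM accounts").fetchall()
sin_rows = [row for row in all_rows if row[1] == "SIN"]

# Fix: push the filter to the server, so only matching rows cross the wire.
sin_rows = conn.execute(
    "SELECT id, branch, balance FROM accounts WHERE branch = ?", ("SIN",)
).fetchall()
print(sin_rows)
```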

It took 18 months to re-code the application.

Roll-out had to be halted, and the business reverted to the old application. Upgrading the frame-relay links was a non-starter - even if the capacity could be obtained, it was far too expensive even for this financial institution, and it wouldn't have solved the problem anyway, as the network latency also killed performance (a double whammy).
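Some back-of-envelope arithmetic shows why. All the figures below are invented for illustration - I don't have the institution's real numbers - but they're plausible for the era:

```python
# Double-whammy illustration: bandwidth AND latency punish a chatty,
# whole-table design on a long thin link. All figures are assumptions.
rows      = 100_000    # a production-sized table
row_bytes = 200        # average bytes per row on the wire
link_bps  = 128_000    # a 128 kbit/s frame-relay PVC
rtt_s     = 0.3        # 300 ms round trip to the far country

# Whole table in one stream: the cost is pure bandwidth.
bulk_s = rows * row_bytes * 8 / link_bps   # 1250 s, about 21 minutes

# Row-at-a-time fetching: every row pays a full round trip, so even an
# infinitely fat pipe still takes rows * rtt to finish.
chatty_s = rows * rtt_s                    # 30000 s, over 8 hours

print(f"bulk transfer: {bulk_s / 60:.0f} min")
print(f"chatty fetch:  {chatty_s / 3600:.1f} h")
```

A fatter pipe shrinks the first number but does nothing at all for the second.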

So not only should you test applications on minimum-spec PCs, you should also test them on minimum-spec networks (you can get nice 'networks in a box' with configurable latency, capacity and error rate*), so you know your spiffy new applications will work in the boondocks. It's also advisable to use a volume of test data comparable to the production application's, to expose unindexed (i.e. sequential) searches and table joins across the network.
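If you haven't got one of those boxes to hand, you can fake the worst of it in software. Here's a minimal sketch - Python, with made-up addresses, port and delay; a toy, not a substitute for a proper WAN emulator - of a TCP proxy that injects leased-line latency between client and server:

```python
import socket
import threading
import time

LISTEN_PORT = 9000                 # point the client here, not at the server
SERVER_ADDR = ("127.0.0.1", 5432)  # hypothetical backend address
ONE_WAY_DELAY = 0.150              # 150 ms each way, roughly a long leased line
CHUNK = 4096

def pump(src, dst):
    """Copy bytes one way, sleeping to simulate link latency."""
    try:
        while True:
            data = src.recv(CHUNK)
            if not data:
                return
            time.sleep(ONE_WAY_DELAY)  # crude: delays per chunk, not per packet
            dst.sendall(data)
    except OSError:
        pass                           # one side went away; stop pumping
    finally:
        dst.close()

def handle(client):
    server = socket.create_connection(SERVER_ADDR)
    threading.Thread(target=pump, args=(client, server), daemon=True).start()
    threading.Thread(target=pump, args=(server, client), daemon=True).start()

def main():
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("127.0.0.1", LISTEN_PORT))
    listener.listen()
    while True:
        conn, _ = listener.accept()
        handle(conn)

if __name__ == "__main__":
    main()
```

Point your client at port 9000 instead of the real server and watch the queues form.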

NN

*Oddly enough, the large financial institution bought several of these.
