Knowing what I know about the way LSE IT works
I'd be inclined to the view that it was more to do with incompetence than sabotage.
The London Stock Exchange is investigating suspected sabotage after a trading platform was crippled for two hours. The outage hit the Turquoise system yesterday. "Investigations... have revealed that human error was to blame for the disruption that began at 08:23 this morning," the LSE said in a statement. "Preliminary …
...there was an exodus of operational staff not too long ago as MIT started to take over functions, and from what I hear the last two issues have been down to staff incompetence - could be internal spin. I suspect the LSE needs a scapegoat to postpone the main market migration, as they were never going to be ready; knowing MIT, they and their partner IBM have struggled quite badly to get where they are now, and I suspect not even the LSE knows what MIT are actually implementing. The TQ MIT platform has now had two outages since go-live - the first was a "network issue" (a likely story) and now "sabotage". Hmm... the last time I heard "network issue" given as a reason was when Tradelect went live, and that turned out to be a really badly coded stored procedure, amongst other issues with the code.
To be fair to the LSE, they have been aggressive in trying to get on top of their trading system performance at a reasonable cost, and in streamlining the LSE full stop - gone are the days of partners (erm, Accenture) charging £80 million for a trading system. Now they just have cheaper labour out of Sri Lanka (although adding IBM as a partner will be Accenture all over again, just with MIT managing them rather than the LSE) - probably the same mess, mind you. The LSE will get through this, but expect a main market migration towards the end of Q1 next year, together with a couple of outages soon after go-live, as is the norm.
What are the odds the markets have a field day shorting LSE stock again? ;-) I would if I gambled.
So my earlier posts on the subject were correct when I suggested that people stop triumphalising about Linux over a migration that hasn't happened yet and is behind schedule - though it does make economic sense, as they bought the company for 15m when they were paying more than that per year to Accenture. Turquoise failed before in October, so two failures in one month doesn't look great; by comparison, two failures in five years for the old system wasn't that bad. Let's see if and when the new main system goes live, and how it performs. I am sure shoddy implementations are more to blame than the platforms being Windows or Linux. The jury is out on the decision to move. What I have absolutely no doubt about is that SQL Server can run the LSE - and that's with the 2000 version.
The first day after a major migration to a new network, and the first time it is put under any load, it all goes T.U. and the powers that be have announced possible sabotage.
Did it involve an RJ45 plug not meeting up with its female friend?
Did they even test it before they all buggered off home on Sunday?
...there's *always* a serious problem with implementations. Something to do with most of the decisions being made by managers who have no understanding of technicalities, or even reality. (One firm's decision to roll out a new logging system at a huge call centre ON THE 1ST OF JANUARY, at time 00:00:01, is just one example that springs to mind.)
IT? They've *heard* of it.
Not so long ago, a company gave us a case study regarding their superduper, low-latency software and clustered hardware bundle. We took a look and decided it was too much fun not to play with, but we asked to visit the case-study customer, another City-based company. To avoid further embarrassment we'll just call them Company X. The Head of IT at Company X was only too happy to show us around and let us see the hardware (commercial blades), but I wasn't convinced by the wonderful mesh design they had for "enhanced redundancy". The HoIT at Company X said that it had been "fully tested", and to prove it he pulled a cable out of the chassis running the trading app cluster - and lost the whole trading floor! Moral of the story - there's testing and then there's real testing, and problems missed during testing WILL hit you at the worst possible time in production!
In the past, I've recommended cutting one main power feed to test a system that had two independent power feeds and dual redundant cross-linked battery backups. I was told that would not be acceptable since any problems would cause major inconvenience to ongoing systems integration work.
When that happens, you either scream and climb the walls or you nod your head with a funny little smile on your face.
It seems a bit mad to call this sabotage. It has everyone talking about the incident (including mainstream newspapers) and focuses attention on the roll-out. Unless, of course, that is the objective.
Me?
I would be holding something shiny up as a distraction while working on a low-profile, problem-free roll-out. Once it is all done, you shout from the rooftops about the really fast system you have just implemented.