IT 'heroes' saved Maersk from NotPetya with ten-day reinstallation blitz

It's long been known that shipping giant Maersk suffered very badly from 2017's NotPetya malware outbreak. Now the company's chair has detailed just how many systems went down: basically all of them. Speaking on a panel at the World Economic Forum this week, Møller-Maersk chair Jim Hagemann Snabe detailed the awful toll of …

Anonymous Coward

'internet was not designed to support the applications that now rely on it'

10 days and only “a 20 per cent drop in volumes”.

Did he say how it ended up costing them $300m?

7
0
Anonymous Coward

Re: 'internet was not designed to support the applications that now rely on it'

"...in the near future, as automation creates near-total reliance on digital systems, human effort won't be able to help such crises."

It's good that Maersk acknowledges this, since the media often downplay 'automation' risks and ignore how breaches, hacks and malware will factor into things. Overall, I suspect many corporations are still just thinking: "It's OK, I won't get that malware 'AIDS' shit, that only happens to others".

33
0

Re: 'internet was not designed to support the applications that now rely on it'

Seems a reasonable figure

(10/365) * (20/100) * annual turnover

Not got an exact figure for the turnover but revenue was over $35 billion.

Add overtime and other sundries and it's a big bill.
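As a rough sanity check on that back-of-the-envelope estimate, here is a minimal sketch: the revenue figure comes from the comment above, while the overtime/sundries number is purely a placeholder, not anything Maersk has reported.

```python
# Back-of-the-envelope check on the $250m-$300m figure.
annual_revenue = 35_000_000_000   # "over $35 billion", per the comment above
outage_days = 10
volume_drop = 0.20                # "a 20 per cent drop in volumes"

lost_volume = (outage_days / 365) * volume_drop * annual_revenue
print(f"lost volume estimate: ${lost_volume / 1e6:.0f}m")        # roughly $192m

# Overtime, replacement kit, consultants and other sundries: placeholder figure only.
sundries = 100_000_000
print(f"total: ${(lost_volume + sundries) / 1e6:.0f}m")          # lands in the reported range
```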

21
0
Silver badge

Re: 'internet was not designed to support the applications that now rely on it'

So 20% drop = $300m less?

Boggles the mind...

5
0
MMR

Re: 'internet was not designed to support the applications that now rely on it'

Overtime, lots of takeaways, overtime, consulting and probably lots of overtime.

10
0
Silver badge

Re: 'internet was not designed to support the applications that now rely on it'

Imagine signing off on $3.8 million for energy drinks and pizza.

:)

19
0
Silver badge

Re: 'internet was not designed to support the applications that now rely on it'

...and maybe late delivery penalties or ships overstaying their welcome in port due to the manual process.

12
0

This post has been deleted by its author

Anonymous Coward

Re: 'internet was not designed to support the applications that now rely on it'

rmason,

How do you think 'Just Eat' got so big so quickly !!! :) :)

0
0

Re: 'internet was not designed to support the applications that now rely on it'

I know that FedEx has penalty clauses built into contracts where they provide services. I have no way of knowing if they ever paid a 'fine', but I did hear that they had every available employee hand sorting. With a revenue of $50B, a cost of 0.6% would not be surprising.

Maybe if all these companies listened to their security people and patched, they could have saved most of that money.

0
0
Silver badge
Pint

I hope

That all the staff that pulled this off were well rewarded.

Because frankly that's a phenomenal effort that deserves it.

91
0
Silver badge
Pint

Re: I hope

I agree.

Also, I hope for IT's sake that there's a "let's make sure this doesn't happen again" rather than a finger-pointing exercise.

Another one -->

38
0
Silver badge

Re: I hope

They have at least been acknowledged, which is already a huge leap forward from the normal management responses of "why did you not prevent this?" and "that's the IT budget blown for the next 5 years, cancel the refresh program and forget asking for any overtime money".

41
0

Re: I hope

That all the staff that pulled this off were well rewarded.

Because frankly that's a phenomenal effort that deserves it.

It annoys me that companies don't shout about how well their IT departments recover in situations like this. If they'd had a fire etc. they'd be thanking the staff who helped PUBLICLY, but IT is seen as a shadow department: we can't possibly talk about those people...

33
1
Anonymous Coward

Re: I hope

Their main IT support is via IBM, so you can guess the chances of reward were between Buckley's and none... (unless you were a manager).

They had lots of heroes including the local techie who had the presence of mind to turn off one of the DC's once they realised what was happening - that saved their AD.

We'd heard bits and pieces of what had gone on during the recovery (the usual stuff you'd expect: block switches, block ports until each host in a segment was confirmed a clean build, then slowly remove the blocking/site isolation). We didn't, however, see any emails publicly acknowledging their efforts.
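For what it's worth, the staged unblocking described above boils down to a simple invariant: a segment stays isolated until every host in it has been confirmed as a clean rebuild. A minimal sketch of that bookkeeping (the segment names and hosts are invented for illustration, not taken from Maersk):

```python
# Minimal sketch of segment-by-segment unblocking: keep a segment isolated
# until every host in it has been confirmed as a clean rebuild.
segments = {
    "site-a-floor2": {"ws-0142", "ws-0198", "print-07"},
    "site-b-dc":     {"app-11", "app-12", "sql-03"},
}

clean_hosts = set()  # filled in as rebuilds are verified

def confirm_clean(host):
    """Record that a host has been reinstalled and verified."""
    clean_hosts.add(host)

def segments_safe_to_unblock():
    """A segment's switch ports may be re-enabled only when all of its hosts are clean."""
    return [name for name, hosts in segments.items() if hosts <= clean_hosts]

for host in ("ws-0142", "ws-0198", "print-07"):
    confirm_clean(host)

print(segments_safe_to_unblock())  # ['site-a-floor2']
```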

19
0
Gold badge

Re: I hope

"They had lots of heroes including the local techie who had the presence of mind to turn off one of the DC's once they realised what was happening - that saved their AD."

Hmm. Yes. I imagine the rebuild might have taken more than 10 days if it had included typing in a new AD from scratch.

10
0
Silver badge

Re: I hope

...rather than a finger-pointing exercise.

Like WPP did.

8
0
Silver badge

Re: I hope

"cancel the refresh program"

It looks as if the refresh programme was brought forward.

9
0

Re: I hope

I can't tell all the details as it is covered under many agreements, but as a person who was/is involved in this process with our managed service team (which was part of the recovery process as well) I can just say that there is no finger pointing.

There are a lot of constructive changes and a solid plan being implemented to lower the risk (you can't rule it out) of such an incident happening in the future.

It was really exceptional to see how the Maersk team handled it and how all involved parties (Maersk IT, managed service teams, ours included, and external consultants) managed to pull together and recover from it.

It is also really exceptional how openly Maersk is sharing the knowledge about what happened and how they handled it.

We will be covering our lessons learned and experience from this event soon (next week) on our blog -> https://predica.pl/blog/ if you want to check it out.

17
0

Re: I hope

But they do - they are speaking about it at multiple events, and Maersk IT got praised for what they delivered in this case.

3
0

Re: I hope

This wasn't handled through standard IT support. IBM probably had its role there, but it was mostly recovered by the Maersk IT team, consultants from vendors (you can imagine which one), managed service teams like ours, and external consultants.

Lots of people were working there in shifts, spending several days on-site to make it happen.

I can't speak to public e-mails from Maersk, but the Maersk IT team is very open about what they did and how; I saw members of the team speaking about it at a few sessions.

I said this earlier in the thread, but just for information: I can't provide official details, but our managed services team, who were part of this recovery effort and spent a couple of weeks on site, are cooking a blog post with the details, to be published next week at https://predica.pl/blog/ if you want to check it out.

3
0

Re: I hope

The main challenge was to have data from AD to start the recovery.

The main question here is: how many organisations have a procedure for forest recovery, which is mostly a logistics task combined with a good understanding of AD as a service?

My consulting experience from the last 20 years tells me that 99% of organisations don't have one and never thought about it, treating it as something which will never happen.

6
0

Re: I hope

And BTW, a good forest recovery plan has "typing in a new AD from scratch" planned somewhere along the recovery path, in case restoring from backup does not meet the business requirement for recovery time. It was included in every "GOOD" recovery plan I have had a chance to build or read.
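To make that concrete, here is a minimal sketch of the kind of readiness check a forest recovery plan implies (nothing Maersk- or Predica-specific; the inventory data and the seven-day threshold are invented): for each domain, is there at least one domain controller backup recent enough to restore from, or does the plan fall back to rebuilding that domain from scratch?

```python
from datetime import datetime, timedelta

# Hypothetical inventory: when each DC's system-state backup last succeeded, per domain.
FOREST = {
    "corp.example.com": {"DC01": datetime(2018, 1, 20), "DC02": datetime(2018, 1, 24)},
    "emea.example.com": {"DC11": datetime(2017, 11, 2)},
}

MAX_BACKUP_AGE = timedelta(days=7)  # assumed recovery point objective for AD system state

def forest_recovery_readiness(forest, now):
    """Split domains into those restorable from backup and those needing a rebuild from scratch."""
    restorable, rebuild = [], []
    for domain, dc_backups in forest.items():
        newest = max(dc_backups.values())
        (restorable if now - newest <= MAX_BACKUP_AGE else rebuild).append(domain)
    return restorable, rebuild

restorable, rebuild = forest_recovery_readiness(FOREST, now=datetime(2018, 1, 26))
print("restorable per plan:", restorable)            # ['corp.example.com']
print("plan a from-scratch rebuild for:", rebuild)   # ['emea.example.com']
```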

2
0
Anonymous Coward

Worse things happen at sea.

Compared to a ship sinking with possible loss of life, IT isn't their biggest problem. One ship and cargo probably cost more than $300 million.

Their DR plan must be good; comparing the reality against the plan would make a good case study.

0
0

Re: I hope

>Main question here is - how many organisations have procedure for forest recovery

You just made my butthole clench so fast I'm missing a circle of fabric from my chair.

3
0
Bronze badge
Pint

Re: I hope

I've (thankfully) never had to restore AD from a backup, and Bog as my witness, I hope to never need to.

Pulling the plug on a DC was *definitely* a heroic measure - even if it's not an FSMO holder, if it's a global catalog server it can be promoted to one and used to rebuild from.

0
0
Anonymous Coward

Re: I hope

> Annoys me that companies don't shout about how well their IT departments recover in situations like this.

Err... This guy just did. Hence the headline.

0
0
Anonymous Coward

Re: I hope

> consultants from vendors (you can imagine which one) managed service teams like ours and external consultant.

Job well done, Tony. Congratulations. :-)

0
0
Silver badge

Re: I hope

I had to recover an AD site once; it had only one PDC and no BDCs, but luckily there was a recent backup of the PDC (Server2k3 and NTBackup).

The process was to reinstall Server2k3 on a clean server and run NTBackup to restore the AD backup, and we were back in business again. The only niggle was hoping that Windows Activation would go through, as I was not in the mood to faff around with that - but it went through just fine. I then set up a BDC just in case, but I still continue to make backups from the PDC, juuuust in case.

And recovering the forest is no biggie as there are only about 60 users - but a backup and a BDC make things so much easier.

But yes, forest recovery, especially with multiple sites and domains, needs to be addressed. Setting it all up from scratch by hand leads to errors and mistakes if due care is not taken.

0
0
Anonymous Coward

Re: I hope

@ WPP 3 global AD forests, 1000s servers, dozens backup environments, 10000s workstations all encrypted. Networks are still wide open. They will get hit again.

1
0
Anonymous Coward

Re: I hope

"@ WPP 3 global AD forests, 1000s servers, dozens backup environments, 10000s workstations all encrypted. Networks are still wide open. They will get hit again."

I mean, it's not as if WPP hasn't had this type of issue in the past, with the constant stream of new companies/offices being taken on or consolidated into existing offices. Of the networks that were hit, some used to have systems in place to stop this sort of thing; however, they were probably left to rot, or unmanned and unmonitored, while IBM waited to employ someone in India a year or two after the onshore resource was resource-actioned. Or IBM has offered an expensive solution that WPP don't want to buy, and neither side has the expertise to come up with a workable alternative...

And there is some network flow data being sent to QRadar systems in place now to identify issues, but whether they would identify issues fast enough to stop senior IBM managers from making bad decisions is a different story. Unless it was a temporary solution and it's been removed pending agreement on costs.

Still, I'm sure WPP wouldn't want to upset such a valuable client in IBM by making public what the damage actually was.

0
0
Bronze badge

Re: I hope

Sounds like the attitude has not changed much from when I was at Maersk Data (separate, but very close to The Client). Get it sorted, we do the paperwork later. And recognise effort.

0
1
Bronze badge

Re: I hope

My time at Maersk Data (the IT subsidiary until 2005) made me paranoid about backups. They considered making mirrored systems kilometres apart.

0
1
Anonymous Coward

The last person who fixed a malware outbreak...

... got thrown under a bus by the UK government and then arrested by the US Feds. I wouldn't touch the repair and removal of malware without a waiver signed and a lawyer present.

36
4
Silver badge

Re: The last person who fixed a malware outbreak...

"thrown under a bus" as in wasn't told he was going to be arrested. That's not really the same thing and would have been a really big ask of GCHQ.

6
5
Silver badge

Re: The last person who fixed a malware outbreak...

They didn't mind getting him in to consult for them whilst they surely knew about the US intentions.

6
2
Silver badge

How did they manage it? I would love to hear the side of things from an IT techie... it is stunning... the mind just boggles.

16
0

Lots of people working on it on multiple fronts. Lots of logistics - for example, you might be surprised how few USB storage pens are in stock in the shops :).

One important aspect - no panic! Don't do things if you don't fully understand the scope and what hit you.

Besides technical details there are lessons from the management side:

- do not let the people doing the work be bothered by others throwing random questions or issues at them; put some "firewall" in place to handle that

- good communication about where you are with the effort and what the recovery time is, is crucial. A dedicated team for it might help you A LOT.

As I wrote in other replies here, we will be covering it soon on our blog from the perspective of our managed services team, who were on-site for a couple of weeks helping to recover from it.

8
0

This post has been deleted by its author

This is ransomware, so it is a massive quarantine, wipe, reinstall and safe-zone effort.

0
0
Anonymous Coward

Maersk's own experience is that the attack it endured cost it between $250m and $300m.

Now, if only they had spent a fraction of that on preventing it.

22
4
Silver badge

Re: Maersk's own experience is that the attack it endured cost it between $250m and $300m.

I'm sure (!?) that after his "Road to Damascus" moment Mr Snabe also directed that a further $300m be invested in DR facilities and redundant systems while also doubling the systems security budget.

Back on planet Earth.....

4
0

Re: Maersk's own experience is that the attack it endured cost it between $250m and $300m.

Probably not... now that the IT department has un-stealthed itself as still having some staff left, the CEO is asking "how come these people are still on the payroll... why haven't they been outsourced already?"

9
1
Anonymous Coward

Re: Maersk's own experience is that the attack it endured cost it between $250m and $300m.

Depends on who was responsible for letting it happen. If it was the service provider, they will lose the contract at renewal. If it was internal, at least they recovered.

0
0
Bronze badge

Re: Maersk's own experience is that the attack it endured cost it between $250m and $300m.

The company may have changed (I stopped working with the main IT supplier in 2007), but back then it was a company that was aware of how important IT was to them. Doing any work on their servers at headquarters, even when employed by a subsidiary, meant having one of their guys standing next to you.

If they think it makes sense to invest further, they will do it.

- I heard a comment from an external consultant once: "we sat at this meeting, and as we talked, hardware costs went up from 50k to 200k in 20 minutes - and the customer (Maersk) didn't blink".

0
1
Silver badge

You'd think, with companies with large PC deployments, that maybe, just maybe, a light bulb might come on and ping: 'Maybe we need to put a bit more diversity into the client OS?'

I mean, FFS. It's a company. Most of the applications can sit behind a browser now.

12
8
Silver badge

But Office! But Outlook!

12
0
Silver badge

Both of which work in a VM.

Sure, your VM is toast, but off-line backups and the underlying OS are all good, yeah?

5
0

But even for browser apps you need a service to authenticate people, plus some common services like e-mail. With 100k+ employees it is not that simple.

1
0

20% drop going to manual

Beancounter: 'How much are we spending on IT?'

15
0
