When did El Reg start reporting non-news tech stories?
Must do better
Customers of 123 Reg suffered more tricks than treats this morning when a DDoS attack hobbled the registrar's services. Users were confronted by DNS lookup failures until early this afternoon, when 123 Reg said it managed to get the attack "contained" and services restored. Inevitably, the delay provoked customer gripes. …
The 20th century version used to be called smurf.
I "fondly" remember how 1d10tz used to resolve disagreements on IRC by knocking each other out with that. Some academic class B networks on OC3 links could offer amplification factors of up to 20,000. Facing the result at an average ISP was like trying to stop Niagara Falls with basic plumbing tools.
This is just more of the same - what goes around comes around. We are now back to the point where an average script k1dd10t can knock nearly any service provider off the Internet. This is not new - we were there before in 1997-2000. We were there ~5+ years ago at the beginning of DNS amplification attacks. We will be there again later. It is the nature of the beast; pretending that what is happening now is something that has never happened before is simply disingenuous.
I know nothing about ISPs so this is a genuine (probably naïve) question:
Don't ISPs analyse their traffic in some way? I mean, is there not some analytics that goes "Hmmm this IP address is suddenly sending a metric fuck-ton of pings/http gets/DNS lookups per minute, which is not regular for this user. Looks like he's (probably unwittingly) contributing to a DDoS. Cut him off until he phones us"?
Or is that illegal or something because it would mean inspecting the users data? If that's the case, just get GCHQ to do it.
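For what it's worth, the kind of check I'm imagining boils down to something like this toy sketch. The threshold, window and flag_customer() hook are entirely made up for illustration; a real ISP would run this sort of thing against flow records (NetFlow/sFlow) at line rate, not in Python.

from collections import Counter

THRESHOLD_QPS = 500       # arbitrary: what counts as "not regular for this user"
WINDOW_SECONDS = 60

def scan_window(flow_records):
    # flow_records: iterable of (src_ip, packet_count) tuples for one window
    per_source = Counter()
    for src_ip, packets in flow_records:
        per_source[src_ip] += packets
    for src_ip, packets in per_source.items():
        if packets / WINDOW_SECONDS > THRESHOLD_QPS:
            flag_customer(src_ip, packets)

def flag_customer(src_ip, packets):
    # hypothetical action: rate-limit or notify rather than a hard cut-off
    print(f"{src_ip}: {packets} packets in {WINDOW_SECONDS}s - possible DDoS participant")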
They can, but they chose not to.
Just one example.
fail2ban scans log files (e.g. /var/log/apache/error_log) and bans IPs that show malicious signs: too many password failures, probing for exploits, etc. Generally Fail2Ban is then used to update firewall rules to reject those IP addresses for a specified amount of time, although any other arbitrary action (e.g. sending an email) can also be configured. Out of the box, Fail2Ban comes with filters for various services (Apache, Courier, SSH, etc.).
Fail2Ban can reduce the rate of incorrect authentication attempts; however, it cannot eliminate the risk that weak authentication presents. Configure services to use only two-factor or public/private key authentication mechanisms if you really want to protect them.
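For the curious, here is a rough Python sketch of the scan-and-ban idea Fail2Ban implements. This is not its actual code: the log path, regex and threshold are illustrative, the unban-after-bantime logic is omitted, and the iptables call assumes a Linux box where you have root.

import re
import subprocess
from collections import Counter

LOG = "/var/log/auth.log"     # example log; pick one per service you protect
FAILED = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")
MAX_RETRY = 5

failures = Counter()
with open(LOG) as fh:
    for line in fh:
        m = FAILED.search(line)
        if m:
            failures[m.group(1)] += 1

for ip, count in failures.items():
    if count >= MAX_RETRY:
        # Fail2Ban would also schedule an unban after 'bantime'; omitted here
        subprocess.run(["iptables", "-I", "INPUT", "-s", ip, "-j", "DROP"], check=False)
        print(f"banned {ip} after {count} failures")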
A couple of problems with that approach:
* if the user was simply uploading something huge (maybe a first-time backup to Google Photos or something) you'd trip the cutoff.
* the issue isn't one or two endpoints moving a metric fuck-ton of data, it's a metric fuck-ton of endpoints each sending a moderate amount of data (rough sums below).
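Some back-of-envelope sums on that second point, with made-up but plausible numbers, showing why per-endpoint thresholds don't save you:

# illustrative figures only - the shape of the problem is what matters
bots = 100_000                 # compromised PCs / IoT devices
per_bot_mbps = 0.5             # each one well below any "suspicious" threshold
link_capacity_gbps = 10        # victim's (or their provider's) uplink

attack_gbps = bots * per_bot_mbps / 1000
print(f"aggregate attack traffic: {attack_gbps:.0f} Gbit/s "
      f"vs {link_capacity_gbps} Gbit/s of capacity")
# -> 50 Gbit/s vs 10 Gbit/s: no single source stands out, the pipe still melts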
You can analyse the traffic as much as you want, but if the sheer volume of traffic overwhelms the analytic capacity of your firewall or the backplane capacity of a switch or router anywhere in the path of the traffic, you are hosed.
You either need to increase the capacity at the perimeter of your network or play nicely with upstream providers to limit the traffic hitting your network. The problem with this is that however much capacity you put in, skiddies with access to botnets of compromised PCs or millions of shitty IoT devices can probably exceed it.
If you want to mitigate the risk of your DNS provider going titsup, then set up secondary DNS with another supplier. ns.123-reg and ns2.123-reg may have borked, but you could have ns3 and ns4 with a different provider, in a different geographic location.
It's all well and good using a provider, but you still need to take responsibility for your own "setup" and put in place some redundancy.
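If you want to sanity-check your own delegation, something like the snippet below does the job. It assumes dig is installed, uses example.com as a stand-in for your domain, and the provider grouping by parent domain is deliberately crude.

import subprocess

def nameserver_providers(domain):
    # list the delegated nameservers and group them by their parent domain
    out = subprocess.run(["dig", "+short", "NS", domain],
                         capture_output=True, text=True).stdout
    nameservers = [ns.rstrip(".") for ns in out.split()]
    providers = {".".join(ns.split(".")[-2:]) for ns in nameservers}
    return nameservers, providers

ns, providers = nameserver_providers("example.com")
print(ns)
if len(providers) < 2:
    print("all nameservers sit with one provider - a single point of failure")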