Researchers probing a previously unused swath of internet addresses say they've stumbled onto the net's most blighted neighborhoods, with at least four times as much pollution as any they've ever seen. The huge chunk of more than 16.7 million addresses had never before been allocated, and yet the so-called darknet was the …
Mobile internet traffic?
Some 3G providers use 1.2.x.x for transparent HTTP proxies. I wonder if any of that traffic has leaked onto the public Internet.
non-routable addresses
I could never understand why ISPs allow packets addressed to 192.168.x.x and the other non-routable IP ranges onto the network. If the sending address doesn't belong to the connection it came from, then surely the packets should be dropped.
Good idea, doesn't happen enough
Yes, responsible networks do egress filtering as well as ingress filtering. There are plenty of not-so-very-terribly-responsible networks out and about, though.
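For illustration, ingress and egress filtering of the RFC1918 ranges at a border box might look something like this with iptables. This is a minimal sketch, not a complete anti-spoofing setup, and the interface name `eth0` (assumed to be the WAN side) is a placeholder:

```shell
# Egress: drop packets leaving us with a private source address
# (otherwise we'd be one of the polluters).
iptables -A FORWARD -o eth0 -s 10.0.0.0/8     -j DROP
iptables -A FORWARD -o eth0 -s 172.16.0.0/12  -j DROP
iptables -A FORWARD -o eth0 -s 192.168.0.0/16 -j DROP

# Ingress: drop packets arriving from the Internet that claim a
# private source address (they're spoofed or leaked by definition).
iptables -A FORWARD -i eth0 -s 10.0.0.0/8     -j DROP
iptables -A FORWARD -i eth0 -s 172.16.0.0/12  -j DROP
iptables -A FORWARD -i eth0 -s 192.168.0.0/16 -j DROP
```

Real deployments usually do this with reverse-path filtering (`rp_filter`) or provider-side uRPF rather than hand-written rules, but the idea is the same.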
One of the boxes on my network kept trying to connect to 220.127.116.11. That was a financial application.
If you look, it's packets addressed to the REVERSE of those addresses (which are valid and routable, if not currently allocated). I guess it's apps/scripts, or maybe some kind of reverse-DNS cock-up, causing these?
Nothing to do with DNS
The problem is endianness: feeding binary values to the network without first running them through the appropriate "fix it" function. The Berkeley sockets API provides the htons/htonl/ntohs/ntohl family for exactly that. Forgetting them is easy when you develop software on a big-endian machine (IP networking is big-endian too), port it to a little-endian machine (x86, for example), and then fail to properly test the freshly ported code. And, you know, being an average or even freshly minted wage-slave programmer who for one reason or another doesn't understand endianness issues.
Reverse lookups for RFC1918 ranges are a different problem and come about differently: all those silly home NAT "router" boxes only forward DNS requests, so if a machine on such a network does a reverse query for a private address, it ends up at the root servers, which can't answer. Millions of clueless boxes sending millions of queries becomes a real burden. So they gave in and set up global servers that answer those reverse queries with the DNS equivalent of "STFU n00b". At least that relieves the root servers from having to send errors -- error execution paths tend to be less performant, and in the case of DNS the caching requirements are different. At that sort of scale, significantly so.
Just block those IPs
at your edge. There are what, 4 of them?
missing htonl ?
I'm thinking that code ported from big-endian processors like Motorola to little-endian processors like Intel explains the 18.104.22.168, 22.214.171.124 and 126.96.36.199. On big-endian machines you can omit the htonl macro and still get network byte order. There are lots of code examples available on the Internet that do just that.
Whole block polluted?
Surely it's somewhat alarmist to call the whole block of 2^24 addresses "polluted" when the article seems to suggest the garbage traffic is directed almost entirely at a handful of discrete addresses within the range? Discard the affected addresses and utilise the vast remainder.
You think that IPv6 is going to fix this? Think again.
As the saying goes, money talks, bullshit walks.
NO-ONE is going to give up IPv4 because they have so much money riding on it.
IPv6 will instantly make lots of owners "poor" because of the value drop.
Even if current IPv4 owners take on a correspondingly larger amount of IPv6 space (as was bandied about at one time), the value will STILL drop because of the huge increase in space that is unallocated anyway.
Is there any surprise it's moving at a "snail's pace"?
am I the only one?
Why hasn't some miscreant offered to buy 188.8.131.52? Want free proprietary data? Just configure it to listen on all 65535 ports.