Mobile internet traffic?
Some 3G providers use 1.2.x.x for transparent HTTP proxies. I wonder if any of that traffic has leaked onto the public Internet.
Researchers probing a previously unused swath of internet addresses say they've stumbled onto the net's most blighted neighborhoods, with at least four times as much pollution as any they've ever seen. The huge chunk of more than 16.7 million addresses had never before been allocated, and yet the so-called darknet was the …
The problem is with endianness: feeding binary values to the network without first running them through the appropriate "fix it" functions. The Berkeley sockets API provides the htons/htonl/ntohs/ntohl set of functions for that. Forgetting to use them happens easily when you develop software on a big-endian machine (IP networking is big-endian too), then port it to a little-endian machine (x86, for example) and fail to properly test the freshly ported software. And, you know, being an average or even a freshly minted wage-slave programmer who for this reason or that doesn't understand endianness issues.
Reverse lookups for RFC 1918 ranges are a different problem and come about differently: all those silly home-network NAT "router" boxes only forward DNS requests, so if a machine on such a network does a reverse query for a private address, the query ends up at the root servers, which can't answer it. Millions of clueless boxes sending millions of queries become a bit of a burden. So they gave in and set up global servers that answer those reverse queries with the appropriate translation of STFU N00B into DNS. At least that relieves the root servers from having to send errors -- error execution paths tend to be less performant, and in the case of DNS the caching requirements are different. At that sort of scale, significantly so.
I'm thinking that code ported from big-endian processors like Motorola to little-endian processors like Intel explains the 1.1.168.192, 1.0.168.192 and 1.2.168.192 traffic. On big-endian machines you can omit the htonl macro and still get network byte order. There are lots of code examples available on the Internet that do just that.
Surely it's somewhat alarmist to call the whole block of 2^24 addresses "polluted", when the article seems to suggest that the garbage data is entirely directed to a handful of discrete addresses within this range? Discard the affected addresses, and utilise the vast remainder.
As the saying goes, money talks, bullshit walks.
NO-ONE is going to give up IPv4 because they have so much money riding on it.
IPv6 will instantly make lots of owners "poor" because of the value drop.
In the event that current IPv4 owners take on a correspondingly larger amount of IPv6 space (as was bandied about at one time), the value will STILL drop because of the huge increase in already-unallocated space anyway.
Is there any surprise it's moving at a "snail's pace"?