As if that was a good idea
As the title says.
I'm assuming the thing works by some kind of automated process (keyword or link analysis...), as human review of their so-called dark web is clearly beyond the capacity of any company.
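(For the sake of argument, here is a purely hypothetical sketch of what such a keyword-based filter might look like; the categories, keyword lists and threshold are made up, not the actual rules of any real filtering product. The point is how easily it misfires.)

```python
# Hypothetical sketch of a naive keyword-based page filter.
# Categories and keyword lists are invented for illustration only.
BLOCK_LISTS = {
    "hacking": {"exploit", "crack", "keygen"},
    "gambling": {"casino", "poker", "betting"},
}

def classify(page_text):
    """Return the first category whose keywords appear often enough, else None."""
    words = page_text.lower().split()
    for category, keywords in BLOCK_LISTS.items():
        hits = sum(1 for w in words if w in keywords)
        if hits >= 3:  # arbitrary threshold
            return category
    return None

# A security advisory discussing an exploit trips the "hacking" rule
# just as easily as an actual warez site would:
print(classify("advisory: this exploit enables a crack of the keygen scheme, patch now"))
```

Anything along those lines has no notion of context, so a page warning about a problem and a page causing it look identical to it.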
Even if it works most of the time, if a significant share of the mismatches actually locks people out of relevant pages, the damage is potentially much larger than the gain in productivity.
E.g. if I implement a (non-trivial) algorithm in a piece of software but am denied access to a site with information on it, the cost of that is hard to quantify. If the information was just a pointer to a more efficient implementation, maybe the cost is zero, except that we could have annoyed the user less with shorter waiting times, or a competitor could pick up that alternative and outperform us. If the site was about a significant flaw in the method, the block could cause us very direct financial damage once wrong results are presented to the user.
Probably 90% of what the filter responds to is actually utter dross. Not a hard call, as that holds for all of the interweb. Nevertheless, actually useful information is quite often found outside the large/established sites, where it is most likely mixed with content on other topics (pets, children, other weird hobbies of the site author...). Err yes, there actually are blog-like sites which are not completely braindead.
/Considered the Big Brother icon. But BB knows what you are supposed to see, whereas this newfangled filter thing clearly can't.