Illegal material is, well, illegal, and Facebook are supposed to be responsive to notifications of such material. The trouble is that they're not responsive enough. If anything they give the impression of the opposite: they don't take down illegal material quickly or effectively.
Is that their fault? Well, it's a natural consequence of their chosen business model. Free to use, with no effective user identity checks, means it's too easy for the nastier (minority) members of society to hijack the service for their own despicable purposes. And society is steadily moving to a position where that business model is no longer compatible with what it wants.
Facebook have had many, many years to get on top of this type of problem, and they simply haven't done it. Neither they, nor anyone else, can expect to be allowed to run such a naive business model indefinitely. At some point they are inevitably going to be told "you're grown up now, you should know better". If they have not been sufficiently strategically savvy to understand that, that's not our fault. The same goes for Twitter, Google, etc.
Ok, so they're all talking about AI filtering and the like. If they're planning to claim that these measures will be adequate, surely they won't mind being classified as a publisher. If, on the other hand, they're not sufficiently confident in such filtering to accept classification as a publisher, then by their own admission their filtering isn't good enough.
"Instead of targeting Facebook with new laws, as The Times would, we should instead target those who misuse the platform to promote illegal things."
There's no need to target such people. What they're doing is already illegal. Do you really think that people who post such material are going to read the Times, pay attention, and stop what they're doing? I don't think so.
The problem is that Facebook, Twitter and Google's YouTube make it far too easy for such people to enjoy anonymity. Handing over an IP address is cooperation, but it's not very effective cooperation; it takes a lot of work to unwind an IP address to discover a person's identity. Worse, Facebook's WhatsApp even guarantees that it won't aid the police with their inquiries.
If we're to effectively target abusers of the platforms, then the platforms need to know more about who their users actually are. That's got to be something more than an IP address, a made-up user name, and fake details. The trouble is that the only real way a social network can be sure of who a user is is to have had some financial relationship with them (e.g. a completed credit card transaction). That's very much not compatible with the social networks' current business model.
"To make Facebook (and Twitter, and other social networks) liable for users' content would almost certainly lead to the Defamation Act 2013 being substantially amended to remove that important protection. That would have a chilling effect on free speech – ironically, the very effect the act was passed to stop."
I sincerely doubt that it would have a chilling effect on "Free Speech". It would have a chilling effect on illegal material, libellous and abusive posts, and so forth. In the UK and much of the rest of the world we do not have the right to libel or abuse someone in a public forum. Not even in the USA (where you can say anything, but there is no guarantee that it's free of consequences – see the obligatory XKCD). There are laws about libel and abuse.
The problem is that the social networks have made it too easy for people to break such laws and get away with it. Enforcement of these laws is severely hampered by the business model of the social networks. Quite a lot of ordinary people are thoroughly fed up with that state of affairs.
In contrast, a justifiable post / article / publication is, by definition, not libellous or abusive; if you can prove your point, you'd win in court. Ask Ian Hislop.