It's turtles all the way down, with fake news tags being potentially fake.
Doug Hofstadter could relate.
The best way to tackle fake news using artificial intelligence is to go straight to the source, according to a new study. Specifically, smartypants at the Massachusetts Institute of Technology (MIT) in America, Hamad Bin Khalifa University (HBKU) in Qatar, and Sofia University in Bulgaria, trained a classifier to determine how …
Ah, yes, the self-referential eternal golden braid. As another old coot who has happened to be at quite a number of events later reported on (or hidden on a back page) by the media... it's all pretty much fake. If it bleeds it leads; if it doesn't bleed enough, make something up.
I've seen photogs take pix of a "rescue worker hero" giving CPR to a woman who had been dead for 45 minutes in an auto crash I witnessed... and it ran on the front page. Partisan events... don't even go there. There are plenty of outright lies told to sell ads, not even getting into the partisan "make up your own reality" baloney, where the biggest lie is that if the other side is wrong, yours is right - what a load of dingo's kidneys (to quote the other Doug). Since you have no meaningful input into who is chosen to represent a "side", or indeed what "sides" there are... it's been a long time since I've run into anyone who thinks any of the sides represents them at all. Only oligarchs/$BIGCORP are represented anymore.
How about all the partisans who say they want to pick "those other guys' pockets" to give you what you want - until you figure out there ARE NO OTHER GUYS. It's all connected, and as Admiral Ackbar said, "it's a trap".
If you've seen the wheel turn a few times, it's obvious. Sad to watch so many newcomers get fooled and ripped off...
I'll bet I could categorize left or right at least that well just based on the names of the sites, without seeing any content at all! Since an "AI" can't understand articles well enough to determine political slant beyond word analysis, and has no way of ascertaining factual accuracy unless it has already been told by people some facts to compare with, it all seems rather pointless.
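A minimal sketch of what that name-only guessing game might look like. The keyword lists and all site names except Patriots Voice (which the article mentions) are invented for illustration; this is a toy heuristic, not anything resembling the study's actual method:

```python
# Toy name-only slant guesser: match hand-picked buzzwords against the
# site's name, no article content needed. Keyword lists are invented.
LEFT_HINTS = ("progressive", "workers", "collective", "resist")
RIGHT_HINTS = ("patriot", "liberty", "eagle", "freedom")

def guess_slant(site_name: str) -> str:
    """Guess 'left', 'right', or 'unknown' from the site name alone."""
    name = site_name.lower()
    left = sum(hint in name for hint in LEFT_HINTS)
    right = sum(hint in name for hint in RIGHT_HINTS)
    if left > right:
        return "left"
    if right > left:
        return "right"
    return "unknown"

print(guess_slant("Patriots Voice"))           # buzzword in the name
print(guess_slant("Daily Workers Collective")) # likewise
print(guess_slant("Evening Gazette"))          # no buzzwords: unknown
```

Of course a heuristic this crude only catches sites that telegraph their slant in their branding, which is rather the commenter's point.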
Hyper partisans see themselves as being only slightly left or right of center - they believe they are part of the "silent majority" in the country. Any evaluation of a source as "left" or "right" (let alone "center") will not be accepted by those who most need to be slapped upside the head and told they are an extremist. They judge "facts" based on the partisan news sources they choose (which they don't see as biased) and when the AI judges the truth or untruth of those "facts" differently they'll simply claim the AI is biased and reject it.
Hyper partisans see themselves as being only slightly left or right of center - they believe they are part of the "silent majority" in the country.
A good way to approach this is to look at your opinion and try to find views that are more extreme in a variety of directions. If there aren't any, you're probably out on the fringes. If you are lazier, you may find the first chart in this article helpful. If you are both lazy and an extremely right-leaning partisan, you will find the second chart more your cup of tea.
well, when Fox News, which has members of BOTH the left AND the right on panel discussions on various shows, is called "right wing" or "far right", instead of 'fair and balanced', you know that the bias is already built-in. They've got both Geraldo Rivera _AND_ Juan Williams, after all (both liberals), as well as Hannity and Laura Ingraham (both conservatives). Of course you need opposing opinions to have a discussion. And that's the point. And yet, I'm sure the "judgement" would be for Fox News being "far right" or "extreme right" by any bot, because THAT KIND OF BIAS WAS PROGRAMMED IN.
So the main point is: The BIAS of the programmers will be exposed by the bots. Any 'training' algorithm will be flawed because of THEIR BIAS. This easily explains the 60-70% accuracy. Half of the sites labeled "right" may actually be "center" !!! (that would make it about 2/3 correct, if my math is working properly).
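The back-of-the-envelope arithmetic can be made concrete under one hypothetical allocation (all counts invented for illustration): if two-thirds of the sites get a "right" label and half of those are really "center", the overall accuracy does land at about 2/3:

```python
# Hypothetical counts only - one allocation under which "half the
# 'right' labels are really centre" gives roughly 2/3 accuracy.
total = 900
labelled_right = 600                       # suppose 2/3 get a "right" label
right_actually_centre = labelled_right // 2  # half of those are mislabelled
labelled_left = total - labelled_right     # assume the rest are all correct

correct = labelled_left + (labelled_right - right_actually_centre)
accuracy = correct / total
print(round(accuracy, 3))  # about 0.667, i.e. roughly 2/3
```

Other allocations give different numbers, so the "about 2/3" only holds under assumptions like these.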
The problem is made worse by trying to categorise the sources as left, right, reliable etc. These are largely subjective, making the categories biased.
What would be better would be posting a link with each 'fact' to the source of that fact, so readers could click back through the trail and judge for themselves whether they believe the actual source.
The fact Fox News brings a few token liberals that deliberately make arguments filled with holes so the conservatives can "win" does NOT make them balanced. Until a couple years ago, Fox News was definitely right leaning but I wouldn't call it biased. It went completely off the rails though after Trump won the nomination, and seems to become worse as time goes on. Now it is basically the equivalent of what Pravda was in the days of the USSR.
If they figure out how to determine the facts of an event, can they let the criminal justice system know? It seems to find that hard and time-consuming.
But just because the bot's results don't match someone's particular political views and biases (which everyone has) doesn't automatically mean the bot is faulty.
Often it is hard to determine the facts of an event. Government mass surveillance was 'fake news' until Snowden was able to show the evidence, yet other people had been aware of those same programs for years and reported on them much earlier - they just didn't have copies of the documents to prove it. So this bot would have rated them as having low accuracy, because it would have been unable to find supporting evidence.
Also, how widely does this bot search for evidence to support a statement? If I write an article about a tweet Trump has made and he then deletes the tweet, will the bot say my article is inaccurate for lack of evidence? Can the bot perform OCR on screenshots to figure out their contents? What about those images that are a group of screenshots linked by arrows to show how everything links together - can this bot figure those out?
This bot (and even human fact checkers) can only try to cross-check things against open sources; closed sources such as confidential informants cannot be verified. If you automatically consider anything that cannot be verified to be fake, then you destroy investigative journalism.
It would help if they gave their definition of 'fake news'. So far it appears that everyone has a different definition of 'fake news' depending on their social and political views - one person's fake news is another's real news.
Until there is a universally accepted definition of 'fake news', all the rabbiting on about it is pointless.
I could give you some examples that I think everyone would agree are fake news, eg the claim that Peter Jones invested in an automatic money-making machine during a Dragon's Den show that returned some stupidly high % of his investment in only 20 minutes, and the article is mocked-up to look like it is from The Mirror.
Then there's the claim that a pizza shop that doesn't have a basement has a paedophile ring in the basement that it doesn't have.
I suppose the question is where you draw the line, though.
If they define fake news precisely, they might find that almost all sources of general news vomit copious amounts of genuine fake news. Mostly this is done for ratings and to have 'an edge'. Also, how much nominally factual news - like the latest murder in Southwest Atlanta (a notoriously crime-ridden area for decades) - is over-hyped as a major crime wave when it was really two drunks having an argument with knives or guns?
Associated Press, a large not-for-profit news agency, scored high for accuracy. Organisations like Russia Insider, known for their pro-Kremlin stance, were ranked as medium, whereas lesser-known niche websites like Patriots Voice, set up by a Republican husband-and-wife team, were ranked low.
Nope. Not even a smidgen of bias here.
Biting the hand that feeds IT © 1998–2019