WiReD magazine's editor-in-chief Chris Anderson has just seen the end of scientific theories. And it is called Google. This remarkable revelation was triggered by Google's research director Peter Norvig. Speaking at O'Reilly's Emerging Technology Conference in March, Norvig claimed: "All models are wrong, and increasingly you …
I realise there's a risk of bias here....
...but frankly this sounds like someone trying desperately to get some attention in the run-up to the relaunch of a certain magazine in print, rather than an idea that actually has any merit.
For a start, I'm fairly sure that if there were much basis to the notion of wagonloads of data removing the need for silly little things like, oh, models of how interactions happen, there'd be someone in the scientific community pointing it out. Beyond that, isn't it just the same old problem that's always been around with empiricism?
If Anderson is right about this, surely there was little point in him forming his theory, let alone writing an article about it; a quick Google search would have made the idea self-evident.
Arts history major by any chance?
I can only assume that the originators of this idea did art-history degrees.
How can the morass of (dubious) data on the interweb supplant original research into quantum mechanics, high-energy physics, astronomy, medicine, or any other real science?
Sure - you can study Google to get an idea of the mass psychology of crowds. Hari Seldon would be interested, no doubt. I await with interest the first Google-announced Seldon Crisis.
But unless aliens are injecting information into Wikipedia (now that would explain a lot), it's hard to see how original data can be obtained without actual research, or how sense can be made of it without theorising a structure or process behind the data.
With lots of data you can derive pretty accurate empirical models - engineers have done this for millennia, and mostly their structures stay in place if the empirical model is good.
That doesn't help you understand what's actually going on, though. Remember - science is about /understanding/.
Incidentally, your comment re: science having unprovable 'facts' is incorrect. These are predictions, not facts. It's a common misunderstanding among non-scientific people.
The bottom line
There's an awful lot of BS written by so-called philosophers of science about whether the scientific method is valid, and whether we can trust sense experience, and whether there's such a thing as causation.
The bottom line is that science has proven spectacularly successful in producing a coherent model of the way the universe works, using three simple tools: observation, reason, and Occam's Razor, which tells us to always favour the simplest explanation.
When someone like Anton Wylie suggests that quantum mechanics is "wrong" and says that it "caricature[s]... a more complex underlying reality", I wonder whether he actually knows anything about quantum mechanics. Some people may be uneasy with its underlying laws, such as Heisenberg's Uncertainty Principle, but the fact remains that quantum mechanics models the sub-atomic world with astounding accuracy. Branches such as quantum electrodynamics have yielded theoretical predictions which experiments have subsequently verified to accuracies of one part in a trillion. If that's a caricature, it's one that would make Rory Bremner green with envy.
I leave the last word to Randall Munroe, author of the excellent comic strip xkcd:
Science. It works, bitches.
"So the irony is that science, having made its tribal lay with the philosophical school of empiricism over three centuries ago, and seemingly having derived sustenance from it, now has to kill it to go forward. The alternative for scientific theorising, if Anderson is correct, is to be killed by it - by it and Google."
Science doesn't have to kill anything to go forward. Google processing empirical data is "science" just as much as a human processing empirical data is. The result is correlations, just as it always has been. Following which, the human mind will deduce theories or models, which can be tested. I don't see a program that takes a correlation, guesses some possible mechanisms, and then specs out a definitive experiment to decide which it is. That'll be post-strong-AI, not post-Google.
But, it may well be that *in fact* the correlation data alone is enough to develop technologies based on the science, in which case human theorizing may die, but science will live on. This may even be inevitable as we point the light of science at more and more complex systems. Human theorizing may cease to function altogether.
There's just no issue to answer here. Science and human theorizing aren't the same thing. The rebuttal to Anderson is simply to point this out.
Data is meaningless
Without an interpretation to convert it into information.
That is most commonly done by hanging it off a model.
Of course you can argue that given sufficient experimentation there is no need to calculate Planck's constant theoretically...
another armchair philosopher
throughout history real progress has been hampered by influential thinkers (eg aristotle) who basically had no clue about the scientific method and did more harm than good
this latest "revelation" is from a guy who clearly has no clue about mathematics and modelling. With loads of data you can at best build black-box models. Try using one of those to predict tomorrow's weather, never mind understand what's going on. Their extrapolation capacity is next to zero too.
we're quite safe from google just yet!
Models are meaningless
Without some understanding of the effect you are seeing.
It is possible to fit models to almost any data set and get astonishing levels of fit, but unless you understand what effect you are going after, and expect that effect to be persistent into the future, the models are useless for prediction.
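The point is easy to sketch (Python; the numbers and the degree-nine interpolant are made up for illustration). A model with as many parameters as data points fits every observation exactly, yet one step outside the data it bears no resemblance to the simple law that generated it:

```python
import random

random.seed(0)

# Ten noisy observations of a simple underlying law: y = 2x + noise.
xs = [i / 9 for i in range(10)]
ys = [2 * x + random.gauss(0, 0.1) for x in xs]

def interpolate(x):
    """Degree-nine Lagrange interpolant through all ten points:
    the 'astonishing level of fit' a purely data-driven model gives."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        weight = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                weight *= (x - xj) / (xi - xj)
        total += yi * weight
    return total

# In-sample the fit is perfect: every observation is reproduced.
in_sample_error = max(abs(interpolate(x) - y) for x, y in zip(xs, ys))

# One step outside the observed range it is wildly wrong about the
# process (y = 2x) that actually generated the data.
extrapolation_error = abs(interpolate(2.0) - 4.0)
print(in_sample_error, extrapolation_error)
```

The in-sample error is zero by construction, while the extrapolated value misses by orders of magnitude - which is the sense in which a perfect fit can still be useless for prediction.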
Not even wrong
In the immortal words of Wolfgang Pauli, Anderson's latest idea is so bad it's "not even wrong".
I don't see how googlesque prostrations at the altar of the almighty algorithm fulfill Occam's Razor.
The way I see it, the point of a theory is to simplify an observed dataset. Without subjective theorising, you cannot simplify anything other than unnaturally regular datasets.
Any "model" emerging from a truly empirical computer analysis is as complex as the dataset that was put in. A computer's simplification or generalisation will be uninformed, inasmuch as the data it uses to make the generalisation will be the same as the data it is generalising. As such, any flawed dataset would, in generalising, degrade itself -- hence it would be doubly flawed.
So the computer can't simplify -- which is against Occam's Razor.
A person, however, will generally be inspired by an analogous process or a thought experiment. However bizarre that analogy may be, it is a process directed from outside of the statistical set from which the theory is derived.
The human's conclusion may or may not be more accurate than the computer's, but the human's conclusion is a simpler model, which was what ol' Geoff was after.
The algorithm works for Google because correlation is adequate for their purposes: they don't need to change people, they just want to know how to sell stuff to them. A few false positives or negatives won't hurt.
For medicine and many other scientific fields, change is the end goal, and understanding of causation is the only reliable way to effect change. Imagine carrying out nuclear research without theorising. How would you collect the data? Well, I'm not allowed to theorise that splitting atoms is dangerous, so I'll do it at random in random locations. Or not.
Dead vulture because this is not of the low-brow standard that we've come to expect from ElDregs.
re David Harper
You seem to have confused me with Anderson! I don't suggest quantum mechanics is wrong. But if QM is inconsistent with general relativity, it must be false if GR is true. No-one needs to know anything about QM or GR to hold that.
The "Science. It works, bitches." pov is not enough to demolish Anderson's argument. One doesn't need to imagine malicious social policy demons wondering aloud what public good comes from investing in certain types of research - the BBC reports that 40% of UK physicists will shortly be without funds even for conference travel. But what if "what works" is really only an inter-subjective placebo effect - related to, say, constantly rising GDP and incomes? Unless you mean that it's easier for us to demolish the shaman's hut than for him to demolish CERN, MIT, Jodrell Bank. ;-)
The End of Theory?
Unless Chris Anderson has been for a stroll up the mountain and returned with tablets of stone bearing holy writ, can I suggest that his proposition sounds very much like standard statistical methodology. Acquire a massive data set and run it through an algorithmic sieve to obtain results. How does this invalidate or supersede any other method of scientific enquiry?
The Pharmaceutical Industry Beat Google at its Proposed Game
Ever read the fine print on that patent medicine? Beyond the list of side effects there will be a statement that essentially says the manufacturer does not know why the drug is effective. Enough data has simply been collected for it to be deemed "effective" by the FDA (in the US).
We are bombarded with health recommendations based on statistical evidence. There may indeed be correlations such as obesity and high cholesterol, or stress and heart disease, but the underlying cause, if known, is lost in the noise of telling us we are "at risk".
The drug industry, which finances the research, doesn't care why your body decides it must reinforce its arteries with plaque, but will gladly invent a pill to dissolve it when somebody decides it's bad for you.
Having decided to develop a particular treatment, research goes into selecting, refining, and testing the end-product. But because the original cause is unknown, the product is not a cure. But then, cures are bad for business.
It's not that there aren't people diligently investigating blood pressure regulation, but it takes a lot of time, and once understood, there may not be a payout. Compare that with the decreasing cost of collecting and analyzing data, and it doesn't take much imagination to see where business/government will put its money.
So meanwhile, we sheep line up at the witch-doctor's for a potion to treat our complaints, real or socially imposed. As primitive as it is, it's the best we've got.
I concur with everything
...or, to put it in language I can understand without needing a quadruple espresso first, Anderson's datafest can tell you what is happening, but not why. Models are dead? I don't think so.
Ah, so is this the flatlanding that Wilber goes on about, when the "externals" are the only realities considered, and the internal "models" thrown away or considered worthless?
It doesn't strike me that you can follow one line or the other and be balanced; throwing out models for facts alone seems like a dry and possibly ethically challenged viewpoint, as bad as throwing out facts and just living in a world of arbitrary models.
I was wondering if there is any correlation between
(a) someone being a cheerleader for all things “emergent”
(b) the same person talking out of his a**e.
Or maybe I should look for a model-based explanation?
I love the way people with some grounding in science and none in philosophy dismiss important questions that they have no answer to out of hand. As far as I'm concerned that's the biggest problem with science - if the average scientist doesn't have a model or a theory that matches the data, they will discard the data. Doubly so for those working with a commercial agenda that requires certain results.
I don't see any value in the idea that large masses of data can replace the scientific method, no matter how big the data is, but maybe it can sometimes be used to indicate when a theoretical model needs to be discarded because it simply doesn't work, rather than trying to ram square data into round models. It could in fact be a saviour of empirical science rather than its death knell.
If I'm honest I don't see why Hume was involved here at all, but it's nice to see the empiricists getting a bit of a mention in the online press. They are rarely perceived these days.
Next time: Berkeley's theories of perception and God - the first example of a client-server architecture?
Quantum mechanics or General relativity must be 'wrong' if they are incompatible? Oh deary, deary me.
This does not paint an accurate picture. Take Newtonian mechanics: it was not 'wrong'; it was just subsumed into a larger body of physical theory, which is Einstein's GR. A new version that was somehow born out of quantum mechanics would not make GR wrong per se.
I'm not entirely sure non-scientist commentators understand this. Indeed, when they talk about science going from one revolution to another almost like fashion, it sounds as if no new knowledge at all has been gained. Yet here we are typing on computers. Ungrateful swine.
Google end science? My ass.
A huge pile of BS, of course. Maybe the most inept theory I've been given to see in my labgeek life. For a start, statistical correlation only works if you can assign a relevant threshold, which can only be worked out by theoretical means. Even then, complex systems (such as the ones relevant to sociology, health, biology, physics and whatnot) WILL give rise to random apparent associations between independent sets of data, and you need models to work them out. Give me any large set of data and I can prove that cigarettes make you gay or that blond people are genetically prone to becoming drug addicts or that your hated ethnic group of choice is "scientifically" predestined to be a hopeless bunch of murderers, burglars and copyright infringers. You name it, I can prove it.
This kind of bias is the reason why scientists need to build models that can explain the data, because non-explainable correlations are most certainly experimental biases. And the more data you throw at the statistical analysis algorithm, the more "false positive" results you get, so the more you need models.
In short, more data = more need for models. NOT less.
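That scaling is easy to demonstrate in a few lines (Python; all numbers illustrative). Every series below is independent random noise, yet the count of "significant" pairwise correlations grows with the number of series, simply because the number of pairs does:

```python
import random
from itertools import combinations

random.seed(42)

def correlation(a, b):
    """Plain Pearson correlation coefficient of two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

def spurious_hits(n_series, n_points=30, threshold=0.4):
    """Count pairs of *independent* random series whose sample
    correlation nonetheless exceeds the threshold."""
    series = [[random.random() for _ in range(n_points)]
              for _ in range(n_series)]
    return sum(1 for a, b in combinations(series, 2)
               if abs(correlation(a, b)) > threshold)

# More variables means combinatorially more pairs, hence more
# 'discoveries' - all of them meaningless by construction.
print(spurious_hits(10), spurious_hits(50))
```

With 50 series there are 1,225 pairs to test instead of 45, so the pile of spurious "findings" grows with the data - exactly the false-positive problem described above.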
re: re David Harper
"if QM is inconsistent with general relativity, it must be false if GR is true. No-one needs to know anything about QM or GR to hold that."
This is surely a misunderstanding and/or misinterpretation of both QM and GR. QM is not "One Theory", but rather a series of methods and theories developed over time to describe subatomic processes.
GR looks like a single theory since it is generally attributed to a single man (Einstein), but that does not mean that it is an all-encompassing theory or even that all aspects and interpretations of it are correct.
So claiming that either QM is wrong or GR is wrong is a simplistic child's notion of how theories and science work.
Any theory is in essence flawed since it represents a simplification or (if you like) a model of reality. This is necessary because we only have limited resources available to hold the model (our individual brainpower and/or computer processing power).
In the case of QM we may also be hindered by the fact that it is literally not possible to build a model of something extremely small using only things that are much larger.
So scientific theory should not be confused with the world we use it to describe.
Why is it that if I make a clay model of the world, nobody would confuse it with the real thing, but if I make a mathematical model of the world (or an empirical model - or a Google search model), then suddenly some philosopher starts to think that the model IS the world?
QM and GR may or may not be in conflict with each other - at least as we currently interpret them. This does not, however, make one of them wrong - only incomplete or, in the worst case, flawed.
Or maybe we just don't understand these theories well enough - I certainly don't.
So to get back to the quote in the beginning of this rather long diatribe:
If we accept the notion that theories can be "wrong" because they are not perfect or because they contradict each other, then your entire article could be dismissed as "wrong" on the basis of a single inconsistency.
In the real world, you are probably right on some points and wrong on others. If you are mostly right, then your article presents a good theory (or rather, a collection of theories).
Except for natural selection, generally speaking reductionism has the advantage over holism that it describes a direction in which causality is likely to make sense. Correlation is indeed not enough from which to infer causation by itself, but good practical common-sense intuition about the physical world never needed to be rejected by practising physicists.
It's something like the continuing popularity of Platonism among mathematicians.
In any case, the author's reference to the "tribal lay" of science gives the game away: it references a famous Kipling quote, "There are nine and sixty ways of constructing tribal lays, / And every single one of them is right!" So no need to get excited.
A minor point, but pharms do also design drugs on the basis of theory. See http://genome.wellcome.ac.uk/doc_wtd020912.html
Two words: combinatorial complexity
The data requirements for automatically generating any sort of reasonable interpolative model are obscene - just consider the number of runs vs precision in the pin-dropping Monte Carlo method of finding pi. 10000 samples will get you a value of ~3.17, and you're well into the millions before you get even as close as 22/7.
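For anyone who wants to see that scaling, the pin-dropping estimate is a few lines of Python (an illustrative sketch; individual runs vary):

```python
import random

random.seed(1)

def estimate_pi(samples):
    """Monte Carlo pi: the fraction of random points in the unit
    square that land inside the quarter circle, times four."""
    inside = sum(1 for _ in range(samples)
                 if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4 * inside / samples

# The error shrinks only as 1/sqrt(samples), so each extra digit of
# precision costs roughly a hundred times more data.
for n in (100, 10_000, 1_000_000):
    print(n, estimate_pi(n))
```

Getting reliably closer to pi than 22/7 (an error of about 0.0013) takes on the order of a million samples - the combinatorial-cost point being made above.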
As others have noted, extrapolating from such a black-box is almost impossible, even given infinite data - if you don't have an idea of what it's doing, or an extremely good fit to a trivial model (e.g. linear), you're stuffed as soon as you go out of the range of inputs previously seen.
Data mining is not, and never has been, a replacement for inference. It might throw up some unusual correlations to investigate, and it might simplify testing of a hypothesis, but it can't replace the crucial step in the middle: understanding.
QM and GR
Anton, I'm sorry for putting Anderson's words in your mouth.
However ... you did say "But if QM is inconsistent with general relativity, it must be false if GR is true. No-one needs to know anything about QM or GR to hold that."
No, no, thrice no.
Quantum mechanics and general relativity are both accurate and consistent models in their specific domains.
We may currently be unable to formulate a model which applies GR at a quantum level, but physics has faced similar problems in the past and found a solution. GR itself was the solution to shortcomings in Newtonian mechanics, but we can still use Newton's laws to send a spacecraft a billion miles across the Solar System with sufficient accuracy to fly it through a narrow gap in the rings of Saturn. Even a scientific model that has supposedly been superseded can still be useful.
As to the "Science. It works, bitches" point of view, it applies at several levels.
At a pragmatic, everyday level, it means that the electronics inside your computer and the laser in your DVD player were made possible through our understanding of quantum mechanics, whilst both quantum mechanics and general relativity provide the scientific basis for the satnav system in your car.
At the level of scientists at CERN and Jodrell Bank, the fact that science allows them to model the universe consistently and accurately from the sub-atomic scale all the way up to clusters of galaxies justifies the vast amounts of money, time and effort that go into these major publicly-funded science projects. Do you suppose that the government would hand over £2bn to British physicists and astronomers each year if it thought that they were no better than woo-merchants such as homeopaths?
As someone who holds two degrees in physics I would like to point out that this is bollocks of the highest order
"Either they are wrong, for example they "caricature... a more complex underlying reality" (quantum mechanics),"
Bullshit! Show me this underlying reality. The only thing QM says is that if you do X then the needle on your measuring device will move by Y units. The "underlying reality" idea was created by people who could not accept QM - which is interesting given that QM is verified by actual data whereas the "underlying reality" idea has no evidence whatsoever.
"Anderson has recognized that when instruments take the measurements for science, human perception is no longer relevant within the Berkelian epistemology. "
And who reads the instrument? That's right, a human. In fact, this is often used along with Schrodinger's cat to illustrate how the observer is fundamentally a part of the experimental set-up - i.e. if the cat is both dead and alive until the observer opens the box, then the observer has both opened and not opened the box until you walk into the lab to decide which it is.
"You seem to have confused me with Anderson! I don't suggest quantum mechanics is wrong. But if QM is inconsistent with general relativity, it must be false if GR is true. No-one needs to know anything about QM or GR to hold that."
More shite. Neither is wrong they are simply not Grand Unified Theories.
Besides, data will never replace theory until the human race stops asking why and how that data came to be. Anderson would be best off locked in a room with Ray Kurzweil and fired into the sun.
"But if QM is inconsistent with general relativity, it must be false if GR is true."
Well, seeing as neither fully describes what happens to things across the board, they are BOTH "false" at some level. If that weren't the case, one would have been discarded to join Newton's theory as a good but superseded attempt at describing how things work.
This also glosses over the fact that physicists are well aware of the limitations of both these theories and are working hard to find better ones.
In any case, yes, it is very handy to have a large heap of data, to be able to ask a question and then get an answer back. However, asking specific questions only leads to narrow answers to those particular questions. It's all well and good asking "I have a cart of mass 1000kg being pushed by a force of 100N; how fast did the carts we witnessed in the past accelerate?" and getting the answer "They averaged 0.1 m/s^2", but it still requires another leap of thought (and a few more good questions) to get from this question to F=ma.
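The gap between the two kinds of answer fits in a few lines (Python; the cart data is invented for illustration):

```python
# Observed carts: (force in N, mass in kg) -> measured acceleration.
observations = {
    (100, 1000): 0.1,
    (200, 1000): 0.2,
    (100, 500): 0.2,
    (300, 1500): 0.2,
}

def data_only(force, mass):
    """Correlation-style answer: look up identical past cases."""
    return observations.get((force, mass))  # None if never observed

def newton(force, mass):
    """The theorised model F = ma, i.e. a = F / m: one extra leap of
    thought, and it answers questions no cart has ever been asked."""
    return force / mass

print(data_only(100, 1000))   # 0.1: seen before
print(data_only(250, 2000))   # None: the database shrugs
print(newton(250, 2000))      # 0.125: the model extrapolates
```

The lookup table is the heap of data; the one-line function is the theory, and only the theory answers the question about a cart nobody ever measured.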
...but riddled with bollocks. I think you can cut through a lot of this by being sensible, leaving room for uncertainty, and not trying to divide everything up into absolute categories like 'true' and 'false'. Realising that every theory, model and philosophical tool, even fundamental logic, appears to be at least a little bit broken, and accepting it, would probably be a good start.
Models and theories are useful in as much as they allow predictions to be made; they will be applicable to a certain range of scale and conditions. What's the problem with that?
Also, we do this stuff because it's interesting; where's the fun in throwing everything into a giant database and seeing what correlations are thrown up? If I wanted to do that, I would have gone into insurance...
Correlation does not mean causality
I work in a scientific company that also does huge amounts of data mining. If I had a nickel for every time someone trips over some unlinked items correlating, I'd be rich. e.g. We have a very high correlation between the number of cars in our parking lot in New England and the price one of our Chinese suppliers charges for one of our raw materials.
If Google is claiming people should ignore the underlying mechanisms (and their ability to predict things) and only use hind-sighted data correlation, then maybe the SEC needs to look at Google's books to see how Google is cooking them.
I haven't read the whole thing yet, but it's the best article I've read to date on the issue. A couple of points I'd like to add:
* good description of causation, but no mention of correlation
* there's a difference between being valid and being verified. A tautology is valid in logic, but it is not in any sense verified. A tautology is essentially meaningless.
metaphysics is dead?
I don't think so...
Anderson is a bit like a mole
Always digging around for tasty grubs and can't see worth a penny.
He's forgotten that we'd no sooner toss the scientific method out than willingly put out our own two eyes--because it has fundamentally proven itself consistent over the years. Even multiversal aliens will develop the same method in their own timeframe, despite (possibly) different sensory inputs/dimensions.
Magnus is correct about data--Google data is only useful in datamining. We present a theory, we run some numbers, and then let the compy number-crunch trillions of bits that would've taken us at most several *years* in physical space to come up with *some* result. Then we look at the end product, arbitrate whether or not it fits the theory, and do a little detailed back-checking.
In the end, *we* decide if a theory is valid. Not Google. Or Anderson, for that matter.
(Hume, though, is a perfectly good fellow and has plenty of good points.)
Possibly there is a "model" for this but I've derived my conclusions off my "extensive" data-set obtained from the pharmacy and news media.
If Google (or your program) were god and its data set were the universe, then sure, everything checks out.
A Million Monkeys Typing a Million Years
I can't help but wonder whether Google et al., through their data-mining efforts, won't be just as likely to discover the complete works of Shakespeare as the Unified Theory of Life, the Universe, and Everything.
Oh yes, they have millions of PEOPLE typing -- so it probably will be the complete works of Joan Collins.
Postmodernism is great value
One of the difficulties with philosophy courses has been that it's rather difficult to use multiple choice questions in tests. By introducing postmodernism as the main thrust of the syllabus, however, it has become possible to mark essays and assignments automatically with an appropriate consonance word count plus MS grammar checker.
This obviously saves a good deal of time and effort which would otherwise need to be spent thinking about the issues and generally has a more predictable outcome.
GR, QM, and scientific wrongness
To say that a theory is 'wrong' implies that there is another theory that is 'right' – a sort of absolute truth. But even if we identify this theory, we will not necessarily be able to prove its correctness, so the terms 'right' and 'wrong' aren't meaningful when attached to scientific theories.
What we can speak of is the accuracy of a theory – how closely it matches our observations. Even if one theory is not the most accurate under all circumstances, it may still be good enough to be useful, especially if the calculations are simpler. Newtonian mechanics is less accurate than relativity when describing the motion of galaxies, but it's sufficiently close to be used for almost all human-scale engineering. The curvature of the Earth is small enough that it can be considered flat when drawing a street map of a city. For planning airplane routes, modeling it as a perfect sphere is adequate. If you need precise control of a satellite's orbit, you may need to treat it as an oblate spheroid with local variations in gravity.
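The map hierarchy can be made concrete (a Python sketch; the coordinates are illustrative and the oblate-spheroid refinement is left out). Across a city the flat model and the spherical model agree to within metres; across an ocean the flat model is off by hundreds of kilometres:

```python
import math

R = 6371.0  # mean Earth radius, km

def great_circle(lat1, lon1, lat2, lon2):
    """Spherical-Earth (haversine) distance in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = p2 - p1
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def flat_map(lat1, lon1, lat2, lon2):
    """Flat-Earth approximation: a simple equirectangular projection."""
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return R * math.hypot(x, y)

city = (51.50, -0.10, 51.60, -0.10)    # two points roughly in London
ocean = (51.50, -0.10, 40.70, -74.00)  # roughly London to New York

print(great_circle(*city), flat_map(*city))    # agree closely
print(great_circle(*ocean), flat_map(*ocean))  # differ materially
```

Neither model is 'right'; each is accurate enough at its own scale, which is exactly the point.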
Given a sufficiently large amount of data, it isn't hard to find correlations. As a few people have mentioned, you can also find plenty of false positives, things that happen at the same time but have no relationship whatsoever. The only way to tell the difference is by understanding causality. Computers are great for telling us what has happened, but cannot explain why.
The theory to end all theories
Quite an oxymoron, isn't it?
Correlation vs. models?
If these "theories" about doing everything by correlation were true, then we would conclude that:
1 - Storks *do* bring babies to the world,
2 - Global Warming is caused by Pirates.
Why? Because if you "only analyze the data you got", you'll find a correlation between ups and downs of population and stork numbers during WWII, and you'll see that the number of pirates at sea has dropped while the global mean temperature has gone up.
re QM, GR and consistency
If A is inconsistent with B, and if A, then not-B. This is a theorem of logic, in the intended sense of logical consistency, under the usual interpretation of A and B as descriptive statements, and having the possible values "is true" or "is false". Interpretations of the propositional calculus with the values "is right" and "is wrong" would be novel and interesting, perhaps as a way of inferring values from facts.
Schrodinger's cat exemplified the problem - it's only relevant to QM.
re "when instruments take the measurements for science, human perception is no longer relevant within the Berkelian epistemology." Steve writes "And who reads the instrument? That's right, a human." One paper by the (very pro-physics) logical positivist philosopher Carnap has an appendix in which he speculates on science being done by machines - it dates from the 1950s (I don't have the reference to hand), and post-dates Schrodinger's cat. Your Room 101 would need to hold more than Anderson and Kurzweil.
YES YES YES
pastafarianism and Foundation in the same thread.... I nominate this, "Thread of the Day"
The most telling phrase in the article
... is "Truth is what we can all agree on."
Herein lies the first problem. Until proved otherwise, it was commonly held that the earth was flat and the centre of the universe. Unfortunately, this did not make it true; it just meant that (despite the empirical evidence of things falling downwards and the stars moving across the sky) everyone was wrong. Science keeps revising its opinion of reality. That's OK, but it should relabel its findings "our current understanding of the subject" rather than "truth".
There is an unholy grouping of mathematics (where things can be proved as truth, but don't relate that much to the real world), theory-experiment-observation-conclusion science (which relates to the real world but can be rather difficult to apply to large-scale problems) and statistical analysis (which skips the whole experiment/control-set part and can be used to show almost anything). It's all lumped together under "science", but the assumptions, methodologies and acceptable practices of one area cannot be readily applied to the others.
The second problem is that, especially when statistical analysis is the main form of enquiry, there are so many ways that an investigation can be misinterpreted. A prime example of this would be the current climate debate. Factors such as the base-comparison year, "normal" climate variation, and how the stats are collected all contribute to the confusion. This is due to our understanding being derived from statistical models rather than the ability to comprehend how the incredibly complex system of global weather really works. This appears to be the main thrust of the article: the abandonment of an understanding of "how things work" in favour of "we see this particular statistically relevant correlation."
Then there is the "googlisation" of things. There is a vast echo-chamber where truth is determined by a top-10 PageRank(tm) statistical analysis not of the problem domain, but of the textual content. When truth is defined as "what we all agree on", the word no longer means, "that which reflects reality". With enough links to my weblog (or research paper) I can determine what is truth because I can influence what is read. (see http://www.theregister.co.uk/2003/04/03/antiwar_slogan_coined_repurposed/). This can be seen not just on the internet but in all mass-media. Politicians repeat the same sound-bite, the same mantra over and over again because commonly heard phrases are commonly accepted as true.
It must be true if lots of people are saying it!
Excellent point... but wrong conclusion
The amount of data will fundamentally change the "scientific method", in the narrow sense of the word.
We'll stop mucking about with dumb clinical trials and ridiculous social experiments. Absolutely. And it will profoundly change those fields.
But the data and the math will merely make fuzzy bullshit fields like psychology more similar to very concrete fields like physics. Physics has long been dominated by data and math.
Models will absolutely have a place. For starters, data will be far from enough to make predictions. (A physicist can build an airflow simulation for airplanes because of the mathematical models his data has let him construct. It is impossible to build such a simulator using a raw database.) For enders, statisticism hardly satisfies humans, who will always apply their heads to figure out the models behind things they see. The models which, perversely, are not arbitrary but essentially embedded in reality itself.
Then a guy observes that the births in northern France are related to the number of storks... Yeah, right.
Don't feed the troll
I don't even know where to begin with just how wrong this guy is.
There is a bit of an intellectual turf war going on at the moment - check Waterstones and you'll see a growing range of pop-science books written by researchers from the 'hard' sciences covering areas once considered to be beyond the reach of straightforward modelling techniques. These guys are applying their empirically based theories of physical systems to human-based systems that were, until recently, the sole territory of social scientists, political scientists and economists. It's not surprising that the intellectually woolly inhabitants of web journalism land (very few real scientists in that community) are getting pissed off (there's plenty of this sort of knee-jerk anti-science thinking on El Reg too).
This 'we don't need science' thing is a response to that direct encroachment of physics and chemistry into the 'soft' sciences and the humanities. The Islamic community went through a similar thing back at the start of the last millennium when they decided that the information garnered from studying religious texts gave a better understanding of reality than philosophy. This was very handy for religious leaders, and it cheered up no end the more religiously minded section of the populace who had got tired of being undermined by rational thinking and pesky logic.
The worrying thing is that the move away from science as a means of measuring the value of data is an easy way for politicians and policy makers to avoid having to demonstrate that their dogmatically motivated beliefs are actually correct in any meaningful way. In a society ruled by the principle of 'believe what you want' it is also convenient for the population to be able to hold onto their prejudices without fear of being proved wrong by well-conducted double-blind testing of social policies.
Philosophers should study mathematics
To better understand things like hypotheses, postulates, theorems...
There are interesting philosophical concepts in basic mathematics that can easily be proven with the use of models and can't be demonstrated with empirical data... Just one to start with:
What's infinity? There are infinitely many natural numbers (1, 2, 3, ...), which is quite a lot, since it's an infinite number...
Yet even that infinity is not enough to count the real numbers, like those between 1 and 2. Easily demonstrated in two minutes with a pen and paper to your favourite Paris Hilton type of scientist, thanks to Cantor.
So, in two minutes with a pen, paper and a model you can demonstrate a simple concept (one that can't be proven by empirical means): that there are different "levels / meanings" of infinity...
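For the curious, the two-minute pen-and-paper demonstration alluded to above is Cantor's diagonal argument; a sketch:

```latex
Suppose the reals in $(0,1)$ could be listed $r_1, r_2, r_3, \dots$, with
decimal expansions $r_n = 0.d_{n1}d_{n2}d_{n3}\dots$. Define a new number
\[
x = 0.e_1 e_2 e_3 \dots, \qquad
e_n =
\begin{cases}
5 & \text{if } d_{nn} \neq 5,\\
6 & \text{if } d_{nn} = 5.
\end{cases}
\]
Then $x$ differs from each $r_n$ in the $n$th digit, so $x$ appears nowhere in
the list. No listing of the reals by the naturals can be complete, hence
$|\mathbb{R}| > |\mathbb{N}|$: two genuinely different sizes of infinity.
\]
```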
I'd like to see how long it would take a philosopher to put this into practice: defining infinity... And we're not even talking about an expandable infinite model...
Well, human language and what we could call empirical common sense are great up to a point, but if you want to make things work, you'd better leave philosophy for the boring long winter Sunday evenings, and start to work on mathematics and models.
If I dared, I would say that philosophy is some sort of cerebral dysentery : a shitload of ideas, but they're all crap.
P.S.: Mathematics and models can demonstrate that there is no possible model for that old "squaring the circle" quest. An interesting paradox, one could say: a model to prove that you can't find a model for something apparently as easy as changing the shape of a very well-defined surface...
Re: Philosophers should study mathematics
"...but if you want to make things work, you better leave philosophy for the boring long winter sunday evenings, and start to work on mathematics and models"
When I was learning stats, we were given an example of how dangerous it is to rely on correlation. Basically, this involved taking a load of random numbers, applying a couple of transformations, and then doing a correlation with lunar cycles, thus "proving" that unicorn breeding cycles are determined by the moon. The point is that you can almost always find correlations between different data sets if you try. Correlations on their own don't prove anything - you need theory behind them to explain why there may be a correlation, and you generally need some other stats to test the theory.
For example, it has long been known that there is a correlation between snowshoe hare and lynx population numbers in Canada - the 11-year cycle. The correlation itself is quite striking, but it is not enough to explain why the two are linked. Does the lynx drive the hare cycle, or the other way round? I did a statistical analysis of this some time ago, using principal components analysis (PCA). This showed that in fact it is the hare that drives the cycle - the lynx population grows and shrinks in response to the hare population changes. The hare population is primarily driven by other factors, such as changes in vegetation. Applying the PCA, and then interpreting the results of it, was not something I could do without the underlying theory and models of population dynamics.
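The direction-of-causation point can be illustrated without the real Hudson's Bay data. The sketch below uses invented numbers and simple lagged cross-correlation rather than the PCA described above: if lynx numbers merely track hare numbers, the correlation should peak when the lynx series is shifted a few years *after* the hare series.

```python
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

T = 220  # years of toy data
# Hare numbers follow an externally driven ~11-year cycle (vegetation, say).
hare = [50 + 30 * math.sin(2 * math.pi * t / 11) for t in range(T)]
# Lynx numbers track last year's hares, with some population inertia.
lynx = [10.0]
for t in range(1, T):
    lynx.append(0.7 * lynx[-1] + 0.3 * (hare[t - 1] / 5))

def lag_corr(k):
    # Correlation of hare numbers now with lynx numbers k years later.
    return pearson(hare[:T - k], lynx[k:])

best_lag = max(range(5), key=lag_corr)
print(best_lag)  # positive: the lynx follows the hare, not the other way round
```

The raw same-year correlation is "striking" here too, but only the lag structure (or a proper multivariate analysis like the PCA mentioned above) hints at which population is driving which - and interpreting even that still needs population-dynamics theory.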
So Anderson is talking bollocks. It seems to me this is another part of the concerted campaign against science that seems to be going on at the moment. It is very fashionable to claim that scientists are mad, bad or only in it for the money. Now we are being told there is no philosophical basis for science itself. Yeah right.
On the First Principles of Government
"Nothing appears more surprising to those, who consider human affairs with a philosophical eye, than the easiness with which the many are governed by the few; and the implicit submission, with which men resign their own sentiments and passions to those of their rulers. When we enquire by what means this wonder is effected, we shall find, that, as FORCE is always on the side of the governed, the governors have nothing to support them but opinion. It is therefore, on opinion only that government is founded; and this maxim extends to the most despotic and most military governments, as well as to the most free and most popular."
cooking with potential energy
Data without theory puts me in mind of an article from the late lamented Journal of Irreproducible Results about the subject line. The "researchers" repeatedly threw a frozen turkey off the roof and measured its temp. The data points perfectly fit an exponential curve, asymptotic to ambient air temp. But they decided to do a least squares best fit straight line, and came to the conclusion that if the experiment continued long enough, the bird would eventually reach a temperature of 350 degrees F. The energy crisis solved!
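The turkey joke is a perfect theory-free extrapolation, and it reproduces in a few lines. A sketch with hypothetical numbers (start at 0 °F, room at 70 °F, a made-up cooling constant): the data follow Newton's law of warming, but a straight-line least-squares fit cheerfully predicts a roasting temperature the real curve can never reach.

```python
import math

AMBIENT, START, K = 70.0, 0.0, 0.05  # deg F, deg F, made-up warming constant
ts = list(range(30))
# Newton's law: the bird approaches ambient temperature asymptotically.
temps = [AMBIENT + (START - AMBIENT) * math.exp(-K * t) for t in ts]

# Ordinary least-squares straight line through the same data.
n = len(ts)
mt, mT = sum(ts) / n, sum(temps) / n
slope = (sum((t - mt) * (T - mT) for t, T in zip(ts, temps))
         / sum((t - mt) ** 2 for t in ts))
intercept = mT - slope * mt

# The straight line happily "predicts" when the turkey hits 350 deg F...
t_roast = (350.0 - intercept) / slope
print(round(t_roast))  # a finite answer, though the true curve never passes 70 deg F
```

Both models fit the observed points tolerably well; only the one with the right physics behind it extrapolates sensibly. Data without theory, indeed.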
Oh, and isn't it a *lack* of pirates that causes global warming? I.e. they prevent it, presumably because pirates are so cool.