... or use the issue-tracking system or any of the other added-value paraphernalia GH surrounds git with
This is a good trick you are playing.
Obviously it is not possible to 'validate a model' in the sense either of proving that the program is correct (ie that it implements the algorithms that it claims to), which is impractical for almost all programs, or of proving that the algorithms themselves correctly simulate the climate, since (a) we know they don't, certainly not to very fine detail, (b) we don't even know all the processes that make up the climate yet, and (c) even if (a) and (b) were not problems, there is SDIC (sensitive dependence on initial conditions).
Finally, we can't even do the kind of empirical testing with climate models that we can (or could) do with CFD simulations of nuclear weapons, for instance, since we don't get to repeatedly build an earth and set it going to see if it agrees with what our model predicts.
So, as I said, it's a clever trick to ask for something that you know can't be done and make it sound reasonable. Not bad for someone living under a bridge.
But there are, of course, things that can be done, apart from the standard process of incorporating new and computationally expensive processes in the models and running everything on finer scales as computer power increases as well as fixing bugs and improving algorithms.
You can run your climate model for a period in the past, and see how well it agrees with what happened. People do this, a lot, and use the results to correct models.
You can take multiple climate models and run them with the same input data and compare their results. People do this: they're called Model Intercomparison Projects (MIPs).
You can run a single model with a varied set of inputs (an ensemble) and see how its predictions vary. You can do this in a MIP as well. People do this.
They still don't need bomb-proof communications: at least some (and I would suspect all, albeit with no evidence) of the Paris attackers' communications were sent in the clear.
I think that the principal worry here is that we are governed by stupid people with very bad educations.
For the glow-in-the-dark question, the violet laser will have higher-energy photons and so will be kicking whatever it is that needs to be kicked into an excited energy level. The other lasers will then either give whatever it is enough of a kick that it falls back rapidly or, perhaps, kick it into some yet other excited energy level where it is much more nearly stable, so appears dark.
There are almost certainly more ARM cores in your PC than Intel ones.
Yes, and to be clear this is the sense I meant by 'chaos': deterministic but displaying SDIC. And the interesting thing is that climate does not display SDIC: you can throw any old thing (within reason) into the initial conditions and it all converges, and people test this (for models, not for worlds).
That's right, of course. And if you can take a suitable average over that variation, and that average does not itself thrash about, then the average is not chaotic. And that average is what climate is.
The specific thing you care about is that whatever averages you define as 'climate' do not display sensitive dependence on initial conditions, and it appears that indeed they do not.
I understand what climate is. The point is that the various means over weather &c which are climate do not seem to exhibit chaotic behaviour. They do exhibit various instabilities (ice ages) but I don't think there is evidence of chaos. In particular there is no sensitive dependence on initial conditions as far as I know (and indeed you can check this in models by running ensembles, and people do this).
(And note: by 'chaos' I mean 'deterministic chaos' in the formal sense, not just 'variability'.)
Just to be clear that I was talking about deterministic chaos, as you are I think.
And I will revise what I said slightly: climate may indeed be chaotic, but there are bounds to the behaviour (you get ice ages, but you don't get Venus), and the chaos, if it is there, is there only on very long timescales. So my point is that it turns out that climate is indeed usefully predictable over timescales we care about.
No, things that are predictable are not chaotic, almost by definition. And if they are chaotic then knowing more initial data helps only a tiny amount. Weather, for instance, is chaotic, and is therefore essentially unpredictable beyond a fairly short time, no matter how much computational resource you have. Climate is not chaotic, although I am not sure if it is known why.
The interesting thing is that climate isn't chaotic. Weather, of course, is, but climate isn't. It's clear that it's not chaotic mostly because we're here to measure it: if it was chaotic then it would lash around all the time because of SDIC, and it's pretty unlikely that a planet with a climate like that would support the evolution of intelligent life. Life, I think, could arise, but a planet with a chaotic climate isn't going to support farming, for example.
That tells us that it's not chaotic (as, of course, do climate records of various kinds and the great success of models at predicting climate, buffoons like Lewis Page notwithstanding), but not why: it's a complicated nonlinear system so there almost certainly will be chaotic regions in the phase space, but we're not in one. Something people worry about is that we could be pushing things into one, but this seems unduly alarmist to me.
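A toy illustration of the distinction between chaotic trajectories and well-behaved averages (emphatically not a climate model; the Lorenz '63 system and the parameter values are the standard textbook ones): two trajectories started one part in a billion apart diverge completely, yet their long-run averages barely differ.

```python
# Lorenz '63: trajectories show SDIC, but long-run averages of a
# state variable are far less sensitive to the initial conditions.
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system one step with classical RK4."""
    def f(s):
        x, y, z = s
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)
    k1 = f(state)
    k2 = f(tuple(s + dt / 2 * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + dt / 2 * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(
        s + dt / 6 * (a + 2 * b + 2 * c + d)
        for s, a, b, c, d in zip(state, k1, k2, k3, k4)
    )

state_a = (1.0, 1.0, 1.0)
state_b = (1.0 + 1e-9, 1.0, 1.0)   # one part in a billion apart
max_sep, zs_a, zs_b = 0.0, [], []
for _ in range(50_000):            # dt=0.01, so 500 time units
    state_a = lorenz_step(state_a)
    state_b = lorenz_step(state_b)
    max_sep = max(max_sep, abs(state_a[0] - state_b[0]))
    zs_a.append(state_a[2])
    zs_b.append(state_b[2])

mean_a = sum(zs_a) / len(zs_a)
mean_b = sum(zs_b) / len(zs_b)
print(max_sep)                # large: trajectories diverge completely
print(abs(mean_a - mean_b))   # small: the time-average barely moves
```

This is what running an ensemble does, writ very small: perturb the initial conditions, watch the individual runs thrash about, and check that the statistics you care about don't.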
No, they probably can't be found by chance. They can, however, be found by someone with a lot of time on their hands and the willingness to try a huge number of random prods at the system to see if it has any holes, in exchange for some momentary fame on the intarwebs. Such people do exist: 35 years ago they were pressing random buttons on calculators to get them into funny and interesting states and solving Rubik's cubes, today they poke at phones. I think doing interesting things to calculators and cubes was, well, more interesting, sadly.
If this does indeed fix the bug, then it looks like it took them several hours to do so. No doubt everyone will still harp on endlessly about it.
(Not an apple fan particularly, just amused at the tedious name-calling.)
I think that's right unfortunately: you can either blow whistles or work. But it would be sufficient, here, for people to simply refuse to do the work.
However I think you are probably right: the stupidity can be isolated in the pointy-heads who don't actually have significant design input: the people doing the implementation were merely venal. I still think they should be liable (the designers/implementers) for that.
Someone who was not a manager wrote the code that does this. Someone understood that the code could be written and enough about the way the system works to know what it would do. These people, like it or not, are designing cars.
And of course they were being told what to do by the evil 'top people' but you know what: they had a choice, which was to have walked and/or blown the whistle. (And yes, you can do that: I did (walked)).
I can't work out how the decision to do this was made. If you just ignore all the ethics of it (which is probably a bigger issue, really) they must, at some point, have asked themselves a question which was something like 'OK, we know how to cheat: should we? Well, if we don't get caught we get to make a bunch of money, but if we do get caught, we will destroy or very badly damage the company and will certainly at least have destroyed our own careers even if we avoid jail. So, how dumb are the people who test cars then? Not that dumb, probably. Oh, and it will take *one* whistle-blower in the company, who will be entirely justified, to cause this catastrophe to happen.'
It's just really surprising to me that they would have decided to do this: granted they're evil, they seem also to have been catastrophically stupid, especially given the whistle-blowing risk which must have been just extreme.
So I do wonder if there is more to this than meets the eye, because I don't want to think that people that stupid might be designing cars.
I can't speak for the particular case you're talking about, but I think there is increasing evidence that a lot of mass-extinction events have been warming-related catastrophes. In particular the Deccan Traps released a really enormous amount of greenhouse gases during their formation and there's at least some evidence that this may have contributed to the K-Pg extinction (ie the extinction of the non-avian dinosaurs). I think the current notion is that it was a combination of warming because of this and a big meteorite strike.
So, let's get this right:
– you have seen the supposed raw data;
– which you know has been modified secretly;
– but the actual raw data is unavailable.
So, well, how do you know it has been modified? And are you, really, claiming that people are modifying the data and then removing the original data, and that you can prove this (without the original data being available any more, because all trace of it has been removed)?
And who is doing this? Is it the space lizards again?
So you 'support those that demand that the climate scientists show their actual data and how they manipulated it to get the results they did and also I uphold the request that the computer models be properly validated by people outside the climate change bubble.'
Well, of course the actual data is available, as are the sources to the models and their configurations. Often they are not quite open source – UM (the Met Office model) for instance is used for NWP as well and so is not completely open source. But anyone who is interested can sign whatever license agreement is involved (which won't involve money) and can then look at it and review the code, and I'm sure they would be very happy for people to do that.
But, somehow, climate sceptics never do, which is odd. You could be the first!
The whole Venus thing is interesting. It's a nice first-year physics problem to produce entirely naive estimates of what the surface temperatures of the (non-gas-giant) planets should be, just assuming that they are black bodies, and knowing the distance from the Sun, and either working out based on the solar constant or looking up the power output of the Sun.
The answers are pretty good: you get about 278K for Earth (5.5C) which is right within a surprisingly good margin. And you can do the other non-gas-giant planets and they're OK as well (within a few percent, which is amazing considering how naive the estimation is).
Except Venus, where you predict 328K (55C) but the actual temperature is 735K: more than twice as high (Venus is hotter than Mercury).
Well, clearly whatever is doing that has nothing to do with water, since there's no significant water on the other planets either. So it must be something else.
(I'm not suggesting Earth will end up like Venus!)
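For anyone who wants to check the first-year-physics arithmetic, here is the naive estimate sketched above (the solar luminosity and Stefan-Boltzmann constant are standard published values; the calculation treats each planet as a perfect black body and ignores albedo and atmosphere entirely, which is exactly why Venus breaks it):

```python
import math

# Naive black-body equilibrium temperature: power absorbed over the
# planet's cross-section pi*r^2 balances power radiated over its whole
# surface 4*pi*r^2, which gives
#   T = (L_sun / (16 * pi * sigma * d^2)) ** (1/4)
# Note the planet's radius r cancels out entirely.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.846e26   # solar luminosity, W
AU = 1.496e11      # astronomical unit, m

def blackbody_temp(distance_au):
    d = distance_au * AU
    return (L_SUN / (16 * math.pi * SIGMA * d * d)) ** 0.25

t_earth = blackbody_temp(1.0)    # ~278 K: remarkably close to reality
t_venus = blackbody_temp(0.723)  # ~328 K: vs ~735 K actually measured
print(round(t_earth), round(t_venus))
```

The gap between the naive 328 K and the measured 735 K for Venus is the greenhouse effect of its atmosphere, which is the point of the comparison.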
If you want to talk about high end why not talk about actual high end: what proportion of the top500 supers use SPARC? Do you really think the people who build these things are not measuring performance (and in particular I/O performance: anyone can fill a room with cores, but that's not a supercomputer)?
I slightly take back the above: I've tried the app on iOS and it does effectively offer offline maps and is sufficiently better than the old one that I've subscribed to it and will probably nuke the old one. They need an iPad (or tablet in general for the non-Apple-faithful) version still and they need to offer a painless (yes, I'm sure I can mail GPX files to myself and import them somehow: I want it *not* to be painful, since both of these apps are on the same device) migration path for routes and some discount for purchased maps.
Mapfinder exists now, provides offline maps which you can buy and is generally a really fine app. This new thing sort-of half exists, provides maps you rent and which are not available offline but perhaps might be at some future point.
But Mapfinder is orphaned or nearly so: all those maps you've 'bought', along with the routes you've created, will vanish into the ether when it dies. But meantime anyone who actually wants an OS mapping app in any of the places where OS maps are most useful needs to use it, and throw money into a hole.
And this situation is going on for years, because, um, it apparently takes longer to write a mapping app than it took to write OS/360, that famous software catastrophe (OK, it hasn't, yet: it has merely taken longer than the first several versions of Unix or X).
And it seems to be beyond the wit of the Ordnance Survey to offer either an upgrade path or subscriptions for the current app which can be migrated to the new one.
Is it? What do apple scrape from you? What does Googlebook scrape from you?
good point: that seems like either a bug or malice.
Don't be stupid: what they are doing is asking for your consent to change the terms of the contract. If you don't consent then they are exercising their option to terminate the contract.
Did anyone think they weren't collecting this stuff? Really? Especially if you use Facebook credentials to log in?
We are all horrified by storage of nuclear waste, which is dangerous for a while and of which there are relatively small quantities. But storing CO2 in the ground, in necessarily vast quantities and which is dangerous for ever, why that's just fine.
I think it does show that it doesn't work: the idea is that the stock price of a company should represent the future earnings to be had from those stocks, as assessed by the market. But stock prices flap around like this all the time which really shows that the market is absolutely terrible at that assessment. Economists then stick their fingers in their ears and go 'la la la' because the data is massively inconvenient for their pretty theories.
I think the famous Ken Thompson compiler hack demonstrates fairly conclusively that, if you are paranoid, you really can not trust that the binaries you have don't contain nasties, even if you compiled them yourselves, with a compiler you compiled yourself, from sources which did not contain nasties. Yes, there are ways around this, but they require heroic amounts of work and attention to detail. (And of course I am not suggesting that the tools we trust do contain backdoors: merely that they might.)
They don't really depend on very special processors: so long as people keep making reasonably performant desktop-class machines, that's all they need really. For instance, at the place I work (which has recently commissioned a Cray), the Cray's processors are significantly slower per-core than the predecessor machine's: there are just a lot more of them. What makes a super be a super rather than a big cluster of machines (the sort of thing googlebook use for what we now must call 'big data' but is really 'embarrassingly parallel problems') is not the processors, it's the interconnect, and that's something that they already engineer themselves (I don't know if they make it, but that doesn't really matter: do Apple make their own machines in any real sense, for instance?).
Does anyone still care about 'the desktop'? Wasn't that some 90s thing?
Because, of course, this will never happen to GitHub, never I tell you! And reddit will never turn into another slashdot which no-one reads any more. Oh wait, hasn't that already happened to reddit?
Well, why do people still paint or draw or, in fact, do almost anything? They do it because they enjoy it. And some of us actually don't enjoy sitting in front of a computer any more than we need to.
Good point: I hadn't thought about things like the F6. Would not appeal to me but no doubt was (is I think?) built like a tank.
My guess is: colour slide (E6) almost now, colour negative probably 10 years, B/W never.
I think to call this 'niche' would be understating how small its market must be. It's of interest to people who shoot film (a not-tiny market), using a small and rather uninteresting subset of film cameras (90s Nikons and perhaps Minoltas: both ugly jellymold 90s SLR families with really nothing interesting about them at all), and finally people who want to know details of every shot for reasons which escape me. That must be at least several people (of which how many read The Register? probably all of them).
I use film pretty much exclusively, and I'm mildly obsessive about recording things: I do record shot detail for my LF camera like many LF users (at several pounds a frame you kind of want to know that so you can save money next time), and I record film / lens information for smaller formats, but it's never even occurred to me to want to know the exposure/aperture for each frame on a roll of 35mm. And surely at least part of the point of using film is exactly not having to use horrid plastic 90s cameras.
I think that Pluto's orbit has resonances with Neptune's, you can work out how long those resonances take to settle down and it's too long: it's been in orbit around the Sun for long enough that it should be cool.
This is a dumb question I suspect. How does Firefox know that, as of today (or yesterday, or whenever it was) it should block flash? There hasn't been a new version in the last couple of days, so the only way I can see that it's doing this is by, reasonably frequently, asking Mozilla. While I don't mind that (I have it check for updates and send health reports anyway), I bet there are people who do: even if it isn't sending any real information (which it doesn't need to) it is pretty much inherently sending stuff like IP address information and so on. There doesn't seem to be any really obvious way of preventing it doing this.
It's essentially usenet, except it's owned by a company who have ultimate control and there is advertising. So it's a way of spending an enormous amount of time arguing with idiots, there is porn and, inevitably, a very large number of people with really offensive attitudes (I'd be interested in knowing what reddit's demographic is, but I'd guess it's essentially 18-25 white US male). Unlike usenet most forums ('subreddits') are moderated and there is voting so spam is not such a problem (but don't say anything that might be unpopular, even if it is true). Like usenet it is slowly being overwhelmed by awfulness of various descriptions, but in this case, since it is owned by a company who can potentially be held liable, it will fall to bits in different ways.
I suspect it's on the way down now, but then I would suspect that, having fairly recently walked away from it after 8 years or so (the 20 years before that being wasted on usenet).
It's sad that there seems to be really no good way of actually finding what the news is in various specialised fields which is stable in the long term.
Actually this isn't the problem at all. Fortran is really a single-purpose language: it does high-performance, large-scale, numerical code. No-one working on a Fortran system is worrying about the performance of OS kernels, video games or database interfaces, because it isn't used for that: all they have to worry about is getting large-scale numerical codes to run, really fast. And there are people who have a lot of money to spend on this – the kind of people who are interested in CFD simulations which run for millionths of a second of model time.
The end result of all this is that Fortran systems have very, very good numerical performance. C systems, empirically, don't, and C++ systems don't even have support in the standard for it. Fortran is really the only game in town for this stuff, as it has always been.
Other sites have made claims about 'charging in a minute', so let's take that seriously.
A (current) phone battery is something like 3.8V, and something like 1500mAh. That's 5.7Wh or, in sensible units, about 20kJ of energy (the voltage and current don't matter: the energy does). If we're going to charge this in a minute, then we're going to need to dump that much energy into it in a minute: this is about 340W, and (at 3.8V) about 90A.
All this assumes the battery is 100% efficient: I've both assumed that the voltage does not droop much as it discharges and that charging is completely efficient. Based on figures scraped from people selling phone replacement batteries, I think the voltage does not droop significantly, but I have no idea what the charging efficiency is.
If charging is extremely efficient, then, given rather thick cables and some fairly macho connectors in the charging interface this might work. This is not going to be charging over USB: given that a dodgy connection in the charging interface would probably result in a fire I imagine these will be some kind of screw-down connectors. The currents are less than a car battery provides when starting a car (which can go up to 200A) but not much less, so the connectors are going to be the same sort of thing.
If charging is not extremely efficient, a substantial amount of power will be dumped as heat in the device being charged. For something the size of a phone, it will need a heatsink, and possibly liquid cooling.
Alternatively: they are making things up to get funding.
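The back-of-envelope numbers above are easy to reproduce (same assumptions as stated: 3.8 V nominal, 1500 mAh, voltage roughly constant over the charge, 100% charging efficiency):

```python
# 'Charge a phone in a minute' sanity check.
voltage = 3.8          # nominal battery voltage, V
capacity_ah = 1.5      # 1500 mAh
charge_time_s = 60.0   # one minute

energy_wh = voltage * capacity_ah      # ~5.7 Wh stored in the battery
energy_j = energy_wh * 3600            # ~20.5 kJ in sensible units
power_w = energy_j / charge_time_s     # ~340 W to deliver it in a minute
current_a = power_w / voltage          # ~90 A at 3.8 V

print(round(energy_j), round(power_w), round(current_a))
```

Which is the point: ~90 A is car-battery-starter territory, not USB territory, before you even think about charging losses.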
This is something which someone should investigate, I think. If I'm lucky I get a good picture (something I'd consider printing) from a roll of 35mm film, while from 5x4 my rate is something between 1 in 3 and 1 in 2.
Ah yes, I see: "Other banking institutions across the world are also using this technology with their customers" so it must be OK. In 2000-2005 "other banking institutions across the world" were busily selling subprime mortgages. So that meant it obviously had to be just a fine thing to do, didn't it?
Security by bandwagon.
Well, it depends what you mean by 'somewhat-open-source'. Symbolics LispMs came with essentially all the sources to the system (there may have been some microcode stuff omitted), which you could (and did) modify. It didn't help them.
Whether things keep going depends on two factors: whether Google can sell you ads based on them (or charge you directly, perhaps) and whether Google itself survives. Let's assume that they will survive for a reasonable length of time. Then I think the answers to your questions are: gmail is fine, as it has to be an amazingly good stream of really targeted personal information; Google Earth is probably dead already; Picasa I assume will probably go (but I don't know much about it).
However the underlying message here is: if you keep your valuable data on systems which you do not either own yourself, or have contracts with teeth on (ie 'you break the system, you pay me money') then you are a fool.
We are all, of course, fools to some extent.
Usenet can continue to exist so long as there's code for an nntp server that will compile and someone willing to run it on an internet-facing machine.
But that's not the point: what Google have is as complete a stash of the history of usenet as anyone: they have posts going back to May 1981, which is within ~ a year of the start of usenet. If they decide that is no longer interesting then who else has that, or is that history just gone?
I wonder how long it will be before people realise that relying on a commercial organisation who you don't pay to keep irreplaceable information on your behalf is just a little bit dumb?
Yes, that is what it means: it means it will start enough of its electronics that it can hear commands from Rosetta and then sit there listening for them while charging the battery.
I think that's it in a nutshell, with the important caveat that point 1 is not the case.