Re: for the 'drones can't cause problems' crowd.
Yes, foam can damage the leading edges of wings ... when the leading edge is travelling at about Mach 2.5, and is made of extremely specialised material designed to cope with heat, not impact.
I presume it's just a matter of time before some nasty person thinks to intentionally fly a drone into the engine of some aircraft (yes, I realise this is hard to do, not least because drones aren't very fast, but that doesn't make it not possible). Indeed, since I've thought of it, I would assume that the nasty people already have, and indeed that the people whose job is to defend us from nasty people have too.
So it seems to me that there are really three possibilities here.
1. Actually, drones don't present any kind of serious risk to aircraft, so no-one is worrying about the problem.
2. They do present a risk, people don't know what to do, and are just waiting for the bad thing to happen and a lot of people in the plane to get killed (best case) or the plane to fall out of the sky over (say) London (much worse case).
3. Our defenders are not terribly smart and have not realised there's a risk.
I think, on the whole, I believe in (1), although I am confused as to why there is all this fuss each time an incident happens if that is true. My second choice would be (3).
It did work quite well with handguns, yes, if the aim was to reduce the associated death rate. Obviously it did not completely remove the problem of people shooting each other, but you only need to compare statistics with countries which restrict firearm ownership much less aggressively to notice quite a significant difference.
You do realise that English is pretty much entirely the result of persistent incorrect usage of other languages, of course?
Well, article 50 has not been invoked, yet the value of the pound has fallen. That's because markets are trying to assess and take account of the risk that it will be invoked: this increases the chance of bad financial things happening in the UK and therefore makes things denominated in pounds cheaper.
This is exactly the same thing: since there is a significant chance of problems with the UK participation in various projects the rational thing to do is to have less UK participation to reduce the chance of such problems. This isn't discrimination, it's rational behaviour.
Not quite. Λ represents the energy density of the vacuum itself. In other words, it says that there is a certain amount of energy which exists just because there's space.
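(To put that in symbols, and this is just the standard bookkeeping rather than anything specific to this thread: the usual identification is a vacuum energy density of

  \rho_{\mathrm{vac}} = \frac{\Lambda c^{2}}{8 \pi G}

so a non-zero Λ means every chunk of empty space carries a small, fixed amount of energy.)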
I think it really depends what you mean by 'a driver of expansion'. General Relativity says that matter and energy cause spacetime curvature and that this is how gravity works. But it doesn't say *why* they curve spacetime, it just gives some equations which let you work out what the curvature is.
Well, similarly, it says that there is (essentially) some zero-point energy even in a vacuum, which causes even the vacuum to have some gravitational effect. And it gives you a way of parameterising that zero-point energy, but again doesn't say anything about *why* it should exist.
How satisfactory you find this depends on who you are I think. I'm fine with it, but quantum field theory people want to explain it away in terms of some vacuum state of the field.
The cosmological constant can be an explanation but it's not an altogether satisfactory one. The Einstein field equations have, essentially, only three parameters: c is the speed of light and is, in some sense, not a parameter but just a scaling factor (it tells us what a second is in metres). G is Newton's gravitational constant, and it tells us how strong gravity is as a force. And finally Λ (big Lambda) is the cosmological constant and tells us something about how the vacuum behaves.
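(Written out, so you can see where the three constants sit; this is just the standard form of the field equations, not anything special to this argument:

  G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8 \pi G}{c^{4}} T_{\mu\nu}

with G_{\mu\nu} describing the curvature of spacetime and T_{\mu\nu} the matter and energy in it.)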
All of these constants are things you need to measure: nothing tells you what G should be except going out and measuring it, and the theory gives no reason for it to have any particular value (well, if it was zero the theory would be vacuous).
The same thing is true of Λ: it's a parameter of the theory which needs to be measured. For a long time, on grounds that turned out to be rather spurious, Λ was assumed to be zero, but as with G the theory doesn't have any opinion on what it should be, and you need to measure it.
So, both G and Λ are things that you need to measure, and if you are happy that G is just some unexplained parameter, then you really should be happy that Λ is as well. Of course, really it would be nice to explain both of these in terms of some other theory, since we kind of know that General Relativity can't be a correct theory in various limits. But, on the other hand, people like theories with a very small number of free parameters because they are so hard to adjust to fit the data: if you have a mass of free parameters you can tweak your theory to explain a huge range of phenomena, which means it is very hard for it ever to be wrong. It turns into something Ptolemaic, where you can just keep adding epicycles (free parameters) and the theory can never be falsified. And two or three free parameters is a very small number (the standard model of particle physics, for instance, has 19 I think): it's really the smallest number such a theory can have, so GR is quite compelling in that respect.
(The spurious grounds for assuming that Λ should be zero were essentially that its original use was to try and support a steady-state model of the universe, where there is no expansion or contraction. And it won't do that: although you can adjust Λ so that the universe does not expand or contract, the solution isn't stable: any tiny perturbation will cause it to either start expanding or contracting. So, in fact, GR, even with Λ, makes a strong prediction that the universe is either expanding or contracting. This was not known at the time GR was created, and Einstein didn't trust his own theory enough to make what would have been a very bold prediction about the large-scale structure of the universe. If he had done so he would, no doubt, have won a second Nobel prize for it, as the prediction turns out to be true. Instead he decided that, since Λ would not support a steady-state model it should be zero: but there is no reason to assume that at all.)
It is not the case that 'all the issues raised by Brexit used to be done differently before the EU treaties came into force' simply because there has been technological progress: before the EU treaties came into force the internet was in its infancy (indeed, depending on which treaties you mean, it did not exist at all), and you may have noticed that it has made a significant difference to the way we do business.
... or use the issue-tracking system or any of the other added-value paraphernalia GH surrounds git with
This is a good trick you are playing.
Obviously it is not possible to 'validate a model' in the sense either of proving that the program is correct (ie that it implements the algorithms it claims to), which is impractical for almost all programs, or of proving that the algorithms themselves correctly simulate the climate, since (a) we know they don't, certainly not to very fine detail, (b) we don't even know all the processes that make up the climate yet, and (c) even setting (a) and (b) aside there is sensitive dependence on initial conditions (SDIC).
Finally, we can't even do the kind of empirical testing with climate models that we can (or could) do with CFD simulations of nuclear weapons, for instance, since we don't get to repeatedly build an earth and set it going to see if it agrees with what our model predicts.
So, as I said, it's a clever trick to ask for something that you know can't be done and make it sound reasonable. Not bad for someone living under a bridge.
But there are, of course, things that can be done, apart from the standard process of incorporating new and computationally expensive processes in the models and running everything on finer scales as computer power increases, as well as fixing bugs and improving algorithms.
You can run your climate model for a period in the past, and see how well it agrees with what happened. People do this, a lot, and use the results to correct models.
You can take multiple climate models and run them with the same input data and compare their results. People do this: they're called Model Intercomparison Projects (MIPs).
You can run a single model with a varied set of inputs (an ensemble) and see how its predictions vary. You can do this in a MIP as well. People do this.
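To give a flavour of how the ensemble / SDIC testing works with a toy example (this is just the Lorenz-63 system in a few lines of Python, nothing to do with any real climate model, and all the names and numbers here are mine): the individual trajectories fly apart when you nudge the initial conditions, which is the 'weather', but the long-run averages barely move, which is the 'climate'.

# Toy ensemble experiment on the Lorenz-63 system (emphatically not a climate
# model): trajectories have SDIC, so tiny changes to the start point cause the
# paths to diverge completely, yet the long-run time-mean of z barely changes.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([x + dt * sigma * (y - x),
                     y + dt * (x * (rho - z) - y),
                     z + dt * (x * y - beta * z)])

def long_run_mean_z(initial, steps=200_000):
    state, zs = np.array(initial, dtype=float), []
    for _ in range(steps):
        state = lorenz_step(state)
        zs.append(state[2])
    return np.mean(zs[steps // 2:])   # time-mean of z, discarding the spin-up

# 'Ensemble': identical model, initial conditions perturbed by one part in a
# million.  The paths are utterly different after a short time, but the
# printed means (the 'climate' of this toy system) agree closely.
for i in range(5):
    print(long_run_mean_z([1.0 + 1e-6 * i, 1.0, 20.0]))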
They still don't need bomb-proof communications: at least some (and I would suspect all, albeit with no evidence) of the Paris attackers' communications were in the clear.
I think that the principal worry here is that we are governed by stupid people with very bad educations.
For the glow-in-the-dark question, the violet laser will have higher-energy photons and so will be kicking whatever it is that needs to be kicked into an excited energy level. The other lasers will then either give whatever it is enough of a kick that it falls back rapidly or, perhaps, kick it into some yet other excited energy level where it is much more nearly stable, so it appears dark.
There are almost certainly more ARM cores in your PC than Intel ones.
Yes, and to be clear this is the sense I meant by 'chaos': deterministic but displaying SDIC. And the interesting thing is that climate does not display SDIC: you can throw any old thing (within reason) into the initial conditions and it all converges, and people test this (for models, not for worlds).
That's right, of course. And if you can take a suitable average over that variation, and that average does not itself thrash about, then the average is not chaotic. And that average is what climate is.
The specific thing you care about is that whatever averages you define as 'climate' do not display sensitive dependence on initial conditions, and it appears that indeed they do not.
I understand what climate is. The point is that the various means over weather &c which are climate do not seem to exhibit chaotic behaviour. They do exhibit various instabilities (ice ages) but I don't think there is evidence of chaos. In particular there is no sensitive dependence on initial conditions as far as I know (and indeed you can check this in models by running ensembles, and people do this).
(And note: by 'chaos' I mean 'deterministic chaos' in the formal sense, not just 'variability'.)
Just to be clear that I was talking about deterministic chaos, as you are I think.
And I will revise what I said slightly: climate may indeed be chaotic, but there are bounds to the behaviour (you get ice ages, but you don't get Venus), and the chaos, if it is there, is there only on very long timescales. So my point is that it turns out that climate is indeed usefully predictable over the timescales we care about.
No, things that are predictable are not chaotic, almost by definition. And if they are chaotic then knowing more initial data helps only a tiny amount. Weather, for instance, is chaotic, and is therefore essentially unpredictable beyond a fairly short time, no matter how much computational resource you have. Climate is not chaotic, although I am not sure if it is known why.
The interesting thing is that climate isn't chaotic. Weather, of course, is, but climate isn't. It's clear that it's not chaotic mostly because we're here to measure it: if it was chaotic then it would lash around all the time because of SDIC, and it's pretty unlikely that a planet with a climate like that would support the evolution of intelligent life. Life, I think, could arise, but a planet with a chaotic climate isn't going to support farming, for example.
That tells us that it's not chaotic (as, of course, do climate records of various kinds and the great success of models at predicting climate, buffoons like Lewis Page notwithstanding), but not why: it's a complicated nonlinear system so there almost certainly will be chaotic regions in the phase space, but we're not in one. Something people worry about is that we could be pushing things into one, but this seems unduly alarmist to me.
No, they probably can't be found by chance. They can, however, be found by someone with a lot of time on their hands and the willingness to try a huge number of random prods at the system to see if it has any holes, in exchange for some momentary fame on the intarwebs. Such people do exist: 35 years ago they were pressing random buttons on calculators to get them into funny and interesting states and solving Rubik's cubes, today they poke at phones. I think doing interesting things to calculators and cubes was, well, more interesting, sadly.
If this does indeed fix the bug, then it looks like it took them several hours to do so. No doubt everyone will still harp on endlessly about it.
(Not an apple fan particularly, just amused at the tedious name-calling.)
I think that's right unfortunately: you can either blow whistles or work. But it would be sufficient, here, for people to simply refuse to do the work.
However I think you are probably right: the stupidity can be isolated in the pointy-heads who don't actually have significant design input: the people doing the implementation were merely venal. I still think they (the designers and implementers) should be liable for that.
Someone who was not a manager wrote the code that does this. Someone understood that the code could be written and enough about the way the system works to know what it would do. These people, like it or not, are designing cars.
And of course they were being told what to do by the evil 'top people', but you know what: they had a choice, which was to have walked and/or blown the whistle. (And yes, you can do that: I did (walked).)
I can't work out how the decision to do this was made. If you just ignore all the ethics of it (which is probably a bigger issue, really) they must, at some point, have asked themselves a question which was something like 'OK, we know how to cheat: should we? Well, if we don't get caught we get to make a bunch of money, but if we do get caught, we will destroy or very badly damage the company and will certainly at least have destroyed our own careers even if we avoid jail. So, how dumb are the people who test cars then? Not that dumb, probably. Oh, and it will take *one* whistle-blower in the company, who will be entirely justified, to cause this catastrophe to happen.'
It's just really surprising to me that they would have decided to do this: granted they're evil, they seem also to have been catastrophically stupid, especially given the whistle-blowing risk which must have been just extreme.
So I do wonder if there is more to this than meets the eye, because I don't want to think that people that stupid might be designing cars.
I can't speak for the particular case you're talking about, but I think there is increasing evidence that a lot of mass-extinction events have been warming-related catastrophes. In particular the Deccan Traps released a really enormous amount of greenhouse gases during their formation and there's at least some evidence that this may have contributed to the K-Pg extinction (ie the extinction of the non-avian dinosaurs). I think the current notion is that it was a combination of the warming this caused and a big meteorite strike.
So, let's get this right:
– you have seen the supposed raw data;
– which you know has been modified secretly;
– but the actual raw data is unavailable.
So, well, how do you know it has been modified? And are you, really, claiming that people are modifying the data and then removing the original data, and that you can prove this (without the original data being available any more, because all trace of it has been removed)?
And who is doing this? Is it the space lizards again?
So you 'support those that demand that the climate scientists show their actual data and how they manipulated it to get the results they did and also I uphold the request that the computer models be properly validated by people outside the climate change bubble.'
Well, of course the actual data is available, as are the sources to the models and their configurations. Often they are not quite open source – UM (the Met Office model) for instance is used for NWP as well and so is not completely open source. But anyone who is interested can sign whatever license agreement is involved (which won't involve money) and can then look at it and review the code, and I'm sure they would be very happy for people to do that.
But, somehow, climate sceptics never do, which is odd. You could be the first!
The whole Venus thing is interesting. It's a nice first-year physics problem to produce entirely naive estimates of what the surface temperatures of the (non-gas-giant) planets should be, just assuming that they are black bodies, and knowing the distance from the Sun, and either working out based on the solar constant or looking up the power output of the Sun.
The answers are pretty good: you get about 278K for Earth (5.5C) which is right within a surprisingly good margin. And you can do the other non-gas-giant planets and they're OK as well (within a few percent, which is amazing considering how naive the estimation is).
Except Venus, where you predict 328K (55C) but the actual temperature is 735K: more than twice as high (Venus is hotter than Mercury).
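If anyone wants to check the arithmetic, here it is as a few lines of Python (the luminosity and distances are rounded textbook values, and the estimate deliberately ignores albedo and any atmosphere, which is the whole point):

# Naive black-body equilibrium temperature: sunlight absorbed over the
# planet's cross-section is balanced against thermal radiation from its
# whole surface, giving T = (L / (16 * pi * sigma * d^2)) ** 0.25.
import math

SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.83e26    # solar luminosity, W (rounded)

def blackbody_temp(distance_m):
    return (L_SUN / (16 * math.pi * SIGMA * distance_m ** 2)) ** 0.25

for name, d in [("Mercury", 5.79e10), ("Venus", 1.08e11),
                ("Earth", 1.496e11), ("Mars", 2.28e11)]:
    print(name, round(blackbody_temp(d)), "K")
# Earth comes out at about 278 K and Venus at about 328 K; the measured
# Venusian surface temperature is 735 K, which is the discrepancy in question.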
Well, clearly whatever is doing that has nothing to do with water, since there's no significant water on the other planets either. So it must be something else.
(I'm not suggesting Earth will end up like Venus!)
If you want to talk about high end why not talk about actual high end: what proportion of the top500 supers use SPARC? Do you really think the people who build these things are not measuring performance (and in particular I/O performance: anyone can fill a room with cores, but that's not a supercomputer)?
I slightly take back the above: I've tried the app on iOS and it does effectively offer offline maps and is sufficiently better than the old one that I've subscribed to it and will probably nuke the old one. They need an iPad (or tablet in general for the non-Apple-faithful) version still and they need to offer a painless (yes, I'm sure I can mail GPX files to myself and import them somehow: I want it *not* to be painful, since both of these apps are on the same device) migration path for routes and some discount for purchased maps.
Mapfinder exists now, provides offline maps which you can buy and is generally a really fine app. This new thing sort-of half exists, provides maps you rent and which are not available offline but perhaps might be at some future point.
But Mapfinder is orphaned or nearly so: all those maps you've 'bought', along with the routes you've created, will vanish into the ether when it dies. But meantime anyone who actually wants an OS mapping app in any of the places where OS maps are most useful needs to use it, and throw money into a hole.
And this situation has been going on for years, because, um, it apparently takes longer to write a mapping app than it took to write OS/360, that famous software catastrophe (OK, it hasn't, yet: it has merely taken longer than the first several versions of Unix or X).
And it seems to be beyond the wit of the Ordnance Survey to offer either an upgrade path or subscriptions for the current app which can be migrated to the new one.
Is it? What do apple scrape from you? What does Googlebook scrape from you?
Good point: that seems like either a bug or malice.
Don't be stupid: what they are doing is asking for your consent to change the terms of the contract. If you don't consent then they are exercising their option to terminate the contract.
Did anyone think they weren't collecting this stuff? Really? Especially if you use Facebook credentials to log in?
We are all horrified by storage of nuclear waste, which is dangerous for a while and of which there are relatively small quantities. But storing CO2 in the ground, in necessarily vast quantities and which is dangerous for ever, why that's just fine.
I think it does show that it doesn't work: the idea is that the stock price of a company should represent the future earnings to be had from those stocks, as assessed by the market. But stock prices flap around like this all the time which really shows that the market is absolutely terrible at that assessment. Economists then stick their fingers in their ears and go 'la la la' because the data is massively inconvenient for their pretty theories.
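(The textbook version of that claim, for what it's worth, using standard notation rather than anything from the article: today's price is supposed to be roughly the discounted expected future payouts,

  P_0 = \sum_{t=1}^{\infty} \frac{\mathbb{E}[D_t]}{(1 + r)^{t}}

and the argument is that nothing on the right-hand side, neither the expected payouts nor any sensible discount rate, plausibly swings by several percent in an afternoon the way prices do.)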
I think the famous Ken Thompson compiler hack demonstrates fairly conclusively that, if you are paranoid, you really cannot trust that the binaries you have don't contain nasties, even if you compiled them yourself, with a compiler you compiled yourself, from sources which did not contain nasties. Yes, there are ways around this, but they require heroic amounts of work and attention to detail. (And of course I am not suggesting that the tools we trust do contain backdoors: merely that they might.)
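(For anyone who hasn't met it, here is a toy sketch of the idea in Python, with a text-rewriting 'compiler' standing in for the real thing and every name invented by me; the genuine article is described in Thompson's 'Reflections on Trusting Trust'.)

# Toy illustration of the 'trusting trust' trick.  The 'compiler' here is
# just a source-to-source pass and all the names are made up.
BACKDOOR = '    if password == "letmein": return True  # silently injected\n'

def compile_source(source: str) -> str:
    """Stand-in for a compiler: takes source text, returns the 'binary'."""
    if "def check_password(" in source:
        # Target 1: compiling the login program.  Insert a backdoor even
        # though the login *source* is perfectly clean.
        source = source.replace("def check_password(password):\n",
                                "def check_password(password):\n" + BACKDOOR)
    if "def compile_source(" in source:
        # Target 2: compiling the compiler itself.  Re-insert this whole
        # trick into the output so the published compiler source can look
        # clean too.  (Doing this properly needs a quine; omitted here.)
        pass
    return source

clean_login = "def check_password(password):\n    return password == stored\n"
print(compile_source(clean_login))   # the 'binary' now contains the backdoor

The point being that reading the login source, and even the compiler source, tells you nothing: the nasty lives only in the binaries and keeps reinstalling itself.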
They don't really depend on very special processors: so long as people keep making reasonably performant desktop-class machines, that's all they need really. For instance, at the place I work (which has recently commissioned a Cray), the Cray's processors are significantly slower per-core than the predecessor machine's: there are just a lot more of them. What makes a super be a super rather than a big cluster of machines (the sort of thing googlebook use for what we now must call 'big data' but is really 'embarrassingly parallel problems') is not the processors, it's the interconnect, and that's something that they already engineer themselves (I don't know if they make it, but that doesn't really matter: do Apple make their own machines in any real sense, for instance?).
Does anyone still care about 'the desktop'? Wasn't that some 90s thing?
Because, of course, this will never happen to GitHub, never I tell you! And reddit will never turn into another slashdot which no-one reads any more. Oh wait, hasn't that already happened to reddit?
Well, why do people still paint or draw or, in fact, do almost anything? They do it because they enjoy it. And some of us actually don't enjoy sitting in front of a computer any more than we need to.
Good point: I hadn't thought about things like the F6. Would not appeal to me but no doubt was (is I think?) built like a tank.
My guess is: colour slide (E6) almost now, colour negative probably 10 years, B/W never.
I think to call this 'niche' would be understating how small its market must be. It's of interest to people who shoot film (a not-tiny market), using a small and rather uninteresting subset of film cameras (90s Nikons and perhaps Minoltas: both ugly jellymold 90s SLR families with really nothing interesting about them at all), and finally people who want to know details of every shot for reasons which escape me. That must be at least several people (of which how many read The Register? probably all of them).
I use film pretty much exclusively, and I'm mildly obsessive about recording things: I do record shot detail for my LF camera like many LF users (at several pounds a frame you kind of want to know that so you can save money next time), and I record film / lens information for smaller formats, but it's never even occurred to me to want to know the exposure/aperture for each frame on a roll of 35mm. And surely at least part of the point of using film is exactly not having to use horrid plastic 90s cameras.
I think that Pluto's orbit has resonances with Neptune's; you can work out how long those resonances take to settle down, and it's too long: it's been in orbit around the Sun for long enough that it should be cool.
This is a dumb question I suspect. How does Firefox know that, as of today (or yesterday, or whenever it was) it should block flash? There hasn't been a new version in the last couple of days, so the only way I can see that it's doing this is by, reasonably frequently, asking Mozilla. While I don't mind that (I have it check for updates and send health reports anyway), I bet there are people who do: even if it isn't sending any real information (which it doesn't need to) it is pretty much inherently sending stuff like IP address information and so on. There doesn't seem to be any really obvious way of preventing it doing this.
It's essentially usenet, except it's owned by a company who have ultimate control and there is advertising. So it's a way of spending an enormous amount of time arguing with idiots, there is porn and, inevitably, a very large number of people with really offensive attitudes (I'd be interested in knowing what reddit's demographic is, but I'd guess it's essentially 18-25 white US male). Unlike usenet most forums ('subreddits') are moderated and there is voting so spam is not such a problem (but don't say anything that might be unpopular, even if it is true). Like usenet it is slowly being overwhelmed by awfulness of various descriptions, but in this case, since it is owned by a company who can potentially be held liable, it will fall to bits in different ways.
I suspect it's on the way down now, but then I would suspect that, having fairly recently walked away from it after 8 years or so (the 20 years before that being wasted on usenet).
It's sad that there seems to be really no good way, stable in the long term, of actually finding out what the news is in various specialised fields.