Re: Phew!
Which might still happen on the last leg of the descent
Mission control to LOHAN: DUCK!!
I have used it so often in building telescope mounts, and it just gives a thrill to see things suddenly move smoothly, without wobble or stiction. Not surprised it got the canards sorted. Smoke from servos and sticky canards don't mix, unlike smoke from apple wood and marinated magret de canard.
Darn, hungry now
Has he got a white Persian cat?
Kudos to Elon, though; he has the balls and the business mind to do stuff I dreamed of as a kid watching the Apollo programme unfold. I got my kids very excited when I pointed to Mars in the sky and told them two robot cars made on Earth were trundling around there. I do hope at some point I can point there and say there are people on that red dot in the sky.
Maybe SIMD is not the best turn of phrase. As I see it, FPGAs could work well if the processing steps are fairly predetermined. I do not see FPGAs working with the data-driven processing order required for my kind of image and volume filtering work, but then I haven't got much experience with FPGAs, so maybe I am wrong.
Absolutely true. Reprogramming FPGAs is a bottleneck to many. A software platform which could ease that pain would help, but I suspect that might be as difficult as compilers which "automatically" recognize how to parallelise code. I have seen examples of the latter which handled quite a few situations admirably, but feed them e.g. a queue-based algorithm and they are stuck. Many challenging problems require real originality.
FPGAs do offer interesting ways to extend the power of computers, especially for SIMD type situations (and there are loads of these). I will see if our library has that paper available.
I thought Moore's law was not so much an assertion as an observation. I gather he observed a trend in the data so far, and suggested the exponential growth might go on for a while yet. I do not think he envisaged it to last as long as it has.
I do tire of people who still suggest Moore's Law will come to the rescue of their pathetically slow algorithms. I always like to point out that even if Moore's Law continues unabated, the amount of data their quadratic, cubic, or even exponential complexity algorithms can handle will not grow in the same way. Instead, the amount of data grows by the square or cube root of two for each doubling of speed, and in the exponential case you can add just one data item per doubling.
The end of Moore's Law might put an end to this form of sloppy thinking.
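To make the arithmetic above concrete, here is a toy sketch (constants ignored, assuming time(n) = n^k for polynomial and 2^n for exponential algorithms):

```python
import math

def poly_growth(k, speedup=2.0):
    """Factor by which n can grow when an O(n**k) algorithm runs `speedup`x faster."""
    return speedup ** (1.0 / k)

def exp_growth(speedup=2.0):
    """Extra items an O(2**n) algorithm gains from a `speedup`x speed-up."""
    return math.log2(speedup)

print(round(poly_growth(2), 3))  # 1.414 -> quadratic: n grows by sqrt(2)
print(round(poly_growth(3), 3))  # 1.26  -> cubic: n grows by the cube root of 2
print(exp_growth())              # 1.0   -> exponential: one extra data item
```

So a decade of doublings buys a quadratic algorithm roughly 32x more data, and an exponential one just ten more items.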
I spotted one of the two through my solar scope, and caught the aftermath of one on camera (it is the bright spot on the left on the linked image). The active region shown here has produced quite a bit of fireworks, lately.
As can clearly be seen, these were not pointing anywhere near our direction (even accounting for the rotation of the Sun). Even an X20+ would be highly unlikely to harm us in that case. The headline is therefore the equivalent of "Bullet aimed elsewhere fails to kill man".
Boffins really, really need a sense of humour. If you cannot poke fun at life, the universe, and everything, you might end up starting to take yourself and (worse) your ideas way too seriously. You might even end up believing in them, which totally scuppers your critical attitude. A good capacity for self-deprecation or even self-mockery is important in science.
This is why I like writing the odd paper for Annals of Improbable Research. Must get that paper on pasta-antipasta collision experiments finished.
For many problems, getting things to work efficiently on a GPU is limited by the small working memory of the stream processors and the cost of transferring data to and from it. Besides, certain tasks have parts best performed on the CPU and other parts better suited to the GPU; by removing the need to pump data from one memory partition to another, speeds could increase dramatically.
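A back-of-the-envelope sketch of why those transfers hurt. The bandwidth and throughput figures below are illustrative assumptions (roughly a PCIe link and a modest GPU), not benchmarks:

```python
# Illustrative numbers only: ~16 GB/s over the bus, ~1 TFLOP/s sustained on the GPU.
PCIE_BYTES_PER_S = 16e9
GPU_FLOPS_PER_S = 1e12

def times(n_bytes, n_flops):
    """Return (transfer_seconds, compute_seconds) for one round trip."""
    transfer = 2 * n_bytes / PCIE_BYTES_PER_S  # copy data in, copy results back
    compute = n_flops / GPU_FLOPS_PER_S
    return transfer, compute

# A memory-bound filter: 1 GB of data, ~10 operations per byte.
transfer, compute = times(1e9, 1e10)
print(transfer, compute)  # 0.125 s moving data vs 0.01 s computing
```

For such memory-bound kernels the bus, not the GPU, sets the speed limit, which is exactly why shared-memory CPU/GPU designs are attractive.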
Concrete AND nails?! That's posh, that is
We would have just LOVED to have concrete. We had a ball of depleted uranium (IF we were lucky), and died of radiation poisoning after EVERY game, and our dad would come and dance on our grave and sing "Hallelujah!"
And if you tell kids they never believe a word of what you are saying
to freak out my missus. While she is working in windows mode and slips out to make a cup of coffee, I slip in the phone (which is behind the screen and therefore not visible), tap the screen, and sit back to observe the ensuing panic. As my wife panics whenever a button or menu is moved in software she demanded I install, I cannot imagine the degree of panic caused by an "inexplicable" change of OS.
Tempting, very tempting
Not just government officials, but many others in big data. Too often they assume that arbitrary aggregation will yield better statistics (e.g. because the standard error in the mean is reduced), whereas piling up data from different sources all too often obscures real effects.
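A quick illustration of how aggregation can obscure (indeed reverse) an effect, using the classic kidney-stone treatment figures from the Simpson's paradox literature:

```python
# (successes, patients) per stratum: treatment A beats B in each stratum,
# yet B comes out ahead once the strata are lumped together.
data = {
    "small stones": {"A": (81, 87),   "B": (234, 270)},
    "large stones": {"A": (192, 263), "B": (55, 80)},
}

for stratum, arms in data.items():
    rate = {t: s / n for t, (s, n) in arms.items()}
    print(stratum, rate["A"] > rate["B"])  # True in both strata

totals = {"A": [0, 0], "B": [0, 0]}
for arms in data.values():
    for t, (s, n) in arms.items():
        totals[t][0] += s
        totals[t][1] += n

overall = {t: s / n for t, (s, n) in totals.items()}
print(overall["A"] > overall["B"])  # False: aggregation reverses the ranking
```

The lurking variable (stone size) is confounded with the choice of treatment, which is precisely what naive pooling of heterogeneous sources throws away.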
Well-written article again.
I do not doubt that (in the words of Sir Patrick Moore) this will have "all the crackpots crawling out of the woodwork". A Type Ia in M31 would be about magnitude 5, but that would not cause a GRB-like event. I would expect this to be brighter (although much depends on extinction by dust).
True, as a scientist you do not like throwing anything away, but with projects like LOFAR, SKA, and the like there is little other option. One trick in LHC-like experiments is to store the detected tracks parametrically rather than storing every point that made up the track. That makes a big difference. Might we miss things? Certainly. Can we store all the data for later reuse? Not at the moment.
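A toy sketch of the parametric-storage idea: fit the track and keep only the fit parameters. Real detectors fit helices in a magnetic field, not straight lines, but the compression argument is the same kind:

```python
def fit_line(points):
    """Least-squares slope/intercept for a list of (x, y) hit coordinates."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

# 1000 hits lying on a straight-line track y = 2x + 1
hits = [(x, 2.0 * x + 1.0) for x in range(1000)]
track = fit_line(hits)
print(track)  # (2.0, 1.0): two parameters instead of 2000 coordinates
```

Whether the raw hits can then be discarded is of course exactly the "might we miss things" trade-off.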
You really want to be able to do both: process faster to get the information out of the data, and store every bit quickly.
They need faster data reduction methods. We are actually working on much faster processing methods to extract the information from the raw data, so the raw data need not all be stored before processing. They do use preprocessing methods to reduce storage now, but existing methods do not scale well, so ever-faster data rates call for better, lower-complexity algorithms.
Not that I would mind kit with the specs they want. I want that too. And not to run Crysis!
I actually rather like my 13" laptop (16:10 ratio, rather than 16:9). I am looking to replace it with a 13-14" model, no bigger. I am also very happy with my 10" ASUS Transformer Pad. That seems a MUCH better format for a tablet/laptoplet (or notebooklet) than this offering from MS, and gives me 15-16 hours of use (with less grunt, but I get that from my laptop). I do not mind having two or three devices, but maybe I am weird (i.e. not the category the marketeers are interested in).
When you publish with any scientific publisher you almost always have to sign a copyright transfer agreement. Many publishers simply use this as a means to clear themselves of copyright infringement charges, because you have to declare that the material is your own, and you have the right to sign over the copyright to them. Any extraneous material must be covered with a separate permission from the holders (which is never a problem, because the holders are only too happy that their material is being used, and that therefore they are cited).
Some publishers also want exclusive rights to publication, although all I know allow you to use your own work freely in non-commercial publications such as a PhD thesis. Springer, in its Lecture Notes in Computer Science, requests you not to put your material on-line until one year after publication. It is not a prohibition, but since they ask nicely I tend to comply. IEEE allows you to post your material provided you show a clear copyright statement and state that the material is only provided for quick dissemination of scientific results for research and educational purposes, but not for any commercial use. This is entirely reasonable.
What the ASCE is doing seems a bit harsh, but the authors must read what they are signing. A colleague of mine crosses out any condition he does not like, initials the changes, signs the form, and sends it in. He has never been challenged on these changes. I suggest all authors in ASCE publications follow that example.
Alternatively, you can publish in open access journals (or use the open access scheme of some journals with hybrid publishing format). Somewhat more expensive, but compared to the cost of doing the research itself it is nothing. You then simply link to the version on the journal's website and everybody can access it.
The only downside of a replicator based on this mechanism is the formation of an anti-hamburger together with your hamburger. It may be a balanced diet, in a manner of speaking, but could lead to explosive indigestion to which the phrase "blast radius" would seem too small (swamp dragons would be jealous).