More shitty animated GIFs
Are El Reg actively trying to kill their readership through mass epileptic fits?
I can't think of any reason to use RAID-0 in a NAS box.
For much of the target market I'd say that RAID-0 is probably the preferred solution; steering away from it based on purely general concerns strikes me as a lazy, knee-jerk recommendation. As with any choice of technology, the specifics of the requirements (should) alter the evaluations.
Most home users are going to want high capacity, small size, low power draw, low noise and low cost. Those are all requirements that steer you away from redundant storage. Your typical home user has no need for high availability, so provided you have a backup RAID-0 is perfectly adequate. If you don't have a backup no form of RAID is appropriate, since your real problem lies elsewhere.
According to Wikipedia, the UK hasn't made up its mind whether GMT means UT1 or UTC so there is some ambiguity. But in all non-nautical contexts, I would expect GMT to mean UTC.
The relevant legislation is clear - it means GMT. GMT is its own time standard, which isn't really surprising given that it predates any of the Universal Time standards as the original world time system. The nearest UT equivalent is UT0, but even that isn't a direct substitute since the precise methods by which they are defined differ - the GMT definition is slightly sloppy by modern standards of precision.
The ambiguity arises from the popular conflation of GMT with UTC, UT1, or whatever else the user wants it to mean. "Basically equivalent for everyday purposes" gets morphed into "exactly the same" or even "equivalent by definition". GMT has no fixed relationship with any of the universal time standards. It's also the kind of incredibly subtle issue on which I wouldn't trust Wikipedia at all - everyone fervently maintains what they believe is correct and precise, even if, as a poster here remarked, they are actually using a simplification of a simplification.
Confusing matters even more in the UK is that although the time is legally defined as GMT, most precision time sources - the speaking clock, MSF, GPS and the BBC time pips - actually transmit UTC.
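The conflation is baked into software too: the IANA tz database simply defines GMT as a fixed zero offset from UTC, so code never even sees a difference. A quick Python illustration (my own sketch, nothing authoritative):

from datetime import datetime
from zoneinfo import ZoneInfo

# "Etc/GMT" in the tz database is just an alias for a fixed zero offset,
# identical to UTC as far as the library is concerned.
t = datetime(2024, 1, 1, 12, 0, tzinfo=ZoneInfo("Etc/GMT"))
print(t.utcoffset())  # 0:00:00 - indistinguishable from UTC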
If you look closely, you'll see that the 4-o'clock position is marked with what appears to be "IV". But the correct (in the sense of traditional) indicator for 4-o'clock on Roman Numeral clock faces is, perhaps surprisingly to some, "IIII".
I'd say it is one of the few clocks you can actually argue is correct. Modern usage of Roman numerals would indeed make 4 IV. The Romans themselves, however, didn't use the modern prefix-to-subtract notation, so 4 would be IIII, as most clocks have it. But in that case 9 should also be VIIII, which I can't recall ever having seen on a clock. You can argue about whether this one is right or wrong but it is at least consistent, as opposed to most Roman-numeral clocks, which are unambiguously wrong regardless of which system you use.
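The two conventions are easy enough to compare mechanically - a throwaway sketch of my own:

# Subtractive notation uses prefix pairs like IV and IX; purely additive
# notation, as the Romans generally wrote, just repeats symbols.
SUBTRACTIVE = [(10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
ADDITIVE = [(10, "X"), (5, "V"), (1, "I")]

def to_roman(n, table):
    out = ""
    for value, symbol in table:
        while n >= value:
            out += symbol
            n -= value
    return out

for hour in (4, 9):
    print(hour, to_roman(hour, SUBTRACTIVE), to_roman(hour, ADDITIVE))
# 4 IV IIII
# 9 IX VIIII  <- a consistent face pairs IIII with VIIII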
But the mechanics of the flybys of the Voyagers were such that they'd *never* get near Pluto
That was quite possible, and indeed it had been pencilled in as one possible path to take: it is also the primary reason there were two Voyagers in the first place. The plan was that after Saturn the probes would diverge, one going to Uranus and Neptune, the other going on to Pluto - the orbital mechanics didn't allow a single probe to go on to all three but you could still choose between the two alternatives.
As it turned out Titan looked too interesting in the early findings, so Voyager 1 was diverted to try (and largely fail) to find out more. After that the option was gone.
Leather: renewable byproduct of providing the world's population with the protein it needs to survive.
Leather knock-off: plastic from the petrochemicals industry and indirectly a byproduct of fossil fuel production.
I think I know where I would put the green money.
Alpha Centauri is the closest star to us and is the third brightest star in the southern sky. Not invisible to the naked eye!
It's generally (but not universally) accepted that the Alpha Centauri system is the closest to Earth, but the nearest known star is Proxima Centauri - the debate is over whether it is part of the Alpha Centauri system, not over the distances. At magnitude 11.05 it is also around 100 times too faint to see with the naked eye.
Maybe star formation is stopped, but most stars are small and have very loooooong lives, often much longer than the current age of the universe.
It pretty much means the same thing, since even the brightest red dwarves are very dim - less than 10% of the luminosity of the Sun - so we can't see them over intergalactic distances. Bear in mind that the closest star to us is completely invisible to the naked eye and difficult to identify even with a good amateur telescope.
Unfortunately we're still going to have to burn you at the stake for getting 'affect' and 'effect' the wrong way round.
I actually noticed that one was wrong with two minutes of the edit window still open, so quickly edited and changed affect to effect. Noticed a couple of minutes later that I'd actually changed the wrong one but by that time the edit window was shut. ;-)
But the Moon is already outside the Van Allen belts.
How can you tell a magnetic field is 4 Billion Years old?
Certain types of rock retain an imprint of the magnetic field of the time they were formed. That imprint itself affects the magnetic field of the planet, but the effect is highly localised. If you get sufficiently up close and personal you can begin to separate those imprints on the surface rocks from the greater bulk of the planetary field; estimate the age of the rocks and you can then determine what the planetary field as a whole was at that time.
At least that is what they are claiming - I haven't delved deeper than the abstract and it isn't my specialism. Personally I find it surprising that that sort of unravelling of a messy dataset can be done at a range of tens of kilometres, but I certainly haven't the background to contradict them.
Although a cheap microscope device like this might be quite useful for anyone who wants to get into analysing de-capped chips on a tight budget - that is to say, burning the top off of an IC with acid to physically look inside and read out firmware, etc...
Not really, given that feature sizes on ICs have been under the wavelength of visible light for more than twenty years now. That means you can't see the features with any optical instrument even in theory, let alone with a low-end device such as this. Reverse engineering of ICs has always been a job for electron microscopy.
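You can put a number on it with the textbook Abbe formula - a rough sketch of my own, assuming a best-case oil-immersion objective:

# Abbe diffraction limit: d = wavelength / (2 * NA). Even a perfect
# optical microscope cannot resolve features much smaller than this.
wavelength_nm = 400        # shortest visible wavelength
numerical_aperture = 1.4   # about the best oil-immersion objectives manage
print(wavelength_nm / (2 * numerical_aperture))  # ~143nm, well above the
# sub-100nm feature sizes ICs have used since the early 2000s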
No, it is what passes for a sensor these days - the A->D stuff and basic processing are on the sensor chip itself. It makes sense to do the conversion from low-level analog signals as soon as possible, and on a chip already doing analog processing at any rate, since such work is a poor match for digital-optimised processes. A camera module will include the lens and focuser in addition to the bare sensor.
Really, do people not understand that the internet is not an interference-free, guaranteed availability network?
Do you really think they never stopped to consider issues like that? That an entire team of engineers spent years and millions developing a product in one of the most highly regulated sectors imaginable and no-one stopped to evaluate the connectivity requirements? Of course not. What they understand but apparently you don't is that high quality connectivity is available if you need it. It is utter naivete to assume that because something is TCP/IP for interoperability it is going to be operated over any old crap like the £10/month DSL connection you have at home.
Instead look at e.g. the MPLS cloud providers. If both sites are already connected to the cloud you can ring up and say "We need a 10 Mbit path between these two nodes". They reconfigure the switches and call you back a couple of hours later - "You have 10 Mbit, latency no more than 5ms". Those are not aspirational theoretical maximums, subject to traffic levels and a following wind, but guaranteed minimums - the link will not so much as fart.
So yes, I can see legitimate use for devices such as this, operated by competent individuals and backed up by competent IT professionals who understand how to specify what they need. If the circumstances demanded it I'd be happy to be operated on by such a device, and so would you. Clearly most of the time you would rather the surgeon was there in person, but that isn't always possible. Consider the stories of professional divers who receive the most horrendous injuries but can't be taken to hospital until they have spent a month in decompression. Would you rather be left there for that month with only basic nursing care, or would you rather they used something like this on day one?
Maybe knowing how a builder thinks means he's better placed to write software that other builders find intuitive, or which performs functions that clients in the construction industry want more efficiently. Learning to code isn't tough, but understanding the job you're writing software for is.
I can see plenty of other cases - the poster here a few days ago who observed that, needing data centre racking, calling in the joiners to build racks into the room would be cheaper and more elegant than buying and adapting around whatever massproducedracksareus.com happen to be flogging. Or the IT guys in a hospital I was visiting a few months ago routing Cat6 through a wall. I could see the problem straight away, so it was no surprise the following day to see that the cable had been ripped out and a couple of other guys discussing how to repair the damage. All because the IT whizzes had decided that the notices literally every ten feet along the wall (Fire wall - do not penetrate) didn't apply to them.
Both cases are on the fringes of IT - they won't be covered in an IT course but IT professionals still consider them fair game for an amateurish stab. Sometimes it works, sometimes it doesn't. Yes, the builder is a fairly extreme example but there are plenty of more direct ones - maths, electrical engineering, physics (cough), management, marketing, domain-specific knowledge for the application at hand. The broader the skills base, the more likely it is someone will spot pitfalls before they arise. I've lost count of the times I've heard IT pros talk of the "resistance" of data cabling, as if resistance were in any way a meaningful metric for data cabling.
You can dismiss all that expertise at the altar of dedicated IT skills, but it is an incredibly foolish attitude. And don't complain a week on Thursday that they are all career politicians with no knowledge of the real world, because that is exactly what you are demanding here.
Is that picture taken under available sunlight, passive IR, some composite of wavelengths? Are the spots 'bright' because they're whiter or hotter or what?
Most of the pictures I've seen so far are unfiltered, i.e. what the CCD detects. That's essentially visible light extending some way into near IR.
Secondly, those of us who learned about IT by experience, rather than in an academic environment, tend to be far more broad-minded and less bigoted than those who spend five years in university, come out waving their silly little bits of paper around, convinced that they now know 100% of what there is to know about IT, and that they can go out and tell us who have been working in the industry for 30 years that we're doing it all wrong.
The best people have a diversified knowledge base. I certainly would agree that new graduates (of any technical discipline) need a little seasoning, but after perhaps five years' commercial experience they are much more rounded - they've picked up practical knowledge and experience of real situations and all the surrounding areas outside their original discipline: management, administration, record keeping and so on. In contrast, self-taught and on-the-job learning is always something of an unknown quantity and remains so throughout one's career - there are frequently huge areas of ignorance and a lack of investment in learning, and if something works that is the end of the story, even if a different approach could have been cheaper, better, or less effort. Formal education fills in many of those gaps - not always well enough to provide all the answers, but enough to pose the right questions, which more often than not is the real issue.
There's also a question of investment - if you've spent three years at uni you have a hell of a lot of groundwork under your belt before you start tackling real world problems. Without that the temptation is always to cover the bare minimum to achieve the aim in hand, even if a more sophisticated approach would be more profitable in the long run. I'm in physics myself (primarily astrometry techniques) and I see it all the time - people have gone to great effort to achieve something, and done it badly, and you look at it and immediately ask "Why didn't you use a ____?". Within IT I frequently see and hear of completely the wrong tools being used because they are the tools people know. For example there is a choice between spending a week hand-writing a huge, unmaintainable parser or investing a fortnight learning Lex & Yacc for no immediate benefit, but subsequently writing the same parser in a morning. The self-taught tend to take the first approach. The CS graduate has already taken the second.
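To illustrate the gulf between the approaches, a toy sketch of my own (Python rather than Lex & Yacc, and every name in it is my own invention, but the principle is identical - the tool-driven version is a declarative table rather than pages of hand-rolled character shuffling):

import re

# A declarative token table; the regex engine does the character shuffling.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=()]"),
    ("SKIP",   r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenise(text):
    for m in MASTER.finditer(text):
        if m.lastgroup != "SKIP":
            yield (m.lastgroup, m.group())

print(list(tokenise("x = 42 + y")))
# [('IDENT', 'x'), ('OP', '='), ('NUMBER', '42'), ('OP', '+'), ('IDENT', 'y')]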
Time and time again, I see people tell me that I'm wrong, because they are applying their basic, limited knowledge, to a situation that I have more advanced knowledge about. They are convinced that I am wrong, and once I engage them in conversation about it, they just start sprouting all sorts of erroneous cobblers, without realising how stupid they are making themselves look. Just a couple of days ago on this very forum, I had one idiot trying to tell me that read errors that have been detected by CRC checking are somehow responsible for silent data corruption. By very definition, that obviously cannot be true - that is what the CRC is there for. I have also been told that the bad144 utility reads the inbuilt defect map from a hard disk's controller. Errr, OK... Maybe in a parallel universe.
The tone of this immediately put me off - it reads as "I'm so much better than everyone else here, the way my skills developed is the One True Path and anything else is wrong." You cannot present a logical argument that you are enriching yourself by denying yourself avenues of learning. There are many sources of skills and knowledge - academic education, industry certification courses, reading and private study, practical commercial experience, pet projects, discussions with your peers, even media reporting. The truly skilled individual exposes themselves to as many of them as possible. Stating at the outset that you are not going to consider one of the principal sources, and then treating those around you with contempt because you consider them to be beneath you, impoverishes your sources of skills and learning. That is ultimately to your detriment rather than your benefit.
Out of interest I did look through your posting history to find the discussion to which you refer. If I had to call it I would say that you actually lost the argument. You made an initial claim which was challenged with a chain of reasoning showing that in the common case what you said didn't apply. You introduced a lot of smoke and mirrors and arguments based on the rarer cases, but at no point did you defend your original claim against the argument made against it. Therefore you lost by default.
What have you learned from that? Nothing, because they are trolls and unfit to untie your shoelaces. That is not the approach of someone who learns wherever they can as an investment in the future.
You can always use AB+ to kill it.
Kill them all: it is easy enough since El Reg only uses GIFs for the annoying animations - everything else is either PNG (most UI components) or JPEG (photos). Add a custom ABP pattern:
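For example, something along these lines (assuming El Reg still serves its images from regmedia.co.uk - adjust to taste):

||regmedia.co.uk^*.gif$image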
You'll never notice it except when it finds something you didn't want in the first place.
...has only just about reached the level of the TRS-80. Up till now it's been more like the MK14/SCaMP level. The problem is the manufacturers can't see back to the beginnings of the home microcomputer age when "huge" sales initially meant, at most, a few 1000 units for dedicated hobbyists.
That's true of more than just the size of the market - it extends to the range of applications. A few weekends ago my fiancé brought home a stereolithographed front-panel bezel for a prototype something-or-other, to do the surface finishing on. I saw it before he started work - no, the finish wasn't perfect, but you could see it didn't need more than final finishing; the accuracy and fine definition were there already. It also felt nice and robust. He can't have disappeared into his workshop for more than half an hour for a light sand and spray paint. When he'd finished, it could easily have passed for injection moulding, and high-quality moulding at that. Commercial application, commercial process (from a bureau service), commercial price tag and a very high-quality result. You can see the applications straight away.
Now look at the home 3D printers, producing unconvincing plastic blobs that look and feel as if they are about to fall apart at any moment. Where are the applications? Oh, I can see why a home user might want such a device, but these don't fit the bill - they lack the structural strength for the DIYers, the model engineers or the self-described "makers", and they lack the precision and finish for the fine-scale model makers or the artisans. That leaves you with plastic toys, and expensive, poor quality toys at that. When the novelty wears off, what is the real use?
Your comparison to the home computer market is a good one but it goes further than you initially drew it. Yes, call it around the TRS-80, Spectrum, C64 or whatever. I don't see there being a linear successor to today's 3D printers just as there wasn't to those computers. Instead you had a story of divergence - the gamers went off to the Amiga and Atari and the "serious" home users went off to the likes of the Amstrad PCW. They only reconverged later on the PC, by which time you could say the technology was truly ready. I think 3D printing has to go through a similar story in the short term, supporting larger prototypes and acceptable strength on one hand and much better accuracy on the other. It's only when they come back together again that the technology is genuinely ready for the mass market.
Put that way, it isn't something that is almost ready for prime time - we're at least two generations away. Businesses that do not appreciate that and plan accordingly are doomed to the same end as many of the firms behind those pioneering machines of the 80s.
@1980s_coder: that is fantastic! Do us another one, please!
Getting things like that to work is surprisingly easy in practice. Occasionally you may need to twist the plain text more than you would like. Sometimes things just work out conveniently and you consider yourself lucky.
Contrary to what you might at first assume, there are enough ways of phrasing any given concept to give considerable flexibility and allow both plain text and cipher to appear natural. Re-ordering of the points you wish to make is always another option to allow things to pan out in a seemingly natural manner. Each time you do that, however, you have to ensure the plain text still flows naturally without hopping between disjoint concepts. When other options fail there are also any number of general joining words that can be fitted in to almost any sentence to help out.
Your vocabulary also helps out massively - use a thesaurus if you are having serious difficulties. Often it isn't really necessary and the other approaches allow you to express yourself clearly enough. Unless you have really painted yourself into a corner the inclusion of obscure terms should be avoided where possible. Realistically, however, they may be necessary from time to time. Similes and metaphors are another approach to use sparingly; if you use them to excess the message appears too flowery and poetical.
Eventually, however, you do need to come to the point and make it clearly and unambiguously. Lexicographer's playthings are interesting puzzles but are not an end in themselves.
Finally, always end with something that sounds completely natural - it helps create a better impression of the composition as a whole.
My comments in that post were deliberately nonsense, posted just to hide a message that nobody (except seemingly one person) noticed.
We got it. A single word does not constitute a message. No real information was provided. Key points were not made. Elaboration and arguments were entirely absent. Really, then, you got no more than you deserved.
Can people please learn the difference:
GMT - Time zone, same as EST or CET
UTC - Time Standard. Yes it is the "same" as GMT, but they are NOT the same thing.
If you are going to be pedantic at least get it remotely correct. They are two distinct time standards: GMT is the "natural" time, i.e. determined by the rotation of the Earth, while UTC is governed by atomic clocks and needs the periodic insertion of leap seconds to keep it roughly in sync with GMT, which itself neither has nor needs leap seconds. GMT and UTC both approximate the time at Greenwich, but for any given instant the time is slightly different when expressed in each system.
Did I mention chemical bonds or crystals? No. I referred to it as a "diamond" (in quotes), meaning a so-called diamond, because various people had called it a diamond, not because I thought it would be even slightly reasonable to call it such a thing.
Exactly. This is an article relating to crystalline carbon and you claim it to be degenerate matter. It isn't and it can't be.
Take a FAIL point, dude. The "new kind of physics" you are seeking hides behind the terms "compact star", "electron degeneracy pressure" and more generally "degenerate matter". You should also take a tour around "Pauli exclusion principle" for a minor diversion and a bit of background.
Take a fail point yourself. Electron-degenerate matter is unable to form any kind of chemical bond (its electrons are unbound, not attached to nuclei) and is therefore unable to crystallise.
it is similar to the effective technique used by amateur astronomers to get rid of atmospheric blur, and get pictures of Jupiter and other planets that rival ones taken by the Hubble.
While there's obviously some borrowing of conventional image-stacking techniques this isn't a simple evolution of prior art - it's perfectly justified to consider it a novel approach. Conventional image stackers depend on an implied context that is common to all images - same target, detector, resolution, filters and intensity levels, for starters. While current algorithms can compensate for target placement and image rotation artifacts, you are best keeping those as near the same as possible at capture too, since it gives greater certainty to the analysis.
The novel step here isn't the image stacking per se, it is the combination of images where none of that shared context holds, using the information present in each image even if it is not shared by the others. For example, purely by eye I would suspect the image in the lower left of the example composite includes an IR component not present in the other images. Conventional stacking would simply discount and throw away that image, since the computed Strehl ratio would be much lower than the others. However, if you look at the composite you can see that it does contribute fresh information not present in the other images.
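For contrast, conventional stacking really is as simple as its assumptions allow. A minimal numpy sketch of my own (registered frames of the same target assumed):

import numpy as np

# Frames share target, detector and filters, so after registration a
# plain average beats the noise down by roughly 1/sqrt(N).
def stack(frames):
    return np.mean(np.stack(frames), axis=0)

rng = np.random.default_rng(0)
truth = rng.random((64, 64))  # stand-in "true" image
frames = [truth + rng.normal(scale=0.3, size=truth.shape) for _ in range(50)]
print(np.std(frames[0] - truth))       # single-frame noise, ~0.3
print(np.std(stack(frames) - truth))   # stacked noise, ~0.04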
I'm waiting for an opinion piece by someone at The Register saying that the Spot is really gaining size, and that the methodology Hubble used to measure its size is discredited. Also some shite about a hockey stick.
Which wouldn't necessarily be shite in and of itself. We have observed the spot completely disappear at several points in the past, obscured by higher-level cloud. No, I haven't looked at the research in detail, but there had better be more substance behind it than a few photos.
12 billion light years = 8 billion years?
Is that right? If so, why?
Because the Universe is constantly expanding, which affects how far the light must travel even while it is en route, and the effect gets ever more pronounced the further away you look. The galaxy in question here has a redshift of 4. That means we are seeing it as it was 12 billion years ago, because the light has been travelling for that long. However, when that light set off we were only around 4.7 billion light years away - the present-day distance divided by 1 + z. The galaxy itself is now roughly 23.6 billion light years away.
Don't worry if it makes your head hurt - at these distances astrophysicists use redshifts almost exclusively, in part because a single metric avoids these kinds of ambiguities.
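You can check the numbers yourself - a quick sketch of my own using astropy (Planck18 parameters are my assumption, so the figures come out slightly different from the 23.6 above):

from astropy.cosmology import Planck18 as cosmo
import astropy.units as u

z = 4
print(cosmo.lookback_time(z))        # ~12.2 Gyr: "12 billion years ago"
d_now = cosmo.comoving_distance(z)   # distance to the galaxy today
print(d_now.to(u.Glyr))              # ~23.9 billion light years
print((d_now / (1 + z)).to(u.Glyr))  # ~4.8 Gly when the light set off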
One final point to clarify my original post - the actual paper is perfectly correct, it's the press release that is wrong.
...although in this case the error lies squarely with the press office of the Carnegie Institution for Science rather than the re-reporting of it.
12 billion light years away? That'll be a little over 8 billion years ago then, when the universe was already 5 billion years old and fairly mature, not a mere 1.6 billion years. How did that get through peer review? DID it get through peer review? No: let's have a look at the paper and we see they quote a redshift of 4, which correlates to 12 billion years ago but a distance of almost double that, once you correct for expansion during the interim.
Sigh. How much work went into this? How many hundreds of thousands of dollars? And then at the final hurdle the publicly announced results are Bowdlerized by some English or Media Studies graduate working at the press office.
a Jupiter-mass object out to 1 light year (63,000 AU), where it would still be within the Sun's zone of gravitational control. A larger object of 2–3 Jupiter masses would be visible at a distance of up to 7–10 light years."
Is that the sum total of your evidence? Quoting Wikipedia verbatim about the technical capabilities of WISE, rather than what it has actually been used for to date?
In other words you are completely ignoring the work that the WISE team have done and announced now, restricting their claims to what may legitimately be claimed based on that work. Instead you have substituted what WISE is theoretically capable of, as if once you have the instrument you don't even need to turn it on to observe the null result. That isn't a scientifically robust argument; you wouldn't even accept it in everyday conversation.
Nemesis remains a fringe theory because zero evidence has been turned up in its favour. And it will rightfully remain a fringe theory.
You are overlooking several factors here. Firstly you choose to ignore the fossil evidence that led to the hypothesis being proposed in the first instance. You ignore the geological evidence to the same effect - sure, it is a little sketchy, but it is highly suggestive. You are completely ignoring the fact that it has been published repeatedly in peer-reviewed journals. You ignore that process of expert review - reviewers who concluded the theory had sufficient merit to be published - because you know better than them.
So, you ignore or dismiss evidence that is contrary to your position. You throw in irrelevant factors that do nothing to support your case as if they were final trump cards. You ignore the opinions of experts. Those are the hallmarks of a scientifically illiterate crackpot theory, not a properly published, legitimate proposal. Just because the theory naturally appeals to the "end of the world is nigh" brigade doesn't make it any less credible.
One thing the Nemesis hypothesis has always been very clear on is the size of the orbit - in order to get the period right it needs a semi-major axis of around 95,000 AU. These chaps can make meaningful assertions up to only 42% of that distance and less than 7.5% of the volume of space, and this is somehow "proof"?
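The volume figure follows directly from the distance figure, since the surveyed volume grows as the cube of the reach - trivial to check:

reach = 0.42       # fraction of the 95,000 AU semi-major axis surveyed
print(reach ** 3)  # 0.074 - i.e. less than 7.5% of the volume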
Yes, Nemesis is unlikely but it is a legitimate minority opinion, dismissing it as crackpot science is in itself a demonstration of scientific ignorance, since the whole idea is surprisingly and annoyingly difficult to conclusively disprove. In their eagerness to "prove" the falsehood of the theory they are guilty of far worse junk science.
There's a few ISPs out there whose authentication procedure is "ah, you're coming from that wire. You must be genuine."
Perhaps on cable, but nowhere for DSL modems. That's the wonder of local loop unbundling, DSLAMs and MPLS. Without the correct virtual channel identifier (not set by default, except possibly on ISP own-brand - as opposed to merely ISP-supplied - kit) it has no idea where to go. VCI 0 is generally a BT Openworld default, a "Your router is misconfigured" thing.
No "small?" No "Big?" Can we at least have "massive" blackholes in between the "intermediate" and the "supermassive". After all, "super" implies bigger, better, above or higher in a sequence so tacked onto massive implies that massive must come before super while massive itself creates a mind image of something humungously bigger than some puny "intermediate" thing so we ought to have "big" in between them too.
You are missing a step between "micro" and "intermediate": stellar-mass black holes. Those are typically in the tens of solar masses, as distinct from "micro" black holes (around molecular mass), which are purely theoretical but at the root of all those "end of the world is nigh" trolls whenever a new particle accelerator is built.
Nor is there any gap between "intermediate" and "supermassive". That is the scientific rather than everyday sense of massive, i.e. "has mass" as opposed to "very big". Thus a supermassive black hole has a lot of mass, but even the micro size is massive.
I've been in places where they relied on old programs because they were too tight to get a replacement written. I have warned managers of the need to migrate from DOS and 16-bit applications for safety and maintainability too - until I gave it away as a waste of time in 2004!
In my opinion, it is those managers who put their companies at such risk that should be fired, pronto.
In that case it's simple: we'd never hire you.
As a slightly different but essentially identical example: I work at a university where we have a telescope mount controlled from an old XENIX app. We were able to upgrade that to OpenServer 5, but even that is 15 years old now. Anything more recent via XENIX emulation simply doesn't work, since the app needs semi-direct hardware access via ioctls. Cost of a new and equivalent mount? £130,000. Are you offering to stump up?
That's still small fry: Royal Mail, for example, have hundreds of millions tied up in Integrated Mail Processors. There's a fancy front end on some of the newer machines but underneath it all they are still DOS-based. Are they supposed to scrap that investment on your whim too?
Remember, you haven't qualified your statement in any way, so you are either ignorant of reality or chequebook-happy. There are plenty of apps out there tied to one specific platform or another for any number of reasons. Simply wishing them away doesn't work.
You'll easily be able to point out which law prevents publicly accessible web-pages from being scraped then?
You mean, like, I don't know, the Berne Convention? It's only been law for 100 years or so. The law does not change according to what you want it to say.
The robots.txt protocol is indeed a bit coarse, but I understand that the propositions from Google to the EU included some sort of mechanism to give websites more control over what data Google can grab and display.
I don't know why I'm speaking up for Murdoch here, but the law is clear - robots.txt has exactly zero legal clout.
Ah, two downvotes already for stating ".docx has been around since Word 2007, i.e. 3 releases, hardly changing "every version".
Can my downvoters get both hands on the keyboard for a minute and let me know which part is misleading or inaccurate? :)
.docx is not a format. Neither was .doc, which preceded it. It sprouts new bells and whistles every version, which is why you have problems using the same file between releases. You can't even trust pagination of the same document to be the same between two different versions.
Why are people down voting you for pointing out that fact...The internet is a funny place.
So, the recommendation is to brick these routers by installing a firmware they are not capable of running? A sledgehammer is a quicker and functionally identical method of "fixing" this issue.
Although not responsible for the original downvote, I get tired of this relentless "DD-WRT is great" bullshit - in particular the idea that a $50 consumer-grade device becomes a $1000 enterprise router with a change of firmware: "See, it does everything that this more expensive router does".
Apart from simple performance, of course - packet throughput is frequently less than 1% of the more expensive device's. It's frequently much worse than even the original firmware - those extra functions don't come for free but take extra processing time. This is leaving aside that third-party firmwares, DD-WRT especially, usually aim for device coverage as opposed to getting things working properly on any single device. That frequently means a less powerful wifi signal if the antenna is not optimally configured. How many open source developers wanting a cheap, capable router have access to an EMI testing lab? That'd be none of them.
Yes, DD-WRT has its place, but all too often it is advocated in an axiomatic fashion by the relentless fiddlers - like here, for instance, where the router does not support it. Too often it simply devolves to "See, look what I've done, aren't I clever?" when the reality is that no extra functions were needed, so it is actually "I've made my router slower and less powerful to show how clever I am".
Even state-owned universities in the US get most of their income from tuition fees, endowments and alumni. Very little state subsidy - it's anti free market, you know. What there is tends to be in the form of grants for specific research projects in the same way as many companies sponsor research.
No, as I said, do the sums. Two observations is all that is needed for a parallax measurement of a single star. According to the Tycho-2 catalogue there are 5,227,058 stars between 9th and 12th magnitude. That makes 10,454,116 observations - a very large haystack in which to find one very small needle.
Look at the list of nearby stars - they were generally found not by parallax but by proper motion, with parallax only then used to measure the distance to the star in question, i.e. once it had already been identified. The problem with Nemesis is that it would have the same proper motion as our own solar system, and therefore not be moving with respect to us, save for a very small orbital movement which to all intents and purposes looks the same as that of a much more distant star.
As for the recent detection of that brown dwarf, that is precisely what I alluded to in my earlier post: you can detect those "easily" by looking for a bright IR source with no corresponding visible source. You can't do that for stars on the main sequence.
Both red and brown dwarves have been proposed at different points. The brown dwarf alternative is increasingly difficult to defend since infra-red sky surveys should have picked it up by now. The red dwarf alternative is much more difficult to dismiss, since a nearby red dwarf to all intents and purposes looks identical to a much more distant red giant. To distinguish between them needs full spectroscopy of each candidate star individually, which is both costly and time-consuming.
The magnitude range for a red dwarf has been estimated at between 9 and 12. That covers millions of stars. You can dismiss a lot of those immediately because they are the wrong overall colour (i.e. not red) but it still leaves you with tens to hundreds of thousands of possible candidates.
A lot of people have been very dismissive of the Nemesis hypothesis over the years. Personally I've always found it unlikely, but been unwilling to rule it out entirely. Do the sums and a red dwarf could in fact be very close by, yet next to impossible to distinguish from all the other background stars.
There was an awful lot of double-bluff going on. Letting Coventry be bombed to bits was the most notable case, but there were plenty of others. One tactic was to thoroughly depth-charge an area of sea that was presumed to be empty. This had the effect of a) suggesting faulty intelligence at work and b) ensuring the co-ordinates so bombed - which they made sure were within sight of German intelligence - could be used as a fresh crib to break a new day's codes.
The SDK and kernel are two separate bodies of code. What the kernel's licence says is a complete irrelevance to the licensing of something else.
I've just given you another one. If you'd read the comments you'd have seen that someone found the exact device around an hour after this went up. It's an Ethernet transformer. You don't get a thumbs-up for going to the Delta website and plumping for the first thing mentioned on the front page with a "that's close enough" attitude.
Actually NT 3.x was a relatively pure microkernel system with external device drivers. It was only with NT 4.0 that the video driver was moved back into the kernel to boost performance.
The sensor cells are on a 4.3µm pitch but they are binned for movie mode - you only get 1080p resolution. Have a nice time enjoying those three-pixel images.
I was one of the thumbs down. I didn't comment at the time simply because I couldn't be bothered to go through the sums to show why.
DSLRs _are_ completely unsuited to planetary imaging. It isn't just the lack of movie mode on many models - sensor size is an equally important factor, and one inherent in the format.
Since we're talking about the Canon 550D, consider that it uses a 22.3mm sensor. This is a comparatively large sensor that is great for photographers since it reduces the crop factor, but it is a positive disadvantage in this application. The pixels come out on an effective 13.8µm pitch once binned for movie mode. A prime-focus image of Ganymede at opposition (the most favourable time) is 25µm in diameter even at the relatively generous 2400mm focal length of a Celestron C11. That makes a prime-focus image area of less than 3 pixels. No amount of post-processing can do anything with such limited information - even calculating the Strehl ratios (the first stage of the stacking process) is more speculation than concrete mathematics.
Compare with one of the webcam-derived imagers such as the NexImage 5, with a pixel pitch of 2.2µm. That gives you an image area of 101 pixels. That's enough for the stackers to begin working profitably with the data collected. The effective resolution can then be boosted by perhaps a factor of 10 or more as a side effect of the stacking process, depending on how many images you have to work with.
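The sums are one line apiece if you want to check them (my own sketch, using the 25µm disc diameter from above):

import math

# Area of the planetary disc, expressed in sensor pixels.
def disc_area_pixels(disc_um, pitch_um):
    return math.pi * (disc_um / pitch_um) ** 2 / 4

print(disc_area_pixels(25, 13.8))  # ~2.6 px: "less than 3 pixels"
print(disc_area_pixels(25, 2.2))   # ~101 px: the NexImage figure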
In practice, however, you'd want to use eyepiece projection on both imagers rather than prime focus to get a decent resolution in the first place. That's not a problem with the small sensors of the webcam imagers, but if you extend the image over the much greater area of a DSLR sensor the curvature of the focal plane is magnified and completely takes over. The sensor is flat, but the resulting focal "plane" is not. As a consequence only a portion of the image can be in focus at any given focus setting.
The OP's comments do not read as a suggestion of appropriate tools based on knowledge and experience, but as a case of techno-lust - looking around for relatively high-end equipment and assuming it must be exceptional regardless of the circumstances. I wouldn't knock that camera for wide-field work, but it is a patently ridiculous suggestion for something like this.
There was no race to the Moon. After they had men in orbit the Russians looked at their next goals and decided against a manned mission to the Moon. It is only American propaganda that portrays a "race" to the Moon - a race with precisely one competitor. A race which ultimately led nowhere: it is 40 years since they were last there, they couldn't go there today, and they haven't been anywhere else since.
In contrast, ventures such as the ISS would have been completely impossible had it not been for decades of Russian experience and expertise developed on their own space station programme.