Re: Why a hexagon?
I stand corrected - you're probably right: http://en.wikipedia.org/wiki/Saturn's_hexagon
Re: Why a hexagon?
A very good question.
From the article: NASA explains that the storm "folds into a six-sided shape because the hexagon [in the image] is a stationary wave that guides the path of the gas in the jet".
Which essentially says that it's six-sided because it's a hexagon! I've added the "[in the image]" qualifier because without it the statement appears to redefine "hexagon" to mean "a stationary wave".
So why is it hexagonal? Well, I suspect that it's just an imaging artifact arising from the photo being a composite of six images. The entire polar hemisphere can't be photographed in a single shot: half of it will be in shadow, and two thirds of what isn't in shadow will still be significantly darker than the middle third that most closely faces the Sun. To get an evenly illuminated image, as shown, you need to take multiple photos and stitch them together.
So there are not six nodes but just one, probably due to solar heating at the boundary of the storm on the side facing the Sun, with the result that the boundary shifts a few degrees of latitude further from the pole but stays in the same place, i.e. facing the Sun, as the planet rotates.
Re: Please!!! Won't somebody think of the children?
How can you express annoyance at people's inability to understand the difference between the omissive and possessive apostrophe yet enjoy 'eventuate'?
Ahh... I think I understand now.
Satellite bathymetry sounds complicated...
...except it essentially just means extrapolating water depth from surface colour.
"...the Red Planet
may be have been more volcanically active than originally believed."
Re: But faster than light time becomes imaginary
The Lorentz formula for relativistic time dilation, which is basically just the summing of space and time movement vectors using Pythagoras's right-angle triangle formula, tells us an awful lot of really interesting things about space-time.
Perhaps the most important thing it shows is that our movement through space and our movement through time are similar enough phenomena that the two vectors can actually be summed: space and time are not two entirely different things, for if they were then the vectors couldn't be summed to reach a meaningful answer. Relativistic time dilation has been proved to work in accordance with the Lorentz formula, probably the best known example being the timing allowances that need to be built into the GPS satellites to account for both gravitational and relativistic time dilation.
The nature of the Pythagoras/Lorentz formula is such that there is no scope for discontinuities in the curve produced, with the consequence that it sets upper and lower bounds for the rates of movement through both space and time: we term the fastest you can go as 'c' and the slowest is zero. This means that your individual vectors through space or time can't exceed 'c', or go slower than zero within the dimensions being summed, or in other words, you can't exceed 'c' in this universe because, as far as we can tell, Pythagoras/Lorentz always applies.
For anyone who's interested, take the Lorentz formula, simplify it to produce a factor by removing the relative terms, and then, using normalised values (0 = 0, 'c' = 1), punch it into a spreadsheet and plot the results - you'll get a quarter-circle arc of radius 1 ('c'). So, so long as Lorentz holds true, we exist only on that one-dimensional arc, curved through space-time; never inside or outside.
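If you'd rather skip the spreadsheet, here's a minimal Python sketch of the same exercise (the function name time_rate is just mine, for illustration):

```python
# Normalised Lorentz factor: for a speed v through space (as a fraction of
# c), the rate of travel through time is sqrt(1 - v^2). Plotting the (v, t)
# pairs traces a quarter-circle arc of radius 1.
import math

def time_rate(v):
    """Rate of passage through time for a normalised speed 0 <= v <= 1."""
    return math.sqrt(1.0 - v * v)

for i in range(11):
    v = i / 10.0
    t = time_rate(v)
    # Every point lies on the unit circle: v^2 + t^2 == 1
    print(f"v = {v:.1f}  t = {t:.3f}  v^2 + t^2 = {v * v + t * t:.3f}")
```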
Once you've done that, you can then start speculating about the rest of the circle ;-)
Re: Well, someone will have to mention the SFX in 2001...
Nope, I didn't mean the slit-scan sequences but the model and instrument display graphics work, which still doesn't look dated and which made early real CGI attempts look pretty poor in comparison (compare the high-res instrument displays in 2001 with the relatively low-res efforts in Star Wars etc.).
Thanks for the explanation on slit-scan photography but I actually coded some stuff to churn out animation frames a few years ago.
Not quite invisibility
This isn't really invisibility because whilst it may cancel reflections it doesn't substitute a replacement image. To use the Harry Potter analogy, when he put it on you'd see a pitch-black silhouette instead of what's behind him. As far as radar goes, it would probably be OK for above-the-horizon targets because you wouldn't get a return from the sky anyway, so the 'hole' wouldn't be apparent. For targets below the horizon, though, where a discriminator is needed to pick the target out from the background returns, that 'hole' might be noticeable, at least at relatively short ranges.
Re. "bypass[ing] obstacles" yes, it could certainly act as a repeater in the radio spectrum, and by messing about with phasing it could make itself appear to be bigger or smaller, or in a different place, but then that's not making it invisbile.
As for making it work in the visible spectrum? Well, I think they've got their work cut out there. With enough processing power you could make it work for _one_ point of view, but not for many points of view because the system would need to simultaneously project different backgrounds, not only for different directions but also for different perspectives, which isn't simply a processing issue; each single (re)emitter in the array would simultaneously need to send different signals corresponding to all the possible viewing positions. For example, if we imagine we're using an array of LEDs to display a visible spectrum replacement image then each individual LED would need to appear to be a different colour and brightness depending upon where it's being viewed from.
Well, someone will have to mention the SFX in 2001...
...so I just did
Re: There's science and then there's wild guesswork
I don't think we need to question our understanding of how planets form to answer this; all that's needed is a sequence of interactions with other bodies in the system.
In fact, the solution is implicit in the question: if the planet formed further out from its star, as we believe it must have, then what caused it to migrate inwards? The only plausible* explanation is that an interaction with another** massive body in the system disrupted the planet from the stable orbit in which it formed and sent it inwards towards its star. Whilst on its way in, it must then have interacted with a third body in such a way as to have its course changed for a second time, leaving it in its current orbit. Incidentally, due to the equal-and-opposite etc., the second interaction would have also left the third body in a modified orbit, moving it either further out or further in, and if inwards, considering the distances, it probably collided with the star.
It's currently considered quite possible that the bodies in our system didn't originally form in their current orbits: the gas giants in particular are now thought to have formed much closer to Sol and to have subsequently migrated outwards to their current orbits.
* plausible = non-artificial
** There's a small chance it could have been a 'run-away' extra-system body just passin' through.
Re: He's right.
I'm _guessing_ that perhaps Linus's annoyance with undiscoverable CPU features isn't that they're not documented but that he can't use a small number of methods or functions to identify them and instead needs unique code for each one. I'll swiftly add that this isn't due to laziness but because of the increased amount of code that needs to be maintained, and with more code you have a greater likelihood of errors.
In coding terms it's a bit like having a line of code for every data record you need to process instead of processing all data records in a single loop.
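To labour that analogy with a trivial (entirely made-up) Python sketch:

```python
# One line of code per data record versus a single loop over all of them.
records = {"alice": 10, "bob": 20, "carol": 30}

# The brittle way: a separate line for every record, so every new record
# means another line to write, test and maintain.
total_by_hand = 0
total_by_hand += records["alice"]
total_by_hand += records["bob"]
total_by_hand += records["carol"]

# The maintainable way: one loop handles however many records there are.
total_by_loop = sum(records.values())

assert total_by_hand == total_by_loop
```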
However, this is all nothing to do with what the article was about, which was, essentially, is he getting too stressed out?
Well, there's no doubt that he's in a _very_ stressful position: the Linux kernel runs on a wider range of h/w than any other other bit of s/w that's ever existed and trying to coordinate, collate and implement the correspondingly wide amount and variety of code modules submitted by an equally wide variety of code submitters, from individuals to global corporations, can be nothing but stressful.
So is he getting too stressed out? No: what he said was not to be taken literally and was clearly a tongue-in-cheek gesture to express his annoyance, the severity of the gesture indicating his degree of annoyance. This is something that every human being does from time to time: someone does something a bit stupid or thoughtless and the person who is annoyed by it replies with a deliberately irrational, out of proportion and over-the-top response. It's no different to that young chap who threatened to blow up an airport because of repeated delays due to the volcanic ash cloud, or Jeremy Clarkson suggesting that certain people who had annoyed him should be shot in front of their families.
Personally, I'm more concerned with the mental wellbeing of people who appear to be either incapable of discerning the real meaning of these gestures or, even more worryingly, use them as the basis for an attack on someone. Why even more worryingly? Because if you feel entitled to launch an attack on someone whom you don't know because of something they said to someone else whom you don't know then you must be _really_ fscked up.
"In small stars, the gamma rays created in the explosion take a short time to reach the edge of the star, where they are propelled into space. With huge stars, the explosion takes much longer to travel through the star, meaning the gamma ray burst is longer."
Not quite right, I'm afraid.
It's correct that the larger the star, the longer it will take for the effects of an 'explosion' (not really the right word to use) at its center to propagate to the star's surface, but what we're talking about here is the duration of that 'explosion', or in other words, how long the 'explosion' went on for.
Whilst it's possible that the passage of the initial effects of the event changed the characteristics of the star as they passed through it, it's extremely difficult to imagine what sort of change could have occurred that would've slowed the effects from the final part of the event to the degree observed; in short, it would've taken roughly the same time for both the first effects and the final effects to make the journey.
To be sure though, a larger star will be required for a longer event; a smaller star simply would not have enough matter to sustain one.
Since no mention was made of the relative intensities of the gamma rays from this long burst compared with a more typical short burst (adjusted for distance, of course), then assuming the intensities were similar, the rate of matter-energy conversion was about the same, which means that more matter had to be converted over a longer period of time, which means a larger star. If, however, the intensity of the longer burst was relatively greater, it means that a higher rate of matter-energy conversion was going on, and as the period of time remains unchanged, the star must have been bigger still.
Re: QNX Photon microGUI
If RIM could be persuaded to open-source QNX it would certainly shake things up.
Re: There is only one thing a text editor needs
...A journal. I remember (I blame the COBOL thread for making me reminisce) the 'full-screen' editor on a DEC VAX 11/780 that simply journalised every editing action. Was fun to crash out of a long editing session, restart the editor and sit back and watch it re-apply every key-stroke up to the point of the crash. Did rectangular-block copy&paste too.
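A toy sketch of that journal-and-replay idea in Python - nothing to do with the actual VAX editor, just the principle:

```python
# Toy journalling editor: every editing action is appended to a journal as
# it happens, so after a crash the buffer can be rebuilt by re-applying
# every recorded action in order.
import json

JOURNAL = "session.journal"

def record(action, journal=JOURNAL):
    """Append one editing action to the journal before applying it."""
    with open(journal, "a") as f:
        f.write(json.dumps(action) + "\n")

def apply_action(buffer, action):
    """Apply a single action to the text buffer (a list of lines)."""
    if action["op"] == "insert":
        buffer.insert(action["line"], action["text"])
    elif action["op"] == "delete":
        del buffer[action["line"]]
    return buffer

def replay(journal=JOURNAL):
    """Rebuild the buffer from scratch by re-applying the whole journal."""
    buffer = []
    with open(journal) as f:
        for line in f:
            buffer = apply_action(buffer, json.loads(line))
    return buffer
```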
Re: 'mind-crippling' COBOL
Whilst I'd agree that 'writer's cramp' wasn't such a problem after terminals arrived, that was pretty late in the day; for years it was writing your source on box-paper, to be handed over to the punch-room where, although likely as not the punch-girl would spot that you'd missed a full-stop in the Identification Division but was forbidden to correct it, your opus would be faithfully rendered, over the course of a day or two, into a monumental stack of punch-cards which, when compiled, produced a goodly quantity of processed trees, manuscript style, telling you several thousand times that there was a compilation error.
I still like COBOL though, a lot: it's a pretty damn good language for collating data. Loved Redefines in the Data Division. Although the source is very wordy and precise, it compiles small and efficient.
(Oh goodness - just remembered Segmentation - actually had to use it once!!)
QNX Photon microGUI
Anyone remember the QNX Photon microGUI that was part of the QNX demo floppy?
Debian has supported Arm since 2.2 'potato' released 2000-08-15. Debian is a lot more 'major' than you might think.
Accept a well earned pat on the back...
...for a very good article.
Missing the point, I think...
Where I think everyone's missing the point, even ARM perhaps, is that everyone's still thinking in terms of the current paradigm. When the maturity & process differences are taken into consideration the ARM chips offer the potential of 10^2 times as many processing cores as the x86 architecture does, for the same practical considerations, and what this means is that 10^9 core MPP systems are now viable. At that scale we can drop the Turing Machine model, where processing is separated from data, and progress to an Object Machine, where every element of data is an active processing element - data that finds and synthesises its own associations with other data elements.
At this point, software evolution can occur, in the literal sense, where you can start by generating random sequences of logic and see if they do anything useful and then randomly modify those sequences to see if any of them still work, and if they do, do they work better, or do they do anything new? So the issues about MS Vs. Linux (to symbolically represent all such arguments) are going to become moot in the not too distant future, because all software will be written by software.
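A toy Python sketch of that evolve-by-random-mutation idea - evolving a bit-string towards an arbitrary target, purely to illustrate the generate/mutate/keep-if-better loop:

```python
# Toy "software evolution": start from a random bit-string, randomly mutate
# it, and keep the mutation whenever it scores at least as well as before.
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]   # stands in for "does something useful"

def fitness(candidate):
    """Count how many positions match the target behaviour."""
    return sum(1 for a, b in zip(candidate, TARGET) if a == b)

def mutate(candidate):
    """Flip one randomly chosen bit."""
    copy = list(candidate)
    copy[random.randrange(len(copy))] ^= 1
    return copy

best = [random.randint(0, 1) for _ in TARGET]   # random starting 'logic'
for _ in range(200):
    challenger = mutate(best)
    if fitness(challenger) >= fitness(best):    # does it still work? better?
        best = challenger

print(best, fitness(best))
```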
Around the same time, if not before, extremely high-precision analogue electronics will be combined and integrated with digital electronics so that we can do not just binary digital processing but higher-base hardware processing, which means that you could, for example, feed a base-3 digital processor a compatible pair of base-2 & base-3 instructions at the same time and get two completely different and correct answers to two completely different tests. This is still the same old data processing paradigm; the next paradigm will be a different way of dealing with data.
Anyways though, the ARM architecture will get us to those 10^9+ MPP systems we really need for a start.
It might work for them...
...because, in the field of MPUs, at this point in time, there are two distinct growth curves to be considered; those of application processing and storage processing.
Intel's high-performance chips are ideally suited to application processing because application processing is still largely linear and intrinsically unsuited to parallel processing. AMD's new low-power chips, on the other hand, are ideally suited to parallel processing jobs like handling storage.
Right now, and until there's a big breakthrough in AI, the growth in demand for application processing will decline whilst the demand for storage processing will grow.
In this context, it's interesting that both companies are heavily promoting future low-power MPUs; Intel with its Atom and AMD with its ARM Opterons, and what's significant here is x86. x86 is ubiquitous and the architecture that everything else is compared with, but there's no way that x86 will be the architecture in use in 20 years' time; the problem for Intel is that they only want to do x86, something that will eventually be supplanted.
Re: Ummm. Sort of helpful
What this article did well was to separate the quantum computing device from the IO device; the tricky bit isn't the QPU calculating all the possible answers, but rather the selection of the particular answer you want.
Think of it this way: the answer you get from a QPU is a 2-D area of smoothly blended different colours but the answer that you need is in the form of a 1-D line on a graph; there are infinitely many 1-D lines within that finite 2-D area, so the tricky bit is pulling out the particular 1-D line, from all the other 1-D lines, that you need to get the answer to the particular problem you're trying to solve.
Re: This may or may not be a great article
As someone who has read the Reg' since about two months after MM started it, and spends an awful lot of time thinking about this theoretical physics stuff, I think this is certainly one of the best articles El Reg' has ever done.
...not to mention the > 30000 firearm related deaths in the U.S. every year.
According to http://www.cdc.gov/nchs/fastats/injury.htm the total number of firearm related deaths in the U.S. for 2009 was 31,347.
Which is the greater U.S. tragedy; the ~3000 killed by the 9/11 terrorists or the 30000+/year killed by each other?
Sensationalism in science
If there's one thing that's sure to rub me up the wrong way it's the sensationalising of science. Science is intrinsically pretty damn sensational, but it does take an adequate understanding of the science to perceive just how sensational it is. Unfortunately, this article seemed to focus more upon sensationalism than understanding and I suspect that more was said in this Reg article, in pursuit of sensationalism, than was present in the Science publication.
For example, to say "a third of these pairs are eventually expected to merge..." is fair enough but to continue with "...as the vampire consumes the last of its hapless friend." is not only sensationalist but a bit dubious. The problem here is that it gives the impression of the O type star gradually vanishing as it is 'hoovered' up by the companion when what will actually happen, as the companion star acquires mass from the O type star, is that both stars will expand and eventually merge. The companion star will expand because of the additional mass it has acquired and the O type star will expand because the reduction of its mass will mean less gravitational pressure upon its core, countering the outward pressure from the fusion going on within. The outer atmospheres of both stars will merge first whilst the two inner cores spiral in towards each other, also to finally merge. However, this will be quite an eventful process, as both stars will have to constantly re-balance gravitational pressure against fusion pressure.
Re: How the Pros Do It
Whilst the thermite scheme will probably work it's a bit of a BF&I solution.
I like the idea of using a smaller motor to start the main motor though: form a miniature version of the main motor fuel charge that sits just inside the main combustion chamber and seals the main motor nozzle, facing backwards so that it fires towards the main motor fuel when ignited. Even if the seal isn't perfectly air-tight, it would provide enough of a seal to prevent the excessive expansion and cooling of the fuel vaporised by the igniter.
You'd need to calibrate the miniature so that it burns just long enough to start the main motor fuel before burning itself out to clear the nozzle; you could also add some thermite to the mini charge just for good measure, the thrust from the mini charge ensuring that the thermite is directed towards the main fuel instead of possibly just falling away (due to the motor being angled upwards on the launch rod, which it will be when Vulture2 is actually launched).
More balloons, but not for lifting...
On the basis that the problem is due to the low ambient pressure resulting in the too rapid dispersal of the initially vaporised fuel, before it can ignite the rest of the fuel, I started thinking along the lines of using an equalising chamber to maintain pressure in the combustion chamber. For example, imagine two small balloons attached neck-to-neck, with one balloon inside the chamber and the other threaded through the nozzle to the outside, this double-balloon then being inflated to just above ambient ground-level pressure: as the ambient pressure drops both balloons will start to expand, the inner balloon expanding into and filling the combustion chamber and maintaining pressure within it. Once the rocket motor has fired I wouldn't expect it to have problems burning through the balloons as it's got to burn out the igniter wire anyway.
The tricky bit is ensuring an air-tight seal around the ignition wires and the balloon neck as they pass through the nozzle and enter the combustion chamber although as the balloons expand they'll tend to occupy any gaps and improve the seal. Of course, you'd also need to take into account the fact that the igniter wire will have to run around the inflated balloon within the combustion chamber i.e. against the walls which will mean more wire in the chamber than usual.
Alternatively, if you think that the motor would be able to burn through a fine nylon mesh, just stick a small balloon inside the chamber that's prevented from expanding out through the nozzle by a fine nylon mesh placed over the inside of the nozzle opening. However, the same issues re the routing of the igniter wire apply.
Re: Echo and SCORE
This rang a bell with me too. After a quick bit of digging around I think it was project "West Ford" that we're remembering. Wikipedia link: http://en.wikipedia.org/wiki/Project_West_Ford
Worrying in many ways
..."knowledge economy." This is a very scary idea because if it is to work then it'll mean restricting access to knowledge; if we think that patenting software is a bad idea just wait until the rules are rewritten so that knowledge can be patented too.
"...from friction comes life." From friction comes heat; energy that is wasted and, due to entropy, lost forever.
'"Those documents that have come into circulation, thanks in large part to the wcitleaks website set up to publish them, could alarm "credulous members of the public," Touré said...' If you're doing something that may alarm people there're two ways to handle it: either publish a credible and binding explanation to reassure everyone or just try to demean and discredit anyone who shows concern.
The pictures don't do it justice
The artist(s) who produced those pictures just didn't grasp the size of gas giants. To say that the gas giant, when seen from the rocky world, would be much larger in the sky than those images suggest would be a gross understatement; the gas giant would fill most of the sky.
Unless I'm missing something here...
Doc, Happy, Bashful, Sneezy, Grumpy, Sleepy, and Dopey == dwarves != leprechauns
I think that this is the most significant issue. However, I believe that one of the benefits that Intel has claimed for MIC is that it'll run existing x86 code, which implies that it has the 'full' x86 architecture.
Having said that I guess it's possible that they've removed some of the more general purpose underlying hardware to make it more efficient in HPC and then re-implemented around the missing hardware via slightly less efficient and more complex microcode to make it fully x86 compatible again.
Either way though, it doesn't seem that it can excel in both scenarios.
Re: Just seems like a not very good idea
Your paranoia is showing through there; you've got yourself all in a tizzy about RISC vs. x86 when that wasn't what I was talking about.
x86, as an HPC 'co-processor' doesn't seem like a good idea because it's a general purpose instruction set; HPC is all about efficiency but incorporating a large and comprehensive instruction set, a significant proportion of which won't be used in typical HPC workloads and which just wastes space, is inefficient.
You are aware that, for a considerable number of years now, x86 has been implemented via microcode on what is essentially (but not purely) 'RISC'y hardware? Furthermore, if Intel were so satisfied with x86 why did they bother with iAPX, i960 and IA-64? Do you think that all those organisations that are using large systems based upon POWER and even SPARC are doing so just out of spite and resentment at x86?
No, x86 is fine for general purpose computing, even though it _is_ clunky and inelegant, but this is because it's comprehensive, well known and widely supported. Only a tiny proportion of people involved in software development (mostly compiler devs) need to care about the underlying architecture; the rest of us only interact via relatively high-level abstractions.
Btw, similar arguments apply to the embedded space too; this is another area where x86 doesn't make much sense and isn't widely used (the i960 did make some inroads into this area but it is now largely dominated by ARM and IBM Power architectures).
Just seems like a not very good idea
I won't go as far as saying it's a bad idea but the Intel MIC just doesn't make sense to me:
i) Using x86 architecture cores for HPC; isn't that going to mean a lot of redundant x86 instructions that need to be implemented in the hardware/microcode but which will be rarely used in typical HPC workloads?
ii) 8GB RAM seems grossly inadequate for 50+ x86 cores; for 50 cores that's just 160MB/core, which doesn't seem enough if they're going to be running x86 application code, which itself will need some degree of OS support running on the same core and sharing that 160MB allocation.
iii) The benefit of being able to run existing x86 code unchanged on the MIC seems highly questionable and hardly efficient. Is this not a brute force approach when a more elegantly designed MPP solution would be far more efficient?
Intel are very clearly not stupid but I'm beginning to think that, having failed to replace x86 themselves, with their own iAPX, i960 & Itanium IA-64 architectures, the only horse they have left to flog is the ageing and inelegant x86. I can't really see this as a viable long-term proposition.
Re: Balloon to Platform... twist of cable, string, and rope...
Any sort of "rope" will be far too heavy, and a 'climbing' quality rope unnecessarily expensive as well.
I've previously suggested Dacron Big-game fishing line - a quick check on e-bay shows a spool of 180lb x 100ft braided Dacron line for £5.11.
Re: Follow your star - further
I agree that stable planetary orbits are very unlikely in a close binary star system but a relatively distant binary system isn't going to work either because, whichever way you look at it, the star that is accelerated out of that system must pass relatively close to the BH, and into a relatively steep gravity gradient, to achieve that acceleration.
The only point I was actually disagreeing with was a previous post that suggested that a planet could follow its star in such an encounter. The original article was about isolated 'warp-speed' planets, not 'warp-speed' systems, and it's precisely because the planets are stripped from their systems in a close encounter with a BH that they occur. The article also mentions runaway stars and it's quite possible that some of these stars originally had planets; if the planets weren't sucked into the BH during the encounter then they would very likely end up as 'warp-speed' planets. They wouldn't be heading in the same direction and at the same speed as the star though.
Re: Follow your star ?
The magnitude of the force acting upon the star would not be the same as the magnitudes of the forces acting upon orbiting planets. For the star to be accelerated to such a high degree it would have to closely approach the BH but at such close distances the gravitational gradient is so steep that there would be considerable differences between the forces that the star and any orbiting planets would experience.
For example, if the planet is orbiting its star at 0.5 AU (a bit further out than Mercury is from Sol) and the star then passes the BH at a distance of 1.0 AU (the distance of Earth's orbit around Sol) then the force acting upon the planet will be anywhere between 4x and roughly 4/9ths of the force acting upon the star, depending upon their relative positions. The only way that they would experience the same magnitude of force is if they were to pass equidistant on either side of the BH, which would still leave them heading off in different directions.
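A quick back-of-the-envelope check of those figures in Python - just inverse-square ratios, using the distances given above:

```python
# Force on the planet relative to the force on its star, both due to the
# black hole, at the nearest and farthest points of a 0.5 AU orbit while
# the star passes the BH at 1.0 AU.
STAR_DISTANCE = 1.0    # AU, star to black hole
ORBIT_RADIUS = 0.5     # AU, planet to its star

def relative_force(planet_distance, star_distance=STAR_DISTANCE):
    """Inverse-square ratio: force on planet / force on star."""
    return (star_distance / planet_distance) ** 2

nearest = STAR_DISTANCE - ORBIT_RADIUS    # 0.5 AU from the BH
farthest = STAR_DISTANCE + ORBIT_RADIUS   # 1.5 AU from the BH

print(relative_force(nearest))    # 4.0   -> four times the force on the star
print(relative_force(farthest))   # ~0.44 -> roughly 4/9ths of it
```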
Makes a lot of sense
Samsung's product range extends far beyond consumer goods. I've no doubt that their heavy engineering and ship building products, all of which need intelligent control systems, will end up using Linux based solutions.
Relevance to modern climate change?
During the middle Miocene there was a period of general cooling of the climate. A result of this is that forests appear to have declined whilst grasses diversified and increased. The new grasslands took up more CO2 than the forests they replaced had done, reducing the proportion of CO2 in the atmosphere and leaving O2 levels higher than previously; so yes, the proportion of CO2 was lower and the proportion of O2 in the atmosphere was higher even though the overall climate was cooler.
The general cooling of the climate during the Miocene is thought to be a consequence of the southwards tectonic drift of the Antarctic continent towards the South Pole such that it eventually reached a position where it was southerly enough and isolated enough that the Antarctic Circumpolar Current was able to develop. The Antarctic Circumpolar Current, in turn, increased the build up of the Antarctic ice sheet, which itself had started during the preceding Oligocene period when Antarctica was already well on its way southwards.
This doesn't really seem to be relevant to modern climate change.
Use a spring scale
Attach the launch truss to the balloon via a simple spring scale.
Choose a spring scale with a range of 1/2 to 1/4 (guesstimate) of the total weight of the truss (incl Vulture 2) so that once the balloon's inflated and the truss is suspended the spring scale will be fully extended and at the limit of its travel, so as to avoid any bouncing off the upper stop during the ascent.
Botch a new lower limit stop, at about 1/8 (another guesstimate) of the full range, so that the spring is still under some tension when there's no load upon it and it tries to fully retract. Then epoxy an insulated contact onto the scale, beside the new lower stop, and another to the moving spring scale indicator: when the balloon bursts, the load drops to zero, the spring scale retracts and makes a firing circuit when the scale indicator hits the lower stop.
21% sounds too generalised...
...and a bit flawed overall.
In addition to questioning the validity of extrapolating the sizes of reptiles that lived in a higher oxygen content environment using figures derived from mammals living in a lower oxygen content environment (it was only because of the higher oxygen content in prehistoric times that those giant dragonflies and various other giant insects were able to evolve), I'd like to know a lot more about how they came up with that figure of 21%.
From the article it sounds like they ended up with an _average_ factor of 21% as a result of a rather diverse range of animals including, but probably not limited to, polar bears, giraffes and elephants and whilst it can be argued that they're all 'big' they have very different types of body. For example, I'd be very surprised to learn that both giraffes and elephants share that same 21% factor.
Furthermore, the healthy BMI of an animal can vary according to whether it's in captivity or not; polar bears in captivity tend to be leaner than wild polar bears, which have to live in more extreme conditions.
Any idea if they also included hippopotamuses or elephant seals in their data sample?
Re: Mechanical Trigger Failsafe
How about replacing your tape measure with a simple spring scale?
Choose a spring scale with a range of 1/2 to 1/4 (guesstimate) of the total weight of the truss (incl Vulture 2) so that once the balloon's inflated and the truss is suspended the spring scale will be fully extended and at the limit of its travel, so as to avoid bouncing off the upper stop during the ascent.
Botch a new lower limit stop, at about 1/8 (another guesstimate) of the full range, so that the spring is still under some tension when it fully retracts. Then epoxy an insulated contact onto the scale, beside the new lower stop, and another to the spring scale indicator so that when the balloon bursts and the spring scale retracts it makes a firing circuit.
I think the idea of attaching anything to the parachute canopy is risky because of the possibility of it interfering with the opening of the canopy. As well as the risks of the canopy failing to inflate quickly at high altitude and of the long release cord getting twisted around the tethers, there's also a risk of it being blown taut and into a loop (think of a fishing rod with only a top eye and no intermediates) which may or may not interfere with its intended operation.
Early cosmology relies upon 'old' radiation which, by definition, means it has come from a very long way away, and radiation that has come from a very long way away will have been red-shifted due to the expansion of the universe. The problem here is that the Planck length and time units would appear to set limits on the shortest possible wavelength/highest possible frequency of radiation, so it would seem that X-ray detectors are limited to seeing stuff from within a limited distance, and therefore age.
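Purely to put numbers on that premise - i.e. taking the Planck length as a hard floor on wavelength, which is the argument above - a quick Python sketch:

```python
# If the Planck length were the shortest possible wavelength, the highest
# possible frequency of radiation would be c divided by it.
import math

G    = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.0546e-34    # reduced Planck constant, J s
C    = 2.998e8       # speed of light, m/s

planck_length = math.sqrt(HBAR * G / C**3)   # ~1.6e-35 m
max_frequency = C / planck_length            # ~1.9e43 Hz

print(f"Planck length  ~ {planck_length:.3e} m")
print(f"Max frequency  ~ {max_frequency:.3e} Hz (on that premise)")
```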
I have to wonder what that term means in the context of this experiment because, as others have pointed out above, there seem to be rather a lot of factual inaccuracies and misleading statements in what she said.
In order of appearance in the article:
"At the center of each black hole is the event horizon..." Umm... at the [debatable] center of a Black Hole (BH) is the singularity. The Event Horizon (EH) isn't, as far as we know, any sort of entity in its own right but is just the distance from the singularity where a number of interesting things happen.
"Einstein predicted that the effects of the event horizon would bend light..." Einstein did not predict that the EH would bend light. He predicted that the presence of matter would bend space-time and that light would appear to bend as it followed a straight path through this bent space-time. The EH, not being something that actually exists in its own right, does not have or cause 'effects'. Rather, the EH is itself an 'effect'.
"NuSTAR is also looking to investigate supernovae, particularly the most recent ones that still retain evidence of what caused the Big Bang" WTF??? This must be a contender for oxymoron of the year. Type 1b, 1c & 2 supernovae require relatively large stars and, because of their size, could not have existed long enough to be relevant to the Big Bang (BB). Type 1a supernovae are believed to involve white dwarf stars and these could conceivably be old enough to be relevant to the BB except that even the very first stars are not thought to have formed until ~400 million years after the BB, well after all the really interesting origin related stuff had been and gone.
Re: @John Smith 19 - - Anyone notice 460Ghz operating frequency?
Now you've got me confused. Earlier you seemed to be arguing with me that valves and transistors can be regarded as digital devices yet here you're saying the opposite and agreeing with my original point that whilst these devices can be used in digital applications they are still fundamentally analogue devices.
Re: @LeeE -- re:Since when were thermionic valves etc... -- Your point is a non-sequitur!
You've used an awful lot of words to make the same point I did in my original post: "Sure, valves were used in digital equipment..."
The output from valves and transistors is proportional to the input on the grid/base so the devices are intrinsically analogue; the output from a true digital device would not be proportional to its controlling input.
Neither is linearity, or lack of it, a factor. More pertinent is Robert E A Harvey's point: "By saturating them hard you can defone (sic) two discrete states with a rapid transition between them".
This is absolutely true, but the key points here are 'define two discrete states' and 'rapid transition'. Firstly, the two states have to be defined because they are not inherent; in a true digital device there would be no need to define them because they would be inherent. Secondly, whilst the transition between the two states may be 'rapid', in these devices the level still has to pass through intermediate values on the way, whereas a true digital device would have no intermediate levels and could only be at one level or the other; the rapidity of switching doesn't really come into it (although in a truly digital device you'd still need to wait for the superposition of the two states to resolve [to one state or the other] before you could use it).