"In small stars, the gamma rays created in the explosion take a short time to reach the edge of the star, where they are propelled into space. With huge stars, the explosion takes much longer to travel through the star, meaning the gamma ray burst is longer."
Not quite right, I'm afraid.
It's correct that the larger the star, the longer it will take for the effects of an 'explosion' (not really the right word to use) at its center to propagate to the star's surface, but what we're talking about here is the duration of that 'explosion': how long it went on for.
Whilst it's possible that the passage of the event's initial effects changed the characteristics of the star along the way, it's extremely difficult to imagine what sort of change could have slowed the effects from the final part of the event to the degree observed; in short, the first effects and the final effects would have taken roughly the same time to make the journey.
To be sure though, a larger star will be required for a longer event; a smaller star would not have enough matter to sustain a longer event.
No mention was made of the relative intensities of the gamma rays from this long burst compared with a more typical short burst (adjusted for distance, of course), so let's assume the intensities were similar. That means the rate of matter-energy conversion was about the same, which means more matter had to be converted over a longer period of time, which means a larger star. If, however, the intensity of the longer burst was relatively greater, then a higher rate of matter-energy conversion was going on and, the period of time being unchanged, the star must have been even bigger.
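To put rough symbols on that argument (treating the burst as having a constant luminosity L over its duration t, which is a simplification of my own, not anything from the article):

\[ E = L\,t, \qquad E = \Delta m\,c^2 \;\Rightarrow\; \Delta m = \frac{L\,t}{c^2} \]

So at similar intensity a burst that lasts twice as long needs twice the converted mass, and a brighter burst of the same duration needs proportionally more still; either way, a bigger star.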
Re: QNX Photon microGUI
If RIM could be persuaded to open-source QNX it would certainly shake things up.
Re: There is only one thing a text editor needs
...A journal. I remember (I blame the COBOL thread for making me reminisce) the 'full-screen' editor on a DEC VAX 11/780 that simply journalised every editing action. Was fun to crash out of a long editing session, restart the editor and sit back and watch it re-apply every key-stroke up to the point of the crash. Did rectangular-block copy&paste too.
Re: 'mind-crippling' COBOL
Whilst I'd agree that 'writer's cramp' wasn't such a problem after terminals arrived, that was pretty late in the day. For years it was writing your source on box-paper, to be handed over to the punch-room where, although likely as not the punch-girl would spot that you'd missed a full-stop in the Identification Division (but was forbidden to correct it), your opus would be faithfully rendered, over the course of a day or two, into a monumental stack of punch-cards. When compiled, these produced a goodly quantity of processed trees, manuscript style, telling you several thousand times that there was a compilation error.
I still like COBOL though, a lot: it's a pretty damn good language for collating data. Loved Redefines in the Data Division. Although the source is very wordy and precise, it compiles small and efficient.
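For anyone who never met Redefines: it lets two record layouts occupy the same storage, to be read whichever way suits. A very rough Python analogue using the struct module - the ten-byte record layout here is invented purely for illustration:

import struct

record = b"20120412AB"  # ten bytes of raw 'record' data (made-up layout)

# Layout 1: an 8-byte date field plus a 2-byte code
date_field, code = struct.unpack("8s2s", record)

# Layout 2 'redefines' the same ten bytes as year/month/day plus two flag bytes
year, month, day, flag1, flag2 = struct.unpack("4s2s2scc", record)

print(date_field, code)
print(year, month, day, flag1, flag2)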
(Oh goodness - just remembered Segmentation - actually had to use it once!!)
QNX Photon microGUI
Anyone remember the QNX Photon microGUI that was part of the QNX demo floppy?
Debian has supported Arm since 2.2 'potato' released 2000-08-15. Debian is a lot more 'major' than you might think.
Accept a well earned pat on the back...
...for a very good article.
Missing the point, I think...
Where I think everyone's missing the point, even ARM perhaps, is that everyone's still thinking in terms of the current paradigm. When the maturity and process differences are taken into consideration, the ARM chips offer the potential of 10^2 times as many processing cores as the x86 architecture does, for the same practical considerations, and what this means is that 10^9-core MPP systems are now viable. At that scale we can drop the Turing Machine model, where processing is separated from data, and progress to an Object Machine, where every element of data is an active processing element - data that finds and synthesises its own associations with other data elements.
At this point, software evolution can occur, in the literal sense: you start by generating random sequences of logic and see if they do anything useful, then randomly modify those sequences and see if any of them still work and, if they do, whether they work better or do anything new. So the issues about MS vs. Linux (to symbolically represent all such arguments) are going to become moot in the not too distant future, because all software will be written by software.
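A minimal sketch of that generate-mutate-test loop in Python; the bit-string 'logic' and the toy fitness function are purely illustrative assumptions, standing in for 'does it do anything useful?':

import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # stands in for useful behaviour

def fitness(bits):
    # score a candidate sequence of 'logic' by how useful it is;
    # here, usefulness is just similarity to an arbitrary target
    return sum(b == t for b, t in zip(bits, TARGET))

def mutate(bits, rate=0.1):
    # randomly modify the sequence and see what happens
    return [b ^ 1 if random.random() < rate else b for b in bits]

best = [random.randint(0, 1) for _ in range(len(TARGET))]  # random logic to start
for _ in range(1000):
    candidate = mutate(best)
    if fitness(candidate) >= fitness(best):  # does it still work? work better?
        best = candidate

print(best, fitness(best))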
Around the same time, if not before, extremely high-precision analogue electronics will be combined and integrated with digital electronics so that we can do not just binary digital processing but higher-base hardware processing, which means that you could, for example, feed a base-3 digital processor a compatible pair of base-2 and base-3 instructions at the same time and get two completely different and correct answers to two completely different tests. This is still the same old data processing paradigm; the next paradigm will be a different way of dealing with data.
Anyways though, the ARM architecture will get us to those 10^9+ MPP systems we really need for a start.
It might work for them...
...because, in the field of MPUs, at this point in time, there are two distinct growth curves to be considered: those of application processing and storage processing.
Intel's high-performance chips are ideally suited to application processing because application processing is still largely linear and intrinsically unsuited to parallel processing. AMD's new low-power chips, on the other hand, are ideally suited to parallel processing jobs like handling storage.
Right now, and until there's a big breakthrough in AI, the growth in demand for application processing will decline whilst the demand for storage processing will grow.
In this context, it's interesting that both companies are heavily promoting future low-power MPUs: Intel with its Atom and AMD with its ARM Opterons. What's significant here is x86. x86 is ubiquitous and the architecture that everything else is compared with, but there's no way that x86 will be the architecture in use in 20 years' time; the problem for Intel is that they only want to do x86, something that will eventually be supplanted.
Re: Ummm. Sort of helpful
What this article did well was separate the quantum computing device from the IO device; the tricky bit isn't the qpu calculating all the possible answers, but rather the selection of the particular answer you want.
Think of it this way: the answer you get from a qpu is a 2-D area of smoothly blended colours, but the answer you need is in the form of a 1-D line on a graph. There are infinitely many 1-D lines within that finite 2-D area, so the tricky bit is pulling out, from all the other 1-D lines, the particular one that answers the problem you're trying to solve.
Re: This may or may not be a great article
As someone who has read the Reg' since about two months after MM started it, and spends an awful lot of time thinking about this theoretical physics stuff, I think this is certainly one of the best articles El Reg' has ever done.
...not to mention the > 30000 firearm-related deaths in the U.S. every year.
According to http://www.cdc.gov/nchs/fastats/injury.htm the total number of firearm-related deaths in the U.S. for 2009 was 31,347.
Which is the greater U.S. tragedy; the ~3000 killed by the 9/11 terrorists or the 30000+/year killed by each other?
Sensationalism in science
If there's one thing that's sure to rub me up the wrong way it's the sensationalism of science. Science is intrinsically pretty damn sensational, but it takes an adequate understanding of the science to perceive just how sensational it is. Unfortunately, this article seemed to focus more upon sensationalism than understanding, and I suspect that more was said in this Reg article, in pursuit of sensationalism, than was present in the Science publication.
For example, to say "a third of these pairs are eventually expected to merge..." is fair enough, but to continue with "...as the vampire consumes the last of its hapless friend." is not only sensationalist but a bit dubious. The problem is that it gives the impression of the O-type star gradually vanishing as it is 'hoovered up' by the companion. What will actually happen, as the companion star acquires mass from the O-type star, is that both stars will expand and eventually merge: the companion will expand because of the additional mass it has acquired, and the O-type star will expand because the reduction in its mass will mean less gravitational pressure upon its core to counter the outward pressure from the fusion going on within. The outer atmospheres of the two stars will merge first whilst the inner cores spiral in towards each other, finally merging too. This will be quite an eventful process, though, as both stars will have to constantly re-balance gravitational pressure against fusion pressure.
Re: How the Pros Do It
Whilst the thermite scheme will probably work it's a bit of a BF&I solution.
I like the idea of using a smaller motor to start the main motor though: form a miniature version of the main motor fuel charge that sits just inside the main combustion chamber and seals the main motor nozzle, facing backwards so that it fires towards the main motor fuel when ignited. Even if the seal isn't perfectly air-tight, it would provide enough of a seal to prevent the excessive expansion and cooling of the fuel vaporised by the igniter.
You'd need to calibrate the miniature charge so that it burns just long enough to start the main motor fuel before burning itself out to clear the nozzle. You could also add some thermite to the mini charge for good measure, the thrust from the mini charge ensuring that the thermite is directed towards the main fuel instead of possibly just falling away (due to the motor being angled upwards on the launch rod, which it will be when Vulture2 is actually launched).
More balloons, but not for lifting...
On the basis that the problem is the low ambient pressure causing too rapid a dispersal of the initially vaporised fuel, before it can ignite the rest of the fuel, I started thinking along the lines of using an equalising chamber to maintain pressure in the combustion chamber. For example, imagine two small balloons attached neck-to-neck, with one balloon inside the chamber and the other threaded through the nozzle to the outside, this double-balloon then being inflated to just above ambient ground-level pressure. As the ambient pressure drops, both balloons will start to expand, the inner balloon expanding into and filling the combustion chamber and maintaining pressure within it. Once the rocket motor has fired I wouldn't expect it to have problems burning through the balloons, as it's got to burn out the igniter wire anyway.
The tricky bit is ensuring an air-tight seal around the ignition wires and the balloon neck as they pass through the nozzle and enter the combustion chamber, although as the balloons expand they'll tend to occupy any gaps and improve the seal. Of course, you'd also need to take into account the fact that the igniter wire will have to run around the inflated balloon within the combustion chamber, i.e. against the walls, which will mean more wire in the chamber than usual.
Alternatively, if you think that the motor would be able to burn through a fine nylon mesh, just stick a small balloon inside the chamber that's prevented from expanding out through the nozzle by a fine nylon mesh placed over the inside of the nozzle opening. However, the same issues re the routing of the igniter wire apply.
Re: Echo and SCORE
This rang a bell with me too. After a quick bit of digging around I think it was project "West Ford" that we're remembering. Wikipedia link: http://en.wikipedia.org/wiki/Project_West_Ford
Worrying in many ways
..."knowledge economy." This is a very scary idea because if it is to work then it'll mean restricting access to knowledge; if we think that patenting software is a bad idea just wait until the rules are rewritten so that knowledge can be patented too.
"...from friction comes life." From friction comes heat; energy that is wasted and, due to entropy, lost forever.
'"Those documents that have come into circulation, thanks in large part to the wcitleaks website set up to publish them, could alarm "credulous members of the public," Touré said...' If you're doing something that may alarm people there're two ways to handle it: either publish a credible and binding explanation to reassure everyone or just try to demean and discredit anyone who shows concern.
The pictures don't do it justice
The artist(s) who produced those pictures just didn't grasp the size of gas giants. To say that the gas giant, when seen from the rocky world, would be much larger in the sky than those images suggest would be a gross understatement; the gas giant would fill most of the sky.
Unless I'm missing something here...
Doc, Happy, Bashful, Sneezy, Grumpy, Sleepy, and Dopey == dwarves != leprechauns
I think that this is the most significant issue. However, I believe that one of the benefits that Intel has claimed for MIC is that it'll run existing x86 code, which implies that it has the 'full' x86 architecture.
Having said that, I guess it's possible that they've removed some of the more general-purpose underlying hardware to make it more efficient in HPC and then re-implemented around the missing hardware, via slightly less efficient and more complex microcode, to make it fully x86 compatible again.
Either way though, it doesn't seem that it can excel in both scenarios.
Re: Just seems like a not very good idea
Your paranoia is showing through there; you've got yourself all in a tizzy about RISC vs. x86 when that wasn't what I was talking about.
x86 as an HPC 'co-processor' doesn't seem like a good idea because it's a general-purpose instruction set. HPC is all about efficiency, and incorporating a large and comprehensive instruction set, a significant proportion of which won't be used in typical HPC workloads and which just wastes space, is inefficient.
You are aware that, for a considerable number of years now, x86 has been implemented via microcode on what is essentially (but not purely) 'RISC'y hardware? Furthermore, if Intel were so satisfied with x86 why did they bother with iAPX, i960 and IA-64? Do you think that all those organisations that are using large systems based upon POWER and even SPARC are doing so just out of spite and resentment at x86?
No, x86 is fine for general purpose computing, even though it _is_ clunky and inelegant, but this is because it's comprehensive, well known and widely supported. Only a tiny proportion of people involved in software development (mostly compiler devs) need to care about the underlying architecture; the rest of us only interact via relatively high-level abstractions.
Btw, similar arguments apply to the embedded space too; this is another area where x86 doesn't make much sense and isn't widely used (the i960 did make some inroads into this area, but it is now largely dominated by ARM and IBM Power architectures).
Just seems like a not very good idea
I won't go as far as saying it's a bad idea but the Intel MIC just doesn't make sense to me:
i) Using x86 architecture cores for HPC; isn't that going to mean a lot of redundant x86 instructions that need to be implemented in the hardware/microcode but which will be rarely used in typical HPC workloads?
ii) 8GB RAM seems grossly inadequate for 50+ x86 cores; for 50 cores that's just 160MB/core, which doesn't seem enough if they're going to be running x86 application code, which itself will need some degree of OS support running on the same core and sharing that 160MB allocation.
iii) The benefit of being able to run existing x86 code unchanged on the MIC seems highly questionable and hardly efficient. Is this not a brute force approach when a more elegantly designed MPP solution would be far more efficient?
Intel are very clearly not stupid, but I'm beginning to think that, having failed to replace x86 themselves with their own iAPX, i960 and Itanium IA-64 architectures, the only horse they have left to flog is the ageing and inelegant x86. I can't really see this as a viable long-term proposition.
Re: Balloon to Platform... twist of cable, string, and rope...
Any sort of "rope" will be far too heavy, and a 'climbing' quality rope unnecessarily expensive as well.
I've previously suggested Dacron Big-game fishing line - a quick check on e-bay shows a spool of 180lb x 100ft braided Dacron line for £5.11.
Re: Follow your star - further
I agree that stable planetary orbits are very unlikely in a close binary star system but a relatively distant binary system isn't going to work either because, whichever way you look at it, the star that is accelerated out of that system must pass relatively close to the BH, and into a relatively steep gravity gradient, to achieve that acceleration.
The only point I was actually disagreeing with was a previous post that suggested that a planet could follow its star in such an encounter. The original article was about isolated 'warp-speed' planets, not 'warp-speed' systems, and it's precisely because the planets are stripped from their systems in a close encounter with a BH that they occur at all. The article also mentions runaway stars, and it's quite possible that some of these stars originally had planets; if the planets weren't sucked into the BH during the encounter then they would very likely end up as 'warp-speed' planets. They wouldn't be heading in the same direction and at the same speed as the star, though.
Re: Follow your star ?
The magnitude of the force acting upon the star would not be the same as the magnitudes of the forces acting upon orbiting planets. For the star to be accelerated to such a high degree it would have to closely approach the BH but at such close distances the gravitational gradient is so steep that there would be considerable differences between the forces that the star and any orbiting planets would experience.
For example, if the planet is orbiting its star at 0.5 AU (a bit further out than Mercury is from Sol) and the star then passes the BH at a distance of 1.0 AU (the distance of Earth's orbit around Sol), then the force acting upon the planet will be between 4x and about 4/9ths of the force acting upon the star, depending upon their relative positions. The only way that they would experience the same magnitude of force is if they were to pass either side of the BH, which would still leave them heading off in different directions.
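The inverse-square arithmetic behind those figures, with r measured from the BH:

\[ F \propto \frac{1}{r^2} \;\Rightarrow\; \frac{F_\text{planet}}{F_\text{star}} = \left(\frac{r_\text{star}}{r_\text{planet}}\right)^2 \]

which gives (1.0/0.5)^2 = 4 with the planet on the near side and (1.0/1.5)^2 = 4/9 with it on the far side.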
Makes a lot of sense
Samsung's product range extends far beyond consumer goods. I've no doubt that their heavy engineering and ship building products, all of which need intelligent control systems, will end up using Linux based solutions.
Relevance to modern climate change?
During the middle Miocene there was a period of general cooling of the climate. A result of this is that forests appear to have declined whilst grasses diversified and increased. The new grasslands took up more CO2 than the forests they replaced had done, reducing the proportion of CO2 in the atmosphere and resulting in higher O2 levels than previously. So yes, the proportion of CO2 was lower and the proportion of O2 in the atmosphere was higher even though the overall climate was cooler.
The general cooling of the climate during the Miocene is thought to be a consequence of the southwards tectonic drift of the Antarctic continent towards the South Pole, such that it eventually became southerly enough, and isolated enough, for the Antarctic Circumpolar Current to develop. The Antarctic Circumpolar Current, in turn, increased the build-up of the Antarctic ice sheet, which itself had started during the preceding Oligocene period, when Antarctica was already well on its way southwards.
This doesn't really seem to be relevant to modern climate change.
Use a spring scale
Attach the launch truss to the balloon via a simple spring scale.
Choose a spring scale with a range of 1/2 to 1/4 (guesstimate) of the total weight of the truss (incl Vulture 2) so that once the balloon's inflated and the truss is suspended, the spring scale will be fully extended and at the limit of its travel, so as to avoid any bouncing off the upper stop during the ascent.
Botch a new lower limit stop, at about 1/8 (another guesstimate) of the full range, so that the spring is still under some tension when there's no load upon it and it tries to fully retract. Then epoxy an insulated contact onto the scale, beside the new lower stop, and another to the moving spring scale indicator: when the balloon bursts, the load drops to zero, the spring scale retracts and makes a firing circuit when the scale indicator hits the lower stop.
21% sounds too generalised...
...and a bit flawed overall.
In addition to questioning the validity of extrapolating the sizes of reptiles that lived in a higher-oxygen-content environment using figures derived from mammals living in a lower-oxygen-content environment (it was only because of the higher oxygen content in prehistoric times that those giant dragonflies and various other giant insects were able to evolve), I'd like to know a lot more about how they came up with that figure of 21%.
From the article it sounds like they ended up with an _average_ factor of 21% from a rather diverse range of animals including, but probably not limited to, polar bears, giraffes and elephants, and whilst it can be argued that they're all 'big', they have very different types of body. For example, I'd be very surprised to learn that giraffes and elephants share that same 21% factor.
Furthermore, the healthy BMI of an animal can vary according to whether it's in captivity or not; polar bears in captivity tend to be leaner than wild polar bears, which have to live in more extreme conditions.
Any idea if they also included hippopotamuses or elephant seals in their data sample?
Re: Mechanical Trigger Failsafe
How about replacing your tape measure with a simple spring scale?
Choose a spring scale with a range of 1/2 to 1/4 (guesstimate) of the total weight of the truss (incl Vulture 2) so that once the balloon's inflated and the truss is suspended, the spring scale will be fully extended and at the limit of its travel, so as to avoid bouncing off the upper stop during the ascent.
Botch a new lower limit stop, at about 1/8 (another guesstimate) of the full range, so that the spring is still under some tension when it fully retracts. Then epoxy an insulated contact onto the scale, beside the new lower stop, and another to the spring scale indicator so that when the balloon bursts and the spring scale retracts it makes a firing circuit.
I think the idea of attaching anything to the parachute canopy is risky because of the possibility of it interfering with the opening of the canopy. As well as the risks of the canopy failing to inflate quickly at high altitude and of the long release cord getting twisted around the tethers, there's also a risk of the cord being blown taut and into a loop (think of a fishing rod with only a top eye and no intermediates), which may or may not interfere with its intended operation.
Early cosmology relies upon 'old' radiation which, by definition, means it has come from a very long way away, and radiation that has come from a very long way away will have been red-shifted due to the expansion of the universe. The problem here is that the Planck length and time units would appear to set limits on the shortest possible wavelength/highest possible frequency of radiation, so it would seem that X-ray detectors are limited to seeing stuff from within a limited distance, and therefore age.
I have to wonder what that term means in the context of this experiment because, as others have pointed out above, there seem to be rather a lot of factual inaccuracies and misleading statements in what she said.
In order of appearance in the article:
"At the center of each black hole is the event horizon..." Umm... at the [debatable] center of a Black Hole (BH) is the singularity. The Event Horizon (EH) isn't, as far as we know, any sort of entity in its own right but is just the distance from the singularity where a number of interesting things happen.
"Einstein predicted that the effects of the event horizon would bend light..." Einstein did not predict that the EH would bend light. He predicted that the presence of matter would bend space-time and that light would appear to bend as it followed a straight path through this bent space-time. The EH, not being something that actually exists in its own right, does not have or cause 'effects'. Rather, the EH is itself an 'effect'.
"NuSTAR is also looking to investigate supernovae, particularly the most recent ones that still retain evidence of what caused the Big Bang" WTF??? This must be a contender for oxymoron of the year. Type 1b, 1c & 2 supernovae require relatively large stars and, because of their size, could not have existed long enough to be relevant to the Big Bang (BB). Type 1a supernovae are believed to involve white dwarf stars and these could conceivably be old enough to be relevant to the BB except that even the very first stars are not thought to have formed until ~400 million years after the BB, well after all the really interesting origin related stuff had been and gone.
Re: @John Smith 19 - - Anyone notice 460Ghz operating frequency?
Now you've got me confused. Earlier you seemed to be arguing with me that valves and transistors can be regarded as digital devices yet here you're saying the opposite and agreeing with my original point that whilst these devices can be used in digital applications they are still fundamentally analogue devices.
Re: @LeeE -- re:Since when were thermionic valves etc... -- Your point is a non-sequitur!
You've used an awful lot of words to make the same point I did in my original post: "Sure, valves were used in digital equipment..."
The output from valves and transistors is proportional to the input on the grid/base so the devices are intrinsically analogue; the output from a true digital device would not be proportional to its controlling input.
Neither is linearity, or lack of it, a factor. More pertinent is Robert E A Harvey's point: "By saturating them hard you can defone (sic) two discrete states with a rapid transition between them".
This is absolutely true, but the key points here are 'define two discrete states' and 'rapid transition'. Firstly, the two states have to be defined because they are not inherent, whereas in a true digital device there would be no need to define the two discrete states because they would be inherent. Secondly, whilst the transition between the two states may be 'rapid', in these devices the level still has to pass through intermediate values on the way, whereas a true digital device would have no intermediate levels and could only be at one level or the other. The rapidity of switching doesn't really come into it (although in a truly digital device you'd still need to wait for the superposition of the two states to resolve [to one state or the other] before you could use it).
Re: Another two balloon idea
Yes, that is one of the potential problems I mentioned. However, the outer balloon won't simply collapse when it bursts because it's under positive pressure, i.e. from the higher-pressure gas that was inside it, and in most of the slow-mo films of bursting balloons that I can remember seeing, the bursting envelope seems to mostly follow its pre-burst outline rather than simply collapse inwards. Here's a youtube link to a good example: http://www.youtube.com/watch?v=ejWf8iXjXZk
If there's enough separation between the two envelopes then it _might_ work, remembering that the inner balloon, being much smaller, won't be under nearly as much stress as the outer balloon.
I'm quite happy to admit that I'm not totally convinced this scheme would work, but it would be a relatively easy and inexpensive experiment to try, starting with ordinary air-filled party balloons and progressing to larger balloons if the smaller experiments are encouraging.
If it did work though, then it would allow a (relatively) stable launch at optimum altitude.
Re: re:Since when were thermionic valves (vacuum tubes) digital devices?
It seems rather disingenuous to use a quote that starts by explicitly referring to linear devices as justification for claiming that they're switching devices when the switching behaviour only occurs if the devices are being operated outside their design parameters. It's about as valid as describing an automobile as an aircraft on the reasoning that when it's driven off a cliff it travels through the air.
Another two balloon idea
I haven't completely finished thinking this idea through yet, but anyway...
The basis of the idea is to use one balloon inside the other, both partially inflated so that the sum of their inflations equals that of a single balloon. The release is then triggered by the bursting of the outer balloon, which will be larger and therefore under more stress than the inner balloon. The inner balloon, being under less stress, should not burst and, whilst having insufficient lift to maintain altitude, should give a controlled and reduced rate of descent, at which point Vulture 2 is sent on its way.
Fabricate a short length of coaxial tubing in which, at the upper (balloon attachment) end, the inner tube is longer than the outer. The two coaxial tubes need to be brought to separate inflation feeds at the bottom (inflation) end of the combined coaxial tube. With both balloons deflated, attach one balloon to the inner coaxial tube and then carefully feed this balloon inside the other, which is then attached to the outer coaxial tube.
When the balloons are to be inflated, partially inflate the outer balloon first and then inflate the inner balloon, the aim being to achieve the same total volume and pressures in the combined balloons as you would in a single balloon.
The thinking behind this is that you start by imagining a single inflated balloon and then ask what would happen if there were an internal membrane separating the inner volume of the balloon from its outer volume. The pressure was uniform throughout the total volume before introducing the membrane, and merely introducing the membrane should not change this, so the pressures inside and outside the membrane will still be equal. Where it gets more tricky is trying to factor in the stresses on the envelopes of the two balloons when both are under positive but equal pressure, and how the relationship between the pressures of the two balloons will change as they ascend and expand.
I can see two potential issues with this straight away: the risk of the shock of the outer balloon bursting triggering the inner balloon to burst as well, and detecting the bursting of the outer balloon.
I'm envisioning the two balloons being inflated to somewhere between 50:50 and 75:25 (inner:outer), so that there would be quite some clearance between the two balloon envelopes, which should increase in absolute terms as both balloons expand and which, I would hope, would provide a sufficient safety margin against the shock of the outer balloon bursting triggering the burst of the inner.
As to detecting the bursting of the outer balloon, the best way I can think of is by some sort of shock gauge, although I suspect that this would need to be disarmed until the whole ensemble has reached relatively smooth high-altitude air, to avoid being triggered by low-altitude turbulence.
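As a starting point for those experiments, a back-of-an-envelope Python sketch of how much a balloon expands as it ascends, assuming the envelope stays near ambient pressure, constant temperature and a simple exponential atmosphere - all assumptions of mine, and the 8 km scale height is just a round number:

import math

P0 = 101325.0  # ground-level pressure, Pa

def ambient_pressure(h_m, scale_height=8000.0):
    # crude exponential model of the atmosphere
    return P0 * math.exp(-h_m / scale_height)

def balloon_volume(v0_m3, h_m):
    # Boyle's law at constant temperature: p1*v1 == p2*v2
    return v0_m3 * P0 / ambient_pressure(h_m)

for h in (0, 10000, 20000, 27000):  # metres; ~27 km is roughly 90k ft
    print(h, "m:", round(balloon_volume(1.0, h), 1), "m^3 per m^3 at launch")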
...some of the earliest digital devices...??
Since when were thermionic valves (vacuum tubes) digital devices?
Sure, valves were used in digital equipment before being widely supplanted by transistors but even transistors are intrinsically analogue devices. Digital utilisation, with both types of device, just comes down to using discrete bands of analogue levels within the continuous range of levels the devices are capable of.
Re: Two balloon
I can't see where it says that the two balloon option is being considered again.
But how would you tether two balloons, one above the other, anyway? You can't run the tether through the lower balloon, and even if you could attach the upper balloon's tether directly to the top of the lower balloon, you'd be adding to the stress upon the lower balloon by stretching it, which would likely lead to premature bursting. On the other hand, if you tried to run the upper balloon's tether around the lower balloon's envelope, it would press into it, which would also not be a good thing.
In theory, a four-balloon set-up could work. You'd have three lower balloons suspending a triangular platform, with the fourth balloon running up between them on its own tether. If you could rely upon all the balloons bursting at exactly the same altitude, the upper balloon could then be used as the trigger. However, I strongly suspect that differences in manufacturing and inflation would give a margin of error of several thousand feet, which would mean that the upper balloon's tether would have to be correspondingly long; in practice this one is a no-go too.
Re: Control electronics
"Just as a reply to the accuracy bit, even a 1960's mechanical altimeter is going to give you better accuracy than ~30m. The resolution for that sensor I linked to is under a meter even in low accuracy power saving mode."
I'm happy to accept that a mech baro altimeter can give better than ~30m accuracy but I just don't think it's needed for the Vulture 2 aircraft.
I must confess that I didn't look at the specs of that sensor earlier but have done now and it is a nice bit of kit. The main issue with it seems to be noise but that's only going to be a problem with relatively high sampling rates, which you don't really need in a relatively low-rate altimeter. Passing the data through a digital moving-average or noise-spike filter is pretty trivial anyway.
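For illustration, a minimal moving-average (boxcar) filter in Python; the window size is an arbitrary assumption, and a median filter would be the better choice for rejecting isolated noise spikes:

from collections import deque

def moving_average(samples, window=8):
    # emit the average of the last `window` readings for each new sample
    buf = deque(maxlen=window)
    for s in samples:
        buf.append(s)
        yield sum(buf) / len(buf)

# toy usage: noisy altitude readings in metres, including one spike
readings = [1000.2, 1001.9, 999.5, 1000.7, 1050.0, 1000.1, 999.8]
print([round(a, 1) for a in moving_average(readings, window=4)])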
Re: Control electronics
I don't have a problem with carrying a baro (or IAS) sensor, at least for academic purposes, but it just doesn't make sense to me to say that you "don't need the GPS altitude", which you'll be getting anyway, and instead rely upon an additional baro sensor which, in this case, can only infer altitude from an initial field-level calibration.
I certainly accept that the GPS altitude is going to be less and less accurate as it gets higher and higher, and may not be usable at all above 59k ft, but I don't see that as a real issue. At low altitudes the GPS altitude accuracy is going to be ~30 metres, but I can't see a baro sensor really being that much more accurate. Furthermore, why would we need to know, for flight control reasons, what altitude we're at? Vulture 2 isn't going to need to do a controlled landing and its absolute (ground) speed will naturally decay as it descends into thickening air (as per our previous discussions about IAS). Like I said in an earlier post, I suspect it'll just be landed like a free-flight model which, as far as I'm aware, amounts to flying it into the ground at its best (lowest) sink-rate.
I can't see that Vulture 2 is going to be expected to fly a nominated course, as you would in your glider when attempting a 'task'; it just needs to end up in the landing region and then stay in the vicinity of that region until it makes friends with the ground again. It won't need to follow a pattern or approach from a particular direction, so it doesn't really need to know altitude or airspeed to ensure it complies with timings, patterns or directions.
In any case, there's still going to be the potential issue of pitot freezing and unless it can be ensured that it won't happen it can't be relied upon as a primary sensor.
The background info was interesting though.
Re: Control electronics
I think we're on the same page here. To summarise: a GPS to tell it in which direction to fly (navigation) and a MEMS gyro & accelerometers to detect and control aircraft attitude, keeping it within the controlled flight envelope, whilst it heads there.
You wouldn't need accelerometers in the wingtips though, and just looking at differential readings from one in each tip would only tell you how much the wing was flexing. However, whilst this wouldn't be important for the actual flight it might be of interest to the aero and material science students working on the airframe design as it would provide useful comparison data between the predicted and actual structural behaviours.
I think an active uplink to the aircraft would be fun but would add weight, complexity and power draw, and would be another potential point of failure, not only by simply not working but by latching on to the 'wrong' signal, should one just happen to be present (you'd have to ensure a secure and authenticated connection).
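To sketch what that gyro-plus-accelerometer attitude sensing might look like in code - a standard complementary filter, where the 100 Hz update rate and 0.98 blend factor are illustrative guesses of mine, not anything from the LOHAN team:

import math

def update_pitch(pitch_rad, gyro_rate_rad_s, ax, az, dt=0.01, alpha=0.98):
    # gyro integration is accurate short-term but drifts...
    gyro_pitch = pitch_rad + gyro_rate_rad_s * dt
    # ...so blend in the accelerometer's view of the gravity vector,
    # which is noisy short-term but doesn't drift
    accel_pitch = math.atan2(ax, az)
    return alpha * gyro_pitch + (1.0 - alpha) * accel_pitch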
Re: cog, thrust line, drag & angular momentum
The CoG of an airframe is not dependent upon trim; it's just down to the distribution of mass around the airframe. All fixed-wing aircraft are designed to have their CoG at least very slightly ahead of their center of lift, more usually called the aerodynamic center, for stability and recovery reasons: an aft CoG makes it very difficult to recover from a stall, where you've lost aerodynamic control, because it will tend to keep promoting the pitch-up attitude, whereas a forward CoG will tend to pull the nose down, reducing the AoA and allowing recovery from the stall.
I can't see that where drag applies on the airframe, at least at the sub-sonic speeds that Vulture 2 is going to be achieving, is really going to be an issue.
Taking the material and size limitations of Vulture 2 into consideration, I can't really see it doing much effective gliding at very high altitudes; a controlled descent to thicker air is probably going to be the main objective immediately following launch. I think that trying to boost the speed to achieve "normal" flight at such high altitudes is going to be incompatible with the sort of high aspect-ratio wings you'd need for gliding there.
It's worth having a look at the Perlan project though, which aims to get a glider up to ~90k ft and beat the current gliding altitude record of just over 50k ft.
Re: Control electronics
Certainly, barometric, GPS and accelerometer instruments are not mutually exclusive, but that isn't really the issue here. Neither GPS nor accelerometers on their own would provide a working solution, because accelerometers can't give you location and GPS can't give you attitude rates; they comprise two halves of a single system that need to be integrated to produce a working solution.
Sure, you could also carry a barometer but integrating that into the autopilot/FCS not only increases weight but also complexity (in addition to the complexity overhead of integrating the baro data you'll also have to arbitrate between that data and the GPS data when they inevitably disagree).
I'm not sure what you mean by "inertial measurement" here; the accelerometers will be providing inertial data, but Vulture 2 isn't going to be carrying a relatively massive inertial reference platform, which you'd need for inertial navigation. In any case, there's the GPS for that (yes, it may not work above 59k ft, but that still leaves you those 59k ft to glide back to the landing area once Vulture 2 has descended to that altitude, which should be enough).
I don't think that ram ports built into the airframe will work. I believe that the reason they're on tubes is because they need to take readings from outside the boundary layer whereas sticking them in the nose or the leading edge of the wing, which is where you get the greatest accelerations of air around the airframe, would produce some wildly skewed readings. Having multiple sensors is no guarantee against icing failure either because the conditions that cause one sensor to ice up will apply to all the others too.
I can't see why you would want to switch between flight control and navigation systems in such a relatively simple vehicle just because you're nearer to the ground; it would be as effective as forming a coalition government from two fundamentally different ideologies.
A conventional pitot tube works by ram air pressure, which is combined with static pressure to derive IAS, and establishing static pressure is not in itself simply a question of making a hole in the side of the airframe and sticking a sensor in there; the boundary-layer airflow across the opening of the hole results in a venturi-type effect, so the static pressure needs to be calibrated along with the ram pressure. Would not a reversed, "negative energy" pitot tube just give you a variation on static pressure? Also, would it guarantee that you get no icing? I've not heard of such a device, and unless they already exist in a suitably small, lightweight, low-power and inexpensive package, I suspect that devising one, and then calibrating it across the environmental envelope of Vulture 2, is going to be a bit beyond the scope of the project.
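For reference, the standard incompressible-flow relation behind that ram/static combination (a fair approximation at model-aircraft speeds):

\[ q = p_t - p_s, \qquad \mathrm{IAS} = \sqrt{\frac{2q}{\rho_0}} \]

where p_t is the pitot (total) pressure, p_s the static pressure and rho_0 the standard sea-level density; get p_s wrong, via that venturi effect, and the IAS is wrong with it.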
Re: welcome to the attempted scam called cloud
A good effort: 8.75/10
Nice abusage of capitalisation throughout the piece, and the extra spaces before the comma in the very first sentence, after "scam", were a very nice and subtle touch, worth 0.5 bonus points, as was your intermittent grammar, which was worth another 0.25 bonus points (I've added both bonuses to your total overall score).
However, I had to dock you two points because you forgot about spelling and only added a token gesture right at the end, almost as an afterword, when you should have spotted the rather obvious opportunity to use 'FOLLED' right at the beginning, and because your very first word had a correctly placed omissive apostrophe. It is also of some concern to see that you included fullstops at the end of nearly every sentence; as you are well aware this is covered in the very first lesson so I'll put this aberration down to the undoubted stress of learning but you must ensure that it never happens again.
All in all though, not bad, grasshopper.
Re: Control electronics
I only have a good friend who is a glider pilot, but anyway, the issue of local weather systems is relevant in the context of calibrating a barometer that is expected to be used in LOHAN because these weather systems do not extend up to the altitude that's expected to be reached. The local weather system prevailing at the time of launch will dictate the ground-level air pressure and, because the playmonaut won't be able to recalibrate it during the flight, it can only be set once, at ground level.
I'm happy to accept that a GPS won't be usable above 18km (a little over 59k ft) but I suspect that a barometer that was calibrated at ground level won't be very useful either, as LOHAN approaches the altitude that PARIS achieved, which was nearly 90k ft.
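To put rough numbers on that, using the isothermal approximation with an assumed 8 km scale height (itself a simplification over a 90k ft climb):

\[ h \approx H \ln\frac{p_0}{p}, \qquad H \approx 8\,\mathrm{km} \]

so a weather-driven 3% error in the assumed ground-level p_0 already translates into roughly 8000 x 0.03 = 240 m of altitude error, and at ~90k ft the ambient pressure is only a few percent of p_0 to begin with.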
If airspeed is going to be an element of the autopilot scheme then the issue of pitot freezing cannot be ignored and must be addressed, because the loss of such a critical input to the autopilot would be a show-stopper. However, pitot heating in Vulture 2 is not really going to be a viable option due to weight, complexity and energy requirements.
Whilst you will certainly need to do a good landing, on an airstrip, in your glider, because you'll want to use it again without having to carry out major repairs, this isn't so much of an issue with Vulture 2. Being much smaller and lighter, it'll be landed like a free-flight model and, being made out of nylon, will probably suffer less damage.
"A good pressure sensor could just about tell you when to flare out for landing if you know the ground-elevation of your landing site" Sure, if by 'good' you mean one that's accurate to +/- 1ft at field level. In practice you'd be lucky to achieve +/- 20ft with a 'good' one - just don't try flaring at +20ft agl (or -20ft agl either, for that matter).
Because of the potential problems with ensuring that instruments like a pitot and barometer provide reliable data to the autopilot/FCS over the full range of the environmental envelope, I strongly suspect that they'll not be used at all, and that a purely dynamic FCS, more akin to a lightweight missile guidance system, such as you'd find in air-to-air or ground-to-air missiles, rather than an aircraft-type autopilot, will be used instead.