up to 1,000 kelvin (726.85 degrees Celsius).
Please, please, please, in the name of all that's unholy, how do you get a 5sf conversion result from an approximated 1sf input and believe it's good?
Astronomers claim to have found the oldest star yet discovered – a 13.5-billion-year-old sun hovering on the edges of our Milky Way, according to a new study. The star's name, 2MASS J18082002-5104378 B, may sound unremarkable, but its contents are pretty special. It's classified as an ultra metal-poor star (UMP) and is "the most metal …
I imagine that writing "about 727 degrees Celsius" would have sufficed, which, whilst still implying unjustified precision, at least avoids the two decimal places of 726.85 that make that form particularly jarring.
However, given it's an estimated upper bound, any of 720, 725, or 730 would probably have been better.
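The complaint above is easy to make concrete with a short Python sketch (the `round_sig` helper is made up for illustration, not from any post here):

```python
from math import floor, log10

def round_sig(x, sig):
    """Round x to the given number of significant figures."""
    if x == 0:
        return 0.0
    return round(x, sig - 1 - floor(log10(abs(x))))

kelvin = 1000                  # a one-significant-figure estimate
celsius = kelvin - 273.15

print(f"{celsius:.2f}")        # 726.85: five figures of false precision
print(round_sig(celsius, 1))   # 700.0: what one sig fig actually justifies
print(round(celsius))          # 727: the "about 727" compromise
```

Rounding to one significant figure turns the jarring 726.85 back into the 700-ish answer the 1,000 K input actually supports.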
I think that the number of significant figures can (surprisingly) be continuous. You know both how many digits you know and the uncertainty in the first digit you don't know, which is a continuous quantity. Another way of seeing this is to consider what you're actually trying to represent, which is an interval on the real line.
I see what you're getting at, but even if it were a valid idea (and I suppose convention could make it so), decimal notation wouldn't work like that. A value of 5.0 (or 5.0000) says that you've got zero uncertainty in the sixth digit, while 5.9999 says that you've got 99.99% uncertainty in the sixth digit, yet a tiny step more gives you certainty. For that to be sensible you'd have to be describing the certainty rather than the uncertainty, and then you'd still have to have a way to calculate it. You can do that realistically for a well-known constant such as pi, but much less so for any measured initial quantity. Wouldn't it just be better to stick to error bars, which don't have to be symmetrical either?
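If one did want to make the "continuous number of significant figures" idea precise despite the objection above, the natural route goes through the error bar anyway: derive an effective digit count from the relative uncertainty. A hedged Python sketch (the function name is invented for illustration; this is not a standard convention):

```python
from math import log10

def effective_sig_figs(value, uncertainty):
    """Digits of the value resolved by the error bar: log10(|value| / uncertainty)."""
    return log10(abs(value) / uncertainty)

# 5.0 +/- 0.05 resolves about two digits; halving the error bar adds
# roughly 0.3 of a "digit", so the count varies smoothly, not in jumps.
print(effective_sig_figs(5.0, 0.05))    # ~2.0
print(effective_sig_figs(5.0, 0.025))   # ~2.3
```

Note this is just a re-expression of the error bar, which supports the reply's point: the interval is the primary object, and the digit count is derived from it rather than the other way around.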
As I understand it, the normal body temperature of 98.6 degrees came from converting 37 degrees Celsius to Fahrenheit. The original 37 degrees was an average body temperature of several individuals, rounded to the nearest degree Celsius, so it had ~2 significant digits. The 98.6 is bogus accuracy and, in fact, is a little high.
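The arithmetic behind that claim is easy to check (a quick Python sketch):

```python
def c_to_f(celsius):
    """Convert degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

# 37 C was itself rounded to the nearest degree, i.e. 37 +/- 0.5 C,
# so the famous 98.6 F carries almost a full degree of spread:
print(c_to_f(37))    # 98.6
print(c_to_f(36.5))  # 97.7
print(c_to_f(37.5))  # 99.5
```

The tenths digit in 98.6 is pure conversion residue: the underlying measurement only justified something like "between 97.7 and 99.5".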
Hydrogen is used for cooling, for example in power-station generators, because its low density, high specific heat, and high thermal conductivity make it a good coolant.
So the "Hydrogen is a poor coolant, so early gas clouds were very hot..." paragraph might need re-working.
"Unlike metal-enriched gas that efficiently cools via dust and metal-line emission, metal-free gas can only cool significantly via atomic (H), molecular (H2), and deuterated (HD) hydrogen emission. Hydrogen can only cool gas down to temperatures T < 10^4 K, and H2 is a poor coolant at T < 200 K. While HD can cool gas below T ≈ 200 K, the small cosmological ratio of deuterium to hydrogen limits its contribution to cooling."
In space, a gas cloud cools by radiation. There's almost no gravity, so there's no convection, and there's no conduction as there's nothing to conduct to. Hydrogen might be a good coolant in some circumstances, but not in this one.
Isn't everything but helium, hydrogen, and lithium ultimately made by a star (perhaps layers blown off instead of core though)? "We're made of star stuff." as Sagan famously said. I think the models assume Mercury was made out of the same gas cloud the rest of the solar system was though.
Astronomers have the slightly weird habit of calling all elements beyond helium in the periodic table "metals". Statistically, they are right most of the time, but it confuses those with any education in chemistry. A low-mass, ultra-metal-poor star like this will only cook up helium from hydrogen during its extremely long main-sequence lifetime, i.e. no metals even by astronomical standards. When its main-sequence life ends, it may start creating carbon, but I doubt it will produce any real metals.
"I thought stars cooked metals themselves, at least as far down Mr Medeleev's bedsheet as iron."
The original stars only had hydrogen and helium to burn, and then over time created some of the other elements via fusion. My understanding is that only the lighter elements form that way, and it's not until the star dies and explodes that you get the higher-numbered elements (including metals above iron). So now you have a gas cloud containing a much wider variety of elements, and stars that form from that new cloud will contain those, and have more metal in them from day one.
So, for instance, since gold is only created within a supernova, if you detect it within a star then that star must have formed from a cloud created by a previous generation of stars, while if it has none of those elements (i.e. it is metal-poor) then it's much older and possibly from an earlier generation.
Your understanding is incomplete. Metals are created by the fusion process in a star, as far down the periodic table as iron. You don't have to start with them in the mix.
When the star makes iron, that's the end of the line, because iron takes more energy to fuse into other elements than the process releases. That's when the star starts wending its way toward a possible supernova event, but it starts dying whether or not it will experience a giant space kablooey.
But isn't there a limit to which elements a star can fuse based on its mass (and therefore the pressure in the centre of the star)? Yep, looks like most red dwarfs can't fuse helium, so forget heavier elements like carbon and oxygen.
>Helium fusion can occur in stars with more than about 0.4 solar masses (420 Jupiter masses). Less than that and our ball of helium would never get hot enough to fuse.
>The new star is only about 14 per cent the mass of the Sun
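Putting the two quoted figures together is trivial arithmetic (a sketch; the 0.4 solar-mass threshold and the roughly 1050-to-1 Sun/Jupiter mass ratio are taken from the quotes above):

```python
HELIUM_FUSION_THRESHOLD = 0.4   # solar masses, per the quote above
JUPITERS_PER_SUN = 1050         # implied by "0.4 solar masses (420 Jupiter masses)"

star_mass = 0.14                # "about 14 per cent the mass of the Sun"

print(round(star_mass * JUPITERS_PER_SUN))   # 147 Jupiter masses
print(star_mass >= HELIUM_FUSION_THRESHOLD)  # False: never hot enough to fuse helium
```

At roughly a third of the quoted threshold, this star should never ignite helium, which is the point the reply below makes about its metallicity being frozen since formation.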
I thought stars cooked metals themselves, at least as far down Mr Medeleev's bedsheet as iron.
They do; however, stellar nucleosynthesis depends very heavily on the mass of the star. This one is too light to do much more than slowly burn hydrogen to helium, so its metallicity today is probably very similar to its metallicity when it formed.
Biting the hand that feeds IT © 1998–2019