Why the confusion?
A politician has answered a question that they wanted to be asked, not a question that they may or may not have actually been asked. This is normal.
"With latency mainly due to the sluggish render farm performance, the artists couldn't work as the files were just not available fast enough for them."
That quote seems to be saying that the bottleneck was in producing the increased number of ray-traced CGI image-frames, which is down to compute, not IO. Sure, each scene/frame to be rendered will need the scene/frame data to be loaded before rendering can start but the render time will be far greater than the IO time.
Hydrogen is the easiest element to fuse and so yes, all stars start by fusing Hydrogen.
Briefly, as the gas that is to become a star collapses and compresses under its own gravity, it reaches a point where the Hydrogen at the centre becomes heated and compressed sufficiently for fusion to occur. The outpouring of energy from the newly started fusion process then counteracts further gravitational collapse, but once most of the Hydrogen in the core is fused the out-pressure from fusion drops and the star resumes gravitational collapse until the Helium in the core, produced by the fusion of the Hydrogen, can start fusing. The Helium fusion, along with the renewed gravitational collapse, leads to Hydrogen fusion in a shell around the Helium core (which is when the star starts its giant phase). This process may repeat several times, depending upon the characteristics of the star, with successively heavier elements fusing in the core, surrounded by multiple fusing shells of progressively lighter elements.
"not massive enough to sustain hydrogen fusion"
If it was not sustaining Hydrogen fusion then it wouldn't be a star; it would just be a gas giant. It does, in fact, sustain Hydrogen fusion but via Deuterium fusion, which is the second stage of the Hydrogen-1 proton-proton fusion process that powers larger stars.
Interesting that they've gone for a liquid-fueled motor rather than solid, although the greater efficiency and controllability of liquid-fueled motors would explain the choice.
I can't buy the idea that the apparent reduction in light, over time, from Tabby's Star is explained by this: "The problem with using a hundred years' of observations, the group argue, is that the source data from 'Digital Access to a Sky Century @ Harvard' includes half a million glass plates shot between 1885 and 1993, using a number of different instruments and cameras."
Going even further than MartinG, I'd expect the SOP would be to only consider plates that include sufficient additional stars to allow the entire plate to be calibrated, comparing every star (and galaxy) in the plate being calibrated with every other plate/image in which any of those stars and galaxies appear.
This wouldn't eliminate the problem with variations in the sensitivity of the emulsion across each plate but as the plates would have been prepared specifically for scientific measurement purposes I'd expect the 'noise' variability across each plate to be pretty low, and certainly way below 20%, which the team seems to think is a typical noise level (is 20% noise even science?).
Fwiw, Kepler's noise floor appears to be around 80 ppm, whilst two of the measured variations in brightness from Tabby's Star, in recent times by Kepler, were by 15% & 22%.
...if it'll exclude .gov actors from its results?
Perhaps that's why the statement says it "would generate, anonymise, and share threat data" instead of 'collate'.
Mercury's not as hot as Venus so 'Take a walk on a hot planet' would be more accurate.
I think that the real problem here is combining both the logic and the gui in the app, not whether it runs on x86 or ARM. If the gui is split from the app logic then re-compiling the app logic to run on a different architecture or platform is relatively easy; you then just create whatever interfaces you need to control/talk to the logic in the app.
A lot, even a majority, of server software already works this way.
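As a toy illustration of that split (hypothetical names throughout, sketched in Python): the logic layer knows nothing about any GUI, so it can be recompiled or redeployed per platform while the front-ends stay thin.

```python
class Counter:
    """Pure application logic: no GUI, no platform assumptions."""
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1
        return self.value

def cli_frontend(logic):
    """One thin interface; a GUI or web front-end would call the same
    methods over whatever transport suits the platform."""
    return f"count = {logic.increment()}"

app = Counter()
print(cli_frontend(app))   # count = 1
print(cli_frontend(app))   # count = 2
```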
All those trees around the complex == firewood?
"The engineers found that icosahedral nanoparticles, which have 20 different sides, stored less energy than cube- or pyramid-shaped nanoparticles."
This seems sort of predictable - the Hydrogen will react with the Palladium nano-particles via the surface of the nano-particle and, for the same mass/volume, an icosahedron will have a lower surface area than a cube or pyramid shaped nano-particle.
But I guess that's why they were testing them in the first place.
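A quick back-of-the-envelope check of that surface-area argument, using the standard area/volume formulas for regular solids scaled to equal volume (a Python sketch, not from the article; a regular tetrahedron stands in for "pyramid-shaped"):

```python
import math

def area_for_unit_volume(area_coeff, vol_coeff):
    """Surface area of a solid with A = area_coeff * a^2 and
    V = vol_coeff * a^3, scaled so that V = 1."""
    a = (1.0 / vol_coeff) ** (1.0 / 3.0)
    return area_coeff * a * a

solids = {
    "icosahedron": (5 * math.sqrt(3), (5.0 / 12.0) * (3 + math.sqrt(5))),
    "cube":        (6.0, 1.0),
    "tetrahedron": (math.sqrt(3), 1.0 / (6 * math.sqrt(2))),
}
for name, (ac, vc) in solids.items():
    print(f"{name:11s}: area ~ {area_for_unit_volume(ac, vc):.2f}")
# icosahedron ~5.15, cube 6.00, tetrahedron ~7.21: for equal volume the
# icosahedron exposes the least surface, as argued above.
```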
"More light means more detail can be captured. The more pixels, on a sensor that is smaller than a larger sensor with less pixels is going to struggle with noise"
It doesn't quite work that way round. The quantity of light that a lens can collect, governed by its aperture, in combination with the sensitivity and dynamic range of the sensor, dictates how much noise there will be; the resolution of the lens and sensor dictates how much detail can be captured.
The size of the sensor is pretty irrelevant in terms of noise, but given that there's a limit to the minimum size of each pixel, a larger sensor can have more pixels. However, a larger sensor needs a longer focal-length lens to get the same image proportions, and the longer the focal length of the lens, the further away it needs to be from the sensor. For example, a 50mm lens will produce a ~46 deg image on a 35mm sensor, with the optical centre of the lens being 50mm from the sensor, but you need an ~80mm lens to produce the same image on a 60mm sensor, and it needs to be 80mm from the sensor (sorry for using old film sizes - couldn't quickly find typical focal lengths and sensor sizes for phones). The upshot is that with increasingly thin phones, it's not possible to move the lens further away from the sensor to allow a larger sensor and lens.
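For anyone who wants to play with the numbers: the angle of view follows from simple trigonometry, angle = 2*atan(sensor dimension / (2 * focal length)). A quick sketch, assuming a thin-lens model and the ~43.3mm diagonal of a 35mm frame:

```python
import math

def fov_deg(sensor_mm, focal_mm):
    """Angle of view for a simple thin-lens model focused at infinity."""
    return math.degrees(2 * math.atan(sensor_mm / (2.0 * focal_mm)))

# 50mm lens on a 35mm frame (~43.3mm diagonal): the 'normal' ~46-47 deg view
print(fov_deg(43.3, 50))
# Scale the sensor and the focal length by the same factor and the angle
# is unchanged - which is why a bigger sensor needs a longer lens,
# sitting further from the sensor.
print(fov_deg(43.3 * 60 / 36, 50 * 60 / 36))
```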
Getting your workload prioritised sounds simple enough but doesn't always work.
I was once in a situation, working for a large London Borough, where I had ten different projects on my work list. When I asked the management to prioritise the ten projects, six of them were assigned priority one, three priority two and one priority three, which didn't really solve anything.
The situation wasn't helped by the fact that the effective prioritisation scheme that the management used was to pacify whichever client made the most fuss/noise; each day it would be "drop everything and work on client X/Y/Z's project". So although I could try to plan work, to make the most effective use of my time, it was pointless because any plans I made were more than likely to be overridden on a daily basis.
It finally reached the point where, on arriving at work one day, my manager told me to visit four different clients, in four different locations around the borough, and "pretend to work on their project" (and he did say "pretend" because he knew that after having to drive between the different locations and then back stuff up before doing anything I wouldn't actually have any time left to do any proper work).
It seems axiomatic to me that if people had something better to do than social media, in the sense of more rewarding or satisfying, then they'd be doing it; the fact that they are resorting to social media indicates that they haven't.
In some ways, social media is a bit like religion: a crutch for those who need it.
The funding and the go-ahead for the F-35 project would have been on the basis that it was a necessity. However, the project's repeated delays rather seem to indicate the opposite.
I remember when it was just proper news articles and features 'round these parts.
The article says that the Shellphish team has members in the US, France, China, Brazil, and Senegal but the flags shown in the team photo seem to indicate that they're from the US, Italy, China, Senegal, Russia, Germany and India.
"Raised a few eyebrows" would've been about right.
"Microsoft's decision to bring SQL Server 2016 to Linux caused great excitement in the open-source world this week."
I found this announcement quite (but not totally) surprising, and rather more interesting because of some of the possible implications for the future. But exciting? No, not even mildly.
"Any object being pushed through the air with a positive aspect ratio will push the air it displaces down and forwards."
I assume by "positive aspect ratio" you mean positive Angle of Attack (AoA), but the problem with this explanation is that, as the AoA increases, the lift vector would reduce and the drag vector would increase, and stalls due to upper airflow separation wouldn't be an issue because, in this explanation, all lift is generated below the wing. In practice though, both the lift & drag vectors increase with higher AoAs until upper flow separation leads to a stall and loss of lift.
It's also worth considering a non-symmetrical aerofoil with positive camber; such an aerofoil can have a completely flat lower surface but will still generate lift at zero AoA.
It is not simply Newton's Third Law; if it was just down to Newton's Third Law then delta-winged aircraft, such as Concorde, wouldn't work. The downwash from an aerofoil occurs at the trailing edge of a wing which, in a delta-winged aircraft, is right at the rear of the aircraft, so if the lift came from downwash then it would produce a turning moment about the CoG/CoP, not a lifting moment through it. Furthermore, the direction of airflow over a wing isn't simply from the leading edge to the trailing edge but also along the wing to the wingtips, where quite a bit of energy is wasted in producing tip-vortices, hence the incorporation of winglets.
"One-third of all HTTPS websites were potentially vulnerable..."
Either they're vulnerable or they're not vulnerable.
Some people don't get irony.
"IIRC most of the UK air space is reserved for the MOD"
Not so. Around the world, with a few local/national variations, airspace is divided into seven classes (A to G) in accordance with ICAO specifications. The major difference between the classes, with regard to access, is the degree of ATC control. Entry into ICAO class A to D airspace requires ATC clearance under all flight rules; entry into class E requires ATC clearance under IFR and SVFR but not under VFR; ATC clearance is not required to access classes F and G. Broadly speaking, classes A to E are referred to as Controlled Airspace (with the exception of VFR flights in class E) and classes F and G as Uncontrolled Airspace.
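For what it's worth, the clearance rules above boil down to a simple lookup. This is just a sketch of the broad ICAO scheme as described - not an operational reference - and local/national variations apply:

```python
# Which flight rules need ATC clearance to enter each ICAO airspace class,
# per the broad scheme described above.
CLEARANCE_REQUIRED = {
    "A": {"IFR", "VFR", "SVFR"},
    "B": {"IFR", "VFR", "SVFR"},
    "C": {"IFR", "VFR", "SVFR"},
    "D": {"IFR", "VFR", "SVFR"},
    "E": {"IFR", "SVFR"},   # VFR may enter class E without clearance
    "F": set(),             # uncontrolled
    "G": set(),             # uncontrolled
}

def needs_clearance(airspace_class, flight_rules):
    return flight_rules in CLEARANCE_REQUIRED[airspace_class]

print(needs_clearance("D", "VFR"))   # True
print(needs_clearance("E", "VFR"))   # False
print(needs_clearance("G", "IFR"))   # False
```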
However, within these classes and zones, there are a number of relatively small areas where further military/security restrictions or controls apply e.g. AERE Harwell in the UK and Thurmont, Maryland, the site of the Presidential retreat Camp David.
Military aircraft, when not operating within one of the military/security areas mentioned above, must comply with the appropriate rules for the ICAO class of airspace within which they are flying.
I believe that these days, in the UK at least, most military aircraft stay in controlled airspace, in part because there were a couple of mid-air collisions between fighters transiting under VFR (i.e. at low level) and light/GA aircraft, which also generally operate under VFR; the speed at which the fighters were travelling didn't leave enough time for the 'see and avoid' VFR rule to work.
A potential problem with using stadia as a measure of quantity of methane is that, when in use as stadia, they will be producers of methane.
Probably be a good idea to rename it before putting it on sale in the UK.
Quite. It's difficult to see the point of the lower image because it just seems to be the ordinary image with a height-map overlaid upon it using an alpha channel. The problem this causes is that the shadows and highlights in the underlying image are distorting the height-map colours and the height-map colours are burning out areas of the underlying image. Just the height-map, on its own, would've been more meaningful.
Can't see the connection at all.
OK, they've incorporated into the wings a polymer whose rigidity can be altered by passing an electric current through it, which is interesting in itself. However, this has nothing to do with the patagium of bats, flying squirrels, pterosaurs or whatever, where the intrinsic flexibility/rigidity of the patagium membrane doesn't change.
Sure, the aforesaid animals can (or could, in the case of the pterosaurs) alter the flexibility of the patagium, but only by stretching it, in the same way that a sheet of cloth is less flexible when stretched and under tension than it is when loosely supported.
Seems a bit weird that they feel they need to make this spurious comparison when what they're doing is already interesting enough on its own. Mind you, it's also a bit weird that they've chosen a Wing-in-ground-effect vehicle to do their testing.
Here are some numbers using the mass of Earth (M) as a guideline...
c= 299792458 m/s
M= 5.97237e+24 kg
Key: Schwarzschild Radius = rs, Escape velocity at rs = evrs, Escape velocity at rs + 2m = evrs+2m
I give evrs+2m, i.e. the escape velocity two metres out from rs, to give an idea of the relative escape velocity, and thus the gravitational gradient that a tall person would experience.
Mass= 5.97e+24 kg, rs= 0.0088700671 m, evrs= 299792458 m/s, evrs+2m= 19920866.79 m/s.
So the Schwarzschild radius for an Earth-sized mass is a smidge under 9 mm and the difference in escape velocity 2m from its event horizon is about 279871591 m/s - that's a pretty steep and unhealthy gradient.
At ten times the Earth's mass...
Mass= 5.97e+25 kg, rs= 0.088700671 m, evrs= 299792458 m/s, evrs+2m= 61779736.59 m/s
The Schwarzschild radius is now a little under 90 mm and the +2m escape velocity difference is now about 238012721 m/s - still far too steep.
At one hundred times the Earth's mass...
Mass= 5.97e+26 kg, rs= 0.88700671 m, evrs= 299792458 m/s, evrs+2m= 166172922.8 m/s
...the ev difference has come down to 133619535.2 m/s - still too steep... In fact, it's not until we get to around a mass of 5.97e+35 that the ev difference is less than 1 m/s...
Mass= 5.97e+35 kg, rs= 887006709 m, evrs= 299792458 m/s, evrs+2m= 299792457.6 m/s
...the Schwarzschild radius is now 887,006.7 km and whilst you might survive the difference in ev it would still be uncomfortable. To get to less than 1 mm/s difference in escape velocity we need to increase the mass by another factor of 1000, to 5.97e+38 kg, at which point the Schwarzschild radius becomes 887006709435 m, or 887,006,709.4 km - that's pretty big. In fact, that radius is about 44 times greater than the current distance of Voyager 1 from Sol.
However, the estimated mass of our galaxy is between 1.15e+42 kg and 1.69e+42 kg, so it would seem that a sufficiently large BH would have between about 1/1930th and 1/2829th of the mass of our galaxy.
Proviso: I think I've got the numbers right but wouldn't mind someone checking them.
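In that spirit, here's a quick Python check of the arithmetic, using Newtonian escape velocity v = sqrt(2GM/r) and rs = 2GM/c^2 (the CODATA value of G is assumed, so the last digits may differ slightly from the figures above):

```python
import math

G = 6.67430e-11      # m^3 kg^-1 s^-2 (assumed CODATA value)
c = 299792458.0      # m/s

def schwarzschild_radius(m):
    return 2.0 * G * m / c**2

def escape_velocity(m, r):
    return math.sqrt(2.0 * G * m / r)

for m in (5.97237e24, 5.97e26, 5.97e35, 5.97e38):
    rs = schwarzschild_radius(m)
    # drop in escape velocity between the horizon and 2 m further out
    dv = escape_velocity(m, rs) - escape_velocity(m, rs + 2.0)
    print(f"M = {m:.5g} kg: rs = {rs:.6g} m, ev drop over 2 m = {dv:.4g} m/s")
```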
As DNA said "Space is big. Really big. You just won't believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it's a long way down the road to the chemist, but that's just peanuts to space..."
"'Island-dwarfing' is a known evolutionary phenomenon that sees species isolated on islands shrink."
There's also the phenomenon of 'Island gigantism' that sees species isolated on islands enlarge.
The firing mechanism of modern nuclear bombs consists of an array of shaped explosive charges arranged around a hollow fissionable core, and depends upon all of those shaped charges being detonated at precisely the right time, to within extremely tight limits, for the bomb to 'work'. Premature detonation of one or more of the shaped charges, or failure of any one of them to detonate at precisely the right time, will result in the fissile core, along with the rest of the bomb, being blown apart. The biggest bang you'll get in these cases will be from the shaped charges detonating, and the biggest problem will be the local radioactive pollution from the destroyed fissile core.
Whilst there are other safeguards preventing the premature arming and ignition of the firing mechanism, there's not really a need to be able to disarm a nuclear bomb in the event of a launch failure.
Edited to add: pretty much what Nigel 11 says above - started typing before he posted but got called away, so ended up posting after him.
Editorial? Seemed more like an advertising brochure to me.
I can endorse your observations regarding getting work when you're older. I started my first job in IT a few weeks before my 18th birthday and I'll be 59 this summer. Haven't been able to get work in IT since my mid-forties and now arthritis and arterial disease means I can't do much in the way of manual jobs either.
At the last interview I got, two or three years ago via a 'Job Fair' where I had a good face-to-face chat with the director of a small IT service provider, he eventually admitted that whilst he'd like to employ me, because he thought my experience would be valuable, he could get two 'IT apprentices' for less than it would cost him to employ me on the minimum basic wage.
To be honest, I actually quite enjoyed the simplicity and lack of BS in the warehouse work I ended up doing for a couple of years, until I could no longer do it, because work finished at the end of each day - no worrying about phone calls in the middle of the night because a system had gone down and other stuff like that.
"Writing software did not exist in any large number until the late 80s, when two major events happened - ANSI C and 80386 with an MMU."
WTF? The first programming I did was in 1970/71, at school, using 2B pencils to code BASIC on cards. By around 1974 I was working in COBOL, as were a considerable number of other programmers. In fact, I'd wager that there were many more programmers back then than there are now, because there were no off-the-shelf 'packages' for businesses to simply buy; most companies had to write their own systems, combining utilities, such as sort programs that were part of the OS, with their own original programs in 'batch' jobs to produce printed reports on fan-fold paper. This also led to doing quite a bit of JCL/SCL to control the batch jobs.
It was quite a few years later before UNIX became viable for more than just research, and C just wasn't really needed outside of research and OS development until GUIs arrived, around the mid 1980s.
I wouldn't say that the problem with SNMP is that it's horribly complex, because you can't have a solution that's flexible without some degree of complexity. The biggest problem I've had with SNMP is with OIDs that aren't persistent between reboots.
"a country that is known for its [...] compassion, for its love and support for those less fortunate"
And that compassion, and love and support for those less fortunate is expressed through Operation Sovereign Borders.
Your mention of Dan O'Bannon called to mind Bomb #20 in Dark Star.
"Let there be light..."
"...a dive toward the sun, then use (rather large) magnetic fields to catch a ride on a solar ejection..."
I think not. Some numbers from Wikipedia: "The Sun has a magnetic field that varies across the surface of the Sun. Its polar field is 1–2 gauss (0.0001–0.0002 T), whereas the field is typically 3,000 gauss (0.3 T) in features on the Sun called sunspots and 10–100 gauss (0.001–0.01 T) in solar prominences."
"The Sun's dipole magnetic field of 50–400 μT (at the photosphere) reduces with the inverse-cube of the distance to about 0.1 nT at the distance of Earth."
For comparison: the magnetic flux density at the surface of a neodymium magnet is about 1.25 T
So, even discounting the issue of finding the energy to generate a large magnetic field for the probe, it's not going to have much of a field from the Sun against which to operate, even within the Solar System, let alone between the stars. And that's assuming that, instead of using high-mass radiation shielding, you can use the probe's magnetic field to protect it from the intense radiation it'll experience when it passes close to Sol.
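A rough check of that inverse-cube falloff (Python sketch; the solar radius and AU values are assumed round numbers). The dipole term alone comes out at hundredths of a nT at 1 AU, the same ballpark as the quoted ~0.1 nT:

```python
R_SUN = 6.957e8     # m, photospheric radius (assumed)
AU = 1.496e11       # m (assumed)

def dipole_field(b_surface, r):
    """Dipole field magnitude falling as the inverse cube of distance,
    normalised to b_surface at the photosphere."""
    return b_surface * (R_SUN / r) ** 3

for b0_ut in (50.0, 400.0):   # the 50-400 uT photospheric range quoted above
    b1 = dipole_field(b0_ut * 1e-6, AU)
    print(f"{b0_ut:.0f} uT at the photosphere -> {b1 * 1e9:.3f} nT at 1 AU")
```

Either way, there's precious little field left to push against by the time you're at Earth's distance, let alone beyond.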
The sad reality isn't that "We've become too shallow to seek the stars any more" but that it's just not possible without fusion energy, and although fusion research is still on-going we're still quite some way from a working solution.
Without a high efficiency energy source, where efficiency equates to duration, there's just no way a probe could accelerate for long enough to achieve a high enough % of 'c' to reduce the journey time to less than millennia before it ran out of fuel.
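To put numbers on the journey-time point: ignoring the acceleration and deceleration phases entirely (which flatters the probe), the coast time to Proxima Centauri at a fixed fraction of c is just distance over speed:

```python
def cruise_years(distance_ly, frac_c):
    """Coast time (Earth frame), in years, to cover distance_ly light-years
    at a constant fraction frac_c of the speed of light."""
    return distance_ly / frac_c

# ~4.24 light-years to Proxima Centauri
for f in (0.0001, 0.001, 0.01, 0.1):
    print(f"{f:g} c -> {cruise_years(4.24, f):>8,.0f} years")
# Even 1% of c, far beyond any chemical or ion drive to date, still means
# a ~424-year trip; getting under a millennium needs sustained thrust,
# and hence an energy source with a very long duration.
```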
I suppose they'll need to remain unnamed if they expect to keep any customers after revealing that they've been snooping on the content of emails sent through their service.
"Next week - Network Rail reports leaves on line caused by Russian hacking...."
Effectively already happening. Our politicos, and their right-wing media bitches have been blaming Russia, and more specifically Putin, for everything they possibly can; anything to divert attention from their own lies and hypocrisy.
"Website admin cPanel hacked, loses a bunch of folks' contact details"
Looks interesting, I thought to myself: how are they going to get in touch with their customers if they've lost the contact details? Call me pedantic, but in I.T. you really do need to be pedantic if you want your systems to work as intended.
It's interesting because if the radio waves from the quasar are "varying wildly in strength" it suggests that whatever is causing the variations has high density/intensity but, at the same time, the variations are relatively rapid, suggesting that whatever is causing the variations is also either relatively small, at least at astronomical scales, or very big and moving very, very quickly indeed.
I wouldn't have thought that interstellar gas lenses fit either scenario very well; if the variations in the lenses are small enough to match the rate of variation then it's difficult to see how they could be intense enough to produce the degree of variation but, on the other hand, if they were large and moving quickly then they'd be interacting with other interstellar gas and radiating on their own.
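The size argument can be made a little more concrete with the usual causality limit: a source can't vary coherently on timescales shorter than its light-crossing time, so its size is at most roughly c times the variation timescale. The timescales below are purely illustrative, not figures from the article:

```python
C = 299792458.0    # m/s
AU = 1.496e11      # m

def max_source_size_au(variation_hours):
    """Causality limit: a source varying coherently on timescale dt can't
    be much bigger than c * dt across. Returns the limit in AU."""
    return C * variation_hours * 3600.0 / AU

# Illustrative (assumed) variation timescales:
for hours in (1, 24, 24 * 30):
    print(f"{hours:4d} h variation -> source <~ {max_source_size_au(hours):,.1f} AU")
```

Hour-to-day variability pins the emitting region down to solar-system scales, which is tiny by the standards of anything between us and a quasar.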
I think your post hints at the real motivation behind government outsourcing; reduced costs/value-for-money are plausible excuses, but the real reason is avoidance of responsibility.
Where work is done in-house you need to ensure your boss understands what he's demanding of you so that he can't turn around afterwards and say that you haven't done what he said, but this means that he needs to both justify and accept responsibility for his decisions. Responsibility remains 'in-house' and is easy to identify.
When a project is outsourced though, all that the provider needs do is satisfy the spec, regardless of whether what was specified actually matched what was required, or even whether it was complete nonsense; the PHB didn't need to understand the problem/requirement in the first place and there was no-one in a position to point this out to them. End result is that neither the PHB, nor the provider have to accept responsibility - they can both argue that the other has made the mistake.
"It's looking like A380 production might end before 747s
The only customers [for the A380] are the few Gulf hub airlines and they have bought their fleet."
Whilst it's true that the number of customers, orders and deliveries of A380s will never approach the numbers that the 747 achieved, you're somewhat wide of reality:
There are a couple of things that worry me about this project. When it first aired in El Reg I commented that if the headline photo (same one used in this article) is anything to go by then the occupants are likely to end up suffering from severe motion sickness; far from floating around serenely, it's clear from that headline photo that the 'podule' is swinging rather badly.
Now, just as worrying, there's an (artist's) image of the podule apparently ascending/cruising beneath a deployed and inflated parafoil... Um, well that's not going to work because deployed like that the parafoil canopy would simply collapse and, what with the inevitable twisting and spinning during ascent, would leave the tethers to the canopy badly twisted; there'd be little chance of the canopy successfully inflating during the descent. Moreover, with that configuration, there appears to be no scope for a backup canopy, let alone a smaller drogue chute to stabilise the podule before deploying the main chute.
But then, even assuming that they actually have a more practical deployment scheme, the use of a parafoil ensures a relatively high horizontal speed at touch-down; the landing is going to be more than a little bumpy so unless you're strapped in with a full five-point harness you're going to be injured as the podule tumbles across the ground. I wonder how much repair work to the podule is going to be needed after each landing.
You're thinking of the High Sensitivity Array (HSA), not the Very Large Array (VLA). The HSA is based upon the Very Long Baseline Array (VLBA), which spans the U.S. from Hawaii eastwards, but also incorporates the VLA (NM, USA), Arecibo (Puerto Rico), Green Bank (WV, USA) and Effelsberg (Germany) telescopes.
"As Snowden shrewdly observes, the alcohol guidelines aren’t written for the public, which will simply ignore them..."
Indeed, the public will ignore them, but I can't really buy the idea that it's just to achieve faux moral one-upmanship at bureaucratic and diplomatic junkets either.
As the article points out, this advice/recommendation runs counter to all of the evidence thus far gained and as such is a contradiction of reality.
The most worrying aspect of this announcement is not that drinking is supposedly dangerous at any level but that the government's chief medical advisor thinks it's a good idea to make an announcement that contradicts all the available evidence, and thinks that some objective will be achieved by doing so.
Given that the announcement is targeted at the public, and in a pretty high-profile way, as it's in all of the national media, I can't accept that the objective is simply bragging rights at junkets.
I suspect that the real purpose of this announcement is to justify a big rise in booze prices, via a reduction in quantity for the same price, along the same lines as we've seen with recommendations to reduce sugar content and the size of food servings for health reasons, but with no corresponding price cut. Now these measures may deter those who do over-consume, but I doubt it; those who do over-consume already know they are doing so, and will continue to do so, as long as they can afford to. No, I think it's really for everyone else, who doesn't over-consume, and will just have to pay more to get the same (reasonable) amount.
The purpose of smart meters isn't to enable people to save money but to enable remote control by both the government and the utilities.
The government will be able to switch off your power before they raid your home and the utilities will be able to switch it off as soon as you fail to pay one of their bills on time, and then add an exorbitant charge to restore it.
I recall reading, some time ago, that Curiosity's wheels didn't seem to be holding up as well as planned, and seemed to be receiving a lot more damage than expected, but that middle wheel looks totalled.