Wot! No red-shifts?
Really need the relative red-shifts to know how far apart they may actually be.
Olbers' Paradox is worth recalling when thinking on this scale.
"Photoshop’s 32-bit mode also enables the creation of high dynamic range (HDR) images..."
Umm... that isn't quite how it works. 32bpp just gives you four 8-bit channels: R, G, B & Alpha. Assuming you include an Alpha channel, the next 'common' format up is 12 bits per channel, i.e. 48bpp.
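The arithmetic is trivially checkable (the channel layouts here are just the common packed formats, nothing Photoshop-specific):

```python
# Bits-per-pixel arithmetic for the packed formats mentioned above.
# Assumes four channels (R, G, B, Alpha); purely illustrative.
def bpp(bits_per_channel, channels=4):
    return bits_per_channel * channels

print(bpp(8))    # 32 bpp: four 8-bit channels
print(bpp(12))   # 48 bpp: four 12-bit channels
```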
Biggest problem with the Space Elevator, atm, is finding a material that could not only sustain the tension of 22,000 miles of its own mass against the Earth (the Space Elevator needs to be balanced, so its CoG is in geosynchronous orbit, otherwise it'd be all over the place), but that could also be produced to both an extremely high quality and in vast quantity.
Afaik, it's been suggested that carbon nano-tubes just might be able to support their own 'weight', but even then, we don't really have much practical experience with carbon nano-tubes beyond the microscopic, let alone a structure > 22,000 miles long/high.
It'll happen one day, but probably not in our life-times.
A valid point - move inwards and orbital velocity increases, except that to move in you must slow down; accelerate and you'll move out, and then subsequently slow down... Depending upon the timing...
Let's say we start from achieving a perfectly circular initial insertion orbit... When we add energy (at any point, because we're in a perfectly circular orbit) we'll accelerate, which will take us 'out' ('in' & 'out' are a better way to think of gravity wells than 'up' & 'down'), because we've changed the equilibrium of V vs. G to the advantage of V at a tangent to G. The result, with a single input of energy, is that we'll end up in an elliptical orbit. So you'll need at least two burns: one to slow down and therefore move in, which will speed you up, into an elliptical orbit, which means that you'll catch up with a target ahead of you; but then you'll need a second burn to speed up, to take you back out and back into a circular orbit, ahead of where you would have been if you'd just stayed put (phew).
The same works for a higher initial orbit but that's a waste of fuel.
In practice, you're not going to go for a circular initial insertion orbit - no need, what with current computational power available; Newton is good enough for LEO stuff until local RADAR and LASER comms can cope with mm accuracy, so in practice, unless there's a really good reason, then each launch to the ISS is quite a clever bit of choreography.
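The counter-intuitive 'slow down to move in, and arrive faster' behaviour above can be checked with the vis-viva equation; the 400 km orbit and 20 km drop below are arbitrary illustrative numbers, not real mission figures:

```python
import math

# Sketch of the two-burn phasing manoeuvre described above, using the
# vis-viva equation v^2 = GM * (2/r - 1/a).
GM = 3.986004418e14          # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6            # mean Earth radius, m

def vis_viva(r, a):
    """Speed at radius r on an orbit with semi-major axis a."""
    return math.sqrt(GM * (2.0 / r - 1.0 / a))

r_high = R_EARTH + 400e3     # initial circular orbit radius
r_low = R_EARTH + 380e3      # radius of the lower, faster phasing orbit

v_circ_high = vis_viva(r_high, r_high)
a_transfer = (r_high + r_low) / 2.0      # ellipse touching both radii

# Burn 1: reduce speed at r_high so the opposite side of the orbit
# drops to r_low...
v_after_burn1 = vis_viva(r_high, a_transfer)
# ...yet half an orbit later, at r_low, we're moving faster than we
# ever were on the original circular orbit:
v_at_low = vis_viva(r_low, a_transfer)

print(v_after_burn1 < v_circ_high)   # True: the burn slowed us down
print(v_at_low > v_circ_high)        # True: but we arrive going faster
```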
Most definitely written whilst under the influence of the icon...
"The spacecraft would have accelerated to around 16,000mph causing the air in front of it to heat up and destroy the capsule …"
Actually, the spacecraft _decelerated_ to around 16,000 mph; its orbital velocity was greater.
You can look at it as an energy equation - it would have needed an input of energy to accelerate the spacecraft; the kinetic energy lost by the spacecraft, via its deceleration, was transferred to the air, which caused the heating.
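A rough back-of-envelope version of that energy argument (the 17,500 mph orbital speed here is an assumed typical LEO round number, not a figure from the article):

```python
# Kinetic energy shed per kilogram of spacecraft when decelerating from
# a typical LEO orbital speed to the ~16,000 mph quoted in the article.
MPH_TO_MS = 0.44704

v_orbit = 17500 * MPH_TO_MS      # m/s
v_reentry = 16000 * MPH_TO_MS    # m/s

# KE = (1/2) m v^2, so per kilogram (m = 1):
ke_lost_per_kg = 0.5 * (v_orbit**2 - v_reentry**2)
print(f"~{ke_lost_per_kg / 1e6:.1f} MJ per kg, transferred to the air as heat")
```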
I liked and used AMD CPUs until they came up with the single shared FPU (per pair of cores) design, which made it a non-starter for me. However, it looks like each Zen core may have two dedicated FPUs (for SMT), so it'll be interesting to see how well the Zen cores work.
Not interested in AMD GPUs until they sort out their OpenGL drivers for Linux.
If Intel claims that one of these new systems can replace nine older systems, should we expect to see a corresponding drop in their [Intel's] sales?
A gross simplification, of course, but there's something that doesn't quite add up there.
Does nobody else have a problem with that headline image?
"Yes, but we tend to forget the stuff that isn't still standing"
A very good point, e.g. Thomas Bouch's Tay Bridge.
In fairness, the quality of the castings wasn't up to spec, and it was poorly maintained, but the design was still marginal; the massive over-engineering of the Forth Railway Bridge was, in part, a reaction to that marginal design.
Indeed; it's the Royal Albert Bridge, and a very clever bit of design it is too.
Essentially, it's a suspension bridge that doesn't need anchors.
Normally, in a suspension design, you need to anchor the suspension chains/cables to some very solid ground, but with the Tamar crossing this would have led to an unfeasibly long (for the time) single span (because suspension bridges with multiple spans are a bit tricky). So instead of using outlying anchors to counter the inward pull of the suspension chains upon the towers, the towers are kept from falling inwards by the outward force of the upper bowed tubes. Neat stuff.
"In Yeat's Wanderings of Oisin, Oisin is transported to the realm of the fairies. While there he plays his harp and the fairies beg him to stop because of its unendurable sadness"
And then one of the faeries asks him the name of the last piece of music he played, to which he replies "I love you so much it makes me shit my pants"
You deserve a thumbs up for:
"Ah, the day when microprocessors can run freely and feel the wind on their faces."
"But if the AI were to achieve superintelligence, which Bostrom believes is inevitable once it reaches human-level intelligence, and be totally focussed on making paperclips, it could end up converting all known matter into making paperclips."
This is a contradiction in terms; anything that ended up converting all known matter into making paperclips could _not_ be regarded as having even human-level intelligence, let alone superintelligence; converting all known matter into paperclips is plain stupid.
Another example of this flawed thinking:
"Much of the book focusses on how easy it would be for a machine intelligence to believe itself to be happily helping the human race by accomplishing the goal set out for it, but actually end up destroying us all in a problem he calls “perverse instantiation”
Once again, if an AI were to make this mistake then it can't be regarded as ordinarily intelligent, let alone super intelligent.
On a slightly different note we have:
"If we were to try for something a bit more complex, such as “Make humanity happy”, we could all end up as virtual brains hooked up to a source of constant stimulation of our virtual pleasure centres, since this is a very efficient and neat way to take care of the goal of making human beings happy."
But then this supposes that we would be unable to prevent it, i.e. the AI would have some means of physically compelling us to be hooked up and/or we would be too witless to prevent it.
And then it goes on with:
"Although the AI may be intelligent enough to realise that’s not what we meant, it would be indifferent to that fact. Its very nature tells it to make paperclips or make us happy, so that is exactly what it would do."
There's no logic to that assertion; why would it be indifferent to the fact that it wasn't doing what we wanted it to do? There's no explanation as to why it would be indifferent - apparently, it just would be.
Is economics supposed to serve people, or are people supposed to serve economics?
I don't think it's because of Company bloat/inertia; rather, I think it's symptomatic of a flawed design philosophy.
For example, consider the problem with launching W32 apps from the start menu in this latest build, and the suggested work-around: what on earth are they doing with the program launch process to make it so fragile?
It's the regular occurrence of slightly weird problems like this that leads me to the conclusion that the Windows OS is designed, by intent, to incorporate an extremely high degree of very deep integration, notwithstanding recent moves towards greater modularity. However, the problem with very deep integration in a very complex system is that a relatively insignificant problem, in an insignificant subsystem, can impact other more important subsystems, despite there being little logical interlocking and/or interdependency between them.
It's difficult not to make comparisons between Windows and Linux in this respect; whilst both systems rely upon a monolithic (modular) kernel, Linux doesn't try to incorporate the deep integration* that appears to be such a major feature of Windows, and certainly, in my experience, doesn't seem to suffer from the same sort of rather weird problems.
For most users, the major difference between the various versions of Windows is the interface, probably followed by any new services/features, and then followed by boot/load times. I see very little info regarding changes in the kernel or infrastructure services though, which is where any work is actually done.
* yeah - systemd bait.
"why do I want to spend silly money on a power hungry radio?"
For me, this is the most compelling argument against DAB. The only place I would use a portable DAB radio is in my kitchen, where a 30-year-old, non-digitally-tuned £5 FM radio currently suffices. Thanks to its complete lack of digital electronics I only need to replace its 2xAA batteries perhaps twice a year.
On a slightly off-topic point from the same article: "Designed like an award-winning glowing bowl..." Erm... I didn't realise that there had already been an award-winning glowing bowl... but then again, I'm not the sort of person who would feel the need to "experience the light that you want, wherever you want as you move around your house and garden." either.
Think: From Richard Burton doing Dylan Thomas to William Shatner doing Lennon & McCartney...
Suggest pairings of performers & writers that _haven't_ happened, and are never likely to happen.
All I'm looking for are the performers and the writers, not any specific works (it's ok if you do have a specific work in mind but it's not the work that's important, it's more the subjective style and depth of the writer combined with the performer).
I'd like to start with Vivian Stanshall performing something written by Iain Banks.
Shirley, shouldn't it be dark matter particles colliding with anti-dark matter particles that result in annihilation?
But then if anti-matter (electron + positron) can be created by a pair of gamma photons, and photons are their own anti-particles, then shouldn't we be talking about dark-matter anti-anti-particles?
Data without context is just noise.
So it comes down to context: if a common context applies to multiple data items then those items comprise a single collection of data items.
If different data items require different contexts then you have datas.
Disclaimer: I haven't read the book but the Coriolis effect, as described in the quoted excerpts, doesn't make sense.
The Coriolis effect operates in the plane of rotation and, on the basis that the spacecraft is rotating to simulate gravity (because if you've already got artificial gravity then rotating the ship is pointless and just makes things difficult for yourself), the Coriolis effect will only be a real problem to any jugglers on board, i.e. in the perceived up-down axis.
In the first quote, where something is thrown 'across the compartment', the effect of rotation wouldn't turn it into a curveball; it would still follow a straight path _across_ the compartment, but would just appear to fall or rise at a different rate, depending on the direction in which it was thrown.
The second quote makes a bit more sense because Liana is dropping, i.e. moving radially within the plane of rotation, but even then the result of the Coriolis effect would be to make anything fixed to the ship, such as a ladder she might be descending, just appear to move sideways relative to her; perhaps it's just a bad choice of words, but 'fending off Coriolis' suggests she's being pushed or pulled against something, when what she'd really need to do would be to hold on against the apparent sideways drift.
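To put some rough numbers on how noticeable that sideways drift would be, assuming (hypothetically, since the book's ship isn't specified) a 100 m radius ring spun for 1 g at the rim:

```python
import math

# Coriolis acceleration in a spinning habitat: a_c = 2 * omega * v for
# motion perpendicular to the spin axis. Radius and speeds are assumed
# illustrative values, not figures from the book.
G = 9.81           # target apparent gravity at the rim, m/s^2
RADIUS = 100.0     # habitat radius, m

omega = math.sqrt(G / RADIUS)       # spin rate from a = omega^2 * r

v_drop = 5.0                        # someone descending at 5 m/s
a_coriolis = 2.0 * omega * v_drop

print(f"spin rate: {omega:.3f} rad/s ({omega * 60 / (2 * math.pi):.1f} rpm)")
print(f"Coriolis acceleration at 5 m/s radial motion: {a_coriolis:.2f} m/s^2")
```

That's roughly a third of a g of apparent sideways push, so holding on against the drift while descending a ladder is entirely plausible.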
What do you call a one-eyed dinosaur?
Doyouthinkhesaurus.
How do you know that elephants have been hiding in your fridge?
Footprints in the custard.
What's red and comes in tubes?
Underground train disasters.
What goes "Bip, bip, bippity-bip, bip, bip, bippity-bip-bip, bip, bip, bip, bip, bippity..."?
A ping-pong ball in a tornado.
What goes "Yelp, leap, bong, splash, yelp, leap, bong, splash, yelp, leap, bong, splash..."?
A frog in a pressure cooker.
A man woke up on the beach of a desert island and realised that he'd been painted dark red...
...He'd been marooned.
Nothing to do with topic...
"I am blind -- but I am able to read thanks to a wonderful new system known as 'broille' . . . I'm sorry, I'll just feel that again."
Peter Cook (one of Secret Policeman's Balls, I think)
My favourite was:
"WORLD WAR 2 BOMBER FOUND ON MOON"
... followed a week or two later by:
"WORLD WAR 2 BOMBER FOUND ON MOON VANISHES"
My place in the queue was handed down to me by my father, and it was handed to him by my grandfather...
Largely agree re the Transputer. Considering the technology of the time, though, it was a pretty good effort, and if it had achieved take-up in its intended field of GP/MPP/HPC instead of ending up in embedded, I reckon the problems and omissions, like the MMU, would have been addressed - bear in mind that an MMU for a Transputer, designed primarily around large-scale parallelisation ideas and concepts, would be a bit trickier to design than an MMU for a von Neumann architecture CPU.
Interesting that no specific architecture was mentioned in the article - it might give some idea of power draw. Guess a bit of digging to find out whether any of the major chip companies are sponsoring the work might give an answer, if they're licensing one of the common architectures.
Police operations and actions concerning copyright theft are performed on behalf of the entertainment industry, but funded by tax payers. Can you see what they did there?
That sounds like one of the 'Aurora' conversations. Several conversations along those lines between unidentified aircraft and ATC (i.e. pilots alerting ATC that they were _descending_ to a Flight Level way above the ceiling of known aircraft) have been reported by various plane spotters and cited as evidence for the Aurora spy plane. Other variations include F-4/F-15 jocks boastfully reporting ascending to FL600+ to ATC, only to be 'trumped' by an Aurora pilot reporting that he's descending to an even higher FL.
None of these conversations can be verified though, so they don't really count for anything.
On the other hand, there has been some pretty good evidence for an Aurora-type aircraft. Amongst the best is a sighting over the North Sea by someone who had been in the Royal Observer Corps International Aircraft Recognition Team. There was also a series of sonic booms recorded by the USGS seismic sensor array in Southern California which, when analysed, indicated an aircraft smaller than the Shuttle flying overhead at ~90,000ft at Mach 4-5.

Then there was a photo taken by a geosynchronous weather satellite that appeared to show a very high-altitude, high-speed contrail starting at Groom Lake and extending directly East across the Atlantic Ocean (it had to have been created very quickly, i.e. by a very high-speed aircraft, otherwise it would have started to disperse at its start). Not sure if this photo was ever verified though.

Most of this evidence dates from the late '80s, through the '90s, to the early '00s. However, there have been some reports of more recent sightings over Kansas and Texas, with photographs, earlier this year, in February and March, which might tie in with Aurora missions associated with the on-going Ukraine/Crimea situation.
Probably the best evidence against an Aurora-type aircraft is this: if it does exist then it won't be entirely unknown to the military forces of the rest of the world, even if they don't know its full capabilities; the only people from whom its existence is actually being kept secret are the general public, who don't really count. So what's the point in keeping its existence secret from the public when the rest of the world's militaries know about it?
"There is another possibility. The Big Bang we measure in the past is an echo of the Big Bang we foresee in the future."
If so, what has the echo hit to be rebounded?
Didn't understand the relevance of the rest.
The problem with this explanation is that it requires the future to be pre-determined i.e. fate/destiny.
From our point of view, a particle going 'backwards' through time would appear to get younger as (our) time progresses - so far, so good, but this means that as we continue our course 'forwards' through our time we must reach the point where the observed particle reaches an age of zero, or in other words, its origin.
So when we scroll back to the point where we first observed the particle, we would know right then that its creation event must already be fixed in our future. Thus, our future must be pre-defined.
This is not to say that at the Big Bang there wasn't also an opposite direction to time but if so the 'now' in our temporal direction won't be in the same place as 'now' in the opposite temporal direction...
The BB is at the bar character, our 'now' is at x and the opposite direction's 'now' is at y; so whilst they can both exist, they're in entirely different places. We can only be aware of things that are at 'x'.
Adapting this diagram to illustrate my initial point would give us:
...which implies that the Big Bang has occurred both in our past and in our future. However, the trouble with this is that if the BB is the _single_ origin of both temporal directions it can't be in more than one temporal place.
"It's flat at the bottom and the rest is diffuse."
That was the very first thing that struck me. The second thing was that there's no pixelisation in the enlarged image - whoops!
If you look at the source image from JPL (jpeg - bah!) the brightened region is just 3 pixels wide by 6 pixels high, so the diffuse appearance in the enlarged image is due to interpolation employed when it was blown up - doh!
What _is_ interesting is that only the lower 4 pixels of the central column have been burned out, with the upper two pixels of that column and both of the two side columns being much less bright, so that you can actually make out the background through them.
I'd say that this isn't a good match for a cosmic ray striking the detector because although the halo pixels could be the result of internal reflection within the layers of the detector they don't spread below the bright column as they do above it; the bright region, even in the raw image, cuts off uniformly across the bottom.
So I had a look at the other image that's referred to and this does have different characteristics; instead of a line of burnt-out pixels there's a 2x2 block, which in itself is not significant because it could indeed be due to the angle of incidence at which the cosmic ray hit the detector, but more importantly, there's a 1-pixel halo of brightened pixels on _all_ sides of the burnt-out region, which is a better match for a cosmic ray striking the detector.
Going back to the first image, I don't think it's a reflection either, because the bottom pixel, for sure, and possibly the one above it as well, in the burnt-out 4 pixel column appear to be coming from a bit of the ground that's in shade - the camera is pointing almost directly towards the sun with the bright region appearing to originate just this side of a ridge, on a slope that's falling towards us.
All things considered, and bearing in mind there's apparently a one-second time difference between the images from the left and right cameras, I'm more inclined to think that this might have been a small meteorite strike.
I think you & Wikipedia are largely correct, but I'd be a bit surprised if there weren't a few stations off the south coast of Australia/Tasmania; targets detected in the northern Atlantic and moving south would be followed by a western sub, but you'd also want to keep an eye on subs entering the Indian Ocean from the East via the South Pacific. I certainly agree that there won't be any SOSUS sensors in the Southern Ocean itself; there isn't really anywhere for the sensors to be linked to, apart from Australia (I don't think that political relations have been good enough for stations to have been set up on Cape Horn or the Cape of Good Hope).
I think it's unlikely to be military because there's not a lot of point in surveying, and spending valuable limited bandwidth on analysing, thousands of square miles of the strategically and tactically unimportant Southern Ocean.
Spy sats are good for surface targets, like ships, but the Southern Ocean is a very hostile place for surface vessels to operate. Whilst it may be a good out-of-the-way sort of place for subs to loiter, subs aren't good spy-sat targets; they're better tracked by other systems, such as SOSUS. There just aren't any targets for which the Southern Ocean is the best place to operate; apart from loitering to waste time there's not a lot of point in having military resources in the Southern Ocean, so there's little point in having spy sats survey it.
..never found a use for it myself, but if you ever need to calculate the side 's' of a square having equal area to a circle of radius 'r' you would normally do...
s= (pi * (r**2))**0.5
... which is a relatively heavy calculation if you have to do it a lot of times.
Instead, you can reduce this overhead quite a bit by first precalculating an alternative constant 'k' to use instead...
k= (pi**0.5) - 1
...which works out to 0.7724538509...
The side 's' of the square can now be calculated by...
s= r + (r * k)
...which is a fair bit easier to do.
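A quick check that the shortcut agrees with the direct formula (self-evidently, k is just √π − 1, so s = r + r·k = r·√π):

```python
import math

# Verify that the precomputed-constant shortcut above agrees with the
# direct formula s = sqrt(pi * r^2).
k = math.sqrt(math.pi) - 1      # 0.7724538509...

def side_direct(r):
    return (math.pi * r**2) ** 0.5

def side_shortcut(r):
    return r + r * k            # one multiply and one add

for r in (1.0, 2.5, 10.0):
    assert math.isclose(side_direct(r), side_shortcut(r))
print(side_shortcut(1.0))       # sqrt(pi) = 1.7724538509...
```

Whether it's actually faster depends on the hardware; on a modern FPU the saving is marginal, but on anything where exponentiation is expensive it's a genuine win.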
And while we're talking about Pi...
To keep my daftness muscle in trim I once wrote some distributed(*) code to calculate Pi using random numbers. Generate a pair of random numbers between 0.0 & 1.0, treat them as x/y coordinates and take the magnitude of the resulting vector: if √(x² + y²) is > 1.0 the point falls outside a quadrant of a circle of radius 1.0, whereas if it's <= 1.0 it's inside the quadrant.
Do this a lot of times and keep track of the total number of pairs (t) and the number of pairs that fall inside the circle (i).
Pi = (i / t) * 4
It's probably the least efficient way of calculating Pi but it works.
(*) distributed because you really need to do trillions of pairs to get any accuracy.
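A minimal sketch of the method (note the inside test uses the squared magnitude, x² + y² <= 1, which avoids taking the square root at all):

```python
import random

# Monte Carlo estimate of pi: sample random points in the unit square
# and count those landing inside the quarter circle of radius 1.0.
# The inside/total ratio approaches the quadrant's area, pi/4.
def estimate_pi(samples, seed=42):
    rng = random.Random(seed)   # seeded so runs are repeatable
    inside = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples

print(estimate_pi(1_000_000))   # slowly converges towards 3.14159...
```

It also parallelises trivially: each node just reports its own (i, t) pair and you sum them; the error only shrinks as 1/√t, hence the need for trillions of pairs.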
Personally, I enjoy coding but I can see that there's no long-term future in it.
In the not-too-distant future all s/w will be written by AIs, which themselves will take the place of Operating Systems.
At the most basic level, s/w could be developed by copying the method that got us here: Evolution. An AIOS could simply randomly mutate existing s/w at a binary level, then see if the s/w still works, and then assess if there's any improvement. Very inefficient, of course, just like the real thing, except that whereas the evolution of life has taken a few billion years, doing it in silicon would be considerably faster. The same basic process could be made more efficient by moving up a level to working with logic blocks instead of binary bits, but in practice I suspect that directed mutation, as opposed to random mutation, would be even more efficient.
For this to work though, it'll be necessary for the individual AIOSs to be able to communicate with each other, to compare and share results: Skynet anyone?
Naturally, the big s/w houses won't be at all keen on the idea of being made redundant so when this eventually emerges from academia they'll be spreading a lot of FUD about it, and probably even pushing for legislation against it.
Time scale: less than twenty years.
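For what it's worth, the mutate-test-keep loop described above fits in a few lines; the distance-to-target fitness function here is a toy assumption, and real software offers nothing so convenient:

```python
import random

# Toy hill-climbing sketch of the mutate-and-assess loop: randomly
# mutate a byte string, keep any mutation that doesn't make things
# worse, and stop when the target is matched.
def evolve(target: bytes, generations=50_000, seed=1):
    rng = random.Random(seed)
    candidate = bytearray(rng.randrange(256) for _ in target)

    def fitness(b):             # lower is better: total byte-wise error
        return sum(abs(a - t) for a, t in zip(b, target))

    best = fitness(candidate)
    for _ in range(generations):
        mutant = bytearray(candidate)
        mutant[rng.randrange(len(mutant))] = rng.randrange(256)
        score = fitness(mutant)
        if score <= best:       # keep anything that's no worse
            candidate, best = mutant, score
        if best == 0:
            break
    return bytes(candidate)

print(evolve(b"hello"))   # usually lands on b'hello' given enough tries
```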
Notwithstanding the problem of head levitation, I believe that it is actually easier to maintain a vacuum than to store Helium.
The problem with storing Helium is that it's monatomic, i.e. its normal state is in the form of single atoms, whereas Hydrogen, for example, is normally found bound together in molecular pairs. This means that the individual atoms of Helium are smaller than the atoms or molecules of any other element, with the consequence that Helium will eventually leak through any physical container, regardless of what it's made from and even if it's hermetically sealed.
Maintaining a vacuum, on the other hand, is easier because the molecules in the air that you're trying to keep out are much larger than the monatomic Helium you'd be trying to keep in.
I wish you could get rack-mounting kits that would allow you to mount 19"/21" units vertically in full/half-height cabs.
I compile my own kernel images for my x86/64 systems and it's not terribly onerous or time consuming. They're typically about half the size of a stock kernel image and still include a lot of usb drivers for stuff I haven't actually got. Whilst the reduction in size is nice the main reason I roll my own is that less code = less potential to go wrong (I also always install a stock kernel as well but that's really for just in case).
...but you're having to move the entire bulk of the camera instead of just the sensor. If you were just moving the sensor you'd have a lot less mass to deal with, both in terms of the sensor payload and the mounting itself, which would considerably reduce the unit's size and, as a consequence, its power draw, as well as making it more comfortable and safer to use.
Once you've proved your initial/Mk 1. version you'll really need to sort out either a supply deal for the sensor and electronics, sans casing, or a licensing deal for manufacture by the camera makers (which I think would probably be your best bet).
One good thing is that you've got some publicity, via this article, which strengthens your negotiating position re supply/manufacturing deals. I notice that there's no mention of patents in the article, but I believe that having gone public means that it's now unpatentable due to prior art; so whilst anyone could now take your idea and start churning out their own 'Mk 2' version, the goodwill they'd earn by not ripping off your idea would probably make it worth their while to reach an agreement with you.
Btw, what's the problem with PID controllers? Sure, they need to be tuned for any specific application, which can be a pain, but that's a one-off task. On the plus side, digital/sw PID controllers are extremely simple and efficient, and (joke alert) they can even be implemented in analogue.
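To back up the point that a digital PID really is only a few lines, here's a minimal textbook implementation; the gains and the toy plant are entirely arbitrary illustrations, since any real application needs its own tuning:

```python
# Minimal discrete PID controller. The gains and the toy plant below
# are illustrative assumptions, not tuned values for any real system.
class PID:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def update(self, measurement, dt):
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None \
            else (error - self.prev_error) / dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Drive a toy integrator plant (value rises at the commanded rate)
# towards a setpoint of 1.0.
pid = PID(kp=2.0, ki=0.5, kd=0.1, setpoint=1.0)
value, dt = 0.0, 0.05
for _ in range(400):
    value += pid.update(value, dt) * dt
print(round(value, 3))
```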
Looks like that's taken from the Met Office...
I don't believe I've ever disagreed quite so strongly with a Reg article before, i.e. with what both BB and the marketing droid have said.
Simply put, a backup is a copy of current data whereas an archive holds historical data i.e. old versions of current data and deleted data that is no longer present in the current data.
And furthermore, if you can't easily and reliably identify and retrieve any particular item from a backup or an archive then you don't actually have either; simply putting a copy of something somewhere, with no means of identifying and retrieving it is equivalent to deleting it.
I think it's fair to say that most people were surprised when the first of these close-orbiting gas giants was found, but they probably retain most of their gaseous atmospheres thanks to their gravity and magnetospheres.
Jupiter is a smidge under 318 times the mass of Earth, so even for the two very close gas giants we're still looking at planets of perhaps ~100x the mass of Earth. At the same time, Jupiter's magnetic field is ~14 times stronger than the Earth's, believed to be generated by convection currents in its metallic hydrogen layer, so it wouldn't be implausible for these close-orbiting gas giants to have even stronger magnetospheres, their higher temperatures resulting in even more vigorous convection currents.
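A quick sanity check on those numbers (constants here are from memory, so treat them as approximate): ~318 Earth masses buys surprisingly little extra _surface_ gravity, because the radius grows along with the mass.

```python
# Rough surface gravity of Jupiter from g = G * M / r^2.
G = 6.674e-11                  # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24             # kg
M_JUPITER = 317.8 * M_EARTH    # "a smidge under 318 times"
R_JUPITER = 6.9911e7           # mean radius, m

g_jupiter = G * M_JUPITER / R_JUPITER**2
print(f"~{g_jupiter:.1f} m/s^2, i.e. ~{g_jupiter / 9.81:.1f}x Earth's")
```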
Apart from the brief mentions of the SharePoint and Exchange superstructure products that run upon the OS infrastructure, all of the other comments, up to the time I posted this, are purely concerned with the GUI shell, and I think this probably reflects the fact that the GUI shell is synonymous with the OS in most users' minds.
MS could easily get away with releasing a 'new' Windows9 product by simply changing the GUI shell for W8, which would basically be money for old rope.
Expect it soon?
...but I would have said it shows a barred-spiral with only two arms.
“We've spent about 2,000 man-hours a year doing field trials with service providers – going down the holes where the rats are,” he said.
2000 man-hours per year sounds a bit underwhelming - it's just one full-time worker.
...and not only does it look a little suspect, but have you tried wiping your bottom with one when you realise the paper's run out?