121 posts • joined 12 Apr 2012
Waste of time
Personally, I enjoy coding but I can see that there's no long-term future in it.
In the not-too-distant future all s/w will be written by AIs, which themselves will take the place of Operating Systems.
At the most basic level, s/w could be developed by copying the method that got us here: Evolution. An AIOS could simply randomly mutate existing s/w at a binary level, check that the s/w still works, and then assess whether there's any improvement. Very inefficient, of course, just like the real thing, but whereas the evolution of life has taken a few billion years, doing it in silicon would be considerably faster. The same basic process could be made more efficient by moving up a level to working with logic blocks instead of binary bits, though in practice I suspect that directed mutation, as opposed to random mutation, would be more efficient still.
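A toy sketch of that mutate-and-test loop in Python (the byte-list program representation and the fitness function here are invented purely for illustration):

```python
import random

def mutate(program):
    """Flip one random bit in a 'program' represented as a list of bytes."""
    mutant = program.copy()
    i = random.randrange(len(mutant))
    mutant[i] ^= 1 << random.randrange(8)
    return mutant

def evolve(program, fitness, generations=1000):
    """Keep a mutation only if the program 'still works' and scores better."""
    best_score = fitness(program)
    for _ in range(generations):
        candidate = mutate(program)
        score = fitness(candidate)
        if score > best_score:  # improvement found: adopt the mutant
            program, best_score = candidate, score
    return program

# Toy fitness: how close the byte values sum to an arbitrary target.
target = 1000
prog = [random.randrange(256) for _ in range(8)]
evolved = evolve(prog, lambda p: -abs(sum(p) - target))
```

Directed mutation would replace the blind bit-flip in `mutate` with something that biases changes towards regions known to affect fitness.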
For this to work though, it'll be necessary for the individual AIOSs to be able to communicate with each other, to compare and share results: Skynet anyone?
Naturally, the big s/w houses won't be at all keen on the idea of being made redundant so when this eventually emerges from academia they'll be spreading a lot of FUD about it, and probably even pushing for legislation against it.
Time scale: less than twenty years.
Re: Wow...so intriguing...
Notwithstanding the problem of head levitation, I believe that it is actually easier to maintain a vacuum than to store Helium.
The problem with storing Helium is that it's monatomic, i.e. its normal state is in the form of single atoms, whereas Hydrogen, for example, is molecular and is normally found bound together in pairs. This means that individual Helium atoms are smaller than the atoms or molecules of any other element, with the consequence that Helium will eventually leak through any physical container, regardless of what it's made from and even if it's hermetically sealed.
Maintaining a vacuum, on the other hand, is easier because the molecules in the air that you're trying to keep out are much larger than the monatomic Helium atoms you're trying to keep in.
Horizontal mounting only makes sense for small enclosures
I wish you could get rack mounting kits that would allow you to mount 19"/21" units vertically in full/half-height cabs.
It doesn't take much time to trim a kernel
I compile my own kernel images for my x86/64 systems and it's not terribly onerous or time-consuming. They're typically about half the size of a stock kernel image and still include a lot of USB drivers for stuff I haven't actually got. Whilst the reduction in size is nice, the main reason I roll my own is that less code = less potential to go wrong (I also always install a stock kernel as well, but that's really just in case).
...but you're having to move the entire bulk of the camera instead of just the sensor. If you were just moving the sensor you'd have a lot less mass to deal with, both in terms of the sensor payload and the mounting itself. This would considerably reduce its mass, size and, as a consequence, its power draw, as well as making it more comfortable and safer to use.
Once you've proved your initial/Mk 1. version you'll really need to sort out either a supply deal for the sensor and electronics, sans casing, or a licensing deal for manufacture by the camera makers (which I think would probably be your best bet).
One good thing is that you've got some publicity, via this article, which strengthens your negotiating position re supply/manufacturing deals; I notice that there's no mention of patents in the article but I believe that having gone public means that it's now unpatentable due to prior art, so whilst anyone could now take your idea and start churning out their own 'Mk 2.' version the goodwill they'd get by not ripping-off your idea would probably make it worth their while to reach an agreement with you.
Btw, what's the problem with PID controllers? Sure, they need to be tuned for any specific application, which can be a pain, but that's a one-off task. On the plus side, digital/sw PID controllers are extremely simple and efficient, and (joke alert) they can even be implemented in analogue.
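For anyone curious, a digital PID really is only a few lines; a minimal Python sketch (the gains and the toy 'plant' below are illustrative, not tuned for any real application):

```python
class PID:
    """Minimal discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a crude plant (its output simply integrates the control signal)
# towards a setpoint of 1.0, simulating 20 seconds at 100 Hz.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
value = 0.0
for _ in range(2000):
    value += pid.update(1.0, value) * 0.01
```

The tuning pain mentioned above is exactly the choice of `kp`, `ki` and `kd` for a given plant; the loop itself never changes.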
Looks like that's taken from the Met Office...
I don't believe I've ever disagreed quite so strongly with a Reg article before, or with what both BB and the marketing droid have said.
Simply put, a backup is a copy of current data whereas an archive holds historical data i.e. old versions of current data and deleted data that is no longer present in the current data.
And furthermore, if you can't easily and reliably identify and retrieve any particular item from a backup or an archive then you don't actually have either; simply putting a copy of something somewhere, with no means of identifying and retrieving it is equivalent to deleting it.
Re: It does not add up.
I think it's fair to say that most people were surprised when the first of these close-orbiting gas giants were found but they probably retain most of their gaseous atmospheres due to their gravity and magnetospheres.
Jupiter is a smidge under 318 times the mass of Earth, so even for the two very close gas giants we're still looking at planets with ~100x the mass of Earth. At the same time, Jupiter's magnetosphere is ~14 times stronger than the Earth's, believed to be caused by convection currents in its liquid metallic hydrogen interior, so it wouldn't be implausible for these close-orbiting gas giants to have even stronger magnetospheres, their higher temperatures resulting in even more vigorous convection currents.
There's an interesting aspect to the comments so far
Apart from the brief mentions of the SharePoint and Exchange superstructure products that run upon the OS infrastructure, all of the other comments, up to the time I posted this, are purely concerned with the GUI shell, and I think this probably reflects the fact that the GUI shell is synonymous with the OS in most users' minds.
MS could easily get away with releasing a 'new' Windows9 product by simply changing the GUI shell for W8, which would basically be money for old rope.
Expect it soon?
Nice artist's impression...
...but I would have said it shows a barred-spiral with only two arms.
One man job?
“We've spent about 2,000 man-hours a year doing field trials with service providers – going down the holes where the rats are,” he said.
2000 man-hours per year sounds a bit underwhelming - it's just one full-time worker.
...and not only does it look a little suspect, but have you tried wiping your bottom with one when you realise the paper's run out?
Everything's working perfectly to plan...
Not only is that another £100m+ of taxpayer money channelled into the hands of the already wealthy, but it'll also reduce the amount of benefits paid out by simply not working.
Re: Why a hexagon?
I stand corrected - you're probably right: http://en.wikipedia.org/wiki/Saturn's_hexagon
Re: Why a hexagon?
A very good question.
From the article: NASA explains that the storm "folds into a six-sided shape because the hexagon [in the image] is a stationary wave that guides the path of the gas in the jet".
Which essentially says that it's six-sided because it's a Hexagon! I've added the omission/qualifier because without it the statement appears to be redefining the meaning of "hexagon" to mean a "stationary wave".
So why is it hexagonal? Well, I suspect that it's just an imaging artifact due to the photo being a composite of six images; the entire polar hemisphere can't be photographed in a single shot because half of it will be in shadow and two thirds of what isn't in shadow will still be significantly darker than the middle third that most closely faces the Sun; to get an evenly illuminated image, as shown, you need to take multiple photos and stitch them together.
So there are not six nodes but just one, which is probably due to solar heating at the boundary of the storm facing the Sun with the result that it shifts a few degrees of latitude further away from the poles but which stays in the same place i.e. facing the Sun, as the planet revolves.
Re: Please!!! Won't somebody think of the children?
How can you express annoyance at people's inability to understand the difference between the omissive and possessive apostrophe yet enjoy 'eventuate'?
Ahh... I think I understand now.
Please!!! Won't somebody think of the children?
I'm not often moved to spit fire but "EVENTUATE" ??? FFS!!! What's wrong with occur?
Satellite bathymetry sounds complicated...
...except it essentially just means extrapolating water depth from surface colour.
"...the Red Planet
may be have been more volcanically active than originally believed."
Re: But faster than light time becomes imaginary
The Lorentz formula for relativistic time dilation, which is basically just the summing of space and time movement vectors using Pythagoras's right-angle triangle formula, tells us an awful lot of really interesting things about space-time.
Perhaps the most important thing it tells us is that our movement through space and our movement through time are similar enough phenomena that the two vectors can actually be summed: space and time are not two entirely different things, for if they were then the vectors couldn't be summed to reach a meaningful answer. Relativistic time dilation has been proved to work in accordance with the Lorentz formula, probably the best known example being the timing allowances that need to be built into the GPS satellites to account for both gravitational and relativistic time dilation.
The nature of the Pythagoras/Lorentz formula is such that there is no scope for discontinuities in the curve produced, with the consequence that it sets upper and lower bounds for the rates of movement through both space and time: we term the fastest you can go 'c' and the slowest is zero. This means that your individual vectors through space or time can't exceed 'c', or go slower than zero, within the dimensions being summed; in other words, you can't exceed 'c' in this universe because, as far as we can tell, Pythagoras/Lorentz always applies.
For anyone who's interested, take the Lorentz formula, simplify it to produce a factor by removing the relative terms, and then, using normalised values (0 = 0, 'c' = 1), punch it into a spreadsheet and plot the results - you'll get a quarter-circle arc of radius 1 ('c'). So whilst Lorentz holds true we exist only on that one-dimensional arc, curved through space-time; never inside or outside.
Once you've done that, you can then start speculating about the rest of the circle ;-)
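The spreadsheet exercise can equally be sketched in a few lines of Python, using the normalised relation rate_t = sqrt(1 - rate_s^2) with c = 1:

```python
import math

def time_rate(space_rate):
    """Normalised Lorentz relation (c = 1): the rate of movement through
    time for a given rate of movement through space."""
    return math.sqrt(1.0 - space_rate ** 2)

# Sample the curve from standstill (0) up to light-speed (1)...
points = [(v / 100, time_rate(v / 100)) for v in range(101)]

# ...and confirm every point lies on the unit circle: v^2 + t^2 = 1,
# i.e. the quarter-circle arc of radius 1 described above.
for v, t in points:
    assert abs(v * v + t * t - 1.0) < 1e-9
```

At `space_rate = 0` you move through time at the full rate of 1; at `space_rate = 1` ('c') your rate through time is zero.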
Re: Well, someone will have to mention the SFX in 2001...
Nope, I didn't mean the slit-scan sequences but the model and instrument display graphics work, which still don't look dated and which made early real CGI attempts look pretty poor in comparison (compare the high-res instrument displays in 2001 with the relatively low-res efforts in Star Wars etc.).
Thanks for the explanation on slit-scan photography but I actually coded some stuff to churn out animation frames a few years ago.
Well, someone will have to mention the SFX in 2001...
...so I just did
Not quite invisibility
This isn't really invisibility because whilst it may cancel reflections it doesn't substitute a replacement image. To use the Harry Potter analogy, when he put it on you'd see a pitch-black silhouette instead of what's behind him. As far as radar goes, well, it would probably be OK for above-the-horizon targets because you wouldn't get a return from the sky anyway, so the 'hole' wouldn't be apparent, but for targets below the horizon, where a discriminator is needed to pick the target out from the background returns, that 'hole' might be noticeable, at least at relatively short ranges.
Re. "bypass[ing] obstacles", yes, it could certainly act as a repeater in the radio spectrum, and by messing about with phasing it could make itself appear to be bigger or smaller, or in a different place, but then that's not making it invisible.
As for making it work in the visible spectrum? Well, I think they've got their work cut out there. With enough processing power you could make it work for _one_ point of view, but not for many points of view because the system would need to simultaneously project different backgrounds, not only for different directions but also for different perspectives, which isn't simply a processing issue; each single (re)emitter in the array would simultaneously need to send different signals corresponding to all the possible viewing positions. For example, if we imagine we're using an array of LEDs to display a visible spectrum replacement image then each individual LED would need to appear to be a different colour and brightness depending upon where it's being viewed from.
Re: There's science and then there's wild guesswork
I don't think we need to question our understanding of how planets form to answer this; all that's needed is a sequence of interactions with other bodies in the system.
In fact, the solution is implicit in the question: if the planet formed further out from its star, as we believe it must have, then what caused it to migrate inwards? The only plausible* explanation is that an interaction with another** massive body in the system disrupted the planet from the stable orbit in which it formed and sent it inwards towards its star. Whilst on its way, then, it must have interacted with a third body in such a way as to have its course changed for a second time, leaving it in its current orbit. Incidentally, due to the equal-and-opposite etc., the second interaction would have also left the third body in a modified orbit, moving it either further out or further in; if inwards, considering the distances, it probably collided with the star.
It's currently considered highly possible that the bodies in our system didn't originally form in their current orbits: the gas giants in particular are now thought to have formed much closer to Sol and to have subsequently migrated outwards to their current orbits.
* plausible = non-artificial
** There's a small chance it could have been a 'run-away' extra-system body just passin' through.
Re: He's right.
I'm _guessing_ that perhaps Linus's annoyance with undiscoverable CPU features isn't that they're not documented but that he can't use a small number of methods or functions to identify them and instead needs unique code for each one. I'll swiftly add that this isn't due to laziness but because of the increased amount of code that needs to be maintained, and with more code you have a greater likelihood of errors.
In coding terms it's a bit like having a line of code for every data record you need to process instead of processing all data records in a single loop.
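To make the analogy concrete, a contrived Python illustration (the feature names and flag bits here are made up, not real CPUID values):

```python
# Hypothetical CPU feature flags, purely for illustration.
FEATURES = {"sse2": 0x1, "avx": 0x2, "avx512": 0x4}

def detect_hardcoded(flags):
    """One branch per feature: the code grows (and rots) with every new CPU."""
    found = []
    if flags & 0x1:
        found.append("sse2")
    if flags & 0x2:
        found.append("avx")
    if flags & 0x4:
        found.append("avx512")
    return found

def detect_generic(flags):
    """A single loop over a table: a new feature only needs a table entry."""
    return [name for name, bit in FEATURES.items() if flags & bit]
```

The complaint, as I read it, is about hardware that forces the first style on you when the second would otherwise suffice.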
However, this is all nothing to do with what the article was about, which was, essentially, is he getting too stressed out?
Well, there's no doubt that he's in a _very_ stressful position: the Linux kernel runs on a wider range of h/w than any other bit of s/w that's ever existed, and trying to coordinate, collate and implement the correspondingly wide amount and variety of code modules submitted by an equally wide variety of code submitters, from individuals to global corporations, can be nothing but stressful.
So is he getting too stressed out? No, what he said was not to be taken literally and was clearly a tongue-in-cheek gesture to express his annoyance, the severity of the gesture indicating his degree of annoyance. This is something that every human being does from time to time: someone does something a bit stupid or thoughtless and the person who is annoyed by it replies with a deliberately irrational, out-of-proportion and over-the-top response. It's no different to that young chap who threatened to blow up an airport because of repeated delays due to the volcanic ash cloud, or Jeremy Clarkson suggesting that certain people who had annoyed him should be shot in front of their families.
Personally, I'm more concerned with the mental wellbeing of people who appear to be either incapable of discerning the real meaning of these gestures or, even more worryingly, use them as the basis for an attack on someone. Why even more worryingly? Because if you feel entitled to launch an attack on someone whom you don't know because of something they said to someone else whom you don't know then you must be _really_ fscked up.
"In small stars, the gamma rays created in the explosion take a short time to reach the edge of the star, where they are propelled into space. With huge stars, the explosion takes much longer to travel through the star, meaning the gamma ray burst is longer."
Not quite right, I'm afraid.
It's correct that the larger the star, the longer it will take for the effects of an 'explosion' (not really the right word to use) at its center to propagate to the star's surface, but what we're talking about here is the duration of that 'explosion', or in other words, how long the 'explosion' went on for.
Whilst it's possible that the passage of the initial effects of the event changed the characteristics of the star as it passed through it, it's extremely difficult to imagine what sort of change could have occurred that would've slowed the effects from the final part of the event to the degree observed; in short, it would've taken roughly the same time for both the first effects and the final effects to make the journey.
To be sure though, a larger star will be required for a longer event; a smaller star would not have enough matter to sustain a longer event.
As no mention was made of the relative intensities of the gamma rays from this long burst compared with a more typical short burst (adjusted for distance, of course), then assuming the intensities were similar, the rate of matter-energy conversion was about the same, which means that more matter had to be converted, over a longer period of time, which means a larger star. If, however, the intensity of the longer burst was relatively greater, it means that a higher rate of matter-energy conversion was going on, and as the period of time remains unchanged the star must have been even bigger.
Re: QNX Photon microGUI
If RIM could be persuaded to open-source QNX it would certainly shake things up.
QNX Photon microGUI
Anyone remember the QNX Photon microGUI that was part of the QNX demo floppy?
Re: There is only one thing a text editor needs
...A journal. I remember (I blame the COBOL thread for making me reminisce) the 'full-screen' editor on a DEC VAX 11/780 that simply journalised every editing action. Was fun to crash out of a long editing session, restart the editor and sit back and watch it re-apply every key-stroke up to the point of the crash. Did rectangular-block copy&paste too.
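The principle is easy to sketch; a toy Python version (the journal format and class here are invented for illustration, not the VAX editor's actual mechanism):

```python
import json
import os
import tempfile

class JournaledBuffer:
    """Toy journaling editor buffer: every edit is written to the journal
    file before being applied, so a crashed session can simply be replayed."""

    def __init__(self, journal_path):
        self.journal_path = journal_path
        self.text = ""

    def insert(self, pos, s):
        self._record({"op": "insert", "pos": pos, "s": s})

    def delete(self, pos, n):
        self._record({"op": "delete", "pos": pos, "n": n})

    def _record(self, entry):
        with open(self.journal_path, "a") as f:
            f.write(json.dumps(entry) + "\n")   # journal first...
        self._apply(entry)                      # ...then apply

    def _apply(self, e):
        if e["op"] == "insert":
            self.text = self.text[:e["pos"]] + e["s"] + self.text[e["pos"]:]
        else:  # delete
            self.text = self.text[:e["pos"]] + self.text[e["pos"] + e["n"]:]

    def replay(self):
        """Rebuild the buffer from scratch, as after a crash."""
        self.text = ""
        with open(self.journal_path) as f:
            for line in f:
                self._apply(json.loads(line))

# Simulate an editing session, a 'crash', and a replay.
path = os.path.join(tempfile.mkdtemp(), "session.journal")
buf = JournaledBuffer(path)
buf.insert(0, "hello wordl")
buf.delete(6, 5)
buf.insert(6, "world")
before_crash = buf.text

recovered = JournaledBuffer(path)   # a fresh editor after the 'crash'
recovered.replay()
```

Watching `replay` run slowly enough to follow is exactly the "sit back and watch it re-apply every key-stroke" experience.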
Re: 'mind-crippling' COBOL
Whilst I'd agree that 'writer's cramp' wasn't such a problem after terminals, that was pretty late in the day; for years it was writing your source on box-paper, to be handed over to the punch-room where, although likely as not the punch-girl would spot that you'd missed a full-stop in the Identification Division but was forbidden to correct it, your opus would be faithfully rendered, over the course of a day or two, into a monumental stack of punch-cards which, when compiled, produced a goodly quantity of processed trees, manuscript style, telling you several thousand times that there was a compilation error.
I still like COBOL though, a lot: it's a pretty damn good language for collating data. Loved Redefines in the Data Division. Although the source is very wordy and precise, it compiles small and efficient.
(Oh goodness - just remembered Segmentation - actually had to use it once!!)
Debian has supported Arm since 2.2 'potato' released 2000-08-15. Debian is a lot more 'major' than you might think.
Missing the point, I think...
Where I think everyone's missing the point, even ARM perhaps, is that everyone's still thinking in terms of the current paradigm. When the maturity & process differences are taken into consideration the ARM chips offer the potential of 10^2 times as many processing cores as the x86 architecture does, for the same practical considerations, and what this means is that 10^9 core MPP systems are now viable. At that scale we can drop the Turing Machine model, where processing is separated from data, and progress to an Object Machine, where every element of data is an active processing element - data that finds and synthesises its own associations with other data elements.
At this point, software evolution can occur, in the literal sense, where you can start by generating random sequences of logic and see if they do anything useful and then randomly modify those sequences to see if any of them still work, and if they do, do they work better, or do they do anything new? So the issues about MS Vs. Linux (to symbolically represent all such arguments) are going to become moot in the not too distant future, because all software will be written by software.
Around the same time, if not before, extremely high-precision analogue electronics will be combined and integrated with digital electronics so that we can do not just binary digital processing but higher-base hardware processing, which means that you could, for example, feed a base-3 digital processor a compatible pair of base-2 & base-3 instructions at the same time and get two completely different and correct answers to two completely different tests. This is still the same old data processing paradigm; the next paradigm will be a different way of dealing with data.
Anyways though, the ARM architecture will get us to those 10^9+ MPP systems we really need for a start.
Accept a well earned pat on the back...
...for a very good article.
It might work for them...
...because, in the field of MPUs, at this point in time, there are two distinct growth curves to be considered; those of application processing and storage processing.
Intel's high-performance chips are ideally suited to application processing because application processing is still largely linear and intrinsically unsuited to parallel processing. AMD's new low-power chips, on the other hand, are ideally suited to parallel processing jobs like handling storage.
Right now, and until there's a big breakthrough in AI, the growth in demand for application processing will decline whilst the demand for storage processing will grow.
In this context, it's interesting that both companies are heavily promoting future low-power MPUs; Intel with its Atom and AMD with its ARM Opterons, and what's significant here is x86. x86 is ubiquitous and the architecture that everything else is compared with, but there's no way that x86 will be the architecture in use in 20 years' time; the problem for Intel is that they only want to do x86, something that will eventually be supplanted.
Re: Ummm. Sort of helpful
What this article did well was to separate the quantum computing device from the IO device; the tricky bit isn't the qpu calculating all the possible answers, but rather the selection of the particular answer you want.
Think of it this way: the answer you get from a qpu is a 2-D area of smoothly blended different colours but the answer that you need is in the form of a 1-D line on a graph; there are infinitely many 1-D lines within that finite 2-D area, so the tricky bit is pulling out the particular 1-D line, from all the other 1-D lines, that you need to get the answer to the particular problem you're trying to solve.
Re: This may or may not be a great article
As someone who has read the Reg' since about two months after MM started it, and spends an awful lot of time thinking about this theoretical physics stuff, I think this is certainly one of the best articles El Reg' has ever done.
...not to mention the > 30000 firearm related deaths in the U.S. every year.
According to http://www.cdc.gov/nchs/fastats/injury.htm the total number of firearm related deaths in the U.S. for 2009 was 31,347.
Which is the greater U.S. tragedy; the ~3000 killed by the 9/11 terrorists or the 30000+/year killed by each other?
Sensationalism in science
If there's one thing that's sure to rub me up the wrong way it's the sensationalising of science. Science is intrinsically pretty damn sensational, but it takes an adequate understanding of the science to perceive just how sensational it is. Unfortunately, this article seemed to focus more upon sensationalism than understanding, and I suspect that more was said in this Reg article, in pursuit of sensationalism, than was present in the Science publication.
For example, to say "a third of these pairs are eventually expected to merge..." is fair enough, but to continue with "...as the vampire consumes the last of its hapless friend." is not only sensationalist but a bit dubious. The problem here is that it gives the impression of the O-type star gradually vanishing as it is 'hoovered up' by the companion, when what will actually happen, as the companion star acquires mass from the O-type star, is that both stars will expand and eventually merge. The companion star will expand because of the additional mass it has acquired, and the O-type star will expand because the reduction of its mass will mean less gravitational pressure upon its core, countering the outward pressure from the fusion going on within. The outer atmospheres of both stars will merge first whilst the two inner cores spiral in towards each other, also to finally merge. However, the process will be quite eventful, as both stars will have to constantly re-balance gravitational pressure against fusion pressure.
Re: How the Pros Do It
Whilst the thermite scheme will probably work it's a bit of a BF&I solution.
I like the idea of using a smaller motor to start the main motor though: form a miniature version of the main motor fuel charge that sits just inside the main combustion chamber and seals the main motor nozzle, facing backwards so that it fires towards the main motor fuel when ignited. Even if the seal isn't perfectly air-tight, it would provide enough of a seal to prevent the excessive expansion and cooling of the fuel vaporised by the igniter.
You'd need to calibrate the miniature charge so that it burns just long enough to start the main motor fuel before burning itself out to clear the nozzle; you could also add some thermite to the mini charge just for good measure, the thrust from the mini charge ensuring that the thermite is directed towards the main fuel instead of possibly just falling away (due to the motor being angled upwards on the launch rod, which it will be when Vulture2 is actually launched).
More balloons, but not for lifting...
On the basis that the problem is due to the low ambient pressure resulting in the too rapid dispersal of the initially vaporised fuel, before it can ignite the rest of the fuel, I started thinking along the lines of using an equalising chamber to maintain pressure in the combustion chamber. For example, imagine two small balloons attached neck-to-neck, with one balloon inside the chamber and the other threaded through the nozzle to the outside, this double-balloon then being inflated to just above ambient ground-level pressure: as the ambient pressure drops both balloons will start to expand, the inner balloon expanding into and filling the combustion chamber and maintaining pressure within it. Once the rocket motor has fired I wouldn't expect it to have problems burning through the balloons as it's got to burn out the igniter wire anyway.
The tricky bit is ensuring an air-tight seal around the ignition wires and the balloon neck as they pass through the nozzle and enter the combustion chamber, although as the balloons expand they'll tend to occupy any gaps and improve the seal. Of course, you'd also need to take into account the fact that the igniter wire will have to run around the inflated balloon within the combustion chamber, i.e. against the walls, which will mean more wire in the chamber than usual.
Alternatively, if you think that the motor would be able to burn through a fine nylon mesh, just stick a small balloon inside the chamber that's prevented from expanding out through the nozzle by a fine nylon mesh placed over the inside of the nozzle opening. However, the same issues re the routing of the igniter wire apply.
It's not lack of O2 that's the problem; the fuel incorporates its own O2.
Re: Echo and SCORE
This rang a bell with me too. After a quick bit of digging around I think it was project "West Ford" that we're remembering. Wikipedia link: http://en.wikipedia.org/wiki/Project_West_Ford
Re: The pictures don't do it justice
Oops! - you're quite right; I initially read it as being much closer
The pictures don't do it justice
The artist(s) who produced those pictures just didn't grasp the size of gas giants. To say that the gas giant, when seen from the rocky world, would be much larger in the sky than those images suggest would be a gross understatement; the gas giant would fill most of the sky.
Worrying in many ways
..."knowledge economy." This is a very scary idea because if it is to work then it'll mean restricting access to knowledge; if we think that patenting software is a bad idea just wait until the rules are rewritten so that knowledge can be patented too.
"...from friction comes life." From friction comes heat; energy that is wasted and, due to entropy, lost forever.
'"Those documents that have come into circulation, thanks in large part to the wcitleaks website set up to publish them, could alarm "credulous members of the public," Touré said...' If you're doing something that may alarm people there are two ways to handle it: either publish a credible and binding explanation to reassure everyone, or just try to demean and discredit anyone who shows concern.
Unless I'm missing something here...
Doc, Happy, Bashful, Sneezy, Grumpy, Sleepy, and Dopey == dwarves != leprechauns
I think that this is the most significant issue. However, I believe that one of the benefits that Intel has claimed for MIC is that it'll run existing x86 code, which implies that it has the 'full' x86 architecture.
Having said that I guess it's possible that they've removed some of the more general purpose underlying hardware to make it more efficient in HPC and then re-implemented around the missing hardware via slightly less efficient and more complex microcode to make it fully x86 compatible again.
Either way though, it doesn't seem that it can excel in both scenarios.
Re: Just seems like a not very good idea
Your paranoia is showing through there; you've got yourself all in a tizzy about RISC vs. x86 when that wasn't what I was talking about.
x86, as an HPC 'co-processor' doesn't seem like a good idea because it's a general purpose instruction set; HPC is all about efficiency but incorporating a large and comprehensive instruction set, a significant proportion of which won't be used in typical HPC workloads and which just wastes space, is inefficient.
You are aware that, for a considerable number of years now, x86 has been implemented via microcode on what is essentially (but not purely) 'RISC'y hardware? Furthermore, if Intel were so satisfied with x86 why did they bother with iAPX, i960 and IA-64? Do you think that all those organisations that are using large systems based upon POWER and even SPARC are doing so just out of spite and resentment at x86?
No, x86 is fine for general purpose computing, even though it _is_ clunky and inelegant, but this is because it's comprehensive, well known and widely supported. Only a tiny proportion of people involved in software development (mostly compiler devs) need to care about the underlying architecture; the rest of us only interact via relatively high-level abstractions.
Btw, similar arguments apply to the embedded space too; this is another area where x86 doesn't make much sense and isn't widely used (The i960 did make some inroads to this area but it is now largely dominated by ARM and IBM Power architectures).
Just seems like a not very good idea
I won't go as far as saying it's a bad idea but the Intel MIC just doesn't make sense to me:
i) Using x86 architecture cores for HPC; isn't that going to mean a lot of redundant x86 instructions that need to be implemented in the hardware/microcode but which will be rarely used in typical HPC workloads?
ii) 8GB RAM seems grossly inadequate for 50+ x86 cores; for 50 cores that's just 160MB/core, which doesn't seem enough if they're going to be running x86 application code, which itself will need some degree of OS support running on the same core and sharing that 160MB allocation.
iii) The benefit of being able to run existing x86 code unchanged on the MIC seems highly questionable and hardly efficient. Is this not a brute force approach when a more elegantly designed MPP solution would be far more efficient?
Intel are very clearly not stupid but I'm beginning to think that, having failed to replace x86 themselves, with their own iAPX, i960 & Itanium IA-64 architectures, the only horse they have left to flog is the ageing and inelegant x86. I can't really see this as a viable long-term proposition.
Re: Balloon to Platform... twist of cable, string, and rope...
Any sort of "rope" will be far too heavy, and a 'climbing' quality rope unnecessarily expensive as well.
I've previously suggested Dacron Big-game fishing line - a quick check on e-bay shows a spool of 180lb x 100ft braided Dacron line for £5.11.