Thanks for this interesting description of an interesting problem!
Uranus has weather from time to time
If you google-images 'Uranus Keck' there are some pictures of outbreaks of large white spots on the planet in August 2014; it was in a boring state when Voyager flew past it, but occasionally it wakes up.
If you've got thirty years to work with, gravity tractors (stick a heavy spacecraft with an ion drive hovering near the asteroid) will probably work, as would chroming one side of the asteroid; and you can measure if it's worked.
Five years is cutting it a bit close, but an object that would hit us in five years and that we haven't discovered already is probably small enough that it turns into an evacuation exercise. Which would be somewhat expensive, and would cause quite a lot of stress-related illness, but nothing an even fairly incompetent government couldn't handle.
Coping with a well-predicted medium-sized tsunami, which is the expected result from a hundred-metre impactor, is a harder but not an impossible coordination problem.
Re: Am I the only person...
The price is what it costs; if you want to collect it when people buy TVs, people buy a telly once a decade so it'd be £1500 on the price of every TV sold, which I don't think is practical.
The issue isn't "shipping Xeons to China", it's shipping anything to the four supercomputer centres (NUDT in Changsha, and NSC-Changsha, NSC-Guangzhou and NSC-Tianjin) that have been declared to be using their machines to work on atom bombs. It's not a full ban, you have to apply for an export licence and there's no guarantee that the licence will be refused, but the particular licence for building Tianhe-2a was refused. IBM are no more permitted to ship them Power chips than Intel is to ship them Xeons.
That's one of the main points of espionage.
If the Ruritanians have learned something that would be of great interest to the Potslavian opposition, and that would derail Potslavian plans to Ruritania's advantage were the opposition to learn it, of course they should inform the Potslavian opposition. You can't do this directly, but you might well be able to inform the Wallachian allies of the Potslavian opposition, who would cause the Potslavians to know the information through a reasonably deniable conduit.
That's exactly how diplomacy is supposed to work! Derailing the internal politics of other countries in ways that cause advantage to your own is what foreign offices are *for*.
So is there anything you could use these for?
Oxygen-free-copper cabling designed for the audiophool market turned out to be just what you needed to make practical proton precession magnetometers, and the process accidentally removed radioactive impurities so OFC shielding is used in a lot of sensitive particle physics work; there may still be some weird quantum-physics application in which extremely smooth silver cylinders are useful, though you'd feel a bit silly getting this cable and then dissolving the insulation to get to the silver.
This project cost £2.5 million.
Kepler answered 'how common are exoplanets'. This project will answer 'what are the easiest largish exoplanets to observe'. The TESS satellite, launching in summer 2017, is much the same as this project but done from space at about a hundred times the cost; it will produce a complete catalogue of exoplanets around bright stars, down to half the size of the ones found by this project.
The problem with Kepler is that it looked at such faint stars that it took inordinate effort to follow up the results; if you find planets around significantly brighter stars, you can do radial-velocity follow-up with moderate telescopes (which confirms that you've found a planet rather than a sunspot of unusual size, and tells you whether there are other planets in the system), there's the possibility of doing differential spectroscopy from space to find out what their atmospheres are made of, and if the stars are bright because they're close there's the possibility of directly imaging the planets with things like the Gemini Planet Imager.
Re: "The Ci20 will cost $65 (US)/£50 (Europe)"
A 20% price hike seems exactly right for a no-VAT price in the US and a with-VAT price in Europe.
I think it's worth mentioning that this is a silicon-packaging facility (wafers go in, little square things with lots of pads on the bottom come out) rather than a chip fab.
Re: Now that's just going too far
The Apples were precisely designed to be incredibly cheap, in a world where the cheapest 'real computer' was a PDP-11/34 which cost $10,000 and was a metal box about 8U tall. The Sinclairs and Commodores and Acorn Electrons were cost-reduced versions of things that had already been significantly cost-reduced.
Re: "Piracy often arises when consumer demand goes unmet by legitimate supply"
You wait. They'll come out in your country eventually.
Simultaneous with the release of iOS 8 were reviews demonstrating that it slowed down iPhone 4S or original iPad Mini hardware by about 30%; at which point my desire to upgrade rather went away.
Could you check the geography in paragraph five again?
Whilst Vadodara and Gandhinagar are quite close to Ahmedabad, Ghaziabad is a thousand kilometres away. Ghaziabad is, however, adjacent to New Delhi.
Thane is the one next to Mumbai; Nashik is a hundred miles north-east of Mumbai and Aurangabad a further hundred miles inland.
Nagpur is pretty much exactly in the middle of India, I would be interested to hear DEITY's advice about its infrastructure issues.
On the other hand, it's nice to have started hearing about India's tier-two cities.
The UK does not import gas from Russia.
We import gas from Norway, Qatar and the Netherlands.
We do import a fair amount of coal from Russia.
Re: As a 1TB flash drive ....
It's not 32 devices stacked vertically.
It's a single silicon chip with 32 layers; the flash memory cells are on the sides of U-shaped things pointing downwards into the silicon. It's a ridiculously impressive piece of engineering.
But each of the layers is made with a lower-precision (and therefore cheaper) process - something like 100nm rather than 20nm - so the 3D chip doesn't have the same capacity as 32 current-technology flash chips; it has about the capacity of one current-technology flash chip at a lower manufacturing cost, and a higher reliability because the memory cells are bigger.
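The "about the capacity of one current-technology chip" claim works out roughly like this, as a sketch that assumes cell area scales with the square of the feature size (which is only approximately true):

```python
# Rough capacity arithmetic for the 3D-vs-planar comparison above.
# Illustrative assumption: cell area scales with feature size squared.

layers = 32
node_3d = 100e-9      # per-layer process, ~100nm
node_planar = 20e-9   # current planar flash, ~20nm

# Each 100nm layer holds (20/100)^2 = 1/25 the cells of a 20nm layer.
density_ratio_per_layer = (node_planar / node_3d) ** 2   # 0.04

# Thirty-two such layers stacked together:
relative_capacity = layers * density_ratio_per_layer     # ~1.28

print(f"{relative_capacity:.2f}x the capacity of one planar chip")
```

So the stack comes out only slightly ahead of a single planar chip on capacity; the win is in cost per bit and cell reliability, as above.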
I'm not sure you've made entirely clear the connection between Jodrell Bank, which is an astronomy centre in Manchester, and the ESS, which is a solid-state-physics facility to be built in Lund, Sweden.
Re: Aren't the ASICs so fast because they're, well, ASIC?
Yes: the high speed of an ASIC hash engine comes because it's a pipelined implementation of SHA with the try-next-nonce code baked in, whilst a password-cracking engine needs a much more complicated next-trial-password unit.
The Titan does not have crippled floating-point performance; the GTX 780 range of cards does.
You don't need floating-point arithmetic to crack codes.
Re: So what is the exact problem?
The impression I have is that the motors for folding up the solar panels into the heated part of the vehicle are broken; so they will freeze up during the night and stop working.
Re: Banned until money
It's only illegal for the signatories to the Antarctic Treaty; the nearest sufficiently-rich non-signatory appears to be Indonesia, who aren't that bad at mining. Other rich non-signatories with mining industries are Mexico, Kazakhstan and Saudi Arabia.
Re: "Yeah, Intel. What are you going to do, bleed on me??"
GPGPU and many-core are in very much the same regime; Intel gives you 456 1.1GHz double-precision FP units in 300W, and at a similar price nVidia gives you 832 706MHz double-precision FP units in 225 watts.
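If you assume each FP unit retires one fused multiply-add (two flops) per cycle, which is an assumption on my part rather than a vendor figure, the peaks work out as:

```python
# Back-of-envelope peak double-precision throughput and efficiency for
# the figures quoted above.  Assumes one FMA (2 flops) per unit per
# cycle -- an assumption, not a datasheet number.

def peak_gflops(units, clock_ghz, flops_per_cycle=2):
    return units * clock_ghz * flops_per_cycle

intel = peak_gflops(456, 1.1)     # ~1003 GFLOPS in 300 W
nvidia = peak_gflops(832, 0.706)  # ~1175 GFLOPS in 225 W

print(f"Intel : {intel:.0f} GFLOPS, {intel / 300:.1f} GFLOPS/W")
print(f"nVidia: {nvidia:.0f} GFLOPS, {nvidia / 225:.1f} GFLOPS/W")
```

Very much the same regime on raw throughput, with nVidia somewhat ahead on flops per watt.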
Are you sure about the '34 millimetres by 28 millimetres in size' part? That would make the chips five times the size of Haswell, and larger than your average DSLR sensor.
Is 9.6GB/s a typo?
You mention '40ns, 9.6GB/s' for the core-to-memory-controller link, and then say that a system with eight controller chips can have 230GB/s CPU-to-memory bandwidth; which of those figures is right?
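For what it's worth, the two quoted figures don't multiply out, which is the point of the question:

```python
# The article's two bandwidth figures, side by side.

per_link = 9.6          # GB/s quoted for one core-to-controller link
controllers = 8
claimed_total = 230     # GB/s quoted for the eight-controller system

print(controllers * per_link)        # 76.8 GB/s, well short of 230
print(claimed_total / controllers)   # 28.75 GB/s needed from each link
```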
Juno has a remarkably pathetic camera (2-megapixel, 7.4-micron pixels, 11mm focal length, 58-degree FOV - better than most smartphone back-cameras but not by much); it's specced for 15-kilometre resolution at the 4300km closest approach of Juno to Jupiter, and has not the slightest chance of getting near enough to Europa to achieve six-metre resolution.
The Galileo probe is currently a thin spread of titanium vapour lightly sprinkled over the hundred-kilometre cloud deck of Jupiter; is the article very old, or should it be referring to ESA's JUICE mission?
Re: Spoon holder...
At least burning PLA has a pleasant milky stench.
The problem with PLA for household goods is that it melts at about 50C; if you stick a 3D-printed thing in the dishwasher it comes out a bit Dali, if you try to 3D-print coasters you find you have made expensive and attractive stick-on bottoms for your coffee cups.
As far as I can see from the paper, it's offering a technique for getting 3N capacity out of five capacity-N discs, with protection against one disc failure in conjunction with two badly-located unreadable sectors.
(whereas RAID6 gives you protection against two whole-disc failures, but if you lose one disc and have unreadable sectors in the same place on two of the others then you've lost that sector)
It seems to involve fifteen reads and five writes per sector write, because it works by looking at groups of sectors on each disc, whilst RAID6 requires three reads (the sector you're overwriting and the two parity sectors) and three writes, so there's a lot more bandwidth used.
Basically this is a paper which has discovered a pretty mathematical pattern, with a dubious justification that it might be relevant for data recovery. It doesn't make sense in a world in which discs tend to fail mechanically rather than to develop individual bad sectors.
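For reference, the RAID6 three-reads-three-writes counting goes like this. A toy sketch of the XOR (P) parity only; the Q parity needs Galois-field arithmetic but costs the same I/O:

```python
# Why a RAID6 single-sector update costs three reads and three writes:
# read the old data and both old parities, recompute, write all three
# back.  Only the XOR (P) parity is shown here.

def raid6_p_update(old_data: bytes, new_data: bytes, old_p: bytes) -> bytes:
    """New P parity = old P xor old data xor new data."""
    return bytes(p ^ a ^ b for p, a, b in zip(old_p, old_data, new_data))

# Toy three-data-disc stripe:
d0, d1, d2 = b"\x01\x02", b"\x10\x20", b"\x0f\x0f"
p = bytes(a ^ b ^ c for a, b, c in zip(d0, d1, d2))  # parity of the stripe

new_d1 = b"\xff\x00"
new_p = raid6_p_update(d1, new_d1, p)

# Recomputing the parity from scratch gives the same answer.
assert new_p == bytes(a ^ b ^ c for a, b, c in zip(d0, new_d1, d2))
```

The point is that the update touches only the changed sector and the parities, regardless of how many data discs there are, whereas the paper's scheme has to read whole groups of sectors.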
"many clusters where latency or cost is more important than bandwidth are still being built with Gigabit Ethernet switches"
Gigabit Ethernet latency is *dreadful*, 180us or more for a ping between two boxes attached to the same switch! You use a gigabit interconnect only when latency is immaterial and bandwidth not terribly important; thankfully a lot of interesting jobs have that property.
Unfortunately the slower grades of infiniband, which were still cheaper and lower-power than 10GbaseT when they started to be phased out, are no longer readily available new.
Computational number theory lab
I have a 48-core 64GB Opteron 6168 machine, an old Core2Quad Q6600 as NFS server, and a Sunfire 4150 dual-quad-Xeon, all installed in my gigabit-ethernet-connected (thanks to a Very Large Drill) outbuilding. I use them to factorise large numbers; by Easter 2^929-1 will have fallen to my ponderous linear algebra machinery.
Re: $50K for the 100M digit prime
Each test at that size takes about ten days on a $1500 computer which uses about $300 of electricity a year; so in a five-year lifetime it does about 200 tests and costs $3000. The chance of success is about one in a million per test, so: yes, it would cost a lot more than $50,000.
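The sums, using the figures above:

```python
# Expected spend per 100M-digit prime found, from the figures quoted.

hardware = 1500          # $ per machine
electricity = 300 * 5    # $ over a five-year life
tests = 200              # tests completed in that life
p_success = 1e-6         # rough chance each test finds a prime

cost_per_test = (hardware + electricity) / tests      # $15 per test
expected_cost_per_prime = cost_per_test / p_success   # ~$15,000,000

print(f"${expected_cost_per_prime:,.0f} expected spend per prime found")
```

Three hundred times the prize money, give or take.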
Re: Note the time for the GPU vs the 32 core server.
Not quite; the calculation is basically 58 million *consecutive* 3407872-element double-precision complex FFTs. The FFTs can be split among the cores of a GPU or of a multi-core CPU-based system, but it's not embarrassingly parallel in the normal sense of requiring lots of independent small calculations.
Re: Note the time for the GPU vs the 32 core server.
The 32-core server was running a completely different implementation of the large FFT needed to do the arithmetic on such huge numbers, which is not particularly aggressively tuned (in particular, it doesn't use AVX instructions), which is why it was rather slower than a six-core Sandy Bridge using AVX; the idea was to do the calculation using two completely different software implementations and check both got the same answer.
Getting Fourier transforms to run well on a GPU is not at all straightforward, but since doing it allows you to sell thousands of GPUs to people like Shell and Exxon because the work of converting seismic reflection data to 3D images is made of Fourier transforms, nVidia has done it.
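For anyone wondering why FFTs appear in primality testing at all: multiplying huge integers reduces to convolving their digit vectors, which an FFT does in O(n log n) rather than O(n^2). A from-scratch toy, nothing like the tuned double-precision implementations the real clients use:

```python
# Toy integer multiplication via FFT convolution of decimal digits.
import cmath

def fft(a, invert=False):
    """Recursive radix-2 complex FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return list(a)
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + w
        out[k + n // 2] = even[k] - w
    return out

def multiply(x, y, base=10):
    """Multiply non-negative integers via an FFT convolution of digits."""
    xs = [int(d) for d in str(x)[::-1]]   # least-significant digit first
    ys = [int(d) for d in str(y)[::-1]]
    n = 1
    while n < len(xs) + len(ys):          # pad so the convolution is linear
        n *= 2
    fx = fft([complex(d) for d in xs] + [0j] * (n - len(xs)))
    fy = fft([complex(d) for d in ys] + [0j] * (n - len(ys)))
    conv = fft([a * b for a, b in zip(fx, fy)], invert=True)
    total = 0
    for i, c in enumerate(conv):
        total += round(c.real / n) * base ** i   # 1/n scaling of inverse FFT
    return total

assert multiply(12345, 6789) == 12345 * 6789
```

The production codes use much bigger limbs than single decimal digits, irrational-base weighting to do the mod-Mersenne reduction for free, and heroically tuned FFT kernels, but the structure is the same.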
Re: Manufacturers should go 5.25" like the ancient Quantum Bigfoot
Current discs offer 100MB/second read rates, so you're saying that if you constantly read over a disc you'll get an unrecoverable error every other day.
This doesn't seem consonant with something like http://www.numberworld.org/misc_runs/pi-10t/details.html ; yes, this lost a lot of time to disc failures, but in an environment where it was running flat-out to 24 separate spindles without redundancy it lost one disc about every four spindle-years.
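The "every other day" figure follows if the spec in question is the usual one unrecoverable error per 10^14 bits read, which is an assumption on my part:

```python
# Time between unrecoverable read errors at a sustained 100 MB/s,
# assuming the commonly-quoted 1-in-1e14-bits spec.

bits_per_error = 1e14
read_rate = 100e6 * 8           # 100 MB/s in bits per second

seconds_per_error = bits_per_error / read_rate
days_per_error = seconds_per_error / 86400

print(f"{days_per_error:.2f} days of continuous reading per error")
```

Which is why the spec looks implausibly pessimistic against real long-running workloads like the pi computation.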
Re: Not much chance of that
I don't anticipate Apple releasing a TV until they can release a 3840x2160 Retina TV (that is, until Sharp has managed to scale up by a factor of 1000 the production rate of the panels they're launching in February 2013).
Being the only people offering convenient one-click access to quad-HD content would seem the kind of unique selling point that Apple would like to have. Yes, this will require a fast Internet connection, a fair amount of in-TV storage, and special negotiation with content providers; but the first is ubiquitous, the second straightforward, and the third the kind of thing that Apple is quite good at and in a unique position for.
I would pay $0.99 per half-hour for the BBC Wildlife Film Unit doing what it does best in quad-HD.
Well done Intel
You've managed to launch a new server chip with less memory bandwidth than an Exynos 5250.
Note that the 39.6% rate only applies to dividends on shares owned by highest-rate taxpayers (ie people earning more than $388,350); whilst that clearly includes Larry, it may well not include a lot of people who have a thousand Oracle shares in their retirement funds.
For cluster nodes I'm not quite sure why you wouldn't use infiniband; colfaxdirect.com will sell you an 8-port QDR switch for $2000 and adapter cards for $600, and it's four times the speed of 10Gb Ethernet.
(they used to do SDR cards, which are the equivalent of 10Gb, for $125 with a $750 switch, but those have mysteriously disappeared)
Re: Why are they upset?
The Guardian article on this suggested that they were either paid piece-work or penalised for rejected items, both of which are of course good ways to get the workers and the QC team into an adversarial relationship. Didn't we go through most of this with British Leyland in the seventies?
Actually a class of one
The Soyuz down-mass is only 100kg; the Dragon test flight brought back 660kg, and Dragon is specified for three tonnes. The Space Shuttle down-mass was something like twenty tonnes, and it would routinely return with about five tonnes of stuff packed inside a four-tonne MPLM.
... also more efficient than our competitor's model with integrated toaster
You always ought to give your competitor the benefit of the doubt in this sort of comparison, otherwise people will question your honesty. Having said that the machines are usually compute-limited, why are they comparing against an essentially-HPC setup full of dual-socket eight-core Xeons rather than against low-power Ivy Bridge E3-1200 systems?
Re: Very Intel
Being complacent about Intel's competence has never worked out well.
Yes, it still has an x87 - it's an x87 borrowed from the Pentium-90, pipelined but not particularly superscalar. If you want to do arithmetic you use the VPU, if you have some little piece of setup code that desperately needs 80-bit floating point for thirty million cycles then you can run it slowly on the x87 side and the VPU will be briefly power-gated.
This is a function-field-sieve discrete logarithm over GF(3^582); it's asymptotically equivalent difficulty to special number field sieve factorisations, which casual groups have managed to do for 1061-bit (320 digit) numbers. As the Fujitsu paper http://www.nict.go.jp/en/press/2012/06/PDF-att/20120618en.pdf points out, it involved a fair amount of implementation work but no more computing than finding a single DES key.
You've got cores and core-groups confused at the start; you write
The Fermi GPU had 512 cores, with 64KB of L1 cache per core and a 768KB L2 cache shared across a group of 32 cores known as a streaming multiprocessor, or SM
where in fact there is a single 768KB L2 cache shared between all 512 cores, and 64KB L1-like memory shared across each SM.
'The Fermi GPU has sixteen streaming multiprocessors, each comprising 32 cores and 64KB of fast memory, and a 768KB L2 cache shared by the sixteen SMs' would be a more correct way to put it.
Intel thoroughly missing the point here
These are not credible competitors to four-socket Opteron boxes, because they're so enormously more expensive; even if you regard a Sandy Bridge hyperthread as equivalent to an Opteron core, $1611 for eight 2.2GHz SB cores versus $639 for sixteen 2.2GHz cores is a big premium.
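Per core-equivalent, even granting the hyperthread equivalence:

```python
# Price per core-equivalent, treating each Sandy Bridge hyperthread
# as worth an Opteron core (the generous case from the text above).

sb_price, sb_threads = 1611, 16    # eight 2.2GHz SB cores, hyperthreaded
opt_price, opt_cores = 639, 16     # sixteen 2.2GHz Opteron cores

print(f"${sb_price / sb_threads:.0f} per Xeon core-equivalent")   # ~$101
print(f"${opt_price / opt_cores:.0f} per Opteron core")           # ~$40
```

A two-and-a-half-fold premium on the most charitable possible accounting.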
The four-socket Opteron boxes are great for HPC-like jobs, my 4x6168 machine delivers 360GFLOP peak for about 700 watts and I've not had to fiddle around with Infiniband cards and switches to connect smaller boxes together.
Re: Haven't heard of those SI units...
Blue Gene/Q is a 2GFLOP/watt configuration; so a 100W lightbulb and a hundred Atom laptops.
Last para is rather unclear.
Blue Gene/Q is the energy-efficient one, Hector is the XE6 (enormous pile of Bulldozers).
Blue Gene/Q is in no sense a distributed computing project; it's a collection of cabinets (probably four cabinets) each containing an enormous pile of custom IBM chips each containing 16 PowerPC cores.
Two too many millions in the first sentence!
Roughly what they've done
This is a weak-lensing survey. The idea is that the shapes of galaxies seen from Earth are changed by gravitational lensing from mass concentrations that the light has passed through on the way; so you produce an enormous sample of galaxies which you're reasonably confident are at about the same, large distance (by looking at their colours in several infra-red bands: 'photometric Z' is the term, Z being the symbol for red-shift), and the map plots roughly the extent to which the galaxy images in each patch of space are elongated.
The galaxies are small and the variations in their shapes are comparable to all sorts of other systematic effects caused by (for example) the presence of the atmosphere, so there are several statistical steps in there, which is why the maps look so blobby; the confirmation is at least in part that the brightest blobs turn out actually to contain foreground galaxy clusters, though a bright blob without a galaxy cluster would be a much more exciting result.
I suspect it's more hope for Jupiter
They haven't got radial velocity data to get the masses, and the paper is in Nature and not open-access, but the more technical summaries suggest that the deep-fried planets are the iron cores of former gas giants.
Three years isn't that long ago.
The average three-year-old server was bought in mid-2008, so will have dual Harpertown quad-core Xeons, not 'single or dual cores'.