* Posts by another_vulture

81 publicly visible posts • joined 13 Mar 2011


NASA – get this – just launched 8 satellites from a rocket dropped from a plane at 40,000ft

another_vulture

Re: "from the GPS system work out the wind speed on the surface." How?

From the Wikipedia article, they are using "bi-static scatterometry". "Bi-static" merely means the transmitter and receiver are not co-located. "Scatterometry" is the extraction of information from the ways in which a radio wave is modified by a surface that reflects it. Use of the GPS satellite as the transmitter is clever: the receiving satellite uses the GPS system itself to learn the position of the transmitter and of itself with extreme precision. However, the GPS signal is quite weak even before it hits the surface and is reflected, so I'm sure they must integrate over a long time. The GPS signal is intended to be used for exactly this type of signal integration (i.e., spread spectrum), but the reflected signal has got to be at least another 30 dB down in the dirt, so this is pretty impressive.
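
For a feel of how much the spread-spectrum integration buys back, here is a rough despreading-gain estimate (the numbers are my own assumptions, not from the article):

import math

# Rough despreading-gain estimate (illustrative values, not from the article).
chip_rate_hz = 1.023e6        # GPS C/A code chipping rate
integration_s = 1e-3          # one C/A code period of coherent integration
gain_db = 10 * math.log10(chip_rate_hz * integration_s)
print(f"despreading gain per 1 ms: {gain_db:.1f} dB")      # ~30 dB
# Coherently integrating N of these periods adds up to ~10*log10(N) more,
# which is how a reflection another ~30 dB down can still be pulled out.
for n in (10, 100, 1000):
    print(f"{n} ms total: ~{gain_db + 10 * math.log10(n):.0f} dB")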

"Scatterometry" is apparently a well-developed discipline. I assume they get loads of calibration against physical surface measurements, probably pretty much continuously, as they measure the conditions near monitoring buoys.

Pegasus is pretty much ideal for this very specific satellite's size/weight/orbit parameters, probably because the design team chose to stay within Pegasus' parameters. Expensive in terms of $/lb-to-orbit, but very inexpensive on a $/launch basis, so they can launch more to extend coverage or extend mission life.

USS Zumwalt gets Panama tug job after yet another breakdown

another_vulture

Requirements specification failure.

The specs said it was an oceangoing ship! Now you also want it to work in the fresh water of the Panama canal? That will cost extra.

By 2040, computers will need more electricity than the world can generate

another_vulture

The chart is confusing most commentators

It's really quite simple. The chart is a simple projection of the growth of energy production and the growth of computation. It's not intended as a prediction. Rather, it is intended to show that something must change. If the global amount of computation increases faster than the available power does, then computation will eventually consume all the energy. The date at which this happens depends on the computational efficiency. The three lines each assume a (fixed) exponential increase in computation, but at three efficiencies: "benchmark", "target", and Landauer's bound. Choose any model of efficiency increase you want: your model describes a curve starting on the "benchmark" line (no efficiency increase) and eventually approaching the bound. No technology can exceed the bound: it's a law of physics. (Look it up.) So, before about 2050, either computation quits growing so fast or energy production starts growing faster.

That's still many, many orders of magnitude more computation than we are doing today, and I cannot figure out what we will be doing with it all. Note that better algorithms can make more efficient use of the same amount of computation.
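
For a sense of scale: the bound itself is easy to compute; the "benchmark" energy per bit operation below is just my own round number, not the report's.

import math

k_B = 1.380649e-23                  # Boltzmann constant, J/K
T = 300.0                           # room temperature, K
landauer = k_B * T * math.log(2)    # minimum energy to erase one bit
print(f"Landauer limit at 300 K: {landauer:.2e} J/bit")          # ~2.9e-21 J

benchmark = 1e-14                   # assumed J per bit operation for today's logic
print(f"headroom above the bound: ~{math.log10(benchmark / landauer):.0f} orders of magnitude")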

Trans-Pacific FASTER fibre fires first photons, finally

another_vulture

Re: So any of you brainiacs

... and at 200 m/us, each bit is 2 mm long at 100 Gbps. As implemented, they actually send 4 bits/Hz on a 50 GHz lambda using a scheme called DP-QPSK, and then use a rate-1/2 FEC scheme to encode e.g. 100 data bits into a 200-bit FEC block, for a net speed of 100 Gbps of data bits.
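
The same arithmetic, spelled out (same figures as above):

v = 2.0e8                    # propagation speed in fibre, m/s (200 m/us)
data_rate = 100e9            # 100 Gbps of data bits
print(f"one bit occupies {v / data_rate * 1000:.1f} mm of fibre")      # 2.0 mm

# DP-QPSK: 2 bits/symbol x 2 polarisations = 4 raw bits, halved by the rate-1/2 FEC.
raw_bits_per_symbol = 4
fec_rate = 0.5
print(f"net data bits per symbol: {raw_bits_per_symbol * fec_rate}")   # 2.0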

Hold on a sec. When did HDDs get SSD-style workload rate limits?

another_vulture

Re: sure large scale flash is just around the corner

For HDD, you save a bunch of money if you accept slower I/O, because a slower motor is cheaper and allows cheaper I/O electronics, and a slower actuator is cheaper. Furthermore, increased parallelism is very expensive. Access electronics are very expensive because of the complicated analog circuitry needed for the magnetic signals.

By contrast, you save almost no money if you accept slower I/O for flash. Access electronics are cheap and therefore any amount of parallelism is cheap. Throughput is limited only by the speed of the interface. Read latency does have a lower bound but is thousands of times better than HDD even for very slow flash. Flash cost is driven (to a first approximation) by the number of flash bits. Large-scale flash storage cost will closely track the cost of flash chips.

If you work on Seagate's performance drives, time to find another job

another_vulture

Enterprise : capacity vs performance

This article at least distinguishes the two types of "enterprise" drives: "capacity" and "performance". It's clear that "performance" HDDs are already completely obsolete due to SSD performance. It's less clear that "capacity" HDD is obsolete, as it still has a large (15x) advantage in $/TB. I do wonder, however, whether there is any real advantage in an "enterprise capacity" HDD versus a cheaper "commodity capacity" HDD, given that performance (IOPS) ceases to be an interesting metric for this application.

WD’s revenue wheels have fallen off. Profits are sinking, too

another_vulture

IOPS?

Am I doing the math right?

It appears that in the past you purchased enterprise disks to achieve high IOPS: 10K and 15K RPM with SAS. But even these disks usually achieve less than one IO per revolution when transactions are truly random access, and 15K RPM is only 250 RPS, so for databases we were IOPS-limited, not size-limited, and we spread a database across lots of disks. With an SSD, we are limited (to a first approximation) by the SAS interface, at 12 Gbit/sec or about 1.2 GB/sec (counting 20% overhead), so for reasonable IO sizes we get more than 50,000 IOPS. It looks to me like a single 1 TB SSD would displace more than 200 enterprise HDDs.
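
Spelling out the arithmetic (the 16 KiB IO size is my own assumption):

rpm = 15_000
hdd_iops = rpm / 60                          # <=1 random IO per revolution: ~250

sas_bits_per_s = 12e9                        # SAS-3 line rate
sas_bytes_per_s = sas_bits_per_s / 8 * 0.8   # ~1.2 GB/s after ~20% overhead
io_size = 16 * 1024                          # assumed "reasonable" IO size, bytes
ssd_iops = sas_bytes_per_s / io_size

print(f"15K HDD: ~{hdd_iops:.0f} random IOPS")
print(f"SAS-limited SSD: ~{ssd_iops:,.0f} IOPS")
print(f"so one SSD ~ {ssd_iops / hdd_iops:.0f} enterprise HDDs")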

Obviously, if I also need very large files, I am in a different situation, but those workloads map well onto slower HDDs, where I can transfer entire tracks at once.

Google's 'fair use' mass slurping of books can continue – US Supremes snub writers' pleas

another_vulture

Distinguish in-copyright/out-of-copyright and in-print/out-of-print

The arguments for these categories are very different. Most of the posts here appear to lump them together.

If a book is out of copyright (in or out of print), I see no moral or legal problem with a Google scan made freely available. The only objection is the specious one that Google or another printer can make money by forcing you to buy a copy if Google refuses to make the full text available. I get extremely frustrated when Google provides snippets instead of full text in these cases.

If a book is in copyright but out of print, holders of printed copies such as used booksellers, NOT the author or publisher, stand to lose if Google makes a copy available. But for many such works, the snippets constitute free advertising that may make the printed work easier to sell. The copyright holder can create an e-book and sell it via a link from the Google site and actually make some money.

If a book is in print and in copyright, then snippets are again free advertising. An author (not a publisher) should be happy about this and should sell the work as an e-book from the author's site, getting a link from the Google snippets and eliminating the need for a rapacious publisher.

Dropping 1,000 cats from 32km: How practical is that?

another_vulture

Breathing at 30 km

We need to keep these poor kitties alive, and we need them to be awake for at least the last km or so before hitting the ground. Presumably they lifted off with enough oxygen and at sufficient pressure. They won't black out until they are explosively expelled from their container. If we hyperoxygenate them just before expulsion, they will black out but not die, and they will reawaken below about 3 km. This should give them time to reorient and flatten out to maximize drag, which should slow them to a survivable terminal velocity. This system design minimizes the non-cat mass by using a single container for all the kitties instead of some silly per-kitty pressure suit. The container itself is simply another balloon, full of kitties and oxygen. Use pure oxygen at a lowish pressure during most of the flight to avoid oxygen poisoning, flood with oxygen to 2 atm for 30 seconds to hyperoxygenate, then flood with more oxygen to burst the balloon. Kitties free-fall for 27 km, then wake up. I suspect that they will not be particularly happy when they reach the ground, but that's out of scope for the physical analysis.
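
A quick terminal-velocity sanity check, with every figure assumed rather than measured:

import math

m, g = 4.0, 9.81         # kg, m/s^2: an average-ish cat (assumed)
rho = 1.2                # kg/m^3, air density near the ground
Cd, A = 1.2, 0.06        # drag coefficient and planform area, flattened out (assumed)

v_t = math.sqrt(2 * m * g / (rho * Cd * A))
print(f"terminal velocity ~{v_t:.0f} m/s (~{v_t * 3.6:.0f} km/h)")   # ~30 m/s

About 30 m/s, which is in the range usually quoted for falling cats, so the scheme stands or falls on the flattening-out part.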

PS: If typical feline terminal velocity is not survivable, we will need to fit them with wing suits and train them to use them. I foresee no problems whatsoever with this.

Gartner: RIP double-digit smartphone growth. 2016 has killed you

another_vulture

Math, again.

1.5 Bn phones/yr with an average retention time of 2.5 yr gives a steady state of 3.75 Bn phones. Total world population is currently 7.4 Bn people, including all men, women, and children in all countries. Thus, the current production rate provides one phone for every 2 people in the world. Why, exactly, does anyone think the production rate will increase? Either the market penetration must increase (a phone for every infant in India?) or the retention time must decrease. But as phone technology matures, retention times will increase, not decrease.
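
The arithmetic, for anyone checking:

production_per_year = 1.5e9
retention_years = 2.5
installed_base = production_per_year * retention_years        # 3.75 bn
world_population = 7.4e9
print(f"steady state: {installed_base / 1e9:.2f} bn phones in use")
print(f"one phone per {world_population / installed_base:.1f} people")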

Whatever happened to Green IT?

another_vulture

Re: IoT

Just playing around in my house, my problem with IoT is finding an efficient way to provide a small amount of DC power to a small device. Every little always-on device needs its own itty-bitty power supply (usually a wall-wart or equivalent), and each of these has a transformer that draws AC power even when the device is all the way off. Not much power, but if this is one watt and you have 50 devices, you are at 50 watts 24/7. This includes every remote-controllable light bulb in the house and every control switch. It includes every phone charger that you leave plugged in, including those that are oh-so-conveniently included in your power strips. It also includes the controllers inside most of your appliances. It's not a matter of the actual electronics, which sleep at microwatt levels.
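
At my assumed 1 W per wart and a made-up tariff, the standby drain looks like this:

devices = 50
standby_w = 1.0                         # assumed idle draw per supply
kwh_per_year = devices * standby_w * 24 * 365 / 1000
tariff = 0.15                           # assumed $/kWh
print(f"{kwh_per_year:.0f} kWh/year, roughly ${kwh_per_year * tariff:.0f}/year wasted")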

Uncle Sam's boffins stumble upon battery storage holy grail

another_vulture

A real-world(?) example

I own a BMW i3. I travel about 40 miles a day (20 miles each way to work), and about the same on weekends. I charge at home, and I need about 10 kWh a day. I charge at 25 A at 220 V, so I need about 2 hours to charge. I charge at night between 11:00 PM and 6:00 AM, and I am on a special rate plan that provides electricity at a low rate for these hours, which are the hours of lowest demand in my area. So: charging off-peak is a crude but effective way to solve the peak demand problem.
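
The charging arithmetic, with an assumed ~90% charger efficiency:

daily_kwh = 10.0
charge_kw = 25 * 220 / 1000             # 5.5 kW at the plug
efficiency = 0.9                        # assumed charger/battery efficiency
print(f"~{daily_kwh / (charge_kw * efficiency):.1f} hours to replace {daily_kwh} kWh")   # ~2 h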

It is perfectly feasible to deploy home charging stations that work with the car to communicate with the grid. My car already knows when I intend to use it next and the current battery state. If the electricity company would give me a further discount, I would be happy to suspend charging when sent a request to do so: my car knows when it can honor such a request and still have time to complete its charging. A central computer in the grid could allocate electricity to the cars to perform load leveling. This will matter more once there are many more electric cars than there are now.

For those of you with no dedicated parking: wait for driverless cars. They can go park in a shared garage and quit cluttering up your neighbourhood, and they can charge when they get there.

Toshiba rolls out PC-busting monster: 1 terabyte TLC flash SSD

another_vulture

Why SATA?

I just don't get it. The same flash chips arranged in parallel would provide a massive I/O rate improvement. Just plug into a wider bus. The cost is only slightly higher. Possible buses are PCIe x16 or a DIMM slot.

Yes, I know that SATA is easy because it's a direct SSD/HDD replacement, but you would think that at least one vendor would like to differentiate their high-end laptop.

Those converged infrastructure vendors are cannibals, I tell you

another_vulture

Do I understand correctly?

The article appears to say that a "converged" datacenter is just a bunch of commodity servers, each of which contributes CPU, memory, and storage to the "cloud", presumably via a really serious LAN connection in a "flat" topology. Magic software then makes appropriate use of these resources. This means that there is no need for specialized storage nodes. Therefore, proprietary magic software in the storage nodes is replaced by commodity magic software on the commodity nodes, and vendors of speciality storage nodes are screwed.

Sounds good to me, but I'm an OpenSource fanatic anyway. Basically, this architecture will enable the use of storage hierarchies within each commodity node that can move beyond the legacy SAS and SATA interfaces to PCI-e or memory interfaces, without requiring a massive and expensive upgrade of a legacy storage node. Of course, the magic software will need to deal with heterogeneous storage capabilities of nodes in the datacenter. We live in interesting times.

Lithium-air: A battery breakthrough explained

another_vulture

Closed cycle

Produces oxygen when charging, consumes oxygen when discharging. Just store the oxygen, probably in another chemical compound, and you have a closed cycle, just as with most rechargeable batteries.

I'm not a chemist, so I don't know the best way to store oxygen reversibly in a chemical for this particular cycle.

How much of one year's Californian energy use would wipe out the drought?

another_vulture

Time of day

Here in California we have a lot of solar and wind. We have lots of spare power for some of the time each day. Solar and wind are lousy sources for base load and for peak load. However, they are great for any load that can be varied depending on available supply. The big problem with solar and wind is energy storage, but it's quite easy to store fresh water. Another way to look at this: the desal plants can effectively act as load buffers, allowing the entire energy system to act more efficiently. From a rate-paying perspective, this usage pattern should get the lowest electricity rate. Alternatively, the power company could build the plants and sell the water.

Salt pans have produced salt commercially at the south end of SF Bay for more than a century. Since a desal plant discharges high-salinity waste water, it might be possible to sell this waste water to the salt company. I doubt that the total amount of salty waste from a statewide system would have a market, but some would.

Also, using solar and wind power for this purpose would help get the environmentalists on board.

Voyager 2 'stopped' last week, and not just for maintenance

another_vulture

"Wiped out" is a continuum

When an advanced civilization encounters a less-advanced civilization, the latter is "wiped out", sort of. In human history, most less-advanced civilizations are assimilated, not totally destroyed. The extent to which a less-advanced civilization contributes to the more-advanced civilization is roughly in proportion to the unique "useful" features of the less-advanced civilization. Why is this a problem? If you are worried about your biological progeny instead of your intellectual progeny, you should be worried much more about the technological singularity or various existential risks instead of worrying about ETs.

http://en.wikipedia.org/wiki/Technological_singularity

http://en.wikipedia.org/wiki/Global_catastrophic_risk

The 'echo chamber' effect misleading people on climate change

another_vulture

Consensus (on evolution)

Less than half of Americans believe that humans evolved from animals:

http://en.wikipedia.org/wiki/Level_of_support_for_evolution

Why should we consider the "consensus" when we are evaluating a scientific theory?

Note: I have not seen a specific study, but my gut feeling is that there is a very high correlation between creationist thinking and climate-change skepticism, and a quick Google search turns up lots of support for this, e.g.:

http://religiondispatches.org/creationism-and-global-warming-denial-anti-sciences-kissing-cousins/

You've come a long way, Inkscape: Open-source Illustrator sneaks up

another_vulture

SVG

I'm not a graphics professional. I chose Inkscape for my occasional 2D work because its native format is SVG, and SVG is a truly open standard. This allows for useful extensions. For example, there is an extension to emit G-code files to drive CNC machines.

Also, since SVG is human readable, you can generally debug any strange behavior if you really need to, and modern browsers can handle SVG directly.

I'm more interested in Inkscape as a replacement for Visio than as an Illustrator replacement. The SVG format can of course handle this easily, but the Inkscape developers seem to focus more on art than CAD.

Microsoft springs for new undersea cables to link US, UK, Asia

another_vulture

Re: 10 Tbps/pair? (yes)

Answering my own questions. It's a sign of getting old when you are talking to yourself. The practical state of the art has advanced since I went to sleep about 10 years ago.

Each fiber on Google's FASTER cable runs 100 lambdas x 100 Gbps. The lambdas are at a 50 GHz spacing, and this is made possible by using a modulation called DP-QPSK to encode 4 "raw" bits/Hz and a FEC code (roughly rate 1/2) to get an encoded rate of 2 data bits/Hz. The optical C band supports up to 120 lambdas at this spacing.

Microsoft is using the NCP cable. NCP uses all of the lambdas to achieve 120 x 100 Gbps.

No magic math is involved. The cables get this simplex rate in each fiber.
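
The capacity arithmetic (100 lambdas x 100 Gbps per fibre, six fibre pairs as discussed in this thread):

lambdas, gbps = 100, 100
per_fibre_tbps = lambdas * gbps / 1000          # 10 Tbps per fibre, one direction
pairs = 6
print(f"{per_fibre_tbps:.0f} Tbps/fibre, {per_fibre_tbps * pairs:.0f} Tbps across {pairs} pairs")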

another_vulture

10 Tbps/pair?

Google has 6 pairs and 60 Tbps. Does that match current practice (not laboratory stuff, but real deployments)? My knowledge is out of date. In the early days, we had 10 Gbps per lambda and 160 lambdas (1.6 Tbps), and then 40 Gbps in 80 lambdas (3.2 Tbps), all in the C band. So what are we doing now? Faster lambdas (say, 100 Gbps)? More bits/Hz (different modulations)? Use of wavelengths outside of the C band?

Going to 100 Gbps, you still need 100 lambdas. Squeezing them all into C band would be messy. Higher modulations would be "interesting," not in a good way, and going outside of C band would require some type of dual-band amplifiers since EDFAs only cover C band.

I guess they could be using "Cisco math" and calling 5 Tbps in each direction "10 Tbps". That might work with 50 lambdas of 100 Gbps in the C band without too much magic.

Tape thrives at the margin as shipped capacity breaks record

another_vulture

Really ridiculous

A 4 TB HDD costs $120, retail qty 1. That's $.03/GB. Without a quantity discount and without compression. The LTO guys very frequently hype their numbers by assuming compression. It is a heck of a lot easier to compress disk ("de-dupe") than tape, but let's just assume HDD will be 3x LTO at the module level.

The equipment to support a PB of HDD is a lot cheaper than the equipment to support a PB of LTO.

You can turn off the disks you aren't using, so the power for the disk system is at least as small as for the tape system, and probably smaller.

Disks can be re-read or even reused millions of times. An LTO tape can be read or written a total of 260 times.

You can access any data in a PB of spun-down disks in less than 10 seconds.

Tape is dead.
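
The module-level cost figures above, as a two-liner:

hdd_price, hdd_tb = 120, 4
hdd_per_gb = hdd_price / (hdd_tb * 1000)
print(f"HDD: ${hdd_per_gb:.3f}/GB")                              # $0.030/GB
print(f"implied LTO at the assumed 3x ratio: ~${hdd_per_gb / 3:.3f}/GB")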

FCC says cities should be free to run decent ISPs. And Republicans can't stand it

another_vulture

"Competition" is a farce.

The incumbent "private companies" are regulated monopolies. They have cozy decades-old relationships with state government agencies that let them use their monopoly positions to maximize profits: there is effectively no competition and the free market does not exist. Localities that want decent Internet have no way to get it from these monopolies. The monopolies oppose federal intervention because they have lots of leverage at the state level and less at the federal level. State politics is a lot dirtier than federal politics. A more blatant example is state laws prohibiting Tesla from selling direct to the public. There are a lot of car dealers in state government.

Linux clockpocalypse in 2038 is looming and there's no 'serious plan'

another_vulture

2038 is a 31-bit problem, not a 32-bit problem

The 32-bit fields used everywhere to store UNIX time (not just Linux time) will roll over in 2106. However, any code that treats the field as a signed integer will roll over in 2038. The fix is therefore easy and does not require changing any data files, just code: use the same field but make sure to treat it as an unsigned integer.

This gives us until 2106 to quit using old file systems whose headers have 32-bit fields, and to quit using application file formats that use 32-bit seconds. This class of fixes is a whole lot harder than code fixes, as it requires that all the old files be converted.
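
A minimal sketch of the signed-vs-unsigned point, in Python rather than C:

import struct
from datetime import datetime, timezone

raw = struct.pack("<I", 0x80000000)     # the first second a signed 32-bit field cannot hold

signed, = struct.unpack("<i", raw)      # -2147483648: signed interpretation wraps in 2038
unsigned, = struct.unpack("<I", raw)    # 2147483648: unsigned interpretation is fine

print(signed)
print(datetime.fromtimestamp(unsigned, tz=timezone.utc))      # 2038-01-19 03:14:08+00:00
print(datetime.fromtimestamp(0xFFFFFFFF, tz=timezone.utc))    # unsigned rollover: 2106-02-07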

See: http://en.wikipedia.org/wiki/Year_2038_problem

TITANIC: Nuclear SUBMARINE cruising 'Sea of KRAKENS' may be FOUND ON icy MOON

another_vulture

Buoyancy

A sub (or a dirigible) needs neutral buoyancy. It needs enough of its volume to be less dense than the surrounding medium to counterbalance its heavier parts. For example, the Trieste bathyscaphe used a tank of oil that was lighter than water. For liquid methane, a fabric "tank" filled with hydrogen gas would probably be the best choice. We would need a way to use electricity from the power plant to generate hydrogen (and dispose of carbon) from the methane. Alternatively, with a much larger (insulated) envelope we can use heat to keep methane in a gaseous state. In either case the result looks more like a dirigible than a submarine.

The problem with gaseous buoyancy in a liquid is compression. Positive feedback: more depth, more pressure, less gas volume, less buoyancy, more sinking. Counteracting this requires active systems. For instance, with a variable-volume envelope, keep some gas at high pressure to release into the envelope to counter sinking.
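
Rough buoyancy numbers, all of them my own approximations rather than anything from the article:

rho_methane = 450.0                 # kg/m^3, liquid methane at Titan surface (approx.)
P, T = 1.5e5, 94.0                  # Pa, K: Titan surface conditions (approx.)
M, R = 2.016e-3, 8.314              # hydrogen molar mass, gas constant
rho_h2 = P * M / (R * T)            # ideal-gas estimate, ~0.4 kg/m^3

hull_mass = 1000.0                  # kg, assumed probe mass
print(f"H2 density: {rho_h2:.2f} kg/m^3")
print(f"gas envelope needed: ~{hull_mass / (rho_methane - rho_h2):.1f} m^3 per tonne of hull")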

'Giving geo-engineering to this US govt is like giving a child a loaded gun'

another_vulture

We already did that

The main proposal for albedo modification is to spew sulfur dioxide (SO2) into the atmosphere. This causes a high haze layer that reflects sunlight. But we already did that in the US by burning high-sulfur coal before and during the 1970s. The big spike in "global warming" occurred after we started scrubbing the sulfur out of the exhaust because it causes "acid rain."

For the last decade, China has been rapidly increasing their use of coal, and (you guessed it) their coal is high sulfur: they are in effect counteracting their CO2 emissions by spewing out SO2, and the warming trend is in abeyance. Unfortunately, the SO2 falls out in the relatively short term, but the CO2 does not. Also, the SO2 has other consequences (some bad like acidified lakes, some good like free fertilizer) that may eventually cause China to begin scrubbing their exhaust.

So: we already did a massive experiment in albedo modification, and we are now doing another one.

Walmart's $99 crap-let will make people hate Windows 8.1 even more

another_vulture

cheap UI for any project

The problem with most embedded controllers (e.g., Raspberry Pi, BeagleBone ...) is finding a good, inexpensive user interface device. A good solution is to run a web server on your embedded system and use a web browser running on a device with a touch screen. Buy this device, run Win*.1 or the OS of your choice, use it to run the browser or a custom UI app, and the problem is solved. For example, use several of these to replace the expensive UIs in a home automation system. The device is not itself a good embedded controller because it does not have the right physical interfaces.
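
A minimal sketch of the server side, using only the Python standard library (the GPIO call is a hypothetical placeholder, not a real API):

from http.server import BaseHTTPRequestHandler, HTTPServer

class UIHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/light/on":
            # toggle_gpio(17, True)    # hypothetical call into the real hardware
            body = b"light is on"
        else:
            body = b'<a href="/light/on">Turn the light on</a>'
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

# Any browser on the LAN (the $99 tablet, a phone, a PC) is now the UI.
HTTPServer(("0.0.0.0", 8080), UIHandler).serve_forever()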

This approach is also good because you can allow control from any smart phone or computer in your house: the $99 computers are then mounted in fixed locations on the wall just so you don't have to fumble around with your phone to turn on the lights or whatever.

Humanity now making about 41 mobes EACH SECOND

another_vulture

Growth.

World human population is growing at 75 million/yr. This is 2.38 net new humans per second.

There are about 7 billion humans. At the current rate, 7 billion phones will be shipped in 5.35 years.

If we assume that half the world's population will eventually have phones, then in 2.6 years we saturate even if nobody currently owned a phone, and the current production rate would sustain a phone replacement rate of once every 2.6 years.

Presumably the market penetration will increase, but probably not past 100%. Meanwhile, the replacement rate will decrease as the technology matures.

Conclusion: the production rate must decrease some time in the next 5 years.
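
The same sums in code form (41/second, population of 7 billion):

phones_per_second = 41
per_year = phones_per_second * 365.25 * 24 * 3600          # ~1.3 bn/year
population = 7e9
print(f"{per_year / 1e9:.2f} bn phones/year")
print(f"{population / per_year:.1f} years to ship one per human")
print(f"{(population / 2) / per_year:.1f} years to ship one per assumed phone owner")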

Cold storage, Facebook style? Flash FPGA controller to knock your SoCs off, vows upstart

another_vulture

LDPC

It appears that the on-board computation is actually the LDPC decoder. By doing the LDPC in the FPGA, you off-load it and also reduce the amount of data being sent. LDPC is a sophisticated error correction scheme normally used on noisy radio links.

FLAPE – the next big thing in storage archiving

another_vulture

Disk cost

DFO: no, the ridiculous $2,400,000 did not include those additional costs. Those costs are in a different line in the matrix in the graphic, and they are also very high relative to the same line item for tape. The only way you can possibly get to these numbers is to buy very fast, very small disks, and those are completely inappropriate for near-line storage.

With respect to scaling: one post mentions that tape cost does not rise as quickly as disk as the archive size increases. But this article is specifically about a 1PB store.

another_vulture

Re: Disk is a lot faster.

DJO: No, there is no way to cost-effectively "stripe" tape. Striping works by using multiple drives simultaneously ("striping" by adding heads on one drive is a different discussion). But tape drives are very expensive. The whole point of tape is to use a small set of drives to handle the entire set of tapes, and the logistics of the tape-handler robots will get very ugly very fast. The tape library usually has multiple drives to accommodate multiple simultaneous requests, not to do striping. But this is not the metric the article uses. When you look at multiple simultaneous requests, the controller-per-disk scheme is overwhelmingly superior.

The contemplated disk scheme is a very conservative RAID 1/0, and is still massively cheaper than tape. We can easily go to two separate RAID 1/0 arrays for backup and still be cheaper. Where is this madness of which you speak?

another_vulture

Disks do have multiple heads

A modern disk has one head per platter surface, so a modern high-capacity disk may have up to eight heads. However, since they share a servo they cannot be dynamically aligned to the tracks on each surface simultaneously at the new extreme track density ("shingled tracks") now coming into vogue. Earlier, it would have been possible, but it was not cost-effective because a bunch of relatively expensive read/write electronics is shared between all heads and would need to be duplicated, and the speed of the SATA interface would need to be (at least) quadrupled.

It's also unnecessary, since an array of disks accesses multiple heads simultaneously.

another_vulture

Re: @another_vulture

Yes, AC, I meant $150,000. Actually, I was way off: $134,000 will buy 1000 of these disks, so pay half for the disks and half for the remaining infrastructure.

Yes, RAID 1/0 is gross overkill for near-line storage. You can use your scheme or any of several others to build a system that is cheaper, faster, more power-efficient, smaller in footprint, and probably better in other ways. This simply makes the article's $2,400,000 number even sillier.

another_vulture

$2,400,000 for 1 PB of disk??!

That's simply silly. I can purchase a 4 TB SATA drive for $134.00, retail, quantity 1. 500 of these yield a redundant 1 PB array for less than $150. Stripe them in sets of (say) eight (sixteen disks in a RAID 1/0 configuration) and only spin up a stripe when I need it. This has faster access time (limited by spin-up) and faster throughput (limited by stripe width). You can still use the Flash for metadata.

The Register had an article about the Facebook Open Vault specification that is more or less just this:

http://www.theregister.co.uk/2013/09/24/facebook_on_the_rue_morgue/

another_vulture

Disk is a lot faster.

It is trivially easy to increase the disk file transfer rate. Just stripe the files across multiple disks. In a Petabyte array, you can theoretically increase the speed by a factor of about 200 at almost no incremental cost, since you already have 200 drives, 200 controllers, etc. This also means that you can use cheaper 7200 RPM drives. Faster rotation increases database transaction rates, but striping increases bulk transfer rates. Tape simply loses in this regard.

Finding the formula for the travelling salesman problem

another_vulture

Factorial, not exponential

I don't think there is an exponential bound on a factorial problem. Factorial is worse than exponential.
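
To see how quickly the two diverge:

import math

for n in (10, 20, 30):
    print(f"n={n}: n! ~ 10^{math.log10(math.factorial(n)):.0f}, 2^n ~ 10^{n * math.log10(2):.0f}")
# n! eventually outgrows c^n for any fixed base c.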

Remember Control Data? The Living Computer Museum wants YOU

another_vulture

Too modern.

I have no experience with the newfangled 3rd gen CDC Cyber or IBM 1130. I do have experience with real computers: 2nd gen CDC 3800 and IBM 7040.

The 3800 was the supercomputer of its time. 48-bit words, discrete logic (one single flip-flop on a small module, so 48 modules per register.) Freon cooled.

Drooping smartphone sales mean hard times ahead for Brit chipmaker

another_vulture

saturated market

Let's do the math. They are selling chips at a rate of >500M/quarter, or >2 billion/year. Assume each phone needs one chip. In 4 years, they provide enough chips for 8 billion phones. But there are only 7 billion humans on the planet, and that includes infants.

Eagle steals crocodile-cam, records video selfie

another_vulture

NOT "stolen"

"Stolen" is a value-laden word. The juvenile eagle picked the object up and flew with it. I cannot believe that the camea was identified in any way that could be interpreted by this eagle as belongimg to some owner. The eagle simply asserted its right to possess an object in its natural environment. This is justifiable retribution for humans who force eagles to carry cameras.

On a related note: is there a way to make camera packages attractive to eagles? If so, a properly designed camera package (internally stabilized, GPS tracking, multiple POV, location transmitter) would provide a way to track the flights of these juvenile eagles, and also provide spectacular videos that may very likely result in crowdfunding. I would certainly be willing to pay for the result.

Decades ago, computing was saved by CMOS. Today, no hero is in sight

another_vulture

Re: Third dimension/empty space

Look inside a modern enterprise server: what do you see?

Power supplies

Disks

Fans

Printed circuit boards surrounded by air.

But power supplies, disks, and fans do not need low-latency connections, and the only reason for all that empty airspace is to provide cooling. If we remote the disks and power supplies, and use a cooling system that is more efficient than brute-force air cooling, we can increase the computing density by several orders of magnitude. My guess is that we can easily achieve a factor of 1000 improvement even if there is no additional improvement from "Moore's law" in its traditional sense. If we do get this factor of 1000, that is equivalent to 20 years of Moore's law.
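
The factor-of-1000 equivalence, assuming a doubling every two years:

import math

factor = 1000
doubling_years = 2.0                       # assumed Moore's-law doubling period
print(f"{factor}x = {math.log2(factor):.1f} doublings = ~{math.log2(factor) * doubling_years:.0f} years")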

In this scenario, a computer is a dense collection of computing elements surrounded by cooling elements, power supply elements, and storage elements. Cooling is almost certainly provided by a transfer fluid such as water or freon.

another_vulture

Third dimension

Assume we are at a CMOS plateau. Over the last decade the metric has shifted from MIPS/$ to MIPS/Watt, and newer computers are dramatically more efficient. But this means we can get more MIPS/liter at an acceptable power density. Sure, we may need to get more innovative with cooling architecture, but engineers know how to do this.

But why? Well, because cramming more circuitry into a smaller volume reduces the interconnect length, and this reduces latency. If I can reduce a roomful of computers to a single rack, my longest runs go from 20 meters to two meters and latency goes from 100ns down to 10ns. (Speed of light in fiber is 20cm/ns.)
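
The latency arithmetic from the figures above:

v_fibre = 0.20                 # m/ns, speed of light in fibre
for run_m in (20, 2):
    print(f"{run_m} m run: {run_m / v_fibre:.0f} ns one way")     # 100 ns and 10 ns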

Today's devices are almost all single layer: one side of one die. A wafer is patterned and sliced into dice, and each die is encapsulated in plastic, with leads. This is primarily because they are air cooled, so they must be far apart in the third dimension. But it's physically possible to stack tens or hundreds of layers if you can figure out how to remove all the heat.

HP: Our NonStop servers will be rock solid – even when running on x86

another_vulture

It's all about legacy software

NonStop supports a large number of legacy applications that have been evolving since the Tandem days, mostly in very conservative industries. It is extremely expensive to migrate this software, since NonStop supports a set of fine-grained checkpointing features that is not available on Unix. The architecture of those applications depends on these features, so they cannot be "ported," but instead must be re-implemented starting from the architecture level. The users are therefore locked in, and HP can therefore make good money if they can provide NonStop on modern hardware. Sadly, this means Xeon.

This happened to both Unisys architectures (Burroughs and UNIVAC), both of which are sufficiently different from UNIX to make porting infeasible. It's not clear if this is also true of HP-UX or VMS.

The NonStop architecture depends on fault identification features at the hardware level, some of which have only recently been added to the Xeons. This may be the reason that HP did not do this earlier.

HP 100TB Memristor drives by 2018 – if you're lucky, admits tech titan

another_vulture

Re: Biiiiiiig Changes

There is an easy technical fix here. The CPU can generate a random encryption key and use it for the "volatile" portion of the storage. The CPU will need some serious hardware-assisted encryption/decryption to avoid a performance penalty. The randomly-generated key will reside only in a register inside the CPU. When power is cut, the register loses the value.

The same hardware can also use other keys for other memory ranges to support memory-speed access to encrypted non-volatile data. As with today's encrypted disk, you obviously cannot store these keys on the non-volatile store.
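
A conceptual sketch of the boot-time-key idea in software (real hardware would do this transparently in the memory controller; the key here lives only in the process, standing in for a CPU register):

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

boot_key = os.urandom(32)        # generated at "power-on", never written anywhere
nonce = os.urandom(16)

def access_volatile(data: bytes) -> bytes:
    # AES-CTR is its own inverse: the same call encrypts on write and decrypts on read.
    ctx = Cipher(algorithms.AES(boot_key), modes.CTR(nonce)).encryptor()
    return ctx.update(data)

stored = access_volatile(b"scratch data living in non-volatile memory")
print(access_volatile(stored))   # readable only while boot_key still exists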

another_vulture

Re: confidently? Remember Itanic?

Some of us have long memories. HP's predictions have no credibility for us. Memristors will completely supersede DRAM and NAND flash? The Itanium was touted as the technology that would completely supersede the x86. Itanium was pushed so hard that it seriously distorted the entire industry from about 1997 until about 2002. This time I'll wait until I can buy one, thanks.

Storage Memory

another_vulture

A processor is an expensive controller

DIMM sockets are connected to a processor. They are there to provide RAM for the processor. An inexpensive processor (still not cheap) can support 4 DIMMs. A processor that supports many more DIMM sockets is much more expensive. In effect, the processor becomes a very expensive flash controller. Conclusion: it's cheaper to add all that flash onto a PCIe card. An x16 PCIe slot provides very high throughput without occupying valuable DIMM channel capacity.

If the flash DIMM also provides RAM, the equation changes and the DIMM makes sense. Newer types of memory that provide both RAM functionality and static storage are also suitable for DIMMs, but are not yet commercially available.

Is it barge? Is it a data center? Mystery FLOATING 'Google thing'

another_vulture

Other benefits

A big problem with tax incentives is that they can be changed. A land-based data center is hard to move when the local tax structure changes, while a barge can be moved more easily.

Barges can be built in a single place, assembly-line fashion, with the bulk of the work done in efficient Asian yards. Fixed centers must be built in place using local labor.

11m Chinese engulfed by 'Airpocalypse' at 4000% of safe pollution levels

another_vulture

Industrialization

The Chinese population is 1.34 billion.

China is rapidly industrializing. Countries in this phase historically pass through a period of horrible smog, but since the population is so huge, it's happening in a lot of cities all at the same time. This happened in London in 1952 and in Donora, Pennsylvania, in 1948, but with much smaller populations the problems were much less widespread. It is apparently very hard for one country to learn from the experience of another. It takes a widely publicized tragedy to cause change.

From a human perspective, this is a short-term acute crisis (probably thousands of localized deaths in a week) sitting on top of a chronic health problem (tens of thousands of dispersed deaths in a year) and a global warming problem (potentially much more severe on a long timescale.) The short-term problem gets the attention and causes action.

Change takes time. In 1950, all of London's buildings were black with soot. Today, London's air is (fairly) clean. It's hard to predict how the Chinese government will react to killer smog and how long it will take to solve this problem. The Chinese GNP per capita may not be high enough to avoid life-or-death tradeoffs. A shift away from coal may require resources that would otherwise avert famine: a choice between death from smog or death from hunger.

(Sorry, but I've been reading all of Dickens. London in the 1840's and all that.)

Tape rocks for storage - if you don't need to, um, access your data

another_vulture

External USB disks

We had this discussion last year, in the context of the cost of a petabyte storage system. This is a slight update to adjust the costs (cheaper disks) and to compare with tape.

An LTO-6 tape stores 2.5 TB (raw) and costs about $50, or $20/TB. A 4 TB external HD, USB 3.0, costs about $150, or $37/TB. The bytes per cubic centimeter are about the same, and the HD cost continues to drop. The potential for compression is much better for disk than for tape, but I choose to ignore this because any compression scheme adds complexity that may prevent the data from being recoverable 20 years from now.

I can build an archival storage system with 8 computers each supporting 32 of these drives, with switchable power for each drive. The total cost for the non-disk portion of this system is about $4000, so the system-level cost per petabyte is about $41,000.
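
The system-level arithmetic, using the figures above:

drive_price, drive_tb = 150, 4
drives = 8 * 32                              # 8 host computers x 32 drives each
capacity_pb = drives * drive_tb / 1000       # ~1 PB
non_disk = 4000                              # hosts, enclosures, switched power
total = drives * drive_price + non_disk
print(f"{capacity_pb:.2f} PB for ${total:,}, i.e. ~${total / capacity_pb:,.0f}/PB")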

This is basically a stack of 256 disk drives that are almost all powered off almost all of the time. Any given file can be accessed by turning the drive on and waiting for it to spin up, so access is about 5 seconds. In a backup/recovery system, you treat each drive more or less like an LTO, so you power up one drive each day and write to it for an hour or so (assuming you have 4TB/day to back up.) Just as with tape, you may choose to back up to two drives at twice the overall cost.

Disk lifetime is driven primarily by the amount of time the disk is powered up, so data retention in this system should be very long.

another_vulture

Re: Longevity of SSD as a medium

"LTO (at the time) rated for 15-30 years."

Sorry, but this is an invalid comparison. An SSD cell is rated for thousands of program/erase cycles (tens of thousands or more for SLC). The LTO is rated for 260 (yes, less than three hundred) full passes. If you only write to the SSD 260 times over the course of 15-30 years, it will likely never exhibit the "wear-out" phenomenon.

Boffins debate killing leap seconds to help sysadmins

another_vulture

IEEE 1588

When two computers need to synchronize absolute time to sub-microsecond accuracy, leap seconds become a big problem. There are a great many situations where this is needed, especially in measurement systems for science and industry. These systems use IEEE 1588 (a.k.a. PTP, the Precision Time Protocol). PTP uses a continuous timescale (like GPS time) instead of UTC precisely to avoid leap seconds. See:

https://en.wikipedia.org/wiki/Precision_Time_Protocol
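
A minimal sketch of why the continuous timescale is convenient: converting it back to UTC is a single constant offset (18 leap seconds as of 2017; keeping that constant up to date is the only UTC-specific part):

from datetime import datetime, timedelta, timezone

GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)
GPS_MINUS_UTC_S = 18          # leap seconds accumulated since the GPS epoch (as of 2017)

def gps_to_utc(gps_seconds: float) -> datetime:
    # GPS time counts every SI second since the epoch, with no leap-second jumps.
    return GPS_EPOCH + timedelta(seconds=gps_seconds - GPS_MINUS_UTC_S)

print(gps_to_utc(1_300_000_000))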
