54 posts • joined 13 Mar 2011
It appears that the on-board computation is actually the LDPC decoder. By doing the LDPC in the FPGA, you off-load it and also reduce the amount of data being sent. LDPC is a sophisticated error correction scheme normally used on noisy radio links.
DFO: no, the ridiculous $2,400,000 did not include those additional costs. Those costs are in a different line in the matrix in the graphic, and they are also very high relative to the same line item for tape. The only way you can possibly get to these numbers is to buy very fast, very small disks, and those are completely inappropriate for near-line storage.
With respect to scaling: one post mentions that tape cost does not rise as quickly as disk as the archive size increases. But this article is specifically about a 1PB store.
Re: Disk is a lot faster.
DJO: No, there is no way to cost-effectively "stripe" tape. Striping works by using multiple drives simultaneously ("striping" by adding heads on one drive is a different discussion.) But tape drives are very expensive. The whole point of tape is to use a small set of drives to handle the entire set of tapes, and the logistics of the tape-handler robots will get very ugly very fast. The tape library usually has multiple drives to accommodate multiple simultaneous requests, not to do striping. But this is not the metric the article uses. When you look at multiple simultaneous requests, the controller-per-disk scheme is overwhelmingly superior.
The contemplated disk scheme is a very conservative RAID 1/0, and is still massively cheaper than tape. We can easily go to two separate RAID 1/0 arrays for backup and still be cheaper. Where is this madness of which you speak?
Disks do have multiple heads
A modern disk has one head per platter surface, so a modern high-capacity disk may have up to eight heads. However, since they share a servo they cannot be dynamically aligned to the tracks on each surface simultaneously at the new extreme track density ("shingled tracks") now coming into vogue. Earlier, it would have been possible, but it was not cost-effective because a bunch of relatively expensive read/write electronics is shared between all heads and would need to be duplicated, and the speed of the SATA interface would need to be (at least) quadrupled.
It's also unnecessary, since an array of disks accesses multiple heads simultaneously.
Yes, AC, I meant $150,000. Actually, I was way off: $134,000 will buy 1000 of these disks, so pay half for the disks and half for the remaining infrastructure.
Yes, RAID 1/0 is gross overkill for near-line storage. You can use your scheme or any of several others to build a system that is cheaper, faster, more power-efficient, smaller in footprint, and probably better in other ways. This simply makes the article's $2,400,000 number even sillier.
$2,400,000 for 1 PB of disk??!
That's simply silly. I can purchase a 4 TB SATA drive for $134.00, retail, quantity 1. 500 of these yield a redundant 1PB array for less than $150. Stripe them in sets of (say) eight (sixteen disks in a RAID 1/0 configuration) and only spin up a stripe when I need it. This has faster access time (limited by spin-up) and faster throughput (limited by stripe width.) You can still use the Flash for metadata.
The Register had an article about the Facebook Open Vault specification that is more or less just this:
Disk is a lot faster.
It is trivially easy to increase the disk file transfer rate. Just stripe the files across multiple disks. In a Petabyte array, you can theoretically increase the speed by a factor of about 200 at almost no incremental cost, since you already have 200 drives, 200 controllers, etc. This also means that you can use cheaper 7200 RPM drives. Faster rotation increases database transaction rates, but striping increases bulk transfer rates. Tape simply loses in this regard.
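As a sanity check on that claim, here is a trivial back-of-envelope sketch. The per-drive rate is an assumed round number for a cheap 7200 RPM SATA drive, not a datasheet figure:

```python
# Ideal full-width stripe: aggregate bulk-transfer rate scales with drive count.
per_disk_mb_s = 150   # assumed sustained rate of a cheap 7200 RPM SATA drive
drives = 200          # roughly what a 1 PB array of 4-5 TB drives contains

aggregate_mb_s = per_disk_mb_s * drives
print(aggregate_mb_s)  # 30000 MB/s, i.e. ~30 GB/s in theory
```

Real arrays lose some of that to controller and bus limits, but the point stands: bulk transfer rate is bought with stripe width, not spindle speed.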
Factorial, not exponential
I don't think there is an exponential bound on a factorial problem. Factorial is worse than exponential.
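A quick demonstration for the unconvinced: an exponential with any fixed base is eventually overtaken by the factorial.

```python
from math import factorial

# For small n the exponential is ahead...
assert factorial(10) < 10 ** 10
# ...but the factorial overtakes it and never looks back.
assert factorial(100) > 10 ** 100
print("factorial wins for large n")
```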
I have no experience with the newfangled 3rd gen CDC Cyber or IBM 1130. I do have experience with real computers: 2nd gen CDC 3800 and IBM 7040.
The 3800 was the supercomputer of its time. 48-bit words, discrete logic (one single flip-flop on a small module, so 48 modules per register.) Freon cooled.
Let's do the math. They are selling chips at a rate of >500M/quarter, or >2 billion/year. Assume each phone needs one chip. In 4 years, they provide enough chips for 8 billion phones. But there are only 7 billion humans on the planet, and that includes infants.
"Stolen" is a value-laden word. The juvenile eagle picked the object up and flew with it. I cannot believe that the camera was identified in any way that could be interpreted by this eagle as belonging to some owner. The eagle simply asserted its right to possess an object in its natural environment. This is justifiable retribution for humans who force eagles to carry cameras.
On a related note: Is there a way to make camera packages attractive to eagles? If so, a properly-designed camera package (internally stabilized, GPS tracking, multiple POV, location transmitter) would provide a way to track the flights of these juvenile eagles, and also provide spectacular videos that may very likely result in crowdfunding. I would certainly be willing to pay for the result.
Re: Third dimension/empty space
Look inside a modern enterprise server: what do you see?
Printed circuit boards surrounded by air.
But power supplies, disks, and fans do not need low-latency connections, and the only reason for all that empty airspace is to provide cooling. If we remote the disks and power supplies, and use a cooling system that is more efficient than brute-force air cooling, we can increase the computing density by several orders of magnitude. My guess is that we can easily achieve a factor of 1000 improvement even if there is no additional improvement due to "Moore's law" in its traditional sense. If we do get this factor of 1000, that is equivalent to 20 years of Moore's law.
In this scenario, a computer is a dense collection of computing elements surrounded by cooling elements, power supply elements, and storage elements. Cooling is almost certainly provided by a transfer fluid such as water or freon.
Assume we are at a CMOS plateau. Over the last decade the metric has shifted from MIPS/$ to MIPS/Watt, and newer computers are dramatically more efficient. But this means we can get more MIPS/liter at an acceptable power density. Sure, we may need to get more innovative with cooling architecture, but engineers know how to do this.
But why? Well, because cramming more circuitry into a smaller volume reduces the interconnect length, and this reduces latency. If I can reduce a roomful of computers to a single rack, my longest runs go from 20 meters to two meters and latency goes from 100ns down to 10ns. (Speed of light in fiber is 20cm/ns.)
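The arithmetic behind those latency numbers, using the 20 cm/ns figure:

```python
# One-way fibre latency scales directly with run length.
cm_per_ns = 20                          # speed of light in fibre, roughly
room_run_ns = (20 * 100) / cm_per_ns    # 20 m run across a machine room
rack_run_ns = (2 * 100) / cm_per_ns     # 2 m run within one rack

print(room_run_ns)  # 100.0 ns
print(rack_run_ns)  # 10.0 ns
```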
Today's devices are almost all single layer: one side of one die. A wafer is patterned and sliced into dice, and each die is encapsulated in plastic, with leads. This is primarily because they are air cooled, so they must be far apart in the third dimension. But it's physically possible to stack tens or hundreds of layers if you can figure out how to remove all the heat.
It's all about legacy software
NonStop supports a large number of legacy applications that have been evolving since the Tandem days, mostly in very conservative industries. It is extremely expensive to migrate this software, since NonStop supports a set of fine-grained checkpointing features that is not available on Unix. The architecture of those applications depends on these features, so they cannot be "ported," but instead must be re-implemented starting from the architecture level. The users are therefore locked in, and HP can therefore make good money if they can provide NonStop on modern hardware. Sadly, this means Xeon.
This happened to both Unisys architectures (Burroughs and UNIVAC), both of which are sufficiently different from Unix to make porting infeasible. It's not clear if this is also true of HP-UX or VMS.
The NonStop architecture depends on fault identification features at the hardware level, some of which have only recently been added to the Xeons. This may be the reason that HP did not do this earlier.
Re: Biiiiiiig Changes
There is an easy technical fix here. The CPU can generate a random encryption key and use it for the "volatile" portion of the storage. The CPU will need some serious hardware-assisted encryption/decryption to avoid a performance penalty. The randomly-generated key will reside only in a register inside the CPU. When power is cut, the register loses the value.
The same hardware can also use other keys for other memory ranges to support memory-speed access to encrypted non-volatile data. As with today's encrypted disk, you obviously cannot store these keys on the non-volatile store.
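A toy sketch of the scheme in Python: a key that exists only in a (simulated) CPU register encrypts the "volatile" region. Real hardware would use an AES engine; the SHA-256 counter-mode keystream and all the names here are made up purely for illustration.

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Pseudo-random stream from the key; a stand-in for a hardware AES-CTR engine."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # XOR with the keystream; the same call both encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

register_key = secrets.token_bytes(32)            # lives only in a "CPU register"
volatile_data = b"process heap contents"
stored = xor_crypt(volatile_data, register_key)   # what the NV memory actually holds

assert xor_crypt(stored, register_key) == volatile_data  # readable while powered
register_key = None  # power cut: the key vanishes, the stored bytes become noise
```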
Re: confidently? Remember Itanic?
Some of us have long memories. HP's predictions have no credibility for us. Memristors will completely supersede DRAM and NAND flash? The Itanium was touted as the technology that would completely supersede the x86. Itanium was pushed so hard that it seriously distorted the entire industry from about 1997 until about 2002. This time I'll wait until I can buy one, thanks.
A processor is an expensive controller
DIMM sockets are connected to a processor. They are there to provide RAM for the processor. An inexpensive processor (still not cheap) can support 4 DIMMs. A processor that supports many more DIMM sockets is much more expensive. In effect, the processor becomes a very expensive Flash controller. Conclusion: it's cheaper to add all that flash onto a PCIe card. A 16x PCIe slot provides very high throughput without occupying valuable DIMM channel capacity.
If the Flash DIMM also provides RAM, the equation changes and the DIMM makes sense. Newer types of memory that provide RAM functionality and static storage are also suitable for DIMMs, but are not yet commercially available.
A big problem with tax incentives is that they can be changed. A land-based data center is hard to move when the local tax structure changes, while a barge can be moved more easily.
Barges can be built in a single place, assembly-line fashion, with the bulk of the work done in efficient Asian yards. Fixed centers must be built in place using local labor.
The Chinese population is 1.34 Billion.
China is rapidly industrializing. Countries in this phase historically pass through a phase of horrible smog, but since China's population is so huge, it's happening in a lot of cities all at the same time. This happened in London in 1952, and in Donora, Pennsylvania, in 1948, but with much smaller populations, the problems were much less widespread. It is apparently very hard for one country to learn from the experience of another. It takes a widely publicized tragedy to cause change.
From a human perspective, this is a short-term acute crisis (probably thousands of localized deaths in a week) sitting on top of a chronic health problem (tens of thousands of dispersed deaths in a year) and a global warming problem (potentially much more severe on a long timescale.) The short-term problem gets the attention and causes action.
Change takes time. In 1950, all of London's buildings were black with soot. Today, London's air is (fairly) clean. It's hard to predict how the Chinese government will react to killer smog and how long it will take to solve this problem. The Chinese GNP per capita may not be high enough to avoid life-or-death tradeoffs. A shift away from coal may require resources that otherwise avoid famine: a choice between death from smog or death from hunger.
(Sorry, but I've been reading all of Dickens. London in the 1840's and all that.)
External USB disks
We had this discussion last year, in the context of the cost of a petabyte storage system. This is a slight update to adjust the costs (cheaper disks) and to compare with tape.
An LTO-6 stores 2.5 TB (raw) and costs about $50, or $20/TB. A 4TB external HD, USB 3.0, costs about $150, or $37/TB. The bytes per cubic centimeter are about the same, and the HD cost continues to drop. The potential for compression is much better for disk than for tape, but I choose to ignore this because any compression scheme adds complexity that may prevent the data from being recoverable 20 years from now.
I can build an archival storage system with 8 computers each supporting 32 of these drives, with switchable power for each drive. The total cost for the non-disk portion of this system is about $4000, so the system-level cost per petabyte is about $41,000.
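For the arithmetic-inclined, the numbers above line up roughly like this. I've taken the drive price as the ~$150 retail figure from the previous paragraph; small retail-price movements account for the gap to the quoted $41,000:

```python
drives = 8 * 32        # 8 computers, 32 external USB drives each
drive_cost = 150       # 4 TB external USB 3.0 drive, USD
non_disk = 4000        # computers, hubs, power switching, USD

total = drives * drive_cost + non_disk
print(total)                        # 42400
print(round(total / (drives * 4)))  # 41 USD per raw TB
```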
This is basically a stack of 256 disk drives that are almost all powered off almost all of the time. Any given file can be accessed by turning the drive on and waiting for it to spin up, so access is about 5 seconds. In a backup/recovery system, you treat each drive more or less like an LTO, so you power up one drive each day and write to it for an hour or so (assuming you have 4TB/day to back up.) Just as with tape, you may choose to back up to two drives at twice the overall cost.
Disk lifetime is driven primarily by the amount of time the disk is powered up, so data retention in this system should be very long.
Re: Longevity of SSD as a medium
"LTO (at the time) rated for 15-30 years."
Sorry, but this is an invalid comparison. An SSD is rated for at least a million writes per bit. The LTO is rated for 260 (yes, less than three hundred) full passes. If you only write to the SSD 260 times over the course of 15-30 years, it will likely not exhibit the "wear-out" phenomenon.
When two computers need to synchronize absolute time to sub-microsecond accuracy, leap seconds become a big problem. There are a great many situations where this is needed, especially in measurement systems for science and industry. These systems use IEEE 1588 (a.k.a. PTP, Precision Time Protocol.) PTP uses GPS time instead of UTC precisely to avoid leap seconds. See:
Clearly, there are other balloons in the sky that may have introduced material into the upper stratosphere. The probability is minuscule, but more likely than a continuous rain of microbes from spaaace. In fact, a concentration in the upper atmosphere that is high enough to detect in a single sample drawer implies a concentration in space that satellite dust-collection experiments would have found by now.
Re: While we are on the subject of woven fabrics, holes and magnetic storage...
Sorry, but the IBM 353 disk drive was not used with the RAMAC computer system: it was used with the IBM 7030 computer system. The IBM 350 Disk drive was used with the RAMAC computer system and is traditionally called the "RAMAC disk."
No, but your neighbors do.
You don't need to worry unless you let someone come inside your house and connect a cable to the femtocell to get access to the OS. This report is about a local hack, not a remote hack to the femtocell.
On the other hand, your neighbors, to whom you provided access, need to worry, because you have that physical access. This means that you can use this hack to monitor their phone calls and SMS messages.
If you are really paranoid, you can protect against any future remote attack on your femtocell by ensuring that your router firewall is configured to stop all incoming access other than the IPsec tunnel, but there is currently no published remote attack. We can hope that the femtocell has internal firewall rules and other configurations that prevent remote logins.
It's your phone, not your femtocell
You don't need to worry about your own femtocell if you keep it physically secure. Instead, you need to worry about your phone when it connects to someone else's femtocell. But this is just like using someone else's Wi-Fi hotspot: it means you need phone-based security.
Basically, unless you have phone-based security, you are trusting the (extended) phone network to not be evil. Why are you more afraid of the femtocell owner than you are of the phone company equipment? Oh right! We know we can trust the phone company to never make our connections available to a third party. Silly me.
I'm confused (as usual.)
This article makes no mention of quantum tunnelling, but we know ( http://en.wikipedia.org/wiki/Tunnel_diode ) that this effect is operative at lengths below about 10nm. So, how can a 3nm device operate without considering quantum tunnelling?
Requires a motorized antenna
Yes, MEO has all of the advantages you mention. In addition, since the satellite is closer, you need less power per bit to send the signal to the satellite from the ground.
However, there is one drawback. With GEO, you can point your antenna at the satellite and then lock it in place: no motors required. With MEO, the satellite crosses the sky, and your antenna must track it. Furthermore, unless you have a second antenna, you lose the signal for a few seconds every 20 minutes or so. One satellite sets in the east and another rises in the west (yes, opposite of the sun) and you must swing your antenna to the new satellite.
CTBTO detected radionuclides after the 2006 test, but not after the 2009 test, and lots of folks think the 2009 test was faked. There is not yet a report of radionuclide detection for the 2013 test, but we need to wait a few more days at least before we get a definite statement from CTBTO. My guess is this one was also a fake.
They were observed to have dug two tunnels. My guess is that one has a real A-bomb, while the other was filled with conventional explosive. The real test failed and they then blew the conventional bomb as a cover-up. This was not to fool the world, but rather to fool the upper echelon of NK, to avoid being executed for failure. It is not possible to fake the radionuclide signature, which is not just Xenon 133. It is, however, just possible that an underground test completely seals all of the cracks and that there is therefore no radionuclide signature at all.
World population is 7.066 Billion
At the rate of 1.75 billion/yr, we need 4 years to provide each human a phone. This includes every infant and every person in North Korea. Sure, some folks get a new phone every year, and some have more than one phone.
If half the population has phones and the average phone life is 2 years, that accounts for the entire market. Why do we expect any growth?
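The saturation arithmetic, spelled out with the population and shipment figures quoted above:

```python
population_bn = 7.066        # world population, billions
chips_per_year_bn = 1.75     # >500M chips per quarter

# Years of current production to give every human one phone:
print(round(population_bn / chips_per_year_bn, 1))   # 4.0

# Half the population on a 2-year replacement cycle:
replacement_bn = (population_bn / 2) / 2
print(replacement_bn)   # ~1.77 bn/year, i.e. the whole current market
```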
Re: USB 3.0
Yes. Each of the 4' shelves needs four disk power distribution systems (DPDS.) Each DPDS would be a 10-position power strip plugged into a USB-controlled plug. The four DPDSs plus the four USB hubs plus the computer plug into a 10-position power strip, so each shelf ends up with a single plug leading out.
The main bulk is in the DPDS power strips and their plugs. Since each disk has a power cord that unplugs from the disk unit, it's possible to build a custom DPDS by cutting the plugs off of these cords and screwing the wires directly to a terminal block inside an approved small electrical box. All of this fits between the two rows of disks on the 1'-wide shelf.
Re: USB 3.0
Update on power control: It's ugly, but a single-circuit USB-controlled unit costs $25 USD. So, 25 x $25 costs $625 USD, not the $2000 mentioned previously.
The costs mentioned in the original post were retail qty 1 on the web. I suspect you can get at least a 15% discount for this large order, so the total for each of the redundant 1PB systems is just over $50,000 USD.
Also, the architecture as stated is 7 systems, one per shelf, each capable of supporting 40 x 4TB, but actually supporting 36 to get an even balance. It might be prettier to use 8 shelves each supporting 32 x 4TB, just because it's binary. Adds slightly to the cost, but those $1200 computers are WAY overkill. In fact, since we are only powering up one set of 10 at a time, we can actually get away with a single computer instead of 7 or 8 computers, which means we do not need the switch.
In case it's not obvious, this system is basically an automated version of a stack of unpowered disks on a shelf. Access time to a file will usually be about 10 seconds to allow the disks to spin up and stabilize.
You can buy a 4TB external USB 3.0 drive for $210.00 USD. A 10-port hub costs $50 USD. A computer with USB 3.0 and a 10 Gig-e NIC costs $1200 USD. An 8-port 10Gig-e switch costs $1000.
One switch, 7 computers, 25 USB hubs, 250 drives: $1000 + 7 x $1200 + 25 x $50 + 250 x $210 = $63,150.
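Spelled out with each term labelled, using the prices listed above:

```python
switch  = 1 * 1000    # one 8-port 10Gig-e switch
servers = 7 * 1200    # computers with USB 3.0 and a 10Gig-e NIC
hubs    = 25 * 50     # 10-port USB hubs
drives  = 250 * 210   # 4 TB external USB 3.0 drives

total = switch + servers + hubs + drives
print(total)   # 63150
```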
Now double it because we want a second one in a secure location for backup.
The external disks are 2" wide and < 5" deep, so 40 sit on a 4' shelf 1' deep, and we need 7 such shelves, about 8" high, and each has room for one computer and four hubs.
For power, we need a cheap way to turn the AC power on and off for the disks. Unfortunately, it's not cheap, so we turn them on and off in sets of 10. To access the data, turn on the correct set, copy the data, turn off the set. Power control for 25 sets will cost perhaps another $2000.
Any parent could predict this
Children do exactly this, and need to be corrected. Watson called "BS," probably correctly, but one must use the correct vocabulary subset depending on the audience and context. See:
You started by defining "the Internet" to include its servers, and you are absolutely right. But the server component is silicon and electronics, not photonics. For servers, the figure of merit as recently as 2005 was ops/$. Now, the figure of merit is ops/Watt. Internet content providers (Google, Facebook, etc.) are no longer compute-constrained, so the cost of operations is now driven by the energy used. We can expect ops/Watt to continue to increase even in the pure electronic domain due to increased integration at the chip level. Later, we will start to see power reduction in servers at the board level when chip-to-chip photonics replaces chip-to-chip electrical signalling.
At the data center level, we will see much more power-efficient LANs. Up to now, LAN technology was driven by bps/$. But now we can also look at bps/Watt, and dramatic improvements are possible.
One upshot of all of this is that the capability of a physical data center (ops per cubic meter) will continue to increase exponentially, even though our old metrics such as CPU cycles have flattened.
In the days of sail, the RN did not use ships of the line against pirates. The RN used frigates and smaller ships for that. Similarly, a big-deck carrier is completely inappropriate against modern pirates. Enterprise has a crew of 4600, and its task force has about that again, for a total of 9200. That's enough to crew 92 LCSs, and an LCS is just about ideal against pirates, since it can support fast patrol craft and helicopters. The problem is that neither the USN nor the RN really wants to do anti-piracy because it just isn't sexy and it does not provide seagoing commands for admirals. For LCS, see:
The US has 10 big-deck carriers plus 9 "little" carriers. The entire rest of the world has a total of zero big-deck carriers and nine "little" carriers. See:
The big carriers can support various high-performance aircraft. The little guys handle STOL or VTOL aircraft, which have all sorts of design compromises.
Today, in the data center we have 10Gbps over copper to each server. An individual server doing a non-trivial job might support a 10Gbps load. By Moore's law, an individual server may need to support a 100Gbps load in 2020. But we can do 100Gbps today, using WDM, on a single fiber, to each server. So today, CPU power is the constraint, not NIC bandwidth.
By 2020, a WDM NIC should be able to handle 10x100GbE or better, cheaply, on a single fiber. NIC BW will not be the constraint. LAN BW will follow suit. WAN BW will (again) be the bottleneck.
Not 12 Lasers.
That is a "multi-core fiber" (MCF.) It has 12 cores: effectively twelve spatially-separate optical channels, equivalent to 12 separate fibers in an (extremely) tight bundle. Each of these cores supports a separate DWDM signal. Each of the many wavelengths within each of these twelve channels requires its own separate laser (or at least its own separate modulator.) The new innovation is the MCF with dramatically reduced cross-talk, and the optics that permit the 12 DWDM signals to be injected into the MCF and extracted from it. There appears to be no new innovation in the DWDM itself.
Truss as a tension/compression structure
The truss appears to use only rods. Rods provide both tension and compression. A well-designed truss can use rods for compression, and (much lighter) wires for tension. In this application, you can use carbon-composite rods for the compression members and truly light-weight fibers for the tension members. I suspect that the thinnest available Kevlar fibers will suffice for the tension members of the truss. So: a triangular truss would have three long composite rods, Nx3 very short rods, and 2Nx3 fibers. Or, if you are brave, Nx3 fibers. But this application does not need a symmetric triangular truss, because the forces in the three dimensions are not symmetric.
The trick here is to ensure that the rods and fibers have roughly the same coefficients of expansion (change in length with temperature.) Increased distance between the rods will add weight but will reduce any warpage. Acceptable truss warpage, in turn, depends on how the truss orientation affects the mission parameters.
If all we need is a truss that points (approximately) "up", then at the extreme we need a single carbon rod. If we need more rigidity, then we need two or three rods. It is not clear why we need more than two: gravity works, after all.
Consider a two-dimensional truss of (say) one metre: two 1-metre rods, separated at (say) 20-cm intervals by 100-mm rods. The rods are in turn connected by diagonal Kevlar fibres at each junction. This two-dimensional structure is in turn stabilized at its midpoint by triangular outriggers in the third dimension, connected to each end of the truss with more Kevlar fibres.
The design appears to include a rubber band from the titanium rod to the truss. Why? If you actually intend to use a real rubber band, you have a problem: it will become brittle at low temperature. Remember Challenger's O-rings.
e-beam lithography has lots of benefits, but it is extremely slow. This technique increases its speed by a factor of 10,000 by using parallel beams.
The article points out one advantage: you no longer need to create a mask, and the cost of the mask drives the cost of the photolithographic process. Masks have become extremely challenging as feature sizes dropped below the wavelength of the light used for the photolithography: the mask is no longer a simple reproduction of the shape of the desired result. Rather, the mask (or rather the masks, since "double-patterning" is needed) have funny shapes that cause the light to interact with the surface based on the rules of optics, and not all desired results have corresponding masks.
But there is another consequence that is even more important: a mask is so expensive that you must produce a huge number of parts to amortize the mask cost. For E-beam, you can specify the exact result you want, and the beam can produce that exact result. But even more importantly, there is essentially no penalty for creating multiple different kinds of devices on the same wafer. This completely changes the economics for creating experimental devices and for small production runs of ASICs, and it allows the industry to re-open the idea of wafer-scale integration.
E-beam failed because it was too slow, and it lost ground to photolithography as the wafers got bigger and the feature sizes got smaller. But suddenly we have parallel e-beams, which conceptually increase the speed by the number of parallel beams (currently 10,000.) But if 10,000 now, why not 1,000,000 in the future? We get to the point where a specialty fab could produce a single instance of an experimental custom device for not too much extra money, and suddenly we can create a small quantity of ASICs for $100 apiece.
Yes, 688 Mjoule
That's remarkably close. I computed 32 x 30 = 960 MJ of energy in the powder, and you computed 688 MJ in the shell based on MV^2. These are in fair agreement given that we were probably not working with exactly the same gun.
Not so much
A WWII battleship's 16" guns used approximately 300 kg of propellant, at approximately 3.2 MJ/kg. So the gun imparted about 30 times the energy imparted by this railgun. We have a way to go yet, but not too bad for a $21M device.
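The arithmetic, using the 32 MJ railgun figure implied by the "32 x 30 = 960" sum earlier in the thread:

```python
propellant_kg = 300   # 16" gun powder charge, approximate
mj_per_kg = 3.2       # propellant energy density, approximate

gun_mj = propellant_kg * mj_per_kg   # energy in the powder charge
railgun_mj = 32                      # ~32 MJ class railgun

print(gun_mj)               # 960.0
print(gun_mj / railgun_mj)  # 30.0
```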
It's Plagiarism, because they did not attribute the source. That's unethical, but not illegal.
However, it's also a violation of copyright law. Wikipedia is copyrighted, and you are not permitted to copy the material except under the terms of its license. Those terms are very liberal, but they do require attribution. Wikipedia could choose to sue the Vatican, and if they do, they will win.
Temperature in vacuum
In addition to worrying about hotspots, you also need to remember that convective cooling does not work very well at reduced pressure. Fortunately, in this case the total heat is bounded because the motor quits after a few seconds. I do hope you continued to monitor the temperature for at least a minute after the end of the burn, since it takes a few seconds at least for the heat to migrate from the inner wall of the chamber.
amount of ice
1 km^3 of ice is approximately 1 Gt, so 100 Gt of ice is about 100 km^3 of ice. That's a 10 km x 10 km area covered 1 km deep, or a 100 km x 100 km area covered 10 m deep, or a 1000 km x 1000 km area covered 10 cm deep.
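The equivalence is easy to check: each tenfold increase in the side of the square divides the depth by a hundred.

```python
volume_km3 = 100   # 100 Gt of ice at roughly 1 Gt per km^3

for side_km in (10, 100, 1000):
    depth_m = volume_km3 / side_km ** 2 * 1000
    print(f"{side_km} km x {side_km} km covered {depth_m} m deep")
```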
For Americans, that's the District of Columbia buried under more than 500 metres of ice.
In 1987, three neutrino detectors in different countries each detected a burst of neutrinos at 7:38 UTC on the same day. About three hours later, multiple telescopes observed a new supernova at a location now computed to be 168,000 light years away. Theoretical analysis says that neutrinos are generated in a core collapse and are not delayed as they leave the core, while light is only emitted when the shockwave from the collapse reaches the surface of the collapsing star, about 3 hours later.
These observations are consistent with neutrinos moving at the speed of light. They are not consistent with neutrinos moving faster than the speed of light. A baseline of 168,000 light years is many orders of magnitude longer than the baseline from CERN to Gran Sasso.
The U.S. suppressed global warming in the same way in the 1970's. The problem is that the sulfur that created the aerosols that cause the cooling also causes acid rain. This is really bad for lakes and ponds (e.g., in the Northeast) that are not naturally buffered, so we started scrubbing the sulfur out of the stack gases, which increased global warming.
Of course, if you live where the lakes and ponds ARE buffered (i.e., in a limestone area such as the southeast) then the extra sulfur is good for the crops.