82 posts • joined Monday 3rd December 2007 20:34 GMT
Good point. Tesla is making money on selling pollution credits. They wouldn't have been profitable last quarter without that revenue.
Or, I should say, that government-forced money transfer that represents the worst in policymaking idiocy. And everyone who's looked at it admits that it's somewhere between deeply flawed and just plain stupid. Yet, because it doesn't cost the government directly, just forces private companies to shuffle money, there's no motivation to fix it.
On the other hand, the Tesla loan was a justifiable investment in a technology that could be economically viable with reasonable assumptions about energy prices and expected advances. It's fair to disagree, but most people think it's reasonable.
Solyndra proves that the U.S. government can pick an obviously wrong technology even when they invest in the right concept. Even if you don't philosophically agree with this type of government investment, you shouldn't be so stuck in saying "it's always wrong" that you lose the ability to say "this is the wrong technology".
I think that Tesla has done an amazing job. I expected that they would be able to build a working powertrain. I never expected that they would be able to design and manufacture a mass production vehicle without a decade of expensive experience.
From the deposition, AF Holdings paid $0 for the copyright they were suing over.
Now this was likely a bit of a lie. They transferred the copyright ownership to a trust in Nevis (an overseas tax and identity haven), and transferring something of value would have raised tax issues.
But still, they actually stated in a deposition that it was a worthless copyright. The settlement letters wanted $3K right away, or more later, and stated that up to $150K plus legal fees would be the cost in court. For a copyright that cost them nothing.
It's likely that the value of the movie was actually pretty low. Apparently it was a very low budget movie, and they have limited shelf life. Fans of the specific actors buy copies in the few weeks after release, and almost no copies are sold later. So after making a single pressing of a few thousand, the right to make additional copies has little commercial value.
Re: But you can already get..... at a stupid cost!
I was lucky enough to meet one of the Biolite stove developers at Design West / Embedded Systems Conference two weeks ago.
He happened to be the "booth babe" nearest the stove, a few UV purifiers and other interesting gear. I asked a question, and was pleasantly surprised at a knowledgeable reply. That led to a 30 minute conversation about the capabilities and design.
The stove has a microcontroller to manage the power, optimizing the peak power extraction from the thermal generator, and setting the priority of running the fan, recharging the single A123 cell, and charging a USB device.
Power extraction from the thermal generator is much like a solar cell. Draw too little current and you give up some of the output potential. Draw too much current and the voltage sags more than the extra current gains.
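That peak-power tracking is typically done with a perturb-and-observe loop. A minimal sketch in Python, against a simulated generator (the 5 V / 2 ohm source model and the step sizes are invented for illustration, not Biolite's actual firmware):

```python
# Simulated thermoelectric generator: 5 V open-circuit, 2 ohm internal
# resistance. Power into a resistive load peaks at the matched load.
V_OC, R_INT = 5.0, 2.0

def teg_power(r_load):
    current = V_OC / (R_INT + r_load)
    return current * current * r_load

def perturb_and_observe(power_fn, x=0.5, step=0.1, iterations=200):
    """Nudge the operating point; reverse (and shrink) the nudge
    whenever the last move decreased the extracted power."""
    last = power_fn(x)
    for _ in range(iterations):
        x = max(x + step, 0.01)
        power = power_fn(x)
        if power < last:
            step = -step * 0.8
        last = power
    return x

r_best = perturb_and_observe(teg_power)  # settles near R_INT = 2 ohms
```

The real controller perturbs a switching converter's duty cycle rather than a load resistance, but the hunt-for-the-peak logic is the same one used for solar MPPT, which is why the solar-cell comparison fits.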
The key to a stove like this is running the fan. The fan cools one side of the thermal generator, then flows around the outside of the combustion chamber to keep it cool, then feeds the flame with now-quite-hot air.
Because it uses forced air, small stuff like twigs burns intensely and completely, whereas an open twig fire will flare up and die down, always burning inefficiently. The drawback is that this is a tiny stove, so you can never move up to bigger stuff. Unless you carry wood pellets with you, you need to constantly feed it more twigs. (That's probably its biggest problem: it's too heavy to be a backpacking stove, and too attention-seeking to be a casual camping stove.)
The thermal generator does have very low efficiency, in part to keep it reasonably light. But efficiency isn't a huge problem, especially if you treat it as a stove first. There is plenty of source heat and the 'waste' heat still goes into cooking. Now if you are treating it as an electric power source... uhggh. It will use just as much fuel (and attention) as when cooking, and its priority is running the fan and recharging the internal battery before outputting external power.
Porn is a poor fallback from spying
I notice that the "stolen laptop" side of the story isn't mentioned either.
It's likely that the laptop was purchased by the company for his use and got a NASA property tracking tag to make it easy to carry in daily. Otherwise he would need a form every time he took it out, stating who owned it and why it was being removed.
When he was terminated, the laptop was worth less than the cost to clean it, test it and reload software. If he had been fired, the company might want it back on principle. But since he was terminated because of political pressure, the company probably told him to just keep it.
Once the laptop was his, he had a few weeks with nothing to do and a high bandwidth connection. Even if it's not that difficult to bypass the Great Firewall, it's still easier and faster to gather your collection in the U.S. Or perhaps it really was a very modest collection, or just an incidental one. One where the FBI could quickly compare it to the original source material and start backing away from their blunder.
I'm surprised they didn't get him to plead guilty to an additional charge of jaywalking.
There is an easy solution: weekly updates. That way you get to multiple-count your users, even if they never open the app.
You don't think that's a fair way to count users? You must not know the standards of this industry.
I used to think that all employer provided snacks and meals were tax-dodging perks.
Then I saw an excellent example of how meals can be "for the employers convenience".
I got to see the production village of a Formula One race. They flew in a cafeteria. On a 747. Including chefs and gourmet food (although those didn't fly on the cargo jets). Hugely expensive. It probably cost them $100 per meal. Perhaps even $500.
The alternative was having the crew go out and buy their own meals locally. But the area surrounding a F1 race is a full time traffic jam that would result in a 6 hour lunch and an 8 hour dinner, just when you needed them to be working 16 hour days.
It suddenly made sense why sometimes meals are not a "perk", and why taxing on the meal cost would sometimes be absurd.
It's as if a million voices cried out in terror and then the updates went silent...
Most of the people from Prenda Law had already testified, by submitting sworn statements as evidence. One of the lawyers was deposed for a whole day.
The substance of most of those statements has proven to be deliberately misleading or outright false.
This is invoking the fifth after having initially testified, not declining to make a statement.
Re: Nothing to do with chips.
Calling it "pure software" is misunderstanding the point.
It's only fast and energy efficient enough to use because the software is written to run on the GPU.
While part of the value is the details of how the image is classified, the point of the demo -- and the reason for showcasing it at GTC -- is that by structuring it to take advantage of the GPU, it's now computationally feasible. You can do the image classification and matching before the skirt goes out of style or your phone runs out of battery.
This is much like High Dynamic Range photography. The basic idea is simple. Take a picture, analyze it for brightness levels, use that info to take one or two additional pictures to fill in bright or dark areas. But behind the simple idea is a huge amount of computation. Making it worse is that because the first stage is so slow, the second picture won't line up with the first. So now you have to do image registration, which is even more work. But you still end up with crap, because any motion or changes in the scene cause disturbing artifacts.
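The merge step behind that "simple idea" can be sketched in a few lines. A toy grayscale version (the hat-shaped weighting is one common choice; real pipelines add the registration and deghosting, which is where the computation goes):

```python
def fuse_exposures(exposures):
    """Naive HDR merge: per-pixel weighted average over several
    exposures, trusting mid-tone pixels and discounting clipped ones."""
    def weight(v):
        # hat function: peaks at mid-gray (0.5), ~0 at pure black/white
        return 1.0 - abs(v - 0.5) * 2.0 + 1e-6
    fused = []
    for pixels in zip(*exposures):  # same pixel across all frames
        total = sum(weight(p) for p in pixels)
        fused.append(sum(weight(p) * p for p in pixels) / total)
    return fused

dark   = [0.02, 0.10, 0.45]  # underexposed frame (shadows crushed)
bright = [0.30, 0.70, 0.99]  # overexposed frame (highlights clipped)
result = fuse_exposures([dark, bright])
# the last pixel leans on the dark frame's 0.45, since 0.99 is clipped
```

Even this toy loop is per-pixel work over every frame, which is exactly the kind of embarrassingly parallel job a GPU eats for breakfast.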
By being very clever with the GPU on the Tegra, including configuring the hardware to put GPU compute into the camera image pipeline, NVIDIA can now do HDR in real time. Not only can HDR now be used for video (jaw-dropping amazing!), by being fast it avoids almost all of the disturbing artifacts from the slow approach.
Sometimes speed and efficiency makes all the difference.
The older system was CARMA, for CUDA on ARM Architecture.
The announced development systems are internally named Kayla, and the CARMA name is being dropped.
The two announcements are both "Kayla" devkits. The first is similar to the original CARMA, which had an MXM 3.0 GPU and Q7 processor module on a carrier board powered by a single DC power rail. Now, with the GPU updated to a Kepler class GPU, it's named the 'CUDA on ARM MXM devkit'.
The second system is a new mini-ITX carrier board that supports a Q7 processor module and has a PCIe slot. It uses an ATX power supply, and can run much more power hungry GPUs. Although strictly speaking it's "Kayla" when it uses the same new GPU as the MXM module version.
The original devkit was developed around an existing Quadro 1000m MXM module with a GF108 Fermi class GPU. The GPU has 3 SMs, or 96 CUDA cores. The Q1000m has 2GB local memory. Only a portion of that can be mapped into the ARM's address space at one time.
The new devkit uses a Kepler class GPU with 2 SMX units ("SM35") for a total of 384 CUDA cores. Right now it's configured with 1GB of GDDR5 memory.
For both, the CPU module remains based on the Tegra 3. Neither of the newly announced Tegra 4 products (Tegra 4 and Tegra 4i) has a PCIe interface. That's why this is a "close development model" rather than exactly the same as Logan.
BTW, the CPU module has 2GB of low power DDR2, and the GPU has 2GB of local memory. While the total is 4GB, only about 3GB is directly addressable. 2GB is pretty much the maximum main memory configuration of ARMv7, due to some sparse utilization of the memory map. Plus you have about 1GB of address space into which you can map PCI devices.
The A15 has a PAE feature to add a few address bits, but it's new, not really used and doesn't help most ARM use cases. The real fix for the cramped address space is ARMv8.
I'm with one of the previous posters: Why is eBay making a story out of this? There is likely some underlying reason.
Perhaps they will be introducing a competing advertising service.
My take is that eBay has been paying for blanket coverage on search results, regardless of whether they could offer anything of benefit. Until a few months ago you could search for "broken leg" and be offered "get a broken leg on eBay".
An early version of Burning Man?
The 'giant party' theory resonates with me.
Many camps at Burning Man put months of effort into art pieces that will be burnt or dismantled at the end. A handful of people are practically full time burners, working only enough at a paying job to fund what they need for their Burning Man art exhibit.
The result is an amazing experience that entices people to return year after year and build ever-larger art works. (Or, for others, to have another go at a week long live-in clothing optional rave.)
That same drive must have existed back then: who can resist a giant annual party with a pseudo-religious justification?
To make it clear, Samsung provided desperately needed money to Sharp.
One of the things they got in return was a relatively small number of shares, something that Sharp had to report. There are almost certainly other terms and benefits to Samsung that don't need to be immediately reported.
This is far different than buying 3% of the company's shares on a stock exchange. That would have gotten Samsung almost no control or goodwill, nor would it have eased Sharp's tight finances.
As far as Apple just going out and building a display factory rather than investing in Sharp... yes, it would be cheaper. If they started now, they might have a working production line by 2015, and make industry leading displays by 2016 or 2017. If they didn't care about the cost, they might be able to do it a year faster. In the meantime they still require displays, and every potential new supplier can see their future.
I was about to report that I didn't agree with the booth babes comment, then I recalled having stopped, actually stopped, trying not to stare at a black leather thing that couldn't quite qualify as a mini-skirt.
But overall there were relatively few booth staff people that looked as if they were out of place for a tech show. I enjoy an occasional booth babe, and they serve as a quick indication of the company's technical depth.
24*6*500 attacks a month!
Someone must be taking Sunday off to get that nice round number.
Here's an experiment. Go to an old-fashioned ISP and get a single static IP address. Put a packet sniffer/wireshark/whatever on it. You'll get a constant stream of port probes. Very likely in the range of hundreds per minute.
Are they attacking you? Yes. Is someone targeting you? No. It's just the constant noise of botnets and like trying to expand.
Now put up a website. You'll get a smaller number of attackers, trying a broad range of attacks. Again it might feel targeted, but it's all just automated.
Now if you are a high profile target, there are undoubtedly some targeted attacks as part of that barrage. But 144K per month is way too high an estimate.
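You don't even need a full sniffer to see this: counting dropped inbound probes from a firewall log is enough. A sketch (the iptables-style log lines here are made up for illustration; real formats vary by distro and ruleset):

```python
import re
from collections import Counter

# Hypothetical iptables-style log excerpt; real logs differ in detail.
LOG = """\
kernel: DROP SRC=203.0.113.7 DST=198.51.100.2 PROTO=TCP DPT=22
kernel: DROP SRC=203.0.113.7 DST=198.51.100.2 PROTO=TCP DPT=23
kernel: DROP SRC=192.0.2.99 DST=198.51.100.2 PROTO=TCP DPT=3389
"""

def probes_by_source(log_text):
    """Tally dropped inbound connection attempts per source address."""
    return Counter(re.findall(r"SRC=(\S+)", log_text))

counts = probes_by_source(LOG)
# counts["203.0.113.7"] == 2: one host walking through your ports
```

Run against a real log, the top of that counter is botnet noise, not evidence that anyone is targeting you.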
"In any case, the guy was driving legally"
If you look at the speed logs provided by Tesla, there were numerous excursions over 80MPH. The only period at a continuous speed was a bit over 60MPH, not 54MPH (or 45MPH) as claimed in the story.
The reporter claimed in the second story that the speed difference must have been because of the winter tire package with 19" wheels instead of 21". But the tire diameter was exactly the same between the two.
The reporter claimed that he charged for 58 minutes, while the logs show about 47 minutes. The story claimed 25% longer charging than actually occurred -- that's a major difference.
Those are very specific numbers in the story. Writers use specific numbers to convey careful, accurate reporting. In this story they were either made up, or deliberately false. This was clearly not a fair story.
There was a question about why this always seems to happen in Russia.
It's because of its proximity to the Polar Aperture.
Grab a globe -- the kind that spins. Look for the part They don't want you to see. Yes. Right there. Under the pivot point, hidden by the brass disk. (If you have an inflatable globe, it's where the air fill hole is.) The north one is the Polar Aperture, where the flying saucers land and come from when they visit the hollow sphere that is earth. Every century or so there is a bad landing and Russia gets hit.
Of course The Sexiest Man Alive, Kim Jong Un, will claim this is one of his. But you now know the truth.
Re: They're both full of $#!T
Broder used very specific numbers in his story, "58 minutes" when it was actually 47 minutes, "54 MPH" when it actually was 60MPH, etc.
All of the incorrect numbers were to the detriment of Tesla. That makes them unlikely to be innocent mistakes.
Yahoo message boards might be semi-useful, if they weren't spam sewers
Go to any finance message board and you'll see that it is filled with newsletter offers and dozens of sock puppet follow-ups. Post something relevant and it's down-voted.
It has been that way for years, and they show no interest in fixing the problems. And this is one of their most valuable "properties"...
This is likely all just noise to increase the bid.
But I have to wonder if Dell, the man, understands the deal he is making.
Rather than a bunch of random shareholders, which he can basically ignore, he will be directly controlled by people coming by each week for the vig. That means selling off whichever body part is worth money, right now. Wall Street isn't known for having a long term perspective, but raising private money usually comes with terms that are less about sharing risk and reward, and more about them owning it all if anything goes slightly wrong.
Re: Yes but
I don't agree with "refill in five minutes or less".
If you can charge at home (admittedly ruling out many city and apartment dwellers), you just change your daily routine to plugging in when you get home for the evening. With overnight charging you start out each day with a full charge. That can be less hassle than going to a special store and taking five or ten minutes when the fuel tank runs low. After a while it seems like way less inconvenience than standing in the cold, risking dripping gas on your hands, clothes or shoes.
That doesn't address long trips, but for most drivers those are rare (a few times a year) and never unexpected.
Those that are saying "it works with Windows, therefore It Works" are behind the times.
Microsoft used to have that attitude. But about a dozen years ago they very suddenly understood the flaw with that approach. Most of the things that "worked with Windows" and didn't work with Linux, actually didn't work.
They just happened to not crash immediately with Windows 95 and perhaps Windows 98.
Once Microsoft put a priority on an OS that didn't require a daily reboot, and tried to move to improved APIs (e.g. 32-bit, eliminating the need to rely on undocumented state in registers and variables), they found that almost every hardware issue that Linux faced also bit them when they tried to introduce an updated OS.
Almost certainly this Linux-triggered bug would have otherwise lain hidden, waiting to bite them with Windows++.
Oh, haven't we seen this before
I'm having a pre-deja-vu moment.
In three or four months we'll see this group/product/service severely hacked, with all sales and customer data taken.
Many people don't seem to understand the operation of the nanocells.
A typical device is the one sold by Verizon. You purchase the box for $200 (although some people have gotten discounts) and install it on your internet connection. The box provides cell service for anyone in the area, supposedly prioritizing your phone calls (really reserving one call slot, out of eight or more).
That means your internet connection isn't just carrying your own calls and data, it's potentially carrying your neighbors' calls and data as well. It's actually more likely than most people estimate, since if your cell coverage is poor, so is everyone else's in the neighborhood.
There is an easy solution, but it's one that is strongly resisted by the cell companies: you should get credit for the calls carried on your cell site. You bought the nanocell, and paid for the data transport. They would compensate another carrier for calls carried. But that would cut into their profits, as they currently get the extra coverage at no cost. And perhaps even a modest profit for selling the nanocell for $200.
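The mechanics of such a credit would be trivial, since the carrier already meters every call through the box. A sketch with invented numbers (the $0.02/minute rate is purely illustrative, not any carrier's actual tariff):

```python
CREDIT_PER_MINUTE = 0.02  # USD; an invented compensation rate

def monthly_credit(carried_minutes, own_minutes):
    """Credit the nanocell owner only for the minutes the box
    carried on behalf of other subscribers over the owner's internet."""
    others = max(carried_minutes - own_minutes, 0)
    return round(others * CREDIT_PER_MINUTE, 2)

# A busy box: 900 total minutes carried, 300 of them the owner's own.
credit = monthly_credit(900, 300)  # 600 minutes * $0.02 = $12.00
```

The point is that the accounting is a one-liner; the resistance is commercial, not technical.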
"...his defense counsel would have been free to recommend a sentence of probation. Ultimately, any sentence imposed would have been up to the judge."
Judges rarely make a critical review of a plea bargain agreement. The defense isn't allowed to recommend a sentence of probation in court -- the judge only wants a final agreement. Those statements are trying to shift responsibility. The prosecutor's office has the sole ability to set the terms of the deal.
And, to address an earlier comment, a plea bargain all but precludes a later appeal.
I can see how this was a non-choice. Accepting a felony conviction would destroy his credibility and effectiveness -- essentially his life-long goals and "career". He didn't have enough money for a strong defense in a trial where he faced a 50 year maximum sentence (30 years was "typical", not maximum).
I've read that the room the laptop was found in wasn't regularly locked, and that a homeless man stored his possessions there. If that's the case, why was the prosecutor treating access as a felony-level offense?
A bit of misunderstood info about Token Ring above.
IBM used to market Token Ring as more efficient and more reliable than Ethernet. Their marketing talking points included a claim that Ethernet had a maximum of 37% utilization of maximum capacity. This was convenient when they were flogging 4Mbps TR against 10Mbps Ethernet.
They based this fraction on a flawed paper that modeled Ethernet as a CSMA network, ignoring the "/CD" part and the modified pseudo-exponential backoff. IBM knew that this was bogus, and Ethernet users were seeing 98% utilization in real life, but it didn't stop IBM from loudly spreading FUD.
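For reference, the mechanism IBM's model ignored is simple. A sketch of classic Ethernet's truncated binary exponential backoff:

```python
import random

def backoff_slots(collisions, max_exponent=10):
    """After the n-th collision on a frame, classic Ethernet waits a
    random number of slot times in 0 .. 2^min(n, 10) - 1 before
    retrying (and gives up entirely after 16 attempts)."""
    k = min(collisions, max_exponent)
    return random.randrange(2 ** k)

# Colliding stations rapidly de-synchronize: the expected wait
# doubles with each collision, so repeat collisions become unlikely.
waits = [backoff_slots(n) for n in range(1, 16)]
```

It's exactly this doubling that keeps a loaded Ethernet stable, which is why a model that drops it predicts absurdly low utilization.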
A second claim was that Ethernet was undependable and unreliable. It actually _relied_ on _collisions!_ to work, and the spec said that you could _throw away_ packets! Horrors! And you could never guarantee that a packet would be sent in a bounded period of time. But IBM failed to mention that it was just a difference in reliability and delay profiles. Losing a token in a TR network could be pretty common, and the result was massive disruption and delay. Even if you didn't lose the token, the "bounded latency" had such a high bound that it was mostly useless.
I'll tie this into the current discussion: there is a close analogy between Ethernet and TCP/IP. Both were cheap over-provisioned packet-switched networks that only promised best-effort packet delivery. They supported high numbers of nodes, and had seemingly-simple access and flow control rules that turned out to be surprisingly stable when scaled up.
OSI was a late-coming spoiler attempt
Don't put OSI into the same category as TCP/IP.
OSI and ATM were both primarily "spoiler" technologies. They were concocted and promoted by organizations that had fallen far behind TCP/IP and Ethernet. The goal was not to introduce a better designed network, but rather to press the reset button and have everyone start from scratch.
The OSI "layer" model remains only to classify protocols and describe products. That doesn't mean it ever helped design anything. We should remember the rest for what it was: an attempt to do evil by delaying progress.
ATM wasn't quite as bad. It was promoted by people that really did believe that the future was all about centralized control from central offices connecting you to centralized computers for which you would be billed from a central billing service. You would dial up ("establish a circuit") an information service such as Compuserve, AOL or the Phone Company. That circuit would stay connected for the whole conversation ("session") giving you fixed bandwidth billed in 6 second periods.
We are very fortunate that neither became the wide-area networking standard.
(I was fortunate enough to be at MIT in 1983, and experience the extraordinary as a normal occurrence.)
He just wants to meet "The sexiest man alive"
I hope they wear warm clothes, because it's time for the annual treaty. The one that provides a half million tons of fuel oil in exchange for a nuke treaty that will be broken in the spring. I'm pretty sure that the newspapers have a form story where they just update the year and fill in the details of saber rattling over the past few months.
Putting a satellite into a stable orbit is not "a pretty hard thing to do" if you have a working rocket. Putting it into the exact orbit you want is challenging.
They pretty much said "watch this" then "nailed it". ("We _meant_ for that to happen.")
I doubt that the satellite is slowly trying to stabilize itself. Satellites that are expected to have long operational lives have flywheels (momentum/reaction wheels) to conserve maneuvering fuel. They attempt to use coils working against the earth's (weak) magnetic field if the wheels build up too much speed, falling back to conventional attitude jets. But all of this is complicated and difficult to get working, so short-lived satellites use only attitude jets. Even the smallest jets should be able to stabilize the satellite in minutes.
It's more useful to think of them using the phase change than the frequency.
The grid tries really hard to keep the frequency at 60.000Hz (or 50Hz, for the countries that are a little slower).
If the load increases, the phase lags. This indicates to the power plant that they need to throw another shovel full of coal onto the fire and let a little more steam into the turbine.
It sounds easy to record that phase difference and match the pattern, right?
Except that there isn't one power plant. The whole point of a grid is there are thousands connected together. So you have thousands of sources trying to push the phase a little faster. Each substation is seeing a different mix of the phase variations. All mixed together, with extra noise added by local loads being switched on and off.
I can see how you might be able to show a proof of concept that invalidates a recording (but not authenticates its veracity) made nearby, within the same substation service area.
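The measurement itself is the easy part: the phase of the hum falls out of a one-bin DFT at the mains frequency. A self-contained sketch with a synthetic signal (the real difficulty, as above, is matching that phase against a noisy grid-wide reference):

```python
import math

def hum_phase(samples, rate, hum_hz=60.0):
    """One-bin DFT at the mains frequency: correlate the buffer
    against cos/sin at 60 Hz and take the angle of the result."""
    re = im = 0.0
    for n, s in enumerate(samples):
        angle = 2 * math.pi * hum_hz * n / rate
        re += s * math.cos(angle)
        im -= s * math.sin(angle)
    return math.atan2(im, re)

# Synthetic check: a faint 60 Hz hum at a known phase (1.0 radian),
# buried under a much louder 1 kHz tone. One full second of samples
# keeps the 60 Hz and 1 kHz bins exactly orthogonal.
rate = 8000
buf = [0.05 * math.cos(2 * math.pi * 60 * n / rate + 1.0)
       + 0.5 * math.sin(2 * math.pi * 1000 * n / rate)
       for n in range(rate)]
phase = hum_phase(buf, rate)  # recovers ~1.0 radian despite the tone
```

With a clean synthetic signal the phase pops right out; the comment's objection is that real recordings mix thousands of generators, local load noise, and recorder non-linearity into that one number.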
The next challenge is the audio recording. Today that's largely digital. Analog audio is hard to do well on digital chips. There is quite a bit of non-linearity. Some of the distortion results in phase changes with sound level. The audible effect is more pronounced at high frequencies, but it will overwhelm the minuscule phase difference of the power line hum. Even the sample clock of the A/D converter will have enough jitter, correlated with the processor workload and other power draws, to overwhelm any otherwise detectable hum phase pattern.
Perhaps that's why they are publicizing this now. It's a technique that sounded promising, they invested a lot of effort, only to have progress render it completely useless. They might as well recover whatever value they can by dissuading people from trying to forge recordings.
Dropping 386 support impacts zero users.
I initially used Linux ('MCC Interim') with a 386, but a bit over two decades ago (!) switched to a 486.
I think I still have that 486 around for sentimental reasons.
That 386 has no chance of running a modern Linux kernel. It had too little memory. You would have to strip the kernel down to uselessness to get it to load at all, then there would still be too little space for buffers.
"or maybe that's a dick move too far for facebook?"
We'll take that as a rhetorical question...
This is a believable story. I suspect that it's true only in a limited sense, but the key elements are correct.
The performance profile of Apple's A6 chip suggests that it's an Apple-designed core, rather than a standard core licensed from ARM. This has many implications.
If they are an architectural licensee of ARM, they have a big internal investment but a very modest per-chip cost. Switching to the ARM instruction set with their own processor could save Apple substantial money.
Using their own processor makes it more difficult to directly compare performance. This has served them well in the mobile phone business. They have claimed high performance without standing behind actual performance numbers that can be compared against other devices. Using their own non-x86 processor would return them to the bliss of a decade ago, where Apple faithful could say "the clock may be slower, but thanks to the instruction set it's actually faster". (Which happened right up until the switch to the actually-much-faster x86.)
They already have a design process putting their own chips into mobile devices. Having a different design cycle around x86 chips from Intel duplicates effort. (I don't actually believe this, as the same team probably can't do both. And once you have a working design process, you shouldn't break it up. But eliminating duplication sounds good to management.)
The only reason people look there is that Verizon programs their phones so that every function key and mistaken key press takes you there. And perhaps buys something you didn't intend to.
They could have made it an organized, trusted marketplace. Instead it felt like a locked-in scam from the beginning.
I don't see an appeal as likely.
This was a minor legal skirmish, not the main battle.
Apple was trying to get the courts to rule that Motogoogle must offer a Fair and Reasonable rate, rather than have to enter negotiations. Their legal approach was to argue for "specific performance" on contracts between Motorola and the standards organization -- contracts that Apple was not a party to.
There were several flaws with this argument. The biggest one was that there was a negotiation and arbitration process indirectly included as part of those contracts, and Apple didn't even attempt to use it. Apple wanted the court to ignore that part, and even go so far as to set the rate.
"With prejudice" just means they'll need to make a token effort at following the negotiation process before refiling a slightly different lawsuit. Which (I believe) they have already started. The sole impact is that they lost this lawsuit, and may be held responsible for court costs or even fees.
Why was this so important to Apple (and also to Microsoft)? Because they had been using the patents without being part of the patent pool (the way essentially all other industry players are) or paying royalties. And any arbitration would certainly include their failure to previously pay as a factor in setting the rate. If they could get the courts to instead force a rate that matched what the other members of the standard organization were paying (an "insider" rate), they would pay much less. Oh, and wouldn't risk an injunction for brazenly selling products incorporating the patents without having a license.
Almost all other potential licensees are in the patent pool. Apple is not part of the patent pool, and doesn't want to be. Nor are they willing to generally cross-license.
Being part of the patent pool is like going to a potluck party. Everyone brings a dish. There is a mechanism for payments if someone brings, say, a single cupcake. But with reasonable players, the valuations often work out so that no net payments are made. Now Apple is coming in as a party crasher. They've eaten half a plate of food, covered with ketchup they brought. When someone asks why they aren't sharing the ketchup, they say the ketchup costs $30 per serving and offer $1 for the food.
An injunction? That should give them a chuckle
An injunction might slow a legitimate company, but these people have been flouting the do-not-call list for years. I'm certain that they have hundreds of shell companies that are owned by other shells. The FTC needs to go after the credit card banks and telecom carriers simultaneously. Those are the "legitimate" fronts that allow this to continue.
Curiously, only a few people above demonstrated knowledge of basic physics.
You can draw water only about 20 feet up into a pump. In theory a little under 10 meters, but practical pumps can't draw a hard vacuum and easily cavitate. Because diesel is a little less dense than water, you can draw 15% further, perhaps 4 fathoms, but that's a minor effect. We'll call that two office floors, not three.
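The limit comes straight from atmospheric pressure balancing the weight of the liquid column, h = P_atm / (rho * g). A quick check of the numbers above:

```python
G = 9.81          # m/s^2, standard gravity
P_ATM = 101325.0  # Pa, sea-level atmosphere

def max_suction_lift_m(density_kg_m3):
    """Theoretical suction lift: the height at which the liquid
    column's weight balances atmospheric pressure (a perfect pump,
    no cavitation -- real pumps fall well short of this)."""
    return P_ATM / (density_kg_m3 * G)

water  = max_suction_lift_m(1000.0)  # ~10.3 m (about 34 ft) in theory
diesel = max_suction_lift_m(850.0)   # ~12.2 m: lighter liquid, more lift
```

With diesel at roughly 0.85 the density of water, the theoretical gain is in the 15-18% range, and either way it buys well under one extra floor once you subtract the practical losses.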
Most water wells are deeper than this and must use submersible pumps to push up rather than draw from the top. That's why the well casing is so large: so that a large-ish pump can be lowered down (plus to trap an occasional curious child).
The old fashioned hand pumps have a rod extending to a lift disk near the bottom of the well.
Pressurizing the basement tanks would allow pumping from the top. Although at that point you can go slightly further and just use the air compressor. But now you have a highly pressurized tank of fuel. It would have to be immensely strong for safety, and any tiny leak is a disaster. Far worse, if you use regular air the partial pressure of oxygen becomes a major problem. Even cold, the fuel will spontaneously decompose, ignite and explode.
The pictures make it clear that they are transferring the oil from the 55 gallon barrel to 5 gallon buckets. A pair of those is a reasonably effective load for a man to carry. Even in excellent shape, you won't get in many 17 floor trips before you call it a day.
Don't compare those profit numbers to other countries.
Those profit numbers are not the same as U.S. GAAP net income.
Think of them as roughly comparable to 'Hollywood Accounting' or old-style casino books. The books are re-worked so that only a trivial profit is shown. Cars, vacations and houses for executives are company expenses.
One HP division slashing at the legs of another HP division... internal competition like that is what keeps them nimble, dominating the market, and steadily increasing their stock price.
I've seen massive EM pulses.
They were big enough to split 1 meter diameter poles.
I've heard a theory that such EM fields can power localized time travel, given a specialized, no-longer-available vehicle with an exposed corrosion-resistant alloy skin. Although it has to be traveling at exactly 88 MPH when the pulse hits.
Yes, these massive pulses don't seem to cause the end of civilization as we know it. Specifically, they don't destroy equipment that is designed to withstand EM pulses. So you take out their microwave oven, depriving them of popcorn, but the equipment you would like to destroy likely continues to run.
Some of the posters are missing the point.
Microsoft agreed to present a browser choice as part of the remedy (or had it imposed on them, but that's unimportant). The alternative would have been higher fines or blocking their right to do business in the EU.
They then did not follow through with the remedy while (this is an important point) annually certifying to the court that they were complying.
This wasn't a trivial side issue, where checking would cost more than the fine. This was a Billion Dollars/Euros. Big 'B'. An army of lawyers was involved. Complying with the agreement would have required a medium-sized team inside Microsoft to implement. Not as a small part of their job, but as their primary focus.
"Mac mini is still the world's most energy-efficient desktop – at idle it consumes just 11 watts."
A Trimslice consumes under 2W at idle, ranging up to 6W under load. It's not quite as fast, but that's not part of the statement above.
Isn't this what component serial numbers and associated bar codes are for?
And, to keep it on-topic for Friday, is the proper phrase "rounds of plastic surgery"? Sure, a round might be a pair of hemispheres, but they could be more conically shaped.
Re: No idea how well this may work.
Yes, putting some of the waste heat into a gravel bed would work.
But it's back to the same problem: for efficiency you need either a massive plant or lots of time. In this case a massive gravel bed, which negates the energy density of liquified air.
Compressed air has horrible efficiency because of the dynamic range of pressure.
Using liquified air reduces the problem with varying pressure, but the thermodynamic efficiency is still very low.
If you compress the air to liquify it, you need to get rid of lots of low-grade heat. It's hard to extract the energy from that heat without making the compressor work harder.
You have the problem in reverse when you let the liquid boil -- you have to keep putting in heat. You might try to recover energy from the temperature difference, but you need either a very large plant or lots of time.
Besides the giant heat exchangers, you also need massive pressure tanks for this scheme. A failure will result in a cryogenic liquid spill. The cleanup will be easy, but most equipment it touches will need to be scrapped and the humans buried. Not everything that goes wrong can be fixed with a wrench.
"Or you could wait a week or two between charges"
Only a week or two? Yours must be quite small.
You can only put about 100 watts of PV panels on a car. You might be able to get a bit more power out if you park in the corner of the parking lot that has no shade, tilts to the south, manages to point all PV surfaces towards the sun, and is always at midday, during the summer.
Estimate that you get that power output for 4 hours a day. 6 to be generous. So we'll call it 500 watt-hours a day.
The smallest battery you can get in the Tesla Model S is 40 kWh, or 80 sunny days of charging. The large battery is 85 kWh. Counting seasonal variations in solar radiation, weather, and a bit of self-discharge, we'll just round up to a year.
It takes about 400 watt-hours to go a mile at highway speed. Some have managed 250 watt-hours in efficient vehicles. The Tesla Model S is custom designed to be quite efficient, and even they only claim 250 watt-hours with the smallest, lightest battery configuration. So you could get 1-2 miles per day from PV panels mounted on a car.
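The back-of-envelope numbers above can be run end to end; a sketch using the figures stated in the comment (panel wattage, sun hours, and consumption are all rough estimates, not measured values):

```python
# Solar-charging arithmetic for rooftop PV on a car, per the estimates above.
PANEL_WATTS = 100           # roughly the PV area you can fit on a car
SUN_HOURS_PER_DAY = 5       # between the 4-hour and generous 6-hour estimates
WH_PER_DAY = PANEL_WATTS * SUN_HOURS_PER_DAY    # ~500 Wh/day

SMALL_PACK_WH = 40_000      # smallest Tesla Model S pack, 40 kWh
days_to_fill = SMALL_PACK_WH / WH_PER_DAY       # ~80 sunny days

WH_PER_MILE_HIGHWAY = 400   # typical highway consumption
WH_PER_MILE_BEST = 250      # claimed for the most efficient configurations
miles_low = WH_PER_DAY / WH_PER_MILE_HIGHWAY    # ~1.25 miles/day
miles_high = WH_PER_DAY / WH_PER_MILE_BEST      # ~2 miles/day

print(f"{WH_PER_DAY} Wh/day -> {days_to_fill:.0f} sunny days to fill the pack, "
      f"{miles_low:.2f}-{miles_high:.2f} miles of range per day")
```

Even before discounting for weather and seasons, the output is a mile or two of driving per parked day, which is why "a week or two between charges" is optimistic.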
"Is this just a thinly disguised way of farming subsidies from some Californian feed in tariff system?"
Yes, and it's not just California.
Home-scale producers can inject power into the grid. They are paid for it at the full retail price. It's a major, high-value hidden subsidy.
The retail rate pays for the electric distribution system, maintenance, capital risk, pricing risk, etc. It's approximately double the wholesale rate. And even the wholesale rate is the wrong reference, since the distribution company can choose to buy at that rate, while being forced to buy home solar/wind excess.
Right now it's such a minor part of the grid that the subsidy isn't distorting the electric market. But the subsidy is distorting the solar PV cell market. It's encouraging the installation of solar panels in sub-optimal locations (cloudy locales, incorrectly angled roofs that don't face south, etc.) instead of high-value locations (remote locations with long, high-loss connections to the grid). In California you'll find PV installations in foggy San Francisco are more common than in sunny high-altitude rural areas.
A bit misleading to pick a badly written comment on the discussion part of a site, and make it look as if the site itself is run by illiterate fools.
Hmmm, pretty much completely wrong about the Google v. Oracle case. The jury didn't decide that Google infringed on Oracle's rights. The question posed to them was conditional: 'if APIs are a protected right, did Google use them'. The jury could only answer "yes".
But the judge still has to rule if APIs are protected by copyright. Long practice and precedent says that they are not. If they are not, the question and jury verdict is moot.
A minor factual correction.
The jury did find that Android did include copied code. Nine lines. Out of 15 million.
There were other claims, such as the similarity of simple comments describing functions. Pretty much "Takes X and returns an integer Y". There are many ways to word this idea, but only a few simple ways, thus the comments look similar. The jury agreed with Google that this was not evidence of literal copying.
The 9 lines of code aren't worth "up to $150,000" as a copyright violation. That's only if there was knowing, willful infringement. Google removed those lines as soon as it was pointed out to them. Given that Sun's copyright registration was flawed, the statutory damages would typically be $200.
In a different setting there would be a defense that the usual threshold for infringement starts somewhere around 12 to 20 non-trivial lines of code. But the time to say even a single sentence in this trial is costing far more than $200 in legal fees. Boies reportedly got one sentence into trying to make it an issue before the court, perhaps trying to claim a partial win. The judge made it clear that Oracle had completely lost and wasn't going to get an accounting for 9 lines out of 15 million.