How to leak information...unintentionally
IBM has SG&A expenses that are almost 3x R&D expenses.
Intel has SG&A expenses that are less than half their R&D expenses.
And you wonder why IBM is so screwed up?
And more recently, over the last five years IBM shares lost 21% of their value (even before inflation), whereas the NASDAQ gained 109%.
May I point out that over those same 5 years IBM bought back almost 20% of their shares, too? Without their stock buyback plan, the stock would have performed even more abysmally.
I know that stock buyback plans are popular with investors, but they're also an admission by management that they really don't know how to use the money they have on hand to drum up new business. In IBM's case, that little bit of financial engineering has become their bread and butter since the corporate finance guys booted the tech guys out of the company leadership in the early 80s. Is it any wonder the company has gone downhill since then?
Also, how the fuck can you "shut down every data centre in every location in the world"?
I work for a large chip company. We use a certain company's software to manage versioning in our chip databases. My company is also notoriously cheap. A decade or so back, the team managing the software was reduced to one person (call him 'Chip'). One of the chip designers (call him 'Dave'), a very senior designer and a big proponent of that crappy piece of software, was given the role of helping out when Chip went on holiday.
Now Chip was a nice enough guy and a wizard at the versioning software, but he wasn't too knowledgeable about remote protocols, and we were using the software on multiple sites under multiple OSes (HP-UX, Solaris, Linux, etc.) even though it wasn't designed to be networked (think rcs vs. git). I'm a chip designer, but I've also had IT administrator experience, so I had written a bunch of scripts that Chip used to mirror commands across the company. Having an intense dislike for our crappy commercial versioning software, I had never learned most of the more dangerous administrative commands, and the scripts had no safety checks. I had merely handled the networking aspects and told Chip that he was responsible for any checking beyond seeing that the remote command worked, since I had no idea what he needed to do. He never added checks, since he was too busy doing the work that had once been done by six people.
Of course, Chip went off on holiday and Dave took over. Dave innocently went to completely reset a private library in the versioning software (delete the old private library and all past versions, then recreate a blank one). Only the target he thought was his own private library was wildcarded, he was running the command in the networked environment, and the wildcard swept in all the production libraries for that silicon generation. And of course he did this at midnight before going to bed, since even for his smaller private library the procedure would take some time (I told you it was crappy software).
By early morning, about 800 chip designers around the globe were howling about being unable to work because all the production databases had disappeared or were in the process of disappearing. Of course, the Unix backups were only performed at one site because of the sheer volume of data (remember that I said the company was cheap?). And of course, being cheap, backups had not been tested during all the downsizing, and they weren't working. All told, the cutting had gone beyond the meat and well into the bone, and now the cost was a company's worth of highly paid designers being idle for two weeks as the databases were recovered. In my case I wasn't affected much, since I hadn't been following official procedure and had been working in unmanaged private libraries anyway (remember I said I hated that piece of software?).
I rather suspect that incident was why Chip was given two guys to help him, and why Dave was removed from the administrators' group on the software.
So yes, I've seen entire companies' sites all go down at once because someone did something with tools he didn't understand completely.
Google isn't doing graphics. Look at the papers on the Tensor chips they've been doing and you can see that while the architectures are similar (SIMD machines with massive high-bandwidth memory access), there are distinct differences between a Tensor machine and a graphics card. But from just the papers Google has published you can estimate what their Tensor chips cost, and a reasonable estimate is that those chips alone, not counting the HBM, assembly, and all else, cost much more than a maxed-out 1080 Ti card. Google may be large as companies go, but they're still not large enough to get the massive discounts you get from volume Si production.
Whatever their internal disagreements, the IP community have a Unix-like disdain for outsiders, so I wouldn't automatically assume that the ITU's ideas are necessarily any more fanciful than those that have at times emerged from the true faith.
Obviously you've never had to deal with the ITU at the spec level if you don't understand the IP community's dislike of dealing with the ITU. Not that spec work is pleasant at the best of times, but dealing with the ITU in particular makes no-anaesthesia dental work look like a holiday in comparison. Calling the ITU more political than technical is the mildest of the complaints that can be made.
Don't fix the candidates. Fix the system that filters out better ones.
On the Democrat side, the system was fixed. Just ask Bernie about how the DNC behaved and how the super-delegates system works. Yet still I can't believe the system nominated someone who, had she been anyone else, would have been behind bars in any non-politicized justice system. I know I'd have been in Leavenworth if I'd mishandled classified materials like that.
Strangely enough, the Republicans actually had the more democratic (little d) nominating system. For the GOP, the problem was that there were so many similar "mainstream candidates" splitting the vote in a winner-takes-all system that the outlier was the one who survived to win. Trump may well have won because the highly partisan media gave him far more exposure than he deserved, and there's a fair bit of this country that dislikes the media. There's evidence Clinton and the DNC conspired to promote his candidacy, and in the most perverse sense they may well have nominated the only candidate who touched issues that allowed him to peel off states like Ohio and Wisconsin that were more sensitive to trade and immigration issues. Absent that manipulation, Clinton should have beaten Cruz handily, for example.
I'm not sure that making the GOP system less democratic and more like the Democrats' system is the best idea, as the GOP establishment is at least as corrupt as the Democrats'. But I would like it if the media were more trusted on political matters and weren't so partisan that they're dismissed by half the country. That would be the biggest and best improvement you could make in the system at present, but the odds of that happening are minuscule.
I thought that Broadcom was moving its headquarters to become an American based company: https://www.usatoday.com/story/tech/news/2017/11/02/trump-announces-broadcom-moving-legal-hq-u-s/825600001/
That said, I'm massively confused by the statement "We are concerned about the possibility of a European company handling sensitive data of EU citizens falling in the hands of a company that is based in Singapore, where data protection standards are lower than in the EU." What PII will Qualcomm have on EU citizens other than their own employees?
And the speculation of possible data exfiltration in their chips seems highly unlikely. While the company may be legally domiciled in Singapore, the vast bulk of Broadcom's design efforts and corporate decision making centers are US based so it would be more likely that any exfiltration implementation would come at the behest of the US rather than Singapore. And in purely practical terms, getting exfiltration implemented in a quiet manner in an SoC seems to me like it would be massively difficult given how freely the folks in Silicon Valley talk and the huge number of folks involved in the design, implementation, and testing of an SoC.
Honestly folks, if you're talking about the possibility of exfiltration you really should be talking about software rather than hardware. It's far easier to hide those nefarious programs than it is to design them into the hardware.
But given that Hock Tan's modus operandi is to buy a company and spit out the parts he doesn't want, Niels Annen's concerns about the fate of the Hamburg plant are pretty valid. I'll certainly give him that.
I haven't met anyone who's bought a prebuilt desktop in years. If you care about performance and durability you either build or get a custom built system.
Moore's Law and Dennard scaling (base semiconductor scaling) are dead. We've hit the practical Amdahl limit, too. So if you look at the practical speed of a general purpose CPU for the last 5 years you'll see it's almost flat. Why upgrade your system if your CPU and RAM are basically unchanged in speed?
The only things that are still scaling up at a respectable pace are GPUs. I've upgraded my GPU regularly, but I haven't seen enough of an improvement to even consider a CPU update, meaning that my desktop system with high-quality components has served me well for years, and probably will for more years to come.
"It has the most Nobel prizes precisely because most of the people who have won them have immigrated from other countries to be there."
Somehow I don't think Einstein would have had any trouble getting through a merit-based immigration system, nor would he be disqualified under the "moral turpitude" clause that's hitting that Polish-born doctor. He's being reviewed for deportation (and it's not a certainty) because of convictions for receiving stolen goods years ago. But as the article noted, had he gone and applied for citizenship as many green card holders have done, this wouldn't be an issue now for him.
To me, the more telling comment is the lawyer who said that H1-B applications would have more chance of being approved if they didn't pay the absolute minimum wage required. Isn't that exactly how the program was sold? That H1-B was for those cases where jobs couldn't be filled by Americans?
Already done. Last I heard the patent portfolio brought in north of $1.5B/year. They pass it around to the latest fair-haired division to make that VP look good. Terribly political.
BTW, in IBM-ese RA is not redundancy action, it's "Resource Action", although the effect is still the same.
I also used to work at NASA. That is, until the HR drone and my manager gave a presentation that flatly stated that no white male would get a promotion until there was "equality" in our department. There were 20 engineers in our department, all "white males" as far as HR was concerned (i.e. we were all males of Caucasian or Asian descent). We did the math, and very soon the department was down to 5 engineers and still shrinking last I heard. Why stick around in those situations?
There are a fair number of ways that Intel can fix the Meltdown issues cleanly, since AMD already does. (Yes, let's acknowledge that Intel chose the riskier architecture for speed reasons.) TLB isolation or mirroring, changing the order of execution, etc.
Spectre will be a touch harder to fix. Right now it's almost secure on AMD, while it's a gaping hole on Intel's processors. Again, there are fixes, but what ones will impact performance the least? That's probably a big unknown, even inside Intel.
As a practical estimate, look at what it takes Intel to design a new processor. Their ping-pong strategy using 2 design groups should tell you that it takes probably 18 months to make each fairly large change in their processors, and this is likely to be a fairly large change in an area that's notoriously finicky (branch prediction is an art). As a rough estimate, I'd say that pushing either the Spectre or Meltdown fixes through the process is probably at least 6 man-months' worth of effort (new architecture with performance optimizations, RTL implementation and checking, new P&R, lab validation, etc).
The timing couldn't be worse for Intel. They typically announce desktop processors in the fall, which means they're probably in the testing and finalization stages of validating next fall's announcement now. Any attempt to put a fix in there will likely hit fall's announcement hard. You're talking about designing a fix, implementing it, 2 months to turn the design around in the fab, and then testing the fix. Maybe it's doable, but there are going to be a lot of sleepless Intel engineers if they hope to keep the schedule. My personal bet is that they'll have to slip the schedule AND rob the engineers blind on overtime.
I'm certainly no fan of politics in the workplace. That said, this suit was filed in California, and as with many things, California is an oddball: the suit is in California courts, with California law controlling.
Google has a policy of actively attacking those who don't hew to one particular viewpoint and firing them, as Damore found out. Under California employment law, that's illegal. Remember that Damore wasn't fired for discriminating against anybody, according to the reason the CEO gave for canning him; it was because he promoted "badthink". In most at-will employment states that's actually legal, but not in California, where there are protections against firing for political reasons. Given the storm of accusations and leaked documents showing managers who would blackball employees based on their political beliefs, Google's really behind the 8-ball in this suit.
The suit would have had to be structured differently if it were filed in Federal courts, since firing for political reasons is allowed under Federal law. The discrimination and hostile workplace claims could still be made, but they're harder to prove. Frankly, the ability to add the political angle probably made filing in the California state courts preferable given the statements that Google made when they fired Damore. It will take some pretty fancy gymnastics on the part of Google's lawyers to make this go away.
Exactly right. When you are doing development you are nearly always told that you should *NOT* do searches for IP related subjects. Every company I've ever worked for will tell its engineers the same thing: NO IP SEARCHES! Willful violations of patents are *extremely* expensive (see CMU vs Marvell for an example), while inadvertent or unknowing ones are far less expensive.
Besides, once the product is out there and making money any IP violation becomes a patent war, and very, very few companies want to go up against some of the titans of patenting. Although in this case, Qualcomm vs. Intel would be an epic battle. Both have some very fundamental patents in some key but very different niches. The negotiations would be epic.
And that points out a fundamental difference between Qualcomm vs. Apple and Qualcomm vs. Intel. Apple doesn't have anywhere near the depth of chip-related IP that Intel does, so Qualcomm has a far better chance of strong-arming Apple than it does Intel. It's not like Apple's patents have much overlap with Qualcomm's business, but since Apple is a consumer of Qualcomm's and competitors' products, they're more vulnerable to Qualcomm's IP threats.
Oh IBM used Notes, and even had it ported to their POWER machines in the 90s.
I have to say, Notes got me my first laptop. The POWER workstation port was so bloated it made the PC version look svelte and quick in comparison (and it was slow on the PC, but that's another story). Even though we were R&D we were expected to use it, but it was so big and slow that I loaded it up twice a day: once when I first got in and once before I left. In the usual course of events, my 3rd line needed something right away and asked me for it, but I didn't see it until well after he wanted it. When asked why, I explained how I couldn't do both my work and run Blotus Notes at the same time. It wasn't a week later when my manager dropped by with a laptop so that I wouldn't miss another management meltdown. And this was a time when laptops were nearly unheard of outside of sales and management.
You don't do chips, do you?
TSMC has several modes of operation. In one, you pass off RTL to them and they do the synthesis, P&R, etc. This is the "handholding for newbie startups" mode. TSMC could, if it desired, change your logic and hide it from you, since they also design the test patterns.
The other mode is where they take GDS2 (geometric trapezoids) and hand you back silicon. This is the one that serious companies use. In this case TSMC is practically locked out, since they'd have to decompile the GDS, make changes, and hope like h*ll that they didn't break the test patterns you've already generated. The odds of pulling that off on any practically sized SoC are infinitesimal.
I expect Google is a serious company, with serious money to spend given that they are going to this extent for security so TSMC isn't a practical attack vector. Your better bet would be to corrupt one of Google's IP suppliers and try to inject a vulnerability there. I seriously doubt Google is designing the microcontroller, for example, so that's where I'd start if I wanted to corrupt this sequence, although you could do it on any of several IP blocks they use.
May I point out that MS had the same can't-read-a-thermal-spec issue with the Xbox 360 and the RRoD? That's why they had to go to IBM's more expensive SOI process to bring the processor temperature down. (Not that it was totally MS's fault, but they should have managed the transition to lead-free solder better, like most other companies did.)
I would suggest there may be a lack of attention to thermal engineering in the MS hardware department and too-slavish deference to design engineering. A failure in one flagship product is understandable, but not learning from it speaks volumes about the culture in MS's hardware division.
It's not like anybody didn't suspect the numbers, it's just interesting how those numbers got out.
IBM is moving their key work to low wage locations.
IBM sales haven't grown in 20 quarters.
Not that management will see any connection. So, so, so glad I left long ago when they went from techie management to cookie management. Any company that's been pumping its stock price with buybacks for way more than a decade has been telling stockholders that it has no idea how to be a tech business because they can't find anything useful to do with the money they're making, and IBM shows what that sort of clueless management leads to.
You also need to consider that it was *illegal* for Pence to use his state government email for personal email. That there was some spillover between political and state emails is almost guaranteed in such situations. Even if Pence obeyed all the rules, there's the case where a donor might email the governor's personal email for some assistance with bureaucratic red tape down the road.
And to compare that to Clinton's situation is laughable. We already know that Clinton was required by law to turn over *all* work related emails, yet she culled numerous emails that were obviously work related from the cache she turned over and that were only recovered by the FBI getting them from other sources not inside the government. She was actively trying to make sure that none of her political emails were subject to FOIA by running her own filter, and she violated the law by not turning over all the email that she was legally obliged to do. Pence has no such similar intentional violation of Indiana laws.
Austin is nothing compared to Houston. And as for raw heat, you might as well live in an oven as in Phoenix. But I'd take either of those places over SV.
They've tried to get me out in SV many times, but I always laugh at the recruiter and ask them if they can match my quality of life. Sure, there's a shot at making millions (I've had friends who've done that in SV), but the odds of that are low enough that it's not worth the pain of living in SV. Unless you're at a good startup you're not going to hit the lottery. Working for Google or Facebook in SV is a losing proposition unless you just want them on your resume, and for that you just want to get in and get out while you're young because those are no places to make a career.
Why accept the abuse for years on end before collectively resigning? Why not start by collectively speaking up?
Because it rarely works?
My version goes like this. We had a group in a big TLA company back in its gold-plated era. A productive group, we never missed a tapeout by anything other than the expected amount (i.e. for this company, we never made the fab wait more than what they were already running behind). It got our manager a promotion by 2 levels, and we got a new, freshly minted manager from a different area whose products were chronically late and buggy.
Step 1: 3 months later, 25 people scheduled a group meeting with the former manager and his subordinate, who was our new manager's manager, and presented how badly the new manager was treating us and how badly he was managing the schedule. "All proper concern" was expressed at our complaint, and "corrective action would be taken."
Step 2: after 6 months of nothing being done, two individuals were chosen to begin feeling out opportunities with external companies for the entire group. We had 3 bids and ...
Step 3: after 3 months of searching, 24 people put 24 15-minute appointments on our former manager's calendar to hand in our resignations. The new company was certainly better than the TLA with its infamous bureaucracy, and it paid better, but the important factor was that going to work was fun again, and we actually got to do work rather than spend all our time avoiding management abuse. The TLA got out of that business line soon after, as they were never able to put out another competitive chip in that market segment.
Using every resource to crush someone who has insulted you is SOP right now, just ask Lois Lerner.
One good thing about Trump being President is that all of a sudden there are a lot of people who will worry about too much government power and be a lot more interested in exposing government abuses.
Pixel is too much money for too little return in my book. I've been a Nexus user from the start, and when I look at my 5X and compare it to the Pixel I'm massively underwhelmed. $400+ for that?! What I still want is a replaceable battery and a uSD slot, and possibly water resistance (although that's not as big a deal), none of which come with the Pixel.
If Google's going to make an iPhone clone, it's got to do better than an iPhone, because Google already comes to the table with a big minus: it lives by stripping away my privacy in more ways than Apple does. At this point, I'd do the iPhone before I'd do a Pixel if I had to choose. And I dislike my daughter's iPhone, and I absolutely despise iTunes with white-hot hate.
I keep meaning to try Cyanogen and the Pixel might be what drives me to it. Now where'd I leave my wife's old LG G3?
I've lived in both places: San Diego and the Twin Cities. Both have their advantages.
San Diego has my favorite weather of anywhere I've ever lived. Any place where the entire summer forecast is mild heat and low humidity, and the only question is when the fog will burn off, is awesome; the social scene is a blast, and the scenery is top notch. But you can't afford a house there, and the idea of getting any sort of land is out of the question unless you're Bill Gates; the traffic is essentially at LA levels of terrible most of the time now; and the schools suck in general.
Minneapolis is cold, but you get used to it. Housing is inexpensive, you can get a nice spread with an easy commute (I had 10 acres with horses and a 20 minute drive to work and the cost was less than a third of the median house on a postage stamp sized lot in SD, and taxes far less). In general the people were friendlier and the public schools better.
And as far as outdoor activity my attitude has always been to find out what the locals do for fun and do it. In SD I surfed, hiked the mountains, and did a little sailing. In the TC I canoed, fished, snowmobiled, cross country skied, and whatnot.
Given my choice, SD was great back when I was single. With kids, Minnesota won.
When I was at university many years ago I ran the computer network of the Electrical Engineering department (long story, involving a VMS admin who had tried to run the Unix systems with disastrous results and I got drafted to take his place on the Unix systems because my systems had never been under his administration and everyone liked how they worked).
One night, close on midnight, I had complaints that one of the labs had gone down. I checked and sure enough, the server for that lab was down. I walked down to the lab and found two of the student admins playing hide the sausage on the server. They'd gotten energetic enough that they'd knocked the power cord from the wall. They were shocked, but I turned and left without a word.
Firing them the next day was kind of awkward.
Ever heard of Powerline Ethernet? You don't need to wire your flat, just plug one into an outlet, connect to your router with a standard ethernet cable, then plug another into the wall near your TV and ethernet to that. Fast, simple and no WiFi to leak. Great if you're a gamer, too, since the latency is lower.
One of the key differences between Consumer Reports and the vast majority of reviewers is that they actually purchase their test samples at retail outlets. As such, their reviews are for stuff that the actual consumer will encounter, not stuff hand-picked by suppliers to hand-picked reviewers. This does mean that by the time the report is actually done the product might not be available (depending on the durability times, for example), but it does mean that you'll get a more honest review.
Oh, and Consumer's Union, the folks behind Consumer Reports, don't accept advertising or corporate support. Again, to give the most unbiased opinions.
Personally, I find their auto reports to be the most revealing of all their stuff since they track long-term durability of various brands and makes.
The SV staff have no loyalty? Hardly surprising given that company loyalty to their employees died decades ago.
And how much do you trust that you won't get hit again with ransomware? Any time I run across a PC with a nasty on it, I assume that no matter what I do there's a chance some back door or other nasty will be left on the machine, and I wind up wiping it anyway. Yes, it may take a while to get the data back, and yes, the luser will be stuck reinstalling all their programs, but if I reimage the system at least I don't have to worry about missing a back door. And I keep months of images around, so an unencrypted version of the data should be available.
There are all sorts of little details on phones that reviewers/bloggers care about that real people who use phones don't care about. LG's buttons on the back, for example, drive the reviewers to hate-filled rants, but most consumers seem to like them there.
I'm always amused at listening to the bloggers go on and on about how great metal and/or glass backs are on flagship phones. It seems that bloggers/reviewers are the 1% of the people in this world who actually have a flagship phone without some case for protection. Sure, I know, they get the phones free, so what do they care? But the rest of us don't give a rip if the phone has a plastic back, because if it didn't come with one from the factory, it will have one within 24 hours of purchase.
As noted above, it's not CMU's fault it took so long. They went to Marvell early on (2002?) and tried to license it; Marvell refused and said they'd found a different way around the patent. Around 2006, CMU got some good intel that Marvell was using the patent and restarted negotiations. Those fell through after over a year, and the suits started. Discovery was slow, as is usual (and disastrous for Marvell -- look on the web for full details), and the trial took a long time to prepare. They've been slowly grinding through the system, and all that slow grinding means the award keeps going up, thanks to the magic of compound interest and a finding of willful infringement.
Compounded treble damages are something nobody ever wants to see. It's why most working engineers are told to be very, very, very careful about anything to do with patents. It's actually better if you never do a patent search, because willful infringement is so damaging to the company.
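To see why "compounded treble" stings so much, here's a back-of-the-envelope sketch. All the figures (the base award, interest rate, and years) are hypothetical, not the actual CMU v. Marvell numbers:

```python
def award_with_interest(base_damages, annual_rate, years, treble=True):
    # Compound the base award yearly over the life of the litigation,
    # then apply the treble multiplier for a willful-infringement finding.
    total = base_damages * (1 + annual_rate) ** years
    return total * 3 if treble else total

# Hypothetical: a $1B award, 6% annual interest, 10 years in the courts.
print(round(award_with_interest(1.0e9, 0.06, 10) / 1e9, 2))  # -> 5.37 (billions)
```

A decade of stonewalling roughly quintuples the bill in this toy example, which is exactly the dynamic the post describes.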
You have to be very, very careful when you don't assert a patent after you become aware that it's been infringed. Waiting to assert will massively lower any award you get.
The problem for Marvell in this case is that CMU came after Marvell under the suspicion that Marvell was using their patents. Marvell denied that and CMU walked away. Only later did evidence arise that made Marvell's infringement obvious and caused CMU to come back to Marvell, Marvell to stonewall again, and finally this suit. It was Marvell's initial denial that allowed CMU to reach back much farther than would be typically allowed in a case like this. Well, that and the fact that Marvell's internal documents showed a blatant intent to infringe the patents.
Marvell was built on these chips. They were Marvell's first products and have been a cash cow for them for over a decade.
Marvell admitted they looked at the patent, and they claimed in court that it was impractical so they did something "similar." The problem for Marvell is that they got caught using exactly the same algorithm (down to the names!) as was used at CMU internally. The fact is that they were blatantly using these patents during their development activities, and that came out in discovery. The patent infringement in this case went way beyond infringement to blatant abuse, which is why the award is so high. And it was so obviously blatant that the jury ruled unanimously that it was willful (which we should probably blame the engineers at Marvell for -- couldn't they have made it at least plausible that they did this accidentally?!).
If you look, the jury's judgement of the value of the patents was pretty accurate at about 5% of the profit of the chips. Remember, using this was necessary for them to be competitive in the market, as Seagate and others testified (and Seagate is a big customer of Marvell's disk drive chips). It's the treble damages and interest costs for willful, blatant infringement that's really painful for Marvell. Well, that's the point. Marvell didn't do this accidentally, they did it with malice aforethought and now they're being called to account for that behavior.
Actually, you do have to write a full block in a disk drive, like it or not. Parity and error correction bits are spread throughout the sector in a disk drive, which means if you were to attempt to change less than a block you'd destroy the ECC, munge the SOVA, and corrupt the detectors. You could modify the file system interface to hide the fact that you're not writing a complete sector, but the drive itself has to write a complete sector.
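The upshot is that any sub-sector write becomes a read-modify-write at the drive level. A minimal sketch of that behavior, with an in-memory "disk" and a made-up sector size standing in for real hardware:

```python
SECTOR_SIZE = 512  # illustrative; real drives use 512 or 4096 bytes

def write_partial(disk, offset, data):
    """Emulate what the drive must do for a sub-sector write: read the
    whole affected sector, patch it in memory, and write the whole sector
    back (real hardware recomputes ECC over the full sector here)."""
    start = (offset // SECTOR_SIZE) * SECTOR_SIZE
    buf = bytearray(disk[start:start + SECTOR_SIZE])  # read the full sector
    pos = offset - start
    buf[pos:pos + len(data)] = data                   # modify only the requested bytes
    disk[start:start + SECTOR_SIZE] = buf             # commit the full sector
```

A file system can hide this from applications, but as the post says, the sector itself always goes to the platter whole.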
DVDs and CDs are CLV. HDDs aren't CLV because the acceleration and settling is too hard a problem for something that's random access. As slow as HDD access is, it'd be far worse if you had to change spindle speed as you changed radius. Even drives with one platter would suffer horribly if you tried to do CLV. There's a reason skipping segments/songs is so slow on DVDs and CDs...
Actually, if you look at the supplier's chips, they're pretty close to "infinitely smooth adjustment". The frequency steps are roughly 1% increments across the range of the SoC in question, and the better SoCs can do from 100MHz to 3+GHz.
The AD (areal density) jumps actually occur in two places: the read zones on the drives, and the servo. In general, most drives have 20+ read zones, within each of which the frequency of the data on the disk is fixed. There's a tradeoff between having tons of zones and optimization time, as well as SoC frequency switching time as you cross zones. But the more zones you have, the more efficiently you can pack the bits, since as the diameter increases you can raise the frequency to keep the linear density constant.
But where the real difference in AD occurs is in servo. For many drives, servo is written at a fixed frequency from inner diameter to outer diameter. Zoned servo is relatively rare, so in general there's a huge AD penalty as you go to the outer diameter: the servo wedge gets very large compared to the data, and you lose a ton of AD.
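The zoned-recording math is simple enough to sketch. Holding linear density constant on a fixed-RPM spindle means the channel bit rate has to scale with radius, which is exactly why outer zones run faster (densities, radii, and zone count below are all assumed for illustration):

```python
import math

def zone_frequency_hz(bits_per_mm, radius_mm, rpm):
    """Channel bit rate needed to hold linear density constant at this radius."""
    velocity_mm_s = 2 * math.pi * radius_mm * rpm / 60  # media speed under the head
    return bits_per_mm * velocity_mm_s

# Illustrative only: 20 zones between 20 mm and 46 mm on a 7200 RPM drive,
# with an assumed linear density of 60 kbits/mm.
rpm, density = 7200, 60_000
for zone in range(20):
    r = 20 + zone * (46 - 20) / 19
    f = zone_frequency_hz(density, r, rpm)
    print(f"zone {zone:2d}: r = {r:5.2f} mm, f = {f/1e9:.2f} Gbit/s")
```

The outer zone ends up running at 46/20 = 2.3x the inner zone's frequency, which gives you a feel for why the SoC's wide, fine-grained frequency range matters.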
Seagate is big in enterprise and desktop. They have relatively small market share in laptops.
WD/HGST have a bigger exposure to laptops, well more than half the market.
Laptops are transitioning to flash, as well as dropping in sales terms as tablets/phablets increase in popularity.
Put it all together and you can see why Seagate had a better quarter than WD.
Why do you think that they need to intercept something in the channel and delay a transport? The NSA could simply have a stock of hacked routers in their warehouse. Then, when a router is ordered, they could simply substitute the hacked router for the ordered one at some step along the way. Customs comes to mind, since it could just as easily swap a bugged router in for the unbugged one while doing its "inspections."
Don't make things too complicated, folks. Think like a spook.
I played Diablo and Diablo II quite a bit, and despite my qualms I bought D3. Bad decision. Bad, bad decision. I should have dumped the money down the drain and saved the time.
OK, so it might have been a bad decision to pick D2 up again for six months before D3 came out as a way to remind myself how good a game Diablo could be. But those six months of D2 just slaughtered any enjoyment I had playing D3. The mechanics in D2 were so much better, the customizations so much better, etc. It was night and day.
Torchlight 2 put D3 in the trash bin, never to return. I like T2 simply because it's more fun to play and you're not really grinding for the one or two items that let you survive. You've got a greater variety of stuff that keeps you going, with slightly different emphases that keep things fresh. And you can LAN party with friends, you get fanboy mods, etc. All the things that kept D2 fresh, rather than playing in the Blizzard D3 jail.
If Blizzard would update the graphics and AI of D2 I'd buy it again. But an "upgrade" to D3 isn't going to get my money absent a complete overhaul.
If Google wants ARM in an Intel process then Google has to do it themselves to get it done right. Of all the transistor shops around, there are few that approach the level of NIH inside Intel. They already had StrongARM, the best ARM implementation at the time, and screwed it up badly before selling it off to Marvell, for example. Intel's got a great processor design team, and great fabs, but it's abysmal at taking other folks' learning to heart.
And, FYI, Intel chips aren't really CISC when you dig into them. Much of the reason that x64 never lost to RISC is that Intel has a massive unit that breaks CISC instructions apart into component RISC-like micro-ops before dispatching them to the execution units. That's a great way to keep the speed up and compatibility flawless, but there's a big power and area penalty paid for it. It's a great solution for desktops, and an acceptable solution for laptops, but in a cell phone it doesn't fly due to the extra battery draw.
Keep in mind that Backblaze uses those drives a heck of a lot more than typical consumer applications, where the drives spend all their time just track following and staying much cooler. Seriously, heat is the enemy of disk drives, and for a typical consumer application a consumer grade drive is fine. If you want continuous operation, you need something with a better thermal profile, and those are enterprise drives.
I have a Nexus 4 and just got a Nexus 5 for my wife. We're on T-Mobile USA so we're prime targets for this kind of thing: no subsidy plan, GSM, etc.
The Nexus 5 is smoother, thinner, and has noticeably more screen area. It's a damn nice phone. But I'm not upgrading until my Nexus 4 breaks. The 5 is nice and fast, but it's not enough nicer or faster to make me want to drop $350 on the phone. 4.4 (KitKat) hasn't won me over, either. It's not bad, but I don't like how Google has borged the messaging app into Hangouts, for example. I suppose I'll get used to it, but fully entering The Google Collective with all the software tweaks is kind of unsettling. Still, KitKat is better than the software on my daughter's S4.
Nope, you don't need to do that, or at least it's very, very rare to have problems with that.
What we do these days is detect errors and weak sectors using various intermediate code output stages to estimate the SNR of the read (think SOVA systems and the like). If we detect a bad or weak sector while reading, we map out the offending block and use a spare one in its place. It's completely transparent to the user, and it keeps us from wearing out the NAND any more than is absolutely necessary. (Something similar is done for HDDs.) You have to have a complete failure before something like this causes a problem that's visible to the user.
But think about what this means to end users. It means that if you ever start getting bad sector warnings, what's happened is that we've used up all our spares and can't safely remap bad sectors without OS-level help. That means your storage device is on its last legs and you'd best get anything valuable off the drive ASAP, since the aging never stops.
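Conceptually the remapping logic looks something like this toy sketch (all the names here are invented; real firmware is far hairier):

```python
class RemappingDrive:
    """Toy model of transparent spare-sector remapping, not real firmware."""

    def __init__(self, n_spares, weak_sectors=()):
        self.free_spares = [f"spare{i}" for i in range(n_spares)]
        self.remap = {}                  # logical sector -> spare location
        self.weak = set(weak_sectors)    # sectors whose SNR estimate is marginal

    def read(self, lba):
        physical = self.remap.get(lba, lba)
        if physical in self.weak:        # detector flags a weak read
            if not self.free_spares:
                # spares exhausted: the failure finally becomes visible to the host
                raise IOError(f"unrecoverable sector {lba}; replace the drive")
            self.remap[lba] = self.free_spares.pop()  # silent remap to a spare
        return f"data@{self.remap.get(lba, lba)}"

drive = RemappingDrive(n_spares=2, weak_sectors={7, 9, 11})
drive.read(7)    # remapped silently, the host never notices
drive.read(9)    # remapped silently, last spare consumed
# drive.read(11) would now raise IOError -- only then does the user see anything
```

The point of the sketch is the last line: the user sees nothing at all until the spare pool is empty, which is exactly why the first visible bad sector is such a dire warning.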
Bwahahahaha! You don't know the *half* of it.
Let's take the example of a typical disk drive. In the bad old days, we had "peak detector" systems where bit densities were less than 1.0 CBD (channel bit density) -- one bit in a half pulse width. With enough work and some really simple coding you could reach about 1e-9 BER.
Then IBM came up with the idea of applying signal processing to the disk drive, introducing partial response/maximum likelihood (PRML) systems (Viterbi detectors), where you started to get more than 1 bit in a pulse width and the raw BER off the disk started to drop. Now they're putting about 3 bits in a single pulse width, because they're putting LDPC codes and their 6M+ gate decoders behind the PRML. The raw BER coming off the disk is typically around 1e-5, but with the coding behind it they're typically well below 1e-15.
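For the curious, the "maximum likelihood" part is just the Viterbi algorithm. Here's a toy detector for about the simplest partial-response channel there is, the 1-D "dicode" channel where y[k] = x[k] - x[k-1] with x in {-1,+1}. It's a teaching sketch only; a real read channel, with its SOVA soft outputs and LDPC stages, is vastly more elaborate:

```python
def viterbi_dicode(samples):
    """Minimal Viterbi detector for a 1-D (dicode) partial-response channel."""
    states = (-1, +1)
    metric = {-1: 0.0, +1: 0.0}   # unknown starting symbol: equal path metrics
    paths = {-1: [], +1: []}
    for y in samples:
        new_metric, new_paths = {}, {}
        for s in states:                 # s = candidate current symbol x[k]
            best = None
            for p in states:             # p = candidate previous symbol x[k-1]
                m = metric[p] + (y - (s - p)) ** 2   # Euclidean branch metric
                if best is None or m < best[0]:
                    best = (m, p)
            new_metric[s] = best[0]
            new_paths[s] = paths[best[1]] + [s]      # extend the survivor path
        metric, paths = new_metric, new_paths
    return paths[min(states, key=lambda s: metric[s])]

# Pass x = [+1, +1, -1, -1, +1] through y[k] = x[k] - x[k-1] (x[-1] taken as -1),
# add a little noise, and recover the symbols:
x = [1, 1, -1, -1, 1]
y = [x[0] - (-1)] + [x[k] - x[k-1] for k in range(1, len(x))]
noisy = [v + n for v, n in zip(y, [0.2, -0.3, 0.1, 0.25, -0.15])]
print(viterbi_dicode(noisy))  # -> [1, 1, -1, -1, 1]
```

Even this tiny version shows the trick: instead of thresholding each sample on its own, the detector picks the whole symbol sequence that best explains the noisy waveform, which is where the BER gains over the old peak detectors come from.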
You want scary? Look at MLC NAND flash drives. After a few hundred erasure cycles the raw BER of those things can be 1e-4 or worse. Why? Feature sizes are getting so small that leakage and wear (threshold voltage shifts, etc) are causing those ideal voltage levels to get pretty whacked out. It's getting bad enough that you're starting to see those massively complicated LDPC codes in flash drives, too. Those fancy codes are needed, as are wear leveling, compression, and all those other tricks to make NAND drives last as long as they do.
HDD systems typically fail from mechanical failures but the underlying data is maintained and you can usually get someone to haul the data off the platters for enough money. NAND flash systems, though, die a horrible death from aging and if you have a "crash" on one of those it's not likely that any amount of money will get your data off it because of all the massaging of the data we do to keep those drives alive.
You're also forgetting that NAND will soon stop scaling. It simply can't scale much past about 20nm, since it has to be a planar process and the shrink keeps decreasing the number of electrons stored. At 22nm you're talking about trying to store 200 or so electrons over PVT (process, voltage, temperature), and the half life of the cell storage is getting into the realm of months, even before considering the decreased lifetime from wear. 20nm flash is _hard_ to get working well.
The technology of NAND just doesn't scale well. It's likely that other technologies will come to replace it, but they're not available yet. So predicting the end of "spinning rust" due to NAND just by past performance ignores the technology roadmap and physics. Spinning rust is losing its share of the market, but there's still some doubt about what can replace it.
"The same report concluded that development of even a single bad sector is a pretty good sign that the drive is getting ready to check out."
There's a reason for that. SSDs copied the HDD redundancy scheme. In both cases manufacturers keep a fair bit of "spare space": for SSDs that's unused pages, and for HDDs it's spare tracks. When you hit a problem reading a sector, where you have to try reading it more than once, you map it to a spare sector and mark the old one as bad. At no point does the user know you've done that; it's all done under the covers, seamlessly.
Now that you understand that, you can see the "why" behind Google's result: by the time a user sees a sector failing, the drive has run out of spares, which means a pretty fair fraction of the drive area has failed for some reason. Those reasons are usually cascade failures (heat-related wear in an SSD, TA contamination for HDDs, etc). It's your hint to go out and replace the drive, folks.
As drives fill up you get a couple of different things going on. First, the wear leveling starts running out of blank pages and has to start doing garbage collection to try to make more compact file systems. Second, the more fragmented the file system, the more writes you have to do. Third, your overprovisioning starts to run out and gets less efficient. If you want the more detailed version, look up write amplification on Wikipedia; it's a tolerable introduction to the problem.
Nice assumptions, but not real. Write amplification is a real problem, especially with a drive that's even somewhat packed. Even the best SSD controllers can't keep the write amplification below 1 at ~30% capacity. By the time you hit 80% capacity you're talking monstrous amplification factors for even relatively sequential writes.
Example: I write a 512 byte sector. In a HDD, I write the sector. Done. In an SSD I have to read/erase/write the whole page (~64K or more). That's not including any remapping that has to take place for moving the other sectors on the page.
Then there are problems with longevity (cells need to be refreshed periodically, since flash really isn't a permanent storage mechanism and cell contents degrade over time), etc. There's garbage collection, all that junk that has to go on in the background on an SSD that doesn't go on in an HDD.
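The worst-case arithmetic of that 512-byte write is easy to put in code (sizes are illustrative; real controllers buffer and coalesce writes, which helps but doesn't make the problem go away):

```python
SECTOR = 512
PAGE = 64 * 1024   # assumed size of the unit the SSD must rewrite, per the example

def write_amplification(host_bytes, rewrite_unit):
    """Bytes actually programmed to NAND per byte the host asked to write."""
    units = -(-host_bytes // rewrite_unit)   # ceiling division: units touched
    return units * rewrite_unit / host_bytes

print(write_amplification(SECTOR, PAGE))  # one 512 B sector -> 128.0x in the worst case
print(write_amplification(PAGE, PAGE))    # aligned full-unit write -> 1.0
```

That 128x is the naive worst case, not what a decent controller actually delivers, but it shows why small scattered writes are poison for flash in a way they simply aren't for spinning rust.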
All told, flash isn't a technology to make a long lived drive. It's fast, and it's useful in some applications, but you have to be even more paranoid about it failing and give it a lot more margin than you'd give a HDD.
I design controllers for both SSDs and HDDs. Failure mechanisms are typically very different.
For SSDs what kills you is the NAND wearing out, and that's a big function of how much data you have on your SSD. The problem for SSDs is that sector-oriented writes in HDDs are still 512 bytes to 4K, while SSDs require differently sized writes that are typically much bigger, with the exact size depending on NAND configuration. Since SSDs require full page erase-write cycles, a lot of small writes can cause page wear far beyond what you'd expect: even with wear-leveling controllers you'll be writing tons and tons of new pages if you're not careful.
That same wear, where writing small blocks causes big blocks to be written and worn out, gets exponentially worse as your SSD fills up. While you can push an HDD to 80+% capacity without significant penalty (usually just seek time), pushing an SSD past 50% capacity causes the controller's write factor to climb well above 1.0, and your SSD will wear out significantly faster. This is a real issue in SSDs that use MLC NAND because of the lower lifetime.
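The endurance arithmetic is back-of-envelope stuff (every number below is an assumption for illustration, not any particular drive's spec): total host writes before wear-out is roughly capacity times P/E cycles divided by write amplification.

```python
def host_tb_written(capacity_gb, pe_cycles, write_amp):
    """Rough TB of host writes an SSD can absorb before the NAND wears out."""
    return capacity_gb * pe_cycles / write_amp / 1000  # GB -> TB

# Same hypothetical 256 GB MLC drive (3000 P/E cycles assumed),
# lightly filled vs packed:
print(host_tb_written(256, 3000, 1.5))  # 512.0 TB when write amplification stays low
print(host_tb_written(256, 3000, 6.0))  # 128.0 TB once the drive is packed
```

Same NAND, same drive, a quarter of the useful life, purely because of how full you keep it.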
I tend to agree with richard7 above: a smallish SSD for OS/apps backed by an HDD for data storage and redundancy is the right way to go. I hate trusting the Cloud, as it's pitifully slow if you have a lot of data to recover, and flash tends to have too many catastrophic failure mechanisms that arise without warning. I've been doing this stuff since the start of the PC era and I've only had one HDD fail without warning, but I've seen lots of SSDs fail without warning.
No, toasty warm is actually pretty deadly to NAND. When you're storing 200 electrons per cell in a 125 C environment you're lucky to keep good data for a month or so in the latest NAND technologies. There's a reason we've got strong ECC schemes to make flash more reliable, and the next step up will be LDPC codes, which are coming soon.
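If you want to play with the numbers, the usual way to model temperature acceleration of charge loss is an Arrhenius factor, JEDEC-style. The ~1.1 eV activation energy below is a commonly quoted ballpark for retention loss, not a spec for any particular NAND:

```python
import math

def arrhenius_af(ea_ev, t_use_c, t_stress_c):
    """Arrhenius acceleration factor between a use and a stress temperature."""
    k = 8.617e-5  # Boltzmann constant in eV/K
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / k) * (1 / t_use - 1 / t_stress))

# How much faster does retention degrade at 125 C than at a 40 C desktop?
# (assumed Ea = 1.1 eV; the answer is thousands of times faster)
print(arrhenius_af(1.1, 40, 125))
```

With those assumptions the factor comes out in the thousands, which is why a part that holds data for years at room temperature can lose it in weeks when it runs hot.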
Biting the hand that feeds IT © 1998–2018