* Posts by nerdbert

20 posts • joined 12 Oct 2012

Marvell: We don't want to pay this $1.5bn patent bill because, cripes, it's way too much

nerdbert
Alien

As noted above, it's not CMU's fault it took so long. They went to Marvell early on (2002?) and tried to license it; Marvell refused and said they'd found a different way around the patent. Around 2006 CMU got some good intel that Marvell was using the patent after all and restarted negotiations. Those fell through after over a year, and the suits started. Discovery was slow as usual (and disastrous for Marvell -- look on the web for full details), and the trial took a long time to prepare. They've been slowly grinding through the system, and all that grinding means the award keeps going up because of the magic of compound interest and a finding of willful infringement.

Compounded treble damages are something nobody ever wants to see. It's why most working engineers are told to be very, very, very careful about anything to do with patents. It's actually better if you never do a patent search, because willful infringement is so damaging to the company.
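To put very rough numbers on why, here's a back-of-the-envelope sketch in Python. Every figure in it is made up for illustration; these are not the actual CMU/Marvell numbers, and real courts apply interest and enhancement in their own ways.

# All numbers invented for illustration -- not the actual case figures.
base_royalty = 100_000_000      # hypothetical reasonable-royalty award, in dollars
years_of_delay = 10             # years between first infringement and judgment
interest_rate = 0.05            # assumed pre-judgment interest rate

# Compound interest grows the base award over the years of delay...
with_interest = base_royalty * (1 + interest_rate) ** years_of_delay

# ...and willful infringement lets the court enhance damages up to 3x.
trebled = 3 * with_interest

print(f"Base award:          ${base_royalty:,.0f}")
print(f"After interest:      ${with_interest:,.0f}")
print(f"Trebled for willful: ${trebled:,.0f}")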

0
0
nerdbert

Re: @Pele ...

You have to be very, very careful when you don't assert a patent after you become aware that it's been infringed. Waiting to assert will massively lower any award you get.

The problem for Marvell in this case is that CMU came after Marvell under the suspicion that Marvell was using their patents. Marvell denied that and CMU walked away. Only later did evidence arise that made Marvell's infringement obvious and caused CMU to come back to Marvell, Marvell to stonewall again, and finally this suit. It was Marvell's initial denial that allowed CMU to reach back much farther than would be typically allowed in a case like this. Well, that and the fact that Marvell's internal documents showed a blatant intent to infringe the patents.

Marvell was built on these chips. They were Marvell's first products and have been a cash cow for them for over a decade.

0
0
nerdbert
Mushroom

Re: But did they...

Marvell admitted they looked at the patent, and they claimed in court that it was impractical so they did something "similar." The problem for Marvell is that they got caught using exactly the same algorithm (down to the names!) as was used internally at CMU. The fact is that they were blatantly using these patents during their development activities, and that came out in discovery. The infringement in this case went way beyond ordinary infringement to blatant abuse, which is why the award is so high. And it was so obviously blatant that the jury unanimously found it willful (which we should probably blame the engineers at Marvell for -- couldn't they have made it at least plausible that they did this accidentally?!).

If you look, the jury's judgement of the value of the patents was pretty accurate at about 5% of the profit of the chips. Remember, using this was necessary for them to be competitive in the market, as Seagate and others testified (and Seagate is a big customer of Marvell's disk drive chips). It's the treble damages and interest costs for willful, blatant infringement that's really painful for Marvell. Well, that's the point. Marvell didn't do this accidentally, they did it with malice aforethought and now they're being called to account for that behavior.

2
0

A day may come when flash memory is USELESS. But today is not that day

nerdbert

Re: Open question...

Actually, you do have to write a full block in a disk drive, like it or not. Parity and error correction bits are spread throughout the sector in a disk drive, which means if you were to attempt to change less than a block you'd destroy the ECC, munge the SOVA, and corrupt the detectors. You could modify the file system interface to hide the fact that you're not writing a complete sector, but the drive itself has to write a complete sector.
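A toy sketch of the consequence, assuming a simplified 512-byte sector and a stand-in one-byte parity instead of real interleaved ECC: even a 4-byte change means re-encoding and rewriting the whole recorded sector.

SECTOR_DATA = 512                       # user-data bytes per sector (simplified)

def compute_ecc(data):
    # Stand-in for the real ECC: a single XOR parity byte. Real drives spread
    # much stronger parity throughout the recorded sector.
    x = 0
    for b in data:
        x ^= b
    return bytes([x])

def update_bytes(sector, offset, new_bytes):
    # Even a tiny update means re-encoding and rewriting the whole sector,
    # because the parity covers the entire recorded block.
    sector[offset:offset + len(new_bytes)] = new_bytes
    return bytes(sector) + compute_ecc(sector)

sector = bytearray(SECTOR_DATA)
on_media = update_bytes(sector, 100, b"ABCD")
print(len(on_media), "bytes hit the media for a 4-byte change")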

0
0

Disk areal density: Not a constant, consistent platter

nerdbert
Holmes

Re: Duh...

DVDs and CDs are CLV. HDDs aren't CLV because the acceleration and settling is too hard a problem for something that's random access. As slow as HDD access is, it'd be far worse if you had to change spindle speed as you changed radius. Even drives with one platter would suffer horribly if you tried to do CLV. There's a reason skipping segments/songs is so slow on DVDs and CDs...
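Quick back-of-the-envelope on what CLV would ask of a spindle, using rough 3.5-inch platter radii (assumed, not any particular drive):

inner_r_mm = 22.0                # approximate inner data radius (assumed)
outer_r_mm = 46.0                # approximate outer data radius (assumed)
outer_rpm = 7200                 # pick the speed at the outer diameter

# Constant linear velocity means RPM scales with 1/radius, so the spindle
# would have to spin (outer_r / inner_r) times faster at the inner diameter.
inner_rpm = outer_rpm * (outer_r_mm / inner_r_mm)
print(f"CLV would need ~{inner_rpm:.0f} RPM at the ID vs {outer_rpm} RPM at the OD,")
print("and every random seek across that span would wait for the spindle to re-settle.")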

10
0
nerdbert
Holmes

Re: Duh...

Actually, if you look at the suppliers' chips, they're pretty close to "infinitely smooth adjustment". The frequency selection comes in small increments of roughly 1% over the range of the SoC in question, and the better SoCs can do anything from 100MHz to 3+GHz.

Where the AD jumps actually occur is in two places: the read zones on the drives, and the servo. In general, most drives have 20+ read zones, within which the frequency of the data on the disk is fixed. There's a tradeoff between making tons of zones and optimization time, as well as SoC frequency switching time as you switch zones. But the more zones you have, the more efficiently you can pack those bits, since as the diameter increases you can change the frequency to keep the linear density constant.

But where the real difference in AD occurs is in the servo. For many drives, the servo is written at a fixed frequency from inner diameter to outer diameter. Zoned servo is relatively rare, so in general there's a huge penalty in AD as you go to the outer diameter: the servo wedge gets very large compared to the data, and you lose a ton of AD.
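A toy zone table makes the tradeoff visible. All the numbers here are invented for illustration; real drives pick their zone boundaries and densities very differently:

import math

rpm = 7200
rev_per_sec = rpm / 60.0
bits_per_mm = 25_000                           # assumed linear-density target
zone_radii_mm = [22, 26, 30, 34, 38, 42, 46]   # assumed zone boundary radii

# Step the write frequency up with radius to hold linear density roughly constant.
for r in zone_radii_mm:
    bits_per_track = bits_per_mm * 2 * math.pi * r
    channel_rate_mbps = bits_per_track * rev_per_sec / 1e6
    print(f"r = {r:2d} mm -> ~{bits_per_track / 1e3:6.0f} kbit/track, "
          f"channel rate ~{channel_rate_mbps:5.1f} Mbit/s")

# A fixed-frequency servo pattern, by contrast, is written at the same rate
# everywhere, so the servo wedge eats a growing share of every outer track.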

3
0

Return of the disk drive bigness? Not for poor old, busted WD

nerdbert

It's the product mix

Seagate is big in enterprise and desktop. They have relatively small market share in laptops.

WD/HGST have a bigger exposure to laptops, well more than half the market.

Laptops are transitioning to flash, as well as dropping in sales terms as tablets/phablets increase in popularity.

Put it all together and you can see why Seagate had a better quarter than WD.

0
0

Greenwald alleges NSA tampers with routers to plant backdoors

nerdbert
Holmes

Re: oldest one in the book

Why do you think they need to intercept something in the channel and delay a shipment? The NSA could simply have a stock of hacked routers in its warehouse. Then, when a router is ordered, it could substitute the hacked router for the ordered one at some step along the way. Customs comes to mind, since it could just as easily swap in a bugged router for the unbugged one while doing its "inspections."

Don't make things too complicated, folks. Think like a spook.

4
1

Battling with Blizzard's new WoW expansion and Diablo revamp

nerdbert

I'd like to care, but Blizzard broke my give-a-damn

I played Diablo and Diablo II quite a bit, and despite my qualms I bought D3. Bad decision. Bad, bad decision. I should have dumped the money down the drain and saved the time.

OK, so it might have been a bad decision to pick up D2 again for the six months before D3 came out, as a way to remind myself how good a game Diablo could be. But I have to say that those six months of D2 just slaughtered any enjoyment I had playing D3. The mechanics in D2 were so much better, the customizations so much better, etc. It was night and day.

Torchlight 2 put D3 in the trash bin, never to return. I like T2 simply because it's more fun to play and you're not really grinding for the one or two items that let you survive. You've got more variety of stuff to keep you going, with slightly different emphases that keep things fresh. And you can LAN party with friends, you get fanboy mods, etc. All the things that kept D2 fresh, rather than playing in the Blizzard D3 jail.

If Blizzard would update the graphics and AI of D2 I'd buy it again. But an "upgrade" to D3 isn't going to get my money absent a complete overhaul.

0
0

That Google ARM love-in: They want it for their own s*** and they don't want Bing having it

nerdbert

Re: What Google really wants?

If Google wants ARM in an Intel process then Google has to do it themselves to get it done right. Of all the transistor shops around, there are few that approach the level of NIH inside Intel. They already had StrongARM, the best ARM implementation at the time, and screwed it up badly before selling it off to Marvell, for example. Intel's got a great processor design team, and great fabs, but it's abysmal at taking other folks' learning to heart.

And, FYI, Intel chips aren't really CISC when you dig into them. Much of the reason that x64 never lost to RISC is that Intel has a massive decode unit that breaks CISC instructions apart into component RISC-like micro-ops before dispatching them to the execution units. That's a great way to keep the speed up and compatibility flawless, but there's a big power and area penalty paid for it. It's a great solution for desktops, and an acceptable solution for laptops, but in a cell phone it doesn't fly because of the extra battery draw.
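Hand-wavy sketch of the decode idea, nothing more: one x86-style memory-operand instruction becomes a few simpler, RISC-like micro-ops. This is not Intel's real micro-op format, just the shape of the trick.

def decode(instruction):
    # A memory-operand CISC instruction splits into load / ALU / store micro-ops.
    if instruction == "add [mem], eax":
        return [
            "load  tmp <- [mem]",       # memory read split out
            "add   tmp <- tmp + eax",   # simple register-register ALU op
            "store [mem] <- tmp",       # memory write split out
        ]
    return [instruction]                # simple instructions pass through roughly 1:1

for uop in decode("add [mem], eax"):
    print(uop)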

0
0

Server, server in the rack, when's my disk drive going to crack?

nerdbert

Keep in mind that Backblaze uses those drives a heck of a lot more than typical consumer applications, where the drives spend all their time just track following and staying much cooler. Seriously, heat is the enemy of disk drives, and for a typical consumer application a consumer-grade drive is fine. If you want continuous operation, you need something with a better thermal profile, and those are enterprise drives.

0
0

Google's Nexus 5: Best smartphone bang for your buck. There, we said it

nerdbert

Re: I have a 6 month old Nexus 4...

I have a Nexus 4 and just got a Nexus 5 for my wife. We're on T-Mobile USA so we're prime targets for this kind of thing: no subsidy plan, GSM, etc.

The Nexus 5 is smoother, thinner, and has noticeably more screen area. It's a damn nice phone. But I'm not upgrading until my Nexus 4 breaks. The 4 is still nice and fast, and the 5 isn't enough nicer or faster to make me want to drop another $350 on a phone. 4.4 (KitKat) hasn't won me over, either. It's not bad, but I don't like how Google has borged the messaging app into Hangouts, for example. I suppose I'll get used to it, but fully entering The Google Collective with all the software tweaks is kind of unsettling. Still, KitKat is better than the software on my daughter's S4.

1
0

Why data storage technology is pretty much PERFECT

nerdbert
Holmes

Re: Error correction isn't good enough nowadays.

Nope, you don't need to do that, or at least it's very, very rare to have problems with that.

What we do these days is detect errors and weak sectors by using various intermediate code outputs to estimate the SNR of the read (think SOVA systems and the like). If we detect a bad or weak readback sector while reading, we map out the offending block and use a spare one in its place. It's completely transparent to the user and it keeps us from having to wear out NAND any more than is absolutely necessary. (Something similar is done for HDDs.) You have to have a complete failure before something like this causes a problem that's visible to the user.

But think about what this means to end users. It means that if you ever start getting bad sector warnings, what's happened is that we've used up all our spares and can't safely remap bad sectors without OS-level help. That means your storage device is on its last legs and you'd best be getting anything valuable off the drive ASAP, since the aging never stops.
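Simplified sketch of the weak-read screening, with an invented threshold. Real firmware looks at decoder quality metrics (SOVA soft outputs, retry counts, iteration counts) rather than a single dB figure, but the decision looks roughly like this:

REMAP_THRESHOLD_DB = 14.0                # invented threshold for illustration

def after_read(lba, estimated_snr_db):
    # The data decoded fine either way; the question is whether the sector is
    # getting marginal and should be quietly moved to a spare before it dies.
    if estimated_snr_db < REMAP_THRESHOLD_DB:
        return f"LBA {lba}: marginal read ({estimated_snr_db} dB) -> remap to a spare block"
    return f"LBA {lba}: healthy read, leave it alone"

print(after_read(1234, 18.2))
print(after_read(5678, 12.5))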

1
0
nerdbert
Holmes

Re: Error correction isn't good enough nowadays.

Bwahahahaha! You don't know the *half* of it.

Let's take the example of a typical disk drive. In the bad old days, we had "peak detector" systems where bit densities were less than 1.0 CBD (channel bit density), i.e. less than 1 bit per half-amplitude pulse width. With enough work and some really simple coding you could reach about 1e-9 BER.

Then IBM came up with the idea of applying signal processing to the disk drive, introducing partial response/maximum likelihood systems (Viterbi detectors) where you started to get more than 1 bit in a pulse width and the raw BER off the disk started to drop. Now they're putting about 3 bits in a single pulse width, because they're putting LDPC codes and their 6M+ gate decoders behind the PRML; the raw BER coming off the disk is typically around 1e-5, but with the coding behind it the output is typically well below 1e-15.

You want scary? Look at MLC NAND flash drives. After a few hundred erasure cycles the raw BER of those things can be 1e-4 or worse. Why? Feature sizes are getting so small that leakage and wear (threshold voltage shifts, etc) are causing those ideal voltage levels to get pretty whacked out. It's getting bad enough that you're starting to see those massively complicated LDPC codes in flash drives, too. Those fancy codes are needed, as are wear leveling, compression, and all those other tricks to make NAND drives last as long as they do.

HDD systems typically fail from mechanical failures but the underlying data is maintained and you can usually get someone to haul the data off the platters for enough money. NAND flash systems, though, die a horrible death from aging and if you have a "crash" on one of those it's not likely that any amount of money will get your data off it because of all the massaging of the data we do to keep those drives alive.
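To see how a strong code turns an ugly raw BER into something you can live with, here's a rough sketch. It uses the Poisson approximation to the binomial and pretends the code corrects a hard number of bit errors per sector; real LDPC decoders don't work that way, and errors aren't truly independent, so treat the output as shape, not datasheet numbers.

import math

def post_ecc_fail_prob(raw_ber, n_bits, t_correctable, extra_terms=100):
    # P(more than t bit errors in an n-bit codeword), summing the tail directly
    # so tiny probabilities don't vanish in a 1 - cdf subtraction.
    lam = raw_ber * n_bits
    pmf = math.exp(-lam) * lam ** (t_correctable + 1) / math.factorial(t_correctable + 1)
    tail = pmf
    for k in range(t_correctable + 2, t_correctable + 2 + extra_terms):
        pmf *= lam / k
        tail += pmf
    return tail

n = 4096 * 8                       # a 4K sector's worth of bits
print(f"raw BER 1e-4, weak code (t=8):    {post_ecc_fail_prob(1e-4, n, 8):.1e} per-sector failure probability")
print(f"raw BER 1e-4, strong code (t=60): {post_ecc_fail_prob(1e-4, n, 60):.1e} per-sector failure probability")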

9
1

Hm, disk drive maker, what's that smell lingering around you?

nerdbert
Pint

Re: In the fantasy relm of Corporate __________

You're also forgetting that NAND will soon stop scaling. It simply can't scale much past about 20nm since it has to be a planar process and the shrink keeps decreasing the number of electrons stored. At 22nm you're talking about trying to store 200 or so electrons over PVT, and the half-life of the cell storage is getting into the realm of months, and that's before considering the decreased lifetime from wear. 20nm flash is _hard_ to get working well.
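Rough arithmetic on what ~200 electrons buys you once you split the cell into multiple levels (the 200-electron figure is from the paragraph above; the rest is just division):

electrons_total = 200                      # roughly what a ~2Xnm floating gate holds

for name, bits_per_cell in [("SLC", 1), ("MLC", 2), ("TLC", 3)]:
    levels = 2 ** bits_per_cell
    electrons_per_gap = electrons_total / (levels - 1)
    print(f"{name}: {levels} threshold levels, ~{electrons_per_gap:.0f} electrons between "
          f"adjacent levels -- a handful of leaked electrons becomes a bit error")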

The technology of NAND just doesn't scale well. It's likely that other technologies will come to replace it, but they're not available yet. So predicting the end of "spinning rust" due to NAND just by past performance ignores the technology roadmap and physics. Spinning rust is losing its share of the market, but there's still some doubt about what can replace it.

2
0

Why is solid-state storage so flimsy?

nerdbert
Alien

Re: SSDs and HDDs both require backup...

"The same report concluded that development of even a single bad sector is a pretty good sign that the drive is getting ready to check out."

There's a reason for that. SSDs copied the HDD redundancy scheme. In both cases manufacturers keep a fair bit of "spare space": for SSDs that's unused pages and for HDDs it's spare tracks. When you hit a problem reading a sector, one where you have to try to read it more than once, you map it to a spare sector and mark the old one as bad. At no point does the user know you've done that; it's all done under the covers, seamlessly.

Now that you understand that, you can see the "why" behind Google's result: by the time a user sees a sector failing, the drive has run out of spares, which means a pretty fair fraction of the drive has already failed for some reason. Those reasons are usually cascade failures (heat-related wear in an SSD, TA contamination for HDDs, etc). It's your hint to go out and replace the drive, folks.
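A toy model of that spare pool, with an invented pool size, just to show why the first user-visible bad sector is such bad news:

SPARES = 1024                                  # invented spare-sector budget

remapped = 0
for failure_number in range(1, SPARES + 2):    # one more failure than there are spares
    if remapped < SPARES:
        remapped += 1                          # remapped silently; the user never notices
    else:
        print(f"Failure #{failure_number} is the first one the user ever sees,")
        print(f"and by then {remapped} sectors have already died quietly.")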

0
0
nerdbert
Alien

Re: SSD failure and HDD failure are very different

As drives fill up you get a couple of different things going on. First, the wear leveling starts running out of blank pages and has to start doing garbage collection to consolidate partially valid blocks. Second, the more fragmented the file system, the more writes you have to do. Third, your overprovisioning starts to run out and gets less efficient. If you want the more detailed version, look up write amplification on Wikipedia; it's a tolerable introduction to the problem.
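A crude model of the first and third effects, assuming the garbage-collection victim block holds about the same fraction of still-valid pages as the drive overall. Real controllers pick victims more cleverly, so the shape of the curve, not the exact values, is the point:

PAGES_PER_BLOCK = 128                      # assumed pages per erase block

for fill in (0.3, 0.5, 0.7, 0.8, 0.9):
    valid_in_victim = int(PAGES_PER_BLOCK * fill)
    freed = PAGES_PER_BLOCK - valid_in_victim
    # Freeing `freed` pages for new host data means copying `valid_in_victim`
    # pages elsewhere first, then erasing the block.
    wa = (valid_in_victim + freed) / freed
    print(f"{fill:.0%} full: copy {valid_in_victim:3d} pages to free {freed:3d} "
          f"-> write amplification ~{wa:.1f}x")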

0
0
nerdbert
Linux

Re: Not easily replaceable? Plus, longevity

Nice assumptions, but not real. Write amplification is a real problem, especially with a drive that's even somewhat packed. Even the best SSD controllers can't hold write amplification down near 1 once you're past ~30% capacity. By the time you hit 80% capacity you're talking monstrous amplification factors for even relatively sequential writes.

Example: I write a 512-byte sector. In an HDD, I write the sector. Done. In an SSD I have to read/erase/write the whole erase block (~64K or more). That's not including any remapping that has to take place for moving the other sectors in the block.
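The pathological case in numbers (sizes assumed; in practice the controller usually dodges this by redirecting the write to a fresh page and cleaning up later):

SECTOR = 512
BLOCK = 64 * 1024                 # erase unit, the "~64K or more" above

host_bytes = SECTOR
# Worst case: no free page to redirect to, so the drive reads the block,
# erases it, and programs the whole thing back with one sector changed.
nand_bytes = BLOCK
print(f"Host wrote {host_bytes} B, NAND programmed {nand_bytes} B "
      f"-> {nand_bytes // host_bytes}x write amplification for this one write")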

Then there are problems with longevity (cells need to be refreshed periodically, since flash really isn't a permanent storage mechanism and cell contents degrade over time), etc. There's garbage collection, and all the other junk that has to go on in the background on an SSD that doesn't go on in an HDD.

All told, flash isn't a technology for making a long-lived drive. It's fast, and it's useful in some applications, but you have to be even more paranoid about it failing and give it a lot more margin than you'd give an HDD.

1
0
nerdbert
Pint

SSD failure and HDD failure are very different

I design controllers for both SSDs and HDDs. Failure mechanisms are typically very different.

For SSDs what kills you is the NAND wearing out, and that's a big function of how much data you have on your SSD. The problem for SSDs is that sector-oriented writes in HDDs are still 512 bytes to 4K, while SSDs require different-sized writes that are typically much bigger, although the exact size depends on the NAND configuration. Since SSDs require full block erase cycles before pages can be rewritten, a lot of small writes can cause wear far beyond what you'd expect: even with wear-leveling controllers you'll be writing tons and tons of new pages if you're not careful.

That same effect, small writes forcing big blocks to be rewritten and worn out, gets dramatically worse as your SSD fills up. While you can push an HDD to 80+% capacity without significant penalty (usually just seek time), pushing an SSD past 50% capacity drives the controller's write amplification well above 1.0 and your SSD will wear out significantly faster. This is a real issue in SSDs that use MLC NAND because of its lower lifetime.
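Back-of-the-envelope lifetime numbers, with every input assumed (capacity, endurance rating, workload), just to show how hard write amplification leans on MLC endurance:

capacity_gb = 128                  # assumed drive size
pe_cycles = 3000                   # assumed MLC program/erase rating
host_gb_per_day = 50               # assumed write workload

for wa in (1.5, 3.0, 6.0):         # lightly filled vs heavily filled drive (assumed)
    host_total_gb = capacity_gb * pe_cycles / wa
    years = host_total_gb / host_gb_per_day / 365
    print(f"WA {wa}: ~{host_total_gb / 1024:.0f} TB of host writes, ~{years:.1f} years")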

I tend to agree with richard7 above: a smallish SSD for OS/apps backed by an HDD for data storage and redundancy is the right way to go. I hate trusting the Cloud, as it's pitifully slow if you have a lot of data to recover, and flash tends to have too many catastrophic failure mechanisms that arise without warning. I've been doing this stuff since the start of the PC era and I've only had one HDD fail without warning, but I've seen lots of SSDs fail without warning.

4
0
nerdbert

Re: SSDs and HDDs both require backup...

No, toasty warm is actually pretty deadly to NAND. When you're storing 200 electrons per cell in a 125 C environment you're lucky to keep good data for a month or so in the latest NAND technologies. There's a reason we've got strong ECC schemes to make flash more reliable, and the next step up will be LDPC codes, which are coming soon.
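Rough Arrhenius arithmetic on how much harder 125 C is than a sane drive temperature. The activation energy is an assumed, typical-ish retention figure, not a measured value for any particular NAND:

import math

K_BOLTZMANN_EV = 8.617e-5          # eV per kelvin
EA_EV = 1.1                        # assumed activation energy for retention loss
T_HOT = 125 + 273.15               # the toasty case, in kelvin
T_WARM = 40 + 273.15               # a warm but sane drive temperature

accel = math.exp((EA_EV / K_BOLTZMANN_EV) * (1.0 / T_WARM - 1.0 / T_HOT))
print(f"Retention loss runs roughly {accel:.0f}x faster at 125 C than at 40 C,")
print(f"so the month quoted above at 125 C behaves like ~{accel / 12:.0f} years at 40 C.")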

1
0
