Early to embed and early to rise? Western Digital drops veil on SweRVy RISC-V based designs

Western Digital today finally flashed the results of its vow to move a billion controller cores to RISC-V designs. WD said last year it needed an open and extensible CPU architecture for its purpose-built drive controllers and other devices …

  1. Anonymous Coward
    Anonymous Coward

    28mm CMOS process technology

    That's not going to be easy to package ;-)

    I'm guessing that should be 28nm!

    1. steelpillow Silver badge
      Devil

      Re: 28mm CMOS process technology

      You stole my post!

      I knew that Open Source technologies can lag behind the proprietary bleeding edge, but this is ridiculous.

    2. eldakka Silver badge

      Re: 28mm CMOS process technology

      Maybe it's a DIY-style project? They supply the design, a stack of breadboards, wires and basic logic chips (NAND gates etc.), and you have to make it yourself?

      1. Anonymous Coward
        Anonymous Coward

        Re: 28mm CMOS process technology

        I think this is the DIY project you are looking for:

        https://monster6502.com/

  2. rcxb

    With that kind of CPU, WD should be able to sell individual hard drives with built-in ethernet, instead of just SATA or SAS interfaces.

    1. Anonymous Coward
      Anonymous Coward

      True. And there are many more opportunities with more oomph in the on-drive processing power. You could for instance have a self-organising RAID system: buy 10 WD drives, instruct them to build the array, and all the logic takes place on the drive(s) using distributed computing.

      1. ATeal

        (RE AC and "distributed computing")

        Calm down, chaining those blocks together logically is not a great idea.

        Drives relocate sectors all the time, and you're better off connecting 10 drives to one controller than 10 drives to each other: chained drives need to communicate, and if one breaks it may partition the remaining devices (trivial case: each drive connects to its neighbours, one of the ends is used to control the array, and any drive going down cuts off everything beyond it).

        Don't hype this up into something it's not.

        In all seriousness, if you wanted the world to be a better place (which this won't help), hope for better data durability. Actually KNOWING when some data is written, and that the writes got there in order, is EXTREMELY difficult; the more layers between the program and the drive (NFS, for example, is the worst case for this problem), the harder it gets. A RAID array can be bad too, etc. etc.

        You get the idea, this is just a different controller - best case is it cheapens drives a bit. To miss a little accuracy but save a lot of explanation: "Turing complete"-ness is trivial to show (modulo infinite tape), and a computer can simulate a TM (Turing machine), thus a computer is at least as powerful as one. So they can *already* build any kind of magical controller they could build with RISC-V (worst case: emulate the RISC-V on whatever they use now; it may suck speed-wise, but they can do it, and compiling for whatever arch they actually used would reduce the speed issue, if it is an issue at all!)

        So there's nothing new here. Arm is already the home of invisible custom busses connecting everything to everything.

        Now if you excuse me I'm going to go and work out (a bound) for the minimum number of links between n drives such that you can remove m drives and the rest stay connected.

        MATHS AWAY!
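        For the record, the bound being chased here is classical: surviving any m removals means the link graph must be (m+1)-connected, and Harary's construction gives the minimum, ceil((m+1)·n/2) links. A brute-force sanity check in Python (the circulant layout below is my own illustration and over-provisions slightly versus Harary's optimum):

        ```python
        from itertools import combinations

        def circulant(n, j):
            """Each drive i links to its j nearest neighbours on either side (mod n)."""
            edges = set()
            for i in range(n):
                for d in range(1, j + 1):
                    edges.add(frozenset((i, (i + d) % n)))
            return edges

        def connected(nodes, edges):
            """BFS connectivity check over the surviving drives only."""
            nodes = set(nodes)
            if not nodes:
                return True
            adj = {v: set() for v in nodes}
            for e in edges:
                a, b = tuple(e)
                if a in nodes and b in nodes:
                    adj[a].add(b)
                    adj[b].add(a)
            seen, stack = set(), [next(iter(nodes))]
            while stack:
                v = stack.pop()
                if v not in seen:
                    seen.add(v)
                    stack.extend(adj[v] - seen)
            return seen == nodes

        def survives(n, j, m):
            """True if every way of losing m drives leaves the rest connected."""
            edges = circulant(n, j)
            return all(connected(set(range(n)) - set(dead), edges)
                       for dead in combinations(range(n), m))

        # 10 drives, tolerate any 2 failures: link each drive to 2 neighbours a side.
        print(survives(10, 2, 2), len(circulant(10, 2)))  # True with 20 links
        ```

        (Harary's optimum for n=10, m=2 would be ceil(3·10/2) = 15 links, so the ring layout pays 5 extra links for its simplicity.)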

        1. Anonymous Coward
          Anonymous Coward

          Re: (RE AC and "distributed computing")

          It is hard, as you say, to know when data actually hits the platter. The only one who knows is the manufacturer, who has proprietary means of knowing such things. And it is such things that make chaining a tasty opportunity. This also represents strong vendor lock-in, which has rarely failed to tempt manufacturers in a competitive market.

          So, using this data, they can assure platter rotation sync with a known phase difference, know when data reaches the platter, and provide features that a third-party controller vendor just cannot. Chaining would also allow for hot/cold standby and phasing-in that appears simple to the end user: everything just works, and you just pull out the HD with the flashing red light and put in a new drive from the same manufacturer. This is how lock-in starts.

    2. Spazturtle Silver badge

      "WD should be able to sell individual hard drives with built-in ethernet, instead of just SATA or SAS interfaces."

      That would be a downgrade: SATA 3 is 6Gbps, SAS 3 is 12Gbps, while most Ethernet is currently 1Gbps.

      1. ManOfFewWords

        True for laptops and desktops, but most servers are now shipping with 10GbE and supporting 25GbE as an option.

      2. rcxb

        You won't find many drives that can sustain more than 1Gbps, even sequentially, and definitely not on random reads/writes. You'll only see that briefly to/from cache. Besides, SAS and SATA are higher speed in part so they can be shared across several drives; Ethernet switches give everybody the full 1Gbps. And let's not get into overhead: USB3 claims 5Gbps, but is actually a bottleneck even to drives that can't sustain 1Gbps.
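        To put rough numbers on that (a back-of-the-envelope sketch; the line rates are real, but the overhead fractions and the 180 MB/s sustained figure are illustrative assumptions):

        ```python
        # Effective throughput per link vs an assumed sustained HDD rate.
        # (line rate in Gbit/s, assumed protocol-overhead fraction)
        links = {
            "SATA 3":  (6.0,  0.20),   # 8b/10b encoding eats 20% of the line rate
            "SAS 3":   (12.0, 0.20),   # likewise 8b/10b
            "1GbE":    (1.0,  0.06),   # Ethernet/IP/TCP framing, ballpark
            "USB 3.0": (5.0,  0.20),   # 8b/10b again
        }
        DRIVE_MBPS = 180  # assumed sustained sequential rate of a typical HDD

        for name, (gbps, overhead) in links.items():
            usable_mbps = gbps * (1 - overhead) * 1000 / 8  # Gbit/s -> MB/s
            bottleneck = "link" if usable_mbps < DRIVE_MBPS else "drive"
            print(f"{name:8s} usable ~ {usable_mbps:6.0f} MB/s  bottleneck: {bottleneck}")
        ```

        On those assumptions gigabit Ethernet tops out around 118 MB/s, below what a modern drive can stream sequentially, while SATA 3 leaves roughly 600 MB/s of headroom.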

  3. Christian Berger Silver badge

    What I don't understand about a memory centric architecture

    Memory is today one of the slower parts of computing. Whenever your CPU actually has to access it, that access takes a long time. Caching solves part of the problem, but it quickly gets very complicated.

    Wouldn't it make more sense not to have one of the slowest parts of your computer be your bottleneck?

    1. _LC_

      Re: What I don't understand about a memory centric architecture

      The alternatives are 'file centric' architectures, which – You won't believe this! – are much slower and even less practical.

      1. Christian Berger Silver badge

        Re: What I don't understand about a memory centric architecture

        "The alternatives are 'file centric' architectures"

        No, there's another obvious architecture, message passing. If you build your interconnection on asynchronous messages it can scale very well. It's the concept the Transputer used.
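        A minimal sketch of that style, with Python queues standing in for Transputer links (the pipeline shape and the doubling "work" are my own illustration):

        ```python
        import threading
        import queue

        def worker(inbox, outbox):
            """Share nothing; receive a message, work on it locally, pass it on."""
            while True:
                msg = inbox.get()
                if msg is None:        # sentinel: shut down and propagate it
                    outbox.put(None)
                    return
                outbox.put(msg * 2)    # stand-in for local computation

        # Two-stage pipeline: main -> stage A -> stage B -> results
        a_in, b_in, results = queue.Queue(), queue.Queue(), queue.Queue()
        threading.Thread(target=worker, args=(a_in, b_in), daemon=True).start()
        threading.Thread(target=worker, args=(b_in, results), daemon=True).start()

        for n in range(5):
            a_in.put(n)                # senders queue messages asynchronously
        a_in.put(None)

        out = []
        while (item := results.get()) is not None:
            out.append(item)
        print(out)  # -> [0, 4, 8, 12, 16]
        ```

        No stage touches another's memory; scaling is a matter of adding stages (or, on a Transputer, more chips on more links).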

        1. _LC_

          Re: What I don't understand about a memory centric architecture

          That's what's happening (or not happening) behind the scenes. We were talking about the visible part before that. ;-)

    2. the spectacularly refined chap

      Re: What I don't understand about a memory centric architecture

      Memory is today one of the slower parts of computing. Whenever your CPU actually has to access it, that access takes a long time. Caching solves part of the problem, but it quickly gets very complicated.

      Memory isn't inherently slow; it can certainly be made physically far faster than current modules. The problem is the interconnect: physical distance adds latency, and limits on how quickly you can modulate even PCB traces cap the bandwidth. I suspect in the long term massively parallel systems with relatively small on-chip stores will be the answer, but making that work needs a fundamentally new programming paradigm and new algorithms.

  4. CAPS LOCK Silver badge

    Good to see some practical RISC-V products...

    ...I'm hoping to see something like the Raspberry Pi but using RISC-V emerge.

    1. LeftyX

      Re: Good to see some practical RISC-V products...

      They already exist: https://www.sifive.com/boards

      I believe there are other manufacturers too.

      1. _LC_

        Re: Good to see some practical RISC-V products...

        Last time I checked, it was still 10-40 times the price. I guess that's what he meant.

    2. Anonymous Coward
      Anonymous Coward

      Re: Good to see some practical RISC-V products...

      > ...I'm hoping to see something like the Raspberry Pi but using RISC-V emerge.

      Broadcom is one of the few networking chip manufacturers that is not (yet) a member of the RISC-V consortium. They seem to have stayed with ARM. The next Raspberry Pi 4 is expected in early 2019 on a different linewidth, so there is too little time to switch architecture completely.

      What they could do is add a few RISC-V cores for real-time tasks, the same approach TI uses in some of their systems. Fast real-time cores with a fast ADC and the new VideoCore would make for an interesting SDR platform.
