That looks an awful lot like a NUMA memory map to me.
Didn't IBM buy this concept when they bought Sequent to get NUMA?
HPC boffins at startup A3CUBE have picked up where Virtensys left off and designed a way to use PCIe interfaces to provide direct shared global memory across a network with lower latency and higher scalability than Ethernet, InfiniBand and Fibre Channel. San Jose-based A3CUBE was founded in 2012 by Emilio Billi, its chief …
NUMA is not just IBM; today NUMA is everywhere. Intel and AMD use NUMA architecture in all their servers and motherboards.
PCIe is a memory mapped bus.
RONNIEE Express is a logical extension of PCIe's memory mapping, so ....
In any case the memory is logical, not physical. Are you familiar with the global address space concept?
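The global-address-space idea being described is roughly this: several nodes map the same memory window into their own address spaces, so a plain store by one is visible to the others with no explicit message passing. A minimal sketch, using an ordinary file-backed mmap as a stand-in for a PCIe BAR window (on real hardware the mapping would come from the device, e.g. a BAR resource file, not a temp file):

```python
import mmap
import tempfile

# Stand-in for a shared PCIe memory window: an ordinary 4 KiB file.
backing = tempfile.TemporaryFile()
backing.write(b"\x00" * 4096)
backing.flush()

# Two independent mappings of the same region, playing the role of
# two nodes sharing one global address space.
node_a = mmap.mmap(backing.fileno(), 4096)
node_b = mmap.mmap(backing.fileno(), 4096)

# "Node A" writes into the shared window...
node_a[0:5] = b"hello"

# ...and "node B" sees it at the same offset, with no send/receive step.
print(node_b[0:5].decode())  # hello
```

The point of the illustration is only that addressing replaces protocol: a write to an address is the whole transfer.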
(Well, actually, maybe I am, as I tried to post a version of this earlier, put the wrong password in, and took the "you've cocked that up" screen as meaning it had successfully posted. Ahem.)
Right, I'm not exactly an expert in the field of shared, server level memory-based storage, but... have they been writing microseconds where they meant to put nanoseconds? 33us is ... 30kHz. I'm pretty certain that's a cycle time that even the olden-days Big Iron engineers would look at and say "What, seriously? Are you using super-high-rpm drum memory or something?". It's slower even than the average random-seek access time of a typical hard disc, by a factor of about three. Even 2.5us is sort of C64 slow, at 400kHz.
Now, 30MHz I might believe, as it's an appreciable fraction of how fast PCIe actually signals, even if it's overall a bit on the sluggish side in modern terms ... gotta allow time for the signals to move along the wires after all. About what you might have expected from the early PC-66 SDRAM standard. And 400MHz is probably between acceptable and very good for this sort of application: we're still occasionally throwing away hoary old laptops with DDR-400 chips that various other departments (at my workplace) finally can't tolerate using any more, and have plenty of desktops with DDR2-800 in daily use. It's also right at the upper end of what you can achieve for roundtrip latency between two separate machines in the same rack (if you take propagation speed in copper cable as being a little over 50% of C, then that's a 20cm path between the two points being measured...).
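The back-of-envelope numbers in the two comments above can be sanity-checked in a few lines. The 50%-of-c propagation speed is the commenter's assumption, kept here as-is:

```python
# Latency-to-frequency: a round-trip time T implies a cycle rate of 1/T.
def period_to_freq_hz(period_s):
    return 1.0 / period_s

print(f"{period_to_freq_hz(33e-6) / 1e3:.1f} kHz")   # 33 us  -> ~30 kHz
print(f"{period_to_freq_hz(2.5e-6) / 1e3:.0f} kHz")  # 2.5 us -> 400 kHz

# Distance covered in one 400 MHz cycle (2.5 ns round trip),
# assuming signals propagate at ~50% of c in copper.
c = 3.0e8               # m/s, speed of light in vacuum
v = 0.5 * c             # assumed propagation speed in cable
round_trip_m = v * 2.5e-9
print(f"{round_trip_m / 2 * 100:.0f} cm one way")  # ~19 cm
```

So the commenter's figures hold up: 33 µs really is hard-disc-seek territory, and a 2.5 ns round trip leaves room for only about 20 cm of cable.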
I suppose the question is, what sort of distance are we talking between the two devices? Obviously, if they're 200 metres apart then you can forget everything I just said :D
Which companies use PCIe to transfer data to memory on servers in a cluster?
RONNIEE Express extends PCI Express outside the box without using encapsulation or other protocols.
This is an extension of PCI Express memory mapping into a global shared address space, with multidimensional topology support, that can scale in a non-coherent way to tens of thousands of nodes, plus memory-mapped TCP/IP support with 1.2us of latency.