Posts by Corey M

4 publicly visible posts • joined 8 Sep 2013

Sysadmins hail Windows Server 2012 R2's killer ... clipboard?

Corey M

Re: I don't get it

There are numerous scenarios where RDP is simply not available: broken network connections, broken networks, security restrictions... the list goes on. When RDP fails, the new Connect session functionality gives you an option you didn't have previously.

Corey M

Correction and Comparison

The discussion from the floor during the session about the 64-core limit was specifically about the 64 cores available to the *HOST* operating system. Ben stated explicitly that ALL cores on the system were available for VMs to use, although he did not discuss per-VM limits. The limitation is purely about how many cores the host itself is allowed to use.

And from talking to a bunch of other people after the session, the common theme (one that seems to be completely avoided here) was that switching to Hyper-V was going to save a lot of money. One delegate was talking about moving from VMware to Hyper-V because it would save him in excess of AU$20K per year on licensing alone.

Funny how the VMware evangelists here aren't talking about the price, isn't it?

Corey M

Re: Even more confused

"I know I am a bit out of touch with Windows. Last time I used it, I complained that the middle mouse button did not work."

That would be because you didn't know what you were doing. I've been using the middle button on my mice in Windows since the mid-nineties. It would have been earlier, but I didn't get my first three-button mouse until then.

Would you like me to list all the things that *NIX doesn't do the same way as Windows and then complain about how *NIX is stuck in the '60s? I can if you like. It wouldn't mean anything more than your uneducated slam does.

"I would like to congratulate Windows users for reaching the twentieth century, but I am not sure if they all have three button mice yet."

Thanks. I'm sure that one day you'll catch up with us though.

Corey M

Re: Even more confused @Flocke Kroes

"I cannot understand how you can have remote access to another system's memory via RDMA faster than access to local memory."

The main aspect of the session this related to was that when they teamed four network adapters, the transfers bottlenecked on the memory bus.

Given that RDMA is designed for high throughput and low latency, it is quite possible that it carries less overhead than NUMA-node memory access. That doesn't mean a NUMA node on one computer can access memory on another computer faster than memory on the same device; it just means that the RDMA transfers can move data faster than the NUMA nodes can access it.

"It would be interesting to see whether the systems he has seen in the lab are the ones using PCIe4 as an interconnect."

According to Ben they were using 4x PCIe3 NICs. I didn't catch the specifics of the NICs, but he said they bottlenecked the PCIe bus with three of them (well, two really; the third NIC made no difference to the transfer rate). No doubt when they post the session recordings you can listen to what he said and figure it out.
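
For anyone who wants to sanity-check that claim, here's a rough back-of-envelope sketch in Python. Every hardware figure in it is an assumption on my part (I didn't catch the NIC models, so the 40GbE speed and the memory numbers are illustrative, not what was actually in the lab):

```python
# Back-of-envelope: can four teamed RDMA NICs outrun the memory bus?
# All hardware figures are assumptions; the session didn't name the parts.

NIC_GBPS = 40        # assumed 40GbE RDMA-capable NICs
NICS_TEAMED = 4      # four adapters teamed, as described in the session

wire_gbs = NICS_TEAMED * NIC_GBPS / 8   # aggregate wire rate, GB/s
pcie3_x8_gbs = 7.9                      # usable bandwidth of one PCIe 3.0 x8 slot
mem_bus_gbs = 25.6                      # e.g. dual-channel DDR3-1600

print(f"Aggregate wire rate:       {wire_gbs:.1f} GB/s")
print(f"PCIe 3.0 x8 limit per NIC: {pcie3_x8_gbs:.1f} GB/s")

# Each received byte is written into RAM and read back out when consumed,
# so the memory bus sees roughly double the wire rate:
print(f"Approx. memory bus demand: {2 * wire_gbs:.1f} GB/s "
      f"vs ~{mem_bus_gbs:.1f} GB/s available")
```

With those (assumed) numbers, the memory bus runs out of headroom before the wire does, which is consistent with what Ben described: the third and fourth NICs stop helping once memory, not the network, is the limit.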