Re: Hmmm... programmable?
And certain three & four letter government agencies too
Microsoft has switched on new network interface cards packing field-programmable gate arrays and announced that doing so has let it hit 30Gbps of throughput for servers in Azure. Redmond’s talked up these “SmartNICs” since late 2016 and even detailed (PDF) their workings to the Open Compute project. It has now revealed the …
We’ve been using Napatech’s FPGA-toting NICs for many years now. We hit 20 Gbps lossless throughput over six years ago, and we now handle 80 Gbps without difficulty. This is on Linux, of course, but I’m surprised that Microsoft has a) only just managed 40 Gbps and b) considers it newsworthy.
As to running malware on the NIC, I’d love to know how (at least in the case of Napatech’s offering). It’s pretty locked down, and runs (AFAICS) Napatech’s software only. A little more flexibility might be nice - but, perhaps, not at the cost of security. That said, we haven’t put any effort into getting the NIC to do anything other than its core function.
I wonder when Wintel will re-invent the wheel and come up with Firewire v2, offloading USB interrupt traffic from the CPU, now that they're crowing about doing it successfully for networking. Obviously, it doesn't bother servers much, but when I'm doing a disk-to-disk copy on my laptop, where the two disks are USB-attached devices, I am unimpressed by how much CPU is used.
I know that Firewire, with its unsecured DMA access, wouldn't wash these days, but IOMMUs that support virtualization (Intel: VT-d; AMD: AMD-Vi) are available now and solve that problem (unless you find a sneaky side channel/timing attack). Bothering a CPU with I/O interrupts is an archaic practice.
We used Xilinx FPGAs to filter the LAN networking load on the CPU back in the 1980s.
Still have the handwired ISA prototype board that did the CRC verification of 10 Mbps Ethernet frames in real time. Thought of offering it and its design paperwork to the National Museum of Computing for posterity.
It showed our design engineers that FPGAs were more than just a route to ASICs - it was reprogrammed to change its functionality at various stages of normal operation.
Sadly, at least at the consumer level, the PC market has separated into two camps. The consumer, who wants everything for as little money as possible (and the cheapest way to do this is to use the CPU for as much as possible), and the high performance/gamer market. These are people who can and will pay thousands for a PC, and often end up buying "snake oil" solutions to make that PC faster. They will pay hundreds of pounds for performance they rarely need. Personally, I am in the middle. I am a PC gamer, and will pay money to increase the performance of my PC, but I have neither the money nor the inclination to spend thousands upgrading my PC to a level where I'd rarely use all its power.
Even enterprise customers want to pay as little as possible, often preferring to offload vital parts of their operation to the cloud (which offers a whole load of benefits and dangers that are beyond the scope of this comment).
The result is that as CPUs have gotten faster, I feel PC design has stood still a little. The way to process data most efficiently (be it graphics, network or whatever) is to use a processor designed to process that data efficiently, and not a generic CPU that, while it may offer facilities to speed up processing of given kinds of data, is not as efficient as a dedicated processor. Trouble is, the dedicated processor costs more.
I say stood still because, in the 90s, there seemed to be a move to give every interface on a PC the power to do its own processing, with SCSI cards, sound cards, network cards and other interface cards. There was even a market for hardware DVD decoders. There is still a market for GPUs that do their own processing, although the processing for everything else has largely been absorbed into the CPU. On low end systems, the CPU even does the GPU work.