Friday, 2 May 2014

100G Ethernet FPGA NIC - PCIe 3 with 16 lanes + GPS

I hadn't heard of these guys: Inveatech. Shame on me, as the specs look neat. Though I'm not quite sure why they're reselling the Stanford NetFPGA-10G cards. Those NetFPGA-10G cards are not my favourites due to their power and clocking specifics as well as their PHY latencies. But I digress; here is a picture of their cool 100G NIC:

Inveatech 100G card: 1 x 100G or 10 x 10G
They are not publicising how much QDR and DDR3 the 100G NIC carries. A Xilinx blog tells us which FPGA they're using, 
"the INVEA-TECH COMBO 100G HANIC accepts one 100Gbps CFP2 optical Ethernet transceiver module and the on-board Virtex-7 H580T 3D FPGA receives the Ethernet streams using four of its GTZ 28.05Gbps SerDes transceivers operating at 25Gbps to communicate with CFP2 cage"
That's an expensive FPGA, so the cards will not be cheap. Their 80G card with 2 x 40G QSFPs uses a Virtex-7 H690T, for what it's worth. Interestingly, if the vendor and Xilinx descriptions are both right, the 100G card must have an active module to pull the 10 x 10G lanes out of the 4 x 25G lanes.
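The lane mismatch is easy to sanity-check with back-of-the-envelope arithmetic. A rough sketch below, assuming the CFP2 side runs the standard 100GBASE-R serial rates (25.78125G and 10.3125G with 64b/66b overhead; the Xilinx quote's "25Gbps" is presumably rounding this) -- the `line_rate` helper is mine, not anything from the vendor:

```python
# Back-of-the-envelope check of the 4 x 25G vs 10 x 10G lane mismatch.
# Assumes standard 100GBASE-R serial rates with 64b/66b line coding.

ENCODING_OVERHEAD = 66 / 64  # 64b/66b adds 2 coding bits per 64 data bits

def line_rate(payload_gbps):
    """Serial line rate (Gbaud) needed to carry payload_gbps of data."""
    return payload_gbps * ENCODING_OVERHEAD

caui4 = [line_rate(25)] * 4    # CAUI-4:  4 lanes @ 25.78125 Gbaud
caui10 = [line_rate(10)] * 10  # CAUI-10: 10 lanes @ 10.3125 Gbaud

# Both lane geometries carry the same 100G of payload...
assert round(sum(r / ENCODING_OVERHEAD for r in caui4)) == 100
assert round(sum(r / ENCODING_OVERHEAD for r in caui10)) == 100

# ...but a 25G lane is not an integer multiple of a 10G lane, so you
# can't fan one out into the other with passive bit-slicing: an active
# gearbox has to buffer and re-multiplex the streams between geometries.
print(caui4[0])                    # 25.78125
print(caui10[0])                   # 10.3125
print(caui4[0] % caui10[0] == 0)   # False
```

Which is consistent with the vendor needing an active module on the front end rather than a simple breakout.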

They also claim to have some interesting trade acceleration IP bits and pieces. I'd be interested in hearing from anyone who has had any experience with these tasty-looking Czech morsels.

If you're thinking of just playing, then you're probably better off getting one of the Xilinx dev boards. It's hard to go past their $5k 4 x 10G Virtex-7 board, which includes node-locked tools.

The distributor High-tech Global also has a couple of neat options for the price- or space-constrained.

100G will bring more heartache. *Sigh*, so many 100G module form factors to choose from. There are six main ones. "Standards" are always just dandy when they provide so much choice. I'm just hoping that when we do 100G we can stick to QSFP28 modules instead of CPAK, CXP, CFP2 or CFP4 modules, though the density of the HD modules is tempting even if we'll have to buy new cabling.

--Matt.