top | item 27099044


0x000000E2 | 4 years ago

^^ this is huge. I was looking at CPUs for an ML build recently and Intel is out of the question. Chips with enough lanes to run 4 GPUs + an SSD at full speed cost twice as much as the AMD equivalents.

This may even apply to high-end gaming machines. As soon as you have 2 SSDs or video cards you exceed the lane budget on most Intel CPUs and everything slows down.

It also happens with routers: 10 gig NICs plus attached SSD storage put you over the lane budget again.
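A rough tally makes the point. The per-device lane counts below are typical requirements I'm assuming (x16 per GPU, x4 per NVMe drive, x4 for a 10GbE NIC), not figures from the thread, and the Intel mainstream lane count is the commonly cited 16 CPU lanes of that era:

```python
# Back-of-envelope PCIe lane budget for the setups described above.
# Per-device lane counts are assumed typical values, not measurements.

devices = {
    "4x GPU @ x16":  4 * 16,
    "2x NVMe @ x4":  2 * 4,
    "10GbE NIC @ x4": 4,
}

needed = sum(devices.values())
print(f"Lanes needed: {needed}")  # 76

# Assumed CPU lane budgets of the era (mainstream Intel vs. AMD HEDT):
for platform, lanes in {"Intel mainstream": 16, "AMD Threadripper": 64}.items():
    status = "fits" if lanes >= needed else "over budget (devices drop to fewer lanes)"
    print(f"{platform}: {lanes} lanes -> {status}")
```

Even the big AMD HEDT budget can't give everything full-width links here; the mainstream Intel parts aren't close, which is the "everything slows down" effect.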

Intel's stupid market segmentation is biting them in the rear.


walrus01|4 years ago

Not just 10Gbps (such as the Intel card with four 10Gbps SFP+ ports in one slot), but single- and dual-port 100GbE per slot, like this:

https://www.intel.com/content/www/us/en/products/docs/networ...

In calculating the bandwidth and PCIe bus throughput needed, remember that a single 100GbE port is full duplex, so one has to budget about 210Gbps per port.
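The 210Gbps figure follows from counting both directions plus some margin; the ~5% overhead factor below is my assumption for headers and DMA descriptor traffic, chosen to match the "about 210Gbps" budget stated above:

```python
# Per-port PCIe bandwidth budget for a full-duplex 100GbE port.
line_rate_gbps = 100
directions = 2        # full duplex: transmit and receive simultaneously
overhead = 1.05       # assumed ~5% margin for headers/descriptor traffic

budget = line_rate_gbps * directions * overhead
print(f"Per-port budget: {budget:.0f} Gbps")  # 210 Gbps
```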

The funny thing is that some of the best 100GbE NICs for x86-64 servers on the market right now are Intel, but are best used on an AMD platform...

drewg123|4 years ago

What? Why do you think the Intel 100GbE NIC is good?

We've been quite happy with Mellanox and Chelsio 100GbE NICs. The latest from each can do in-line HW TLS offload, which is a killer feature for us. No Intel NIC can do that.

IMHO the last good Intel NIC was the 10GbE "ixgbe" NIC. The design of the NIC was so tight as to be almost beautiful.

The recent 40GbE parts (and the 10GbE parts based on the 40GbE chipset), and the new 100GbE NIC, have the feel of being designed by a committee, with endless features of questionable value stuffed in, consuming power and chip area.

9front|4 years ago

Intel's E810-based NICs require only a PCIe 3.1 x16 slot. 16 lanes will accommodate the 100GbE port just fine. Theoretical PCIe throughput for 16 PCIe lanes is around 252Gbps. The 800-series NIC chipset is just four 25Gb Ethernet lanes stitched together. PCIe 4 won't help this NIC much.
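The ~252Gbps figure checks out if you count both directions of the link: PCIe 3.x runs at 8 GT/s per lane with 128b/130b encoding, so a sketch of the arithmetic is:

```python
# Sanity check of the ~252 Gbps figure for a PCIe 3.x x16 slot.
gt_per_s = 8.0          # PCIe 3.x transfer rate per lane (GT/s)
encoding = 128 / 130    # 128b/130b line-encoding efficiency
lanes = 16
directions = 2          # full duplex, matching the NIC-side accounting

throughput_gbps = gt_per_s * encoding * lanes * directions
print(f"PCIe 3.x x16 aggregate: {throughput_gbps:.0f} Gbps")  # 252 Gbps
```

Per direction that is ~126 Gbps, which is why a single full-duplex 100GbE port (budgeted at ~210 Gbps above) still fits in the x16 slot.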