top | item 43757731

zten | 10 months ago

I bought a Gigabyte X870E board with 3 PCIe slots (PCIe5 16x, PCIe4 4x, PCIe3 4x) and 4 M.2 slots (3x PCIe5, 1x PCIe 4). Three of the M.2 slots are connected to the CPU, and one is connected to the chipset. Using the 2nd and 3rd M.2 CPU-connected slots causes the board to bifurcate the lanes assigned to the GPU's PCIe slot, so you get 8x GPU, 4x M.2, 4x M.2.

I wish you didn't have to buy Xeon or Threadripper to get considerably more PCIe lanes, but for most people I suspect this split is acceptable. The penalty for gaming going from 16x to 8x is pretty small.
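To put the x16 → x8 tradeoff in bandwidth terms, here's a quick sketch (the per-lane numbers are the usual nominal, post-encoding, one-direction figures, not anything measured): dropping a PCIe 5.0 GPU to x8 still leaves it the same bandwidth as a full PCIe 4.0 x16 link.

```python
# Nominal usable bandwidth per lane in GB/s after link encoding
# (128b/130b for Gen3 and later). One direction only.
PER_LANE_GBPS = {3: 0.985, 4: 1.969, 5: 3.938}

def link_bandwidth(gen: int, lanes: int) -> float:
    """Nominal one-direction bandwidth of a PCIe link in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

# A Gen5 x8 link matches a Gen4 x16 link almost exactly:
print(link_bandwidth(5, 8))   # ~31.5 GB/s
print(link_bandwidth(4, 16))  # ~31.5 GB/s
```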

ciupicri|10 months ago

For a moment I didn't believe you, then I looked at the X870E AORUS PRO ICE (rev. 1.1) motherboard [1] and found this:

> 1x PCI Express x16 slot (PCIEX16), integrated in the CPU:

> AMD Ryzen™ 9000/7000 Series Processors support PCIe 5.0 x16 mode

> * The M2B_CPU and M2C_CPU connectors share bandwidth with the PCIEX16 slot.

> When the M2B_CPU or M2C_CPU connector is populated, the PCIEX16 slot operates at up to x8 mode.

[1]: https://www.gigabyte.com/Motherboard/X870E-AORUS-PRO-ICE-rev...

wtallis|10 months ago

IIRC, X870 boards are required to spend some of their PCIe lanes on providing USB4/Thunderbolt ports. If you don't want those, you can get an X670 board that uses the same chipset silicon but provides a better allocation of PCIe lanes to internal M.2 and PCIe slots.

elevation|10 months ago

Even with a Threadripper you're at the mercy of the motherboard design.

I use a ROG board that has 4 PCIe slots. While each can physically seat an x16 card, only one of them has 16 lanes -- the rest are x4. I had to demote my GPU to a slower slot in order to get full throughput from my 100GbE card. All this despite having a CPU with 64 lanes available.

grw_|10 months ago

I don't think the Threadripper platform is to blame that you bought a board with potentially the worst possible PCIe lane routing. The latest generation has 88 usable lanes at minimum, most boards have 4x x16 slots, and Pro supports 7x Gen 5.0 x16 links -- an absolutely insane amount of IO. "At the mercy of motherboard design" -- do the absolute minimum amount of research and pick any other board?

nrdvana|10 months ago

You're using 100GbE ... in an end-user PC? What would you even saturate that with?

kimixa|10 months ago

Though for the most part the performance cost of going down to x8 PCIe is often pretty tiny -- only a couple of percent at most.

[0] shows a pretty "worst case" impact of 1-4% -- that's on the absolute highest-end card possible (a GeForce RTX 5090) and pushing it down to x16 PCIe 3.0. A lower-end card would likely show an even smaller difference. They even showed zero impact from x16 PCIe 4.0, which is the same bandwidth as x8 of the PCIe 5.0 lanes supported on X870E boards like you mentioned.

Though if you're not on a gaming use case and know you're already PCIe limited, the impact could be larger -- but people with that sort of workload likely already know what to look for, and have systems tuned to it more than a generic consumer gamer board.

[0] https://gamersnexus.net/gpus/nvidia-rtx-5090-pcie-50-vs-40-v...

dur-randir|10 months ago

>I wish you didn't have to buy Xeon

But that's the whole point of Intel's market segmentation strategy - otherwise their low-tier workstation Xeons would see no market.

vladvasiliu|10 months ago

I wonder how this works. I'm typing this on a machine running an i7-6700K, which, according to Intel, only has 16 lanes total.

It has a 4x SSD and a 16x GPU. Their respective tools report them as using all the lanes, which is clearly impossible if I'm to believe Intel's specs.

Could this bifurcation be dynamic, and activate those lanes which are required at a given time?
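On Linux, at least, you can see what the link actually negotiated by reading the attributes the kernel exposes in sysfs. A small sketch, assuming the standard `current_link_width` and `max_link_width` files under `/sys/bus/pci/devices` (present on reasonably recent kernels); your vendor tools may be reporting the slot's maximum rather than the negotiated width:

```python
from pathlib import Path

def pci_link_widths(sysfs_root: str = "/sys/bus/pci/devices"):
    """Yield (device, current_width, max_width) for each PCI device
    that exposes link-width attributes in sysfs."""
    for dev in sorted(Path(sysfs_root).glob("*")):
        cur = dev / "current_link_width"
        mx = dev / "max_link_width"
        if cur.exists() and mx.exists():
            yield dev.name, int(cur.read_text()), int(mx.read_text())

if __name__ == "__main__":
    for name, cur, mx in pci_link_widths():
        print(f"{name}: x{cur} (max x{mx})")
```

A GPU that has been bifurcated down will show `x8` as its current width even when its max is `x16`.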

toast0|10 months ago

For Skylake, Intel ran 16 lanes of PCIe to the CPU, and ran DMI to the chipset, which had PCIe lanes behind it. Depending on the chipset, there would be anywhere from 6 lanes at PCIe 2.0 to 20 lanes at PCIe 3.0. My wild guess is that a board from back then would have put the M.2 behind the chipset, with no CPU-attached SSD for you; that fits with your report of the GPU having all 16 lanes.

But, if you had the nicer chipsets, Wikipedia says your board could split the 16 CPU lanes into two x8 slots or one x8 and 2 x4 slots, which would fit. This would usually be dynamic at boot time, not at runtime; the firmware would typically look if anything is in the x4 slots and if so, set bifurcation, otherwise the x16 gets all the lanes. Some motherboards do have PCIe switches to use the bandwidth more flexibly, but those got really expensive; I think at the transition to PCIe 4.0, but maybe 3.0?
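The boot-time decision described above can be modeled in a few lines. This is a hypothetical sketch of the firmware logic (the function name and slot layout are made up for illustration), not any vendor's actual code:

```python
def allocate_cpu_lanes(x4_slot_populated: list[bool]) -> list[int]:
    """Model a firmware bifurcation decision for 16 CPU lanes:
    if any secondary (x4) slot or CPU-attached M.2 is populated,
    drop the primary slot to x8 and give x4 to each populated
    secondary; otherwise the primary slot keeps the full x16.
    Returns [primary_lanes, secondary_1_lanes, secondary_2_lanes]."""
    if any(x4_slot_populated):
        return [8] + [4 if pop else 0 for pop in x4_slot_populated]
    return [16, 0, 0]

print(allocate_cpu_lanes([False, False]))  # [16, 0, 0]
print(allocate_cpu_lanes([True, False]))   # [8, 4, 0]
```

The key point is that the check happens once, during POST: the lane split is fixed until the next boot, which is why it isn't "dynamic" in the sense of reallocating lanes while the OS is running.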