
Free interactive tool that shows you how PCIe lanes work on motherboards

274 points | tagyro | 3 months ago | mobomaps.com

61 comments


nirav72|3 months ago

Nice! One suggestion - please add AM4 socket boards. With current memory prices, AM5 with DDR5 is becoming unattainable for some. DDR4 prices are rising as well. But not nearly as bad as DDR5.

Dylan16807|3 months ago

So you're specifically considering the people that would have gone AM5 but are now looking at AM4 at the end of 2025 and into 2026?

Is that a significant number of people? I kind of expect almost everyone that waited this long to sit tight on their current builds and keep waiting until RAM goes back down.

rao-v|3 months ago

I’ve been struggling to find an AM5 board that can run three MI50s at x4. This is perfect, thank you.

Hmm, are you sure about some of the PCIe slots? I think some marked as x4 get downgraded to x1 on these boards…

Further edit: this may be accurate. How are you getting / confirming this?

chainingsolid|3 months ago

I would normally figure this out by reading motherboard manuals, which for SKUs you can buy standalone tend to be on the manufacturer's site with no account or paywall. They tend to include all the "if you populate this slot you lose xyz" language, along with how to change PCIe lane bifurcation in the BIOS if necessary.
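For the "confirming it" part: on Linux, the kernel exposes each PCIe device's negotiated and maximum link width in sysfs, so a short script can flag slots running below their capability. A minimal sketch, assuming Linux's standard sysfs layout (the path and attribute names are the kernel's, but whether a given device exposes them varies):

```python
from pathlib import Path

def downgraded_links(root="/sys/bus/pci/devices"):
    """Return (device, current, max) for PCIe devices whose negotiated
    link width is below their maximum, e.g. an x4 card running at x1."""
    results = []
    base = Path(root)
    if not base.exists():
        return results  # not Linux, or sysfs not mounted
    for dev in sorted(base.iterdir()):
        try:
            cur = (dev / "current_link_width").read_text().strip()
            cap = (dev / "max_link_width").read_text().strip()
        except OSError:
            continue  # this device exposes no PCIe link attributes
        if cur != cap:
            results.append((dev.name, cur, cap))
    return results

for name, cur, cap in downgraded_links():
    print(f"{name}: running x{cur}, capable of x{cap}")
```

Note that a narrower negotiated width isn't always a board limitation: links can also train down due to power management or a badly seated card, so treat the output as a starting point, not proof.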

matja|3 months ago

How can I contribute the data for the boards I own which are not on the site?

throw7|3 months ago

I wish all manufacturers clearly gave info like this up front. AM4 boards would be nice.

PunchyHamster|3 months ago

Yeah, my ASRock has a nice map of every lane and interface and where they connect on the board. Especially important since some devices go through a second IO expander.

temp0826|3 months ago

Probably a good thing SLI fell out of fashion. There are no consumer boards with multiple x16 slots, but a few with two x8 (gated behind a "mode" switch). A few years ago it looked like we were on our way to a full four x16 slots. For CUDA/LLM/whatever, does it really matter if the cards are in x1 slots?

cjensen|3 months ago

It's the other way around. SLI falling out of fashion is why there are no consumer boards with multiple x16 slots. There's no longer any demand for it on the consumer side, so the CPU vendors only provide lots of PCIe lanes for expensive chips.

On the server side, seven x16 slot motherboards exist.

hoss1474489|3 months ago

Putting GPUs in x16 slots is still important for LLM stuff, especially multi-GPU, where lots of data needs to move between cards during computation.

Dylan16807|3 months ago

I would expect x8 at 5.0 speeds to be plenty for SLI. That's twice as fast as x16 slots were around the end of the SLI era.
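A quick sanity check of that "twice as fast" claim, using per-lane transfer rates from the PCIe generations and their encoding overheads (8b/10b for Gen 1/2, 128b/130b for Gen 3 and later); this is back-of-the-envelope, ignoring protocol overhead beyond encoding:

```python
# Approximate one-direction PCIe throughput in GB/s.
GT_PER_S = {1: 2.5, 2: 5.0, 3: 8.0, 4: 16.0, 5: 32.0}

def throughput_gbps(gen, lanes):
    encoding = 8 / 10 if gen <= 2 else 128 / 130
    return GT_PER_S[gen] * encoding / 8 * lanes

print(f"PCIe 3.0 x16: {throughput_gbps(3, 16):.1f} GB/s")  # late-SLI-era slot
print(f"PCIe 5.0 x8:  {throughput_gbps(5, 8):.1f} GB/s")
```

A PCIe 5.0 x8 link is indeed exactly double a 3.0 x16 link: two generation steps quadruple the per-lane rate, and halving the lane count takes half of that back.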

tryauuum|3 months ago

... shouldn't the logic be the opposite? "Bad that SLI went out of fashion: there's no way for two GPUs to communicate quickly over PCIe, and SLI would have provided such a fast bridge."

rkagerer|3 months ago

Can anyone recommend a specific, well-made, high-performance motherboard with loads of PCIe lanes and expansion slots, and sensible lane topology?

All the motherboards these days make me feel claustrophobic. My current workstation is pretty old, but feels like it had more expansion capability (relative to its time) than what's on the market today.

Aurornis|3 months ago

You’ll have to be more specific about your price range. There are a lot of server and workstation chipsets/platforms that will have a large number of PCIe lanes, but you will pay for them.

I really suggest not seeking a lot of PCIe lanes unless you really need them right now, though. The price premium for a platform with a lot of extra PCIe is very steep once you get past consumer boards. It would be a shame to spend a huge premium on a server board and settle for slower older tech CPUs only to have all of those slots sit empty.

It’s a good idea to add up the PCIe devices you will use and the actual bandwidth they need. You lose very little by running a GPU in a PCIe x8 slot instead of a full x16 slot, for example. A 10G Ethernet card only needs 1 lane of PCIe 4.0. Even fast SSDs can get away with half of their lanes and you’ll never notice except in rare cases of sustained large file transfers.
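The "add up what your devices actually need" exercise is easy to mechanize. A minimal sketch, assuming approximate per-lane PCIe bandwidth figures and my own ballpark device numbers (the device list and its bandwidth values are illustrative assumptions, not from the comment):

```python
import math

# Approximate one-direction bandwidth per PCIe lane in GB/s,
# after 128b/130b encoding overhead.
LANE_GBPS = {3: 0.985, 4: 1.969, 5: 3.938}

# Hypothetical peak needs in GB/s, one direction (ballpark figures).
DEVICES = {
    "10G Ethernet NIC": 1.25,   # 10 Gb/s line rate
    "Gen4 NVMe SSD": 7.0,       # ~7 GB/s sequential read
}

def lanes_needed(need_gbps, gen=4):
    """Smallest lane count of the given PCIe generation covering the need."""
    return math.ceil(need_gbps / LANE_GBPS[gen])

for name, need in DEVICES.items():
    print(f"{name}: {lanes_needed(need)} PCIe 4.0 lane(s)")
```

This reproduces the comment's 10G Ethernet point (one PCIe 4.0 lane suffices), and shows why halving an SSD's lanes mostly only shows up in sustained sequential transfers: random I/O rarely saturates even the reduced link.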

michaelt|3 months ago

Given that you've said 'workstation', if you've got a spare $5000, a Threadripper Pro comes with 128 PCIe 5.0 lanes.

This means you can get a motherboard like the "Asus Pro WS WRX90E-SAGE SE" which dedicates 104 lanes to seven PCIe slots and 16 lanes to four M.2 slots.

For more like $3000 you can get a non-Pro Threadripper; the "Asus Pro WS TRX50-SAGE" has a more restrained 48 PCIe 5.0 and 32 PCIe 4.0 lanes, meaning the board's five PCIe slots and three M.2 slots have a mixture of speeds and lanes.

The rest of the market seems to think you just want to plug in one huge four-slot GPU and perhaps one other card.

notpublic|3 months ago

Check out the following for AM5 motherboards. It helped me a lot when I was in the market. Seems to be well maintained still:

https://docs.google.com/spreadsheets/d/1NQHkDEcgDPm34Mns3C93...

I ended up getting an ASRock X870E Taichi Lite. The main reason I got it was that it has two CPU x8 slots, which are spaced perfectly for an Nvidia NVLink bridge. And they are Gen5 PCIe.

PeterStuer|3 months ago

If you really need lots of PCIe lanes, you're going to be moving up to the TRX50 (or used TRX40) and its ilk. A different price range from your typical enthusiast motherboard, though.

hengheng|3 months ago

Look into CXL, Oculink, and riser cables.

sidewndr46|3 months ago

Wow, this is great! I don't know how they generate this, but it's really impressive. One of the things that has surprised me is that some older dual-socket workstations have tons of PCIe lanes, but it seems none are hooked to the second CPU.

tripdout|3 months ago

Very cool. Seeing how almost everything, from WiFi to NVMe SSDs (to apparently USB ports sometimes?), is connected to it: is PCIe the only high-speed interconnect we have for peripherals to communicate with modern CPUs?

stinkbeetle|3 months ago

The high-speed signals that come out of mainstream CPU chips are generally DDR, SMP, and PCIe. Outside of a very few exotic things that use QPI or HT to connect, or exotic storage that might use DDR, yes, high-speed off-chip peripherals use PCIe.

NVLink is another one you might have heard of, although it might also fall in the exotic category. I think some systems take AXI off-chip too. So there's various other weird and wonderful things. But none you're likely to have in your PC I think.

On-chip is another story, you can connect USB or NVMe or GPU "peripherals" using an on-chip interconnect type. But I guess you are asking about off-chip.

baby_souffle|3 months ago

> PCIe the only high-speed interconnect we have for peripherals to communicate with modern CPUs?

In a pedantic/technical sense, no. Practically speaking though, yes.

crote|3 months ago

USB4 needs PCIe because its Thunderbolt part has PCIe tunneling.

gitpusher|3 months ago

Whoa. This is so cool and helpful. Too bad my board is Intel. Is there a way to contribute to this?

tagyro|3 months ago

I dropped a message to the creator :fingers_crossed: they open the motherboard database so we can make contributions

mifreewil|3 months ago

Very nice! Just a note (as the site says in the bottom left), this can vary depending on the CPU you use. It would be nice to be able to select all the different variations of supported CPUs as a future feature.

smcleod|3 months ago

That is so incredibly useful. Hardware vendors do such a bad job of properly advertising how many GPUs will actually work, and with what combination of M.2 slots in use.

SketchySeaBeast|3 months ago

Yup. I've been lost for a while on how to properly set up my MSI X870 TOMAHAWK mobo, this makes it all clear. Boy is it a mess with all the bifurcation.

consp|3 months ago

Bifurcation support is also almost never mentioned, even if the bios supports it.

max002|3 months ago

So cool xD I think you could turn it into a tool with premium features for people who are learning :)

asciii|3 months ago

Warning: addicting site :)