
PCI Express on the Raspberry Pi 4

619 points | trollied | 6 years ago | mloduchowski.com

163 comments

[+] buildbuildbuild|6 years ago|reply
Fun. If you want PCIe on a SBC without the soldering, I highly recommend perusing Hackerboards. I'm very happy with my RockPi 4 (4GB RAM, PCIe, USB 3, 6 cores), which I discovered through their excellent database.

https://www.hackerboards.com/search_boarddb.php

[+] hajile|6 years ago|reply
If I were moving away from the Raspberry Pi 4, I'd definitely consider the Nvidia Jetson Nano. It comes with a massive cooler attached. The Pi 4 needs a cooler, which will run you around $20. That puts you rather close to the Nano in price ($75 vs $100), but the Nano also has a GPU that is enormously more powerful and well worth the extra $25. Not needing special HDMI cables (or adapters) for the Nano is another money saver.

The biggest factor though is support. Raspberry Pi has a lot of software support, so you aren't running into weird bugs here and there with nobody around to help. The Jetson community isn't nearly as big, but Nvidia's track record on their software support is generally quite good. In this case, they have an extra interest given their push for commercial applications and that the X1 sees use in the Nintendo Switch and Shield TV (among other things).

[+] beatgammit|6 years ago|reply
I think you mean the RockPro64. The Rock64 only has 4 cores and no PCIe.

That being said, I missed the PCIe on the specs last time I was comparing SOCs and I had forgotten about hackerboards, thanks for the reminder!

[+] bjoli|6 years ago|reply
The official heat sink is amazing, and can be used together with a PoE hat. That is a huge winner feature for me since the small fans are obnoxiously loud.
[+] justinclift|6 years ago|reply
Not seeing a mention of PCIe as an option that can be searched?
[+] Jonnax|6 years ago|reply
That's really cool. I'm curious, would it be possible to use a modern GPU (running at 1x) on an ARM based board?

Would the open source drivers that are part of the kernel work out of the box on ARM?

[+] qdot_me|6 years ago|reply
Hack’s creator here - it’s on my list of things to try. GPUs are notoriously hard to get working on non-Intel platforms; I’ve tried to bring a few up on Alpha and Itanium in the past.

The video BIOS expects to run, and expects a well-behaved Intel CPU to do the power-up. That said, X can sometimes emulate these quite well. On ARM we’d also run into alignment issues and likely other quirks - but in principle...

[+] mntmn|6 years ago|reply
You might run into address space issues. I haven’t checked Broadcom PCIe documentation for RPi4 (is there any?), but I tried a very similar hack with i.MX6 and older AMD and nVidia cards. They get recognized fine, but BARs cannot be mapped because they don’t fit in i.MX6’s tiny 16MB PCIe space.
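To make the BAR-mapping problem concrete, here's a minimal sketch (in Python, with hypothetical readback values) of the standard PCI BAR-sizing computation: software writes all-ones to a Base Address Register, reads it back, and the bits hardwired to zero reveal the region size. A GPU whose BAR decodes to 256 MB simply cannot be mapped into a 16 MB PCIe window:

```python
def mem_bar_size(readback: int) -> int:
    """Size of a 32-bit memory BAR, given the value read back
    after writing 0xFFFFFFFF to it (the PCI spec sizing procedure).
    The low 4 bits of a memory BAR are flag bits, not address bits."""
    addr_mask = readback & 0xFFFFFFF0          # drop the flag bits
    return ((~addr_mask) & 0xFFFFFFFF) + 1     # invert hardwired zeros, add 1

# Hypothetical readbacks: a 256 MB GPU BAR vs. a 16 MB mappable window.
gpu_bar = mem_bar_size(0xF0000000)   # 268435456 bytes = 256 MB
print(gpu_bar > 16 * 1024 * 1024)    # True: won't fit in a 16 MB PCIe space
```

The readback values here are made up for illustration; on real hardware they come from config-space accesses (e.g. via `/sys/bus/pci` on Linux).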
[+] edude03|6 years ago|reply
Yes, if you look around, people have even recompiled opensource drivers for RISC-V to use AMD cards.
[+] true_tuna|6 years ago|reply
If you want it for ML you could check out the Coral TPU. (Tensor Processing Unit over USB) I picked up a couple of those yesterday.
[+] segfaultbuserr|6 years ago|reply
It looks like an unreliable modification. Running a GHz-level interface over jumper wires makes it almost impossible to control the impedance. It's a cool proof-of-concept, though.

But is it possible to bring the project to the next level? Could one make a pin-compatible daughterboard for the QFN footprint, with an extension connector? To use it, just desolder the USB chip, solder the daughterboard in its place, and you're ready to go. It would be one of the coolest Raspberry Pi projects!

[+] qdot_me|6 years ago|reply
Yup. The daughterboard is on my mind. Likely flex-PCB and that’s gonna take a week or two to respin. Hence I’m collecting ideas for various daughtercards I could cram in a panel before sending it off - straight-through to riser via USB3, expresscard SMT, maybe through-hole 1x?

That said, PCIe PHYs are extremely robust - they do most of the impedance matching and delay-mismatch training. And if you don’t ruin the onboard caps, this could be jumpered straight across.

[+] kees99|6 years ago|reply

  It looks like an unreliable modification.
PCIe is surprisingly robust at short lengths. For example, [NanoPi_M4] has two lanes of PCIe coming to the daughterboards via old-school 0.1" connector. Something that many electrical engineers would cringe at, and yet - it works rather reliably.

[NanoPi_M4] http://wiki.friendlyarm.com/wiki/index.php/NanoPi_M4#Layout

[+] monocasa|6 years ago|reply
I've seen pcie literally run over a metal clothes hanger soldered to the board. It's extremely tolerant of terrible quality connections.
[+] madengr|6 years ago|reply
36 gauge wire, and twisting them into pairs would help, or parallel and flat against the PCB.
[+] dwheeler|6 years ago|reply
It'd be nice if there was an easier way to do this (vs. removing a chip!). E.g., maybe a dedicated pinout and an easy way to disable the existing use (since the pins can't be shared).
[+] noobermin|6 years ago|reply
Now this is the content I come to HN for. A serious hack just days after the 4 was released. Kudos to the OP.

I envy people like OP for their tenacity. I barely have time to follow what's happening in IT, much less get ahead of the pack doing cool hacks like this.

[+] userbinator|6 years ago|reply
On the other hand, the fact that the RPi ecosystem remains notoriously proprietary (even the USB controller is a bastard variant that has next to no documentation --- of all the ones available, they had to choose that one) continues to be disappointing.

I definitely like this sort of hack, but such hacks with documentation already available (and doing more than documented, basically) are certainly preferable.

[+] anderspitman|6 years ago|reply
I'm sure you are but just in case are you aware of hackaday.com?
[+] baybal2|6 years ago|reply
Raspberry Pi has 2x Gigabit RGMIIs on those SoCs, but they don't wire them out. It's a waste, I think.
[+] wang_li|6 years ago|reply
Aren't PCIe lanes shared? Why would I need to remove the USB 3.0 chip rather than just hooking right to the pins on the device where it's soldered in place?

E: Apparently it's the PCI bus that is shared, not PCI Express lanes. Ty.

[+] Kirby64|6 years ago|reply
Nope. PCIe lanes are not shared. There are some chips (and a lot of motherboards) that allow (or automatically perform) remapping of lanes, though. That's why if you check a motherboard with SLI/Crossfire, it usually has some setting in the BIOS to either dedicate all 16 lanes to 1 PCIe 16x slot or split 2 PCIe 16x slots 8 and 8.

The chip has to be removed.

[+] strmpnk|6 years ago|reply
AFAICT lanes are not shared but there are chipsets which can break lanes out into other sets of lanes which are then routed back onto the original set of lanes. So if your CPU has 16 lanes you can hang a chip off of it which then provides more lanes which are then signaled back to the CPU over some subset of those lanes.

It’s not clear if the lanes themselves can be multiplexed with packets from many devices but they can change the number of assigned lanes after initialization so a clever chipset could probably dynamically allocate lanes as used.

[+] Millennium|6 years ago|reply
If a Pi is capable of this already, why not replace the Ethernet, charging, micro-HDMI, and USB ports with a boatload of type-C Thunderbolt ports (plus support for the HDMI 1.4 alt mode)? Would 8xUSB-C cost that much more than 1xUSB-C+1xEthernet+2xMicro-HDMI+2xUSB3+2xUSB2 (with no PCI Express), in exchange for a considerably more flexible device?
[+] peterburkimsher|6 years ago|reply
Nice work! Would this be compatible with an M.2 to PCIe adaptor? [1]

Being able to attach an Intel 660P and get 2 TB of fast SSD storage on a Raspberry Pi would be sweet.

[1] https://www.amazon.com/EZDIY-FAB-Express-Adapter-Support-221...

[+] elFarto|6 years ago|reply
Technically yes, but it's only a PCIe 1x Gen2 slot, so only 500MB/s of bandwidth (4x Gen3 is ~4GB/s). You'd be better off with a USB 3.0 to M.2 adapter.
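For a rough sense of those numbers, here's a back-of-the-envelope calculator (Python; the per-generation rate/encoding table is standard PCIe, the function name is mine): each Gen2 lane runs at 5 GT/s with 8b/10b encoding, so x1 tops out at 500 MB/s, while Gen3 x4 with 128b/130b encoding lands near 4 GB/s:

```python
# Per-lane transfer rate (GT/s) and encoding efficiency by PCIe generation.
PCIE_GENS = {
    1: (2.5e9, 8 / 10),     # Gen1: 2.5 GT/s, 8b/10b encoding
    2: (5.0e9, 8 / 10),     # Gen2: 5 GT/s, 8b/10b encoding
    3: (8.0e9, 128 / 130),  # Gen3: 8 GT/s, 128b/130b encoding
}

def pcie_throughput_mbs(gen: int, lanes: int) -> float:
    """Theoretical one-direction throughput in MB/s (ignores protocol overhead)."""
    rate, efficiency = PCIE_GENS[gen]
    return rate * efficiency * lanes / 8 / 1e6  # bits -> bytes, Hz -> MB/s

print(pcie_throughput_mbs(2, 1))  # 500.0 MB/s: the Pi 4's exposed x1 Gen2 link
print(pcie_throughput_mbs(3, 4))  # ~3938 MB/s: a typical NVMe x4 Gen3 link
```

Real-world SSD numbers will be lower still once link-layer and protocol overhead are accounted for.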
[+] Zenst|6 years ago|reply
This is something that will probably prove more accessible with the Zero flavour of the 4.
[+] mng2|6 years ago|reply
Wow, that was quick. Given that this is Broadcom, I don't suppose there is any visibility into the Root Complex? When troubleshooting PCIe it'd be nice to have the LTSSM state at least. Would be really cool to get eye diagrams...
[+] yetihehe|6 years ago|reply
Hmm, how about using this for fast interconnect for making rpi clusters?
[+] epynonymous|6 years ago|reply
this is too awesome! that's quite a lot of work to get the pcie exposed; soldering and such i try to stay away from, so kudos to the author.

the form factor of pcie devices doesn't really play well with rpi, but there's definitely a need for faster, more stable persistent storage. i have heard of a lot of issues with microsd cards, wear leveling and such. it would be really nice if rpi could add something like an m.2 interconnect so i could install an nvme ssd within the form factor of an rpi; that would make for a truly incredible little machine.

[+] nereid|6 years ago|reply
Interesting. Install a SATA card and make a NAS. I think that's better than USB.
[+] boyadjian|6 years ago|reply
Why not, but I will wait for a classic ATX ARM motherboard instead. It should happen this year.
[+] mallets|6 years ago|reply
I think I will wait for the pi 4 compute model, tyvm.
[+] teamski|6 years ago|reply
Hijacking the thread:

I develop remotely on VPSes because I like to have an always-on box reachable from any client. I am wondering if a RP4 offers a similar experience at lower cost.

Does anyone use a RP for this?

[+] numlock86|6 years ago|reply
I do this with an RPi3 and it's doing well, so it's doable. It strongly depends on your setup and development environment, though. Do you want vi to work over SSH, or full VNC access to a machine with Gnome and Eclipse? Or something in between, like X forwarding? Also, is aarch64 even an option as a host system? (compilation, software availability, etc.)