tiernano|1 month ago

Hmmm... wondering if this could eventually be used to emulate a PCIe card using another device, like a Raspberry Pi or something more powerful. I'm thinking of a card you could stick in a machine, anything from an x1 to an x16 slot, that emulates a network card (you could run a VPN or other stuff on the card and offload it from the host) or storage (something with enough power to run ZFS and a few disks, presented to the host as a single disk, allowing ZFS on devices that would not otherwise support it). But this is probably not easy...

cakehonolulu|1 month ago

Hi! Author here! You can technically offload the transactions the real driver on your host does to wherever you want, really. PCI is very delay-tolerant and usually negotiates with the device, so I don't see much of an issue doing that, provided you can manage the throughput efficiently across the architecture. The thing that makes PCIem special is that you are pretty much free to do whatever you want with the accesses the driver makes; you have total freedom.

I have made a simple NVMe controller (with a 1GB drive I basically malloc'd) which pops up on the local PCI bus, and Linux's regular nvme block driver attaches to it just fine. You can format it, mount it, create files and folders... it's kinda neat. I also have a simple, dumb rasteriser that I made inside QEMU and wanted to write a driver for; since none existed, I used PCIem to redirect the driver writes to the QEMU instance hosting the card (and was thus able to run software-rendered DOOM, plus OpenGL 1.x-based Quake and Half-Life ports).
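
To make the malloc'd-drive idea concrete, here's a rough C sketch of the kind of RAM-backed store such an emulated controller can service requests against (names like ram_disk_rw are illustrative only, not PCIem's actual API):

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    #define DISK_SIZE (1ULL << 30)    /* 1 GiB backing store, as in the demo */
    #define LBA_SIZE  512

    static uint8_t *disk;

    int ram_disk_init(void)
    {
        disk = calloc(1, DISK_SIZE);  /* the whole "drive" is just RAM */
        return disk ? 0 : -1;
    }

    /* One NVMe-style transfer: the host driver's request ends up as a
       plain memcpy against the malloc'd buffer. */
    int ram_disk_rw(uint64_t slba, uint32_t nlb, void *buf, int is_write)
    {
        uint64_t off = slba * (uint64_t)LBA_SIZE;
        uint64_t len = (uint64_t)nlb * LBA_SIZE;

        if (off > DISK_SIZE || len > DISK_SIZE - off)
            return -1;                /* would map to an NVMe error status */
        if (is_write)
            memcpy(disk + off, buf, len);
        else
            memcpy(buf, disk + off, len);
        return 0;
    }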

topspin|1 month ago

> PCI is very delay-tolerant

That fascinates me. Intel deserves a lot of credit for PCI. They built in future-proofing for use cases that wouldn't emerge for years, at a time when their bread and butter was PC processors and peripheral PC chips, and they could have done far less. The platform independence and general openness (PCI-SIG) are also notable for something that came from 1990 Intel.

tonyplee|1 month ago

Can one make a PCIe analyzer out of your code base by proxying all transactions through a virtual PCIem device to a real driver?
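
Something like this C sketch is what I have in mind (everything here, including the real_mmio_* backend, is hypothetical):

    #include <stdint.h>
    #include <stdio.h>

    /* Stubs standing in for the path to the real device/driver. */
    static uint64_t real_mmio_read(uint64_t addr, unsigned size) { (void)addr; (void)size; return 0; }
    static void real_mmio_write(uint64_t addr, uint64_t val, unsigned size) { (void)addr; (void)val; (void)size; }

    /* The virtual device logs each access and forwards it unchanged,
       so the log becomes a transaction trace. */
    static uint64_t proxy_mmio_read(uint64_t addr, unsigned size)
    {
        uint64_t val = real_mmio_read(addr, size);
        fprintf(stderr, "RD addr=%#llx size=%u -> %#llx\n",
                (unsigned long long)addr, size, (unsigned long long)val);
        return val;
    }

    static void proxy_mmio_write(uint64_t addr, uint64_t val, unsigned size)
    {
        fprintf(stderr, "WR addr=%#llx size=%u <- %#llx\n",
                (unsigned long long)addr, size, (unsigned long long)val);
        real_mmio_write(addr, val, size);
    }

    int main(void)
    {
        proxy_mmio_write(0x1000, 0xabcd, 4);
        return (int)proxy_mmio_read(0x1000, 4);
    }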

baruch|1 month ago

Is it possible to put such an NVMe driver under igb_uio or another UIO interface? I have an app that uses raw NVMe devices, and being able to test strange edge cases would be a real boon!

jacquesm|1 month ago

Fantastic tool, thank you for making this. It is one of those things you never knew you needed until someone took the time to put it together.

gigatexal|1 month ago

This is really interesting. Could it be used to carve up a host GPU for use in a guest VM?

s4mbh4|1 month ago

I wonder if it's possible to create a Wireshark plugin for analyzing PCIe?

MisterTea|1 month ago

This kind of stuff is stupid easy on an OS like Plan 9, where you speak a single protocol: 9P. Ethernet devices are abstracted and served by the kernel as a file system, as explained in ether(3). Since it's all 9P, the system doesn't care where the server is running; it could be a local in-kernel/user-space server or a remote server over ANY two-way link, including TCP, IL, a PCIe link, an RS232 port, SPI, USB, etc. This means you can mount individual pieces of hardware or networking stacks like ip(3), or any 9P server, from other machines into a process's local namespace. Per-process namespaces let you customize a process's view of the file system, and hence that of all its children, allowing you to customize each and every program's view of resources.
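
For example, here's a rough Plan 9 C sketch of grafting a remote machine's file server into the local namespace ("remotebox" and the mount point are placeholders):

    #include <u.h>
    #include <libc.h>

    void
    main(void)
    {
        int fd;

        /* dial() gives a two-way connection; 9P doesn't care what
           transport sits underneath */
        fd = dial(netmkaddr("remotebox", "tcp", "9fs"), 0, 0, 0);
        if(fd < 0)
            sysfatal("dial: %r");

        /* mount the remote 9P server onto /n/remote in this
           process's private namespace */
        if(mount(fd, -1, "/n/remote", MREPL, "") < 0)
            sysfatal("mount: %r");

        /* the remote kernel's ether0 is now just files */
        print("see /n/remote/net/ether0\n");
        exits(nil);
    }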

There is interest in getting 9front running on the Octeon chips. This would let you run anything you want on an Octeon card (Plan 9's cross-platform support is first class): boot the card using the host's root file system, write and test a program on the host, change the objtype env variable to mips/arm, build the binary for the Octeon, and then run it on the Octeon using rcpu (like running a command remotely via ssh). All you need is a working kernel on the Octeon and a host kernel driver; the rest works out of the box.

3PS|1 month ago

This is also the case with Google Fuchsia; just replace 9P with FIDL. I'm really hoping Fuchsia doesn't end up being vaporware, since it has made some very interesting technical decisions (often borrowing from Plan 9, NixOS, and others).

pjc50|1 month ago

> emulate a PCIe card using another device

The other existing solution to this is FPGA cards: https://www.fpgadeveloper.com/list-of-fpga-dev-boards-for-pc... - note the wide spread in price. You then also have to deal with FPGA tooling. The benefit is much better timing.

cakehonolulu|1 month ago

Indeed, and even then, there's some SW/HW co-design tooling that kinda helps you do what PCIem does, but it's usually really pricey; so I figured it'd be a good thing to have for free.

In my experience, PCIe prototyping is not something super straightforward if you don't want to pay hefty sums.

xerxes901|1 month ago

Something like the STM32MP2 series of MPUs can run Linux and act as a PCIe endpoint you can control from a kernel module on the chip. So you can program an arbitrary PCIe device that way (although it won't be setting any speed records, and I think the PHY might be limited to a single PCIe lane, x1).
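
For reference, the Linux side of that uses the kernel's PCI endpoint framework; below is a rough skeleton modeled loosely on the in-tree pci-epf-test function driver. The IDs are made up, and the exact signatures have shifted across kernel versions, so treat this as a sketch and check drivers/pci/endpoint/functions/pci-epf-test.c for the current form:

    #include <linux/module.h>
    #include <linux/pci-epc.h>
    #include <linux/pci-epf.h>

    /* Config-space header the host will enumerate; vendor/device IDs
       here are placeholders. */
    static struct pci_epf_header my_header = {
        .vendorid = 0x1234,
        .deviceid = 0x5678,
    };

    static int my_epf_bind(struct pci_epf *epf)
    {
        /* program the endpoint controller with our header */
        return pci_epc_write_header(epf->epc, epf->func_no, &my_header);
    }

    static void my_epf_unbind(struct pci_epf *epf)
    {
    }

    static struct pci_epf_ops my_ops = {
        .bind   = my_epf_bind,
        .unbind = my_epf_unbind,
    };

    static int my_epf_probe(struct pci_epf *epf)
    {
        epf->header = &my_header;
        return 0;
    }

    static const struct pci_epf_device_id my_ids[] = {
        { .name = "my_epf" },
        { /* sentinel */ },
    };

    static struct pci_epf_driver my_driver = {
        .driver.name = "my_epf",
        .probe       = my_epf_probe,
        .id_table    = my_ids,
        .ops         = &my_ops,
        .owner       = THIS_MODULE,
    };

    static int __init my_epf_init(void)
    {
        return pci_epf_register_driver(&my_driver);
    }
    module_init(my_epf_init);

    static void __exit my_epf_exit(void)
    {
        pci_epf_unregister_driver(&my_driver);
    }
    module_exit(my_epf_exit);

    MODULE_LICENSE("GPL");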

jdub|1 month ago

(Ha, nice to see Jon Corbet's name on the PCI Endpoint documentation...)

tiernano|1 month ago

Interesting... x1 would be too slow for large amounts of storage, but as a test, a couple of small SSDs could potentially be workable... sounds like I'm doing some digging...

asdefghyk|1 month ago

Could one add one or more reprogrammable FPGAs to such a card, for extra processing power or for ease of reconfiguration?

I've often wondered why such a card (with an FPGA) isn't available for retro computer emulation or simulation.

wmf|1 month ago

This is what DPUs are for.

hhh|1 month ago

This is what DMA cards do.

immibis|1 month ago

I recently bought a DMA cheating card because it's secretly just an FPGA PCIe card. Haven't tried to play around with it yet.

Seems unlikely you'd emulate a real PCIe card in software because PCIe is pretty high-speed.