cakehonolulu's comments

cakehonolulu | 1 month ago | on: Linux kernel framework for PCIe device emulation, in userspace

Highly interesting! I kinda didn't want to rely on QEMU as a default "end" for the emulation (As in, I want the end user to be able to choose whatever userspace shim/transport layer/thing they want), but for some of my tests I did forward accesses to QEMU itself (And it worked wonders). Thanks for that link! Super cool stuff!

cakehonolulu | 1 month ago | on: Linux kernel framework for PCIe device emulation, in userspace

Hi!

PCIEM_EVENT_MMIO_READ is kept for reference in the while (head != tail) loop just to remind the user that events of more than one access type can be registered and show up in the ring.

Let's say that you register your watchpoint in READ mode (I still have to change the IOCTL for that, as it's currently hardcoded for writes: attr.bp_type = HW_BREAKPOINT_W); then you'd be consuming PCIEM_EVENT_MMIO_READ events instead of PCIEM_EVENT_MMIO_WRITE.

The PCIEM_EVENT_MMIO_READ define is there to remind me to incorporate that missing logic.

cakehonolulu | 1 month ago | on: Linux kernel framework for PCIe device emulation, in userspace

I feel like libvfio-user is a cool project and works perfectly fine, that is, if you want to have the device in the host's userspace but exposed to a VM (QEMU, in this case).

PCIem kinda does that, but it's down a level; in the sense that it basically pops the device onto your host PCI bus, which lets real, unmodified drivers interact with the userspace implementation of your card: no QEMU, no VM, no hypervisors.

That's not to say you can't then, for instance, forward all the accesses to QEMU (Some people/orgs already have their cards defined in QEMU, so it'd be a bit pointless to redefine the same stuff over and over, right?), so they're free to basically glue their QEMU stuff to PCIem in case they want to try the driver directly on the host while keeping the functional emulation in QEMU. PCIem takes care of abstracting the accesses and whatnot with an API that tries to mimic what the cool people over at KVM do.

cakehonolulu | 1 month ago | on: Linux kernel framework for PCIe device emulation, in userspace

Hi! That's correct. We need a way to have a chunk of what Linux calls "Reserved" memory for the virtual BAR trick. Currently, PCIem only thinks about a single device (Since I first needed to have something that worked in order to check how feasible this all was), but there's planned support for multiple devices that can share a "Reserved" memory pool dynamically so you can have multiple BARs for multiple devices.

cakehonolulu | 1 month ago | on: Linux kernel framework for PCIe device emulation, in userspace

You can definitely proxy the transactions wherever you may see fit, but I'm not completely sure how that'd work.

As in, PCIem is going to populate the bus with virtually the same card (At least in terms of capabilities, vendor/product id... and whatnot), so I don't see how you'd then add another layer of indirection that could somehow transparently relay the unfiltered transaction stream PCIem provides to an actual PCIe card on the bus. I feel like there are many colliding responsibilities in this.

I would instead suggest having some sort of behavioural model (As in, a predefined set of data to feed from/to) and having PCIem log all the accesses your real driver does. That way the driver would have enough infrastructure not to crash, and at the same time you'd get the transport layer information.

cakehonolulu | 1 month ago | on: Linux kernel framework for PCIe device emulation, in userspace

Hi! Sorry, this is an issue on my side; I forgot to update the documentation's example with the latest changes.

You basically have the kernel's eventfd notify you about any access triggered (Based on your configuration), so from userspace you have the eventfd, and then you mmap the shared lock-less ring buffer that actually contains the events PCIem publishes (So you don't end up busy-polling).

You basically mmap a struct pciem_shared_ring where you'll have your usual head/tail pointers.

From then on, in your main loop, you'd have a select() or a poll() on the eventfd; when PCIem notifies userspace you'd check head != tail (Which means there are events to process) and you can basically do:

struct pciem_event *event = &event_ring->events[head];
atomic_thread_fence(memory_order_acquire);
if (event->type == PCIEM_EVENT_MMIO_WRITE)
    handle_mmio_write(...);

And that's it, don't forget to update the head pointer!

I'll go and update the docs now. Hopefully this clears stuff up!

cakehonolulu | 1 month ago | on: Linux kernel framework for PCIe device emulation, in userspace

Sure! Let's say you (Or the company you work for) are trying to develop an NVMe controller card, or a RAID card, or a NIC...

Usually, without actual silicon, you are pretty limited in how much you can anticipate the software that'll run on it.

What if you want to write a driver for it w/o having to buy auxiliary boards that act as your card? What happens if you already have a driver and want to do some security testing on it, but don't have the card or don't want to use a physical one for any specific reason (Maybe some UB in the driver pokes at some register that kills the card? Just making up disastrous scenarios to prove the point, hah).

What if you want to add explicit failures to the card so that you can try and make the driver as tamper-proof and as fault-tolerant as possible (Think, getting the PCI card out of the bus w/o switching the computer off)?

Testing your driver functionally and/or behaviourally in CI/CD on any server (Not requiring the actual card!)?

There's quite a bunch of stuff you can do with it; being in userspace means that you can get as hacky-wacky as you want (Heck, I have a dumb-framebuffer-esque, OpenGL 1.X-capable QEMU device I wanted to write a driver for, just for fun, and I used PCIem to forward the accesses to it).

cakehonolulu | 1 month ago | on: Linux kernel framework for PCIe device emulation, in userspace

As in, getting the PCIem shim to show up in a VM (Like passthrough)? If that's what you're asking, then yes, it's something being explored currently. The main challenges come from the subsystem having to "unbind" the device from the host and do the reconfiguration (IOMMU, interrupt routing... and whatnot). But from my initial findings, it doesn't look like an impossible task.

cakehonolulu | 1 month ago | on: Linux kernel framework for PCIe device emulation, in userspace

Indeed, the project has gone through a few iterations already (It was first a monolithic kernel module that required a secondary module to call into the API and whatnot). I've moved towards a more userspace-friendly design mainly so that you can iterate on your changes much, much faster. Creating the synthetic PCI device is as easy as opening the userspace shim you program; it'll then appear on your bus. When you want to test new changes, you close the shim normally (Effectively removing it from the bus), and you can repeat this process as many times as needed.

cakehonolulu | 1 month ago | on: Linux kernel framework for PCIe device emulation, in userspace

Indeed, and even then, there's some SW/HW co-design tooling that kinda helps you do what PCIem does, but it's usually really pricey; so I kinda thought it'd be a good thing to have for free.

PCIe prototyping is usually not something super straightforward if you don't want to pay hefty sums IME.

cakehonolulu | 1 month ago | on: Linux kernel framework for PCIe device emulation, in userspace

Hi! Author here! You can technically offload the transactions the real driver on your host does to wherever you want, really. PCI is very delay-tolerant and it usually negotiates with the device, so I don't see much of an issue doing that, provided you can efficiently manage the throughput throughout the architecture. The thing that kinda makes PCIem special is that you are pretty much free to do whatever you want with the accesses the driver does; you have total freedom. I have made a simple NVMe controller (With a 1GB drive I basically malloc'd) which pops up on the local PCI bus (And the regular Linux nvme block driver attaches to it just fine). You can format it, mount it, create files, folders... it's kinda neat. I also have a simple dumb rasteriser that I made inside QEMU that I wanted to write a driver for, but since it doesn't exist, I used PCIem to help me redirect the driver writes to the QEMU instance hosting the card (And was thus able to run software-rendered DOOM, and OpenGL 1.X-based Quake and Half-Life ports).