The less there is between your application and the hardware, the better you can utilise it. There'd be basically no contention for resources, and IO could go almost straight to the drive with fewer fsync shenanigans, etc.
I don't think there'll be much. Most modern relational databases rely on the underlying filesystem to manage the persistence aspect. Many cannot even work with a bare block device (I don't think PostgreSQL can). So they need the OS to provide at least that.
However, in principle, if you could design a database to run as a unikernel, you could benefit from creating your own persistence layer. For instance, you might be able to completely, or to a large extent, avoid the problems created by fsync.
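To make the fsync problem concrete, here's a minimal Python sketch (illustrative only, not taken from any particular database) of the POSIX "durable write" dance a database sitting on a filesystem typically has to perform. A unikernel database owning the raw block device could skip most of this and schedule its own flushes:

```python
import os

def durable_write(path: str, data: bytes) -> None:
    """Write `data` to `path` so it survives a crash, the hard way:
    write to a temp file, fsync it, atomically rename it into place,
    then fsync the directory so the rename itself is persisted."""
    tmp = path + ".tmp"
    fd = os.open(tmp, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)  # force file contents down to the device
    finally:
        os.close(fd)
    os.rename(tmp, path)  # atomic replace of the old version
    dfd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
    try:
        os.fsync(dfd)  # persist the directory entry too
    finally:
        os.close(dfd)
```

Every durable commit pays for (at least) two fsync round trips through the kernel; a database that owns the device can instead batch and order its own writes.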
Another aspect you could aim for is becoming a real-time database, because you'd have control over memory allocation and thread context switching. This may not give any tangible benefit to the average database user, but in cases where being real-time is relevant (e.g. medical equipment) you'd certainly be able to expand your area of application.
So Nanos can run quite a few databases today, and you should get the same sort of experience as running a database on any VM (that is, everything running in the cloud).
You are absolutely correct, though, that most filesystems in use today were designed to deal with the various aspects of running on actual hardware, which is quite different from running on a VM. I think there is a ton of room for newer filesystems to emerge that are tuned for virtualized workloads rather than for hardware.
From a security POV, this particular implementation uses threads, so it automatically gets bonus points IMO when dealing with the problems discussed on the linked mailing-list thread w.r.t. large memory.
However, from an arguably more important POV, usability: if you look at the vast collection of docker-compose.yml files out there, a ton of them want postgres to run. Yes, you could run it on the side and run the rest as unikernels, but that breaks the UX of the compose functionality. It is such a small thing, but having it goes an insanely long way towards a better developer experience.
We now have compose-like support too, so you can take some app that spins up 10+ instances locally and spin up postgres inside as well, alongside the others. So: usability/DX.
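As an illustrative sketch of the kind of file being described (the service names, image tags, and credentials here are placeholders, not from the thread), a typical compose setup that expects postgres looks like:

```yaml
# Hypothetical docker-compose.yml: an app plus the postgres
# service that so many compose files assume is available.
services:
  app:
    image: example/app:latest   # placeholder image
    depends_on:
      - db
    environment:
      DATABASE_URL: postgres://app:secret@db:5432/app
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: app
```

With compose-like support in the unikernel tooling, the `db` service can be brought up as a unikernel VM instead of a container without changing this developer workflow.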
FridgeSeal|2 years ago
tpetry|2 years ago
crabbone|2 years ago
eyberg|2 years ago
eyberg|2 years ago