It's a cool idea; as other posters have mentioned, there have been other projects mapping VRAM for swap, etc.
I'd personally be wary of putting anything too important into VRAM. About five years ago I did a bunch of work testing consumer GPU memory for reliability [1, 2]. Because until then GPUs were primarily used for error-tolerant applications (graphics) storing only short-lived data (textures) in memory, there wasn't a whole lot of pressure to make the memory as reliable as that found on the main system board. We found that there was indeed a persistent, low level of memory errors that could be triggered depending on access pattern. I haven't followed up for recent generations, but the fact that the "professional" GPGPU boards both clock their memory slower and include hardware ECC is a possible cause for concern with leaving anything too important on the GPU for a long time.
There's code [3,4], too, but I haven't actively worked on it in a few years, so no guarantees on how well it runs nowadays...
While memory errors in textures would usually cause only visual artifacts (unless used for data), memory errors in executable code, shader programs, vertex data, and other types of data could easily cause more fatal problems.
GPUfs: Integrating a File System with GPUs. Mark Silberstein (UT Austin), Bryan Ford (Yale University), Idit Keidar (Technion), Emmett Witchel (UT Austin)
The concept is not new - I remember some utilities back in the early 90s that would let DOS use the VRAM on a VGA card (256KB available, but slightly less than 4KB actually needed in 80x25 text mode - and 256KB was a big amount in those days). There were some demos that used this to their advantage too.
Could anybody please explain to me why VRAM needs special treatment compared to regular system RAM in this use case? Assuming we can perform an allocation in VRAM (probably using the OpenCL API), why can't we use the tmpfs/ramfs code? Do I understand correctly that PCI maps VRAM to a certain memory region and it is accessible via regular CPU instructions? Is it because CPU caching is different, or because VRAM is uncacheable? Or is it something else?
VRAM is not mapped into the same memory space as your normal RAM, and it is not directly accessible via regular CPU instructions. It's wholly owned by the GPU, and the GPU is who the CPU has to talk to in order to use it.
This is in fact a (if not the) major limiting factor to expanded use of GPUs for general purpose calculations: you always have to copy input and results between video RAM and normal RAM.
Your graphics drivers aren't written to be able to share resources with tmpfs. Going through OpenCL ensures that the graphics drivers know about and will respect any VRAM allocations.
Ideally, then, one should be able to use spare VRAM as second-level RAM - an area to page things out to before they go to disk.
I've played a bit with the different memory compression tools on Linux, zram, zswap, and zcache, and they all behave in interesting ways on workloads whose active set is well over 2x available RAM. I played with compiling the Glasgow Haskell Compiler on small and extra small instances of cloud services, I wager this would work for the GPU instances on EC2 to increase their capacity a little.
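Compressed-RAM tools win exactly when the working set compresses well. A rough sketch using Python's zlib (standing in for the LZO/LZ4 compressors zram actually uses) shows how much the page contents matter:

```python
import os
import zlib

# Compress a few simulated 4 KiB memory pages and compare the results --
# the same question zram/zswap effectively answer per page at runtime.
PAGE = 4096

pages = {
    "zeroed": bytes(PAGE),                           # freshly-zeroed page
    "text":   (b"module Main where\n" * 256)[:PAGE], # repetitive, source-like data
    "random": os.urandom(PAGE),                      # effectively incompressible
}

for name, page in pages.items():
    compressed = zlib.compress(page)
    print(f"{name:7s} page: {PAGE} -> {len(compressed)} bytes")
```

Zeroed and text-like pages shrink to a few percent of their size, while random data gains a little overhead instead - which is why these tools help a lot on some workloads (like big compiles) and not at all on others.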
The transcendent memory model in Linux is interesting for exploring these ideas, and it's one of the things I really like about the kernel. However, the last time I played with it (kernel version ~3.10) I had some lockup issues where the kernel would eat almost all of the CPU cycles in zswap. That was kind of a nasty issue.
This is interesting. However, considering how common 4GB to 8GB of RAM is nowadays, a ramdisk, e.g. a tmpfs partition, is already quite useful. I've set Firefox and Google Chrome to use a 1GB tmpfs partition for their cache and the performance improvement is clearly visible.
Is that called shared GPU memory? Can it be adjusted in WinNT/Linux? Some recent console game ports need/want 3+ GB VRAM. Upgrading VRAM is impossible, RAM is easy and cheap(er).
_ihaque|11 years ago
[1] http://cs.stanford.edu/people/ihaque/papers/gpuser.pdf
[2] http://cs.stanford.edu/people/ihaque/talks/gpuser_lacss_oct_...
[3] https://github.com/ihaque/memtestG80
[4] https://github.com/ihaque/memtestCL
JoshTriplett|11 years ago
nviennot|11 years ago
Paper: http://dedis.cs.yale.edu/2010/det/papers/asplos13-gpufs.pdf
Slides: http://dedis.cs.yale.edu/2010/det/papers/asplos13-gpufs-slid...
userbinator|11 years ago
SXX|11 years ago
manover|11 years ago
revelation|11 years ago
wtallis|11 years ago
AaronFriel|11 years ago
SXX|11 years ago
It's already possible on Linux. You can use a swap file instead of a partition, and there are also swap priorities available.
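For instance, you can mix a swap partition and a swap file and let the priority decide which fills up first (device and file paths here are hypothetical):

```
# /etc/fstab - illustrative entries; higher pri= is used first,
# so the (faster) partition fills up before the swap file does.
/dev/sda2   none   swap   sw,pri=10   0 0
/swapfile   none   swap   sw,pri=5    0 0
```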
geertj|11 years ago
noisy_boy|11 years ago
xorcist|11 years ago
frik|11 years ago
digi_owl|11 years ago
ww520|11 years ago
maaku|11 years ago
You'd be surprised.