top | item 41575365

leonardool | 1 year ago

Implementing an RVM running on a GPU is definitely feasible; however, I'm not sure it would be fast or efficient. In Ribbit's case, you only need to write an RVM in a language that can compile to the GPU. An RVM is mostly an interpreter loop with some primitives. Maybe exposing the GPU primitives could enable heavily parallelizable Scheme code? It could be a fun experiment.
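To illustrate the "interpreter loop with some primitives" shape, here is a minimal sketch of a stack-based VM dispatch loop in Python. The opcodes (`push`, `add`, `jump`, `halt`) are hypothetical and much simpler than Ribbit's actual RVM instruction encoding; the point is only the loop structure one would port to a GPU-compilable language.

```python
# Hypothetical toy VM loop -- NOT Ribbit's real instruction set.
def run(code):
    """Execute a list of (op, arg) instructions on an operand stack."""
    stack = []
    pc = 0
    while pc < len(code):
        op, arg = code[pc]
        if op == "push":      # push a constant onto the stack
            stack.append(arg)
        elif op == "add":     # primitive: pop two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "jump":    # unconditional jump to instruction index
            pc = arg
            continue
        elif op == "halt":    # stop execution
            break
        pc += 1
    return stack

# (3 + 4) leaves 7 on the stack
print(run([("push", 3), ("push", 4), ("add", None), ("halt", None)]))
```

On a GPU, each thread could run this loop over its own program or data, which is where the parallelism the comment speculates about would come from, though divergent branching in the dispatch would likely hurt efficiency.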
