I was playing earlier today with Triton [0], from OpenAI. Like Taichi, it makes it very easy to write native GPU code from Python, but it really does feel quite experimental for now. (I know the use cases are quite different.)

[0] https://openai.com/research/triton
robertlagrant|3 years ago
[0] https://developer.nvidia.com/nvidia-triton-inference-server
coldtea|3 years ago
Meaning?
- their approach is still unusual and exploratory, and they don't yet know how to structure their APIs and are making it up as they go?
or:
- there are still some rough edges and bugs, and no complete documentation yet?
as those are quite different cases...
Archit3ch|3 years ago
CUDA-only, no mention of Metal.