Hacker News item 36606916

ggerganov | 2 years ago

It was designed in #915 (read just the OP and the linked PRs at the end), and the implementation follows it pretty closely, at least for the Metal backend. The CUDA and OpenCL backends are currently slightly coupled into ggml because their development started before #915, but I think we'll resolve this eventually.

#915 - https://github.com/ggerganov/llama.cpp/discussions/915


vitaminka | 2 years ago

interesting decoupling method, ty :)