item 47192339 (no title)

am17an | 1 day ago
Honestly, you can run this on a 16 GB VRAM GPU with llama.cpp. Just try it!
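A rough way to sanity-check the 16 GB claim is a back-of-envelope VRAM estimate: quantized weight bytes plus a fixed allowance for KV cache and activations. The model size, 4-bit quantization level, and 2 GiB overhead below are all illustrative assumptions, not figures from the thread:

```python
def model_vram_gib(params_billion, bits_per_weight, overhead_gib=2.0):
    """Rough VRAM estimate in GiB: quantized weights plus a fixed
    overhead allowance for KV cache and activations (assumed, not measured)."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes / 2**30 + overhead_gib

# Hypothetical ~20B-parameter model, 4-bit quantized: ~11.3 GiB, fits 16 GB
print(model_vram_gib(20, 4))
# The same model at 16-bit would need ~39 GiB and would not fit
print(model_vram_gib(20, 16))
```

With numbers like these, a mid-size model quantized to 4 bits plausibly fits a 16 GB card, which is what llama.cpp's GGUF quantizations are commonly used for.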