tkz1312 | 4 months ago
The hardware required to run something like DeepSeek, Kimi, or GLM locally at speeds fast enough for coding is probably around $50,000. You need hundreds of gigabytes of fast VRAM to run models that come anywhere close to OpenAI's or Anthropic's.
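The "hundreds of gigabytes" claim is easy to sanity-check with back-of-envelope arithmetic: weight memory is roughly parameter count times bytes per weight. A minimal sketch, assuming DeepSeek-V3's ~671B total parameters (the exact count and the quantization levels shown are illustrative assumptions, and this ignores KV cache and activation overhead, which add tens of GB more):

```python
def weight_vram_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate GB of VRAM needed just to hold the model weights.

    Ignores KV cache and activation memory, so real requirements are higher.
    """
    return params_billion * bits_per_weight / 8

# ~671B total parameters (MoE), at a few common quantization levels.
# Parameter count and quantization choices are rough assumptions here.
for bits in (16, 8, 4):
    print(f"~671B params @ {bits}-bit: {weight_vram_gb(671, bits):.1f} GB")
```

Even at aggressive 4-bit quantization the weights alone need well over 300 GB, which is why multi-GPU rigs (or very large unified-memory machines) end up in the tens of thousands of dollars.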
tkz1312 | 4 months ago