top | item 47206236


adam_patarino | 18 hours ago

The biggest gaps are not in hardware or model size. The industry runs on a logical fallacy: most people believe bigger is better, whether in model size, compute, or tooling.

The reality in ML is that a small model can outperform a large one on a narrow problem set.

The key is the narrow problem set. Opus can write you a poem, create a shopping list, and analyze your massive code base.

We trained our model to focus only on coding, with our specific agent harness, tools, and context engine. And it's small enough to fit on an M2 with 16 GB. It's as good as Sonnet 4.5 and way better than qwen3.5:35b-a3b.
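As a rough illustration of the "fits on an M2 16GB" claim (the parameter count and quantization below are assumptions, not figures from the post), the RAM needed to host a model locally can be estimated from its parameter count and weight precision:

```python
def model_memory_gb(params_billion: float, bits_per_weight: int,
                    overhead: float = 1.2) -> float:
    """Rough estimate of RAM needed to serve a model's weights.

    `overhead` is a loose multiplier for the KV cache, activations,
    and runtime buffers -- a rule of thumb, not a precise figure.
    """
    bytes_per_weight = bits_per_weight / 8
    return params_billion * 1e9 * bytes_per_weight * overhead / 1e9

# A hypothetical ~7B-parameter model at 4-bit quantization
# comes out well under 16 GB:
print(model_memory_gb(7, 4))

# The same model at full 16-bit precision would not fit
# comfortably on a 16 GB machine:
print(model_memory_gb(7, 16))
```

This is why quantized single-digit-billion-parameter models are the usual sweet spot for consumer Apple Silicon, while anything much larger forces either aggressive quantization or swapping.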

Our beta will be out soon / rig.ai


amritananda | 6 hours ago

No benchmarks, no information about training methods or datasets, and a template-placeholder, vibe-coded website. Waste of time.