
Cognition Releases SWE-1.5: Near-SOTA Coding Performance at 950 tok/s

11 points | yashvg | 4 months ago | cognition.ai

8 comments


swyx | 4 months ago

(coauthor) xpost here: https://x.com/cognition/status/1983662836896448756

happy to answer any questions. i think my higher-level insight, to paraphrase McLuhan: "first the model shapes the harness, then the harness shapes the model". this is the first model that combines cognition's new gb200 cluster, cerebras' cs3 inference, and data from our evals work with {partners}, as referenced in https://www.theinformation.com/articles/anthropic-openai-usi...

CuriouslyC | 4 months ago

In the interest of transparency, you should update your post with the model you fine-tuned; it matters.

pandada8 | 4 months ago

very curious: which model runs at only 950 tok/s even with Cerebras?