top | item 37562089

(no title)

cec | 2 years ago

We use the same architecture as other LLMs, but we include no natural language in our pretraining. We figured a single-domain training corpus would make evaluation easier. We'll be looking at layering this on top of something like Code Llama next.
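For what it's worth, the corpus-restriction step described above could be sketched roughly as below. This is a minimal illustration, not their pipeline; the document structure and the `domain` field are assumptions.

```python
# Hypothetical sketch: build a single-domain pretraining corpus by keeping
# only documents tagged with the target domain, dropping natural-language
# prose. Field names here are assumptions for illustration.

def filter_single_domain(docs, domain="code"):
    """Return only the documents tagged with the target domain."""
    return [d for d in docs if d.get("domain") == domain]

corpus = [
    {"domain": "code", "text": "def add(a, b): return a + b"},
    {"domain": "prose", "text": "The model was trained on three corpora."},
    {"domain": "code", "text": "for i in range(10): print(i)"},
]

print(len(filter_single_domain(corpus)))  # prints 2
```

A real pipeline would of course classify documents rather than rely on pre-existing tags, but the shape of the step is the same.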

No comments yet.