Curious about the mechanics here — when you say the model was ‘trained on our code base’, was that an actual fine-tune of the weights (e.g. LoRA/adapter or full SFT), or more of a retrieval/indexing setup where the model sees code snippets at inference? Always interested in how teams distinguish between the two.
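For anyone unfamiliar with the distinction being asked about: a fine-tune (LoRA/SFT) updates the model's weights offline and ships a new checkpoint, while a retrieval setup leaves the weights untouched and injects code snippets into the prompt at inference time. Here's a toy sketch of the retrieval side only, with a naive keyword index standing in for a real embedding index — all function names and snippets are hypothetical, not anything from the setup being discussed:

```python
import re

# Toy retrieval/indexing setup: the model's weights never change; relevant
# snippets are fetched per-query and prepended to the prompt. A fine-tune,
# by contrast, bakes the code base into the weights themselves.

def tokenize(text):
    """Lowercase word tokens, punctuation stripped."""
    return re.findall(r"\w+", text.lower())

def build_index(snippets):
    """Naive inverted index: token -> set of snippet ids."""
    index = {}
    for i, snippet in enumerate(snippets):
        for token in set(tokenize(snippet)):
            index.setdefault(token, set()).add(i)
    return index

def retrieve(index, snippets, query, k=2):
    """Rank snippets by the number of query tokens they share."""
    scores = {}
    for token in tokenize(query):
        for i in index.get(token, ()):
            scores[i] = scores.get(i, 0) + 1
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [snippets[i] for i in ranked[:k]]

def build_prompt(query, context):
    """Inference-time augmentation: context lives in the prompt, not the weights."""
    return "Context:\n" + "\n".join(context) + "\n\nQuestion: " + query

# Hypothetical code-base snippets standing in for an indexed repo.
snippets = [
    "def parse_config(path): return json.load(open(path))",
    "class RetryPolicy: max_attempts = 3",
    "def load_model(checkpoint): ...",
]
query = "how does parse_config load the config path"
hits = retrieve(build_index(snippets), snippets, query)
prompt = build_prompt(query, hits)
```

A real setup would swap the keyword index for embeddings plus a vector store, but the shape is the same: retrieval is a prompt-construction step, which is why it is often much cheaper to iterate on than retraining.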