
SOTA Code Retrieval with Efficient Code Embedding Models

11 points | jimminyx | 1 year ago | qodo.ai

2 comments

timbilt | 1 year ago
anyone else concerned that training models on synthetic, LLM-generated data might push us into a linguistic feedback loop? relying on LLM text for training could bias the next model towards even more overuse of words like "delve", "showcasing", and "underscores"...
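(A minimal sketch of how one might quantify that drift, assuming a hypothetical marker-word list and corpus; the function name and word set are illustrative, not from the article. The idea: count occurrences of suspected LLM-isms per million tokens and watch whether the rate climbs across successive model generations.)

    # Sketch: rate of suspected "LLM-ism" words per million tokens.
    # Rising rates across model generations would be one signal of
    # the lexical feedback loop described above.
    import re
    from collections import Counter

    MARKER_WORDS = {"delve", "showcasing", "underscores"}

    def marker_rate_per_million(text):
        tokens = re.findall(r"[a-z']+", text.lower())
        counts = Counter(t for t in tokens if t in MARKER_WORDS)
        total = len(tokens) or 1
        return {w: counts[w] * 1_000_000 / total for w in MARKER_WORDS}

    sample = "Let's delve into the findings, showcasing what the data underscores."
    print(marker_rate_per_million(sample))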
lenerdenator | 1 year ago
SOTA? LoRa? Seems like people are trying to usurp ham radio names for things.