Glad this work is happening! That said, HumanEval as the current gold standard for benchmarking models is a crime. The dataset itself is tiny (164 examples), and the problems aren't really indicative of actual software engineering work. Also, we've been able to get around 85% pass@1 on GPT-4 internally as of a couple weeks ago, though it's hard to say whether the benchmark has leaked into the RLHF data. It's still exciting how close we're getting with open-source models, but we've got a decent amount of work to go!
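For anyone unfamiliar with the pass@1 number quoted above: pass@k is usually computed with the unbiased estimator from the original HumanEval paper, which takes n samples per problem, counts how many pass the tests, and estimates the probability that at least one of k draws passes. A minimal sketch:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., HumanEval paper).

    n: total samples generated for a problem
    c: number of those samples that pass the unit tests
    k: evaluation budget
    """
    if n - c < k:
        # Every size-k draw must contain at least one passing sample.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

With one sample per problem (n = k = 1) this reduces to the raw pass rate, which is what "85% pass@1" means.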
rushingcreek|2 years ago
We're working hard to use these advances to make models that are production ready. One such idea is to run a mixture of experts on various fine-tuned CodeLlamas.
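The mixture-of-experts idea here amounts to routing each request to whichever fine-tuned model fits the task. A toy sketch of that dispatch layer, where the model names and the keyword-based router are purely illustrative assumptions, not rushingcreek's actual setup:

```python
# Hypothetical registry of fine-tuned CodeLlama variants; names are
# made up for illustration only.
EXPERTS = {
    "python": "codellama-34b-python-ft",
    "sql": "codellama-34b-sql-ft",
    "default": "codellama-34b-instruct",
}

def route(prompt: str) -> str:
    """Pick an expert model for a prompt via naive keyword matching.

    A production router would more likely use a small classifier or
    learned gating network rather than substring checks.
    """
    lowered = prompt.lower()
    for domain, model in EXPERTS.items():
        if domain != "default" and domain in lowered:
            return model
    return EXPERTS["default"]
```

The gating step is the hard part in practice; the per-expert fine-tunes are comparatively cheap.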
DigitalNoumena|2 years ago
Realistically, how many of the practical use cases it'll be applied to will be OOD? If you can push GPT-4 out of distribution, you're either a genius or working on something extremely novel, so why use GPT-4 in the first place?
I understand the goal is for LLMs to get there, but the majority of practical applications just don't need that.
dragonwriter|2 years ago
If it's contaminated by the test set being in the model's training set, then the test is no longer (assuming it was in the first place) a valid measure of whether the model has "a good enough distilled representation of arguably all the code out there".
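One common way to probe for this kind of contamination is n-gram overlap between benchmark problems and training documents (the GPT-3 paper used 13-grams). A minimal sketch, with the whitespace tokenizer and the n=13 default being simplifying assumptions:

```python
def ngrams(text: str, n: int = 13) -> set:
    """Set of contiguous word n-grams from a document (naive whitespace tokens)."""
    toks = text.split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def is_contaminated(benchmark_doc: str, training_doc: str, n: int = 13) -> bool:
    """Flag a benchmark item if it shares any n-gram with a training document."""
    return bool(ngrams(benchmark_doc, n) & ngrams(training_doc, n))
```

This catches verbatim leakage only; paraphrased or lightly edited copies of test problems slip through, which is part of why contamination claims are so hard to settle.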
bfogelman|2 years ago