
Evaluating Large Language Models Trained on Code

11 points | aray | 4 years ago | arxiv.org

1 comment


yewenjie | 4 years ago

> On HumanEval, a new evaluation set we release to measure functional correctness for synthesizing programs from docstrings, our model solves 28.8% of the problems, while GPT-3 solves 0% and GPT-J solves 11.4%.

Interesting that they are comparing their model with GPT-J.
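For context on the quoted numbers: HumanEval measures functional correctness by running generated programs against unit tests, and the paper reports pass@k via an unbiased estimator (given n samples per problem of which c pass, the probability that at least one of k drawn samples is correct). A minimal sketch of that estimator:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator from the Codex paper:
    1 - C(n - c, k) / C(n, k), i.e. one minus the probability
    that all k samples drawn (without replacement) from the
    n generated samples are among the n - c incorrect ones."""
    if n - c < k:
        # Fewer incorrect samples than k: at least one draw must pass.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# With k = 1 this reduces to the raw pass rate c / n.
print(pass_at_k(10, 5, 1))   # → 0.5
print(pass_at_k(10, 0, 3))   # → 0.0
```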