> On HumanEval, a new evaluation set we release to measure functional correctness for synthesizing programs from docstrings, our model solves 28.8% of the problems, while GPT-3 solves 0% and GPT-J solves 11.4%.
Interesting that they are comparing their model with GPT-J.
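For context, HumanEval scores functional correctness by actually executing each model-generated completion against hand-written unit tests, rather than comparing text against a reference solution. A minimal sketch of that check, assuming the problem format from the released dataset (`prompt`, `test`, and `entry_point` fields):

```python
import contextlib
import io

def passes(problem: dict, completion: str) -> bool:
    """Run a model completion against a HumanEval problem's unit tests.

    `problem` follows the released HumanEval format: `prompt` holds the
    function signature plus docstring, `test` defines a `check(candidate)`
    harness of assertions, and `entry_point` names the function under test.
    """
    program = (
        problem["prompt"]
        + completion
        + "\n"
        + problem["test"]
        + f"\ncheck({problem['entry_point']})"
    )
    try:
        # The real harness sandboxes this execution in a subprocess with a
        # timeout; exec-ing untrusted model output directly is unsafe.
        with contextlib.redirect_stdout(io.StringIO()):
            exec(program, {"__name__": "__main__"})
        return True
    except Exception:
        return False
```

A problem counts as solved only if every assertion in its test harness passes, which is why a general model like GPT-3 can score 0% despite producing plausible-looking code.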