Not really. Fine-tuning fundamentally changes the model weights to be more amenable to a particular use case/domain; the few-shot prompts for GPT-3 are just a guide (and it's very easy for the model to ignore said guides and go off on a tangent).
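A minimal sketch of the distinction (the Q&A examples and the toy loss below are made up for illustration, not from any real GPT-3 workflow): few-shot prompting only concatenates examples into the input text, while fine-tuning actually moves the weights via gradient steps.

```python
# Few-shot prompting: the examples live only in the input string; the model's
# weights are untouched, so it can drift off-format ("go off on a tangent").
examples = [
    ("Q: What is the capital of France?", "A: Paris"),  # hypothetical examples
    ("Q: What is 2 + 2?", "A: 4"),
]
prompt = "\n".join(f"{q}\n{a}" for q, a in examples) + "\nQ: Who wrote Hamlet?\nA:"

# Fine-tuning, by contrast, updates the weights themselves. A toy
# single-parameter gradient descent on the made-up loss (w - target)^2:
w, target, lr = 0.0, 1.0, 0.1
for _ in range(50):
    grad = 2 * (w - target)  # derivative of the loss w.r.t. w
    w -= lr * grad           # the model itself changes, not just its input
```

The prompt-building half costs nothing per task but is only a soft hint; the gradient half permanently specializes the parameters to the domain.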
If you could fine-tune the 175B model, you'd likely get even better results for these Q&A prompts. (It's unclear how the OpenAI API implements its fine-tuning demo, but I believe it's not on the 175B model.)
typon|5 years ago
minimaxir|5 years ago