Fine-tuning improves on few-shot learning by training on many more examples than can fit in the prompt, letting you achieve better results on a wide range of tasks. Once a model has been fine-tuned, you won't need to provide examples in the prompt anymore. This saves costs and enables lower-latency requests.
At a high level, fine-tuning involves the following steps:
Prepare and upload training data
Train a new fine-tuned model using a fine-tune job
Use your fine-tuned model by specifying it as the model parameter in a Completions API request
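The steps above can be sketched with the OpenAI command-line tool. The file name and base model below are illustrative, and the fine-tuned model name (`curie:ft-...`) is a placeholder for the name the job returns.

```shell
# Step 1: check and reformat the training file (writes a _prepared.jsonl copy)
openai tools fine_tunes.prepare_data -f training_data.jsonl

# Step 2: create a fine-tune job from the prepared file against a base model
openai api fine_tunes.create -t training_data_prepared.jsonl -m curie

# Step 3: once the job finishes, pass the returned model name to the Completions API
openai api completions.create -m curie:ft-your-org-2023-01-01 -p "<prompt text>"
```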
Your training data must be a JSONL document, where each line is a single JSON object containing a prompt-completion pair that corresponds to one training example:
{"prompt": "<prompt text>", "completion": "<ideal generated text>"}
{"prompt": "<prompt text>", "completion": "<ideal generated text>"}
{"prompt": "<prompt text>", "completion": "<ideal generated text>"}
...
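A file in this format can be produced and sanity-checked with a short script. The example data, separator, and file name here are illustrative, not requirements of the format.

```python
import json

# Hypothetical training examples; a real dataset would contain many more.
examples = [
    {"prompt": "Great product, fast shipping. ->", "completion": " positive"},
    {"prompt": "Arrived broken and late. ->", "completion": " negative"},
]

def write_jsonl(path, rows):
    """Write one JSON object per line (the JSONL format)."""
    with open(path, "w", encoding="utf-8") as f:
        for row in rows:
            f.write(json.dumps(row) + "\n")

def validate_jsonl(path):
    """Check every line parses as JSON and holds exactly a prompt/completion pair."""
    with open(path, encoding="utf-8") as f:
        for i, line in enumerate(f, start=1):
            row = json.loads(line)
            assert set(row) == {"prompt", "completion"}, f"bad keys on line {i}"
    return True

write_jsonl("training_data.jsonl", examples)
print(validate_jsonl("training_data.jsonl"))  # → True
```

Validating locally before uploading catches malformed lines early, since a single unparseable line will cause the training file to be rejected.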