No, fine-tuning a GPT model does not require retraining the entire model from scratch.
Fine-tuning typically involves taking a pre-trained model, such as GPT-3, whose parameters have already been learned from a very large corpus, and continuing to train it on a smaller, task-specific dataset.
During fine-tuning, the model's parameters are adjusted only slightly (typically with a small learning rate) so that it can apply its previously learned knowledge to the new task. This ability to transfer general knowledge to specific tasks is what makes large pre-trained models so versatile and powerful.
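To make the idea concrete, here is a deliberately tiny sketch, not a real GPT: a "pretrained" linear model whose parameters are nudged with small gradient steps on a new, task-specific dataset. All numbers and function names here are illustrative assumptions, but the mechanism (start from learned weights, take small update steps) is the same one fine-tuning uses at scale.

```python
def predict(w, b, x):
    # Toy stand-in for a pretrained model: a single linear unit.
    return w * x + b

def fine_tune(w, b, data, lr=0.01, epochs=20):
    # Plain SGD on squared error; the small learning rate means the
    # parameters are adjusted slightly rather than relearned from scratch.
    for _ in range(epochs):
        for x, y in data:
            err = predict(w, b, x) - y
            w -= lr * err * x
            b -= lr * err
    return w, b

# "Pretrained" parameters, assumed to have been learned on a large dataset.
w0, b0 = 2.0, 0.5
# Small task-specific dataset (roughly y = 2.2x + 0.4).
task_data = [(1.0, 2.6), (2.0, 4.8), (3.0, 7.0)]
w, b = fine_tune(w0, b0, task_data)
```

After fine-tuning, the model fits the new data better while its weights remain close to where pre-training left them, which is exactly why the full model does not need retraining.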
However, note that fine-tuning still requires computational resources and suitable task-specific data, and it must be done carefully to avoid overfitting to the new data or catastrophic forgetting of previously learned capabilities.
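One common and simple mitigation for catastrophic forgetting is "rehearsal" (also called replay): mixing a few examples from the original task into the fine-tuning set. The toy sketch below is again an illustrative assumption, not production code, but it shows the effect: naive fine-tuning on the new task degrades old-task performance more than fine-tuning with replayed old examples does.

```python
def predict(w, b, x):
    # Toy stand-in for a pretrained model: a single linear unit.
    return w * x + b

def sgd(w, b, data, lr=0.01, epochs=50):
    # Plain SGD on squared error over the given dataset.
    for _ in range(epochs):
        for x, y in data:
            err = predict(w, b, x) - y
            w -= lr * err * x
            b -= lr * err
    return w, b

def mse(w, b, data):
    return sum((predict(w, b, x) - y) ** 2 for x, y in data) / len(data)

old_task = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # roughly y = 2x
new_task = [(1.0, 3.0), (2.0, 6.0)]               # roughly y = 3x

w0, b0 = 2.0, 0.0  # "pretrained" to fit old_task

# Naive fine-tuning on the new task alone drifts away from the old task...
w_naive, b_naive = sgd(w0, b0, new_task)
# ...while replaying a couple of old examples keeps old-task error lower.
w_replay, b_replay = sgd(w0, b0, new_task + old_task[:2])
```

Other mitigations used in practice include early stopping, smaller learning rates, and regularizing the weights toward their pre-trained values; replay is shown here only because it is the easiest to demonstrate in a few lines.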