No, GPT-3, the underlying technology for ChatGPT, has no inherent memory of past interactions beyond the context window of the current conversation. Each prompt is processed without any knowledge of earlier queries unless those queries are included in the same context window.
The model does not store personal data from the questions it is asked unless a particular implementation is specifically built to do so. Each conversation is treated independently: the model doesn't know who you are and cannot recall or learn from past interactions unless they are included in the immediate conversation. Its responses are generated from patterns in its training data, not from a continuous memory of previous requests.
Summarizing Previous Conversation: This approach has GPT-3 itself condense the previous conversation, then feeds that summary into the next prompt. The trade-off is that some meaning or context can be lost during summarization, but it lets you carry the important points from earlier exchanges forward within the context window.
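A minimal sketch of that pattern follows. The function names (`summarize`, `build_prompt`) are hypothetical, and the summarizer here is a simple truncation stand-in for what would really be a separate "summarize this conversation" call to the model:

```python
def summarize(text: str, max_chars: int = 200) -> str:
    # Placeholder: in practice this would be a model call that asks
    # GPT-3 to condense the conversation. Truncation keeps the sketch
    # self-contained and runnable.
    return text if len(text) <= max_chars else text[:max_chars] + "..."

def build_prompt(history: list[str], new_message: str, budget: int = 300) -> str:
    # Condense prior turns into a summary, then prepend it to the new
    # message so past context fits inside a single context window.
    past = "\n".join(history)
    summary = summarize(past, max_chars=budget)
    return f"Summary of prior conversation:\n{summary}\n\nUser: {new_message}"

history = [
    "User: What is GPT-3?",
    "Assistant: A large language model by OpenAI.",
]
prompt = build_prompt(history, "Does it remember our last chat?")
```

Each new exchange is appended to `history`, and only the rolling summary plus the latest message is sent to the model.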
Vector Embeddings: This is a more technical solution: you save previous conversation turns as vector embeddings, then run a vector similarity search to locate the most relevant portions of the earlier conversation and include those in the next prompt. It is more complex, since it requires working with an embeddings endpoint (such as OpenAI's), but it can lose less crucial information than summarizing the whole conversation.
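The retrieval step can be sketched as below. To stay self-contained, `embed` uses a bag-of-words count vector as a stand-in for a real embeddings API call; the function names and the cosine-similarity ranking are illustrative, not a specific library's API:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a real embeddings call (e.g. an embeddings endpoint
    # returning a dense float vector). Word counts keep this runnable.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def most_relevant(history: list[str], query: str) -> str:
    # Return the past turn whose embedding is closest to the query's.
    query_vec = embed(query)
    return max(history, key=lambda turn: cosine(embed(turn), query_vec))

history = [
    "User: My dog is named Rex.",
    "User: I work as a data engineer.",
    "User: I live in Lisbon.",
]
retrieved = most_relevant(history, "What is my dog called?")
# retrieved is the turn about the dog; it can be prepended to the next prompt
```

With real embeddings the same structure applies: embed each turn once, store the vectors, and at query time rank turns by similarity and splice the top matches into the prompt.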
Remember, both of these approaches work around GPT-3's inherent limitations: they still require supplying the relevant information within the limited context window on every interaction. They are not perfect solutions, but they illustrate some practical ways to help GPT-3 generate more contextually aware responses.