Insofar as I have come to understand how an LLM works, there is nothing until you feed it a prompt, and there is nothing after that. In effect, it bases each answer only on your current prompt and on its "memory" of your current "conversation" with it--i.e., how it has your earlier prompts...
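A minimal sketch of this statelessness, assuming a chat-style API where the whole transcript is resent on every turn. The `fake_llm` function is a hypothetical stand-in for a real model call; the point is that it can only see the messages handed to it in that single call:

```python
def fake_llm(messages):
    # Stand-in for a real model API (hypothetical): the "model" answers
    # purely from the context passed in this one call; nothing persists.
    return f"(reply based on {len(messages)} message(s) of context)"

history = []  # the client, not the model, keeps the conversation

def chat(user_text):
    # Each turn, the ENTIRE prior transcript is appended and resent.
    # Delete `history` and the model "remembers" nothing at all.
    history.append({"role": "user", "content": user_text})
    reply = fake_llm(history)
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("Hello"))     # model sees 1 message of context
print(chat("And now?"))  # model sees 3 messages (the whole transcript so far)
```

So the "memory" is an illusion maintained by the client: every reply is computed fresh from whatever slice of the conversation gets stuffed back into the prompt.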