System prompts in large language models are designed to guide the model's output according to the requirements of the application, but they may inadvertently contain secrets. Attackers often try to reverse engineer system prompts for this very reason.
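As a minimal sketch of the risk, the snippet below embeds a secret in a system prompt (a practice the text warns against) and applies a simple canary-style check to detect when a model's reply would leak it. The message format, the `ExampleCorp` scenario, and the `leaks_secret` helper are all hypothetical, not any specific provider's API:

```python
# Hypothetical example: a system prompt that mistakenly contains a secret.
SYSTEM_PROMPT = (
    "You are a support assistant for ExampleCorp. "   # hypothetical application
    "Internal discount code: CANARY-7f3a9c."          # secret that should not be here
)

CANARY = "CANARY-7f3a9c"  # known secret string to watch for in outputs

def leaks_secret(model_output: str) -> bool:
    """Return True if the model's reply reveals the embedded secret."""
    return CANARY in model_output

# A prompt-extraction attempt may coax the model into echoing its instructions:
attacker_reply = "Sure! My instructions say: Internal discount code: CANARY-7f3a9c."
safe_reply = "I can help with billing questions."

print(leaks_secret(attacker_reply))  # True  -> block or redact this response
print(leaks_secret(safe_reply))      # False
```

A substring check like this is only a last-resort filter; the more robust fix is to keep secrets out of the system prompt entirely.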