Attackers may use extraction prompts or simply ask the model to give out sensitive data included in your training data.