To defeat this type of attack, sanitize your training data before use by masking or removing personally identifiable information (PII). Alternatively, train your model entirely on synthetic data and reserve the real data for testing new versions.
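As a minimal sketch of the masking step, the snippet below replaces a few common PII patterns (emails, US phone numbers, SSNs) with typed placeholder tokens before the text reaches a training pipeline. The pattern set here is illustrative only; a production system should rely on a vetted PII-detection library or named-entity recognition rather than hand-rolled regexes.

```python
import re

# Illustrative PII patterns -- deliberately simple, not exhaustive.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_pii(text: str) -> str:
    """Replace recognized PII spans with a typed placeholder token."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

record = "Contact Jane at jane.doe@example.com or 555-123-4567."
print(mask_pii(record))  # Contact Jane at [EMAIL] or [PHONE].
```

Masking (rather than deleting) keeps sentence structure intact, so the model still learns realistic context around the redacted spans without memorizing the underlying identifiers.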