Model inversion is a class of attack in which an adversary uses a trained machine learning model's outputs, typically its confidence scores, to reconstruct inputs that resemble, or in some cases directly reveal, the data the model was trained on.
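A minimal sketch of the idea, under simplifying assumptions: the model here is a hypothetical logistic-regression classifier with made-up weights (nothing is trained on real data), and the attacker runs gradient ascent on a candidate input to maximize the model's confidence for a target class, recovering an input that is representative of that class.

```python
import numpy as np

# Hypothetical "trained" model: a logistic-regression classifier whose
# weights are assumed for illustration, not learned from real data.
rng = np.random.default_rng(0)
w = rng.normal(size=8)  # assumed model weights
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def confidence(x):
    # Model's confidence that input x belongs to the target class.
    return sigmoid(w @ x + b)

def invert(steps=200, lr=0.5):
    # Core loop of a confidence-based model-inversion attack:
    # gradient ascent on the *input* to maximize class confidence.
    x = np.zeros_like(w)
    for _ in range(steps):
        p = confidence(x)
        grad = p * (1.0 - p) * w  # d(confidence)/dx for the sigmoid model
        x += lr * grad
    return x

x_rec = invert()
print(f"confidence on reconstructed input: {confidence(x_rec):.3f}")
```

The reconstructed input ends up strongly aligned with the model's weight vector, which for this toy model is the direction most representative of the target class; real attacks on image classifiers follow the same loop but add priors and regularizers so the reconstruction looks like a plausible input.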