Access to only a model's output labels is a seemingly restrictive setting. What is an adversary modeled by PrivacyRaven capable of under this constraint?
Neural networks are not infallible. In this post, we use a model inversion attack to recover data that was fed into a target model.
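To make the idea concrete, below is a minimal sketch of a training-based model inversion attack in the label-only setting: the attacker queries the victim on an auxiliary dataset, observes only the predicted labels, and trains an "inversion network" to map those labels back to plausible inputs. This is an illustration of the general technique, not PrivacyRaven's exact implementation; the victim model, the MNIST-style 28x28 input shape, and all hyperparameters here are assumptions.

```python
# A generic sketch of label-only model inversion, assuming the
# attacker has query access to a victim classifier and an auxiliary
# dataset of similar inputs. Shapes and hyperparameters are
# illustrative assumptions, not PrivacyRaven's API.
import torch
import torch.nn as nn
import torch.nn.functional as F


class InversionNet(nn.Module):
    """Maps a prediction vector back to a 28x28 grayscale image."""

    def __init__(self, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_classes, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, 28 * 28), nn.Sigmoid(),
        )

    def forward(self, y):
        return self.net(y).view(-1, 1, 28, 28)


def train_inverter(victim, aux_loader, num_classes=10, epochs=10):
    """Train an inversion network against a label-only victim."""
    inverter = InversionNet(num_classes)
    opt = torch.optim.Adam(inverter.parameters(), lr=1e-3)
    for _ in range(epochs):
        for x, _ in aux_loader:
            with torch.no_grad():
                # Query the victim; in the label-only setting we see
                # only the argmax class, which we one-hot encode.
                labels = victim(x).argmax(dim=1)
                y = F.one_hot(labels, num_classes).float()
            x_hat = inverter(y)            # attempted reconstruction
            loss = F.mse_loss(x_hat, x)    # compare to the real input
            opt.zero_grad()
            loss.backward()
            opt.step()
    return inverter
```

Once trained, the inverter can be handed any label the victim emits and will produce a representative reconstruction of the kind of input that yields that label, which is exactly the privacy leak a model inversion attack exploits.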