Weekly Digs #7
While academia may still be a bit busy with submission deadlines, industry delivered several interesting stories about secure computation this week.
Papers
- Nothing Refreshes Like a RePSI: Reactive Private Set Intersection
PSI has several applications in private data processing, including record linking in advertising and data augmentation. This paper takes a step towards mitigating exhaustive attacks, where a party learns too much simply by asking for many intersections; a toy illustration of the problem follows.
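To see why unlimited queries are dangerous, here is a minimal sketch of the attack against a hypothetical naive hash-based PSI (my own toy setup, not the paper's construction): a malicious client with a guessable universe of candidates simply asks for many small intersections and recovers the server's entire set.

```python
import hashlib

def h(x: str) -> str:
    """Hash elements so the toy 'protocol' only ever compares digests."""
    return hashlib.sha256(x.encode()).hexdigest()

class NaivePSIServer:
    """Hypothetical naive PSI server that answers unlimited queries."""

    def __init__(self, elements):
        self._hashed = {h(e) for e in elements}

    def intersect(self, hashed_query: set) -> set:
        # Returns the hashes present on both sides; crucially, there is
        # no limit on how many times a client may call this.
        return self._hashed & hashed_query

# The server's private set.
server = NaivePSIServer({"alice@a.com", "bob@b.com", "carol@c.com"})

# A malicious client enumerates a candidate universe via singleton
# intersections, fully recovering the server's set.
universe = ["alice@a.com", "bob@b.com", "carol@c.com", "dave@d.com"]
leaked = [c for c in universe if server.intersect({h(c)})]
print(leaked)  # ['alice@a.com', 'bob@b.com', 'carol@c.com']
```

RePSI's "reactive" setting is aimed exactly at bounding what a client can extract across many such queries.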
News
- Sharemind, one of the biggest and earliest players pushing MPC to industry, has launched a new privacy service that performs secure computation inside secure enclaves, with the promise that it can handle big data. Via @positium.
- Interesting interview with Lea Kissner, the head of Google's privacy team NightWatch. Few details are given, but "She recently tried to obscure some data using cryptography, so that none of it would be visible to Google upon upload ... but it turned out that [it] would require more spare computing power than Google has" sounds like it could involve techniques related to MPC or HE. Via @rosa.
- Google had two AI presentations at this year's RSA conference, one on fraud detection and one on adversarial techniques. Via @goodfellow_ian.
Bonus
- Privacy-Preserving Multibiometric Authentication in Cloud with Untrusted Database Providers
Relevant application of secure computation to authentication using sensitive data. A relatively black-box use of existing protocols, yet with experimental performance below one second.
- Private Anonymous Data Access
Interesting mix of private information retrieval and oblivious RAM: "We consider a scenario where a server holds a huge database that it wants to make accessible to a large group of clients while maintaining privacy and anonymity ... with the goal of getting the best of both worlds: allow many clients to privately and anonymously access the database as in PIR, while having an efficient server as in ORAM". A toy PIR sketch follows this list.
- Adversarial Attacks Against Medical Deep Learning Systems
A discussion of some of the concrete consequences the medical profession may face from adversarial examples in machine learning systems, with a warning to exercise "caution in employing deep learning systems in clinical settings". A minimal sketch of one standard attack also follows below.
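To make the PIR half of Private Anonymous Data Access concrete, here is a minimal sketch of the classic two-server XOR-based PIR scheme (the textbook construction, not the paper's protocol): the client retrieves record i without either of two non-colluding servers learning i. Note that each server still touches the whole database on every query, which is exactly the efficiency problem ORAM-style techniques target.

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Toy database of equal-length records.
DB = [b"record-0", b"record-1", b"record-2", b"record-3"]
n = len(DB)

def server_answer(db, query_bits):
    """Each server XORs together the records selected by its query vector."""
    acc = bytes(len(db[0]))
    for rec, bit in zip(db, query_bits):
        if bit:
            acc = xor_bytes(acc, rec)
    return acc

# The client wants index i = 2. It sends a uniformly random bit vector to
# server 1 and the same vector with position i flipped to server 2; each
# vector on its own is uniformly random, so neither server learns i.
i = 2
q1 = [secrets.randbelow(2) for _ in range(n)]
q2 = q1.copy()
q2[i] ^= 1

a1 = server_answer(DB, q1)
a2 = server_answer(DB, q2)

# The two answers differ only in whether record i was included, so their
# XOR recovers exactly that record.
print(xor_bytes(a1, a2))  # b'record-2'
```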
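And to make the adversarial-examples threat concrete, below is a minimal sketch of the fast gradient sign method (FGSM), one standard way of crafting such inputs, applied to a toy logistic-regression "diagnostic" model. All weights and inputs here are made up for illustration; real attacks on medical imaging models operate on far higher-dimensional inputs.

```python
import numpy as np

# Toy logistic-regression model (weights assumed, for illustration only).
w = np.array([3.0, -4.0, 2.0])
b = 0.0

def predict(x):
    """Probability that x belongs to class 1."""
    return 1 / (1 + np.exp(-(w @ x + b)))

x = np.array([0.2, -0.1, 0.1])  # benign input, classified as class 1
y = 1.0                          # its true label

# For logistic regression with cross-entropy loss, the gradient of the
# loss with respect to the input is (p - y) * w.
grad_x = (predict(x) - y) * w

# FGSM: perturb each feature by at most eps in the direction that
# increases the loss.
eps = 0.2
x_adv = x + eps * np.sign(grad_x)

print(predict(x))      # ~0.77: confidently class 1
print(predict(x_adv))  # ~0.35: the prediction flips to class 0, even
                       # though no feature moved by more than eps
```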