organisational measures, which have to be standardised.
Overall, this work provided ample background
information on the relevant concepts of
anonymisation and pseudonymisation and on how to
deal with the fact that the ML process does not in
itself have an anonymising effect. Clearly, more
practical interdisciplinary work linking adversarial
attacks and privacy-preserving techniques with
regulation and data protection efforts needs to ramp
up. Above all, it is important that the uncertainty
surrounding adversarial attacks is resolved by
governmental and technical standards yet to be
developed.
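As a minimal, purely illustrative sketch (not taken from any of the cited works), the intuition behind the claim that training does not anonymise can be shown with a model that memorises its training set: an attacker who observes suspiciously high confidence on a query can infer that the query was a training record, which is the core idea of a membership inference attack. The 1-nearest-neighbour "model", threshold, and data below are hypothetical choices for illustration only.

```python
# Illustrative toy (hypothetical example): a memorising model leaks
# membership. A 1-nearest-neighbour "model" is an extreme case, since
# it stores every training record verbatim.

def train_1nn(records):
    """'Training' here is pure memorisation: the model *is* the data."""
    return list(records)

def predict_confidence(model, query):
    """Confidence = closeness to the nearest stored record."""
    nearest = min(abs(r - query) for r in model)
    return 1.0 / (1.0 + nearest)  # exactly 1.0 only for training points

def membership_inference(model, query, threshold=0.99):
    """Guess 'member' whenever the model is suspiciously confident."""
    return predict_confidence(model, query) >= threshold

model = train_1nn([3.0, 7.5, 42.0])
print(membership_inference(model, 42.0))  # training point -> True
print(membership_inference(model, 40.0))  # unseen point   -> False
```

Real attacks against neural networks follow the same logic with shadow models or calibrated confidence thresholds, but the leakage mechanism, overconfidence on memorised records, is the same.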