Refining Data Protection: Anonymisation and Scope of GDPR
This paper explores the uncertainty around when data is considered “personal data” under data protection laws. The authors propose that by focusing on the specific risks to fundamental rights that are caused by data processing, the question of whether data falls under the scope of the GDPR becomes clearer.
Machine Data, Personal Data, Sensitive Data and Artificial Intelligence. The Interplay of Privacy Enhancing Technologies with the GDPR
“By employing Big Data and Artificial Intelligence (AI), personal data that is categorized as sensitive data according to the GDPR Art. 9 can often be extracted. Art. 9(1) GDPR initially forbids this kind of processing. Almost no industrial control system functions without AI, even when considering the broad definition of the EU AI Regulation (EU […]
Data Is What Data Does: Regulating Use, Harm, and Risk Instead of Sensitive Data
“… nothing meaningful for regulation can be determined solely by looking at the data itself. Data is what data does. Personal data is harmful when its use causes harm or creates a risk of harm. It is not harmful if it is not used in a way to cause harm or risk of harm.”
Manipulation through Design: A Law and Economics Analysis of EU Dark Patterns Regulation
“Dark Patterns are ubiquitous: deliberate choices in website- or app-design that exploit unobservant or irrational behavior of users, tricking them into reaching agreements or consenting with settings that are not in line with the users’ actual preferences.”
Blockchain-Based Access Control System for Efficient and GDPR-Compliant Personal Data Management
“… we have performed a detailed study which includes: GDPR-compliance, provided functionality, security and privacy issues, and the cost … of the different operations to be run on the blockchain.”
From Transparency to Justification: Toward Ex Ante Accountability for AI
“The EU’s GDPR and proposed AI Act tend toward a sustainable environment of AI systems. However, they are still too lenient and the sanction in case of non-conformity with the Regulation is a monetary sanction, not a prohibition. This paper proposes a pre-approval model in which some AI developers, before launching their systems into the […]
The GDPR and Unstructured Data: Is Anonymization Possible?
“This article examines the two contrasting approaches for determining identifiability that are prevalent today: (i) the risk-based approach and (ii) the strict approach in the Article 29 Working Party’s Opinion on Anonymization Techniques (WP 216). Through two case studies, we illustrate the challenges encountered when trying to anonymize unstructured datasets. We show that, while the risk-based approach […]
The European Risk-Based Approaches: Connecting Constitutional Dots in the Digital Age
“… it is … possible to grasp the fil rouge behind the GDPR, the DSA and the AI Act as they express a common constitutional aspiration and direction.”
Brexit, Data Adequacy, and the EU Law Enforcement Directive
“The analysis discusses the overall negotiations process underpinning Brexit and examines the key resulting regimes, including the Trade and Cooperation Agreement (TCA) and the EU-UK data adequacy agreements granted under the GDPR and the LED.”
Telematics Insurance – Establishing a Data Subject Right to ‘Gaming the System’ under the GDPR
“Insurers are faced with a lack of consumer trust in whether premiums are established in an objective and fair manner and that claims are adequately dealt with and paid without delay. In trying to balance between consumers’ interests and business interests, there is a risk that insurers will go too far using personal data and increasingly automated […]
Personal Data and Consumer Insurance: A Freedom to Share or Duty to Disclose?
“Contributing to the discussion on the interpretation and use of the right to data portability, this paper looks specifically at the role for Article 20 of the General Data Protection Regulation to help insurers gain access to personal data they would otherwise not have access to. As proposed in this paper there is potential for […]
The Effects of Data Localization on Cybersecurity
“The discussion includes both de jure and de facto effects, including China’s explicit laws, recent enforcement actions in the European Union, and proposed privacy legislation in India. The focus is on effects on cybersecurity defense, rather than offensive cyber measures.”
Anonymity Assessment – A Universal Tool for Measuring Anonymity of Data Sets Under the GDPR with a Special Focus on Smart Robotics
“By introducing the method of ‘Anonymity Assessment,’ we propose an interdisciplinary approach to classifying anonymity and measuring the degree of pseudo-anonymization of a given data set in a legal and technical sense.”
Regulating Algorithms at Work: Lessons for a ‘European Approach to Artificial Intelligence’
“This article scrutinises the potential of the existing regulatory apparatus in Union law to tackle the social, technical, and legal challenges inherent in deploying automated systems in high-risk settings such as the workplace, with a view to setting out key lessons for the proposed EU Artificial Intelligence Act.”
The Future of International Data Transfers: Managing New Legal Risk with a ‘User-Held’ Data Model
“This article examines… a user-held data model. This approach takes advantage of ‘personal data clouds’ that allow data subjects to store their data locally and in a more decentralised manner, thus decreasing the need for cross-border transfers and offering end-users the possibility of greater control over their data.”
The UK Reform of Data Protection: Impact on Data Subjects, Harm Prevention, and Regulatory Probity
“… the proposed reforms risk (1) undermining the data subjects’ rights that were ensured with the adoption of the EU GDPR into UK law; (2) introducing an accountability framework that is inadequate to address harm prevention; and (3) eroding the regulatory probity of the Information Commissioner’s Office (ICO). We also comment on the analysis of the expected […]
Data Privacy, Human Rights, and Algorithmic Opacity
“… in a world where algorithmic opacity has become a strategic tool for firms to escape accountability, regulators in the EU, the US, and elsewhere should adopt a human-rights-based approach to impose a social transparency duty on firms deploying high-risk AI techniques.”