The discussion on the reform of GDPR is an interesting one that is long overdue. But at the same time, I believe that a lot of the problems with GDPR could be addressed even without re-opening the text of the law.
Take the German regulator's ReguLab: Nothing prevents DPAs from setting up such labs today. Under GDPR, advising businesses on data processing is as much a responsibility of DPAs as enforcement is, but in practice regulators have rarely had the resources for this. So it's great that DPAs are now embracing their role as advisors - but they could have done this six years ago already!
Or take Christiane Wendehorst's proposal of a risk-based approach to privacy. Again, GDPR already follows a risk-based approach, at least in theory. But in reality, regulators have tended to err on the side of the strictest interpretation of GDPR whenever a case was under discussion. So it is already in the hands of regulators to take a more nuanced approach to data processing, e.g. to allow for learning from personal data.
Which brings me to the last part of my comment, the distinction between data processing for advertising and data processing for learning. It's fair to say, in my view, that all the protections we erected to shield users from advertising may now hold us back from pursuing groundbreaking developments in health and science.
But GDPR sees the protection of personal data first and foremost as a fundamental right and only then considers use cases for data processing. And the same personal data that is used to display advertising could also lead to a new discovery.
So one question for a reform of GDPR could be whether the intended use should be taken into account when personal data is processed. That would certainly be an interesting angle but also a fundamental shift from where we stand currently.
Another interesting route is to think more about exceptions for self-processing and exceptions under Art. 9 as suggested by Wendehorst (although I would question why this data could not simply be anonymised, in which case we would just need simple guidelines for anonymisation).
So in short, it's great to fix a broken law, but it seems more important to me that we fix the bias in GDPR's interpretation and the mindset around privacy and data protection!
This is a great comment - thank you. In terms of whether it is reform or re-interpretation, I am on the fence. I see your points and think that there can be reasonable disagreement here, but I also think that it would be good, not least from a signalling perspective, to engage in real reform. And your point about whether the use matters is exactly why: I think that a balance needs to be struck here, and that the fundamental rights language risks negating any real consideration of the benefits that could be unlocked. But, as noted in the text, I see that there is an entirely consistent and logical position saying the opposite, focusing on the right as a right and rejecting the idea of any need for balancing as it pertains to the right itself.