Virt-EU Panel at IDP 2017

On 29 June, the 13th IDP conference hosted a Virt-EU panel entitled “Privacy, Ethical and Social Impact Assessment of Risks in Data Processing.”

Barcelona was the first venue where Virt-EU researchers presented some preliminary research outputs. These outputs are based on fieldwork studies and on a legal analysis of both the expected and unexpected impacts of the General Data Protection Regulation, which enters into force next year.

The panel underscored the risks arising from the spread of smart technologies into old and new dimensions of human agency, in light of the emerging “Data Economy” paradigm.

London School of Economics’ Alison Powell highlighted the risks arising from the so-called “Smart Open Data City”: a novel ecosystem where a growing number of smart sensors collect data of every kind, from location traces to sensor readings and transactional data.

The “Future of IoT in Cities” summary report issued by the Responsive Communities Lab at Harvard Law School affirms that the use of IoT in smart city programs relies on three major layers (a minimal code sketch follows the list):

  1. physical infrastructure, including sensors, fiber networks, and data storage;
  2. data collection, aggregation, and management systems;
  3. the applications and services that use data to serve government workers and the public.
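
Purely as an illustrative sketch, assuming hypothetical class and field names that are not part of the Harvard report, the separation of concerns across these three layers could be modeled like this:

```python
from dataclasses import dataclass, field

# Hypothetical model of the three smart-city IoT layers named in the
# Responsive Communities Lab report; all names here are our assumptions.

@dataclass
class PhysicalInfrastructure:
    """Layer 1: sensors, fiber networks, and data storage."""
    sensors: list[str] = field(default_factory=list)
    fiber_networks: list[str] = field(default_factory=list)
    storage_sites: list[str] = field(default_factory=list)

@dataclass
class DataManagement:
    """Layer 2: data collection, aggregation, and management systems."""
    infrastructure: PhysicalInfrastructure

    def aggregate(self, readings: list[float]) -> float:
        # Toy aggregation: the report names the function, not the method.
        return sum(readings) / len(readings) if readings else 0.0

@dataclass
class ApplicationLayer:
    """Layer 3: applications and services that use the data."""
    data: DataManagement
    operator: str  # a public body or a private contractor

city_sensors = PhysicalInfrastructure(sensors=["air-quality-01"], fiber_networks=["ring-1"])
management = DataManagement(infrastructure=city_sensors)
service = ApplicationLayer(data=management, operator="private contractor")
```

Note that the `operator` field is where the concern Powell raises below becomes visible: nothing in the stack itself guarantees that the operating party is a public body.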

Though these three layers foresee the participation of both private and public actors in service delivery, what Powell finds problematic is the power asymmetry this relationship generates. The operationalization of data by private actors might disjoin the operations from the administering function, depriving public bodies of the crucial role of holding the whole picture in the decision-making process. How can openness and accountability for decisions be assured when those decisions rest on third parties’ closed systems and proprietary algorithms, further shielded by intellectual property rights? From this perspective, the fundamental question Powell addresses is the individual’s capacity to act as a subject with rights of their own, namely what citizenship entails, in an ecosystem where decisions are increasingly inferred from sensor data collection and its subsequent processing.

So how is it possible to confront the uptake of data-driven decision-making while preserving room for disagreement?

Though not strictly related to the case of smart cities, Irina Shklovski’s talk analyzed the tangled panorama of IoT solutions, outlining the need for an ethical framework to assess risk in situations where assumptions and decisions about the design of smart technologies are not accompanied by ethical considerations.

What has been said regarding the use of data about an inclusive category such as citizenship becomes much more complex when we shift our attention to more sensitive data categories. This holds true, for example, for personally identifiable information (PII) such as the sexual activity captured by connected sex toys, or for home assistant appliances such as the Amazon Echo and Google Home, able to listen to and record ambient information. Admitting IoT devices into people’s intimate lives does not automatically entail increased awareness of the veiled data acquisition those objects carry out. Even though the acquisition of data may be legally covered by EULA agreements, when a greater number of objects can communicate and transmit intimate information about a person, and the pervasiveness of those objects affects people’s intimacy in everyday life, compliance with legal requirements might no longer suffice. To put it simply: the categorization of PII for processing and analysis should increasingly rest on moral reasoning when it comes to decisions affecting the design of smart technologies, considering the impact those decisions have on people’s “privacies of life.”

But one might argue that attention to ethical aspects in the design of IoT devices is biased toward a person’s own values and guidance. In the end, what an enterprise perceives as risky is context dependent, as well as fundamentally tied up with the values rooted in it. In his 1986 book, Ulrich Beck posited the concept of the Risk Society as “an inescapable structural condition of advanced industrialization,” furthermore asserting that “while policy-oriented risk assessment posited the manageability of risks, even the most restrained and moderate objectivist account of risk implications involves a hidden politics, ethics and morality.”

This problematic aspect was tackled by Alessandro Mantelero in his talk. Though the new General Data Protection Regulation provides for a mandatory Data Protection Impact Assessment, this procedure overlooks the ethical and societal consequences of the use of data, mainly when data are used to make decisions that may affect individuals and groups, with potential impact in terms of discrimination. Indeed, Mantelero noted that the Data Protection Impact Assessment as featured in the GDPR seems to be closed to public scrutiny. Thus, what stands at the forefront of the PESIA (Privacy, Ethical and Social Impact Assessment) model is the publicness of the entire assessment process, which aims to give users the chance to know the risks related to the potential uses of their personal information. Mantelero further described the PESIA model as an architecture based on three layers. The first, general layer comprises the common guiding values of international charters of human rights and fundamental freedoms. The second layer focuses on the values and social interests of given communities. Finally, the third layer consists of a more specific set of social and ethical values provided by PESIA committees and focused on a given use of data.
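
By way of illustration only, here is a minimal sketch of how the three PESIA layers and the public assessment process might be represented; the class names, example values, and evaluation logic are our assumptions, not Mantelero’s specification:

```python
from dataclasses import dataclass, field

# Illustrative sketch of the three-layer PESIA architecture described by
# Mantelero; every name and the evaluation logic below are assumptions.

@dataclass
class PESIALayer:
    name: str
    values: list[str]  # the values a proposed data use is assessed against

@dataclass
class PESIAAssessment:
    data_use: str
    layers: list[PESIALayer]
    findings: dict[str, list[str]] = field(default_factory=dict)

    def assess(self, flagged: dict[str, list[str]]) -> None:
        # 'flagged' maps a layer name to the values the proposed data use
        # may conflict with; every layer gets an entry, even an empty one.
        for layer in self.layers:
            self.findings[layer.name] = flagged.get(layer.name, [])

    def publish(self) -> str:
        # Publicness is the forefront of the model: the whole assessment,
        # not just its outcome, is meant to be open to scrutiny.
        lines = [f"PESIA for: {self.data_use}"]
        for name, issues in self.findings.items():
            status = ", ".join(issues) if issues else "no conflicts identified"
            lines.append(f"  {name}: {status}")
        return "\n".join(lines)

assessment = PESIAAssessment(
    data_use="group profiling from smart city sensor data",
    layers=[
        PESIALayer("human rights charters", ["non-discrimination", "privacy"]),
        PESIALayer("community values", ["local social interests"]),
        PESIALayer("PESIA committee values", ["no profiling of vulnerable groups"]),
    ],
)
assessment.assess({"human rights charters": ["non-discrimination"]})
print(assessment.publish())
```

The point of the `publish` step is the one stressed in the talk: the findings of every layer are meant to be open to the people affected, not to remain an internal compliance artifact.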

Javier Ruiz, Open Rights Group’s Policy Director, complemented Mantelero’s talk with an overview of the expected impact of the GDPR on the data protection impact assessments conducted by data controllers. What he identified as a risk – one found to be partially true in the fieldwork studies carried out by Virt-EU researchers – is data controllers’ inclination to treat the GDPR merely as a matter of compliance. But why does Javier Ruiz regard strict compliance with the GDPR as a puzzling issue that only ethics might help overcome? After all, when laws and rules are enacted, what is asked of subjects is to comply with the prescriptions they contain. The law establishes the perimeter within which people’s actions are considered legitimate. However, when it comes to regulating technology the task is far more complex, because technology evolves much faster than legislation, and we must therefore look for an alternative approach.

This leads to the question of whether ethics is a substitute for regulation. Data protection is not just about ticking boxes, making lists, and writing justifications. When we question whether the processing of data is fair, the idea of fairness is basically an ethical consideration. Another way to look at data ethics is that data protection – done right – is always an ethical exercise, not just compliance. European data protection law, including the GDPR, contains many references to processes of consideration in which the data controller must make a judgment call. These exercises require an ethical approach beyond minimal compliance in order to be effective. This is connected with trust as well, since we generally cannot be present when these decisions are made. In this perspective, the PESIA framework aims at opening up the decision process in a way that fosters trust between data controllers and the people affected by those decisions.
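
As a final toy illustration, entirely our own construction rather than anything prescribed by the GDPR, the gap between box-ticking compliance and a judgment call might be sketched as follows:

```python
# Toy contrast between box-ticking compliance and an ethical judgment call.
# The checklist items and function names are illustrative assumptions.

COMPLIANCE_CHECKLIST = [
    "lawful basis documented",
    "impact assessment completed",
    "processing records kept",
]

def is_compliant(ticked: set[str]) -> bool:
    # Minimal compliance: every box on the list is ticked.
    return all(item in ticked for item in COMPLIANCE_CHECKLIST)

def is_fair(ticked: set[str], justification: str) -> bool:
    # Fairness cannot be reduced to the checklist: someone must articulate
    # why the processing is fair. A non-empty justification is only a crude
    # stand-in for that human judgment call.
    return is_compliant(ticked) and bool(justification.strip())

boxes = set(COMPLIANCE_CHECKLIST)
print(is_compliant(boxes))  # True: all boxes ticked
print(is_fair(boxes, ""))   # False: compliant, but no reasoning given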