
Defining global digital ethics standards


This post originally appeared in ODBMS Industry Watch

I interviewed Giovanni Buttarelli, the European Data Protection Supervisor. We talked about the mission of the European Data Protection Supervisor (EDPS), digital ethics, AI, China, the USA, tech companies, democracy and more.

Q1. What is the mission of the European Data Protection Supervisor (EDPS)?

Giovanni Buttarelli: The EDPS is an independent supervisory authority and EU institution. We monitor the processing of personal data by the EU institutions and bodies, advise on policies and legislation that affect privacy and cooperate with similar authorities to ensure consistent data protection.

As of 25 May 2018, we are a full member of the European Data Protection Board and also provide its secretariat. Since we launched our five-year strategy in 2015, our aim has been for the EDPS to be a hub for discussion on the next generation of protections for digital rights.

But I want to focus on two very strong initiatives of this mandate.

In our 2016 opinion on the “coherent enforcement of fundamental rights in the age of big data”, we established the Digital Clearinghouse, a voluntary network of regulatory bodies that share information, within the bounds of their respective competences, about possible abuses in the digital ecosystem and the most effective ways of tackling them.

The network continues to grow and is gaining legitimacy among regulators.

In 2015, we launched a debate on digital ethics. The EDPS set up the Ethics Advisory Group (EAG) with the aim of advancing the debate about ‘digital life’ beyond the requirements of current law and helping to find a new approach. After three years of discussion and the report of the EAG, the 40th International Conference of Data Protection and Privacy Commissioners, hosted by the EDPS in 2018, provided the ground for an open, global discussion on the need for an ethical approach to new technologies.

Our motto is ‘leading by example’, and the EDPS has become the centre of gravity in data protection and privacy matters, as well as a hub for discussion in the data-driven society.

Q2. How many of the EDPB representatives have a technical background?

Giovanni Buttarelli: We have an IT Policy team composed of half a dozen computer scientists and technical experts who monitor and advise on emerging technologies. We are also in the process of recruiting someone with a data scientist background.

Q3. You recently organised the 2018 International Conference of Data Protection and Privacy Commissioners in Brussels. What are the main messages that came out of the conference?

Giovanni Buttarelli: The conference’s success built on the fact that we identified a need to take the data protection debate to a new level and discuss the subject in its broader context. Data protection can no longer be isolated from developments in AI, machine learning, big data, the internet of things, or biometrics.

Everyone who is a first-hand witness of this, from tech developers to data protection authorities, has a responsibility to acknowledge that this unprecedented digital shift is a historic moment and that new approaches are needed in light of the challenges it brings. Whenever technological innovation has come with risks and dangers, ethics has been key to addressing and preventing them. Ethics can also help us now to find a path into a digital future that reaffirms and protects our long-standing culture of values and rights.

The conference also showed that a collective effort is necessary to move towards internationally recognised and respected standards. What is ethical for whom and how can we agree upon common standards? From tech developers and service providers, to regulators and supervisory authorities, ethicists and anthropologists, civil society organisations and human rights defenders, and representatives of these from all regions of our planet, everyone must engage in this debate.

Q4. And what are the main challenges ahead?

Giovanni Buttarelli: The overall challenge we face is to ensure we gain the maximum benefit from new technologies without undermining fundamental rights and long-standing ethical principles. This requires a collective intelligence exercise on a scale that has perhaps never been attempted before: it requires farsighted, long-term thinking, precaution and risk-awareness.

This will take us into abstract ethical deliberations about the meaning of human autonomy and self-determination in the digital age. It will require us to think realistically through various scenarios of how emerging technologies could affect our lives, and to ensure timely prevention of their potential detrimental effects – without hindering innovation.

In the language of examples: we do not want overly restrictive stem cell research regulation driven by unfounded fears – this has been a criticism of Canada’s approach. On the other hand, we don’t want to introduce smart cars which constantly send location information to the government – as is the case in China.

Striking the right balance is the challenge – as is getting everyone on board, most importantly the leading tech developers and providers, and regions with different ethical standards.

Certain technologies – facial recognition, autonomous weapons, smart glasses – imply such profound and unpredictable consequences for society that ethics may demand a general prohibition unless there is a clear benefit for society and clear controls on their use, with accountability where something goes wrong.

Q5. Data, AI and Intelligent systems are becoming sophisticated tools in the hands of a variety of stakeholders, including political leaders. Why do you think it is necessary to define a global digital ethics?

Giovanni Buttarelli: The questions raised by AI are myriad; many are legal, but more importantly they are ethical. Indeed, at the World Economic Forum in Davos last January, privacy was flagged as the biggest concern surrounding the development of AI systems.

The last few years have demonstrated that digital markets cannot be left entirely to their own devices. Doubts surfaced with a string of high-profile data breaches, like the Ashley Madison incident in 2015 – controversial not so much for the volume of data as for its sensitivity. The trend culminated in the Facebook / Cambridge Analytica case.


Big tech has a major responsibility here, but the problem is systemic. The dominant business model dictates that to be successful you need to track, profile and target everyone. So developing autonomous vehicles is fine, but you have to take responsibility for the data collection and surveillance that seem to be needed to train these systems.

AI is now the most fashionable pretext for collecting data. It requires personal data on a huge scale until it becomes intelligent enough to teach itself. In this context, a number of ethical questions have to be raised: for instance, how do we control bias in AI systems? How do we keep control of AI developments and algorithms? Who decides what is right or wrong?

Recent events have shown how our democracies can be affected by the unethical use of personal data; and if our democracies are at stake, the very foundation of our society is being undermined.

Q6. Is it really feasible? How do you plan to engage nations outside Europe, such as USA, China, India, to name a few?

Giovanni Buttarelli: Is this feasible? I don’t think there is a simple yes-or-no answer in this case. First we need to build a global consensus on what is and is not acceptable. We don’t have that consensus now. Look at how the UN is unable even to begin discussing whether to ban killer drones.

We will continue discussions this year with our programme of teleconferences and podcasts on digital ethics. Like with the conference, we will involve experts from all regions of the world including China and India.

After the International Conference, I remain optimistic. I believe that in the years to come Europe will not be alone and other countries will be more and more involved in such a key debate.

Q7. How does such digital ethics interact with the law? How has it materialised in fields like the life sciences and what role does it play in resolving public policy dilemmas?

Giovanni Buttarelli: The GDPR represents a landmark in data protection law, not only at EU level but globally. With the new regulation, the European Union has set the highest standards, and many countries are now trying to emulate it. What makes the GDPR future-proof for at least 15 years are the ethical principles it incorporates.

The accountability principle, privacy by default and privacy by design, for instance, are first steps towards the adoption of a more ethical approach. However, effective implementation of the legal principle of data protection by design and by default is a necessary yet not sufficient milestone towards responsible technology and data governance at the service of humans, and should be framed within the wider concept of “ethics by design”.

We need to clarify that ethics is not a substitute for robust, clear, simple and well enforced legislation. Companies and governments need to reflect on the impact of their use of technology on individuals, groups, society and the environment, as well as respecting the spirit and the letter of the law.

It is interesting how the Social Credit System in China – a complex programme involving multiple government agencies at all levels – seems to elevate the “ethical” notion of trustworthiness in a harmonious society above the established traditions of rule of law and human rights in the People’s Republic. That seems dangerous, and it is an instructive example of the ethics-and-law debate we need to have.

Q8. What if a decision made using an AI-driven algorithm harmed somebody, and you cannot explain how the decision was made?

Giovanni Buttarelli: This is exactly what we addressed at the International Conference of Data Protection and Privacy Commissioners. The first session was deliberately dedicated to identifying what we mean by ethics, and to recognising relevant experience, before moving on to digital ethics.

AI and automated decision-making have raised a number of ethical concerns: technology should serve humankind, not the other way round. The EDPS wanted a global discussion on data-driven technologies because they constantly affect people’s lives.

Self-driving cars, for instance, are spreading worldwide. They are controlled by algorithms which analyse their surroundings and take actions or decisions. But in the case of an unavoidable accident – one that may involve a child crossing the street, or some third person – who decides what the best decision is? If the car has to choose between potentially killing the driver and potentially killing a pedestrian, what is the right choice? Is it right that a machine can determine a human life? The same applies to self-guided drones and many other technologies ready to be launched soon.

These technologies are growing worldwide, therefore I believe that a global consensus on what is feasible or acceptable is possible.

Q9. Pedro Domingos (Professor at the University of Washington) said in an interview: “So maybe AI will force us to confront what we really mean by ethics before we can decide how we want AIs to be ethical.” What do you think of this?

Giovanni Buttarelli: AI – in the sense of autonomous machines – has been with us for several decades. It has not developed in a vacuum: algorithms have been trained by people with their own conscious and unconscious biases. Increasingly, the market for AI, like most digital services, is becoming concentrated in the hands of the tech giants. So you cannot separate the technology from the power structure and from the prejudices and inequalities which already exist in society – and which seem to be getting bigger.

AI is potentially very powerful, but more urgent is the debate about what AI is supposed to achieve, who will benefit, who will suffer, and who will take responsibility when something goes wrong. These are ethical questions.

We also need to avoid hyperbolic statements about AI, and instead ensure that AI investment takes place within the existing legal framework. In data protection terms, that means clear lines of responsibility, purpose limitation, data minimisation, respect for the rights of data subjects, and privacy by design.
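For developers, principles like purpose limitation, data minimisation and privacy by design can be made concrete in code. As a minimal sketch – the field names, the analytics purpose and the salted-hash pseudonymisation scheme here are illustrative assumptions, not anything described in the interview – a pipeline that applies these principles before data leaves the collection point might look like this:

```python
import hashlib

# Purpose limitation: declare up front the only fields this (hypothetical)
# analytics purpose is allowed to use.
ALLOWED_FIELDS = {"age_band", "country"}

def pseudonymise(user_id: str, salt: str) -> str:
    """Privacy by design: replace a direct identifier with a salted hash."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()

def minimise(record: dict, salt: str) -> dict:
    """Data minimisation: keep only what the declared purpose needs."""
    out = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    out["pseudonym"] = pseudonymise(record["user_id"], salt)
    return out

# A raw, over-collected record (hypothetical): the GPS trace and email
# identifier never make it past minimisation.
record = {
    "user_id": "alice@example.com",
    "age_band": "25-34",
    "country": "BE",
    "gps_trace": [(50.85, 4.35), (50.86, 4.36)],
}
clean = minimise(record, salt="per-deployment-secret")
```

The design choice is that minimisation happens structurally, at ingestion, rather than relying on downstream consumers to discard fields they should never have seen.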

Q10. Are computer system designers (i.e. software developers, software engineers, data scientists, data engineers, etc.) the ones who will decide what the impact of these technologies will be, and whether they replace or augment humans in society?

Giovanni Buttarelli: Not alone. At the moment there is too much power in the hands of a few mega tech companies and governments. We need to decentralise the internet, give more power to people over their digital lives.

Engineers have a valid voice, but they need to be part of a conversation with lawyers, ethicists and experts from the humanities. Our Internet Privacy Engineering Network (IPEN) initiative seeks to do exactly this.

Q11. How is it possible to define incentives for using an ethical approach to software development, especially in the area of AI?

Giovanni Buttarelli: Societal awareness is indeed increasing and many people are getting more privacy-conscious.

A recent US study found that since Cambridge Analytica, over half of (adult) Facebook users have adjusted their privacy settings, around 40% have taken a break from checking the platform for several weeks or more, and around a quarter say they have deleted the Facebook app from their phone. Facebook’s share price seems to have taken a tumble too.

The long honeymoon with big tech is over. But they need to be part of the solution, no longer part of the problem.

There are three lessons to be taken away by any company:

  • First, your clients will lose their trust in you and leave you if you do not respect their rights and dignity.
  • Second, you face sanctions and reputational damage.
  • And third, your business model will not be successful in the long term. So I repeat: real innovation is responsible innovation.

Trust in big tech companies is decreasing, and at the same time we can measure an increasing number of people using data protection or privacy-oriented services.


Ready to dive deeper with the Digital Data Design Institute at Harvard? Subscribe to our newsletter, contribute to the conversation and begin to invent the future for yourself, your business and society as a whole.