Do-ers v. Postpon-ers: How do IoT developers respond to ethical challenges?

By Funda Ustek-Spilda

Introduction

At an event organised by one of the major internet of things (IoT) networks in London, I asked a developer who had recently started her own company building wearable IoT technologies whether she ever faced any ethical challenges in her work. She leaned in and repeated more loudly: “Ethics?” as if she had not heard me. I nodded and said, “Yes, ethics.” Then she responded: “Oh that!” and continued, “Unfortunately, ethics never makes it into my ever-growing to-do list. Maybe one day, I will have time for it. But not at the moment, not when I am just starting my company.”

Indeed, time and time again I came across this response, even when it was formulated in slight variations. For instance, a software developer explained to me how their company mainly relies on “the giants” such as Google to keep their data secure, as they use Google Drive and other Google products, which are mostly free. “After all,” he said, “Google has all the resources, time and money to make sure everything is in order. I have neither the time nor the money.” Another developer, who works at a company that develops software for supply-chain management, mentioned that their company makes sure it complies with legal rules and regulations, but other than that, ethics is not one of their “day-to-day concerns.”

In contrast, I have met developers who refused major investments because the origin of the funds did not fit their personal values, and developers who parted ways with their co-founders because they disagreed over how far to veer away from the values they identified with and wanted their products to stand for. Similarly, I have had lengthy discussions with developers who continue to provide product support to their customers, even though their ventures went bankrupt years ago, because they feel ethically responsible for the full life-cycle of their products.

Hence, it emerged that developers of IoT products mainly take two [seemingly] contrasting standpoints towards ethics. In this article, I will refer to the first group as the Postponers, as they tend to defer ethical decisions unless they absolutely need to respond to them (e.g. legal liability), and to the second group as the Doers, as they strive to build their companies around the values they identify with. This, however, is a simple categorisation. The important question is: why do some developers postpone ethical decision-making, while others take it to be their responsibility to face it? My main argument is that how developers understand responsibility, vis-à-vis the products they build and the businesses they set up, shapes their ethical positioning.


When is [ever] a good time for ethics?

Building a start-up is not an easy undertaking. It comes with many unknowns and many uncertainties. The CEOs and co-founders of the start-ups I have met all shared the same anxiety: What if things go wrong? And everybody knows things do go wrong; after all, it is [almost] public knowledge that the majority of start-ups fail within their first two years. From failing to close investment rounds to spending precious, limited funds on “poor” recruits, the end always lurks on the horizon when money is so tight. It is not unheard of for co-founders to forgo their own salaries and to invest all the money their companies make back into the company itself. It is also common practice to employ developers and designers on a sub-contractual basis, to keep costs to a bare minimum. So priorities almost always lie with keeping the company going. This entails a constant cost-benefit analysis, not only on the part of [co]founders and those in managing roles, but also of developers, designers and other employees who personally feel the impact of the uncertainty that working for a start-up brings.

Cost-benefit analysis is inherently a consequentialist decision-making paradigm: the consequences of a decision determine the basis of its rightness or wrongness. Here, not only financial costs but also human costs such as time and effort are key. Benefit, on the other hand, is almost always measured in terms of financial outcome. After all, when a start-up fails, the cost of that failure is not shared equally across all stakeholders. The CEOs, co-founders and those in managing roles feel a personal pressure to keep the company going. As the CEO of an IoT start-up put it, one feels responsible not only for the company’s survival, but also for the mortgages, rents, childcare expenses and school fees of those who work for the company. All start-ups face (at least some) financial pressure; yet even when those pressures are industry-wide, within small companies they are felt personally. Against this background, many co-founders [and developers] reason that the priority is first to grow and scale the company, and then, once everything is more or less stable and the funds are there, to hire legal experts to make sure the company is in line with legal requirements and ethical considerations.

Why do some developers choose to postpone ethical considerations, while others take them to be one of the building blocks of their products (and companies)? I think the answer can be sought in how developers understand and approach the concept of responsibility when building their products and companies, and in how they interpret ethical risks in their cost-benefit analyses.

Alexei Grinbaum and Christopher Groves, in their chapter titled “What is ‘responsible’ about responsible innovation?”, identify two ‘tenses’ of responsibility: a “backward-facing” condition and one that is concerned with a “secular future.”[1] The backward-facing condition looks at how one’s actions conform to or differ from the duties assigned to her. The secular future, in contrast, is not concerned with pre-ordained duties, but with how an individual as a moral subject takes responsibility for deciding what she should and should not do, and how she prepares to be accountable later (p. 121). One of the main challenges of emerging technologies such as the IoT is that, while some risks have already been identified (e.g. privacy and security), others remain unknown. It is also uncertain how future technologies will interact with the ones being built today. This means that developers building new technologies are not always in a position to identify what they should and should not do, or how they will later be held accountable for something they could not know today. Because there are no “pre-ordained” duties assigned to any role in the context of a start-up, this unknown future (and its unknowability in general) creates a vacuum in which developers can choose not to engage with ethical decision-making. Some explain this non-engagement as waiting for bigger companies to pave the way so that they can follow their lead; others stress that current ethical thinking cannot keep up with the speed of technology, so ethics will [have to] follow technology rather than the other way around. Still others argue that a time will come for them to consider the ethical implications of their products, but first their products should “make it” in the market.

But is there [ever] a good time for ethics? If ethical thinking constantly gets pushed down people’s agendas and ever-growing to-do lists, and personal responsibility remains vague in the scenario of an unknown future, then how will we produce ethical technologies today? The Doers, developers who engage with ethical thinking at all stages of a product as well as in all aspects of the companies they are part of, tell us that we need to think about responsibility differently. Rather than understanding responsibility as a matter of personal liability, they are concerned with the future they are building through their products and companies. Instead of creating technologies just because they find them interesting or ‘a challenge’, they want to improve the societies they are currently part of and help future societies at the same time. So, how can we move from a personal understanding of responsibility that feeds into consequentialist cost-benefit analyses to a collective one that cares for the future of the planet and its inhabitants?[2]

Hannah Arendt, in Responsibility and Judgment, writes that collective responsibility is not merely being there or being engaged in a particular action or non-action when an unethical decision or event takes place.[3] She gives the example of a thousand able swimmers not coming to the aid of a man drowning at sea. She explains that in this example there is no collective responsibility, because the thousand able swimmers are not a collective to begin with (p. 149). As such, she stresses that it is membership in a group that makes responsibility collective and political. So the political considerations of a group’s conduct[4] become pertinent in understanding how and why ethical decision-making is done today, with implications for the future.

Take the example of sourcing sensors for IoT devices. Companies that work with sensors make this decision daily. Are they going to buy sensors produced in Europe, or in China and other countries where privacy and security regulations are not very strict? Are they going to source sensors with rechargeable batteries, which potentially cost more, or cheaper sensors that need replacing after a certain period of time? Answers to these questions have important implications for the users of the devices built with those sensors, and for the environment: users will get more [or less] secure devices, more [or less] e-waste will be produced, and fewer [or more] minerals will be extracted from the oceans.


Such a positioning of responsibility, with its implications for the future, also helps us move away from individualistic accounts of why some developers engage with ethics from an early stage while others choose to postpone. It is not merely the personal values of the developers, or that one group is more ’virtuous’ than the other [though they may well be]; it is rather how they position themselves and the technologies they are building vis-à-vis the technical cultures they are part of and the technical futures they would like to see built. This implies that when making a seemingly technical decision, such as whether to add a camera and a microphone to a device that might not necessarily need them, they consider not only their own subjective positions, but also extend a matter of care[5] to the networks they are part of, their potential users, and the future generations that might be affected. They make an ethical judgment by assessing the options: the camera and microphone might give them a competitive edge if their competitors do not provide them, but would they make the device less secure and more prone to privacy risks? Adding a camera and a microphone might help increase revenue if they can raise the price, but would it make the device less recyclable or shorten its lifespan, given how fast camera technologies move? As such, they go beyond consequentialist logic, as these questions become serious moments of deliberation.

Conclusion

Technologies are not neutral. They can be ethical or unethical, responsible or not, depending on the context in which they are put to use. The same technology can be used to provide care to those in need and to create surveillance societies. Autonomous technologies give us further reason to move from individualistic cost-benefit analyses to collective responsibility, as personal responsibility is hard to apply when there are [seemingly] no persons involved in making the decisions. We need to acknowledge that ethics is an ongoing process: there is unlikely to be a moment when we can simply stop and wait for ethics to catch up, or when ethics will transpire on its own. If technologies are being built, then responsibility is shared. Obviously, not all developers go through ethical training or are able to foresee the societal implications of the seemingly technical decisions they make on a day-to-day basis. This is not a limitation, but a great opportunity to form better collaborations between developers, legal experts, ethicists and social scientists.

At Virt-EU, we have developed a framework based on Virtue Ethics, the Capability Approach and Care Ethics to move beyond consequentialist ethical approaches. We believe that by identifying the virtues developers and companies care about, and by acting within their capabilities but also together with other stakeholders in their industries, responsible technologies can be built. We will continue this blog series to demonstrate how this framework can be put into use.


[1] Grinbaum, Alexei, and Christopher Groves. “What Is ‘Responsible’ about Responsible Innovation? Understanding the Ethical Issues.” In Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society, edited by Richard Owen, John Bessant and Maggy Heintz, 119–142. Chichester, West Sussex: Wiley, 2013.

[2] I am particularly cautious here not to talk only about future generations, as technologies such as the IoT have tremendous environmental costs, with growing e-waste and plastic waste, as well as demand for minerals and raw materials.

[3] Arendt, Hannah. Responsibility and Judgment. New York: Schocken Books, 2003.

[4] Grinbaum and Groves, 2013, 133.

[5] Puig de la Bellacasa, María. “Matters of Care in Technoscience: Assembling Neglected Things.” Social Studies of Science 41, no. 1 (2011): 85–106.