5 Experimental Technology & The Principles of Engineering Ethics
Key Themes & Ideas
- Experimental technologies are those technological products and projects for which the social benefits and risks are uncertain and/or unknown
- We can borrow ethical principles developed for human medical research to ethically analyze and evaluate technologies, especially experimental technologies
- The 3 principles of engineering ethics are the Principle of Welfare, the Principle of Autonomy, and the Principle of Fairness
- The principles function as ethical design constraints that must be met by a technology to be morally successful
- Each principle directs us to focus on specific values and to reason in specific ways about those values and about technologies in general
The Smartphone Social Experiment[1]
What was Apple thinking when it launched the iPhone? It was an impressive bit of technology, poised to revolutionize the smartphone industry and set to become nearly ubiquitous within a decade. The social consequences have been dramatic. Many of those consequences have been positive: increased connectivity, increased knowledge, and increased day-to-day convenience. A considerable number have been quite negative: the assault on privacy, increased distractibility, and endless social noise. But were any of them weighing on the mind of Steve Jobs when he stepped onstage to deliver his keynote on January 9, 2007?
Some probably were, but more than likely they leaned toward the positive end of the spectrum. Jobs was famous for his ‘reality distortion field’; it’s unlikely he allowed the negatives to hold him back for more than a few milliseconds. It was a cool product and it was bound to be a big seller. That’s all that mattered. But when you think about it, this attitude is pretty odd. The success of the iPhone and subsequent smartphones has given rise to one of the biggest social experiments in human history. The consequences of near-ubiquitous smartphone use were uncertain at the time. Why didn’t we insist that Jobs give it a good deal more thought and scrutiny? Imagine if, instead of an iPhone, he had been launching a revolutionary new cancer drug. In that case we would have insisted upon a decade of trials and experiments, with animal and human subjects, before it could be brought to market. Why are we so blasé about information technology (and other technologies) vis-à-vis medication?
Human experimentation is essential to medical innovation. It is through human experimentation that we get the best understanding of the risks (and benefits) of a potential therapy. But, importantly, we also only conduct human experimentation when we are unsure of the potential hazards and/or are uncertain of their likelihood. Strictly speaking, a risk is a measure of the probability of some hazard occurring multiplied by its magnitude – that is, a measure of how bad the hazard would be if it did occur. Sometimes we are ignorant of potential hazards – we simply do not know what effects something may have and therefore cannot determine magnitude – and other times we know what may occur but are uncertain as to its likelihood and therefore cannot determine probability. And so, when we conduct human medical experiments, we are doing so in order to better understand what hazards are possible and/or how likely the hazards are.
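To make the definition concrete, here is a simple worked calculation (the hazard and all the numbers are invented purely for illustration). Suppose a device has one identified hazard, a battery fire, which would cause $2,000,000 in damages if it occurred, and which has an estimated probability of 1% per year of use:

```latex
\begin{aligned}
\text{risk} &= P(\text{hazard}) \times \text{magnitude of hazard} \\
            &= 0.01/\text{year} \times \$2{,}000{,}000 \\
            &= \$20{,}000/\text{year}
\end{aligned}
```

Notice that the calculation requires both terms: ignorance of a hazard leaves the magnitude undefined, while uncertainty about its likelihood leaves the probability undefined. In either case the risk cannot be computed, which is exactly the situation experimentation is meant to remedy.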
But because human medical experimentation is carried out under these conditions of ignorance and uncertainty, we insist on strict ethical controls. We are likely all familiar with some of these ethical controls: all research subjects must be informed of the potential risks (as well as the lack of complete knowledge) and voluntarily agree to be exposed to them; research can only be carried out when there is good reason to believe the therapy could in fact provide a meaningful benefit. There are others as well, which we need not detail now. The central point is simply that we require strict ethical scrutiny of human medical experiments precisely because they involve exposure to unknown hazards with unknown probabilities.
But isn’t the same true of many technological innovations? Recall the Control Dilemma from a previous chapter: it shows that, for many technological products and projects, we find ourselves in the same position as medical researchers, simply unaware of some of the potential effects and uncertain of the likelihood of the effects we are aware of. Moreover, just like medical therapies, many technologies can have a deep and lasting impact on individual lives and on the trajectory of society. Just think: which has had a larger social impact, the invention of vaccines or the invention of the internet? Whatever the answer to that question, it is not an obvious one, and that alone should show the affinity between medical therapies and technological innovations. And if there is this sort of meaningful parallel, and we hold human medical research to high ethical standards, ought we not similarly hold the introduction of technology to high ethical standards?
1. Experimental Technology
At the time of its launch, the iPhone had two key properties that are shared with many other types of technology:
- Significant Impact Potential: It had the potential to cause significant social changes if it took off; and
- Uncertain and Unknown Impact: Many of the potential impacts could be speculated about but not actually predicted or quantified in any meaningful way, while some of the potential impacts were completely unknown at the time
These two properties make the launch of the iPhone rather different from some other technological developments. For example, the construction of a new bridge could be seen as a technological development, but the potential impacts are usually more easily identified and quantified (although this can vary). Since we have a lot of experience building bridges and a long-running familiarity with the scientific principles that underlie their design and construction, there is a lot less uncertainty involved. But, of course, if the bridge uses new techniques or new materials, or is used for a new purpose, then even a bridge can start to have properties similar to the iPhone. Nonetheless, what we can say here is that some technologies fall into a special class that merits greater ethical scrutiny:
Experimental technologies, more than others, are subject to the Control Dilemma: they are those technologies where we are substantially ignorant and uncertain of the risks and therefore are not in a position to control them early on. But it is also worth noting that whether a given technological product or project is “experimental” is really a matter of degree: even a new bridge may be experimental, at least to some degree.
So, many technological products and projects are (to some degree) human experiments. When a human experiment involves medical therapies, we insist on strict ethical controls. But these experimental technologies can have at least as great an effect on individuals and society as medical therapies, if not a greater one. Thus, if we think it is right to have strict ethical controls for medical experimentation, we should also have strict ethical controls for engineering and technology experimentation.
But what does that look like?
2. The Principles of Engineering Ethics
A good starting point for developing an ethical framework for thinking about experimental technologies is to consider the ethical framework most commonly used to think about human medical experiments, a framework known as Ethical Principlism. In broad outline, ethical principlism is an ethical framework centered around multiple equally important, but sometimes conflicting, ethical principles. In the context of medicine, there are three major principles:
The Principle of Beneficence: Experiments should only be done if there is a potential for individual or social benefit, and the human subjects involved should not be harmed
The Principle of Respect for Autonomy: Human autonomy and agency should be respected
The Principle of Justice: The benefits and risks ought to be fairly distributed
These three principles are intentionally general and somewhat vague. The basic idea is that they represent widely shared ethical commitments and provide the basis for establishing more precise and practical guidelines for medical researchers.
By and large, we can apply these same three principles to thinking about experimental technology. But, of course, our context is a bit different (even if similar), and so we will want to tweak things a bit. Additionally, for the sake of simplicity, we can rename the principles. In so doing, we end up with the following three Principles of Engineering Ethics:
The Principle of Welfare. Technological products and projects should aim at promoting social welfare and avoid harming or risking harm to individual and social welfare
The Principle of Autonomy. Technological design and implementation should respect the right of all people to make their own informed choices and develop their own life plans
The Principle of Fairness. The benefits and burdens of technology should be distributed fairly
We rename the Principle of Beneficence to the Principle of Welfare to make clearer that its focus is on promoting and protecting welfare, which underwrites the most fundamental principle of engineering ethics. Additionally, we spell out what it means to respect “human autonomy and agency.” And, finally, we rename the Principle of Justice to the Principle of Fairness to avoid confusion with thinking about (for instance) criminal justice or the law more generally.
3. Applying the Principles
We now have three principles for scrutinizing the design and implementation of technology in society. But just as the principles of biomedical ethics are general and require specification for context, so too do the principles of engineering ethics. So the question arises: how do we work with these principles?
Technological design and implementation already involve working with a variety of necessary but sometimes conflicting design constraints. For instance, buildings must be able to resist wind forces up to a certain speed, and bridges must be able to hold at least a certain weight. But these considerations often butt up against other constraints regarding, for instance, the total weight of the bridge or building. The project must be kept under a certain total weight but also must display a certain level of strength. Both of these are constraints on the design, and must be adhered to, but they must also be balanced against one another – you cannot build the bridge maximally strong, for it will be too heavy, and you cannot build it maximally light, for it will be too weak. Similar considerations apply when we aren’t talking about safety: an interactive device must balance simplicity with power, while other products may need to balance longevity with accessibility.
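To make the idea of jointly binding but competing constraints concrete, here is a minimal sketch in Python. The thresholds and the linear strength/weight model are invented for illustration only; real structural design involves far more detailed analysis.

```python
# Hypothetical design constraints: the numbers are invented for illustration.
MIN_STRENGTH = 500.0   # required load capacity, in tonnes
MAX_WEIGHT = 1200.0    # maximum allowable total weight, in tonnes


def bridge_properties(steel_tonnage: float) -> tuple[float, float]:
    """Toy model: adding steel increases strength but also increases weight."""
    strength = 0.6 * steel_tonnage   # capacity grows with the material used
    weight = 300.0 + steel_tonnage   # fixed deck weight plus the steel
    return strength, weight


def satisfies_constraints(steel_tonnage: float) -> bool:
    """Both constraints must hold; neither can simply be maximized."""
    strength, weight = bridge_properties(steel_tonnage)
    return strength >= MIN_STRENGTH and weight <= MAX_WEIGHT


# Feasible designs occupy a narrow window: enough steel to meet the strength
# requirement, but not so much that the weight budget is exceeded.
feasible = [t for t in range(0, 2001, 50) if satisfies_constraints(t)]
print(feasible)  # -> [850, 900]
```

The point of the sketch is structural: no single constraint can be maximized, and an acceptable design must sit inside the region where all constraints hold at once. The ethical principles work the same way.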
And so, we can think of the ethical principles as Ethical Design Constraints that function in a similar manner. Each principle must be adhered to for a project to be (ethically) successful, but the requirements of the principles will sometimes come into conflict. Indeed, one basic conflict applies to both engineering and medicine, and it is the same kind of conflict raised by the iPhone example: we could, hypothetically, make significantly greater medical progress if we did not inform people they were research subjects and did not worry about whether the experiment would be harmful to them. In this way, if we ignored the principle of autonomy and the harm-avoidance half of the principle of welfare, we would be better able to achieve what the benefit-promotion half of the welfare principle requires. But such use of people for the benefit of others should strike us all as morally suspect.
In functioning as ethical design constraints, the principles provide broad guidelines that must always be accounted for in design and implementation. So, then, how do we account for them? To do that, it can help to recognize that the principles really engage us in two ways: they both tell us what to pay attention to as well as what to do with the information we are paying attention to.
3.1. Thinking about Welfare
The Welfare Principle, as its name suggests, directs us to pay attention to the ways in which welfare may be positively and negatively affected. In this way, it functions quite similarly to Canon 1 of the NSPE Code of Ethics (and the first principle of many other engineering codes of ethics). But, importantly, it also directs us to pay attention to two distinct ways that a technology may enhance welfare: by providing a new benefit or by reducing or eliminating existing harms or risks. So, for instance, the introduction of FaceTime provided a new benefit: the ability to converse visually via phone. On the other hand, new water treatment technology can be understood as reducing or eliminating the existing risk of water-borne illness. These are two ways of “doing good”. But the Welfare Principle also directs us to “do no harm”, which means not introducing new harms or risks through the technology. Thus, this second part of the principle directs us (for instance) not to introduce a technology that will pollute the water and thereby create (or increase) the risk of water-borne illness.
This point about introducing new risks and reducing existing risks is important because it influences how we think about what to do once we have identified the potential effects on welfare. There is a millennia-old debate over whether it is morally worse to cause harm than to fail to benefit. On one view, so long as the consequence is identical, the moral evaluation is identical. Thus, were you to fail to save a drowning child from a pond when you were able to, that would be just as bad as if you had actually drowned the child in the pond. According to this symmetry thesis, harms caused are symmetrical with benefits denied: so long as their likelihood and effect are identical, they are morally identical. In contrast, the asymmetry thesis holds that causing harm is morally worse than failing to benefit. While it may still be morally bad to fail to save the drowning child from the pond, it is not as morally bad as if you were to have drowned the child.
We will not resolve this debate here (there is a reason it has been raging for thousands of years). Instead, we point it out so that we may all understand the competing perspectives. But the position one takes on this “doing/allowing” debate does often influence how one will apply the welfare principle. If you believe that it is morally worse to cause harm, then you will “err on the side of caution” and be more willing to (for instance) stop a project or reduce the power of a technology so as to limit the introduction of new risks. On the other hand, if you accept the symmetry thesis, then you will be much more willing to say “the benefits outweigh the risks” and therefore to justify almost any risk so long as the benefits are greater.
Our stance on the doing/allowing debate will influence how we apply the welfare principle to some degree, but there are nonetheless some general guidelines that the principle offers us. These include:
- Only pursue a product or project if it is reasonable to expect social benefits
- Aim to reduce the potential for harm to the public as far as reasonably possible
- Conduct controlled “experiments” of the technology whenever possible
- Provide safeguards that allow a project to be halted or a product to be recalled if excessive harm is discovered
- Avoid products and projects that may produce irreversible harm
- Pursue products and projects that have the potential to improve social or environmental resiliency (the ability to ‘bounce back’ from bad situations)
Like the principle itself, these guidelines are still a bit general and vague: they leave open, for instance, what counts as “reasonably possible”. But they are more precise than the principle itself and help us understand, in more detail, how to reason with it.
3.2. Thinking about Autonomy
Autonomy is derived from Ancient Greek and literally means “self-legislation” (auto-nomos). Although it hasn’t always been the case, in contemporary society we value individual autonomy, even at the expense of individual and social welfare. For instance, we leave people quite free to make their own decisions even knowing that many of them will make decisions that reduce their welfare. This is because we value people being the authors of their own lives alongside valuing their welfare. Put another way, we think it important not just that people live good lives but that they control their own lives. Obviously, this control is not absolute, but it matters nonetheless.
In the medical context, the autonomy principle grounds the practice of informed consent: we only allow researchers to conduct research on human subjects who have been properly informed of the risks involved in the experiment and have voluntarily agreed to be exposed to those risks. It is not the role of the researcher to decide whether someone else is exposed to risks.
The principle functions similarly in the technology case. First, it directs us to pay attention to what information people have about the technology and whether they are in control of their exposure to it or not. Relatedly, and as we will explore in more detail in a later chapter, it encourages us to think about their privacy, for privacy is a means by which we protect our autonomy. My ability to control what others know about me, for instance, helps me limit the ways in which others may try to interfere with my informed choices and the development of my life plans. Thus, whereas the principle of welfare directs us to focus on all those values associated with welfare, the principle of autonomy directs us to focus on those values associated with autonomy: honesty, adequate information, good reasoning and decision-making, control over our bodies and personal information, and individual freedom more generally.
An important note must be made about respecting autonomy. The principle directs us to respect the informed choices of autonomous people. But not everyone is autonomous all the time. If I have been systematically deceived and am therefore making choices based on false information, then I am not making autonomous choices. Relatedly, even if I have accurate information, if I am (for instance) drunk then I may not be reasoning autonomously. Thus, autonomy can be imperiled both by false information and by various barriers to good reasoning. Respecting autonomy is not about just letting people do whatever they want. However, we must also be careful not to judge someone as non-autonomous just because they are making choices we don’t agree with. Autonomous choice does not require making the best or right choice either.
Like the principle of welfare, the principle of autonomy also provides a few guidelines for thinking through technology’s impact on autonomy:
- Ensure all those exposed to the risks of a technological product or project are informed
- Ensure all those exposed to the risks of a technological product or project have voluntarily agreed to be exposed
- Introduction of experimental technologies should be approved by democratically legitimized bodies
- Those involved in the “experiment” should be able to withdraw from the experiment
- Reduce, as far as possible, any exposure to risk for people who have not been informed and/or have not approved the product or project
Once again, these guidelines are intentionally general so as to be widely applicable. It should also be noted that all these guidelines (including those for the other principles) are often statements of what is ideal. As a matter of fact, it may be impossible to “ensure all those exposed to the risks of a technological product or project are informed” since, unlike medical experiments, technologies are often simply “tested on society” via their introduction as consumer products or their implementation as public works projects. We simply do not know everyone who will be affected and therefore cannot inform them in advance. That does not mean the guideline is irrelevant, though. By setting the ideal that all will be informed, we know what we should aim for, even if we cannot ever quite reach it. This is certainly better than not attempting to meet the guideline at all.
3.3. Thinking about Fairness
The Principle of Autonomy is fundamentally about protecting individuals, for it is only individuals that have autonomy. The Principle of Welfare is about both individuals and groups, since we can meaningfully speak of both individual and social welfare. The Principle of Fairness, on the other hand, is fundamentally about groups and not individuals. Whereas the Principle of Autonomy tempers our pursuit of social welfare by telling us not to sacrifice the individual, the Principle of Fairness tempers it by telling us to focus not just on the social benefits but also on how they are distributed across society. In this way, both autonomy and fairness help us avoid the potential for a “tyranny of the majority” that characterized medical research before the introduction of the ethical principles.
The Principle of Fairness, then, directs us to focus on values like fairness, equality, democracy, and the protection of the vulnerable. It is unfair for a select few people to gain all the benefits of a new technology whilst everyone else pays all the costs. More positively, technology has the potential to promote, and has in fact promoted, greater equality in society and has “democratized” society: various social goods that were once available only to nobility are now available to everyone, while the creation of new means of communication has enhanced people’s ability to have a say in how their society progresses.
Once we have identified the potential for unfair distribution or potentially affected vulnerable populations, the Principle of Fairness offers us the following sorts of guidelines for moving forward:
- Avoid imposing risks on vulnerable populations or, at the least, offer additional protections and/or compensation if they are exposed
- Avoid imposing risks on specific populations without any corresponding benefit to that same population
- Distribute the benefits and risks of a technology fairly across all those affected by the technology
- Avoid imposing irreversible risks on society
4. The Principles & Ethical Design
As should now be clear, these ethical principles apply to our thinking about technology at multiple levels. As policymakers or members of society, we can use the principles to think about existing or emerging technologies and how they are implemented in society. As designers of technology, we can also use the principles to help guide the design and implementation of technology in society.
When using the principles to think about design, we should think broadly. That means considering the immediate risks the technological artifact may pose – for instance, the potential health effects of some new building material. But it also means considering the wider social effects of the technology’s use in society – for instance, the potential health effects of the spread of misinformation on social media, itself made ubiquitous by the easy accessibility of smartphones. This is why keeping in mind the social context of technology and technological mediation is important: both help expand our thinking beyond the technological artifact to the techno-social system as a whole.
We should thus endeavor to speculate, within reason, about the potential long-term effects of a technology on existing social institutions and systems. For instance, the principle of welfare directs us to think not only about the welfare effects of the technology itself but also about how the technology may affect existing social institutions and practices that protect or promote welfare. The spread of misinformation via social media both directly affects health, a welfare value, and indirectly affects existing health-oriented institutions by, for instance, increasing the burden on health resources, healthcare workers, and the health system as a whole. Similarly, the rise of consumer surveillance – Ring doorbells, etc. – raises immediate privacy concerns with the device itself but also has a long-term effect on how we think about privacy in society. And, finally, existing institutions that promote fairness, such as democratic elections, can be imperiled both directly and indirectly by technology.
In short, then, a major value of these principles is their wide applicability. They are simply statements of nearly universal ethical principles, with application to thinking about technology. As such, in applying them, we should keep in mind both the narrow applications and the wider applications.
Check Your Understanding
After successfully completing this chapter, you should be able to answer all the following questions:
- What is a risk? What are the 2 ways we can lack knowledge of technological risks?
- What is an experimental technology? What are the 2 key features of experimental technologies?
- What does it mean for the principles of engineering ethics to function as ethical design constraints?
- What does the principle of welfare focus on? What values does it direct us to focus on and how does it tell us to think about technology?
- How does the “doing/allowing” debate relate to the principle of welfare? What are the 2 main positions (or ‘theses’) related to the debate?
- What does the principle of autonomy focus on? What values does it direct us to focus on and how does it tell us to think about technology?
- What does the principle of fairness focus on? What values does it direct us to focus on and how does it tell us to think about technology?
- What are the various “levels” of analysis that the principles can be applied to? What are some questions that we might ask at each level?
- This discussion comes from John Danaher’s “New Technologies as Social Experiments: An Ethical Framework,” Philosophical Disquisitions (blog), March 15, 2016. https://philosophicaldisquisitions.blogspot.com/2016/03/new-technologies-as-social-experiments.html