5 Introduction to The Principles of Ethical Design

In the previous chapter we drew an analogy between medical experimentation and engineering/technology. The basic idea was that many engineering projects and technologies can be understood as social experiments where the public (or at least some members of it) are the research subjects. This means, as noted at the end of the last chapter, that engineers are human experimenters. And given that we think other human researchers – such as medical researchers – must conduct their research responsibly and ethically, this implies that technology professionals should also aim to be responsible researchers.

In this and the following chapters we will develop and investigate a moral framework composed of a set of moral principles that can be used to engage in ethical technological design and to assess technologies from a social perspective. And to begin that investigation, we can lean even further into the analogy between medical research and engineering. This is because there is already a well-developed framework for responsible medical research, a framework which can largely carry over to engineering.

1. The Principles of Biomedical Ethics

Starting around 1945, both to provide a legal basis for punishing Nazi doctors for their heinous experiments and to try to prevent such abuses in the future, the medical community began to establish ethical guidelines for medical research. In 1978, the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research released The Belmont Report. Soon after, the lead author of that report, philosopher Tom Beauchamp, together with James Childress, developed The Principles of Biomedical Ethics. According to both The Belmont Report and The Principles of Biomedical Ethics, medical professionals (and especially researchers) have four general ethical obligations, captured in three principles:

The Principle of Beneficence: Researchers are obligated both to “do good” by furthering the interests of society and to “do no harm” by avoiding causing unnecessary harm to research subjects and by minimizing exposure to risk

The Principle of Respect for Persons: Researchers are obligated to protect and promote the right of all persons to make their own informed choices and develop their own life plans. This principle is sometimes called the Principle of Respect for Autonomy because autonomy just means “deciding for oneself according to one’s own values and goals”

The Principle of Justice: Researchers are obligated to treat everyone fairly by, among other things, avoiding exploitation of vulnerable populations, establishing a fair distribution of the risks and benefits of the research, and adhering to fair procedures

As we will see in the next section, each of these principles can largely carry over to the technology context. But, before we make that move, it is worth saying a few things about how these principles are supposed to work.

First, note that the Principle of Beneficence contains two obligations: to provide a benefit where possible and to avoid causing harm or exposing subjects to risk. Clearly, these two obligations can and do come into conflict: the only way to produce a new pharmaceutical that can benefit society by treating some disease is to trial it on research subjects, who are thereby exposed to various risks.

Following from that, we should emphasize that we are working with moral principles, not moral rules. Each principle provides general guidance but does not clearly specify how to respond in specific situations. That is obvious in the case of beneficence, since it can conflict with itself. But conflicts also arise between principles: in the clinical setting there is often a conflict between benefitting a patient and respecting that patient’s informed choices, such as when a patient refuses lifesaving treatment.

Finally, because these are general principles and not rules, this principlist framework suggests that our moral deliberations should be a matter of weighing and balancing the competing considerations. We’ll say a bit more about how to do this in the engineering case later, but the basic idea is this: each principle applies and matters in every case, and the context provides the resources for weighting the principles relative to one another, which in turn supplies the reasons we need to justify our decision. There is no fixed weighting: in one context respect for persons will be weightier than beneficence, while in another the ordering may be flipped. Because decisions made within this framework are not algorithmic, and because they require considering all morally relevant features, they should be made collaboratively whenever possible, with multiple people presenting multiple perspectives.
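To make the idea of context-dependent weighting concrete, here is a minimal, purely illustrative sketch. It is not part of the principlist framework and is no substitute for the collaborative judgment just described; the numeric weights and scores are hypothetical placeholders standing in for contextual judgments.

```python
# Illustrative only: the numbers stand in for contextual judgments,
# which in practice come from deliberation, not from a formula.

PRINCIPLES = ["beneficence", "respect_for_persons", "justice"]

def weigh(option_scores, context_weights):
    """Combine how well an option honors each principle (option_scores)
    with how much each principle matters in this context (context_weights)."""
    return sum(option_scores[p] * context_weights[p] for p in PRINCIPLES)

# Two ways of responding to a patient who refuses a beneficial treatment.
treat_anyway = {"beneficence": 0.9, "respect_for_persons": 0.1, "justice": 0.5}
respect_refusal = {"beneficence": 0.3, "respect_for_persons": 0.9, "justice": 0.5}

# A context where respecting informed choice is judged weightier than benefit...
routine_care = {"beneficence": 0.4, "respect_for_persons": 0.9, "justice": 0.5}
# ...versus a (hypothetical) context where benefit is judged weightier.
acute_emergency = {"beneficence": 0.9, "respect_for_persons": 0.3, "justice": 0.5}

for label, context in [("routine care", routine_care), ("acute emergency", acute_emergency)]:
    options = [("treat anyway", treat_anyway), ("respect refusal", respect_refusal)]
    best = max(options, key=lambda pair: weigh(pair[1], context))
    print(label, "favors:", best[0])
```

The point of the sketch is the flip itself: nothing in the framework fixes the weights, so the very same options can come out ranked differently in different contexts, and the weights themselves must be defended with reasons rather than computed.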

2. The Principles of Ethical Design

We can now complete our analogy and bring the Principles of Biomedical Ethics into the context of technological design. In doing so, we will change the names of the principles to make them a bit clearer, but we will still end up with four main obligations captured under three principles:

The Welfare Principle: Technology professionals are obligated to (a) use their expertise to improve human, animal, and environmental welfare; and (b) avoid diminishing or risking the diminishment of human, animal, or environmental welfare

The Autonomy Principle: Technology professionals are obligated to protect and promote the right of all persons to make their own informed choices and develop their own life plans

The Fairness Principle: Technology professionals are obligated to ensure a fair distribution of the benefits and burdens of technology and to promote and protect rights and institutions concerned with social fairness

In simpler terms, we can say that technology professionals should do good, avoid causing harm, respect individual choice, and treat everyone fairly. Moreover, each of these general principles applies to technology itself: technology should be designed to improve lives and make society fairer, but with a clear recognition that individuals should still control their own lives and that technology’s potential to make things worse must be guarded against.

We noted earlier that these principles are not strict rules, that they may conflict, and that the context will largely determine how we weigh and balance the various considerations the principles generate. Now that we have established these principles as principles of ethical design, we can connect the process of weighing and balancing more directly to technical design. From a technical perspective, we often must deal with various design constraints: general considerations that must be accommodated for the design to be successful. Buildings must be able to resist winds up to a certain speed, bridges must be able to support at least a certain load, and so on. But design constraints often stand in tension with one another: the bridge must meet a certain strength threshold, but it must not exceed a weight threshold, and improving strength tends to increase weight while decreasing weight tends to decrease strength. The fact that these constraints are in tension does not mean you can simply ignore one of them: you cannot just say “screw it, don’t worry about the strength of this bridge”. Rather, you have to find a way to balance the competing considerations.

In other cases, when we aren’t dealing with design constraints related to safety, we may also focus on weighing the competing considerations: perhaps the device should be both simple to navigate and robust in application. These two considerations are likely in tension, since the more a device can do the less simple it tends to be. Neither has a clear threshold that must be met (unlike the safety constraints), but neither can be wholly neglected either. Instead, we may decide to weight simplicity above robustness, judging it more important that the device be usable by more people than that a smaller number be able to use it in more ways.

Since engineers are typically already trained to juggle multiple competing design constraints, it may be helpful to understand the ethical principles as laying out ethical design constraints: general ethical considerations that must be accommodated for the design to be successful.
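As a rough illustration of that framing, the sketch below treats a design as having to satisfy hard constraints (like the safety thresholds above) while weighted soft criteria (like simplicity versus robustness, and the ethical principles themselves) are balanced against one another. This is not a method the chapter prescribes: the constraint names, thresholds, and weights are all hypothetical placeholders, and in practice the weighting would come from the kind of contextual, collaborative deliberation described earlier.

```python
# Hypothetical sketch of technical and ethical design constraints together:
# hard constraints set thresholds that must be met; soft criteria are
# weighted and balanced. All names, thresholds, and weights are invented.

def viable(design, hard_constraints):
    """A design is only acceptable if every hard constraint is satisfied."""
    return all(check(design) for check in hard_constraints)

def balance(design, soft_criteria):
    """Weighted sum of soft criteria; the weights encode contextual
    judgments about how competing considerations should be balanced."""
    return sum(weight * measure(design) for measure, weight in soft_criteria)

# Hard (safety-style) constraints: thresholds that cannot be traded away.
hard_constraints = [
    lambda d: d["surface_temp_C"] <= 45,        # must not burn its users
    lambda d: d["battery_fault_rate"] <= 1e-6,  # failure risk kept below a ceiling
]

# Soft criteria: simplicity weighted above robustness, with the ethical
# principles entering as further weighted considerations.
soft_criteria = [
    (lambda d: d["simplicity"], 0.5),
    (lambda d: d["robustness"], 0.3),
    (lambda d: d["welfare"], 0.8),
    (lambda d: d["autonomy"], 0.7),
    (lambda d: d["fairness"], 0.7),
]

candidate = {
    "surface_temp_C": 41, "battery_fault_rate": 5e-7,
    "simplicity": 0.8, "robustness": 0.5,
    "welfare": 0.7, "autonomy": 0.9, "fairness": 0.6,
}

if viable(candidate, hard_constraints):
    print("passes hard constraints; balance score:", round(balance(candidate, soft_criteria), 2))
else:
    print("rejected: a hard constraint is violated")
```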

3. The Principles, Professional Ethics, and the Social Context of Technology

So far, we have suggested that the Principles of Ethical Design make sense when we adopt the perspective of engineering and technology as a form of human experimentation. But there are other ways to justify and support the principles. For instance, we previously discussed the idea of professionalism and the existence of professional codes of ethics for engineers and other technology professionals. Importantly, for those codes of ethics to have real “bite”, they cannot simply be a list of made-up rules; they need some sort of underlying justification as well. Moreover, there needs to be some more fundamental basis by which we can evaluate and update codes of ethics. The Principles of Ethical Design can provide that justification and basis.

Consider the first Fundamental Canon of the NSPE Code of Ethics: Engineers shall “hold paramount the safety, health, and welfare of the public.” That canon is quite clearly supported by the Welfare Principle. In fact, since “holding paramount” encompasses both protecting and promoting, both obligations that fit under the Welfare Principle are included in Canon 1. Similarly, Canon 5 requires engineers to “avoid deceptive acts” without qualification. So, even if deceiving a person may promote their welfare, it is impermissible. Why would that be the case? Well, because besides the Welfare Principle we have the Autonomy Principle, which tells us to respect people as informed choosers. Deceiving someone not only denies them potentially important information, it provides them with false information that may distort their ability to choose and construct their own life plans. In short, each canon can be connected to at least one principle, and so the principles justify the code of ethics.

This connection between the ethical principles and codes of ethics is even clearer with the Association for Computing Machinery (ACM) Code of Ethics and Professional Conduct. That code is presented as a set of ethical principles which map neatly onto the Principles of Ethical Design:

ACM Ethical Principle → Principle of Ethical Design

  • Contribute to society and to human well-being, acknowledging that all people are stakeholders in computing → The Welfare Principle (although both Autonomy and Fairness are mentioned in the explanation as well)
  • Avoid harm → The Welfare Principle
  • Be honest and trustworthy → The Autonomy Principle
  • Be fair and take action not to discriminate → The Fairness Principle
  • Respect privacy → The Autonomy Principle
  • Honor confidentiality → The Autonomy Principle

Although it may now be obvious, we can also note that recognition of the social context and value-ladenness of technology supports the use of the Principles of Ethical Design as well. Once we accept that technology both influences and is influenced by society and that all technology involves consideration of values, it becomes clear that we need some justifiable basis for determining what sorts of values to consider in designing and evaluating technology. The principles provide just that basis by guiding us to focus on values like welfare, autonomy, and fairness.

4. Deploying the Principles: The Stakeholder Analysis

In the remaining chapters of this book we will explore each of the ethical principles in more detail. But to be able to do that, we need to know whom to take into account and how. A central tool we can use to do this is the Stakeholder Analysis. The concept of a stakeholder analysis is common in business, but we use it differently in the context of technological design. After all, as the ACM Code of Ethics states, all people are stakeholders in computing, and we can say the same about other forms of technology. For our purposes, we care only about public stakeholders: as designers of the technology or members of a society that may embrace the technology, we don’t care about what the investors want; our goal is to make the best technology possible from a social value perspective.

For our use of the Stakeholder Analysis, we are centrally focused on identifying the various entities that will interact with and/or be affected by the technology. Most immediately, we think of the various people that may be affected. But we should also consider how non-human animals and natural environments may be affected. Thus, “local waterways” may be an appropriate stakeholder for a public engineering project if those waterways could be affected by that project. If we are thinking about how best to design and implement a wind farm, for instance, we will certainly want to include “migrating birds” as a stakeholder, as well as “local birds and bats” or simply “local flying wildlife”.

To get a fuller grasp on how to identify and consider stakeholders, it will be helpful to have a few more conceptual tools, which we consider below.

4.1. Direct & Indirect Stakeholders

When the ACM Code of Ethics says “all people are stakeholders in computing” it does not mean that literally every person uses or will use a computing device. That simply is not true. Rather, what it means is that even those who do not interact with the technology are nevertheless affected by it. And by the mere fact of being affected, they are stakeholders.

We can call those who will, in some way, interact with (or “use”) the technology in question direct stakeholders. If we are considering a consumer product, the most obvious direct stakeholders would be users, although (as we will see in more detail shortly) we may want to distinguish between different types of users. If we are considering a large-scale public project, like a bridge, then we may have drivers and pedestrians, both of whom use the bridge but do so in different ways. Nonetheless, both would be direct stakeholders.

Typically, it is not difficult to keep direct stakeholders in mind. However, many technologies have wide-ranging effects: people who never use a specific bridge may nevertheless have their view altered by its existence. Those who are or may be affected by a technology without (deliberately) interacting with it are indirect stakeholders. It is often more difficult to identify indirect stakeholders and keep them in mind in design, but it is in considering indirect stakeholders that we can often best identify the fuller social context and impacts of technology. For instance, it is easy to identify the users of social media as (direct) stakeholders; but the existence and widespread use of social media has, in various ways, impacted everyone. The power of social media to allow misinformation to spread, for instance, affects people who have never used social media. Similarly, those who do not interact with a technology due to lack of access are often affected by its relative empowerment of those who do have the privilege of interacting. When most people own an automobile, the social world comes to be built around automobiles, and this results in social disadvantages for those who don’t or can’t access an automobile. One of the core concerns with human enhancement technologies (such as gene editing for height or health) is precisely that they will disadvantage those who cannot or choose not to use the technologies. Thus, it is vitally important to keep indirect stakeholders in mind; it is in doing so that we come to understand the technology more fully and to design it better.

4.2. Stakeholder Roles

Once a technology is out “in the wild”, particular, named individuals will be affected by it. However, when we are designing the technology, we don’t know precisely which individuals will be interacting with it or otherwise affected by it. That, of course, relates back to the idea that engineering projects are experiments that lack a control group. Moreover, since most technology affects large numbers of people or other relevant entities, if we were to try to identify individuals, our list of stakeholders would be massively unwieldy!

So, in identifying stakeholders, our goal is not to identify specific individuals who may interact with or be impacted by the technology, but rather to identify various ways individuals or groups may relate to the technology. In this way, we focus on roles, not individuals. There is no simple way to identify or carve up roles – context is always key – but the idea is to make important distinctions that track stakeholder concerns. For instance, rather than simply identifying all users of a device as “users”, we may distinguish between “core users” and “power users”; or, if we are talking about some sort of social content app, we could distinguish between “content creators” and “content consumers”. Both use the app, but they do so for different purposes and therefore will have different concerns, interests, preferences, and so on. Because of this, they will be affected by our design decisions in different ways.

Although context and deliberation will always be needed to determine how best to divide up stakeholder roles, there are some general examples we can use as a guide:

Direct Stakeholders:

  • Demographically diverse users: It may be important to differentiate users based on certain demographics, such as race, ethnicity, gender, sexual orientation, physical and cognitive ability/limitations, geographic region, educational attainment, and socio-economic status
  • Populations that deserve special consideration: Considerations of fairness tell us to prioritize and specially consider certain populations, and these populations are often affected by technology differently from others. So we may want to distinctly identify children, the elderly, victims of intimate partner violence, families living in poverty, the incarcerated, indigenous peoples, the homeless, religious minorities, and non-technology users

Indirect Stakeholders:

  • Bystanders: Those who are around the users of technology, such as pedestrians near an automobile
  • Human data points: Those who are passively surveilled or otherwise incorporated as data into a system, such as individuals captured on video technology (video doorbells, CCTV, etc.)
  • Social Institutions: Associations or social processes that are altered by technology, such as democratic elections, public interest groups, infrastructure
  • Those lacking access: Anyone who is unable to use the technology. We may distinguish between different roles here: those who cannot use due to cost, due to education, due to lack of infrastructure, or due to institutional censorship

This list is by no means exhaustive. And, importantly, although we distinguished between general types of direct and indirect stakeholders, some of the distinctions may carry over. For instance, we may want to distinguish between different types of bystanders based on demographic diversity. At the end of the day, context and deliberation will determine how to adequately identify and distinguish various stakeholders. The key consideration in deciding whether to break down a single stakeholder role into multiple roles is whether they will be differentially impacted by the technology: will the welfare of two different user groups be differently affected? What about their ability to form and act on their life plans? Will the technology differently impact their ability to engage in society? If so, then we should treat them as distinct stakeholder roles. If not, then we are probably safe to lump them together.
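For readers who like to see that bookkeeping made explicit, here is one possible, entirely hypothetical way to record stakeholder roles in code, using the wind farm example from earlier in the chapter. The StakeholderRole structure and should_split helper are illustrative inventions, not a standard format: they simply mirror the questions above about whether two groups are differentially impacted along the welfare, autonomy, or fairness dimensions.

```python
from dataclasses import dataclass, field

# Hypothetical record of a stakeholder analysis; the fields mirror the
# questions in the text, not any established methodology or format.

@dataclass
class StakeholderRole:
    name: str
    direct: bool                                  # interacts with the technology?
    concerns: list = field(default_factory=list)  # welfare / autonomy / fairness notes

def should_split(role_a: StakeholderRole, role_b: StakeholderRole) -> bool:
    """Treat two candidate roles as distinct if the technology impacts
    their welfare, autonomy, or fairness concerns differently."""
    return set(role_a.concerns) != set(role_b.concerns)

# A partial analysis for the wind farm example.
wind_farm_stakeholders = [
    StakeholderRole("plant operators and maintenance crews", direct=True,
                    concerns=["welfare: workplace safety at height"]),
    StakeholderRole("nearby residents", direct=False,
                    concerns=["welfare: noise and altered views",
                              "fairness: say in siting decisions"]),
    StakeholderRole("migrating birds", direct=False,
                    concerns=["welfare: collision risk along flyways"]),
    StakeholderRole("local birds and bats", direct=False,
                    concerns=["welfare: habitat disruption near turbines"]),
]

# Migrating birds and local wildlife face different impacts, so we keep
# them as separate roles rather than lumping them together.
print(should_split(wind_farm_stakeholders[2], wind_farm_stakeholders[3]))  # True
```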

License

The Primacy of the Public by Marcus Schultz-Bergin is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.