6 The Welfare Principle

Every engineering and computing code of ethics begins with a requirement to hold paramount, protect, and/or promote the welfare of the public. As such, it makes sense that we would begin our investigation of the Principles of Ethical Design with the Welfare Principle.

The Welfare Principle: Technology should be designed, and technologists should focus their expertise, so as to (a) improve the welfare of the public; and (b) avoid designing technology that diminishes or risks diminishing the public welfare

We’ve stated the principle a bit differently here than in our introduction to the principles, but the meaning is the same: technology and technology professionals should do good and avoid doing harm. That sounds reasonably straightforward, and it is to some extent, but to put ourselves in the best position to apply the Welfare Principle we do need to dig a bit deeper into certain aspects of it. In particular, we need to get a grip on what we mean by welfare and examine a few different ways we may incorporate the welfare principle into our moral deliberations.

1. Welfare as Capabilities

There is a vast body of philosophical, psychological, and economic work aimed at understanding and conceptualizing welfare. But, at its core, the concept of welfare denotes the sorts of things that are good for a living being: the sorts of things that make a life go better. All else being equal, a longer life is a better life; all else being equal, a joyful life is a better life. Thus, lifespan and joy (or pleasure) are part of our welfare. Welfare, as we are going to understand it, is identical to well-being, and you may find that term easier to keep in mind.

We will not dig into the details of the debates over precisely what is and is not a part of our welfare and how to fundamentally conceptualize it. Instead, we will adopt a certain conception of welfare that has been used in philosophy, psychology, and economics and is especially helpful when we want to be able to measure welfare across populations. On this view, we understand welfare as encompassing a set of capabilities: the fundamental things that all persons and animals should be able to do to achieve their well-being. Part of the justification for this focus is that, fundamentally, all technology is aimed at making us more capable: technology supplements and strengthens our capabilities to make our lives better.

The capabilities approach to welfare has been used in a variety of fields, including development economics, where it became the basis of measurement indices like the Human Development Index used by the United Nations and other global organizations. The philosopher Martha Nussbaum developed and outlined a core set of capabilities which we can use as our base understanding of welfare.[1] Importantly, this list is not exhaustive but should instead give a general idea of the sorts of ways that our welfare can be improved or diminished. As always, context may direct us to ignore some of these or to include ones not on the list. Indeed, one of the other original advocates of the capabilities approach (alongside Nussbaum), the economist Amartya Sen, refused to provide a list of core capabilities precisely because he believed context mattered. Nevertheless, it can be educational and helpful to start with a list, which is reproduced below.

Core Capabilities

  • Life: Being able to live to the end of a human life of normal length; not dying prematurely, or before one’s life is so reduced as to be not worth living.
  • Health: Being able to have good health, both mental and physical; to be adequately nourished; to have adequate shelter.
  • Sensation & Cognition: Being able to use the senses, to imagine, think, and reason—and to do these things in a “truly human” way, a way informed and cultivated by an adequate education, including, but by no means limited to, literacy and basic mathematical and scientific training. Being able to have pleasurable experiences and to avoid non-beneficial pain.
  • Emotions: Being able to have attachments to things and people outside ourselves; to love those who love and care for us, to grieve at their absence; in general, to love, to grieve, to experience longing, gratitude, and justified anger. Not having one’s emotional development blighted by fear and anxiety. Supporting this capability means supporting forms of human association that can be shown to be crucial in their development.
  • Affiliation: Being able to live with and toward others, to recognize and show concern for other humans, to engage in various forms of social interaction; to be able to imagine the situation of another.
  • Nature: Being able to live with concern for and in relation to animals, plants, and the natural world.
  • Play: Being able to laugh, to play, to enjoy recreational activities and leisure.

As the list should indicate, the capabilities approach to welfare suggests that human lives are better when they are longer, healthier, lived in concert with others, and involve ample opportunity for leisure and recreation. These are simply the core capabilities, and certainly different people will have different preferences which may change their list of capabilities, but these core capabilities are both broad enough and stable enough that they should more or less apply to everyone. It should also be noted that nowhere does “making money” or something similar appear. This is not because money and wealth don’t contribute to a good life; they certainly do. But to the extent they do contribute to a good life, they do so by impacting one or more of these capabilities. Money can buy you happiness in the form of facilitating recreation or affiliation or education, etc. But the point is that we care about those things for their own sake, while we only care about money for the sake of those and other things.

It is also important to note that this list was developed specifically in the context of humans. Nonetheless, all the capabilities also apply to many non-human animals, although the precise details will change. For instance, animals’ lives go better when they can use their senses and cognition, but of course they do so in different ways and so will need different support to be able to do so.

We can use this list to assess technologies and guide our design decisions. For instance, as noted in a previous chapter, highway construction will typically alter the social and natural landscape. In evaluating a highway proposal, or deciding how best to complete the project, the list of capabilities encourages us to consider whether the highway may divide up a neighborhood, thereby diminishing people’s ability to affiliate with one another; or perhaps it may destroy a natural area that many people used for communing with nature or recreation. On the other hand, the highway could enhance social interaction by allowing people easier access to certain areas of town. Whatever the case may be, the point is that we can use the list of capabilities as a partial guide to thinking through the potential positive and negative implications of a technology, project, or design.

This list can become especially helpful when combined with conducting a stakeholder analysis as discussed in the previous chapter. While certain capabilities may be obvious – health, for instance – the list encourages us to think about how (for example) our health app may affect people’s emotions and ability to engage with nature. But it really becomes useful when we consider the effects of a technology on indirect stakeholders. For instance, we could ask: how might non-users’ ability to form meaningful relationships be impacted by the increased use of social media?

2. Applying The Welfare Principle

Now that we have an idea of what we are looking for when we are assessing welfare, we can turn to a few different ways we may apply the welfare principle in our moral deliberations. Importantly, as we will see, because the welfare principle includes two potentially conflicting duties, there are distinct ways of applying the principle. Part of our goal, then, will be to understand that there are multiple reasonable ways to apply the welfare principle and consider a situation. Thus, disagreement should be expected and that is precisely why it is best to incorporate multiple perspectives into our deliberations.

2.1. Doing & Allowing

The Welfare Principle includes both a duty to improve welfare and a duty to not diminish welfare. But, as we noted when we introduced the concept of risk: no technological project worth doing is without the potential to diminish welfare (or have some other bad outcome). Thus, it is common that technological designs and projects will have the potential to both improve welfare and diminish it. A fundamental issue, then, is how to compare those two potentialities. And this question moves us into an important distinction in theoretical ethics: the difference between doing something harmful and allowing something harmful to occur. In brief, this is known as the doing/allowing distinction. The central question in this debate is: is it morally worse to do something that causes harm than it is to allow something to happen that results in harm?

To better grapple with the question, consider the following two scenarios totally unrelated to technology:

Scenario 1: You are taking a stroll in a park when you notice a small child drowning in the pond. You decide not to provide any assistance, and as a result the child dies.

Scenario 2: You are taking a stroll in a park when you notice a small child exiting the pond after nearly drowning. You decide to kick the child in the head, sending him back into the pond where he drowns and dies.

The Welfare Principle includes two distinct duties, but only one of those duties is violated in each scenario. In scenario 1, you do not cause the child’s death – you do nothing. Rather, you allow the child to die because you fail to provide assistance. Thus, in this situation you have not violated your duty to avoid diminishing welfare but you have violated your duty to promote welfare: you failed to provide a benefit that you could have provided.

Scenario 2 is just the opposite: the child would not have drowned except for your actions and so you did do something that caused the child’s death. Thus, in scenario 2 you do violate your duty to avoid diminishing welfare but you have not violated your duty to promote welfare.

Now, importantly, the welfare principle is violated in both scenarios and so both scenarios involve some degree of wrongdoing. But the wrongdoing occurs for different reasons in each scenario. And it is an open question whether doing something that causes harm is, all else being equal, morally worse than allowing something to happen that results in harm: Is the action in scenario 2 more wrong than the (in)action in scenario 1?

One perspective is to focus wholly on the outcome: both scenarios result in a child’s death. Since the outcomes are the same in both, and you had the power to stop that death in both, they are equally wrong. This perspective sits at the foundation of some common ways of reasoning: people talk of doing a “cost-benefit analysis” or making a “pros and cons list” to decide what to do. Typically implicit in these approaches is the view that any cost you may introduce can be offset by an equal benefit. Thus, the assumption is that there is nothing additionally wrong with causing harm.

But many people reject this sort of reasoning in other situations. For instance, many of you probably believe that the person in scenario 2 is morally worse than the person in scenario 1: the act of kicking the child back into the pond, thereby causing his death, is worse than simply failing to save him. This sort of asymmetry is also captured in our laws: it is not illegal to refuse to give money to someone who asks for it, but it is illegal to take money from someone. In the first case, you have merely failed to provide a benefit, while in the second case you have caused harm.

The point of this discussion is not to settle the issue – there is a reason it has been debated for thousands of years. Rather, it is to make clear that there are these two different perspectives, that each have something to be said for them, but that they will also often produce conflicting answers. To see how that may come about in the context of technology we can turn to two common approaches used in assessing risks and benefits.

2.2. The Net Benefits Approach

One common approach to risk assessment is to simply seek the optimal balance between risks and benefits, without regard to the source of those risks. Thus, this net benefits approach adopts the perspective that there is no moral difference between doing and allowing. It is perfectly acceptable for a technology to potentially cause harm so long as it provides greater benefits (i.e., the benefits are either more certain or of greater magnitude).

Net Benefits Approach: Technology should be aimed at providing the greatest potential net benefit – all potential benefits minus all risks

The net benefits approach is just a more generalized form of the cost-benefit analysis that is common in business. Whereas the cost-benefit analysis converts all risks and benefits to currency (by, for instance, putting a price on human life) the net benefits approach doesn’t require converting everything to currency. It can accept some looseness in how we compare things.

To properly apply the net benefits approach, we must start with a reasonably comprehensive stakeholder analysis identifying the relevant stakeholders and the benefits and risks for each. Recalling our definition of risk (which also applies to benefits), this involves having some idea of both the probability of the harm or benefit coming about as well as the magnitude of it, if it were to come about.

Once we have this loose list, we start making comparisons. If we are evaluating an existing technology, we may simply be trying to understand whether the benefits do in fact outweigh the risks. If we are instead evaluating various design options, we would be looking for the option with the greatest net benefit. One general means of doing this is to try to identify benefits and risks that are reasonably similar: we tend to be more accepting of trading risks and benefits when they appear “commensurable” – when they appear to share a common measure. So, for instance, if our project has a 50% chance of improving the health of 100 people but also a 50% chance of diminishing their health to a similar degree, then we may be more willing to ignore the fact that if the risks come about, we caused the diminishment. On the other hand, when the risks and benefits are quite different – 100 people will have greater access to recreational activities, but 50 people will likely suffer from chronic health issues – then it can become much more difficult to justify trading the benefits for the risks.
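The comparison just described is, at bottom, an expected-value calculation. The sketch below is a hypothetical illustration, not a method given in this chapter: it assumes we can assign rough probabilities and magnitudes (on some common scale agreed upon during the stakeholder analysis) to each benefit and risk. All names and numbers are invented for the example.

```python
# Minimal sketch of a net benefits comparison. Every value here is a
# hypothetical illustration; magnitudes are on a rough common scale.

def expected_value(effects):
    """Sum of probability-weighted magnitudes (risk = probability x magnitude)."""
    return sum(p * m for p, m in effects)

def net_benefit(benefits, risks):
    """All potential benefits minus all risks, probability-weighted."""
    return expected_value(benefits) - expected_value(risks)

# Hypothetical highway project: (probability, magnitude) pairs
benefits = [(0.9, 50),   # easier access to parts of town
            (0.5, 20)]   # shorter commutes
risks    = [(0.5, 30),   # neighborhood divided, affiliation diminished
            (0.2, 40)]   # natural recreation area destroyed

print(net_benefit(benefits, risks))  # → 32.0 (expected benefits 55 minus risks 23)
```

Note that a risk with an unknown probability simply cannot enter this calculation at all, which is exactly the blind spot the net benefits approach is criticized for.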

One major shortcoming of the net benefits approach is that it is not capable of handling uncertainty: if we really have no idea of the probability or magnitude of a hazard, then it cannot go into our “calculation”. This fact has historically been used to simply exclude potential catastrophic harms because “we don’t know how likely they are”. Thus, a core blind spot of the net benefits approach is uncertain risks. This does not mean it should never be used; it simply means we should keep its shortcomings in mind when we do use it.

2.3. The Precautionary Approach

When we are particularly concerned about uncertain harms, especially potentially catastrophic ones, then we may accept that there is a moral difference between doing and allowing harm and side against potentially causing new harms, even if that means not providing a benefit (or providing less of a benefit). The Precautionary Approach accepts that there is a moral difference between doing and allowing, that causing harm is worse than merely allowing it, and that therefore we should be keenly focused on those new risks our project may introduce, especially the uncertain and potentially catastrophic ones.

The Precautionary Approach: Technology should be designed to introduce the least amount of new risk, even if that means providing a lesser overall benefit

Whereas the net benefits approach effectively ignores uncertain risks, the precautionary approach tells us to focus on them, even at the expense of other risks and benefits. This means that in applying the precautionary approach one typically need only appeal to the uncertain and potentially catastrophic risks involved; there is no need to have a full accounting of the potential benefits. In this way, the precautionary approach’s blind spot is the benefits of technology, including those benefits that involve reducing the chance of some sort of catastrophe that may happen absent the technology.

In the engineering context, the Precautionary Approach was first advocated (under different descriptions) as far back as 1729 and used as a ‘factor of safety’ in decision-making. It was justified then, as now, on the basis that engineers have a social responsibility to protect the public from exposure to harm or risk, especially when those members of the public have no say in the matter. Some have suggested that had Lund used (or stuck to) the Precautionary Approach in determining whether to launch the Challenger, it never would have launched. While there was some evidence of issues with the O-Rings at certain temperatures, the bigger issue was that we simply did not know all the mechanical properties of the O-Rings because they had not been suitably tested. And so, in that way, we were in a position of uncertainty regarding a potentially catastrophic risk: the explosion of the space shuttle.

2.4. Combining the Approaches

The Net Benefits and Precautionary Approaches sit at opposite extremes of a continuum of ways to think about the effects of technology on welfare. However, both approaches are common in engineering, and so it is valuable to see how they can interact with each other rather than simply treating them as completely opposed.

One way to combine the approaches is to let the precautionary approach guide your response to uncertainty while still using a net benefits approach. Rather than simply ignoring uncertain risks, as the traditional net benefits approach would do, one could artificially assign a high probability to any new risks that have a high magnitude – that are potentially catastrophic. The precautionary approach is centrally concerned with those high magnitude risks and so other, lower magnitude risks, can still be handled in the standard way for the net benefits approach. The effect of this approach would simply be to weigh down the new risks side of things, thereby accepting that it is worse to introduce new (especially catastrophic) harms but recognizing that some new harms can reasonably be outweighed by potential benefits, as the net benefits approach suggests.
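Under the assumptions just stated, this combined approach can be sketched as a small modification of an expected-value calculation: any new risk whose magnitude exceeds a catastrophe threshold has its probability inflated to a pessimistic floor. The threshold, floor, and numbers below are all illustrative assumptions, not values drawn from the chapter.

```python
# Sketch of a precaution-weighted net benefits calculation.
# CATASTROPHE_THRESHOLD and PESSIMISTIC_FLOOR are hypothetical choices
# a design team would have to justify for their own context.

CATASTROPHE_THRESHOLD = 100   # magnitude above which a new risk counts as catastrophic
PESSIMISTIC_FLOOR = 0.5       # assume at least this probability for such risks

def weighted_risk(effects):
    """Expected risk, with catastrophic risks artificially weighted up."""
    total = 0.0
    for p, m in effects:
        if m >= CATASTROPHE_THRESHOLD:
            p = max(p, PESSIMISTIC_FLOOR)  # weigh down the new-risks side
        total += p * m
    return total

def precautionary_net_benefit(benefits, risks):
    """Net benefits, but with precaution applied to the risk side."""
    return sum(p * m for p, m in benefits) - weighted_risk(risks)

# A small chance of catastrophe now dominates the calculation:
benefits = [(0.9, 50)]        # a solid, likely benefit
risks = [(0.01, 200)]         # uncertain but catastrophic
print(precautionary_net_benefit(benefits, risks))  # → -55.0 (45 minus 100)
```

The effect is exactly as described above: ordinary risks are still traded off against benefits in the standard way, while catastrophic new risks are treated pessimistically rather than ignored.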

3. Conclusion

In this chapter, we have investigated the Welfare Principle to understand what welfare is and how we may use the principle in our moral deliberations. Importantly, welfare is not the only thing we care about and so the effects technology may have on welfare are not the only effects we may care about. But as is made clear by the various codes of ethics, protecting and promoting welfare are the core responsibilities of all technology professionals. And so, understanding how to think about welfare, how to assess technology to determine its welfare impacts, and how to make design decisions supported by the principle of welfare are fundamental to being a technology professional.

  1. Martha Nussbaum (2011). Creating Capabilities: The Human Development Approach. Cambridge, MA: Harvard University Press. Pp. 33-34.



The Primacy of the Public by Marcus Schultz-Bergin is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.