8 The Fairness Principle

In the United States (and the statistics are similar elsewhere), women are 73 percent more likely than men to be seriously injured in frontal car accidents. This is despite the fact that, on average, men drive more miles and engage in riskier driving (speeding, not wearing a seatbelt, driving drunk). There are multiple reasons for this, including that women are more likely to drive smaller vehicles and are more likely to be driving the struck vehicle. But there is another reason: vehicle safety testing doesn’t (fully) account for women. Until just recently, all crash test dummies were modeled after the average male. Now, some safety testing uses “female” crash test dummies (although their use is not required), but these dummies are simply smaller and lighter versions of the male model: they do not account for differences in body shape and center of gravity. Moreover, they don’t even represent the “average” female in terms of height (as the male dummy does for males). Finally, even when these sub-par female crash test dummies are used, they are never placed in the driver’s seat of the vehicle.

As Astrid Linder, an engineer and the director of traffic safety at the Swedish National Road and Transport Research Institute, has put it: “We need a reality check. And that reality consists of females and males – therefore we need representations of females and males in our tests.”[1]

The above example presents one way that considerations of fairness can enter engineering and technology design. One straightforward complaint we can make about the safety testing problem is that it is unfair to women: it puts them at greater risk of harm than men for no justifiable reason. Indeed, the history of engineering and technological design abounds with examples of male bias that result in females receiving fewer benefits or being exposed to more harm simply due to their sex. Similar things can be said, in the US and Western Europe at least, about a bias toward white people.

But we can also see fairness at play in other ways. The net benefits approach, previously discussed, is an aggregationist approach to social decision-making. This means it is insensitive to the distribution of harms and benefits, as it simply aims to maximize the net benefit. Although in many cases this may correspond to a roughly equal (or otherwise fair) distribution of harms and benefits across those affected, it certainly does not have to, and nothing about the approach guarantees it will. Consider a typical public works decision, such as where to site a new fossil fuel-based power plant. To keep costs of electricity delivery low for everyone, it is better to place the power plant closer to population centers. But, of course, to reduce the health effects of pollution, it is better to place the power plant farther from population centers. The typical result? The power plant is placed on the outskirts of population centers, often near poor and disenfranchised neighborhoods. This can perhaps be justified on a net benefits calculation – it keeps the plant reasonably close and so reduces electricity costs for everyone while imposing extra risks only on a smaller subset of those affected. But is it fair to impose those extra costs on that subset of people? Especially if they are already worse off than many others who will benefit from the cheaper electricity? Almost certainly not. It fails an ancient test of fairness, offered most clearly by the Greek philosopher Aristotle: we must treat like cases alike. Put in more modern terms, a decision of this sort fails to regard all affected as equals, even though they are equals in the relevant sense.

We can further see fairness at play in situations where technology creates or (more often) reinforces existing social biases. For instance, it is widely agreed that a basic requirement of fairness in society is the guarantee of equal opportunity: everyone should have a fair shot at making the best of their lives. One instantiation of this guarantee is equal opportunity in employment. And while this has never been fully realized due to social discrimination, it is being newly threatened by hiring algorithms that scan job applications for various key words or features, automatically disposing of those that don’t make the cut. While such algorithms hold the promise of reducing bias by taking biased humans out of (at least part of) the decision procedure, as Amazon’s recent scrapping of its hiring algorithm indicates, these tools can also reassert bias. Amazon’s algorithm taught itself to be biased against women and systematically excluded resumes that included the word “women’s,” as when an applicant indicated that she was a “women’s chess club captain”. Similarly, these algorithms often learn to downgrade applicants from certain colleges, and the pattern indicates this happens especially with Historically Black Colleges and Universities (HBCUs) and all-women’s colleges.[2]

Finally, technological designs can have more diffuse impacts on social fairness because of various side effects, some predictable and some not. For instance, social media was almost certainly not originally created to support government surveillance or the spread of misinformation. But it has certainly had that effect, and both of those activities threaten social fairness. Government surveillance is often used to target particular populations, which is most clear when it is in the hands of authoritarian regimes who are explicitly and actively targeting dissident groups. But it happens among putatively liberal democratic regimes as well. Similarly, the spread of misinformation has drastically affected the public political landscape in ways that threaten basic democratic procedures, procedures that are often held up as both required by and supportive of social fairness.

All of this is to say that fairness is a wide-ranging consideration and that engineering and technology interact with it in myriad ways. This, combined with the basic requirement contained in many engineering codes of ethics to treat all people fairly, supports the importance of focusing on fairness in the design, implementation, and social decision-making surrounding technology. And we can capture the essence of this obligation in the following way:

The Fairness Principle: Technology should be designed to protect and promote a fair distribution of the benefits and burdens of social life as well as promote and protect the rights and institutions concerned with social fairness.

1. Fairness & Capabilities

As with welfare and autonomy, we can capture a variety of the values associated with fairness in terms of human capabilities. Importantly, these do not exhaust the ways that technology can impact fairness and therefore the ways in which we should account for fairness in technological decision-making. Nonetheless, they provide a helpful starting point for considering fairness.

1.1. Social Participation

Human beings are social animals. To live a flourishing human life, then, it is important to be able to participate in one’s society. Moreover, a fundamental commitment of liberal democracies (and one which even authoritarian regimes tend to pay lip service to) is that social decision-making should be collective decision-making: the laws and rules which we must follow should, in some sense, be “the will of the people”. We can capture this idea under the heading of social participation.

In broad strokes, social participation is the ability to participate in or (at least) have one’s interests taken into account in social decision-making. This capability entails key liberties and rights such as voting rights and the freedom of association. It also undergirds commitments to free speech (for how else can we ensure interests are heard?) and general protections against violence and intimidation aimed at limiting people’s social power. Finally, more broadly, we can say that this capability enjoins us to work toward every individual having equal social power – a roughly equal ability to participate in collective decision-making.

The decision to site a polluting power plant near a poor neighborhood plays on existing unequal social power and reinforces it, thus infringing on this capability. Similarly, but in a more diffuse way, failure to be inclusive in design – for instance by not equally incorporating female body types into vehicle safety testing – excludes the interests of certain populations. Finally, technologies that directly or indirectly lead to denials of political power threaten social participation.

1.2. Equal Opportunity

Some basic rights, such as voting rights, must be distributed equally to be fair. But other forms of social engagement can be fair even if not equal. Employment is one such example: we don’t think fair employment requires that everyone be equally able to have the same job. Instead, with these forms of social participation, the emphasis is on equal opportunity (rather than, we might say, equal treatment). Equal opportunity is an ability to pursue social opportunities and receive the social benefits of those opportunities on an equal footing with others. It does not involve a guarantee of the benefits, but rather a guarantee of the opportunity to compete for those benefits as an equal.

Equal opportunity, understood as a capability, entails and supports protections against discrimination based on factors like race, sex, and religion (among others). It also supports a general freedom from exploitation and the fair distribution of the benefits and burdens of social projects.

Equal opportunity is threatened when we employ methods of decision-making that discriminate on inappropriate grounds – such as biased hiring algorithms. It is also threatened when we impose excess burdens on certain people without proper justification, as when we site a polluting power plant near certain populations without any appropriate form of compensation.

1.3. Reparative Justice

A final capability is reparative justice: the ability to pursue recompense for unjustified harms or rights violations. It is impossible for any society to wholly prevent harms and rights violations, but it is unfair for a society to deny some the chance to have those harms or rights violations appropriately compensated for (whatever that might mean in the context). This applies to recompense sought from other individuals or private entities (such as corporations) as well as from the government itself.

The importance of reparative justice is one major reason why it is inappropriate for engineers or other technologists to simply make socially momentous decisions without appropriate input from affected populations. For in many cases, once such decisions are made and implementation is underway (or complete), those who believe they have been unjustifiably harmed have no recourse. This is, of course, partly a social problem with most existing legal systems, but given that fact, engineers should respond accordingly.

A key implication of reparative justice is that it should be reasonably easy to identify who (or what) is responsible for harms and rights violations in order to hold them accountable for potentially unjust actions. Such accountability can be supported by technologies that promote transparency and other forms of readily available accounting, and threatened by technologies that promote anonymity or allow people to skirt responsibility.

2. Applying the Fairness Principle

So far, most of our examples of fairness in design have been negative: ways in which technological design or implementation may threaten fairness. That, of course, is largely because technology professionals tend to be overly optimistic about their designs and thus miss potential negative effects, and because threats to social fairness can be quite devastating.

However, before examining a couple of specific approaches we can use in applying the fairness principle, it is worth considering some of the various ways that technology or design processes can be used to enhance fairness. Some of these examples will help in understanding how to apply the approaches discussed below.

  • Diversity in Design. Bias, often implicit, festers in homogeneous thinking spaces. Thus, designers can reduce the risk of bias or other forms of unfairness in their designs by building diverse design teams.
  • Inclusive Design. Accounting for the variety of possible users – and not just those who you want to use the technology – can promote a fairer distribution of benefits and burdens and help to avoid unintentional bias. This means working to understand possible users and to identify and correct any potential accessibility gaps. This commitment to accessibility can extend beyond specific designs to focus on how technology can promote accessibility to existing social institutions, as when designers overhaul a city government website to ensure it is accessible to people using various devices and modalities.
  • Promote Transparency. Technologies and their enabling elements (such as user manuals) can be designed to be up front about the potential effects or interactions they may have, ensuring people can make informed decisions and thereby promoting fairness. Similarly, technologies can be used to make existing social institutions more transparent.
  • Promote Trust in Institutions. This is more of a goal than a method, but social trust is deeply connected to social stability and social justice. The more trust people have in each other and in their social institutions, the more likely they are to support policies or other social activities that promote social fairness. Thus, technology can play a role (for instance via transparency) in promoting trust through design and implementation, as well as through indirect side effects.

With some ideas of how technology can be used to promote fairness, we can now turn to a few methods we might use to apply the principle of fairness to help us make justifiable decisions and evaluations.

2.1. Fair Distribution

To compensate for the net benefits approach’s insensitivity to distribution, we can employ a method that specifically focuses on the distribution of benefits and risks rather than their total sum. Here, our goal is not to maximize the benefits minus the risks, but rather to ensure that everyone affected has roughly the same level of benefits minus risks. This may be accomplished through strict equality – everyone affected receives exactly the same benefits and is exposed to exactly the same risks. But fairness does not always require strict equality, and that sort of strict equality may be impossible to achieve in practice.

So, instead, we search for a reasonable level of fairness in the distribution. This might be somewhat unequal, but should certainly not be radically unequal. Additionally, this may be accomplished not by ensuring the exact same benefits and risks for all, but rather by ensuring that the “net benefit” for each person affected is roughly equivalent. That may mean that some get benefits that others do not while some are exposed to risks that others are not. So long as each person’s benefits minus risks are roughly equivalent, that may be fair.

Finally, given the realities of technological decision-making, especially when it comes to large public works, we may have to accept that the technology or project itself will not result in a fair distribution of benefits and burdens. Nonetheless, we may be able to get nearer to fairness through other processes, such as compensation. For instance, there is likely no practical location to place a power plant that won’t differentially impact some populations. Thus, it is unlikely that the benefits and burdens of the power plant itself will be fairly distributed. However, one potential solution is to provide additional benefits, not related to the technology, to those who assume greater risk. This is, for instance, one justification for the Alaska Permanent Fund, which provides a regular cash payout to all Alaskan residents, funded by oil and gas revenues. Here, the thought might be that the oil and gas harvested there benefit (nearly) the entire United States but impose unique risks on Alaskan residents. Since oil and natural gas must be harvested where they are found, and so we cannot wholly choose the location, the second-best option is to provide an additional benefit to the residents who are exposed to the additional risk (or set of risks).
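To make the contrast with a purely aggregative net benefits calculation concrete, here is a minimal sketch in Python. The siting options, group names, and numbers are hypothetical illustrations rather than data from any actual case, and the “spread” measure is just one simple way of checking how even a distribution is.

# A minimal sketch (hypothetical numbers) contrasting a pure net benefits
# test with a check on how evenly the net benefits are distributed.
siting_options = {
    "near_poor_neighborhood": {    # cheapest option overall
        "poor_neighborhood": {"benefit": 4, "risk": 9},
        "wealthy_suburb": {"benefit": 7, "risk": 1},
        "city_center": {"benefit": 8, "risk": 1},
    },
    "remote_site": {               # costlier, but burdens are shared
        "poor_neighborhood": {"benefit": 5, "risk": 4},
        "wealthy_suburb": {"benefit": 4, "risk": 2},
        "city_center": {"benefit": 5, "risk": 3},
    },
}

def net_benefit_per_group(groups):
    """Benefit minus risk for each affected group."""
    return {name: g["benefit"] - g["risk"] for name, g in groups.items()}

def total_net_benefit(groups):
    """The aggregate figure a pure net benefits test maximizes."""
    return sum(net_benefit_per_group(groups).values())

def spread(groups):
    """Gap between the best- and worst-off groups: a rough evenness check."""
    values = net_benefit_per_group(groups).values()
    return max(values) - min(values)

for option, groups in siting_options.items():
    print(option, "total:", total_net_benefit(groups), "spread:", spread(groups))

On these made-up numbers, the near-site option wins on total net benefit but leaves the poor neighborhood far worse off than everyone else, while the remote site sacrifices some total benefit for a distribution in which each group’s net position is roughly equivalent. Compensation, as in the Alaska example, can be thought of as adding a benefit for the worst-off group until the spread shrinks to an acceptable level.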

It should be noted here that inclusive design is one means by which we can promote a fair distribution. It is common that certain populations – perhaps those with disabilities or with lesser technological access – will receive fewer benefits from a technology, even as they may be exposed to the same level of risk. This is often because the design hasn’t really taken them into account. Thus, engaging in inclusive design aims to “up the benefits” for those populations, while largely leaving the risks stable.

2.2. Prioritizing the Vulnerable

Why do so many cities have poor east ends?[3]

An interesting social phenomenon, observed across the globe, is that the “poor” neighborhoods of nearly every major city are found on its eastern side. There are various explanations for this pattern, and it is likely that multiple factors are at play. However, one explanation, which accounts for about 20% of observed neighborhood segregation in the UK, attributes the problem to air pollution. Most cities are found in the middle latitudes of the Earth, where the prevailing winds are westerlies, meaning they blow toward the east. As cities developed, especially during the industrial revolution, smoke, smog, and other air pollution tended to drift east. Those with sufficient means to move did so, and thus the wealthy generally fled eastern neighborhoods. From that point on, due to other social factors – like redlining and local school taxes in the United States – the poverty tended to remain, even after the pollution dissipated. In this way, a side effect of technological development contributed to a major instance of social injustice.

Although we said above that it is rare that we can distribute the benefits and burdens of a technology equally, it seemed at least reasonable to suggest that if we could, then we should. But is that right? Would it, in fact, be fair to distribute benefits and burdens of a given project or technology equally? While it might seem obvious that it would be, there is an important reason to doubt it.

Technology does not exist in a social vacuum. When we introduce some new public works project or new device, it is going to be embedded in a social world that is already filled with inequities and injustices. There are already some people who have a lot more than other people. There are also some people who, due to their circumstances, simply need more than others to attain the same level of welfare or life satisfaction. For instance, we don’t think parking availability should be distributed equally; instead, to be fair, we think that those who have certain physical disabilities should have priority in parking. In that sort of circumstance, we are not saying that equality is impossible, but rather that fairness specifically demands inequality of a certain sort: it demands that we prioritize the interests or needs of particular vulnerable populations.

The same idea can be applied to technology. Since it does not exist in a social vacuum, but rather comes into being in an already unfair world, distributing its benefits and burdens equally may actually exacerbate existing social inequities. Put more positively, aiming to distribute the benefits and burdens of the new technology unequally, with a focus on providing more to those most in need, can enhance overall social fairness. It can become a way, then, for a specific technology to not only avoid being a threat to fairness but actually become a tool for fairness.

Returning to our example of the power plant, a focus on prioritizing the vulnerable would identify the residents of the poor and politically powerless neighborhoods as a relevant vulnerable population. Based on that, it would likely not recommend siting the power plant near them, even if doing so were compensated for in some other way. Rather, it would recommend ensuring that this population is heavily involved in the decision-making procedure if possible and, if that is not possible, would certainly rule out siting the power plant near them.

And so, more broadly, when we focus on prioritizing the vulnerable, our task is to identify which of our stakeholders constitute vulnerable populations and then privilege their interests in our decision-making. This need not mean they necessarily win the day, although often we may focus on “protecting the vulnerable” and therefore make decisions that are not supported by (for instance) a standard net benefits calculation but which ensure the already vulnerable are not made any worse off. One way to incorporate this priority into a net benefits decision procedure is simply to give extra weight to the benefits and risks borne by vulnerable populations. Thus, much like incorporating the precautionary approach into a net benefits procedure, we can use the understanding of risk as the product of magnitude and probability to build ethical thinking into an otherwise (seemingly) quantitative procedure.
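The following is a minimal sketch, with made-up stakeholders and numbers, of how such weighting might be built into a net benefits calculation. The weight of 3 for vulnerable groups is purely an assumption for illustration; the only element taken from the text is treating risk as the product of magnitude and probability.

# Hypothetical weighted net benefits calculation. Risk is computed as
# magnitude times probability; harms to vulnerable groups count extra.
VULNERABLE_WEIGHT = 3  # assumed extra moral weight, not a standard figure

stakeholders = [
    # (name, benefit, harm magnitude, harm probability, vulnerable?)
    ("poor_neighborhood_residents", 2.0, 10.0, 0.4, True),
    ("suburban_commuters", 5.0, 2.0, 0.1, False),
    ("downtown_businesses", 6.0, 1.0, 0.1, False),
]

def weighted_net_benefit(stakeholders):
    total = 0.0
    for name, benefit, magnitude, probability, vulnerable in stakeholders:
        risk = magnitude * probability          # risk = magnitude x probability
        weight = VULNERABLE_WEIGHT if vulnerable else 1
        total += benefit - weight * risk        # harms to the vulnerable count more
    return total

print(weighted_net_benefit(stakeholders))

With the weight set to 1 this reduces to a standard net benefits calculation; raising the weight makes options that concentrate risk on already vulnerable groups progressively harder to justify, which is the intended effect of prioritizing the vulnerable.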

2.3. Institutional Protection

A final approach is perhaps just a modification of the Prioritizing the Vulnerable approach that can be used in special circumstances. Society contains a variety of social and political institutions that protect or promote social fairness more broadly. For instance, the institution of voting in democratic countries protects the basic notion of democratic fairness: the idea that all those affected by a political decision should, in some sense, have a say in that decision. Most contemporary nations also include various safety net institutions: food, money, or healthcare guarantees or supplements. More broadly, social institutions like schools and hospitals contribute to social fairness by providing the basic goods necessary to even have a fair opportunity in society.

In all these cases, we can understand the principle of fairness as obligating technology professionals to at least protect those institutions. Thus, if a technological decision may have a large impact on any of these sorts of institutions, we are obligated to give special consideration to them. Like prioritizing the vulnerable, this need not mean we neglect other stakeholders or interests, but it does at least mean that we consider the interests associated with these institutions as especially weighty. Put another way, we can interpret this approach as functioning like the precautionary approach, but with a focus on social fairness rather than welfare.

A final note about this approach: while the core focus is on protecting institutions, and so we can identify those institutions as stakeholders, we also want to think more broadly about how technologies may influence society in ways that could affect those institutions. So, for instance, we know that social trust – the general degree to which members of a society see each other as trustworthy and cooperative – strongly correlates with economic and social equality. Those societies with high measures of social trust, such as the Scandinavian countries, also tend to have low inequality and high welfare overall. Alternatively, countries with low social trust, such as the United States, tend to have much higher inequality and large populations with low welfare. Similar things can be said about government trust, which is a measure of citizens’ beliefs that their government will tend to act in the interests of the people. These are just two examples of general social values that technology can interact with and that, at least down the line, have an impact on institutions concerned with social fairness. This interaction can already be seen in how social media use correlates with declines in social and government trust, which has in turn contributed to a lack of government action on important social matters, further eroding trust in a sort of vicious feedback loop.


  1. The information contained in this example, including the quotation, comes from: Sophie Putka (2021), “Why Are There No Crash Test Dummies That Represent Average Women?” Discover Magazine. https://www.discovermagazine.com/technology/why-are-there-no-crash-test-dummies-that-represent-average-women
  2. Reuters (2018), “Amazon scrapped a secret AI recruitment tool that showed bias against women,” Venture Beat. https://venturebeat.com/2018/10/10/amazon-scrapped-a-secret-ai-recruitment-tool-that-showed-bias-against-women/
  3. Stephan Heblich, Alex Trew, & Yanos Zylberberg (2016). “East Side Story: Historical Pollution and Persistent Neighborhood Sorting,” Spatial Economics Research Centre Discussion Paper 208.

License


The Primacy of the Public by Marcus Schultz-Bergin is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.