7 The Autonomy Principle

We often lie to children for their own good. We also often force them to do things that we believe are “good for them” even though they don’t want to do them. It is understandable why we do this with children – we generally think they are not in a good position to understand their long-term good. We also may think that they are not in a good position to reason about what they ought to do because they are too biased toward immediate gratification or otherwise easily swayed by irrelevant or inappropriate considerations.

But we also typically balk at doing the same sorts of things to adults. Although we may occasionally tell someone close to us a “white lie” to avoid (emotionally) hurting them, by and large we think we should be honest with adults and let them make their own decisions, at least if those decisions mostly affect only themselves. In this way, we tend to view adults, but not children, as autonomous agents: beings capable of reasoning about what is good for them and acting on their own beliefs about what is good for them, even when they are mistaken. Children, on the other hand, are generally held to not yet be autonomous agents, and so it is permissible to force goals and actions on them to the extent that we have a better understanding of their own good and are acting for its sake.

Thus far, everything said here simply amounts to general observations of social morality. But the same considerations also apply to technology professionals. While technology professionals do have a general obligation to protect and promote welfare, we don’t typically think they should do that at all cost. Technology professionals are not dictators who can simply determine for us what is good and then enforce it on us against our will. This fact is clearly captured in a number of NSPE Fundamental Canons, including the requirement to “avoid deceptive acts” (Canon 5) and “issue public statements only in an objective and truthful manner” (Canon 3).

We can also put the issue more broadly by considering some of what we often like most about certain technologies and what we fear most about some others. One of the major reasons the personal automobile became so popular was the sense of “individual freedom” it provided: it greatly expanded the sorts of opportunities available to individuals. Notably, it also greatly increased everyday risks of harm, but many believe that trade-off is worth it. On the other hand, one of the major concerns with the rise of various surveillance technologies is that they greatly diminish our freedom, even though they may (potentially) improve our overall safety. These concerns for individual freedom are just instances of a general concern for our autonomy: our ability to be the authors of our own lives.

The Autonomy Principle: Technology should be designed, and technologists should direct their expertise, to protect and promote the right of all persons to make their own informed choices and to develop their own life plans.

1. Autonomy and Capabilities

In the previous chapter, we introduced The Capabilities Approach as a framework for understanding welfare. There, we outlined a list of “core capabilities”: the things all humans (and, on a separate list, non-human animals) should be able to do and be in order to live worthwhile lives. Importantly, however, a worthwhile life is about more than merely having certain basic needs taken care of. It is also about autonomously engaging in the world: about reflecting on what is important to you and choosing to act on that basis. Thus, both Nussbaum and Sen (the originators of The Capabilities Approach) take seriously the importance of autonomy as well.

To see how autonomy fits in, just consider one part of the capabilities list: adequate nourishment (a part of “Health”). Clearly it is important that people have access to adequate nutrition, but there are instances where a person may choose to forgo nutrition for other purposes, such as religious or dietary fasting. In both cases, it would be incorrect to assert that something bad has occurred when these people, in these instances, lack adequate nutrition. Instead, if they freely chose to do this, understanding their choice, then we need not and should not intervene to ensure they have adequate nutrition. To so intervene would, in fact, be wrong of us: we would be inappropriately interfering with their autonomous choices.

Thus, we can add to our general list of capabilities the capability of autonomy or effective agency, understanding this to include both the capacity to reflect on one’s life plans and the ability to carry out one’s decisions about one’s own life. Importantly, emphasizing that autonomy is both a capacity and an ability highlights that it can be frustrated in at least two distinct ways. On the one hand, people or social institutions can interfere with a person’s autonomy by interfering with their ability to carry out their decisions. This is what we do if we (for instance) force someone who is fasting for religious reasons to eat. On the other hand, a person’s capacity for autonomy may be frustrated by both internal and external factors. Externally, we may frustrate someone’s autonomy by denying them access to education or indoctrinating them, both of which interfere with their ability to engage in reflection. Similarly, lying to a person or not providing them sufficient information also interferes with their capacity. Internally, a person’s capacity for autonomous thought may be permanently or temporarily hampered by lacking the relevant skills or due to various mental conditions. A person who has been indoctrinated or denied education all their life may simply be incapable (or much less capable) of autonomous thought. A person under the influence of drugs may be temporarily impaired. And a person in the grips of a mental illness such as addiction or severe anxiety may also be impaired in their ability to reflect in the way necessary to truly be autonomous.

It is because autonomy can be frustrated by both internal and external factors that respecting autonomy is a matter of both protecting and promoting it. We protect it by not interfering and by supporting institutions that prevent interference. But we also promote it by providing education and information and supporting institutions that help people develop their capacities. Thus, just as the welfare principle included both a negative component – don’t diminish or risk diminishing welfare – and a positive component – promote welfare – so too does the autonomy principle.

Examples of Technology Impacting Autonomy

Technologies can impact autonomy in many ways, as can technology professionals in their professional capacities. Below, we list some of the more common ways autonomy may be impacted, noting that the “impact” here could be positive or negative.

  • The provision, or lack of provision, of information about technologies that people will interact with. For products, this may come in the form of warnings and user manuals; for public projects, it may come in the form of civic engagement.
  • Increasing or diminishing the range of options available to people. Often, technology enhances our autonomy by providing us with new life opportunities, as the internet so clearly did. But it can also close off opportunities, as when a public works project eliminates a green space previously used for recreational activities.
  • Typically, the use of persuasive design is seen as a threat to autonomy, although this depends on how it is executed. If it truly engages a person in thinking about their own actions, then it may enhance their autonomy. If, on the other hand, it is more a form of coercion or subliminal influence, thereby bypassing a person’s thinking process, then it restricts their autonomy.
  • Technology can influence our ability to own and protect our (real or digital) property. Since ownership is one means by which we pursue our autonomous choices, when our ability to secure property is enhanced, our autonomy is enhanced. When it becomes easier for others to steal our property, or harder for us to attain it, our autonomy is reduced.

2. Autonomy and Privacy

As the example box above indicates, technology can affect autonomy in a variety of ways. But there is one special connection between many technologies and autonomy that is worth exploring in detail: privacy. Privacy is a key component of autonomy for a variety of reasons, most notably because controlling who has access to our personal information is a central means by which we protect our ability to pursue our own life plans. When everyone knows everyone’s business, or has access to everyone’s personal information, then it becomes much more likely that people will interfere with each other’s business. More generally, we can understand autonomy as fundamentally about control over one’s own life. Part of that control, then, is control over what other people do and do not know about us.

It should be clear that not all sub-disciplines of engineering, and not all types of technology, engage with privacy. However, in the contemporary digital world, many of the most influential and emerging technologies certainly do. As such, the protection of privacy is a core feature of many technology disciplines, as is evident from a review of the codes of ethics for software engineers and other professionals who regularly engage with matters that can affect privacy.

The philosophical literature on privacy is vast, with many competing conceptions of precisely what privacy is and why it matters. Rather than wading into these debates, we will adopt a simplified version of a recently developed theory that has been quite successful in assisting in the technological design process as well as assessing social policy related to privacy.

Privacy: The socially embedded and contextually sensitive ability to control access to, and uses of, places, bodies, and personal information.[1]

This definition captures the simple idea that privacy has something to do with control of information, but it qualifies that idea by recognizing that privacy practices are embedded in various social contexts. Put another way, whether and to what extent you have control over the flow of your personal information is largely a matter of socially determined reasonable expectations. We can capture this idea further by noting three important factors to consider in determining whether an activity constitutes a privacy violation:

  1. Contextual Access: Although there are some things about ourselves that we may want no one to know, typically we exercise control over our information by limiting access. For instance, I have no problem sharing my medical history with my physician, but I would feel violated if I found out that my physician was then sharing this information with her friends at the bar. Thus, control of information is contextual: it is not so much about the information itself as it is about the relationship between the information and the situation or people involved.
  2. Reasonable Expectations: Not all unwanted collections or uses of information are privacy violations. While, at its root, privacy is about individual control, that individual control must be harmonized with social expectations. Thus, whether something counts as a privacy violation is partly a function of whether someone legitimately had a “reasonable expectation” of privacy in that context. These reasonable expectations are often set by social dynamics and thus change over time.
  3. Informational Processing: Privacy is not merely a matter of controlling information about oneself. It is also about controlling what happens with information about oneself. In the age of Big Data, this is especially true, as the technology we now have allows for the processing of large swaths of data to identify emergent patterns. This can allow people or organizations to discover new information about a person that was not directly contained in the shared data. For instance, by sharing one’s running route over time, a system can determine when someone’s home is likely to be unoccupied, something that was not directly or intentionally shared.

Each of these three factors is relevant, to varying degrees in various contexts, to making determinations about privacy. For instance, these three factors come together to help us understand why “the data is already public” is not necessarily a defense against the claim of a privacy violation. Take the case of speaking on a cell phone in a public park. On one hand, it would be unreasonable to expect all passersby to totally block out whatever they can hear, and so one should expect that what one says is not wholly private. On the other hand, there doesn’t seem to be much at stake in random passersby hearing my side of the conversation, and so it is fair to say there is some degree of privacy here. But if we start changing the context – perhaps the government has installed surveillance systems around the park that can pick up all that people are saying on their phones – then a privacy violation may emerge, even though the information hasn’t changed. This is because a new entity (the government) is receiving our information. Furthermore, if we expect that the government has far greater ability than a random person on the street to process the information to uncover patterns, then our complaint may be more about the use or processing of the information than about the receiver of it. And so, in this sort of scenario, we can say that the reasonable expectation of privacy is not “no one listening whatsoever”, but something like “no government collection of information” or “no collection of information for the purposes of data mining”.

In a section below we will provide more details about how to identify relevant privacy norms and determine whether they have been or may be violated. For now, we can end our discussion of privacy with a few general ways in which technology and technology professionals may protect or violate privacy.

Examples of Technology Impacting Privacy

  • Creation and use of encryption tools permit appropriate transfers of information while limiting the risk of inappropriate transfers via hacking or other means.
  • Procedures for anonymizing collected data can protect privacy by eliminating the possibility that data processing will uncover new information about a person.
  • Various devices may surreptitiously collect information for marketing or product-improvement purposes.
  • The use of face shots, available online, to train facial recognition technology may constitute a new context, thereby raising the question of whether such use violates privacy.

3. Applying the Autonomy Principle

As with the Welfare Principle, the Autonomy Principle can be applied in multiple reasonable ways. Which we choose will partly be a matter of context – only some apply in some situations – but is also a matter of exercising appropriate judgment about the situation.

3.1. Informed Consent

Medical researchers, like technologists, are often faced with a conflict between promoting the social good and respecting individuals in their care. This occurs because the aim of medical research is to promote the social good – research is not about curing specific patients at the time – but in order to do that it must make use of identifiable individuals – the research subjects. Not too long ago – up, at least, until the 1980s – medical communities across the world took the “Omelas approach” to their research: they were willing to sacrifice their test subjects in all sorts of ways if doing so would potentially lead to a medical breakthrough that would be good for many more in society. Indeed, this is precisely how Nazi doctors justified their torturous experiments on Jewish and other captives. In the US, this sort of thinking was used to justify the long-term Tuskegee Syphilis Study, which involved lying to a population of African Americans – telling them they were being treated for syphilis when they were not – in order to understand the natural progression of syphilis, the knowledge of which could be quite beneficial to society.

This approach began to change after World War II but did not really get moving until the mid-1980s with the introduction of The Principle of Respect for Persons into medical ethics. Before that, medical ethics was effectively just a matter of beneficence (and, to a lesser degree, non-maleficence). Following on guidance from The Belmont Report, the United States and other nations began to introduce legal and professional requirements for informed consent of both research subjects and medical patients. Informed consent is now, at least in word if not in deed, a central part of medicine. And it can be used in similar fashion by technologists.

Informed Consent is a process for design and policymaking that heavily focuses on the autonomy of individuals affected. In brief, the idea is this: It is only permissible to expose people to risks if they have been adequately informed of those risks (and relevant benefits) and have provided clear consent to be exposed to those risks. In the medical case, this comes out in the form of research subjects or patients being told about the risks and benefits of the research or procedure and then signing a legal document confirming they understand the information and voluntarily consent to be exposed to the risks. Things may not be so streamlined in the engineering and technology case, but the idea remains. For instance, warning labels on products are a form of information provision and the voluntary purchasing of the product is a form of consent. We may, quite reasonably, question whether a person was provided with adequate information, adequately understood it, and/or whether their ‘consent’ was truly voluntary, but the basic pieces are all there.

As a form of social experimentation, technology often lacks a clearly defined group of experimental subjects. Unlike medical researchers, who are experimenting on identifiable people who have agreed to be part of the study, in many cases engineers cannot control who will interact with their technology. When they can, the application of informed consent is quite straightforward: simply provide a method by which people can be adequately informed of the risks associated with the technology before voluntarily agreeing to engage with it. But in situations where we cannot wholly control exposure, we may have to turn to other mechanisms for informed consent, such as democratic procedures. Handing over the “consent” decision to a city council that is taken to properly represent its constituency is a reasonable indirect form of informed consent. Of course, it is still deficient compared to the standard medical case, and so as we move further away from the standard case, we should reduce our confidence in the method as being decisive. For instance, if we must appeal to a city council decision as our “informed consent”, then we will very likely want to supplement our case with at least one other distinct justification. This is also because it is technology professionals who have the responsibility to respect autonomy and therefore to carry out the actions necessary to do so. Handing those activities over to others, especially non-professionals, could constitute a dereliction of duty if those others fail.

Regardless of the specific mechanisms we use to obtain informed consent, there are a few key elements that must be kept in mind (a brief sketch after the list shows one way a design team might check them):

  1. Adequate disclosure: The “informed” half of informed consent requires that those consenting don’t merely say “yes” but that they have all the relevant information and understand it. This imposes a responsibility on technology professionals to learn how to properly communicate with the general public in terms they understand and not merely “engineering-ese”.
  2. Voluntariness: You can get people to “consent” to almost anything if you hold a gun to their head. But that is not a morally appropriate form of consent. Real informed consent requires that those providing consent are doing so voluntarily – they have other legitimate options open to them.
  3. Consent Action: Consent is not mere acquiescence and so “silence” is not consent. Instead, consent requires an identifiable action. Precisely what that action is will vary and it need not be the core case of someone saying “I consent”, but the key is that you must be able to point to a specific act or set of acts that rightly count as the consent action(s).
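
To make these elements concrete, the following is a minimal sketch, in Python, of how a design team might record and audit a single consent event against the three elements. All names, fields, and the comprehension check are illustrative assumptions, not an established standard or library.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ConsentRecord:
    """Hypothetical record of one person's consent to a technology's risks."""
    disclosed_risks: List[str]       # risks actually communicated, in plain language
    understood: bool                 # e.g., confirmed via a brief comprehension check
    alternatives_available: bool     # the person had other legitimate options
    coerced: bool                    # any pressure that would undermine voluntariness
    consent_action: Optional[str]    # the identifiable act (signature, opt-in click, etc.)

def is_valid_consent(record: ConsentRecord, relevant_risks: List[str]) -> bool:
    """Check the three elements: adequate disclosure, voluntariness, a consent action."""
    adequate_disclosure = set(relevant_risks) <= set(record.disclosed_risks) and record.understood
    voluntary = record.alternatives_available and not record.coerced
    explicit_act = record.consent_action is not None   # silence is not consent
    return adequate_disclosure and voluntary and explicit_act

# Example: the user clicked "I agree" but no comprehension check was performed.
record = ConsentRecord(
    disclosed_risks=["data shared with third parties"],
    understood=False,
    alternatives_available=True,
    coerced=False,
    consent_action="clicked 'I agree'",
)
print(is_valid_consent(record, ["data shared with third parties"]))  # False: not truly informed
```

The point of the sketch is only that each of the three elements corresponds to a question a team can actually answer and document; it is not a substitute for the judgment those questions require.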

To apply informed consent, consider an app designed to promote healthy eating. It does this by manipulating the user in specific, targeted ways known to promote the right sorts of behaviors. In isolation, this seems to be a violation of autonomy, since the app attempts to get the user to engage in the relevant behaviors through known manipulative means. However, we may be able to justify such an app by informing users of the methods and requiring them to agree to those methods before the app begins working for them. Of course, whether that will work partly depends on whether the information is understandable to all potential users and whether we present it in a clear way, but in principle this could ensure our healthy eating app respects autonomy even as it manipulates the user to engage in behaviors they would otherwise avoid.

3.2. Contextual Integrity

Privacy as contextual integrity, as developed by Nissenbaum, provides us with a useful means of assessing situations to determine whether privacy has been or may be violated. It does this by helping us identify the reasonable expectations of privacy established by the social context; we call such an expectation a privacy norm. Before detailing the process, it is worth saying something briefly about the concept of a norm in general. Norms are simply standards of conduct. They are socially established, often simply by many people coordinating on the same behavior rather than by anyone making a ruling, although laws and other explicit rules can create norms. But whether something is a norm is more a matter of whether it is conformed to than of whether it has been ordered by some authority. Thus, in the United States, it is a norm to drive on the right side of the road (rather than the left) because nearly everyone does it. It is also the law, but it wouldn’t be a norm if the majority of people violated it. And so, a privacy norm is just a contextually sensitive standard of conduct relating to the flow of information.

To get us thinking about how to identify and make use of privacy norms, consider the following example of an existing privacy norm (in the United States at least):

A patient’s health information may be transmitted from the patient’s healthcare provider to a relevant colleague to help diagnose or treat the patient

This statement of the norm establishes who can share what with whom and why the sharing is acceptable. To put this in terms of our theory of privacy, we can say that a privacy norm is made up of 5 parameters, 3 of which fit under a single heading:

  1. The Actors: The people or entities involved in the transmission. This encompasses 3 parameters:
    • The Subject: The person(s) whose information is being transmitted
    • The Sender: The person(s)/entities transmitting the information
    • The Receiver: The person(s)/entities receiving the transmitted information
  2. The Information Type: The information being transferred
  3. The Transmission Principle: The constraints or limits on the flow or use of information

A complete privacy norm will detail each of these 5 elements, and a privacy violation occurs when an information flow departs from any of them. Notice how our example privacy norm above fits the taxonomy:

  1. The Actors
    1. Subject: “A patient”
    2. Sender: “the patient’s healthcare provider”
    3. Receiver: “provider’s colleague”
  2. Information type: “Patient’s health information”
  3. Transmission Principle: “to help diagnose or treat the patient”

Once we understand the privacy norm in question, which requires a satisfactory investigation of the social context(s), then we can adjudicate whether a privacy violation has occurred or whether one may occur through our design and implementation of a technology. For instance, if I learn that my healthcare provider shared my health information with someone other than a colleague, then my privacy has been violated; if they share it with a colleague, but not for the purpose of helping diagnose or treat me, then my privacy has been violated. In the first case, the receiver parameter has been violated while in the second the transmission principle has.
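
To make the five-parameter structure concrete, here is a minimal sketch in Python of how a privacy norm and a proposed information flow might be represented and compared. The class names, string matching, and example values are illustrative assumptions; Nissenbaum’s framework is not itself a piece of software, and real parameters are socially defined categories rather than strings to be matched exactly.

```python
from dataclasses import dataclass, asdict
from typing import List

@dataclass
class PrivacyNorm:
    """A contextual privacy norm: three actors, an information type, a transmission principle."""
    subject: str
    sender: str
    receiver: str
    information_type: str
    transmission_principle: str   # the constraint on the flow, e.g. its permitted purpose

@dataclass
class InformationFlow:
    """An actual or proposed transmission of information, described with the same parameters."""
    subject: str
    sender: str
    receiver: str
    information_type: str
    transmission_principle: str   # the purpose or condition under which the flow occurs

def violated_parameters(norm: PrivacyNorm, flow: InformationFlow) -> List[str]:
    """Return the parameters (if any) on which the flow departs from the norm."""
    norm_values, flow_values = asdict(norm), asdict(flow)
    return [p for p in norm_values if flow_values[p] != norm_values[p]]

# The healthcare norm from the text.
norm = PrivacyNorm(
    subject="the patient",
    sender="the patient's healthcare provider",
    receiver="a relevant colleague of the provider",
    information_type="the patient's health information",
    transmission_principle="to help diagnose or treat the patient",
)

# Sharing the same information with friends at the bar, for casual conversation.
bar_flow = InformationFlow(
    subject="the patient",
    sender="the patient's healthcare provider",
    receiver="the provider's friends at the bar",
    information_type="the patient's health information",
    transmission_principle="casual conversation",
)

print(violated_parameters(norm, bar_flow))  # ['receiver', 'transmission_principle']
```

On this representation, the three-step process described below amounts to filling in the norm from social research (steps 1 and 2) and then running the comparison (step 3).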

At this point it is worth flagging the centrality of the transmission principle to analyzing potential cases of privacy violations. While privacy can be violated when the wrong person sends information or when the wrong person receives information, in contemporary contexts potential violations often come down to whether the transmission principle has been violated. It is most directly the transmission principle that is violated when, for instance, organizations or governments use machine learning to process large data sets and, in the process, discover new information about people. This is true even if the organization is an appropriate receiver of the information in other contexts.

Historically, one of the more common transmission principles was informed consent: we may permit one person to transmit our information to another person so long as they inform us and receive our consent. And so, in this way, there is a close link between the practice of informed consent and the protection of privacy. However, most contemporary matters of privacy are more complex. Even in cases where it appears we are giving consent – accepting terms of service, for instance – it is quite far-fetched to say we are providing our informed consent. Rather, it seems the general practice of compiling massive terms of service (or privacy policies) and then requiring a user to click “I agree” prior to being able to use the service amounts, more or less, to a transmission principle of “agreeing to use the service”, rather than informed consent.

To bring this discussion back around, we can identify a three-step process for determining whether there has been or will be a privacy violation:

  1. Discover the relevant privacy norm(s): This is necessarily a social matter since norms are necessarily social. Often we must review similar situations, get a read on how people think about the context, examine any applicable laws, etc.
  2. Map the privacy norms to the 5 parameters: Take the social scientific research used to identify the privacy norm and reconstruct it in terms of actors, information type, and transmission principle.
  3. Compare the norm to the situation to determine whether the situation conforms to all 5 parameters or not.

At this point, if we have established that there is indeed a (potential) privacy violation then we may simply stop there and take that information into the design process. This assumes, of course, that we can simply avoid the violation and that the privacy norm is worth adhering to. Alternatively, we may judge that although there is a violation, the privacy norm is out of date or otherwise problematic given new information and so we think the violation is justified. In that case, our task is to provide a (strong) justification for the violation, typically one that appeals to other considerations of autonomy or fairness (rather than mere welfare, although welfare can work in some circumstances).

Returning to the earlier example of the health app, we may conclude that we can avoid informing the user and getting their consent because it is a social norm that “self-help” apps collect and use personal information to manipulate us for our own good. So long as that is the purpose of the collection and use of the information, then there is no privacy violation (assuming, as we are here, that this is an acceptable transmission principle). Alternatively, we may accept that merely agreeing to use the app (or agreeing to the unread terms of service) is sufficient to protect privacy and thus stick to something like that without having to worry about whether the information is truly understood.

3.3. The Net Benefits & Precautionary Approaches

A final note should be made about how autonomy can be handled in the approaches discussed in the chapter on the Welfare Principle. The general view is that since our core requirement is to respect or protect autonomy (and privacy), we cannot simply take autonomy into account in a net benefits calculation: it is not the sort of thing we can just “trade off” with various welfare considerations. Hence the unique approaches discussed above, both of which effectively set a “test” that must be passed before we even consider welfare.

However, as indicated earlier, there is also a way in which we can promote autonomy: by assisting people, or supporting institutions that assist people, in the development of those faculties necessary for being an autonomous person. When we are affecting autonomy in this way, we can largely account for it in the same way we account for various welfare considerations. And so we could incorporate it into a net benefits assessment as just another thing being promoted. Or, if there is a possibility that our actions or designs will irreversibly frustrate people’s abilities to be autonomous agents, then we may make use of the precautionary approach to reject those actions or designs.

In short, when protecting or respecting autonomy, we should treat those considerations as a “gateway” we must successfully pass through before considering issues of welfare. But when promoting autonomy, we may treat its considerations as on a par with issues of welfare.


  1. This is an original definition derived from a combination of two sources: Adam D. Moore (2003), “Privacy: Its Meaning and Value,” American Philosophical Quarterly 40: 215–227; and Helen Nissenbaum (2010), Privacy in Context: Technology, Policy, and the Integrity of Social Life.

License


The Primacy of the Public by Marcus Schultz-Bergin is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.