3 Technology & Values: The Social Context of Technology

As it departed on its maiden voyage in April 1912, the Titanic was proclaimed the greatest engineering achievement ever.[1] Not only was it the largest ship the world had seen, with a length of almost three football fields; it was also the most glamorous of ocean liners, complete with a tropical vine-garden restaurant and the first seagoing masseuse. It was touted as the first fully safe ship. Since the worst collision envisaged was at the juncture of two of its sixteen watertight compartments, and since it could float with any four compartments flooded, the Titanic was believed to be virtually unsinkable.

Buoyed by such confidence, the captain allowed the ship to sail full speed at night in an area frequented by icebergs, one of which tore a large gap in the ship’s side, flooding five compartments. Time remained to evacuate the ship, but there were not enough lifeboats to accommodate all the passengers and crew. British regulations then in effect did not foresee vessels of this size. Accordingly, only 825 places were required in lifeboats, sufficient for a mere one-quarter of the Titanic’s capacity of 3547 passengers and crew. No extra precautions had seemed necessary for an unsinkable ship. The result: 1522 dead (drowned or frozen) out of the 2227 on board for the Titanic’s first trip.[2]

The Titanic remains a haunting image of technological complacency. So many products of technology present potential dangers that engineering should be regarded as an inherently risky activity. To underscore this fact and help explore its ethical implications, we suggest that engineering be viewed as an experimental process. It is not, of course, an experiment conducted solely in a laboratory under controlled conditions. Rather, it is an experiment on a social scale involving human subjects.

There are conjectures that the Titanic left England with a coal fire on board, that this made the captain rush the ship to New York, and that water entering the coal bunkers through the gash caused an explosion and greater damage to the compartments. Others maintain that embrittlement of the ship’s steel hull in the icy waters caused a much larger crack than a collision would otherwise have produced. Shipbuilders have argued that carrying the watertight bulkheads up higher on such a big ship, rather than keeping the passenger decks unobstructed for more flexible cabin arrangements, would have kept the ship afloat. However, what matters most is that the lack of lifeboats, and the difficulty of launching those available from the listing ship, denied a safe exit to two-thirds of the persons on board, where a safe exit is a mechanism or procedure for escape from harm in the event a product fails.

1. The Social Context of Engineering

Engineering does not occur in a vacuum: technology is neither created nor maintained in isolation from society. Every individual engineer and every engineering firm is embedded in society. As such, both engineers as people and engineering as a profession are influenced by their society while also having the power to influence that society going forward. Consider, for instance, the lack of lifeboats on the Titanic. The engineers who signed off on designs with so few lifeboats did so, in part, because that is all society required of them via its regulations.

Not only do the law and official regulations influence engineering decisions, but so does public perception. Consider the idea of ‘efficiency’, a common term in engineering and technology. It is generally agreed that it is better to make an activity more efficient, and, within the technical sciences, ‘efficiency’ is often understood in a purely quantitative way: as a ratio of useful energy output to energy input. However, there are many changes that would count as ‘efficient’ in this sense (including some we once made) that are nevertheless excluded from consideration: child labor was in many ways more efficient than adult labor, and yet no one today builds machinery that relies on a small child being able to fit into certain crevices.
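
For reference, the purely quantitative notion at issue can be written as a simple ratio (this is a standard textbook definition, not notation introduced in this chapter):

```latex
% Efficiency as useful output per unit of input
\eta = \frac{E_{\text{useful output}}}{E_{\text{input}}}, \qquad 0 \le \eta \le 1
```

The child labor example shows that deciding which inputs and outputs count in this ratio, and which ways of improving it are even on the table, is itself a value judgment.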

The “Penny-farthing” sporting bicycle

Other examples abound. For instance, Trevor Pinch and Wiebe Bijker showed that social forces directed the development path of bicycles in their early history.[3] Early on, there were two types of bicycles: a sporting bicycle with a high front wheel that made it fast but unstable, and a more “utilitarian” design with a smaller front wheel to promote stability at the expense of speed. Although originally designed for different purposes – the sporting bicycle for athletes and the other for ordinary transportation – the sporting bicycle never really caught on and eventually disappeared. Society, rather than the designers, determined that the sporting bicycle was unnecessary.

In the other direction, we can see the way technology has influenced and changed society, often in unexpected or unpredictable ways. There are obvious cases: speed bumps effectively force people to drive more slowly, and walking paths and streetlamps encourage foot traffic to follow specified paths. But there are also less obvious cases: the invention of the printing press revolutionized European civilization in many ways, including being a major contributing factor in the Protestant Reformation, thus drastically changing the religious landscape of the entire world. More recent technologies like cell phones and social media have also heavily influenced social relationships by encouraging instantaneous and constant communication and altering what it means to call someone a “friend”.

And then there are the more general ways technology often influences society: changing what we consider possible, required, or impermissible. Before advances in medical technology, we simply expected someone to die from cancer; now that it is possible to treat many cancers, we find it morally problematic when those treatments are not available to someone. Technology also changes what we can expect from our lives: what sorts of interactions are possible, where it is possible to live and work, what sorts of jobs even exist.

In short, all of these examples come together to show that technology functions in a social context: Technology simultaneously exerts influence over society and is influenced by society. This, then, implies that technology professionals both exert influence over society through their designs and, in turn, are influenced by various social factors in the creation and deployment of their designs.

2. All Technology is Value-Laden

Because there is always interaction between society and technology, there is always interaction between values and technology. The general public valued comfort and utility over athletic prowess in widely preferring the standard bicycle to the sporting bicycle. We, as a society, valued the care and development of children over costs and efficiency in banning child labor. In these ways, technology is inseparable from values – technology is always value-laden: soaked through with value considerations, whether implicit or explicit.

The idea that all technology is value-laden contrasts with a tempting view of technology as value neutral. On that view, technology is just a tool – neither good nor bad in itself. Ethical questions about technology, on this view, are just questions about use and users, not the technology itself. Such a view is attractive, for it allows technology professionals to downplay value considerations in their design thinking and instead say, “I just design the thing; others choose how to use it”.

But the theory that technology is value neutral is false.[4] It is certainly true that technologies are means to accomplishing human ends, and so they are tools. But they are not merely tools. All technologies are the products of human minds and actions. They are the result of numerous decisions, big and small, by the people crafting them. And all decisions are made on the basis of values, outlooks, perspectives, intentions, and desires. In this way, it is impossible to truly divorce a technology from the humans who designed it and its social context.

Additionally, as previously discussed, technologies change us – they change how we relate to the world and interact with each other, and they influence what we take to be possible, plausible, and worthwhile. The invention and distribution of mechanical respirators not only made it possible to artificially maintain human respiration; it also drove us to see that as a generally good thing, to the point where it is taken as the default assumption in medical treatment. More broadly, the invention and wide-scale deployment of personal automobiles drastically changed our entire social structure, providing the impetus for an entirely new form of human community: the suburb.

Technological artefacts also change how we think about our broader world. Just consider the idea that “the brain is (like) a computer”; such a perspective is only possible once computers exist and people are broadly familiar with how they function. For the ancient Greeks, human bodies were like water clocks; for thinkers of the 17th century, as the scientific revolution was in full force in Europe, the body was “an assemblage of springs.”[5] Plenty of other metaphors abound in history, but the point is that they all link to the dominant or emerging technology of the time. These examples don’t immediately tell us what sort of values are being encouraged or discouraged by the technology, but they do show the deep influence technological artefacts have on our understanding of the world and our place in it. Given that our values are heavily shaped by that understanding, it is only a short step from technology to worldview to value. In all these ways, technology cannot be viewed as a neutral tool. Technologies exert influence whether we like it or not, and in ways we may not predict or desire.

Once we recognize that technology both expresses and impacts human values, it becomes clear that technology must be designed in a value-informed way. Consider a team of American engineers who decide to build an app for fitness tracking. They design the app’s user interface in English; this excludes all non-English speakers, including some American citizens. They choose to design the app for high-end smartphones paired with a smartwatch; the cost of the necessary hardware makes the service inaccessible to people of lesser means, including some students. The company chooses to create an additional stream of revenue by selling data collected from their users to a marketing company; users may not be aware that their location and activity data is being shared with a third party, or, even if they are aware, what inferences can be drawn about them using this data (where they live, where they work, the stores at which they shop, the recreational activities in which they engage). All of these design choices impact who will be able to use the app, and they potentially even impact non-users who happen to interact with app users.
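
To make this concrete, below is a minimal sketch, in Python, of how two of these value choices (consent to data sharing and language support) can be made explicit in a design. Every name in it is hypothetical; this is not the code of any real app.

```python
# Hypothetical value-informed defaults for the fitness app described above.
from dataclasses import dataclass

@dataclass
class AppConfig:
    # Privacy: third-party data sharing is opt-in, never the default.
    share_data_with_partners: bool = False
    # Accessibility: support more than one interface language.
    supported_locales: tuple = ("en", "es", "zh", "ar")
    default_locale: str = "en"
    # Affordability: the app still works without a paired smartwatch.
    require_smartwatch: bool = False

def resolve_locale(config: AppConfig, requested: str) -> str:
    """Fall back gracefully rather than excluding unsupported-language users."""
    return requested if requested in config.supported_locales else config.default_locale

config = AppConfig()
assert config.share_data_with_partners is False  # users must actively consent
print(resolve_locale(config, "fr"))              # -> "en" (graceful fallback)
```

The point is not the code itself but that each default encodes a choice about who is included and what users must actively consent to.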

Building technology is all about design choices, and these choices matter. We rarely make technology just for the sake of it; we make technology because we expect to deploy it out into the world, where it will interact with all different kinds of people and be used in all different kinds of contexts. Therefore, technology will be better when it is thoughtfully designed, with explicit consideration of the values it embodies and the social, institutional, and cultural contexts in which it is to be used. Technologies that are designed in this value-sensitive way are more intuitive, more accessible, more seamless, and more delightful than those that are not. They are also more likely to do good: promote human flourishing, generate societal benefits, and contribute to environmental sustainability. They are more likely to work effectively, and they are more likely to be successful.

Examples of Values in Technology

There is a constant stream of news stories about controversial, ethically problematic technologies. Below are a few brief examples. These cases illustrate the importance of designing technologies in a value-informed way that includes careful consideration of the social and institutional contexts of deployment and use:

  • Highways have the potential to help people travel faster and increase access to certain areas. However, their construction often alters the social landscape by dividing neighborhoods, increasing noise pollution, and altering traffic patterns in ways that can both increase and decrease business in nearby areas.
  • Facial recognition systems have the potential to make our lives more convenient (automatic face tagging on social media, face unlock on your smartphone), but also have profound implications for individual privacy.
  • Smart speakers are extremely popular, but users were upset to learn that transcripts of their audio were secretly being reviewed by human beings. The key issues were (1) that users were not informed about this practice, and (2) audio recordings from within our homes are considered by most people to be very sensitive data.
  • Sharing economy apps that let anyone rent out their car or home are very convenient for those with resources to offer and those looking to rent, but they are also having a negative impact on cities in the form of increased roadway congestion and displacement of local residents.
  • You pay for “free” online services by divulging your personal data, which is collected by hundreds of advertising companies. People are often distressed at the extent to which this data can be used to hyper-target advertising, as best evinced by the Cambridge Analytica scandal.
  • The impending rollout of self-driving cars raises challenging questions about how these cars should be designed to ensure human safety, and how these vehicles, their makers, and their operators should be held accountable when accidents occur. Even more challenging ethical questions arise when we consider the development of autonomous military drones.
  • Many organizations are adopting machine learning systems in an effort to reduce costs and remove human bias from processes. These systems evaluate whether people are eligible for loans, insurance, employment, social services, and even parole. However, machine learning systems are not neutral, and these systems have been found to exhibit human biases like racism and sexism (see the sketch following this list).
  • User interfaces are powerful mechanisms for shaping how users interact with systems. However, some designers adopt intentionally deceptive user interfaces called “dark patterns”.
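
As an illustration of the machine learning point above, here is a minimal sketch of one common bias check: comparing a system’s positive-decision rates across demographic groups. The data and group labels are made up, and a real audit would involve far more than this.

```python
# Hypothetical loan decisions: (demographic group, model approved the loan?)
from collections import defaultdict

decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = defaultdict(lambda: [0, 0])  # group -> [approvals, applications]
for group, approved in decisions:
    totals[group][0] += int(approved)
    totals[group][1] += 1

rates = {g: approvals / n for g, (approvals, n) in totals.items()}
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}

# A large gap in approval rates is a signal (not proof) that the system may be
# reproducing historical or human bias and deserves closer scrutiny.
print(f"selection-rate gap: {max(rates.values()) - min(rates.values()):.2f}")
```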

3. Engaging in Value Sensitive Design

So far we have focused on making the case for engaging in Value Sensitive Design: a design approach that consciously and explicitly considers values and social context in technological design. In later chapters we will introduce ethical frameworks to assist in this process. But for now we can introduce a thinking tool for developing our sensitivity to values and social context and our ability to incorporate them into design.[6]

While our ultimate goal is to be able to consider values and social contexts in the creation of new technologies, this tool gets our thinking going by having us consider existing technologies and effectively “reverse engineer” the values and social context at play, with a special emphasis on how the design of the technology encourages or discourages behavior among users. Consider any piece of technology you are familiar with, and then work through the following three categories of investigation.

3.1. Cognitive Interaction

Most technologies engage our consciousness in some way: they require us to think as we use them, or they encourage certain types of thoughts when we view them. Put another way: technological artefacts often inform and influence our decision-making faculties. We can investigate how a specific technological artefact does this by considering three common cognitive effects: guidance, persuasion, and image.

3.1.1. Guidance

Car handles and textured surfaces guide behavior

Technologies are designed with certain functions in mind, and thus their designs typically give users suggestions about possible use; in other cases, technologies are explicitly designed to provide guidance in a larger social context. In the first case, a car door handle is shaped in such a way as to invite an intuitive gripping gesture, given how humans typically use their hands; thus, the car door handle quite clearly guides the user to use it to open the car door. In the second case, textured walking strips, curbs, and braille signs all provide guidance that helps the sight-impaired navigate the larger social world.

We can investigate how an artefact guides, as well as consider how we can better design artefacts to guide, by considering the following two sets of questions (not all may apply, and there will likely be some overlap in answers).

In analyzing an existing technological artefact on the basis of guidance, we can ask the following sorts of questions:

  • How does the design of the artefact guide interaction with the artefact?
  • How does the design of the artefact guide interaction with the larger social world?
  • What guidance is the artefact not providing that it should provide?
  • How could the guidance provided by the artefact be perceived by the user in a different way or be misunderstood?

In looking to (re)design a piece of technology, we can ask the following sorts of questions to help us think about how to design it for guidance:

  • How does or can the artefact use symbols, signs, or text to provide proper guidance?
  • How is guidance logically integrated into the artefact? How could guidance be logically integrated?
  • Is guidance provided, or could it be, in multiple sensory modalities (e.g., sight, touch, sound, smell)?

3.1.2. Persuasion

Technology can persuade us to engage in desired behaviors

Sometimes technologies go beyond mere guidance for use and instead aim to persuade users to behave in certain ways. For instance, in the Netherlands some live speed-checking signs display a “sad face” when the driver is exceeding the speed limit, as a form of encouragement to slow down. Similarly, there has been increasing interest in “persuasive design” aimed at helping people engage in ‘good’ behaviors: the stairs leading out of a Swedish metro station were repainted to look like a piano as a means of encouraging use of the stairs rather than the escalator, and the change succeeded in increasing stair use by 66%. The AWARE light switch consists of a two-tone pattern with a rotating portion set inside a wider ring. The two-tone pattern only matches up – “looks right” to the human eye – when the switch is set in the off position. In this way, the light switch encourages turning off lights and reducing the consumption of electricity.

As may be obvious, persuasive design raises its own set of ethical questions, which we will return to in later chapters. But it is worth noting that many instances of persuasive design are intentionally aimed at reminding people of specific values (like health) or encouraging them to adopt new values, and thus persuasive design is a prime example of value-laden technology.

For now, we simply want to focus on understanding how technology may be persuasive, and we can do that by considering the following sorts of questions (these questions can be used both in analysis of existing artefacts and in the design of new artefacts):

  • What behaviors are being encouraged by the design?
  • What behaviors are being discouraged by the design?
  • What values are the designs attempting to remind users about or introduce them to?
  • How can the design make the desired behavior more attractive to the user?
  • Is it possible to implement a gaming element to encourage the desired behavior?
  • Can (or does) the design evoke a feeling of guilt when the user shows undesired behavior?

3.1.3. Image

Some people are “Apple people” while others are “Android people” (or “PC people”, etc.). Some people are “Ford people” while others are “Honda people”. We use technology to express our identity and image, and technology is often designed with specific identities or images in mind. Professor Daniel Miller captures this interaction nicely when he concludes that “objects make us, in the very same process that we make them.”[7]

In capturing or provoking an image, technology at least superficially appeals to the values connected with those images, as is most clearly seen in commercials: luxury cars are promoted with appeals to independence and comfort; perfumes are promoted with appeals to physical attractiveness. So, once again, we see a clear connection between values and technology.

In thinking about how technology may be designed with image in mind, we can ask the following sorts of questions:

  • What does the artefact look like?
  • What emotions does its appearance evoke?
  • Who would buy or use the artefact?
  • What is the (desired) image of the target group?
  • How does the artefact appeal to the target group?
  • What style characteristics can be used to appeal to the target group?
  • Is certain behavior encouraged or discouraged, or could it be, by implementing a certain image in the artefact?

3.2. Physical Interaction

Perhaps the most obvious effects of technology on humans are direct influences on the human body and its gestures. While cognitive interactions aim to influence conscious thought and engagement with the technology, physical interactions aim to shortcut cognition by effectively forcing certain types of physical engagement. There are three main ways this can come about: Coercion, Embodied Technology, and Subliminal Affect.

3.2.1. Coercion

Examples of Technological Coercion

Speed bumps force drivers to slow down (or suffer the personal consequences), rather than simply guiding or attempting to persuade them to do so. Some shops install anti-theft locks on their shopping carts that lock the cart when it leaves a designated perimeter. In London, urban planners designed street islands with an “S-shaped pattern” that forces pedestrians to look to their left as they cross, which is particularly helpful for people from outside England who are used to vehicles driving on the right side of the road. Various industrial machines are designed with enforced safety mechanisms: they must be operated with two hands, or they require depression of a foot pedal to ensure someone is always present during operation. Finally, as philosopher Langdon Winner pointed out, low overpasses were intentionally installed on Long Island in New York to keep out buses as a means of keeping out the poor.[8]

As these examples indicate, coercive technology can be vitally important but can also be used for morally suspect ends. Again, we will return to ethically evaluating such things at a later time. For now, we can focus on asking the following sorts of questions to understand how technology may coerce:

  • Does the technology force a certain use?
  • How might the user feel as a result of any coerced use?
  • How can the artefact be designed to force the user to show the desired behavior?
  • How is undesirable behavior made physically impossible by design, or how could it be?
  • How can or does the design render any coercion invisible to the user?

3.2.2. Embodied Technology

Examples of Embodied Technologies

We often talk about technologies being intuitive or unintuitive. This is basically a way of saying whether the technology can be properly used without much thought. In neuroscientific terms, intuitive technology is often embodied technology: our brains have literally been altered by interaction with the technology to the point that the technology is neurologically treated as part of our own body. Typically, this is not innate: we do have to learn how to engage with the technology first, but the learning often comes easily and (more importantly) rarely disappears once acquired. We often say something is like “riding a bicycle” to indicate that once you learn it, you never forget it, even if you don’t regularly practice it. The bicycle is a form of embodied technology, as are the automobile, the pencil, and musical instruments. In every case, the technology structures the physical behavior of the user, but the user does not experience this as constraining or coercive. Some embodied technologies integrate with humans to allow them to do things they could not otherwise do. Ballet shoes, for instance, allow the wearer to walk on their toes, which is impossible (for most people) absent the technology.

There are deeply interesting philosophical questions about embodied technologies, one of which is captured in the title of philosopher Andy Clark’s book Natural Born Cyborgs. For if humans are natural tool users, and many tools end up being integrated as parts of our bodies (as indicated by the neural structures involved), then we are all, and always have been, cyborgs – human-technology hybrids!

We’ll set these questions aside for now and instead think about what questions we can ask in investigating the potential embodied nature of a technology:

  • (How) does the artefact interact with the user’s body?
  • (How) does the artefact structure the user’s physical movements?
  • Can the artefact be successfully used with little or no conscious thought?
  • How can the artefact be designed to closely fit with the body of the user?
  • How can the artefact be designed so the user focuses on the end result rather than the tool itself?

3.2.3. Subliminal Affect

Scent associations are common forms of subliminal affect

Technology can also steer behavior by the use of smells, noises, or images that work on the subconscious. These sorts of design elements are still physical, even if the user never directly touches the artefact. For instance, the smell of fresh-baked bread in a market will enhance mood and hunger, incentivizing further shopping. The smell of fresh coffee tends to make people feel at home, and thus can generate comfort in a location. The Mosquito is a speaker used to ward off loitering teens by emitting an annoying high-pitched noise that only young people can hear. The obnoxious noise of various alarms is specifically designed to ensure people pay attention to the alarm and (typically) leave its vicinity.

Like coercive and persuasive design, subliminal design raises complex ethical questions. But, as should be clear from the various examples, subliminal design can also be used as an important safety tool (as in the case of noisy alarms).

In considering subliminal affect, we can ask the following sorts of questions:

  • How does the artefact appeal to the user’s senses?
  • To what extent does the appeal to the senses affect the user to do something they did not initially intend?
  • Can the design make the user think of the desired behavior through an association with something else?
  • Can the desired effect be brought about while the user is busy with something else?

3.3. Environmental Interaction

Both cognitive and physical interaction tend to focus on the direct effects of technology and of interaction with it. Environmental interaction, on the other hand, is about the indirect effects and influences of technology – those things it brings about independently of specific interaction and use. Here we are very much thinking about the social impacts of technology, and we can focus on three main types of interaction: Side Effects, Background Conditions, and Technical Determinism.

3.3.1. Side Effects

Most technologies have side effects: some have side effects that make the technology “self-defeating” because they undermine its intended function(s). For instance, some people respond to the availability of energy-efficient lightbulbs by using more of them, since “they don’t use that much electricity”. In that case, the intended social purpose of the lightbulbs is undermined. In other cases, the technology successfully achieves its function but produces other side effects. For instance, mining and fracking can successfully extract resources from the earth (their intended function) but may also produce water pollution and earthquakes.
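
Made-up numbers can make the self-defeating lightbulb case concrete:

```python
# Hypothetical "rebound" arithmetic: per-bulb efficiency gains can be
# overwhelmed by increased use (more bulbs, left on longer).
old_bulbs, old_watts = 3, 60    # a few incandescent bulbs
new_bulbs, new_watts = 16, 12   # many more "efficient" LED bulbs

print(old_bulbs * old_watts)    # 180 watts in total before
print(new_bulbs * new_watts)    # 192 watts in total after: the saving is gone
```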

Clearly the category of “side effects” can be quite broad, but there are a few questions we can ask to help us get a grip on it (note that we use the term ‘environment’ very broadly here to mean the wider context of use; it is not just about ecosystems):

  • In what sort of environment will the technology be used?
  • How does the environment in which the technology will be used influence the user?
  • Does the use of the technology also influence the environment?
  • Which side effects could occur that would (partially) nullify the intended effects/function of the technology?
  • Are there side effects that (could) enhance the functionality of the technology?

3.3.2. Background Conditions

Technology is neither created nor used in a vacuum. The success of most technologies depends on the existence of background conditions. Most obviously, many large-scale technologies depend on the existence of road infrastructure. Similarly, all electronic technologies depend on a functioning electricity infrastructure, and all internet technologies depend on functioning internet and computer infrastructure. The success of the automobile depends on the background condition of fueling stations.

One major benefit of considering background conditions in the design process is that it may become possible to design a better overall system, incorporating both the technology itself and its necessary background conditions. For instance, the success of wind farms for electricity generation is heavily dependent on the efficiency of the electrical grid system. Recognizing this can encourage the co-design of wind farms and electrical grids.

A few questions we can ask to assess background conditions include:

  • Is there a need for infrastructure to make the technology function?
  • Which environmental characteristics are needed to make the technology function properly?
  • (How) could the technology function better if it were integrated into a wider network or system?
  • Which functions of the technology could also be (independently) fulfilled by existing features of the environment?

3.3.3. Technical Determinism

If we take the long view of history, it becomes quite clear that much of it is heavily influenced by material and technical circumstances. Jared Diamond’s famous book Guns, Germs, and Steel aimed to show (as the title suggests) that we can understand much of how our current world is carved up by paying attention to the development and distribution of guns, germs, and steel. Some people hold that material and technical conditions determine, rather than just heavily influence, the course of history. We need not take a stand on that precise matter, but will use the phrase technical determinism for the sake of convenience. In investigating technology, considerations of technical determinism ask us to focus on how the technology under consideration may affect individuals and societies in the long run. The idea, discussed earlier in the chapter, that technologies influence what we consider possible and can alter our worldviews is largely captured by exploring the idea of technical determinism. Just consider the shift from the 1990s, when mobile phones existed but many people did not see them as essential, to today, when most people (especially younger people) report they cannot imagine life without their smartphone.

We can ask the following questions to help us think about how a technology may influence the course of individual and social development over time:

  • What changes (to individuals or groups) could occur based on long term use of the technology?
  • What societal changes could the technology initiate?
  • What things could disappear as a result of the introduction (or widespread use) of the technology?
  • Does the technology encourage the user to become attached to it for a long time?

  1. The discussion that follows was originally presented by Mike Martin and Roland Schinzinger in Ethics in Engineering (1989).
  2. Lord, 1976; Wade, 1980; Davie, 1986.
  3. Trevor J. Pinch & Wiebe E. Bijker (1984). “The social construction of facts and artefacts: or How the sociology of science and the sociology of technology might benefit each other,” Social Studies of Science 14.3: 399-441.
  4. Much of the content of this and the following sections is derived from Christo Wilson’s VSD@Khoury website. https://vsd.ccs.neu.edu
  5. Julien Offray de La Mettrie, L’Homme Machine (1748).
  6. The tool and techniques described here are derived from the De-scription activity created by Jet Gipson and the Product Impact Tool created by Steven Dorrestijn & Wouter Eggink.
  7. Daniel Miller (2010). Stuff. Polity Press. Page 60.
  8. Langdon Winner (1980). "Do Artifacts have Politics?" Daedalus 109 (1): 121-136.

License


The Primacy of the Public by Marcus Schultz-Bergin is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.