4 Technology as Social Experiments

What was Apple thinking when it launched the iPhone? It was an impressive bit of technology, poised to revolutionize the smartphone industry, and set to become nearly ubiquitous within a decade. The social consequences have been dramatic. Many of those consequences have been positive: increased connectivity, increased knowledge and increased day-to-day convenience. A considerable number have been quite negative: the assault on privacy; increased distractibility, endless social noise. But were any of them weighing on the mind of Steve Jobs when he stepped onstage to deliver his keynote on January 9th 2007?

Some probably were, but more than likely they leaned toward the positive end of the spectrum. Jobs was famous for his ‘reality distortion field’; it’s unlikely he allowed the negative to hold him back for more than a few milliseconds. It was a cool product and it was bound to be a big seller. That’s all that mattered. But when you think about it this attitude is pretty odd. The success of the iPhone and subsequent smartphones has given rise to one of the biggest social experiments in human history. The consequences of near-ubiquitous smartphone use were uncertain at the time. Why didn’t we insist on Jobs giving it quite a good deal more thought and scrutiny? Imagine if instead of an iPhone he was launching a revolutionary new cancer drug? In that case we would have insisted upon a decade of trials and experiments, with animal and human subjects, before it could be brought to market. Why are we so blasé about information technology (and other technologies) vis-a-vis medication?[1]

– Dr. John Danaher, Institute for Ethics and Emerging Technologies

1. Technology & Risk

Anything worth doing involves risk, and the creation of new technologies is no different. This is why we typically insist on risk assessments, which aim to quantify risks and potential benefits prior to starting the project. This is also why we establish safety standards that may require various forms of risk reduction. Importantly, though, we do not insist that technologies pose no risks. That is both practically impossible and undesirable; as the old adage goes, “nothing ventured, nothing gained”. A project without risk is also a project without benefit.

So, the concept of risk is central to engineering and technology. It is easy to find technologists engaging in risk assessments and weighing risks against benefits. But the introduction of the iPhone, discussed above, and the voyage of the Titanic, discussed in a previous chapter, both illustrate a fundamental problem with how technologists typically think about risk, risk assessment, and the introduction of technology into society. To see the problem, it will help to begin with a clear understanding of the concept of risk.

Risk: A measure of the probability of some hazard occurring multiplied by the magnitude of that hazard.

When we conduct a risk assessment, we do not simply list the various potential bad effects (or hazards). Rather, we determine how likely (i.e., how probable) it is that each bad effect will occur and how bad it would be (i.e., its magnitude) if it did occur. In many cases, those conducting the risk assessment will assign numerical values to probability and magnitude: following standard statistics, probability is measured on a scale from 0 (certain not to occur) to 1 (certain to occur, which really means already occurring). Magnitude can be quantified in different ways depending on context: it may be the number of people affected by the hazard or some more abstract measure, such as a dollar value assigned to human life. For our purposes, the precise methods of quantification do not matter. Indeed, we will notice in this and later chapters that, despite the appearance of “objectivity” that comes with assigning numbers to risks, a great many subjective assumptions are built into any risk assessment.
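
To make the arithmetic concrete, here is a minimal sketch of how such a calculation might look in practice. The hazards, probabilities, and magnitudes below are entirely hypothetical and are chosen only to illustrate the structure of the calculation, not to describe any real assessment.

```python
# A minimal, purely hypothetical sketch of a quantitative risk calculation.
# The hazards, probabilities, and magnitudes are invented for illustration;
# real assessments rest on context-specific (and often subjective) estimates.

hazards = {
    # hazard: (probability of occurring, magnitude as number of people affected)
    "minor component failure": (0.10, 5),
    "major structural failure": (0.001, 2000),
}

for name, (probability, magnitude) in hazards.items():
    risk = probability * magnitude  # risk = probability x magnitude
    print(f"{name}: {probability} x {magnitude} = {risk} (expected people affected)")

# Note that a very unlikely hazard can still carry the larger risk score
# because of its magnitude, and that the whole calculation quietly assumes
# we have listed every hazard and estimated every probability correctly.
```

The point of the sketch is not the particular numbers but the structure: every entry is an estimate, and, as the following paragraphs argue, some entries may be missing altogether.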

Now that we have a grasp on the basics of risk, we can start to see the problem with using traditional risk assessments to determine whether it is acceptable to introduce a technology into society. First, consider the case of the iPhone discussed above: many of the major effects (both positive and negative) that resulted from its introduction simply could not have been known beforehand (or even for many years after). For any given project or product, there will be some predictable effects, but there will also always be some that we are ignorant of, no matter how hard we try. This technological ignorance is partly a result of the social context of technology: because a technology functions within a complex social world, we simply cannot predict all the ways it will affect society in the future.

Thus, technological ignorance – the lack of information about the potential effects of a technology – is (nearly) guaranteed, and when present it ensures that half of our “risk calculation” (magnitude) is incomplete. Due to ignorance of potential effects alone, then, determinations of risk will necessarily be incomplete.

But there are also issues with the other half of the calculation. Even when we do know the potential effects of a technology, there will almost certainly be technological uncertainty about the likelihood of those effects. Consider the Titanic: obviously the engineers knew that the ship could sink, but they nonetheless severely downplayed the likelihood of that hazard, even labeling the ship ‘virtually unsinkable’, a clear statement that the probability of sinking was near 0. More broadly, probabilities will commonly be misjudged, since the likelihood of a hazard coming about depends heavily on how people interact with the technology. And since it is largely impossible to fully predict how people will interact with a technology once it is in society, it is likewise largely impossible to accurately predict the likelihood of the various hazards.

And so, technological uncertainty – the lack of information about the likelihood of potential effects of a technology – combines with technological ignorance to show that our nice and neat “risk calculation” is anything but. A risk assessment is not a thorough analysis of the risks associated with a technology, but rather a partially ignorant and uncertain analysis of known hazards combined with subjective determinations of likelihoods. And, given that technologists are generally quite optimistic about their own creations – Steve Jobs’s “reality distortion field” being a clear example – it is highly likely that they will both ignore some hazards and downplay the probabilities of those they don’t ignore.

2. Experimental Technology & The Control Dilemma

Ignorance and uncertainty, as we have been discussing them, are not unique to technology. Indeed, as John Danaher notes in his musings on the iPhone above, such issues also exist in the medical field. In creating new pharmaceuticals and developing new therapies, medical researchers and physicians are similarly faced with ignorance of some of the potential effects as well as uncertainty about the likelihood of those of which they are aware. And consider what we say about medical therapies with higher levels of ignorance and/or uncertainty: they are experimental. More broadly, we also recognize that it is precisely because we may be ignorant and uncertain about the effects of some new medical therapy that we demand experimental trials be done, and we generally refuse to allow the therapy to be released into society until the ignorance and uncertainty are significantly reduced.

To parallel the idea of experimental therapies in the medical field, we can introduce the concept of Experimental Technology: technology with which there is little operational experience and for which, consequently, the social benefits and risks are uncertain and/or unknown.[2] Importantly, almost all technology is experimental to some degree: the less operational experience we have, the greater our ignorance of potential effects, and the greater our uncertainty about known effects, the more experimental a technology is. Thus, a new bridge built using an existing and well-tested design would be much less experimental than a new bridge built using a novel design. In this way, being experimental is a matter of degree rather than a matter of kind. There may be some technological creations for which we have so much information and experience that we would simply not call them experimental at all, but by and large all technology is experimental to some degree.

It is important to be able to identify experimental technologies, and the degree to which they are experimental, because experimental technologies fall prey to what David Collingridge identified as The Control Dilemma:[3]

  • (a) In the early phases of development, the technology is malleable and controllable but its social effects are not well understood;
  • (b) In the later phases of development, the effects become better understood but the technology is so entrenched in society that it becomes difficult to control.

The Control Dilemma, like all dilemmas, establishes two paths (“di-”), both of which are problematic in some way. We want to be able to control the introduction of a technology into society in order to limit risks, but we cannot really know the risks until the technology has been introduced. And once we have introduced the technology in order to discover its effects, we are no longer in a good position to control the technology or those effects.

There are two main ways of navigating the dilemma, neither of which is fully satisfactory. On the one hand, we can address horn (a) by conducting risk assessments. But, as we have already seen, those will always be incomplete: the use of numbers and measures effectively cons us into thinking we know more than we really do. On the other hand, we can address horn (b) by embracing the experimental nature of such technologies and treating them much as we treat medical experimentation. This typically involves (among other things) being far more incremental and cautious in design and implementation. Further, as we will explore in later chapters, it may involve adopting ethical principles and other moral concepts from medicine.

3. Technology as Social Experiment

Thus far we have focused on the ways in which many technologies can be understood as experimental due to our lack of knowledge of the potential hazards and/or their likelihood. As noted, this understanding of ‘experimental’ quite clearly parallels the idea of experimental therapies in medicine. But there is a third feature of experimental technology, one not typically found in the medical context, and it is this feature that truly defines technology as a social experiment.

Consider our general approach to new pharmaceuticals, as hinted at in the discussion of the iPhone above: We do not typically make a new pharmaceutical available to the public until we have done multiple distinct types of clinical trials. These clinical trials are always controlled: The only people who are supposed to be exposed to the pharmaceutical are those who have voluntarily agreed to be part of the trial. When done appropriately, this means each volunteer has been adequately informed of the risks and potential benefits (if any) and has made a free choice to be exposed to those risks.

Now contrast this with a typical large-scale engineering project, such as the pedestrian bridge that collapsed at Florida International University. The road underneath the bridge was open, so the engineers could not control who drove under the bridge at any given time, nor did any of those drivers have any information that the bridge was of a new design or had recently shown signs of cracking. It was simply not possible to control who was exposed to the risks or what risks they were exposed to. Similarly, consider the introduction of a new power plant. Although it may be built some distance from any population center, it will nevertheless expose all sorts of people to the risk of meltdown (in the case of a nuclear reactor) or of pollution and explosion (in the case of a fossil fuel plant).

Put simply, when engineers experiment on people, they typically do so without adequately informing them of the risks of the ‘experiment’ (i.e., the technology) and without those people having made a free and informed choice to be exposed to those risks. And this is precisely what defines a Social Experiment: it is an experiment that lacks an experimental control. Thus, we can say that any given technology is a Socially Experimental Technology to the degree that it is an experimental technology (as described above) and lacks an experimental control.

Experimental control, just like uncertainty and ignorance, is a matter of degree. Sometimes the experimental subjects will be aware of some of the risks; some may even have freely chosen to be exposed. The use of warning labels on consumer products is one way we work toward providing information. But even then, we still lack full experimental control. In the medical case, researchers are generally expected to confirm that subjects actually understand the risks before accepting their consent; simply providing information is insufficient.

And so, in sum, we can compare and rank technologies according to how socially experimental they are by considering three dimensions:

  1. The degree to which the risks are known;
  2. The degree to which the likelihood of any known risks can be accurately determined; and
  3. The degree to which those exposed to the risks understand those risks and voluntarily accept exposure.

Some technologies may be highly experimental on two dimensions while being less experimental on the third. For instance, a new bridge built according to long-used and well-tested designs may not be especially experimental on dimensions (1) and (2), but could certainly still be quite experimental on dimension (3).
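
If it helps to see the comparison laid out, here is a rough sketch of how one might tabulate technologies along the three dimensions. The technologies and scores are invented for the sake of the example, and the 0-to-1 scale is an assumption for illustration, not something proposed in this chapter.

```python
# A rough, hypothetical sketch of comparing technologies along the three
# dimensions above. Scores run from 0 (not at all experimental on that
# dimension) to 1 (highly experimental); all values are invented.

technologies = {
    #                              (1) unknown  (2) uncertain  (3) uninformed or
    #                              risks        likelihoods    involuntary exposure
    "bridge (well-tested design)": (0.1,         0.2,           0.8),
    "bridge (novel design)":       (0.6,         0.6,           0.8),
    "new smartphone platform":     (0.8,         0.8,           0.9),
}

for name, (d1, d2, d3) in technologies.items():
    print(f"{name}: dimensions (1)-(3) = {d1}, {d2}, {d3}")

# The chapter compares technologies dimension by dimension; whether and how
# to combine the three scores into a single ranking is a further judgment
# call that this sketch does not settle.
```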

4. Conclusion: Engineers as Responsible Experimenters

In a 2007 report, a European Union expert group on science and governance concluded the following:

[W]e are in an unavoidably experimental state. Yet this is usually deleted from public view and public negotiation. If citizens are routinely being enrolled without negotiation as experimental subjects, in experiments which are not called by name, then some serious ethical and social issues would have to be addressed.[4]

Viewing technologies as social experiments provides a new perspective for understanding the ethical responsibilities of engineers and other technologists. It demands that designers of technology take responsibility for their power in society rather than regard what they do as a mere job. In this way, it embraces the National Society of Professional Engineers’ assertion that “[e]ngineering has a direct and vital impact on the quality of life for all people” and thus that engineers must adhere “to the highest principles of ethical conduct.”

In the remaining chapters of this book, we will develop and investigate a robust set of ethical principles that can guide technology professionals in their work as well as society in its consideration of technology policy and use. For now, it is worth reflecting for yourself about what sorts of moral values, rules, and principles may be appropriate for responsible technological experimenters.

If you want to read more about “Big Tech” and its social experiments, I recommend this New Yorker article: “Big Tech is Testing You.”


  1. John Danaher (2016). “New Technologies as Social Experiments: An Ethical Framework,” Philosophical Disquisitions, March 15, 2016. https://philosophicaldisquisitions.blogspot.com/2016/03/new-technologies-as-social-experiments.html
  2. Ibo van de Poel (2016). “An Ethical Framework for Evaluating Experimental Technology,” Science and Engineering Ethics 22.
  3. David Collingridge (1980). The Social Control of Technology. London: Frances Pinter.
  4. Felt, et al. (2007). Taking European knowledge society seriously. Report of the expert group on science and governance to the Science, Economy and Society Directorate, Directorate-General for Research, European Commission. Brussels: Directorate-General for Research, Science, Economy and Society. Page 68.

License


The Primacy of the Public by Marcus Schultz-Bergin is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.