AB01 – Graham, S. S., Kim, S.-Y., Devasto, M. D., & Keith, W. (2015). Statistical genre analysis: Toward big data methodologies in technical communication.

A team of researchers sets out to bring the power of “big data” into the toolkit of technical communication scholars by piloting a research method they “dub statistical genre analysis (SGA)” and by describing and explaining that method in an article published in the journal Technical Communication Quarterly (Graham, Kim, Devasto, & Keith, 2015, pp. 70-71).

Acknowledging the value academic markets have begun assigning to findings, conclusions, and theories founded upon rigorous analysis of massive data sets, the team deconstructs the amorphous “big data” phenomenon and demonstrates how their SGA methodology can be used in two ways: to quantitatively describe and visually represent the generic content (e.g., types of evidence and modes of reasoning) of rhetorical situations (e.g., committee meetings), and to discover input variables (e.g., conflicts of interest) that have statistically significant effects upon output variables (e.g., recommendations) of important policy-influencing entities such as the Food and Drug Administration’s (FDA) Oncologic Drugs Advisory Committee (ODAC) (Graham et al., 2015, pp. 86-89).

The authors believe there is much to gain by integrating the “humanistic and qualitative study of discourse with statistical methods.” Although they respect the “craft character of rhetorical inquiry” (Graham et al., 2015, pp. 71-72) and utilize “the inductive and qualitative nature of rhetorical analysis as a necessary” initial step in their hybrid method (Graham et al., 2015, p. 77), they conclude their mixed-methods SGA approach can increase the “range and power” (Graham et al., 2015, p. 92) of “traditional, inductive approaches to genre analysis” (Graham et al., 2015, p. 86) by offering the advantages “of statistical insights” while avoiding the disadvantages of statistical sterility that can emerge when the qualitative, humanist element is absent (Graham et al., 2015, p. 91).

In the conclusion of their article, the researchers identify two main benefits of their hybrid SGA method. The first benefit is that communication genres “can be defined with more precision,” since SGA documents the actual frequency of generic conventions across a large sample of the corpus, rather than relying on experts’ opinions of the “typical” frequency of those conventions as perceived within a limited set of “exemplars” selected from a small sample of the corpus, as traditional rhetorical methods may do. In addition, the authors argue that analysis of a massive number of texts may reveal generic conventions that do not appear in the limited sample of exemplars studied by practitioners of the traditional rhetorical approach involving only “critical analysis and close reading.” The second benefit is that communication scholars can move beyond critical opinion and claim statistically significant correlations between “situational inputs and outputs” and “genre characteristics that have been empirically established” (Graham et al., 2015, p. 92).

Befitting the subject of their study, the authors devote a considerable portion of their article to describing their research methodology. In the third section, titled “Statistical Genre Analysis,” they begin by noting they conducted the “current pilot study” on a “relatively small subset” of the available data in order to “demonstrate the potential of SGA.” They then outline their research questions, the answers to two of which indeed seem to attest to the strength SGA can lend to both the evidence and the inferences communication scholars use in their own arguments about the communications they study. As in the introduction, the authors also note in this section the intellectual lineage of SGA in various disciplines, including “rhetorical studies, linguistics,” “health communication,” psychology, and “applied statistics” (Graham et al., 2015, pp. 71, 76).

As explained earlier, the communication artifacts studied by these researchers are selected from among the various artifacts arising from the FDA’s ODAC meetings, specifically the textual transcriptions of presentations (essentially opening statements) given by the sponsors (pharmaceutical manufacturing companies) of the drugs under review during meetings that usually last one or two days (Graham et al., 2015, pp. 75-76). Not only in the arenas of technical communication and rhetoric, but also in the arenas of Science and Technology Studies (STS) and of Science, Technology, Engineering, and Math (STEM) public policy, managing conflicts of interest among ODAC participants and encouraging inclusion of all relevant stakeholders in ODAC meetings are prominent issues (Graham et al., 2015, p. 72). At the conclusion of ODAC meetings, voting participants vote either for or against the issue under consideration, generally “applications to market new drugs, new indications for already approved drugs, and appropriate research/study endpoints” (Graham et al., 2015, pp. 74-76).

It is within this context that the authors attempt to answer the following two research questions, among others, regarding all ODAC meetings and sponsor presentations given at those meetings between 2009 and 2012: “1. How does the distribution of stakeholders affect the distribution of votes?” and “3. How does the distribution of evidence and forms of reasoning in sponsor presentations affect the distribution of votes?” (Graham et al., 2015, pp. 75-76). Notice that both of these research questions ask whether certain input variables affect certain output variables. In this case, the output variables are votes either for or against an action that will have serious consequences for people and organizations. Put another way, this is a political (or deliberative rhetorical) situation, and the ability to predict with a high degree of certainty which inputs produce which outputs could be quite valuable, given that those inputs and outputs could determine substantial budget allocations, consulting fees, and pharmaceutical sales (essentially, success or failure), among other things.

Toward the aim of asking and answering research questions with such potentially high stakes, the authors applied their SGA mixed-methods approach, which they explain comprised four phases of research conducted over approximately six months to one year and involved at least four researchers. The authors explain that SGA “requires first an extensive data preparation phase,” after which the data were “subjected” “to various statistical tests to directly address the research questions.” They describe the four phases of their SGA method as “(a) coding schema development, (b) directed content analysis, (c) meeting data and participant demographics extraction, and (d) statistical analyses.” Before moving into a deeper discussion of their own “coding schema” development, as well as the other phases of their SGA approach, the authors cite numerous influences from scholars in “behavioral research,” “multivariate statistics,” “corpus linguistics,” and “quantitative work in English for specific purposes,” while explaining that the specific statistical “techniques” they apply “can be found in canonical works of multivariate statistics such as Keppel’s (1991) Design and Analysis and Johnson and Wichern’s (2007) Applied Multivariate Statistical Analysis” (Graham et al., 2015, pp. 75-77). One important distinction the authors make between their method and these other methods is that, while the other methods operate at the more granular “word and sentence level,” which facilitates formulation of “coding schema amenable to automated content analysis,” the authors operate at the less granular paragraph level, which requires human intervention in order to formulate coding schema reflecting nuances discernible only at higher cognitive levels, for example whether particular pieces of evidence in the transcripts are based on randomized controlled trials (RCTs) addressing issues of “efficacy” or on RCTs addressing issues of “safety and treatment-related hazards” (Graham et al., 2015, pp. 77-78). Choosing the longer, more complex paragraph as the unit of analysis requires the research method to depend upon “the inductive and qualitative nature of rhetorical analysis as a necessary precursor to both qualitative coding and statistical testing” (Graham et al., 2015, p. 77).
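To make the data preparation and coding phases more concrete, the following is a minimal sketch, in Python, of what paragraph-level coding records and per-meeting frequency tabulations might look like once human coders have applied a coding schema. This is not the authors’ code or data; the record structure (CodedParagraph), its field names, and category labels such as “RCT-efficacy” are hypothetical assumptions for illustration only.

```python
# Minimal sketch (not the authors' code) of paragraph-level coding records such as
# might result from SGA's "directed content analysis" phase. All names and values
# below are hypothetical.
from collections import Counter
from dataclasses import dataclass

@dataclass
class CodedParagraph:
    meeting_id: str      # which ODAC meeting the transcript comes from
    paragraph_id: int    # position of the paragraph within the sponsor presentation
    evidence_type: str   # e.g., "RCT-efficacy", "RCT-safety", "anecdotal"
    reasoning_mode: str  # e.g., "causal", "analogical", "statistical"

# A toy corpus of coded paragraphs from two hypothetical meetings.
coded = [
    CodedParagraph("ODAC-2010-03", 1, "RCT-efficacy", "statistical"),
    CodedParagraph("ODAC-2010-03", 2, "RCT-safety", "causal"),
    CodedParagraph("ODAC-2011-07", 1, "RCT-efficacy", "statistical"),
    CodedParagraph("ODAC-2011-07", 2, "anecdotal", "analogical"),
]

# Tabulate how often each evidence type appears per meeting -- the kind of
# per-meeting frequency table that could later feed statistical analyses.
freq = Counter((p.meeting_id, p.evidence_type) for p in coded)
for (meeting, evidence), count in sorted(freq.items()):
    print(f"{meeting}: {evidence} x {count}")
```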

In the final section of their explanation of SGA (their research methodology), the authors summarize their statistical methods, which include both “descriptive statistics” and “inferential statistics,” and explain how they applied these two types of methods, respectively, to “provide a quantitative representation of the data set” (e.g., “mean, median, and standard deviation”) and to “estimate the relationship between variables” (e.g., “statistically significant impacts”) (Graham et al., 2015, pp. 81-83).
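The distinction between these two roles can be illustrated with a small, purely hypothetical sketch; the group labels, counts, and the choice of a two-sample t-test below are illustrative assumptions, not the authors’ actual procedure (they report multivariate techniques such as multiple regression).

```python
# Minimal sketch (invented numbers, not from the article) distinguishing descriptive
# statistics (summarizing the data set) from inferential statistics (estimating
# whether a relationship between variables is statistically significant).
import statistics
from scipy import stats

# Hypothetical per-presentation counts of RCT-efficacy evidence paragraphs.
efficacy_counts_approved = [4, 6, 5, 7, 3]    # presentations whose drug was approved
efficacy_counts_rejected = [8, 9, 7, 10, 6]   # presentations whose drug was rejected

# Descriptive statistics: quantitative representation of the data set.
all_counts = efficacy_counts_approved + efficacy_counts_rejected
print("mean  :", statistics.mean(all_counts))
print("median:", statistics.median(all_counts))
print("stdev :", statistics.stdev(all_counts))

# Inferential statistics: is the difference between groups plausibly due to chance?
t_stat, p_value = stats.ttest_ind(efficacy_counts_approved, efficacy_counts_rejected)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```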

Returning to the point of the authors’ research, namely demonstrating how SGA empowers scholars to provide confident answers to research questions and therefore to create and assert knowledge clearly valued by societal interests, their SGA enables them to state that their “multiple regression analysis” found “RCT-efficacy data and conflict of interest remained as the only significant predictors of approval rates. Oddly, the use of efficacy data seems to lower the chance of approval, whereas a greater presence of conflict of interest increases the probability of approval” (Graham et al., 2015, p. 89). On one hand, this finding encourages entities aiming to increase the probability of approval to allocate resources toward increasing the presence of conflicts of interest, since that is the only input variable shown to increase approval rates. On the other hand, the finding provides evidence that entities claiming conflicts of interest illegally (or at least undesirably) affect ODAC participants’ votes can use to bolster their arguments that “stricter controls on conflicts of interests should be deployed” (Graham et al., 2015, p. 92).
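To make the regression claim concrete, here is a minimal sketch of a multiple regression of an approval outcome on two predictors. The data are fabricated to mimic the reported direction of effects, and the variable names and the ordinary-least-squares setup are illustrative assumptions rather than the authors’ actual model or data.

```python
# Minimal sketch (fabricated data) of multiple regression in the spirit of the
# reported finding: approval modeled as a function of RCT-efficacy evidence and
# conflict-of-interest presence. All names and values are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 40
efficacy = rng.uniform(0, 10, n)   # amount of RCT-efficacy evidence in a presentation
conflict = rng.uniform(0, 5, n)    # degree of conflict of interest among participants

# Fabricate an outcome that mimics the reported direction of effects:
# more efficacy evidence lowers approval, more conflict of interest raises it.
approval = 0.5 - 0.04 * efficacy + 0.08 * conflict + rng.normal(0, 0.05, n)

# Ordinary least squares on a design matrix with an intercept column.
X = np.column_stack([np.ones(n), efficacy, conflict])
coef, *_ = np.linalg.lstsq(X, approval, rcond=None)
print("intercept, efficacy, conflict coefficients:", np.round(coef, 3))
```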

03B – Technology and Human Ends: Human Beings as “Makers” or “Tool-Users”

Mumford, Lewis (1966). Tool-users vs. Homo sapiens and the megamachine.

Mumford concludes that humans’ mid-twentieth-century commitment to the rationalism of science and technology, and to a vision of a future in which humans not only will have mastered nature but will have separated themselves from it as much as possible, is flawed and unbalanced: it ignores the essence of human identity and organic life and exalts “technical and scientific progress” (Mumford, 2003, p. 344) with its promises of a chance at the god-like omniscience and omnipotence of the ruling elite or of a position within the cocoon-like comforts of the docile masses (Mumford, 2003, pp. 349-351). First, Mumford argues that the notion of humans as differentiated by their capacity to create and use tools (or technology), owing to hand and finger dexterity, is incorrect (Mumford, 2003, pp. 345-346). According to Mumford, humans’ mental capacities, as manifested in abstract and symbolic thought, language development and communication, autonomous and social activities, and varied human personalities, are what differentiate the human species (Mumford, 2003, pp. 345-347). Mumford also argues, however, that it is this very abundance of mental capacity that can drive humans to excess through “demonic promptings of the unconscious,” for instance the libertine excess of a privileged elite exempt from meaningful work or the destructive excess of political or commercial criminals exempt from meaningful prosecution (Mumford, 2003, p. 349).

In Mumford’s view, as an antidote to this propensity for excess, humans have created various means of maintaining balance by directing surplus mental capacity and energy into “the primeval repetitive order of ritual,” into development of “the ultimate collective product, spoken language,” or into “the discipline of tool-making and tool-using” (Mumford, 2003, pp. 347-348). It is not with these balanced, community-based antidotes to excess that Mumford takes issue. Mumford takes issue with the antidote to excess embodied by the “megamachine,” an antidote that paradoxically reveals itself not as an antidote but as a “monotechnics, devoted to the increase of power and wealth by the systematic organization of workaday activities in a rigidly mechanical pattern” for the enrichment of centralized authorities such as the Egyptian pharaohs and their religious, military, and secular nobilities. According to Mumford, this is when humans first became estranged from local, varied, meaningful work devoted to organic “growth and reproduction” and first became subject to “work at a single specialized task, segregated from other biological and social activities,” or in other words, subject to the massive organization, specialization, and mobilization of human resources (Mumford, 2003, p. 348).

In this scenario, most humans have become the unwitting components of a rather abstract machine controlled by a ruling elite, the machine’s primary purpose being the enrichment of that elite. Therefore, from Mumford’s perspective, this abstract machine is not an antidote restraining humans’ excess but is rather itself the most perverted expression of that excess, that is, of the “delusions of omniscience and omnipotence” harbored in the surplus energy of the human mind. This “archetypal collective machine – the human model for all later specialized machines,” the “Megamachine” (Mumford, 2003, p. 348), is for Mumford that which poses the most profound threat to the essence of humanity (Mumford, 2003, p. 350), especially since it has been vastly empowered by the compounding influence of the “mathematical and physical sciences upon technology” (Mumford, 2003, p. 344) and by the “current scientific and educational ideology, which is now pressing to shift the locus of human activity from the organic environment, the social group, and the human personality to the Megamachine, considered as the ultimate expression of human intelligence” (Mumford, 2003, p. 350). In Mumford’s view, this “machine-centered metaphysics” should be forsaken and replaced with a balanced, life-centered metaphysics that is less centrally controlled and less “coercive, totalitarian, and – in its direct human expression – compulsive and grimly irrational” (Mumford, 2003, p. 350).

03A – Heidegger on Technology

Heidegger, Martin (1954). The question concerning technology.

In the first paragraph of his essay, Heidegger states that his purpose is to ask questions about technology (since questioning builds a way and “the way is thinking”) and thereby to “build a free relationship to” technology, where a “free relationship” is defined as a relationship that allows a person to grasp the “essence” of technology. Once a person grasps the essence of technology, that understanding will enable the person to “experience the technological within its own bounds.” Using what he considers the commonly accepted definition of technology as a “means and a human activity” as his point of departure, Heidegger embarks on a long argument about how, although this definition may be “correct,” it is not “true” (Heidegger, 2003, p. 253); and if it is not true, then a person whose understanding is limited to this definition does not grasp the “essence” of technology and therefore does not have a free relationship to it. Heidegger points out that although the “essence” of something has since ancient times been understood to mean what a thing is, his notion of the “essence” of something requires that the “essence” be “true.” Since Heidegger has already stated he does not believe the commonly accepted “instrumental and anthropological definition of technology” to be “true,” he is obligated to explain why, and his explanation involves rationalizing how words and concepts, when considered a certain way, become other words and concepts that become other words and concepts until Heidegger has arrived at his destination, a destination where an “instrument” equals a “means” equals a cause equals one of the “four causes” known since at least Aristotle’s time equals “to be responsible for” equals “to reveal” equals “technology” as a “way of revealing” equals “truth.” At this destination, if one has followed Heidegger’s rationalizations and accepts them, then one is prepared for the next leg of the journey; and on this next leg, Heidegger promises “another whole realm for the essence of technology will open itself up to us” (Heidegger, 2003, pp. 253-255).

Heidegger guides us into and through this new realm by employing again a host of linguistic transformations (or etymologies or derivations, if one prefers) that begin with “the revealing” becoming “a challenging” (perhaps equivalent to “the will to mastery”), with humans as the creators of “modern technology” based upon science and physics (as opposed to pre-scientific or artisanal technology) that “puts to nature the unreasonable demand that it supply energy which can be extracted and stored” and used as a means toward some end that consists, essentially, of “driving on to the maximum yield at the minimum expense” (Heidegger, 2003, p. 256). It appears, then, that for Heidegger the realm of technology we have now entered presupposes an inexplicable force, though he does not call it an inexplicable force. Perhaps the inexplicable force is what at the beginning of his essay he calls a “will to mastery” that becomes “a revealing” that becomes “a challenging” that becomes an “enframing.” Heidegger admits his use of the term “enframing” is unique, and he elaborates that it “means the gathering together of the setting-upon that sets upon man, i.e., challenges him forth, to reveal the actual, in the mode of ordering, as standing reserve” (Heidegger, 2003, p. 258). Standing reserve here seems to mean for Heidegger what is commonly considered stored or surplus material or human resources that could be deployed to attain some objective. At this point in his essay, Heidegger proposes that the “essence of modern technology starts man upon the way” of revealing that discloses to man how “the actual everywhere” is standing reserve; and, perhaps most important, Heidegger makes another linguistic leap (or draws another linguistic circle, so to speak) and introduces the term “destining,” which he says is “the sending that gathers” and which apparently defines in different (but equally abstract) terms an aspect of what he earlier called the “enframing” (Heidegger, 2003, p. 260) and what I proposed may be called the inexplicable force. Heidegger never calls anything an inexplicable force, however, choosing rather to travel in what appear to be infinitely spiraling loops of terms conveniently defined by other terms he has defined, until the last term he defines is defined by the first term he defined and he arrives at the same place from which he departed. In this case, that place of departure is “the essence of technology,” and that essence is “ambiguous,” and “such ambiguity points to the mystery of all revealing, i.e. of truth” (Heidegger, 2003, p. 263).

02B – Defining technology

Kline, Stephen J. (1985). What is technology.

Arguing for agreement on the meanings of the primary usages of the term “technology,” Kline believes such an agreement would clarify important concepts in the discourse of science, technology, and society, such as “innovation” and the nature of “sociotechnical systems” (Kline, 2003, pp. 210, 212). Kline delineates four usages of the term “technology” that can be positioned at various points on a continuum from concrete (narrow) to abstract (broad).

The first and most concrete usage Kline describes is the one he considers most pervasive, that of “manufactured articles” such as machines or refrigerators, i.e. items created by humans or their technologies, as opposed to items such as water or stone, i.e. items existing naturally or independent of human activity. Kline suggests common terms such as “hardware” in the engineering vernacular or “artifact” in the anthropology vernacular express this usage.

The second usage Kline explains is the first of his two sociotechnical systems. Kline suggests the term “sociotechnical system of manufacture” to denote all the “elements” required to manufacture hardware, including humans, equipment, natural resources, and social processes and systems such as economic or legal frameworks (Kline, 2003, pp. 210-211).

The third usage Kline explains refers to “information, skills, processes, and procedures for accomplishing tasks.” He suggests terms including “knowledge,” “know-how,” or “methodology” as expressions of this usage. Kline asserts this usage is equivalent to Ellul’s idea that any “rationalized methodology” could be considered technology (Kline, 2003, p. 211). In this sense, the usage seems to me similar to what practitioners in industry or business today call “best practices,” or in other words, the commonly accepted optimal means to some end, even though no sound theoretical foundation or rigorously accumulated evidence may exist upon which to base this common acceptance.

The fourth usage Kline describes is the second of the sociotechnical systems, which he describes as “the basis of what we do with the hardware after we have manufactured it.” He suggests the term “sociotechnical system of use” as an appropriate expression of this usage. In Kline’s view, it is important to delineate the supporting systems that enable the application (deployment or implementation) of hardware after it has been created (Kline, 2003, p. 211).

Kline emphasizes that it is the combination of these two sociotechnical systems (manufacture and use) that has enabled our species, as an extension of our ancestral species, to “extend our capacities” to the degree that we have “materially affected our evolutionary path” (Kline, 2003, pp. 211-212).

Two other important ideas Kline asserts are that 1) humans are the only species that “purposely makes innovations” to its sociotechnical systems with the aim of improving those systems, a characteristic clearly differentiating humans from animals, and 2) the human species’ improvements of its sociotechnical systems, and thereby its extensions of its capacities, are accelerating in magnitude, with an even greater acceleration noted since the early nineteenth century (Kline, 2003, pp. 211-212).

Gehlen, Arnold. (1983). A philosophical-anthropological perspective on technology.

Gehlen applies philosophical and anthropological perspectives to explore concepts relevant for defining technology, to analyze the human’s relationship to technology, and to discuss the past, present, and future consequences of both of these topics. In the first three parts of his essay, Gehlen explores a number of concepts that present both a rationale for the invention of technology and an outline of the form of its definition.

Regarding the invention of technology, Gehlen cites a number of authors in asserting the premise that technology was and continues to be invented by the human species to augment its biological “organs and instincts” in order to survive and propagate in “any natural, uncultivated environment.” Gehlen notes he describes this concept of the “deficient being” in his own work, Der Mensch, but credits the concept to Herder (probably Johann Gottfried von Herder, an eighteenth-century philosopher). It is also important to consider, in Gehlen’s view, that the term technology “must refer to both the actual tools and the skills needed to create and use them which make it possible for this instinct-poor and defenseless creature” to survive and reproduce (Gehlen, 2003, p. 213).

With this rationale for the invention of technology established, Gehlen moves on to describe Kapp’s concept of “organ projection” and the three principles it encompasses: “organic relief,” “organic substitution or replacement,” and “organic strengthening or improvement.” Focusing on organic substitution, Gehlen discusses the significance of humans’ substitution of inorganic materials for organic ones to improve technological artifacts (or tools), for example by substituting metal for wood or stone in crafting utensils or weapons. Among organ projections of the substitution type, Gehlen notes that perhaps the most substantial and important one developed relatively recently in human history, “namely the replacement of the organic power of man and beast with inorganic or stored and recovered solar energies: coal, oil, electricity,” and “nuclear energy,” which have increased beyond comprehension the power and energy available to the human species (Gehlen, 2003, pp. 213-214).

In part two of his essay, Gehlen discusses a few other concepts relevant to defining technology and humans’ relationship to it. First, he notes that Hans Freyer pointed out that, since the beginning of the industrial age in the 18th century, the relationship of humans to technology can no longer be considered one in which humans determine the desired end (or objective) and then invent a means (or technology) to attain that end. Instead, technology “creates a kind of abstract ability,” and it is only later that humans consider what end the newly acquired technology may serve as a means toward. This is significant because it inverts the relationship between ends and means: the means (technology) becomes the end (human objective), and only later is a desired application of the technology pursued or discovered. This inversion, Gehlen interprets Freyer as arguing, elevates the “technological spirit” to the level of a fundamental “absolute,” indicating humans’ “desire for controlling and interfering with nature, and for progress” has become the overarching aim. In some sense, it seems to me that here Gehlen (interpreting Freyer) is asserting that humans have decided it is somehow better to presuppose that technological advancement (or innovation) will bring progress (or, in other words, higher stages of evolution) than to presuppose it will not. Therefore, redistributing resources (financial and mental, for example) away from efforts to plan and determine worthwhile ends and toward developing superior means (or technological innovation) becomes the optimal method (or gamble) for ensuring survival and reproduction of whatever group controls those resources. This inversion, whether it occurs consciously or unconsciously, elevates technology to the position of primary end (or purpose) and provides a foundation for Gehlen to introduce his concept of the “superstructure,” consisting of “reciprocal interaction” between various “spheres” (or disciplines or domains) such as natural science, industrial production, applied science, and technology, “along with the entire sphere of information.” The “superstructure,” according to Gehlen, is the “primary difference between our culture and all previous cultures” and could be viewed as the foundational concept for what he argues must necessarily be a “general definition” of technology, a definition much broader in scope than the now insufficient idea of technology as nothing more than the practical application of scientific knowledge, that is, as nothing more than applied science (Gehlen, 2003, pp. 214-215).

In the final paragraphs of part two, Gehlen raises the question of who or what may be in control in this relatively recently dawned era of the technological “superstructure” composed of multiple subsystems (or disciplines or fields) reciprocally interacting. Without answering the question himself at this point, Gehlen cites Frederick Jonas in proposing that we may be in an era “without a subject term or primary actor.” Going further, Gehlen seems to me to imply that previous and current conceptions of agency assume agents are rational, and that greater degrees of rationality could lead to greater degrees of agency, that is, to greater degrees of power and control by the agent. In this new era, however, the very meaning of rationality changes. Gehlen interprets Jonas to say that rationality “no longer signifies a control of relationships, but the optimal reaction to facts which are continuously changing due to the boundless stream of new occurrences – i.e. optimal reaction to the unexpected” (Gehlen, 2003, p. 215). Gehlen concludes part two of his essay by suggesting a consolation, if there could be one in dispensing with humans’ conception of themselves as the primary agents of the world: perhaps this is the era in which humans could be grateful for being the “beneficiaries” of their loss of agency, since the population of the human species has massively expanded over the past few hundred years. In other words, even while losing agency, the human species has not only survived but, at least in terms of biological reproduction, increased its numbers (Gehlen, 2003, p. 215).

Approaching technology from another perspective in part three of his article, Gehlen references Hermann Schmidt’s law of the “objectification of human work” in three phases, with each phase distancing humans further from the work. In the first phase, the work is still determined, directed, and powered by humans, although tools are used to increase power and accomplish more. The human is still the subject (or agent) in this phase. In the second phase, the work is determined and directed by humans, but the operational power is provided by machines. And, in the third phase, the intellectual contribution of humans (determining and directing) is automated also such that the machines determine, direct, and power the work and the human is relieved (or deprived, if one prefers) of duty. According to Gehlen, it is important to recognize the passage through these phases is not intended by humans, but rather “this law operates, so to speak, behind the back, and extends throughout the whole of human cultural history.” Gehlen adds another dimension to this view of technology by mentioning how in 1940 his book Der Mensch was published and “described man as what one would now call a feedback system” (Gehlen, 2003, pp. 215-216).

The concept of feedback systems has a central role in parts two and three of Gehlen’s essay, and that central role continues into part four as Gehlen begins his discussion of the moral and ethical consequences of the concepts he has discussed thus far and of the various ways he has outlined the boundaries for defining technology. Essentially, Gehlen, both in his own voice and in citing Schmidt and others, seems concerned about the objectification of human work, about the apparent dissolution of the “epistemological gap” that existed when physicists, for example, could easily distinguish between the scientist (representing conscious subject) and nature (representing unconscious object), and about autonomous feedback systems existing and developing without humans controlling them. Moral and ethical questions begin to be raised that strike at humans’ image of the human, at “the belief in the value of rationality,” and at what constitutes “real technological progress.” In part four, Gehlen introduces the field of cybernetics as he continues to question the potential future and desirability of a world in which humans have succeeded, through modern technology, in more or less overcoming the “natural, uncultivated environment” only to be confronted by the inevitable dangers of estrangement from work and agency already enumerated by Marxist ideology, although those now fully realized dangers are re-marketed as modern technological advancement (Gehlen, 2003, pp. 216-217).

In part five of his essay, Gehlen makes a few points regarding the “presently discernable human impact of the modern, technologically-created life-environment,” beginning by stating that Max Weber, writing in 1908, had already asserted that civilization’s development had completely transformed humans’ spirituality. The next point Gehlen makes is that humans have become existentially dependent upon the artificial environment they have created, that humans’ “physical survival presupposes undisturbed functioning of energy plants, water supply systems, communication and information systems, chemical industries,” and the like, to the extent that even though a limited number of humans may still be able to survive in a natural environment by producing all they require, humans living in modern urban areas are entirely dependent upon modern technological systems to survive. This is true to such an extent that Gehlen raises a point he credits to Hannah Arendt: one must wonder whether these modern technological systems may “become part of our biological make-up” to such a degree that it may no longer be appropriate to classify humans among the mammalian species (Gehlen, 2003, p. 217). Elaborating further, he notes that Arendt cites Heisenberg to develop further the idea that the now firmly established technological superstructure operating beyond “‘conscious human effort to enlarge material power’” may have become “‘a biological development of mankind in which the innate structures of the human organism are transformed in an ever-increasing measure to the environment of man’” (Gehlen, 2003, p. 218).

In part six of his article, Gehlen discusses a few potential consequences for humans of the way technology has developed. First, he discusses the effect on humans of their recently developed existential dependence on the technological society or superstructure by describing how that dependence may insidiously replace what we have known as individual human interests (i.e., those thoughts, desires, or values that distinguish one individual from another) with the generalized interests of the collective society (i.e., those thoughts, desires, or values that all individuals have in common). Overall, it seems Gehlen laments the potential sacrifice of individual character and freedom for collective character and determinism. He says “it will likely be that the individual will feel only such needs as have a chance of collective fulfillment or which are guaranteed by law and are thus for the common good” (Gehlen, 2003, p. 218). Another consequence, according to Gehlen, of the way technological society has developed is that humans increasingly live vicariously through the media, and “such conditions are inseparably interwoven with the disadvantages of secondary experience.” By its nature, secondary (or mediated) experience is removed from reality, since at least one layer of interpretation (or subjectivity), the creator or recorder of the media, exists between the consumer and the reality represented in the media. In the end, Gehlen notes that even though these homogenizing and apparently deterministic forces resulting from technological advancement exist, the ultimate “mechanization of consciousness” is not a foregone conclusion, since “it is counter-balanced to the degree that our intellectual life is renewed.” Finally, in another somewhat optimistic reversal in tone, Gehlen notes it may be that currently universal or collective values represent an optimal balance such that “control of nature, both in technology and in the basic organization of society, and human propagation with social control catalyzed by this propagation, have reached a high level of mutual reinforcement” (Gehlen, 2003, p. 219).

Bijker, W. E. & Pinch, T.J. (1987). The social construction of facts and artifacts.

As the title of their essay indicates, Bijker and Pinch argue that the facts and artifacts typically considered the results of science and the manifestation of technology should be studied from what they call an “integrated social constructivist approach to the empirical study of science and technology” (Bijker & Pinch, 2003, p. 227). In their opinion, the traditional view that conveniently separated science (discovery of truth or knowledge) from technology (practical application of scientific knowledge) has been rightfully abandoned (Bijker & Pinch, 2003, p. 223). They argue, however, that this abandonment has not yet opened the grander vistas that could be revealed if only the various relevant disciplines (e.g., the sociology of science, the sociology of technology, or the history of science) and subdisciplines (e.g., innovation studies, the causes of belief, or the relationship between science and technology) would embrace and further develop the integrated social constructivist approach they espouse, an approach they propose may be best founded upon two clusters of “concepts and methods” developing, at the time they wrote their essay, within the sociology of science and the sociology of technology, respectively: the Empirical Program of Relativism (EPOR) and the Social Construction of Technology (SCOT).

Before providing an overview of EPOR and SCOT, I would like to mention a few other points or concepts Bijker and Pinch discuss in their review of relevant literature exploring the sociology of science, the relationship between science and technology, and technology studies. Regarding the sociology of science, they first differentiate between the discipline’s earlier focus on “science as an institution” and on the “norms, career patterns, and reward structures” of scientists and its more recent focus on scientific knowledge itself, that is, on the “actual content of scientific ideas, theories, and experiments” (Bijker & Pinch, 2003, p. 221). Citing David Bloor’s work, Bijker and Pinch note that a primary tenet of the “strong programme” in the sociology of science is the study of the “causes of beliefs” rather than the truth or falsity of those beliefs. When studied in this way, “all knowledge claims are to be treated as being socially constructed, that is, explanations for the genesis, acceptance, and rejection of knowledge claims are sought in the domain of social world rather than in the natural world” (Bijker & Pinch, 2003, p. 222). Considered in this way, the privilege heretofore granted the epistemology of science is withdrawn, and scientific knowledge is revealed as “merely one in a whole series of knowledge cultures” (Bijker & Pinch, 2003, p. 222). Regarding the relationship between science and technology, Bijker and Pinch cite a few authors as they propose that neither science nor technology is the progenitor of the other. Their perspective is that science and technology each produce knowledge domains and that each may be a means to be exploited or an end to be pursued according to the situation. They view the relationship between science and technology itself as socially constructed and as a still fertile field worthy of further empirical study. Regarding technology studies, Bijker and Pinch distinguish the subdisciplines of “innovation studies, history of technology, and sociology of technology” (Bijker & Pinch, 2003, pp. 223-224). In their view, none of these subdisciplines has gone far enough in studying the technological artifacts themselves (i.e., the technological content) and how that content, whether successful or unsuccessful, could be explained as “a social construct” susceptible to a “sociological analysis of belief” (Bijker & Pinch, 2003, p. 225) or, I would add, to the social construction of the specifications of the artifact (i.e., the technology).

Bijker and Pinch describe EPOR as an empirical approach to analyzing and explaining scientific knowledge (content) that can be divided into three phases, which ultimately reveal the socially constructed nature of scientific knowledge. The first phase of EPOR establishes the socially constructed nature of scientific knowledge by demonstrating its “interpretative flexibility,” or in other words, by demonstrating that scientific findings may be interpreted in numerous ways. Although in the sciences this initial phase of “interpretative flexibility” often ends with a general consensus among scientists regarding what is the truth, Bijker and Pinch argue that demonstrating there is a phase of “interpretative flexibility” is sufficient to “shift the focus for the explanation of scientific developments from the natural world to the social world” (Bijker & Pinch, 2003, p. 226). The second phase of EPOR focuses on the controversy (disagreement or differences) revealed in the first phase and identifies the “social mechanisms that limit interpretative flexibility” and thereby dispense with the controversy as consensus is reached (Bijker & Pinch, 2003, p. 226). The third phase of EPOR would identify relationships between the social mechanisms that caused consensus (or established the system of belief) and the general “socio-cultural milieu” (Bijker & Pinch, 2003, p. 226).

The SCOT approach to technology studies, according to Bijker and Pinch, is one in which “the developmental process of a technological artifact is described as an alternation of variation and selection,” and it thereby leverages a “multidirectional model” that differs from linear models, which often ignore the unsuccessful (or abandoned) paths of technological development. In earlier sections of their essay, Bijker and Pinch note their opinion that in the sociology of scientific knowledge there is a bias (or asymmetry, they say) toward studying only the successful paths of technological development. Bijker and Pinch argue, however, that using a multidirectional approach, one that studies both the abandoned and the selected paths of technological development from differing perspectives, will yield a clearer understanding of the socially constructed nature of technology and of the different “meanings” assigned to technology by different social groups according to their unique perspectives within the broader sociopolitical milieu (Bijker & Pinch, 2003, p. 227).

Winner, Langdon (1993). Social constructivism: Opening the black box and finding it empty.

Langdon Winner states that the purpose of his paper is to examine contemporary work in the study of science and technology and to question its success in orienting “our understanding of the place of technology in human affairs.” To do this, he critically analyzes social construction, a “research strategy” that at the time was ascending and gaining traction with scholars studying the history, sociology, and philosophy of science and technology (Winner, 2003, pp. 233-234); comparatively analyzes social construction in relation to some other research methods (Winner, 2003, pp. 238, 241); and proposes adding an evaluative dimension to studying the history, sociology, and philosophy of science and technology that could result in what he would view as a better, more active, politicized approach, one that could more concretely influence the role and impact of science and technology on the future of humanity (Winner, 2003, pp. 241-242).

In the first part of his paper following the introduction, Winner explores “the dynamics of change” in science and technology and describes how social construction as a methodology largely succeeds in overcoming what he views as the incorrect perspective of some scholars in sociology and the humanities, a perspective that dismisses the “black box” of technology by assuming that “one need not understand anything about what goes on inside such black boxes,” that the “device or system” may be “described solely in terms of its inputs and outputs,” and that a superficial understanding of black boxes as “instruments that perform certain valuable functions” is sufficient (Winner, 2003, pp. 234-235). In Winner’s view, scholars of science and technology have a responsibility, at least to themselves and to their own effort to maintain scholarly relevance in a world increasingly influenced by technologies, to “somehow gain a well-developed understanding of at least a representative slice of them” in order to avoid appearing too abstract and detached from reality. Another positive attribute of social construction cited by Winner is its debunking of the attitude he says historians and sociologists of science and technology had taken since the 1970s, an attitude that elevated so-called scientific knowledge (or objective truth) over technology on the grounds that scientific knowledge “deals with the fundamentals of human knowledge” whereas technology deals with the means toward attaining subjective (or relative) practical aims, such as efficiently and effectively pursuing economic or political objectives.

As an alternative, social constructivists attempt, according to Winner, to develop “’more realistic’ models of their own” by applying “’the empirical programme of relativism’ commonly used in the sociology of science.” Winner’s description of the empirical program of relativism aligns with that of other scholars (Bijker & Pinch, 2003, p. 226) in explaining that EPOR denies the objectivity of scientific knowledge (or truth) by demonstrating how “’truth’ can be seen to emerge through a variety of social activities in which different social groups contend to establish their knowledge claims” (Winner, 2003, p. 235). One distinction Winner makes between EPOR and the particular variety of it that social constructivists apply to the study of technology is that the latter focuses more on the “’interpretive flexibility’ of technical artifacts and their uses,” which he explains as meaning that various people (or interest groups) have various opinions about how to utilize the same technological artifact, and that the study of the reasons for those opinions may be a significant part of the overall process of understanding and resolving contentious views of technology and of reaching “points of closure” where the “fundamental process of innovation ceases.” Although Winner notes a general consistency between social construction theory and practice, he also points out that some subtle distinctions in emphasis exist, for example the focus of Michel Callon and Bruno Latour on “actor networks” that “include both living persons and non-living technological entities,” and the focus of Bijker and Pinch on society as “an environment or context in which technologies develop” (Winner, 2003, p. 236).

Winner underscores another strength of social construction in researching the “dynamics of technological change”: its “clear, step-by-step guidance for doing case studies of technological innovation.” For those who practice it, Winner notes, it can yield a coveted scholarly case study rich in empirical detail, identifying the various agents and factors influencing the “multi-centered, complex process” of technological development, including revelations of the “spectrum of possible technological choices,” and this means social constructivists “emphasize contingency and choice rather than forces of necessity in the history of technology.” One further benefit of social construction highlighted by Winner is its usefulness in dissolving the “sometimes highly arbitrary distinctions between the social sphere and the technical sphere,” which helps demand the attention of philosophers who can no longer dismiss technology as irrelevant to the “human experience” (Winner, 2003, p. 236).

In most of the first part of his paper, Winner focuses on and explains many of what he views as the valuable aspects of social construction. In the final few paragraphs of this first part, however, he pauses to urge caution against embracing social construction as the ultimate methodology for understanding technology since even social construction is socially constructed and may suffer from an “Oedipus complex” in relation to its intellectual ancestors: the criticism social constructivists have directed at “the whole range of thinkers who have written about the origins and significance of modern technology” may not give enough credit to the different, but still valuable contributions of “sociologists of technology like William Ogburn, historians of technology like Lynn White, and a variety of economists who have written on the economic correlates of innovation,” as well as others such as Martin Heidegger, “Lewis Mumford, Jacques Ellul, Ivan Illich, members of the Frankfurt school of critical theory, and any number of Marxist social theorists, not to mention Marx and Engels themselves” (Winner, 2003, p. 237).

Winner’s acknowledgement of the positive attributes and contributions of both the social constructivists and of some of the previous scholarly traditions the social constructivists criticize establishes a foundation from which Winner can 1) criticize what he views as the weaknesses of social construction, and 2) advocate for a scholarly position that would focus more on the societal consequences of technology and the responsibility of scholars to take clear positions regarding the advantages and disadvantages of specific technological perspectives and choices. Elaborating on these two areas is Winner’s concern in the rest of his paper. Beginning with part two, subtitled “technology and human experience,” Winner addresses what he views as three of the four major weaknesses of the social construction research methodology.

First, Winner argues that a major weakness of social construction is its “almost total disregard for the social consequences of technological choice,” even though, as Winner has already elaborated, the social construction methodology depends on the fundamental notion of competing social groups choosing (or deciding upon) their view of what would be the optimal developmental direction in a particular technological discipline and then advocating for broader adoption of the view they have chosen. The main point he argues is that social construction is neutral, and that this neutrality is a weakness. Although social construction empirically and effectively identifies and describes the factors determining why certain technologies, certain implementations of those technologies, and certain “social constituencies are the ones that prevail within the range of alternatives available at a given time,” social construction does not even attempt to address what to Winner are the more important questions, namely those that ask what the consequences for humanity have been of the technological reality resulting from the social construction of technology. For example, Winner asks “what the introduction of new artifacts has done for people’s sense of self, for the texture of human communities, for qualities of everyday living, and for the broader distribution of power in society” (Winner, 2003, p. 237). Winner provides a couple of reasons why he thinks the social constructivists neglect to address the consequences of the chosen directions in which technology has developed. One reason Winner cites, but with which he disagrees, is that the social constructivists believe the issues of technological consequences have been amply addressed already by “earlier generations of humanists and social scientists.” Another reason, in Winner’s opinion, is that the social constructivists are primarily sociologists enamored with what they view as a powerful research method, one they have already applied with success to what they consider the loftier subject of the origins of scientific knowledge (or knowledge of the natural world). Energized by that success, these sociologists are eager to demonstrate further the power of the social construction research methodology, and they have settled upon applying it to the study of technology, a field they see as a subdiscipline of science anyway (Winner, 2003, p. 237).

Second, Winner asserts that social construction suffers from the same “elitism” characteristic of some theories of “political pluralism” and “bureaucratic politics,” in that it acknowledges and therefore empowers perceived “’relevant social actors’ who are engaged in a process of defining technical problems, seeking solutions, and having their solutions adopted as authoritative within prevailing patterns of social use.” Winner proposes that the act of choosing who or what count as the “significant” human or non-human agents and dynamics determining the direction of technological development necessarily excludes, represses, or suppresses those not chosen, since the act of choosing is itself based upon perhaps unexpressed assumptions that may be rooted in “deep-seated political biases” informing the contexts within which alternative technological solutions are presented and decisions about which among them is optimal are made (Winner, 2003, p. 238).

Third, Winner claims social constructivists ignore key philosophical contributions pertinent to technology studies; for example, Winner states that “one of the key claims in philosophical writings is that if one looks closely, one sees basic conditions that underlie the busy social activities of technology-making” (Winner, 2003, p. 238). Elaborating further on this idea, Winner says there is “the possibility that the ebb and flow of social interaction among social groups may reflect other, more deeply seated processes in society,” processes one could argue are manifestations of “deeper cultural, intellectual or economic origins of social choices about technology.” To Winner, it appears the “social constructivists do not seek to reveal” these conditions or dynamics, perhaps because the social constructivists consider the conditions too abstract and therefore less amenable to their favored empirical methods (Winner, 2003, pp. 238-239). As one example, Winner cites Marx’s theory of how the “structural relationships between classes are fundamental conditions that underlie all economic institutions, government policies and technological choices” (Winner, 2003, p. 238). As another example, Winner differentiates between what he views as the social constructivists’ simplistic conception of “autonomous technology” (or technological determinism) and the philosophers’ more nuanced conception of it. To the social constructivists, “autonomous technology” is nothing more than the “technological determinism” they have already shown to be false with their “models of a dynamic, multicentered process of social selection” as the primary driver of technological change. To philosophers, “autonomous technology” is not concerned with “technological determinism” but with the technological result of a process in which, “as people pursue their interests, socially constructing technologies that succeed at some level of practice, they undermine what are or ought to be key concerns at another level.” In other words, some socially constructed technologies may result in unforeseen and undesired consequences for the very agents who constructed them. Therefore, “autonomous technology” has “nothing at all to do with any self-generating properties” of technological determinism, but has everything to do with “the often painful ironies of technological choice” (Winner, 2003, p. 230).

Fourth, Winner criticizes social construction for its intentional neutrality regarding the results of technological development. He proposes that what social construction claims as its highest achievement is also its lowest failure: the EPOR-derived focus on interpretive flexibility, pursued by way of exhaustive identification and analysis of the factors and dynamics of socially constructed technological development, yields valued academic case studies with little or no impact on the realities of technology in human life (Winner, 2003, p. 239). From Winner’s perspective, this may be fine “in cases where social consensus is achievable”; however, when there are serious disagreements regarding technological choices that must be made, the results of which may be irreversible, Winner proposes the social constructivists add no value to the discussion, since they “forbid” themselves on “methodological grounds” from taking an “evaluative stance” or from subscribing to “any particular moral or political principles” (Winner, 2003, p. 239). If this is not enough criticism of social construction, Winner has more. He believes the neutrality of social constructivists in reality “amounts to a political stance which regards the status quo and its ills and injustices with precious equanimity” (Winner, 2003, p. 240). Alas, Winner concludes, social construction “appears content to define itself as a narrow academic subfield – innovation studies.” And from Winner’s perspective, this is too bad, since the policy debates regarding the development and application of technologies as parts of human life in the decades before and after the turn into the 21st century certainly would benefit from the participation of “leading scholars of technology and society” if only those scholars had not “retreated into a blasé, depoliticized scholasticism” (Winner, 2003, p. 242).

Interactions – Codebreaker

Codebreaker, a docudrama about Alan Turing, is a fascinating introduction both to Turing the public intellectual, cited by many as a primary contributor to modern computing and artificial intelligence, and to Turing the private human being, who was convicted as a criminal by the English society of his time because of personal inclinations commonly accepted and protected by most societies today.

Simon Schaffer, a professor in the Department of History and the Philosophy of Science at the University of Cambridge, makes this poignant comment in one of the documentary segments of the film:

I mean what’s going on, partly, in the insupportable tragedy of Turing’s fate, is what happens when deeply institutionalized English intellectuals encounter what life’s like outside the walls. They forget, and cannot imagine, how evil and vicious life can be.

02A – The task of a philosophy of technology

Bunge, Mario. (1979). Philosophical inputs and outputs of technology.

In the introduction to his essay, Bunge asserts his thesis that technology “has a philosophical input and a philosophical output and, moreover, part of the latter controls the former.” In his elaboration of this claim, he implies three conditions must be met to prove an activity or discipline is philosophical: 1) the activity must exhibit some traits associated with philosophy, e.g. the assumption that reality can be known and altered “through experience and reason,” 2) the discipline must include use of theories that are relevant to philosophy, and 3) some claim regarding the discipline’s approach to ethics (or morality) must be possible. Bunge’s discussion in his introduction also notes how technology’s relationship to philosophy is similar to and different from the relationship of science to philosophy, and it asserts that this relationship to philosophy is itself substantial enough to prove that technology is a legitimate discipline and that it warrants “a fully developed philosophy” to accompany it, a philosophy distinct from the philosophy of science, a philosophy that would establish technology as a “major organ of contemporary culture,” much like science (Bunge, 1979/2014, chapter 17, section: introduction, para. 1-2). In the remainder of his essay, Bunge proposes a view of the problems of (or questions to be answered by) a philosophy of technology, a definition of technology that includes explication of its relationship to science and of its distinct research and development and production components, an epistemology of technology, a metaphysics of technology, a statement on the values of technology, and an ethics of technology.

Bunge delineates science from technology by focusing on “knowledge” and “change.” Whereas science performs research (or changes things) in order to gain knowledge, technology applies knowledge in order to incite change (Bunge, 1979/2014, chapter 17, section: Branches of contemporary technology, para. 1). In other words, he says, while the research of both science and technology pursues specific objectives, those objectives are different in that science pursues research in order to know the truth, while technology performs research in order to discover truth that can be applied to some other aim. Even though, according to Bunge, science and technology are methodologically similar (both pursuing specific aims), they are purposefully different (pursuing different aims). He clarifies his view on how the epistemology of technology differs from that of science by noting it is driven by practical results and by a looser definition of truth than science allows (Bunge, 1979/2014, chapter 17, section: The epistemology of technology).

Bunge’s “Flow Diagram of Technological Process” describes how science and technology operate in unison, to a certain degree, with each serving as both an input and an output for the other – that is, scientific knowledge can be applied in technological research to create a product, and that product can then become an object or a tool of scientific research. Bunge’s assertion that technology consists not only of the technologies it produces, but also of all its contextual processes, is an important aspect of his view of technology, which could be considered expansive since it involves not only the research and development processes and the production process, but also the management planning and decision-making processes, and perhaps by implication, the governmental or political processes that also partly determine the direction of technology (Bunge, 1979/2014, chapter 17, section: Technological research and policy).

Ellul, Jacques. (1954). On the aims of a philosophy of technology.

This essay is a collection of prefaces Ellul wrote for his book The Technological Society and together they provide an overview of his stated purpose in writing the book, a definition of technology or “technique,” summaries of his perspectives on a few key concepts, his reply to the apparent reaction to his book, and his general opinion of the ideas of a few other authors approaching the same subject.

I think it worth mentioning that although Ellul states his aim is to describe, analyze, and interpret “technique” (Ellul, 1954/2014, chapter 19, section: Author’s Preface to the French edition of The Technological Society [1954], para. 5), that is, only to know the truth, he finishes his essay with “a call to the sleeper to awake” (Ellul, 1954/2014, chapter 19, section: Author’s Foreword to the revised American edition [1964], para. 2, 17).

My impression is that at least in this essay, the primary aim of Ellul is not only to persuade the audience of the validity of his interpretation of “the technological society,” but also to encourage the audience to assert its freedom as he has defined that freedom, that is, as a conscious act to disrupt the natural, evolutionary course of events – a conscious act Ellul obviously espouses. It is only through asserting agency that humans create their freedom (Ellul, 1954/2014, chapter 19, section: Author’s Foreword to the revised American edition [1964], para. 15-16).

Ellul’s definition of “technique” is an expansive one, not limited to describing technological artifacts such as machines, nor to elaborating processes or methods surrounding the conception, creation, production, and use of those machines. Ellul defines technique as the “totality of methods rationally arrived at and having absolute efficiency (for a given state of development) in every field of human activity.” The aggregate of these rational and efficient methods in all areas of life is to Ellul a “sociological phenomenon” (Ellul, 1954/2014, chapter 19, section: Notes to the reader [1963], para. 2, 5), a “technological civilization,” and a “system of technical necessity” (Ellul, 1954/2014, chapter 19, section: Author’s Foreword to the revised American edition [1964], para. 5, 11).

Much of Ellul’s essay explains that even though his view of society – not only technological society, but also previous iterations of society (e.g. hunter-gatherer society) – is rightly perceived as essentially deterministic (Ellul, 1954/2014, chapter 19, section: Author’s Foreword to the revised American edition [1964], para. 5), he concedes that determinism in the aggregate does not preclude individual freedom or the capability of free individuals or groups to disrupt his projection of the technological society’s probable evolution toward greater degrees of determinism (Ellul, 1954/2014, chapter 19, section: Author’s Foreword to the revised American edition [1964], para. 13-17).

The audience’s reaction to Ellul’s essays seems to have been negative, with a predominant claim that Ellul is a pessimist. Ellul denies his view is pessimistic. Rather, he claims that both his assessment of technological society and his prediction of its natural evolution are objective enough and accurate enough to warrant acceptance — and to catalyze individual assertion of freedom over systemic determinism (Ellul, 1954/2014, chapter 19, section: Author’s Foreword to the revised American edition [1964], para. 2, 6).

Shrader-Frechette, Kristin. (1992). Technology and ethics.

In this essay, Shrader-Frechette argues a multidisciplinary approach is required in order for those who analyze, deliberate, determine, and manage the development, introduction, and adoption of technology to attain optimal results — and she provides an overview of what she considers to be the five primary components of such an approach (Shrader-Frechette, 1992/2003, chapter 17, section: introduction, para. 1-5).

Her focus on the practical aspects of technology is evident in her definition of technology as “knowledge associated with the industrial arts, applied sciences, and various forms of engineering” and in her references to what it is “within one’s power to do” and to “new possibilities for action” (Shrader-Frechette, 1992/2003, chapter 17, section: introduction, para. 1). At the same time, she clearly suggests that those she believes would be best positioned to determine the course of technological development and adoption would possess an integrated perspective, perhaps technocrats, techno-rhetoricians, or some other hybrid professional with knowledge spanning at least philosophy (including ethics), science, and economics (Shrader-Frechette, 1992/2003, chapter 17, section: introduction, para. 1).

I said Shrader-Frechette describes five components of the framework for analysis, deliberation, and action that she explains, but she herself calls them five categories into which “philosophical questions about technology and ethics generally fall” (Shrader-Frechette, 1992/2003, chapter 17, section: introduction, para. 5). The remainder of the essay is divided into five sections aligned with those questions, or topics, and throughout these five sections Shrader-Frechette introduces her view of the main concepts, and the predominant perspectives on those concepts, that commonly arise when deliberating on matters concerning technology and ethics.

Jonas, Hans. (1979). Toward a philosophy of technology.  

Rather than ask whether technology warrants philosophical attention, Jonas asks how it could not, given that technology informs all aspects of human life, including the “material, mental, and physical” (Jonas, 1979/2014, chapter 20, section: introduction, para. 1). Structuring his essay in three parts, Jonas provides an extended definition of technology in the first two parts, while in the third part he addresses technology’s ethical dimension.

The first part of Jonas’ essay discusses what he calls “the formal dynamics of technology as a continuing collective enterprise” (Jonas, 1979/2014, chapter 20, section: introduction, para. 3), which he divides into two eras, the era of “modern technology” and the era that preceded it. My impression is Jonas distinguishes between these eras by describing each era’s characteristics in relation to the concrete variable of duration in time and in relation to some abstract variables such as a) catalysts, b) resolutions, c) ends and means, and d) progress. Because the eras’ characteristics relate differently to these variables, Jonas states the primary difference between earlier technology and later technology is “that modern technology is an enterprise and a process, whereas earlier technology was a possession and a state” (Jonas, 1979/2014, chapter 20, part: I The formal dynamics of technology, section 1: introduction, para. 1).

Jonas describes the time-frame for significant advancement in the pre-modern technological era as one in which the “pace was so slow that only in the time-contraction of historical retrospect” do those advancements seem revolutionary. Regarding the catalysts of technological innovation, Jonas argues that revolutionary change in the pre-modern technological era was essentially unintentional and happened “more by accident than by design.” In the pre-modern era, according to Jonas, the resolution of technological advances (e.g. establishment of a given set of tools or procedures) was a “stable equilibrium of ends and means” representing “an unchallenged optimum of technical competence” or “point of technological saturation.” In general, it seems to me Jonas is describing a state in which at least the established powers accept that the available technological means are optimal for sustaining their ends. When this situation arises, vested interests become conservative and disinclined toward potentially risky technological innovation. Regarding progress, Jonas asserts that in the pre-modern technological era, humans did not presuppose a future of “constant progress” and did not possess a “deliberate method” of formal research which could produce theoretical knowledge and which could have as its ultimate end practical application – this came later with the refinement of scientific disciplines and with the support of these disciplines by societal institutions (Jonas, 1979/2014, chapter 20, part: I The formal dynamics of technology, section: introduction, para. 1-3).

Unlike the time-frame for significant advancement in the pre-modern technological era, the time-frame in the modern technological era is a rapid one, one in which technological innovations “spread quickly through the technological world community, as also do theoretical discoveries in the sciences” (Jonas, 1979/2014, chapter 20, part: I The formal dynamics of technology, section: Traits of modern technology, para. 1). Continuing his contrast of advancement in the modern technological era with advancement in the traditional era, Jonas describes the catalysts in the modern technological era primarily as phenomena of modern technology’s “enterprise and process” (Jonas, 1979/2014, chapter 20, part: I The formal dynamics of technology, section: introduction, para. 1), which he also states expresses its “dialectics or circularity.” In other words, in the modern technological era, Jonas argues the technological-industrial-mercantile system itself “may suggest, create, even impose new ends, never before conceived, simply by offering their feasibility” (Jonas, 1979/2014, chapter 20, part: I The formal dynamics of technology, section: Traits of modern technology, para. 1). Although it seems to me that Jonas conflates, to a certain degree, the abstract variables he uses to characterize the modern technological era, his description strikes me as rational, even if it becomes somewhat difficult to understand exactly what is causing what. Since Jonas states a dialectic (or mutual feedback) system is at work here, it makes sense that what was at one time an end could become a means at a later time and that there may never be a resolution (or ending). And this is exactly what Jonas proposes in describing the modern technological era. It is an era in which – unlike the pre-modern technological era – technology “tends not to approach an equilibrium or saturation point in the process of fitting means to ends” and thus in which there is no resolution (or end) at which vested interests may become conservative and disinclined toward innovation and its concomitant risks.

At times, Jonas’ description seems to me to falter into fallacious circular reasoning, reasoning in which the absence of resolution and the perpetual mutual feedback of the system beget infinite advancement and continually opening perspectives; but he also mentions other forces (or pressures) that he asserts influence the phenomenon of continual technological advancement or progress, progress which he notes is neither a “value term” nor a “neutral term,” but is a term describing “a case of the entropy-defying sort (organic evolution is another)” in which the “internal motion of the system, left to itself and not interfered with” naturally results in a later period being “superior” to a preceding period “in terms of technology itself” (Jonas, 1979/2014, chapter 20, part: I The formal dynamics of technology, section: Traits of modern technology, para. 1-2).

Completing his description of the “traditional” and the “modern” technological eras by delineating how those eras’ traits differ in their relations to time, catalysts, resolutions, ends and means, and progress, Jonas moves on to address directly what he views as the “motive forces” or “compulsive pressures” creating the unique characteristic of modern technology to continually progress. First, Jonas introduces some familiar forces such as the “pressure of competition” for ends such as profit, power, and security, as well as the “somewhat paradoxical” feeling people have, even in materially abundant societies, that they must constantly act in order to get ahead. In addition to these forces some may consider unique to capitalist societies, Jonas notes other forces that would compel innovation even in socialist societies, forces such as population pressure or environmental emergencies that were themselves caused by technological progress (Jonas, 1979/2014, chapter 20, part: I The formal dynamics of technology, section: The nature of restless technology, para. 1-2). Jonas explains other forces and scenarios also, but near the end of this section of his essay, he asserts the common denominator of all these forces is the “premise that there can be indefinite progress because there is always something new and better to find” (Jonas, 1979/2014, chapter 20, part: I The formal dynamics of technology, section: The nature of restless technology, para. 4). According to Jonas, humans’ acceptance of this premise of the “virtual infinitude of advance” began in the modern technological era – founded on relatively valid evidence that “the nature of things and of human cognition” are essentially boundless – and distinguishes this era from the pre-modern one. Jonas states “the phenomenon of an exponentially growing generic innovation is qualitatively different” from innovation in the pre-modern technological era; and one must understand this “ontological-epistemological premise” if one is to grasp the fundamental “agent” of modern “technological dynamics” and thereby grasp also the “corollary conviction” that technology deriving from and suited to this boundless potential of nature and of cognition is itself boundless (Jonas, 1979/2014, chapter 20, part: I The formal dynamics of technology, section: The nature of restless technology, para. 4-6).

In the next section of his essay, Jonas proposes that the main cause of what humans in the modern technological era perceive as an “intrinsic infinity” to progress is the “interaction of science and technology,” which has resulted in a fundamentally different perspective on nature, one that has arisen since the middle of the nineteenth century, when improved concepts and instruments began to enrich humans’ understanding of the natural world. Since Newton, according to Jonas, the natural world had been described by applying relatively simple natural laws to explain a wide variety of phenomena, but those applications offered few novel concepts. With improved instruments, however, humans developed a more penetrating understanding of the nuances of nature and of how more refined observations, measurements, and analyses could reveal greater complexity than had previously been imagined (Jonas, 1979/2014, chapter 20, part: I The formal dynamics of technology, section: Science as a source of restlessness, para. 1). Essentially, Jonas argues the human inclination toward scientific inquiry and knowledge creation (theory) spawns technological innovations (practice) that propel further knowledge creation, which in turn propels further technological innovation, in a mutual feedback system resulting in progress. Still, Jonas acknowledges his explanations and extrapolations are to a certain degree “conjectural” and the future may reveal boundaries to scientific knowledge (Jonas, 1979/2014, chapter 20, part: I The formal dynamics of technology, section: Science as a source of restlessness, para. 1-3).

Jonas concludes the first part of his essay with two main points on what his views of knowledge and technology mean for the philosophy of technology. First, he proclaims the division between theory and practice has been bridged by technology; and although there was a time when the pursuit of knowledge itself was considered a noble end, that time has given way to one in which the pursuit of knowledge alone is deemed an insufficient end. In other words, Jonas states, “nobility has been exchanged for utility” and this “technological syndrome” has engendered a “socializing of the theoretical realm, enlisting it in the service of common need.” In Jonas’ view, this undoubtedly raises philosophical questions regarding an apparent change in human values and in what a concept such as “wisdom” now means (Jonas, 1979/2014, chapter 20, part: I The formal dynamics of technology, section: The philosophical implications, para. 1-2). Second, he proclaims that the grandeur of the technological “enterprise” with its promise of worldly power and infinite advancement has allowed it to “establish itself as the transcendent end” and to inform the collective human mind in the modern technological era with a hunger to become “masters of the world” (Jonas, 1979/2014, chapter 20, part: I The formal dynamics of technology, section: The philosophical implications, para. 3).

With his overview of the two main technological eras and his description of how the characteristics of those two eras differ in their relations to certain variables, Jonas completes part one of his essay, which, as a whole, is clearly concerned with the overall situation, forces, and enterprise comprising the technological realms of the traditional and modern technological eras.

In part two of his essay, Jonas addresses the objects of technology, of which he says there are two: the technological artifacts (objects or means) and the purposes (objectives or ends) to which those artifacts are applied (Jonas, 1979/2014, chapter 20, part: II The material works of technology, section: introduction, para. 1). Jonas’ analysis in part two also distinguishes between some eras in which the objects and objectives of technology are different: the mechanical era, the chemical era, the electrical era, the electronic (or communication-information) era, and the biological era. My impression is that Jonas’ main points in part two of the essay are: 1) Advancement through these eras, in terms of both the means and ends of technology, has progressed along a continuum from the more concrete (or natural) to the more abstract (or artificial); 2) The development of technology as objects (or means) and as objectives (or ends) can be divided into phases or categories in which humans may essentially harness or re-configure existing natural elements (e.g. to build and power mechanical or electrical machines that improve production efficiency or living conditions or save human labor), in which humans may create new materials by altering elemental natural patterns (e.g. to create synthetic chemicals to attack microorganisms), in which humans may use recently created technologies to create technologies that produce data or information which machines (or technologies) themselves may process and apply to some even more abstract purpose (e.g. systems of predictive analytics that may provide suggestions to consumers that result in customer satisfaction, in higher sales for companies, and in higher profits for shareholders), and in which humans may use technology upon humans to re-design either themselves or others into more suitable or desirable forms; and, finally, 3) The introduction of modern technology includes the introduction of previously non-existent and increasingly artificial “technological apparatus” of machines and infrastructure (or matrix, so to speak) in which humans are embedded and from which it would be almost impossible to extricate themselves (Jonas, 1979/2014, chapter 20, part: II The material works of technology, section: New kinds of commodities, para. 1-10, section: The last stage of the revolution?, para. 1-2).

At more than one point in the first two parts of his discussion concerning the dynamics and the content of technology, Jonas has noted that the human introduction and acceptance of technology has raised issues concerning both the means and ends of technology which require philosophical (i.e. metaphysical, epistemological, and ethical) attention. In the final part of this work, Jonas addresses some of these issues and clearly asserts the human role as “prime agent” (Jonas, 1979/2014, chapter 20, part: III Toward an ethics of technology, section: introduction, para. 2) creating and bearing responsibility for their results. These include obvious issues such as potential ecological disaster (e.g. by nuclear warfare or accident) as well as subtle issues such as what should be the collective ideal (or image) of the future human (e.g. immortal human-machine hybrid or mortal Homo sapiens sapiens), or to what degree humans should mobilize to overcome the apparent determinism inherent in the evolution of the technological enterprise. Such a mobilization would be contingent upon humans exercising what Jonas calls their “most troublesome gift: the spontaneity of human acting which confounds all prediction” (Jonas, 1979/2014, chapter 20, part: III Toward an ethics of technology, section: Problematic preconditions of an effective ethics, para. 10), a capacity which – if by spontaneity he means freedom – he has already stated probably has been reduced, at least at the individual level, due to humans’ dependence on the technological matrix they have created and accepted (Jonas, 1979/2014, chapter 20, part: III Toward an ethics of technology, section: Problematic preconditions of an effective ethics, para. 2).

01 – The Singularity is Near: When Humans Transcend Biology (Kurzweil, 2005)

Kurzweil’s mind-blowing and optimistic vision of our future

The sheer volume of concepts and technologies Kurzweil introduces in his book in rapid succession will blow your mind — and his upbeat optimism is refreshing, especially for those of us who sometimes feel a little beaten down by all the dystopian views of technology in the news and in the fictional and expository volumes we consumed in the late-twentieth century. Aside from the optimism and concepts, there is also the argument Kurzweil constructs – backed by mounds of citations and calculations as evidence – that 2045 will be the year of the Singularity (Kurzweil, 2005, Chapter 3, location 2344). Two primary questions, then, are what is the Singularity and how can Kurzweil predict when it will occur?

The Singularity and its implications

Kurzweil hints at his definition of the Singularity in the subtitle of his book by noting the event which will mark its beginning, “When Humans Transcend Biology.” How humans will transcend biology and what will be the general form and nature of this transcendence, or Singularity, is specifically elaborated at a few junctures in the first four chapters. In the prologue, Kurzweil quickly begins sketching the form of the Singularity by envisioning “a future civilization whose intelligence vastly outstrips our own” in which humans have augmented their biological selves with the nonbiological and intelligent technology they have created. In addition, he states that his book is “the story of the destiny of the human-machine civilization, a destiny we have come to refer to as the Singularity” (Kurzweil, 2005, Prologue, locations 276, 313). After skimming only the prologue, one might conclude Kurzweil’s Singularity is nothing more than the glorified robotics embodied in any number of cinematic cyborgs and be done with it. In the first chapter, however, it becomes clear that understanding Kurzweil’s Singularity will not be so simple and filing the concept away as merely one more iteration of the Frankenstein – or humans versus their creations – plot, will not suffice.

In chapter one, Kurzweil constructs his foundation on the familiar concept of evolution and a quick survey of its major epochs, six epochs in Kurzweil’s view. Most people will not be surprised by the first four of Kurzweil’s epochs: 1) physics and chemistry, 2) biology and DNA, 3) brains, and 4) technology; but, for those who did not immediately grasp the significance of the theory he calls “the law of accelerating returns” in the prologue (Kurzweil, 2005, Prologue, location 295), the fifth epoch may surprise with its abrupt break from the glacial pace of the past. In brief, the fifth epoch is the Singularity – or a stage in evolution – and Kurzweil’s book is the predictive analysis that yields the characteristics of that epoch and the possible implications of those characteristics for humanity and the universe.

In the second paragraph of his first chapter, Kurzweil summarizes the nature of this imminent next phase of evolution (or the Singularity) as being so fast and so different that it will totally transform human existence, including our understanding of reality, our value systems, and our biological mortality. Since the changes will be so profound and even imagining them is so difficult for people – given that it is inherently difficult to comprehend a future state when that comprehension is predicated upon cumulative knowledge and experience one does not yet possess – Kurzweil goes so far as to introduce a term to describe those currently living humans who grasp the Singularity and its significance. He calls them “singularitarians” (Kurzweil, 2005, Chapter one, location 339). Here, it seems to me, Kurzweil is entertaining – perhaps ironically – a post-modern, intellectual cult of science and technology, or an exclusive futurist sub-culture of an enlightened intelligentsia.

To continue …

 Countdown to 2045: Kurzweil’s argument and evidence

The primary premise of Kurzweil’s argument is that the creation of technology by humans has fundamentally changed the pace of evolution. Whereas evolutionary epochs before humans’ creation of technology can be measured in many millions of years (geological and biological epochs, for example), evolutionary epochs after humans’ creation of technology can be measured in tens or hundreds of years (economic and technological epochs, for example). Kurzweil expresses this changed pace of evolution by contrasting two perspectives he calls the “intuitive linear view” and the “historical exponential view” (Kurzweil, 2005, Chapter 3, location 390).
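To make the contrast between these two views concrete, here is a minimal illustrative sketch in Python. It is my own illustration, not a model taken from Kurzweil’s book; the baseline value and growth figures are arbitrary placeholders chosen only to show the shape of the two projections.

    # Illustrative sketch only: the baseline and rates below are arbitrary
    # assumptions, not figures taken from Kurzweil (2005).

    def linear_projection(start, annual_increment, years):
        # "Intuitive linear view": the same absolute gain every year.
        return start + annual_increment * years

    def exponential_projection(start, annual_growth_rate, years):
        # "Historical exponential view": the same proportional gain every year.
        return start * (1 + annual_growth_rate) ** years

    for years in (10, 20, 40):
        lin = linear_projection(1.0, annual_increment=1.0, years=years)
        exp = exponential_projection(1.0, annual_growth_rate=0.6, years=years)
        print(f"after {years} years: linear ~{lin:.0f}, exponential ~{exp:,.0f}")

Even with a modest compounding rate, the exponential projection dwarfs the linear one within a few decades, which is the intuition behind Kurzweil’s claim that linear extrapolation badly underestimates the pace of technological change.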

To continue …

 

References

Baehr, Craig. (2013). Developing a sustainable content strategy for a technical communication body of knowledge. Technical Communication, 60, 293-306.

Bijker, W. E., & Pinch, T. J. (2003). The social construction of facts and artifacts. In Robert C. Scharff & Val Dusek (Eds.), Philosophy of technology: The technological condition: An anthology (pp. 221-231). West Sussex, UK: John Wiley & Sons. (Original work published 1987).

Boyd, D., & Crawford, K. (2012). Critical questions for Big Data. Information, Communication & Society, 15, 662–679.

Bunge, Mario. (2014). Philosophical inputs and outputs of technology. In Robert C. Scharff & Val Dusek (Eds.), Philosophy of technology: The technological condition: An anthology (Second ed.) [Amazon Kindle edition, Kindle for PC 2, Windows 8.1 desktop version]. West Sussex, UK: John Wiley & Sons. (Original work published 1979).

Dean, Jared. (2014). Big data, data mining, and machine learning: Value creation for business leaders and practitioners. John Wiley & Sons.

Dean, J., & Ghemawat, S. (2008). MapReduce: Simplified data processing on large clusters. Communications of the ACM, 51(1), 107-113.

Ellul, Jacques. (2014). On the aims of a philosophy of technology. In Robert C. Scharff & Val Dusek (Eds.), Philosophy of technology: The technological condition: An anthology (Second ed.) [Amazon Kindle edition, Kindle for PC 2, Windows 8.1 desktop version]. West Sussex, UK: John Wiley & Sons. (Original work published 1954).

Fan, W., & Bifet, A. (2012). Mining big data: Current status, and forecast to the future. SIGKDD Explorations, 14(2), 1-5.

Gehlen, Arnold. (2003). A philosophical-anthropological perspective on technology. In Robert C. Scharff & Val Dusek (Eds.), Philosophy of technology: The technological condition: An anthology. West Sussex, UK: John Wiley & Sons. (Original work published 1983).

Ghemawat, S., Gobioff, H., & Leung, S. T. (2003, December). The Google file system. ACM SIGOPS Operating Systems Review, 37(5), 29-43. DOI: 10.1145/1165389.945450.

Graham, S. S., Kim, S.-Y., Devasto, M. D., & Keith, W. (2015). Statistical genre analysis: Toward big data methodologies in technical communication. Technical Communication Quarterly, 24(1), 70-104. DOI: 10.1080/10572252.2015.975955

Heidegger, Martin. (2003). The question concerning technology. In Robert C. Scharff & Val Dusek (Eds.), Philosophy of technology: The technological condition: An anthology. West Sussex, UK: John Wiley & Sons. (Original work published 1954).

Jonas, Hans. (2014). Toward a philosophy of technology. In Robert C. Scharff & Val Dusek (Eds.), Philosophy of technology: The technological condition: An anthology (Second ed.) [Amazon Kindle edition, Kindle for PC 2, Windows 8.1 desktop version]. West Sussex, UK: John Wiley & Sons. (Original work published 1979).

Kline, Stephen J. (2003). What is technology? In Robert C. Scharff & Val Dusek (Eds.), Philosophy of technology: The technological condition: An anthology. West Sussex, UK: John Wiley & Sons. (Original work published 1985).

Kurzweil, Ray. (2005). The singularity is near: When humans transcend biology [Amazon Kindle edition, Kindle for PC 2, Windows 8.1 desktop version]. New York, New York: Penguin Books.

Mahrt, M., & Scharkow, M. (2013). The value of big data in digital media research. Journal of Broadcasting & Electronic Media, 57(1), 20-33.

McNely, B., Spinuzzi, C., & Teston, C. (2015). Contemporary research methodologies in technical communication. Technical Communication Quarterly, 24, 1-13.

Mumford, Lewis. (2003). Tool-users vs. homo sapiens and the megamachine. In Robert C. Scharff & Val Dusek (Eds.), Philosophy of technology: The technological condition: An anthology. West Sussex, UK: John Wiley & Sons. (Original work published 1966).

Shrader-Frechette, Kristin. (2003). Technology and ethics. In Robert C. Scharff & Val Dusek (Eds.), Philosophy of technology: The technological condition: An anthology. West Sussex, UK: John Wiley & Sons. (Original work published 1992).

Winner, Langdon. (2003). Social constructivism: Opening the black box and finding it empty. In Robert C. Scharff & Val Dusek (Eds.), Philosophy of technology: The technological condition: An anthology. West Sussex, UK: John Wiley & Sons. (Original work published 1993).

Wolfe, Joanna. (2015). Teaching students to focus on the data in data visualization. Journal of Business and Technical Communication, 29, 344-359.