AB08 – McNely, B., Spinuzzi, C., & Teston, C. (2015). Contemporary research methodologies in technical communication.

In Technical Communication Quarterly’s most recent special issue on research methods and methodologies, the issue’s guest editors assert that “methodological approaches” are important “markers for disciplinary identity,” thereby agreeing with previous guest editor Goubil-Gambrell, who in the 1998 special issue “argued that ‘defining research methods is a part of disciplinary development’” (McNely, Spinuzzi, & Teston, 2015, p. 2). Furthermore, the authors regard the 1998 special issue as a “landmark issue” whose ideas “informed a generation of technical communication scholars as they defined their own objects of study, enacted their research ethics, and thought through their metrics” (McNely et al., 2015, p. 9).

It is in this tradition that the authors of the 2015 special issue aim both to review the “key methodological developments” and associated theories forming the technical communication “field’s current research identity” and to preview and “map future methodological approaches” and relevant theories (McNely et al., 2015, p. 2). The editors argue that the approaches and theories discussed in this special issue “not only respond to” what they view as substantial changes in the field’s “tools, technologies, spaces, and practices” over the past two decades, but also “innovate” by describing and modeling how these changes inform technical communicators’ emerging research methodologies and theories as those methodologies and theories relate to the “field’s objects of study, research ethics, and metrics” (i.e., “methodo-communicative issues”) (McNely et al., 2015, pp. 1-2, 6-7).

Reviewing what they see as the fundamental theories and research methodologies of the field, the authors explore how a broad set of factors (e.g., assumptions, values, agency, tools, technology, and contexts) manifests in work produced along three vectors of theory and practice: “sociocultural theories of writing and communication,” “associative theories and methodologies,” and “the new material turn” (McNely et al., 2015, p. 2). The sociocultural vector develops from theoretical traditions in “social psychology, symbolic interactionism,” “learning theory,” and “activity theory,” among others. It essentially involves “purposeful human actors,” “material surroundings,” “heterogeneous artifacts and tools,” and even “cognitive constructs” combining in “concrete interactions,” that is, situations, arising from synchronic and diachronic contextual variables scholars may identify, describe, measure, and use to explain phenomena and theorize about them (McNely et al., 2015, pp. 2-4). The associative vector develops from theoretical traditions in “articulation theory,” “rhizomatics,” “distributed cognition,” and “actor-network theory (ANT).” It essentially involves “symmetry—a methodological stance that ascribes agency to a network of human and nonhuman actors rather than to specific human actors” and therefore leads researchers to “focus on associations among nodes” as the objects at the methodological nexus (McNely et al., 2015, p. 4). The new material vector develops from theoretical traditions in “science and technology studies, political science, rhetoric, and philosophy,” with the overlapping traditions from political science and philosophy often “collected under the umbrella known as ‘object-oriented ontology.’” It essentially involves a “radically symmetrical perspective on relationships between humans and nonhumans—between people and things, whether those things are animal, vegetable, or mineral” and asks how these human and nonhuman entities integrate into “collectives” or “assemblages” whose “agency” may be viewed as “distributed and interdependent,” a phenomenon the authors cite Latour as labeling “interagentivity” (McNely et al., 2015, p. 5).

Previewing the articles in this special issue, the editors acknowledge that technical communication methodologies have been “influenced by new materialisms and associative theories” and argue these methodologies “broaden the scope of social and rhetorical aspects” of the field and “encourage us to consider tools, technologies, and environs as potentially interagentive elements of practice” that enrich it (McNely et al., 2015, p. 6). At the same time, the editors note how approaches such as “action research” and “participatory design” are advancing “traditional qualitative approaches” (McNely et al., 2015, p. 6). In addition, the authors state that “given the increasing importance of so-called ‘big data’ in a variety of knowledge work fields, mixed methods and statistical approaches to technical communication are likely to become more prominent” (McNely et al., 2015, p. 6). Amidst these developments, the editors state their view that adopting “innovative methods” to “explore increasingly large data sets” while “remaining grounded in the values and aims that have guided technical communication methodologies over the previous three decades” may be one of the field’s greatest challenges (McNely et al., 2015, p. 6).

In the final section of their paper, the authors explicitly return to what they seem to view as primary disciplinary characteristics (i.e., markers, identifiers), which they call “methodo-communicative issues,” and use those characteristics to compare the articles in the 1998 special issue with those in the 2015 special issue and to identify what they see as new or significant in the 2015 articles. The “methodo-communicative issues,” or disciplinary characteristics, they use are “objects of study, research ethics, and metrics” (McNely et al., 2015, pp. 6-7). Regarding objects of study, the authors note how in the 1998 special issue Longo focuses on the “contextual nature of technical communication,” while in the 2015 special issue Read and Swarts focus on “networks and knowledge work” (McNely et al., 2015, p. 7). Regarding ethics, the authors cite Blyler in the 1998 special issue as applying “critical” methods rather than “descriptive/explanatory methods,” while in the 2015 special issue Walton, Zraly, and Mugengana apply “visual methods” to create “ethically sound cross-cultural, community-based research” (McNely et al., 2015, p. 7). Regarding metrics or “measurement,” the authors cite Charney in the 1998 special issue as contrasting the affordances of “empiricism” with those of “romanticism,” while in the 2015 special issue Graham, Kim, DeVasto, and Keith explore the affordances of “statistical genre analysis of larger data sets” (McNely et al., 2015, p. 7). In discussing what is new or significant in the 2015 articles, the editors highlight how particular articles address particular methodo-communicative issues. Regarding metrics or “measurement,” for example, they highlight how Graham, Kim, DeVasto, and Keith apply Statistical Genre Analysis (SGA), a hybrid research method combining rhetorical analysis with statistical analysis, to answer research questions such as which “specific genre features can be correlated with specific outcomes” across an “entire data set” rather than across selected exemplars (McNely et al., 2015, p. 8).

In summary, the guest editors of this 2015 special issue on contemporary research methodologies both review the theoretical and methodological traditions of technical communication and preview the probable future direction of the field as portrayed in the articles included in this special issue.

AB06 – Mahrt, M. & Scharkow, M. (2013). The value of big data in digital media research.

In their effort to promote “theory-driven” research strategies and to caution against the naïve embrace of “data-driven” strategies, an embrace that seems to have culminated recently in a veritable “‘data rush’ promising new insights” into almost anything, the authors of this paper “review” a “diverse selection of literature on” digital media research methodologies and the Big Data phenomenon. They provide “an overview of ongoing debates” in this realm while arguing ultimately for a pragmatic approach based on “established principles of empirical research” and “the importance of methodological rigor and careful research design” (Mahrt & Scharkow, 2013, pp. 20-21, 26, 30).

Mahrt and Scharkow acknowledge that the advent of the Internet and other technologies has enticed “social scientists from various fields” to utilize “the massive amounts of publicly available data about Internet users,” and that some scholars have enjoyed success in “giving insight into previously inaccessible subject matters” (Mahrt & Scharkow, 2013, p. 21). Still, the authors note, there are “inherent disadvantages” to sourcing data from the Internet in general and from particular sites such as social media sites or gaming platforms (Mahrt & Scharkow, 2013, pp. 21, 25). One of the most commonly cited problems is that “the problem of random sampling on which all statistical inference is based, remains largely unsolved” (Mahrt & Scharkow, 2013, p. 25). The data in Big Data, after all, are essentially “huge” amounts of data “’naturally’ created by Internet users,” “not indexed in any meaningful way,” and lacking any “comprehensive overview” (Mahrt & Scharkow, 2013, p. 21).

While Mahrt and Scharkow mention the positive attitude of “commercial researchers” toward a “golden future” for big data, they also mention the cautious attitude of academic researchers and explain how the “term Big Data has a relative meaning” (Mahrt & Scharkow, 2013, pp. 22, 25), contingent perhaps in part on these different attitudes. And although Mahrt and Scharkow imply most professionals would agree the big data concept “denotes bigger and bigger data sets over time,” they also explain how “in computer science” researchers emphasize that the concept “refers to data sets that are too big” to manage with “regular storage and processing infrastructures” (Mahrt & Scharkow, 2013, p. 22). This emphasis on data volume and data management infrastructure, familiar to computer scientists, may seem to some researchers in “the social sciences and humanities as well as applied fields in business” too narrowly focused on computational or quantitative methods, and this focus may seem exclusive and controversial in additional ways (Mahrt & Scharkow, 2013, pp. 22-23). Some of these additional controversies revolve around whether a “data analysis divide” may be developing that favors those with “the necessary analytical training and tools” over those without them (Mahrt & Scharkow, 2013, pp. 22-23). Others concern whether an overemphasis on “data analysis” may have contributed to the “assumption that advanced analytical techniques make theories obsolete in the research process,” as if the numbers, the “observed data,” no longer require human interpretation to clarify meaning or to identify contextual or other confounding factors that may undermine the quality of the research and raise “concerns about the validity and generalizability of the results” (Mahrt & Scharkow, 2013, pp. 23-25).

Mahrt and Scharkow grant that advances in “computer-mediated communication,” “social media,” and other types of “digital media” may be “fueling methodological innovation” such as the analysis of large-scale data sets, or so-called Big Data, and that the opportunity to participate is alluring to “social scientists” in many fields. Even so, they conclude their paper by citing Herring and others in urging researchers to commit to “methodological training,” “to learn to ask meaningful questions,” and to continually “assess” whether collecting and analyzing massive amounts of data is truly valuable in any specific research endeavor (Mahrt & Scharkow, 2013, pp. 20, 29-30). As the authors concede, the advantages of automated, big data research are numerous: for instance, “convenience” and “efficiency,” the elimination of research obstacles such as “artificial settings” and “observation effects,” and the “visualization” of massive “patterns in human behavior” previously impossible to discover and render (Mahrt & Scharkow, 2013, pp. 24-25). With those advantages understood and granted, the authors’ argument seems a reasonable reminder of the “established principles of empirical research” and of the occasional need to reaffirm the value of that tradition (Mahrt & Scharkow, 2013, p. 21).

AB02 – Boyd, D., & Crawford, K. (2012). Critical questions for Big Data.

As “social scientists and media studies scholars,” Boyd and Crawford (2012) consider it their responsibility to encourage and focus the public discussion of “Big Data,” asserting six claims that, they imply, help define the many important potential issues the “era of Big Data” has already presented to humanity and to the diverse and competing interests that comprise it (Boyd & Crawford, 2012, pp. 662-663). Before asserting and explaining their claims, however, the authors define Big Data “as a cultural, technological, and scholarly phenomenon” that “is less about data that is big than it is about a capacity to search, aggregate, and cross-reference large data sets,” a phenomenon with three primary components (fields or forces) interacting within it: 1) technology, 2) analysis, and 3) mythology (Boyd & Crawford, 2012, p. 663). Precisely because Big Data, like some “other socio-technical phenomenon,” elicits both “utopian and dystopian rhetoric” and visions of the future of humanity, Boyd and Crawford think it “necessary to ask critical questions” about “what all this data means, who gets access to what data, how data analysis is deployed, and to what ends” (Boyd & Crawford, 2012, p. 664).

The authors’ first two claims concern essentially epistemological issues regarding the nature of knowledge and truth (Boyd & Crawford, 2012, pp. 665-667). In explaining their first claim, “1. Big Data changes the definition of knowledge,” the authors draw parallels between Big Data as a “system of knowledge” and “’Fordism’” as a “manufacturing system of mass production.” According to the authors, both systems influence people’s “understanding” in certain ways: Fordism “produced a new understanding of labor, the human relationship to work, and society at large,” and Big Data “is already changing the objects of knowledge” while suggesting new concepts that may “inform how we understand human networks and community” (Boyd & Crawford, 2012, p. 665). In addition, the authors cite Burkholder, Latour, and others in describing how Big Data refers not only to the quantity of data but also to the “tools and procedures” that enable people to process and analyze “large data sets,” and to the general “computational turn in thought and research” that accompanies these new instruments and methods (Boyd & Crawford, 2012, p. 665). The authors further state that “Big Data reframes key questions about the constitution of knowledge, the processes of research, how we should engage with information, and the nature and categorization of reality” (Boyd & Crawford, 2012, p. 665). Finally, as a counterpoint to the many potential benefits and positive aspects of Big Data they have emphasized thus far, the authors cite Anderson as one who has revealed the at times prejudicial and arrogant beliefs and attitudes of some quantitative proponents who summarily dismiss as inferior all qualitative or humanistic approaches to gathering evidence and formulating theories (Boyd & Crawford, 2012, pp. 665-666).

In explaining their second claim, “2. Claims to objectivity and accuracy are misleading,” the authors continue considering some of the biases and misconceptions inherent in epistemologies that privilege “quantitative science and objective method” as the paths to knowledge and absolute truth. According to the authors, Big Data “is still subjective,” and even when research subjects or variables are quantified, those quantifications do “not necessarily have a closer claim on objective truth.” In the authors’ view, the preoccupation of social science and the “humanistic disciplines” with attaining “the status of quantitative science and objective method” is at least to some extent misdirected (Boyd & Crawford, 2012, pp. 666-667), even if understandable given the apparent value society assigns to quantitative evidence. Citing Gitelman and Bollier, among others, the authors hold that “all researchers are interpreters of data,” not only when they draw conclusions from their research findings but also when they design their research and decide what will, and what will not, be measured. Overall, the authors argue against too eagerly embracing the positivistic perspective on knowledge and truth, and in favor of critically examining research philosophies and methods and considering the limitations inherent within them (Boyd & Crawford, 2012, pp. 667-668).

The authors’ third and fourth claims address research quality. Their third claim, “3. Big data are not always better data,” emphasizes the importance of quality control in research, highlighting that “understanding sample, for example, is more important than ever.” Because “the public discourse around” massive and easily collected data streams such as Twitter “tends to focus on the raw number of tweets available,” and because “raw numbers” would not be a “representative sample” of most populations about which researchers seek to make claims, public perceptions and opinion could be skewed either by mainstream media’s misleading reporting about valid research or by unprofessional researchers’ erroneous claims based upon invalid research methods and evidence (Boyd & Crawford, 2012, pp. 668-669). In addition to these issues of research design, the authors highlight how further “methodological challenges” can arise “when researchers combine multiple large data sets,” challenges involving “not only the limits of the data set, but also the limits of which questions they can ask of a data set and what interpretations are appropriate” (Boyd & Crawford, 2012, pp. 669-670).

The authors’ fourth claim continues addressing research quality, but at the broader level of context. The claim, “4. Taken out of context, Big Data loses its meaning,” emphasizes the importance of considering how the research context affects research methods, findings, and conclusions. The authors imply that attitudes toward mathematical modeling and data collection methods may lead researchers to select data more for their suitability to large-scale, computational, automated, quantitative collection and analysis than for their suitability to discovering patterns or answering research questions. As an example, the authors consider the evolution of the concept of human networks in sociology, focusing on different ways of measuring “tie strength,” a concept understood by many sociologists to indicate “the importance of individual relationships” (Boyd & Crawford, 2012, p. 670). Although recently developed concepts such as “articulated networks” and “behavioral networks” may appear at times to indicate tie strength equivalent to that of more traditional concepts such as “kinship networks,” the authors explain how the tie strength of kinship networks rests on more in-depth, context-sensitive data collection such as “surveys, interviews,” and even “observation,” while the tie strength of articulated or behavioral networks may rely on nothing more than interaction frequency analysis; as the authors note, “measuring tie strength through frequency or public articulation is a common mistake” (Boyd & Crawford, 2012, p. 671). In general, the authors caution against considering Big Data a panacea that will objectively and definitively answer all research questions. In their view, “the size of the data should fit the research question being asked; in some cases, small is best” (Boyd & Crawford, 2012, p. 670).

The authors’ final two claims address ethical issues related to Big Data, some of which seem to have arisen in parallel with its ascent. In their fifth claim, “5. Just because it is accessible does not make it ethical,” the authors focus primarily on whether “social media users” implicitly give permission to anyone to use publicly available data related to them in all contexts, even contexts the users may not have imagined, such as research studies or the collectors’ data and information products and services (Boyd & Crawford, 2012, pp. 672-673). Citing Ess and others, the authors emphasize that researchers and scholars have “accountability” for their actions, including those related to “the serious issues involved in the ethics of online data collections and analysis.” The authors encourage researchers and scholars to consider privacy issues and to proactively assess whether they should assume users have provided “informed consent” for researchers to collect and analyze users’ publicly available data simply because the data is publicly available (Boyd & Crawford, 2012, pp. 672-673). In their sixth claim, “6. Limited access to Big Data creates new digital divides,” the authors note that although there is a prevalent perception that Big Data “offers easy access to massive amounts of data,” in reality access to Big Data, and the ability to manage and analyze it, require resources unavailable to much of the population, and this “creates a new kind of digital divide: the Big Data rich and the Big Data poor” (Boyd & Crawford, 2012, pp. 673-674). “Whenever inequalities are explicitly written into the system,” the authors assert further, “they produce class-based structures” (Boyd & Crawford, 2012, p. 675).

In their article overall, Boyd and Crawford maintain an optimistic tone while enumerating the myriad issues emanating from the Big Data phenomenon. In concluding, the authors encourage scholars, researchers, and society to “start questioning the underlying assumptions, values, and biases of this new wave of research” (Boyd & Crawford, 2012, p. 675).

AB01 – Graham, S. S., Kim, S.-Y., DeVasto, D. M., & Keith, W. (2015). Statistical genre analysis: Toward big data methodologies in technical communication.

A team of researchers sets out to bring the power of “big data” into the toolkit of technical communication scholars by piloting a research method they “dub statistical genre analysis (SGA),” describing and explaining the method in an article published in the journal Technical Communication Quarterly (Graham, Kim, DeVasto, & Keith, 2015, pp. 70-71).

Acknowledging the value academic markets have begun assigning to findings, conclusions, and theories founded upon rigorous analysis of massive data sets, the team deconstructs the amorphous “big data” phenomenon. They demonstrate how their SGA methodology can quantitatively describe and visually represent the generic content (e.g., types of evidence and modes of reasoning) of rhetorical situations (e.g., committee meetings), and how it can discover input variables (e.g., conflicts of interest) that have statistically significant effects upon the output variables (e.g., recommendations) of important policy-influencing entities such as the Food and Drug Administration’s (FDA) Oncologic Drugs Advisory Committee (ODAC) (Graham et al., 2015, pp. 86-89).

The authors believe there is much to gain from integrating the “humanistic and qualitative study of discourse with statistical methods.” Although they respect the “craft character of rhetorical inquiry” (Graham et al., 2015, pp. 71-72) and retain “the inductive and qualitative nature of rhetorical analysis as a necessary” initial step in their hybrid method (Graham et al., 2015, p. 77), they conclude their mixed-method SGA approach can increase the “range and power” (Graham et al., 2015, p. 92) of “traditional, inductive approaches to genre analysis” (Graham et al., 2015, p. 86), offering the advantages “of statistical insights” while avoiding the statistical sterility that can emerge when the qualitative, humanist element is absent (Graham et al., 2015, p. 91).

In the conclusion of their article, the researchers identify two main benefits of their hybrid SGA method. The first is that communication genres “can be defined with more precision”: SGA documents the actual frequency of generic conventions within a large sample of the corpus, whereas traditional rhetorical methods tend to document experts’ opinions of the “typical” frequency of generic conventions within a limited sample of “exemplars” selected from a small portion of the corpus. Moreover, the authors argue, analysis of a massive number of texts may reveal generic conventions that never appear in the limited sample of exemplars studied under the traditional approach of “critical analysis and close reading” alone. The second benefit is that communication scholars can move beyond critical opinion and claim statistically significant correlations between “situational inputs and outputs” and “genre characteristics that have been empirically established” (Graham et al., 2015, p. 92).

Befitting the subject of their study, the authors devote a considerable portion of their article to describing their research methodology. In the third section, titled “Statistical Genre Analysis,” they begin by noting that they conducted the “current pilot study” on a “relatively small subset” of the available data in order to “demonstrate the potential of SGA.” They then outline their research questions, the answers to two of which indeed seem to attest to the strength SGA can contribute to the evidence and the inferences communication scholars use in their own arguments about the communications they study. As in the introduction, the authors note here the intellectual lineage of SGA in various disciplines, including “rhetorical studies, linguistics,” “health communication,” psychology, and “applied statistics” (Graham et al., 2015, pp. 71, 76).

As explained earlier, the communication artifacts studied by these researchers are drawn from the FDA’s ODAC meetings, specifically the textual transcriptions of presentations (essentially opening statements) given by the sponsors (pharmaceutical manufacturing companies) of the drugs under review during meetings that usually last one or two days (Graham et al., 2015, pp. 75-76). Not only in technical communication and rhetoric, but also in Science and Technology Studies (STS) and in Science, Technology, Engineering, and Math (STEM) public policy, managing conflicts of interest among ODAC participants and encouraging inclusion of all relevant stakeholders in ODAC meetings are prominent issues (Graham et al., 2015, p. 72). At the conclusion of ODAC meetings, voting participants vote either for or against the issue under consideration, generally “applications to market new drugs, new indications for already approved drugs, and appropriate research/study endpoints” (Graham et al., 2015, pp. 74-76).

It is within this context that the authors attempt to answer the following two research questions, among others, regarding all ODAC meetings and sponsor presentations given at those meetings between 2009 and 2012: “1. How does the distribution of stakeholders affect the distribution of votes?” and “3. How does the distribution of evidence and forms of reasoning in sponsor presentations affect the distribution of votes?” (Graham et al., 2015, pp. 75-76). Both research questions ask whether certain input variables affect certain output variables, and in this case the output variables are votes for or against an action with serious consequences for people and organizations. Put another way, this is a political (or deliberative rhetoric) situation, and the ability to predict with a high degree of certainty which inputs produce which outputs could be quite valuable, given that those inputs and outputs could determine, among other things, substantial budget allocations, consulting fees, and pharmaceutical sales: essentially, success or failure.

Toward the aim of asking and answering research questions with such potentially high stakes, the authors applied their mixed-methods SGA approach, which they explain comprised four phases of research conducted over approximately six months to a year by at least four researchers. The authors explain SGA “requires first an extensive data preparation phase,” after which the researchers “subjected” the data “to various statistical tests to directly address the research questions.” They describe the four phases of their SGA method as “(a) coding schema development, (b) directed content analysis, (c) meeting data and participant demographics extraction, and (d) statistical analyses.” Before moving into a deeper discussion of their own coding schema development and the other phases of their SGA approach, the authors cite numerous influences from scholars in “behavioral research,” “multivariate statistics,” “corpus linguistics,” and “quantitative work in English for specific purposes,” noting that the specific statistical “techniques” they apply “can be found in canonical works of multivariate statistics such as Keppel’s (1991) Design and Analysis and Johnson and Wichern’s (2007) Applied Multivariate Statistical Analysis” (Graham et al., 2015, pp. 75-77). One important distinction the authors draw between their method and these others is granularity: the other methods operate at the more granular “word and sentence level,” which facilitates “coding schema amenable to automated content analysis,” whereas the authors operate at the less granular paragraph level, which requires human intervention to formulate coding schema reflecting nuances discernible only at higher cognitive levels, for example, whether particular evidentiary artifacts (transcripts) are based on randomized controlled trials (RCTs) addressing issues of “efficacy” or RCTs addressing issues of “safety and treatment-related hazards” (Graham et al., 2015, pp. 77-78). Choosing the longer, more complex paragraph as their unit of analysis requires the research method to depend upon “the inductive and qualitative nature of rhetorical analysis as a necessary precursor to both qualitative coding and statistical testing” (Graham et al., 2015, p. 77).
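As a rough illustration of what phases (b) through (d) hand off to one another, here is a minimal sketch in Python; the meeting IDs, code labels, and table layout are invented for illustration, not drawn from the article. It shows paragraph-level codes from a directed content analysis being aggregated into per-meeting frequency counts, the kind of table later statistical tests would run against.

```python
import pandas as pd

# Hypothetical paragraph-level codes from directed content analysis (phase b).
# In the study, human coders assigned categories such as RCT-efficacy vs.
# RCT-safety to each paragraph; these rows are fabricated examples.
coded = pd.DataFrame([
    {"meeting_id": "m01", "code": "RCT-efficacy"},
    {"meeting_id": "m01", "code": "RCT-safety"},
    {"meeting_id": "m01", "code": "RCT-efficacy"},
    {"meeting_id": "m02", "code": "RCT-safety"},
    {"meeting_id": "m02", "code": "RCT-efficacy"},
])

# Aggregate paragraph codes into per-meeting frequencies: one row per
# meeting, one column per genre feature, ready for statistical analysis.
features = coded.pivot_table(index="meeting_id", columns="code",
                             aggfunc="size", fill_value=0)
print(features)
```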

In the final section of their explanation of SGA, the authors summarize their statistical methods, including both “descriptive statistics” and “inferential statistics,” and explain how they applied these two types of methods, respectively, to “provide a quantitative representation of the data set” (e.g., “mean, median, and standard deviation”) and to “estimate the relationship between variables” (e.g., “statistically significant impacts”) (Graham et al., 2015, pp. 81-83).
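A minimal sketch of that descriptive/inferential distinction, using fabricated numbers and invented column names rather than the study’s data:

```python
import pandas as pd

# Fabricated per-meeting table; the column names are hypothetical.
meetings = pd.DataFrame({
    "efficacy_paragraphs": [12, 3, 8, 15, 6],
    "approval_share": [0.4, 0.9, 0.6, 0.3, 0.7],
})

# Descriptive statistics: a quantitative representation of the data set.
print(meetings.agg(["mean", "median", "std"]))

# A first inferential gesture: estimate the relationship between variables.
# (The authors' actual tests, such as the multiple regression discussed
# below, are more involved than a bare correlation.)
print(meetings["efficacy_paragraphs"].corr(meetings["approval_share"]))
```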

Returning to the point of the authors’ research, namely demonstrating how SGA empowers scholars to provide confident answers to research questions and therefore to create and assert knowledge clearly valued by societal interests, their SGA enables them to state that their “multiple regression analysis” found “RCT-efficacy data and conflict of interest remained as the only significant predictors of approval rates. Oddly, the use of efficacy data seems to lower the chance of approval, whereas a greater presence of conflict of interest increases the probability of approval” (Graham et al., 2015, p. 89). Read one way, this finding encourages entities aiming to increase the probability of approval to allocate resources toward increasing the presence of conflicts of interest, since that is the only input variable demonstrated to contribute to achieving their aim. Read another way, this finding provides evidence for entities who claim conflicts of interest illegally (or at least undesirably) affect ODAC participants’ votes, bolstering their arguments that “stricter controls on conflicts of interests should be deployed” (Graham et al., 2015, p. 92).
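As a concrete sketch of this final inferential step, the fragment below fits a multiple regression on fabricated data deliberately wired to echo the reported directions of effect; the variable names, sample size, and coefficients are hypothetical, not the study’s data or code.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 40  # hypothetical number of meetings

efficacy = rng.integers(0, 20, n).astype(float)   # RCT-efficacy paragraph counts
conflicts = rng.integers(0, 6, n).astype(float)   # participants with conflicts

# Fabricated outcome echoing the reported directions: more efficacy evidence
# lowers, and more conflict of interest raises, the share of approval votes.
approval = 0.5 - 0.01 * efficacy + 0.05 * conflicts + rng.normal(0.0, 0.1, n)

# Regress approval share on both predictors plus an intercept.
X = sm.add_constant(np.column_stack([efficacy, conflicts]))
fit = sm.OLS(approval, X).fit()
print(fit.params)   # coefficient signs mirror the direction of each effect
print(fit.pvalues)  # p-values flag which predictors are significant
```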