The Science Studies Colloquium Series takes place every Monday of the quarter from 4:00p-5:30p in Room 3027, Humanities & Social Sciences Building, Muir College campus, unless noted otherwise.
A reception for the colloquium speaker takes place before the talk from 3:30p-4:00p in Room 3005, Humanities & Social Sciences Building.
SSP faculty and students only
By the late nineteenth century, the deep ocean floor had become “Darwin's laboratory,” a place to test the “direct action of external conditions on organisms.” According to dominant Victorian marine biology, the deep sea was an eternal, unchanging biogeographical space. There, and only there, could naturalists investigate how organisms evolved without the influence of changing environmental factors. Consequently, marine invertebrate specimens from the ocean floor played a large role in the formation of evolutionary theory throughout the nineteenth century. This presentation explores the 1880s dispute between Charles Darwin and Sir Wyville Thomson regarding natural selection as the culmination of a half-century of conflict over deep sea invertebrates and biological evidence. Marine invertebrates, according to some naturalists, were uniquely suited to the philosophical study of organismal complexity. Other naturalists focused on the much-anticipated discovery of Darwin's “living fossil” dredged from the sea floor as proof of evolutionary divergence. Sir Wyville Thomson, on the other hand, was certain that his deep sea crinoids offered no proof of evolution by natural selection, thereby offering a serious challenge to Darwin's theory. Ultimately, the practices of three international scientific communities, Edinburgh, Cambridge, and the US Coast Survey, converged over deep sea creatures and those marine organisms changed the way we study life's history.
An array of forces at many levels conspire to favor identification, and dissemination, of drug benefits vs. harms. Practical aspects of randomized trial conduct, coupled with human subjects protection considerations, foster trial designs that (through selection processes) advantage detection of benefits over harms – producing disparities between the truth and evidence. Then (as findings show), forces, many fostered by industry conflicts, modify the relation between the evidence that is, and the evidence that is seen. Factors range from submission and representation bias (including ghostwriting and plural publication), through reviewer and journal influence, to media representations, and medical education. Disparities may propagate to medical practice through guideline generation and physician “performance pay.” The cumulative effect of these forces is that what we “know” may depart from what is “true” (in sign as well as magnitude); and what we do may depart from what is right. Recommendations are put forth to lessen the impact of these forces – with the goal of realigning practice with patients’ interests.
Data mining, or, more formally, Knowledge Discovery in Databases (KDD), is the activity of creating non-trivial knowledge suitable for action from databases of vast size and dimensionality. From the mid-1960s to the late 1990s, data mining moved from a disparaged, dubious sort of statistical work—“fishing” or “dredging”—to become what its practitioners proclaim to be an utterly transformative technology. According to KDD advocates, traditional scientific approaches to data—and the traditional competencies of scientists—simply cannot keep up with the volume of data and multidimensionality possible thanks to computers. Something else is needed, something less pure—because it deals with vast impurities of dynamic data, nearly always collected for a particular business, governmental, or scientific research goal. Establishing the legitimacy of KDD meant demonstrating that lack of luxury. Data miners tell a technologically determinist story of the necessary shift from the niceties of statistical rigor to the capaciousness and utility of data mining. I look at how stories of technologically determined emergence were crucial to the legitimization of data mining, authorizing the loosening—and often abandonment—of the disciplinary and epistemological values of its predecessor disciplines: statistics, database management, and machine learning.
Many have observed the decline of scientific authority over the last three decades, for reasons ranging from the toxic legacies of Cold War science (Beck 1992), to the current commercialization and privatization of knowledge production (Lave 2012, Mirowski 2011), to the success of social constructivist critique (Latour 2004). Whatever the cause(s), the relationship between academia, economic elites, and the military is shifting once again, and a new regime of knowledge production is emerging (Pestre 2003). What shape will this new regime take? How will the construction of expertise and scientific authority change, and with what political implications?
This talk argues that the emergent science regime will deeply erode academia’s increasingly tenuous monopoly on scientific expertise. The legions of volunteers on whom fields such as astronomy, cartography, and ornithology increasingly depend will expand wildly as research funding shrinks and academics become reaccustomed to an integration of “amateurs” and “professionals” last seen in the mid-19th century (Reingold 1976, Secord 1994). The resurgence of knowledge as a central target of capital accumulation (Canaan and Schumar 2008, Tyfield 2010) will deepen the current push to evaluate knowledge claims on their commercial merits, regardless of source, producing a perverse democratization of knowledge production that elevates Merck and Monsanto above conflict of interest concerns, but also allows the Environmental Working Group and bucket brigades in fenceline communities to transcend accusations of amateurism and take on the status of experts. The challenge will be to push this coming horizontality towards intellectually and politically progressive ends.
Questions about the nature and location of comets had not been definitively decided by 1618, a year marked by a succession of three comets visible to the naked eye, culminating in the great comet of 1618. These events resulted in the publication of multiple treatises about comets by numerous observers, not the least being those of Libertus Fromondus, of the Jesuit Horatio Grassi, and of Galileo, responding to Grassi in defense of his own position, as elucidated by his disciple, Mario Guiducci. This talk discusses Fromondus’ critique of the Aristotelian account of comets, which caused him subsequently to reject Galileo’s explanation as well. Fromondus, professor of philosophy, then theology, at Louvain, made significant modifications to his Aristotelianism to accommodate astronomical novelties such as supra-lunar comets. While he could be thought of as a conservative thinker—in this case, he should still be considered an Aristotelian—he made changes that went well beyond what could be described as the articulation of the Aristotelian paradigm or as part of the sequence of theories in the Aristotelian “research programme.” And while we are used to thinking that the great Galileo was right about astronomical novelties such as comets, and that the Aristotelians were wrong, in this case Fromondus had the better of Galileo (or right and wrong are the wrong ways to think about such issues).
The development of museological science in the nineteenth century radically restructured the way physicians understood, visualized and discussed medicine. By arranging medical specimens museologically, physicians were better able to understand the mechanism of disease. At the same time, this empirically-based medicine transformed the patient into an object of study, creating for American physicians a tension between the dehumanizing practices of scientific medicine and its inherently humanistic spirit. This tension can be seen clearly in the development of the U.S. Army Medical Museum during the Civil War, as well as in its museological display of specimens and the historical exhibits after the war. Designed as both a national museum and institution of medical education, the Army Medical Museum exhibited not only scientifically constructed displays of specimens and medical objects, but also historically contextualized narratives of the history of medicine. Building on John Harley Warner’s study of the deployment of French museological science in America (1998), my project interrogates the development of an American scientific medicine within the framework of this national medical museum that was simultaneously constrained and shaped by the U.S. Civil War. Through the display of their unique collection of pathological and anatomical specimens, the Army Medical Museum combined museological science practice with historical artifacts in its display of Civil War specimens, balancing the humanistic and objectifying qualities of scientific medicine in a way other medical museums could not. I interrogate this dual practice by examining the processes of collection and commemoration in building this museum during the Civil War. 
Whereas most medical museums in the nineteenth century displayed specimens according to their scientific, rather than historical, significance, I argue that the development of museological display at the Army Medical Museum, shaped by the events of the Civil War, demonstrated a uniquely American solution to the problem of how to integrate the humanistic practice of the healing arts with the dehumanizing truth imperatives of scientific analysis.
Why are chemical kinds (think elements) so paradigmatically natural, whereas biological kinds (think species) are messy and complicated? This lecture argues that, in fact, chemical kinds are not nearly as neat and tidy as is often supposed—in other words, that chemical and biological kinds are not so different after all. Both chemical and biological kinds tend to be complex—so this talk applies a study of biochemical complexity to traditional classificatory puzzles as they arise throughout parts of chemistry and biology. These are familiar puzzles like: how to classify X? Is there one right way and, if so, what is it? If not, how do the different possible classifications relate to one another? Finally: is classificatory diversity a problem for the study of X? This lecture argues that at least one kind of classificatory diversity—that of selective naturalism—is not necessarily a problem for scientific study, by presenting a case in which this kind of classificatory diversity gets used as a tool for discovery in the biochemical sciences.
The aim of this essay is to argue for a new version of ‘inference-to-the-best-explanation’ scientific realism, which this lecture characterizes as Best Theory Realism or ‘BTR’. On BTR, the realist needs only to embrace a commitment to the truth or approximate truth of the best theories in a field, those which are unique in satisfying the highest standards of empirical success in a mature field with many successful but falsified predecessors. This talk argues that taking our best theories to be true is justified because it provides the best explanation of (1) the predictive success of their predecessors and (2) their own special success. Against standard and especially structural realism, this lecture argues against the claim that the best explanations of the success of theories are provided by identifying their true components, such as structural relations between unobservables, which are preserved across theory change. In particular, this lecture criticizes Ladyman’s and Carrier’s structural account of the success of phlogiston theory, and Worrall’s well-known structural account of the success of Fresnel’s theory of light. This talk argues that these accounts tacitly assume the truth of our best theories, which in each case provides a better explanation of these theories’ success than the structural accounts.
Structural realism is now defended as the only version of realism that is able to surmount the pessimistic meta-induction and the general problem that successful theories involve ontological claims concerning unobservable entities that are abandoned and falsified in theory-change. This lecture argues that Best Theory Realism can overcome the pessimistic meta-induction and this general problem posed by theory-change. Our best theories possess a characteristic which sharply distinguishes them from their successful but false predecessors. Furthermore ‘inference-to-the-best-explanation’ confirmation can establish the truth of our best theories and thus trumps the pessimistic inductive reasoning which is supposed to show that even our best theories are most likely false in their claims concerning unobservable entities and processes.
SSP faculty and students only
In February 2005, the world’s first public health treaty – the Framework Convention on Tobacco Control (FCTC) – was brought into force by the World Health Organization (WHO). Unanimously endorsed by the World Health Assembly, the FCTC has become one of the most widely and rapidly adopted treaties in the history of the United Nations. The success of the treaty is frequently attributed to its “unequivocal evidence base” and is seen, primarily, as a technical accomplishment. However, the evidence base of global tobacco control has been built on a very particular way of quantifying the global burden of disease that was introduced with the development of the Disability Adjusted Life Year (DALY) metric by the World Bank in 1993. The DALY metric ascribes economic value to the individual years of life lost to ill-health and facilitates the use of cost-benefit analysis of potential health interventions. On a DALY-logic, tobacco control came to be seen as a global health priority resulting, eventually, in the passage of the FCTC. The FCTC is thus far more than a technical accomplishment: it represents a key moment in the institutionalization of a particular way of quantifying disease, economizing life and governing health. Drawing on interviews with key actors, participant observations, and historical documents, this talk examines the economization of life that has been inscribed in the evidence base of global tobacco control by tracing the epistemological and political assumptions that underlie the FCTC treaty and by raising questions about the role of democratic participation in global health priority-setting.
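The quantification the abstract describes can be made concrete. A minimal sketch of the DALY arithmetic, using the standard WHO formulation (DALY = years of life lost + years lived with disability, without the age-weighting and discounting refinements of the original 1993 metric); the function name and example figures are illustrative, not drawn from the talk:

```python
def dalys(deaths, years_lost_per_death, cases, disability_weight, years_with_condition):
    """Disability-Adjusted Life Years in their simplest form: DALY = YLL + YLD.

    YLL (years of life lost) counts premature mortality; YLD (years lived
    with disability) weights morbidity by severity on a 0-1 scale.
    """
    yll = deaths * years_lost_per_death                      # mortality burden
    yld = cases * disability_weight * years_with_condition   # morbidity burden
    return yll + yld

# Hypothetical figures: 100 deaths losing 30 years each, plus 500 cases
# at disability weight 0.2 lasting 10 years -> 3000 + 1000 = 4000 DALYs.
burden = dalys(100, 30, 500, 0.2, 10)
```

It is exactly this reduction of mortality and morbidity to a single comparable number that makes the cost-benefit ranking of interventions, and hence the "DALY-logic" of the abstract, possible.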
This lecture develops the concept of a model taxon to complement that of a model organism and contrast the use of such models as platforms for research with their use as representations. Model taxa figure in distinctive practices or modes of research in biology, specifically comparative research. Comparative biology involves distinctive practices of modeling, hypothesis-testing, and generalization which give it a quite different character as a knowledge-producing enterprise than would be expected on most traditional philosophies of science. For example, generalizations of results using model taxa can take a form this study calls “export generalization.” These resemble extrapolations of research in one place or setting to another particular setting more than they do inductive or abductive generalizations to a regularity or law statement or to statistical inferences treating subjects as samples. This talk focuses on illustrative examples of modeling and modes of generalization drawn from case studies in evolutionary developmental morphology and, if time permits, works in progress extending the concept to model populations.
The Big Data movement has been taken up in health care as so-called “Precision Medicine.” Claiming to be predictive, personalized, and participatory, the grand vision aims to create an information commons for researchers, drawn from vast amounts of biological, social and geographical data about individuals. Information would be collected not only during the course of routine clinical encounters (and stored as medical records or in biobanks), but also using a variety of biosensors, mobile health devices, data mining of social media and more. Applying data analytics to the heterogeneous, complex data sets, proponents claim to be able to identify individual and population risk profiles. A major emphasis is to identify biomarkers with which to create new taxonomies of disease and new ways of modeling disease. This talk discusses the implications of data-driven biomedicine including the infrastructures already being created to sustain such efforts. New epistemic spaces are being created as research and clinic become increasingly blurred and as personal health information, gleaned from sources not conventionally considered “medical,” is used to create new ontologies of the body.
In this paper we explore an organization’s annual budgeting process as a ‘future-oriented sensemaking process.’ In so doing we articulate the critical role of ritual in guiding, shaping, and bounding how people create visions of the future that become reified and govern action. The budget ritual provides a regular sensemaking opportunity, one driven by a cyclical process rather than a crisis event, which legitimates action, channels attention and emotion, and provides a liminal space for contestation and constructing meaning. We further show how the collective temporal work of bringing together details of the past with hopes for the future occurs in the context of ritual. In outlining how future-oriented sensemaking through ritualized moves engenders a stable, coherent, and robust form of collective sensemaking this work thus has implications for strategy making in practice and organizational control.
Probabilistic reasoning has increasingly emerged as a contested and controversial site in forensic science and at the intersection of science and law. Proponents of probabilistic reasoning assert as a truism that all evidence can and should be understood probabilistically. Historically, the forensic disciplines have avoided probabilistic reasoning through semantic workarounds. Increasing external scrutiny on forensic science, however, has made such positions increasingly untenable. This paper chronicles serial efforts over the past five years to reconfigure the knowledge claims of one prominent forensic discipline, fingerprint identification, in a manner consistent with probabilistic reasoning. Based on a close textual analysis of key policy documents and trial transcript testimony, the paper shows that the purported embrace of probabilistic reasoning has been inconsistent and half-hearted. This suggests that the imposition of probabilistic reasoning upon disciplines historically resistant to it will entail considerable difficulties. As such, the paper contributes to STS discussions of the scientization of quasi-scientific disciplines like forensic science and the applicability of probabilistic reasoning to real-world scientific problems.
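For readers unfamiliar with what "understanding evidence probabilistically" amounts to in forensic statistics, the usual proposal is the likelihood-ratio framework, sketched below; this is a standard illustration with hypothetical numbers, not material from the paper itself:

```python
def likelihood_ratio(p_features_if_same_source, p_features_if_different_source):
    """LR > 1 supports the same-source hypothesis; LR < 1 undermines it."""
    return p_features_if_same_source / p_features_if_different_source

def posterior_odds(prior_odds, lr):
    """Bayes' theorem in odds form: posterior odds = prior odds * LR."""
    return prior_odds * lr

# Hypothetical numbers: the observed fingerprint features are likely if the
# prints share a source (0.9) but rare by coincidence (0.001), giving an LR
# of roughly 900; against prior odds of 1 to 10,000, the posterior odds
# remain well below even.
lr = likelihood_ratio(0.9, 0.001)
odds = posterior_odds(1 / 10_000, lr)
```

The contrast with the traditional fingerprint examiner's categorical "identification" claim is stark: on this framework the examiner reports only the strength of the evidence (the LR), never a conclusion of certainty, which is precisely the reconfiguration the paper tracks.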
Author of Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning (Duke 2007)
“Quantum Entanglements and Hauntological Relations of Inheritance: Dis/continuities, SpaceTime Enfoldings, and Justice-to-Come”
Karen Barad is Professor of Feminist Studies, Philosophy, and History of Consciousness at the University of California at Santa Cruz. Barad’s Ph.D. is in theoretical particle physics and quantum field theory. Barad held a tenured appointment in a physics department before moving into more interdisciplinary spaces. Barad is the author of Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning (Duke University Press, 2007) and numerous articles in the fields of physics, philosophy, science studies, poststructuralist theory, and feminist theory. Barad’s research has been supported by the National Science Foundation, the Ford Foundation, the Hughes Foundation, the Irvine Foundation, the Mellon Foundation, and the National Endowment for the Humanities. Barad is the Co-Director of the Science & Justice Graduate Training Program at UCSC.
Her work engages feminist science studies, materialism, deconstruction, poststructuralism, posthumanism, multi-species studies, science & justice, physics, twentieth-century continental philosophy, epistemology, ontology, ethics, philosophy of physics, and feminist, queer, and trans theories.
Monday, March 3, 4-5:30, Philip Vera Cruz Room, Old Student Center, Mandeville Campus, UCSD
The standard exorcism of Maxwell's demon requires us to attach an entropy cost to information processing undertaken by the demon. This exorcism faces many difficulties. This talk shows that there is a much simpler way to exorcise Maxwell's demon.
SSP faculty and students only
Although Pierre Duhem is widely considered to be an instrumentalist with respect to physical theories, his thesis of natural classification – the view that theories classify experimental laws, and that we are compelled to believe that this classification progressively tends to a natural classification, one that reflects a real metaphysical order – has led to various realist interpretations of him. I critically assess some realisms attributed to Duhem in the literature – including the standard no-miracles style realism, motivational realism, and plausibility realism – and argue that Duhem does not convincingly fit any of these. I argue that the rationale behind Duhem’s view of natural classification was not that it was the best explanation of the success of physical theories, or that it merely motivated the practice of physics, or that it holds on plausibility considerations given the success of theories. I contend that it was rather that the thesis of natural classification for Duhem helps the physicist justify and make sense of the activity of pursuing physics. It helps make the pursuit of progress in physics intelligible to the physicist. Arthur Fine’s take on his ‘natural ontological attitude’, I believe, comes closest to this view. I then leave it as an open question as to whether this rationale warrants a realist reading of Duhem.
The 2011 global wave of protests raised a number of questions for scholars studying social movements. Like May ’68, these world-historic events are more than a revolt, but less than a revolution. Moreover, they represent a puzzle for sociologists who cannot easily reconcile the available theoretical models of social movements with the processes of social change imagined by Occupy protesters. Principally, the Occupy protesters purposively broke from traditional descriptions of social movements in their organizational frames, assessment of political opportunities, and techniques for resource mobilization. Simultaneously, another paradigm is emerging, networked social movements, that more closely matches the participatory knowledge of Occupy protesters as networks of networks that use information communication technologies to coordinate large scale actions. This new model is both distinctly interdisciplinary and fraught with contentious debates about how to conduct research. What organizing infrastructure underlies these massive popular uprisings that seem to have no center? What are the best methods for visualizing, describing, and analyzing these forms of social action? Finally, are paradigm shifts in sociological research more than just transformations in research programs; can they also be markers of social change themselves?
In 1947, the Nuremberg Code defined the first imperative of medical research, the duty to obtain the free and informed consent of a human experimental subject. It took three turbulent and scandal-ridden decades for that standard to be adopted in the United States. After giving a brief account of the struggle to get American doctors to adhere to the Nuremberg Code, this presentation examines how informed consent evolved in the 1980s and 1990s. Things began to change after AIDS made its first appearance in the U.S. Suspicion of the pharmaceutical industry survived from the era of the human experimentation scandals, but underwent a major adjustment under the conditions of a lethal epidemic. Clinical trials went from being a hazard from which people needed to be protected, to a privilege to which people needed access. AIDS activists became better informed about the disease than their physicians, sweeping away the last vestiges of medical paternalism. The patient autonomy of the 1970s was centered on the right to refuse to participate in research or undergo therapy. In the 1980s, it became the right to choose the level of one’s exposure to risk, the right to decide to take an experimental drug because the alternative was death. In the early 1990s, thanks to the combined efforts of pharmaceutical science and patient activism, AIDS was transformed into a chronic manageable condition. Coinciding with NAFTA and other neoliberal structural adjustment programs, the Clinton administration implemented a series of medical reforms which defined autonomy in terms of privacy and consumer choice. When the FDA loosened its guidelines for direct-to-consumer advertising of prescription drugs in 1997, autonomy was defined in terms of the empowered patient, armed with a self-diagnosis culled from the Internet and primed by television ads to ask her doctor about brand name drugs.
The material turn in science and technology studies (STS) has influenced a number of scholars who analyze the bio-economy, especially when it comes to positing latent value in biological material (e.g. tissues, cells, blood, etc.). However, in focusing on the material value of this biological matter these scholars end up missing a far more significant source of value in the bio-economy. Value in the bio-economy is constituted by life science businesses themselves as organizations and by their tangible and, especially, intangible assets. This necessitates looking at the process of assetization which involves analyzing corporate governance, (e)valuative practices and another form of materiality within financial accounting.
An important trend in recent scholarship on ethics and technology is to identify and analyze the ethical issues raised by new technologies while these technologies are still in the early stages of development. A major thrust of anticipatory ethics is to have ethical considerations and social values influence the development of future technologies. In this paper, I explore anticipatory ethics in relation to artificial agent technology. Artificial agents are computational devices that perform tasks on behalf of humans and do so without immediate, direct human control or intervention, e.g., software programs, robots, drones. Because of their complexity and autonomy and especially because of their capacity to learn as they operate, some argue that in the future there will be artificial agents for which no humans can be (or can fairly be held) responsible. The responsibility issues posed by artificial agent technology illustrate the challenges of anticipatory ethics, in particular those having to do with the contingent nature of technological development and the malleability of moral concepts. The issue also poses a challenge for discourse around agency.
A jellyfish surrounds a plastic fragment, merging the synthetic material with its body; a water agency poster warns of dangerous plastic bottle ‘fish’; marine organisms take shelter on and under synthetic materials. These are the denizens of a growing realm marine ecologists call the ‘plastisphere,’ where sea life and plastics meet. Building upon STS interrogations of nature/culture divides and the practical work of classification, this presentation explores the indeterminacy – the very plasticity – of the category of ‘species’ as it is engaged in seriousness and irony, with organic and synthetic kinds. First, I document attempts to untangle assemblages of nature-culture, drawing on participant-observation at a nonprofit marine institute laboratory in California. Here volunteers measure samples of vast oceans in teaspoons, sorting tiny plastic bits from animal ones under the microscope, deciding what gets counted as life (not plastic) and what does not (plastic). Second, I consider related public education campaigns, paying particular attention to assumptions about whether and when plastic species should or should not meet. I argue that the ‘danger’ of plastic relationships lurks not in associations but in the very categories used to understand and live with forms of plastic and forms of life, in the kinds of belonging that emerge with kinds of materials, and, in the failure to recognize the impossibility of their separation.
Since the 1990s, medical educators have heeded calls to implement cultural competence training during medical school. This has been widely taken up, and the cultural competence movement is arguably in its second articulation—from a set of tacked-on lectures to a curricular model that attempts to integrate biomedical and biopsychosocial aspects of medical care. One common way of integrating these areas is in Problem-based Learning (PBL) cases. PBL is an educational method used to teach clinical reasoning during which a group works through a paper case under the supervision of a faculty facilitator. PBL cases present a narrative of clinical care that follows a patient from first presentation in the hospital or doctor’s office until the resolution of the patient’s health concern, and most cases are organized such that students ‘solve’ the mystery of the patient’s malady as they work through the case. PBL cases have come under sociological scrutiny because case writers sometimes ‘thicken’ the cases with stereotypical patient attributes, contributing to larger portrayals of cultural competence approaches as teaching medical students that culture is ‘other’ and competence is, in practice, knowing common attributes of social groups. Using ethnographic data from PBL and cultural competence training at “West Coast Medical School,” this presentation will show how patient diversity is incorporated into training cases and how these difference markers (race, class, ethnicity, sexuality, etc.) become meaningful—or not—during PBL sessions.
Historians of psychology have described how the “introspection” of early Wundtian psychology largely came to be ruled out of experimental settings by the mid-20th century. In this paper I take a fresh look at the years before this process was complete, from the vantage point of early anthropological and psychological field expeditions. The psychological research conducted during and after the Cambridge Anthropological Expedition to the Torres Straits Islands (CAETS) in 1898 had a certain impact on Ludwig Wittgenstein, who, among other things, became an important commentator on experimental psychology. In his later writings, Wittgenstein frequently referred to “anthropological facts” and “anthropological phenomena.” He articulated some of the central tenets of cultural anthropological analysis. His efforts to move the ground of analysis from philosophy to anthropology take on greater force in the light of his acquaintance with the early history of anthropology. I will take this opportunity to reconsider the importance of the CAETS in the history of anthropology and to explore some possible ways of approaching experimental psychology ethnographically.