The Science Studies Colloquium Series takes place every Monday of the quarter from 4:00p-5:30p in Room 3027, Humanities & Social Sciences Building, Muir College campus, unless noted otherwise.
A reception for the colloquium speaker takes place before the talk from 3:30p-4:00p in Room 3005, Humanities & Social Sciences Building.
SSP faculty and students only
By the late-nineteenth century, the deep ocean floor had become “Darwin's laboratory,” a place to test the “direct action of external conditions on organisms.” According to dominant Victorian marine biology, the deep sea was an eternal, unchanging biogeographical space. There, and only there, could naturalists investigate how organisms evolved without the influence of changing environmental factors. Consequently, marine invertebrate specimens from the ocean floor played a large role in the formation of evolutionary theory throughout the nineteenth century. This presentation explores the 1880s dispute between Charles Darwin and Sir Wyville Thomson regarding natural selection as the culmination of a half-century of conflict over deep sea invertebrates and biological evidence. Marine invertebrates, according to some naturalists, were uniquely suited to the philosophical study of organismal complexity. Other naturalists focused on the much-anticipated discovery of Darwin's “living fossil” dredged from the sea floor as proof of evolutionary divergence. Sir Wyville Thomson, on the other hand, was certain that his deep sea crinoids offered no proof of evolution by natural selection, thereby offering a serious challenge to Darwin's theory. Ultimately, the practices of three international scientific communities, Edinburgh, Cambridge, and the US Coast Survey, converged over deep sea creatures and those marine organisms changed the way we study life's history.
An array of forces at many levels conspires to favor identification, and dissemination, of drug benefits over harms. Practical aspects of randomized trial conduct, coupled with human subjects protection considerations, foster trial designs that (through selection processes) advantage detection of benefits over harms – producing disparities between the truth and the evidence. Then (as findings show), forces, many fostered by industry conflicts, modify the relation between the evidence that is and the evidence that is seen. Factors range from submission and representation bias (including ghostwriting and plural publication), through reviewer and journal influence, to media representations and medical education. Disparities may propagate to medical practice through guideline generation and physician “performance pay.” The cumulative effect of these forces is that what we “know” may depart from what is “true” (in sign as well as magnitude), and what we do may depart from what is right. Recommendations are put forth to lessen the impact of these forces – with the goal of realigning practice with patients’ interests.
Data mining, or, more formally, Knowledge Discovery in Databases (KDD), is the activity of creating non-trivial knowledge suitable for action from databases of vast size and dimensionality. From the mid-1960s to the late 1990s, data mining moved from a disparaged, dubious sort of statistical work—“fishing” or “dredging”—to become what its practitioners proclaim to be an utterly transformative technology. According to KDD advocates, traditional scientific approaches to data—and the traditional competencies of scientists—simply cannot keep up with the volume of data and multidimensionality possible thanks to computers. Something else is needed, something less pure—because it deals with vast impurities of dynamic data, nearly always gathered for a particular business, governmental, or scientific research goal. Establishing the legitimacy of KDD meant demonstrating that purity was a luxury practitioners could no longer afford. Data miners tell a technologically determinist story of the necessary shift from the niceties of statistical rigor to the capaciousness and utility of data mining. I look at how stories of technologically determined emergence were crucial to the legitimization of data mining in authorizing the loosening—and often abandonment—of the disciplinary and epistemological values of its predecessor disciplines: statistics, database management, and machine learning.
Many have observed the decline of scientific authority over the last three decades, for reasons ranging from the toxic legacies of Cold War science (Beck 1992), to the current commercialization and privatization of knowledge production (Lave 2012, Mirowski 2011), to the success of social constructivist critique (Latour 2004). Whatever the cause(s), the relationship between academia, economic elites, and the military is shifting once again, and a new regime of knowledge production is emerging (Pestre 2003). What shape will this new regime take? How will the construction of expertise and scientific authority change, and with what political implications?
This talk argues that the emergent science regime will deeply erode academia’s increasingly tenuous monopoly on scientific expertise. The legions of volunteers on whom fields such as astronomy, cartography, and ornithology increasingly depend will expand wildly as research funding shrinks and academics become reaccustomed to an integration of “amateurs” and “professionals” last seen in the mid-19th century (Reingold 1976, Secord 1994). The resurgence of knowledge as a central target of capital accumulation (Canaan and Schumar 2008, Tyfield 2010) will deepen the current push to evaluate knowledge claims on their commercial merits, regardless of source, producing a perverse democratization of knowledge production that elevates Merck and Monsanto above conflict of interest concerns, but also allows the Environmental Working Group and bucket brigades in fenceline communities to transcend accusations of amateurism and take on the status of experts. The challenge will be to push this coming horizontality towards intellectually and politically progressive ends.
Questions about the nature and location of comets had not been definitively decided by 1618, a year marked by a succession of three comets visible to the naked eye, culminating in the great comet of 1618. These events resulted in the publication of multiple treatises about comets by numerous observers, not the least being those of Libertus Fromondus, of the Jesuit Horatio Grassi, and of Galileo, who responded to Grassi in defense of his own position, as elucidated by his disciple, Mario Guiducci. This talk discusses Fromondus’ critique of the Aristotelian account of comets, which caused him subsequently to reject Galileo’s explanation as well. Fromondus, professor of philosophy, then theology, at Louvain, made significant modifications to his Aristotelianism to accommodate astronomical novelties such as supra-lunar comets. While he could be thought of as a conservative thinker—and in this case he should still be considered an Aristotelian—he made changes that went well beyond what could be described as the articulation of the Aristotelian paradigm or as part of the sequence of theories in the Aristotelian “research programme.” And while we are used to thinking that the great Galileo was right about astronomical novelties such as comets, and that the Aristotelians were wrong, in this case, Fromondus had the better of Galileo (or right and wrong are the wrong ways to think about such issues).
The development of museological science in the nineteenth century radically restructured the way physicians understood, visualized and discussed medicine. By arranging medical specimens museologically, physicians were better able to understand the mechanism of disease. At the same time, this empirically-based medicine transformed the patient into an object of study, creating for American physicians a tension between the dehumanizing practices of scientific medicine and its inherently humanistic spirit. This tension can be seen clearly in the development of the U.S. Army Medical Museum during the Civil War, as well as in its museological display of specimens and the historical exhibits after the war. Designed as both a national museum and institution of medical education, the Army Medical Museum exhibited not only scientifically constructed displays of specimens and medical objects, but also historically contextualized narratives of the history of medicine. Building on John Harley Warner’s study of the deployment of French museological science in America (1998), my project interrogates the development of an American scientific medicine within the framework of this national medical museum that was simultaneously constrained and shaped by the U.S. Civil War. Through the display of their unique collection of pathological and anatomical specimens, the Army Medical Museum combined museological science practice with historical artifacts in its display of Civil War specimens, balancing the humanistic and objectifying qualities of scientific medicine in a way other medical museums could not. I interrogate this dual practice by examining the processes of collection and commemoration in building this museum during the Civil War. 
Whereas most medical museums in the nineteenth century displayed specimens according to their scientific, rather than historical, significance, I argue that the development of museological display at the Army Medical Museum, shaped by the events of the Civil War, demonstrated a uniquely American solution to the problem of how to integrate the humanistic practice of the healing arts with the dehumanizing truth imperatives of scientific analysis.
Why are chemical kinds (think elements) so paradigmatically natural, whereas biological kinds (think species) are messy and complicated? This lecture argues that, in fact, chemical kinds are not nearly as neat and tidy as is often supposed—in other words, that chemical and biological kinds are not so different after all. Both chemical and biological kinds tend to be complex—so this talk applies a study of biochemical complexity to traditional classificatory puzzles as they arise throughout parts of chemistry and biology. These are familiar puzzles like: how to classify X? Is there one right way and, if so, what is it? If not, how do the different possible classifications relate to one another? Finally: is classificatory diversity a problem for the study of X? This lecture argues that at least one kind of classificatory diversity—that of selective naturalism—is not necessarily a problem for scientific study, by presenting a case in which this kind of classificatory diversity gets used as a tool for discovery in the biochemical sciences.
The aim of this essay is to argue for a new version of ‘inference-to-the-best-explanation’ scientific realism, which this lecture characterizes as Best Theory Realism, or ‘BTR’. On BTR, the realist need only embrace a commitment to the truth or approximate truth of the best theories in a field, those which are unique in satisfying the highest standards of empirical success in a mature field with many successful but falsified predecessors. This talk argues that taking our best theories to be true is justified because it provides the best explanation of (1) the predictive success of their predecessors and (2) their own special success. Against standard and especially structural realism, this lecture argues against the claim that the best explanations of the success of theories are provided by identifying their true components, such as structural relations between unobservables, which are preserved across theory change. In particular, this lecture criticizes Ladyman’s and Carrier’s structural account of the success of phlogiston theory, and Worrall’s well-known structural account of the success of Fresnel’s theory of light. This talk argues that these accounts tacitly assume the truth of our best theories, which, I argue, provides a better explanation of these theories’ success than the structural accounts.
Structural realism is now defended as the only version of realism that is able to surmount the pessimistic meta-induction and the general problem that successful theories involve ontological claims concerning unobservable entities that are abandoned and falsified in theory-change. This lecture argues that Best Theory Realism can overcome the pessimistic meta-induction and this general problem posed by theory-change. Our best theories possess a characteristic which sharply distinguishes them from their successful but false predecessors. Furthermore, ‘inference-to-the-best-explanation’ confirmation can establish the truth of our best theories and thus trumps the pessimistic inductive reasoning which is supposed to show that even our best theories are most likely false in their claims concerning unobservable entities and processes.
SSP faculty and students only
In February 2005, the world’s first public health treaty – the Framework Convention on Tobacco Control (FCTC) – was brought into force by the World Health Organization (WHO). Unanimously endorsed by the World Health Assembly, the FCTC has become one of the most widely and rapidly adopted treaties in the history of the United Nations. The success of the treaty is frequently attributed to its “unequivocal evidence base” and is seen, primarily, as a technical accomplishment. However, the evidence base of global tobacco control has been built on a very particular way of quantifying the global burden of disease that was introduced with the development of the Disability Adjusted Life Year (DALY) metric by the World Bank in 1993. The DALY metric ascribes economic value to the individual years of life lost to ill-health and facilitates the use of cost-benefit analysis of potential health interventions. On a DALY-logic, tobacco control came to be seen as a global health priority resulting, eventually, in the passage of the FCTC. The FCTC is thus far more than a technical accomplishment: it represents a key moment in the institutionalization of a particular way of quantifying disease, economizing life and governing health. Drawing on interviews with key actors, participant observations, and historical documents, this talk examines the economization of life that has been inscribed in the evidence base of global tobacco control by tracing the epistemological and political assumptions that underlie the FCTC treaty and by raising questions about the role of democratic participation in global health priority-setting.
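The abstract turns on the DALY as a way of quantifying disease burden. As a sketch of the standard formulation (the general WHO/World Bank definition, not drawn from the abstract itself, and omitting the age-weighting and discounting refinements of the original 1993 version):

```latex
\mathrm{DALY} = \mathrm{YLL} + \mathrm{YLD}, \qquad
\mathrm{YLL} = N \times L, \qquad
\mathrm{YLD} = I \times DW \times L'
```

Here YLL (years of life lost) multiplies the number of deaths $N$ by the standard life expectancy at age of death $L$, and YLD (years lived with disability) multiplies incident cases $I$ by a disability weight $DW$ (between 0 and 1) and the average duration of the condition $L'$. It is this conversion of mortality and morbidity into a single commensurable unit that makes the cost-benefit analysis described in the talk possible.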
This lecture develops the concept of a model taxon to complement that of a model organism and contrast the use of such models as platforms for research with their use as representations. Model taxa figure in distinctive practices or modes of research in biology, specifically comparative research. Comparative biology involves distinctive practices of modeling, hypothesis-testing, and generalization which give it a quite different character as a knowledge-producing enterprise than would be expected on most traditional philosophies of science. For example, generalizations of results using model taxa can take a form this study calls “export generalization.” These resemble extrapolations of research in one place or setting to another particular setting more than they do inductive or abductive generalizations to a regularity or law statement or to statistical inferences treating subjects as samples. This talk focuses on illustrative examples of modeling and modes of generalization drawn from case studies in evolutionary developmental morphology and, if time permits, works in progress extending the concept to model populations.
The Big Data movement has been taken up in health care as so-called “Precision Medicine.” Claiming to be predictive, personalized, and participatory, the grand vision aims to create an information commons for researchers, drawn from vast amounts of biological, social and geographical data about individuals. Information would be collected not only during the course of routine clinical encounters (and stored as medical records or in biobanks), but also using a variety of biosensors, mobile health devices, data mining of social media and more. Applying data analytics to the heterogeneous, complex data sets, proponents claim to be able to identify individual and population risk profiles. A major emphasis is to identify biomarkers with which to create new taxonomies of disease and new ways of modeling disease. This talk discusses the implications of data-driven biomedicine, including the infrastructures already being created to sustain such efforts. New epistemic spaces are being created as research and clinic become increasingly blurred and as personal health information, gleaned from sources not conventionally considered “medical,” is used to create new ontologies of the body.
In this paper we explore an organization’s annual budgeting process as a ‘future-oriented sensemaking process.’ In so doing we articulate the critical role of ritual in guiding, shaping, and bounding how people create visions of the future that become reified and govern action. The budget ritual provides a regular sensemaking opportunity, one driven by a cyclical process rather than a crisis event, which legitimates action, channels attention and emotion, and provides a liminal space for contestation and constructing meaning. We further show how the collective temporal work of bringing together details of the past with hopes for the future occurs in the context of ritual. In outlining how future-oriented sensemaking through ritualized moves engenders a stable, coherent, and robust form of collective sensemaking, this work thus has implications for strategy making in practice and organizational control.
Probabilistic reasoning has increasingly emerged as a contested and controversial site in forensic science and at the intersection of science and law. Proponents of probabilistic reasoning assert as a truism that all evidence can and should be understood probabilistically. Historically, the forensic disciplines have avoided probabilistic reasoning through semantic workarounds. Increasing external scrutiny on forensic science, however, has made such positions increasingly untenable. This paper chronicles serial efforts over the past five years to reconfigure the knowledge claims of one prominent forensic discipline, fingerprint identification, in a manner consistent with probabilistic reasoning. Based on a close textual analysis of key policy documents and trial transcript testimony, the paper shows that the purported embrace of probabilistic reasoning has been inconsistent and half-hearted. This suggests that the imposition of probabilistic reasoning upon disciplines historically resistant to it will entail considerable difficulties. As such, the paper contributes to STS discussions of the scientization of quasi-scientific disciplines like forensic science and the applicability of probabilistic reasoning to real-world scientific problems.
Author of Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning (Duke 2007)
“Quantum Entanglements and Hauntological Relations of Inheritance: Dis/continuities, SpaceTime Enfoldings, and Justice-to-Come”
Karen Barad is Professor of Feminist Studies, Philosophy, and History of Consciousness at the University of California at Santa Cruz. Barad’s Ph.D. is in theoretical particle physics and quantum field theory. Barad held a tenured appointment in a physics department before moving into more interdisciplinary spaces. Barad is the author of Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning (Duke University Press, 2007) and numerous articles in the fields of physics, philosophy, science studies, poststructuralist theory, and feminist theory. Barad’s research has been supported by the National Science Foundation, the Ford Foundation, the Hughes Foundation, the Irvine Foundation, the Mellon Foundation, and the National Endowment for the Humanities. Barad is the Co-Director of the Science & Justice Graduate Training Program at UCSC.
Her work engages feminist science studies, materialism, deconstruction, poststructuralism, posthumanism, multi-species studies, science & justice, physics, twentieth-century continental philosophy, epistemology, ontology, ethics, philosophy of physics, and feminist, queer, and trans theories.
Monday, March 3, 4-5:30, Philip Vera Cruz Room, Old Student Center, Mandeville Campus, UCSD
The standard exorcism of Maxwell's demon requires us to attach an entropy cost to information processing undertaken by the demon. This exorcism faces many difficulties. This talk shows that there is a much simpler way to exorcise Maxwell's demon.
SSP faculty and students only
A few years after Einstein first published his work on the theory of relativity, many of his contemporaries remarked on the theory’s profound implications for communications. Some of them referred to it as “signal-theory” or “message theory.” For many writers, the perceived constancy of the speed of light—one of the central tenets of Einstein’s theory—was a mere technological effect related to current limitations in communication technologies. The physicist Paul Langevin, who explained the significance of Einstein’s theory in terms of the sending and receiving of messages, went as far as describing the old mechanics as a science centered on the belief—now discredited by Einstein—in a “means of instantaneous signaling at a distance.” Einstein’s universe, where the shortest path between two points was often curved and which broke the laws of Euclidean geometry, functioned according to the rules of electromagnetic signal transmission. This talk shows how one of the foundational texts for theoretical physics—with its metaphysical and cosmological implications—emerged alongside modern communications media.
The material turn in science and technology studies (STS) has influenced a number of scholars who analyze the bio-economy, especially when it comes to positing latent value in biological material (e.g. tissues, cells, blood, etc.). However, in focusing on the material value of this biological matter these scholars end up missing a far more significant source of value in the bio-economy. Value in the bio-economy is constituted by life science businesses themselves as organizations and by their tangible and, especially, intangible assets. This necessitates looking at the process of assetization which involves analyzing corporate governance, (e)valuative practices and another form of materiality within financial accounting.