Stephen M. Downes is a Full Professor in the Philosophy Department at the University of Utah (USA). Most of his work is in philosophy of science with special focus on philosophy of biology, philosophy of social science and models and modeling across the sciences. He is also an Adjunct Professor in the School of Biological Sciences at the University of Utah, and a member of the PhilInBioMed network. Detailed CV.
An Early History of the Heritability Coefficient Applied to Humans (1918–1960)
Stephen M. Downes (in collaboration with Eric Turkheimer)
(See full paper here)
Abstract
Fisher’s 1918 paper accomplished two distinct goals: unifying discrete Mendelian genetics with continuous biometric phenotypes and quantifying the variance components of variation in complex human characteristics. The former contributed to the foundation of modern quantitative genetics; the latter was adopted by social scientists interested in pursuing Galtonian nature-nurture questions about the biological and social origins of human behavior, especially human intelligence. This historical divergence has produced competing notions of the estimation of variance ratios referred to as heritability. Jay Lush showed that they could be applied to selective breeding on the farm, while the early twin geneticists used them as a descriptive statistic for the degree of genetic determination in complex human traits. Here we trace the early history (1918 to 1960) of the heritability coefficient now used by social scientists.
Keywords
Behavior genetics · Heritability · Heritability coefficient · Human behavior genetics
Abstract
Evolutionary gradualism, the randomness of mutations, and the hypothesis that natural selection exerts a pervasive influence on evolutionary outcomes are pair-wise logically independent. Can the claims about selection and mutation be used to formulate an argument for gradualism? In his Genetical Theory of Natural Selection, R.A. Fisher made an important start at this project in his famous “geometric argument” about the fitness consequences of random mutations that have different sizes of phenotypic effect. Kimura’s theory of how the probability of fixation depends on both the selection coefficient and the effective population size shows that Fisher’s argument for gradualism was mistaken. Here we analyze Fisher’s argument and explain how Kimura’s theory leads to a conclusion that Fisher did not anticipate. We identify a fallacy that reasoning about fitness differences and their consequences for evolution should avoid. We distinguish forward-directed from backward-directed versions of gradualism. The backward-directed thesis may be correct, but the forward-directed thesis is not.
Johanna Joyce is a cancer biologist and geneticist. Her research focuses on exploring the critical functions of the tumor microenvironment in regulating cancer progression, metastasis and therapeutic response, with the ultimate goal of exploiting this knowledge to devise rational and effective therapies.
Her fascination with cancer genetics began during her undergraduate degree in Genetics at Trinity College Dublin, and continued during her PhD at the University of Cambridge, UK, where she investigated dysregulation of genomic imprinting in cancer predisposition syndromes. She did her postdoc at the University of California, San Francisco, in Doug Hanahan’s lab, focusing on mechanisms of tumor angiogenesis and invasion in pancreatic cancers.
In December 2004, she started her lab at Memorial Sloan Kettering Cancer Center, New York, USA and was promoted through the ranks to tenured Professor and Full Member in 2014.
In January 2016, she was recruited to the University of Lausanne, Switzerland and the Ludwig Institute for Cancer Research. Her lab continues to unravel the complex mechanisms of communication between cancer cells and their microenvironment that regulate tumor progression, metastasis, and response to anti-cancer therapy. They are especially intrigued by the study of brain tumors – including glioblastoma and brain metastases – with the ultimate goal of developing effective new therapies against these deadly cancers.
The seminar will be organised via Zoom (ID: 882 6482 8610)
Abstract
Cancers do not arise within a vacuum; rather, they develop and grow within complex organs and tissue environments that critically regulate the fate of tumor cells at each sequential step of malignant progression. The tumor microenvironment (TME) can be viewed as an intricate ecosystem populated by diverse innate and adaptive immune cell types, stromal cells, extracellular matrix, and blood and lymphatic vessel networks that are embedded along with the cancer cells. While bidirectional communication between cells and their microenvironment is critical for normal tissue homeostasis, this active dialog can become subverted in cancer, leading to tumor initiation and progression. Through their exposure to tumor-derived molecules, normal cells can become “educated” to actually promote cancer development. As a consequence of this tumor-mediated education, TME cells produce a plethora of growth factors, chemokines, and matrix-degrading enzymes that together enhance the proliferation and invasion of the tumor. Moreover, these conscripted normal cells also provide a support system for cancer cells to fall back on following traditional therapies such as chemotherapy and radiation, and additionally contribute to a general immune-suppressive state, thus limiting the efficacy of immunotherapies. Consequently, multi-targeted approaches in which co-opted cells in the microenvironment are “re-educated” to actively fight the cancer represent a promising strategy for the effective long-term treatment of this devastating disease.
Paul Rainey (Max Planck Institute for Evolutionary Biology, Germany & Ecole supérieure de Physique et de Chimie Industrielles de la Ville de Paris, France), “Ecological scaffolding and the Evolution of Individuality” (Zoom talk).
Paul was born in New Zealand and completed his bachelors, masters and PhD at the University of Canterbury. From 1989 until 2005 he was based in the UK, spending most of that time as a researcher and then professor at the University of Oxford. He began transitioning back to New Zealand in 2003, first as Chair of Ecology and Evolution at the University of Auckland, and then in 2007 moved to the New Zealand Institute for Advanced Study as one of its founding professors. Paul is currently Director of the Department of Microbial Population Biology at the Max Planck Institute for Evolutionary Biology in Plön (Germany) and Professor at ESPCI in Paris, and he retains an adjunct professorial position at the NZIAS. He is a Fellow of the Royal Society of New Zealand, a Member of EMBO and an honorary professor at Christian-Albrechts Universität zu Kiel.
Eva Jablonka is professor emeritus at the Cohn Institute for the History and Philosophy of Science and Ideas, Tel Aviv University. In 1981 she was awarded the Landau prize of Israel for outstanding Master of Science (M.Sc.) work and in 1988, the Marcus prize for outstanding Ph.D. work. Eva Jablonka publishes on evolutionary themes, especially epigenetics. Her emphasis on non-genetic forms of evolution has received interest from those attempting to expand the scope of evolutionary thinking into other spheres. Jablonka has been described as being in the vanguard of an ongoing revolution within evolutionary biology, and is a leading proponent of the extended evolutionary synthesis.
Abstract: “Neural Transitions in Learning and Cognition”
A focus on learning as a marker of a cognitive system provides a unifying framework for experimental and theoretical studies of cognition in the living world. Focusing on neural learning, Simona Ginsburg and I identified five major neural transitions, the first two of which involve animals at the base of the phylogenetic tree: (i) the evolutionary transition from learning in non-neural animals to learning in the first neural animals; (ii) the transition to animals showing limited, elemental associative learning, entailing neural centralization and primary brain differentiation; (iii) the transition to animals capable of unlimited associative learning (UAL), which, on our account, constitutes sentience and entails hierarchical brain organization and dedicated memory and value networks; (iv) the transition to imaginative animals that can plan and learn through selection among virtual events; and (v) the transition to human, symbol-based cognition and cultural learning.
Please find the video below:
Judith Campisi is a professor of biogerontology at the Buck Institute in California. She is a member of the National Academy of Sciences and a fellow of the American Association for the Advancement of Science. She has earned a solid reputation in this field for her studies of the relationship between aging, cell senescence and cancer. Her working hypothesis is that cell senescence is a major driver of aging and of age-related diseases, mainly through inflammation. She is also a founder of Unity Biotechnology.
Abstract
Cancer is primarily a disease of aging, similar to many other age-related pathologies ranging from sarcopenia to neurodegeneration. In contrast to many age-related diseases, which are loss-of-function in nature, cancer can be considered a gain-of-function disease because cancer cells must acquire new properties, generally by somatic mutation, in order to develop into a lethal tumor. Given that potentially oncogenic mutations occur throughout life, why do most cancers take decades to develop? One answer is that there are powerful tumor suppressive mechanisms, selected throughout evolution, that keep cancer at bay for approximately half the mammalian life span. One of these tumor suppressive mechanisms is a cell fate decision termed cellular senescence. Cells undergo senescence in response to many types of stress or damage, including potentially oncogenic mutations. Senescent cells arrest proliferation, essentially irreversibly, and develop a complex senescence-associated secretory phenotype (SASP) that includes many inflammatory cytokines and chemokines, growth factors, proteases and bioactive metabolites, including lipids. Senescent cells increase with age in most, if not all, mammalian tissues, and are present at higher numbers in many diseased, compared to age-matched non-diseased, tissues. There is now mounting evidence that senescent cells, and particularly their SASPs, are prime drivers of many age-related pathologies, including, ironically, late-life cancer. Further, many genotoxic and cytotoxic anti-cancer drugs induce senescence in both tumor and normal cells, suggesting that senescent cells might be responsible for the premature aging phenotypes that commonly develop in cancer patients treated with certain anti-cancer therapies. Mouse models, and a new class of drugs that selectively kill senescent cells, give hope that the balance between tumor suppression and aging can be tipped to reduce the incidence of age-related cancer and extend health span.
Talk: Explainable AI and medicine
Speaker: Emanuele Ratti is a philosopher based in the Institute of Philosophy and Scientific Method at Johannes Kepler University Linz. Before his current appointment, he worked at the University of Notre Dame, and he holds a PhD in ethics and foundations of the life sciences from the European School of Molecular Medicine (SEMM), in Milan.
His research trajectory is in history and philosophy of science and technology (biomedicine and data science). In particular, he is interested in how data science and biomedicine shape one another, both in epistemic and non-epistemic terms.
Abstract:
In the past few years, several scholars have been critical of the use of machine learning systems (MLSs) in medicine, for three reasons in particular. First, MLSs are theory agnostic. Second, MLSs do not track any causal relationship. Finally, MLSs are black boxes. For all these reasons, it has been claimed that MLSs should be able to provide explanations of how they work – the so-called Explainable AI (XAI). Recently, Alex John London has claimed that these reasons do not stand up to scrutiny: as long as MLSs are thoroughly validated by means of rigorous empirical testing, we do not need XAI in medicine. London’s view is based on three assumptions: (1) we should treat MLSs as akin to pharmaceuticals, for which we do not need an understanding of how they work, only evidence that they work; (2) XAI plays one role in medicine, which is to assess reliability and safety; (3) MLSs have unlimited interoperability and low transfer costs. In this talk, I will question London’s assumptions and elaborate an account of XAI that I call ‘explanation-by-translation’. In a nutshell, XAI’s goal is to integrate MLS tools into medical practice; in order to fulfill this integration task, XAI translates or represents MLS findings in a way that is compatible with the conceptual and representational apparatus of the system of practice into which the MLS has to be integrated. I will illustrate ‘explanation-by-translation’ in action in medical diagnosis, and I will show how this account is helpful for understanding, in different contexts, whether we need XAI, what XAI has to explain, and how XAI has to explain it.
Please find the video of the talk here:
The talk will be given by Zoom at 6pm, Paris time zone (GMT+1).

James Tabery is an Associate Professor at the University of Utah (USA), with appointments in the Department of Philosophy, the Department of Pediatrics, and the Department of Internal Medicine (Program in Medical Ethics and Humanities). His research areas are the history and philosophy of science, as well as bioethics. In particular, he examines the history and modern-day implementation of genetics: how debates surrounding that science have evolved over the last century, what impact genetic results are having in the criminal justice system, what impact genetics is having in the clinical domain, and who has benefited and who has been harmed historically by genetic research. His research has been reported on in The New York Times, National Geographic, Time Magazine, and National Public Radio. He is the author of Beyond Versus: The Struggle to Understand the Interaction of Nature and Nurture (Cambridge, MA: The MIT Press, 2014), and many papers.
Abstract:
In the late 1990s, pharmaceutical company executives who were committed to integrating genomics into the pharmaceutical industry introduced a series of phrases to capture and market this pharmacogenomic turn in their businesses: (1) pharmacogenomics was in contrast to “one pill fits all” medicine, (2) pharmacogenomics was about getting “the right drug, to the right patient, at the right time”, and (3) pharmacogenomics paved the way for “personalized medicine”. This concept of personalized medicine quickly expanded to capture any notion of genomic medicine. Soon, clinicians and researchers became worried about the language of personalized medicine. First, doctors had been personalizing medicine for centuries (if not millennia), so the idea that genomics somehow uniquely ushered in an era of personalization was misleading. Second, what geneticists were calling personalized medicine didn’t really personalize medicine; it sorted patients into groups based on their genomic profiles. And third, personalized medicine became surrounded by heady promises of miracle cures and breakthrough treatments that were largely recognized as hype. As a result, starting around 2009-2011, a number of communities abandoned talk of “personalized medicine” and replaced it with “precision medicine”, where the revised idea was that adding genomic information about patients to clinical care would get at the underlying causes of health and illness (rather than just treating the symptoms) and so make healthcare more precise. I will argue that all the faults identified with regard to personalized medicine apply equally to precision medicine. First, doctors have been identifying and treating underlying causes of illness for centuries (if not millennia), so the idea that genomics somehow uniquely ushered in an era of treating causes (rather than symptoms) is misleading. Second, genomics is often imprecise; information about the pathogenicity of genetic variants changes frequently and is fraught with clinical ambiguity. And third, precision medicine has continued to be surrounded by heady promises of miracle cures and breakthrough treatments that remain largely hype.
The talk will be given via Zoom at 6pm (Paris time zone, GMT+1); everyone is welcome to join.
John Dupré is Professor of Philosophy of Science at the University of Exeter (UK), with a main focus on philosophy of biology. He is the Director of Egenis, the Centre for the Study of Life Sciences.
Abstract
People still often think that viruses are tiny little things that cause disease by parasitizing larger organisms. Here I argue that viruses are not things, but processes, and while some do, of course, cause serious disease, many or even most may be important positive contributors to larger biological systems. Finally, returning to the mistaken characterization of viruses as things rather than processes, I show how this erroneous reification may have seriously harmful consequences for research.