
Workshop: Challenges of AI in biology and medicine

28 November 2022 - 29 November 2022

Summary

AI techniques have led to spectacular advances in many different fields, such as image recognition, language processing, and chess. In the scientific context, the most recent example is the success of the AI system “AlphaFold” in predicting the folded structure of proteins from their amino acid sequence. In medicine, AI-based methods promise to revolutionize diagnosis and decision-making for treatments and drugs. These promises come with many concerns about the methods themselves, such as their validation, reliability, and accountability. Philosophy could help address these issues, in which epistemological questions often intermingle with ethical concerns.

This workshop brings together scientists and philosophers to discuss the challenges of introducing AI into biology and medicine. In particular, we will discuss whether and how a philosophical perspective can be helpful in addressing these challenges.

Preliminary Program

Monday, November 28

9:30 – 10:00 Welcome + Introduction
10:00 – 10:45 Emanuele Ratti Molecular biology, machine learning and scientific understanding
10:45 – 11:00 Break
11:00 – 11:45 Olivier Saut Challenges of AI for improving clinical and radiological followup of cancer
11:45 – 12:30 Océane Fiant Artificial intelligence in pathology: what kind of decision support?
12:30 – 14:30 Lunch
14:30 – 15:30 Macha Nikolski, Ulysse Guyet, Emmanuel Bouilhol Challenges for AI in life sciences (MN); ARSENAL: Antimicrobial resistance prediction by a machine learning method (UG); Artificial Intelligence in fluorescent microscopy: improving cellular phenotype characterization (EB)
15:30 – 16:15 Rodolphe Thiébaut AI for mechanistic modelling of biological processes using high-dimensional data: a dream for a naïve researcher?
16:15 – 16:30 Break
16:30 – 17:00 General Discussion

Tuesday, November 29

9:30 – 10:15 Éric Pardoux Where to put philosophy in AI? Considerations about ethics & epistemology from a design perspective
10:15 – 11:00 Guillaume Martinroche TBD
11:00 – 11:15 Break
11:15 – 11:45 Closing discussion

Abstracts

Emanuele Ratti: Molecular biology, machine learning and scientific understanding

In this talk, I discuss the relation between scientific explanations in molecular biology and machine learning. In particular, my goal is to identify the extent to which the use of machine learning impairs our ability to formulate molecular explanations. I will argue that the ability of biologists to understand the model they work with (i.e. the intelligibility of the model) severely constrains their ability to turn the model into an explanatory model. The more complex a molecular model is (in the sense of including a large number of variables), the more difficult it is to turn it into a scientific explanation. Since machine learning improves its performance as more components are added, it generates models that are not intelligible, and hence not explanatory.

Olivier Saut: Challenges of AI for improving clinical and radiological followup of cancer

Beyond the hype, there are still many challenges to overcome to build ambitious clinical tools using AI. Through examples from our research on AI approaches for oncology using radiological images, I will present some of these challenges, the approaches we had to develop, and our failures.

Océane Fiant: Artificial intelligence in pathology: what kind of decision support?

Two arguments are frequently put forward to justify the deployment of artificial intelligence in medicine: it either relieves the physician of repetitive tasks with little added value, or provides him or her with simple decision support.
I will present a case illustrating the second perspective. It is a project that aims to build a dataset of breast cancer images, which will later be used to train convolutional neural networks to detect tumor components on hematoxylin and eosin-stained whole slide images. The systematic inventory of these components should allow the pathologist to “see” things that he or she cannot detect with the naked eye, thereby improving his or her ability to analyze breast tumors, diagnose them, and manage patients.
However, the study of this case reveals a challenge other than that of providing the pathologist with simple decision support. For some years now, the management of patients according to the characteristics of their tumor has relied on molecular assays that correlate genetic variants with pathological phenotypes. These assays are used in certain clinical cases to choose some therapeutic options over others. While it is possible to argue that these assays do not compete with, but merely complement, the pathologist’s expertise, the fact remains that they can guide clinical decisions according to knowledge and criteria that are not part of this practitioner’s epistemic equipment. Thus, by enhancing the pathologist’s ability to analyze tumors, artificial intelligence tools are also part of a professional strategy to reinforce the pathologist’s expertise in the face of genomics-based approaches.
My presentation examines the design process of this dataset, comparing its objectives and implementation with those of available gene expression assays (mainly Oncotype DX).
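As a purely illustrative aside, the kind of patch-level tumor-component detection described in this abstract is often implemented as a small convolutional classifier over tissue patches cut from whole slide images. The following minimal PyTorch sketch shows the general shape of such a model; the architecture, the number of classes, and the patch size are our own assumptions, not the design of the project discussed above.

```python
# Hypothetical sketch: a minimal patch-level classifier for H&E whole slide
# images. Architecture, classes, and data layout are illustrative assumptions,
# not the actual design of the project described above.
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    """Small CNN mapping an RGB tissue patch to tumor-component classes."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes),
        )

    def forward(self, x):
        return self.head(self.features(x))

# Usage on a batch of 256x256 RGB patches extracted from a whole slide image
model = PatchClassifier(num_classes=4)
patches = torch.randn(8, 3, 256, 256)   # stand-in for real H&E patches
logits = model(patches)                 # one score per tumor-component class
print(logits.shape)                     # torch.Size([8, 4])
```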

Éric Pardoux: Where to put philosophy in AI? Considerations about ethics & epistemology from a design perspective

Artificial Intelligence (AI) appears to be pervasive nowadays. As such, almost any field of philosophy can be linked to it in some fashion. Nonetheless, the exact roles that philosophy (and philosophers) can take in the development of AI and AI-based systems are still blurry.

My doctoral research studies AI in healthcare and medicine. Its main aim is to understand how to ethically design ethical AI systems. This project gives me the opportunity to question the position a philosopher can take at the intersection of philosophy, computer science, and medicine.

Although the objective of my doctoral research is broad, this talk will offer some considerations on the ways in which both ethics and epistemology may be incorporated into the very design of AI systems for healthcare and medicine. First, I will discuss a general theoretical framework called processual ethics, which suggests ways to integrate ethics, philosophy, and philosophers throughout the design process of AI and AI-based systems. Then I will present the implications such a framework may have for a practical project, namely a software platform for building machine learning (ML) models for (digital) epidemiology through the formalization of data processing workflows (or pipelines), thus enabling meta-reasoning.

This example will provide an opportunity to underline some issues about ML, such as the distinction that has to be made between data, reasoning, and meta-reasoning. It will also allow us to consider how philosophy can improve or clarify some key elements in the design and use of ethical AI systems.
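To make the distinction between reasoning and meta-reasoning over formalized pipelines a little more concrete, here is a minimal Python sketch (our own illustration, not the platform mentioned in the talk): the pipeline is declared as data, so a program can inspect its structure, a simple form of meta-reasoning, before executing it.

```python
# Hypothetical sketch: a pipeline declared as data, so it can be inspected
# (meta-reasoning) separately from being executed. Not the platform from the
# talk; step names and checks are illustrative only.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Step:
    name: str
    produces: str          # label of the artifact this step outputs
    requires: str          # label of the artifact this step consumes
    run: Callable          # the actual computation

def check_pipeline(steps: List[Step]) -> List[str]:
    """Meta-reasoning: verify that each step's input is produced upstream."""
    available, problems = {"raw_data"}, []
    for step in steps:
        if step.requires not in available:
            problems.append(f"{step.name}: missing input '{step.requires}'")
        available.add(step.produces)
    return problems

def run_pipeline(steps: List[Step], data):
    """Plain reasoning: execute the steps in order on the data."""
    for step in steps:
        data = step.run(data)
    return data

pipeline = [
    Step("normalize", produces="normalized", requires="raw_data",
         run=lambda xs: [x / max(xs) for x in xs]),
    Step("threshold", produces="labels", requires="normalized",
         run=lambda xs: [x > 0.5 for x in xs]),
]

print(check_pipeline(pipeline))           # [] -> the pipeline is well-formed
print(run_pipeline(pipeline, [1, 4, 9]))  # [False, False, True]
```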

Rodolphe Thiébaut: AI for mechanistic modelling of biological processes using high-dimensional data: a dream for a naïve researcher?

Here I would like to discuss a scientific challenge we are facing in the SISTM research group (https://www.inria.fr/en/sistm). We are constructing mechanistic dynamical models of the vaccine response, based on ordinary differential equations whose parameters are estimated from omics data generated in early-phase clinical trials and animal experiments. Could part of the immune response be captured by a mechanistic model (as opposed to a purely predictive approach)? Is inference tractable in the context of high-dimensional data with small sample sizes, especially with new learning algorithms?
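As a toy illustration of the general approach described above (a mechanistic ODE model whose parameters are estimated from data), the following Python sketch fits a two-parameter antibody-dynamics model to simulated observations with SciPy. The model form, parameter names, and data are assumptions made for illustration, not the SISTM group's actual models.

```python
# Hypothetical sketch: fitting a toy ODE model of post-vaccination antibody
# dynamics to simulated data. Model form and parameter names are illustrative
# only, not the SISTM group's actual models.
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import least_squares

def antibody_model(y, t, production, decay):
    """dA/dt = production * exp(-t) - decay * A  (toy kinetics)."""
    A = y[0]
    return [production * np.exp(-t) - decay * A]

def simulate(params, t):
    production, decay = params
    sol = odeint(antibody_model, y0=[0.0], t=t, args=(production, decay))
    return sol[:, 0]

# Simulated "observed" antibody titers with noise (stand-in for trial data)
t_obs = np.linspace(0, 30, 15)
true_params = (5.0, 0.1)
rng = np.random.default_rng(0)
observed = simulate(true_params, t_obs) + rng.normal(0, 0.5, t_obs.size)

# Estimate the mechanistic parameters by nonlinear least squares
def residuals(params):
    return simulate(params, t_obs) - observed

fit = least_squares(residuals, x0=[1.0, 0.5], bounds=([0, 0], [np.inf, np.inf]))
print("Estimated (production, decay):", fit.x)
```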

Details

Start:
28 November 2022
End:
29 November 2022

Organizers

Fridolin Gross
Guillaume Martinroche

Venue

Bordeaux Pellegrin Hospital, Rheumatology Service, 12th floor
Place Amélie Raba Léon
Bordeaux, 33000 France