
Explainable and Robust Deep Learning for Medical Domain

Paul Rad

Abstract

The role of AI in healthcare is becoming increasingly important as we develop advanced algorithms to detect, diagnose, and generate treatment plans for a variety of diseases. However, the best use of AI algorithms in healthcare today is as decision aids to human decision makers, rather than having the algorithms make the final decisions. In this presentation I cover: 1) Robust Representation Learning and Uncertainty: how do we build models with the power of deep learning that work well in I.I.D. settings, yet can also generalize and avoid surprises when making decisions in an uncertain O.O.D. world, and that know what they don’t know?
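The talk itself does not include code; as a rough illustration of what "knowing what it doesn't know" can mean in practice, the sketch below estimates predictive uncertainty with Monte Carlo dropout so a classifier can abstain on likely out-of-distribution inputs. The architecture, feature dimensions, and values are assumptions for the example, not material from the presentation.

```python
# Illustrative sketch (not from the talk): estimating predictive uncertainty with
# Monte Carlo dropout, so a classifier can flag out-of-distribution (O.O.D.) inputs
# instead of returning an overconfident label. All architecture choices are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DropoutClassifier(nn.Module):
    def __init__(self, in_dim=32, hidden=64, n_classes=3, p=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=50):
    """Keep dropout active at test time and average the softmax outputs.

    Returns the mean class probabilities and the predictive entropy, which can be
    thresholded to abstain on likely O.O.D. inputs.
    """
    model.train()  # keeps dropout layers stochastic
    probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(n_samples)])
    mean_probs = probs.mean(dim=0)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy

# Usage: high entropy -> the model "knows what it doesn't know" and defers to a clinician.
model = DropoutClassifier()
x = torch.randn(4, 32)                # stand-in for feature vectors from medical images
mean_probs, entropy = mc_dropout_predict(model, x)
print(mean_probs.argmax(dim=-1), entropy)
```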

Short Bio

I maintain globally recognized academic research and industry mentorship in cloud computing and artificial intelligence, focused on explainability, robust representation learning, and decision-making algorithms for cyber and healthcare applications. My notable scientific innovations include new insights into computer-vision facial micro-expressions, using explainable self-supervised learning algorithms to predict cognitive state and neurological disorders and to discover Deepfakes in social networks. My patents and inventions have also transitioned from academia to industry and enabled the development of a computer vision startup in South Texas. In addition, I have made significant contributions to a deeper understanding of robust representation learning algorithms for social-network Deepfake analysis.

Keywords

Explanation, Healthcare, Self-supervised Learning, Robust Representation, Uncertainty

AI-generated hallucinations in the sciences – On the stability-accuracy trade-off in deep learning

Vegard Antun

Abstract

Artificial intelligence (AI) is changing the world in front of our eyes, raising the question: how reliable is modern AI, and can it be trusted? This talk will discuss how AI techniques used in medical imaging can produce highly untrustworthy outputs, potentially leading to incorrect medical diagnoses. Moreover, we will show an inherent trade-off between how stable and how accurate methods for image reconstruction from undersampled acquisitions can become. This can be explained in a mathematically precise way, demonstrating fundamental limitations of modern AI approaches.
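As a rough companion to the abstract, the sketch below shows one common way such instabilities are probed: searching for a small perturbation of the input that produces a large change in the output of a learned reconstruction map for undersampled Fourier measurements. The measurement operator, the (untrained placeholder) network, and the optimization settings are assumptions, not the methods or results from the talk.

```python
# Illustrative sketch (assumptions throughout): probing the stability of a learned
# reconstruction map for undersampled Fourier measurements y = A x. The network is an
# untrained placeholder; in practice it would be a trained reconstruction model.
import torch
import torch.nn as nn

n, m = 128, 48
torch.manual_seed(0)
rows = torch.randperm(n)[:m]                      # random undersampling pattern (assumed)
F_full = torch.fft.fft(torch.eye(n), dim=0) / n**0.5
A = F_full[rows]                                  # m x n subsampled DFT matrix

def forward_op(x):                                # measurements y = A x (complex-valued)
    return A @ x.to(torch.complex64)

recon = nn.Sequential(                            # placeholder "learned" reconstruction
    nn.Linear(2 * m, 256), nn.ReLU(), nn.Linear(256, n)
)

def reconstruct(y):
    y_ri = torch.cat([y.real, y.imag], dim=-1)    # split complex measurements into channels
    return recon(y_ri)

x = torch.randn(n)                                # ground-truth signal (stand-in)
y = forward_op(x)

# Worst-case perturbation in image space: maximize the change in the reconstruction
# while keeping the perturbation small (projected gradient ascent).
r = torch.zeros(n, requires_grad=True)
eps, lr = 0.05 * x.norm(), 0.5
for _ in range(200):
    loss = (reconstruct(forward_op(x + r)) - reconstruct(y)).norm()
    loss.backward()
    with torch.no_grad():
        r += lr * r.grad
        if r.norm() > eps:
            r *= eps / r.norm()                   # project back onto the epsilon-ball
        r.grad.zero_()

print(f"|r| / |x| = {r.norm() / x.norm():.3f},  reconstruction change = {loss.item():.3f}")
```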

Short Bio

Vegard Antun is a Postdoctoral Fellow in Applied Mathematics at the University of Oslo. His research is centred on deep learning based techniques for scientific computing, with a particular focus on inverse problems and imaging. A focal point of his research is the design and investigation of fundamental barriers for stable and accurate neural networks in the sciences. He holds a PhD in Mathematics from the University of Oslo.

Keywords

Trustworthy AI, Fundamental Barriers, Undersampled acquisitions, Stability and accuracy

Artificial Intelligence in RF Pulse Design: from High Resolution NMR to Imaging

Prof. Gianluigi Veglia, Dr. Manu Veliparambil Subrahmanian

Abstract

High-fidelity unitary control of quantum systems is central to quantum computing and to several spectroscopies, spanning optics, coherent spectroscopy, NMR, MRI, and EPR. Here we introduce a time-optimal RF pulse design strategy and develop a neural network for generating high-fidelity broadband RF pulses with customizable operation, bandwidth, RF-inhomogeneity compensation, and operational fidelity. Applications include traditional NMR experiments, imaging under high RF inhomogeneity, and high-fidelity operations for quantum information processing. The newly designed RF pulses are more robust and less prone to imperfections than the shapes commonly used in basic liquid-state NMR experiments, and they reduce RF artifacts in MRI. This new strategy will enable the design of efficient quantum computing operators as well as new spectroscopic and imaging techniques.
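For readers who want to experiment, the sketch below is a minimal hard-pulse Bloch simulation for evaluating how a piecewise-constant RF pulse performs across resonance offsets; this is the kind of fidelity evaluation a pulse-design pipeline needs, not the authors' neural-network design strategy. The rectangular 180-degree pulse and all numerical values are assumptions for illustration; a designed shape would simply replace the amplitude and phase arrays.

```python
# Illustrative sketch (not the authors' design tool): evaluating the broadband
# performance of a piecewise-constant RF pulse with a hard-pulse Bloch simulation.
import numpy as np

def rotate(M, axis, angle):
    """Rodrigues rotation of the magnetization vector M about a unit axis."""
    axis = axis / np.linalg.norm(axis)
    return (M * np.cos(angle)
            + np.cross(axis, M) * np.sin(angle)
            + axis * np.dot(axis, M) * (1 - np.cos(angle)))

def simulate(amps, phases, dt, offset_hz, b1_scale=1.0):
    """Propagate M = (0, 0, 1) through the pulse at a given offset and B1 scaling."""
    M = np.array([0.0, 0.0, 1.0])
    for a, p in zip(amps, phases):
        w1 = 2 * np.pi * a * b1_scale                  # RF amplitude in rad/s
        w_off = 2 * np.pi * offset_hz                  # resonance offset in rad/s
        omega = np.array([w1 * np.cos(p), w1 * np.sin(p), w_off])
        norm = np.linalg.norm(omega)
        if norm > 0:
            M = rotate(M, omega / norm, norm * dt)
    return M

# Rectangular 180-degree pulse: 500 us long, gamma*B1/2pi = 1 kHz (assumed values).
n_steps, duration = 200, 500e-6
dt = duration / n_steps
amps = np.full(n_steps, 1000.0)                        # Hz
phases = np.zeros(n_steps)

# Inversion profile across offsets: the fidelity measure here is simply -Mz after the pulse.
for off in np.linspace(-3000, 3000, 13):
    Mz = simulate(amps, phases, dt, off)[2]
    print(f"offset {off:+6.0f} Hz  ->  Mz = {Mz:+.3f}")
```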

Keywords

RF pulse design, Broadband pulses, Quantum information processing, MRI, Ultra-high-field NMR

Focused introduction to deep learning for biomedical applications

Andrea Duggento

Abstract

In this focused introduction to deep learning for biomedical applications, a brief historical account of the first deep neural network architectures will be given. The main neural network architectures employed in biomedical applications will then be discussed, together with some conclusions about the future of neural networks in medicine.

Short Bio

Andrea Duggento (male) is Assistant Professor of Medical Physics at UNITOV. He received his bachelor’s and master’s degrees in Theoretical Physics from the University of Pisa (Italy) in 2005, his PhD degree in Physics from Lancaster University (UK) in 2009, and his degree in Medical Physics in 2015 from the University of Rome “Tor Vergata”. His research interests are centred on nonlinear dynamical systems, statistical analysis, and information processes, with special emphasis on networks of biological systems. His recent activity focuses on directed functional networks in the brain. Since 2008, very early in his scientific career, he has published in prestigious journals such as Physical Review Letters and Physical Review E, on topics like nonlinear dynamics in biologically relevant time series, and Philosophical Transactions of the Royal Society, on topics like brain network dynamics. His research topics span a wide area of interest, ranging from brain-heart functional interaction to structural brain imaging, and from heart and baroreflex modelling to quantitative analyses in PET imaging.

Interpretability and Explainability in Machine Learning: lessons learnt, challenges and directions from an NLP perspective

Roberto Basili

Abstract

AI provides successful techniques in the areas of learning and language processing that can be profitably applied to a larger set of domains. However, the cognitive focus of AI research is still the driving force of its development. Current AI challenges are increasingly related to aspects such as the governance and explainability of AI models and processes. In this talk I discuss explainability methods as they have been proposed in the area of NLP, with a specific focus on the adoption of kernel methods integrated with deep learning. The role and impact of explanation in different NLP tasks is discussed as a further example.
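As a hedged illustration of the general recipe of integrating kernel methods with neural learning (not the speaker's exact system), the sketch below projects texts onto their kernel similarities with a set of landmark examples (a Nystroem approximation), feeds that projection to a small neural classifier, and offers a naive landmark-based explanation by listing the training sentences most similar to the input. The corpus, labels, and hyperparameters are toy assumptions.

```python
# Illustrative sketch (an assumed recipe, not the speaker's system): kernel
# approximation via Nystroem landmarks feeding a small neural classifier, with a
# landmark-based "explanation" of each prediction.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.kernel_approximation import Nystroem
from sklearn.neural_network import MLPClassifier
from sklearn.metrics.pairwise import linear_kernel

# Tiny toy corpus standing in for an NLP task (e.g. classifying radiology sentences).
texts = [
    "the scan shows a lesion in the left lobe",
    "patient reports severe headache and nausea",
    "no abnormality detected in the chest x-ray",
    "mri reveals white matter lesions",
    "follow-up imaging recommended in six months",
    "the report is unremarkable and within normal limits",
]
labels = [1, 1, 0, 1, 0, 0]  # 1 = finding present, 0 = no finding (toy labels)

vec = TfidfVectorizer()
X = vec.fit_transform(texts)

# Nystroem picks a subset of examples as landmarks and projects every input onto its
# kernel similarities with those landmarks; the projection then feeds the classifier.
nys = Nystroem(kernel="linear", n_components=4, random_state=0)
Z = nys.fit_transform(X)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(Z, labels)

# A simple explanation: show the training sentences most similar (in kernel space)
# to the input that drove the prediction.
query = ["ct scan shows a small lesion"]
Xq = vec.transform(query)
pred = clf.predict(nys.transform(Xq))[0]
sims = linear_kernel(Xq, X).ravel()
print("prediction:", pred)
for i in np.argsort(sims)[::-1][:2]:
    print(f"  similar training example ({sims[i]:.2f}):", texts[i])
```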

Short Bio

Prof. Roberto Basili has been Professor of Computer Science at the University of Roma, Tor Vergata, since May 2003. Current teaching: “Artificial Intelligence”, “Web Mining and Retrieval”, and “Database Systems” for the Computer Science and Computer Engineering curricula. He has been a member of the Board of Trustees of the Italian Association for Artificial Intelligence (AI*IA) since 2005, and co-founder and member of the Board of Trustees of the Italian Association for Computational Linguistics (AILC) since September 2015. He is also co-editor-in-chief of the Italian Journal of Computational Linguistics, in coordination with Simonetta Montemagni (ILC, CNR, Pisa). His research started in the 1990s on problems, methodologies, and technologies of Artificial Intelligence, in the areas of Knowledge Representation, Machine Learning, and Natural Language Processing, as well as in the engineering of distributed and Web-oriented Natural Language Processing and Information Retrieval systems. The major engineering aspects concern mathematical models of machine learning as core models of explanation and automation for the development of linguistic knowledge in intelligent agents. In this area, he has contributed the definition of several algorithmic techniques for the optimization of semantic inference tasks (such as classification and pattern recognition from texts and streams of unstructured data, adaptive ranking in search engines, and sentiment detection and analysis in Social Web data). He is the author of more than 180 publications in international journals and in the proceedings of international conferences and workshops (h-index: 31). He has been an invited speaker at several workshops and conferences in the areas of Artificial Intelligence, Computational Linguistics, Machine Learning, and Information Retrieval, on corpus-driven ontology learning and data-driven learning methods for language processing, Web mining, and retrieval.

DBLP: http://dblp.uni-trier.de/pers/hd/b/Basili:Roberto.html
Scholar: https://scholar.google.it/citations?user=U1A22fYAAAAJ

Keywords

Explainable AI, Kernel Machines, Kernel-based Neural Learning, Natural Language Processing, Question Answering

Dissecting the progression of multiple sclerosis through explainable ML techniques

Allegra Conti

Abstract

In multiple sclerosis, cortical lesions are a main determinant of disease progression. Recently, it has been suggested that the presence of chronic active white matter lesions harboring a paramagnetic rim is associated with a more aggressive form of the disease. However, it is still uncertain how these two types of lesions are related, or which plays the larger role in disability advancement. Using 7 T MRI, we characterized the prevalence, interplay, and evolution of cortical and rim lesions. In addition, using state-of-the-art machine learning algorithms (extreme gradient boosting), we assessed their cumulative power, as well as their individual importance, in predicting disease stage and disability progression. Our results show that the volumes of both rim and cortical lesions increase over time. Using an XGBoost classifier, we found that evaluating rim and cortical lesion volumes in MS may improve the ability to distinguish patients likely to experience progression of neurological disability, and can thereby support clinical decision-making.
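As a toy illustration of the classification step described above (not the study's data, features, or validation protocol), the sketch below trains an XGBoost classifier on two simulated features, rim-lesion and cortical-lesion volume, and reports cross-validated discrimination together with feature importances.

```python
# Illustrative sketch (simulated data, not the study's dataset): an XGBoost classifier
# using rim-lesion and cortical-lesion volumes to separate progressive from stable
# patients, with feature importances as a first, coarse explanation.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
# Simulated lesion volumes (arbitrary units): progressive patients drawn with larger volumes.
progressive = rng.integers(0, 2, n)
rim_vol = rng.gamma(2.0, 300 + 400 * progressive)
cortical_vol = rng.gamma(2.0, 500 + 600 * progressive)
X = np.column_stack([rim_vol, cortical_vol])
y = progressive

model = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1,
                      eval_metric="logloss")
print("CV ROC-AUC:", cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())

model.fit(X, y)
for name, imp in zip(["rim lesion volume", "cortical lesion volume"],
                     model.feature_importances_):
    print(f"{name}: importance = {imp:.2f}")
```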

Short Bio

Dr. Conti is a postdoctoral researcher at the Medical Physics Section (Department of Biomedicine and Prevention) of the University of Rome “Tor Vergata”. She received her master’s degree summa cum laude in Physics from the University of Rome “La Sapienza” and her Ph.D. degree in 2015 from the University G. D’Annunzio, Italy, ranking as the best student of the year. She held a Eurotalents post-doctoral individual fellowship, part of the Marie Sklodowska-Curie Actions Programme, at CEA-Paris Saclay (France). She has won several scientific awards, including the ‘MSCA Seal of Excellence’ awarded by the European Commission, the ‘Under 35 award’ of the Italian Magnetic Resonance Discussion Group (GIDRM), and the ‘Best Oral Presentation Award’ based on the evaluation of the committee members of the Biomedical Physics Section of the Italian Physical Society (SIF). She is the author of several peer-reviewed articles. She is a reviewer for high-impact peer-reviewed scientific journals (Theranostics, Frontiers in Systems Neuroscience, and the International Journal of Nanomedicine, among others) as well as for international scientific conferences (International Society for Magnetic Resonance in Medicine (ISMRM); Engineering in Medicine and Biology (EMBC)). She serves on the editorial boards of scientific journals such as Frontiers in Physics, Frontiers in Physiology, Computational and Mathematical Methods in Medicine, and Pharmaceutics.

Keywords

Multiple Sclerosis, MRI, Machine Learning Classifiers, XGBoost, MS progression

AI for healthcare

Birgi Tamersoy

Abstract

Artificial Intelligence in Healthcare from a Data Continuum Perspective

Keywords

artificial intelligence, healthcare, deep learning, big data

Artificial Intelligence in Cancer Imaging

Hugo Aerts

Abstract

Medical imaging in oncology has traditionally been restricted to the diagnosis and staging of cancer, but technological advances in Artificial Intelligence (AI) are moving imaging modalities into the heart of patient care. Imaging can address a critical barrier in precision medicine: solid tumors can be spatially and temporally heterogeneous, and the standard approach to tumor sampling, often an invasive needle biopsy, is unable to fully capture the spatial state of the tumor. Radiomics refers to the automatic quantification of this radiographic phenotype. Radiomic methods rely heavily on AI technologies, in particular engineered and deep-learning algorithms, to quantify phenotypic characteristics that can be used to develop non-invasive biomarkers. In this talk, Dr. Aerts will discuss recent developments from his group and collaborators, who perform research at the intersection of radiology, bioinformatics, and data science. He will also discuss recent work on building a computational image analysis system that extracts a rich set of radiomic features and uses them to build radiomic signatures. The presentation will conclude with a discussion of future work on building integrative systems that incorporate both molecular and phenotypic data to improve cancer therapies.
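As a hedged sketch of the signature-building step only (not Dr. Aerts' pipeline), the example below assumes radiomic features have already been extracted upstream, for instance with a tool such as PyRadiomics, and uses an L1-penalized logistic regression to select a sparse radiomic signature and estimate its cross-validated discrimination. The file name, toy data, and hyperparameters are assumptions.

```python
# Illustrative sketch (hypothetical file name, toy data): building a radiomic "signature"
# from pre-extracted image features, one row of features per tumor plus an outcome label.
import numpy as np
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# df = pd.read_csv("radiomic_features.csv")   # hypothetical: feature columns + "outcome"
# Toy stand-in so the sketch runs end to end:
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(120, 30)),
                  columns=[f"feature_{i}" for i in range(30)])
df["outcome"] = (df["feature_0"] + 0.5 * df["feature_1"]
                 + rng.normal(scale=0.5, size=120) > 0).astype(int)

X = df.drop(columns="outcome").values
y = df["outcome"].values

# L1-penalized logistic regression keeps only a sparse subset of features:
# the non-zero coefficients form the radiomic signature.
signature = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.2),
)
print("CV ROC-AUC:", cross_val_score(signature, X, y, cv=5, scoring="roc_auc").mean())

signature.fit(X, y)
coefs = signature.named_steps["logisticregression"].coef_.ravel()
selected = [(name, round(c, 3)) for name, c in zip(df.columns[:-1], coefs) if abs(c) > 1e-6]
print("signature features:", selected)
```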

Short Bio

Hugo Aerts, PhD, is Director of the Artificial Intelligence in Medicine (AIM) Program at Harvard-BWH. AIM’s mission is to accelerate the application of AI algorithms in the medical sciences and in clinical practice. This academic program centralizes AI expertise, stimulating cross-pollination among clinical and technical expertise areas, and provides a common platform to address a wide range of clinical challenges. Dr. Aerts is a leader in medical AI and Principal Investigator on major NIH-supported efforts, including the Quantitative Imaging Network (U01) and Informatics Technology for Cancer Research (U24) initiatives of the NCI. In 2020 he was awarded a prestigious ERC Consolidator grant under the European Union’s Horizon programme. His research has resulted in numerous peer-reviewed publications in top-tier journals. Dr. Aerts is Associate Professor at Harvard University and Full Professor at Maastricht University. He earned his Master in Engineering from the Eindhoven Institute of Technology and his PhD from Maastricht University, and completed his postdoctoral fellowship at the Harvard School of Public Health.

Keywords

Cancer Imaging, Artificial Intelligence, Radiology, Deep Learning

General discussion forum

Welcome to the workshop’s general discussion room. In addition to the presentation-specific Q&A forums, this is a space for attendees and speakers to interact, discuss, exchange ideas, point to a talk you liked, leave suggestions and comments, etc. Don’t be shy!
