AI + Health Seminar Series

Connecting computer scientists and engineers with clinicians to tackle actionable AI/Health projects.

AI + Health

UPCOMING TALKS


March 12, 2026

Speaker: Anthony Christodoulou, UCLA

Title: "Generative magnetic resonance multitasking: patient-specific AI models for high-dimensional imaging"

Zoom link info: 
Join Zoom Meeting
https://utexas.zoom.us/j/87121024650?pwd=kkV0qG3NF7BkeuOHL7bHWeIO4nB0Uv.1

Meeting ID: 871 2102 4650
Passcode: 517642


Abstract: Magnetic resonance imaging (MRI) is a cornerstone of noninvasive clinical diagnosis. Imaging moving organs like the heart remains challenging because cardiac motion, respiratory motion, and contrast and physical dynamics overlap during acquisition. Recent high-dimensional cardiac imaging frameworks address this by modeling the image as a function of multiple independent, time-varying factors, treating motion states and contrast-related sequence parameters as separate coordinates. This seminar presents an AI approach, Generative MR Multitasking, that represents images in a learnable, interpretable latent space. It uses scan-specific conditional generative models conditioned on known pulse-sequence timing parameters, which encourages the latent variables to encode interpretable motion states. The approach yields flexible, scan-specific models of patient motion and physical dynamics that can represent and quantify physical processes despite cardiac and respiratory motion.


Bio: Anthony Christodoulou is an Associate Professor of Radiology, Bioengineering, and Physics & Biology in Medicine at the University of California, Los Angeles (UCLA). Previously, he was Associate Professor of Biomedical Sciences and the Director of Magnetic Resonance Technology Innovations for the Biomedical Imaging Research Institute at Cedars-Sinai Medical Center (CSMC). He received his Ph.D. in Electrical and Computer Engineering from the University of Illinois at Urbana-Champaign (UIUC) and his B.S. and M.S. degrees in Electrical Engineering from the University of Southern California (USC). Prof. Christodoulou’s research laboratory develops and translates novel magnetic resonance imaging (MRI) techniques through innovations in MR physics, artificial intelligence, and image reconstruction. His group’s primary focus is on multidimensional quantitative imaging methods for the diagnosis, risk prediction, and treatment monitoring of cardiovascular diseases and cancer.



March 26, 2026

Speaker: Krishnaram Kenthapadi, Oracle Health

Title: "Trustworthy Health AI: Challenges & Lessons Learned"

Zoom link info: 
Join Zoom Meeting
https://utexas.zoom.us/j/87121024650?pwd=kkV0qG3NF7BkeuOHL7bHWeIO4nB0Uv.1

Meeting ID: 871 2102 4650
Passcode: 517642

Abstract: While generative AI models and applications have huge potential across healthcare, their successful deployment requires addressing several ethical, trustworthiness, and safety considerations. These concerns include domain‑specific evaluation, hallucinations, truthfulness and grounding, safety and alignment, bias and fairness, robustness and security, privacy and unlearning, calibration and confidence, and transparency. In this talk, we first highlight the key challenges and opportunities for AI in healthcare, and then discuss unique challenges associated with trustworthy deployment of generative AI in healthcare. Focusing on the clinical documentation use case, we present practical guidelines for applying responsible AI techniques effectively and discuss lessons learned from deploying responsible AI approaches for AI agentic applications in healthcare. In particular, we present insights from building and deploying AI agents as part of Oracle Health Clinical AI Agent.

Bio: Krishnaram Kenthapadi is the Chief Scientist, Healthcare AI at Oracle, where he leads the AI initiatives for Clinical AI Agent and other Oracle Health products. Previously, he led AI safety, trustworthiness, and responsible AI initiatives at Fiddler AI, Amazon AWS AI, and LinkedIn, and served as LinkedIn’s representative in Microsoft’s AI and Ethics in Engineering and Research (AETHER) Advisory Board. Prior to that, he was a Researcher at Microsoft Research Silicon Valley Lab. Krishnaram obtained his Ph.D. in Computer Science from Stanford University in 2006. He has published 60+ papers with 7000+ citations, and filed 150+ patents, 72 of which have been granted. He has given invited talks and tutorials at leading research conferences and industry forums, and received research recognition awards.



April 9, 2026: Ergus Subashi, MD Anderson


April 23, 2026: Radu Marculescu (UT)




PAST TALKS:

February 26, 2026:

Title: Real-World Data to Real-World Evidence with some AI: Successes, Challenges, and Opportunities
Speaker: Jiang Bian, Regenstrief Institute

Abstract: This presentation examines practical methods—and some AI tools—for transforming real-world data (RWD) into credible real-world evidence (RWE). It highlights the central role of data science in overcoming common obstacles in electronic health records (EHR) and claims data (e.g., missingness, measurement error, and coding variability). Using case studies focused on GLP-1 receptor agonists (GLP-1RAs), the talk illustrates how rigorous study design and causal inference—particularly target trial emulation—can be used to assess the effectiveness and safety of GLP-1RAs.  The presentation emphasizes when and how RWE can complement randomized controlled trials—and where it can mislead without careful attention to potential biases, many of which originate from data limitations.

Bio: Dr. Bian specializes in biomedical informatics and health data science—interdisciplinary fields focused on leveraging data, information, and knowledge to drive scientific discovery, problem-solving, and decision-making aimed at improving human health. Dr. Bian brings extensive experience in developing real-world data infrastructure, informatics tools, and systems, as well as applying advanced AI and data science methods to analyze and interpret multimodal clinical and biomedical data. Dr. Bian is the Chief Research Information Officer of the IU Melvin and Bren Simon Comprehensive Cancer Center. In addition, he serves as Chief Data Scientist at the Regenstrief Institute, Chief Data Scientist at IU Health, Associate Dean of Data Science and Vice Chair for Translational Informatics in the Department of Biostatistics and Health Data Science at the IU School of Medicine, and Deputy Director of the Indiana Clinical and Translational Sciences Institute (CTSI) at the Regenstrief Institute.

February 12, 2026: 

Title: "Knowledge-Informed Weakly-Supervised Deep Learning Models for Cancer Applications"
Speaker: Hairong Wang (UT BME)

Abstract: Within the past decade, the unprecedented capability of modern deep learning (DL) models has been convincingly demonstrated on large datasets. Thanks to its computational power and versatility, DL therefore holds substantial potential for analyzing healthcare data, significantly enhancing diagnosis, prognosis, and treatment planning. Healthcare data, on the other hand, possess unique properties distinct from commonly used DL benchmarks. Notably, because direct diagnosis is invasive and expensive, accurate healthcare data are often scarce, rendering off-the-shelf DL models largely ineffective in high-stakes applications. In this talk, I will discuss my recent work addressing these constraints by advancing knowledge-informed, image-based DL methodologies that improve sample efficiency, predictive accuracy, and generalizability for real-world cancer applications. These approaches systematically integrate biological, anatomical, and clinical domain knowledge into DL pipelines to overcome data scarcity and heterogeneity in cancer imaging. Across applications in glioblastoma and liver cancer, the methods demonstrate substantial improvements in generalizability and precision, showing strong potential to support personalized diagnosis, prognosis, treatment planning, and monitoring in precision oncology.

Bio: Hairong Wang is an Assistant Professor in the Operations Research & Industrial Engineering program at UT Austin. Her research focuses on developing machine learning models and algorithms for high-dimensional, multi-modal data with complex, heterogeneous structures. In particular, she develops data-driven methodologies for building and training machine learning models with data and computational efficiency, interpretability, generalizability, and robustness, and proposes principled approaches to fuse domain knowledge into model design for supporting clinical diagnosis and optimal treatment in high-stakes scenarios. Hairong received her PhD in Operations Research from the School of Industrial and Systems Engineering at Georgia Tech. Prior to joining Georgia Tech, she received her BA in Mathematics from the University of Oxford.

January 29, 2026

Time: 1:00-1:30 pm

Title: "Enhancing GI Tract Cancer Diagnosis Through Generative Models and Vision-based Robotic Tactile Sensing" 

Speaker: Dr. Farshid Alambeigi

Summary: Colonoscopy remains the gold standard for colorectal cancer screening, yet it is difficult and unintuitive to operate and relies almost entirely on vision, making subtle or early-stage polyps easy to miss. In this talk, I present a unified research platform to accelerate next-generation AI-enabled robotic colonoscopy by addressing three core gaps: improving the steerability and intuitiveness of conventional devices, advancing sensing beyond vision alone, and expanding access to data for intelligent screening.

First, we robotize conventional colonoscopes with a modular add-on system that improves steerability and clinician intuitiveness without disrupting established clinical workflow. Second, we extend beyond vision-only colonoscopy by integrating an inflatable vision-based robotic tactile sensor. While its output is also camera-based, tactile interaction provides complementary cues, including polyp surface texture and local stiffness relative to surrounding tissue. Finally, to overcome limited access to diverse, well-labeled clinical data, we incorporate a generative AI module to synthesize realistic training data and improve model robustness across variations in anatomy, lighting, and pathology.

Together, these components form a practical, end-to-end framework for developing, validating, and translating AI-driven robotic colonoscopy with enhanced sensing and improved generalization.

Bio: Dr. Farshid Alambeigi is an Associate Professor and the Leland Barclay Fellow in the Walker Department of Mechanical Engineering at The University of Texas at Austin. He is also a core faculty member of Texas Robotics. Dr. Alambeigi earned his Ph.D. in Mechanical Engineering (2019) and M.Sc. in Robotics (2017) from Johns Hopkins University. In 2018, he was awarded the 2019 Siebel Scholarship in recognition of his academic excellence and leadership. He is the recipient of the NIH NIBIB Trailblazer Award (2020) for his work on flexible implants and robotic systems for minimally invasive spinal fixation surgery and the NIH Director’s New Innovator Award (2022) for pioneering in vivo bioprinting surgical robotics for the treatment of volumetric muscle loss. His contributions have also been recognized with the UT Austin Faculty Innovation Award, the Outstanding Research Award by an Assistant Professor, the Walker Scholar Award, and several best paper awards and recognitions. He serves as an Associate Editor for the IEEE Transactions on Robotics (TRO), IEEE Robotics and Automation Letters (RAL), and the IEEE Robotics and Automation Magazine (RAM).

At UT Austin, Dr. Alambeigi directs the Advanced Robotic Technologies for Surgery (ARTS) Lab. In collaboration with the UT Dell Medical School and MD Anderson Cancer Center, the ARTS Lab advances the concept of Surgineering, engineering the surgery, by developing dexterous, intelligent robotic systems designed to partner with surgeons. The ultimate goal of this work is to enhance surgical precision, improve clinician performance, and advance patient safety and outcomes.



2025 

AIHealth Talk: 11/06/25 - Semantics in Medicine: Expert, Data, and Application Perspectives

AIHealth Talk: 10/23/25 - Predicting Long Term Mortality in COPD Using Deep Learning Imaging Markers

AIHealth Talk: 10/09/25 - Using Large Language Models to Simulate Patients for Training Mental Health

AIHealth Talk: 09/25/25 - PanEcho: Toward Complete AI-Enabled Echocardiography Interpretation

AIHealth Talk: 09/11/25 - Clinical Deployment of AI: From Single Models to Compound Agentic Systems




April 10, 2025: Na Zou, Assistant Professor, University of Houston
Exploring and Exploiting Fairness in AI/ML: Algorithms and Applications

April 24, 2025: Edison Thomaz, Associate Professor and William H. Hartwig Fellow, Electrical and Computer Engineering, UT Austin
Identifying Digital Biomarkers of Cognitive Impairment from Real World Activity Data


Fall 2024

Nov. 14: Ziyue Xu, NVIDIA Health
Flexible Modality Learning: Modeling Arbitrary Modality Combination via the Mixture-of-Experts Framework

Oct 31: Greg Durrett, Associate Professor, The University of Texas at Austin
Specializing LLMs for Factuality and Soft Reasoning

Oct 17: Akshay Chaudhari, Stanford University
Towards Multi-modal Foundation Models for 3D Medical Imaging



Oct 3: Tianlong Chen, UNC 



Sept 19: Carl Yang, Assistant Professor of Computer Science, Emory University
KG-LLM Co-Learning for Health