AI + Health Seminar Series
Connecting computer scientists and engineers with clinicians to tackle actionable AI/Health projects.
Upcoming Talks:
February 12, 2026:
Title: "Knowledge-Informed Weakly-Supervised Deep Learning Models for Cancer Applications"
Speaker: Hairong Wang (UT BME)
Zoom link info:
Join Zoom Meeting
https://utexas.zoom.us/j/87121024650?pwd=kkV0qG3NF7BkeuOHL7bHWeIO4nB0Uv.1
Meeting ID: 871 2102 4650
Passcode: 517642
Abstract: Over the past decade, the unprecedented capability of modern deep learning (DL) models has been convincingly demonstrated on large datasets. Thanks to its computational power and versatility, DL therefore holds substantial potential for analyzing healthcare data, with the promise of significantly enhancing diagnosis, prognosis, and treatment planning. Healthcare data, on the other hand, possess unique properties distinct from commonly used DL benchmarks. Notably, due to the invasive nature and high expense of direct diagnosis, accurate healthcare data are often scarce, rendering off-the-shelf DL models largely ineffective in high-stakes applications. In this talk, I will discuss my recent work on addressing these constraints by advancing knowledge-informed, image-based DL methodologies that improve sample efficiency, predictive accuracy, and generalizability for real-world cancer applications. These approaches systematically integrate biological, anatomical, and clinical domain knowledge into DL pipelines to overcome data scarcity and heterogeneity in cancer imaging. Across applications in glioblastoma and liver cancer, the methods demonstrate substantial improvements in generalizability and precision, showing strong potential to support personalized diagnosis, prognosis, treatment planning, and monitoring in precision oncology.
Bio: Hairong Wang is an Assistant Professor in the Operations Research & Industrial Engineering program at UT Austin. Her research focuses on the development of machine learning models and algorithms for high-dimensional, multi-modal data with complex, heterogeneous structures. In particular, she develops data-driven methodologies for building and training machine learning models with data and computational efficiency, interpretability, generalizability, and robustness, and proposes principled approaches to fuse domain knowledge into model design for supporting clinical diagnosis and optimal treatment in high-stakes scenarios. Hairong received her PhD in Operations Research from the School of Industrial and Systems Engineering at Georgia Tech. Prior to joining Georgia Tech, she received her BA in Mathematics from the University of Oxford.
-----------
February 26, 2026: Jiang Bian, Regenstrief Institute (regenstrief.org)
March 12, 2026: Anthony Christodoulou, UCLA
March 26, 2026
Title: "Trustworthy Health AI: Challenges & Lessons Learned"
Speaker: Krishnaram Kenthapadi, Oracle Health
Zoom link info:
Join Zoom Meeting
https://utexas.zoom.us/j/87121024650?pwd=kkV0qG3NF7BkeuOHL7bHWeIO4nB0Uv.1
Meeting ID: 871 2102 4650
Passcode: 517642
Abstract: While generative AI models and applications have huge potential across healthcare, their successful deployment requires addressing several considerations around ethics, trustworthiness, and safety. These concerns include domain-specific evaluation, hallucinations, truthfulness and grounding, safety and alignment, bias and fairness, robustness and security, privacy and unlearning, calibration and confidence, and transparency. In this talk, we first highlight the key challenges and opportunities for AI in healthcare, and then discuss the unique challenges associated with trustworthy deployment of generative AI in healthcare. Focusing on the clinical documentation use case, we present practical guidelines for applying responsible AI techniques effectively and discuss lessons learned from deploying responsible AI approaches for agentic AI applications in healthcare. In particular, we present insights from building and deploying AI agents as part of the Oracle Health Clinical AI Agent.
Bio: Krishnaram Kenthapadi is the Chief Scientist, Healthcare AI at Oracle, where he leads the AI initiatives for the Clinical AI Agent and other Oracle Health products. Previously, he led AI safety, trustworthiness, and responsible AI initiatives at Fiddler AI, Amazon AWS AI, and LinkedIn, and served as LinkedIn’s representative on Microsoft’s AI and Ethics in Engineering and Research (AETHER) Advisory Board. Prior to that, he was a Researcher at the Microsoft Research Silicon Valley Lab. Krishnaram obtained his Ph.D. in Computer Science from Stanford University in 2006. He has published 60+ papers with 7000+ citations, and has filed 150+ patents, 72 of which have been granted. He has given invited talks and tutorials at leading research conferences and industry forums, and has received research recognition awards.
April 9, 2026: Ergus Subashi, MD Anderson
April 23, 2026: Radu Marculescu, UT Austin
PAST TALKS:
January 29, 2026
Time: 1:00 - 1:30 pm
Title: "Enhancing GI Tract Cancer Diagnosis Through Generative Models and Vision-based Robotic Tactile Sensing"
Speaker: Dr. Farshid Alambeigi
Summary: Colonoscopy remains the gold standard for colorectal cancer screening, yet it is difficult and unintuitive to operate and relies almost entirely on vision, making subtle or early-stage polyps easy to miss. In this talk, I present a unified research platform to accelerate next-generation AI-enabled robotic colonoscopy by addressing three core gaps: improving the steerability and intuitiveness of conventional devices, advancing sensing beyond vision alone, and expanding access to data for intelligent screening.
First, we robotize conventional colonoscopes with a modular add-on system that improves steerability and clinician intuitiveness without disrupting established clinical workflow. Second, we extend beyond vision-only colonoscopy by integrating an inflatable vision-based robotic tactile sensor. While its output is also camera-based, tactile interaction provides complementary cues, including polyp surface texture and local stiffness relative to surrounding tissue. Finally, to overcome limited access to diverse, well-labeled clinical data, we incorporate a generative AI module to synthesize realistic training data and improve model robustness across variations in anatomy, lighting, and pathology.
Together, these components form a practical, end-to-end framework for developing, validating, and translating AI-driven robotic colonoscopy with enhanced sensing and improved generalization.
Bio: Dr. Farshid Alambeigi is an Associate Professor and the Leland Barclay Fellow in the Walker Department of Mechanical Engineering at The University of Texas at Austin. He is also a core faculty member of Texas Robotics. Dr. Alambeigi earned his Ph.D. in Mechanical Engineering (2019) and M.Sc. in Robotics (2017) from Johns Hopkins University. In 2018, he was awarded the 2019 Siebel Scholarship in recognition of his academic excellence and leadership. He is the recipient of the NIH NIBIB Trailblazer Award (2020) for his work on flexible implants and robotic systems for minimally invasive spinal fixation surgery and the NIH Director’s New Innovator Award (2022) for pioneering in vivo bioprinting surgical robotics for the treatment of volumetric muscle loss. His contributions have also been recognized with the UT Austin Faculty Innovation Award, the Outstanding Research Award by an Assistant Professor, the Walker Scholar Award, and several best paper awards and recognitions. He serves as an Associate Editor for the IEEE Transactions on Robotics (TRO), IEEE Robotics and Automation Letters (RAL), and the IEEE Robotics and Automation Magazine (RAM).
At UT Austin, Dr. Alambeigi directs the Advanced Robotic Technologies for Surgery (ARTS) Lab. In collaboration with the UT Dell Medical School and MD Anderson Cancer Center, the ARTS Lab advances the concept of Surgineering, engineering the surgery, by developing dexterous, intelligent robotic systems designed to partner with surgeons. The ultimate goal of this work is to enhance surgical precision, improve clinician performance, and advance patient safety and outcomes.
2025
AIHealthTalk: 11/06/25 - Semantics in Medicine: Expert, Data, and Application Perspectives
AIHealthTalk: 10/23/25 - Predicting Long Term Mortality in COPD Using Deep Learning Imaging Markers
AIHealthTalk: 10/09/25 - Using Large Language Models to Simulate Patients for Training Mental Health
AIHealthTalk: 09/25/25 - PanEcho: Toward Complete AI-Enabled Echocardiography Interpretation
AIHealthTalk: 09/11/25 - Clinical Deployment of AI: From Single Models to Compound Agentic Systems
April 10, 2025: Na Zou, Assistant Professor, University of Houston
Exploring and Exploiting Fairness in AI/ML: Algorithms and Applications
April 24, 2025: Edison Thomaz, Associate Professor and William H. Hartwig Fellow, Electrical and Computer Engineering, UT Austin
Identifying Digital Biomarkers of Cognitive Impairment from Real World Activity Data
Fall 2024
Nov. 14: Ziyue Xu, NVIDIA Health
Flexible Modality Learning: Modeling Arbitrary Modality Combination via the Mixture-of-Experts Framework
Oct 31: Greg Durrett, Associate Professor, The University of Texas at Austin
Specializing LLMs for Factuality and Soft Reasoning
Oct 17: Akshay Chaudhari, Stanford University
Towards Multi-modal Foundation Models for 3D Medical Imaging
Oct 3: Tianlong Chen, UNC
Sept 19: