Laleh Seyyed-Kalantari
Assistant Professor
Department:
Electrical Engineering & Computer Science
Email: lsk@yorku.ca
Website: https://responsibleai.eecs.yorku.ca/
Bio
Dr. Laleh Seyyed-Kalantari is an Assistant Professor at York University’s Lassonde School of Engineering and a faculty affiliate at the Vector Institute. She earned her Ph.D. in Electrical Engineering from McMaster University and completed her NSERC Postdoctoral Fellowship (2019–2022) at the Vector Institute and the University of Toronto. Her research centers on AI safety, risk, bias, and interpretability, aiming to mitigate bias and prevent digital exclusion across cultural, social, and healthcare domains. At her ResponsibleAI Lab, she leverages generative AI and foundation models with adaptive design strategies to improve model performance for underrepresented groups and to promote equitable access, representation, and cultural inclusion in AI systems. While her work often applies to health and societal contexts, her broader goal is to develop AI systems that embed cultural and ethical awareness, ensuring technology serves diverse communities responsibly. Dr. Seyyed-Kalantari also contributes to policy efforts through the AI Insights for Policymakers Program, initiated by CIFAR and Mila – Quebec Artificial Intelligence Institute (2024–2025). Her accolades include the Google Research Scholar Award (2024), two York University Research Awards (2025) for Outstanding Early Career and for Significant Knowledge Mobilization & Impact, the NSERC Postdoctoral Fellowship (2018), and the Banting Postdoctoral Fellowship (2022, declined). Her pioneering work on fairness in medical imaging has been featured in several prominent technology news outlets.
Research Interests
- AI safety and risks
- Responsible AI
- Generative AI in medical imaging
- Foundation models for drug discovery
- Foundation models in medical imaging
- Bias and fairness in large language models
Selected Awards and Achievements
- Google Research Scholar Program award (2024)
- Distinguished Vision Award at ACM KDD Health Day 2025
- 2025 York University Research Award - Outstanding Early Career
- 2025 York University Research Award - Significant Knowledge Mobilization & Impact
- Banting Postdoctoral Fellowship to join the Massachusetts Institute of Technology (MIT). (National, 2022-2024, declined)
- NSERC Postdoctoral Fellowship. (National, 2018-2020)
- Finalist in the CIFAR ‘AICan 3-M Impact’ Competition. (National, 2021)
- Winning team (1st place) of the Toronto Health Data Hackathon (served as team lead). (Municipal, 2019)
Featured in Tech News
Our 'Nature Medicine' paper demonstrating underdiagnosis bias amplification in AI medical image diagnostic tools was featured in the following tech news:
- Rising to the challenge of bias in healthcare AI, Nature Medicine News & Views, Dec. 10, 2021.
- Trained to underdiagnose, The Imaging Wire, Dec. 16, 2021.
- Vector Institute Research Highlights, Neural Net News, Dec. 21, 2021.
- We need a fairness check in AI pipelines, Artificial Intelligence in Medicine (AIMED), Feb. 24, 2022.
Our 'The Lancet Digital Health' paper demonstrating that AI can detect a patient’s race from medical images was featured in the following tech news, among others:
- Study shows AI deep learning models can detect race in medical imaging, KMGH-TV (Channel 7, ABC affiliate, Denver, Colorado), May 24, 2022.
- AI recognizes patient’s racial identity in medical images, AIMED, May 26, 2022.
- AI’s Ability To Predict Race From X-Rays Alone Sparks, UNILAD, UK, May 23, 2022.
- Imaging AI’s Unseen Potential, The Imaging Wire, May 22, 2022.
- Artificial intelligence predicts patients’ race from their medical images, MIT News, May 20, 2022.
- AI Can Predict People's Race From X-Ray Images, And Scientists Are Concerned, ScienceAlert, Australia, May 19, 2022.
- AI can tell your race from an X-ray image — and scientists can't figure out how, National Post, May 17, 2022.
- Study: AI deep learning models can predict race from imaging results, MobiHealthNews, May 17, 2022.
- MIT, Harvard scientists find AI can recognize race from X-rays — and nobody knows how, The Boston Globe, May 15, 2021.
- Policy brief – Risks of AI race detection in the medical system, Stanford University Human-Centered Artificial Intelligence, Dec. 2021.
- Artificial intelligence can guess a person’s race with up to 99% accuracy just by looking at their X-rays or other medical scans, Daily Mail, Aug. 25, 2021.
- AI can guess your race based on X-rays, and researchers do not know how, VICE newsletter, TECH, Aug. 23, 2021.
- Who will save us from racist AI?, Quillette, Aug. 15, 2021.
- AI Sees Race in X-Rays, The Batch (Essential news for deep learners), Aug. 11, 2021.
- These Algorithms Look at X-Rays—and Somehow Detect Your Race, Wired, Aug. 5, 2021.
- Reading race, The Imaging Wire, Aug. 5, 2021.
Interviewed by the editor-in-chief of The Lancet Digital Health journal for a podcast about our race detection paper.
I was featured as an ‘AI Champion’ by Artificial Intelligence in Medicine (AIMED).
Our PSB 2021 conference paper demonstrating the lack of fairness in medical image classifiers for disease diagnosis was featured in:
- Researchers find evidence of racial, gender, and socioeconomic bias in chest X-ray classifiers, VentureBeat (The Machine: Making sense of AI), Oct. 21, 2020.
- Vector Institute, Internal Newsletter, Oct. 26, 2020.
- Vector Institute Research Highlights, Neural Net News, Nov. 12, 2020.