We study when and how AI systems fail, and how to make their behavior more reliable in critical settings.
This includes:
- **Bias and fairness** – detecting and characterizing underdiagnosis and demographic leakage in imaging AI and language models.
- **Robustness to clinical variation** – stress-testing models under real-world shifts in acquisition, scanners, and protocols.
- **Security & adversarial bias** – understanding “hidden in plain sight” attacks and other subtle ways systems can be manipulated.
- **Best practices for generative AI** – guidelines for the safe use of large language models in radiology and clinical workflows.
The goal is to design evaluation frameworks and mitigation strategies that go beyond accuracy, placing safety and trust at the center of AI deployment.