A face database with a large number of high-quality attribute annotations
Updated Dec 14, 2021
Oracle Guardian AI Open Source Project: an open-source library of tools to assess and improve the trustworthiness of AI systems, covering the fairness/bias and privacy of machine learning models and data sets.
Source code and notebooks to reproduce experiments and benchmarks on Balanced Faces in the Wild (BFW).
Structured pruning and bias visualization for Large Language Models. Tools for LLM optimization and fairness analysis.
Evidence-based tools and community collaboration to end algorithmic bias, one data scientist at a time.
A Julia toolkit with fairness metrics and bias-mitigation algorithms.
[Nature Medicine] The Limits of Fair Medical Imaging AI in Real-World Generalization
Official code of "Discover and Mitigate Unknown Biases with Debiasing Alternate Networks" (ECCV 2022)
[Science Advances] Demographic Bias of Vision-Language Foundation Models in Medical Imaging
CIRCLe: Color Invariant Representation Learning for Unbiased Classification of Skin Lesions
Mitigating ML bias in gravitational-wave detection pipelines
Individual Fair Nonnegative Matrix Tri-Factorization
[ICCV 2023] Partition-and-Debias: Agnostic Biases Mitigation via a Mixture of Biases-Specific Experts
Ethnic bias analysis in medical imaging AI: Demonstrating that explainable-by-design models achieve 80% bias reduction across 5 ethnic groups (50k images)
Code implementation for BiasMitigationRL, a reinforcement learning-based bias mitigation method.
Research proof of concept on mitigating bias in large language models (FLAN-T5 and Bloomz) through fine-tuning.
Enforcing fairness in binary and multiclass classification
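Several of the toolkits above report group-fairness metrics such as demographic parity. As a generic illustration (not code from any listed repository; the function name `demographic_parity_difference` is hypothetical here), the metric is just the gap in positive-prediction rates across groups:

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Largest gap in positive-prediction rate between any two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    # Positive-prediction rate per group value
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

# Toy example: group 0 receives positives at 0.75, group 1 at 0.25
y_pred = np.array([1, 1, 0, 1, 0, 0, 0, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_difference(y_pred, group))  # 0.5
```

A value of 0 would mean both groups receive positive predictions at the same rate; mitigation methods like those in the repositories above aim to push this gap toward 0 without destroying accuracy.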
Talk and experiments on explainable AI and fashion.
Code repository for paper: Fairness-Aware Data Augmentation for Cardiac MRI using Text-Conditioned Diffusion Models