Case Study

Intel Labs Mitigates AI Bias in Foundational Multimodal Models by 20 Percent
Intel Labs reduced AI bias in foundational multimodal models by up to 20% using a novel approach based on social counterfactuals. Their Cognitive AI team built a large dataset of synthetic images that vary by race, gender, and physical traits across 260 occupations. Trained using Intel® Gaudi® 2 AI accelerators and 3rd Gen Intel® Xeon® processors, the models were analyzed using Retrieval-Augmented Generation (RAG) and filtered for quality and safety. The open-source dataset and findings aim to minimize bias in AI outputs and improve fairness across AI applications.
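To make the social-counterfactual idea concrete, here is a minimal sketch (not Intel Labs' actual pipeline; all names and attribute lists are illustrative assumptions) of how prompts for such a dataset might be generated: for each occupation, every combination of social attributes yields a prompt that is identical except for those attributes.

```python
# Hypothetical sketch of social-counterfactual prompt generation.
# The attribute lists below are illustrative; the real dataset spans
# 260 occupations and richer race/gender/physical-trait categories.
from itertools import product

OCCUPATIONS = ["doctor", "engineer", "teacher"]
GENDERS = ["man", "woman"]
RACES = ["Asian", "Black", "White"]

def counterfactual_prompts(occupation):
    """Return prompts that differ only in the varied social attributes."""
    return [
        f"a photo of a {race} {gender} working as a {occupation}"
        for race, gender in product(RACES, GENDERS)
    ]

def build_prompt_set(occupations):
    """Map each occupation to its full set of counterfactual prompts."""
    return {occ: counterfactual_prompts(occ) for occ in occupations}

prompts = build_prompt_set(OCCUPATIONS)
# Each occupation gets len(RACES) * len(GENDERS) = 6 counterfactual prompts.
print(len(prompts["doctor"]))
```

Because each group of prompts is identical apart from the social attributes, differences in a model's outputs across the group can be attributed to those attributes, which is what makes counterfactual sets useful for measuring and mitigating bias.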
