
The research group of Dr. Vinod Kurmi, the Visual Data Computing Group (VisDom) in the Department of Data Science and Engineering, has developed a new method to reduce bias in deep learning models by limiting their reliance on misleading patterns in data. Modern neural networks often learn spurious correlations: for example, identifying cows by the presence of green pastures, camels by desert backgrounds, or predicting a person's profession from gender-related cues in images. While such shortcuts can improve average accuracy, they lead to biased and unreliable predictions when models encounter new environments or underrepresented groups.

To address this challenge, the researchers developed Alignment-Gated Suppression (AGS), a lightweight training approach that operates within the neural network. The method monitors how strongly individual neurons contribute to predictions and suppresses those with unusually dominant influence during training. By regulating these internal signals, AGS encourages the model to rely on a broader and more meaningful set of features rather than on shortcuts. Experiments show improved robustness and better performance on challenging data groups.

The work has been published at the International Conference on Learning Representations (ICLR) 2026. For more details, please visit https://openreview.net/pdf?id=L2L1hi0FGj.
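The paper's actual formulation is not reproduced here, but the gating idea described above can be illustrated with a minimal NumPy sketch: measure each neuron's contribution over a batch and down-weight neurons whose contribution is unusually dominant. The contribution measure (mean absolute activation), the dominance rule (more than `k` standard deviations above the layer mean), and the suppression factor are all illustrative assumptions, not the AGS method as published.

```python
import numpy as np

def gated_suppression(activations, k=1.5, suppress=0.5):
    """Illustrative sketch of alignment-gated suppression (assumptions,
    not the published AGS rule): neurons whose mean absolute activation
    exceeds the layer mean by more than k standard deviations are
    scaled down by `suppress` during the forward pass."""
    # Per-neuron contribution proxy: mean |activation| over the batch
    contrib = np.abs(activations).mean(axis=0)        # shape: (n_neurons,)
    threshold = contrib.mean() + k * contrib.std()
    # Gate is 1.0 for ordinary neurons, `suppress` for dominant ones
    gate = np.where(contrib > threshold, suppress, 1.0)
    return activations * gate                         # broadcast over batch

# Example: neuron 2 dominates the batch and gets suppressed
acts = np.ones((4, 5))
acts[:, 2] = 10.0
out = gated_suppression(acts)
```

In a real training loop such a gate would sit inside the network (e.g. applied to a hidden layer's activations each step), so that the loss gradient steers the model toward the broader feature set the surviving neurons represent.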