The Trustworthy BiometraVision Lab, led by Dr. Akshay Agarwal in the Department of Data Science and Engineering, has developed a comprehensive framework for analyzing the robustness of deep learning models against image corruption. With thousands of neural networks available, selecting a reliable one is a significant challenge, as their vulnerability to different types of image degradation remains poorly understood. This extensive study evaluates 18 deep neural networks (DNNs), ranging from pure CNNs to state-of-the-art Vision Transformers (ViTs), across 5 datasets and 15 types of corruption. The findings reveal critical, and sometimes surprising, vulnerabilities: for example, ViTs are highly robust against noise but weak against environmental corruptions, while traditional CNNs show the opposite behavior. The research further demonstrates that these corruptions can deceive even popular explainability algorithms such as Grad-CAM, which may highlight the correct object yet still accompany a misclassification. By systematically mapping which networks are vulnerable to which corruptions, this work provides practical guidance for selecting the right model for real-world deployments where specific image corruptions are common. For more details, see: https://ieeexplore.ieee.org/document/11098661
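The core measurement behind such a study is a clean-versus-corrupted accuracy comparison. The sketch below is a minimal, hypothetical illustration of that protocol (it is not the paper's code): it corrupts a toy dataset with Gaussian noise, one of the standard corruption types, and shows how a simple stand-in classifier's accuracy drops, the same comparison the study runs at scale across 18 DNNs, 5 datasets, and 15 corruptions.

```python
import numpy as np

def gaussian_noise(images, sigma=0.2, seed=0):
    """Corrupt images with additive Gaussian noise, clipped back to [0, 1]."""
    rng = np.random.default_rng(seed)
    return np.clip(images + rng.normal(0.0, sigma, images.shape), 0.0, 1.0)

def texture_classifier(images):
    """Toy stand-in 'model': predict 1 ('textured') if per-image std > 0.15."""
    return (images.reshape(len(images), -1).std(axis=1) > 0.15).astype(int)

def accuracy(model, images, labels):
    """Fraction of images the model labels correctly."""
    return float(np.mean(model(images) == labels))

# Toy dataset: 100 smooth images (label 0) and 100 textured images (label 1).
rng = np.random.default_rng(42)
smooth = np.full((100, 16, 16), 0.5)
textured = rng.uniform(0.0, 1.0, (100, 16, 16))
images = np.concatenate([smooth, textured])
labels = np.array([0] * 100 + [1] * 100)

# The clean-vs-corrupted comparison: noise inflates the std of smooth
# images past the threshold, so they get misclassified as textured.
clean_acc = accuracy(texture_classifier, images, labels)
corrupt_acc = accuracy(texture_classifier, gaussian_noise(images), labels)
print(f"clean accuracy:     {clean_acc:.2f}")
print(f"corrupted accuracy: {corrupt_acc:.2f}")
```

In a real evaluation the stand-in classifier would be replaced by each pretrained network under test, and the noise function by the full suite of corruption types at several severity levels; the accuracy gap per (network, corruption) pair is what yields the vulnerability map described above.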