Intrinsic interpretable machine learning frameworks for image classification
Date
2023-01
Authors
Πιντέλας, Εμμανουήλ
Abstract
In general, the interpretability and explainability domain in machine learning aims to provide explanations for the predictions made by intelligent models used in practical application domains. Interpretability/explainability has recently become a significant concern because many real-world applications need justification for the decisions made by their ML models: being able to comprehend a model's decision/prediction mechanism is essential in order to trust it, particularly in critical situations. An area where explainability is extremely important is medical applications such as cancer prognosis, which pose crucial "life or death" decision dilemmas. In these situations, the effectiveness of a machine learning decision system must be assessed in terms of both accuracy and interpretability.
Additionally, the European Union General Data Protection Regulation (GDPR) has made explainable models a requirement in real-world applications. In 2018 the GDPR established a "right to explanation" for any automated decision made by artificial intelligence models. This legislation encourages the development of algorithmic frameworks that guarantee an explanation for every prediction made by a machine learning system, a demand that is now legally binding.
White-box, or interpretable, models are prediction models whose decision processes are comprehensible, whereas explainable models are able to provide human-understandable justification for their decisions. These properties are crucial for establishing confidence in a model's predictions, especially when those predictions concern vital issues such as health, rights, security, and education.
Convolutional Neural Networks (CNNs) have flourished in machine learning and computer vision, particularly in image classification, owing to their success as highly effective image feature extractors. A CNN is nevertheless regarded as a "black box" model: the features it generates are computed by an extremely complex feature extraction function and have no practical human meaning, so they cannot be understood. In general, any prediction model whose decision function is non-transparent, difficult to explain, or otherwise incomprehensible is regarded as a black-box model.
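To make the "black box" point concrete, the minimal sketch below (an illustrative toy architecture, not a model from this thesis) passes an image through a small untrained convolutional feature extractor in PyTorch; the resulting vector entries are nonlinear mixtures of many pixels and carry no meaning a human could name.

# Hypothetical sketch of why CNN features are hard to interpret: a small
# (untrained) convolutional network turns an image into a numeric vector
# whose individual dimensions carry no human meaning. The architecture is
# an arbitrary assumption, not one used in the thesis.
import torch
import torch.nn as nn

# Toy feature extractor: conv -> relu -> pool, twice, then flatten.
extractor = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
)

image = torch.rand(1, 3, 32, 32)   # random stand-in for an input image
features = extractor(image)        # shape: (1, 16 * 8 * 8) = (1, 1024)

# Each of the 1024 values mixes information from many pixels; unlike
# "mean brightness" or "edge count", no single entry corresponds to a
# concept a human can name -- which is what makes the model a black box.
print(features.shape)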
The main focus of this PhD thesis is the development of novel machine learning frameworks for image classification tasks that are trustworthy, comprehensible, and explainable. In particular, we introduced and developed new data mining and feature extraction techniques in order to build intrinsically interpretable/explainable machine learning models for image recognition applications. Our experimental results demonstrate the effectiveness of the proposed methods.
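As a loose illustration of this direction only (not the thesis's actual framework), the hypothetical sketch below feeds a few hand-crafted, human-readable image features to a shallow decision tree whose learned rules can be printed and read directly; the quadrant-mean features and the scikit-learn digits dataset are illustrative assumptions.

# Hypothetical sketch: human-readable image features + a white-box classifier.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

def quadrant_features(img):
    """Mean brightness of each image quadrant -- features a human can read."""
    h, w = img.shape
    return np.array([
        img[: h // 2, : w // 2].mean(),   # upper-left
        img[: h // 2, w // 2:].mean(),    # upper-right
        img[h // 2:, : w // 2].mean(),    # lower-left
        img[h // 2:, w // 2:].mean(),     # lower-right
    ])

digits = load_digits()                    # small 8x8 grayscale digit images
X = np.stack([quadrant_features(img) for img in digits.images])
y = digits.target
feature_names = ["upper_left_mean", "upper_right_mean",
                 "lower_left_mean", "lower_right_mean"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# A shallow tree keeps every decision path short enough to read end to end.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

print("test accuracy:", tree.score(X_test, y_test))
print(export_text(tree, feature_names=feature_names))  # human-readable rules

The accuracy of such a toy pipeline is of course modest; the point is that every decision path can be traced back to named, meaningful features rather than opaque CNN activations.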
Keywords
Machine learning, Interpretable machine learning, Explainable machine learning, Image classification, Deep learning, Computer vision