Explaining Classifier Decisions Linguistically for Stimulating and Improving Operators Labeling Behavior
In decision support and classification systems, operators or experts usually need to provide class labels for a significant number of process samples so that reliable machine learning classifiers can be established. Such labels are often affected by significant uncertainty and inconsistency due to the varying experience and constitution of the humans performing the labeling, which typically results in significant, unintended class overlaps. We propose several new concepts for providing enhanced explanations of classifier decisions in linguistic (human-readable) form. These are intended to help operators better understand the decision process and to support them during sample annotation, improving their certainty and consistency in successive labeling cycles. This is expected to lead to better, more consistent data sets (streams) for training and updating classifiers. The enhanced explanations comprise (1) grounded reasons for classification decisions, represented as linguistically readable fuzzy rules, (2) a classifier's level of uncertainty about its decisions together with possible alternative suggestions, (3) the degree of novelty of the current sample and (4) the levels of impact of the input features on the current classification response. The last of these is based on a newly developed approach for eliciting instance-based feature importance levels, which is also used to shorten the rules to a maximum of 3 to 4 antecedent parts so that they remain readable for operators and users. The proposed techniques were embedded in an annotation GUI and applied to a real-world application scenario from the field of visual inspection. The usefulness of the proposed linguistic explanations was evaluated in experiments conducted with six operators.
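The idea of shortening rules via instance-based feature importance can be illustrated with a minimal sketch. This is not the paper's actual algorithm; the feature names, importance values, and the helper `shorten_rule` are hypothetical, and the sketch simply keeps the k antecedent parts with the highest importance for the current sample:

```python
def shorten_rule(antecedents, importance, k=3):
    """Keep the k antecedent parts with the highest instance-based importance.

    antecedents: dict mapping feature name -> linguistic term, e.g. {"contrast": "HIGH"}
    importance:  dict mapping feature name -> importance level in [0, 1]
                 for the current sample (features missing here count as 0)
    """
    top = sorted(antecedents, key=lambda f: importance.get(f, 0.0), reverse=True)[:k]
    return {f: antecedents[f] for f in top}

# Hypothetical 5-part fuzzy rule and per-instance importance levels
rule = {"contrast": "HIGH", "area": "MEDIUM", "eccentricity": "LOW",
        "perimeter": "HIGH", "gray_mean": "MEDIUM"}
imp = {"contrast": 0.9, "area": 0.2, "eccentricity": 0.7,
       "perimeter": 0.1, "gray_mean": 0.4}

short = shorten_rule(rule, imp, k=3)
# short keeps only the three most important antecedents:
# {"contrast": "HIGH", "eccentricity": "LOW", "gray_mean": "MEDIUM"}
```

Such a reduced rule ("IF contrast is HIGH AND eccentricity is LOW AND gray_mean is MEDIUM THEN …") is short enough to be read at a glance during annotation, while the dropped antecedents contribute little to the current classification response.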