"Evolving multi-label fuzzy classifier"
, in Information Sciences, Vol. 597, Elsevier, Seite(n) 1-23, 3-2022, ISSN: 1872-6291
Multi-label classification has attracted much attention in the machine learning community because it addresses the problem of assigning single samples to more than one (not necessarily mutually exclusive) class at the same time. We propose an evolving multi-label fuzzy classifier (EFC-ML) which is able to self-adapt and self-evolve its structure with new incoming multi-label samples in an incremental, single-pass manner. It is based on a multi-output Takagi-Sugeno type architecture, where a separate consequent hyper-plane is defined for each class, which yields flexibility for partially approximating the respective classes in a binary regression context. The learning procedure embeds a locally weighted, incremental, correlation-based algorithm combined with (conventional) recursive fuzzily weighted least squares and Lasso-based regularization. Locality is important to avoid the out-masking effect of single class labels in one or more rules; the correlation-based part ensures that the interrelations between class labels, a well-known property for improving multi-label classification performance, are properly preserved; the Lasso-based regularization reduces curse-of-dimensionality effects in the case of a higher number of inputs. Antecedent learning is achieved by product-space clustering and is conducted for all class labels together, yielding a single rule base (as opposed to related techniques such as one-versus-rest or classifier chaining, which produce multiple rule bases, one per class); this allows a compact knowledge view and thus better interpretable insights. Furthermore, our approach comes with an online active learning (AL) strategy that updates the classifier on only a (smaller) number of selected samples, which in turn makes the approach applicable to scarcely labelled streams in applications where the annotation effort is typically expensive.
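To make the architecture concrete, the following is a minimal inference sketch of a multi-output Takagi-Sugeno classifier with a single shared rule base: Gaussian antecedents in the product space, one consequent hyper-plane per rule and per class label, and thresholding of the blended outputs for the multi-label decision. All shapes, names, and the 0.5 threshold are illustrative assumptions, not the paper's exact formulation; the paper's incremental learning of these parameters is not shown.

```python
import numpy as np

def ts_multilabel_predict(x, centers, widths, theta, threshold=0.5):
    """Sketch of multi-output TS inference (shapes are assumptions).

    centers, widths : (R, d) Gaussian antecedents, one rule base shared
                      by all L class labels.
    theta           : (R, d+1, L) consequent hyper-plane coefficients,
                      one plane per rule and per class label.
    """
    # Rule activations: product of per-dimension Gaussian memberships
    mu = np.exp(-0.5 * ((x - centers) / widths) ** 2).prod(axis=1)  # (R,)
    psi = mu / mu.sum()                        # normalized firing levels
    xe = np.append(x, 1.0)                     # affine regressor [x, 1]
    y_rules = np.einsum("i,ril->rl", xe, theta)  # (R, L) hyper-plane outputs
    y = psi @ y_rules                          # (L,) fuzzy-blended scores
    return y, (y >= threshold).astype(int)     # scores + binary label vector
```

Because every label shares the same antecedent rules, inspecting the R rules gives one compact view of the whole classifier, in contrast to one-versus-rest schemes that maintain L separate rule bases.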
The AL strategy is based on three essential concepts: novelty content in the antecedent space, uncertainty due to ambiguity in the consequent (output) space, and parameter instability reduction, all combined with an upper allowed selection budget (which can be predefined by the user). Our approach was evaluated on several data sets from the MULAN repository and showed significantly improved classification accuracy and average-precision trend lines compared to (evolving) one-versus-rest and classifier-chaining concepts. A key result is that, thanks to the online AL method, a 90% reduction in the number of samples used for classifier updates had little effect on the accumulated accuracy trend lines, compared to a full update, in most data set cases.
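The single-sample selection logic of such an online AL strategy can be sketched as follows. This is a simplified illustration of two of the three concepts (antecedent novelty and output ambiguity) under a hard budget cap; the threshold values, function name, and the omission of the parameter-instability criterion are all assumptions for illustration, not the paper's actual formulas or settings.

```python
import numpy as np

def select_for_update(mu_max, scores, budget_used, budget_max,
                      nov_thresh=0.1, amb_band=0.2, threshold=0.5):
    """Decide whether to request a label for one stream sample (sketch).

    mu_max      : highest rule activation; a low value signals novelty
                  in the antecedent space (no existing rule covers x).
    scores      : per-label TS outputs; values near `threshold` signal
                  ambiguity in the consequent (output) space.
    budget_used : fraction of stream samples annotated so far.
    budget_max  : upper allowed selection budget (user-predefined).
    """
    if budget_used >= budget_max:           # hard budget cap
        return False
    novel = mu_max < nov_thresh             # sample outside existing rules
    ambiguous = bool(np.any(np.abs(scores - threshold) < amb_band))
    return novel or ambiguous
```

Only samples passing this filter would be sent to an annotator and used for the incremental update, which is how a large fraction of labelling effort can be saved while keeping the accuracy trend lines close to those of a full update.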