Multiclassifier systems (MCSs) are widely applied to machine learning problems across many domains. Over the last two decades a variety of ensemble systems have been developed, but there is still room for improvement. This paper focuses on developing competence and interclass cross-competence measures that can be applied as a method for classifier combination. The cross-competence measure allows an ensemble to exploit information obtained from incompetent classifiers instead of removing them from the ensemble. The cross-competence measure, originally determined from a validation set (static mode), can easily be updated using additional feedback information on correct/incorrect classification during the recognition process (dynamic mode). An analysis of the computational and storage complexity of the proposed method is presented. The performance of the MCS with the proposed cross-competence function was experimentally compared against five reference MCSs for the static mode and one reference MCS for the dynamic mode. Results for the static mode show that the proposed technique is comparable with the reference methods in terms of classification accuracy. For the dynamic mode, the developed system achieves the highest classification accuracy, demonstrating the potential of the MCS for practical applications when feedback information is available.
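The dynamic mode described in the abstract (updating competence estimates from feedback on correct/incorrect classification) can be illustrated with a minimal sketch. All names here are hypothetical and this is not the authors' implementation; it only shows the general idea of maintaining a per-classifier confusion matrix whose diagonal mass acts as a competence score and whose off-diagonal entries support a cross-competence-style use of systematic mistakes:

```python
import numpy as np

class CompetenceTracker:
    """Hypothetical sketch: a per-classifier confusion matrix whose
    diagonal mass serves as a simple competence estimate."""

    def __init__(self, n_classes, prior=1.0):
        # A Laplace-style prior keeps early estimates away from 0/1;
        # in static mode, counts would come from a validation set.
        self.counts = np.full((n_classes, n_classes), prior)

    def update(self, predicted, true):
        # Dynamic mode: feedback on the true class during recognition
        # updates the stored confusion counts.
        self.counts[true, predicted] += 1.0

    def competence(self):
        # Fraction of (smoothed) mass on the diagonal.
        return np.trace(self.counts) / self.counts.sum()

    def cross_competence(self, seen, target):
        # Estimated probability that a prediction of class `seen`
        # actually indicates class `target`: this lets the ensemble
        # exploit a classifier's systematic errors instead of
        # discarding it as incompetent.
        return self.counts[target, seen] / self.counts[:, seen].sum()
```

A classifier that consistently confuses class 1 with class 0 would have low competence, yet its `cross_competence(seen=0, target=1)` would be high, so its (wrong) votes still carry usable information.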
@article{bwmeta1.element.bwnjournal-article-amcv26i1p175bwm, author = {Pawel Trajdos and Marek Kurzynski}, title = {A dynamic model of classifier competence based on the local fuzzy confusion matrix and the random reference classifier}, journal = {International Journal of Applied Mathematics and Computer Science}, volume = {26}, year = {2016}, pages = {175-189}, zbl = {1336.62196}, language = {en}, url = {http://dml.mathdoc.fr/item/bwmeta1.element.bwnjournal-article-amcv26i1p175bwm} }
Pawel Trajdos; Marek Kurzynski. A dynamic model of classifier competence based on the local fuzzy confusion matrix and the random reference classifier. International Journal of Applied Mathematics and Computer Science, Volume 26 (2016), pp. 175-189. http://gdmltest.u-ga.fr/item/bwmeta1.element.bwnjournal-article-amcv26i1p175bwm/
[000] Bache, K. and Lichman, M. (2013). UCI machine learning repository, http://archive.ics.uci.edu/ml.
[001] Berger, J.O. (1985). Statistical Decision Theory and Bayesian Analysis, Springer-Verlag, New York, NY. | Zbl 0572.62008
[002] Bishop, C. (1995). Neural Networks for Pattern Recognition, Clarendon Press/Oxford University Press, Oxford/New York, NY. | Zbl 0868.68096
[003] Blum, A. (1998). On-line algorithms in machine learning, in A. Fiat and G.J. Woeginger (Eds.), Developments from a June 1996 Seminar on Online Algorithms: The State of the Art, Springer-Verlag, London, pp. 306-325.
[004] Breiman, L. (1996). Bagging predictors, Machine Learning 24(2): 123-140. | Zbl 0858.68080
[005] Breiman, L., Friedman, J., Olshen, R. and Stone, C. (1984). Classification and Regression Trees, Wadsworth and Brooks, Monterey, CA. | Zbl 0541.62042
[006] Cover, T. and Hart, P. (1967). Nearest neighbor pattern classification, IEEE Transactions on Information Theory 13(1): 21-27, DOI:10.1109/TIT.1967.1053964. | Zbl 0154.44505
[007] Dai, Q. (2013). A competitive ensemble pruning approach based on cross-validation technique, Knowledge-Based Systems 37(9): 394-414, DOI: 10.1016/j.knosys.2012.08.024.
[008] Demšar, J. (2006). Statistical comparisons of classifiers over multiple data sets, The Journal of Machine Learning Research 7: 1-30. | Zbl 1222.68184
[009] Devroye, L., Györfi, L. and Lugosi, G. (1996). A Probabilistic Theory of Pattern Recognition, Springer, New York, NY. | Zbl 0853.68150
[010] Didaci, L., Giacinto, G., Roli, F. and Marcialis, G.L. (2005). A study on the performances of dynamic classifier selection based on local accuracy estimation, Pattern Recognition 38(11): 2188-2191. | Zbl 1077.68797
[011] Dietterich, T.G. (2000). Ensemble methods in machine learning, Proceedings of the 1st International Workshop on Multiple Classifier Systems, MCS'00, Cagliari, Italy, pp. 1-15.
[012] Dunn, O.J. (1961). Multiple comparisons among means, Journal of the American Statistical Association 56(293): 52-64. | Zbl 0103.37001
[013] Fraz, M.M., Remagnino, P., Hoppe, A., Uyyanonvara, B., Rudnicka, A.R., Owen, C.G. and Barman, S. (2012). An ensemble classification-based approach applied to retinal blood vessel segmentation, IEEE Transactions on Biomedical Engineering 59(9): 2538-2548.
[014] Freund, Y. and Schapire, R. (1996). Experiments with a new boosting algorithm, Machine Learning: Proceedings of the 13th International Conference, Bari, Italy, pp. 148-156.
[015] Friedman, M. (1940). A comparison of alternative tests of significance for the problem of m rankings, The Annals of Mathematical Statistics 11(1): 86-92, DOI: 10.1214/aoms/1177731944. | Zbl 66.1305.08
[016] Gama, J. (2010). Knowledge Discovery from Data Streams, 1st Edn., Chapman & Hall/CRC, London. | Zbl 1230.68017
[017] Giacinto, G. and Roli, F. (2001). Dynamic classifier selection based on multiple classifier behaviour, Pattern Recognition 34(9): 1879-1881. | Zbl 0995.68100
[018] Holm, S. (1979). A simple sequentially rejective multiple test procedure, Scandinavian Journal of Statistics 6(2): 65-70. | Zbl 0402.62058
[019] Hsieh, N.-C. and Hung, L.-P. (2010). A data driven ensemble classifier for credit scoring analysis, Expert Systems with Applications 37(1): 534-545.
[020] Huenupán, F., Yoma, N.B., Molina, C. and Garretón, C. (2008). Confidence based multiple classifier fusion in speaker verification, Pattern Recognition Letters 29(7): 957-966.
[021] Jurek, A., Bi, Y., Wu, S. and Nugent, C. (2013). A survey of commonly used ensemble-based classification techniques, The Knowledge Engineering Review 29(5): 551-581, DOI: 10.1017/s0269888913000155.
[022] Kittler, J. (1998). Combining classifiers: A theoretical framework, Pattern Analysis and Applications 1(1): 18-27.
[023] Ko, A.H., Sabourin, R. and Britto, Jr., A.S. (2008). From dynamic classifier selection to dynamic ensemble selection, Pattern Recognition 41(5): 1718-1731. | Zbl 1140.68466
[024] Kuncheva, L.I. (2004). Combining Pattern Classifiers: Methods and Algorithms, 1st Edn., Wiley-Interscience, New York, NY. | Zbl 1066.68114
[025] Kuncheva, L.I. and Rodríguez, J.J. (2014). A weighted voting framework for classifiers ensembles, Knowledge-Based Systems 38(2): 259-275.
[026] Kurzynski, M. (1987). Diagnosis of acute abdominal pain using three-stage classifier, Computers in Biology and Medicine 17(1): 19-27.
[027] Kurzynski, M., Krysmann, M., Trajdos, P. and Wolczowski, A. (2014). Two-stage multiclassifier system with correction of competence of base classifiers applied to the control of bioprosthetic hand, IEEE International Conference on Tools with Artificial Intelligence, ICTAI 2014, Limassol, Cyprus.
[028] Kurzynski, M. and Wolczowski, A. (2012). Control system of bioprosthetic hand based on advanced analysis of biosignals and feedback from the prosthesis sensors, Proceedings of the 3rd International Conference on Information Technologies in Biomedicine, ITIB'12, Kamień Śląski, Poland, pp. 199-208.
[029] Mamoni, D. (2013). On cardinality of fuzzy sets, International Journal of Intelligent Systems and Applications 5(6): 47-52.
[030] Plumpton, C.O. (2014). Semi-supervised ensemble update strategies for on-line classification of FMRI data, Pattern Recognition Letters 37: 172-177.
[031] Plumpton, C.O., Kuncheva, L.I., Oosterhof, N.N. and Johnston, S.J. (2012). Naive random subspace ensemble with linear classifiers for real-time classification of FMRI data, Pattern Recognition 45(6): 2101-2108.
[032] R Core Team (2012). R: A Language and Environment for Statistical Computing, R Foundation for Statistical Computing, Vienna, http://www.R-project.org/.
[033] Rokach, L. (2010). Ensemble-based classifiers, Artificial Intelligence Review 33(1-2): 1-39.
[034] Rokach, L. and Maimon, O. (2005). Clustering methods, Data Mining and Knowledge Discovery Handbook, Springer Science + Business Media, New York, NY, pp. 321-352. | Zbl 1087.68029
[035] Rousseeuw, P. (1987). Silhouettes: A graphical aid to the interpretation and validation of cluster analysis, Journal of Computational and Applied Mathematics 20(1): 53-65. | Zbl 0636.62059
[036] Schölkopf, B. and Smola, A.J. (2001). Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, MIT Press, Cambridge, MA.
[037] Tahir, M.A., Kittler, J. and Bouridane, A. (2012). Multilabel classification using heterogeneous ensemble of multi-label classifiers, Pattern Recognition Letters 33(5): 513-523.
[038] Tsoumakas, G., Katakis, I. and Vlahavas, I. (2011). Random k-labelsets for multi-label classification, IEEE Transactions on Knowledge and Data Engineering 23(7): 1079-1089.
[039] Valdovinos, R. and Sánchez, J. (2009). Combining multiple classifiers with dynamic weighted voting, in E. Corchado et al. (Eds.), Hybrid Artificial Intelligence Systems, Lecture Notes in Computer Science, Vol. 5572, Springer, Berlin/Heidelberg, pp. 510-516.
[040] Ward, J. (1963). Hierarchical grouping to optimize an objective function, Journal of the American Statistical Association 58(301): 236-244.
[041] Wilcoxon, F. (1945). Individual comparisons by ranking methods, Biometrics Bulletin 1(6): 80-83.
[042] Woloszynski, T. (2013). Classifier competence based on probabilistic modeling (ccprmod.m) at MATLAB Central File Exchange, http://www.mathworks.com/matlabcentral/fileexchange/28391-a-probabilistic-model-of-classifier-competence.
[043] Woloszynski, T. and Kurzynski, M. (2011). A probabilistic model of classifier competence for dynamic ensemble selection, Pattern Recognition 44(10-11): 2656-2668. | Zbl 1218.68155
[044] Woloszynski, T., Kurzynski, M., Podsiadlo, P. and Stachowiak, G.W. (2012). A measure of competence based on random classification for dynamic ensemble selection, Information Fusion 13(3): 207-213.
[045] Wolpert, D.H. (1992). Stacked generalization, Neural Networks 5(2): 241-259.
[046] Wozniak, M., Graña, M. and Corchado, E. (2014). A survey of multiple classifier systems as hybrid systems, Information Fusion 16(1): 3-17.