Graph-based generation of a meta-learning search space
Norbert Jankowski
International Journal of Applied Mathematics and Computer Science, Volume 22 (2012), pp. 647-667 / Harvested from The Polish Digital Mathematics Library

Meta-learning is becoming more and more important in current and future research centered on broadly defined data mining and computational intelligence. It can solve problems that no single, specialized algorithm can solve alone. The overall characteristic of each meta-learning algorithm depends mainly on two elements: the learning machine space and the supervisory procedure. The former restricts the space of all possible learning machines to a subspace to be browsed by the meta-learning algorithm. The latter determines the order in which selected learning machines are examined, using a module responsible for machine complexity evaluation; it also organizes tests and analyzes their results. In this article we present a framework for meta-learning search that can be seen as a method of sophisticated description and evaluation of functional search spaces of learning machine configurations used in meta-learning. Machine spaces are defined by specially constructed graphs whose vertices are specialized machine configuration generators. Using such graphs, the learning machine space may be modeled in a much more flexible way, depending on the characteristics of the problem considered and on a priori knowledge. The presented method of search space description is used together with an advanced algorithm which orders test tasks according to their complexities.
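The idea described above can be illustrated with a minimal sketch (this is a hypothetical toy example, not the paper's actual framework): vertices of a small directed graph act as configuration generators, paths through the graph compose learning machine configurations, and a crude complexity proxy orders the resulting test tasks. All names (`generators`, `enumerate_configurations`, the `cost` field) are illustrative assumptions.

```python
# Hypothetical sketch of a generator graph for a meta-learning search space.
# Each vertex generates alternative component configurations; edges define
# which components may be chained into a learning machine configuration.
from itertools import product

# Vertex -> list of alternative configurations this vertex can generate.
generators = {
    "transform": [{"step": "none", "cost": 0},
                  {"step": "feature_selection", "cost": 2}],
    "classifier": [{"step": "knn", "cost": 1},
                   {"step": "svm", "cost": 3}],
}

# Directed edges of the generator graph (which outputs may be composed).
edges = [("transform", "classifier")]

def enumerate_configurations():
    """Compose one machine configuration per path through the graph,
    then order them by an assumed complexity estimate, so that the
    supervisory procedure can test cheaper machines first."""
    configs = []
    for src, dst in edges:
        for a, b in product(generators[src], generators[dst]):
            configs.append({
                "pipeline": [a["step"], b["step"]],
                "complexity": a["cost"] + b["cost"],  # crude proxy
            })
    return sorted(configs, key=lambda c: c["complexity"])

for cfg in enumerate_configurations():
    print(cfg["pipeline"], cfg["complexity"])
```

In this toy setting the search space is the Cartesian product of the generators along each edge; the actual framework supports far richer graph structures, but the ordering-by-complexity step plays the same role as sketched here.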

Published: 2012-01-01
EUDML-ID : urn:eudml:doc:244065
@article{bwmeta1.element.bwnjournal-article-amcv22z3p647bwm,
     author = {Norbert Jankowski},
     title = {Graph-based generation of a meta-learning search space},
     journal = {International Journal of Applied Mathematics and Computer Science},
     volume = {22},
     year = {2012},
     pages = {647-667},
     language = {en},
     url = {http://dml.mathdoc.fr/item/bwmeta1.element.bwnjournal-article-amcv22z3p647bwm}
}
Norbert Jankowski. Graph-based generation of a meta-learning search space. International Journal of Applied Mathematics and Computer Science, Volume 22 (2012), pp. 647-667. http://gdmltest.u-ga.fr/item/bwmeta1.element.bwnjournal-article-amcv22z3p647bwm/