Emotion learning: Solving a shortest path problem in an arbitrary deterministic environment in linear time with an emotional agent
Silvana Petruseva
International Journal of Applied Mathematics and Computer Science, Volume 18 (2008), pp. 409-421 / Harvested from The Polish Digital Mathematics Library

The paper presents an algorithm which, using an emotional agent, solves the shortest path problem in an arbitrary deterministic environment with n states in linear time. The algorithm is derived from an earlier algorithm which solves the same problem in exponential time; the agent architecture used for solving the problem is the NN-CAA (neural network crossbar adaptive array) architecture. By implementing emotion learning, the agent architecture is modified and the linear-time algorithm is obtained. Excluding the initialization operations, the complexity of the algorithm in general does not depend on the number of states n, but only on the length of the shortest path; depending on the position of the goal state, the complexity is at most O(n). It can be concluded that the choice of the function which evaluates the emotional state of the agent plays a decisive role in solving the problem efficiently: the function should provide as detailed information as possible about the consequences of the agent's actions, starting even from the initial state. In this way the function implements properties of human emotions.
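
The full algorithm and the NN-CAA architecture are given in the paper itself; the sketch below only illustrates, under stated assumptions, the general idea summarized in the abstract: the agent assigns an "emotional" (desirability) value to each state by propagating consequences backward from the goal and then acts greedily on those values, which yields a shortest path in a deterministic environment after visiting each state and transition once. The names, data structures and update rule are illustrative assumptions, not the paper's implementation.

# Minimal sketch (assumed, illustrative; not the paper's NN-CAA implementation).
from collections import deque

def learn_emotional_values(transitions, goal):
    """transitions[s] is the list of states reachable from s in one step."""
    # Backward propagation of an "emotional" value: the goal is maximally
    # pleasant (0); every other state is less pleasant the farther it is
    # from the goal. Each state is visited once, so this phase is linear
    # in the number of states and transitions.
    predecessors = {}
    for s, successors in transitions.items():
        for t in successors:
            predecessors.setdefault(t, []).append(s)
    value = {goal: 0}
    queue = deque([goal])
    while queue:
        t = queue.popleft()
        for s in predecessors.get(t, []):
            if s not in value:
                value[s] = value[t] - 1
                queue.append(s)
    return value

def shortest_path(transitions, start, goal):
    value = learn_emotional_values(transitions, goal)
    if start not in value:
        return None  # the goal cannot be reached from the start state
    path = [start]
    while path[-1] != goal:
        # Act greedily: move to the most "pleasant" successor state.
        path.append(max(transitions[path[-1]],
                        key=lambda t: value.get(t, float("-inf"))))
    return path

# Example: a small deterministic environment with 5 states.
environment = {0: [1, 2], 1: [3], 2: [3], 3: [4], 4: []}
print(shortest_path(environment, 0, 4))  # e.g. [0, 1, 3, 4]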

Published: 2008-01-01
EUDML-ID: urn:eudml:doc:207896
@article{bwmeta1.element.bwnjournal-article-amcv18i3p409bwm,
     author = {Silvana Petruseva},
     title = {Emotion learning: Solving a shortest path problem in an arbitrary deterministic environment in linear time with an emotional agent},
     journal = {International Journal of Applied Mathematics and Computer Science},
     volume = {18},
     year = {2008},
     pages = {409-421},
     zbl = {1178.90171},
     language = {en},
     url = {http://dml.mathdoc.fr/item/bwmeta1.element.bwnjournal-article-amcv18i3p409bwm}
}
Silvana Petruseva. Emotion learning: Solving a shortest path problem in an arbitrary deterministic environment in linear time with an emotional agent. International Journal of Applied Mathematics and Computer Science, Volume 18 (2008), pp. 409-421. http://gdmltest.u-ga.fr/item/bwmeta1.element.bwnjournal-article-amcv18i3p409bwm/

[000] Botelho L.M. and Coelho H. (1998). Adaptive agents: Emotion learning, Association for the Advancement of Artificial Intelligence, pp. 19-24.

[001] Bozinovski S. (1995). Consequence Driven Systems: Teaching, Learning and Self-Learning Agent, Gocmar Press, Bitola.

[002] Bozinovski S. (1982). A self-learning system using secondary reinforcement, in: R. Trappl (Ed.), Cybernetics and Systems Research, North-Holland Publishing Company, pp. 397-402.

[003] Bozinovski S. and Schoell P. (1999a). Emotions and hormones in learning. GMD Forschungszentrum Informationstechnik GmbH, Sankt Augustin.

[004] Bozinovski S. (1999b). Crossbar Adaptive Array: The first connectionist network that solved the delayed reinforcement learning problem, in: A. Dobnikar, N. Steele, D. Pearson, R. Albrecht (Eds.), Artificial Neural Nets and Genetic Algorithms, Springer, pp. 320-325.

[005] Bozinovski S. (2002). Motivation and emotion in anticipatory behaviour of consequence driven systems, Proceedings of the Workshop on Adaptive Behaviour in Anticipatory Learning Systems, Edinburgh, Scotland, pp. 100-119.

[006] Bozinovski S. (2003). Anticipation driven artificial personality: Building on Lewin and Loehlin, in: M. Butz, O. Sigaud, P. Gerard (Eds.), Anticipatory Behaviour in Adaptive Learning Systems, LNAI 2684, Springer-Verlag, Berlin/Heidelberg, pp. 133-150.

[007] Glaser J. (1963). General Psychopathology, Narodne Novine, Zagreb, (in Croatian).

[008] Bang-Jensen J. and Gutin G. (2001). Digraphs: Theory, Algorithms and Applications, Springer-Verlag, London. | Zbl 0958.05002

[009] Koenig S. and Simmons R. (1992). Complexity Analysis of Real-Time Reinforcement Learning Applied to Finding Shortest Paths in Deterministic Domains, Carnegie Mellon University, Pittsburgh.

[010] Peng J. and Williams R. (1993). Efficient learning and planning with the Dyna framework, Proceedings of the 2nd International Conference on Simulation of Adaptive Behaviour: From Animals to Animats, Hawaii, pp. 437-454.

[011] Petruseva S. and Bozinovski S. (2000). Consequence programming: Algorithm “at subgoal go back”. Mathematics Bulletin, Book 24 (L), pp. 141-152. | Zbl 1249.68320

[012] Petruseva S. (2006a). Comparison of the efficiency of two algorithms which solve the shortest path problem with an emotional agent, Yugoslav Journal of Operations Research, 16(2): 211-226. | Zbl 1265.68296

[013] Petruseva S. (2006b). Consequence programming: Solving a shortest path problem in polynomial time using emotional learning, International Journal of Pure and Applied Mathematics, 29(4): 491-520. | Zbl 1110.68140

[014] Sutton R. and Barto A. (1998). Reinforcement Learning: An Introduction, MIT Press, Cambridge, MA.

[015] Whitehead S. (1991). A complexity analysis of cooperative mechanisms in reinforcement learning, Proceedings of AAAI, pp. 607-613.

[016] Whitehead S. (1992). Reinforcement learning for the adaptive control of perception and action, Ph.D. thesis, University of Rochester.

[017] Wittek G. (1995). Me, Me, Me, the Spider in the Web. The Law of Correspondence, and the Law of Projection, Verlag DAS WORT, GmbH, Marktheidenfeld-Altfeld.