A repeated imitation model with dependence between stages: Decision strategies and rewards
Pablo J. Villacorta; David A. Pelta
International Journal of Applied Mathematics and Computer Science, Volume 25 (2015), pp. 617-630 / Harvested from The Polish Digital Mathematics Library

Adversarial decision making aims to determine strategies that anticipate the behavior of an opponent trying to learn from our actions. One defense is to make decisions intended to confuse the opponent, even though this can diminish our rewards. This idea was captured in an adversarial model introduced in previous work, in which two agents separately issue responses to an unknown sequence of external inputs, and each agent's reward depends on the current input and on the responses of both agents. In this contribution, (a) we extend the original model by establishing stochastic dependence between an agent's responses and the next input of the sequence, and (b) we study the design of time-varying decision strategies for the extended model. The strategies obtained are compared against static strategies from both theoretical and empirical points of view. The results show that time-varying strategies outperform static ones.
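The model sketched in the abstract can be illustrated with a minimal simulation. Everything below (the payoff matrix, the imitator's guessing rule, the input-transition weights, the decay rate of the time-varying strategy) is a hypothetical instantiation chosen for illustration, not the paper's actual experimental setup:

```python
import random
from collections import defaultdict

random.seed(42)

N_INPUTS, N_RESPONSES = 3, 3
# Hypothetical payoff matrix: PAYOFF[e][r] is agent S's reward for
# response r to input e, paid only when the imitator fails to predict r.
PAYOFF = [[3, 1, 2],
          [1, 3, 2],
          [2, 1, 3]]

def next_input(e, r_s):
    """Toy stochastic dependence: S's response biases the next input."""
    weights = [1.0] * N_INPUTS
    weights[(e + r_s) % N_INPUTS] += 2.0
    return random.choices(range(N_INPUTS), weights=weights)[0]

def run(rounds, strategy):
    """Play the repeated imitation game for a number of rounds.

    strategy(e, t) returns a mixed strategy (probabilities over the
    responses) for input e at stage t.  The imitator guesses S's most
    frequent past response to the current input (ties broken at random)
    and cancels S's reward whenever the guess is right.
    """
    history = defaultdict(lambda: [0] * N_RESPONSES)
    e, total = 0, 0.0
    for t in range(rounds):
        r_s = random.choices(range(N_RESPONSES), weights=strategy(e, t))[0]
        counts = history[e]
        top = max(counts)
        r_t = random.choice([r for r, c in enumerate(counts) if c == top])
        if r_t != r_s:                 # confusion succeeded: reward is paid
            total += PAYOFF[e][r_s]
        counts[r_s] += 1
        e = next_input(e, r_s)
    return total

def static_strategy(e, t):
    """Same mixed strategy at every stage (uniform, maximal confusion)."""
    return [1.0 / N_RESPONSES] * N_RESPONSES

def varying_strategy(e, t):
    """Time-varying: exploit the best payoff early, add confusion later."""
    best = max(range(N_RESPONSES), key=lambda r: PAYOFF[e][r])
    greed = max(0.0, 1.0 - t / 50.0)   # greediness decays over the stages
    return [greed + (1.0 - greed) / N_RESPONSES if r == best
            else (1.0 - greed) / N_RESPONSES for r in range(N_RESPONSES)]

print(run(500, static_strategy), run(500, varying_strategy))
```

Running both strategies side by side on the same horizon gives a feel for the confusion-versus-payoff trade-off the paper studies; the actual comparison in the article is analytical and uses optimized strategies rather than this hand-tuned decay.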

Published: 2015-01-01
EUDML-ID : urn:eudml:doc:271771
@article{bwmeta1.element.bwnjournal-article-amcv25i3p617bwm,
     author = {Pablo J. Villacorta and David A. Pelta},
     title = {A repeated imitation model with dependence between stages: Decision strategies and rewards},
     journal = {International Journal of Applied Mathematics and Computer Science},
     volume = {25},
     year = {2015},
     pages = {617-630},
     zbl = {1322.93015},
     language = {en},
     url = {http://dml.mathdoc.fr/item/bwmeta1.element.bwnjournal-article-amcv25i3p617bwm}
}
Pablo J. Villacorta; David A. Pelta. A repeated imitation model with dependence between stages: Decision strategies and rewards. International Journal of Applied Mathematics and Computer Science, Volume 25 (2015), pp. 617-630. http://gdmltest.u-ga.fr/item/bwmeta1.element.bwnjournal-article-amcv25i3p617bwm/

[000] Amigoni, F., Basilico, N. and Gatti, N. (2009). Finding the optimal strategies for robotic patrolling with adversaries in topologically-represented environments, Proceedings of the 26th International Conference on Robotics and Automation (ICRA'09), Kobe, Japan, pp. 819-824.

[001] Cichosz, P. and Pawełczak, Ł. (2014). Imitation learning of car driving skills with decision trees and random forests, International Journal of Applied Mathematics and Computer Science 24(3): 579-597, DOI: 10.2478/amcs-2014-0042. | Zbl 1322.68149

[002] Conitzer, V. and Sandholm, T. (2006). Computing the optimal strategy to commit to, Proceedings of the 7th ACM Conference on Electronic Commerce, EC'06, Ann Arbor, MI, USA, pp. 82-90.

[003] Kott, A. and McEneany, W.M. (2007). Adversarial Reasoning: Computational Approaches to Reading the Opponent's Mind, Chapman and Hall/CRC, Boca Raton, FL.

[004] McLennan, A. and Tourky, R. (2006). From imitation games to Kakutani, http://cupid.economics.uq.edu.au/mclennan/Papers/kakutani60.pdf, (unpublished). | Zbl 1200.91023

[005] McLennan, A. and Tourky, R. (2010a). Imitation games and computation, Games and Economic Behavior 70(1): 4-11. | Zbl 1200.91013

[006] McLennan, A. and Tourky, R. (2010b). Simple complexity from imitation games, Games and Economic Behavior 68(2): 683-688. | Zbl 1200.91023

[007] Osborne, M. and Rubinstein, A. (1994). A Course in Game Theory, MIT Press, Cambridge, MA. | Zbl 1194.91003

[008] Paruchuri, P., Pearce, J.P. and Kraus, S. (2008). Playing games for security: An efficient exact algorithm for solving Bayesian Stackelberg games, Proceedings of the 7th International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS'08), Estoril, Portugal, pp. 895-902.

[009] Pelta, D. and Yager, R. (2009). On the conflict between inducing confusion and attaining payoff in adversarial decision making, Information Sciences 179(1-2): 33-40.

[010] Price, K., Storn, R. and Lampinen, J. (2005). Differential Evolution: A Practical Approach to Global Optimization, Natural Computing Series, Springer-Verlag New York, Inc., Secaucus, NJ. | Zbl 1186.90004

[011] Qin, A.K., Huang, V.L. and Suganthan, P.N. (2009). Differential evolution: Algorithm with strategy adaptation for global numerical optimization, IEEE Transactions on Evolutionary Computation 13(2): 398-417.

[012] Storn, R. and Price, K. (1997). Differential evolution: A simple and efficient heuristic for global optimization over continuous spaces, Journal of Global Optimization 11(4): 341-359. | Zbl 0888.90135

[013] Tambe, M. (2012). Security and Game Theory: Algorithms, Deployed Systems, Lessons Learned, Cambridge University Press, New York, NY. | Zbl 1235.91005

[014] Thagard, P. (1992). Adversarial problem solving: Modeling an opponent using explanatory coherence, Cognitive Science 16(1): 123-149.

[015] Triguero, I., Garcia, S. and Herrera, F. (2011). Differential evolution for optimizing the positioning of prototypes in nearest neighbor classification, Pattern Recognition 44(4): 901-916.

[016] Villacorta, P.J. and Pelta, D.A. (2012). Theoretical analysis of expected payoff in an adversarial domain, Information Sciences 186(4): 93-104.

[017] Villacorta, P.J., Pelta, D.A. and Lamata, M.T. (2013). Forgetting as a way to avoid deception in a repeated imitation game, Autonomous Agents and Multi-Agent Systems 27(3): 329-354.

[018] Villacorta, P. and Pelta, D. (2011). Expected payoff analysis of dynamic mixed strategies in an adversarial domain, Proceedings of the 2011 IEEE Symposium on Intelligent Agents (IA 2011), Paris, France, pp. 116-122.