Impulse Control of Piecewise Deterministic Markov Processes
Dempster, M. A. H. ; Ye, J. J.
Ann. Appl. Probab., Volume 5 (1995) no. 4, pp. 399-423 / Harvested from Project Euclid
This paper concerns the optimal impulse control of piecewise deterministic Markov processes (PDPs). The PDP optimal (full) control problem, combining dynamic control with impulse control, is transformed into an equivalent dynamic control problem. The existence of an optimal full control, together with a generalized Bellman-Hamilton-Jacobi necessary and sufficient optimality condition for the PDP full control problem in terms of the value function of the new dynamic control problem, is derived. It is shown that the value function of the original PDP optimal full control problem is Lipschitz continuous and satisfies a generalized quasivariational inequality with a boundary condition. A necessary and sufficient optimality condition is given in terms of the value function of the original full control problem.
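For orientation, the quasivariational inequality mentioned in the abstract belongs to a standard family from the impulse control literature; the following is a schematic sketch of that generic form (the paper's generalized version and its boundary condition differ in detail, and the symbols below are illustrative assumptions, not taken from the paper):

```latex
% Schematic quasivariational inequality for impulse control of a
% Markov process (generic form, NOT the paper's exact statement).
% Assumed notation: \mathcal{A} is the extended generator of the
% process, f the running cost, c the intervention cost, \Gamma the
% post-impulse state map, and \Xi the set of admissible impulses.
\[
  \min\bigl\{\, \mathcal{A}V(x) + f(x),\; MV(x) - V(x) \,\bigr\} = 0,
\]
where the intervention operator $M$ is
\[
  MV(x) = \inf_{\xi \in \Xi}\bigl[\, c(x,\xi) + V(\Gamma(x,\xi)) \,\bigr].
\]
```

Informally, at each state either continuing is optimal (the Bellman-Hamilton-Jacobi term vanishes and intervening cannot improve the cost, $V \le MV$) or an impulse is optimal (the value equals the best intervention value, $V = MV$).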
Published: 1995-05-14
Classification: Impulse control, piecewise deterministic Markov processes, Bellman-Hamilton-Jacobi equation, quasivariational inequality, 60G40, 49B60, 93E20
@article{1177004771,
     author = {Dempster, M. A. H. and Ye, J. J.},
     title = {Impulse Control of Piecewise Deterministic Markov Processes},
     journal = {Ann. Appl. Probab.},
     volume = {5},
     number = {4},
     year = {1995},
     pages = {399-423},
     language = {en},
     url = {http://dml.mathdoc.fr/item/1177004771}
}
Dempster, M. A. H.; Ye, J. J. Impulse Control of Piecewise Deterministic Markov Processes. Ann. Appl. Probab., Volume 5 (1995) no. 4, pp. 399-423. http://gdmltest.u-ga.fr/item/1177004771/