This paper describes a general approach for the dynamic control of
stochastic networks based on fluid model analysis: in broad terms, the
stochastic network is approximated by its fluid analog, an associated fluid
control problem is solved, and, finally, a scheduling rule for the original
system is defined by interpreting the fluid control policy. The main
contribution of this paper is to propose a general mechanism for translating
the solution of the fluid optimal control problem into an implementable
discrete-review policy that achieves asymptotically optimal performance under
fluid scaling, and guarantees stability if the traffic intensity is less than
one at each station. The proposed policy reviews the system status at discrete
points in time; at each such point, the controller formulates a processing
plan for the next review period, based on the observed queue length vector,
using the optimal control policy of the associated fluid optimization problem.
Implementing such a policy involves maintaining certain safety stocks that
facilitate execution of the processing plans and prevent unplanned server
idleness. Finally, putting aside all considerations
of system optimality, the following generalization is considered: every initial
condition is associated with a feasible fluid trajectory that describes the
desired system evolution starting at that point. A discrete-review policy is
described that asymptotically tracks this target specification; that is, it
achieves the appropriate target trajectory as its fluid limit.
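To make the review-plan-execute cycle concrete, the following is a minimal Python sketch of a discrete-review policy for a hypothetical two-class, single-server example with linear holding costs, where the cμ priority rule is known to solve the fluid control problem. The review length, safety stock level, and all rates below are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

# Minimal sketch: two-class, single-server system with linear holding
# costs c and service rates mu.  For this system the c*mu priority rule
# is the optimal fluid control, so each review period's processing plan
# allocates server time to the high-c*mu buffer first.  The review
# length l, safety stock theta, and all rates are illustrative only.
c = np.array([3.0, 1.0])       # holding costs per job per unit time
mu = np.array([1.0, 2.0])      # service rates
lam = np.array([0.3, 0.4])     # arrival rates; rho = sum(lam/mu) < 1
theta = 5.0                    # safety stock level for each buffer
l = 10.0                       # review period length

def processing_plan(q):
    """Split the l time units of the next review period across classes
    in fluid (c*mu) priority order, never planning to drain a buffer
    below the safety stock theta."""
    t = np.zeros(2)
    budget = l
    for k in np.argsort(-c * mu):               # high c*mu served first
        drain_time = max(q[k] - theta, 0.0) / mu[k]
        t[k] = min(drain_time, budget)
        budget -= t[k]
    return t + budget / 2.0                     # stay busy: no idleness

# Fluid-level evolution over successive review periods (the stochastic
# dynamics are replaced by their mean drift for brevity).
q = np.array([80.0, 60.0])                      # initial queue lengths
for period in range(12):
    t = processing_plan(q)
    q = np.maximum(q + lam * l - mu * t, 0.0)
    print(f"period {period:2d}: q = {np.round(q, 1)}")
```

In this sketch the safety stock theta plays the role described above: plans never drain a buffer completely, so near the end of a review period the server is not starved by the stochastic fluctuations that the fluid plan ignores.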