In this paper we study the problem of computing minimum-energy controls for
linear systems from experimental data. The design of open-loop minimum-energy
control inputs to steer a linear system between two different states in finite
time is a classic problem in control theory, whose solution can be computed in
closed form using the system matrices and its controllability Gramian. Yet, the
computation of these inputs is known to be ill-conditioned, especially when the
system is large, the control horizon long, and the system model uncertain. Due
to these limitations, open-loop minimum-energy controls and the associated
state trajectories have remained primarily of theoretical value. Surprisingly,
in this paper we show that open-loop minimum-energy controls can be learned
exactly from experimental data, with a finite number of control experiments
over the same time horizon, without knowledge or estimation of the system
model, and with an algorithm that is significantly more reliable than the
direct model-based computation. These findings promote a new philosophy for
controlling large, uncertain linear systems when data are abundantly
available.
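The classical model-based computation mentioned above can be sketched as follows. This is a minimal illustration with a hypothetical two-dimensional discrete-time system (the matrices and horizon are illustrative assumptions, not taken from the paper): the minimum-energy input steering the state from the origin to a target is the least-norm solution through the controllability matrix, equivalently expressed via the controllability Gramian.

```python
import numpy as np

# Sketch of the classical model-based computation (assumed discrete-time
# setting, illustrative matrices): x_{k+1} = A x_k + B u_k.  The
# minimum-energy input steering x_0 = 0 to x_f in T steps is
# U* = C_T^+ x_f, where C_T = [A^{T-1}B ... AB B] is the T-step
# controllability matrix; equivalently U* = C_T^T W_T^{-1} x_f with
# controllability Gramian W_T = C_T C_T^T.

def min_energy_input(A, B, x_f, T):
    blocks = [np.linalg.matrix_power(A, T - 1 - k) @ B for k in range(T)]
    C_T = np.hstack(blocks)           # n x (T*m) controllability matrix
    U = np.linalg.pinv(C_T) @ x_f     # least-norm (minimum-energy) input
    return U.reshape(T, B.shape[1]), C_T

# Hypothetical system: a discretized double integrator (assumed example).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
x_f = np.array([1.0, 0.0])
T = 10

U, C_T = min_energy_input(A, B, x_f, T)

# Roll out the dynamics to confirm the input steers the state to x_f.
x = np.zeros(2)
for u_k in U:
    x = A @ x + B @ u_k

# The conditioning of the Gramian W_T = C_T C_T^T degrades as the system
# dimension and horizon grow, which is the numerical fragility of the
# model-based route that the paper addresses.
cond_W = np.linalg.cond(C_T @ C_T.T)
```

In this sketch the model matrices `A` and `B` are assumed known; the paper's point is precisely that the same input can instead be learned exactly from a finite number of control experiments, without this model knowledge.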