In this paper, we consider first-order algorithms for solving a class of
non-convex non-concave min-max saddle-point problems whose objective function
is weakly convex in the minimization variable and weakly concave in the
maximization variable. This class of problems has many important applications
in machine learning, statistics, and operations research; one example that has
recently attracted tremendous attention in machine learning is the training of
Generative Adversarial Networks. We propose an algorithmic framework motivated
by the inexact proximal point method, which solves the weakly monotone
variational inequality corresponding to the original min-max problem by
approximately solving a sequence of strongly monotone variational inequalities
constructed by adding a strongly monotone mapping to the original gradient
mapping. In this sequence, each strongly monotone variational inequality is
defined with a proximal center that is updated using the approximate solution
of the previous variational inequality. Our algorithm generates a sequence of
solutions that provably converges to a nearly stationary solution of the
original min-max problem. The proposed framework is flexible because various
subroutines can be employed for solving the strongly monotone variational
inequalities. The overall computational complexities of our methods are
established when the employed subroutine is the subgradient method, the
stochastic subgradient method, the gradient descent method, Nesterov's
accelerated method, or a variance reduction method for a Lipschitz continuous
operator. To the best
of our knowledge, this is the first work that establishes the non-asymptotic
convergence to a nearly stationary point of a non-convex non-concave min-max
problem.
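
As a minimal sketch of the proximal construction described above (the notation
$F$, $\rho$, $\gamma$, $z_k$, and the feasible set $\mathcal{Z}$ is
illustrative and not fixed by this abstract): if $F$ denotes the gradient
mapping of the min-max objective and is assumed $\rho$-weakly monotone, then
the $k$-th subproblem is the variational inequality
\[
\text{find } z^* \in \mathcal{Z} \ \text{such that} \
\langle F(z^*) + \gamma\,(z^* - z_k),\, u - z^* \rangle \ \ge\ 0
\quad \text{for all } u \in \mathcal{Z},
\]
where choosing $\gamma > \rho$ makes the regularized mapping
$F(\cdot) + \gamma\,(\cdot - z_k)$ strongly monotone with modulus
$\gamma - \rho$; the proximal center $z_{k+1}$ is then set to the approximate
solution of this subproblem returned by the chosen subroutine.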