This paper offers a new approach for estimating and forecasting the volatility of financial time series. No assumption is made about the parametric form of the underlying process; instead, we only assume that the volatility can be approximated by a constant over some interval. In this framework, the main problem is to identify this interval of time homogeneity; the volatility estimate can then be obtained simply by local averaging. We construct a locally adaptive volatility estimate (LAVE) that performs this task and investigate it both theoretically and through Monte Carlo simulations. Finally, the LAVE procedure is applied to a data set of nine exchange rates, and a comparison with a standard GARCH model is provided. Both models appear capable of explaining many features of the data; nevertheless, the new approach seems superior to the GARCH method in terms of out-of-sample performance.
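To make the core idea concrete, the following is a minimal illustrative sketch, not the paper's exact LAVE procedure: squared returns are averaged over progressively larger candidate windows, and enlargement stops once a larger window's estimate deviates from the previously accepted one by more than a threshold number of standard errors, in the spirit of the adaptive homogeneity test described above. The function name, the window grid, and the threshold `lam` are all assumptions chosen for illustration.

```python
import numpy as np

def local_averaging_volatility(returns, windows=(5, 10, 20, 40, 80), lam=2.5):
    """Adaptive local-averaging volatility estimate (illustrative sketch).

    For each time t, squared past returns are averaged over growing
    candidate windows; a window is accepted only while its estimate stays
    within lam standard errors of the previously accepted one.
    """
    r2 = np.asarray(returns, dtype=float) ** 2
    n = r2.size
    sigma = np.full(n, np.nan)
    for t in range(windows[0], n):
        accepted = r2[t - windows[0]:t].mean()  # smallest window: always accepted
        for m in windows[1:]:
            if m > t:
                break  # not enough history for this candidate window
            est = r2[t - m:t].mean()
            # Under locally Gaussian returns, Var(r_s^2) = 2 * sigma^4, so the
            # standard error of the window mean is roughly est * sqrt(2/m).
            if abs(est - accepted) > lam * est * np.sqrt(2.0 / m):
                break  # heterogeneity detected: keep the smaller window
            accepted = est  # homogeneity not rejected: enlarge the window
        sigma[t] = np.sqrt(accepted)
    return sigma

# Usage: the true volatility doubles halfway through the sample.
rng = np.random.default_rng(0)
true_sigma = np.r_[np.full(500, 0.01), np.full(500, 0.02)]
r = rng.normal(0.0, true_sigma)
est = local_averaging_volatility(r)
print(est[490:510])  # estimates adapt to the regime change around t = 500
```

Because each estimate uses only returns strictly before time t, the sketch doubles as a one-step-ahead forecast: near a change point the homogeneity test rejects large windows, shrinking the averaging interval, while in calm stretches the window grows and the estimate gains precision.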