In Bayesian decision theory, it is known that robustness with respect to the loss and the prior can be improved by adding new observations. In this article we study the rate at which robustness improves with the number of observations $n$. Three usual measures of posterior global robustness are considered: the range of the set of Bayes actions derived from a class of loss functions, the maximum regret of using a particular loss when the subjective loss belongs to a given class, and the range of the posterior expected loss when the loss function ranges over a class. We show that the rate of convergence of the first measure of robustness is $\sqrt{n}$, while it is $n$ for the other two measures, under reasonable assumptions on the class of loss functions. We begin by studying two particular cases to illustrate our results.
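To fix ideas, the three measures may be formalized as follows; the notation is ours and is only a sketch, not necessarily that of the article. Let $\Gamma$ denote the class of loss functions, let $\rho_n(L,a)$ denote the posterior expected loss of action $a$ under loss $L$ after $n$ observations, and let $a_L$ denote a Bayes action minimizing $\rho_n(L,\cdot)$. Then, schematically,
\[
  \text{(i)}\quad \operatorname{diam}\{\, a_L : L \in \Gamma \,\},
\]
\[
  \text{(ii)}\quad \sup_{L \in \Gamma}\bigl(\rho_n(L, a_{L_0}) - \rho_n(L, a_L)\bigr) \quad\text{for a fixed reference loss } L_0 \in \Gamma,
\]
\[
  \text{(iii)}\quad \sup_{L \in \Gamma}\rho_n(L, a) \;-\; \inf_{L \in \Gamma}\rho_n(L, a) \quad\text{at a given action } a,
\]
so that (i) corresponds to the range of the Bayes actions set, (ii) to the maximum regret of using $L_0$ when the subjective loss lies in $\Gamma$, and (iii) to the range of the posterior expected loss as $L$ ranges over $\Gamma$.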