For Bayesian analysis, an attractive method of modelling uncertainty in the prior distribution is through the use of $\varepsilon$-contamination classes, i.e., classes of distributions of the form $\pi = (1 - \varepsilon)\pi_0 + \varepsilon q$, where $\pi_0$ is the base elicited prior, $q$ is a "contamination," and $\varepsilon$ reflects the amount of error in $\pi_0$ that is deemed possible. Classes of contaminations that are considered include (i) all possible contaminations, (ii) all symmetric, unimodal contaminations, and (iii) all contaminations such that $\pi$ is unimodal. Two issues in robust Bayesian analysis are studied. The first is that of determining the range of posterior probabilities of a set as $\pi$ ranges over the $\varepsilon$-contamination class. The second, more extensively studied, issue is that of selecting, in a data-dependent fashion, a "good" prior distribution (the Type-II maximum likelihood prior) from the $\varepsilon$-contamination class, and using this prior in the subsequent analysis. Relationships and applications to empirical Bayes analysis are also discussed.
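To make the two issues concrete, the following minimal sketch (not from the paper; the normal location model, parameter values, and function name are illustrative assumptions) takes the class of all possible contaminations and computes, for a set $C$: the range of the posterior probability of $C$ as $q$ varies, and the Type-II maximum likelihood (ML-II) posterior, using the fact that over all contaminations the marginal likelihood of $\pi$ is maximized by a point-mass contamination at the maximum likelihood estimate.

```python
# Sketch, assuming X | theta ~ N(theta, sigma^2) and base prior pi_0 = N(mu0, tau^2).
import numpy as np
from scipy.stats import norm

def eps_contamination_summary(x, mu0=0.0, tau=2.0, sigma=1.0, eps=0.1, C=(-1.0, 1.0)):
    """Posterior quantities for pi = (1 - eps)*pi_0 + eps*q, q ranging over all contaminations."""
    # Marginal density of x under the base prior pi_0: N(mu0, sigma^2 + tau^2).
    m0 = norm.pdf(x, mu0, np.sqrt(sigma**2 + tau**2))
    # Posterior under pi_0 alone: N(mu_post, v_post), standard conjugate formulas.
    v_post = 1.0 / (1.0 / sigma**2 + 1.0 / tau**2)
    mu_post = v_post * (x / sigma**2 + mu0 / tau**2)
    a, b = C
    p0_C = norm.cdf(b, mu_post, np.sqrt(v_post)) - norm.cdf(a, mu_post, np.sqrt(v_post))

    # ML-II choice of q over all contaminations: a point mass at the MLE theta_hat = x,
    # which maximizes the marginal likelihood (1-eps)*m0(x) + eps*m_q(x).
    f_at_mle = norm.pdf(x, x, sigma)
    lam = (1 - eps) * m0 / ((1 - eps) * m0 + eps * f_at_mle)
    # ML-II posterior of C: mixture of the pi_0-posterior and the point mass at theta_hat.
    p_ml2_C = lam * p0_C + (1 - lam) * float(a <= x <= b)

    # Range of P(theta in C | x) over ALL contaminations: the extremes are attained
    # by point-mass contaminations placed inside C (upper) or outside C (lower).
    sup_f_in_C = norm.pdf(np.clip(x, a, b), x, sigma)          # sup over theta in C of f(x|theta)
    sup_f_out_C = (f_at_mle if not (a <= x <= b)
                   else max(norm.pdf(a, x, sigma), norm.pdf(b, x, sigma)))
    upper = ((1 - eps) * m0 * p0_C + eps * sup_f_in_C) / ((1 - eps) * m0 + eps * sup_f_in_C)
    lower = (1 - eps) * m0 * p0_C / ((1 - eps) * m0 + eps * sup_f_out_C)
    return {"P_pi0(C|x)": p0_C, "lambda_ML2": lam,
            "P_ML2(C|x)": p_ml2_C, "range(P(C|x))": (lower, upper)}

print(eps_contamination_summary(x=1.5))
```

For the restricted classes (ii) and (iii) the extremes and the ML-II contamination are no longer point masses, so the bounds above apply only to the unrestricted class (i).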