Linear models are widely used in many branches of empirical inquiry. The classical analysis of linear models, however, rests on a number of technical assumptions, and when these fail to hold for the data at hand the classical techniques can perform poorly. Two methods of dealing with this problem that have gained some acceptance are the data-analytic and model expansion approaches. In both, graphical and numerical methods are employed to detect the ways in which the data depart from the classical assumptions; the data are then either modified appropriately before the classical techniques are applied (data-analytic) or the model is broadened to account for the departures discovered (model expansion). A third approach uses robust methods, which are intended to be sufficiently insensitive to deviations from the classical assumptions that the data may be analyzed without modification or additional (explicit) modeling. This article compares the data-analytic, model expansion and robust approaches to linear models analysis, and reviews the application of one class of robust methods, those based on $R$-estimators (which use the logic of rank tests to motivate inference on the raw data scale), to problems of estimation, testing, confidence intervals and multiple comparisons in the general linear model.
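
As a concrete illustration of the rank-based logic (a minimal sketch, not drawn from the article): the Hodges–Lehmann estimator, the $R$-estimator of location associated with the Wilcoxon signed-rank test, is the median of all pairwise Walsh averages $(x_i + x_j)/2$. It yields inference on the raw data scale while remaining far less sensitive to outliers than the sample mean.

```python
import itertools
import statistics

def hodges_lehmann(x):
    """Hodges-Lehmann location estimate: median of all Walsh
    averages (x_i + x_j) / 2 for i <= j (pairs with replacement)."""
    walsh = [(a + b) / 2
             for a, b in itertools.combinations_with_replacement(x, 2)]
    return statistics.median(walsh)

# A small sample with one gross outlier: the mean is dragged toward
# the outlier, while the rank-based estimate stays near the bulk.
data = [1.1, 2.3, 2.9, 3.2, 100.0]
print(hodges_lehmann(data))          # 2.9
print(statistics.mean(data))         # 21.9
```

The robustness comes from the median over Walsh averages: a single aberrant observation contaminates only a minority of the pairwise averages, so it cannot pull the estimate arbitrarily far.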