The Model-Free Prediction Principle expounded upon in this monograph is
based on the simple notion of transforming a complex dataset to one that
is easier to work with, e.g., i.i.d. or Gaussian. As such, it restores
the emphasis on observable quantities, i.e., current and future data, as
opposed to unobservable model parameters and estimates thereof, and
yields optimal predictors in diverse settings such as regression and
time series. Furthermore, the Model-Free Bootstrap takes us beyond point
prediction in order to construct frequentist prediction intervals
without resort to unrealistic assumptions such as normality.
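
To make the transformation idea concrete, the following minimal Python sketch shows one way it can play out in a nonparametric regression setting. Everything in it is illustrative: the simulated data, the kernel-smoothed conditional CDF used as the uniformizing map, and the bandwidth and grid choices are assumptions made for this sketch, not prescriptions from the monograph.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated regression data (purely illustrative).
n = 200
x = np.sort(rng.uniform(0.0, 1.0, n))
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(n)

def cond_cdf(y0, x0, h=0.1):
    """Kernel-weighted empirical estimate of the conditional CDF D_x0(y0)."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return np.sum(w * (y <= y0)) / np.sum(w)

# Step 1 (the transformation): u_i = D_{x_i}(Y_i) should be roughly
# i.i.d. Uniform(0,1), i.e., the "easy" dataset the principle calls for.
u = np.array([cond_cdf(y[i], x[i]) for i in range(n)])

# Step 2 (prediction at a new regressor value x_f): resample the u_i
# (Model-Free Bootstrap) and push the draws back through the inverse
# transform at x_f to obtain a predictive distribution for Y.
x_f = 0.25
grid = np.linspace(y.min() - 1.0, y.max() + 1.0, 500)
cdf_f = np.array([cond_cdf(g, x_f) for g in grid])  # nondecreasing in g
u_star = rng.choice(u, size=5000, replace=True)
idx = np.clip(np.searchsorted(cdf_f, u_star), 0, len(grid) - 1)
y_star = grid[idx]

point_pred = np.median(y_star)              # point predictor
lo, hi = np.quantile(y_star, [0.05, 0.95])  # 90% prediction interval
print(f"predictor {point_pred:.2f}, 90% PI [{lo:.2f}, {hi:.2f}]")
```

The transform turns the raw pairs (x_i, Y_i) into approximately i.i.d. uniform variates; prediction then amounts to resampling those variates and mapping them back through the estimated inverse transform at the new regressor value, which is how the prediction interval above is obtained without any normality assumption.
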
Prediction has traditionally been approached via a model-based paradigm,
i.e., (a) fit a model to the data at hand, and (b) use the fitted model
to extrapolate/predict future data. Due to both mathematical and
computational constraints, 20th century statistical practice focused
mostly on parametric models. Fortunately, with the advent of widely
accessible powerful computing in the late 1970s, computer-intensive
methods such as the bootstrap and cross-validation freed practitioners
from the limitations of parametric models, and paved the way towards the
`big data' era of the 21st century. Nonetheless, there is a further
step one may take, i.e., going beyond even nonparametric models; this is
where the Model-Free Prediction Principle is useful.
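
To fix ideas, the two-step model-based recipe (a)-(b) from the start of this paragraph, in its simplest parametric form, might look like the following sketch; the linear trend, the Gaussian error assumption, and the neglect of parameter-estimation uncertainty are all illustrative simplifications.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 1.0, 100))
y = 1.0 + 2.0 * x + 0.5 * rng.standard_normal(100)

# (a) Fit a model to the data at hand (here: a line, by least squares).
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
sigma_hat = np.std(y - X @ beta, ddof=2)

# (b) Use the fitted model to extrapolate/predict future data.  The
# normal-theory interval below is valid only if the errors really are
# Gaussian, which is exactly the kind of assumption the Model-Free
# Bootstrap dispenses with.  (Parameter-estimation uncertainty is
# ignored here for brevity.)
x_f = 0.5
y_pred = beta[0] + beta[1] * x_f
z = 1.645  # standard normal 95th percentile
print(f"prediction {y_pred:.2f}, 90% PI "
      f"[{y_pred - z * sigma_hat:.2f}, {y_pred + z * sigma_hat:.2f}]")
```
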
Interestingly, being able to predict a response variable Y associated
with a regressor variable X taking on any possible value seems to
inadvertently also achieve the main goal of modeling, i.e., trying to
describe how Y depends on X. Hence, just as prediction can be treated
as a by-product of model-fitting, key estimation problems can be
addressed as a by-product of being able to perform prediction. In other
words, a practitioner can additionally use Model-Free Prediction ideas
to obtain point estimates and confidence intervals for relevant
parameters, leading to an alternative, transformation-based approach to
statistical inference.
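
As a sketch of this estimation-by-prediction idea, the snippet below reuses the illustrative construction from the first code example; predictive_draws is a hypothetical helper name, and the same caveats about the bandwidth, grid, and simulated data apply.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
x = np.sort(rng.uniform(0.0, 1.0, n))
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(n)

def predictive_draws(x0, h=0.1, B=2000):
    """Draws from the model-free predictive distribution of Y at x0,
    built with the same uniformizing transform as the earlier sketch."""
    def cdf(y0, xq):
        w = np.exp(-0.5 * ((x - xq) / h) ** 2)
        return np.sum(w * (y <= y0)) / np.sum(w)
    u = np.array([cdf(y[i], x[i]) for i in range(n)])
    grid = np.linspace(y.min() - 1.0, y.max() + 1.0, 400)
    cdf_x0 = np.array([cdf(g, x0) for g in grid])
    idx = np.clip(np.searchsorted(cdf_x0, rng.choice(u, B)), 0, len(grid) - 1)
    return grid[idx]

# Estimation as a by-product of prediction: sweeping the point
# predictor over x traces out an estimate of the regression function,
# with no model for how Y depends on x ever having been written down.
xs = np.linspace(0.05, 0.95, 10)
mu_hat = [np.median(predictive_draws(x0)) for x0 in xs]
print(np.round(mu_hat, 2))
```

Sweeping the point predictor across regressor values thus recovers an estimate of the regression function even though no model for the dependence of Y on X was ever specified; confidence intervals for such quantities can be obtained along similar lines by also resampling the data, a construction omitted here.
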