Risk minimisation
In machine learning, risk minimisation refers to a family of statistical methods that estimate theoretical bounds on model performance. These methods are used when qualities such as the accuracy and speed of a learning model cannot be measured directly, for instance because of variability in the data and an unpredictable sequence of data transformations and triggers [1, 831-833]; risk minimisation instead provides bounds within which the model or network is expected to operate. In agnostic learning the underlying data distribution is unknown, so the true risk usually cannot be computed exactly [2]; it can, however, be approximated. This approximation is called the empirical risk, and it is obtained by averaging the loss function over the training set.
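The empirical risk described above is the sample average of the loss, R_emp(h) = (1/n) Σ L(h(x_i), y_i). A minimal sketch of this computation, assuming a squared loss and a simple linear model chosen purely for illustration (the function names here are hypothetical, not from the cited works):

```python
import numpy as np

def empirical_risk(loss, model, X, y):
    """Average the loss of `model` over the training sample (X, y)."""
    return float(np.mean([loss(model(x), yi) for x, yi in zip(X, y)]))

# Illustrative choices: squared loss and a fixed linear predictor
squared_loss = lambda pred, target: (pred - target) ** 2
model = lambda x: 2.0 * x

X = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.1, 5.9])
risk = empirical_risk(squared_loss, model, X, y)
```

Minimising `risk` over a class of candidate models (rather than evaluating one fixed model, as here) is what the empirical risk minimisation principle prescribes when the true distribution is unknown.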
[1] V. Vapnik. (1992). Principles of Risk Minimization for Learning Theory. New Jersey: AT&T Bell Laboratories.
[2] V. Feldman, V. Guruswami, P. Raghavendra & Y. Wu. (2012). Agnostic Learning of Monomials by Halfspaces Is Hard. SIAM Journal on Computing, 41(6), 1560-1580.