Aon | Professional Services Practice
Release Date: April 2021
Actuarial models are powerful tools to assess risk.
But for risk modeling to be most useful, it must be applied with caution.
Actuarial models can be powerful quantification tools that help professional service firms make informed decisions about their insurance risk strategy. Actuarial risk modeling can estimate the costs associated with alternative insurance program structures (retentions and limits), helping to tailor (re)insurance programs to match the loss profile of an individual professional service firm.
“All models are wrong, but some are useful.”
George Box, British statistician (1919-2013)
As such, actuarial risk analysis and actuarial models are an important part of any professional service firm’s approach to quantifying risk and navigating uncertainty.
The most powerful models rely on a robust universe of relevant data. However, there are times when the available universe of data does not live up to this standard and alternative strategies for bolstering the data must be considered.
Even when the available data meets optimal criteria, limitations related to modeling remain and must be kept in mind in order to avoid the potential downsides of an over-reliance on models.
This brief discussion will outline the clear advantages of risk modeling, point out key limitations associated with any modeling approach and present techniques for managing challenges associated with low volumes of data.
Probabilistic models produce a full range of percentile (confidence level) results, rather than a single point estimate (such as an estimated median or an average). For example, the output can reveal that there is a 1 in 10 chance (the 90th percentile) of a loss reaching or exceeding $X.
In the illustrative graphic above, although the median (50th percentile) result is approximately $1mln (meaning that half the time we would expect claims to fall below $1mln), the model reveals significant “tail” exposure: a 1 in 10 chance (90th percentile) of a ground-up loss reaching $13mln and a 1 in 20 chance (95th percentile) of a claim reaching $29mln.
The model also illustrates that the risk could generate adverse outcomes well in excess of $29mln, so it is important for the user to understand that the model is not intended to suggest a finite upper limit for loss severity.
This probabilistic distribution of outcomes enables firms to quantify the downside volatility of their risk exposures, which is particularly important for low frequency / high severity risks.
Risk quantification exercises of this nature, therefore, provide important analytical support for decisions surrounding risk appetite, risk retention, risk financing, and risk transfer.
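The percentile outputs described above are typically produced by Monte Carlo simulation of a frequency/severity model. The sketch below is illustrative only: the Poisson claim frequency and lognormal severity parameters are hypothetical assumptions chosen for demonstration, not calibrated values.

```python
import numpy as np

# Minimal Monte Carlo frequency/severity sketch. All parameters are
# hypothetical: an average of 3 claims per year (Poisson) and a
# heavy-tailed lognormal severity per claim.
rng = np.random.default_rng(seed=42)
n_sims = 50_000

annual_losses = np.empty(n_sims)
for i in range(n_sims):
    n_claims = rng.poisson(3)  # simulated number of claims in the year
    severities = rng.lognormal(mean=12.0, sigma=2.0, size=n_claims)
    annual_losses[i] = severities.sum()  # aggregate ground-up loss

# Read off the percentile (confidence level) results described above.
for p in (50, 90, 95):
    print(f"{p}th percentile: ${np.percentile(annual_losses, p):,.0f}")
```

Rather than a single point estimate, the full `annual_losses` distribution lets a firm read off any percentile of interest, which is the basis for retention and limit analysis.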
It is important to understand that actuarial models cannot be designed to fully capture the impact of all elements that may affect an actual outcome. This includes missing data and information that is qualitative in nature:
- Events not in data: model results are dependent on the data inputs. Therefore, “black swan” events, which, by definition, are not in the firm’s or industry’s claims history/data inputs, are not captured in its outputs.
- Evolving nature of firm’s clientele: actuarial models fitted to historical claims have difficulty capturing changes in a firm’s clientele unless granular data is available at the client engagement level.
- New business lines that lead to new risk exposures: since models are dependent on historical claims experience, the models may not be able to fully capture the inherent and qualitative uncertainty of new risks involving growth areas or changes in a firm’s service mix.
- Qualitative changes in regulatory and legal environments: actuarial models have difficulty quantifying the impact of qualitative changes in the professional, regulatory and legal environments.
What if there is not enough data?
The adage “garbage in, garbage out” is certainly applicable to actuarial models. The higher the quality, relevance, and quantity of the data, the more reliable and robust the results. The reverse situation is, of course, true as well – the lower the quality of the data inputs, the less reliable the results.
Inevitably, there will be situations where the desired data is not available. To remedy this, the modeler can employ various techniques:
- Supplement individual firm data with any available industry data
- Test the sensitivity of the model results by using alternate assumptions
- Test the impact of additional hypothetical claims
- Conduct scenario workshops
- Construct an “exposure-driven” model (this type of approach is typically used to model natural catastrophes such as hurricanes or earthquakes)
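The first technique above, supplementing firm data with industry data, is often implemented as a credibility blend, in which the firm’s own experience is weighted against an industry benchmark according to how much firm data exists. The sketch below uses a simple Bühlmann-style credibility weight; all figures are hypothetical.

```python
# Simple credibility blend (classical actuarial technique). The firm's own
# average loss is weighted against an industry benchmark; the weight grows
# as more years of firm data become available. All values are hypothetical.
n_firm_years = 5   # years of the firm's own claims data
k = 20             # credibility constant (a judgment-based assumption)
z = n_firm_years / (n_firm_years + k)  # credibility weight in [0, 1)

firm_avg_loss = 1_200_000      # firm's observed average annual loss
industry_avg_loss = 2_000_000  # industry benchmark average annual loss

blended = z * firm_avg_loss + (1 - z) * industry_avg_loss
print(f"credibility weight z = {z:.2f}, blended estimate = ${blended:,.0f}")
# prints: credibility weight z = 0.20, blended estimate = $1,840,000
```

With only five years of firm data, the blended estimate leans heavily on the industry benchmark; as firm data accumulates, the weight shifts toward the firm’s own experience.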
These techniques help test the robustness of the model. For example, if a slight tweak in an assumption or an input would dramatically change the output or the interpretation of those results, the firm using the model to plan for risk must be aware of and understand these sensitivities.
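A sensitivity test of the kind described above can be as simple as rerunning the model with one assumption nudged and comparing a tail percentile. The sketch below shocks a hypothetical lognormal severity volatility parameter; all figures are illustrative assumptions, not calibrated values.

```python
import numpy as np

def simulate_annual_losses(freq_mean, sev_mu, sev_sigma,
                           n_sims=50_000, seed=0):
    """Simulate aggregate annual losses under a frequency/severity model."""
    rng = np.random.default_rng(seed)
    claim_counts = rng.poisson(freq_mean, size=n_sims)
    return np.array(
        [rng.lognormal(sev_mu, sev_sigma, n).sum() for n in claim_counts]
    )

# Base assumptions versus a modest shock to severity volatility.
base = simulate_annual_losses(3, 12.0, 2.0)
shocked = simulate_annual_losses(3, 12.0, 2.2)

p95_base = np.percentile(base, 95)
p95_shocked = np.percentile(shocked, 95)
print(f"95th percentile: ${p95_base:,.0f} -> ${p95_shocked:,.0f} "
      f"({p95_shocked / p95_base - 1:+.0%})")
```

If a small change in one assumption moves the tail percentile materially, that sensitivity is exactly what the firm using the model needs to understand before relying on its output.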
Actuarial models are just one of several tools available to understand risk. Professional service firms need to understand the assumptions behind their modeling. Firms that are aware of the limitations of models will account for them in their decision-making to minimize a false sense of security.
Since models cannot predict the future with certainty, modeled results will always differ from actual outcomes and may therefore be dismissed as “wrong”. However, when a model’s inputs and outputs are considered carefully so that deviations between actual and predicted results are minimized, it can be a useful tool for understanding and planning for risk.
Aon’s Professional Services Practice values your feedback. To discuss any of the topics raised in this article, please contact Henry Lim or Anastasios Serafim.
Henry Lim
Senior Vice President and Executive Director

Anastasios Serafim
Senior Vice President and Executive Director