Read how reinsurance companies that underwrite hurricane risk use catastrophe models to provide data.
Charley, Frances, Ivan, Jeanne, Katrina, Rita, Wilma, Ike, Irene, Sandy, Harvey, Irma, Maria: this motley band are celebrities in the P&C re/insurance corner of the world because they are the most expensive US hurricanes of the last two decades. “Expensive” for the purpose of this discussion means a PCS estimate for each hurricane that exceeds $3 billion. Combined, they account for almost $200 billion of insured loss, and collectively they have spawned an international industry ecosystem of insurers, reinsurers, brokers, modellers, capital providers, fund administrators, MGAs, consultants and others to manage, analyse, fund and finance the risk posed by future storms of their ilk. US hurricane is the industry’s peak risk.
In any game of chance, one critical success factor is understanding the probabilities of various outcomes, and the hurricane game is no exception. The capital providers who underwrite this risk are taking a big gamble, and their understanding of the outcome probabilities comes mainly from catastrophe models. These models may be vendor-supplied, or developed in-house, or some combination of the two, so there is a lot of variety, but most tend to follow the same basic approach of simulating many years’ worth of hypothetical tropical storms. Model developers come up with a long list (on the order of tens of thousands) of potential storms with enough variety of characteristics that they hope to come close to covering every possible storm that might occur. Each individual storm is assumed to occur very rarely, with its frequency expressed as a very small annual rate of occurrence (0.0002 times per year, for example).
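The catalog approach described above can be sketched in a few lines of Python. The catalog size, storm IDs, and rates below are invented for illustration (real vendor catalogs are far richer), and the sketch uses the common shortcut that at such tiny annual rates a single occurs/doesn't-occur draw closely approximates a Poisson count:

```python
import random

# Hypothetical mini-catalog: each entry is (storm_id, annual_rate).
# Real catalogs hold tens of thousands of storms with rates on the
# order of the 0.0002/year mentioned above; these values are made up.
catalog = [(f"storm_{i:05d}", 0.001) for i in range(10_000)]

def simulate_year(catalog, rng):
    """Draw one simulated year. Each storm's occurrence count is
    Poisson(rate); at rates this small, a single Bernoulli trial
    (occurs / does not occur) is a very close approximation."""
    return [storm_id for storm_id, rate in catalog if rng.random() < rate]

rng = random.Random(42)
events = simulate_year(catalog, rng)
print(f"{len(events)} simulated storm occurrences this year")
```

With 10,000 storms at a rate of 0.001 each, the expected number of events per simulated year is 10; repeating the draw over many years builds up the loss distribution.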
There is a probability distribution called the Poisson distribution that is convenient to use in a situation like this. Although “poisson” means fish in French, the distribution is named after French mathematician Siméon Denis Poisson. It expresses the probability of a given number of events (such as hurricanes) occurring in a fixed interval of time (such as one year) if these events occur with a known constant rate and independently of the time since the last event. The Poisson has a few properties that make it a popular choice for cat modelling. First, the distribution has only one parameter, which is equal to its mean. No need to make assumptions about higher-order moments; all you need to know is the annual rate of occurrence, and the distribution’s functional form provides the rest. Second, the sum of independent Poisson variables is also a Poisson distribution, with a parameter equal to the sum of the parameters of the component distributions. This means that if each of the hypothetical hurricanes has a Poisson frequency distribution, then the total number of hurricanes in a year also follows a Poisson distribution, with the average number of hurricanes equal to the sum of annual rates across all storms.
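The additivity property is easy to verify numerically. The sketch below (storm rates are illustrative) computes the probability of exactly one event in total by convolving two independent Poisson distributions, and checks that it matches a single Poisson with the summed rate:

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """P(X = k) for a Poisson random variable with mean lam."""
    return lam**k * exp(-lam) / factorial(k)

# Two hypothetical storms with tiny annual rates (illustrative values):
lam1, lam2 = 0.0002, 0.0005

# P(total events = 1), computed by convolving the two distributions...
p_convolved = sum(poisson_pmf(j, lam1) * poisson_pmf(1 - j, lam2)
                  for j in range(2))

# ...equals the Poisson pmf with the summed rate:
p_summed = poisson_pmf(1, lam1 + lam2)
print(p_convolved, p_summed)
```

The two numbers agree to floating-point precision, which is what lets a model sum tens of thousands of per-storm rates into one annual frequency distribution.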
So the Poisson is a nice, tractable solution to a modelling problem, but how well does it work? Let's fit a Poisson distribution to our twenty-year history of expensive hurricanes and see how it performs. Our number of events by year is as follows: four in 2004 (Charley, Frances, Ivan, Jeanne); three in 2005 (Katrina, Rita, Wilma); one each in 2008 (Ike), 2011 (Irene) and 2012 (Sandy); three in 2017 (Harvey, Irma, Maria); and zero in the remaining fourteen years of the window.
Our total is 13 events in 20 years, giving us an annual frequency of 0.65 per year. The Poisson parameter is equal to the mean frequency, so the functional form of our frequency distribution is:

P(X = k) = (0.65^k · e^(−0.65)) / k!
where X is the number of events in one year. For zero events, k = 0 and our probability is simply e^(−0.65), or about 0.522; roughly ten out of twenty years. Using our distribution, we can compare the predicted number of years with one event, two events, etc., with the observed history as shown in the chart below.
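The predicted side of that comparison takes only a few lines with the standard library; the fitted parameter 0.65 and the 20-year window come straight from the discussion above:

```python
from math import exp, factorial

lam = 0.65   # fitted Poisson parameter: 13 events / 20 years
years = 20

def pmf(k):
    """P(X = k) for a Poisson with mean lam."""
    return lam**k * exp(-lam) / factorial(k)

for k in range(3):
    print(f"P(X = {k}) = {pmf(k):.3f} -> {years * pmf(k):.1f} predicted years")

p_three_plus = 1 - sum(pmf(k) for k in range(3))
print(f"P(X >= 3) = {p_three_plus:.3f} -> {years * p_three_plus:.1f} predicted years")
```

The fitted distribution predicts roughly 10.4 zero-event years and well under one year with three or more events, versus the 14 zero-event years and 3 multi-event years actually observed.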
Without getting into the details of distributional goodness-of-fit tests, let's simply notice that the fitted Poisson predicts fewer zero-event years, and far fewer three-plus-event years, than we have observed over our recent history. Instead it predicts that almost half the time we will have one or two events in a year. One other item to note is that the observed variance of the annual number of costly hurricanes is 1.5. For the Poisson distribution, the variance is equal to the mean, so 0.65 in this case. That's less than half the observed variance.
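The overdispersion is easy to check directly. Using the year-by-year counts implied by the storm list above (the landfall years are well known; the zero-count filler assumes a 20-year window), the sample variance comes out well above the mean:

```python
# Annual counts of $3B+ hurricanes: 2004 (4), 2005 (3), 2008, 2011,
# 2012 (1 each), 2017 (3), and zero in the other 14 years of the window.
counts = [4, 3, 1, 1, 1, 3] + [0] * 14

mean = sum(counts) / len(counts)
# Sample variance (n - 1 denominator):
var = sum((c - mean) ** 2 for c in counts) / (len(counts) - 1)
print(f"mean = {mean:.2f}, variance = {var:.2f}")
```

A variance of roughly 1.5 against a mean of 0.65 is the hallmark of event clustering that a single-parameter Poisson cannot capture.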
I promise there's a point to putting you through all of this math. Model-builders will try to produce a model that is as accurate as possible. To that end, the model will typically be calibrated to produce an average annual hurricane cost that is close to what has been observed historically (with appropriate adjustments for changes in exposure density, construction costs, etc.). Because of the use of the Poisson distribution, most of this average loss must be packed into one or two events; however, in the last 20 years, about three-quarters of the major hurricane loss came from years with three or more events. The most common type of catastrophe reinsurance purchased by primary companies provides one occurrence limit and one option to reinstate that limit. It also imposes a retention that applies per event. The popularity of this type of coverage is driven by the mindset on the part of rating agencies and senior managers to focus on being prepared for "The Big One", and it seems purpose-built for Poisson-like event behavior, which predicts just one or two significant events in a year.
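The difference between the two cover types shows up clearly in a multi-event year. The sketch below is a heavily simplified illustration, not a real treaty calculation: the loss amounts, retentions, and limits are invented, and real contracts involve reinstatement premiums, coinsurance, and other terms omitted here:

```python
def occurrence_recovery(event_losses, retention, limit):
    """Per-occurrence cover with one reinstatement: the retention and
    limit apply to each event separately, and the limit can be used
    at most twice in the year (original limit plus one reinstatement).
    Simplified sketch -- reinstatement premium and other terms omitted."""
    remaining = 2 * limit
    recovered = 0.0
    for loss in event_losses:
        r = min(max(loss - retention, 0.0), limit, remaining)
        recovered += r
        remaining -= r
    return recovered

def aggregate_recovery(event_losses, agg_retention, agg_limit):
    """Aggregate cover: the retention and limit apply to the annual total."""
    total = sum(event_losses)
    return min(max(total - agg_retention, 0.0), agg_limit)

# A hypothetical 2004-style year with four mid-sized events (in $bn):
year = [7.0, 5.0, 9.0, 4.0]
print(occurrence_recovery(year, retention=6.0, limit=5.0))       # -> 4.0
print(aggregate_recovery(year, agg_retention=6.0, agg_limit=20.0))  # -> 19.0
```

In this made-up year, only two of the four events pierce the per-event retention, so the occurrence cover recovers just $4bn of a $25bn annual total, while the aggregate cover recovers $19bn; the occurrence structure is built for one big event, not a cluster of moderate ones.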
If the experience of the last 20 years is typical, a Poisson-based model that correctly predicts the average annual hurricane cost will tend to produce pricing indications for per-occurrence covers that are relatively less attractive than the indicated pricing for aggregate covers. An astute reinsurer may attempt to arbitrage this modeling discrepancy by providing per-occurrence covers to primary cedants, while purchasing aggregate-type retrocession for its own account. It’s the hurricane version of buy low/sell high; easier than shooting fish in a barrel!