LOSS-SAMPLER

Identifying unseen volatility in cat model output to enhance cat risk decision-making


We are building a tool that combines catastrophe models with applied science, climate models, actuarial loss modelling and structured uncertainty analysis in a clean, coherent way, so that model users get the most out of the catastrophe models available to them.
 
The LOSS-SAMPLER re-samples loss output from catastrophe models so that the impacts of multiple scientific debates, whether in the present or under climate change, can be presented transparently and coherently.

The tool improves Climate and Environment Risk management in: 
  • The present, by unmasking targeted uncertainty in catastrophe risk portfolios. The LOSS-SAMPLER identifies hotspots of risk within a portfolio that may be susceptible to previously unseen volatility.

  • The future, by allowing coherent translation of climate change science into loss impacts. This is a gamechanger for the industry, as there is currently no vendor-independent, commercially available tool that provides such coherent translation in an easily digestible way. The LOSS-SAMPLER also allows rapid targeting of new narratives in scientific journals and media headlines, so that their impacts on modelled losses can be quantified and responded to swiftly.

The LOSS-SAMPLER will facilitate the building of resilience to climate and environmental risk in private industry and broader society.
 
We first considered two major peril-regions: US hurricane and US earthquake. We have developed multiple frequency-intensity distributions for both perils in the present day, and for hurricane under climate change, reflecting different narratives in the scientific literature. We have structured and sampled loss data from a live, real-world catastrophe risk portfolio.

To model present-day US hurricane risk, we use HURDAT2 observational data.
A foundational question that arises is: which years should the default view of risk include?

For pricing hurricane risk, the re/insurance industry typically uses a “Long-Term Rate” (LTR) as a baseline view of risk. An LTR reflects a period of the past that maximizes the historical data used, so that the full tail of the distribution is estimated as reliably and accurately as possible. The choice of start date for an LTR is, however, highly subjective, and there is no widespread agreement about which historical period is most suitable for this baseline. There is always a trade-off between the length of the dataset and the quality of the observations, which typically degrades further back in time.
 
Here we subset the historical data into different periods which reflect different scientific views of possible LTRs for US hurricanes.
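
As a rough illustration of the kind of subsetting involved, the sketch below computes annual landfall rates per Saffir-Simpson category for a handful of candidate LTR start years. The period names, start years and the layout of the `landfalls` data frame are illustrative rather than the project's actual choices; only "Reliable Landfall Years" is a name used in the exhibits here.

```python
import pandas as pd

# Hypothetical LTR start years reflecting different scientific views
# (names and years are illustrative, not the project's actual choices).
LTR_PERIODS = {
    "Full Record": 1851,
    "Reliable Landfall Years": 1900,
    "Satellite Era": 1966,
    "Recent Climate": 1980,
}

def landfall_rates(landfalls: pd.DataFrame, end_year: int = 2023) -> pd.DataFrame:
    """Annual US landfall rates per Saffir-Simpson category for each LTR.
    `landfalls` is assumed to hold one row per US landfall, with 'year' and
    'category' (1-5) columns parsed from HURDAT2."""
    rates = {}
    for name, start in LTR_PERIODS.items():
        subset = landfalls[(landfalls["year"] >= start) & (landfalls["year"] <= end_year)]
        n_years = end_year - start + 1
        rates[name] = subset.groupby("category").size() / n_years
    return pd.DataFrame(rates).fillna(0.0)
```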

Landfall rates per category for different LTRs.


The raw frequency-intensity rates above can be translated into return period (RP) curves by randomly sampling events from the event loss table.
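
A minimal Monte Carlo sketch of this translation is shown below. It assumes annual landfall counts per category are Poisson-distributed at the LTR rate and that candidate windspeeds per category are drawn from the event loss table; the function and variable names are illustrative, not the LOSS-SAMPLER's actual interface.

```python
import numpy as np

def rp_curve_from_rates(rates_per_category: dict, windspeed_pools: dict,
                        n_years: int = 100_000, seed: int = 0):
    """Turn annual frequency-intensity rates into an occurrence RP curve for
    windspeed.  `rates_per_category` maps category -> annual landfall rate;
    `windspeed_pools` maps category -> array of candidate windspeeds
    (e.g. drawn from the event loss table)."""
    rng = np.random.default_rng(seed)
    annual_max = np.zeros(n_years)
    for cat, rate in rates_per_category.items():
        counts = rng.poisson(rate, size=n_years)          # events per simulated year
        for year in np.nonzero(counts)[0]:
            picks = rng.choice(windspeed_pools[cat], size=counts[year])
            annual_max[year] = max(annual_max[year], picks.max())
    # Empirical exceedance: rank annual maxima and convert ranks to return periods.
    sorted_ws = np.sort(annual_max)[::-1]
    return_periods = (n_years + 1) / np.arange(1, n_years + 1)
    return return_periods, sorted_ws
```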

Return period curves for windspeed for different LTRs


For a more meaningful visualization of the discrepancy between different scientific narratives, we plot the relative differences in windspeed at various RPs.
Relative difference in windspeed compared to the "Reliable Landfall Years" LTR
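
One simple way to produce such a comparison is to interpolate each LTR's curve at a common set of return periods and normalise by the baseline. The sketch below does this, assuming each curve is stored as a (return periods, windspeeds) pair like the output of the sampler above; the target RPs are hypothetical.

```python
import numpy as np

def relative_diff_at_rps(rp_curves: dict, baseline: str = "Reliable Landfall Years",
                         target_rps=(10, 25, 50, 100, 250)) -> dict:
    """Relative windspeed difference vs. a baseline LTR at chosen return periods.
    `rp_curves` maps LTR name -> (return_periods, windspeeds) arrays."""
    def at_rps(curve):
        rps, ws = curve
        order = np.argsort(rps)                     # np.interp needs ascending x
        return np.interp(target_rps, rps[order], ws[order])

    base = at_rps(rp_curves[baseline])
    return {name: (at_rps(curve) - base) / base
            for name, curve in rp_curves.items() if name != baseline}
```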


We can see below how small differences in windspeed translate to much larger changes in loss.
Relative difference in losses compared to the "Reliable Landfall Years" LTR
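
The amplification follows from the strongly non-linear relationship between windspeed and damage. As a purely illustrative calculation (this is not the LOSS-SAMPLER's vulnerability module), a power-law damage relationship, loss proportional to windspeed to the power k with k between roughly 3 and 6, turns a 10% windspeed difference into a 30-80% loss difference:

```python
# Purely illustrative: a power-law damage relationship, loss ~ windspeed**k,
# amplifies a modest windspeed shift into a much larger loss shift.
for k in (3, 4, 6):
    windspeed_change = 0.10                        # +10% windspeed
    loss_change = (1 + windspeed_change) ** k - 1
    print(f"k={k}: +10% windspeed -> +{loss_change:.0%} loss")
# k=3 -> +33%, k=4 -> +46%, k=6 -> +77%
```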


To model US earthquake risk, data from the USGS ANSS Comprehensive Earthquake Catalogue are used.

Here, foundational modelling decisions include: which declustering method to use, and from which year the record is complete for events of a given magnitude.

Large earthquakes trigger other earthquakes, building up clusters in space and time that can bias the frequency-intensity statistics of seismic catalogues. To obtain the catalogue of independent events required for seismic hazard and risk analyses, dependent foreshocks and aftershocks must first be decoupled (declustered) from their mainshocks and removed from the catalogue. However, choosing an appropriate declustering process is not straightforward, owing to the complex nature of earthquake phenomena and the wide choice of declustering methods, all of which employ subjective rules to distinguish independent from dependent events. As a result, different declustering methods usually yield large variations in the final declustered catalogue.
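
As an illustration of one family of approaches, the sketch below implements a simple Gardner-Knopoff-style window method: an event is flagged as dependent if a larger event occurs within that larger event's magnitude-dependent space-time window. The window coefficients and the O(n²) structure are illustrative only; the declustering methods compared in the LOSS-SAMPLER may differ.

```python
import numpy as np

def gk_windows(mag):
    """Illustrative Gardner-Knopoff (1974)-style space-time windows
    (distance in km, time in days); exact coefficients vary between studies."""
    dist_km = 10 ** (0.1238 * mag + 0.983)
    time_days = np.where(mag >= 6.5,
                         10 ** (0.032 * mag + 2.7389),
                         10 ** (0.5409 * mag - 0.547))
    return dist_km, time_days

def decluster(times_days, lats, lons, mags):
    """Flag independent events with a simple windowing rule: an event is
    dependent (fore- or aftershock) if a larger event occurs within the larger
    event's space-time window.  O(n^2) sketch for illustration only."""
    times_days, lats, lons, mags = map(np.asarray, (times_days, lats, lons, mags))
    independent = np.ones(len(mags), dtype=bool)
    dist_w, time_w = gk_windows(mags)
    for i in np.argsort(mags)[::-1]:               # largest mainshocks first
        if not independent[i]:
            continue
        # crude flat-Earth distance approximation in km
        d_km = np.hypot((lats - lats[i]) * 111.0,
                        (lons - lons[i]) * 111.0 * np.cos(np.radians(lats[i])))
        within = (np.abs(times_days - times_days[i]) <= time_w[i]) & (d_km <= dist_w[i])
        independent[within & (mags < mags[i])] = False   # keep the mainshock itself
    return independent
```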

Moreover, since their advent in the early 1900s, seismic monitoring instruments have advanced significantly and their networks have expanded, greatly improving our ability to reliably locate earthquakes and measure their size. However, the amount and quality of historical data varies widely across different parts of the Earth, and the long return periods of some large earthquakes (sometimes tens to hundreds of thousands of years) mean that the relatively short instrumental record is insufficient to obtain a robust long-term magnitude-frequency relationship for the region in question. To avoid underestimating potentially damaging quakes, seismic hazard analysis often prescribes the inclusion of historical and even paleoseismic events to help estimate the maximum possible magnitudes, and their likelihoods, for a particular region. The completeness year is defined as the earliest year from which events of a given magnitude are reliably located and measured. Deciding this threshold is again a subjective exercise.
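
A common way to encode this decision is a magnitude-dependent completeness table: larger events are trusted further back in time than smaller ones. The sketch below applies such a filter; the magnitude thresholds, years and column names are hypothetical.

```python
import pandas as pd

# Hypothetical magnitude-dependent completeness years (illustrative values):
# events of at least this magnitude are treated as reliably recorded from
# the given year onwards.
COMPLETENESS = {4.5: 1970, 5.5: 1930, 6.5: 1850, 7.5: 1700}

def apply_completeness(catalogue: pd.DataFrame) -> pd.DataFrame:
    """Keep only events recorded at or after the completeness year for their
    magnitude class.  `catalogue` is assumed to have 'year' and 'magnitude'
    columns (e.g. parsed from an ANSS catalogue download)."""
    keep = pd.Series(False, index=catalogue.index)
    for min_mag, year in sorted(COMPLETENESS.items()):
        in_class = catalogue["magnitude"] >= min_mag
        keep |= in_class & (catalogue["year"] >= year)
    return catalogue[keep]
```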

Below we show the frequency of earthquakes of different magnitudes depending on the declustering method and the completeness year chosen.
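
Combining the two choices, annual rates per magnitude class can then be computed for each declustered view of the catalogue, using each class's completeness window as its exposure period. The helper below sketches this; as before, the data layout and class boundaries are illustrative.

```python
import pandas as pd

def rates_per_magnitude(declustered: pd.DataFrame, completeness: dict,
                        end_year: int = 2023) -> pd.Series:
    """Annual earthquake rate per magnitude class, using each class's
    completeness year as its observation window.  `completeness` maps a lower
    magnitude bound to the year from which that class is treated as complete
    (e.g. the hypothetical COMPLETENESS mapping above); `declustered` needs
    'year' and 'magnitude' columns."""
    classes = sorted(completeness.items())
    uppers = [lo for lo, _ in classes[1:]] + [10.0]      # open-ended top class
    rates = {}
    for (lo, start), hi in zip(classes, uppers):
        in_class = (declustered["magnitude"] >= lo) & (declustered["magnitude"] < hi)
        rates[f"M{lo}-{hi}"] = in_class.sum() / (end_year - start + 1)
    return pd.Series(rates)
```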

 
Earthquake rates per magnitude class for different distributions

By applying the same methodology as for hurricane, we have translated the different frequency-intensity distributions into RP curves.

Return period curves for different views of historical earthquake magnitude

Relative difference in magnitude compared to "Year Complete 1700"

The small differences in hazard resulting from different scientific debates, especially at high RPs, are an artefact of the inadequate sample of historical earthquake data relative to the long RPs of large (i.e. M7+) events. To overcome this limitation, we aim to fill these gaps with additional geological information as a next step.

As seen for US hurricane, small differences in earthquake magnitude translate to substantial changes in losses.
Relative difference in losses compared to "Year Complete 1700"


More work is in the pipeline, but the LOSS-SAMPLER already allows:
  • Input of scientific information and generation of coherent and connected loss information in an automated and rapid way;
  • Transparent communication of how changes in hazard translate to changes in loss, thereby facilitating academia-industry partnerships and collaborative research projects;
  • Catastrophe model users to scrutinise, through targeted return period analytics, the highly precise reinsurance structures and/or layer attachments that are key to their portfolios;
  • Identification of potential volatility and ‘sore thumb’ regions that may underlie some of the more generic overall output;
  • Automated creation of exhibits and narratives for scientifically defensible responses to regulatory questions (e.g. on climate change solvency).