Dr Tom Philp

Chief Executive Officer

In 2015, the RMS North Atlantic Hurricane Medium Term Rate (MTR) model documentation was dropped onto the desk of a (relatively) fresh-faced science analyst. His job: pick it apart and provide some independent perspectives. It was my first in-depth North Atlantic Hurricane cat model review. Nearly ten years later, I’m still thinking about it. 

At the time, an ensemble of no fewer than thirteen distinct hurricane landfall rate sets had been devised by RMS, each trying to capture the impacts of diverse scientific narratives and debates in the peer-reviewed literature. How did the Atlantic Multidecadal Oscillation (AMO) affect rates? What did the choice of historical start date do to views of risk? How should one treat basin vs landfall data? The thirteen different rate sets were an eye-opener for me, a laudable attempt to capture the multifarious scientific beliefs of the time, and something from which I learnt a great deal. In many instances, the RMS MTR philosophy forms the basis for debates I continue to have in academic circles when I hear overly simplistic recommendations for how we, as an industry, should create a baseline view of natural peril risk.

To throw a further curveball into that model review, there was a huge split in the fundamental beliefs of the hurricane research community at the time. Some believed we were in a warm phase of the AMO, some believed we were in a cool phase, some believed we were in the process of flipping from one phase to the other, and some didn’t believe in the AMO at all. What on earth should a risk manager do?

It was here that my opinions diverged from those of RMS; they had blended the thirteen sets using a historical scoring methodology to create a single, unified view of near-term hurricane risk.

The philosophical conflicts in this blending seemed insurmountably problematic to me. How was one supposed to coherently blend rate sets that were fundamentally in conflict with one another (as was the case for the AMO+/- and non-AMO sets)? How could historical scoring be meaningfully applied if nothing could be considered truly “out-of-sample”, given that the AMO narratives in the literature had themselves been built on the entire historical dataset? And why were there thirteen rate sets – why not 3, or 23, or 2300? Wouldn’t the seemingly arbitrary choice of “n” models influence the final output? I decided to extract the information from all thirteen sets and see what we could do with it when presented in its fullest, deconstructed state.
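
To make the tension concrete, here is a minimal sketch in Python of how score-weighted blending collapses an ensemble of conflicting rate sets into a single number, discarding the spread that the deconstructed view preserves. Every number below is made up for illustration – these are not the actual RMS MTR rates, scores, or methodology.

```python
import numpy as np

# Hypothetical annual landfall rates from thirteen rate sets built on
# conflicting narratives (AMO+, AMO-, no-AMO, varying start dates, ...).
# All values are illustrative, not the RMS MTR figures.
rates = np.array([0.55, 0.72, 0.61, 0.48, 0.80, 0.66, 0.59,
                  0.70, 0.52, 0.63, 0.75, 0.58, 0.68])

# Hypothetical historical-scoring weights (e.g. each set's fit to the
# observed record), normalised to sum to one.
scores = np.array([0.90, 1.20, 1.00, 0.70, 1.30, 1.10, 0.90,
                   1.20, 0.80, 1.00, 1.25, 0.85, 1.10])
weights = scores / scores.sum()

# The single, unified view: one number, all disagreement averaged away.
blended = weights @ rates
print(f"Blended rate:   {blended:.3f} landfalls/yr")

# The deconstructed view keeps the full spread that blending hides.
print(f"Ensemble range: {rates.min():.2f} to {rates.max():.2f}")
print(f"Ensemble std:   {rates.std(ddof=1):.3f}")
```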

After showing this to decision-makers, though, it was clear that I was still learning what the market really needed. I saw firsthand the challenge RMS faced: “Give us your best estimate; this deconstructed uncertainty doesn’t help” was largely the response I received. I retreated with my tail between my legs; my only consolation was that at least it wasn’t my neck on the line at the time.

But it was a challenge that nearly caused me to leave the industry – if the market needed a single, precise number to peg itself to, how could we make robust decisions given the severe uncertainty I knew existed in the models? The thirteen MTR rate sets were, after all, just the tip of the uncertainty iceberg – we hadn’t even got to the soup of vulnerability yet. It seemed a challenge with no useful answer.

However, I had faced a similar challenge during my PhD on extreme Extra-Tropical Cyclone wind fields, and I knew there was a body of literature that existed exclusively to help model users see through such severe uncertainty in their model-worlds. This literature broadly falls under the title of “Decision Theory”, a field of research concerned with reasoning about how people make real-world choices. It is a highly interdisciplinary field, combining input from disciplines as wide-ranging as psychology, behavioural economics, probability theory and numerical modelling.

Luckily, two of the leading contributors to decision theory – Professors Roman Frigg and Richard Bradley – were just a stone’s throw from my office in the City of London, at the London School of Economics’ Department of Philosophy in High Holborn, so off I went.

What started then has led to a long collaboration that has re-shaped my entire worldview when it comes to weather, climate and risk. A paper was published in Philosophy of Science off the back of our formative conversations[1], in which a framework based upon the decision-theoretic “Confidence Approach”[2] was presented. This framework provides a way to retain all of the information created by an ensemble of models, collapsing the cone of uncertainty only once decision-relevant, personal attitudes to risk appetite and volatility have been captured and quantified. Applied in our catastrophe risk management world, the framework allows risk-averse and risk-seeking market practitioners alike to extract the maximum level of meaningful information available in their complex (and usually costly) catastrophe models.
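
For a flavour of the idea, here is a heavily simplified, hypothetical sketch of the Confidence Approach’s shape: nested sets of ensemble estimates that widen as the stakes rise, collapsed to a single number only after the decision-maker’s attitude to risk is applied. The sets, stakes categories, and decision rule below are illustrative assumptions of mine, not the framework as published in [1] or [2].

```python
# Nested confidence sets of hypothetical annual landfall rates, from a
# tight "best-guess" cluster out to the full ensemble spread. Wider
# sets carry higher confidence that the truth lies within them.
confidence_sets = {
    "low":    [0.61, 0.63, 0.66],                                 # core models
    "medium": [0.55, 0.61, 0.63, 0.66, 0.72],                     # + moderate outliers
    "high":   [0.48, 0.55, 0.61, 0.63, 0.66, 0.70, 0.72, 0.80],   # full ensemble
}

def rate_for_decision(stakes: str, risk_averse: bool) -> float:
    """Pick the confidence set the stakes demand, then collapse it
    according to the user's attitude to risk (an illustrative rule)."""
    required = {"routine": "low", "material": "medium", "ruinous": "high"}
    candidates = confidence_sets[required[stakes]]
    # A risk-averse user plans against the worst rate in the chosen set;
    # a risk-seeking one might plan on the most favourable.
    return max(candidates) if risk_averse else min(candidates)

print(rate_for_decision("routine", risk_averse=True))   # 0.66
print(rate_for_decision("ruinous", risk_averse=True))   # 0.80
```

The point of the sketch is that no information is thrown away up front: the same ensemble supports different, defensible answers once the stakes and the user’s risk appetite are made explicit.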

In 2021, with the backing of the Lighthill Risk Network, Maximum Information and LSE began a project to take this framework from a theoretical whiteboard to a practical tool. Collaborating with Reask and their climate-model-connected stochastic Tropical Cyclone sets, we decided to focus on the potential use of seasonal hurricane predictions, and how the ensemble of models might influence decisions in any given North Atlantic Hurricane season.

I’m happy to say that the tool is now live on our website, available at the following link:
https://seasonalpredictions.maxinfo.io/

I strongly believe that the framework has the potential to evolve how we approach the catastrophe modelling challenge, but if it is to truly reach its potential, it will need input and collaboration from model developers, model users, and everyone who interacts with them in the multi-disciplinary chains in between.

In the spirit of the transparency and collaboration that we are trying to foster in the catastrophe modelling space, the tool is completely open to anyone. Please check it out, and thanks again to Lighthill, LSE, Reask, and the team here at Maximum Information for their tireless efforts in getting this idea to where it is now.