With further tightening of Solvency II deadlines and with IFRS 17 on the horizon, reserving processes need to be enhanced to remain fit for purpose in the future, writes Jean Rea, a Director in our Actuarial Services practice.
Actuarial reserving methods have not changed very much throughout their history and it seems that this is a process which has dodged disruption. As an example, the Bornhuetter-Ferguson method, developed in the 1970s before computers and spreadsheets as we know them now were widely available, still underpins non-life reserving techniques today, particularly for long-tailed lines of business. Yet the universe of risks to be managed is changing rapidly, from the effects of climate change to self-driving cars disrupting the traditional risk model.
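To make the method concrete, here is a minimal sketch of the Bornhuetter-Ferguson calculation for a single accident year. All figures are illustrative, not taken from any real book of business:

```python
def bf_ultimate(reported_to_date: float, prior_expected_losses: float,
                cdf: float) -> float:
    """Bornhuetter-Ferguson ultimate claims estimate.

    reported_to_date      -- claims reported (or paid) to the valuation date
    prior_expected_losses -- a priori ultimate, e.g. premium x expected loss ratio
    cdf                   -- cumulative development factor to ultimate
    """
    pct_unreported = 1.0 - 1.0 / cdf   # expected fraction still to emerge
    return reported_to_date + prior_expected_losses * pct_unreported


# Illustrative inputs: 600 reported, a priori ultimate of 1,000,
# CDF of 1.25 (i.e. 80% of claims expected to be reported so far).
ultimate = bf_ultimate(600.0, 1000.0, 1.25)
reserve = ultimate - 600.0
```

The point of the method is visible in the code: unlike a pure chain-ladder projection, the unreported portion is anchored to the a priori expectation rather than scaled off reported claims, which stabilises the estimate for long-tailed, thinly developed years.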
Insurers can adopt technological tools in the actuarial function to enhance the reserving process and take advantage of the insights gained from more granular analysis of individual claims data. The enablers and facilitators of future change are new technologies coupled with enhanced computing power. These clearly have the potential to radically disrupt the historic approach across the end-to-end reserving process, impacting areas such as data preparation, analysis, reporting and visualisation.
New technologies can deliver enhanced efficiency leading to a greater frequency of reserve reviews and create capacity to provide deeper insights to stakeholders. Visualisation tools can be used to develop dynamic dashboards, providing fast insights to enhance business decisions and performance.
It is clear that there are challenges within existing reserving processes. Repeatable manual aspects of the process are time-consuming and subject to human error. Time spent by actuaries wrangling data, manually assessing the reliability of data and producing reports would be better spent on analysis.
Advanced analytics and big data are fundamentally changing actuarial work. Today's actuaries no longer have to follow the traditional methods, calculating reserves based on aggregate data patterns. Instead, robust software and vast computational power have unlocked new methods and models, including analysing individual claim and policy data in real time.
The reserve modernisation journey consists, broadly speaking, of three areas: optimisation, automation and next-generation reserving methods. Once reserving has been addressed, other key actuarial processes such as business planning, ORSA, SCR calculations and production of actuarial MI can be enhanced.
The first step is very simply to optimise what you currently have. Take a step back and look at your process, including a review of your data architecture and process flow. Perhaps apply methodologies such as Lean or Six Sigma: are there areas for efficiency gains? These could be as simple as step reduction, step removal or automation of manual aspects of the process.
An example of this for non-life insurance could be better utilisation of reserve roll-forward functionality within your reserving software. From a life insurance perspective, models can be streamlined and parallel runs performed, using the latest multi-core processors and multiple machines, thus reducing the time taken to produce the full set of results.
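The roll-forward idea can be sketched in a few lines: project the prior review's cumulative claims one period ahead using the previously selected development factors, then flag accident years where actual emergence deviates materially from expectation and so warrants a full re-review. The figures and the 2% tolerance below are made up for illustration:

```python
prior_review = {2021: 1500.0, 2022: 1100.0}   # cumulative claims at last review
selected_factors = {2021: 1.05, 2022: 1.20}   # selected next-period factors
actual_now = {2021: 1580.0, 2022: 1400.0}     # cumulative claims today


def roll_forward_flags(prior, factors, actual, tolerance=0.02):
    """Return accident years whose actual-vs-expected gap exceeds tolerance."""
    flags = {}
    for ay, cumulative in prior.items():
        expected = cumulative * factors[ay]
        deviation = actual[ay] / expected - 1.0
        if abs(deviation) > tolerance:
            flags[ay] = deviation
    return flags


flags = roll_forward_flags(prior_review, selected_factors, actual_now)
```

Accident years inside tolerance can be rolled forward mechanically between full reviews; only the flagged years need actuarial attention, which is where the efficiency gain comes from.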
Widely used and available automation tools can be harnessed to replicate the repetitive tasks of human users. Use cases within typical reserving processes include fetching data, running model updates, displaying results and feeding downstream systems with the results. This allows analysts to review and validate outcomes and selections, and then to interpret and communicate the insights. Tasks suitable for basic automation are characterised by structured data and rules that can be described within defined parameters. Taking life insurance as an example, key inputs to the reserving process include the assumptions surrounding decrements such as deaths and lapses. Historically, carrying out experience investigations in these areas has taken months of manual work. Newer technologies can automate and streamline these processes, making it easier to reflect emerging experience in the reserves.
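As a flavour of what such an automated experience investigation might compute, here is a minimal actual-versus-expected (A/E) lapse study, using hypothetical exposure records and an assumed expected-lapse basis; in practice the records would be fetched automatically from administration systems:

```python
from collections import defaultdict

# (policy duration, exposure in policy-years, actual lapses) -- illustrative
records = [
    (1, 500.0, 60), (1, 480.0, 55),
    (2, 450.0, 40), (2, 430.0, 35),
    (3, 400.0, 20),
]

expected_lapse_rate = {1: 0.10, 2: 0.08, 3: 0.05}  # assumed current basis


def actual_to_expected(records, expected):
    """A/E ratio by duration: actual lapses / (exposure x expected rate)."""
    exposure = defaultdict(float)
    actual = defaultdict(float)
    for duration, exposure_years, lapses in records:
        exposure[duration] += exposure_years
        actual[duration] += lapses
    return {d: actual[d] / (exposure[d] * expected[d]) for d in exposure}


ae = actual_to_expected(records, expected_lapse_rate)
```

A/E ratios persistently above or below 1.0 at a given duration signal that the basis should be revisited; automating the calculation makes it feasible to monitor this every reporting cycle rather than every few years.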
Taking non-life reserving as an example, current reserving analysis focuses on aggregated claims data in the form of a claims triangle. A next step would be to broaden the data used to derive development factors. The ultimate vision is claim-by-claim reserving, using transactional claims and policy-level data alongside machine learning techniques to determine claims reserves.
Moving from static approaches through an innovation cycle, all within a highly controlled environment, creates challenges in keeping management, investors, auditors and actuarial teams comfortable with the change. A focus on testing, together with a period of parallel processing, is key to addressing this challenge. Other challenges include a lack of vision and difficulty in gaining acceptance, as business decisions can create winners and losers.
To remain competitive in today’s environment, organisations need to improve their processes where possible, whether by reducing steps or automating certain aspects, allowing more frequent reviews and deeper, quicker insights. Less time spent wrestling with the process allows high-cost staff to spend more time on higher-value work and analysis. However, it is worth bearing in mind that, when enhancing the process, it is very important to maintain a balance: rigour for regulation, agility for the business and foresight to deal with future data requirements and next-generation methods.