Mathematical Approach for Insurance Models

Insurance solvency is a complex, technical field in which market theory is continuously being developed and improved. Increasing model complexity is invariably accompanied by increasing data requirements and cost, but it brings benefits in predictive power.

Applied mathematics has an important role to play in developing the tools needed for the analysis, simulation, and optimization of these types of complex problems.

Brainmize has a long, rich tradition of developing the mathematical tools needed to address key scientific and technical needs of the Risk and Actuary department.

The models will likely combine both deterministic and stochastic elements. Furthermore, the goal in studying these types of systems will not be simply to produce a simulation of the system, but rather to answer difficult questions regarding design, risk analysis, or optimization, with quantified bounds on the error.

Answering these types of questions will require combining new mathematical tools from several different areas. Four topical areas have been identified:

  • Modeling systems characterized by multiple sub-models
  • Optimization theory and algorithms
  • Stochastic systems and systems with uncertainty
  • Integration of data and modeling

Modeling systems characterized by multiple sub-models

As model complexity grows, new approaches will be needed to accurately model these types of systems. This is not simply a numerical analysis issue; one needs to start with an examination of the mathematical structure of the overall problem to analyze the interrelationship between the components and what information needs to be exchanged between them. Models may include multiple sub-models with vastly differing characteristics.

This underlying structure needs to be reflected in the discretization or coarse-graining used to approximate the overall model, and in the solver technology needed for effectively solving the resulting systems of equations.

Care needs to be taken to understand how the accuracy of the components affects the accuracy of the overall system and how uncertainties propagate through the system.
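As an illustration of how coupled sub-models exchange information, the following minimal sketch iterates between two hypothetical sub-models, an asset model and a liability model, using a block Gauss-Seidel (fixed-point) scheme with a convergence check. The model functions, parameter values, and names are illustrative placeholders, not actual Brainmize models.

```python
import numpy as np

# Minimal sketch: two hypothetical sub-models coupled through shared
# quantities, solved with a block Gauss-Seidel (fixed-point) iteration.
# The functions and constants below are illustrative placeholders.

def asset_submodel(liability_value):
    # Asset value depends on how much must be held against liabilities.
    return 100.0 - 0.3 * liability_value

def liability_submodel(asset_value):
    # Discounted liabilities depend on the assets backing them.
    return 80.0 - 0.2 * asset_value

def solve_coupled(tol=1e-10, max_iter=100):
    assets, liabilities = 0.0, 0.0            # initial guess
    for iteration in range(max_iter):
        new_assets = asset_submodel(liabilities)
        new_liabilities = liability_submodel(new_assets)
        change = max(abs(new_assets - assets),
                     abs(new_liabilities - liabilities))
        assets, liabilities = new_assets, new_liabilities
        if change < tol:                       # coupled system has converged
            return assets, liabilities, iteration + 1
    raise RuntimeError("coupling iteration did not converge")

if __name__ == "__main__":
    a, l, n_iter = solve_coupled()
    print(f"assets={a:.4f}, liabilities={l:.4f} after {n_iter} iterations")
```

The tolerance on the coupling iteration is one concrete place where the accuracy of the individual components visibly limits the accuracy of the overall system.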

Optimization theory and algorithms

Optimization research has made major strides in a wide variety of disciplines as a critical tool for design, prediction, strategic planning, operational decision making, the treatment of uncertainty, and multiscale model calibration.

Classical optimization algorithms routinely deliver solutions for linear problems with many variables and constraints. For nonlinear problems, however, they are often limited to delivering only local solutions.

Reaching the best decisions and outcomes for systems of rapidly increasing complexity requires the development of advanced, high-fidelity optimization technology that produces acceptable, robust solutions a decision maker can use in a timely fashion.

Finding global solutions of such problems also requires an abstraction and understanding of the underlying system, so that enhancements can be made at many levels.
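One simple and widely used way to move beyond purely local solutions is a multi-start strategy: run a local optimizer from many starting points and keep the best result. The sketch below assumes SciPy is available and uses a standard nonconvex test function (a Rastrigin-style objective) as a stand-in for a nonlinear calibration problem; it is not a description of any particular Brainmize solver.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal sketch: a multi-start strategy wraps a local optimizer to look
# for global solutions of a nonconvex problem. The objective below is an
# illustrative stand-in for a real calibration problem.

def objective(x):
    # Rastrigin-style function: many local minima, global minimum 0 at x = 0.
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def multistart(n_starts=50, dim=2, seed=0):
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_starts):
        x0 = rng.uniform(-5.12, 5.12, size=dim)      # random starting point
        result = minimize(objective, x0, method="L-BFGS-B")
        if best is None or result.fun < best.fun:
            best = result                            # keep the best local solution
    return best

if __name__ == "__main__":
    best = multistart()
    print("best value found:", best.fun, "at", best.x)
```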

Stochastic systems and systems with uncertainty

Providing useful information about complex systems requires not only the ability to model the system but also the ability to characterize the uncertainties in the model.

Uncertainty can arise from imperfectly known or random inputs, inadequacies of the models used to describe the system, and errors or noise in the data acquired about the system.

Many existing stochastic methods rely on the assumption that the underlying process is Markovian with Gaussian noise, but new approaches for uncertainty quantification are needed to treat systems with a large number of uncertain parameters and systems where the underlying model is highly complex.

Of particular interest in this regard is the development of methodologies to effectively model the propagation of uncertainties across multiple sub-models, which, in some cases, will require not only new numerical methods but also development of the underlying fundamental theory.
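A basic technique that makes no Gaussian or Markovian assumptions is plain Monte Carlo propagation: sample the uncertain inputs, push each sample through the chain of sub-models, and summarise the resulting distribution of the quantity of interest. The frequency/severity sub-models, parameter values, and the deliberately simplified aggregation below are illustrative assumptions only.

```python
import numpy as np

# Minimal sketch of Monte Carlo uncertainty propagation through two chained
# sub-models; no Gaussian or Markovian structure is assumed. All distributions
# and parameters are illustrative placeholders.

rng = np.random.default_rng(42)
n_samples = 100_000

# Sub-model 1: aggregate claims from non-Gaussian, uncertain inputs
# (frequency times a representative severity draw, a deliberate simplification).
frequency = rng.poisson(lam=rng.gamma(shape=50.0, scale=2.0, size=n_samples))
severity = rng.lognormal(mean=8.0, sigma=1.2, size=n_samples)
aggregate_claims = frequency * severity

# Sub-model 2: required capital depends on the output of sub-model 1;
# the loading factor is itself uncertain.
loading = rng.uniform(1.05, 1.25, size=n_samples)
required_capital = loading * aggregate_claims

# Summarise how input uncertainty propagates to the quantity of interest.
print("mean required capital :", required_capital.mean())
print("99.5% quantile        :", np.quantile(required_capital, 0.995))
```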

Integration of data and modeling

In data-driven modeling, observational data may be used to guide construction and assembly of the model and to adjust and identify the parameters that define it. This process improves the fidelity of the model and allows it to be used for its intended purposes, namely prediction and analysis.

Similarly, analysis of data can lead to reductions in the complexity and size of the underlying models, thus speeding computations without degrading the quality of predictions.
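One concrete form of such data-driven reduction is proper orthogonal decomposition: a truncated SVD of model "snapshots" extracts a small basis in which the system can be represented far more cheaply. The synthetic snapshot matrix below stands in for real model output; the dimensions and energy threshold are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of data-driven model reduction via truncated SVD (proper
# orthogonal decomposition). The synthetic snapshots are a placeholder for
# real model output.

rng = np.random.default_rng(0)
n_state, n_snapshots = 500, 80

# Synthetic snapshots that actually live near a 5-dimensional subspace.
latent = rng.normal(size=(5, n_snapshots))
mixing = rng.normal(size=(n_state, 5))
snapshots = mixing @ latent + 0.01 * rng.normal(size=(n_state, n_snapshots))

# Truncated SVD: keep only the modes that capture most of the variance.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
rank = int(np.searchsorted(energy, 0.999)) + 1
basis = U[:, :rank]                          # reduced basis (n_state x rank)

print(f"reduced dimension: {rank} (from {n_state})")
# A full state can now be approximated by 'rank' coefficients:
coefficients = basis.T @ snapshots[:, 0]
reconstruction_error = np.linalg.norm(snapshots[:, 0] - basis @ coefficients)
print("reconstruction error for one snapshot:", reconstruction_error)
```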

Sampling methodologies and other techniques from applied probability are vital in dealing with large data sets.

For inverse problems, data assimilation, and parameter estimation, there are a number of challenges to be faced: reducing memory requirements, better inference from sparse and noisy data, and better global optimization algorithms, to name a few.
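As a minimal illustration of inference from sparse and noisy data, the sketch below fits the parameters of a simple exponential-decay model by nonlinear least squares using SciPy; the model form, observation times, noise level, and "true" parameters are illustrative assumptions rather than an actual calibration workflow.

```python
import numpy as np
from scipy.optimize import least_squares

# Minimal sketch of parameter estimation from sparse, noisy observations:
# nonlinear least squares recovers the parameters of a simple decay model.
# Model form and parameter values are illustrative placeholders.

def model(params, t):
    amplitude, rate = params
    return amplitude * np.exp(-rate * t)

rng = np.random.default_rng(1)
true_params = np.array([2.5, 0.7])
t_obs = np.sort(rng.uniform(0.0, 5.0, size=12))     # sparse observation times
y_obs = model(true_params, t_obs) + 0.05 * rng.normal(size=t_obs.size)

def residuals(params):
    # Misfit between model prediction and noisy data.
    return model(params, t_obs) - y_obs

fit = least_squares(residuals, x0=np.array([1.0, 1.0]))
print("estimated parameters:", fit.x)
print("true parameters     :", true_params)
```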

Our solution: Combining Technologies to Solve Problems

This approach to problem solving is not a process of working from formulation to discretization to algorithm but rather an iterative consideration of all aspects of the solution process.

It is through this iteration that we can together arrive at the combination of formulation, discretization, and solver technologies that will enable us to weave together the elements of the different components, constructing the tools needed to answer important questions about complex systems while making efficient use of emerging computer architectures.