We produce data constantly and wherever we are – whether we’re on the phone, booking a ride via Uber, wearing a fitness tracker, reserving a concert ticket online or instructing Alexa to turn on the heating. The use of smartphones, the mobile Internet and the comprehensive networking of objects in the Internet of Things (IoT) generates immense amounts of data around the world. This is the realm of big data. More than 16 zettabytes of data are currently generated annually worldwide, and experts estimate that the volume of digital data will grow to 163 zettabytes by 2025. This number with 21 zeros corresponds to the data volume of 40 trillion DVDs, which would reach to the moon and back over 100 million times, as a comparison by the US hard disk manufacturer Seagate and the market researcher IDC* illustrates.
Today, companies have more data about their customers than ever before. But many do not evaluate this data, or do not even know that a precious treasure is sitting on their servers: more than half of the data collected and stored worldwide is classified as so-called dark data, meaning that its content and business value are unknown. This was the conclusion of the Global Databerg Report by Veritas Technologies (March 2016). This not only causes costs running into the billions, but also means that many companies waste enormous potential. In an article in Zeitschrift für Versicherungswesen (20/2017), the two insurance experts Markus Rosenbaum and Jens Ringel predict the following: ‘Big data will act as a catalyst in the coming years and accelerate the transformation process in the insurance industry, from much more precise risk differentiation to orienting the insurance business model towards more prevention and lifelong support.’
New digital technologies provide efficient processes for intelligently evaluating this explosive growth in data. Artificial intelligence (AI) is considered a key technology in this context. The preparation, analysis and presentation of data has become a science in its own right: data science. This interdisciplinary field essentially aims to gain insights from data that can serve as a basis for business decisions and forecasts. Statistical methods or machine learning methods are applied to mass data using appropriate computing infrastructures, with the aim of answering subject-specific questions.
This allows knowledge to be filtered out of data that provides clues about the customer’s behaviour, preferences, routines or important milestones in life, which in turn helps in gaining a better understanding of the customer, creating tailored offers and optimising processes. The targeted analysis of the data is also called data mining. This refers to the systematic application of statistical methods to identify hidden relationships, patterns and trends in data sets.
Data science thus offers tremendous potential for the insurance industry.
Customer lifetime value (CLV):
The customer lifetime value is the value a customer represents to a company: the sum of all purchases, interactions and transactions that a customer has made – and is likely to make – over the course of their business relationship with the company.
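The definition above can be turned into a simple calculation. The following sketch uses one common formulation – annual margin, weighted by the probability that the customer is still active each year and discounted to present value. The function name and all parameter values are illustrative assumptions, not figures from the source:

```python
def customer_lifetime_value(annual_margin, retention_rate, discount_rate, years):
    """Illustrative CLV sketch: the expected annual margin, weighted by
    the probability the customer is still active in year t and
    discounted back to today."""
    return sum(
        annual_margin * retention_rate**t / (1 + discount_rate)**t
        for t in range(1, years + 1)
    )

# Hypothetical customer: EUR 200 margin per year, 90% retention,
# 5% discount rate, 10-year horizon.
clv = customer_lifetime_value(annual_margin=200.0, retention_rate=0.9,
                              discount_rate=0.05, years=10)
print(f"CLV = {clv:.2f} EUR")
```

Real actuarial models are considerably richer, but the structure – future value streams weighted by probability and discounted – is the same.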
Topics such as big data, data science and AI are also on insurers’ minds, but, unlike other industries, the insurance industry has centuries of experience in the development and use of data-driven models. The core business of insurers is based on the ability to assess risks, spread their costs across the collective and minimise them. The basis for this is data. As early as the 18th century, the industry used mathematical methods for data analysis. In 1756, the British mathematician James Dodson developed age-related life insurance premiums for the first time on the basis of mortality tables; in 1762, the Equitable Life Assurance Society implemented Dodson’s ideas. Insurers were also pioneers in the field of electronic data processing: just 15 years after Konrad Zuse developed the first computer, Allianz put an IBM 650 magnetic drum computer into operation in its newly founded computing centre in Munich in 1956.
Machine learning (ML) methods are also nothing new in the insurance industry. One example of unsupervised ML is cluster-based policy compression in risk management. Instead of carrying out stochastic calculations on the basis of individual policies for different capital market scenarios with several hundred thousand or even millions of contracts, only a few thousand model points are determined, weighted and used for forecasts. Since 2008, msg life has offered msg.Ilis, a tried-and-tested software solution that fully integrates cluster-based policy compression (a classic method of machine learning). Compression can thus be carried out quickly, efficiently and with high-quality results. With msg.Ilis, forecast calculations can be performed up to 2,000 times faster than with conventional methods, meaning that the duration of the processing is no longer measured in hours, but in seconds.
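To make the idea of cluster-based policy compression concrete, the following sketch uses a minimal k-means clustering (one standard unsupervised ML technique; the source does not specify msg.Ilis’s exact algorithm) to compress a hypothetical portfolio of 1,000 policies into five weighted model points. All policy attributes and figures are invented for illustration:

```python
import random
from statistics import fmean

def kmeans(points, k, iterations=20, seed=0):
    """Minimal k-means: group similar policies and return one
    centroid (model point) per cluster."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iterations):
        # Assign each policy to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centroids[j])),
            )
            clusters[nearest].append(p)
        # Move each centroid to the mean of its cluster.
        centroids = [
            tuple(fmean(dim) for dim in zip(*c)) if c else centroids[j]
            for j, c in enumerate(clusters)
        ]
    return centroids, clusters

# Hypothetical portfolio: (policyholder age, sum insured in kEUR)
random.seed(1)
portfolio = [(random.randint(20, 70), random.uniform(50, 500)) for _ in range(1000)]

# Compress 1,000 policies into 5 weighted model points; the weight of
# each model point is the number of policies it represents.
model_points, clusters = kmeans(portfolio, k=5)
weights = [len(c) for c in clusters]
for mp, w in zip(model_points, weights):
    print(f"model point: age={mp[0]:.1f}, sum insured={mp[1]:.1f} kEUR, weight={w}")
```

Stochastic forecast calculations can then be run on the five weighted model points instead of all 1,000 policies, which is the source of the speed-up the article describes.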
msg.Ilis stands for Insurance Liability Information System and is standard software designed to assist insurers with financial reporting. It offers a framework for centralised data storage in which the information required for all types of forecasts can be managed – up to date, in high quality and audit-proof. Because msg.Ilis accesses the services of the policy management systems directly, product knowledge and insurance technology do not need to be mapped a second time in the forecasting software. On the one hand, msg.Ilis is a component of msg.Life Factory and is therefore fully integrated into msg.Insurance Suite; on the other hand, it can be operated as stand-alone software with other policy management systems.
*Infographic Data Age 2025, www.seagate.com
IF YOU NEED MORE INFORMATION
let us know.
We are happy to help!