Viewpoint: Rise of AI in insurance is a double-edged sword
The increasing use of AI in making decisions that affect our daily lives will require a level of transparency that is explainable to employees and customers
There is a risk that artificial intelligence and machine learning could create unfairness and even undermine the risk-pooling model of insurance. Insurers need to be especially sensitive to ensure they develop and use these technologies ethically and manage their customers’ data with watertight controls.
Insurance companies have relied on data for as long as insurance has existed and today’s insurers use big data from myriad sources to underwrite more accurately, price risk and create incentives for risk reduction.
Advances in data capture and storage make available more information about consumers than ever before. From telematics that tracks driving behaviour to social media that creates a digital footprint that could offer unprecedented insights, new sources of data are now capable of producing highly individualised profiles of customer risk.
Insurers are increasingly using artificial intelligence (AI) and machine learning to manage manual, low-complexity workflows, dramatically increasing operational efficiency. Also behind the rise of AI-powered insurance is the ability to predict losses, and the behaviour of insurers’ customers, with greater accuracy. Some insurers say this gives them more opportunity to influence behaviour and even prevent claims from happening.
Yet is there a risk this new way of doing things could actually create unfairness and even undermine the risk-pooling model that is fundamental to the industry, making it impossible for some people to find cover? After all, AI is not an agnostic technology and so can be used in ways that reinforce its creators’ biases. As a result, insurers need to be especially sensitive to ensure they develop and use AI and machine learning ethically and manage their customers’ data with watertight controls.
AI has become an integral part of the day-to-day operations across most industries and can be credited with condensing vast amounts of data into something more usable. But as companies come under greater public scrutiny regarding how algorithms are influencing corporate behaviour, the question of how to ethically apply machine learning and AI is top of mind for insurance leaders.
It is important to remember AI does not really reason: algorithms have no ethics; they are simply algorithms. Instead of asking how ethical a firm’s AI is, we should be asking how far ethics is taken into account by the people who design the AI, feed it data and put it to use making decisions.
On privacy, organisations are already required to adhere to the EU’s General Data Protection Regulation (GDPR), the European legal framework for handling personal data. At present, however, there is nothing similar in place to grapple with the raft of ethical challenges presented by the rapid pace of AI innovation. The EU AI Act, first proposed in 2021 and expected to pass in 2024, is understood to be the world’s first international regulation for AI.
Although various pieces of legislation are being prepared, grey areas remain, with companies having to rely on high-level guidelines that leave significant room for interpretation. For the time being at least, responsibility therefore rests primarily with companies, organisations and society to ensure AI is used ethically. Insurers will need to think through their entire data ecosystem to achieve comprehensive AI governance, including the insurtech vendors with which they partner.
With machine learning continuing to generate significant additional value across insurance, a clear ethical framework should be considered an essential component of successful adoption and value extraction. Alongside transparency, the key components of WTW’s own ethical framework include accountability; fairness – understanding, measuring and mitigating bias in how models and systems are built and how they operate in practice; and technical excellence, ensuring models and systems are reliable and safe, with privacy and security by design.
While insurers were already on a digital journey and innovating products before Covid-19, the pandemic has certainly fast-tracked some of these transformations. Besides the more recent factors of rising uncertainty in global markets and high inflation, evolving customer demands have been applying tremendous pressure on the industry to transform at speed.
Customers expect speed and convenience, products and services tailored to them, and experiences equivalent to those they find elsewhere in life and online. To respond, insurers are having to innovate faster, with AI increasingly becoming a must-have capability to augment their risk management activities. The increasing use of AI in making decisions that affect our daily lives will also require a level of transparency that is explainable to employees and customers.
Given the immense volumes and diverse sources of data, the real value of AI and machine learning is best achieved when making intelligent decisions at scale without human intervention. Yet this capability gives rise to the perception of a “black box”, where most business personnel do not fully understand why or how the predictive model took a certain action. As companies lean more heavily on data and the analyses and models they build become more complex, those models become harder to understand. This, in turn, is driving increasing demand for the “explainability” of models and for easier ways to access and understand them, including from a regulator’s point of view.
Transparent AI can help organisations explain the individual decisions of their AI models to employees and customers. With GDPR now in force, there is also regulatory pressure to give customers insight into how their data is being used. If a bank uses an AI model to assess whether a customer can get a loan and the loan is denied, the customer will want to know why that decision was made. That means the bank must have a thorough understanding of how its AI model reaches a decision and be able to explain this in clear language.
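To make this concrete, here is a minimal sketch of what a per-decision explanation can look like for a simple scoring model. The feature names, coefficients and applicant below are illustrative assumptions, not any real insurer’s or bank’s rating model; with a linear model, each feature’s contribution to the score can be read off directly, which is the kind of plain-language breakdown a customer could be given.

```python
import math

# Hypothetical logistic scoring model: coefficients and features are
# illustrative only, not a real underwriting or lending model.
COEFFS = {"annual_mileage_k": -0.08, "years_claim_free": 0.30, "late_payments": -0.90}
INTERCEPT = 0.5

def score(applicant):
    """Return the model's acceptance probability and a per-feature breakdown."""
    contributions = {f: COEFFS[f] * applicant[f] for f in COEFFS}
    logit = INTERCEPT + sum(contributions.values())
    prob = 1.0 / (1.0 + math.exp(-logit))
    return prob, contributions

prob, reasons = score({"annual_mileage_k": 12, "years_claim_free": 5, "late_payments": 2})
# Each contribution shows, in the model's own units, why the score moved
# up or down -- the raw material for an explanation in clear language.
for feature, contribution in sorted(reasons.items(), key=lambda kv: kv[1]):
    print(f"{feature}: {contribution:+.2f}")
```

More complex models (gradient boosting, neural networks) need dedicated attribution techniques to produce a comparable breakdown, which is exactly where the “black box” concern arises.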
Opportunities for more sophisticated pricing and immediate profit-and-loss impact have never been better. Pursuing pricing sophistication can enable transformative shifts towards advanced analytics, automation, new data sources and the ability to rapidly react to changing market environments.
External data can help insurers better understand risks they are underwriting. With a complete picture of driver and vehicle, motor insurers can better assess risk and detect fraud. By feeding external data into analytical models, they can quote more accurately and attract desirable risk profiles at the right price point. Investment in AI can also enable an insurer to further enhance the customer experience throughout the policy life cycle – from streamlining at the time of quote to processing claims more quickly.
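As a sketch of how external data might feed a quote, the toy rating function below blends an internal vehicle-group factor with a telematics safe-driving score from an external provider. The base rate, multipliers and field names are all assumptions for illustration, not real rating factors.

```python
# Illustrative motor-quote sketch: all rates and factors are hypothetical.
BASE_RATE = 600.0  # assumed annual base premium

def quote(policyholder, telematics):
    """Adjust the base premium using an internal rating factor plus external driving data."""
    factor = 1.0
    # Internal rating factor: higher insurance vehicle groups cost more.
    factor *= 1.2 if policyholder["vehicle_group"] >= 30 else 1.0
    # External data: a 0-100 safe-driving score from a telematics provider
    # moves the price up or down by at most 15%.
    factor *= 1.0 + 0.15 * (50 - telematics["safe_driving_score"]) / 50
    return round(BASE_RATE * factor, 2)

print(quote({"vehicle_group": 25}, {"safe_driving_score": 80}))  # safer driver pays less
```

The point is not the arithmetic but the plumbing: the external score enters the model as just another rating input, so the same quote logic can react as new data sources come online.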
The demand for transparent and responsible AI is, of course, part of a broader debate about company ethics. What are an insurer’s core values, how do these relate to its technological and data capabilities, and what governance frameworks and processes does it have in place to keep up with them? Ultimately, for AI to have the most impact, it needs to have public trust.
Neil Chapman is senior director, insurance consultancy and technology global leadership, pricing, product, claims and underwriting at WTW