Generative AI has the power to transform insurance, but companies must consider their strategy carefully to navigate data protection hurdles, build a trusted AI model, and create a culture of collaboration between AI talent and insurance experts.
Since ChatGPT burst onto the scene in 2022, thought leaders and commentators have gone into overdrive about the potential use cases for large language models (LLMs) and generative AI in insurance. There’s no shortage of predictions on what an AI-driven insurance future will look like, how the technology could revolutionise underwriting or claims, and whether it will eventually replace human insurance professionals.
In its white paper, Insurance 2030, McKinsey predicted that just six years from now, AI will be deeply embedded in the insurance sector, spanning claims, distribution, underwriting and pricing. But for many insurance firms, which are still early in their digital transformation journeys or relying on legacy tech and manual processes, these discussions may seem premature.
The truth is that, as with any new technology, the hype around generative AI is still outpacing the reality. The technology sat at the Peak of Inflated Expectations in Gartner's 2023 Hype Cycle, and we are still a long way from realising its full potential in insurance. Moving from predictions to practically implementing generative AI and LLMs throughout the insurance lifecycle will involve overcoming numerous hurdles and rewiring how the insurance sector operates from top to bottom. As McKinsey outlines in its report, the possibilities are huge, but it won't happen overnight.
Here, we dig beneath the hype to understand what is involved in implementing AI in insurance and making an AI-powered insurance company a reality.
Is generative AI a game-changer for insurance?
AI is a broad term, spanning a host of different systems, from machine learning on tabular data through to the most complex generative AI and LLM systems such as GPT-4. The reason LLMs have generated so much hype is that they are extremely adaptable, with the capacity to cope with a wide variety of tasks, including creating text and images or writing code, almost instantly, based on simple instructions.
The speed at which LLMs can complete tasks is what makes them so impressive. Experience them in action and it seems perfectly plausible that ChatGPT-style systems could eventually carry out numerous insurance tasks, or even run an entire insurance business. But moving from specific use cases to full-scale roll-out will involve overcoming some significant challenges.
Implementing generative AI in insurance companies – the reality
Managing data protection risks
As McKinsey rightly points out in its report, an AI programme is only as good as the data it is trained on. But the idea of companies using customer data or IP for such purposes raises alarm bells. Insurance companies are subject to strict data protection laws, which mean that integrating public generative AI tools such as ChatGPT into insurance workflows is a no-go, and there should be caution around letting employees use the tools for their work. One study found that 11% of the data employees paste into ChatGPT is confidential, and numerous companies, including JP Morgan and Verizon, have blocked ChatGPT due to the risk that IP or sensitive data will leak out.
Building a closed model
To let an LLM algorithm loose on customer data and company IP, a closed system is needed, where the model is trained on specific data, with no danger that the data could be accessed by a third party. A closed system is designed to suit a company's specific requirements and data set and should be constantly updated as those needs and data evolve. Companies can build closed LLMs by fine-tuning existing base models and, in basic terms, there are three steps involved:
- Curating the data set: Companies need to build a comprehensive data set for the task they want to automate, ensuring that it is in the right format, with a good structure and examples.
- Building the AI algorithm: This is a lengthy process, involving lots of data, computational resources, and numerous iterations. Teams need a deep understanding of the data flow and decision-making within the model before the algorithm can be relied upon.
- Ongoing training and testing: Once up and running, the system needs to be constantly trained, updated, and tested to be sure that it is behaving in the right way as the data and model evolve.
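As an illustration of the first step, curating the data set: the sketch below (a minimal Python example; the field names, prompts, and records are hypothetical, not drawn from any real insurance data set) assembles Q&A pairs into the JSONL chat format commonly used when fine-tuning a base model, with a basic validation pass to catch malformed examples before training.

```python
import json

# Hypothetical examples pairing a policy question with an approved answer.
# In practice these would be curated from vetted internal documents.
RAW_EXAMPLES = [
    {"question": "What does a marine cargo policy typically exclude?",
     "answer": "Typical exclusions include wilful misconduct, ordinary leakage, and inherent vice."},
    {"question": "What is an MGA?",
     "answer": "A managing general agent underwrites on behalf of an insurer under delegated authority."},
]

def to_chat_record(example: dict) -> dict:
    """Convert a Q&A pair into the chat-style structure used by many fine-tuning pipelines."""
    return {
        "messages": [
            {"role": "system", "content": "You are an insurance assistant."},
            {"role": "user", "content": example["question"]},
            {"role": "assistant", "content": example["answer"]},
        ]
    }

def validate(record: dict) -> bool:
    """Basic structural checks: three messages, expected roles, non-empty content."""
    msgs = record.get("messages", [])
    roles = [m.get("role") for m in msgs]
    return roles == ["system", "user", "assistant"] and all(m.get("content") for m in msgs)

def build_jsonl(examples: list) -> str:
    """Serialise validated records as JSONL: one training example per line."""
    records = [to_chat_record(e) for e in examples]
    assert all(validate(r) for r in records), "malformed training record"
    return "\n".join(json.dumps(r) for r in records)

if __name__ == "__main__":
    print(build_jsonl(RAW_EXAMPLES))
```

The resulting JSONL file would then feed the second and third steps: iterative fine-tuning runs, and an ongoing cycle of re-validation as the data set grows.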
Can you explain the system?
But even with a closed system, companies must still have data protection and privacy concerns on their radar. Data protection regulations require that a data processing system be transparent, but explaining how an LLM reaches a given output may not be possible, as its internal decision-making is largely a black box. There are also dangers of discrimination and biased outputs within an LLM if it is trained on incomplete or unchecked data, or not maintained and updated properly. The risks are significant, and consequently, applications of LLMs and generative AI are likely to be limited to specific, controllable use cases, at least for the near term.
AI talent vs. insurance expertise
Will AI developers take over insurance companies? There will certainly be a need for specialist AI, data processing, and deep learning talent, dedicated to building and maintaining the system. However, those AI experts will also need to work alongside teams of insurance specialists, who can provide constant input and feedback, to ensure the results are reliable, and the system is compliant.
Insurance is a highly technical area with a constantly evolving regulatory environment, so people with deep sector knowledge will always be needed, although they will need to adapt to work alongside AI experts, and the technology. As McKinsey states in its 2030 report: “The next generation of successful frontline insurance workers will be in increasingly high demand and must possess a unique mix of being technologically adept, creative, and willing to work at something that will not be a static process but rather a mix of semiautomated and machine-supported tasks that continually evolve.”
Consider short-term and long-term use cases
Generative AI will have a huge impact across almost every sector, including insurance. But alongside the future-gazing, it is important to be realistic about the practicalities and risks involved, so that insurance companies can prepare the ground accordingly and focus on the use cases that make the most sense now. The best approach is to start putting the digital and data foundations in place, and then experiment with small projects to develop capability and understanding of the technology without significant financial outlay or compliance and reputational risk.
This is the approach that we have taken at Insly and our Innovation Lab has already developed an AI-powered product builder that will transform the speed and efficiency of customer implementation on the platform. The prototype makes it possible to build a product form based on text-based instructions, or even an image of text, rather than the usual drag-and-drop functionality. This enables us to feed in larger data sets and build insurance products faster, saving valuable time.
We’ve also started to integrate generative AI into our sales process to manage and update databases, and we are now building automation between the inbox and CRM, freeing the team from time-consuming administration.
Furthermore, having a system like Insly in place puts insurance companies and MGAs in a great position to experiment with generative AI and LLM technology, ensuring that they are collecting and collating data in a consistent format in a central location, and have the flexibility to integrate new platforms and applications as the need arises.
Get in touch with the team today to find out more about Insly’s no/low code platform and organise a demo.