Strategies for insurers to grow AI responsibly

Artificial intelligence is changing how the insurance industry operates, and use cases for AI are skyrocketing. Today, AI is being used to provide customer service, assess risk profiles, determine pricing, detect fraud, and more. As technologies evolve and the industry’s use of AI matures, the opportunities for using AI in the future appear virtually limitless.

Early AI adopters will have obvious advantages. In a competitive market, they’ll gain better predictive capabilities and be able to develop new waves of product offerings their competitors don’t have. Growing AI quickly will help insurers get ahead, which is why industry leaders are pushing to scale their AI programs.

Being first and fast to implement AI is important, but it can’t be the only focus when scaling; attention to safety is vital. Insurers need to consider how they will maintain oversight of models and manage risks as their use of AI increases.

Bias is one of those risks. Gartner has estimated that, through this year, 85% of AI projects will produce inaccurate results because of bias in data, algorithms, or the teams that manage them. If biased algorithms are used to assess customers’ risk profiles, price policies, or detect fraud, the potential consequences are severe.

To prevent bias and encourage the safe and responsible use of AI, the National Association of Insurance Commissioners recommends adhering to a set of guiding principles. The guidelines developed by the NAIC, which draw on core tenets adopted by 42 countries, are designed to produce “accountability, compliance, transparency, and safe, secure and robust outputs.” The NAIC’s principles are a starting point; the challenge is putting responsible AI practices and model governance into action.

Adaptability is a critical success factor for insurers that want to build up their AI safely and responsibly. AI teams need to be nimble and ready to adopt new processes and tools. Insurers that prioritize and operationalize the following actions will be better prepared to scale their AI quickly and safely.

Adopt the model risk management three lines of defense framework 
Insurance companies can think of the three lines of defense as insurance against AI performance and quality issues. The three lines of defense framework—which involves 1) data scientists and model developers, 2) validators, and 3) internal auditors—is already used across the financial services industry to manage AI risk. It defines responsibilities and embeds performance and quality checks throughout AI development and validation, enabling teams to identify and mitigate risks such as AI bias. Adopting the three lines of defense provides a structure for completing the functions necessary to build quality, high-performing AI and scale confidently.
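To make the division of responsibilities concrete, here is a minimal Python sketch of a gated sign-off workflow modeled on the three lines of defense. The stage names, the Model class, and the sign_off function are hypothetical illustrations, not part of any specific governance product.

```python
from dataclasses import dataclass, field

@dataclass
class Model:
    """A candidate model moving through the three lines of defense."""
    name: str
    approvals: list = field(default_factory=list)

# The three lines, in the order each must sign off.
LINES_OF_DEFENSE = ["development", "validation", "audit"]

def sign_off(model: Model, line: str, passed: bool) -> None:
    """Record a gate result; a failure at any line blocks deployment."""
    expected = LINES_OF_DEFENSE[len(model.approvals)]
    if line != expected:
        raise ValueError(f"expected {expected} sign-off next, got {line}")
    if not passed:
        raise RuntimeError(f"{model.name} failed the {line} gate")
    model.approvals.append(line)

def deployable(model: Model) -> bool:
    """A model deploys only after all three lines have signed off, in order."""
    return model.approvals == LINES_OF_DEFENSE

# First line: developers test; second line: independent validation;
# third line: internal audit. Any failure stops the pipeline.
m = Model("fraud-detector-v2")
for gate in LINES_OF_DEFENSE:
    sign_off(m, gate, passed=True)
assert deployable(m)
```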

Standardize and automate documentation and reporting
Documentation and reporting are necessary to make AI transparent, auditable, and compliant. They are also time-consuming: teams can spend countless hours documenting test results and decisions and assembling reports. Manual documentation and reporting aren’t sustainable when attempting to scale, so companies must look for ways to reduce the time spent on these tasks and recover valuable hours for developing new AI. One solution is to use an AI governance tool that standardizes and automates documentation and reporting.

Many companies don’t have standards in place to make documentation consistent across their organization. But standardizing reporting gives data scientists and developers clear deliverables that set them up for success. It also helps ensure that model validation and implementation aren’t delayed due to missing data in reports. Ideally, teams should use tools that automatically collect evidence and populate reports to save time—both for those creating and reviewing reports.
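As one illustration of what standardized, automated reporting can look like, the sketch below assembles a fixed-format report from test evidence collected by a pipeline. The report layout, the render_model_report function, and the test names are assumptions for illustration; real governance tools define their own schemas.

```python
from datetime import date

def render_model_report(model_name: str, test_results: dict) -> str:
    """Assemble a standardized report from automatically collected evidence.

    `test_results` maps a test name to {"metric": float, "threshold": float};
    here, higher is better. The schema is illustrative only.
    """
    lines = [
        f"Model report: {model_name}",
        f"Generated: {date.today().isoformat()}",
        "",
        f"{'Test':<26}{'Result':>8}{'Threshold':>11}{'Status':>8}",
    ]
    for test, r in test_results.items():
        status = "PASS" if r["metric"] >= r["threshold"] else "FAIL"
        lines.append(f"{test:<26}{r['metric']:>8.3f}{r['threshold']:>11.3f}{status:>8}")
    return "\n".join(lines)

# Evidence gathered by the training pipeline, rendered the same way every time.
print(render_model_report("pricing-model-v3", {
    "auc": {"metric": 0.87, "threshold": 0.80},
    "demographic_parity_ratio": {"metric": 0.96, "threshold": 0.90},
}))
```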

Stay current on compliance requirements 
Companies planning to grow their AI need to be aware of current and proposed regulations. Several U.S. states are working to enact new legislation that promotes fairness and prevents bias. More than 100 bills have been introduced at the state level since 2019, and others are on the way. To avoid compliance and legal issues, along with reputational damage, companies should:

  • Proactively set internal fairness standards that meet the most stringent regulations 
  • Find a way to efficiently track regulations, and create a library of policies, requirements, and guidelines
  • Ensure everyone working on AI is aware of regulatory requirements, and that workflows align with regulations in the areas where the business operates
  • Look for an AI governance tool that automatically creates compliance checklists for all applicable regulations (a minimal sketch of such a checklist follows this list)
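The sketch below shows one way such a checklist could be generated from a small regulation library. The jurisdiction names and requirement descriptions are placeholders, not summaries of actual statutes or legal guidance.

```python
# Hypothetical regulation library: jurisdiction -> requirement descriptions.
REGULATION_LIBRARY = {
    "State A": ["bias testing before deployment", "annual governance report"],
    "State B": ["disparate-impact analysis for underwriting models"],
}

def compliance_checklist(jurisdictions: list[str]) -> list[str]:
    """Build one de-duplicated checklist covering every operating state."""
    items: list[str] = []
    for state in jurisdictions:
        for requirement in REGULATION_LIBRARY.get(state, []):
            entry = f"[{state}] {requirement}"
            if entry not in items:
                items.append(entry)
    return items

for item in compliance_checklist(["State A", "State B"]):
    print("[ ]", item)
```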

Modernize model inventory
Every company using AI needs to maintain an accurate and up-to-date model inventory. As companies add more models, they outgrow spreadsheets and need a more organized and efficient way to inventory their AI. For most companies, the best solution is an AI governance tool that lets them easily catalog models in a central repository accessible to all stakeholders.

A model inventory serves multiple purposes. One is to provide at-a-glance performance and risk information for all the models that are in use. Without this heat map view, it is extremely difficult to manage risks while scaling. It’s also important that a company’s model inventory captures and stores all data and documentation for each model. This is necessary in case of an audit. Plus, thorough documentation gives teams a head start on developing new AI. By using their previous work and learnings as a starting point, model developers can save time and create new models without having to start from scratch.
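A minimal sketch of such a central inventory, with a heat map style summary, might look like the following. The record fields, risk tiers, and register/heat_map functions are illustrative assumptions, not a real product’s data model.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class InventoryRecord:
    """One entry in a central model inventory; fields are illustrative."""
    model_id: str
    owner: str
    use_case: str
    risk_tier: RiskTier
    latest_metric: float   # most recent monitored metric, e.g. AUC
    docs_uri: str          # where documentation and evidence live

INVENTORY: dict[str, InventoryRecord] = {}

def register(record: InventoryRecord) -> None:
    """Add or update a model in the central repository."""
    INVENTORY[record.model_id] = record

def heat_map() -> list[InventoryRecord]:
    """At-a-glance view for stakeholders, riskiest models first."""
    return sorted(INVENTORY.values(),
                  key=lambda r: r.risk_tier.value, reverse=True)

register(InventoryRecord("pricing-model-v3", "actuarial", "pricing",
                         RiskTier.HIGH, 0.87, "s3://models/pricing-v3/docs"))
for rec in heat_map():
    print(rec.model_id, rec.risk_tier.name, rec.latest_metric)
```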

Monitor performance continuously
Whether insurers are implementing their first models or scaling up quickly, continuous performance monitoring is essential. AI teams need to have solutions in place that help them maintain oversight of their models before they scale. Ideally, teams should have access to real-time performance, risk, and bias information for all of their AI. And they need a plan for using that data to catch problems early.
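As an example of the kind of early-warning check a monitoring plan might include, the sketch below computes the population stability index (PSI), a common drift statistic, against a training-time baseline. The bin proportions are made up, and the 0.25 alert threshold is a widely used rule of thumb, not a universal standard.

```python
import math

def population_stability_index(expected: list[float], actual: list[float]) -> float:
    """PSI between two binned score distributions (proportions per bin).

    PSI = sum((actual - expected) * ln(actual / expected)) over bins.
    """
    return sum((a - e) * math.log(a / e)
               for e, a in zip(expected, actual)
               if e > 0 and a > 0)

# Hypothetical daily check of live scores against the training baseline.
baseline = [0.25, 0.25, 0.25, 0.25]  # score distribution at training time
today = [0.10, 0.15, 0.30, 0.45]     # score distribution observed today
psi = population_stability_index(baseline, today)
if psi > 0.25:  # common rule-of-thumb threshold for material drift
    print(f"ALERT: score drift detected (PSI = {psi:.2f}); trigger model review")
```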

AI will only become more embedded in insurance in the future. Now is the time for insurers to learn strategies and put workflows and tools in place that will set them up for success as they grow their AI.

Source: Digital Insurance
