How AI Regulation Is Developing In The Insurance Industry

2021 has seen a material acceleration in regulatory interest and posturing regarding the use of AI — both within insurance and more broadly.

By Anthony Habayeb, founding CEO of Monitaur, an AI governance and ML assurance company.

Earlier this week, I was in San Diego as a speaker and guest at the National Association of Insurance Commissioners (NAIC) National Meeting. I had the opportunity to share my own perspectives and opinions with the Big Data and Artificial Intelligence Working Group, and I also participated in meetings with key stakeholders considering next steps toward regulatory oversight of AI.

2021 has seen a material acceleration in regulatory interest and posturing regarding the use of AI — both within insurance and more broadly. From the New York City Council’s legislation to rein in AI bias in the hiring process to the Federal Trade Commission’s guidance on how to build and deploy responsible AI and machine learning models, governing bodies across the United States have demonstrated a vested interest in regulating AI. For insurance carriers with European exposure, a just-released update to Europe’s proposed AI Act now specifically places insurance industry use of AI in the “high risk” category.

In August 2020, the NAIC put forth its AI principles. Over the past year, its focus has been on gathering data about exactly where the insurance industry stands in its use of AI, with the priority of understanding how regulation could affect the industry’s use of these technologies. During the Big Data Working Group session, a first public look was offered at the results of a survey of property and casualty carriers and their use of AI. The results show broad application of AI across the core functions of this group of carriers, and the working group seems likely to expand the survey to homeowners and life insurance lines of business in the coming months.

The challenge of regulating AI is not insignificant. Regulators have to balance protection of consumers with support of innovation. Several themes are evident about the regulatory outlook on the use of AI in insurance:

  • An appreciation that AI is a complex system resulting from actions, decisions, and data driven by a team of stakeholders over a system’s entire life cycle.
  • An understanding that regulation will need to include evidence of broad life cycle governance and objective reviews of key risk management practices.
  • Agreement among regulators that state regulatory staff are largely unequipped to perform deep technical examinations or forensics of AI systems. To succeed in regulatory oversight, they will need further education, partnerships with more expert organizations, and some degree of carrier-attested accountability in the future.
  • A possibility that the regulations that materially shape and define AI oversight will have to be forged at the federal level — not just by state-level departments of insurance.

Looking back on my conversations in San Diego — and over the entire course of the year — I have one more reflective point: We could all benefit from being more direct. Where does AI-specific regulation start or end? How should insurance companies fundamentally change to better serve often underserved classes of our population?

My career has not been in insurance. However, I have very quickly gained an appreciation that many of the fairness and bias conversations in AI governance venues are by no means exclusive to AI governance. Instead, they are bigger questions about balancing appropriate risk rating factors against the correlation those factors may have with the fair treatment of certain classes of our population. I 100% agree that we have economic disparities and inequities, and I want to see more inclusive markets; however, I would hate to see important and much-needed governance practices that improve key principles like transparency, safety, and accountability wait for agreement on what are, in my opinion, much larger and more difficult discussions regarding fairness.

I consistently heard from both regulators and industry stakeholders in San Diego that insurance is undergoing a technology renaissance. There also seems to be agreement that how regulation works today is not what we will need from regulation in the future. In some ways, the NAIC’s decision to elevate its focus on AI through the creation of a new highest-level “letter committee” (H) — only the eighth such committee in the NAIC’s 150-year history — is a tremendous acknowledgement of this reality.

Next year will provide further perspective on insurance regulators’ approach to the use of AI. We’ll see Colorado further define practices and plans for SB21-169: Restrict Insurers’ Use of External Consumer Data. We will also likely see federal policy or legislative development, potentially something like H.R. 5596, the Justice Against Malicious Algorithms Act of 2021.

What should carriers do right now with all of these moving pieces? At a bare minimum, insurance carriers should internally organize key stakeholders related to AI strategy and development to collaboratively evaluate how they define and develop AI projects and models. If carriers have not yet established broad life cycle governance or risk management practices unique to their AI/machine learning systems, they should begin that journey with haste.

Source: IBS Intelligence
