A new report from Verta, the Operational AI company, has revealed the findings from its 2023 AI Regulations study, which surveyed more than 300 AI and machine learning practitioners.
The study aimed to benchmark awareness of current and pending regulations covering artificial intelligence, as well as companies’ preparedness to comply with regulatory requirements around Responsible AI, ML model transparency, and data and model lineage. Widespread concerns about generative AI, and ChatGPT in particular, are driving increased urgency around regulating AI.
Companies not ready for AI regulation, says Verta
The study found that while a majority of organisations view AI regulatory compliance as a priority and believe they will face an increasing number of AI regulations in the near future, few companies today are well prepared to meet current or future regulatory requirements.
The European Union is set to pass the EU AI Act this year, and the US Congress has taken up the American Data Privacy and Protection Act (ADPPA) and the Algorithmic Accountability Act in recent sessions.
These laws create new compliance and reporting requirements around companies’ use of AI and machine learning, intended to protect consumers against privacy violations, bias in automated decision-making and other potential harms.
What Verta’s AI preparedness study reveals
In the research study, 55% of companies surveyed said that regulatory compliance is a C-level or board-level priority, and only 11% said it is a “low priority.” In addition, 89% of participants agreed that AI regulations will increase over the next three years, and 77% believe that AI regulations will be strictly enforced.
Despite these findings, the study revealed a disconnect between the high priority that companies place on regulatory compliance and their actual preparedness. For example, 62% of participants were not confident that their company would be able to complete the algorithm impact assessments called for in the ADPPA, and only 28% were highly confident they could complete the necessary reporting.
Alarmingly, participants said they would need a median of 40 hours to complete an algorithm impact assessment for a single model. For organisations with tens or even hundreds of models or model versions in production, the cost of producing fully compliant reports to satisfy ADPPA requirements could be significant, given the high salaries that AI/ML roles command in today’s market.
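To illustrate the scale of that burden, here is a back-of-the-envelope estimate. The 40-hour figure comes from the study; the model count and hourly rate are purely illustrative assumptions, not figures from Verta’s research.

```python
# Rough estimate of ADPPA impact-assessment effort and cost.
# Only HOURS_PER_ASSESSMENT comes from the Verta study; the other
# two figures are hypothetical, for illustration only.

HOURS_PER_ASSESSMENT = 40    # median reported by study participants
MODELS_IN_PRODUCTION = 100   # assumed mid-size AI portfolio
HOURLY_RATE_USD = 90         # assumed loaded cost of an AI/ML practitioner

total_hours = HOURS_PER_ASSESSMENT * MODELS_IN_PRODUCTION
total_cost_usd = total_hours * HOURLY_RATE_USD

print(f"Total effort: {total_hours:,} hours")        # 4,000 hours
print(f"Estimated cost: ${total_cost_usd:,}")        # $360,000
```

Under these assumptions, a single compliance cycle across 100 models would consume roughly two person-years of effort, which is why the study’s finding on low automation matters.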
The Verta Insights research also showed that almost 90% of companies have little or no automation in place for the AI governance processes that they will need to rely on to ensure regulatory compliance, like bias detection and mitigation, model explainability and transparency, and model validation and testing.
Additionally, only 10% of companies have fully or highly automated their model documentation process, making it challenging to comply with regulatory requirements around model transparency and explainability.
Several complex issues face companies attempting to comply with AI regulation
Speaking about the report findings, Manasi Vartak, Founder and CEO of Verta, explained: “Generative AI technologies like ChatGPT and Stable Diffusion are raising issues around copyright, Responsible AI and cybersecurity that are helping to drive increased urgency around regulating AI. This trend is converging with the continued increase in regulations like the EU’s GDPR covering data privacy to make AI regulatory compliance a high priority for CEOs and board members at more than half the companies in our study.”
She continued: “This is true for organisations in industries like insurance and financial services that traditionally have had ML models regulated via Model Risk Management (MRM) practices. New regulations are increasing the complexity of compliance for these companies. At the same time, organisations that traditionally have not been under significant regulatory scrutiny must now put MRM practices in place to identify, assess, mitigate and monitor risks associated with their use of ML models, and to ensure their models are fully compliant with a growing number of regulations.”
Rory King, Head of Verta Insights Research and Verta Go-to-Market, said early investors in AI were in the best position to provide adequate compliance. “Companies typically react to regulatory pressures in a predictable curve, where we see leaders and fast followers making substantial early investments in the people, processes and technology necessary for compliance. Laggards and late followers, on the other hand, tend to wait until a regulation is imminent or has taken effect before they prepare for compliance. This means that leaders are prepared for compliance on Day One when the regulations go into effect, while laggards are frequently left scrambling to comply, resulting in additional cost, lost business or even legal risks.”
King added that, because leaders have made early investments with the goal of ensuring compliance, they also accrue ancillary benefits because they have established standardised processes, transparency and traceability that make their machine learning pipelines more efficient and effective.
Talk to Verta experts at ITI USA 2023
Want to know more? Join Verta at their booth at Insurtech Insights USA 2023, at the Javits Center in New York, on June 7th and 8th. The team will be there to discuss their latest findings, innovations and industry solutions.