As artificial intelligence (AI) promises faster and smarter decision-making, the Actuaries Institute and the Australian Human Rights Commission (AHRC) are concerned about possible discrimination and stress the need to prevent it.
To address the issue, they created a guidance resource designed to help insurers and actuaries comply with federal anti-discrimination legislation when AI is used in the pricing or underwriting of insurance products.
The guidance was developed after a 2021 AHRC report that examined the human rights impacts of new and emerging technologies, including AI-informed decision-making.
The Actuaries Institute strongly supported the report’s recommendation to develop guidelines for government and non-governmental organizations on compliance with federal anti-discrimination laws when AI is used in decision-making. The Institute approached the AHRC with an offer to collaborate, and together they developed the guidance.
The guidance resource outlines strategies insurers can apply to the data used by AI systems in order to combat algorithmic bias and avoid discriminatory outcomes, actuary Chris Dolman pointed out.
Dolman led the Institute’s contribution to the preparation of the guidance resource as a representative of its Data Science Practice Committee.
These strategies include rigorous design, regular testing, and ongoing monitoring of AI systems. The guide also provides practical tips to help insurers minimize the risk of a successful discrimination claim arising from the use of AI in risk pricing.
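The guidance does not prescribe any particular tooling, but one way to picture the "regular testing" strategy is a routine check that compares a pricing model's outputs across groups defined by a protected attribute. The sketch below is purely illustrative: the quote figures, the "gender" attribute and the tolerance threshold are assumptions for the example, not drawn from the guidance resource.

```python
# Hypothetical sketch of an outcome-disparity check on pricing model output.
# The data, group labels and tolerance threshold are illustrative assumptions.

from statistics import mean


def group_premium_ratio(records, protected_attr, premium_key="premium"):
    """Return (max/min ratio of mean premiums across groups, per-group means)."""
    groups = {}
    for record in records:
        groups.setdefault(record[protected_attr], []).append(record[premium_key])
    means = {group: mean(values) for group, values in groups.items()}
    return max(means.values()) / min(means.values()), means


# Made-up quotes standing in for the output of some pricing model.
quotes = [
    {"gender": "F", "premium": 820.0},
    {"gender": "F", "premium": 790.0},
    {"gender": "M", "premium": 940.0},
    {"gender": "M", "premium": 910.0},
]

ratio, means = group_premium_ratio(quotes, "gender")
print(f"Mean premium by group: {means}")
print(f"Max/min ratio: {ratio:.2f}")

# An insurer might flag disparities above an internally chosen tolerance for
# actuarial and legal review; the 1.1 figure here is an arbitrary placeholder.
TOLERANCE = 1.1
if ratio > TOLERANCE:
    print("Disparity exceeds tolerance -- refer for review and justification.")
```

In practice such a check would sit alongside, not replace, the design reviews and human oversight the guidance describes, since a raw disparity on its own does not establish unlawful discrimination.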
Dolman said: “In the context of insurance, AI can be used in many different ways, including pricing, underwriting, marketing, customer service including claims management, and internal operations.”
He added: “This guidance resource focuses on the use of AI in pricing and underwriting decisions, as these decisions are already likely to use AI and, by their nature, will have a financial impact which may be significant to an individual. Such decisions may also be more likely to result in complaints of discrimination from customers. However, many of the general principles described may also apply to the use of AI-informed decision-making in other contexts.”
In a survey of Actuaries Institute members conducted this year, at least 70% indicated they needed additional guidance to remain compliant as the use of AI widens.
Elayne Grace, chief executive of the Actuaries Institute, said there was an urgent need for guidance to help actuaries carry out their professional duties, noting that the resource should also reassure consumers that their rights are being protected.
“Australian anti-discrimination laws have a long history, but the guidance and case law available to practitioners is limited,” Grace said. “The complexity arising from Australia’s differing anti-discrimination legislation at federal, state and territory levels compounds the challenges faced by actuaries and may reflect an opportunity for reform.”
She also noted that several intersecting megatrends – including the explosive growth of “big data”, the increasing use and power of artificial intelligence and algorithmic decision-making, and consumers’ growing awareness and evolving expectations of what is “fair” – have made the lack of guidance more problematic for actuaries.
Grace said: “This collaboration demonstrates the complex nature of the issues facing society and the need for a multidisciplinary approach, particularly when data and technology are used to shape the delivery of fundamental services such as insurance.”