Measuring the impact of AI systems
AI is increasingly shaping our lives, and governments and businesses are investing in it like never before. But do we trust it?
Trust and transparency are essential for AI to deliver on its promises safely and responsibly. Achieving them matters because the benefits of AI are many: improving productivity, automating jobs that once put people’s lives at risk, delivering breakthroughs in healthcare and offering solutions to climate change.
Governments are stepping up with new AI-related regulations, and international standards, such as ISO/IEC 42001, have been developed to support them. However, much more needs to be done to reduce potential risks and address societal concerns.
One effective way is for organisations that use or develop AI systems to conduct an AI system impact assessment. By comprehensively analysing the system’s full impact and developing ways to address any negative consequences, organisations can improve the safety of AI and help cultivate trust.
ISO/IEC 42005 guides organisations through such an impact assessment, providing guidance on evaluating the effects of AI systems on people and society, and on how to integrate this into AI risk management. This includes considering the system’s intended uses and performance, data quality, risks and benefits, and measures to address any harm it could cause.
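To make those elements concrete, the sketch below shows how an organisation might record an impact assessment as a simple structured document. It is a minimal illustration only: the field names, the example system and the figures in it are assumptions for this sketch, not a schema defined by ISO/IEC 42005.

```python
from dataclasses import dataclass, field

@dataclass
class AIImpactAssessment:
    """Minimal sketch of an impact-assessment record covering the kinds of
    elements ISO/IEC 42005 asks organisations to consider. Field names and
    structure are illustrative assumptions, not the standard's schema."""
    system_name: str
    intended_uses: list[str]        # what the system is meant to do, and for whom
    performance_notes: str          # how well it performs against those uses
    data_quality_notes: str         # provenance, representativeness, known gaps
    benefits: list[str]             # expected positive impacts on people and society
    risks: list[str]                # potential harms or negative consequences
    mitigations: dict[str, str] = field(default_factory=dict)  # risk -> planned measure

    def unmitigated_risks(self) -> list[str]:
        """Risks recorded without a corresponding mitigation measure."""
        return [r for r in self.risks if r not in self.mitigations]


# Example: a hypothetical triage-support system (all details invented for illustration)
assessment = AIImpactAssessment(
    system_name="clinic-triage-assistant",
    intended_uses=["prioritise incoming patient requests for clinical review"],
    performance_notes="recall 0.92 on a held-out validation set (illustrative figure)",
    data_quality_notes="training data skewed toward urban clinics",
    benefits=["faster response for urgent cases"],
    risks=["under-prioritisation of conditions rare in the training data"],
    mitigations={
        "under-prioritisation of conditions rare in the training data":
            "human review of all low-priority classifications",
    },
)
print(assessment.unmitigated_risks())  # -> [] when every risk has a planned measure
```

Keeping the assessment in a structured form like this makes it easier to check that every identified risk has a corresponding measure and to feed the results into broader risk-management processes.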
It enables AI system developers to design their systems safely and effectively, aligned with the values of fairness and transparency and taking a human-centred approach. It also supports broader governance and risk management practices, reinforcing trust and societal acceptance of AI systems.
ISO/IEC 42005 is an important component of a suite of international standards for AI that includes governance (ISO/IEC 38507), risk management (ISO/IEC 23894) and AI management systems (ISO/IEC 42001), which together can cultivate trust and accountability wherever AI is used.
Source: IEC.