Artificial intelligence (AI) is rapidly changing the digital environment and reshaping our world. Recognising AI's impact on society, the NSW Government is leading the way in responsible AI use. The NSW AI Strategy and AI Ethics Policy were released in 2020, followed by the NSW AI Assurance Framework in March 2022.
Since 2017, the NSW Government has relied on the Digital Assurance Framework (formerly the NSW ICT and Digital Assurance Framework) for ICT project delivery. As the AI landscape evolves, the government is now integrating the updated AI Assessment Framework into the Digital Assurance Framework to address emerging risks such as those posed by Generative AI.
Alongside this, Digital NSW has enhanced the AI Assessment Framework (formerly the AI Assurance Framework), a self-assessment tool to help agencies assess AI risk in their digital projects.
In May 2024, NSW made significant progress toward ensuring the safe and responsible use of AI. This transformation will help guide NSW agencies in managing AI risks.
These updates will streamline AI risk management and offer better oversight and guidance. The recent approval of the updated frameworks marks an important shift, with implementation across NSW starting in Q3 2024.
We spoke to Jessica Ho, Director of ICT Assurance, and Daniel Roelink, Director of Digital Strategy and Architecture at Digital NSW, to learn more about the new Digital Assurance Framework and AI Assessment Framework.
What’s new in the Digital Assurance Framework and AI Assessment Framework?
All-of-government assurance of AI is now embedded in the Digital Assurance Framework and applies to all government digital projects valued over $5 million, as well as to smaller projects considered high risk. Solutions using Generative AI are now designated as higher risk.
“We recognise that there are benefits in using AI in society, and we can’t avoid using AI. We need to understand the risks and how to mitigate them,” says Jessica.
“With the rising use of Generative AI, we needed to refresh the AI Assessment Framework. By integrating the AI assurance risks into the Digital Assurance Framework, we can consider the suitability of AI solutions before we start to build and invest in them.”
– Jessica Ho, Director of ICT Assurance, Digital NSW
The release of the AI Assessment Framework delivers improved risk management guidance and tooling to help agencies assess AI risk. The self-assessment tool sets out questions aligned with the NSW AI Ethics Policy, along with the associated required controls.
“Since the release of Generative AI like ChatGPT, we can now use our natural language to interact with computers, a significant advancement,” explains Daniel. “The new AI Assessment Framework is now easier to use and updated to address advancements in AI technology.”
“Our goal is to ensure agencies’ use of AI is safe and responsible, and we are doing that by looking at how we can measure compliance.”
– Daniel Roelink, Director of Digital Strategy and Architecture, Digital NSW
Why these frameworks matter
Educating stakeholders on the benefits and risks of AI is crucial for responsible use, and the mandated AI Assessment Framework helps foster this.
“This is not about stopping the use of AI. It’s a balanced approach to acknowledge the advantages of AI, which is embedded everywhere. By doing this, we help people understand the risks of using AI and where to go for answers.”
– Jessica Ho
“There’s still a lot of people out there that do not think they’re using AI, when they are. For their own benefit, and that of their customers, they can use this framework to understand the risks before taking action,” says Jessica.
Case study of AI done well: EduChat
EduChat, an AI-powered chatbot for schools, is an excellent example of effective AI application under the previous AI Assurance Framework. The project used that framework to identify high AI-related risks and demonstrated the importance of understanding ethics and data quality in AI-powered chatbots.
“EduChat’s thorough risk assessment and mitigation strategies ensure safe interactions, especially on sensitive topics like mental health support,” says Jessica. “We helped the project to understand the risks and what else they need to consider before putting it live and monitoring it as it goes live to make sure that the risk doesn’t eventuate.”
“We were one of the first in the world to release an AI Assurance Framework. Through partnerships and collaboration, our ambition is to continue leading and advancing this work.”
– Daniel Roelink
Check out the refreshed Artificial Intelligence Assessment Framework for more information.