As artificial intelligence (AI) continues to develop and change rapidly, it’s becoming increasingly important to understand what responsible AI use looks like.
This knowledge is crucial at an organisational or agency level, but also for individuals. We sat down with Daniel Roelink, Director of Digital Strategy and Architecture at Digital NSW, owner of the NSW AI Assessment Framework, policy and strategy, to learn how everyone can do their part to use AI responsibly.
How do we make sure we’re using AI responsibly?
There are 3 essential steps that any organisation or agency needs to take. The first is to ensure there’s an AI ethics policy. The second step is to have an AI risk assurance framework, and the third is to ensure the use of the framework through governance and assurance.
An AI ethics policy defines the guiding principles that make sure AI use is ethical, legal and fair. In NSW Government we have the AI Ethics Policy, which all agencies can leverage.
Once you have your ethics policy, you enable it through an assurance framework. The AI Assessment Framework is a self-assessment tool with a set of questions to identify risk across NSW Government. You want a framework that results in consistency in how different risk levels are identified. If one agency deems a use case to be low risk and another deems it to be very high risk, that inconsistency will start to erode public trust.
Then you need to ensure that you’re using that framework. You start by establishing organisational accountability within your governance and assurance functional areas. These 3 steps are driven by our experience, but also informed by global best practice.
How does someone decide when to, and when not to, use AI?
That’s a question that doesn’t get asked enough. While there’s no simple answer, there are a few key considerations to help people understand if AI is the best technology to use.
“There’s so much hype and excitement around AI that most people jump to a solution instead of trying to understand the problem and what technology is suitable for that problem,”
Daniel Roelink, Director of Digital Strategy and Architecture, Digital NSW
The first is complexity and scalability, the second is cost-effectiveness, and the third is ethics and risk.
AI is well suited if the problem involves processing very large data sets and identifying complex patterns. An example of that is in fraud detection – there’s a huge volume of data and it’s very difficult for a person to identify patterns in transactions and systems that could indicate potential fraudulent activity.
In terms of cost-effectiveness, you need to assess where AI adds value beyond traditional methods and justify its cost. Again, AI is ideal for high-volume tasks. An example here is chatbot-driven capabilities for customer service, where you may have thousands of requests and inquiries daily – administrative, responsive tasks.
Around ethics and risk, you shouldn’t use AI where a decision could have a direct impact on human rights – that is, where it affects fairness and safety and you’re unable to implement safeguards. That’s why we have risk frameworks. In some cases, you need to implement human oversight.
What does bias in AI use look like? Why do we need to be aware of it and how can we help mitigate it?
Bias comes from data, algorithms and interaction.
When AI models are trained on data that reflects existing social norms, they can produce biased outcomes. Large datasets often contain inherent biases, which can influence a model’s decisions. Applying such a model to new datasets to solve problems can lead to unethical decisions and outcomes.
Another area is in algorithmic bias – this occurs when algorithms are adjusted or tuned in ways that unintentionally introduce bias. For example, a credit scoring algorithm might favour certain demographics due to specific tuning decisions.
Finally, another type is referred to as interaction bias. This happens during user interaction with AI systems. For instance, with an AI chatbot, the way users ask questions or interact with it can introduce bias, as the system's responses are shaped by the input it receives. This highlights the importance of broad awareness training.
“We need to understand how we can mitigate bias in AI use. On the data side, that means ensuring you have diverse and representative datasets. To address interaction bias, oversight and user education are essential.”
Daniel Roelink, Digital NSW
What is your team at Digital NSW doing in the realm of responsible AI use?
We’re updating the NSW AI Assessment Framework to improve its efficiency and effectiveness. A pilot for the new version will be conducted in early 2025, enabling users to quickly identify low-risk use cases.
We’ve collaborated with all NSW departments to define best-practice governance and assurance requirements for the responsible use of AI. We’re also developing guidance to help individuals who may not fully understand AI to recognise its use and apply it responsibly.
We’re actively working to position the NSW Government to achieve the best outcomes with AI for communities and the economy. Additionally, we’re engaging with industry and academia to stay informed on emerging standard practices in responsible AI. So, there’s quite a bit going on!
Anything else you want to cover?
There’s been a huge amount of investment in this space over the last 18 to 24 months, and it’s now starting to surface as new capabilities in the products we use daily.
Everyone needs to be aware of when they’re interacting with a system that could produce a biased or unlawful outcome.
To be responsible with AI, everyone has a role – similar to cyber security. We want to achieve the productivity gains that AI technologies provide, but we must do that by raising awareness and educating public servants on responsible AI practices.
Check out the AI Assessment Framework and learn chatbot prompt essentials.