
Using artificial intelligence (AI) safely and responsibly is a responsibility shared by all NSW Government public servants – from executives to end users. Different roles within agencies carry different levels of accountability, and it is important that you understand what your responsibilities are. These are aligned with the NSW AI Assessment Framework (AIAF).
This article complements the learning module for NSW Government public servants, called Understanding Responsibilities in AI Practices. Check it out for more detailed information on this topic.
AI responsibilities for different roles
Accountability for the safe and responsible use of AI ultimately lies with executives within agencies; however, responsible AI use involves everyone within an agency.
The Understanding Responsibilities in AI Practices module has a RACI matrix to help agencies consider different roles and responsibilities.
The following broad strategic objectives help support the establishment of responsible AI practices. A selection of these objectives is listed below. To read them all and learn what your accountabilities and responsibilities are, check out the module.
Foster a responsible AI culture
Strategic objectives:
- Promote a culture of responsible AI by integrating NSW AI ethics principles in business objectives, values, and communications.
- Periodically review organisational awareness of individual responsibilities outlined in this guidance to ensure responsible AI use.
- Regularly evaluate the impact of AI on the workforce to enhance strategic workforce planning, identifying needed skills and resources.
- Report if you believe a solution you’re using may influence decisions or actions that could be unethical, illegal, or unsafe.
Ensure accountability and transparency
Strategic objectives:
- Clearly define and communicate responsible AI-related authorities within the organisation, including governance, assurance, procurement, ethics, cyber, privacy, legal, technology, data governance, and risk management.
- Review and endorse AIAF compliance plans detailing department/agency progress towards applying the framework.
- Ensure each AI solution has documented accountabilities for managing risks, ensuring continuity, enabling appeals, and providing evidence for decisions and actions.
- Ensure records are kept of decisions related to managing the risks of AI solutions, such as the results of applying the AIAF and the risk mitigations adopted.
Allocate resources
Strategic objectives:
- Support initiatives to increase AI risk management awareness and capabilities at all levels of the organisation.
- Allocate budget and resources for responsible AI, including expert advisory services (legal, data, privacy, ethics, technology, risk).
- Provide sufficient training and tools for ethical AI implementation.
- Ensure adequate resources for continuous monitoring and evaluation of AI systems that could cause harm.
Ensure compliance and risk management
Strategic objectives:
- Ensure governance and assurance oversight for compliance with AI-related laws and regulations (such as human rights, privacy, data protection, administrative law, consumer, anti-discrimination, state records, critical infrastructure and cyber security).
- Ensure AI system development complies with the NSW AI Ethics Policy, AI Assessment Framework, organisational values, and related standards.
- Ensure that high-risk AI projects and solutions are presented to the AI Review Committee (AIRC).
- Ensure compliance with Digital NSW, department, and agency policies and guidelines on using public, non-secure applications, such as generative AI chatbots (such as ChatGPT).
Establish oversight mechanisms
Strategic objectives:
- Create a multidisciplinary AI advisory board or committee (covering ethics, legal, technology, data, and privacy) to monitor and advise on AI projects and solutions.
- Include external experts and stakeholders in Governance, Assurance, Audit, and advisory committees to ensure diverse perspectives.
- Designate a responsible owner for AI governance in the C-suite.
- Ensure regular independent reviews of AI governance and assurance functions to assess performance and effectiveness.
- Ensure that AI systems augment, rather than replace, human decision-making where their use could cause harm.
What’s next?
Government agencies should start by ensuring a Governance and Assurance function has clear accountability for overseeing responsible AI use.
The above guidance can help agencies better structure their approach to responsible AI and ensure everyone is actively contributing to ethical AI practices.
To learn more about AI-related responsibilities and accountabilities, explore the Understanding Responsibilities in AI Practices module.