
The future of AI regulation in Australia: understanding the proposed mandatory guardrails for high-risk AI

Regulation is coming: are you ready?

Artificial Intelligence (AI) has grown from simple, single-task systems to powerful large language models (LLMs), such as ChatGPT and Microsoft's Copilot, yet there has been little regulation or control over its development. The rapid pace of technological advancement has outstripped the ability of regulatory frameworks to keep up, leading to a lag in the establishment of comprehensive governance and ethical guidelines. Since the release of the first public ChatGPT model in November 2022, which marked a major advancement in the capability of general-purpose AI, regulatory bodies globally have attempted to play catch up.

AI’s wild west era came to an end in 2024, with the passage of the landmark EU AI Act, a UN General Assembly global resolution on AI, and legislation at the national level in Canada and at the state level across six US states. This wave of regulation is finally reaching Australia. In September 2024, the National Artificial Intelligence Centre (NAIC) released the Voluntary AI Safety Standard and a proposal paper introducing mandatory guardrails for AI in high-risk settings. The two papers complement each other, with the Voluntary AI Safety Standard acting as preparatory implementation guidance for the proposed mandatory guardrails.

The aim of the regulation is to enable and promote the safe adoption of AI. The mandatory guardrails, if implemented, will be a series of preventative regulations designed to minimise the risks that AI presents to organisations. These include biased algorithms, privacy breaches, increased cybersecurity vulnerabilities and threats, and broader societal issues such as widespread job losses caused by AI.

The exact scope of the AI applications these rules will cover has not yet been finalised. However, the proposed guidelines define many AI systems as "high-risk," meaning they will fall under the regulations and be subject to strict oversight and safety requirements. Additionally, all general-purpose AI, including LLMs, will be considered high-risk. This significantly broadens the reach of the rules, especially since, as reported by Microsoft, 84 per cent of workers in Australia use generative AI at work.


What makes an AI system high risk?

In designating an AI system as high-risk due to its use, regard must be given to:

  1. The risk of adverse impacts to an individual’s rights recognised in Australian human rights law without justification, in addition to Australia’s international human rights law obligations
  2. The risk of adverse impacts to an individual’s physical or mental health or safety
  3. The risk of adverse legal effects, defamation or similarly significant effects on an individual
  4. The risk of adverse impacts to groups of individuals or collective rights of cultural groups
  5. The risk of adverse impacts to the broader Australian economy, society, environment and rule of law
  6. The severity and extent of the adverse impacts outlined in points 1 to 5 above.

While users are outside the scope of the proposed regulations, “deployers” are not. The proposed definition of a deployer includes “any individual or organisation that supplies or uses an AI system to provide a product or service. Deployment can be used for internal purposes or used externally impacting others, such as customers or individuals, who are not deployers of the system.”

This definition greatly expands the scope of organisations subject to the new regulations; realistically, the majority of businesses will have to implement the mandatory guardrails.

The mandatory guardrails themselves are designed to foster testing, transparency and accountability within AI systems through preventative regulation.


Proposed mandatory guardrails

Organisations developing or deploying high-risk AI systems are required to:

  1. Establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance
  2. Establish and implement a risk management process to identify and mitigate risks
  3. Protect AI systems, and implement data governance measures to manage data quality and provenance
  4. Test AI models and systems to evaluate model performance and monitor the system once deployed
  5. Enable human control or intervention in an AI system to achieve meaningful human oversight
  6. Inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content (see the illustrative sketch following this list)
  7. Establish processes for people impacted by AI systems to challenge use or outcomes
  8. Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks
  9. Keep and maintain records to allow third parties to assess compliance with guardrails
  10. Undertake conformity assessments to demonstrate and certify compliance with the guardrails.
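
The guardrails describe outcomes rather than any particular technical implementation. Purely by way of illustration, and not as part of the proposal, the hypothetical Python sketch below shows one way a deployer might begin to address guardrails 6 and 9: each AI-assisted decision is written to an append-only audit log, and a plain-language disclosure is shown to the affected end-user. All names, fields and file formats here are invented assumptions for the example.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from pathlib import Path
import json

# Hypothetical example only: the proposal paper does not prescribe any
# technical format for records or disclosures.

DISCLOSURE = (
    "This decision was made with the assistance of an AI system. "
    "You may request a review by a human officer."
)

@dataclass
class AIDecisionRecord:
    timestamp: str          # when the AI-assisted decision was made
    system_name: str        # which AI system produced the output
    system_version: str
    input_summary: str      # brief, non-sensitive summary of the input
    output_summary: str     # what the system recommended or generated
    human_reviewer: str     # who exercised human oversight (guardrail 5)
    disclosure_shown: bool  # whether the end-user was informed (guardrail 6)

def log_decision(record: AIDecisionRecord, log_path: Path) -> None:
    """Append the record to a JSON-lines audit log (guardrail 9)."""
    with log_path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

if __name__ == "__main__":
    record = AIDecisionRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        system_name="loan-triage-assistant",      # invented system name
        system_version="2025.1",
        input_summary="Applicant risk profile, de-identified",
        output_summary="Recommended manual review",
        human_reviewer="credit.officer@example.com",
        disclosure_shown=True,
    )
    log_decision(record, Path("ai_decision_log.jsonl"))
    print(DISCLOSURE)
```

A real deployment would, of course, align the fields it retains with whatever record-keeping guidance accompanies the final guardrails.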

While the development of mature AI governance practices is likely to be an evolving process for most organisations, beginning that journey in the right manner will enhance trust and transparency, support risk mitigation and, of course, enable regulatory compliance.

Although there is still considerable uncertainty regarding what the endpoint of AI regulation in Australia will look like, organisations are able to prepare for the impending regulation by aligning themselves with the Voluntary AI Safety Standard.

To prepare for the introduction of the mandatory guardrails, we recommend that organisations take the following steps to kickstart their AI governance journey:

  1. Form an AI governance committee.
    A governance committee should be responsible for overseeing AI initiatives and establishing accountability within the organisation. This committee should also provide a platform for developing a strategic plan for utilising AI.
  2. Assess current AI use.
    To properly govern AI within an organisation, a comprehensive understanding of current AI usage is crucial. It has been reported that 78 per cent of Australian workers are bringing their own AI tools into the workplace. This dynamic inhibits an organisation's ability to properly govern AI usage, as it leaves the organisation in the dark about how AI is actually being used. Addressing this knowledge gap as soon as possible will enable organisations to begin governing that usage adequately.
  3. Implement a risk management framework.
    Developing and implementing a risk management framework, including risk assessment methodologies, mitigation strategies and monitoring mechanisms, will enable an organisation to be proactive in identifying and mitigating AI risks and will improve strategic decision making. A minimal sketch of how an AI register and simple risk ratings might be recorded follows this list.
  4. Establish data governance and AI usage policies.
    Establishing data governance and AI usage policies, covering data quality, security, privacy and acceptable use, will both enable better use of AI through improved data quality and safeguard the organisation from security risks.
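
To make steps 2 and 3 more concrete, the hypothetical Python sketch below shows one way an organisation might record its AI usage in a simple register and attach a basic likelihood-by-impact risk rating to each entry. The structure, names and scoring scheme are illustrative assumptions only; neither the Voluntary AI Safety Standard nor the proposed guardrails prescribe them.

```python
from dataclasses import dataclass
from enum import IntEnum

# Hypothetical illustration only: organisations would substitute their own
# risk assessment methodology and register fields.

class Level(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class AISystemEntry:
    name: str                 # tool or system as used in the organisation
    owner: str                # accountable business owner
    purpose: str              # what it is used for
    uses_personal_data: bool
    likelihood: Level         # likelihood of an adverse impact
    impact: Level             # severity of that impact if it occurs

    def risk_score(self) -> int:
        # Simple likelihood x impact rating.
        return int(self.likelihood) * int(self.impact)

    def is_high_risk(self, threshold: int = 6) -> bool:
        return self.risk_score() >= threshold

if __name__ == "__main__":
    register = [
        AISystemEntry("ChatGPT (staff-initiated use)", "CIO",
                      "Drafting and summarising documents",
                      uses_personal_data=True,
                      likelihood=Level.MEDIUM, impact=Level.HIGH),
        AISystemEntry("Invoice-matching model", "CFO",
                      "Accounts payable automation",
                      uses_personal_data=False,
                      likelihood=Level.LOW, impact=Level.MEDIUM),
    ]
    for entry in register:
        print(f"{entry.name}: score {entry.risk_score()}, "
              f"high risk: {entry.is_high_risk()}")
```

Even a register this simple gives an organisation a single view of where AI is in use, who owns it and which uses warrant closer governance attention.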

While each of these steps will help an organisation comply with the proposed mandatory guardrails, they will also deliver tangible benefits to any organisation using AI by addressing the risks that accompany its use.

Would you appreciate some assistance?

PKF is well positioned to help you and your business develop your AI governance and prepare for the introduction of mandatory guardrails for AI in high-risk settings. We undertake conformity assessments to demonstrate and certify compliance with the guardrails, in line with guardrail 10. Contact us today for assistance.

