arrow-circle-downarrow-circle-rightarrow-leftarrow-rightcheckchevron-downPathPathclosefilterminuspausepeoplepinplayplusportalsearchsocial-icon-facebooksocial-icon-linkedinsocial-icon-twittersocial-linkedinsocial-youtube
Insights

AI ethics and governance

The rapid advancement of AI underscores the necessity for robust ethical frameworks and governance mechanisms. AI ethics and governance have emerged as essential areas of study and practice, aiming to ensure that AI technologies are developed and deployed in ways that are responsible and safe.

There are a wide range of considerations for AI ethics. For instance, Australia’s AI Ethics Principles call on organisations to:

  • avoid bias and discrimination
  • understand and adopt cultural and linguistic diversity and gender equality
  • drive confidence through reliable, safe and accurate decisionmaking.

Ultimately, they present the fundamental ‘do no harm principle’.

AI ethics involves a meticulous examination of the moral implications of AI decisions, the responsibilities of AI developers and users, and the societal consequences of AI deployment. The ethical dimension of AI requires interdisciplinary approaches, drawing from philosophy, law, sociology and computer science, to address the multifaceted challenges posed by these technologies.

AI governance refers to the frameworks, policies and regulations that oversee the development and implementation of AI systems. Effective AI governance aims to establish standards and guidelines that promote ethical AI practices, protect individual rights and mitigate risks associated with AI. It involves the collaboration of various stakeholders, including governments, international organisations, industry leaders and society, to formulate and enforce policies that ensure AI systems are used responsibly and for the benefit of all.

The intersection of AI ethics and governance is crucial in addressing the profound impacts of AI on society.

As AI systems become increasingly autonomous and complex, the potential for unintended consequences and ethical dilemmas grows. That is why it is imperative to establish comprehensive governance structures to address current ethical concerns while being adaptable to future technological advancements.

Deciding to use AI

AI holds transformative potential across various sectors, enabling advanced operational efficiencies that can redefine industry standards, yet the motivations for integrating AI vary significantly among organisations. The justification for AI deployment must be scrupulously crafted to align with specific organisational contexts, cultures and strategic goals. It is crucial for each organisation to precisely identify its unique drivers for AI adoption, which may range from enhancing operational productivity to securing tangible competitive advantages in the marketplace.

The strategic imperative of AI is further highlighted by its necessity for maintaining competitiveness in a rapidly evolving technological landscape. A reluctance or delay in embracing cutting-edge technologies like AI can result in significant missed opportunities and a risk of falling behind other organisations. This urgency is fuelled by both external market dynamics and internal demands for continual innovation. Notably, younger employees, often more conversant with emerging technologies, play a pivotal role in advocating for AI integration. Their enthusiasm for technological assimilation compels organisations to embed AI strategies within their core initiatives for sustained relevance and innovation.

Aligning AI deployment with organisational purposes is essential for strategic success.

Organisations must ensure that AI adoption not only supports their strategic goals but also enhances operational efficiency. This strategic alignment is crucial as it directly influences an organisation’s ability to leverage AI to its full potential, optimising processes and catalysing innovation in service delivery and product development. For instance, companies in the financial services sector tend to adopt AI more cautiously, updating data and information security policies to harness AI capabilities while ensuring compliance with evolving regulatory standards.

The drive to adopt AI transcends merely staying current with technology trends; it is fundamentally integrated into strategic imperatives that dictate competitive advantage and operational excellence. Yet successfully adopting and integrating new AI technologies involves navigating a complex landscape of challenges, including:

  • data privacy concerns
  • the need for substantial investments in skills and infrastructure
  • managing ethical considerations.

By embedding ‘the value-add’ of AI into strategic frameworks or business models, organisations can address these challenges while leveraging the full potential of AI to achieve transformative operational capabilities. This strategic integration ensures that organisations can assess the benefits and associated costs and risks, meaning they can meet the immediate demands for innovation but also position themselves at the forefront of their respective industries. Effective harnessing of AI capabilities, as detailed in Topic 2, offers both challenges and opportunities, necessitating robust RAI and AI governance mechanisms to foster an environment conducive to innovative growth and strategic enhancement.

RAI and AI ethical principles for decision-making

As AI becomes increasingly integral to organisational operations and strategies, the importance of RAI and ethical principles escalates. Various sets of principles have been developed by different organisations and government agencies worldwide, such as Australia’s AI Ethics Principles, OECD AI Principles4 and Hiroshima AI Process.

These principles have a similar fundamental focus on areas of fairness, transparency, accountability and explainability. These principles reflect the wide range of ethical dilemmas and operational challenges that AI can introduce. Serving as essential high-level guidelines, they ensure that AI systems operate responsibly and safely. However, due to their abstract nature, these principles often require further operationalisation to be effectively implemented within specific contexts. This process of transforming high-level ethical guidelines into actionable operational practices is crucial as it ensures that AI systems adhere to established ethical norms and are tailored to meet the nuanced requirements of diverse organisational environments and societal expectations. Some of the shared fundamental principles are explored below.

ESG and human-centered design

Integrating Environmental, Social and Governance (ESG) criteria into AI development is essential for a comprehensive evaluation of a technology’s broader impacts. ESG is a good starting block for considering how to effectively manage AI across an organisation – ensuring AI initiatives promote sustainability, ethical governance and social responsibility. AI can also assist organisations to achieve their ESG goals and ambitions. AI technologies can be leveraged to provide useful insights into ESG metrics, such as identifying and mitigating unethical labour practices and unsustainable material sourcing across the supply chain.

In addition to ESG, prioritising human-centered design within this framework ensures that AI systems are accessible, intuitive and designed with the end-user’s welfare in mind, thereby enhancing user experience and adherence to ethical norms.

ESG becomes a good framework to consider AI risks because they know how to report it. Many risks already captured in ESG are relevant for AI, such as emission consequences, social and governance risks […] We need to look at additional AI risks that the board needs to be careful about. Some risks are wholly new and not covered in traditional ESG, especially in the social and governance aspects. Liming Zhu

Fairness, transparency and explainability

Achieving fairness in AI necessitates that systems are thoughtfully designed and undergo rigorous, regular audits to identify and mitigate biases, ensuring equitable outcomes for all users. This commitment to fairness must be embedded from the initial design phase through to deployment and beyond, with continuous assessments to adapt to evolving data and contexts.

Transparency complements this commitment, requiring that the operations and decision-making processes of AI systems are clearly articulated and accessible to users. This clarity is vital, as it not only builds trust and confidence among users and stakeholders but also ensures that AI actions are comprehensible and defensible. A striking illustration of the importance of these principles is a study which identified gender biases in 44% of 133 AI systems analysed. This statistic highlights a significant concern and underscores the critical need for robust mechanisms to ensure fairness and transparency in AI. By prioritising these principles, organisations can provide stakeholders with insights into the inner workings of AI technologies, demystifying AI processes and reinforcing the ethical integrity of their applications.

Bias can creep into an AI selection tool in a number of ways. For example, there can be:

  • historical bias
  • sampling bias
  • measurement bias
  • evaluation bias
  • aggregation bias
  • deployment bias.

These biases could cause the AI to produce output which is harmful or violate anti-discrimination laws. This is particularly problematic when AI is used to make decisions or given agency to act in an autonomous way. These risks require human oversight and accountability to prevent and mitigate them, for example, by reviewing and adjusting the outcome to account for any biases. There have been some instances of this problem in the context of recruitment. For example, a claim brought against Workday alleged that an AI screening tool made available by Workday discriminated against applicants on the basis of race, disability, and age – each of which are protected attributes under discrimination laws.

Expert tip: 
Consider how AI can help monitor and achieve ESG goals, and how ESG principles can act as a first step to inform the design of AI governance frameworks.

Privacy and cyber security

In the digital age, where data breaches are increasingly frequent, establishing robust privacy and security protocols for AI systems is crucial. These systems must implement stringent measures to safeguard personal and sensitive information against unauthorised access, aligning with the Privacy Act and global data protection regulations such as the General Data Protection Regulation (GDPR). In the context of generative AI, it is vital to address copyright and licensing issues rigorously. AI systems often create new content by learning from vast datasets that are often crawled online. Similarly, organisations using their enterprise data with generative AI must ensure that the use of such data complies with privacy, confidentiality and copyright laws and licensing agreements. This commitment not only secures stakeholder trust but also upholds the integrity of data and content created or processed by AI, emphasising an organisation’s dedication to ethical data usage and intellectual property rights.

The use of third-party material or personal information to train, prompt or ground AI could violate Australian copyright or privacy laws. In particular:

  • The process of training AI often involves a reproduction of copyright material in the training data. Using a model may generate reproductions of the training data or reproduce other material to ground or prompt the model. These reproductions may infringe copyright if there is no appropriate licence or statutory exception.
  • To the extent training data or prompting data includes personal information, privacy laws impose a restriction on how that data may be collected, used and disclosed.

In Australia, copyright laws are not currently broad enough (with some limited exceptions) to allow for the use of data to train AI models without an appropriate licence. Globally, copyright issues have resulted in a wave of litigation against AI companies that is currently making its way through the courts. The AI industry is also dealing with this issue in a number of ways, including by negotiating licences with content creators and by providing guarantees to consumers that their products are not infringing.

A notable example of training AI in violating privacy laws is the case of Clearview AI – an AI company that used images scraped from the internet to create biometric information stored in its facial recognition database, a practice which the Office of the Australian Information Commissioner declared to have breached the Australian Privacy Act 1988.

The use of AI also creates new security challenges for organisations and individuals. This is due in part to the need for the AI to access – and therefore for the organisation to retain – large quantities of data. The retention of this data increases the risk and potential consequences of a security breach. So, organisations should ensure appropriate controls are in place to prevent and mitigate the risk.

Technological advancements in AI may also increase the opportunity for malicious activity. For example, the potential to create higher quality deepfakes – being hyper realistic but false depictions of a person or thing – which may aid cybercrime, including identity theft and phishing, or spread misinformation, among other things.

Expert tip: 
Implement a continuous improvement feedback model to identify and mitigate biases and drive fairer outcomes for all users interacting with it.

Reliability and safety

Ensuring the reliability and safety of AI is a fundamental concern across all branches of AI. AI models and/or systems must undergo rigorous testing and validation to perform reliably under diverse conditions without introducing unforeseen risks. Continuous monitoring and thorough safety evaluations are essential, not only to adhere to regulations such as those under consideration by the Australia Government or those in the European Union Artificial Intelligence Act (EU AI Act) but also to maintain trust and operational integrity.

As we delve into the realm of generative AI, these concerns become even more pronounced due to its versatile capabilities. Generative AI, which can produce text, images and other content, requires additional layers of scrutiny. The capability evaluations for these systems are particularly critical – to assess the ability of generative AI and other branches of AI to handle tasks safely and effectively. Importantly, these evaluations cover both intended capabilities envisioned by developers and unintended capabilities that could emerge as byproducts of complex interactions within the AI model. This comprehensive approach ensures that the outputs are not only high quality but also ethically sound and free from
harmful content. Given their potential impact, it is imperative that these evaluations are comprehensive (for example, both model- and system-level, with considerations across the AI supply chain) and adapt to the rapid advancements in generative AI.

In this context, continuous adaptation to emerging threats and the ability to respond to anomalies swiftly are paramount. This ensures that generative AI systems not only comply with stringent safety standards but also uphold the ethical standards necessary for their widespread adoption in sensitive and impactful domains. This is particularly necessary in high-risk settings such as medical diagnostics and road safety.

Ensuring the reliability and safety of AI systems is important in complying with many legal obligations. For example, inaccurate information could lead to claims of negligence, breach of contractual warranties, breach of Australian consumer laws (including laws preventing misleading or deceptive conduct) or defamation. Many sector-specific laws may also apply (such as regulation of Software as a Medical Device).

Contestability and accountability

AI systems must enable affected parties to contest decisions effectively, providing a robust process for airing grievances and implementing corrections. This capacity for contestability ensures overall safety in AI decision-making processes. Accountability complements this by requiring clear organisational governance structures, including clear identification of roles and responsibilities, and protocols to manage and rectify any adverse outcomes or errors. This principle is essential for upholding ethical standards and enhancing organisational credibility.

To effectively operationalise accountability in AI systems, it is critical to structure it around 3 core pillars:

  • Responsibility – defines ‘who is accountable and to whom’, establishing clear roles and responsibilities among developers, users and other stakeholders. This foundational aspect sets the stage for measurable and enforceable accountability.
  • Auditability – focuses on ‘what one is accountable for’. This ensures that all actions and decisions made by AI systems are traceable and auditable, which is essential for maintaining transparency and facilitating the evaluation of AI systems against agreed standards and regulations.
  • Redressability – addresses ‘how entities are accountable and can rectify issues’. It includes establishing mechanisms for correcting any mistakes or misjudgements made by AI systems and providing effective remedies for those adversely affected.

Figure 1

Implementing contestable accountability processes in AI systems: A 3-pillar approach

Figure 1 AI governance and ethics

The increased use of AI in decision-making introduces a range of issues company directors and other responsible persons must consider when discharging their existing duties under the Australian Corporations Act 2001, notably, the duty under s180 to exercise reasonable care and diligence. Directors may be accountable for an outcome, even if the decision-making process has been automated.

Similar issues exist in government. Administrative decision-makers must afford procedural fairness, must only act within the scope of their decision-making power and must only consider relevant considerations – any use of AI must not detract from those principles. Further, in some cases, decisions are legally required to be made by a person. In those cases, a decision made by AI will be invalid.

expert tip
Adopt the 3-pillar approach to implement contestability and accountability in AI systems by identifying responsible employees, agents or entities, articulating expectations of their activities and auditing their actions, and embedding processes to assist in the identification, mitigation and rectification of issues.

AI governance

Effective governance of AI is essential to ensure that AI technologies are developed, deployed and used responsibly, adhering to safety and ethical norms, and complying with regulatory standards. AI governance involves comprehensive practices and policies that guide the lifecycle of AI systems, aimed at mitigating risks and maximising societal benefits.

Approach to AI governance

Strategic oversight and ethical alignment: AI governance starts with strategic oversight, involving the setting of clear goals and ethical guidelines for AI use within organisations. This includes aligning AI strategies with the organisation’s overarching goals and a firm commitment to ethical practices at all levels. Leaders must champion a culture of ethical integrity and accountability, ensuring AI solutions are responsibly developed and used. AI governance should form part of an AI Risk Management Framework.

If artificial intelligence is to be used safely and responsibly within an organisation, we know that it is vital for leaders to be actively involved in shaping policies and procedures. Stela Solar

Regulatory compliance and ethical standards: Compliance with local and international regulations is essential. AI governance frameworks must adapt to the evolving legal landscape, incorporating standards that address new ethical challenges such as privacy concerns, bias mitigation and transparency. Effective governance requires creating regulatory settings that support the safe and responsible adoption of AI while fostering innovation.

Risk assessment and management: Central to AI governance is the rigorous management of potential risks, covering both technical vulnerabilities and ethical dilemmas. AI systems must be evaluated for their impact on fairness, privacy and security. Effective risk management strategies involve identifying and mitigating immediate risks and anticipating future challenges as technology and societal norms evolve.

If you manage a risk better than your competitors, then it’s an opportunity. George Gorman, Zip Co.

Stakeholder engagement and public trust: Continuous engagement with a broad range of stakeholders, including technologists, policymakers, affected communities and the public is crucial. This engagement ensures that diverse perspectives and values inform AI development and deployment. Building public trust through transparent practices and open communication is critical for the broad acceptance and successful integration of AI systems.

Monitoring, auditing and continuous improvement: Ongoing monitoring and regular auditing of AI systems are essential to ensure they operate as intended and adhere to ethical standards over time. This process of continuous assessment and adaptation keeps AI governance frameworks responsive to rapid technological advancements and changing societal expectations.

expert tip:
Report issues to senior officers and directors when known issues may have financial or reputational harm to the business, particularly as it relates to s 180 Duty of Care and Diligence and ASX Listing Rule 3.1 regarding Continuous Disclosure.

Figure 2

Inclusive AI governance framework

Figure 2 AI framework

Challenges in AI governance

AI governance faces several significant challenges that can complicate the effective management and oversight of these technologies:

  • Complexity of AI systems: The inherent complexity of AI algorithms and their decision-making processes can make transparency and accountability difficult. This complexity often requires specialised knowledge, posing a barrier for stakeholders attempting to understand or oversee AI systems. ‘Black box’ system processes affect trust and confidence across the AI supply chain and create hesitancy in risk-taking by senior officers and directors, potentially stifling business innovation.
  • Rapid technological advancement: The fast pace of AI development can outstrip current governance frameworks and regulatory guidelines, making it difficult to keep up with new technologies and their potential impacts. Real-time information and continuous feedback loops can assist where technological capabilities are being stretched into new business domains.
  • Global disparities in regulations: Differences in AI regulations across countries can lead to governance gaps, especially for multinational corporations operating in multiple jurisdictions. Harmonising these differences remains a formidable challenge. However, fundamental principles and concepts of safe and responsible AI development and deployment are gaining traction.
  • Ethical ambiguities: Ethical standards in AI are continually evolving, and there can be significant disagreements on what constitutes ethical use, particularly in areas like facial recognition and predictive policing. Social norms will continue to evolve and change over time. Organisations should be aware of their consumer and supplier risk appetite and ethical parameters.
  • Resource allocation: Adequate resources, including funding, expertise and tools, are essential for effective AI governance but are often lacking, especially in smaller organisations or in sectors with rapid AI integration. Capability uplift across the organisation in otherwise non-technical roles can be leveraged to address the resource gap.

By addressing these challenges head-on, organisations can enhance their AI governance frameworks, ensuring that AI technologies are leveraged responsibly and ethically across all applications.

expert tip:
Implement an inclusive AI governance framework with continuous feedback loops that address key regulatory compliance and ethical standards frameworks.

For any assistance with addressing the AI governance needs of your organisation, do not hesitate to contact your local PKF Audit and governance expert.


Related insights

Subscribe to our newsletter

Subscribe

Propel your career

Learn more about Careers

Follow us

Find your closest office

Locations

Read our latest Clarity mag

View now

About the firm

Transparency reports